Sample records for multistage finite-time optimization

  1. Finite grade pheromone ant colony optimization for image segmentation

    NASA Astrophysics Data System (ADS)

    Yuanjing, F.; Li, Y.; Liangjun, K.

    2008-06-01

    By combining the decision process of ant colony optimization (ACO) with the multistage decision process of image segmentation based on the active contour model (ACM), an algorithm called finite grade ACO (FACO) for image segmentation is proposed. This algorithm classifies pheromone into finite grades; pheromone updating is achieved by changing the grades, and the updated quantity of pheromone is independent of the objective function. The algorithm, which provides a new approach to obtaining precise contours, is proved to converge linearly to the global optimal solutions by means of finite Markov chains. Segmentation experiments with ultrasound heart images show the effectiveness of the algorithm. Comparison of segmentation results for left ventricle images shows that the ACO approach to image segmentation is more effective than the GA approach, and that the new pheromone updating strategy exhibits good time performance in the optimization process.
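
    The graded-pheromone update can be sketched independently of the active-contour segmentation. Below is a minimal sketch, assuming a toy binary maximization problem instead of the paper's ACM-based contour energy; the grade count, ant count, and one-step grade changes are illustrative placeholders, chosen only to preserve the key property that the pheromone update amount does not depend on the objective value.

    ```python
    import random

    G = 8              # number of pheromone grades (assumed)
    N_BITS = 20        # toy problem: maximize the number of 1-bits
    N_ANTS = 10
    ITERS = 50

    # one grade per (bit, value) pair, initialized to the middle grade
    grade = {(i, v): G // 2 for i in range(N_BITS) for v in (0, 1)}

    def construct():
        """Each ant picks bit values with probability proportional to the grade."""
        return [
            random.choices((0, 1), weights=(grade[(i, 0)], grade[(i, 1)]))[0]
            for i in range(N_BITS)
        ]

    def objective(sol):
        return sum(sol)   # toy objective; the paper uses an ACM energy instead

    for _ in range(ITERS):
        ants = [construct() for _ in range(N_ANTS)]
        best = max(ants, key=objective)
        for i in range(N_BITS):
            for v in (0, 1):
                if best[i] == v:
                    # reinforcement: raise the grade one step, capped at G
                    grade[(i, v)] = min(G, grade[(i, v)] + 1)
                else:
                    # evaporation: lower the grade one step, floored at 1
                    grade[(i, v)] = max(1, grade[(i, v)] - 1)

    print(objective(max((construct() for _ in range(N_ANTS)), key=objective)))
    ```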

  2. A Correction to the Stress-Strain Curve During Multistage Hot Deformation of 7150 Aluminum Alloy Using Instantaneous Friction Factors

    NASA Astrophysics Data System (ADS)

    Jiang, Fulin; Tang, Jie; Fu, Dinfa; Huang, Jianping; Zhang, Hui

    2018-04-01

    Multistage stress-strain curve correction based on an instantaneous friction factor was studied for axisymmetric uniaxial hot compression of 7150 aluminum alloy. Experimental friction factors were calculated from continuous isothermal axisymmetric uniaxial compression tests at various deformation parameters. Then, an instantaneous friction factor equation was fitted by mathematical analysis. After verification by comparing single-pass flow stress correction with the traditional average friction factor correction, the instantaneous friction factor equation was applied to correct multistage stress-strain curves. The corrected results were reasonable and were validated by multistage relative softening calculations. This research offers broad potential for implementing axisymmetric uniaxial compression in multistage physical simulations and for friction optimization in finite element analysis.

  3. Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition

    NASA Technical Reports Server (NTRS)

    Zheng, Jason Xin; Nguyen, Kayla; He, Yutao

    2010-01-01

    Multirate (decimation/interpolation) filters are among the essential signal processing components in spaceborne instruments, where Finite Impulse Response (FIR) filters are often used to minimize nonlinear group delay and finite-precision effects. Cascaded (multi-stage) designs of Multi-Rate FIR (MRFIR) filters are further used for large rate change ratios, in order to lower the required throughput while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed outputs at the lowest possible sampling rate. In this paper, an alternative representation and implementation technique, called TD-MRFIR (Thread Decomposition MRFIR), is presented. The basic idea is to decompose the MRFIR into output computational threads, in contrast to the structural decomposition of the original filter performed by polyphase decomposition. Each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. The technical details of TD-MRFIR are explained, first showing its applicability to the implementation of downsampling, upsampling, and resampling FIR filters, and then describing a general strategy to optimally allocate the number of filter taps. A particular FPGA design of a multi-stage TD-MRFIR for the L-band radar of NASA's SMAP (Soil Moisture Active Passive) instrument is demonstrated, and its implementation results in several targeted FPGA devices are summarized in terms of functional (bit width, fixed-point error) and performance (timing closure, resource usage, and power estimation) parameters.
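
    The thread idea can be sketched for a decimate-by-M FIR filter: each retained output is an independent finite convolution (one "thread"), so discarded high-rate outputs are never formed. A minimal sketch with hypothetical taps; it is checked against filter-then-downsample and is not the NASA FPGA implementation.

    ```python
    import numpy as np

    def decimating_fir_threads(x, h, M):
        """Compute y[n] = sum_k h[k] * x[n*M - k] only at the kept outputs.

        Each decimated output index n is one independent 'thread' of a finite
        convolution, so no discarded high-rate outputs are ever computed.
        """
        x = np.asarray(x, dtype=float)
        h = np.asarray(h, dtype=float)
        n_out = len(x) // M
        y = np.zeros(n_out)
        for n in range(n_out):                 # one thread per decimated output
            for k, hk in enumerate(h):
                idx = n * M - k
                if 0 <= idx < len(x):
                    y[n] += hk * x[idx]
        return y

    # sanity check against filter-then-downsample
    rng = np.random.default_rng(0)
    x = rng.standard_normal(64)
    h = np.array([0.1, 0.2, 0.4, 0.2, 0.1])    # hypothetical low-pass taps
    M = 4
    ref = np.convolve(x, h)[:len(x)][::M]
    assert np.allclose(decimating_fir_threads(x, h, M), ref)
    ```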

  4. Efficient Multi-Stage Time Marching for Viscous Flows via Local Preconditioning

    NASA Technical Reports Server (NTRS)

    Kleb, William L.; Wood, William A.; vanLeer, Bram

    1999-01-01

    A new method has been developed to accelerate the convergence of explicit time-marching, laminar Navier-Stokes codes through the combination of local preconditioning and multi-stage time-marching optimization. Local preconditioning is a technique that modifies the time-dependent equations so that all information moves or decays at nearly the same rate, thus relieving the stiffness of the system of equations. Multi-stage time marching can be optimized by modifying its coefficients to account for the presence of viscous terms, allowing larger time steps. We show that it is possible to optimize the time-marching scheme for a wide range of cell Reynolds numbers for the scalar advection-diffusion equation, and that local preconditioning allows this optimization to be applied to the Navier-Stokes equations. Convergence acceleration of the new method is demonstrated through numerical experiments with circular advection and laminar boundary-layer flow over a flat plate.
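
    As context for the kind of scheme being optimized, here is a minimal sketch of an m-stage explicit (low-storage, Jameson-style) time-marching step applied to the 1-D scalar advection-diffusion equation; the stage coefficients are generic placeholders, not the viscous-optimized values derived in the paper, and no local preconditioning is applied.

    ```python
    import numpy as np

    def rhs(u, a, nu, dx):
        """Semi-discrete residual of u_t + a u_x = nu u_xx on a periodic grid:
        first-order upwind advection plus central diffusion."""
        adv = -a * (u - np.roll(u, 1)) / dx
        dif = nu * (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
        return adv + dif

    def multistage_step(u, dt, alphas, a, nu, dx):
        """Low-storage m-stage scheme: u^(k) = u^n + alpha_k * dt * R(u^(k-1))."""
        u0 = u.copy()
        for alpha in alphas:
            u = u0 + alpha * dt * rhs(u, a, nu, dx)
        return u

    # placeholder 4-stage coefficients (the paper instead tunes these for a
    # target range of cell Reynolds numbers a*dx/nu)
    alphas = [0.25, 1.0 / 3.0, 0.5, 1.0]

    nx, a, nu = 100, 1.0, 0.01
    dx = 1.0 / nx
    x = np.arange(nx) * dx
    u = np.exp(-200 * (x - 0.5) ** 2)
    dt = 0.4 * min(dx / a, 0.5 * dx**2 / nu)     # conservative stability estimate
    for _ in range(200):
        u = multistage_step(u, dt, alphas, a, nu, dx)
    ```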

  5. Simulation of 3-D viscous compressible flow in multistage turbomachinery by finite element methods

    NASA Astrophysics Data System (ADS)

    Sleiman, Mohamad

    1999-11-01

    The flow in a multistage turbomachinery blade row is compressible, viscous, and unsteady. Complex flow features such as boundary layers, wake migration from upstream blade rows, shocks, tip leakage jets, and vortices interact as the flow convects through the stages. These interactions contribute significantly to the aerodynamic losses of the system and degrade the performance of the machine. The unsteadiness also leads to blade vibration and a shortening of blade life. It is therefore difficult to optimize the design of a blade row, whether aerodynamically or structurally, in isolation, without accounting for the effects of the upstream and downstream rows. The effects of axial spacing, blade count, clocking (relative position of follow-up rotors with respect to wakes shed by upstream ones), and levels of unsteadiness may have a significant effect on performance and durability. In this Thesis, finite element formulations for the simulation of multistage turbomachinery are presented in terms of the Reynolds-averaged Navier-Stokes equations for three-dimensional steady or unsteady, viscous, compressible, turbulent flows. Three methodologies are presented and compared. First, a steady multistage analysis using a mixing-plane model has been implemented and validated against engine data. For axial machines, it has been found that the mixing-plane simulation methods match the experimental data very well. However, the results for a centrifugal stage, consisting of an impeller followed by a vane diffuser of equal pitch, show clear inconsistency with engine performance data, indicating that the mixing-plane method is inappropriate for centrifugal machines. Following these findings, a more complete unsteady multistage model has been devised for a configuration with an equal number of rotor and stator blades (equal pitches). Non-matching grids are used at the rotor-stator interface, and an implicit interpolation procedure is devised to ensure continuity of fluxes across the interface. This permits the rotor and stator equations to be solved in a fully coupled manner, allowing larger time steps in attaining a time-periodic solution. This equal-pitch approach has been validated on the complex geometry of a centrifugal stage. Finally, for a stage configuration with unequal pitches, the time-inclined method, developed by Giles (1991) for 2-D viscous compressible flow, has been extended to 3-D and formulated in terms of the physical solution vector U, rather than Q, a non-physical one. The method has been evaluated for unsteady flow through a rotor blade passage of the power turbine of a turboprop.

  6. A posteriori error estimation for multi-stage Runge–Kutta IMEX schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chaudhry, Jehanzeb H.; Collins, J. B.; Shadid, John N.

    Implicit–Explicit (IMEX) schemes are widely used time integration methods for approximating solutions to a large class of problems. In this work, we develop accurate a posteriori error estimates of a quantity-of-interest for approximations obtained from multi-stage IMEX schemes. This is done by first defining a finite element method that is nodally equivalent to an IMEX scheme, then using typical methods for adjoint-based error estimation. Furthermore, the use of a nodally equivalent finite element method allows a decomposition of the error into multiple components, each describing the effect of a different portion of the method on the total error in a quantity-of-interest.

  7. A posteriori error estimation for multi-stage Runge–Kutta IMEX schemes

    DOE PAGES

    Chaudhry, Jehanzeb H.; Collins, J. B.; Shadid, John N.

    2017-02-05

    Implicit–Explicit (IMEX) schemes are widely used time integration methods for approximating solutions to a large class of problems. In this work, we develop accurate a posteriori error estimates of a quantity-of-interest for approximations obtained from multi-stage IMEX schemes. This is done by first defining a finite element method that is nodally equivalent to an IMEX scheme, then using typical methods for adjoint-based error estimation. Furthermore, the use of a nodally equivalent finite element method allows a decomposition of the error into multiple components, each describing the effect of a different portion of the method on the total error in a quantity-of-interest.
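
    As background on the scheme class being analyzed, here is a minimal sketch of a first-order IMEX split for a scalar test problem: the nonstiff forcing is treated explicitly and the stiff linear term implicitly. The adjoint-based error estimation and the nodally equivalent finite element construction of the paper are not shown.

    ```python
    import numpy as np

    # Test problem: u' = f_E(t, u) + lam * u, with a stiff linear implicit part.
    lam = -1000.0                       # stiff decay rate (implicit part)
    f_explicit = lambda t, u: np.cos(t) # nonstiff forcing (explicit part)

    def imex_euler(u0, t0, t1, n):
        """u^{n+1} = u^n + dt * f_E(t_n, u^n) + dt * lam * u^{n+1};
        the implicit equation is linear and scalar here, so solve it directly."""
        dt = (t1 - t0) / n
        t, u = t0, u0
        for _ in range(n):
            u = (u + dt * f_explicit(t, u)) / (1.0 - dt * lam)
            t += dt
        return u

    print(imex_euler(1.0, 0.0, 1.0, 200))
    ```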

  8. A multistage time-stepping scheme for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Turkel, E.

    1985-01-01

    A class of explicit multistage time-stepping schemes is used to construct an algorithm for solving the compressible Navier-Stokes equations. Flexibility in treating arbitrary geometries is obtained with a finite-volume formulation. Numerical efficiency is achieved by employing techniques for accelerating convergence to steady state. Computer processing is enhanced through vectorization of the algorithm. The scheme is evaluated by solving laminar and turbulent flows over a flat plate and an NACA 0012 airfoil. Numerical results are compared with theoretical solutions or other numerical solutions and/or experimental data.

  9. Numerical solutions of 2-D multi-stage rotor/stator unsteady flow interactions

    NASA Astrophysics Data System (ADS)

    Yang, R.-J.; Lin, S.-J.

    1991-01-01

    The Rai method of single-stage rotor/stator flow interaction is extended to handle multistage configurations. In this study, a two-dimensional Navier-Stokes multi-zone approach was used to investigate unsteady flow interactions within two multistage axial turbines. The governing equations are solved by an iterative, factored, implicit finite-difference, upwind algorithm. Numerical accuracy is checked by investigating the effect of time step size, the effect of subiteration in the Newton-Raphson technique, and the effect of full viscous versus thin-layer approximation. Computer results compared well with experimental data. Unsteady flow interactions, wake cutting, and the associated evolution of vortical entities are discussed.

  10. Reentry trajectory optimization based on a multistage pseudospectral method.

    PubMed

    Zhao, Jiang; Zhou, Rui; Jin, Xuelian

    2014-01-01

    Of the many direct numerical methods, the pseudospectral method serves as an effective tool for solving the reentry trajectory optimization problem for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with unexpected situations in reentry flight. The strategy comprises two subproblems: trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified segment of the trajectory as the flight state transitions. The full glide trajectory consists of several optimal trajectory sequences. Newly imposed geographic constraints in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight are used to show the feasibility of the multistage pseudospectral method for reentry trajectory optimization.

  11. Reentry Trajectory Optimization Based on a Multistage Pseudospectral Method

    PubMed Central

    Zhou, Rui; Jin, Xuelian

    2014-01-01

    Of the many direct numerical methods, the pseudospectral method serves as an effective tool for solving the reentry trajectory optimization problem for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with unexpected situations in reentry flight. The strategy comprises two subproblems: trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified segment of the trajectory as the flight state transitions. The full glide trajectory consists of several optimal trajectory sequences. Newly imposed geographic constraints in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight are used to show the feasibility of the multistage pseudospectral method for reentry trajectory optimization. PMID:24574929
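
    As background, here is a minimal sketch of the pseudospectral building block used within each stage: Chebyshev-Gauss-Lobatto collocation nodes and the associated differentiation matrix turn the stage dynamics into algebraic constraints. The cost function, controls, path constraints, NLP solver, and the stage-transition logic of the multistage strategy are omitted, and the toy dynamics below are assumed for illustration.

    ```python
    import numpy as np

    def cheb(N):
        """Chebyshev-Gauss-Lobatto nodes on [-1, 1] and the differentiation
        matrix D such that D @ f(x) approximates f'(x) spectrally."""
        if N == 0:
            return np.zeros((1, 1)), np.array([1.0])
        x = np.cos(np.pi * np.arange(N + 1) / N)
        c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
        X = np.tile(x, (N + 1, 1)).T
        dX = X - X.T
        D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
        D -= np.diag(D.sum(axis=1))               # negative-sum trick for diagonal
        return D, x

    # one "stage": enforce x'(t) = A x(t) at the collocation points of that stage;
    # a multistage scheme re-solves this on each new segment as the flight state
    # transitions, stitching the optimal sub-trajectories together.
    N = 16
    D, tau = cheb(N)
    A = np.array([[0.0, 1.0], [-1.0, 0.0]])       # toy dynamics (oscillator)
    T = 2.0                                       # stage duration
    Dt = (2.0 / T) * D                            # rescale derivative to [0, T]

    # verify the collocated dynamics on a known trajectory x(t) = [sin t, cos t]
    t = (tau + 1.0) * T / 2.0
    X = np.column_stack([np.sin(t), np.cos(t)])
    print(np.max(np.abs(Dt @ X - X @ A.T)))       # small residual (spectral accuracy)
    ```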

  12. Volume 2: Explicit, multistage upwind schemes for Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Elmiligui, Alaa; Ash, Robert L.

    1992-01-01

    The objective of this study was to develop a high-resolution, explicit, multi-block numerical algorithm suitable for efficient computation of the three-dimensional, time-dependent Euler and Navier-Stokes equations. The resulting algorithm employs a finite volume approach, using monotonic upstream schemes for conservation laws (MUSCL)-type differencing to obtain state variables at cell interfaces. Variable interpolations were written in the k-scheme formulation. Inviscid fluxes were calculated via Roe's flux-difference splitting and van Leer's flux-vector splitting techniques, which are considered state of the art. The viscous terms were discretized using a second-order, central-difference operator. Two classes of explicit time integration have been investigated for solving the compressible inviscid/viscous flow problems: two-stage predictor-corrector schemes, and multistage time-stepping schemes. The coefficients of the multistage time-stepping schemes have been modified successfully to achieve better performance with upwind differencing. A technique was developed to optimize the coefficients for good high-frequency damping at relatively high CFL numbers. Local time-stepping, implicit residual smoothing, and a multigrid procedure were added to the explicit time-stepping scheme to accelerate convergence to steady state. The developed algorithm was implemented successfully in a multi-block code, which provides complete topological and geometric flexibility. The only requirement is C0 continuity of the grid across block interfaces. The algorithm has been validated on a diverse set of three-dimensional test cases of increasing complexity. The cases studied were: (1) supersonic corner flow; (2) supersonic plume flow; (3) laminar and turbulent flow over a flat plate; (4) transonic flow over an ONERA M6 wing; and (5) unsteady flow of a compressible jet impinging on a ground plane (with and without cross flow). The emphasis of the test cases was validation of the code, assessment of performance, and demonstration of flexibility.

  13. How quantitative measures unravel design principles in multi-stage phosphorylation cascades.

    PubMed

    Frey, Simone; Millat, Thomas; Hohmann, Stefan; Wolkenhauer, Olaf

    2008-09-07

    We investigate design principles of linear multi-stage phosphorylation cascades by using quantitative measures for signaling time, signal duration and signal amplitude. We compare alternative pathway structures by varying the number of phosphorylations and the length of the cascade. We show that a model for a weakly activated pathway does not reflect the biological context well, unless it is restricted to certain parameter combinations. Focusing therefore on a more general model, we compare alternative structures with respect to a multivariate optimization criterion. We test the hypothesis that the structure of a linear multi-stage phosphorylation cascade is the result of an optimization process aiming for a fast response, defined by the minimum of the product of signaling time and signal duration. It is then shown that certain pathway structures minimize this criterion. Several popular models of MAPK cascades form the basis of our study. These models represent different levels of approximation, which we compare and discuss with respect to the quantitative measures.
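
    The quantitative measures referred to are, in the formulation of Heinrich and co-workers that is standard in this literature, integral moments of the activated-kinase time course: signaling time (first moment), signal duration (spread about it), and signal amplitude (area divided by twice the duration). A minimal sketch computing them for an assumed two-exponential pulse rather than a MAPK cascade model:

    ```python
    import numpy as np

    def signaling_measures(t, x):
        """Signaling time, signal duration, and amplitude as integral moments
        (tau = first moment, theta = spread, S = area / (2 * theta))."""
        I = np.trapz(x, t)
        tau = np.trapz(t * x, t) / I
        theta = np.sqrt(np.trapz(t**2 * x, t) / I - tau**2)
        S = I / (2.0 * theta)
        return tau, theta, S

    # toy activated-kinase time course: fast rise, slower decay (assumed shape)
    t = np.linspace(0.0, 50.0, 5001)
    x = np.exp(-0.1 * t) - np.exp(-1.0 * t)

    tau, theta, S = signaling_measures(t, x)
    print(f"signaling time {tau:.2f}, duration {theta:.2f}, amplitude {S:.3f}")
    # the paper's fast-response criterion minimizes the product tau * theta
    print("tau * theta =", tau * theta)
    ```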

  14. Novel methodology for wide-ranged multistage morphing waverider based on conical theory

    NASA Astrophysics Data System (ADS)

    Liu, Zhen; Liu, Jun; Ding, Feng; Xia, Zhixun

    2017-11-01

    This study proposes the wide-ranged multistage morphing waverider design method. The flow field structure and aerodynamic characteristics of multistage waveriders are also analyzed. In this method, the multistage waverider is generated in the same conical flowfield, which contains a free-stream surface and different compression-stream surfaces. The obtained results show that the introduction of the multistage waverider design method can solve the problem of aerodynamic performance deterioration in the off-design state and allow the vehicle to always maintain the optimal flight state. The multistage waverider design method, combined with transfiguration flight strategy, can lead to greater design flexibility and the optimization of hypersonic wide-ranged waverider vehicles.

  15. Accurate solutions for transonic viscous flow over finite wings

    NASA Technical Reports Server (NTRS)

    Vatsa, V. N.

    1986-01-01

    An explicit multistage Runge-Kutta type time-stepping scheme is used for solving the three-dimensional, compressible, thin-layer Navier-Stokes equations. A finite-volume formulation is employed to facilitate treatment of the complex grid topologies encountered in three-dimensional calculations. Convergence to steady state is expedited through the use of acceleration techniques. Further numerical efficiency is achieved through vectorization of the computer code. The accuracy of the overall scheme is evaluated by comparing the computed solutions with experimental data for a finite wing under different test conditions in the transonic regime. A grid refinement study is conducted to estimate the grid requirements for adequate resolution of the salient features of such flows.

  16. Multi-stage responsive 4D printed smart structure through varying geometric thickness of shape memory polymer

    NASA Astrophysics Data System (ADS)

    Teoh, Joanne Ee Mei; Zhao, Yue; An, Jia; Chua, Chee Kai; Liu, Yong

    2017-12-01

    Shape memory polymers (SMPs) have gained a presence in additive manufacturing due to their role in 4D printing. They can be printed either in multi-materials for multi-stage shape recovery or in a single material for single-stage shape recovery. When printed in multi-materials, material or material-based design is used as a controlling factor for multi-stage shape recovery. However, when printed in a single material, it is difficult to design multi-stage shape recovery due to the lack of a controlling factor. In this research, we explore the use of geometric thickness as a controlling factor to design smart structures possessing multi-stage shape recovery using a single SMP. L-shaped hinges with a thickness ranging from 0.3-2 mm were designed and printed in four different SMPs. The effect of thickness on SMP’s response time was examined via both experiment and finite element analysis using Ansys transient thermal simulation. A method was developed to accurately measure the response time in millisecond resolution. Temperature distribution and heat transfer in specimens during thermal activation were also simulated and discussed. Finally, a spiral square and an artificial flower consisting of a single SMP were designed and printed with appropriate thickness variation for the demonstration of a controlled multi-stage shape recovery. Experimental results indicated that smart structures printed using single material with controlled thickness parameters are able to achieve controlled shape recovery characteristics similar to those printed with multiple materials and uniform geometric thickness. Hence, the geometric parameter can be used to increase the degree of freedom in designing future smart structures possessing complex shape recovery characteristics.

  17. A genetic technique for planning a control sequence to navigate the state space with a quasi-minimum-cost output trajectory for a non-linear multi-dimensional system

    NASA Technical Reports Server (NTRS)

    Hein, C.; Meystel, A.

    1994-01-01

    There are many multi-stage optimization problems that are not easily solved through any known direct method when the stages are coupled. For instance, we have investigated the problem of planning a vehicle's control sequence to negotiate obstacles and reach a goal in minimum time. The vehicle has a known mass, and the controlling forces have finite limits. We have developed a technique that finds admissible control trajectories which tend to minimize the vehicle's transit time through the obstacle field. The immediate application is that of a space robot which must rapidly traverse around 2- or 3-dimensional structures via application of a rotating thruster or non-rotating on-off thrusters; a testbed for such vehicles is located at the Marshall Space Flight Center in Huntsville, Alabama. However, it appears that the developed method is applicable to a general set of optimization problems in which the cost function and the multi-dimensional multi-state system can be any nonlinear functions that are continuous in the operating regions. Other applications include the planning of optimal navigation pathways through a traversability graph; the planning of control inputs for underwater maneuvering vehicles which have complex control state-space relationships; the planning of control sequences for milling and manufacturing robots; the planning of control and trajectories for automated delivery vehicles; and the optimization of athletic training in slalom sports.

  18. Simulation of multi-stage nonlinear bone remodeling induced by fixed partial dentures of different configurations: a comparative clinical and numerical study.

    PubMed

    Liao, Zhipeng; Yoda, Nobuhiro; Chen, Junning; Zheng, Keke; Sasaki, Keiichi; Swain, Michael V; Li, Qing

    2017-04-01

    This paper aimed to develop a clinically validated bone remodeling algorithm by integrating bone's dynamic properties in a multi-stage fashion, based on a four-year clinical follow-up of implant treatment. The configurational effects of fixed partial dentures (FPDs) were explored using a multi-stage remodeling rule. Three-dimensional real-time occlusal loads during maximum voluntary clenching were measured with a piezoelectric force transducer and incorporated into a computerized tomography-based finite element mandibular model. Virtual X-ray images were generated from the simulation and statistically correlated with clinical data using linear regression. The strain energy density-driven remodeling parameters were regulated over the time frame considered. A linear single-stage bone remodeling algorithm, with a single set of constant remodeling parameters, was found to fit the clinical data poorly under linear regression (low R² and R), whereas a time-dependent multi-stage algorithm better simulated the remodeling process (high R² and R) against the clinical results. The three-implant-supported and distally cantilevered FPDs presented noticeable and continuous bone apposition, mainly adjacent to the cervical and apical regions. The bridged and mesially cantilevered FPDs showed bone resorption or no visible bone formation in some areas. Time-dependent variation of bone remodeling parameters is recommended to better correlate remodeling simulation with clinical follow-up. The position of FPD pontics plays a critical role in mechanobiological functionality and bone remodeling. Caution should be exercised when selecting a cantilever FPD due to the risk of bone resorption from overloading.
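
    A minimal sketch of the kind of strain-energy-density (SED) driven update such remodeling algorithms use, with a "lazy zone" and a remodeling coefficient that changes over multiple stages of the follow-up period; all numbers and the staging schedule below are placeholders, not the clinically calibrated values of this study.

    ```python
    import numpy as np

    def remodeling_rate(sed, rho, k_ref, s, B):
        """Huiskes-type rule: drho/dt = B * (Psi - threshold) outside a 'lazy zone'
        of half-width s, where Psi = SED per unit bone mass."""
        psi = sed / rho
        if psi > (1.0 + s) * k_ref:
            return B * (psi - (1.0 + s) * k_ref)   # apposition
        if psi < (1.0 - s) * k_ref:
            return B * (psi - (1.0 - s) * k_ref)   # resorption
        return 0.0                                  # lazy zone: no net remodeling

    # multi-stage schedule: the remodeling coefficient B varies over the
    # follow-up period instead of being held constant (placeholder values)
    stages = [(0.0, 12.0, 1.0), (12.0, 30.0, 0.4), (30.0, 48.0, 0.1)]  # months, B

    rho, dt = 1.0, 0.25          # g/cm^3, months
    k_ref, s = 0.004, 0.3        # reference SED per mass and lazy-zone width (assumed)
    history = []
    for t0, t1, B in stages:
        for t in np.arange(t0, t1, dt):
            sed = 0.006 * (1.0 + 0.1 * np.sin(0.5 * t))   # surrogate for the FE output
            rho = max(0.01, rho + dt * remodeling_rate(sed, rho, k_ref, s, B))
            history.append((t, rho))
    ```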

  19. Comparison of three explicit multigrid methods for the Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Chima, Rodrick V.; Turkel, Eli; Schaffer, Steve

    1987-01-01

    Three explicit multigrid methods, Ni's method, Jameson's finite-volume method, and a finite-difference method based on Brandt's work, are described and compared for two model problems. All three methods use an explicit multistage Runge-Kutta scheme on the fine grid, and this scheme is also described. Convergence histories for inviscid flow over a bump in a channel for the fine-grid scheme alone show that convergence rate is proportional to Courant number and that implicit residual smoothing can significantly accelerate the scheme. Ni's method was slightly slower than the implicitly-smoothed scheme alone. Brandt's and Jameson's methods are shown to be equivalent in form but differ in their node versus cell-centered implementations. They are about 8.5 times faster than Ni's method in terms of CPU time. Results for an oblique shock/boundary layer interaction problem verify the accuracy of the finite-difference code. All methods slowed considerably on the stretched viscous grid but Brandt's method was still 2.1 times faster than Ni's method.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preston, Leiph

    Although using standard Taylor series coefficients for finite-difference operators is optimal in the sense that in the limit of infinitesimal space and time discretization, the solution approaches the correct analytic solution to the acousto-dynamic system of differential equations, other finite-difference operators may provide optimal computational run time given certain error bounds or source bandwidth constraints. This report describes the results of investigation of alternative optimal finite-difference coefficients based on several optimization/accuracy scenarios and provides recommendations for minimizing run time while retaining error within given error bounds.
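
    The general idea can be sketched in one dimension (this is a generic least-squares construction, not necessarily the optimization used in the report): choose antisymmetric first-derivative stencil coefficients that minimize the spectral-response error over a prescribed wavenumber band, rather than matching Taylor-series order, and compare against the standard sixth-order coefficients.

    ```python
    import numpy as np

    def optimized_first_derivative_coeffs(M, kappa_max, n_samples=400):
        """Antisymmetric stencil  f' ~ (1/h) * sum_m c_m (f_{+m} - f_{-m}).
        Fit c so that the spectral response 2 * sum_m c_m * sin(m*kappa) matches
        the exact response kappa in a least-squares sense over (0, kappa_max]."""
        kappa = np.linspace(1e-3, kappa_max, n_samples)
        A = 2.0 * np.sin(np.outer(kappa, np.arange(1, M + 1)))
        c, *_ = np.linalg.lstsq(A, kappa, rcond=None)
        return c

    # compare against standard Taylor coefficients for a half-width-3 stencil
    c_opt = optimized_first_derivative_coeffs(M=3, kappa_max=0.8 * np.pi)
    c_taylor = np.array([3.0 / 4.0, -3.0 / 20.0, 1.0 / 60.0])  # 6th-order central

    kappa = np.linspace(0.01, np.pi, 200)
    resp = lambda c: 2.0 * np.sin(np.outer(kappa, np.arange(1, 4))) @ c
    band = kappa <= 0.8 * np.pi
    err_opt = np.max(np.abs(resp(c_opt) - kappa)[band])
    err_tay = np.max(np.abs(resp(c_taylor) - kappa)[band])
    print(f"max response error up to 0.8*pi: optimized {err_opt:.3e}, Taylor {err_tay:.3e}")
    ```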

  1. Multistage Fuzzy Decision Making in Bilateral Negotiation with Finite Termination Times

    NASA Astrophysics Data System (ADS)

    Richter, Jan; Kowalczyk, Ryszard; Klusch, Matthias

    In this paper we model the negotiation process as a multistage fuzzy decision problem where the agents' preferences are represented by a fuzzy goal and fuzzy constraints. The opponent is represented by a fuzzy Markov decision process in the form of offer-response patterns, which enables utilization of limited and uncertain information, e.g. the characteristics of the concession behaviour. We show that adaptive negotiation strategies can be obtained by using only the negotiation threads of two past cases to create and update the fuzzy transition matrix. The experimental evaluation demonstrates that our approach is adaptive towards different negotiation behaviours and that the fuzzy representation of the preferences and the transition matrix allows for application in many scenarios where the available information, preferences and constraints are soft or imprecise.

  2. A staggered-grid finite-difference scheme optimized in the time–space domain for modeling scalar-wave propagation in geophysical problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Sirui, E-mail: siruitan@hotmail.com; Huang, Lianjie, E-mail: ljh@lanl.gov

    For modeling scalar-wave propagation in geophysical problems using finite-difference schemes, optimizing the coefficients of the finite-difference operators can reduce numerical dispersion. Most optimized finite-difference schemes for modeling seismic-wave propagation suppress only spatial, but not temporal, dispersion errors. We develop a novel optimized finite-difference scheme for numerical scalar-wave modeling that controls dispersion errors not only in space but also in time. Our optimized scheme is based on a new stencil that contains a few more grid points than the standard stencil. We design an objective function for minimizing relative errors of the phase velocities of waves propagating in all directions within a given range of wavenumbers. Dispersion analysis and numerical examples demonstrate that our optimized finite-difference scheme is computationally up to 2.5 times faster than optimized schemes using the standard stencil in achieving similar modeling accuracy for a given 2D or 3D problem. Compared with the high-order finite-difference scheme using the same new stencil, our optimized scheme reduces the computational cost by 50 percent while achieving similar modeling accuracy. This new optimized finite-difference scheme is particularly useful for large-scale 3D scalar-wave modeling and inversion.

  3. A New Finite Difference Q-compensated RTM Algorithm in Tilted Transverse Isotropic (TTI) Media

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Hu, W.; Ning, J.

    2017-12-01

    Attenuating, anisotropic geological bodies are difficult to image with conventional migration methods. In such scenarios, recorded seismic data suffer greatly from both amplitude decay and phase distortion, resulting in degraded resolution, poor illumination, and incorrect migration depth in the imaging results. To efficiently obtain high-quality images, we propose a novel TTI QRTM algorithm based on the Generalized Standard Linear Solid model, combined with a unique multi-stage optimization technique to simultaneously correct the decayed amplitude and the distorted phase velocity. Numerical tests demonstrate that our TTI QRTM algorithm effectively corrects migration depth, significantly improves illumination, and enhances resolution within and below low-Q regions. The result of our new method is very close to the reference RTM image, whereas QRTM without TTI cannot produce a correct image. Compared to the conventional QRTM method based on a pseudo-spectral operator for fractional Laplacian evaluation, our method is more computationally efficient for large-scale applications and more suitable for GPU acceleration. With the current multi-stage dispersion optimization scheme, this TTI QRTM method performs best in the 10-70 Hz frequency range, and could be used over a wider frequency range. Furthermore, as this method can also handle frequency-dependent Q, it has potential for imaging deep structures where low Q exists, such as subduction zones, volcanic zones, or fault zones with passive source observations.

  4. Microfiltration of thin stillage: Process simulation and economic analyses

    USDA-ARS's Scientific Manuscript database

    In plant scale operations, multistage membrane systems have been adopted for cost minimization. We considered design optimization and operation of a continuous microfiltration (MF) system for the corn dry grind process. The objectives were to develop a model to simulate a multistage MF system, optim...

  5. Optimization of the propulsion for multistage solid rocket motor launchers

    NASA Astrophysics Data System (ADS)

    Calabro, M.; Dufour, A.; Macaire, A.

    2002-02-01

    Some tools focused on a rapid multidisciplinary optimization capability for multistage launch vehicle design were developed at EADS-LV. These tools fall into two categories: those related to propulsion design optimization, and a computer code devoted to trajectory optimization under constraints. The two are linked in order to obtain an optimal vehicle design through an iterative process. After a description of both categories of tools, an example application to a small space launcher is given.
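
    The coupling between staging and performance can be illustrated with a textbook-level sketch under strong simplifications (ideal Tsiolkovsky delta-v, fixed structural coefficient and specific impulses, no gravity or drag losses); the numbers are assumptions, and the EADS-LV tools optimize far more detailed propulsion and trajectory models.

    ```python
    import numpy as np

    G0 = 9.80665  # m/s^2

    def total_delta_v(mp_split, mp_total, m_payload, isp=(280.0, 300.0), eps=0.10):
        """Ideal two-stage delta-v for a first-stage propellant fraction mp_split.
        eps is the structural coefficient: m_structure = eps/(1-eps) * m_propellant."""
        mp = np.array([mp_split, 1.0 - mp_split]) * mp_total
        ms = eps / (1.0 - eps) * mp                      # stage structural masses
        dv = 0.0
        m_current = m_payload + np.sum(mp + ms)          # gross lift-off mass
        for i in (0, 1):                                 # burn stage 1, then stage 2
            m_after_burn = m_current - mp[i]
            dv += isp[i] * G0 * np.log(m_current / m_after_burn)
            m_current = m_after_burn - ms[i]             # jettison the spent stage
        return dv

    splits = np.linspace(0.05, 0.95, 181)
    dvs = [total_delta_v(s, mp_total=40e3, m_payload=500.0) for s in splits]
    best = splits[int(np.argmax(dvs))]
    print(f"best first-stage propellant fraction ~ {best:.2f}, dv = {max(dvs)/1e3:.2f} km/s")
    ```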

  6. Optimizing Integrated Terminal Airspace Operations Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Bosson, Christabelle; Xue, Min; Zelinski, Shannon

    2014-01-01

    In the terminal airspace, integrated departures and arrivals have the potential to increase operational efficiency. Recent research has developed genetic-algorithm-based schedulers for integrated arrival and departure operations under uncertainty. This paper presents an alternate method using a machine job-shop scheduling formulation to model the integrated airspace operations. A multistage stochastic programming approach is chosen to formulate the problem, and candidate solutions are obtained by solving sample average approximation problems with finite sample size. Because approximate solutions are computed, the proposed algorithm incorporates the computation of statistical bounds to estimate the optimality of the candidate solutions. A proof-of-concept study is conducted on a baseline implementation of a simple problem considering a fleet mix of 14 aircraft evolving in a model of the Los Angeles terminal airspace. A more thorough statistical analysis is also performed to evaluate the impact of the number of scenarios considered in the sampled problem. To handle the extensive sampling computations, a multithreading technique is introduced.

  7. Navier-Stokes calculations for DFVLR F5-wing in wind tunnel using Runge-Kutta time-stepping scheme

    NASA Technical Reports Server (NTRS)

    Vatsa, V. N.; Wedan, B. W.

    1988-01-01

    A three-dimensional Navier-Stokes code using an explicit multistage Runge-Kutta type of time-stepping scheme is used for solving the transonic flow past a finite wing mounted inside a wind tunnel. Flow past the same wing in free air was also computed to assess the effect of wind-tunnel walls on such flows. Numerical efficiency is enhanced through vectorization of the computer code. A Cyber 205 computer with 32 million words of internal memory was used for these computations.

  8. Time-optimal control with finite bandwidth

    NASA Astrophysics Data System (ADS)

    Hirose, M.; Cappellaro, P.

    2018-04-01

    Time-optimal control theory provides recipes to achieve quantum operations with high fidelity and speed, as required in quantum technologies such as quantum sensing and computation. While technical advances have achieved the ultrastrong driving regime in many physical systems, these capabilities have yet to be fully exploited for the precise control of quantum systems, as other limitations, such as the generation of higher harmonics or the finite response time of the control apparatus, prevent the implementation of theoretical time-optimal control. Here we present a method to achieve time-optimal control of qubit systems that can take advantage of fast driving beyond the rotating wave approximation. We exploit results from time-optimal control theory to design driving protocols that can be implemented with realistic, finite-bandwidth control fields, and we find a relationship between bandwidth limitations and achievable control fidelity.

  9. Multistage degradation modeling for BLDC motor based on Wiener process

    NASA Astrophysics Data System (ADS)

    Yuan, Qingyang; Li, Xiaogang; Gao, Yuankai

    2018-05-01

    Brushless DC motors are widely used, and their working temperatures, regarded as degradation processes, are nonlinear and multistage. It is therefore necessary to establish a nonlinear degradation model. In this research, our study was based on accelerated degradation data of motors, namely their working temperatures. A multistage Wiener model was established by using a transition function to modify the linear model. A normal weighted average filter (Gauss filter) was used to improve the estimation of the model parameters. Then, to maximize the likelihood function for parameter estimation, we used a numerical optimization method, the simplex method, in an iterative calculation. The modeling results show that the degradation mechanism changes during the degradation of the high-speed motor. The effectiveness and rationality of the model are verified by comparing its life distribution with that of the widely used nonlinear Wiener model, as well as by comparing QQ plots of the residuals. Predictions of motor life are then obtained from the life distributions at different times calculated by the multistage model.
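
    A minimal sketch of a two-stage Wiener degradation path of the kind described, with the drift blended between stages by a logistic transition function; all parameters are placeholders, and the Gauss filtering and simplex-based likelihood maximization of the paper are not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_multistage_wiener(T, dt, mu1, mu2, sigma, t_c, k):
        """X(t) follows dX = mu(t) dt + sigma dW, where the drift blends from
        mu1 to mu2 around the change point t_c via a logistic transition."""
        t = np.arange(0.0, T, dt)
        w = 1.0 / (1.0 + np.exp(-k * (t - t_c)))          # transition function
        mu = (1.0 - w) * mu1 + w * mu2
        dX = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(len(t))
        return t, np.cumsum(dX)

    t, x = simulate_multistage_wiener(T=200.0, dt=0.5, mu1=0.05, mu2=0.25,
                                      sigma=0.4, t_c=120.0, k=0.2)

    # crude first-passage estimate of "failure" at a temperature-rise threshold
    threshold = 25.0
    hit = np.argmax(x >= threshold)
    print("first passage at t =", t[hit] if x[hit] >= threshold else "not reached")
    ```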

  10. Discrete-time Markovian-jump linear quadratic optimal control

    NASA Technical Reports Server (NTRS)

    Chizeck, H. J.; Willsky, A. S.; Castanon, D.

    1986-01-01

    This paper is concerned with the optimal control of discrete-time linear systems that possess randomly jumping parameters described by finite-state Markov processes. For problems having quadratic costs and perfect observations, the optimal control laws and expected costs-to-go can be precomputed from a set of coupled Riccati-like matrix difference equations. Necessary and sufficient conditions are derived for the existence of optimal constant control laws which stabilize the controlled system as the time horizon becomes infinite, with finite optimal expected cost.
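
    The coupled Riccati-like backward recursion for the finite-horizon jump-linear-quadratic problem can be written down directly. A minimal sketch for a two-mode system with assumed data; it computes mode-dependent feedback gains but does not address the infinite-horizon existence and stabilizability conditions derived in the paper.

    ```python
    import numpy as np

    # two operating modes with different dynamics (assumed example data)
    A = [np.array([[1.0, 0.1], [0.0, 1.0]]), np.array([[1.0, 0.1], [0.0, 0.8]])]
    B = [np.array([[0.0], [0.1]]),           np.array([[0.0], [0.05]])]
    Q = [np.eye(2), np.eye(2)]
    R = [np.array([[0.1]]), np.array([[0.1]])]
    P_trans = np.array([[0.9, 0.1], [0.2, 0.8]])   # mode transition probabilities
    N = 50                                          # horizon length

    # backward recursion of the coupled Riccati-like difference equations
    P = [Q[0].copy(), Q[1].copy()]                  # terminal costs P_i(N) = Q_i (assumed)
    gains = []
    for k in range(N - 1, -1, -1):
        # mode-coupled expected cost-to-go: E_i = sum_j p_ij P_j(k+1)
        E = [sum(P_trans[i, j] * P[j] for j in range(2)) for i in range(2)]
        K, P_new = [], []
        for i in range(2):
            S = R[i] + B[i].T @ E[i] @ B[i]
            Ki = np.linalg.solve(S, B[i].T @ E[i] @ A[i])
            K.append(Ki)
            P_new.append(Q[i] + A[i].T @ E[i] @ (A[i] - B[i] @ Ki))
        P = P_new
        gains.append(K)

    gains.reverse()       # gains[k][i] is the feedback gain for mode i at time k
    print(gains[0][0], gains[0][1])
    ```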

  11. Computing Finite-Time Lyapunov Exponents with Optimally Time Dependent Reduction

    NASA Astrophysics Data System (ADS)

    Babaee, Hessam; Farazmand, Mohammad; Sapsis, Themis; Haller, George

    2016-11-01

    We present a method to compute Finite-Time Lyapunov Exponents (FTLE) of a dynamical system using the Optimally Time-Dependent (OTD) reduction recently introduced by H. Babaee and T. P. Sapsis. The OTD modes are a set of finite-dimensional, time-dependent, orthonormal basis vectors {u_i(x, t)}, i = 1, ..., N, that capture the directions associated with transient instabilities. The evolution equation of the OTD modes is derived from a minimization principle that optimally approximates the most unstable directions over finite times. To compute the FTLE, we evolve a single OTD mode along with the nonlinear dynamics. We approximate the FTLE from the reduced system obtained by projecting the instantaneous linearized dynamics onto the OTD mode. This results in a significant reduction in computational cost compared to conventional methods for computing the FTLE. We demonstrate the efficiency of our method for the double-gyre and ABC flows. ARO project 66710-EG-YIP.
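
    The single-mode case can be sketched compactly: along a trajectory with linearization A(t), a unit vector evolving as u' = A u - (u^T A u) u stays normalized and tracks the dominant growth direction, and the time average of u^T A u estimates the leading FTLE. A minimal sketch on an assumed steady linear saddle flow (not the double-gyre benchmark), where the exact answer is known:

    ```python
    import numpy as np

    def ftle_single_otd(A_of_t, u0, t0, t1, dt):
        """Evolve one OTD mode  du/dt = A u - (u^T A u) u  and accumulate the
        instantaneous growth rate u^T A u; its time average estimates the
        leading finite-time Lyapunov exponent over [t0, t1]."""
        u = np.asarray(u0, dtype=float)
        u /= np.linalg.norm(u)
        t, acc = t0, 0.0
        while t < t1 - 1e-12:
            A = A_of_t(t)
            growth = u @ A @ u
            acc += growth * dt
            u = u + dt * (A @ u - growth * u)     # explicit Euler step (sketch only)
            u /= np.linalg.norm(u)                # re-normalize to control drift
            t += dt
        return acc / (t1 - t0)

    # steady saddle flow: stretching rate +0.7, compression -0.7
    A_saddle = lambda t: np.array([[0.7, 0.0], [0.0, -0.7]])
    ftle = ftle_single_otd(A_saddle, u0=[1.0, 2.0], t0=0.0, t1=20.0, dt=1e-3)
    print(ftle)   # approaches the exact leading exponent 0.7 as t1 grows
    ```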

  12. Optimized blind gamma-ray pulsar searches at fixed computing budget

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pletsch, Holger J.; Clark, Colin J., E-mail: holger.pletsch@aei.mpg.de

    The sensitivity of blind gamma-ray pulsar searches in multiple years' worth of photon data, such as from the Fermi LAT, is primarily limited by the finite computational resources available. Addressing this 'needle in a haystack' problem, here we present methods for optimizing blind searches to achieve the highest sensitivity at fixed computing cost. For both coherent and semicoherent methods, we consider their statistical properties and study their search sensitivity under computational constraints. The results validate a multistage strategy, where the first stage scans the entire parameter space using an efficient semicoherent method and promising candidates are then refined through a fully coherent analysis. We also find that for the first stage of a blind search, incoherent harmonic summing of powers is not worthwhile at fixed computing cost for typical gamma-ray pulsars. Further enhancing sensitivity, we present efficiency-improved interpolation techniques for the semicoherent search stage. Via realistic simulations we demonstrate that overall these optimizations can significantly lower the minimum detectable pulsed fraction by almost 50% at the same computational expense.

  13. Quantum Drude friction for time-dependent density functional theory

    NASA Astrophysics Data System (ADS)

    Neuhauser, Daniel; Lopata, Kenneth

    2008-10-01

    Friction is a desired property in quantum dynamics as it allows for localization, prevents backscattering, and is essential in the description of multistage transfer. Practical approaches for friction generally involve memory functionals or interactions with system baths. Here, we start by requiring that a friction term will always reduce the energy of the system; we show that this is automatically true once the Hamiltonian is augmented by a term of the form ∫a(q;n0)[∂j(q,t)/∂t]·J(q)dq, which includes the current operator times the derivative of its expectation value with respect to time, times a local coefficient; the local coefficient will be fitted to experiment, to more sophisticated theories of electron-electron interaction and interaction with nuclear vibrations and the nuclear background, or, alternately, will be artificially constructed to prevent backscattering of energy. We relate this term to previous results and to optimal control studies, and generalize it to further operators, i.e., any operator of the form ∫a(q;n0)[∂c(q,t)/∂t]·C(q)dq (or a discrete sum) will yield friction. Simulations of a small jellium cluster, both in the linear and highly nonlinear excitation regimes, demonstrate that the friction always reduces energy. The energy damping is essentially double exponential; the long-time decay is almost an order of magnitude slower than the rapid short-time decay. The friction term stabilizes the propagation (split-operator propagator here), therefore increasing the time step that can be used for convergence, i.e., reducing the overall computational cost. The local friction also allows the simulation of a metal cluster in a uniform jellium, as the energy loss in the excitation due to the underlying corrugation is accounted for by the friction. We also relate the friction to models of coupling to damped harmonic oscillators, which can be used for a more sophisticated description of the coupling, and to memory functionals. Our results open the way to a very simple finite-grid description of scattering and multistage conductance using time-dependent density functional theory away from the linear regime, just as absorbing potentials and self-energies are useful for noninteracting systems and leads.

  14. Inspection logistics planning for multi-stage production systems with applications to semiconductor fabrication lines

    NASA Astrophysics Data System (ADS)

    Chen, Kyle Dakai

    Since the market for semiconductor products has become more lucrative and competitive, research into improving yields for semiconductor fabrication lines has lately received a tremendous amount of attention. One of the most critical tasks in achieving such yield improvements is to plan the in-line inspection sampling efficiently so that any potential yield problems can be detected early and eliminated quickly. We formulate a multi-stage inspection planning model based on configurations in actual semiconductor fabrication lines, specifically taking into account both the capacity constraint and the congestion effects at the inspection station. We propose a new mixed First-Come-First-Serve (FCFS) and Last-Come-First-Serve (LCFS) discipline for serving the inspection samples to expedite the detection of potential yield problems. Employing this mixed FCFS and LCFS discipline, we derive approximate expressions for the queueing delays in yield problem detection time and develop near-optimal algorithms to obtain the inspection logistics planning policies. We also investigate the queueing performance with this mixed type of service discipline under different assumptions and configurations. In addition, we conduct numerical tests and generate managerial insights based on input data from actual semiconductor fabrication lines. To the best of our knowledge, this research is novel in developing, for the first time in the literature, near-optimal results for inspection logistics planning in multi-stage production systems with congestion effects explicitly considered.

  15. Generation Expansion Planning With Large Amounts of Wind Power via Decision-Dependent Stochastic Programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhan, Yiduo; Zheng, Qipeng P.; Wang, Jianhui

    Power generation expansion planning needs to deal with future uncertainties carefully, given that the invested generation assets will be in operation for a long time. Many stochastic programming models have been proposed to tackle this challenge. However, most previous works assume predetermined future uncertainties (i.e., fixed random outcomes with given probabilities). In several recent studies of generation assets' planning (e.g., thermal versus renewable), new findings show that the investment decisions could affect the future uncertainties as well. To this end, this paper proposes a multistage decision-dependent stochastic optimization model for long-term large-scale generation expansion planning, where large amounts of wind power are involved. In the decision-dependent model, the future uncertainties are not only affecting but also affected by the current decisions. In particular, the probability distribution function is determined by not only input parameters but also decision variables. To deal with the nonlinear constraints in our model, a quasi-exact solution approach is then introduced to reformulate the multistage stochastic investment model to a mixed-integer linear programming model. The wind penetration, investment decisions, and the optimality of the decision-dependent model are evaluated in a series of multistage case studies. The results show that the proposed decision-dependent model provides effective optimization solutions for long-term generation expansion planning.

  16. A weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1989-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  17. A weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1990-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  18. Weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1991-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  19. Optimization Strategies for Single-Stage, Multi-Stage and Continuous ADRs

    NASA Technical Reports Server (NTRS)

    Shirron, Peter J.

    2014-01-01

    Adiabatic Demagnetization Refrigerators (ADR) have many advantages that are prompting a resurgence in their use in spaceflight and laboratory applications. They are solid-state coolers capable of very high efficiency and very wide operating range. However, their low energy storage density translates to larger mass for a given cooling capacity than is possible with other refrigeration techniques. The interplay between refrigerant mass and other parameters such as magnetic field and heat transfer points in multi-stage ADRs gives rise to a wide parameter space for optimization. This paper first presents optimization strategies for single ADR stages, focusing primarily on obtaining the largest cooling capacity per stage mass, then discusses the optimization of multi-stage and continuous ADRs in the context of the coordinated heat transfer that must occur between stages. The goal for the latter is usually to obtain the largest cooling power per mass or volume, but there can also be many secondary objectives, such as limiting instantaneous heat rejection rates and producing intermediate temperatures for cooling of other instrument components.

  20. Adaptive disturbance compensation finite control set optimal control for PMSM systems based on sliding mode extended state observer

    NASA Astrophysics Data System (ADS)

    Wu, Yun-jie; Li, Guo-fei

    2018-01-01

    Based on the sliding mode extended state observer (SMESO) technique, an adaptive disturbance compensation finite control set optimal control (FCS-OC) strategy is proposed for a permanent magnet synchronous motor (PMSM) system driven by a voltage source inverter (VSI). To improve the robustness of the finite control set optimal control strategy, an SMESO is proposed to estimate the output-effect disturbance. The estimated value is fed back to the finite control set optimal controller to implement disturbance compensation. Theoretical analysis indicates that the designed SMESO converges in finite time. The simulation results illustrate that the proposed adaptive disturbance compensation FCS-OC exhibits better dynamic response behavior in the presence of disturbances.
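
    A minimal sketch of the extended-state-observer idea behind the SMESO, for an assumed first-order plant with a lumped disturbance: the observer carries the disturbance as an extra state and the controller cancels the estimate. A linear correction term is used here for simplicity; a sliding-mode ESO replaces it with a sign-based term whose gains give finite-time convergence. All numbers are placeholders, not PMSM/VSI parameters.

    ```python
    import numpy as np

    # plant: dy/dt = a*y + b*u + d(t), with d(t) an unknown lumped disturbance
    a, b = -2.0, 5.0
    d = lambda t: 1.0 + 0.5 * np.sin(3.0 * t)

    # extended state observer: z1 estimates y, z2 estimates d
    l1, l2 = 60.0, 900.0            # observer gains (placeholders)
    dt, T = 1e-4, 3.0
    y, z1, z2, u = 0.0, 0.0, 0.0, 0.0
    y_ref = 1.0

    for k in range(int(T / dt)):
        t = k * dt
        # disturbance-compensating control: cancel the estimated disturbance z2
        u = (-a * z1 - z2 + 10.0 * (y_ref - z1)) / b
        # plant update (explicit Euler, small step)
        y += dt * (a * y + b * u + d(t))
        # observer update; a sliding-mode ESO would use sign(y - z1) terms here
        e = y - z1
        z1 += dt * (a * z1 + b * u + z2 + l1 * e)
        z2 += dt * (l2 * e)

    print(f"y = {y:.3f}, disturbance estimate = {z2:.3f}, true d = {d(T):.3f}")
    ```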

  1. From Finite Time to Finite Physical Dimensions Thermodynamics: The Carnot Engine and Onsager's Relations Revisited

    NASA Astrophysics Data System (ADS)

    Feidt, Michel; Costea, Monica

    2018-04-01

    Many works have been devoted to finite time thermodynamics since the Curzon and Ahlborn [1] contribution, which is generally considered as its origin. Nevertheless, earlier works in this domain have since been brought to light [2], [3], and recently, results of an attempt to correlate Finite Time Thermodynamics with Linear Irreversible Thermodynamics according to Onsager's theory were reported [4]. The aim of the present paper is to extend and improve the approach to thermodynamic optimization of generic objective functions of a Carnot engine in the linear response regime presented in [4]. The case study of the Carnot engine is revisited under the steady-state hypothesis, when non-adiabaticity of the system is considered and heat loss is accounted for by an overall heat leak between the engine heat reservoirs. The optimization is focused on the main objective functions connected to engineering conditions, namely maximum efficiency or power output, apart from the one relative to entropy, which is more fundamental. Results given in reference [4] relative to maximum power output and minimum entropy production as objective functions are reconsidered and clarified, and the change from finite time to finite physical dimensions is shown to be effected through the heat flow rate at the source. Our modeling has led to new results for the Carnot engine optimization and shows that the primary interest for an engineer is mainly connected to what we call Finite Physical Dimensions Optimal Thermodynamics.
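
    For reference, the classical finite-time result alluded to at the outset is recalled below, under the textbook assumptions of an endoreversible Carnot engine with linear (Newtonian) heat-transfer laws at both reservoirs and no heat leak; the paper's model, which adds a bypass heat leak and works in finite physical dimensions, modifies this picture.

    ```latex
    % Endoreversible Carnot engine with linear heat-transfer laws,
    %   \dot{Q}_H = K_H (T_H - T_h), \qquad \dot{Q}_C = K_C (T_c - T_C),
    % and internal reversibility \dot{Q}_H / T_h = \dot{Q}_C / T_c.
    % Maximizing the power P = \dot{Q}_H - \dot{Q}_C over the internal
    % temperatures T_h, T_c yields the Curzon-Ahlborn efficiency at maximum power
    \eta_{\mathrm{MP}} \;=\; 1 - \sqrt{\frac{T_C}{T_H}} \, ,
    % which lies below the Carnot limit \eta_C = 1 - T_C / T_H .
    ```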

  2. A WENO-Limited, ADER-DT, Finite-Volume Scheme for Efficient, Robust, and Communication-Avoiding Multi-Dimensional Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Norman, Matthew R

    2014-01-01

    The novel ADER-DT time discretization is applied to two-dimensional transport in a quadrature-free, WENO- and FCT-limited, Finite-Volume context. Emphasis is placed on (1) the serial and parallel computational properties of ADER-DT and this framework and (2) the flexibility of ADER-DT and this framework in efficiently balancing accuracy with other constraints important to transport applications. This study demonstrates a range of choices for the user when approaching their specific application while maintaining good parallel properties. In this method, genuine multi-dimensionality, single-step and single-stage time stepping, strict positivity, and a flexible range of limiting are all achieved with only one parallel synchronization and data exchange per time step. In terms of parallel data transfers per simulated time interval, this improves upon multi-stage time stepping and post-hoc filtering techniques such as hyperdiffusion. This method is evaluated with standard transport test cases over a range of limiting options to demonstrate quantitatively and qualitatively what a user should expect when employing this method in their application.

  3. CosmosDG: An hp -adaptive Discontinuous Galerkin Code for Hyper-resolved Relativistic MHD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anninos, Peter; Lau, Cheuk; Bryant, Colton

    We have extended Cosmos++, a multidimensional unstructured adaptive mesh code for solving the covariant Newtonian and general relativistic radiation magnetohydrodynamic (MHD) equations, to accommodate both discrete finite volume and arbitrarily high-order finite element structures. The new finite element implementation, called CosmosDG, is based on a discontinuous Galerkin (DG) formulation, using both entropy-based artificial viscosity and slope limiting procedures for the regularization of shocks. High-order multistage forward Euler and strong-stability preserving Runge–Kutta time integration options complement high-order spatial discretization. We have also added flexibility in the code infrastructure allowing for both adaptive mesh and adaptive basis order refinement to be performed separately or simultaneously in a local (cell-by-cell) manner. We discuss in this report the DG formulation and present tests demonstrating the robustness, accuracy, and convergence of our numerical methods applied to special and general relativistic MHD, although we note that an equivalent capability currently also exists in CosmosDG for Newtonian systems.

  4. CosmosDG: An hp-adaptive Discontinuous Galerkin Code for Hyper-resolved Relativistic MHD

    NASA Astrophysics Data System (ADS)

    Anninos, Peter; Bryant, Colton; Fragile, P. Chris; Holgado, A. Miguel; Lau, Cheuk; Nemergut, Daniel

    2017-08-01

    We have extended Cosmos++, a multidimensional unstructured adaptive mesh code for solving the covariant Newtonian and general relativistic radiation magnetohydrodynamic (MHD) equations, to accommodate both discrete finite volume and arbitrarily high-order finite element structures. The new finite element implementation, called CosmosDG, is based on a discontinuous Galerkin (DG) formulation, using both entropy-based artificial viscosity and slope limiting procedures for the regularization of shocks. High-order multistage forward Euler and strong-stability preserving Runge-Kutta time integration options complement high-order spatial discretization. We have also added flexibility in the code infrastructure allowing for both adaptive mesh and adaptive basis order refinement to be performed separately or simultaneously in a local (cell-by-cell) manner. We discuss in this report the DG formulation and present tests demonstrating the robustness, accuracy, and convergence of our numerical methods applied to special and general relativistic MHD, although we note that an equivalent capability currently also exists in CosmosDG for Newtonian systems.

  5. A novel recurrent neural network with finite-time convergence for linear programming.

    PubMed

    Liu, Qingshan; Cao, Jinde; Chen, Guanrong

    2010-11-01

    In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.

  6. Investigation of supersonic chemically reacting and radiating channel flow

    NASA Technical Reports Server (NTRS)

    Mani, Mortaza; Tiwari, Surendra N.

    1988-01-01

    The 2-D time-dependent Navier-Stokes equations are used to investigate supersonic flows undergoing finite rate chemical reaction and radiation interaction for a hydrogen-air system. The explicit multistage finite volume technique of Jameson is used to advance the governing equations in time until convergence is achieved. The chemistry source term in the species equation is treated implicitly to alleviate the stiffness associated with fast reactions. The multidimensional radiative transfer equations for a nongray model are provided for a general configuration and then reduced for a planar geometry. Both pseudo-gray and nongray models are used to represent the absorption-emission characteristics of the participating species. The supersonic inviscid and viscous, nonreacting flows are solved by employing the finite volume technique of Jameson and the unsplit finite difference scheme of MacCormack. The specific problem considered is the flow in a channel with a 10 deg compression-expansion ramp. The calculated results are compared with those of an upwind scheme. The problems of chemically reacting and radiating flows are solved for the flow of premixed hydrogen-air through a channel with parallel boundaries, and a channel with a compression corner. Results obtained for specific conditions indicate that the radiative interaction can have a significant influence on the entire flow field.
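    As a side illustration (not taken from the record above), an explicit multistage scheme of the Jameson type advances du/dt = R(u) through a sequence of weighted stages that all restart from the time-level value; a minimal Python sketch on a scalar model equation follows, with stage coefficients chosen only for illustration:

        import numpy as np

        def multistage_step(u, dt, residual, alphas=(0.25, 1/3, 0.5, 1.0)):
            """One Jameson-style multistage step for du/dt = residual(u).
            Each stage restarts from the time-level value and evaluates the
            residual at the previous stage (illustrative coefficients)."""
            u0, uk = u, u
            for a in alphas:
                uk = u0 + a * dt * residual(uk)
            return uk

        # Model problem du/dt = -u, exact solution exp(-t).
        u, dt = 1.0, 0.1
        for _ in range(10):
            u = multistage_step(u, dt, lambda v: -v)
        print(u, np.exp(-1.0))  # multistage result vs. exact value at t = 1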

  7. Finite-horizon control-constrained nonlinear optimal control using single network adaptive critics.

    PubMed

    Heydari, Ali; Balakrishnan, Sivasubramanya N

    2013-01-01

    To synthesize fixed-final-time control-constrained optimal controllers for discrete-time nonlinear control-affine systems, a single neural network (NN)-based controller called the Finite-horizon Single Network Adaptive Critic is developed in this paper. Inputs to the NN are the current system states and the time-to-go, and the network outputs are the costates that are used to compute optimal feedback control. Control constraints are handled through a nonquadratic cost function. Convergence proofs of: 1) the reinforcement learning-based training method to the optimal solution; 2) the training error; and 3) the network weights are provided. The resulting controller is shown to solve the associated time-varying Hamilton-Jacobi-Bellman equation and provide the fixed-final-time optimal solution. Performance of the new synthesis technique is demonstrated through different examples including an attitude control problem wherein a rigid spacecraft performs a finite-time attitude maneuver subject to control bounds. The new formulation has great potential for implementation since it consists of only one NN with a single set of weights and it provides comprehensive feedback solutions online, though it is trained offline.

  8. Linear quadratic tracking problems in Hilbert space - Application to optimal active noise suppression

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Silcox, R. J.; Keeling, S. L.; Wang, C.

    1989-01-01

    A unified treatment of the linear quadratic tracking (LQT) problem, in which a control system's dynamics are modeled by a linear evolution equation with a nonhomogeneous component that is linearly dependent on the control function u, is presented; the treatment proceeds from the theoretical formulation to a numerical approximation framework. Attention is given to two categories of LQT problems in an infinite time interval: the finite energy and the finite average energy. The behavior of the optimal solution for finite time-interval problems as the length of the interval tends to infinity is discussed. Also presented are the formulations and properties of LQT problems in a finite time interval.

  9. Finite-time convergent recurrent neural network with a hard-limiting activation function for constrained optimization with piecewise-linear objective functions.

    PubMed

    Liu, Qingshan; Wang, Jun

    2011-04-01

    This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.

  10. Optimal perturbations for nonlinear systems using graph-based optimal transport

    NASA Astrophysics Data System (ADS)

    Grover, Piyush; Elamvazhuthi, Karthik

    2018-06-01

    We formulate and solve a class of finite-time transport and mixing problems in the set-oriented framework. The aim is to obtain optimal discrete-time perturbations in nonlinear dynamical systems to transport a specified initial measure on the phase space to a final measure in finite time. The measure is propagated under system dynamics in between the perturbations via the associated transfer operator. Each perturbation is described by a deterministic map in the measure space that implements a version of Monge-Kantorovich optimal transport with quadratic cost. Hence, the optimal solution minimizes a sum of quadratic costs on phase space transport due to the perturbations applied at specified times. The action of the transport map is approximated by a continuous pseudo-time flow on a graph, resulting in a tractable convex optimization problem. This problem is solved via state-of-the-art solvers to global optimality. We apply this algorithm to a problem of transport between measures supported on two disjoint almost-invariant sets in a chaotic fluid system, and to a finite-time optimal mixing problem by choosing the final measure to be uniform. In both cases, the optimal perturbations are found to exploit the phase space structures, such as lobe dynamics, leading to efficient global transport. As the time-horizon of the problem is increased, the optimal perturbations become increasingly localized. Hence, by combining the transfer operator approach with ideas from the theory of optimal mass transportation, we obtain a discrete-time graph-based algorithm for optimal transport and mixing in nonlinear systems.

  11. Speech coding at low to medium bit rates

    NASA Astrophysics Data System (ADS)

    Leblanc, Wilfred Paul

    1992-09-01

    Improved search techniques coupled with improved codebook design methodologies are proposed to improve the performance of conventional code-excited linear predictive coders for speech. Improved methods for quantizing the short term filter are developed by employing a tree search algorithm and joint codebook design to multistage vector quantization. Joint codebook design procedures are developed to design locally optimal multistage codebooks. Weighting during centroid computation is introduced to improve the outlier performance of the multistage vector quantizer. Multistage vector quantization is shown to be robust both to variations in input characteristics and to channel errors. Spectral distortions of about 1 dB are obtained at rates of 22-28 bits/frame. Structured codebook design procedures for excitation in code-excited linear predictive coders are compared to general codebook design procedures. Little is lost using significant structure in the excitation codebooks while greatly reducing the search complexity. Sparse multistage configurations are proposed for reducing computational complexity and memory size. Improved search procedures are applied to code-excited linear prediction which attempt joint optimization of the short term filter, the adaptive codebook, and the excitation. Improvements in signal to noise ratio of 1-2 dB are realized in practice.
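    To make the multistage (residual) quantization structure concrete, a schematic Python sketch follows; the codebooks here are random stand-ins rather than the jointly designed ones discussed above, so it only illustrates the encode/decode structure:

        import numpy as np

        rng = np.random.default_rng(0)

        def nearest(codebook, x):
            """Index of the codevector closest to x (Euclidean distance)."""
            return int(np.argmin(np.sum((codebook - x) ** 2, axis=1)))

        dim, size1, size2 = 8, 16, 16           # vector dimension and stage codebook sizes
        stage1 = rng.normal(size=(size1, dim))  # placeholder codebooks; a real coder
        stage2 = rng.normal(size=(size2, dim))  # would design these jointly on training data

        x = rng.normal(size=dim)                # input vector to quantize

        i1 = nearest(stage1, x)                 # stage 1: quantize the vector
        r = x - stage1[i1]                      # residual passed to the next stage
        i2 = nearest(stage2, r)                 # stage 2: quantize the residual

        x_hat = stage1[i1] + stage2[i2]         # decoder sums the stage codevectors
        print(i1, i2, np.linalg.norm(x - x_hat))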

  12. Numerical approximation for the infinite-dimensional discrete-time optimal linear-quadratic regulator problem

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Rosen, I. G.

    1986-01-01

    An abstract approximation framework is developed for the finite and infinite time horizon discrete-time linear-quadratic regulator problem for systems whose state dynamics are described by a linear semigroup of operators on an infinite dimensional Hilbert space. The schemes included in the framework yield finite dimensional approximations to the linear state feedback gains which determine the optimal control law. Convergence arguments are given. Examples involving hereditary and parabolic systems and the vibration of a flexible beam are considered. Spline-based finite element schemes for these classes of problems, together with numerical results, are presented and discussed.
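    For a purely finite-dimensional illustration of the discrete-time finite-horizon LQR solution that such approximation schemes target, here is a short Python sketch of the standard backward Riccati recursion; the matrices are generic placeholders, not the hereditary or beam examples of the record:

        import numpy as np

        def finite_horizon_lqr(A, B, Q, R, Qf, N):
            """Backward Riccati recursion for min sum x'Qx + u'Ru (+ terminal x'Qf x)
            subject to x_{k+1} = A x_k + B u_k; returns the time-varying gains K_k."""
            P = Qf
            gains = []
            for _ in range(N):
                K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
                P = Q + A.T @ P @ (A - B @ K)
                gains.append(K)
            return gains[::-1]  # gains[k] gives the feedback u_k = -gains[k] @ x_k

        A = np.array([[1.0, 0.1], [0.0, 1.0]])
        B = np.array([[0.0], [0.1]])
        Q = np.eye(2); R = np.array([[1.0]]); Qf = 10 * np.eye(2)
        K = finite_horizon_lqr(A, B, Q, R, Qf, N=50)
        print(K[0])  # optimal feedback gain at the initial time step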

  13. A Simulated Annealing Algorithm for the Optimization of Multistage Depressed Collector Efficiency

    NASA Technical Reports Server (NTRS)

    Vaden, Karl R.; Wilson, Jeffrey D.; Bulson, Brian A.

    2002-01-01

    The microwave traveling wave tube amplifier (TWTA) is widely used as a high-power transmitting source for space and airborne communications. One critical factor in designing a TWTA is the overall efficiency. However, overall efficiency is highly dependent upon collector efficiency; so collector design is critical to the performance of a TWTA. Therefore, NASA Glenn Research Center has developed an optimization algorithm based on Simulated Annealing to quickly design highly efficient multi-stage depressed collectors (MDC).
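    A generic simulated annealing loop of the kind underlying such an optimizer is sketched below in Python; the quadratic toy objective, neighborhood move, and cooling schedule are placeholders, not the collector-efficiency model used at NASA Glenn:

        import math
        import random

        def simulated_annealing(cost, neighbor, x0, t0=1.0, t_min=1e-3, alpha=0.95, iters=100):
            """Minimize cost() by accepting uphill moves with probability exp(-dE/T)."""
            x, best = x0, x0
            t = t0
            while t > t_min:
                for _ in range(iters):
                    cand = neighbor(x)
                    d = cost(cand) - cost(x)
                    if d < 0 or random.random() < math.exp(-d / t):
                        x = cand
                        if cost(x) < cost(best):
                            best = x
                t *= alpha          # geometric cooling schedule (illustrative)
            return best

        # Toy problem: minimize a 1-D quadratic with a random-walk neighborhood move.
        best = simulated_annealing(cost=lambda v: (v - 3.0) ** 2,
                                   neighbor=lambda v: v + random.uniform(-0.5, 0.5),
                                   x0=0.0)
        print(best)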

  14. Finite burn maneuver modeling for a generalized spacecraft trajectory design and optimization system.

    PubMed

    Ocampo, Cesar

    2004-05-01

    The modeling, design, and optimization of finite burn maneuvers for a generalized trajectory design and optimization system is presented. A generalized trajectory design and optimization system is a system that uses a single unified framework that facilitates the modeling and optimization of complex spacecraft trajectories that may operate in complex gravitational force fields, use multiple propulsion systems, and involve multiple spacecraft. The modeling and optimization issues associated with the use of controlled engine burn maneuvers of finite thrust magnitude and duration are presented in the context of designing and optimizing a wide class of finite thrust trajectories. Optimal control theory is used to examine the optimization of these maneuvers in arbitrary force fields that are generally position-, velocity-, mass-, and time-dependent. The associated numerical methods used to obtain these solutions involve either the solution of a system of nonlinear equations, an explicit parameter optimization method, or a hybrid parameter optimization method that combines certain aspects of both. The theoretical and numerical methods presented here have been implemented in copernicus, a prototype trajectory design and optimization system under development at the University of Texas at Austin.

  15. A generalization of Fatou's lemma for extended real-valued functions on σ-finite measure spaces: with an application to infinite-horizon optimization in discrete time.

    PubMed

    Kamihigashi, Takashi

    2017-01-01

    Given a sequence [Formula: see text] of measurable functions on a σ-finite measure space such that the integral of each [Formula: see text] as well as that of [Formula: see text] exists in [Formula: see text], we provide a sufficient condition for the following inequality to hold: [Formula: see text] Our condition is considerably weaker than sufficient conditions known in the literature such as uniform integrability (in the case of a finite measure) and equi-integrability. As an application, we obtain a new result on the existence of an optimal path for deterministic infinite-horizon optimization problems in discrete time.

  16. Optimal Testlet Pool Assembly for Multistage Testing Designs

    ERIC Educational Resources Information Center

    Ariel, Adelaide; Veldkamp, Bernard P.; Breithaupt, Krista

    2006-01-01

    Computerized multistage testing (MST) designs require sets of test questions (testlets) to be assembled to meet strict, often competing criteria. Rules that govern testlet assembly may dictate the number of questions on a particular subject or may describe desirable statistical properties for the test, such as measurement precision. In an MST…

  17. Optimal protocols for slowly driven quantum systems.

    PubMed

    Zulkowski, Patrick R; DeWeese, Michael R

    2015-09-01

    The design of efficient quantum information processing will rely on optimal nonequilibrium transitions of driven quantum systems. Building on a recently developed geometric framework for computing optimal protocols for classical systems driven in finite time, we construct a general framework for optimizing the average information entropy for driven quantum systems. Geodesics on the parameter manifold endowed with a positive semidefinite metric correspond to protocols that minimize the average information entropy production in finite time. We use this framework to explicitly compute the optimal entropy production for a simple two-state quantum system coupled to a heat bath of bosonic oscillators, which has applications to quantum annealing.

  18. Optimal Consumption in a Brownian Model with Absorption and Finite Time Horizon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grandits, Peter, E-mail: pgrand@fam.tuwien.ac.at

    2013-04-15

    We construct $\varepsilon$-optimal strategies for the following control problem: maximize $E\!\left[\int_{[0,\tau)} e^{-\beta s}\, dC_s + e^{-\beta\tau} X_\tau\right]$, where $X_t = x + \mu t + \sigma W_t - C_t$, $\tau \equiv \inf\{t > 0 \mid X_t = 0\} \wedge T$, $T > 0$ is a fixed finite time horizon, $W_t$ is standard Brownian motion, $\mu$, $\sigma$ are constants, and $C_t$ describes accumulated consumption until time $t$. It is shown that $\varepsilon$-optimal strategies are given by barrier strategies with time-dependent barriers.

  19. A Top-Down Approach to Designing the Computerized Adaptive Multistage Test

    ERIC Educational Resources Information Center

    Luo, Xiao; Kim, Doyoung

    2018-01-01

    The top-down approach to designing a multistage test is relatively understudied in the literature and underused in research and practice. This study introduced a route-based top-down design approach that directly sets design parameters at the test level and utilizes the advanced automated test assembly algorithm seeking global optimality. The…

  20. Multi-stage flash degaser

    DOEpatents

    Rapier, P.M.

    1980-06-26

    A multi-stage flash degaser is incorporated in an energy conversion system having a direct-contact, binary-fluid heat exchanger to remove essentially all of the noncondensable gases from geothermal brine ahead of the direct-contact binary-fluid heat exchanger in order that the heat exchanger and a turbine and condenser of the system can operate at optimal efficiency.

  1. A weak Hamiltonian finite element method for optimal guidance of an advanced launch vehicle

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Calise, Anthony J.; Bless, Robert R.; Leung, Martin

    1989-01-01

    A temporal finite-element method based on a mixed form of the Hamiltonian weak principle is presented for optimal control problems. The mixed form of this principle contains both states and costates as primary variables, which are expanded in terms of nodal values and simple shape functions. Time derivatives of the states and costates do not appear in the governing variational equation; the only quantities whose time derivatives appear therein are virtual states and virtual costates. Numerical results are presented for an elementary trajectory optimization problem; they show very good agreement with the exact solution along with excellent computational efficiency and self-starting capability. The feasibility of this approach for real-time guidance applications is evaluated. A simplified model for an advanced launch vehicle application that is suitable for finite-element solution is presented.

  2. Conditional Entropy-Constrained Residual VQ with Application to Image Coding

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1996-01-01

    This paper introduces an extension of entropy-constrained residual vector quantization (VQ) where intervector dependencies are exploited. The method, which we call conditional entropy-constrained residual VQ, employs a high-order entropy conditioning strategy that captures local information in the neighboring vectors. When applied to coding images, the proposed method is shown to achieve better rate-distortion performance than that of entropy-constrained residual vector quantization with less computational complexity and lower memory requirements. Moreover, it can be designed to support progressive transmission in a natural way. It is also shown to outperform some of the best predictive and finite-state VQ techniques reported in the literature. This is due partly to the joint optimization between the residual vector quantizer and a high-order conditional entropy coder as well as the efficiency of the multistage residual VQ structure and the dynamic nature of the prediction.

  3. A finite element based method for solution of optimal control problems

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Hodges, Dewey H.; Calise, Anthony J.

    1989-01-01

    A temporal finite element based on a mixed form of the Hamiltonian weak principle is presented for optimal control problems. The mixed form of this principle contains both states and costates as primary variables that are expanded in terms of elemental values and simple shape functions. Unlike other variational approaches to optimal control problems, however, time derivatives of the states and costates do not appear in the governing variational equation. Instead, the only quantities whose time derivatives appear therein are virtual states and virtual costates. Also noteworthy among characteristics of the finite element formulation is the fact that in the algebraic equations which contain costates, they appear linearly. Thus, the remaining equations can be solved iteratively without initial guesses for the costates; this reduces the size of the problem by about a factor of two. Numerical results are presented herein for an elementary trajectory optimization problem which show very good agreement with the exact solution along with excellent computational efficiency and self-starting capability. The goal is to evaluate the feasibility of this approach for real-time guidance applications. To this end, a simplified two-stage, four-state model for an advanced launch vehicle application is presented which is suitable for finite element solution.

  4. Finite-dimensional approximation for optimal fixed-order compensation of distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Bernstein, Dennis S.; Rosen, I. G.

    1988-01-01

    In controlling distributed parameter systems it is often desirable to obtain low-order, finite-dimensional controllers in order to minimize real-time computational requirements. Standard approaches to this problem employ model/controller reduction techniques in conjunction with LQG theory. In this paper we consider the finite-dimensional approximation of the infinite-dimensional Bernstein/Hyland optimal projection theory. This approach yields fixed-finite-order controllers which are optimal with respect to high-order, approximating, finite-dimensional plant models. The technique is illustrated by computing a sequence of first-order controllers for one-dimensional, single-input/single-output, parabolic (heat/diffusion) and hereditary systems using spline-based, Ritz-Galerkin, finite element approximation. Numerical studies indicate convergence of the feedback gains with less than 2 percent performance degradation over full-order LQG controllers for the parabolic system and 10 percent degradation for the hereditary system.

  5. Bioinspired Concepts: Unified Theory for Complex Biological and Engineering Systems

    DTIC Science & Technology

    2006-01-01

    i.e., data flows of finite size arrive at the system randomly. For such a system, we propose a modified dual scheduling algorithm that stabilizes ...demon. We compute the efficiency of the controller over finite and infinite time intervals, and since the controller is optimal, this yields hard limits...and highly optimized tolerance. PNAS, 102, 2005. 51. G. N. Nair and R. J. Evans. Stabilizability of stochastic linear systems with finite feedback

  6. Development of an hp-version finite element method for computational optimal control

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Warner, Michael S.

    1993-01-01

    The purpose of this research effort is to develop a means to use, and to ultimately implement, hp-version finite elements in the numerical solution of optimal control problems. The hybrid MACSYMA/FORTRAN code GENCODE was developed which utilized h-version finite elements to successfully approximate solutions to a wide class of optimal control problems. In that code the means for improvement of the solution was the refinement of the time-discretization mesh. With the extension to hp-version finite elements, the degrees of freedom include both nodal values and extra interior values associated with the unknown states, co-states, and controls, the number of which depends on the order of the shape functions in each element.

  7. Performance of discrete heat engines and heat pumps in finite time

    PubMed

    Feldmann; Kosloff

    2000-05-01

    The performance in finite time of a discrete heat engine with internal friction is analyzed. The working fluid of the engine is composed of an ensemble of noninteracting two level systems. External work is applied by changing the external field and thus the internal energy levels. The friction induces a minimal cycle time. The power output of the engine is optimized with respect to time allocation between the contact time with the hot and cold baths as well as the adiabats. The engine's performance is also optimized with respect to the external fields. By reversing the cycle of operation a heat pump is constructed. The performance of the engine as a heat pump is also optimized. By varying the time allocation between the adiabats and the contact time with the reservoir a universal behavior can be identified. The optimal performance of the engine when the cold bath is approaching absolute zero is studied. It is found that the optimal cooling rate converges linearly to zero when the temperature approaches absolute zero.

  8. Modifications of ORNL's computer programs MSF-21 and VTE-21 for the evaluation and rapid optimization of multistage flash and vertical tube evaporators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glueckstern, P.; Wilson, J.V.; Reed, S.A.

    1976-06-01

    Design and cost modifications were made to ORNL's Computer Programs MSF-21 and VTE-21 originally developed for the rapid calculation and design optimization of multistage flash (MSF) and multieffect vertical tube evaporator (VTE) desalination plants. The modifications include additional design options to make possible the evaluation of desalting plants based on current technology (the original programs were based on conceptual designs applying advanced and not yet proven technological developments and design features) and new materials and equipment costs updated to mid-1975.

  9. Error analysis of finite element method for Poisson–Nernst–Planck equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yuzhou; Sun, Pengtao; Zheng, Bin

    A priori error estimates of the finite element method for time-dependent Poisson-Nernst-Planck equations are studied in this work. We obtain optimal error estimates in the L^∞(H^1) and L^2(H^1) norms and suboptimal error estimates in the L^∞(L^2) norm with linear elements, and optimal error estimates in the L^∞(L^2) norm with quadratic or higher-order elements, for both semi- and fully discrete finite element approximations. Numerical experiments are also given to validate the theoretical results.

  10. Multi-stage flash degaser

    DOEpatents

    Rapier, Pascal M.

    1982-01-01

    A multi-stage flash degaser (18) is incorporated in an energy conversion system (10) having a direct-contact, binary-fluid heat exchanger to remove essentially all of the noncondensable gases from geothermal brine ahead of the direct-contact binary-fluid heat exchanger (22) in order that the heat exchanger (22) and a turbine (48) and condenser (32) of the system (10) can operate at optimal efficiency.

  11. A hybrid symbolic/finite-element algorithm for solving nonlinear optimal control problems

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Hodges, Dewey H.

    1991-01-01

    The general code described is capable of solving difficult nonlinear optimal control problems by using finite elements and a symbolic manipulator. Quick and accurate solutions are obtained with a minimum of user interaction. Since no user programming is required for most problems, there are tremendous savings to be gained in terms of time and money.

  12. Numerical integration and optimization of motions for multibody dynamic systems

    NASA Astrophysics Data System (ADS)

    Aguilar Mayans, Joan

    This thesis considers the optimization and simulation of motions involving rigid body systems. It does so in three distinct parts, with the following topics: optimization and analysis of human high-diving motions, efficient numerical integration of rigid body dynamics with contacts, and motion optimization of a two-link robot arm using Finite-Time Lyapunov Analysis. The first part introduces the concept of eigenpostures, which we use to simulate and analyze human high-diving motions. Eigenpostures are used in two different ways: first, to reduce the complexity of the optimal control problem that we solve to obtain such motions, and second, to generate an eigenposture space to which we map existing real world motions to better analyze them. The benefits of using eigenpostures are showcased through different examples. The second part reviews an extensive list of integration algorithms used for the integration of rigid body dynamics. We analyze the accuracy and stability of the different integrators in the three-dimensional space and the rotation space SO(3). Integrators with an accuracy higher than first order perform more efficiently than integrators with first order accuracy, even in the presence of contacts. The third part uses Finite-time Lyapunov Analysis to optimize motions for a two-link robot arm. Finite-Time Lyapunov Analysis diagnoses the presence of time-scale separation in the dynamics of the optimized motion and provides the information and methodology for obtaining an accurate approximation to the optimal solution, avoiding the complications that timescale separation causes for alternative solution methods.

  13. Finite-time H∞ control for a class of discrete-time switched time-delay systems with quantized feedback

    NASA Astrophysics Data System (ADS)

    Song, Haiyu; Yu, Li; Zhang, Dan; Zhang, Wen-An

    2012-12-01

    This paper is concerned with the finite-time quantized H∞ control problem for a class of discrete-time switched time-delay systems with time-varying exogenous disturbances. By using the sector bound approach and the average dwell time method, sufficient conditions are derived for the switched system to be finite-time bounded and ensure a prescribed H∞ disturbance attenuation level, and a mode-dependent quantized state feedback controller is designed by solving an optimization problem. Two illustrative examples are provided to demonstrate the effectiveness of the proposed theoretical results.

  14. Analysis of multigrid methods on massively parallel computers: Architectural implications

    NASA Technical Reports Server (NTRS)

    Matheson, Lesley R.; Tarjan, Robert E.

    1993-01-01

    We study the potential performance of multigrid algorithms running on massively parallel computers with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain parallel version of the standard V cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block structured grids of size 10^6 and 10^9, respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message passing, 'workstation size' processors executing in an SPMD mode. The first model accomplishes interprocessor communications through a multistage permutation network. The communication cost is a logarithmic function which is similar to the costs in a variety of different topologies. The second model allows single stage communication costs only. Both models were designed with information provided by machine developers and utilize implementation derived parameters. With the medium grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests an efficient implementation requires the machine to support the efficient transmission of long messages, (up to 1000 words) or the high initiation cost of a communication must be significantly reduced through an alternative optimization technique. Furthermore, with variable length message capability, our analysis suggests the low diameter multistage networks provide little or no advantage over a simple single stage communications network.
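    The trade-off described here, between many short messages and fewer long ones under a high fixed start-up cost, can be illustrated with a toy latency/bandwidth model in Python; the alpha and beta values are invented for illustration, not taken from the machines analyzed above:

        # Toy communication-cost model: sending a message of L words costs
        # alpha (fixed start-up latency) + beta * L (per-word transfer time).
        alpha, beta = 100.0, 1.0          # illustrative machine parameters (time units)

        def cost(num_messages, words_each):
            return num_messages * (alpha + beta * words_each)

        total_words = 1000
        print(cost(1, total_words))       # one aggregated long message
        print(cost(total_words, 1))       # the same data as 1000 single-word messages
        # With a large alpha, aggregating exchanges into long messages is far cheaper,
        # which is the behavior the analysis above argues the hardware must support.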

  15. Finite-size effect on optimal efficiency of heat engines.

    PubMed

    Tajima, Hiroyasu; Hayashi, Masahito

    2017-07-01

    The optimal efficiency of quantum (or classical) heat engines whose heat baths are n-particle systems is given by the strong large deviation. We give the optimal work extraction process as a concrete energy-preserving unitary time evolution among the heat baths and the work storage. We show that our optimal work extraction turns the disordered energy of the heat baths to the ordered energy of the work storage, by evaluating the ratio of the entropy difference to the energy difference in the heat baths and the work storage, respectively. By comparing the statistical mechanical optimal efficiency with the macroscopic thermodynamic bound, we evaluate the accuracy of the macroscopic thermodynamics with finite-size heat baths from the statistical mechanical viewpoint. We also evaluate the quantum coherence effect on the optimal efficiency of the cycle processes without restricting their cycle time by comparing the classical and quantum optimal efficiencies.

  16. System, methods and apparatus for program optimization for multi-threaded processor architectures

    DOEpatents

    Bastoul, Cedric; Lethin, Richard A; Leung, Allen K; Meister, Benoit J; Szilagyi, Peter; Vasilache, Nicolas T; Wohlford, David E

    2015-01-06

    Methods, apparatus and computer software product for source code optimization are provided. In an exemplary embodiment, a first custom computing apparatus is used to optimize the execution of source code on a second computing apparatus. In this embodiment, the first custom computing apparatus contains a memory, a storage medium and at least one processor with at least one multi-stage execution unit. The second computing apparatus contains at least two multi-stage execution units that allow for parallel execution of tasks. The first custom computing apparatus optimizes the code for parallelism, locality of operations and contiguity of memory accesses on the second computing apparatus. This Abstract is provided for the sole purpose of complying with the Abstract requirement rules. This Abstract is submitted with the explicit understanding that it will not be used to interpret or to limit the scope or the meaning of the claims.

  17. Finite element method for optimal guidance of an advanced launch vehicle

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.; Calise, Anthony J.; Leung, Martin

    1992-01-01

    A temporal finite element based on a mixed form of Hamilton's weak principle is summarized for optimal control problems. The resulting weak Hamiltonian finite element method is extended to allow for discontinuities in the states and/or discontinuities in the system equations. An extension of the formulation to allow for control inequality constraints is also presented. The formulation does not require element quadrature, and it produces a sparse system of nonlinear algebraic equations. To evaluate its feasibility for real-time guidance applications, this approach is applied to the trajectory optimization of a four-state, two-stage model with inequality constraints for an advanced launch vehicle. Numerical results for this model are presented and compared to results from a multiple-shooting code. The results show the accuracy and computational efficiency of the finite element method.

  18. Optimal preview control for a linear continuous-time stochastic control system in finite-time horizon

    NASA Astrophysics Data System (ADS)

    Wu, Jiang; Liao, Fucheng; Tomizuka, Masayoshi

    2017-01-01

    This paper discusses the design of the optimal preview controller for a linear continuous-time stochastic control system over a finite-time horizon, using the method of the augmented error system. First, an assistant system is introduced for state shifting. Then, since the state equation of the stochastic control system cannot be differentiated because of the Brownian motion term, an integrator is introduced. Thus, the augmented error system, which contains the integrator vector, control input, reference signal, error vector and state of the system, is constructed. The tracking problem of optimal preview control for the linear stochastic control system is thereby transformed into the optimal output tracking problem for the augmented error system. Using the method of dynamic programming from the theory of stochastic control, the optimal controller with previewable signals for the augmented error system, which is equal to the controller of the original system, is obtained. Finally, numerical simulations show the effectiveness of the controller.

  19. Development of an hp-version finite element method for computational optimal control

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Warner, Michael S.

    1993-01-01

    The purpose of this research effort was to begin the study of the application of hp-version finite elements to the numerical solution of optimal control problems. Under NAG-939, the hybrid MACSYMA/FORTRAN code GENCODE was developed which utilized h-version finite elements to successfully approximate solutions to a wide class of optimal control problems. In that code the means for improvement of the solution was the refinement of the time-discretization mesh. With the extension to hp-version finite elements, the degrees of freedom include both nodal values and extra interior values associated with the unknown states, co-states, and controls, the number of which depends on the order of the shape functions in each element. One possible drawback is the increased computational effort within each element required in implementing hp-version finite elements. We are trying to determine whether this computational effort is sufficiently offset by the reduction in the number of time elements used and improved Newton-Raphson convergence so as to be useful in solving optimal control problems in real time. Because certain of the element interior unknowns can be eliminated at the element level by solving a small set of nonlinear algebraic equations in which the nodal values are taken as given, the scheme may turn out to be especially powerful in a parallel computing environment. A different processor could be assigned to each element. The number of processors, strictly speaking, is not required to be any larger than the number of sub-regions which are free of discontinuities of any kind.

  20. Optimization of startup and shutdown operation of simulated moving bed chromatographic processes.

    PubMed

    Li, Suzhou; Kawajiri, Yoshiaki; Raisch, Jörg; Seidel-Morgenstern, Andreas

    2011-06-24

    This paper presents new multistage optimal startup and shutdown strategies for simulated moving bed (SMB) chromatographic processes. The proposed concept allows transient operating conditions to be adjusted stage-wise, and provides the capability to improve transient performance and to fulfill product quality specifications simultaneously. A specially tailored decomposition algorithm is developed to ensure computational tractability of the resulting dynamic optimization problems. By examining the transient operation of a literature separation example characterized by a nonlinear competitive isotherm, the feasibility of the solution approach is demonstrated, and the performance of the conventional and multistage optimal transient regimes is evaluated systematically. The quantitative results clearly show that the optimal operating policies not only significantly reduce both the duration of the transient phase and desorbent consumption, but also enable on-spec production even during startup and shutdown periods. With the aid of the developed transient procedures, short-term separation campaigns with small batch sizes can be performed more flexibly and efficiently by SMB chromatography. Copyright © 2011 Elsevier B.V. All rights reserved.

  1. Long-term Outcomes With Planned Multistage Reduced Dose Repeat Stereotactic Radiosurgery for Treatment of Inoperable High-Grade Arteriovenous Malformations: An Observational Retrospective Cohort Study.

    PubMed

    Marciscano, Ariel E; Huang, Judy; Tamargo, Rafael J; Hu, Chen; Khattab, Mohamed H; Aggarwal, Sameer; Lim, Michael; Redmond, Kristin J; Rigamonti, Daniele; Kleinberg, Lawrence R

    2017-07-01

    There is no consensus regarding the optimal management of inoperable high-grade arteriovenous malformations (AVMs). This long-term study of 42 patients with high-grade AVMs reports obliteration and adverse event (AE) rates using planned multistage repeat stereotactic radiosurgery (SRS). To evaluate the efficacy and safety of multistage SRS with treatment of the entire AVM nidus at each treatment session to achieve complete obliteration of high-grade AVMs. Patients with high-grade Spetzler-Martin (S-M) III-V AVMs treated with at least 2 multistage SRS treatments from 1989 to 2013. Clinical outcomes of obliteration rate, minor/major AEs, and treatment characteristics were collected. Forty-two patients met inclusion criteria (n = 26, S-M III; n = 13, S-M IV; n = 3, S-M V) with a median follow-up of 9.5 yr after first SRS. Median number of SRS treatment stages was 2, and median interval between stages was 3.5 yr. Twenty-two patients underwent pre-SRS embolization. Complete AVM obliteration rate was 38%, and the median time to obliteration was 9.7 yr. On multivariate analysis, higher S-M grade was significantly associated (P = .04) with failure to achieve obliteration. Twenty-seven post-SRS AEs were observed, and the post-SRS intracranial hemorrhage rate was 0.027 events per patient year. Treatment of high-grade AVMs with multistage SRS achieves AVM obliteration in a meaningful proportion of patients with acceptable AE rates. Lower obliteration rates were associated with higher S-M grade and pre-SRS embolization. This approach should be considered with caution, as partial obliteration does not protect from hemorrhage. Copyright © 2017 by the Congress of Neurological Surgeons

  2. Separation Control in a Multistage Compressor Using Impulsive Surface Injection

    NASA Technical Reports Server (NTRS)

    Wundrow, David W.; Braunscheidel, Edward P.; Culley, Dennis E.; Bright, Michelle M.

    2006-01-01

    Control of flow separation using impulsive surface injection is investigated within the multistage environment of a low speed axial-flow compressor. Measured wake profiles behind a set of embedded stator vanes treated with suction-surface injection indicate significant reduction in flow separation at a variety of injection-pulse repetition rates and durations. The corresponding total pressure losses across the vanes reveal a bank of repetition rates at each pulse duration where the separation control remains nearly complete. This persistence allows for demands on the injected-mass delivery system to be economized while still achieving effective flow control. The response of the stator-vane boundary layers to infrequently applied short injection pulses is described in terms of the periodic excitation of turbulent strips whose growth and propagation characteristics dictate the lower bound on the band of optimal pulse repetition rates. The eventual falloff in separation control at higher repetition rates is linked to a competition between the benefits of pulse-induced mixing and the aggravation caused by the periodic introduction of low-momentum fluid. Use of these observations for impulsive actuator design is discussed and their impact on modeling the time-average effect of impulsive surface injection for multistage steady-flow simulation is considered.

  3. MARKOV: A methodology for the solution of infinite time horizon MARKOV decision processes

    USGS Publications Warehouse

    Williams, B.K.

    1988-01-01

    Algorithms are described for determining optimal policies for finite state, finite action, infinite discrete time horizon Markov decision processes. Both value-improvement and policy-improvement techniques are used in the algorithms. Computing procedures are also described. The algorithms are appropriate for processes that are either finite or infinite, deterministic or stochastic, discounted or undiscounted, in any meaningful combination of these features. Computing procedures are described in terms of initial data processing, bound improvements, process reduction, and testing and solution. Application of the methodology is illustrated with an example involving natural resource management. Management implications of certain hypothesized relationships between mallard survival and harvest rates are addressed by applying the optimality procedures to mallard population models.
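    As a compact illustration of the value-improvement idea used in such procedures, here is a minimal value-iteration sketch in Python for a small discounted, finite-state, finite-action Markov decision process; the transition and reward numbers are arbitrary toy values, not a resource-management model:

        import numpy as np

        # P[a][s, s'] = transition probability, R[s, a] = expected immediate reward.
        P = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # action 0
             np.array([[0.5, 0.5], [0.6, 0.4]])]   # action 1
        R = np.array([[1.0, 0.0],
                      [0.0, 2.0]])
        gamma = 0.95

        V = np.zeros(2)
        for _ in range(1000):                       # value iteration to a fixed tolerance
            Q = np.stack([R[:, a] + gamma * P[a] @ V for a in range(2)], axis=1)
            V_new = Q.max(axis=1)
            if np.max(np.abs(V_new - V)) < 1e-8:
                break
            V = V_new

        policy = Q.argmax(axis=1)                   # greedy (optimal) action in each state
        print(V, policy)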

  4. Robust fuel- and time-optimal control of uncertain flexible space structures

    NASA Technical Reports Server (NTRS)

    Wie, Bong; Sinha, Ravi; Sunkel, John; Cox, Ken

    1993-01-01

    The problem of computing open-loop, fuel- and time-optimal control inputs for flexible space structures in the face of modeling uncertainty is investigated. Robustified, fuel- and time-optimal pulse sequences are obtained by solving a constrained optimization problem subject to robustness constraints. It is shown that 'bang-off-bang' pulse sequences with a finite number of switchings provide a practical tradeoff among the maneuvering time, fuel consumption, and performance robustness of uncertain flexible space structures.

  5. An adaptive approach to the physical annealing strategy for simulated annealing

    NASA Astrophysics Data System (ADS)

    Hasegawa, M.

    2013-02-01

    A new and reasonable method for adaptive implementation of simulated annealing (SA) is studied on two types of random traveling salesman problems. The idea is based on the previous finding on the search characteristics of the threshold algorithms, that is, the primary role of the relaxation dynamics in their finite-time optimization process. It is shown that the effective temperature for optimization can be predicted from the system's behavior analogous to the stabilization phenomenon occurring in the heating process starting from a quenched solution. The subsequent slow cooling near the predicted point draws out the inherent optimizing ability of finite-time SA in a more straightforward manner than the conventional adaptive approach.

  6. Finite horizon optimum control with and without a scrap value

    NASA Astrophysics Data System (ADS)

    Neck, R.; Blueschke-Nikolaeva, V.; Blueschke, D.

    2017-06-01

    In this paper, we study the effects of scrap values on the solutions of optimal control problems with finite time horizon. We show how to include a scrap value, either for the state variables or for the state and the control variables, in the OPTCON2 algorithm for the optimal control of dynamic economic systems. We ask whether the introduction of a scrap value can serve as a substitute for an infinite horizon in economic policy optimization problems where the latter option is not available. Using a simple numerical macroeconomic model, we demonstrate that the introduction of a scrap value cannot induce control policies which can be expected for problems with an infinite time horizon.

  7. Tetrahedron Formation Control

    NASA Technical Reports Server (NTRS)

    Petruzzo, Charles; Guzman, Jose

    2004-01-01

    This paper considers the preliminary development of a general optimization procedure for tetrahedron formation control. The maneuvers are assumed to be impulsive and a multi-stage optimization method is employed. The stages include (1) targeting to a fixed tetrahedron location and orientation, and (2) rotating and translating the tetrahedron. The number of impulsive maneuvers can also be varied. As the impulse locations and times change, new arcs are computed using a differential corrections scheme that varies the impulse magnitudes and directions. The result is a continuous trajectory with velocity discontinuities. The velocity discontinuities are then used to formulate the cost function. Direct optimization techniques are employed. The procedure is applied to the NASA Goddard Magnetospheric Multi-Scale (MMS) mission to compute preliminary formation control fuel requirements.

  8. Conical Euler solution for a highly-swept delta wing undergoing wing-rock motion

    NASA Technical Reports Server (NTRS)

    Lee, Elizabeth M.; Batina, John T.

    1990-01-01

    Modifications to an unsteady conical Euler code for the free-to-roll analysis of highly-swept delta wings are described. The modifications involve the addition of the rolling rigid-body equation of motion for its simultaneous time-integration with the governing flow equations. The flow solver utilized in the Euler code includes a multistage Runge-Kutta time-stepping scheme which uses a finite-volume spatial discretization on an unstructured mesh made up of triangles. Steady and unsteady results are presented for a 75 deg swept delta wing at a freestream Mach number of 1.2 and an angle of attack of 30 deg. The unsteady results consist of forced harmonic and free-to-roll calculations. The free-to-roll case exhibits a wing rock response produced by unsteady aerodynamics consistent with the aerodynamics of the forced harmonic results. Similarities are shown with a wing-rock time history from a low-speed wind tunnel test.

  9. Finite-time synchronization of stochastic coupled neural networks subject to Markovian switching and input saturation.

    PubMed

    Selvaraj, P; Sakthivel, R; Kwon, O M

    2018-06-07

    This paper addresses the problem of finite-time synchronization of stochastic coupled neural networks (SCNNs) subject to Markovian switching, mixed time delay, and actuator saturation. In addition, coupling strengths of the SCNNs are characterized by mutually independent random variables. By utilizing a simple linear transformation, the problem of stochastic finite-time synchronization of SCNNs is converted into a mean-square finite-time stabilization problem of an error system. By choosing a suitable mode dependent switched Lyapunov-Krasovskii functional, a new set of sufficient conditions is derived to guarantee the finite-time stability of the error system. Subsequently, with the help of anti-windup control scheme, the actuator saturation risks could be mitigated. Moreover, the derived conditions help to optimize estimation of the domain of attraction by enlarging the contractively invariant set. Furthermore, simulations are conducted to exhibit the efficiency of proposed control scheme. Copyright © 2018 Elsevier Ltd. All rights reserved.

  10. Discontinuous Galerkin Finite Element Method for Parabolic Problems

    NASA Technical Reports Server (NTRS)

    Kaneko, Hideaki; Bey, Kim S.; Hou, Gene J. W.

    2004-01-01

    In this paper, we develop a time discretization scheme and its corresponding spatial discretization, based upon the assumption of a certain weak singularity of $\|u_t(t)\|_{L_2(\Omega)} = \|u_t\|_2$, for the discontinuous Galerkin finite element method for one-dimensional parabolic problems. Optimal convergence rates in both the time and spatial variables are obtained. A discussion of an automatic time-step control method is also included.

  11. Hydraulic design to optimize the treatment capacity of Multi-Stage Filtration units

    NASA Astrophysics Data System (ADS)

    Mushila, C. N.; Ochieng, G. M.; Otieno, F. A. O.; Shitote, S. M.; Sitters, C. W.

    2016-04-01

    Multi-Stage Filtration (MSF) can provide a robust treatment alternative for surface water sources of variable water quality in rural communities at low operation and maintenance costs. MSF is a combination of Slow Sand Filters (SSFs) and pre-treatment systems. The general objective of this research was to optimize the treatment capacity of MSF. A pilot plant study was undertaken to meet this objective. The pilot plant was monitored continuously for 98 days from commissioning until the end of the project. Three main stages of MSF, namely the Dynamic Gravel Filter (DGF), the Horizontal-flow Roughing Filter (HRF) and the SSF, were identified, designed and built. The response of the respective MSF units in removing selected parameters guiding drinking water quality, such as microbiological indicators (faecal and total coliform), suspended solids, turbidity, pH, temperature, iron and manganese, was investigated. The benchmarks were the Kenya Bureau of Standards (KEBS) and World Health Organization (WHO) standards for drinking water quality. With respect to microbiological raw water quality improvement, the MSF units achieved on average 98% faecal and 96% total coliform removal. The results obtained indicate that implementation of MSF in rural communities has the potential to increase access to potable water for the rural populace, with a probable consequent decrease in waterborne diseases. With reduced down time due to illness, more time could be spent on other economic activities.

  12. Adaptive dynamic programming for finite-horizon optimal control of discrete-time nonlinear systems with ε-error bound.

    PubMed

    Wang, Fei-Yue; Jin, Ning; Liu, Derong; Wei, Qinglai

    2011-01-01

    In this paper, we study the finite-horizon optimal control problem for discrete-time nonlinear systems using the adaptive dynamic programming (ADP) approach. The idea is to use an iterative ADP algorithm to obtain the optimal control law which makes the performance index function close to the greatest lower bound of all performance indices within an ε-error bound. The optimal number of control steps can also be obtained by the proposed ADP algorithms. A convergence analysis of the proposed ADP algorithms in terms of performance index function and control policy is made. In order to facilitate the implementation of the iterative ADP algorithms, neural networks are used for approximating the performance index function, computing the optimal control policy, and modeling the nonlinear system. Finally, two simulation examples are employed to illustrate the applicability of the proposed method.
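    The exact finite-horizon dynamic programming recursion that such iterative ADP schemes approximate can be written down directly for small discrete problems; a minimal Python sketch follows, with arbitrary toy dynamics and costs and no neural network approximation:

        import numpy as np

        n_states, n_actions, horizon = 4, 2, 5
        rng = np.random.default_rng(1)
        cost = rng.random((n_states, n_actions))                 # stage cost g(x, u), toy values
        nxt = rng.integers(0, n_states, (n_states, n_actions))   # deterministic successor state

        J = np.zeros(n_states)                              # terminal cost J_N(x) = 0
        policy = np.zeros((horizon, n_states), dtype=int)
        for k in reversed(range(horizon)):                  # backward induction
            Q = cost + J[nxt]                               # Q_k(x, u) = g(x, u) + J_{k+1}(f(x, u))
            policy[k] = Q.argmin(axis=1)
            J = Q.min(axis=1)                               # J_k(x) = min_u Q_k(x, u)

        print(J)          # optimal cost-to-go from each state at time 0
        print(policy[0])  # optimal first action in each state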

  13. Optimal variable-grid finite-difference modeling for porous media

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yin, Xingyao; Li, Haishan

    2014-12-01

    Numerical modeling of poroelastic waves by the finite-difference (FD) method is more expensive than that of acoustic or elastic waves. To improve the accuracy and computational efficiency of seismic modeling, variable-grid FD methods have been developed. In this paper, we derived optimal staggered-grid finite difference schemes with variable grid-spacing and time-step for seismic modeling in porous media. FD operators with small grid-spacing and time-step are adopted for low-velocity or small-scale geological bodies, while FD operators with big grid-spacing and time-step are adopted for high-velocity or large-scale regions. The dispersion relations of FD schemes were derived based on the plane wave theory, then the FD coefficients were obtained using the Taylor expansion. Dispersion analysis and modeling results demonstrated that the proposed method has higher accuracy with lower computational cost for poroelastic wave simulation in heterogeneous reservoirs.
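    The Taylor-expansion step mentioned above, which fixes the conventional (non-optimized) staggered-grid coefficients, can be reproduced in a few lines of Python; the sketch solves the matching conditions for a first derivative on a staggered grid and is not the dispersion-optimized scheme of the paper:

        import numpy as np

        def staggered_fd_coeffs(N):
            """Conventional 2N-point staggered-grid coefficients c_m for
            f'(0) ~ (1/h) * sum_m c_m [f((2m-1)h/2) - f(-(2m-1)h/2)],
            obtained by matching Taylor-series terms (order-2N accuracy)."""
            m = np.arange(1, N + 1)
            # Row j enforces sum_m c_m * (2m-1)^(2j-1) = 1 if j == 1 else 0.
            A = np.array([(2 * m - 1.0) ** (2 * j - 1) for j in range(1, N + 1)])
            b = np.zeros(N); b[0] = 1.0
            return np.linalg.solve(A, b)

        print(staggered_fd_coeffs(1))  # [1.]              -> 2nd-order scheme
        print(staggered_fd_coeffs(2))  # [1.125, -0.04167] -> classic 9/8, -1/24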

  14. Influence of dispatching rules on average production lead time for multi-stage production systems.

    PubMed

    Hübl, Alexander; Jodlbauer, Herbert; Altendorfer, Klaus

    2013-08-01

    In this paper the influence of different dispatching rules on the average production lead time is investigated. Two theorems based on covariance between processing time and production lead time are formulated and proved theoretically. Theorem 1 links the average production lead time to the "processing time weighted production lead time" for the multi-stage production systems analytically. The influence of different dispatching rules on average lead time, which is well known from simulation and empirical studies, can be proved theoretically in Theorem 2 for a single stage production system. A simulation study is conducted to gain more insight into the influence of dispatching rules on average production lead time in a multi-stage production system. We find that the "processing time weighted average production lead time" for a multi-stage production system is not invariant of the applied dispatching rule and can be used as a dispatching rule independent indicator for single-stage production systems.
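    The effect that Theorem 2 and the simulation study describe can be reproduced with a few lines of Python; the single-machine model below (evenly spaced arrivals, exponential processing times, FIFO versus shortest-processing-time dispatching) is only a toy stand-in for the paper's multi-stage system:

        import random

        def average_lead_time(jobs, rule):
            """Single-machine simulation: jobs = [(arrival, processing)]; rule picks the
            next job among those waiting; returns mean lead time (completion - arrival)."""
            pending = sorted(jobs)                 # jobs not yet arrived, by arrival time
            waiting, t, total, finished = [], 0.0, 0.0, 0
            while finished < len(jobs):
                while pending and pending[0][0] <= t:
                    waiting.append(pending.pop(0)) # move newly arrived jobs to the queue
                if not waiting:                    # machine idle until the next arrival
                    t = pending[0][0]
                    continue
                job = rule(waiting)                # dispatching decision
                waiting.remove(job)
                arrival, proc = job
                t += proc                          # process the job to completion
                total += t - arrival               # lead time of this job
                finished += 1
            return total / len(jobs)

        random.seed(0)
        jobs = [(i * 1.25, random.expovariate(1.0)) for i in range(2000)]
        print(average_lead_time(jobs, rule=lambda w: w[0]))                        # FIFO
        print(average_lead_time(jobs, rule=lambda w: min(w, key=lambda j: j[1])))  # SPT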

  15. Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition

    NASA Technical Reports Server (NTRS)

    Kobayashi, Kayla N.; He, Yutao; Zheng, Jason X.

    2011-01-01

    Multi-rate finite impulse response (MRFIR) filters are among the essential signal-processing components in spaceborne instruments where finite impulse response filters are often used to minimize nonlinear group delay and finite precision effects. Cascaded (multistage) designs of MRFIR filters are further used for large rate change ratio in order to lower the required throughput, while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed output at the lowest possible sampling rate. In this innovation, an alternative representation and implementation technique called TD-MRFIR (Thread Decomposition MRFIR) is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. A naive implementation of a decimation filter consisting of a full FIR followed by a downsampling stage is very inefficient, as most of the computations performed by the FIR stage are discarded through downsampling. In fact, only 1/M of the total computations are useful (M being the decimation factor). Polyphase decomposition provides an alternative view of decimation filters, where the downsampling occurs before the FIR stage, and the outputs are viewed as the sum of M sub-filters with lengths of N/M taps. Although this approach leads to more efficient filter designs, in general the implementation is not straightforward if the number of multipliers needs to be minimized. In TD-MRFIR, each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. Each of the threads completes when a convolution result (filter output value) is computed, and is activated when the first input of the convolution becomes available. Thus, the new threads get spawned at exactly the rate of N/M, where N is the total number of taps, and M is the decimation factor. Existing threads retire at the same rate of N/M. The implementation of an MRFIR is thus transformed into a problem of statically scheduling the minimum number of multipliers such that all threads can be completed on time. Solving the static scheduling problem is rather straightforward if one examines the Thread Decomposition Diagram, which is a table-like diagram that has rows representing computation threads and columns representing time. The control logic of the MRFIR can be implemented using simple counters. Instead of decomposing MRFIRs into subfilters as suggested by polyphase decomposition, the thread decomposition diagrams transform the problem into a familiar one of static scheduling, which can be easily solved as the input rate is constant.
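
    The decimator arithmetic described above can be checked numerically: computing each needed output directly as one finite convolution (one "thread" per output) reproduces the full-FIR-then-downsample result at roughly 1/M of the multiplications. The sketch below shows only this equivalence, not the static multiplier scheduling on the FPGA.

```python
# Each decimator output is one finite convolution ("thread"); computing only
# the needed outputs reproduces full-FIR-then-downsample at 1/M of the work.
# Sketch only -- the static multiplier scheduling on the FPGA is not modeled.
import numpy as np

rng = np.random.default_rng(0)
N, M = 32, 4                            # filter taps and decimation factor
h = rng.standard_normal(N)              # FIR coefficients
x = rng.standard_normal(1024)           # input samples

# Reference: full convolution followed by downsampling (wasteful).
full = np.convolve(x, h)[: len(x)][::M]

# Thread view: output n is the dot product of h with the M*n-th input window.
xp = np.concatenate([np.zeros(N - 1), x])        # zero history before t = 0
threads = np.array([h @ xp[M * n : M * n + N][::-1] for n in range(len(x) // M)])

print(np.allclose(full, threads[: full.size]))   # -> True
```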

  16. Development of an adaptive hp-version finite element method for computational optimal control

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Warner, Michael S.

    1994-01-01

    In this research effort, the usefulness of hp-version finite elements and adaptive solution-refinement techniques in generating numerical solutions to optimal control problems has been investigated. Under NAG-939, a general FORTRAN code was developed which approximated solutions to optimal control problems with control constraints and state constraints. Within that methodology, to get high-order accuracy in solutions, the finite element mesh would have to be refined repeatedly through bisection of the entire mesh in a given phase. In the current research effort, the order of the shape functions in each element has been made a variable, giving more flexibility in error reduction and smoothing. Similarly, individual elements can each be subdivided into many pieces, depending on the local error indicator, while other parts of the mesh remain coarsely discretized. The problem remains to reduce and smooth the error while still keeping computational effort reasonable enough to calculate time histories in a short enough time for on-board applications.

  17. Comparison of Several Dissipation Algorithms for Central Difference Schemes

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Radespiel, R.; Turkel, E.

    1997-01-01

    Several algorithms for introducing artificial dissipation into a central difference approximation to the Euler and Navier-Stokes equations are considered. The focus of the paper is on the convective upwind and split pressure (CUSP) scheme, which is designed to support single interior point discrete shock waves. This scheme is analyzed and compared in detail with scalar and matrix dissipation (MATD) schemes. Resolution capability is determined by solving subsonic, transonic, and hypersonic flow problems. A finite-volume discretization and a multistage time-stepping scheme with multigrid are used to compute solutions to the flow equations. Numerical results are also compared with either theoretical solutions or experimental data. For transonic airfoil flows, the best accuracy on coarse meshes for aerodynamic coefficients is obtained with a simple MATD scheme.

  18. Comparison of two- and three-dimensional Navier-Stokes solutions with NASA experimental data for CAST-10 airfoil

    NASA Technical Reports Server (NTRS)

    Swanson, R. Charles; Radespiel, Rolf; Mccormick, V. Edward

    1989-01-01

    The two-dimensional (2-D) and three-dimensional Navier-Stokes equations are solved for flow over a NAE CAST-10 airfoil model. Recently developed finite-volume codes that apply a multistage time stepping scheme in conjunction with steady state acceleration techniques are used to solve the equations. Two-dimensional results are shown for flow conditions uncorrected and corrected for wind tunnel wall interference effects. Predicted surface pressures from 3-D simulations are compared with those from 2-D calculations. The focus of the 3-D computations is the influence of the sidewall boundary layers. Topological features of the 3-D flow fields are indicated. Lift and drag results are compared with experimental measurements.

  19. Optimal search strategies of space-time coupled random walkers with finite lifetimes

    NASA Astrophysics Data System (ADS)

    Campos, D.; Abad, E.; Méndez, V.; Yuste, S. B.; Lindenberg, K.

    2015-05-01

    We present a simple paradigm for detection of an immobile target by a space-time coupled random walker with a finite lifetime. The motion of the walker is characterized by linear displacements at a fixed speed and exponentially distributed duration, interrupted by random changes in the direction of motion and resumption of motion in the new direction with the same speed. We call these walkers "mortal creepers." A mortal creeper may die at any time during its motion according to an exponential decay law characterized by a finite mean death rate ωm. While still alive, the creeper has a finite mean frequency ω of change of the direction of motion. In particular, we consider the efficiency of the target search process, characterized by the probability that the creeper will eventually detect the target. Analytic results confirmed by numerical results show that there is an ωm-dependent optimal frequency ω =ωopt that maximizes the probability of eventual target detection. We work primarily in one-dimensional (d =1 ) domains and examine the role of initial conditions and of finite domain sizes. Numerical results in d =2 domains confirm the existence of an optimal frequency of change of direction, thereby suggesting that the observed effects are robust to changes in dimensionality. In the d =1 case, explicit expressions for the probability of target detection in the long time limit are given. In the case of an infinite domain, we compute the detection probability for arbitrary times and study its early- and late-time behavior. We further consider the survival probability of the target in the presence of many independent creepers beginning their motion at the same location and at the same time. We also consider a version of the standard "target problem" in which many creepers start at random locations at the same time.
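
    A direct Monte Carlo rendering of the model is straightforward; the sketch below estimates the detection probability of a one-dimensional mortal creeper for a few turn frequencies, with the new direction drawn as a fresh +/-1 at each turn (an assumption) and all parameter values chosen for illustration only.

```python
# Monte Carlo sketch: a 1D "mortal creeper" starts at x0, moves at speed v,
# redraws its direction as +/-1 at rate w (an assumption about the turn rule),
# and dies at rate wm; estimate the probability of reaching the target at x = 0.
import random

def detect_prob(w, wm, x0=1.0, v=1.0, trials=20000):
    hits = 0
    for _ in range(trials):
        x, t, t_death = x0, 0.0, random.expovariate(wm)
        while t < t_death:
            direction = random.choice((-1.0, 1.0))
            flight = min(random.expovariate(w), t_death - t)   # truncate at death
            if direction < 0 and x - v * flight <= 0:          # crosses the target
                hits += 1
                break
            x += direction * v * flight
            t += flight
    return hits / trials

for w in (0.5, 2.0, 8.0):
    print("omega =", w, " P(detect) ~", round(detect_prob(w, wm=1.0), 3))
```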

  20. Performance Improvement of a Return Channel in a Multistage Centrifugal Compressor Using Multiobjective Optimization.

    PubMed

    Nishida, Yoshifumi; Kobayashi, Hiromi; Nishida, Hideo; Sugimura, Kazuyuki

    2013-05-01

    The effect of the design parameters of a return channel on the performance of a multistage centrifugal compressor was numerically investigated, and the shape of the return channel was optimized using a multiobjective optimization method based on a genetic algorithm to improve the performance of the centrifugal compressor. The results of sensitivity analysis using Latin hypercube sampling suggested that the inlet-to-outlet area ratio of the return vane affected the total pressure loss in the return channel, and that the inlet-to-outlet radius ratio of the return vane affected the outlet flow angle from the return vane. Moreover, this analysis suggested that the number of return vanes affected both the loss and the flow angle at the outlet. As a result of optimization, the number of return vanes was increased from 14 to 22, and the area ratio was decreased from 0.71 to 0.66. The radius ratio was also decreased from 2.1 to 2.0. Performance tests on a centrifugal compressor with two return channels (the original design and the optimized design) were carried out using a two-stage test apparatus. The measured flow distribution exhibited a swirl flow in the center region and a reversed swirl flow near the hub and shroud sides. The exit flow of the optimized design was more uniform than that of the original design. For the optimized design, the overall two-stage efficiency and pressure coefficient were increased by 0.7% and 1.5%, respectively. Moreover, the second-stage efficiency and pressure coefficient were respectively increased by 1.0% and 3.2%. It is considered that the increase in the second-stage efficiency was caused by the increased uniformity of the flow, and the rise in the pressure coefficient was caused by a decrease in the residual swirl flow. It was thus concluded from the numerical and experimental results that the optimized return channel improved the performance of the multistage centrifugal compressor.

  1. Optimum element density studies for finite-element thermal analysis of hypersonic aircraft structures

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Olona, Timothy; Muramoto, Kyle M.

    1990-01-01

    Different finite element models previously set up for thermal analysis of the space shuttle orbiter structure are discussed and their shortcomings identified. Element density criteria are established for the finite element thermal modelings of space shuttle orbiter-type large, hypersonic aircraft structures. These criteria are based on rigorous studies on solution accuracies using different finite element models having different element densities set up for one cell of the orbiter wing. Also, a method for optimization of the transient thermal analysis computer central processing unit (CPU) time is discussed. Based on the newly established element density criteria, the orbiter wing midspan segment was modeled for the examination of thermal analysis solution accuracies and the extent of computation CPU time requirements. The results showed that the distributions of the structural temperatures and the thermal stresses obtained from this wing segment model were satisfactory and the computation CPU time was at the acceptable level. The studies offered the hope that modeling the large, hypersonic aircraft structures using high-density elements for transient thermal analysis is possible if a CPU optimization technique was used.

  2. A Bayesian multi-stage cost-effectiveness design for animal studies in stroke research

    PubMed Central

    Cai, Chunyan; Ning, Jing; Huang, Xuelin

    2017-01-01

    Much progress has been made in the area of adaptive designs for clinical trials. However, little has been done regarding adaptive designs to identify optimal treatment strategies in animal studies. Motivated by an animal study of a novel strategy for treating strokes, we propose a Bayesian multi-stage cost-effectiveness design to simultaneously identify the optimal dose and determine the therapeutic treatment window for administrating the experimental agent. We consider a non-monotonic pattern for the dose-schedule-efficacy relationship and develop an adaptive shrinkage algorithm to assign more cohorts to admissible strategies. We conduct simulation studies to evaluate the performance of the proposed design by comparing it with two standard designs. These simulation studies show that the proposed design yields a significantly higher probability of selecting the optimal strategy, while it is generally more efficient and practical in terms of resource usage. PMID:27405325

  3. Fractional Multistage Hydrothermal Liquefaction of Biomass and Catalytic Conversion into Hydrocarbons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cortright, Randy; Rozmiarek, Robert; Dally, Brice

    2017-08-31

    The objective of this project was to develop an improved multistage process for the hydrothermal liquefaction (HTL) of biomass to serve as a new front-end, deconstruction process ideally suited to feed Virent’s well-proven catalytic technology, which is already being scaled up. This process produced water soluble, partially de-oxygenated intermediates that are ideally suited for catalytic finishing to fungible distillate hydrocarbons. Through this project, Virent, with its partners, demonstrated the conversion of pine wood chips to drop-in hydrocarbon distillate fuels using a multi-stage fractional conversion system that is integrated with Virent’s BioForming® process. The majority of work was in the liquefaction task and included temperature scoping, solvent optimization, and separations.

  4. Optimal placement of water-lubricated rubber bearings for vibration reduction of flexible multistage rotor systems

    NASA Astrophysics Data System (ADS)

    Liu, Shibing; Yang, Bingen

    2017-10-01

    Flexible multistage rotor systems with water-lubricated rubber bearings (WLRBs) have a variety of engineering applications. Filling a technical gap in the literature, this effort proposes a method of optimal bearing placement that minimizes the vibration amplitude of a WLRB-supported flexible rotor system with a minimum number of bearings. In the development, a new model of WLRBs and a distributed transfer function formulation are used to define a mixed continuous-and-discrete optimization problem. To deal with the case of uncertain number of WLRBs in rotor design, a virtual bearing method is devised. Solution of the optimization problem by a real-coded genetic algorithm yields the locations and lengths of water-lubricated rubber bearings, by which the prescribed operational requirements for the rotor system are satisfied. The proposed method is applicable either to preliminary design of a new rotor system with the number of bearings unforeknown or to redesign of an existing rotor system with a given number of bearings. Numerical examples show that the proposed optimal bearing placement is efficient, accurate and versatile in different design cases.

  5. Optimizing integrated airport surface and terminal airspace operations under uncertainty

    NASA Astrophysics Data System (ADS)

    Bosson, Christabelle S.

    In airports and surrounding terminal airspaces, the integration of surface, arrival and departure scheduling and routing has the potential to improve operational efficiency. Moreover, because both the airport surface and the terminal airspace are often altered by random perturbations, the consideration of uncertainty in flight schedules is crucial to improve the design of robust flight schedules. Previous research mainly focused on independently solving arrival scheduling problems, departure scheduling problems and surface management scheduling problems, and most of the developed models are deterministic. This dissertation presents an alternate method to model the integrated operations by using a machine job-shop scheduling formulation. A multistage stochastic programming approach is chosen to formulate the problem in the presence of uncertainty, and candidate solutions are obtained by solving sample average approximation problems with finite sample size. The developed mixed-integer-linear-programming algorithm-based scheduler is capable of computing optimal aircraft schedules and routings that reflect the integration of air and ground operations. The assembled methodology is applied to a Los Angeles case study. To show the benefits of integrated operations over First-Come-First-Served, a preliminary proof-of-concept is conducted for a set of fourteen aircraft evolving under deterministic conditions in a model of the Los Angeles International Airport surface and surrounding terminal areas. Using historical data, a representative 30-minute traffic schedule and aircraft mix scenario is constructed. The results of the Los Angeles application show that the integration of air and ground operations and the use of a time-based separation strategy enable both significant surface and air time savings. The solution computed by the optimization provides a more efficient routing and scheduling than the First-Come-First-Served solution. Additionally, a data-driven analysis is performed for the Los Angeles environment, and probabilistic distributions of pertinent uncertainty sources are obtained. A sensitivity analysis is then carried out to assess the methodology performance and find optimal sampling parameters. Finally, simulations of increasing traffic density in the presence of uncertainty are conducted first for integrated arrivals and departures, then for integrated surface and air operations. To compare the optimization results and show the benefits of integrated operations, two aircraft separation methods are implemented that offer different routing options. The simulations of integrated air operations and the simulations of integrated air and surface operations demonstrate that significant traveling time savings, both total and individual surface and air times, can be obtained when more direct routes are allowed to be traveled even in the presence of uncertainty. The resulting routings, however, induce extra takeoff delay for departing flights. As a consequence, some flights cannot meet their initially assigned runway slot, which engenders runway position shifting when comparing resulting runway sequences computed under both deterministic and stochastic conditions. The optimization is able to compute an optimal runway schedule that represents an optimal balance between total schedule delays and total travel times.
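
    The sample average approximation idea scales down to a toy example: the sketch below brute-forces the runway sequence for four hypothetical flights so as to minimize the average total delay over sampled ready-time scenarios. It stands in for, and is far simpler than, the mixed-integer scheduler developed in the dissertation.

```python
# Tiny sample-average-approximation (SAA) illustration: pick the runway
# sequence for a few flights that minimizes average total delay over sampled
# taxi-time scenarios (a stand-in for the dissertation's MILP scheduler).
import itertools, random

random.seed(0)
ready = {"A": 0.0, "B": 1.0, "C": 2.0, "D": 2.5}      # nominal ready times (min)
sep = 1.5                                             # runway separation (min)
scenarios = [{f: random.uniform(0.0, 2.0) for f in ready} for _ in range(200)]

def total_delay(order, noise):
    t, delay = -sep, 0.0                              # so the first slot can be at time 0
    for f in order:
        rt = ready[f] + noise[f]                      # scenario-perturbed ready time
        t = max(t + sep, rt)                          # earliest feasible runway slot
        delay += t - rt
    return delay

best = min(itertools.permutations(ready),
           key=lambda order: sum(total_delay(order, s) for s in scenarios))
print("SAA-preferred runway sequence:", best)
```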

  6. Applying a punch with microridges in multistage deep drawing processes.

    PubMed

    Lin, Bor-Tsuen; Yang, Cheng-Yu

    2016-01-01

    The developers of high aspect ratio components aim to minimize the processing stages in deep drawing processes. This study elucidates the application of microridge punches in multistage deep drawing processes. A microridge punch improves drawing performance, thereby reducing the number of stages required in deep forming processes. As an example, the original eight-stage deep forming process for a copper cylindrical cup with a high aspect ratio was analyzed by finite element simulation. Microridge punch designs were introduced in Stages 4 and 7 to replace the original punches. In addition, Stages 3 and 6 were eliminated. Finally, these changes were verified through experiments. The results showed that the microridge punches reduced the number of deep drawing stages yielding similar thickness difference percentages. Further, the numerical and experimental results demonstrated good consistency in the thickness distribution.

  7. Optimal control of first order distributed systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Johnson, T. L.

    1972-01-01

    The problem of characterizing optimal controls for a class of distributed-parameter systems is considered. The system dynamics are characterized mathematically by a finite number of coupled partial differential equations involving first-order time and space derivatives of the state variables, which are constrained at the boundary by a finite number of algebraic relations. Multiple control inputs, extending over the entire spatial region occupied by the system ("distributed controls"), are to be designed so that the response of the system is optimal. A major example involving boundary control of an unstable low-density plasma is developed from physical laws.

  8. An explicit solution to the exoatmospheric powered flight guidance and trajectory optimization problem for rocket propelled vehicles

    NASA Technical Reports Server (NTRS)

    Jaggers, R. F.

    1977-01-01

    A derivation of an explicit solution to the two-point boundary-value problem of exoatmospheric guidance and trajectory optimization is presented. Fixed initial conditions and continuous-burn, multistage thrusting are assumed. Any number of end conditions from one to six (throttling is required in the case of six) can be satisfied in an explicit and practically optimal manner. The explicit equations converge for off-nominal conditions such as engine failure, abort, target switch, etc. The self-starting, predictor/corrector solution involves no Newton-Raphson iterations, numerical integration, or first-guess values, and converges rapidly if physically possible. A form of this algorithm has been chosen for onboard guidance, as well as real-time and preflight ground targeting and trajectory shaping for the NASA Space Shuttle Program.

  9. Comparison of cell centered and cell vertex scheme in the calculation of high speed compressible flows

    NASA Astrophysics Data System (ADS)

    Rahman, Syazila; Yusoff, Mohd. Zamri; Hasini, Hasril

    2012-06-01

    This paper describes the comparison between the cell-centered scheme and the cell-vertex scheme in the calculation of high-speed compressible flow properties. The calculation is carried out using Computational Fluid Dynamics (CFD), in which the mass, momentum and energy equations are solved simultaneously over the flow domain. The geometry under investigation consists of a Binnie and Green convergent-divergent nozzle, and a structured mesh is implemented throughout the flow domain. The finite volume CFD solver employs a second-order accurate central differencing scheme for spatial discretization. In addition, the second-order accurate cell-vertex finite volume spatial discretization is also introduced in this case for comparison. The multi-stage Runge-Kutta time integration is implemented for solving the set of non-linear governing equations with variables stored at the vertices. Artificial dissipation uses second- and fourth-order terms with a pressure switch to detect changes in the pressure gradient; this is important for controlling solution stability and capturing shock discontinuities. The results are compared with experimental measurements, and good agreement is obtained for both cases.
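
    A minimal sketch of the multi-stage time integration mentioned above is given below, applied to the scalar model equation du/dt = -a*u; the four stage coefficients are a commonly used set and are assumptions, not necessarily those of this study.

```python
# Generic multi-stage (Jameson-type) time integration sketch on the model
# equation du/dt = R(u) = -a*u; the 4-stage coefficients below are a common
# choice and are not necessarily the ones used in the study.
import numpy as np

a, dt, nsteps = 2.0, 0.1, 50
alphas = (1/4, 1/3, 1/2, 1.0)            # stage coefficients

def residual(u):
    return -a * u

u = 1.0
for _ in range(nsteps):
    u0, w = u, u
    for alpha in alphas:                 # u^(k) = u^n + alpha_k * dt * R(u^(k-1))
        w = u0 + alpha * dt * residual(w)
    u = w

print("numerical:", u, " exact:", np.exp(-a * dt * nsteps))
```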

  10. An assessment of viscous effects in computational simulation of benign and burst vortex flows on generic fighter wind-tunnel models using TEAM code

    NASA Technical Reports Server (NTRS)

    Kinard, Tim A.; Harris, Brenda W.; Raj, Pradeep

    1995-01-01

    Vortex flows on a twin-tail and a single-tail modular transonic vortex interaction (MTVI) model, representative of a generic fighter configuration, are computationally simulated in this study using the Three-dimensional Euler/Navier-Stokes Aerodynamic Method (TEAM). The primary objective is to provide an assessment of viscous effects on benign (10 deg angle of attack) and burst (35 deg angle of attack) vortex flow solutions. This study was conducted in support of a NASA project aimed at assessing the viability of using Euler technology to predict aerodynamic characteristics of aircraft configurations at moderate-to-high angles of attack in a preliminary design environment. The TEAM code solves the Euler and Reynolds-averaged Navier-Stokes equations on patched multiblock structured grids. Its algorithm is based on a cell-centered finite-volume formulation with a multistage time-stepping scheme. Viscous effects are assessed by comparing the computed inviscid and viscous solutions with each other and experimental data. Also, results of Euler solution sensitivity to grid density and numerical dissipation are presented for the twin-tail model. The results show that proper accounting of viscous effects is necessary for detailed design and optimization, but Euler solutions can provide meaningful guidelines for preliminary design of flight vehicles which exhibit vortex flows in parts of their flight envelope.

  11. A time-space domain stereo finite difference method for 3D scalar wave propagation

    NASA Astrophysics Data System (ADS)

    Chen, Yushu; Yang, Guangwen; Ma, Xiao; He, Conghui; Song, Guojie

    2016-11-01

    The time-space domain finite difference methods reduce numerical dispersion effectively by minimizing the error in the joint time-space domain. However, their interpolating coefficients are related with the Courant numbers, leading to significantly extra time costs for loading the coefficients consecutively according to velocity in heterogeneous models. In the present study, we develop a time-space domain stereo finite difference (TSSFD) method for 3D scalar wave equation. The method propagates both the displacements and their gradients simultaneously to keep more information of the wavefields, and minimizes the maximum phase velocity error directly using constant interpolation coefficients for different Courant numbers. We obtain the optimal constant coefficients by combining the truncated Taylor series approximation and the time-space domain optimization, and adjust the coefficients to improve the stability condition. Subsequent investigation shows that the TSSFD can suppress numerical dispersion effectively with high computational efficiency. The maximum phase velocity error of the TSSFD is just 3.09% even with only 2 sampling points per minimum wavelength when the Courant number is 0.4. Numerical experiments show that to generate wavefields with no visible numerical dispersion, the computational efficiency of the TSSFD is 576.9%, 193.5%, 699.0%, and 191.6% of those of the 4th-order and 8th-order Lax-Wendroff correction (LWC) method, the 4th-order staggered grid method (SG), and the 8th-order optimal finite difference method (OFD), respectively. Meanwhile, the TSSFD is compatible to the unsplit convolutional perfectly matched layer (CPML) boundary condition for absorbing artificial boundaries. The efficiency and capability to handle complex velocity models make it an attractive tool in imaging methods such as acoustic reverse time migration (RTM).

  12. A multigrid method for steady Euler equations on unstructured adaptive grids

    NASA Technical Reports Server (NTRS)

    Riemslagh, Kris; Dick, Erik

    1993-01-01

    A flux-difference splitting type algorithm is formulated for the steady Euler equations on unstructured grids. The polynomial flux-difference splitting technique is used. A vertex-centered finite volume method is employed on a triangular mesh. The multigrid method is in defect-correction form. A relaxation procedure with a first order accurate inner iteration and a second-order correction performed only on the finest grid, is used. A multi-stage Jacobi relaxation method is employed as a smoother. Since the grid is unstructured a Jacobi type is chosen. The multi-staging is necessary to provide sufficient smoothing properties. The domain is discretized using a Delaunay triangular mesh generator. Three grids with more or less uniform distribution of nodes but with different resolution are generated by successive refinement of the coarsest grid. Nodes of coarser grids appear in the finer grids. The multigrid method is started on these grids. As soon as the residual drops below a threshold value, an adaptive refinement is started. The solution on the adaptively refined grid is accelerated by a multigrid procedure. The coarser multigrid grids are generated by successive coarsening through point removement. The adaption cycle is repeated a few times. Results are given for the transonic flow over a NACA-0012 airfoil.

  13. Optimal design of gas adsorption refrigerators for cryogenic cooling

    NASA Technical Reports Server (NTRS)

    Chan, C. K.

    1983-01-01

    The design of gas adsorption refrigerators used for cryogenic cooling in the temperature range of 4K to 120K was examined. The functional relationships among the power requirement for the refrigerator, the system mass, the cycle time and the operating conditions were derived. It was found that the precool temperature, the temperature dependent heat capacities and thermal conductivities, and pressure and temperature variations in the compressors have important impacts on the cooling performance. Optimal designs based on a minimum power criterion were performed for four different gas adsorption refrigerators and a multistage system. It is concluded that the estimates of the power required and the system mass are within manageable limits in various spacecraft environments.

  14. Experimental and Model Studies on Continuous Separation of 2-Phenylpropionic Acid Enantiomers by Enantioselective Liquid-Liquid Extraction in Centrifugal Contactor Separators.

    PubMed

    Feng, Xiaofeng; Tang, Kewen; Zhang, Pangliang; Yin, Shuangfeng

    2016-03-01

    Multistage enantioselective liquid-liquid extraction (ELLE) of 2-phenylpropionic acid (2-PPA) enantiomers using hydroxypropyl-β-cyclodextrin (HP-β-CD) as extractant was studied experimentally in a counter-current cascade of centrifugal contactor separators (CCSs). Performance of the process was evaluated by purity (enantiomeric excess, ee) and yield (Y). A multistage equilibrium model was established on the basis of a single-stage model for chiral extraction of 2-PPA enantiomers and the law of mass conservation. A series of experiments on the extract phase/washing phase ratio (W/O ratio), extractant concentration, the pH value of the aqueous phase, and the number of stages was conducted to verify the multistage equilibrium model. It was found that model predictions were in good agreement with the experimental results. The model was applied to predict and optimize the symmetrical separation of 2-PPA enantiomers. The optimal conditions for symmetric separation involve a W/O ratio of 0.6, pH of 2.5, and HP-β-CD concentration of 0.1 mol L^-1 at a temperature of 278 K, where ee_eq (equal enantiomeric excess) can reach up to 37% and Y_eq (equal yield) to 69%. By simulation and optimization, the minimum number of stages was evaluated at 98 and 106 for ee_eq > 95% and ee_eq > 97%, respectively. © 2016 Wiley Periodicals, Inc.

  16. Finding Optimal Gains In Linear-Quadratic Control Problems

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.; Scheid, Robert E., Jr.

    1990-01-01

    Analytical method based on Volterra factorization leads to new approximations for optimal control gains in finite-time linear-quadratic control problem of system having infinite number of dimensions. Circumvents need to analyze and solve Riccati equations and provides more transparent connection between dynamics of system and optimal gain.

  17. Numerical Methods for 2-Dimensional Modeling

    DTIC Science & Technology

    1980-12-01

    high-order finite element methods, and a multidimensional version of the method of lines, both utilizing an optimized stiff integrator for the time...integration. The finite element methods have proved disappointing, but the method of lines has provided an unexpectedly large gain in speed. Two...diffusion problems with the same number of unknowns (a 21 x 41 grid), solved by second-order finite element methods, took over seven minutes on the Cray-1

  18. Efficient design and inference for multistage randomized trials of individualized treatment policies.

    PubMed

    Dawson, Ree; Lavori, Philip W

    2012-01-01

    Clinical demand for individualized "adaptive" treatment policies in diverse fields has spawned development of clinical trial methodology for their experimental evaluation via multistage designs, building upon methods intended for the analysis of naturalistically observed strategies. Because often there is no need to parametrically smooth multistage trial data (in contrast to observational data for adaptive strategies), it is possible to establish direct connections among different methodological approaches. We show by algebraic proof that the maximum likelihood (ML) and optimal semiparametric (SP) estimators of the population mean of the outcome of a treatment policy and its standard error are equal under certain experimental conditions. This result is used to develop a unified and efficient approach to design and inference for multistage trials of policies that adapt treatment according to discrete responses. We derive a sample size formula expressed in terms of a parametric version of the optimal SP population variance. Nonparametric (sample-based) ML estimation performed well in simulation studies, in terms of achieved power, for scenarios most likely to occur in real studies, even though sample sizes were based on the parametric formula. ML outperformed the SP estimator; differences in achieved power predominately reflected differences in their estimates of the population mean (rather than estimated standard errors). Neither methodology could mitigate the potential for overestimated sample sizes when strong nonlinearity was purposely simulated for certain discrete outcomes; however, such departures from linearity may not be an issue for many clinical contexts that make evaluation of competitive treatment policies meaningful.

  19. Finite elements and finite differences for transonic flow calculations

    NASA Technical Reports Server (NTRS)

    Hafez, M. M.; Murman, E. M.; Wellford, L. C.

    1978-01-01

    The paper reviews the chief finite difference and finite element techniques used for numerical solution of nonlinear mixed elliptic-hyperbolic equations governing transonic flow. The forms of the governing equations for unsteady two-dimensional transonic flow considered are the Euler equation, the full potential equation in both conservative and nonconservative form, the transonic small-disturbance equation in both conservative and nonconservative form, and the hodograph equations for the small-disturbance case and the full-potential case. Finite difference methods considered include time-dependent methods, relaxation methods, semidirect methods, and hybrid methods. Finite element methods include finite element Lax-Wendroff schemes, implicit Galerkin method, mixed variational principles, dual iterative procedures, optimal control methods and least squares.

  20. Enhancements on the Convex Programming Based Powered Descent Guidance Algorithm for Mars Landing

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Blackmore, Lars; Scharf, Daniel P.; Wolf, Aron

    2008-01-01

    In this paper, we present enhancements on the powered descent guidance algorithm developed for Mars pinpoint landing. The guidance algorithm solves the powered descent minimum fuel trajectory optimization problem via a direct numerical method. Our main contribution is to formulate the trajectory optimization problem, which has nonconvex control constraints, as a finite dimensional convex optimization problem, specifically as a finite dimensional second order cone programming (SOCP) problem. SOCP is a subclass of convex programming, and there are efficient SOCP solvers with deterministic convergence properties. Hence, the resulting guidance algorithm can potentially be implemented onboard a spacecraft for real-time applications. Particularly, this paper discusses the algorithmic improvements obtained by: (i) Using an efficient approach to choose the optimal time-of-flight; (ii) Using a computationally inexpensive way to detect the feasibility/ infeasibility of the problem due to the thrust-to-weight constraint; (iii) Incorporating the rotation rate of the planet into the problem formulation; (iv) Developing additional constraints on the position and velocity to guarantee no-subsurface flight between the time samples of the temporal discretization; (v) Developing a fuel-limited targeting algorithm; (vi) Initial result on developing an onboard table lookup method to obtain almost fuel optimal solutions in real-time.
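
    A stripped-down version of the convex formulation can be written in a few lines with a generic conic-programming front end such as cvxpy (used here only as one possible tool): constant mass, no minimum-thrust or pointing constraints, and a slack variable that turns the thrust-magnitude bound into a second-order cone. All numbers below are illustrative assumptions.

```python
# Stripped-down convex powered-descent sketch (constant mass, no minimum
# thrust or pointing constraints) showing the second-order-cone form of the
# thrust bound; cvxpy is used here only as one possible SOCP front end.
import numpy as np
import cvxpy as cp

N, dt, g = 50, 1.0, np.array([0.0, 0.0, -3.71])     # Mars gravity, m/s^2
m, T_max = 1900.0, 13000.0                          # kg, N (illustrative)
r0 = np.array([1500.0, 500.0, 1000.0])              # initial position, m
v0 = np.array([-20.0, 5.0, -40.0])                  # initial velocity, m/s

r = cp.Variable((3, N + 1))
v = cp.Variable((3, N + 1))
u = cp.Variable((3, N))                             # thrust acceleration
Gamma = cp.Variable(N)                              # slack on ||u|| (relaxation)

cons = [r[:, 0] == r0, v[:, 0] == v0,
        r[:, N] == 0, v[:, N] == 0, r[2, :] >= 0]
for k in range(N):
    cons += [v[:, k + 1] == v[:, k] + dt * (u[:, k] + g),
             r[:, k + 1] == r[:, k] + dt * v[:, k] + 0.5 * dt**2 * (u[:, k] + g),
             cp.norm(u[:, k]) <= Gamma[k],          # second-order cone constraint
             Gamma[k] <= T_max / m]

prob = cp.Problem(cp.Minimize(dt * cp.sum(Gamma)), cons)   # fuel-proxy objective
prob.solve()
print("status:", prob.status, " fuel-proxy cost:", prob.value)
```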

  1. The Linear Quadratic Gaussian Multistage Game with Nonclassical Information Pattern Using a Direct Solution Method

    NASA Astrophysics Data System (ADS)

    Clemens, Joshua William

    Game theory has application across multiple fields, spanning from economic strategy to optimal control of an aircraft and missile on an intercept trajectory. The idea of game theory is fascinating in that we can actually mathematically model real-world scenarios and determine optimal decision making. It may not always be easy to mathematically model certain real-world scenarios, nonetheless, game theory gives us an appreciation for the complexity involved in decision making. This complexity is especially apparent when the players involved have access to different information upon which to base their decision making (a nonclassical information pattern). Here we will focus on the class of adversarial two-player games (sometimes referred to as pursuit-evasion games) with nonclassical information pattern. We present a two-sided (simultaneous) optimization solution method for the two-player linear quadratic Gaussian (LQG) multistage game. This direct solution method allows for further interpretation of each player's decision making (strategy) as compared to previously used formal solution methods. In addition to the optimal control strategies, we present a saddle point proof and we derive an expression for the optimal performance index value. We provide some numerical results in order to further interpret the optimal control strategies and to highlight real-world application of this game-theoretic optimal solution.

  2. Analytical Model-Based Design Optimization of a Transverse Flux Machine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasan, Iftekhar; Husain, Tausif; Sozer, Yilmaz

    This paper proposes an analytical machine design tool using magnetic equivalent circuit (MEC)-based particle swarm optimization (PSO) for a double-sided, flux-concentrating transverse flux machine (TFM). The magnetic equivalent circuit method is applied to analytically establish the relationship between the design objective and the input variables of prospective TFM designs. This is computationally less intensive and more time efficient than finite element solvers. A PSO algorithm is then used to design a machine with the highest torque density within the specified power range along with some geometric design constraints. The stator pole length, magnet length, and rotor thickness are the variables that define the optimization search space. Finite element analysis (FEA) was carried out to verify the performance of the MEC-PSO optimized machine. The proposed analytical design tool helps save computation time by at least 50% when compared to commercial FEA-based optimization programs, with results found to be in agreement with less than 5% error.
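
    The optimization loop itself is generic; the sketch below runs a plain particle swarm over the three cited design variables, with torque_density() a made-up placeholder standing in for the MEC model (which is not reproduced here) and all bounds chosen for illustration.

```python
# Generic particle swarm optimization over the three cited design variables;
# torque_density() is a made-up placeholder for the paper's magnetic
# equivalent circuit (MEC) model, which is not reproduced here.
import numpy as np

rng = np.random.default_rng(3)
lo = np.array([10.0, 2.0, 5.0])        # stator pole length, magnet length, rotor
hi = np.array([40.0, 10.0, 20.0])      # thickness bounds in mm (illustrative)

def torque_density(x):                 # placeholder objective (to be maximized)
    return -np.sum((x - np.array([28.0, 6.5, 12.0]))**2)

n, iters = 30, 200
pos = rng.uniform(lo, hi, size=(n, 3))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([torque_density(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 3)), rng.random((n, 3))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([torque_density(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("PSO design estimate:", np.round(gbest, 2))
```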

  3. Three essays on multi-level optimization models and applications

    NASA Astrophysics Data System (ADS)

    Rahdar, Mohammad

    The general form of a multi-level mathematical programming problem is a set of nested optimization problems, in which each level controls a series of decision variables independently. However, the value of decision variables may also impact the objective function of other levels. A two-level model is called a bilevel model and can be considered as a Stackelberg game with a leader and a follower. The leader anticipates the response of the follower and optimizes its objective function, and then the follower reacts to the leader’s action. The multi-level decision-making model has many real-world applications such as government decisions, energy policies, market economy, network design, etc. However, there is a lack of capable algorithms to solve medium- and large-scale problems of these types. The dissertation is devoted to both theoretical research and applications of multi-level mathematical programming models; it consists of three parts, each in paper format. The first part studies the renewable energy portfolio under two major renewable energy policies. The potential competition for biomass for the growth of the renewable energy portfolio in the United States and other interactions between the two policies over the next twenty years are investigated. This problem mainly has two levels of decision makers: the government/policy makers and biofuel producers/electricity generators/farmers. We focus on the lower-level problem to predict the amount of capacity expansions, fuel production, and power generation. In the second part, we address uncertainty over demand and lead time in a multi-stage mathematical programming problem. We propose a two-stage tri-level optimization model based on a rolling horizon approach to reduce the dimensionality of the multi-stage problem. In the third part of the dissertation, we introduce a new branch and bound algorithm to solve bilevel linear programming problems. The total time is reduced by solving a smaller relaxation problem in each node and decreasing the number of iterations. Computational experiments show that the proposed algorithm is faster than the existing ones.

  4. Finite element based electric motor design optimization

    NASA Technical Reports Server (NTRS)

    Campbell, C. Warren

    1993-01-01

    The purpose of this effort was to develop a finite element code for the analysis and design of permanent magnet electric motors. These motors would drive electromechanical actuators in advanced rocket engines. The actuators would control fuel valves and thrust vector control systems. Refurbishing the hydraulic systems of the Space Shuttle after each flight is costly and time consuming. Electromechanical actuators could replace hydraulics, improve system reliability, and reduce down time.

  5. Computer Program for Analysis, Design and Optimization of Propulsion, Dynamics, and Kinematics of Multistage Rockets

    NASA Astrophysics Data System (ADS)

    Lali, Mehdi

    2009-03-01

    A comprehensive computer program is designed in MATLAB to analyze, design and optimize the propulsion, dynamics, thermodynamics, and kinematics of any serial multi-staging rocket for a set of given data. The program is quite user-friendly. It comprises two main sections: "analysis and design" and "optimization." Each section has a GUI (Graphical User Interface) in which the rocket's data are entered by the user and by which the program is run. The first section analyzes the performance of the rocket that is previously devised by the user. Numerous plots and subplots are provided to display the performance of the rocket. The second section of the program finds the "optimum trajectory" via billions of iterations and computations which are done through sophisticated algorithms using numerical methods and incremental integrations. Innovative techniques are applied to calculate the optimal parameters for the engine and designing the "optimal pitch program." This computer program is stand-alone in such a way that it calculates almost every design parameter in regards to rocket propulsion and dynamics. It is meant to be used for actual launch operations as well as educational and research purposes.

  6. Tetrahedron Formation Control

    NASA Technical Reports Server (NTRS)

    Guzman, Jose J.

    2003-01-01

    Spacecraft flying in tetrahedron formations are excellent instrument platforms for electromagnetic and plasma studies. A minimum of four spacecraft - to establish a volume - is required to study some of the key regions of a planetary magnetic field. The usefulness of the measurements recorded is strongly affected by the tetrahedron orbital evolution. This paper considers the preliminary development of a general optimization procedure for tetrahedron formation control. The maneuvers are assumed to be impulsive and a multi-stage optimization method is employed. The stages include targeting to a fixed tetrahedron orientation, rotating and translating the tetrahedron and/or varying the initial and final times. The number of impulsive maneuvers can also be varied. As the impulse locations and times change, new arcs are computed using a differential corrections scheme that varies the impulse magnitudes and directions. The result is a continuous trajectory with velocity discontinuities. The velocity discontinuities are then used to formulate the cost function. Direct optimization techniques are employed. The procedure is applied to the Magnetospheric Multiscale Mission (MMS) to compute preliminary formation control fuel requirements.

  7. Data-Driven Zero-Sum Neuro-Optimal Control for a Class of Continuous-Time Unknown Nonlinear Systems With Disturbance Using ADP.

    PubMed

    Wei, Qinglai; Song, Ruizhuo; Yan, Pengfei

    2016-02-01

    This paper is concerned with a new data-driven zero-sum neuro-optimal control problem for continuous-time unknown nonlinear systems with disturbance. According to the input-output data of the nonlinear system, an effective recurrent neural network is introduced to reconstruct the dynamics of the nonlinear system. Considering the system disturbance as a control input, a two-player zero-sum optimal control problem is established. Adaptive dynamic programming (ADP) is developed to obtain the optimal control under the worst case of the disturbance. Three single-layer neural networks, including one critic and two action networks, are employed to approximate the performance index function, the optimal control law, and the disturbance, respectively, for facilitating the implementation of the ADP method. Convergence properties of the ADP method are developed to show that the system state will converge to a finite neighborhood of the equilibrium. The weight matrices of the critic and the two action networks are also convergent to finite neighborhoods of their optimal ones. Finally, the simulation results will show the effectiveness of the developed data-driven ADP methods.

  8. Numerical Analysis of an H^1-Galerkin Mixed Finite Element Method for Time Fractional Telegraph Equation

    PubMed Central

    Wang, Jinfeng; Zhao, Meng; Zhang, Min; Liu, Yang; Li, Hong

    2014-01-01

    We discuss and analyze an H^1-Galerkin mixed finite element (H^1-GMFE) method to look for the numerical solution of the time fractional telegraph equation. We introduce an auxiliary variable to reduce the original equation into lower-order coupled equations and then formulate an H^1-GMFE scheme with two important variables. We discretize the Caputo time fractional derivatives using the finite difference methods and approximate the spatial direction by applying the H^1-GMFE method. Based on the discussion of the theoretical error analysis in the L^2-norm for the scalar unknown and its gradient in the one-dimensional case, we obtain the optimal order of convergence in the space-time direction. Further, we also derive the optimal error results for the scalar unknown in the H^1-norm. Moreover, we derive and analyze the stability of the H^1-GMFE scheme and give the results of a priori error estimates in two- or three-dimensional cases. In order to verify our theoretical analysis, we give some results of numerical calculation by using the Matlab procedure. PMID:25184148

  9. Assignment Of Finite Elements To Parallel Processors

    NASA Technical Reports Server (NTRS)

    Salama, Moktar A.; Flower, Jon W.; Otto, Steve W.

    1990-01-01

    Elements assigned approximately optimally to subdomains. Mapping algorithm based on simulated-annealing concept used to minimize approximate time required to perform finite-element computation on hypercube computer or other network of parallel data processors. Mapping algorithm needed when shape of domain complicated or otherwise not obvious what allocation of elements to subdomains minimizes cost of computation.
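
    The concept can be sketched in a few lines: the code below anneals an assignment of elements of a small random connectivity graph to processors, with a cost that trades load imbalance against cut edges. The cost model and cooling schedule are illustrative assumptions, not the published mapping algorithm.

```python
# Simulated-annealing sketch: assign elements of a small random "mesh" graph
# to processors, trading off load imbalance against cut edges (communication).
# The cost model and cooling schedule are illustrative assumptions only.
import math, random

random.seed(7)
n_elem, n_proc = 60, 4
edges = [(i, j) for i in range(n_elem) for j in range(i + 1, n_elem)
         if random.random() < 0.08]                 # random sparse connectivity

def cost(assign):
    loads = [assign.count(p) for p in range(n_proc)]
    imbalance = max(loads) - min(loads)
    cut = sum(1 for i, j in edges if assign[i] != assign[j])
    return 5.0 * imbalance + cut

assign = [random.randrange(n_proc) for _ in range(n_elem)]
c, T = cost(assign), 10.0
for step in range(20000):
    i, p_new = random.randrange(n_elem), random.randrange(n_proc)
    p_old = assign[i]
    assign[i] = p_new
    c_new = cost(assign)
    if c_new <= c or random.random() < math.exp((c - c_new) / T):
        c = c_new                                   # accept the move
    else:
        assign[i] = p_old                           # reject, restore
    T *= 0.9995                                     # geometric cooling

print("final cost:", c, " loads:", [assign.count(p) for p in range(n_proc)])
```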

  10. Time-domain finite elements in optimal control with application to launch-vehicle guidance. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.

    1991-01-01

    A time-domain finite element method is developed for optimal control problems. The theory derived is general enough to handle a large class of problems including optimal control problems that are continuous in the states and controls, problems with discontinuities in the states and/or system equations, problems with control inequality constraints, problems with state inequality constraints, or problems involving any combination of the above. The theory is developed in such a way that no numerical quadrature is necessary regardless of the degree of nonlinearity in the equations. Also, the same shape functions may be employed for every problem because all strong boundary conditions are transformed into natural or weak boundary conditions. In addition, the resulting nonlinear algebraic equations are very sparse. Use of sparse matrix solvers allows for the rapid and accurate solution of very difficult optimization problems. The formulation is applied to launch-vehicle trajectory optimization problems, and results show that real-time optimal guidance is realizable with this method. Finally, a general problem solving environment is created for solving a large class of optimal control problems. The algorithm uses both FORTRAN and a symbolic computation program to solve problems with a minimum of user interaction. The use of symbolic computation eliminates the need for user-written subroutines which greatly reduces the setup time for solving problems.

  11. Determination of Process Parameters in Multi-Stage Hydro-Mechanical Deep Drawing by FE Simulation

    NASA Astrophysics Data System (ADS)

    Kumar, D. Ravi; Manohar, M.

    2017-09-01

    In this work, analysis has been carried out to simulate the manufacturing of a near-hemispherical bottom part with large depth by hydro-mechanical deep drawing, with an aim to reduce the number of forming steps and to reduce the extent of thinning in the dome region. Inconel 718 has been considered as the material due to its importance in the aerospace industry. It is a Ni-based super alloy and it is one of the most widely used of all super alloys, primarily due to large-scale applications in aircraft engines. Using the Finite Element Method (FEM), numerical simulations have been carried out for multi-stage hydro-mechanical deep drawing by using the same draw ratios and design parameters as in the case of conventional deep drawing in four stages. The results showed that the minimum thickness in the final part can be increased significantly when compared to conventional deep drawing. It has been found that the part could be deep drawn to the desired height (after trimming at the final stage) without any severe wrinkling. Blank holding force (BHF) and peak counter pressure have been found to have a strong influence on thinning in the component. Decreasing the coefficient of friction has marginally increased the minimum thickness in the final component. By increasing the draw ratio and optimizing BHF, counter pressure and die corner radius in the simulations, it has been found that it is possible to draw the final part in three stages. It has been found that thinning can be further reduced by decreasing the initial blank size without any reduction in the final height. This reduced the draw ratio at every stage, and an optimum combination of BHF and counter pressure has been found for the 3-stage process as well.

  12. Finite horizon EOQ model for non-instantaneous deteriorating items with price and advertisement dependent demand and partial backlogging under inflation

    NASA Astrophysics Data System (ADS)

    Palanivel, M.; Uthayakumar, R.

    2015-07-01

    This paper deals with an economic order quantity (EOQ) model for non-instantaneous deteriorating items with price and advertisement dependent demand pattern under the effect of inflation and time value of money over a finite planning horizon. In this model, shortages are allowed and partially backlogged. The backlogging rate is dependent on the waiting time for the next replenishment. This paper aids the retailer in minimising the total inventory cost by finding the optimal interval and the optimal order quantity. An algorithm is designed to find the optimum solution of the proposed model. Numerical examples are given to demonstrate the results. Also, the effect of changes in the different parameters on the optimal total cost is graphically presented and the implications are discussed in detail.
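
    For orientation only, the classical EOQ baseline (constant demand, no deterioration, shortages or inflation) is computed below; the model in the paper is considerably richer and is solved by the algorithm described above rather than by this closed form.

```python
# Classical EOQ baseline (constant demand, no deterioration, shortages or
# inflation), shown only as a reference point for the richer model above.
import math

D = 1200.0      # annual demand (units/year)   -- illustrative numbers
K = 50.0        # fixed ordering cost per order
h = 2.5         # holding cost per unit per year

Q_star = math.sqrt(2.0 * D * K / h)               # optimal order quantity
cycles = D / Q_star
total_cost = K * cycles + h * Q_star / 2.0        # ordering + holding cost

print(f"EOQ = {Q_star:.1f} units, {cycles:.1f} orders/year, cost = {total_cost:.1f}")
```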

  13. Methodology for Variable Fidelity Multistage Optimization under Uncertainty

    DTIC Science & Technology

    2011-03-31

    problem selected for the application of the new optimization methodology is a Single Stage To Orbit ( SSTO ) expendable launch vehicle (ELV). Three...the primary exercise of the variable fidelity optimization portion of the code. SSTO vehicles have been discussed almost exclusively in the context...of reusable launch vehicles (RLV). There is very little discussion in recent literature of SSTO designs which are expendable. In the light of the

  14. Dispersion-relation-preserving finite difference schemes for computational acoustics

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Webb, Jay C.

    1993-01-01

    Time-marching dispersion-relation-preserving (DRP) schemes can be constructed by optimizing the finite difference approximations of the space and time derivatives in wave number and frequency space. A set of radiation and outflow boundary conditions compatible with the DRP schemes is constructed, and a sequence of numerical simulations is conducted to test the effectiveness of the DRP schemes and the radiation and outflow boundary conditions. Close agreement with the exact solutions is obtained.
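
    The optimization step can be sketched numerically for a 7-point first-derivative stencil: keep two Taylor-accuracy constraints and tune the remaining coefficient to minimize the integrated error between the numerical and exact wavenumbers. The optimization range below is an assumption, so the resulting numbers need not match the published DRP coefficients.

```python
# Sketch of the DRP idea for a 7-point central stencil of d/dx: enforce two
# Taylor-accuracy constraints and choose the remaining free coefficient to
# minimize the integrated wavenumber error over an assumed range.
import numpy as np
from scipy.optimize import minimize_scalar

kappa = np.linspace(0.0, 1.1, 400)      # wavenumber range for the optimization (assumed)
dk = kappa[1] - kappa[0]

def coeffs(a3):
    # Antisymmetric stencil (a_{-j} = -a_j). Taylor accuracy requires
    #   2*(a1 + 2*a2 + 3*a3) = 1   and   a1 + 8*a2 + 27*a3 = 0,
    # leaving a3 as the free parameter tuned for dispersion.
    A = np.array([[2.0, 4.0], [1.0, 8.0]])
    b = np.array([1.0 - 6.0 * a3, -27.0 * a3])
    a1, a2 = np.linalg.solve(A, b)
    return a1, a2, a3

def dispersion_error(a3):
    a1, a2, _ = coeffs(a3)
    kbar = 2.0 * (a1 * np.sin(kappa) + a2 * np.sin(2 * kappa) + a3 * np.sin(3 * kappa))
    return np.sum((kbar - kappa) ** 2) * dk          # integrated wavenumber error

best = minimize_scalar(dispersion_error, bounds=(0.0, 0.1), method="bounded")
print("tuned 7-point coefficients (a1, a2, a3):", np.round(coeffs(best.x), 5))
```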

  15. A class of finite-time dual neural networks for solving quadratic programming problems and its k-winners-take-all application.

    PubMed

    Li, Shuai; Li, Yangming; Wang, Zheng

    2013-03-01

    This paper presents a class of recurrent neural networks to solve quadratic programming problems. Different from most existing recurrent neural networks for solving quadratic programming problems, the proposed neural network model converges in finite time and the activation function is not required to be a hard-limiting function for finite convergence time. The stability, finite-time convergence property and the optimality of the proposed neural network for solving the original quadratic programming problem are proven in theory. Extensive simulations are performed to evaluate the performance of the neural network with different parameters. In addition, the proposed neural network is applied to solving the k-winner-take-all (k-WTA) problem. Both theoretical analysis and numerical simulations validate the effectiveness of our method for solving the k-WTA problem. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Optimal nonlinear filtering using the finite-volume method

    NASA Astrophysics Data System (ADS)

    Fox, Colin; Morrison, Malcolm E. K.; Norton, Richard A.; Molteno, Timothy C. A.

    2018-01-01

    Optimal sequential inference, or filtering, for the state of a deterministic dynamical system requires simulation of the Frobenius-Perron operator, which can be formulated as the solution of a continuity equation. For low-dimensional, smooth systems, the finite-volume numerical method provides a solution that conserves probability and gives estimates that converge to the optimal continuous-time values, while a Courant-Friedrichs-Lewy-type condition assures that intermediate discretized solutions remain positive density functions. This method is demonstrated in an example of nonlinear filtering for the state of a simple pendulum, with comparison to results using the unscented Kalman filter, and for a case where rank-deficient observations lead to multimodal probability distributions.
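
    As a minimal illustration of the conservation and positivity properties mentioned above, the sketch below advances a one-dimensional probability density with a first-order upwind finite-volume scheme under a CFL-limited time step; the constant-velocity setting is a deliberate simplification and is not the pendulum example of the paper.

    ```python
    import numpy as np

    def upwind_fv_step(rho, v, dx, dt):
        """One upwind finite-volume update for d(rho)/dt + d(v*rho)/dx = 0
        (constant v > 0, periodic domain). Fluxes are taken from the upwind
        cell, so total probability sum(rho)*dx is conserved and rho stays
        non-negative when the CFL condition v*dt/dx <= 1 holds."""
        flux = v * rho                      # flux leaving each cell to the right
        return rho - (dt / dx) * (flux - np.roll(flux, 1))

    # Hypothetical 1-D demonstration (not the pendulum case from the paper).
    n, v = 200, 1.0
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    dx = x[1] - x[0]
    dt = 0.8 * dx / v                       # respects the CFL condition
    rho = np.exp(-((x - 0.3) / 0.05) ** 2)
    rho /= rho.sum() * dx                   # normalize to a probability density
    for _ in range(100):
        rho = upwind_fv_step(rho, v, dx, dt)
    print("total probability:", rho.sum() * dx, "min value:", rho.min())
    ```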

  17. Closed-form recursive formula for an optimal tracker with terminal constraints

    NASA Technical Reports Server (NTRS)

    Juang, J.-N.; Turner, J. D.; Chun, H. M.

    1984-01-01

    Feedback control laws are derived for a class of optimal finite time tracking problems with terminal constraints. Analytical solutions are obtained for the feedback gain and the closed-loop response trajectory. Such formulations are expressed in recursive forms so that a real-time computer implementation becomes feasible. Two examples are given to illustrate the validity and usefulness of the formulations.

  18. The high-performance liquid chromatography/multistage electrospray mass spectrometric investigation and extraction optimization of beech (Fagus sylvatica L.) bark polyphenols.

    PubMed

    Hofmann, Tamás; Nebehaj, Esztella; Albert, Levente

    2015-05-08

    The aim of the present work was the high-performance liquid chromatographic separation and multistage mass spectrometric characterization of the polyphenolic compounds of beech bark, as well as the extraction optimization of the identified compounds. Beech is a common and widely used material in the wood industry, yet its bark is regarded as a by-product. Using appropriate extraction methods, these compounds could be extracted and utilized in the future. Different extraction methods (stirring, sonication, microwave assisted extraction) using different solvents (water, methanol:water 80:20 v/v, ethanol:water 80:20 v/v) and time/temperature schedules have been compared based on total phenol contents (Folin-Ciocâlteu) and MRM peak areas of the identified compounds to investigate optimum extraction efficiency. Altogether 37 compounds, including (+)-catechin, (-)-epicatechin, quercetin-O-hexoside, taxifolin-O-hexosides (3), taxifolin-O-pentosides (4), B-type (6) and C-type (6) procyanidins, syringic acid- and coumaric acid-di-O-glycosides, coniferyl alcohol- and sinapyl alcohol-glycosides, as well as other unknown compounds with defined [M-H](-) m/z values and MS/MS spectra have been tentatively identified. The choice of the method, solvent system and time/temperature parameters favors the extraction of different types of compounds. Pure water can extract compounds as efficiently as mixtures containing organic solvents under high-pressure and high-temperature conditions. This supports the implementation of green extraction methods in the future. Excessively long extraction times and high temperatures can decrease the concentrations. Future investigations will focus on the evaluation of the antioxidant capacity and utilization possibilities of the prepared extracts. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Optimization of a Multi-Stage ATR System for Small Target Identification

    NASA Technical Reports Server (NTRS)

    Lin, Tsung-Han; Lu, Thomas; Braun, Henry; Edens, Western; Zhang, Yuhan; Chao, Tien- Hsin; Assad, Christopher; Huntsberger, Terrance

    2010-01-01

    An Automated Target Recognition (ATR) system was developed to locate and target small objects in images and videos. The data is preprocessed and sent to a grayscale optical correlator (GOC) filter to identify possible regions-of-interest (ROIs). Next, features are extracted from the ROIs based on Principal Component Analysis (PCA) and sent to a neural network (NN) to be classified. The NN classifier analyzes the features and indicates whether each ROI contains the desired target. The ATR system was found useful in identifying small boats in the open sea. However, due to noisy backgrounds, such as weather conditions, background buildings, or water wakes, some false targets are misclassified. Feedforward backpropagation and radial basis neural networks are optimized for generalization of representative features to reduce the false-alarm rate. The neural networks are compared for their performance in classification accuracy, classifying time, and training time.

  20. LP and NLP decomposition without a master problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuller, D.; Lan, B.

    We describe a new algorithm for decomposition of linear programs and a class of convex nonlinear programs, together with theoretical properties and some test results. Its most striking feature is the absence of a master problem; the subproblems pass primal and dual proposals directly to one another. The algorithm is defined for multi-stage LPs or NLPs, in which the constraints link the current stage's variables to earlier stages' variables. This problem class is general enough to include many problem structures that do not immediately suggest stages, such as block diagonal problems. The basic algorithm is derived for two-stage problems and extended to more than two stages through nested decomposition. The main theoretical result assures convergence, to within any preset tolerance of the optimal value, in a finite number of iterations. This asymptotic convergence result contrasts with the results of limited tests on LPs, in which the optimal solution is apparently found exactly, i.e., to machine accuracy, in a small number of iterations. The tests further suggest that for LPs, the new algorithm is faster than the simplex method applied to the whole problem, as long as the stages are linked loosely; that the speedup over the simplex method improves as the number of stages increases; and that the algorithm is more reliable than nested Dantzig-Wolfe or Benders' methods in its improvement over the simplex method.

  1. A new implementation of the programming system for structural synthesis (PROSSS-2)

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.

    1984-01-01

    This new implementation of the PROgramming System for Structural Synthesis (PROSSS-2) combines a general-purpose finite element computer program for structural analysis, a state-of-the-art optimization program, and several user-supplied, problem-dependent computer programs. The results are flexibility of the optimization procedure and organization, and versatility in the formulation of constraints and design variables. The analysis-optimization process results in a minimized objective function, typically the mass. The analysis and optimization programs are executed repeatedly by looping through the system until the process is stopped by a user-defined termination criterion. However, some of the analysis, such as model definition, need only be performed once, and the results are saved for future use. The user must write some small, simple FORTRAN programs to interface between the analysis and optimization programs. One of these programs, the front processor, converts the design variables output from the optimizer into a format suitable for input into the analyzer. Another, the end processor, retrieves the behavior variables and, optionally, their gradients from the analysis program and evaluates the objective function and constraints, and optionally their gradients. These quantities are output in a format suitable for input into the optimizer. These user-supplied programs are problem-dependent because they depend primarily upon which finite elements are being used in the model. PROSSS-2 differs from the original PROSSS in that the optimizer and the front and end processors have been integrated into the finite element computer program. This was done to reduce the complexity and increase the portability of the system, and to take advantage of the data handling features found in the finite element program.
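
    The front processor described above is essentially a small translation layer. The sketch below shows the idea for a hypothetical case in which the design variables are element thicknesses; the card format and file name are invented for illustration and are not the actual PROSSS-2 interface.

    ```python
    def front_processor(design_vars, element_ids, out_path="element_props.dat"):
        """Minimal front-processor sketch: write optimizer design variables
        (interpreted here as element thicknesses) in a simple fixed-width card
        format for the analysis program. The card layout is hypothetical."""
        with open(out_path, "w") as f:
            for eid, thickness in zip(element_ids, design_vars):
                f.write(f"PROP {eid:8d} {thickness:12.6e}\n")
        return out_path

    # Example: three shell elements with thicknesses proposed by the optimizer.
    front_processor([0.0125, 0.0098, 0.0151], element_ids=[101, 102, 103])
    ```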

  2. An Optimization Code for Nonlinear Transient Problems of a Large Scale Multidisciplinary Mathematical Model

    NASA Astrophysics Data System (ADS)

    Takasaki, Koichi

    This paper presents a program for the multidisciplinary optimization and identification problem of the nonlinear model of large aerospace vehicle structures. The program constructs the global matrix of the dynamic system in the time direction by the p-version finite element method (pFEM), and the basic matrix for each pFEM node in the time direction is described by a sparse matrix, as in a static finite element problem. The algorithm used by the program does not require the Hessian matrix of the objective function and so has low memory requirements. It also has a relatively low computational cost and is well suited to parallel computation. The program was integrated as a solver module of the multidisciplinary analysis system CUMuLOUS (Computational Utility for Multidisciplinary Large scale Optimization of Undense System), which is under development by the Aerospace Research and Development Directorate (ARD) of the Japan Aerospace Exploration Agency (JAXA).

  3. Three-Dimensional Aerodynamic Instabilities In Multi-Stage Axial Compressors

    NASA Technical Reports Server (NTRS)

    Tan, Choon S.; Gong, Yifang; Suder, Kenneth L. (Technical Monitor)

    2001-01-01

    This thesis presents the conceptualization and development of a computational model for describing three-dimensional non-linear disturbances associated with instability and inlet distortion in multistage compressors. Specifically, the model is aimed at simulating the non-linear aspects of short wavelength stall inception, part span stall cells, and compressor response to three-dimensional inlet distortions. The computed results demonstrated the first-of-a-kind capability for simulating short wavelength stall inception in multistage compressors. The adequacy of the model is demonstrated by its application to reproduce the following phenomena: (1) response of a compressor to a square-wave total pressure inlet distortion; (2) behavior of long wavelength small amplitude disturbances in compressors; (3) short wavelength stall inception in a multistage compressor and the occurrence of rotating stall inception on the negatively sloped portion of the compressor characteristic; (4) progressive stalling behavior in the first stage in a mismatched multistage compressor; (5) change of stall inception type (from modal to spike and vice versa) due to IGV stagger angle variation, and "unique rotor tip incidence" at these points where the compressor stalls through short wavelength disturbances. The model has been applied to determine the parametric dependence of instability inception behavior in terms of amplitude and spatial distribution of initial disturbance, and intra-blade-row gaps. It is found that reducing the inter-blade row gaps suppresses the growth of short wavelength disturbances. It is also concluded from these parametric investigations that each local component group (rotor and its two adjacent stators) has its own instability point (i.e. conditions at which disturbances are sustained) for short wavelength disturbances, with the instability point for the compressor set by the most unstable component group. For completeness, the methodology has been extended to describe finite amplitude disturbances in high-speed compressors. Results are presented for the response of a transonic compressor subjected to inlet distortions.

  4. Designing a multistage supply chain in cross-stage reverse logistics environments: application of particle swarm optimization algorithms.

    PubMed

    Chiang, Tzu-An; Che, Z H; Cui, Zhihua

    2014-01-01

    This study designed a cross-stage reverse logistics course for defective products so that damaged products generated in downstream partners can be directly returned to upstream partners throughout the stages of a supply chain for rework and maintenance. To solve this reverse supply chain design problem, an optimal cross-stage reverse logistics mathematical model was developed. In addition, we developed a genetic algorithm (GA) and three particle swarm optimization (PSO) algorithms: the inertia weight method (PSOA_IWM), V(Max) method (PSOA_VMM), and constriction factor method (PSOA_CFM), which we employed to find solutions to support this mathematical model. Finally, a real case and five simulative cases with different scopes were used to compare the execution times, convergence times, and objective function values of the four algorithms used to validate the model proposed in this study. Regarding system execution time, the GA consumed more time than the other three PSOs did. Regarding objective function value, the GA, PSOA_IWM, and PSOA_CFM could obtain a lower convergence value than PSOA_VMM could. Finally, PSOA_IWM demonstrated a faster convergence speed than PSOA_VMM, PSOA_CFM, and the GA did.

  5. Designing a Multistage Supply Chain in Cross-Stage Reverse Logistics Environments: Application of Particle Swarm Optimization Algorithms

    PubMed Central

    Chiang, Tzu-An; Che, Z. H.

    2014-01-01

    This study designed a cross-stage reverse logistics course for defective products so that damaged products generated in downstream partners can be directly returned to upstream partners throughout the stages of a supply chain for rework and maintenance. To solve this reverse supply chain design problem, an optimal cross-stage reverse logistics mathematical model was developed. In addition, we developed a genetic algorithm (GA) and three particle swarm optimization (PSO) algorithms: the inertia weight method (PSOA_IWM), V Max method (PSOA_VMM), and constriction factor method (PSOA_CFM), which we employed to find solutions to support this mathematical model. Finally, a real case and five simulative cases with different scopes were used to compare the execution times, convergence times, and objective function values of the four algorithms used to validate the model proposed in this study. Regarding system execution time, the GA consumed more time than the other three PSOs did. Regarding objective function value, the GA, PSOA_IWM, and PSOA_CFM could obtain a lower convergence value than PSOA_VMM could. Finally, PSOA_IWM demonstrated a faster convergence speed than PSOA_VMM, PSOA_CFM, and the GA did. PMID:24772026

  6. Optimization and Validation of Rotating Current Excitation with GMR Array Sensors for Riveted

    DTIC Science & Technology

    2016-09-16

    Simulation results, using both an optimized coil and a conventional coil, are generated using the finite element method (FEM) model. The signal magnitude for an optimized coil is seen to be...A 3D finite element model (FEM) is used to analyze the performance of the optimized coil and

  7. Purification of High Salinity Brine by Multi-Stage Ion Concentration Polarization Desalination

    PubMed Central

    Kim, Bumjoo; Kwak, Rhokyun; Kwon, Hyukjin J.; Pham, Van Sang; Kim, Minseok; Al-Anzi, Bader; Lim, Geunbae; Han, Jongyoon

    2016-01-01

    There is an increasing need for the desalination of high concentration brine (>TDS 35,000 ppm) efficiently and economically, either for the treatment of produced water from shale gas/oil development, or for minimizing the environmental impact of brine from existing desalination plants. Yet reverse osmosis (RO), currently the most widely used desalination process, is not practical for brine desalination. This paper demonstrates the technical and economic feasibility of ICP (Ion Concentration Polarization) electrical desalination for high-salinity water treatment, by adopting multi-stage operation with better energy efficiency. Optimized multi-staging configurations, dependent on the brine salinity values, can be designed based on experimental and numerical analysis. Such an optimization aims at achieving not just energy efficiency but also (membrane) area efficiency, lowering the true cost of brine treatment. ICP electrical desalination is shown here to treat brine salinity up to 100,000 ppm of Total Dissolved Solids (TDS) with a flexible salt rejection rate of up to 70%, which is promising for various applications treating brine waste. We also demonstrate that ICP desalination has the advantage of removing both salts and diverse suspended solids simultaneously, with less susceptibility to membrane fouling/scaling, which is a significant challenge in membrane processes. PMID:27545955

  8. Purification of High Salinity Brine by Multi-Stage Ion Concentration Polarization Desalination

    NASA Astrophysics Data System (ADS)

    Kim, Bumjoo; Kwak, Rhokyun; Kwon, Hyukjin J.; Pham, Van Sang; Kim, Minseok; Al-Anzi, Bader; Lim, Geunbae; Han, Jongyoon

    2016-08-01

    There is an increasing need for the desalination of high concentration brine (>TDS 35,000 ppm) efficiently and economically, either for the treatment of produced water from shale gas/oil development, or for minimizing the environmental impact of brine from existing desalination plants. Yet reverse osmosis (RO), currently the most widely used desalination process, is not practical for brine desalination. This paper demonstrates the technical and economic feasibility of ICP (Ion Concentration Polarization) electrical desalination for high-salinity water treatment, by adopting multi-stage operation with better energy efficiency. Optimized multi-staging configurations, dependent on the brine salinity values, can be designed based on experimental and numerical analysis. Such an optimization aims at achieving not just energy efficiency but also (membrane) area efficiency, lowering the true cost of brine treatment. ICP electrical desalination is shown here to treat brine salinity up to 100,000 ppm of Total Dissolved Solids (TDS) with a flexible salt rejection rate of up to 70%, which is promising for various applications treating brine waste. We also demonstrate that ICP desalination has the advantage of removing both salts and diverse suspended solids simultaneously, with less susceptibility to membrane fouling/scaling, which is a significant challenge in membrane processes.

  9. Selection of the optimal completion of horizontal wells with multi-stage hydraulic fracturing of the low-permeable formation, field C

    NASA Astrophysics Data System (ADS)

    Bozoev, A. M.; Demidova, E. A.

    2016-03-01

    At the moment, many fields in Western Siberia are in the later stages of development. In this regard, multilayer fields are now involved in the development of hard-to-recover reserves through well interventions. However, most of these assets may not be economically profitable without the application of horizontal drilling and multi-stage hydraulic fracturing. Moreover, the location of frac ports relative to each other, the number of stages, and the volume of proppant per stage are the main issues, since the interference effect could lead to a loss of oil production. The optimal arrangement of horizontal wells with multi-stage hydraulic fracturing is defined in this paper. Several analytical approaches have been used to predict the initial oil flow rate and to choose the most appropriate one for reservoir J1 of field C. However, none of the analytical equations could take into account the interference effect or determine the optimum number of fractures. Therefore, simulation modelling was used. Finally, a universal equation is derived for reservoir J1 of field C. This tool could be used to predict, at a qualitative level, the flow rate of a horizontal well with hydraulic fracturing treatment without a simulation model.

  10. Towards Optimal Design of Cancer Nanomedicines: Multi-stage Nanoparticles for the Treatment of Solid Tumors.

    PubMed

    Stylianopoulos, Triantafyllos; Economides, Eva-Athena; Baish, James W; Fukumura, Dai; Jain, Rakesh K

    2015-09-01

    Conventional drug delivery systems for solid tumors are composed of a nano-carrier that releases its therapeutic load. These two-stage nanoparticles utilize the enhanced permeability and retention (EPR) effect to enable preferential delivery to tumor tissue. However, the size-dependency of the EPR effect, the limited penetration of nanoparticles into the tumor, and the rapid binding of the particles or the released cytotoxic agents to cancer cells and stromal components inhibit the uniform distribution of the drug and the efficacy of the treatment. Here, we employ mathematical modeling to study the effect of particle size, drug release rate and binding affinity on the distribution and efficacy of nanoparticles in order to derive optimal design rules. Furthermore, we introduce a new multi-stage delivery system. The system consists of a 20-nm primary nanoparticle, which releases 5-nm secondary particles, which in turn release the chemotherapeutic drug. We found that tuning the drug release kinetics and binding affinities leads to improved delivery of the drug. Our results also indicate that multi-stage nanoparticles are superior to two-stage nano-carriers provided they have a faster drug release rate and for drugs with high binding affinity. Furthermore, our results suggest that smaller nanoparticles achieve better treatment outcomes.

  11. Unified theory for inhomogeneous thermoelectric generators and coolers including multistage devices.

    PubMed

    Gerstenmaier, York Christian; Wachutka, Gerhard

    2012-11-01

    A novel generalized Lagrange multiplier method for functional optimization with inclusion of subsidiary conditions is presented and applied to the optimization of material distributions in thermoelectric converters. Multistage devices are considered within the same formalism by inclusion of a position-dependent electric current in the legs, leading to a modified thermoelectric equation. Previous analytical solutions for maximized efficiencies of generators and coolers, obtained by Sherman [J. Appl. Phys. 31, 1 (1960)], Snyder [Phys. Rev. B 86, 045202 (2012)], and Seifert et al. [Phys. Status Solidi A 207, 760 (2010)] by a method of local optimization of reduced efficiencies, are recovered by independent proof. The outstanding maximization problems for generated electric power and cooling power can be solved swiftly in numerical form by solving a differential equation system obtained within the new formalism. Provided suitable materials are available, inhomogeneous TE converters can achieve increased performance through purely temperature-dependent material properties in the thermoelectric legs, through purely spatial variation of material properties, or through a combination of both. It turns out that the optimization domain is larger for the second kind of device, which can thus outperform the first.

  12. Finite Element Flow Code Optimization on the Cray T3D,

    DTIC Science & Technology

    1997-04-01

    present time, the system is configured with 512 processing elements and 32.8 Gigabytes of memory. Through a gift of time from MSCI and other arrangements, the AHPCRC has limited access to this system.

  13. A minimization principle for the description of modes associated with finite-time instabilities

    PubMed Central

    Babaee, H.

    2016-01-01

    We introduce a minimization formulation for the determination of a finite-dimensional, time-dependent, orthonormal basis that captures directions of the phase space associated with transient instabilities. While these instabilities have finite lifetime, they can play a crucial role either by altering the system dynamics through the activation of other instabilities or by creating sudden nonlinear energy transfers that lead to extreme responses. However, their essentially transient character makes their description a particularly challenging task. We develop a minimization framework that focuses on the optimal approximation of the system dynamics in the neighbourhood of the system state. This minimization formulation results in differential equations that evolve a time-dependent basis so that it optimally approximates the most unstable directions. We demonstrate the capability of the method for two families of problems: (i) linear systems, including the advection–diffusion operator in a strongly non-normal regime as well as the Orr–Sommerfeld/Squire operator, and (ii) nonlinear problems, including a low-dimensional system with transient instabilities and the vertical jet in cross-flow. We demonstrate that the time-dependent subspace captures the strongly transient non-normal energy growth (in the short-time regime), while for longer times the modes capture the expected asymptotic behaviour. PMID:27118900

  14. Topology optimization of finite strain viscoplastic systems under transient loads [Dynamic topology optimization based on finite strain visco-plasticity]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ivarsson, Niklas; Wallin, Mathias; Tortorelli, Daniel

    In this paper, a transient finite strain viscoplastic model is implemented in a gradient-based topology optimization framework to design impact mitigating structures. The model's kinematics relies on the multiplicative split of the deformation gradient, and the constitutive response is based on isotropic hardening viscoplasticity. To solve the mechanical balance laws, the implicit Newmark-beta method is used together with a total Lagrangian finite element formulation. The optimization problem is regularized using a partial differential equation filter and solved using the method of moving asymptotes. Sensitivities required to solve the optimization problem are derived using the adjoint method. To demonstrate the capability of the algorithm, several protective systems are designed, in which the absorbed viscoplastic energy is maximized. Finally, the numerical examples demonstrate that transient finite strain viscoplastic effects can successfully be combined with topology optimization.

  15. Topology optimization of finite strain viscoplastic systems under transient loads [Dynamic topology optimization based on finite strain visco-plasticity]

    DOE PAGES

    Ivarsson, Niklas; Wallin, Mathias; Tortorelli, Daniel

    2018-02-08

    In this paper, a transient finite strain viscoplastic model is implemented in a gradient-based topology optimization framework to design impact mitigating structures. The model's kinematics relies on the multiplicative split of the deformation gradient, and the constitutive response is based on isotropic hardening viscoplasticity. To solve the mechanical balance laws, the implicit Newmark-beta method is used together with a total Lagrangian finite element formulation. The optimization problem is regularized using a partial differential equation filter and solved using the method of moving asymptotes. Sensitivities required to solve the optimization problem are derived using the adjoint method. To demonstrate the capability of the algorithm, several protective systems are designed, in which the absorbed viscoplastic energy is maximized. Finally, the numerical examples demonstrate that transient finite strain viscoplastic effects can successfully be combined with topology optimization.

  16. Simulation of multistage turbine flows

    NASA Technical Reports Server (NTRS)

    Adamczyk, John J.; Mulac, Richard A.

    1987-01-01

    A flow model has been developed for analyzing multistage turbomachinery flows. This model, referred to as the average passage flow model, describes the time-averaged flow field within a typical passage of a blade row embedded in a multistage configuration. Computer resource requirements, supporting empirical modeling, formulation, code development, and multitasking and storage are discussed. Illustrations from simulations of the space shuttle main engine (SSME) fuel turbine performed to date are given.

  17. FDTD simulation of EM wave propagation in 3-D media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, T.; Tripp, A.C.

    1996-01-01

    A finite-difference, time-domain solution to Maxwell's equations has been developed for simulating electromagnetic wave propagation in 3-D media. The algorithm allows arbitrary electrical conductivity and permittivity variations within a model. The staggered grid technique of Yee is used to sample the fields. A new optimized second-order difference scheme is designed to approximate the spatial derivatives. Like the conventional fourth-order difference scheme, the optimized second-order scheme needs four discrete values to calculate a single derivative. However, the optimized scheme is accurate over a wider wavenumber range. Compared to the fourth-order scheme, the optimized scheme imposes stricter limitations on the time step sizes but allows coarser grids. The net effect is that the optimized scheme is more efficient in terms of computation time and memory requirement than the fourth-order scheme. The temporal derivatives are approximated by second-order central differences throughout. The Liao transmitting boundary conditions are used to truncate an open problem. A reflection coefficient analysis shows that this transmitting boundary condition works very well. However, it is subject to instability. A method that can be easily implemented is proposed to stabilize the boundary condition. The finite-difference solution is compared to closed-form solutions for conducting and nonconducting whole spaces and to an integral-equation solution for a 3-D body in a homogeneous half-space. In all cases, the finite-difference solutions are in good agreement with the other solutions. Finally, the use of the algorithm is demonstrated with a 3-D model. Numerical results show that both the magnetic field response and electric field response can be useful for shallow-depth and small-scale investigations.
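
    For readers unfamiliar with the Yee staggering mentioned above, the sketch below is a minimal one-dimensional FDTD loop in vacuum with a Courant-limited time step. It uses the standard second-order stencils rather than the paper's optimized four-point scheme, and simple zero-field ends stand in for the Liao transmitting boundaries; the grid size, source and constants are placeholders.

    ```python
    import numpy as np

    def fdtd_1d(steps=500, n=400, dx=1e-3):
        """Minimal 1-D FDTD sketch on a Yee-style staggered grid in vacuum.
        Standard second-order central differences are used (the optimized
        four-point scheme discussed in the paper would replace these stencils),
        and zero-field (PEC) ends stand in for the Liao boundaries."""
        c0, eps0, mu0 = 299792458.0, 8.854e-12, 4e-7 * np.pi
        dt = 0.99 * dx / c0                      # Courant-limited time step
        ez = np.zeros(n)                         # E sampled at integer nodes
        hy = np.zeros(n - 1)                     # H staggered at half nodes
        for t in range(steps):
            hy += (dt / (mu0 * dx)) * (ez[1:] - ez[:-1])
            ez[1:-1] += (dt / (eps0 * dx)) * (hy[1:] - hy[:-1])
            ez[n // 4] += np.exp(-((t - 60) / 20.0) ** 2)   # soft Gaussian source
        return ez

    print("peak |Ez| after propagation:", np.abs(fdtd_1d()).max())
    ```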

  18. Verification and rectification of the physical analogy of simulated annealing for the solution of the traveling salesman problem.

    PubMed

    Hasegawa, M

    2011-03-01

    The aim of the present study is to elucidate how simulated annealing (SA) works in its finite-time implementation, starting from a verification of its conventional optimization scenario based on equilibrium statistical mechanics. Two experiments and one supplementary experiment, whose designs are inspired by concepts and methods developed for studies on liquids and glasses, are performed on two types of random traveling salesman problems. In the first experiment, a newly parameterized temperature schedule is introduced to simulate a quasistatic process along the scenario, and a parametric study is conducted to investigate the optimization characteristics of this adaptive cooling. In the second experiment, the search trajectory of the Metropolis algorithm (constant-temperature SA) is analyzed in the landscape paradigm in the hope of drawing a precise physical analogy by comparison with the corresponding dynamics of glass-forming molecular systems. These two experiments indicate that the effectiveness of finite-time SA comes not from equilibrium sampling at low temperature but from downward interbasin dynamics occurring before equilibrium. These dynamics work most effectively at an intermediate temperature varying with the total search time, and thus this effective temperature is identified using the Deborah number. To test directly the role of these relaxation dynamics in the process of cooling, a supplementary experiment is performed using another parameterized temperature schedule with a piecewise variable cooling rate, and the effect of this biased cooling is examined systematically. The results show that the optimization performance is not only dependent on but also sensitive to cooling in the vicinity of the above effective temperature and that this feature is interpreted as a consequence of the presence or absence of the workable interbasin dynamics. It is confirmed for the present instances that the effectiveness of finite-time SA derives from the glassy relaxation dynamics occurring in the "landscape-influenced" temperature regime and that its naive optimization scenario should be rectified by considering the analogy with vitrification phenomena. A comprehensive guideline for the design of finite-time SA and SA-related algorithms is discussed on the basis of this rectified analogy.
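
    For orientation, the sketch below is a bare-bones finite-time SA run on a random traveling salesman instance, using a plain geometric cooling schedule and 2-opt style segment reversals; it does not reproduce the parameterized quasistatic or piecewise-variable schedules studied in the paper, and the instance size and schedule endpoints are arbitrary.

    ```python
    import math
    import random

    def tour_length(order, pts):
        """Total closed-tour length for the city indices in `order`."""
        return sum(math.dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
                   for i in range(len(order)))

    def simulated_annealing_tsp(pts, steps=20000, t_start=1.0, t_end=1e-3, seed=0):
        """Finite-time SA with geometric cooling; the elementary move reverses
        a tour segment and is accepted by the Metropolis rule."""
        rng = random.Random(seed)
        order = list(range(len(pts)))
        cur_len = tour_length(order, pts)
        best, best_len = order[:], cur_len
        for k in range(steps):
            temp = t_start * (t_end / t_start) ** (k / (steps - 1))  # geometric cooling
            i, j = sorted(rng.sample(range(len(pts)), 2))
            cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]  # reverse a segment
            cand_len = tour_length(cand, pts)
            if cand_len < cur_len or rng.random() < math.exp((cur_len - cand_len) / temp):
                order, cur_len = cand, cand_len
                if cur_len < best_len:
                    best, best_len = order[:], cur_len
        return best, best_len

    random.seed(3)
    cities = [(random.random(), random.random()) for _ in range(40)]
    _, length = simulated_annealing_tsp(cities)
    print("best tour length found:", round(length, 3))
    ```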

  19. Stabilized Finite Elements in FUN3D

    NASA Technical Reports Server (NTRS)

    Anderson, W. Kyle; Newman, James C.; Karman, Steve L.

    2017-01-01

    A Streamline-Upwind Petrov-Galerkin (SUPG) stabilized finite-element discretization has been implemented as a library in the FUN3D unstructured-grid flow solver. Motivation for the selection of this methodology is given, details of the implementation are provided, and the discretization for the interior scheme is verified for linear and quadratic elements by using the method of manufactured solutions. A methodology is also described for capturing shocks, and simulation results are compared to the finite-volume formulation that is currently the primary method employed for routine engineering applications. The finite-element methodology is demonstrated to be more accurate than the finite-volume technology, particularly on tetrahedral meshes where the solutions obtained using the finite-volume scheme can suffer from adverse effects caused by bias in the grid. Although no effort has been made to date to optimize computational efficiency, the finite-element scheme is competitive with the finite-volume scheme in terms of computer time to reach convergence.

  20. Global linear-irreversible principle for optimization in finite-time thermodynamics

    NASA Astrophysics Data System (ADS)

    Johal, Ramandeep S.

    2018-03-01

    There is intense effort into understanding the universal properties of finite-time models of thermal machines at optimal performance, such as efficiency at maximum power, coefficient of performance at maximum cooling power, and other such criteria. In this letter, a global principle consistent with linear irreversible thermodynamics is proposed for the whole cycle, without considering details of irreversibilities in the individual steps of the cycle. This helps to express the total duration of the cycle as $\tau \propto \bar{Q}^{2}/\Delta_{\text{tot}}S$, where $\bar{Q}$ models the effective heat transferred through the machine during the cycle, and $\Delta_{\text{tot}}S$ is the total entropy generated. By taking $\bar{Q}$ in the form of simple algebraic means (such as arithmetic and geometric means) over the heats exchanged by the reservoirs, the present approach is able to predict various standard expressions for figures of merit at optimal performance, as well as the bounds respected by them. It simplifies the optimization procedure to a one-parameter optimization, and provides a fresh perspective on the issue of universality at optimal performance for small differences in reservoir temperatures. As an illustration, we compare the performance of a partially optimized four-step endoreversible cycle with the present approach.
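
    As a rough numerical illustration of the one-parameter optimization described above, the sketch below takes $\bar{Q}$ as the arithmetic mean of the heats exchanged with the two reservoirs, writes the cycle time as $\tau = K\bar{Q}^{2}/\Delta_{\text{tot}}S$, and maximizes the power over the efficiency. The proportionality constant, temperatures and choice of mean are placeholders, and the construction is only loosely modeled on the letter's framework.

    ```python
    from scipy.optimize import minimize_scalar

    def power(eta, t_hot, t_cold, k=1.0):
        """Cycle-averaged power under the ansatz tau = K*Qbar^2/DeltaS_tot, with
        Qbar the arithmetic mean of the exchanged heats and the heat drawn from
        the hot bath normalized to 1 (so the work output equals eta)."""
        q_hot, q_cold = 1.0, 1.0 - eta
        entropy_gen = q_cold / t_cold - q_hot / t_hot   # total entropy generated per cycle
        q_bar = 0.5 * (q_hot + q_cold)
        tau = k * q_bar ** 2 / entropy_gen               # assumed cycle duration
        return eta / tau

    t_hot, t_cold = 400.0, 300.0                         # placeholder reservoir temperatures
    eta_carnot = 1.0 - t_cold / t_hot
    res = minimize_scalar(lambda e: -power(e, t_hot, t_cold),
                          bounds=(1e-6, eta_carnot - 1e-6), method="bounded")
    print(f"efficiency at maximum power ~ {res.x:.4f} (Carnot limit {eta_carnot:.4f})")
    ```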

  1. Fuel Optimal, Finite Thrust Guidance Methods to Circumnavigate with Lighting Constraints

    NASA Astrophysics Data System (ADS)

    Prince, E. R.; Carr, R. W.; Cobb, R. G.

    This paper details improvements made to the authors' most recent work on finding fuel-optimal, finite-thrust guidance to inject an inspector satellite into a prescribed natural motion circumnavigation (NMC) orbit about a resident space object (RSO) in geosynchronous orbit (GEO). Better initial-guess methodologies are developed for the low-fidelity-model nonlinear programming problem (NLP) solver, including Clohessy-Wiltshire (CW) targeting, a modified particle swarm optimization (PSO), and MATLAB's genetic algorithm (GA). These solutions may then be fed as an initial guess into a different NLP solver, IPOPT. Celestial lighting constraints are taken into account in addition to the sunlight constraint, ensuring that the resulting NMC also adheres to Moon and Earth lighting constraints. The guidance is initially calculated for a fixed final time, and solutions are then also calculated for fixed final times before and after the original one, allowing mission planners to choose the lowest-cost solution in the resulting range which satisfies all constraints. The developed algorithms provide computationally fast and highly reliable methods for determining fuel-optimal guidance for NMC injections while adhering to multiple lighting constraints.

  2. BBPH: Using progressive hedging within branch and bound to solve multi-stage stochastic mixed integer programs

    DOE PAGES

    Barnett, Jason; Watson, Jean -Paul; Woodruff, David L.

    2016-11-27

    Progressive hedging (PH), though an effective heuristic for solving stochastic mixed integer programs (SMIPs), is not guaranteed to converge in this case. Here, we describe BBPH, a branch and bound algorithm that uses PH at each node in the search tree such that, given sufficient time, it will always converge to a globally optimal solution. In addition to providing a theoretically convergent "wrapper" for PH applied to SMIPs, computational results demonstrate that for some difficult problem instances branch and bound can find improved solutions after exploring only a few nodes.

  3. An MILP-based cross-layer optimization for a multi-reader arbitration in the UHF RFID system.

    PubMed

    Choi, Jinchul; Lee, Chaewoo

    2011-01-01

    In RFID systems, the performance of each reader, such as interrogation range and tag recognition rate, may suffer from interference from other readers. Since reader interference can be mitigated by output signal power control and by spectral and/or temporal separation among readers, the system performance depends on how the various reader arbitration metrics, such as time, frequency, and output power, are adapted to the system environment. However, the complexity and difficulty of the optimization problem increase with the variety of the arbitration metrics. Thus, most proposals in previous studies have primarily aimed to prevent reader collisions by considering only one or two arbitration metrics. In this paper, we propose a novel cross-layer optimization design based on the concept of combining time division, frequency division, and power control not only to solve the reader interference problem, but also to achieve multiple objectives such as minimum interrogation delay, maximum reader utilization, and energy efficiency. Based on the priority of the multiple objectives, our cross-layer design optimizes the system sequentially by means of mixed-integer linear programming. In spite of the multi-stage optimization, the optimization design is formulated as a concise single mathematical form by properly assigning a weight to each objective. Numerical results demonstrate the effectiveness of the proposed optimization design.

  4. An MILP-Based Cross-Layer Optimization for a Multi-Reader Arbitration in the UHF RFID System

    PubMed Central

    Choi, Jinchul; Lee, Chaewoo

    2011-01-01

    In RFID systems, the performance of each reader, such as interrogation range and tag recognition rate, may suffer from interference from other readers. Since reader interference can be mitigated by output signal power control and by spectral and/or temporal separation among readers, the system performance depends on how the various reader arbitration metrics, such as time, frequency, and output power, are adapted to the system environment. However, the complexity and difficulty of the optimization problem increase with the variety of the arbitration metrics. Thus, most proposals in previous studies have primarily aimed to prevent reader collisions by considering only one or two arbitration metrics. In this paper, we propose a novel cross-layer optimization design based on the concept of combining time division, frequency division, and power control not only to solve the reader interference problem, but also to achieve multiple objectives such as minimum interrogation delay, maximum reader utilization, and energy efficiency. Based on the priority of the multiple objectives, our cross-layer design optimizes the system sequentially by means of mixed-integer linear programming. In spite of the multi-stage optimization, the optimization design is formulated as a concise single mathematical form by properly assigning a weight to each objective. Numerical results demonstrate the effectiveness of the proposed optimization design. PMID:22163743

  5. Particle swarm optimization of ascent trajectories of multistage launch vehicles

    NASA Astrophysics Data System (ADS)

    Pontani, Mauro

    2014-02-01

    Multistage launch vehicles are commonly employed to place spacecraft and satellites in their operational orbits. If the rocket characteristics are specified, the optimization of its ascending trajectory consists of determining the optimal control law that leads to maximizing the final mass at orbit injection. The numerical solution of a similar problem is not trivial and has been pursued with different methods, for decades. This paper is concerned with an original approach based on the joint use of swarming theory and the necessary conditions for optimality. The particle swarm optimization technique represents a heuristic population-based optimization method inspired by the natural motion of bird flocks. Each individual (or particle) that composes the swarm corresponds to a solution of the problem and is associated with a position and a velocity vector. The formula for velocity updating is the core of the method and is composed of three terms with stochastic weights. As a result, the population migrates toward different regions of the search space taking advantage of the mechanism of information sharing that affects the overall swarm dynamics. At the end of the process the best particle is selected and corresponds to the optimal solution to the problem of interest. In this work the three-dimensional trajectory of the multistage rocket is assumed to be composed of four arcs: (i) first stage propulsion, (ii) second stage propulsion, (iii) coast arc (after release of the second stage), and (iv) third stage propulsion. The Euler-Lagrange equations and the Pontryagin minimum principle, in conjunction with the Weierstrass-Erdmann corner conditions, are employed to express the thrust angles as functions of the adjoint variables conjugate to the dynamics equations. The use of these analytical conditions coming from the calculus of variations leads to obtaining the overall rocket dynamics as a function of seven parameters only, namely the unknown values of the initial state and costate components, the coast duration, and the upper stage thrust duration. In addition, a simple approach is introduced and successfully applied with the purpose of satisfying exactly the path constraint related to the maximum dynamical pressure in the atmospheric phase. The basic version of the swarming technique, which is used in this research, is extremely simple and easy to program. Nevertheless, the algorithm proves to be capable of yielding the optimal rocket trajectory with a very satisfactory numerical accuracy.
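
    The velocity-update rule described above is easy to state in code. The sketch below is a basic PSO with inertia, cognitive and social terms weighted by uniform random numbers, applied to a seven-dimensional sphere function as a stand-in for the seven unknowns of the rocket problem; the swarm size, coefficients and bounds are arbitrary choices, not values from the paper.

    ```python
    import numpy as np

    def particle_swarm(objective, bounds, n_particles=30, iters=200,
                       w=0.72, c1=1.5, c2=1.5, seed=1):
        """Basic PSO: each particle keeps a position, a velocity, and its best
        visited point; velocities mix inertia, a pull toward the particle's own
        best (cognitive term) and toward the swarm's best (social term), each
        weighted by uniform random numbers."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
        x = rng.uniform(lo, hi, size=(n_particles, lo.size))
        v = np.zeros_like(x)
        p_best = x.copy()
        p_val = np.array([objective(p) for p in x])
        g_best = p_best[p_val.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
            x = np.clip(x + v, lo, hi)
            val = np.array([objective(p) for p in x])
            improved = val < p_val
            p_best[improved], p_val[improved] = x[improved], val[improved]
            g_best = p_best[p_val.argmin()].copy()
        return g_best, p_val.min()

    # Hypothetical 7-parameter test problem (sphere function), standing in for the
    # seven unknowns (initial costates, coast and burn durations) of the rocket problem.
    best, val = particle_swarm(lambda z: float(np.sum(z ** 2)),
                               bounds=([-5.0] * 7, [5.0] * 7))
    print("best value:", val)
    ```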

  6. Closed-form recursive formula for an optimal tracker with terminal constraints

    NASA Technical Reports Server (NTRS)

    Juang, J. N.; Turner, J. D.; Chun, H. M.

    1986-01-01

    Feedback control laws are derived for a class of optimal finite time tracking problems with terminal constraints. Analytical solutions are obtained for the feedback gain and the closed-loop response trajectory. Such formulations are expressed in recursive forms so that a real-time computer implementation becomes feasible. An example involving the feedback slewing of a flexible spacecraft is given to illustrate the validity and usefulness of the formulations.

  7. Acoustic reverse-time migration using GPU card and POSIX thread based on the adaptive optimal finite-difference scheme and the hybrid absorbing boundary condition

    NASA Astrophysics Data System (ADS)

    Cai, Xiaohui; Liu, Yang; Ren, Zhiming

    2018-06-01

    Reverse-time migration (RTM) is a powerful tool for imaging geologically complex structures such as steep dips and subsalt, but its implementation is computationally expensive. Recently, as a low-cost solution, the graphics processing unit (GPU) was introduced to improve the efficiency of RTM. In this paper, we develop three strategies to improve the implementation of RTM on a GPU card. First, given the high accuracy and efficiency of the adaptive optimal finite-difference (FD) method based on least squares (LS) on the central processing unit (CPU), we study the optimal LS-based FD method on the GPU. Second, we extend the CPU-based hybrid absorbing boundary condition (ABC) to a GPU-based one by addressing two issues that arise when the former is introduced to the GPU card: high time consumption and chaotic threads. Third, for large-scale data, a combinatorial strategy for optimal checkpointing and efficient boundary storage is introduced to trade off memory against recomputation. To save the time of communication between host and disk, a portable operating system interface (POSIX) thread is utilized to employ another CPU core at the checkpoints. Applications of the three strategies on a GPU with the compute unified device architecture (CUDA) programming language in RTM demonstrate their efficiency and validity.

  8. Optimal least-squares finite element method for elliptic problems

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Povinelli, Louis A.

    1991-01-01

    An optimal least-squares finite element method is proposed for two-dimensional and three-dimensional elliptic problems, and its advantages are discussed over the mixed Galerkin method and the usual least-squares finite element method. In the usual least-squares finite element method, the second-order equation $-\nabla \cdot (\nabla u) + u = f$ is recast as the first-order system $-\nabla \cdot \mathbf{p} + u = f$, $\nabla u - \mathbf{p} = 0$. The error analysis and numerical experiments show that, in this usual least-squares finite element method, the rate of convergence for the flux $\mathbf{p}$ is one order lower than optimal. In order to get an optimal least-squares method, the irrotationality condition $\nabla \times \mathbf{p} = 0$ should be included in the first-order system.

  9. Piecewise linear approximation for hereditary control problems

    NASA Technical Reports Server (NTRS)

    Propst, Georg

    1987-01-01

    Finite-dimensional approximations are presented for linear retarded functional differential equations using discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems in which a quadratic cost integral is minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both when the cost integral ranges over a finite time interval and when it ranges over an infinite time interval. The arguments in the latter case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense. This feature is established using a vector-component stability criterion in the state space R(n) x L(2) and the favorable eigenvalue behavior of the piecewise linear approximations.

  10. Multi-Stage System for Automatic Target Recognition

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Lu, Thomas T.; Ye, David; Edens, Weston; Johnson, Oliver

    2010-01-01

    A multi-stage automated target recognition (ATR) system has been designed to perform computer vision tasks with adequate proficiency in mimicking human vision. The system is able to detect, identify, and track targets of interest. Potential regions of interest (ROIs) are first identified by the detection stage using an Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter combined with a wavelet transform. False positives are then eliminated by the verification stage using feature extraction methods in conjunction with neural networks. Feature extraction transforms the ROIs using filtering and binning algorithms to create feature vectors. A feedforward back-propagation neural network (NN) is then trained to classify each feature vector and to remove false positives. A system parameter optimization process has been developed to adapt to various targets and datasets. The objective was to design an efficient computer vision system that can learn to detect multiple targets in large images with unknown backgrounds. Because the target size is small relative to the image size in this problem, there are many regions of the image that could potentially contain the target. A cursory analysis of every region can be computationally efficient, but may yield too many false positives. On the other hand, a detailed analysis of every region can yield better results, but may be computationally inefficient. The multi-stage ATR system was designed to achieve an optimal balance between accuracy and computational efficiency by incorporating both models. The detection stage first identifies potential ROIs where the target may be present by performing a fast Fourier-domain OT-MACH filter-based correlation. Because the threshold for this stage is chosen with the goal of detecting all true positives, a number of false positives are also detected as ROIs. The verification stage then transforms the regions of interest into feature space and eliminates false positives using an artificial neural network classifier. The multi-stage system allows the detection sensitivity and the identification specificity to be tuned individually in each stage, making it easier to achieve optimized ATR operation for a specific goal. The test results show that the system was successful in substantially reducing the false-positive rate when tested on sonar and video image datasets.
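
    The two-stage structure described above (a permissive detector followed by a stricter verifier) can be illustrated with a toy pipeline. In the sketch below, normalized cross-correlation with a small template stands in for the OT-MACH detection stage, and a fixed logistic score on raw ROI pixels stands in for the PCA-plus-neural-network verification stage; the image, template, thresholds and classifier weights are all invented for illustration.

    ```python
    import numpy as np

    def detect_rois(image, template, threshold=0.6):
        """Stage 1: normalized cross-correlation of a small template over the
        image; locations above a loose threshold become candidate ROIs
        (favoring recall, so false positives are expected)."""
        th, tw = template.shape
        t = (template - template.mean()) / (template.std() + 1e-9)
        rois = []
        for i in range(image.shape[0] - th + 1):
            for j in range(image.shape[1] - tw + 1):
                patch = image[i:i + th, j:j + tw]
                p = (patch - patch.mean()) / (patch.std() + 1e-9)
                if np.mean(p * t) > threshold:
                    rois.append((i, j))
        return rois

    def verify(image, rois, weights, bias, shape):
        """Stage 2: a fixed logistic score on the raw ROI pixels (a stand-in for
        the trained PCA-feature neural network) removes false positives."""
        th, tw = shape
        kept = []
        for (i, j) in rois:
            feat = image[i:i + th, j:j + tw].ravel()
            score = 1.0 / (1.0 + np.exp(-(feat @ weights + bias)))
            if score > 0.5:
                kept.append((i, j))
        return kept

    # Toy example: plant one bright blob in noise and look for it.
    rng = np.random.default_rng(0)
    img = 0.1 * rng.standard_normal((64, 64))
    tmpl = np.outer(np.hanning(7), np.hanning(7))
    img[20:27, 30:37] += tmpl
    cands = detect_rois(img, tmpl)
    hits = verify(img, cands, weights=tmpl.ravel(), bias=-1.0, shape=tmpl.shape)
    print(len(cands), "candidates ->", len(hits), "verified detections")
    ```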

  11. Polynomial elimination theory and non-linear stability analysis for the Euler equations

    NASA Technical Reports Server (NTRS)

    Kennon, S. R.; Dulikravich, G. S.; Jespersen, D. C.

    1986-01-01

    Numerical methods are presented that exploit the polynomial properties of discretizations of the Euler equations. It is noted that most finite difference or finite volume discretizations of the steady-state Euler equations produce a polynomial system of equations to be solved. These equations are solved using classical polynomial elimination theory, with some innovative modifications. This paper also presents some preliminary results of a new non-linear stability analysis technique. This technique is applicable to determining the stability of polynomial iterative schemes. Results are presented for applying the elimination technique to a one-dimensional test case. For this test case, the exact solution is computed in three iterations. The non-linear stability analysis is applied to determine the optimal time step for solving Burgers' equation using the MacCormack scheme. The estimated optimal time step is very close to the time step that arises from a linear stability analysis.
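
    The stability-limited time step mentioned above can be made concrete with a small example. The sketch below integrates the inviscid Burgers' equation with the MacCormack predictor-corrector scheme on a periodic grid, choosing each time step from a CFL-style estimate based on max|u|; the safety factor and grid are placeholders, and the run is stopped before a shock forms so no artificial viscosity is needed.

    ```python
    import numpy as np

    def maccormack_burgers(u, dx, cfl=0.9, steps=100):
        """MacCormack predictor-corrector for the inviscid Burgers equation
        u_t + (u^2/2)_x = 0 on a periodic grid, with the time step chosen each
        step from a linear-stability-style limit dt = cfl*dx/max|u|."""
        for _ in range(steps):
            dt = cfl * dx / np.max(np.abs(u))
            f = 0.5 * u ** 2
            u_pred = u - dt / dx * (np.roll(f, -1) - f)          # forward-difference predictor
            f_pred = 0.5 * u_pred ** 2
            u = 0.5 * (u + u_pred - dt / dx * (f_pred - np.roll(f_pred, 1)))  # backward corrector
        return u

    x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
    u0 = 1.5 + np.sin(x)                     # smooth profile that steepens in time
    u_final = maccormack_burgers(u0.copy(), dx=x[1] - x[0])
    print("min/max after steepening:", u_final.min(), u_final.max())
    ```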

  12. Optimizing Nanoscale Quantitative Optical Imaging of Subfield Scattering Targets

    PubMed Central

    Henn, Mark-Alexander; Barnes, Bryan M.; Zhou, Hui; Sohn, Martin; Silver, Richard M.

    2016-01-01

    The full 3-D scattered field above finite sets of features has been shown to contain a continuum of spatial frequency information, and with novel optical microscopy techniques and electromagnetic modeling, deep-subwavelength geometrical parameters can be determined. Similarly, by using simulations, scattering geometries and experimental conditions can be established to tailor scattered fields that yield lower parametric uncertainties while decreasing the number of measurements and the area of such finite sets of features. Such optimized conditions are reported through quantitative optical imaging in 193 nm scatterfield microscopy using feature sets up to four times smaller in area than state-of-the-art critical dimension targets. PMID:27805660

  13. Overall Traveling-Wave-Tube Efficiency Improved By Optimized Multistage Depressed Collector Design

    NASA Technical Reports Server (NTRS)

    Vaden, Karl R.

    2002-01-01

    The microwave traveling wave tube (TWT) is used widely for space communications and high-power airborne transmitting sources. One of the most important features in designing a TWT is overall efficiency. Yet, overall TWT efficiency is strongly dependent on the efficiency of the electron beam collector, particularly for high values of collector efficiency. For these reasons, the NASA Glenn Research Center developed an optimization algorithm based on simulated annealing to quickly design highly efficient multistage depressed collectors (MDCs). Simulated annealing is a strategy for solving highly nonlinear combinatorial optimization problems. Its major advantage over other methods is its ability to avoid becoming trapped in local minima. Simulated annealing is based on an analogy to statistical thermodynamics, specifically the physical process of annealing: heating a material to a temperature that permits many atomic rearrangements and then cooling it carefully and slowly until it freezes into a strong, minimum-energy crystalline structure. This minimum-energy crystal corresponds to the optimal solution of a mathematical optimization problem. The TWT used as a baseline for optimization was the 32-GHz, 10-W helical TWT developed for the Cassini mission to Saturn. The method of collector analysis and design used was a 2-1/2-dimensional computational procedure that employs two types of codes, a large-signal analysis code and an electron trajectory code. The large-signal analysis code produces the spatial, energetic, and temporal distributions of the spent beam entering the MDC. An electron trajectory code uses the resultant data to perform the actual collector analysis. The MDC was optimized for maximum MDC efficiency and minimum final kinetic energy of all collected electrons (to reduce heat transfer). The resulting geometric and electrical configuration of the optimized collector achieved an efficiency of 93.8 percent. The results show an improvement in collector efficiency from 89.7 to 93.8 percent, resulting in an increase of three overall efficiency points. In addition, the time to design a highly efficient MDC was reduced from a month to a few days. All work was done in-house at Glenn for the High Rate Data Delivery Program. Future plans include optimizing the MDC and TWT interaction circuit in tandem to further improve overall TWT efficiency.

  14. Nitrogen removal performance and microbial community of an enhanced multistage A/O biofilm reactor treating low-strength domestic wastewater.

    PubMed

    Chen, Han; Li, Ang; Wang, Qiao; Cui, Di; Cui, Chongwei; Ma, Fang

    2018-06-01

    The treatment of low-strength domestic wastewater (LSDW) with low chemical oxygen demand (COD) has drawn extensive attention because of its poor total nitrogen (TN) removal performance. In the present study, an enhanced multistage anoxic/oxic (A/O) biofilm reactor was designed to improve the TN removal performance of LSDW treatment. Efficient nitrifying and denitrifying biofilm carriers were cultivated and then filled into the enhanced biofilm reactor as the sole microbial source. A step-feed strategy and internal recycle were adopted to optimize the substrate distribution and the utilization of organics. Key operational parameters were optimized to obtain the best nitrogen and organics removal efficiencies. A hydraulic retention time of 8 h, an influent distribution ratio of 2:1 and an internal recycle ratio of 200% were found to be the optimum parameters. The ammonium, TN and COD removal efficiencies under the optimal operational parameters reached 99.75 ± 0.21, 59.51 ± 1.95 and 85.06 ± 0.79%, respectively, at an organic loading rate of around 0.36 kg COD/(m³·d). High-throughput sequencing confirmed that the nitrifying and denitrifying biofilm could maintain functional bacteria in the system during long-period operation. Proteobacteria and Bacteroidetes were the dominant phyla in all the nitrifying and denitrifying biofilm samples. Nitrosomonadaceae_uncultured and Nitrospira sp. stably existed in the nitrifying biofilm as the main nitrifiers, while several heterotrophic genera, such as Thauera sp. and Flavobacterium sp., acted as potential genera responsible for TN removal in the denitrifying biofilm. These findings suggest that the enhanced biofilm reactor could be a promising route for the treatment of LSDW with a low COD level.

  15. Multiple-stage decisions in a marine central-place forager

    NASA Astrophysics Data System (ADS)

    Friedlaender, Ari S.; Johnston, David W.; Tyson, Reny B.; Kaltenberg, Amanda; Goldbogen, Jeremy A.; Stimpert, Alison K.; Curtice, Corrie; Hazen, Elliott L.; Halpin, Patrick N.; Read, Andrew J.; Nowacek, Douglas P.

    2016-05-01

    Air-breathing marine animals face a complex set of physical challenges associated with diving that affect the decisions of how to optimize feeding. Baleen whales (Mysticeti) have evolved bulk-filter feeding mechanisms to efficiently feed on dense prey patches. Baleen whales are central place foragers where oxygen at the surface represents the central place and depth acts as the distance to prey. Although it has been hypothesized that baleen whales will target the densest prey patches anywhere in the water column, how depth and density interact to influence foraging behaviour is poorly understood. We used multi-sensor archival tags and active acoustics to quantify Antarctic humpback whale foraging behaviour relative to prey. Our analyses reveal multi-stage foraging decisions driven by both krill depth and density. During daylight hours when whales did not feed, krill were found in deep high-density patches. As krill migrated vertically into larger and less dense patches near the surface, whales began to forage. During foraging bouts, we found that feeding rates (number of feeding lunges per hour) were greatest when prey was shallowest, and feeding rates decreased with increasing dive depth. This strategy is consistent with previous models of how air-breathing diving animals optimize foraging efficiency. Thus, humpback whales forage mainly when prey is more broadly distributed and shallower, presumably to minimize diving and searching costs and to increase feeding rates overall and thus foraging efficiency. Using direct measurements of feeding behaviour from animal-borne tags and prey availability from echosounders, our study demonstrates a multi-stage foraging process in a central place forager that we suggest acts to optimize overall efficiency by maximizing net energy gain over time. These data reveal a previously unrecognized level of complexity in predator-prey interactions and underscore the need to simultaneously measure prey distribution in marine central place forager studies.

  16. Multiple-stage decisions in a marine central-place forager.

    PubMed

    Friedlaender, Ari S; Johnston, David W; Tyson, Reny B; Kaltenberg, Amanda; Goldbogen, Jeremy A; Stimpert, Alison K; Curtice, Corrie; Hazen, Elliott L; Halpin, Patrick N; Read, Andrew J; Nowacek, Douglas P

    2016-05-01

    Air-breathing marine animals face a complex set of physical challenges associated with diving that affect the decisions of how to optimize feeding. Baleen whales (Mysticeti) have evolved bulk-filter feeding mechanisms to efficiently feed on dense prey patches. Baleen whales are central place foragers where oxygen at the surface represents the central place and depth acts as the distance to prey. Although it has been hypothesized that baleen whales will target the densest prey patches anywhere in the water column, how depth and density interact to influence foraging behaviour is poorly understood. We used multi-sensor archival tags and active acoustics to quantify Antarctic humpback whale foraging behaviour relative to prey. Our analyses reveal multi-stage foraging decisions driven by both krill depth and density. During daylight hours when whales did not feed, krill were found in deep high-density patches. As krill migrated vertically into larger and less dense patches near the surface, whales began to forage. During foraging bouts, we found that feeding rates (number of feeding lunges per hour) were greatest when prey was shallowest, and feeding rates decreased with increasing dive depth. This strategy is consistent with previous models of how air-breathing diving animals optimize foraging efficiency. Thus, humpback whales forage mainly when prey is more broadly distributed and shallower, presumably to minimize diving and searching costs and to increase feeding rates overall and thus foraging efficiency. Using direct measurements of feeding behaviour from animal-borne tags and prey availability from echosounders, our study demonstrates a multi-stage foraging process in a central place forager that we suggest acts to optimize overall efficiency by maximizing net energy gain over time. These data reveal a previously unrecognized level of complexity in predator-prey interactions and underscore the need to simultaneously measure prey distribution in marine central place forager studies.

  17. Optimal control of lift/drag ratios on a rotating cylinder

    NASA Technical Reports Server (NTRS)

    Ou, Yuh-Roung; Burns, John A.

    1992-01-01

    We present the numerical solution to a problem of maximizing the lift to drag ratio by rotating a circular cylinder in a two-dimensional viscous incompressible flow. This problem is viewed as a test case for the newly developing theoretical and computational methods for control of fluid dynamic systems. We show that the time averaged lift to drag ratio for a fixed finite-time interval achieves its maximum value at an optimal rotation rate that depends on the time interval.

  18. Approximation theory for LQG (Linear-Quadratic-Gaussian) optimal control of flexible structures

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Adamian, A.

    1988-01-01

    An approximation theory is presented for the LQG (Linear-Quadratic-Gaussian) optimal control problem for flexible structures whose distributed models have bounded input and output operators. The main purpose of the theory is to guide the design of finite dimensional compensators that approximate closely the optimal compensator. The optimal LQG problem separates into an optimal linear-quadratic regulator problem and an optimal state estimation problem. The solution of the former problem lies in the solution to an infinite dimensional Riccati operator equation. The approximation scheme approximates the infinite dimensional LQG problem with a sequence of finite dimensional LQG problems defined for a sequence of finite dimensional, usually finite element or modal, approximations of the distributed model of the structure. Two Riccati matrix equations determine the solution to each approximating problem. The finite dimensional equations for numerical approximation are developed, including formulas for converting matrix control and estimator gains to their functional representation to allow comparison of gains based on different orders of approximation. Convergence of the approximating control and estimator gains and of the corresponding finite dimensional compensators is studied. Also, convergence and stability of the closed-loop systems produced with the finite dimensional compensators are discussed. The convergence theory is based on the convergence of the solutions of the finite dimensional Riccati equations to the solutions of the infinite dimensional Riccati equations. A numerical example with a flexible beam, a rotating rigid body, and a lumped mass is given.
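
    Each finite-dimensional approximating problem reduces to a pair of matrix Riccati equations. As a hedged illustration, with a toy two-mode model whose numbers are not from the report, the regulator Riccati equation can be solved directly with SciPy; the estimator gain follows from the dual (filter) Riccati equation in the same way.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Toy two-mode flexible-structure model (illustrative numbers only):
    # states are modal positions and velocities, single force input.
    omega1, omega2, zeta = 1.0, 5.0, 0.02
    A = np.array([[0, 1, 0, 0],
                  [-omega1**2, -2*zeta*omega1, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, -omega2**2, -2*zeta*omega2]], dtype=float)
    B = np.array([[0.0], [1.0], [0.0], [0.5]])
    Q = np.eye(4)           # state weighting
    R = np.array([[0.1]])   # control weighting

    # Regulator Riccati equation: A'P + PA - P B R^{-1} B' P + Q = 0
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)   # optimal feedback gain, u = -K x
    print(K)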

  19. First demonstration of an emulsion multi-stage shifter for accelerator neutrino experiments in J-PARC T60

    NASA Astrophysics Data System (ADS)

    Yamada, K.; Aoki, S.; Cao, S.; Chikuma, N.; Fukuda, T.; Fukuzawa, Y.; Gonin, M.; Hayashino, T.; Hayato, Y.; Hiramoto, A.; Hosomi, F.; Inoh, T.; Iori, S.; Ishiguro, K.; Kawahara, H.; Kim, H.; Kitagawa, N.; Koga, T.; Komatani, R.; Komatsu, M.; Matsushita, A.; Mikado, S.; Minamino, A.; Mizusawa, H.; Matsumoto, T.; Matsuo, T.; Morimoto, Y.; Morishima, K.; Morishita, M.; Naganawa, N.; Nakamura, K.; Nakamura, M.; Nakamura, Y.; Nakano, T.; Nakatsuka, Y.; Nakaya, T.; Nishio, A.; Ogawa, S.; Oshima, H.; Quilain, B.; Rokujo, H.; Sato, O.; Seiya, Y.; Shibuya, H.; Shiraishi, T.; Suzuki, Y.; Tada, S.; Takahashi, S.; Yokoyama, M.; Yoshimoto, M.

    2017-06-01

    We describe the first ever implementation of a clock-based, multi-stage emulsion shifter in an accelerator neutrino experiment. The system was installed in the neutrino monitoring building at the Japan Proton Accelerator Research Complex as part of a test experiment, T60, and stable operation was maintained for a total of 126.6 days. By applying time information to emulsion films, various results were obtained. Time resolutions of 5.3-14.7 s were evaluated in an operation spanning 46.9 days (yielding division numbers of 1.4-3.8×10^5). By using timing and spatial information, reconstruction of coincident events consisting of high-multiplicity and vertex-contained events, including neutrino events, was performed. Emulsion events were matched to events observed by INGRID, one of the on-axis near detectors of the T2K experiment, with high reliability (98.5%), and hybrid analysis of the emulsion and INGRID events was established by means of the multi-stage shifter. The results demonstrate that the multi-stage shifter can feasibly be used in neutrino experiments.

  20. Optimal vibration control of a rotating plate with self-sensing active constrained layer damping

    NASA Astrophysics Data System (ADS)

    Xie, Zhengchao; Wong, Pak Kin; Lo, Kin Heng

    2012-04-01

    This paper proposes a finite element model for an optimally controlled, constrained layer damped (CLD) rotating plate with a self-sensing technique and frequency-dependent material properties in both the time and frequency domains. Constrained layer damping with viscoelastic material can effectively reduce vibration in rotating structures. However, most existing research models use a complex modulus approach to model the viscoelastic material, and an additional iterative approach, available only in the frequency domain, has to be used to include the material's frequency dependency. It is therefore useful to model the viscoelastic damping layer in the rotating part using anelastic displacement fields (ADF), so that the frequency dependency is captured in both the time and frequency domains. Also, unlike previous models, this finite element model treats all three layers as carrying both shear and extension strains, so all types of damping are taken into account. Thus, in this work, a single-layer finite element is adopted to model a three-layer active constrained layer damped rotating plate in which the constraining layer is made of piezoelectric material and works as both the self-sensing sensor and the actuator under a linear quadratic regulation (LQR) controller. After comparison with verified data, this newly proposed finite element model is validated and could be used for future research.

  1. Variational Trajectory Optimization Tool Set: Technical description and user's manual

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Queen, Eric M.; Cavanaugh, Michael D.; Wetzel, Todd A.; Moerder, Daniel D.

    1993-01-01

    The algorithms that comprise the Variational Trajectory Optimization Tool Set (VTOTS) package are briefly described. The VTOTS is a software package for solving nonlinear constrained optimal control problems from a wide range of engineering and scientific disciplines. The VTOTS package was specifically designed to minimize the amount of user programming; in fact, for problems that may be expressed in terms of analytical functions, the user needs only to define the problem in terms of symbolic variables. This version of the VTOTS does not support tabular data; thus, problems must be expressed in terms of analytical functions. The VTOTS package consists of two methods for solving nonlinear optimal control problems: a time-domain finite-element algorithm and a multiple shooting algorithm. These two algorithms, under the VTOTS package, may be run independently or jointly. The finite-element algorithm generates approximate solutions, whereas the shooting algorithm provides a more accurate solution to the optimization problem. A user's manual, some examples with results, and a brief description of the individual subroutines are included.

  2. Subspace-based optimization method for inverse scattering problems with an inhomogeneous background medium

    NASA Astrophysics Data System (ADS)

    Chen, Xudong

    2010-07-01

    This paper proposes a version of the subspace-based optimization method to solve the inverse scattering problem with an inhomogeneous background medium where the known inhomogeneities are bounded in a finite domain. Although the background Green's function at each discrete point in the computational domain is not directly available in an inhomogeneous background scenario, the paper uses the finite element method to simultaneously obtain the Green's function at all discrete points. The essence of the subspace-based optimization method is that part of the contrast source is determined from the spectrum analysis without using any optimization, whereas the orthogonally complementary part is determined by solving a lower dimension optimization problem. This feature significantly speeds up the convergence of the algorithm and at the same time makes it robust against noise. Numerical simulations illustrate the efficacy of the proposed algorithm. The algorithm presented in this paper finds wide applications in nondestructive evaluation, such as through-wall imaging.
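
    The "spectrum analysis" step corresponds to a singular value decomposition of the operator mapping contrast sources to measured fields. Below is a minimal, hedged illustration of the subspace split, with a random operator and data vector standing in for the FEM-computed Green's function and measurements; only the structure of the split is meant to reflect the method.

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed discretized operator G mapping contrast sources (n unknowns)
    # to the measured scattered field (m receivers), plus one data vector.
    m, n, L = 32, 200, 10            # L = dimension of the retained signal subspace
    G = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
    data = rng.standard_normal(m) + 1j * rng.standard_normal(m)

    U, s, Vh = np.linalg.svd(G, full_matrices=False)

    # Deterministic part of the contrast source: projection onto the first L
    # right-singular vectors, obtained directly from the data without optimization.
    alpha = (U[:, :L].conj().T @ data) / s[:L]
    J_det = Vh[:L].conj().T @ alpha

    # The complementary part lives in span of the remaining right-singular vectors
    # and would be found by a lower-dimensional optimization over its coefficients.
    basis_ambiguous = Vh[L:].conj().T
    print(J_det.shape, basis_ambiguous.shape)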

  3. Multi-stage volcanic island flank collapses with coeval explosive caldera-forming eruptions.

    PubMed

    Hunt, James E; Cassidy, Michael; Talling, Peter J

    2018-01-18

    Volcanic flank collapses and explosive eruptions are among the largest and most destructive processes on Earth. Events at Mount St. Helens in May 1980 demonstrated how a relatively small (<5 km^3) flank collapse on a terrestrial volcano could immediately precede a devastating eruption. The lateral collapse of volcanic island flanks, such as in the Canary Islands, can be far larger (>300 km^3), but can also occur in complex multiple stages. Here, we show that multistage retrogressive landslides on Tenerife triggered explosive caldera-forming eruptions, including the Diego Hernandez, Guajara and Ucanca caldera eruptions. Geochemical analyses were performed on volcanic glasses recovered from marine sedimentary deposits, called turbidites, associated with each individual stage of each multistage landslide. These analyses indicate that only the lattermost stages of subaerial flank failure contain materials originating from the respective coeval explosive eruptions, suggesting that the initial, more voluminous submarine stages of multi-stage flank collapse induce these explosive eruptions. Furthermore, extended time lags are identified between the individual stages of multi-stage collapse, and thus an extended time lag exists between the initial submarine stages of failure and the onset of the subsequent explosive eruption. This time lag succeeding landslide-generated static decompression has implications for the response of magmatic systems to un-roofing and significant implications for ocean island volcanism and civil emergency planning.

  4. Multi-megavolt low jitter multistage switch

    DOEpatents

    Humphreys, D.R.; Penn, K.J. Jr.

    1985-06-19

    It is one object of the present invention to provide a multistage switch capable of holding off numerous megavolts, until triggered, from a particle beam accelerator of the type used for inertial confinement fusion. The invention provides a multistage switch having low timing jitter and capable of producing multiple spark channels for spreading current over a wider area to reduce electrode damage and increase switch lifetime. The switch has fairly uniform electric fields and a short spark gap for laser triggering and is engineered to prevent insulator breakdowns.

  5. The integration of manual and automatic image analysis techniques with supporting ground data in a multistage sampling framework for timber resource inventories: Three examples

    NASA Technical Reports Server (NTRS)

    Gialdini, M.; Titus, S. J.; Nichols, J. D.; Thomas, R.

    1975-01-01

    An approach to information acquisition is discussed in the context of meeting user-specified needs in a cost-effective, timely manner through the use of remote sensing data, ground data, and multistage sampling techniques. The roles of both LANDSAT imagery and Skylab photography are discussed as first stages of three separate multistage timber inventory systems and results are given for each system. Emphasis is placed on accuracy and meeting user needs.

  6. A Review of High-Order and Optimized Finite-Difference Methods for Simulating Linear Wave Phenomena

    NASA Technical Reports Server (NTRS)

    Zingg, David W.

    1996-01-01

    This paper presents a review of high-order and optimized finite-difference methods for numerically simulating the propagation and scattering of linear waves, such as electromagnetic, acoustic, or elastic waves. The spatial operators reviewed include compact schemes, non-compact schemes, schemes on staggered grids, and schemes which are optimized to produce specific characteristics. The time-marching methods discussed include Runge-Kutta methods, Adams-Bashforth methods, and the leapfrog method. In addition, the following fourth-order fully-discrete finite-difference methods are considered: a one-step implicit scheme with a three-point spatial stencil, a one-step explicit scheme with a five-point spatial stencil, and a two-step explicit scheme with a five-point spatial stencil. For each method studied, the number of grid points per wavelength required for accurate simulation of wave propagation over large distances is presented. Recommendations are made with respect to the suitability of the methods for specific problems and practical aspects of their use, such as appropriate Courant numbers and grid densities. Avenues for future research are suggested.

  7. Optimized Finite-Difference Coefficients for Hydroacoustic Modeling

    NASA Astrophysics Data System (ADS)

    Preston, L. A.

    2014-12-01

    Responsible utilization of marine renewable energy sources through the use of current energy converter (CEC) and wave energy converter (WEC) devices requires an understanding of the noise generation and propagation from these systems in the marine environment. Acoustic noise produced by rotating turbines, for example, could adversely affect marine animals and human-related marine activities if not properly understood and mitigated. We are utilizing a 3-D finite-difference acoustic simulation code developed at Sandia that can accurately propagate noise in the complex bathymetry in the near-shore to open ocean environment. As part of our efforts to improve computation efficiency in the large, high-resolution domains required in this project, we investigate the effects of using optimized finite-difference coefficients on the accuracy of the simulations. We compare accuracy and runtime of various finite-difference coefficients optimized via criteria such as maximum numerical phase speed error, maximum numerical group speed error, and L-1 and L-2 norms of weighted numerical group and phase speed errors over a given spectral bandwidth. We find that those coefficients optimized for L-1 and L-2 norms are superior in accuracy to those based on maximal error and can produce runtimes of 10% of the baseline case, which uses Taylor Series finite-difference coefficients at the Courant time step limit. We will present comparisons of the results for the various cases evaluated as well as recommendations for utilization of the cases studied. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
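
    The abstract does not reproduce the actual optimization procedure; as a hedged sketch of the general idea, the snippet below fits antisymmetric central-difference coefficients for a first derivative by minimizing the L2 norm of the wavenumber (phase) error over a chosen bandwidth, and compares the result against Taylor-series coefficients of the same stencil width. All choices (stencil width, bandwidth, sampling) are illustrative assumptions.

    import numpy as np

    M = 4                      # stencil half-width (coefficients c_1..c_M)
    bandwidth = 0.85 * np.pi   # optimize the phase error for kh in (0, bandwidth]
    kh = np.linspace(1e-3, bandwidth, 400)

    # For a first-derivative central stencil with grid spacing h = 1, the
    # numerical wavenumber is  kh_num = 2 * sum_m c_m * sin(m * kh).
    A = 2.0 * np.sin(np.outer(kh, np.arange(1, M + 1)))
    c_opt, *_ = np.linalg.lstsq(A, kh, rcond=None)    # L2-optimal coefficients over the band

    # Standard 8th-order Taylor-series coefficients of the same width, for comparison.
    c_taylor = np.array([4/5, -1/5, 4/105, -1/280])

    for name, c in [("optimized", c_opt), ("Taylor", c_taylor)]:
        err = np.max(np.abs(A @ c - kh) / kh)
        print(f"{name}: max relative phase error over band = {err:.2e}")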

  8. Stochastic Games for Continuous-Time Jump Processes Under Finite-Horizon Payoff Criterion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Qingda, E-mail: weiqd@hqu.edu.cn; Chen, Xian, E-mail: chenxian@amss.ac.cn

    In this paper we study two-person nonzero-sum games for continuous-time jump processes with randomized history-dependent strategies under the finite-horizon payoff criterion. The state space is countable, and the transition rates and payoff functions are allowed to be unbounded from above and from below. Under suitable conditions, we introduce a new topology for the set of all randomized Markov multi-strategies and establish its compactness and metrizability. Then, by constructing approximating sequences of the transition rates and payoff functions, we show that the optimal value function for each player is a unique solution to the corresponding optimality equation and obtain the existence of a randomized Markov Nash equilibrium. Furthermore, we illustrate the applications of our main results with a controlled birth and death system.

  9. Multi-Stage Convex Relaxation Methods for Machine Learning

    DTIC Science & Technology

    2013-03-01

    Many problems in machine learning can be naturally formulated as non-convex optimization problems. However, such direct nonconvex formulations have...original nonconvex formulation. We will develop theoretical properties of this method and algorithmic consequences. Related convex and nonconvex machine learning methods will also be investigated.

  10. Practical synchronization on complex dynamical networks via optimal pinning control

    NASA Astrophysics Data System (ADS)

    Li, Kezan; Sun, Weigang; Small, Michael; Fu, Xinchu

    2015-07-01

    We consider practical synchronization on complex dynamical networks under linear feedback control designed by optimal control theory. The control goal is to minimize the global synchronization error and control strength over a given finite time interval, as well as the synchronization error at the terminal time. By utilizing Pontryagin's minimum principle, and based on a general complex dynamical network, we obtain an optimal system that achieves the control goal. The result is verified by numerical simulations on star networks, Watts-Strogatz networks, and Barabási-Albert networks. Moreover, by combining optimal control and traditional pinning control, we propose an optimal pinning control strategy which depends on the network's topological structure. The results show that optimal pinning control is very effective for synchronization control in real applications.

  11. Optimized emission in nanorod arrays through quasi-aperiodic inverse design.

    PubMed

    Anderson, P Duke; Povinelli, Michelle L

    2015-06-01

    We investigate a new class of quasi-aperiodic nanorod structures for the enhancement of incoherent light emission. We identify one optimized structure using an inverse design algorithm and the finite-difference time-domain method. We carry out emission calculations on both the optimized structure as well as a simple periodic array. The optimized structure achieves nearly perfect light extraction while maintaining a high spontaneous emission rate. Overall, the optimized structure can achieve a 20%-42% increase in external quantum efficiency relative to a simple periodic design, depending on material quality.

  12. Optimization of block-floating-point realizations for digital controllers with finite-word-length considerations.

    PubMed

    Wu, Jun; Hu, Xie-he; Chen, Sheng; Chu, Jian

    2003-01-01

    The closed-loop stability issue of finite-precision realizations was investigated for digital controllers implemented in block-floating-point format. The controller coefficient perturbation was analyzed resulting from using finite word length (FWL) block-floating-point representation scheme. A block-floating-point FWL closed-loop stability measure was derived which considers both the dynamic range and precision. To facilitate the design of optimal finite-precision controller realizations, a computationally tractable block-floating-point FWL closed-loop stability measure was then introduced and the method of computing the value of this measure for a given controller realization was developed. The optimal controller realization is defined as the solution that maximizes the corresponding measure, and a numerical optimization approach was adopted to solve the resulting optimal realization problem. A numerical example was used to illustrate the design procedure and to compare the optimal controller realization with the initial realization.

  13. A New Numerical Simulation technology of Multistage Fracturing in Horizontal Well

    NASA Astrophysics Data System (ADS)

    Cheng, Ning; Kang, Kaifeng; Li, Jianming; Liu, Tao; Ding, Kun

    2017-11-01

    Horizontal multi-stage fracturing is recognized as an effective technology for developing unconventional oil resources. Geomechanics plays a very important role in the numerical simulation of hydraulic fracturing; by accounting for geomechanical effects, the new simulation technology can optimize fracturing designs and evaluate post-fracturing production more effectively than conventional numerical simulation. This study builds a three-dimensional stress and rock-physics parameter model and uses the latest fluid-solid coupled numerical simulation technology to trace the fracture propagation process, describe the evolution of the stress field during fracturing, and finally predict production.

  14. Design synthesis and optimization of permanent magnet synchronous machines based on computationally-efficient finite element analysis

    NASA Astrophysics Data System (ADS)

    Sizov, Gennadi Y.

    In this dissertation, a model-based multi-objective optimal design of permanent magnet ac machines, supplied by sine-wave current regulated drives, is developed and implemented. The design procedure uses an efficient electromagnetic finite element-based solver to accurately model nonlinear material properties and complex geometric shapes associated with magnetic circuit design. Application of an electromagnetic finite element-based solver allows for accurate computation of intricate performance parameters and characteristics. The first contribution of this dissertation is the development of a rapid computational method that allows accurate and efficient exploration of large multi-dimensional design spaces in search of optimum design(s). The computationally efficient finite element-based approach developed in this work provides a framework of tools that allow rapid analysis of synchronous electric machines operating under steady-state conditions. In the developed modeling approach, major steady-state performance parameters such as, winding flux linkages and voltages, average, cogging and ripple torques, stator core flux densities, core losses, efficiencies and saturated machine winding inductances, are calculated with minimum computational effort. In addition, the method includes means for rapid estimation of distributed stator forces and three-dimensional effects of stator and/or rotor skew on the performance of the machine. The second contribution of this dissertation is the development of the design synthesis and optimization method based on a differential evolution algorithm. The approach relies on the developed finite element-based modeling method for electromagnetic analysis and is able to tackle large-scale multi-objective design problems using modest computational resources. Overall, computational time savings of up to two orders of magnitude are achievable, when compared to current and prevalent state-of-the-art methods. These computational savings allow one to expand the optimization problem to achieve more complex and comprehensive design objectives. The method is used in the design process of several interior permanent magnet industrial motors. The presented case studies demonstrate that the developed finite element-based approach practically eliminates the need for using less accurate analytical and lumped parameter equivalent circuit models for electric machine design optimization. The design process and experimental validation of the case-study machines are detailed in the dissertation.
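
    Below is a hedged sketch of a differential-evolution design loop with SciPy, using a small placeholder algebraic cost in place of the finite-element machine evaluation; the variable names, bounds, and cost terms are illustrative assumptions, not the dissertation's actual design variables or objectives.

    import numpy as np
    from scipy.optimize import differential_evolution

    def machine_cost(x):
        """Placeholder single-objective cost standing in for the FE-based evaluation.
        x = [magnet_thickness_mm, tooth_width_mm, slot_depth_mm] (illustrative only)."""
        magnet, tooth, slot = x
        torque_ripple = (magnet - 4.0) ** 2 + 0.1 * (tooth - 8.0) ** 2
        losses = 0.05 * slot ** 2
        return torque_ripple + losses

    bounds = [(2.0, 8.0), (5.0, 12.0), (10.0, 30.0)]   # assumed design-variable ranges
    result = differential_evolution(machine_cost, bounds, maxiter=200, seed=1, polish=True)
    print(result.x, result.fun)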

  15. Optimal Transient Growth of Submesoscale Baroclinic Instabilities

    NASA Astrophysics Data System (ADS)

    White, Brian; Zemskova, Varvara; Passaggia, Pierre-Yves

    2016-11-01

    Submesoscale instabilities are analyzed using a transient growth approach to determine the optimal perturbation for a rotating Boussinesq fluid subject to baroclinic instabilities. We consider a base flow with uniform shear and stratification and consider the non-normal evolution over finite-time horizons of linear perturbations in an ageostrophic, non-hydrostatic regime. Stone (1966, 1971) showed that the stability of the base flow to normal modes depends on the Rossby and Richardson numbers, with instabilities ranging from geostrophic (Ro -> 0) and ageostrophic (finite Ro) baroclinic modes to symmetric (Ri < 1, Ro > 1) and Kelvin-Helmholtz (Ri < 1/4) modes. Non-normal transient growth, initiated by localized optimal wave packets, represents a faster mechanism for the growth of perturbations and may provide an energetic link between large-scale flows in geostrophic balance and dissipation scales via submesoscale instabilities. Here we consider two- and three-dimensional optimal perturbations by means of direct-adjoint iterations of the linearized Boussinesq Navier-Stokes equations to determine the form of the optimal perturbation, the optimal energy gain, and the characteristics of the most unstable perturbation.

  16. Optimization of tissue physical parameters for accurate temperature estimation from finite-element simulation of radiofrequency ablation.

    PubMed

    Subramanian, Swetha; Mast, T Douglas

    2015-10-07

    Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature.
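
    The note does not list its filter equations; the following is a minimal sketch of the unscented transform underlying such an inverse solver, propagating sigma points of the parameter vector (specific heat, thermal conductivity, electrical conductivity) through an assumed toy forward model that stands in for the finite-element RFA simulation. All numbers are illustrative assumptions.

    import numpy as np

    def sigma_points(mean, cov, alpha=1e-1, beta=2.0, kappa=0.0):
        """Scaled sigma points and weights for the unscented transform."""
        n = len(mean)
        lam = alpha**2 * (n + kappa) - n
        S = np.linalg.cholesky((n + lam) * cov)
        pts = [mean] + [mean + S[:, i] for i in range(n)] + [mean - S[:, i] for i in range(n)]
        wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
        wc = wm.copy()
        wm[0] = lam / (n + lam)
        wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
        return np.array(pts), wm, wc

    def forward_model(theta):
        """Placeholder for the finite-element RFA simulation: maps
        [specific heat, thermal conductivity, electrical conductivity]
        to a predicted ablated-area measurement (illustrative only)."""
        c, k, sigma = theta
        return np.array([1e3 * sigma / k + 1e-4 * c])

    mean = np.array([3600.0, 0.5, 0.3])            # assumed prior parameter means
    cov = np.diag([200.0**2, 0.05**2, 0.05**2])    # assumed prior covariance
    pts, wm, wc = sigma_points(mean, cov)
    preds = np.array([forward_model(p) for p in pts])
    pred_mean = (wm[:, None] * preds).sum(axis=0)  # unscented prediction of the measurement
    print(pred_mean)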

  17. Newmark local time stepping on high-performance computing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch; Institute of Geophysics, ETH Zurich; Grote, Marcus, E-mail: marcus.grote@unibas.ch

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.

  18. Multiple Imputation in Two-Stage Cluster Samples Using The Weighted Finite Population Bayesian Bootstrap.

    PubMed

    Zhou, Hanzhi; Elliott, Michael R; Raghunathan, Trivellore E

    2016-06-01

    Multistage sampling is often employed in survey samples for cost and convenience. However, accounting for clustering features when generating datasets for multiple imputation is a nontrivial task, particularly when, as is often the case, cluster sampling is accompanied by unequal probabilities of selection, necessitating case weights. Thus, multiple imputation often ignores complex sample designs and assumes simple random sampling when generating imputations, even though failing to account for complex sample design features is known to yield biased estimates and confidence intervals that have incorrect nominal coverage. In this article, we extend a recently developed, weighted, finite-population Bayesian bootstrap procedure to generate synthetic populations conditional on complex sample design data that can be treated as simple random samples at the imputation stage, obviating the need to directly model design features for imputation. We develop two forms of this method: one where the probabilities of selection are known at the first and second stages of the design, and the other, more common in public use files, where only the final weight based on the product of the two probabilities is known. We show that this method has advantages in terms of bias, mean square error, and coverage properties over methods where sample designs are ignored, with little loss in efficiency, even when compared with correct fully parametric models. An application is made using the National Automotive Sampling System Crashworthiness Data System, a multistage, unequal probability sample of U.S. passenger vehicle crashes, which suffers from a substantial amount of missing data in "Delta-V," a key crash severity measure.

  19. Investigation of multi-stage cold forward extrusion process using coupled thermo-mechanical finite element analysis

    NASA Astrophysics Data System (ADS)

    Görtan, Mehmet Okan

    2018-05-01

    Cold extrusion processes are distinguished by their low material usage as well as great efficiency in the production of mid-range and large component series. Although the majority of cold extruded parts are produced using die systems containing multiple forming stages, this subject has rarely been investigated so far. Therefore, the characteristics of multi-stage cold forward rod extrusion are studied in the current work using thermo-mechanically coupled finite element (FE) analysis. A case hardening steel, 16MnCr5 (1.7131), was used as the experimental material. Its strain-, strain rate- and temperature-dependent mechanical characteristics were determined using compression testing and modeled in FE simulations via a Johnson-Cook material model. Friction coefficients for the same material in contact with a tool steel (1.2379) were determined as functions of temperature and contact pressure using the sliding compression test (SCT) and modeled by an adaptive friction model developed by the author. In the first set of simulations, rod material with a diameter of 14.9 mm was extruded down to a diameter of 9.6 mm in a single step using three different die opening angles (2α): 20°, 40° and 60°. In the second set of investigations, the same rod was reduced first to 12 mm and then to 9.6 mm in two steps within the same forming die. Press forces, contact normal stresses between the extruded material and the forming die, material temperature and axial stresses are compared between these two sets of simulations and the differences are discussed.
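
    The Johnson-Cook flow stress used in such FE models has the standard form sigma = (A + B*eps^n) * (1 + C*ln(epsdot/epsdot_ref)) * (1 - T*^m), with T* = (T - T_ref)/(T_melt - T_ref). A small sketch with illustrative parameters (not the fitted 16MnCr5 values, which the abstract does not give) is shown below.

    import numpy as np

    def johnson_cook_stress(eps, eps_rate, T,
                            A=350.0, B=600.0, n=0.25, C=0.02, m=1.0,
                            eps_rate_ref=1.0, T_ref=20.0, T_melt=1500.0):
        """Johnson-Cook flow stress in MPa.
        A, B, n, C, m and the reference values are illustrative assumptions."""
        strain_term = A + B * eps ** n
        rate_term = 1.0 + C * np.log(np.maximum(eps_rate / eps_rate_ref, 1e-12))
        T_star = np.clip((T - T_ref) / (T_melt - T_ref), 0.0, 1.0)
        thermal_term = 1.0 - T_star ** m
        return strain_term * rate_term * thermal_term

    # Flow stress at 0.5 true strain, 10 1/s strain rate, 150 °C workpiece temperature.
    print(johnson_cook_stress(0.5, 10.0, 150.0))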

  20. Multiple Imputation in Two-Stage Cluster Samples Using The Weighted Finite Population Bayesian Bootstrap

    PubMed Central

    Zhou, Hanzhi; Elliott, Michael R.; Raghunathan, Trivellore E.

    2017-01-01

    Multistage sampling is often employed in survey samples for cost and convenience. However, accounting for clustering features when generating datasets for multiple imputation is a nontrivial task, particularly when, as is often the case, cluster sampling is accompanied by unequal probabilities of selection, necessitating case weights. Thus, multiple imputation often ignores complex sample designs and assumes simple random sampling when generating imputations, even though failing to account for complex sample design features is known to yield biased estimates and confidence intervals that have incorrect nominal coverage. In this article, we extend a recently developed, weighted, finite-population Bayesian bootstrap procedure to generate synthetic populations conditional on complex sample design data that can be treated as simple random samples at the imputation stage, obviating the need to directly model design features for imputation. We develop two forms of this method: one where the probabilities of selection are known at the first and second stages of the design, and the other, more common in public use files, where only the final weight based on the product of the two probabilities is known. We show that this method has advantages in terms of bias, mean square error, and coverage properties over methods where sample designs are ignored, with little loss in efficiency, even when compared with correct fully parametric models. An application is made using the National Automotive Sampling System Crashworthiness Data System, a multistage, unequal probability sample of U.S. passenger vehicle crashes, which suffers from a substantial amount of missing data in “Delta-V,” a key crash severity measure. PMID:29226161

  1. Topology optimization of finite strain viscoplastic systems under transient loads

    DOE PAGES

    Ivarsson, Niklas; Wallin, Mathias; Tortorelli, Daniel

    2018-02-08

    In this paper, a transient finite strain viscoplastic model is implemented in a gradient-based topology optimization framework to design impact mitigating structures. The model's kinematics relies on the multiplicative split of the deformation gradient, and the constitutive response is based on isotropic hardening viscoplasticity. To solve the mechanical balance laws, the implicit Newmark-beta method is used together with a total Lagrangian finite element formulation. The optimization problem is regularized using a partial differential equation filter and solved using the method of moving asymptotes. Sensitivities required to solve the optimization problem are derived using the adjoint method. To demonstrate the capability of the algorithm, several protective systems are designed, in which the absorbed viscoplastic energy is maximized. Finally, the numerical examples demonstrate that transient finite strain viscoplastic effects can successfully be combined with topology optimization.

  2. Optimal moving grids for time-dependent partial differential equations

    NASA Technical Reports Server (NTRS)

    Wathen, A. J.

    1989-01-01

    Various adaptive moving grid techniques for the numerical solution of time-dependent partial differential equations were proposed. The precise criterion for grid motion varies, but most techniques will attempt to give grids on which the solution of the partial differential equation can be well represented. Moving grids are investigated on which the solutions of the linear heat conduction and viscous Burgers' equation in one space dimension are optimally approximated. Precisely, the results of numerical calculations of optimal moving grids for piecewise linear finite element approximation of partial differential equation solutions in the least squares norm are reported.

  3. Optimal moving grids for time-dependent partial differential equations

    NASA Technical Reports Server (NTRS)

    Wathen, A. J.

    1992-01-01

    Various adaptive moving grid techniques for the numerical solution of time-dependent partial differential equations were proposed. The precise criterion for grid motion varies, but most techniques will attempt to give grids on which the solution of the partial differential equation can be well represented. Moving grids are investigated on which the solutions of the linear heat conduction and viscous Burgers' equation in one space dimension are optimally approximated. Precisely, the results of numerical calculations of optimal moving grids for piecewise linear finite element approximation of PDE solutions in the least-squares norm are reported.

  4. Continuous Mass Measurement on Conveyor Belt

    NASA Astrophysics Data System (ADS)

    Tomobe, Yuki; Tasaki, Ryosuke; Yamazaki, Takanori; Ohnishi, Hideo; Kobayashi, Masaaki; Kurosu, Shigeru

    The continuous mass measurement of packages on a conveyor belt is becoming increasingly important. In such measurement, the sequence of products is generally random. An interesting possibility for raising the throughput of the conveyor line without increasing the conveyor belt speed is offered by the use of two or three conveyor belt scales (called a multi-stage conveyor belt scale). A multi-stage conveyor belt scale can be built so that the conveyor belt length is adjusted to the product length. The conveyor belt scale usually has maximum capacities of less than 80 kg and 140 cm, and achieves measuring rates of 150 packages per minute or more. The output signals from the conveyor belt scale are always contaminated with noise due to vibrations of the conveyor and of the product in motion. The digital filter employed in this paper is of finite impulse response (FIR) type, designed with the dynamics of the conveyor system taken into consideration. The experimental results on the conveyor belt scale suggest that the filtering algorithms are, to some extent, effective enough for practical applications.
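
    A hedged sketch of the kind of FIR low-pass smoothing described is given below, using SciPy; the sampling rate, cutoff, filter length, and synthetic signal are assumptions for illustration, not the paper's actual design.

    import numpy as np
    from scipy.signal import firwin, lfilter

    fs = 500.0        # assumed sampling rate of the load cell, Hz
    cutoff = 5.0      # assumed low-pass cutoff, Hz (conveyor vibration lies above this)
    numtaps = 101     # filter length; longer filters smooth more but delay the estimate

    # Linear-phase FIR low-pass filter (Hamming window by default).
    taps = firwin(numtaps, cutoff, fs=fs)

    # Synthetic weighing signal: a 2.0 kg package plus conveyor vibration and noise.
    t = np.arange(0, 1.0, 1.0 / fs)
    signal = 2.0 + 0.15 * np.sin(2 * np.pi * 25.0 * t) + 0.02 * np.random.randn(t.size)

    filtered = lfilter(taps, 1.0, signal)
    group_delay = (numtaps - 1) // 2    # samples of delay introduced by the FIR filter
    print(f"mass estimate ≈ {filtered[group_delay + 200]:.3f} kg")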

  5. Timing analysis by model checking

    NASA Technical Reports Server (NTRS)

    Naydich, Dimitri; Guaspari, David

    2000-01-01

    The safety of modern avionics relies on high integrity software that can be verified to meet hard real-time requirements. The limits of verification technology therefore determine acceptable engineering practice. To simplify verification problems, safety-critical systems are commonly implemented under the severe constraints of a cyclic executive, which make design an expensive trial-and-error process highly intolerant of change. Important advances in analysis techniques, such as rate monotonic analysis (RMA), have provided a theoretical and practical basis for easing these onerous restrictions. But RMA and its kindred have two limitations: they apply only to verifying the requirement of schedulability (that tasks meet their deadlines) and they cannot be applied to many common programming paradigms. We address both these limitations by applying model checking, a technique with successful industrial applications in hardware design. Model checking algorithms analyze finite state machines, either by explicit state enumeration or by symbolic manipulation. Since quantitative timing properties involve a potentially unbounded state variable (a clock), our first problem is to construct a finite approximation that is conservative for the properties being analyzed-if the approximation satisfies the properties of interest, so does the infinite model. To reduce the potential for state space explosion we must further optimize this finite model. Experiments with some simple optimizations have yielded a hundred-fold efficiency improvement over published techniques.

  6. Finite element solution of optimal control problems with inequality constraints

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Hodges, Dewey H.

    1990-01-01

    A finite-element method based on a weak Hamiltonian form of the necessary conditions is summarized for optimal control problems. Very crude shape functions (so simple that element numerical quadrature is not necessary) can be used to develop an efficient procedure for obtaining candidate solutions (i.e., those which satisfy all the necessary conditions) even for highly nonlinear problems. An extension of the formulation allowing for discontinuities in the states and derivatives of the states is given. A theory that includes control inequality constraints is fully developed. An advanced launch vehicle (ALV) model is presented. The model involves staging and control constraints, thus demonstrating the full power of the weak formulation to date. Numerical results are presented along with total elapsed computer time required to obtain the results. The speed and accuracy in obtaining the results make this method a strong candidate for a real-time guidance algorithm.

  7. Optimal throughput for cognitive radio with energy harvesting in fading wireless channel.

    PubMed

    Vu-Van, Hiep; Koo, Insoo

    2014-01-01

    Energy resource management is a crucial problem of a device with a finite capacity battery. In this paper, cognitive radio is considered to be a device with an energy harvester that can harvest energy from a non-RF energy resource while performing other actions of cognitive radio. Harvested energy will be stored in a finite capacity battery. At the start of the time slot of cognitive radio, the radio needs to determine if it should remain silent or carry out spectrum sensing based on the idle probability of the primary user and the remaining energy in order to maximize the throughput of the cognitive radio system. In addition, optimal sensing energy and adaptive transmission power control are also investigated in this paper to effectively utilize the limited energy of cognitive radio. Finding an optimal approach is formulated as a partially observable Markov decision process. The simulation results show that the proposed optimal decision scheme outperforms the myopic scheme in which current throughput is only considered when making a decision.

  8. Parallel-vector computation for linear structural analysis and non-linear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.

    1991-01-01

    Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.

  9. Infinite horizon optimal impulsive control with applications to Internet congestion control

    NASA Astrophysics Data System (ADS)

    Avrachenkov, Konstantin; Habachi, Oussama; Piunovskiy, Alexey; Zhang, Yi

    2015-04-01

    We investigate infinite-horizon deterministic optimal control problems with both gradual and impulsive controls, where any finitely many impulses are allowed simultaneously. Both discounted and long-run time-average criteria are considered. We establish very general and at the same time natural conditions, under which the dynamic programming approach results in an optimal feedback policy. The established theoretical results are applied to the Internet congestion control, and by solving analytically and nontrivially the underlying optimal control problems, we obtain a simple threshold-based active queue management scheme, which takes into account the main parameters of the transmission control protocols, and improves the fairness among the connections in a given network.

  10. Determination of the mechanical and physical properties of cartilage by coupling poroelastic-based finite element models of indentation with artificial neural networks.

    PubMed

    Arbabi, Vahid; Pouran, Behdad; Campoli, Gianni; Weinans, Harrie; Zadpoor, Amir A

    2016-03-21

    One of the most widely used techniques to determine the mechanical properties of cartilage is based on indentation tests and interpretation of the obtained force-time or displacement-time data. In current computational approaches, one needs to simulate the indentation test with finite element models and use an optimization algorithm to estimate the mechanical properties of cartilage. The modeling procedure is cumbersome, and the simulations need to be repeated for every new experiment. For the first time, we propose a method for fast and accurate estimation of the mechanical and physical properties of cartilage as a poroelastic material with the aid of artificial neural networks. In our study, we used finite element models to simulate indentation of poroelastic materials over a wide range of combinations of mechanical and physical properties. The obtained force-time curves are then divided into three parts: the first two parts of the data are used for training and validation of an artificial neural network, while the third part is used for testing the trained network. The trained neural network receives the force-time curves as the input and provides the properties of cartilage as the output. We observed that the trained network could accurately predict the properties of cartilage within the range of properties for which it was trained. The mechanical and physical properties of cartilage could therefore be estimated very fast, since no additional finite element modeling is required once the neural network is trained. The robustness of the trained artificial neural network in determining the properties of cartilage based on noisy force-time data was assessed by introducing noise to the simulated force-time data. We found that the training procedure could be optimized so as to maximize the robustness of the neural network against noisy force-time data. Copyright © 2016 Elsevier Ltd. All rights reserved.
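
    A minimal sketch of the train-then-predict idea using scikit-learn is shown below, with synthetic force-time curves standing in for the finite-element training data; the curve model, parameter ranges, and network size are illustrative assumptions.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic stand-in for the FE-generated training set: each sample is a
    # force-time curve (50 samples) produced by 3 "tissue" parameters.
    n_samples, n_time = 400, 50
    params = rng.uniform([0.1, 0.5, 1.0], [1.0, 2.0, 5.0], size=(n_samples, 3))
    t = np.linspace(0.0, 1.0, n_time)
    curves = np.array([p[0] * np.exp(-p[1] * t) + p[2] * t for p in params])
    curves += 0.01 * rng.standard_normal(curves.shape)   # measurement-like noise

    X_train, X_test, y_train, y_test = train_test_split(curves, params, random_state=0)

    # Network maps force-time curves (input) to the generating parameters (output).
    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
    net.fit(X_train, y_train)
    print("R^2 on held-out curves:", net.score(X_test, y_test))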

  11. Reexamination of optimal quantum state estimation of pure states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayashi, A.; Hashimoto, T.; Horibe, M.

    2005-09-15

    A direct derivation is given for the optimal mean fidelity of quantum state estimation of a d-dimensional unknown pure state with its N copies given as input, which was first obtained by Hayashi in terms of an infinite set of covariant positive operator valued measures (POVM's) and by Bruss and Macchiavello establishing a connection to optimal quantum cloning. An explicit condition for POVM measurement operators for optimal estimators is obtained, by which we construct optimal estimators with finite POVMs using exact quadratures on a hypersphere. These finite optimal estimators are not generally universal, where universality means the fidelity is independent of input states. However, any optimal estimator with finite POVM for M(>N) copies is universal if it is used for N copies as input.

  12. Fast cooling for a system of stochastic oscillators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yongxin, E-mail: chen2468@umn.edu; Georgiou, Tryphon T., E-mail: tryphon@umn.edu; Pavon, Michele, E-mail: pavon@math.unipd.it

    2015-11-15

    We study feedback control of coupled nonlinear stochastic oscillators in a force field. We first consider the problem of asymptotically driving the system to a desired steady state corresponding to reduced thermal noise. Among the feedback controls achieving the desired asymptotic transfer, we find that the most efficient one from an energy point of view is characterized by time-reversibility. We also extend the theory of Schrödinger bridges to this model, thereby steering the system in finite time and with minimum effort to a target steady-state distribution. The system can then be maintained in this state through the optimal steady-state feedback control. The solution, in the finite-horizon case, involves a space-time harmonic function φ, and −logφ plays the role of an artificial, time-varying potential in which the desired evolution occurs. This framework appears extremely general and flexible and can be viewed as a considerable generalization of existing active control strategies such as macromolecular cooling. In the case of a quadratic potential, the results assume a form particularly attractive from the algorithmic viewpoint as the optimal control can be computed via deterministic matricial differential equations. An example involving inertial particles illustrates both transient and steady state optimal feedback control.

  13. Application of a neural network to simulate analysis in an optimization process

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; Lamarsh, William J., II

    1992-01-01

    A new experimental software package called NETS/PROSSS aimed at reducing the computing time required to solve a complex design problem is described. The software combines a neural network for simulating the analysis program with an optimization program. The neural network is applied to approximate results of a finite element analysis program to quickly obtain a near-optimal solution. Results of the NETS/PROSSS optimization process can also be used as an initial design in a normal optimization process and make it possible to converge to an optimum solution with significantly fewer iterations.

  14. Optimization of finite difference forward modeling for elastic waves based on optimum combined window functions

    NASA Astrophysics Data System (ADS)

    Jian, Wang; Xiaohong, Meng; Hong, Liu; Wanqiu, Zheng; Yaning, Liu; Sheng, Gui; Zhiyang, Wang

    2017-03-01

    Full waveform inversion and reverse time migration are active research areas in seismic exploration. Forward modeling in the time domain determines the precision of the results, and finite-difference numerical solutions have been widely adopted as an important mathematical tool for forward modeling. In this article, an optimal combination of window functions is designed for the finite-difference operator, using a truncated approximation of the spatial convolution series in pseudo-spectral space, to normalize the outcomes of existing window functions for different orders. The proposed combined window functions not only inherit the characteristics of the individual window functions and thus provide better truncation results, but also allow the truncation error of the finite-difference operator to be controlled manually and visually, by adjusting the combinations and analyzing the characteristics of the main and side lobes of the amplitude response. The error level and elastic forward modeling results under the proposed combined scheme were compared with those from conventional window functions and modified binomial windows. Numerical dispersion is significantly suppressed compared with both the modified binomial window and conventional finite differences. Numerical simulation verifies the reliability of the proposed method.

  15. Adaptive eigenspace method for inverse scattering problems in the frequency domain

    NASA Astrophysics Data System (ADS)

    Grote, Marcus J.; Kray, Marie; Nahum, Uri

    2017-02-01

    A nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, when the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type iteration. Instead of a grid-based discrete representation, the unknown wave speed is projected to a particular finite-dimensional basis of eigenfunctions, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency and robustness to missing or noisy data of the resulting adaptive eigenspace inversion method.

  16. Turbulence and mixing from optimal perturbations to a stratified shear layer

    NASA Astrophysics Data System (ADS)

    Kaminski, Alexis; Caulfield, C. P.; Taylor, John

    2014-11-01

    The stability and mixing of stratified shear layers are canonical problems in fluid dynamics with relevance to flows in the ocean and atmosphere. The Miles-Howard theorem states that a necessary condition for normal-mode instability in parallel, inviscid, steady stratified shear flows is that the gradient Richardson number, Rig, is less than 1/4 somewhere in the flow. However, substantial transient growth of non-normal modes may be possible at finite times even when Rig > 1/4 everywhere in the flow. We have calculated the "optimal perturbations" associated with maximum perturbation energy gain for a stably-stratified shear layer. These optimal perturbations are then used to initialize direct numerical simulations. For small but finite perturbation amplitudes, the optimal perturbations grow at the predicted linear rate initially, but then experience sufficient transient growth to become nonlinear and susceptible to secondary instabilities, which then break down into turbulence. Remarkably, this occurs even in flows for which Rig > 1/4 everywhere. We will describe the nonlinear evolution of the optimal perturbations and characterize the resulting turbulence and mixing.

  17. Optimal mapping of irregular finite element domains to parallel processors

    NASA Technical Reports Server (NTRS)

    Flower, J.; Otto, S.; Salama, M.

    1987-01-01

    A mapping of a solution domain of n finite elements onto N subdomains that may be processed in parallel by N processors is optimal if the subdomain decomposition results in a well-balanced workload distribution among the processors. The problem is discussed in the context of irregular finite element domains as an important aspect of the efficient utilization of the capabilities of emerging multiprocessor computers. Finding the optimal mapping is an intractable combinatorial optimization problem, for which a satisfactory approximate solution is obtained here by analogy to a method used in statistical mechanics for simulating the annealing process in solids. The simulated annealing analogy and algorithm are described, and numerical results are given for mapping an irregular two-dimensional finite element domain containing a singularity onto the Hypercube computer.
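
    A toy version of the annealing idea, assigning graph nodes (elements) to processors while penalizing load imbalance and cut edges, is sketched below; the cost weights, cooling schedule, and example mesh are illustrative assumptions rather than the parameters used in the Hypercube study.

      import random, math

      def anneal_partition(adjacency, n_proc, n_iter=20000, t0=1.0, cooling=0.9995, seed=1):
          """Toy simulated-annealing partitioner: assign each element (graph node) to a
          processor so that load imbalance plus cut edges is small."""
          rng = random.Random(seed)
          n = len(adjacency)
          assign = [rng.randrange(n_proc) for _ in range(n)]

          def cost(a):
              loads = [0] * n_proc
              for p in a:
                  loads[p] += 1
              imbalance = max(loads) - min(loads)
              cut = sum(1 for u in range(n) for v in adjacency[u] if u < v and a[u] != a[v])
              return imbalance + 0.5 * cut

          c, t = cost(assign), t0
          for _ in range(n_iter):
              u = rng.randrange(n)
              old = assign[u]
              assign[u] = rng.randrange(n_proc)
              c_new = cost(assign)
              # Metropolis acceptance: always keep improvements, sometimes keep worse moves.
              if c_new <= c or rng.random() < math.exp((c - c_new) / max(t, 1e-12)):
                  c = c_new
              else:
                  assign[u] = old
              t *= cooling
          return assign, c

      # Example: a small ring of 12 "elements" split across 3 "processors".
      adj = {i: [(i - 1) % 12, (i + 1) % 12] for i in range(12)}
      print(anneal_partition(adj, 3))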

  18. Finite element approximation of an optimal control problem for the von Karman equations

    NASA Technical Reports Server (NTRS)

    Hou, L. Steven; Turner, James C.

    1994-01-01

    This paper is concerned with optimal control problems for the von Karman equations with distributed controls. We first show that optimal solutions exist. We then show that Lagrange multipliers may be used to enforce the constraints and derive an optimality system from which optimal states and controls may be deduced. Finally we define finite element approximations of solutions for the optimality system and derive error estimates for the approximations.

  19. Explicit time integration of finite element models on a vectorized, concurrent computer with shared memory

    NASA Technical Reports Server (NTRS)

    Gilbertsen, Noreen D.; Belytschko, Ted

    1990-01-01

    The implementation of a nonlinear explicit program on a vectorized, concurrent computer with shared memory is described and studied. The conflict between vectorization and concurrency is described and some guidelines are given for optimal block sizes. Several example problems are summarized to illustrate the types of speed-ups which can be achieved by reprogramming as compared to compiler optimization.

  20. Improving adsorption cryocoolers by multi-stage compression and reducing void volume

    NASA Technical Reports Server (NTRS)

    Bard, S.

    1986-01-01

    It is shown that the performance of gas adsorption cryocoolers is greatly improved by using adsorbents with low void volume within and between individual adsorbent particles, by reducing void volumes in plumbing lines, and by compressing the working fluid in more than one stage. Refrigerator specific power requirements and compressor volumetric efficiencies are obtained in terms of adsorbent and plumbing line void volumes and operating pressures for various charcoal adsorbents using an analytical model. Performance optimization curves for 117.5 and 80 K charcoal/nitrogen adsorption cryocoolers are given for both single-stage and multistage compressor systems, and compressing the nitrogen in two stages is shown to lower the specific power requirements by 18 percent for the 117.5 K system.

  1. Trajectory optimization of spacecraft high-thrust orbit transfer using a modified evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Shirazi, Abolfazl

    2016-10-01

    This article introduces a new method to optimize finite-burn orbital manoeuvres based on a modified evolutionary algorithm. Optimization is carried out based on conversion of the orbital manoeuvre into a parameter optimization problem by assigning inverse tangential functions to the changes in direction angles of the thrust vector. The problem is analysed using boundary delimitation in a common optimization algorithm. A method is introduced to achieve acceptable values for optimization variables using nonlinear simulation, which results in an enlarged convergence domain. The presented algorithm benefits from high optimality and fast convergence time. A numerical example of a three-dimensional optimal orbital transfer is presented and the accuracy of the proposed algorithm is shown.

  2. Discretized energy minimization in a wave guide with point sources

    NASA Technical Reports Server (NTRS)

    Propst, G.

    1994-01-01

    An anti-noise problem on a finite time interval is solved by minimization of a quadratic functional on the Hilbert space of square integrable controls. To this end, the one-dimensional wave equation with point sources and pointwise reflecting boundary conditions is decomposed into a system for the two propagating components of waves. Wellposedness of this system is proved for a class of data that includes piecewise linear initial conditions and piecewise constant forcing functions. It is shown that for such data the optimal piecewise constant control is the solution of a sparse linear system. Methods for its computational treatment are presented as well as examples of their applicability. The convergence of discrete approximations to the general optimization problem is demonstrated by finite element methods.

  3. Aerodynamic Analysis of Multistage Turbomachinery Flows in Support of Aerodynamic Design

    NASA Technical Reports Server (NTRS)

    Adamczyk, John J.

    1999-01-01

    This paper summarizes the state of 3D CFD based models of the time average flow field within axial flow multistage turbomachines. Emphasis is placed on models which are compatible with the industrial design environment and those models which offer the potential of providing credible results at both design and off-design operating conditions. The need to develop models which are free of aerodynamic input from semi-empirical design systems is stressed. The accuracy of such models is shown to be dependent upon their ability to account for the unsteady flow environment in multistage turbomachinery. The relevant flow physics associated with some of the unsteady flow processes present in axial flow multistage machinery are presented along with procedures which can be used to account for them in 3D CFD simulations. Sample results are presented for both axial flow compressors and axial flow turbines which help to illustrate the enhanced predictive capabilities afforded by including these procedures in 3D CFD simulations. Finally, suggestions are given for future work on the development of time average flow models.

  4. Double absorbing boundaries for finite-difference time-domain electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LaGrone, John, E-mail: jlagrone@smu.edu; Hagstrom, Thomas, E-mail: thagstrom@smu.edu

    We describe the implementation of optimal local radiation boundary condition sequences for second order finite difference approximations to Maxwell's equations and the scalar wave equation using the double absorbing boundary formulation. Numerical experiments are presented which demonstrate that the design accuracy of the boundary conditions is achieved and, for comparable effort, exceeds that of a convolution perfectly matched layer with reasonably chosen parameters. An advantage of the proposed approach is that parameters can be chosen using an accurate a priori error bound.

  5. Online Build-Order Optimization for Real-Time Strategy Agents using Multi-Objective Evolutionary Algorithms

    DTIC Science & Technology

    2014-03-27

    Their chromosome representation is a binary string of 13 actions or 39 bits. Plans consist of a limited number of build actions for the creation of...injected via case-injection which resembles case-base reasoning. Expert actions are recorded and then transformed into chromosomes for injection into GAPs...sites supply a finite amount of a resource. For example, a gold mine in AOE will disappear after a player’s workers have extracted the finite amount of

  6. Finite element analysis and genetic algorithm optimization design for the actuator placement on a large adaptive structure

    NASA Astrophysics Data System (ADS)

    Sheng, Lizeng

    The dissertation focuses on one of the major research needs in the area of adaptive/intelligent/smart structures, the development and application of finite element analysis and genetic algorithms for optimal design of large-scale adaptive structures. We first review some basic concepts in the finite element method and genetic algorithms, along with the research on smart structures. Then we propose a solution methodology for solving a critical problem in the design of a next generation of large-scale adaptive structures: optimal placements of a large number of actuators to control thermal deformations. After briefly reviewing the three most frequently used general approaches to derive a finite element formulation, the dissertation presents techniques associated with general shell finite element analysis using flat triangular laminated composite elements. The element used here has three nodes and eighteen degrees of freedom and is obtained by combining a triangular membrane element and a triangular plate bending element. The element includes the coupling effect between membrane deformation and bending deformation. The membrane element is derived from the linear strain triangular element using Cook's transformation. The discrete Kirchhoff triangular (DKT) element is used as the plate bending element. For completeness, a complete derivation of the DKT is presented. A geometrically nonlinear finite element formulation is derived for the analysis of adaptive structures under combined thermal and electrical loads. Next, we solve the optimization problems of placing a large number of piezoelectric actuators to control thermal distortions in a large mirror in the presence of four different thermal loads. We then extend this to a multi-objective optimization problem of determining only one set of piezoelectric actuator locations that can be used to control the deformation in the same mirror under the action of any one of the four thermal loads. A series of genetic algorithms, GA Version 1, 2 and 3, were developed to find the optimal locations of piezoelectric actuators from on the order of 10²¹ to 10⁵⁶ candidate placements. Introducing a variable population approach, we improve the flexibility of the selection operation in genetic algorithms. Incorporating mutation and hill climbing into micro-genetic algorithms, we are able to develop a more efficient genetic algorithm. Through extensive numerical experiments, we find that the design search space for the optimal placements of a large number of actuators is highly multi-modal and that the most distinct nature of genetic algorithms is their robustness. They give results that are random but with only a slight variability. The genetic algorithms can be used to get an adequate solution using a limited number of evaluations. To get the highest quality solution, multiple runs with different random seed generators are necessary. The investigation time can be significantly reduced using very coarse-grain parallel computing. Overall, the methodology of using finite element analysis and genetic algorithm optimization provides a robust solution approach for the challenging problem of optimal placements of a large number of actuators in the design of the next generation of adaptive structures.

  7. A dynamic model of functioning of a bank

    NASA Astrophysics Data System (ADS)

    Malafeyev, Oleg; Awasthi, Achal; Zaitseva, Irina; Rezenkov, Denis; Bogdanova, Svetlana

    2018-04-01

    In this paper, we analyze dynamic programming as a novel approach to the problem of maximizing the profits of a bank. The mathematical model of the problem and a description of the bank's operation are presented. The problem is then approached using dynamic programming, which ensures that the solutions obtained are globally optimal and numerically stable. The optimization process is set up as a discrete multi-stage decision process and solved by dynamic programming.
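
    The backward-induction structure of such a discrete multi-stage decision process can be sketched as follows; the state space, actions, rewards, and transitions are placeholder assumptions, not the bank model of the paper.

      # Illustrative backward-induction dynamic program for a discrete multi-stage
      # decision process (a highly simplified stand-in: the state is a discretized
      # capital level, the actions are allocation choices, and the reward is assumed).
      def solve_dp(n_stages, states, actions, reward, transition):
          V = {s: 0.0 for s in states}            # terminal value
          policy = []
          for t in reversed(range(n_stages)):
              V_new, pi = {}, {}
              for s in states:
                  best = max(actions, key=lambda a: reward(t, s, a) + V[transition(s, a)])
                  pi[s] = best
                  V_new[s] = reward(t, s, best) + V[transition(s, best)]
              V, policy = V_new, [pi] + policy
          return V, policy

      states = range(0, 11)                       # capital levels 0..10 (arbitrary units)
      actions = ["lend", "reserve"]
      reward = lambda t, s, a: 0.08 * s if a == "lend" else 0.02 * s
      transition = lambda s, a: min(10, s + 1) if a == "lend" else s

      V, policy = solve_dp(n_stages=5, states=states, actions=actions,
                           reward=reward, transition=transition)
      print(V[5], policy[0][5])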

  8. Hierarchical multistage MCMC follow-up of continuous gravitational wave candidates

    NASA Astrophysics Data System (ADS)

    Ashton, G.; Prix, R.

    2018-05-01

    Leveraging Markov chain Monte Carlo optimization of the F statistic, we introduce a method for the hierarchical follow-up of continuous gravitational wave candidates identified by wide-parameter space semicoherent searches. We demonstrate parameter estimation for continuous wave sources and develop a framework and tools to understand and control the effective size of the parameter space, critical to the success of the method. Monte Carlo tests of simulated signals in noise demonstrate that this method is close to the theoretical optimal performance.

  9. Computational methods for optimal linear-quadratic compensators for infinite dimensional discrete-time systems

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Rosen, I. G.

    1986-01-01

    An abstract approximation theory and computational methods are developed for the determination of optimal linear-quadratic feedback control, observers and compensators for infinite dimensional discrete-time systems. Particular attention is paid to systems whose open-loop dynamics are described by semigroups of operators on Hilbert spaces. The approach taken is based on the finite dimensional approximation of the infinite dimensional operator Riccati equations which characterize the optimal feedback control and observer gains. Theoretical convergence results are presented and discussed. Numerical results for an example involving a heat equation with boundary control are presented and used to demonstrate the feasibility of the method.

  10. Near-optimal, asymptotic tracking in control problems involving state-variable inequality constraints

    NASA Technical Reports Server (NTRS)

    Markopoulos, N.; Calise, A. J.

    1993-01-01

    The class of all piecewise time-continuous controllers tracking a given hypersurface in the state space of a dynamical system can be split by the present transformation technique into two disjoint classes; while the first of these contains all controllers which track the hypersurface in finite time, the second contains all controllers that track the hypersurface asymptotically. On this basis, a reformulation is presented for optimal control problems involving state-variable inequality constraints. If the state constraint is regarded as 'soft', there may exist controllers which are asymptotic, two-sided, and able to yield the optimal value of the performance index.

  11. Localized Overheating Phenomena and Optimization of Spark-Plasma Sintering Tooling Design

    PubMed Central

    Giuntini, Diletta; Olevsky, Eugene A.; Garcia-Cardona, Cristina; Maximenko, Andrey L.; Yurlova, Maria S.; Haines, Christopher D.; Martin, Darold G.; Kapoor, Deepak

    2013-01-01

    The present paper shows the application of a three-dimensional coupled electrical, thermal, mechanical finite element macro-scale modeling framework of Spark Plasma Sintering (SPS) to an actual problem of SPS tooling overheating, encountered during SPS experimentation. The overheating phenomenon is analyzed by varying the geometry of the tooling that exhibits the problem, namely by modeling various tooling configurations involving sequences of disk-shape spacers with step-wise increasing radii. The analysis is conducted by means of finite element simulations, intended to obtain temperature spatial distributions in the graphite press-forms, including punches, dies, and spacers; to identify the temperature peaks and their respective timing, and to propose a more suitable SPS tooling configuration with the avoidance of the overheating as a final aim. Electric currents-based Joule heating, heat transfer, mechanical conditions, and densification are imbedded in the model, utilizing the finite-element software COMSOL™, which possesses a distinguishing ability of coupling multiple physics. Thereby the implementation of a finite element method applicable to a broad range of SPS procedures is carried out, together with the more specific optimization of the SPS tooling design when dealing with excessive heating phenomena. PMID:28811398

  12. Control of Finite-State, Finite Memory Stochastic Systems

    NASA Technical Reports Server (NTRS)

    Sandell, Nils R.

    1974-01-01

    A generalized problem of stochastic control is discussed in which multiple controllers with different data bases are present. The vehicle for the investigation is the finite state, finite memory (FSFM) stochastic control problem. Optimality conditions are obtained by deriving an equivalent deterministic optimal control problem. A FSFM minimum principle is obtained via the equivalent deterministic problem. The minimum principle suggests the development of a numerical optimization algorithm, the min-H algorithm. The relationship between the sufficiency of the minimum principle and the informational properties of the problem are investigated. A problem of hypothesis testing with 1-bit memory is investigated to illustrate the application of control theoretic techniques to information processing problems.

  13. Creation of an Upper Stage Trajectory Capability Boundary to Enable Booster System Trade Space Exploration

    NASA Technical Reports Server (NTRS)

    Walsh, Patrick; Coulon, Adam; Edwards, Stephen; Mavris, Dimitri N.

    2012-01-01

    The problem of trajectory optimization is important in all space missions. The solution of this problem enables one to specify the optimum thrust steering program which should be followed to achieve a specified mission objective, simultaneously satisfying the constraints [1]. It is well known that whether or not the ascent trajectory is optimal can have a significant impact on propellant usage for a given payload, or on payload weight for the same gross vehicle weight [2]. Consequently, ascent guidance commands are usually optimized in some fashion. Multi-stage vehicles add complexity to this analysis process as changes in vehicle properties in one stage propagate to the other stages through gear ratios and changes in the optimal trajectory. These effects can cause an increase in analysis time as more variables are added and convergence of the optimizer to system closure requires more analysis iterations. In this paper, an approach to simplifying this multi-stage problem through the creation of an upper stage capability boundary is presented. This work was completed as part of a larger study focused on trade space exploration for the advanced booster system that will eventually form a part of NASA's new Space Launch System [3]. The approach developed leverages Design of Experiments and Surrogate Modeling [4] techniques to create a predictive model of the SLS upper stage performance. The design of the SLS core stages is considered fixed for the purposes of this study, which results in trajectory parameters such as staging conditions being the only variables relevant to the upper stage. Through the creation of a surrogate model, which takes staging conditions as inputs and predicts the payload mass delivered by the SLS upper stage to a reference orbit as the response, it is possible to identify a "surface" of staging conditions which all satisfy the SLS requirement of placing 130 metric tons into low-Earth orbit (LEO) [3]. This identified surface represents the 130 metric ton capability boundary for the upper stage, such that if the combined first stage and boosters can achieve any one staging point on that surface, then the design is identified as feasible. With the surrogate model created, design and analysis of advanced booster concepts are streamlined, as optimization of the upper stage trajectory is no longer required in every design loop.

  14. Optimizing finite element predictions of local subchondral bone structural stiffness using neural network-derived density-modulus relationships for proximal tibial subchondral cortical and trabecular bone.

    PubMed

    Nazemi, S Majid; Amini, Morteza; Kontulainen, Saija A; Milner, Jaques S; Holdsworth, David W; Masri, Bassam A; Wilson, David R; Johnston, James D

    2017-01-01

    Quantitative computed tomography-based subject-specific finite element modeling has potential to clarify the role of subchondral bone alterations in knee osteoarthritis initiation, progression, and pain. However, it is unclear what density-modulus equation(s) should be applied with subchondral cortical and subchondral trabecular bone when constructing finite element models of the tibia. Using a novel approach applying neural networks, optimization, and back-calculation against in situ experimental testing results, the objective of this study was to identify subchondral-specific equations that optimized finite element predictions of local structural stiffness at the proximal tibial subchondral surface. Thirteen proximal tibial compartments were imaged via quantitative computed tomography. Imaged bone mineral density was converted to elastic moduli using multiple density-modulus equations (93 total variations) then mapped to corresponding finite element models. For each variation, root mean squared error was calculated between finite element prediction and in situ measured stiffness at 47 indentation sites. Resulting errors were used to train an artificial neural network, which provided an unlimited number of model variations, with corresponding error, for predicting stiffness at the subchondral bone surface. Nelder-Mead optimization was used to identify optimum density-modulus equations for predicting stiffness. Finite element modeling predicted 81% of experimental stiffness variance (with 10.5% error) using optimized equations for subchondral cortical and trabecular bone differentiated with a 0.5 g/cm³ density. In comparison with published density-modulus relationships, optimized equations offered improved predictions of local subchondral structural stiffness. Further research is needed with anisotropy inclusion, a smaller voxel size and de-blurring algorithms to improve predictions. Copyright © 2016 Elsevier Ltd. All rights reserved.
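
    The back-calculation step reduces to minimizing a surrogate-predicted error over the parameters of a density-modulus relationship. A minimal sketch using scipy's Nelder-Mead routine is shown below; the power-law form E = a * rho**b and the quadratic error surface are illustrative assumptions, not the study's trained surrogate.

      import numpy as np
      from scipy.optimize import minimize

      # Hypothetical stand-in for the neural-network surrogate that maps
      # density-modulus parameters (a, b) in E = a * rho**b to the RMS error
      # between FE-predicted and measured stiffness. This quadratic bowl is
      # purely illustrative.
      def surrogate_rmse(params):
          a, b = params
          return (a - 6.0) ** 2 + 4.0 * (b - 1.5) ** 2 + 0.5

      res = minimize(surrogate_rmse, x0=np.array([5.0, 2.0]), method="Nelder-Mead")
      print("optimized (a, b):", res.x, "predicted RMSE:", res.fun)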

  15. MULTIPLE INPUT BINARY ADDER EMPLOYING MAGNETIC DRUM DIGITAL COMPUTING APPARATUS

    DOEpatents

    Cooke-Yarborough, E.H.

    1960-12-01

    A digital computing apparatus is described for adding a plurality of multi-digit binary numbers. The apparatus comprises a rotating magnetic drum, a recording head, first and second reading heads disposed adjacent to the first and second recording tracks, and a series of timing signals recorded on the first track. A series of N groups of digit-representing signals is delivered to the recording head at time intervals corresponding to the timing signals, each group consisting of digits of the same significance in the numbers, and the signal series is recorded on the second track of the drum in synchronism with the timing signals on the first track. The multistage registers are stepped cyclically through all positions, and each of the multistage registers is coupled to the control lead of a separate gate circuit to open the corresponding gate at only one selected position in each cycle. One of the gates has its input coupled to the bistable element to receive the sum digit, and the output lead of this gate is coupled to the recording device. The inputs of the other gates receive the digits to be added from the second reading head, and the outputs of these gates are coupled to the adding register. A phase-setting pulse source is connected to each of the multistage registers individually to step the multistage registers to different initial positions in the cycle, and the phase-setting pulse source is actuated each N time interval to shift a sum digit to the bistable element, where the multistage register coupled to bistable element is operated by the phase- setting pulse source to that position in its cycle N steps before opening the first gate, so that this gate opens in synchronism with each of the shifts to pass the sum digits to the recording head.

  16. Wakefield Computations for the CLIC PETS using the Parallel Finite Element Time-Domain Code T3P

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candel, A; Kabel, A.; Lee, L.

    In recent years, SLAC's Advanced Computations Department (ACD) has developed the high-performance parallel 3D electromagnetic time-domain code, T3P, for simulations of wakefields and transients in complex accelerator structures. T3P is based on advanced higher-order Finite Element methods on unstructured grids with quadratic surface approximation. Optimized for large-scale parallel processing on leadership supercomputing facilities, T3P allows simulations of realistic 3D structures with unprecedented accuracy, aiding the design of the next generation of accelerator facilities. Applications to the Compact Linear Collider (CLIC) Power Extraction and Transfer Structure (PETS) are presented.

  17. A multiblock multigrid three-dimensional Euler equation solver

    NASA Technical Reports Server (NTRS)

    Cannizzaro, Frank E.; Elmiligui, Alaa; Melson, N. Duane; Vonlavante, E.

    1990-01-01

    Current aerodynamic designs are often quite complex (geometrically). Flexible computational tools are needed for the analysis of a wide range of configurations with both internal and external flows. In the past, geometrically dissimilar configurations required different analysis codes with different grid topologies in each. The duplication of codes can be avoided with the use of a general multiblock formulation which can handle any grid topology. Rather than hard-wiring the grid topology into the program, it is instead dictated by input to the program. In this work, the compressible Euler equations, written in a body-fitted finite-volume formulation, are solved using a pseudo-time-marching approach. Two upwind methods (van Leer's flux-vector-splitting and Roe's flux-differencing) were investigated. Two types of explicit solvers (a two-step predictor-corrector and a modified multistage Runge-Kutta) were used with multigrid acceleration to enhance convergence. A multiblock strategy is used to allow greater geometric flexibility. A report on simple explicit upwind schemes for solving compressible flows is included.

  18. Seismic wavefield propagation in 2D anisotropic media: Ray theory versus wave-equation simulation

    NASA Astrophysics Data System (ADS)

    Bai, Chao-ying; Hu, Guang-yi; Zhang, Yan-teng; Li, Zhong-sheng

    2014-05-01

    Although ray theory is based on the high-frequency assumption of the elastic wave equation, ray theory and wave-equation simulation methods should mutually validate each other and hence be developed jointly; in practice, however, they have progressed largely in parallel and independently. For this reason, in this paper we try an alternative way to mutually verify and test the computational accuracy and the solution correctness of both the ray theory (the multistage irregular shortest-path method) and the wave-equation simulation method (both the staggered finite difference method and the pseudo-spectral method) in anisotropic VTI and TTI media. Through the analysis and comparison of wavefield snapshots, common source gather profiles and synthetic seismograms, we are able not only to verify the accuracy and correctness of each of the methods, at least for kinematic features, but also to thoroughly understand the kinematic and dynamic features of the wave propagation in anisotropic media. The results show that both the staggered finite difference method and the pseudo-spectral method are able to yield the same results even for complex anisotropic media (such as a fault model); the multistage irregular shortest-path method is capable of predicting similar kinematic features as the wave-equation simulation method does, which can be used to mutually test each other for methodology accuracy and solution correctness. In addition, with the aid of the ray tracing results, it is easy to identify the multi-phases (or multiples) in the wavefield snapshot, common source point gather seismic section and synthetic seismogram predicted by the wave-equation simulation method, which is a key issue for later seismic applications.

  19. A fast finite-difference algorithm for topology optimization of permanent magnets

    NASA Astrophysics Data System (ADS)

    Abert, Claas; Huber, Christian; Bruckner, Florian; Vogler, Christoph; Wautischer, Gregor; Suess, Dieter

    2017-09-01

    We present a finite-difference method for the topology optimization of permanent magnets that is based on the fast-Fourier-transform (FFT) accelerated computation of the stray-field. The presented method employs the density approach for topology optimization and uses an adjoint method for the gradient computation. Comparison to various state-of-the-art finite-element implementations shows a superior performance and accuracy. Moreover, the presented method is very flexible and easy to implement due to various preexisting FFT stray-field implementations that can be used.
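
    The computational kernel being accelerated is a convolution of the (density-weighted) magnetization with a demagnetization kernel. A minimal sketch of such an FFT-accelerated convolution is given below, using a crude dipole-like placeholder kernel rather than the exact demagnetization tensor of the paper.

      import numpy as np
      from scipy.signal import fftconvolve

      # Illustrative FFT-accelerated stray-field-style convolution. The kernel is a
      # rough dipole-like placeholder; only the O(N log N) convolution structure is
      # the point of this sketch.
      nx, ny = 64, 64
      rho = np.zeros((nx, ny))
      rho[24:40, 24:40] = 1.0                                 # design density (magnet region)
      m_z = rho * 1.0                                         # z-magnetization scaled by density

      x = np.arange(-nx + 1, nx)
      y = np.arange(-ny + 1, ny)
      X, Y = np.meshgrid(x, y, indexing="ij")
      r2 = X ** 2 + Y ** 2 + 1.0
      kernel = (2 * Y ** 2 - X ** 2) / r2 ** 2.5              # placeholder dipole-like kernel

      h_stray = fftconvolve(m_z, kernel, mode="same")         # FFT-accelerated convolution
      print(h_stray.shape, h_stray.max())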

  20. Direct Analysis in Real Time-Mass Spectrometry for the Rapid Detection of Metabolites of Aconite Alkaloids in Intestinal Bacteria

    NASA Astrophysics Data System (ADS)

    Li, Xue; Hou, Guangyue; Xing, Junpeng; Song, Fengrui; Liu, Zhiqiang; Liu, Shuying

    2014-12-01

    In the present work, direct analysis of real time ionization combined with multi-stage tandem mass spectrometry (DART-MSn) was used to investigate the metabolic profile of aconite alkaloids in rat intestinal bacteria. A total of 36 metabolites from three aconite alkaloids were identified by using DART-MSn, and the feasibility of quantitative analysis of these analytes was examined. Key parameters of the DART ion source, such as helium gas temperature and pressure, the source-to-MS distance, and the speed of the autosampler, were optimized to achieve high sensitivity, enhance reproducibility, and reduce the occurrence of fragmentation. The instrument analysis time for one sample can be less than 10 s for this method. Compared with ESI-MS and UPLC-MS, the DART-MS is more efficient for directly detecting metabolic samples, and has the advantage of being a simple, high-speed, high-throughput method.

  1. Direct analysis in real time-mass spectrometry for the rapid detection of metabolites of aconite alkaloids in intestinal bacteria.

    PubMed

    Li, Xue; Hou, Guangyue; Xing, Junpeng; Song, Fengrui; Liu, Zhiqiang; Liu, Shuying

    2014-12-01

    In the present work, direct analysis of real time ionization combined with multi-stage tandem mass spectrometry (DART-MS(n)) was used to investigate the metabolic profile of aconite alkaloids in rat intestinal bacteria. A total of 36 metabolites from three aconite alkaloids were identified by using DART-MS(n), and the feasibility of quantitative analysis of these analytes was examined. Key parameters of the DART ion source, such as helium gas temperature and pressure, the source-to-MS distance, and the speed of the autosampler, were optimized to achieve high sensitivity, enhance reproducibility, and reduce the occurrence of fragmentation. The instrument analysis time for one sample can be less than 10 s for this method. Compared with ESI-MS and UPLC-MS, the DART-MS is more efficient for directly detecting metabolic samples, and has the advantage of being a simple, high-speed, high-throughput method.

  2. Multistage Schemes with Multigrid for Euler and Navier-Stokes Equations: Components and Analysis

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Turkel, Eli

    1997-01-01

    A class of explicit multistage time-stepping schemes with centered spatial differencing and multigrid is considered for the compressible Euler and Navier-Stokes equations. These schemes are the basis for a family of computer programs (flow codes with multigrid (FLOMG) series) currently used to solve a wide range of fluid dynamics problems, including internal and external flows. In this paper, the components of these multistage time-stepping schemes are defined, discussed, and in many cases analyzed to provide additional insight into their behavior. Special emphasis is given to numerical dissipation, stability of Runge-Kutta schemes, and the convergence acceleration techniques of multigrid and implicit residual smoothing. Both the Baldwin and Lomax algebraic equilibrium model and the Johnson and King one-half equation nonequilibrium model are used to establish turbulence closure. Implementation of these models is described.
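
    The basic structure of such an explicit multistage (Jameson-type) time-stepping scheme can be sketched as below; the stage coefficients, the toy advection residual, and the simple artificial dissipation are illustrative assumptions, not the FLOMG implementation.

      import numpy as np

      def multistage_rk(u, residual, dt, alphas=(0.25, 1/6, 0.375, 0.5, 1.0)):
          """One step of an explicit multistage scheme: u^(k) = u^n - alpha_k * dt * R(u^(k-1)).
          The stage coefficients shown are a commonly quoted 5-stage set and serve only
          to illustrate the structure."""
          u0 = u.copy()
          uk = u.copy()
          for a in alphas:
              uk = u0 - a * dt * residual(uk)
          return uk

      # Toy residual: linear advection with central differences plus a small
      # second-difference term standing in for the scheme's numerical dissipation.
      def residual(u, c=1.0, dx=0.02, eps=0.02):
          dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
          diss = eps * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx
          return c * dudx - diss

      x = np.linspace(0, 1, 50, endpoint=False)
      u = np.exp(-100 * (x - 0.5) ** 2)
      for _ in range(100):
          u = multistage_rk(u, residual, dt=0.005)
      print(u.max())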

  3. Multistage stereotactic radiosurgery for large cerebral arteriovenous malformations using the Gamma Knife platform.

    PubMed

    Ding, Chuxiong; Hrycushko, Brian; Whitworth, Louis; Li, Xiang; Nedzi, Lucien; Weprin, Bradley; Abdulrahman, Ramzi; Welch, Babu; Jiang, Steve B; Wardak, Zabi; Timmerman, Robert D

    2017-10-01

    Radiosurgery is an established technique to treat cerebral arteriovenous malformations (AVMs). Obliteration of larger AVMs (> 10-15 cm³ or diameter > 3 cm) in a single session is challenging with current radiosurgery platforms due to toxicity. We present a novel technique of multistage stereotactic radiosurgery (SRS) for large intracranial arteriovenous malformations (AVM) using the Gamma Knife system. Eighteen patients with large (> 10-15 cm³ or diameter > 3 cm) AVMs, which were previously treated using a staged SRS technique on the Cyberknife platform, were retrospectively selected for this study. The AVMs were contoured and divided into 3-8 subtargets to be treated sequentially in a staged approach at 0.5- to 4-week intervals. The prescription dose ranged from 15 Gy to 20 Gy, depending on the subtarget number, volume, and location. Gamma Knife plans using multiple collimator settings were generated and optimized. The coordinates of each shot from the initial plan covering the total AVM target were extracted based on their relative positions within the frame system. The shots were regrouped based on their location with respect to the subtarget contours to generate subplans for each stage. The delivery time of each shot for a subtarget was decay corrected with ⁶⁰Co for staging the treatment course to generate the same dose distribution as that planned for the total AVM target. Conformality indices and dose-volume analysis were performed to evaluate treatment plans. With the shot redistribution technique, the composite dose for the multistaged treatment of multiple subtargets is equivalent to the initial plan for the total AVM target. Gamma Knife plans resulted in an average PTV coverage of 96.3 ± 0.9% and a PITV of 1.23 ± 0.1. The resulting conformality index, V12Gy, and R50 dose spillage values were 0.76 ± 0.05, 3.4 ± 1.8, and 3.1 ± 0.5, respectively. The Gamma Knife system can deliver a multistaged conformal dose to treat large AVMs when correcting for translational setup errors of each shot at each staged treatment. © 2017 American Association of Physicists in Medicine.
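
    The decay correction mentioned above follows from the exponential decay of the ⁶⁰Co dose rate: a shot delivered a time Δt after the reference plan must have its beam-on time scaled by exp(+λΔt), with λ = ln 2 / T_1/2 and T_1/2 ≈ 5.27 years. A minimal sketch with illustrative numbers follows.

      import math

      CO60_HALF_LIFE_DAYS = 5.27 * 365.25   # cobalt-60 half-life, about 5.27 years

      def decay_corrected_time(planned_time_min, days_after_plan):
          """Scale a shot's beam-on time so the delivered dose matches the plan:
          the source dose rate falls as exp(-lambda * dt), so beam-on time grows
          by exp(+lambda * dt). Input values are illustrative."""
          lam = math.log(2.0) / CO60_HALF_LIFE_DAYS
          return planned_time_min * math.exp(lam * days_after_plan)

      # A shot planned at 2.0 min, delivered in a stage 28 days after the reference plan:
      print(round(decay_corrected_time(2.0, 28), 3), "min")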

  4. Structural optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; James, B.; Dovi, A.

    1983-01-01

    A method is described for decomposing an optimization problem into a set of subproblems and a coordination problem which preserves coupling between the subproblems. The method is introduced as a special case of multilevel, multidisciplinary system optimization and its algorithm is fully described for two level optimization for structures assembled of finite elements of arbitrary type. Numerical results are given for an example of a framework to show that the decomposition method converges and yields results comparable to those obtained without decomposition. It is pointed out that optimization by decomposition should reduce the design time by allowing groups of engineers, using different computers to work concurrently on the same large problem.

  5. Reliability of system for precise cold forging

    NASA Astrophysics Data System (ADS)

    Krušič, Vid; Rodič, Tomaž

    2017-07-01

    The influence of the scatter of the principal input parameters of the forging system on the dimensional accuracy of the product and on the tool life for the closed-die forging process is presented in this paper. The scatter of the essential input parameters for the closed-die upsetting process was adjusted to the maximal values that enabled reliable production of a dimensionally accurate product at optimal tool life. An operating window was created within which the maximal scatter of the principal input parameters for the closed-die upsetting process still ensures the desired dimensional accuracy of the product and the optimal tool life. Application of the adjustment of the process input parameters is illustrated by the example of a mass-produced inner race of a homokinetic joint. High productivity in the manufacture of elements by cold massive extrusion is often achieved by multiple forming operations performed simultaneously on the same press. By redesigning the time sequence of the forming operations in the multistage forming process of a starter barrel during the working stroke, the course of the resultant force is optimized.

  6. Image simulation for automatic license plate recognition

    NASA Astrophysics Data System (ADS)

    Bala, Raja; Zhao, Yonghui; Burry, Aaron; Kozitsky, Vladimir; Fillion, Claude; Saunders, Craig; Rodríguez-Serrano, José

    2012-01-01

    Automatic license plate recognition (ALPR) is an important capability for traffic surveillance applications, including toll monitoring and detection of different types of traffic violations. ALPR is a multi-stage process comprising plate localization, character segmentation, optical character recognition (OCR), and identification of originating jurisdiction (i.e. state or province). Training of an ALPR system for a new jurisdiction typically involves gathering vast amounts of license plate images and associated ground truth data, followed by iterative tuning and optimization of the ALPR algorithms. The substantial time and effort required to train and optimize the ALPR system can result in excessive operational cost and overhead. In this paper we propose a framework to create an artificial set of license plate images for accelerated training and optimization of ALPR algorithms. The framework comprises two steps: the synthesis of license plate images according to the design and layout for a jurisdiction of interest; and the modeling of imaging transformations and distortions typically encountered in the image capture process. Distortion parameters are estimated by measurements of real plate images. The simulation methodology is successfully demonstrated for training of OCR.

  7. Neural network for control of rearrangeable Clos networks.

    PubMed

    Park, Y K; Cherkassky, V

    1994-09-01

    Rapid evolution in the field of communication networks requires high speed switching technologies. This involves a high degree of parallelism in switching control and routing performed at the hardware level. The multistage crossbar networks have always been attractive to switch designers. In this paper a neural network approach to controlling a three-stage Clos network in real time is proposed. This controller provides optimal routing of communication traffic requests on a call-by-call basis by rearranging existing connections, with a minimum length of rearrangement sequence so that a new blocked call request can be accommodated. The proposed neural network controller uses Paull's rearrangement algorithm, along with the special (least used) switch selection rule in order to minimize the length of rearrangement sequences. The functional behavior of our model is verified by simulations and it is shown that the convergence time required for finding an optimal solution is constant, regardless of the switching network size. The performance is evaluated for random traffic with various traffic loads. Simulation results show that applying the least used switch selection rule increases the efficiency in switch rearrangements, reducing the network convergence time. The implementation aspects are also discussed to show the feasibility of the proposed approach.

  8. Finite Optimal Stopping Problems: The Seller's Perspective

    ERIC Educational Resources Information Center

    Hemmati, Mehdi; Smith, J. Cole

    2011-01-01

    We consider a version of an optimal stopping problem, in which a customer is presented with a finite set of items, one by one. The customer is aware of the number of items in the finite set and the minimum and maximum possible value of each item, and must purchase exactly one item. When an item is presented to the customer, she or he observes its…

  9. Multiple-stage decisions in a marine central-place forager

    PubMed Central

    Friedlaender, Ari S.; Johnston, David W.; Tyson, Reny B.; Kaltenberg, Amanda; Goldbogen, Jeremy A.; Stimpert, Alison K.; Curtice, Corrie; Hazen, Elliott L.; Halpin, Patrick N.; Read, Andrew J.; Nowacek, Douglas P.

    2016-01-01

    Air-breathing marine animals face a complex set of physical challenges associated with diving that affect the decisions of how to optimize feeding. Baleen whales (Mysticeti) have evolved bulk-filter feeding mechanisms to efficiently feed on dense prey patches. Baleen whales are central place foragers where oxygen at the surface represents the central place and depth acts as the distance to prey. Although hypothesized that baleen whales will target the densest prey patches anywhere in the water column, how depth and density interact to influence foraging behaviour is poorly understood. We used multi-sensor archival tags and active acoustics to quantify Antarctic humpback whale foraging behaviour relative to prey. Our analyses reveal multi-stage foraging decisions driven by both krill depth and density. During daylight hours when whales did not feed, krill were found in deep high-density patches. As krill migrated vertically into larger and less dense patches near the surface, whales began to forage. During foraging bouts, we found that feeding rates (number of feeding lunges per hour) were greatest when prey was shallowest, and feeding rates decreased with increasing dive depth. This strategy is consistent with previous models of how air-breathing diving animals optimize foraging efficiency. Thus, humpback whales forage mainly when prey is more broadly distributed and shallower, presumably to minimize diving and searching costs and to increase feeding rates overall and thus foraging efficiency. Using direct measurements of feeding behaviour from animal-borne tags and prey availability from echosounders, our study demonstrates a multi-stage foraging process in a central place forager that we suggest acts to optimize overall efficiency by maximizing net energy gain over time. These data reveal a previously unrecognized level of complexity in predator–prey interactions and underscores the need to simultaneously measure prey distribution in marine central place forager studies. PMID:27293784

  10. Thermodynamical analysis of a quantum heat engine based on harmonic oscillators.

    PubMed

    Insinga, Andrea; Andresen, Bjarne; Salamon, Peter

    2016-07-01

    Many models of heat engines have been studied with the tools of finite-time thermodynamics and an ensemble of independent quantum systems as the working fluid. Because of their convenient analytical properties, harmonic oscillators are the most frequently used example of a quantum system. We analyze different thermodynamical aspects with the final aim of optimizing the performance of the engine in terms of the mechanical power provided during a finite-time Otto cycle. The heat exchange mechanism between the working fluid and the thermal reservoirs is provided by the Lindblad formalism. We describe an analytical method to find the limit cycle and give conditions for a stable limit cycle to exist. We explore the power production landscape as the durations of the four branches of the cycle are varied for short times, intermediate times, and special frictionless times. For short times we find a periodic structure with atolls of purely dissipative operation surrounding islands of divergent behavior where, rather than tending to a limit cycle, the working fluid accumulates more and more energy. For frictionless times the periodic structure is gone and we come very close to the global optimal operation. The global optimum is found and interestingly comes with a particular value of the cycle time.

  11. Multi-level adaptive finite element methods. 1: Variation problems

    NASA Technical Reports Server (NTRS)

    Brandt, A.

    1979-01-01

    A general numerical strategy for solving partial differential equations and other functional problems by cycling between coarser and finer levels of discretization is described. Optimal discretization schemes are provided together with very fast general solvers. The strategy is described in terms of finite element discretizations of general nonlinear minimization problems. The basic processes (relaxation sweeps, fine-grid-to-coarse-grid transfers of residuals, coarse-to-fine interpolations of corrections) are directly and naturally determined by the objective functional and the sequence of approximation spaces. The natural processes, however, are not always optimal. Concrete examples are given and some new techniques are reviewed, including local truncation extrapolation and a multilevel procedure for inexpensively solving chains of many boundary value problems, such as those arising in the solution of time-dependent problems.

  12. Machining fixture layout optimization using particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Dou, Jianping; Wang, Xingsong; Wang, Lei

    2011-05-01

    Optimization of fixture layout (locator and clamp locations) is critical to reduce geometric error of the workpiece during the machining process. In this paper, the application of a particle swarm optimization (PSO) algorithm is presented to minimize the workpiece deformation in the machining region. A PSO-based approach is developed to optimize the fixture layout by integrating ANSYS Parametric Design Language (APDL) finite element analysis to compute the objective function for a given fixture layout. A particle library approach is used to decrease the total computation time. A computational experiment on a 2D case shows that the number of function evaluations is decreased by about 96%. A case study illustrates the effectiveness and efficiency of the PSO-based optimization approach.
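
    A bare-bones PSO loop of the kind described, with the finite element deformation evaluation replaced by a placeholder analytic objective, is sketched below; the swarm size, inertia, and acceleration coefficients are illustrative assumptions.

      import numpy as np

      def pso(objective, bounds, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
          """Bare-bones particle swarm optimizer. In the fixture-layout application the
          objective would call the finite element model (e.g. via APDL) to evaluate
          workpiece deformation for a candidate locator/clamp layout; here it is a
          placeholder analytic function."""
          rng = np.random.default_rng(seed)
          lo, hi = bounds
          dim = lo.size
          x = rng.uniform(lo, hi, size=(n_particles, dim))
          v = np.zeros_like(x)
          pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
          g = pbest[pbest_val.argmin()].copy()

          for _ in range(n_iter):
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              vals = np.array([objective(p) for p in x])
              improved = vals < pbest_val
              pbest[improved], pbest_val[improved] = x[improved], vals[improved]
              g = pbest[pbest_val.argmin()].copy()
          return g, pbest_val.min()

      # Placeholder "deformation" objective over 4 layout coordinates.
      deformation = lambda p: np.sum((p - np.array([0.2, 0.8, 0.5, 0.5])) ** 2)
      best, val = pso(deformation, (np.zeros(4), np.ones(4)))
      print(best, val)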

  13. Finite Set Control Transcription for Optimal Control Applications

    DTIC Science & Technology

    2009-05-01

    List of figures (excerpt): 1.1 The Parameters of x; 2.1 Categories of Optimization Algorithms. ...Programming (NLP) algorithm, such as SNOPT [2] (hereafter, called the optimizer). The Finite Set Control Transcription (FSCT) method is essentially a ...artificial neural networks, genetic algorithms, or combinations thereof for analysis [4, 5]. Indeed, an actual biological neural network is an example of

  14. Impact of ultrasound on solid-liquid extraction of phenolic compounds from maritime pine sawdust waste. Kinetics, optimization and large scale experiments.

    PubMed

    Meullemiestre, A; Petitcolas, E; Maache-Rezzoug, Z; Chemat, F; Rezzoug, S A

    2016-01-01

    Maritime pine sawdust, a by-product of the wood-processing industry, has been investigated as a potential source of polyphenols, which were extracted by ultrasound-assisted maceration (UAM). UAM was optimized to enhance the extraction efficiency of polyphenols and reduce processing time. First, a preliminary study was carried out to optimize the solid/liquid ratio (6 g of dry material per mL) and the particle size (0.26 cm²) by conventional maceration (CVM). Under these conditions, the optimum conditions for polyphenol extraction by UAM, obtained by response surface methodology, were 0.67 W/cm² for the ultrasonic intensity (UI), 40 °C for the processing temperature (T) and 43 min for the sonication time (t). UAM was compared with CVM; the results showed that the quantity of polyphenols was improved by 40% (342.4 and 233.5 mg of catechin equivalent per 100 g of dry basis for UAM and CVM, respectively). A multistage cross-current extraction procedure allowed evaluation of the real impact of UAM on solid-liquid extraction enhancement. The potential industrialization of this procedure was implemented through a transition from a lab-scale sonicated reactor (3 L) to a large-scale one with a 30 L volume. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Using Markov Models of Fault Growth Physics and Environmental Stresses to Optimize Control Actions

    NASA Technical Reports Server (NTRS)

    Bole, Brian; Goebel, Kai; Vachtsevanos, George

    2012-01-01

    A generalized Markov chain representation of fault dynamics is presented for the case that available modeling of fault growth physics and future environmental stresses can be represented by two independent stochastic process models. A contrived but representatively challenging example will be presented and analyzed, in which uncertainty in the modeling of fault growth physics is represented by a uniformly distributed dice throwing process, and a discrete random walk is used to represent uncertain modeling of future exogenous loading demands to be placed on the system. A finite horizon dynamic programming algorithm is used to solve for an optimal control policy over a finite time window for the case that stochastic models representing physics of failure and future environmental stresses are known, and the states of both stochastic processes are observable by implemented control routines. The fundamental limitations of optimization performed in the presence of uncertain modeling information are examined by comparing the outcomes obtained from simulations of an optimizing control policy with the outcomes that would be achievable if all modeling uncertainties were removed from the system.
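
    A toy finite-horizon dynamic program over the joint state of an independent fault-growth chain and load chain is sketched below; the transition probabilities, rewards, and horizon are placeholder assumptions rather than the models used in the study.

      import numpy as np

      # Toy finite-horizon dynamic program in the spirit of the abstract: the fault
      # state grows stochastically (faster under higher load), the load follows an
      # independent random walk, and the control trades throughput against the risk
      # of reaching failure before the end of the horizon. All numbers are assumed.
      H = 20                     # horizon (decision epochs)
      FAULT, LOAD, U = 6, 3, 2   # fault levels 0..5 (5 = failed), load levels, actions

      def p_grow(f, load, u):
          """Probability of the fault growing one level this step."""
          if f >= FAULT - 1:
              return 0.0
          return min(1.0, 0.05 + 0.05 * load + (0.10 if u == 1 else 0.0))

      reward = lambda f, load, u: 0.0 if f >= FAULT - 1 else (1.0 if u == 1 else 0.4)
      load_T = np.array([[0.6, 0.4, 0.0], [0.3, 0.4, 0.3], [0.0, 0.4, 0.6]])  # random walk

      V = np.zeros((FAULT, LOAD))
      for t in range(H - 1, -1, -1):
          V_new = np.zeros_like(V)
          for f in range(FAULT):
              for l in range(LOAD):
                  q = []
                  for u in range(U):
                      pg = p_grow(f, l, u)
                      f_next = min(f + 1, FAULT - 1)
                      ev = sum(load_T[l, l2] * ((1 - pg) * V[f, l2] + pg * V[f_next, l2])
                               for l2 in range(LOAD))
                      q.append(reward(f, l, u) + ev)
                  V_new[f, l] = max(q)
          V = V_new
      print(V[0])   # value of starting healthy under each initial load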

  16. Coefficient of performance for a low-dissipation Carnot-like refrigerator with nonadiabatic dissipation

    NASA Astrophysics Data System (ADS)

    Hu, Yong; Wu, Feifei; Ma, Yongli; He, Jizhou; Wang, Jianhui; Hernández, A. Calvo; Roco, J. M. M.

    2013-12-01

    We study the coefficient of performance (COP) and its bounds for a Carnot-like refrigerator working between two heat reservoirs at constant temperatures Th and Tc, under two optimization criteria χ and Ω. In view of the fact that an “adiabatic” process usually takes finite time and is nonisentropic, the nonadiabatic dissipation and the finite time required for the adiabatic processes are taken into account by assuming low dissipation. For given optimization criteria, we find that the lower and upper bounds of the COP are the same as the corresponding ones obtained from the previous idealized models where any adiabatic process is undergone instantaneously with constant entropy. To describe some particular models with very fast adiabatic transitions, we also consider the influence of the nonadiabatic dissipation on the bounds of the COP, under the assumption that the irreversible entropy production in the adiabatic process is constant and independent of time. Our theoretical predictions match the observed COPs of real refrigerators more closely than the ones derived in the previous models, providing a strong argument in favor of our approach.

  17. Adjoint sensitivity analysis of a tumor growth model and its application to spatiotemporal radiotherapy optimization.

    PubMed

    Fujarewicz, Krzysztof; Lakomiec, Krzysztof

    2016-12-01

    We investigate a spatial model of growth of a tumor and its sensitivity to radiotherapy. It is assumed that the radiation dose may vary in time and space, like in intensity modulated radiotherapy (IMRT). The change of the final state of the tumor depends on local differences in the radiation dose and varies with the time and the place of these local changes. This leads to the concept of a tumor's spatiotemporal sensitivity to radiation, which is a function of time and space. We show how adjoint sensitivity analysis may be applied to calculate the spatiotemporal sensitivity of the finite difference scheme resulting from the partial differential equation describing the tumor growth. We demonstrate results of this approach to the tumor proliferation, invasion and response to radiotherapy (PIRT) model and we compare the accuracy and the computational effort of the method to the simple forward finite difference sensitivity analysis. Furthermore, we use the spatiotemporal sensitivity during the gradient-based optimization of the spatiotemporal radiation protocol and present results for different parameters of the model.

  18. Thermodynamic metrics and optimal paths.

    PubMed

    Sivak, David A; Crooks, Gavin E

    2012-05-11

    A fundamental problem in modern thermodynamics is how a molecular-scale machine performs useful work, while operating away from thermal equilibrium without excessive dissipation. To this end, we derive a friction tensor that induces a Riemannian manifold on the space of thermodynamic states. Within the linear-response regime, this metric structure controls the dissipation of finite-time transformations, and bestows optimal protocols with many useful properties. We discuss the connection to the existing thermodynamic length formalism, and demonstrate the utility of this metric by solving for optimal control parameter protocols in a simple nonequilibrium model.
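
    Up to notation, the central objects of this formalism are the friction tensor built from equilibrium force fluctuations and the excess power it controls in the linear-response regime; the expressions below are quoted from the thermodynamic-length literature and should be checked against the paper for sign and normalization conventions.

      % Friction tensor from equilibrium fluctuations of the conjugate forces
      % X_i = -\partial H/\partial\lambda_i, and the resulting excess power.
      \begin{align}
        \zeta_{ij}(\boldsymbol{\lambda}) &= \beta \int_0^\infty
          \langle \delta X_i(t)\,\delta X_j(0)\rangle_{\boldsymbol{\lambda}}\, dt, \\
        \langle P_{\mathrm{ex}}\rangle &\approx
          \frac{d\lambda^i}{dt}\,\zeta_{ij}(\boldsymbol{\lambda})\,\frac{d\lambda^j}{dt},
      \end{align}
      % so that, for a fixed protocol duration, dissipation is minimized along a
      % geodesic of the metric \zeta_{ij} traversed at constant excess power.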

  19. Finite dimensional approximation of a class of constrained nonlinear optimal control problems

    NASA Technical Reports Server (NTRS)

    Gunzburger, Max D.; Hou, L. S.

    1994-01-01

    An abstract framework for the analysis and approximation of a class of nonlinear optimal control and optimization problems is constructed. Nonlinearities occur in both the objective functional and in the constraints. The framework includes an abstract nonlinear optimization problem posed on infinite dimensional spaces, and approximate problem posed on finite dimensional spaces, together with a number of hypotheses concerning the two problems. The framework is used to show that optimal solutions exist, to show that Lagrange multipliers may be used to enforce the constraints, to derive an optimality system from which optimal states and controls may be deduced, and to derive existence results and error estimates for solutions of the approximate problem. The abstract framework and the results derived from that framework are then applied to three concrete control or optimization problems and their approximation by finite element methods. The first involves the von Karman plate equations of nonlinear elasticity, the second, the Ginzburg-Landau equations of superconductivity, and the third, the Navier-Stokes equations for incompressible, viscous flows.

  20. A spatial stochastic programming model for timber and core area management under risk of fires

    Treesearch

    Yu Wei; Michael Bevers; Dung Nguyen; Erin Belval

    2014-01-01

    Previous stochastic models in harvest scheduling seldom address explicit spatial management concerns under the influence of natural disturbances. We employ multistage stochastic programming models to explore the challenges and advantages of building spatial optimization models that account for the influences of random stand-replacing fires. Our exploratory test models...

  1. Simulation and optimization of a dc SQUID with finite capacitance

    NASA Astrophysics Data System (ADS)

    de Waal, V. J.; Schrijner, P.; Llurba, R.

    1984-02-01

    This paper deals with the calculation of the noise and the optimization of the energy resolution of a dc SQUID with finite junction capacitance. Up to now, noise calculations of dc SQUIDs have been performed using a model without parasitic capacitances across the Josephson junctions. As the capacitances limit the performance of the SQUID, a good optimization must take them into account. The model consists of two coupled nonlinear second-order differential equations. The equations are very suitable for simulation with an analog circuit. We implemented the model on a hybrid computer. The noise spectrum from the model is calculated with a fast Fourier transform. A calculation of the energy resolution for one set of parameters takes about 6 min of computer time. Detailed results of the optimization are given for inductance-temperature products of LT = 1.2 and 5 nH K. Within a range of β and β_c between 1 and 2, which is optimum, the energy resolution is nearly independent of these variables. In this region the energy resolution is near the value calculated without parasitic capacitances. Results of the optimized energy resolution are given as a function of LT between 1.2 and 10 nH K.

  2. Trajectory Design Employing Convex Optimization for Landing on Irregularly Shaped Asteroids

    NASA Technical Reports Server (NTRS)

    Pinson, Robin M.; Lu, Ping

    2016-01-01

    Mission proposals that land spacecraft on asteroids are becoming increasingly popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site with pinpoint precision. The problem under investigation is how to design a propellant optimal powered descent trajectory that can be quickly computed onboard the spacecraft, without interaction from the ground control. The propellant optimal control problem in this work is to determine the optimal finite thrust vector to land the spacecraft at a specified location, in the presence of a highly nonlinear gravity field, subject to various mission and operational constraints. The proposed solution uses convex optimization, a gravity model with higher fidelity than Newtonian, and an iterative solution process for a fixed final time problem. In addition, a second optimization method is wrapped around the convex optimization problem to determine the optimal flight time that yields the lowest propellant usage over all flight times. Gravity models designed for irregularly shaped asteroids are investigated. Success of the algorithm is demonstrated by designing powered descent trajectories for the elongated binary asteroid Castalia.
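
    To make the convex-optimization step concrete, here is a minimal, hedged sketch (not the authors' algorithm) of a fixed-final-time, fuel-minimizing descent for a double-integrator spacecraft under a constant-gravity placeholder, posed as a second-order cone program with cvxpy; the numbers, bounds, and gravity vector are all illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

N, dt = 60, 1.0                      # fixed final time: N*dt seconds (assumed)
g = np.array([0.0, 0.0, -0.5])       # placeholder constant gravity [m/s^2]
u_max = 2.0                          # max thrust acceleration [m/s^2] (assumed)
r0, v0 = np.array([100.0, 50.0, 200.0]), np.array([0.0, 0.0, -5.0])

r = cp.Variable((N + 1, 3))          # position
v = cp.Variable((N + 1, 3))          # velocity
u = cp.Variable((N, 3))              # thrust acceleration (control)

cons = [r[0] == r0, v[0] == v0, r[N] == 0, v[N] == 0]
for k in range(N):
    cons += [v[k + 1] == v[k] + dt * (u[k] + g),
             r[k + 1] == r[k] + dt * v[k] + 0.5 * dt**2 * (u[k] + g),
             cp.norm(u[k]) <= u_max]

# Sum of thrust magnitudes is a convex proxy for propellant use
prob = cp.Problem(cp.Minimize(dt * sum(cp.norm(u[k]) for k in range(N))), cons)
prob.solve()
print(prob.status, prob.value)
```

    An outer one-dimensional search over the flight time N*dt, as the abstract describes, can then wrap this inner convex solve; minimum-thrust bounds (which make the true problem nonconvex) and the higher-fidelity gravity model are omitted from this sketch.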

  3. End-point controller design for an experimental two-link flexible manipulator using convex optimization

    NASA Technical Reports Server (NTRS)

    Oakley, Celia M.; Barratt, Craig H.

    1990-01-01

    Recent results in linear controller design are used to design an end-point controller for an experimental two-link flexible manipulator. A nominal 14-state linear-quadratic-Gaussian (LQG) controller was augmented with a 528-tap finite-impulse-response (FIR) filter designed using convex optimization techniques. The resulting 278-state controller produced improved end-point trajectory tracking and disturbance rejection in simulation and experimentally in real time.
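
    As a hedged, generic illustration of FIR design by convex optimization (a minimax frequency-domain fit, not the controller synthesis actually used in the paper), the sketch below computes a short lowpass FIR filter with cvxpy; the tap count and band edge are arbitrary placeholders.

```python
import numpy as np
import cvxpy as cp

n_taps = 32
w = np.linspace(0, np.pi, 256)                           # frequency grid
k = np.arange(n_taps)
A, B = np.cos(np.outer(w, k)), -np.sin(np.outer(w, k))   # Re/Im of e^{-j w k}
desired = (w <= 0.3 * np.pi).astype(float)               # ideal lowpass (placeholder)

h = cp.Variable(n_taps)                                  # filter taps
t = cp.Variable()                                        # worst-case complex error
cons = [cp.norm(cp.hstack([A[i] @ h - desired[i], B[i] @ h])) <= t
        for i in range(len(w))]
cp.Problem(cp.Minimize(t), cons).solve()
print("worst-case magnitude error:", t.value)
```

    Additional convex constraints (stopband weighting, step-response bounds, and so on) can be appended without changing the solution machinery, which is what makes this style of FIR design attractive for augmenting a nominal controller.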

  4. Prefire identification for pulse-power systems

    DOEpatents

    Longmire, J.L.; Thuot, M.E.; Warren, D.S.

    1982-08-23

    Prefires in a high-power, high-frequency, multi-stage pulse generator are detected by a system having an EMI shielded pulse timing transmitter associated with and tailored to each stage of the pulse generator. Each pulse timing transmitter upon detection of a pulse triggers a laser diode to send an optical signal through a high frequency fiber optic cable to a pulse timing receiver which converts the optical signal to an electrical pulse. The electrical pulses from all pulse timing receivers are fed through an OR circuit to start a time interval measuring device and each electrical pulse is used to stop an individual channel in the measuring device thereby recording the firing sequence of the multi-stage pulse generator.

  5. Prefire identification for pulse power systems

    DOEpatents

    Longmire, Jerry L.; Thuot, Michael E.; Warren, David S.

    1985-01-01

    Prefires in a high-power, high-frequency, multi-stage pulse generator are detected by a system having an EMI shielded pulse timing transmitter associated with and tailored to each stage of the pulse generator. Each pulse timing transmitter upon detection of a pulse triggers a laser diode to send an optical signal through a high frequency fiber optic cable to a pulse timing receiver which converts the optical signal to an electrical pulse. The electrical pulses from all pulse timing receivers are fed through an OR circuit to start a time interval measuring device and each electrical pulse is used to stop an individual channel in the measuring device thereby recording the firing sequence of the multi-stage pulse generator.

  6. Inversion of Robin coefficient by a spectral stochastic finite element approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin Bangti; Zou Jun

    2008-03-01

    This paper investigates a variational approach to the nonlinear stochastic inverse problem of probabilistically calibrating the Robin coefficient from boundary measurements for the steady-state heat conduction. The problem is formulated into an optimization problem, and mathematical properties relevant to its numerical computations are investigated. The spectral stochastic finite element method using polynomial chaos is utilized for the discretization of the optimization problem, and its convergence is analyzed. The nonlinear conjugate gradient method is derived for the optimization system. Numerical results for several two-dimensional problems are presented to illustrate the accuracy and efficiency of the stochastic finite element method.

  7. Multi-stage phononic crystal structure for anchor-loss reduction of thin-film piezoelectric-on-silicon microelectromechanical-system resonator

    NASA Astrophysics Data System (ADS)

    Bao, Fei-Hong; Bao, Lei-Lei; Li, Xin-Yi; Ammar Khan, Muhammad; Wu, Hua-Ye; Qin, Feng; Zhang, Ting; Zhang, Yi; Bao, Jing-Fu; Zhang, Xiao-Sheng

    2018-06-01

    Thin-film piezoelectric-on-silicon acoustic wave resonators are promising for the development of system-on-chip integrated circuits with micro/nano-engineered timing references. However, in order to realize their full potential, a further enhancement of the quality factor (Q) is required. In this study, a novel approach, based on a multi-stage phononic crystal (PnC) structure, was proposed to achieve an ultra-high Q. A systematic study revealed that the multi-stage PnC structure formed a frequency-selective band gap to effectively prohibit the dissipation of acoustic waves through tethers, which significantly reduced the anchor loss, leading to an insertion-loss reduction and an enhancement of Q. The maximum unloaded quality factor Q_u of the fabricated resonators reached ∼10,000 at 109.85 MHz, an enhancement by a factor of 19.4.

  8. One shot methods for optimal control of distributed parameter systems 1: Finite dimensional control

    NASA Technical Reports Server (NTRS)

    Taasan, Shlomo

    1991-01-01

    The efficient numerical treatment of optimal control problems governed by elliptic partial differential equations (PDEs) and systems of elliptic PDEs, where the control is finite dimensional, is discussed. Distributed control as well as boundary control cases are considered. The main characteristic of the new methods is that they are designed to solve the full optimization problem directly, rather than accelerating a descent method by an efficient multigrid solver for the equations involved. The methods use the adjoint state in order to achieve an efficient smoother and a robust coarsening strategy. The main idea is the treatment of the control variables on appropriate scales, i.e., control variables that correspond to smooth functions are solved for on coarse grids, depending on the smoothness of these functions. Solution of the control problems is achieved at the cost of solving the constraint equations about two to three times (by a multigrid solver). Numerical examples demonstrate the effectiveness of the proposed method in distributed control, pointwise control, and boundary control problems.

  9. Near-Optimal Tracking Control of Mobile Robots Via Receding-Horizon Dual Heuristic Programming.

    PubMed

    Lian, Chuanqiang; Xu, Xin; Chen, Hong; He, Haibo

    2016-11-01

    Trajectory tracking control of wheeled mobile robots (WMRs) has been an important research topic in control theory and robotics. Although various tracking control methods with stability have been developed for WMRs, it is still difficult to design optimal or near-optimal tracking controller under uncertainties and disturbances. In this paper, a near-optimal tracking control method is presented for WMRs based on receding-horizon dual heuristic programming (RHDHP). In the proposed method, a backstepping kinematic controller is designed to generate desired velocity profiles and the receding horizon strategy is used to decompose the infinite-horizon optimal control problem into a series of finite-horizon optimal control problems. In each horizon, a closed-loop tracking control policy is successively updated using a class of approximate dynamic programming algorithms called finite-horizon dual heuristic programming (DHP). The convergence property of the proposed method is analyzed and it is shown that the tracking control system based on RHDHP is asymptotically stable by using the Lyapunov approach. Simulation results on three tracking control problems demonstrate that the proposed method has improved control performance when compared with conventional model predictive control (MPC) and DHP. It is also illustrated that the proposed method has lower computational burden than conventional MPC, which is very beneficial for real-time tracking control.

  10. Approximate Dynamic Programming for Military Resource Allocation

    DTIC Science & Technology

    2014-12-26

    [Figure: difference in means (Mean ADP − Mean MMR) versus problem number, panel (c), W = 200, T = 200.] ... will provide analysts with a means for effectively determining which weapons concepts to explore further, how to appropriately fit a set of aircraft ... in which optimization of the multi-stage DWTA is used to determine optimal weaponeering of aircraft. Because of its flexibility and applicability to ...

  11. Decision Support Requirements in a Unified Life Cycle Engineering (ULCE) Environment. Volume 2. Conceptual Approaches to Optimization.

    DTIC Science & Technology

    1988-05-01

    ... in turn, is controlled by the units above it. Dynamic programming is a mathematical technique well suited for optimization of multistage models. This ... interval to a desired accuracy. Several region elimination methods have been discussed in the literature, including the Golden Section and Fibonacci methods.

  12. Optimal pricing and replenishment policies for instantaneous deteriorating items with backlogging and trade credit under inflation

    NASA Astrophysics Data System (ADS)

    Sundara Rajan, R.; Uthayakumar, R.

    2017-12-01

    In this paper we develop an economic order quantity model to investigate the optimal replenishment policies for instantaneously deteriorating items under inflation and trade credit. The demand rate is a linear function of selling price and decays exponentially with time over a finite planning horizon. Shortages are allowed and partially backlogged. Under these conditions, we model the retailer's inventory system as a profit maximization problem to determine the optimal selling price, optimal order quantity, and optimal replenishment time. An easy-to-use algorithm is developed to determine the optimal replenishment policies for the retailer. We also provide the optimal present value of profit when shortages are completely backlogged, as a special case. Numerical examples are presented to illustrate the algorithm and the optimal profit it obtains, and managerial implications are drawn from these examples to substantiate the model. The results show an improvement in total profit with complete backlogging rather than partial backlogging.

  13. Evaluation and optimization of footwear comfort parameters using finite element analysis and a discrete optimization algorithm

    NASA Astrophysics Data System (ADS)

    Papagiannis, P.; Azariadis, P.; Papanikos, P.

    2017-10-01

    Footwear is subject to bending and torsion deformations that affect comfort perception. Following review of Finite Element Analysis studies of sole rigidity and comfort, a three-dimensional, linear multi-material finite element sole model for quasi-static bending and torsion simulation, overcoming boundary and optimisation limitations, is described. Common footwear materials properties and boundary conditions from gait biomechanics are used. The use of normalised strain energy for product benchmarking is demonstrated along with comfort level determination through strain energy density stratification. Sensitivity of strain energy against material thickness is greater for bending than for torsion, with results of both deformations showing positive correlation. Optimization for a targeted performance level and given layer thickness is demonstrated with bending simulations sufficing for overall comfort assessment. An algorithm for comfort optimization w.r.t. bending is presented, based on a discrete approach with thickness values set in line with practical manufacturing accuracy. This work illustrates the potential of the developed finite element analysis applications to offer viable and proven aids to modern footwear sole design assessment and optimization.

  14. Multiobjective Optimization of Rocket Engine Pumps Using Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    Oyama, Akira; Liou, Meng-Sing

    2001-01-01

    A design optimization method for turbopumps of cryogenic rocket engines has been developed. Multiobjective Evolutionary Algorithm (MOEA) is used for multiobjective pump design optimizations. Performances of design candidates are evaluated by using the meanline pump flow modeling method based on the Euler turbine equation coupled with empirical correlations for rotor efficiency. To demonstrate the feasibility of the present approach, a single stage centrifugal pump design and multistage pump design optimizations are presented. In both cases, the present method obtains very reasonable Pareto-optimal solutions that include some designs outperforming the original design in total head while reducing input power by one percent. Detailed observation of the design results also reveals some important design criteria for turbopumps in cryogenic rocket engines. These results demonstrate the feasibility of the EA-based design optimization method in this field.

  15. An Optimization Study of Hot Stamping Operation

    NASA Astrophysics Data System (ADS)

    Ghoo, Bonyoung; Umezu, Yasuyoshi; Watanabe, Yuko; Ma, Ninshu; Averill, Ron

    2010-06-01

    In the present study, three-dimensional finite element analyses of hot-stamping processes for an Audi B-pillar product are conducted using JSTAMP/NV and HEEDS. Special attention is paid to the optimization of the simulation technology coupled with thermal-mechanical formulations. Numerical simulation based on FEM technology and design optimization using the hybrid adaptive SHERPA algorithm are applied to the hot stamping operation to improve productivity. The robustness of the SHERPA algorithm is demonstrated by the results of a benchmark example. The SHERPA algorithm is shown to be far superior to the genetic algorithm (GA) in terms of efficiency, with a calculation time about 7 times faster than that of the GA. The SHERPA algorithm showed high performance on a large-scale problem with a complicated design space and long calculation time.

  16. A finite-element toolbox for the stationary Gross-Pitaevskii equation with rotation

    NASA Astrophysics Data System (ADS)

    Vergez, Guillaume; Danaila, Ionut; Auliac, Sylvain; Hecht, Frédéric

    2016-12-01

    We present a new numerical system using classical finite elements with mesh adaptivity for computing stationary solutions of the Gross-Pitaevskii equation. The programs are written as a toolbox for FreeFem++ (www.freefem.org), a free finite-element software available for all existing operating systems. This offers the advantage of hiding all technical issues related to the implementation of the finite element method, making it easy to code various numerical algorithms. Two robust and optimized numerical methods were implemented to minimize the Gross-Pitaevskii energy: a steepest descent method based on Sobolev gradients and a minimization algorithm based on the state-of-the-art optimization library Ipopt. For both methods, mesh adaptivity strategies are used to reduce the computational time and increase the local spatial accuracy when vortices are present. Different run cases are made available for 2D and 3D configurations of Bose-Einstein condensates in rotation. An optional graphical user interface is also provided, making it easy to run predefined cases or cases with user-defined parameter files. We also provide several post-processing tools (like the identification of quantized vortices) that could help in extracting physical features from the simulations. The toolbox is extremely versatile and can be easily adapted to deal with different physical models.

  17. Planning Risk-Based SQC Schedules for Bracketed Operation of Continuous Production Analyzers.

    PubMed

    Westgard, James O; Bayat, Hassan; Westgard, Sten A

    2018-02-01

    To minimize patient risk, "bracketed" statistical quality control (SQC) is recommended in the new CLSI guidelines for SQC (C24-Ed4). Bracketed SQC requires that a QC event both precedes and follows (brackets) a group of patient samples. In optimizing a QC schedule, the frequency of QC or run size becomes an important planning consideration to maintain quality and also facilitate responsive reporting of results from continuous operation of high production analytic systems. Different plans for optimizing a bracketed SQC schedule were investigated on the basis of Parvin's model for patient risk and CLSI C24-Ed4's recommendations for establishing QC schedules. A Sigma-metric run size nomogram was used to evaluate different QC schedules for processes of different sigma performance. For high Sigma performance, an effective SQC approach is to employ a multistage QC procedure utilizing a "startup" design at the beginning of production and a "monitor" design periodically throughout production. Example QC schedules are illustrated for applications with measurement procedures having 6-σ, 5-σ, and 4-σ performance. Continuous production analyzers that demonstrate high σ performance can be effectively controlled with multistage SQC designs that employ a startup QC event followed by periodic monitoring or bracketing QC events. Such designs can be optimized to minimize the risk of harm to patients. © 2017 American Association for Clinical Chemistry.

  18. Multi-stage pulse tube cryocooler with acoustic impedance constructed to reduce transient cool down time and thermal loss

    NASA Technical Reports Server (NTRS)

    Gedeon, David R. (Inventor); Wilson, Kyle B. (Inventor)

    2008-01-01

    The cool down time for a multi-stage, pulse tube cryocooler is reduced by configuring at least a portion of the acoustic impedance of a selected stage, higher than the first stage, so that it surrounds the cold head of the selected stage. The surrounding acoustic impedance of the selected stage is mounted in thermally conductive connection to the warm region of the selected stage for cooling the acoustic impedance and is fabricated of a high thermal diffusivity, low thermal radiation emissivity material, preferably aluminum.

  19. Inversion of geophysical potential field data using the finite element method

    NASA Astrophysics Data System (ADS)

    Lamichhane, Bishnu P.; Gross, Lutz

    2017-12-01

    The inversion of geophysical potential field data can be formulated as an optimization problem with a constraint in the form of a partial differential equation (PDE). It is common practice, if possible, to provide an analytical solution for the forward problem and to reduce the problem to a finite dimensional optimization problem. In an alternative approach the optimization is applied to the problem and the resulting continuous problem which is defined by a set of coupled PDEs is subsequently solved using a standard PDE discretization method, such as the finite element method (FEM). In this paper, we show that under very mild conditions on the data misfit functional and the forward problem in the three-dimensional space, the continuous optimization problem and its FEM discretization are well-posed including the existence and uniqueness of respective solutions. We provide error estimates for the FEM solution. A main result of the paper is that the FEM spaces used for the forward problem and the Lagrange multiplier need to be identical but can be chosen independently from the FEM space used to represent the unknown physical property. We will demonstrate the convergence of the solution approximations in a numerical example. The second numerical example which investigates the selection of FEM spaces, shows that from the perspective of computational efficiency one should use 2 to 4 times finer mesh for the forward problem in comparison to the mesh of the physical property.

  20. Superior memory efficiency of quantum devices for the simulation of continuous-time stochastic processes

    NASA Astrophysics Data System (ADS)

    Elliott, Thomas J.; Gu, Mile

    2018-03-01

    Continuous-time stochastic processes pervade everyday experience, and the simulation of models of these processes is of great utility. Classical models of systems operating in continuous-time must typically track an unbounded amount of information about past behaviour, even for relatively simple models, enforcing limits on precision due to the finite memory of the machine. However, quantum machines can require less information about the past than even their optimal classical counterparts to simulate the future of discrete-time processes, and we demonstrate that this advantage extends to the continuous-time regime. Moreover, we show that this reduction in the memory requirement can be unboundedly large, allowing for arbitrary precision even with a finite quantum memory. We provide a systematic method for finding superior quantum constructions, and a protocol for analogue simulation of continuous-time renewal processes with a quantum machine.

  1. Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.

    2016-01-01

    A new engine cycle analysis tool, called Pycycle, was built using the OpenMDAO framework. Pycycle provides analytic derivatives allowing for an efficient use of gradient-based optimization methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
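
    The benefit of analytic derivatives over finite differencing can be seen even on a tiny problem; the hedged sketch below (unrelated to Pycycle's actual models) optimizes the Rosenbrock function with scipy both with an exact gradient and with finite-difference approximations, and compares function-evaluation counts.

```python
import numpy as np
from scipy.optimize import minimize

def rosen(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def rosen_grad(x):
    # Exact (analytic) gradient of the Rosenbrock function
    return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                     200 * (x[1] - x[0]**2)])

x0 = np.array([-1.2, 1.0])
with_grad = minimize(rosen, x0, jac=rosen_grad, method="BFGS")
fd_only   = minimize(rosen, x0, method="BFGS")   # gradient by finite differences
print(with_grad.nfev, fd_only.nfev)  # analytic-gradient run needs far fewer f-evals
```

    The same pattern scales up: analytic or algorithmically supplied derivatives keep the number of expensive model evaluations essentially independent of the number of design variables, which is the advantage the abstract reports for the turbofan design case.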

  2. Economic and environmental costs of regulatory uncertainty for coal-fired power plants.

    PubMed

    Patiño-Echeverri, Dalia; Fischbeck, Paul; Kriegler, Elmar

    2009-02-01

    Uncertainty about the extent and timing of CO2 emissions regulations for the electricity-generating sector exacerbates the difficulty of selecting investment strategies for retrofitting or alternatively replacing existent coal-fired power plants. This may result in inefficient investments imposing economic and environmental costs to society. In this paper, we construct a multiperiod decision model with an embedded multistage stochastic dynamic program minimizing the expected total costs of plant operation, installations, and pollution allowances. We use the model to forecast optimal sequential investment decisions of a power plant operator with and without uncertainty about future CO2 allowance prices. The comparison of the two cases demonstrates that uncertainty on future CO2 emissions regulations might cause significant economic costs and higher air emissions.
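
    A drastically simplified, hedged sketch of the decision structure (a two-stage scenario tree with made-up numbers, not the paper's multiperiod model) illustrates how uncertainty in future CO2 allowance prices changes the expected-cost-minimizing choice between retrofitting now and waiting:

```python
# Two-stage toy: decide now (retrofit / wait), then the CO2 price is revealed.
# All costs and probabilities below are illustrative placeholders.
scenarios = {"strict_cap": (0.5, 80.0), "weak_cap": (0.5, 10.0)}  # (prob, $/tCO2)
emissions = 2.0e6            # tCO2/yr for the unretrofitted plant
retrofit_cost = 9.0e7        # up-front capital cost
retrofit_cut = 0.9           # fraction of emissions removed by the retrofit

def expected_cost(retrofit_now: bool) -> float:
    total = retrofit_cost if retrofit_now else 0.0
    for prob, price in scenarios.values():
        if retrofit_now:
            total += prob * price * emissions * (1 - retrofit_cut)
        else:
            # Second stage: retrofit after the price is known only if it pays off
            stay = price * emissions
            adapt = retrofit_cost + price * emissions * (1 - retrofit_cut)
            total += prob * min(stay, adapt)
    return total

print(expected_cost(True), expected_cost(False))
```

    In this toy the option to wait is valuable precisely because the second-stage decision can adapt to the revealed price; removing that flexibility (or the uncertainty itself) changes the optimal first-stage action, which is the kind of effect the paper quantifies.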

  3. Algorithms and analyses for stochastic optimization for turbofan noise reduction using parallel reduced-order modeling

    NASA Astrophysics Data System (ADS)

    Yang, Huanhuan; Gunzburger, Max

    2017-06-01

    Simulation-based optimization of acoustic liner design in a turbofan engine nacelle for noise reduction purposes can dramatically reduce the cost and time needed for experimental designs. Because uncertainties are inevitable in the design process, a stochastic optimization algorithm is posed based on the conditional value-at-risk measure so that an ideal acoustic liner impedance is determined that is robust in the presence of uncertainties. A parallel reduced-order modeling framework is developed that dramatically improves the computational efficiency of the stochastic optimization solver for a realistic nacelle geometry. The reduced stochastic optimization solver takes less than 500 seconds to execute. In addition, well-posedness and finite element error analyses of the state system and optimization problem are provided.
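
    For readers unfamiliar with conditional value-at-risk optimization, here is a hedged, generic sample-average sketch of the Rockafellar-Uryasev formulation with cvxpy; the decision problem below is a stand-in toy (synthetic loss samples affine in the decision), used only to show the CVaR machinery, not the paper's reduced-order acoustic model.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_samples, n_assets, alpha = 500, 4, 0.9
R = rng.normal(0.02, 0.08, size=(n_samples, n_assets))  # synthetic outcome samples

w = cp.Variable(n_assets)          # decision variable (toy allocation weights)
t = cp.Variable()                  # auxiliary VaR-level variable
loss = -R @ w                      # per-sample loss, affine in the decision

# Sample-average CVaR (Rockafellar-Uryasev): t + E[(loss - t)^+] / (1 - alpha)
cvar = t + cp.sum(cp.pos(loss - t)) / ((1 - alpha) * n_samples)
cp.Problem(cp.Minimize(cvar), [cp.sum(w) == 1, w >= 0]).solve()
print(w.value, cvar.value)
```

    Because the CVaR of an affine loss is convex in the decision, the robust design problem stays convex; in the paper the same measure is wrapped around a reduced-order acoustic model instead of the toy loss used here.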

  4. Optimal Protocols and Optimal Transport in Stochastic Thermodynamics

    NASA Astrophysics Data System (ADS)

    Aurell, Erik; Mejía-Monasterio, Carlos; Muratore-Ginanneschi, Paolo

    2011-06-01

    Thermodynamics of small systems has become an important field of statistical physics. Such systems are driven out of equilibrium by a control, and the question is naturally posed how such a control can be optimized. We show that optimization problems in small system thermodynamics are solved by (deterministic) optimal transport, for which very efficient numerical methods have been developed, and of which there are applications in cosmology, fluid mechanics, logistics, and many other fields. We show, in particular, that minimizing expected heat released or work done during a nonequilibrium transition in finite time is solved by the Burgers equation and mass transport by the Burgers velocity field. Our contribution hence considerably extends the range of solvable optimization problems in small system thermodynamics.
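
    A hedged sketch of the structure referred to above (the standard optimal-transport/Burgers relations, not a line-by-line reproduction of the paper): the auxiliary velocity field v and the transported probability density ρ obey

```latex
\partial_t v + (v \cdot \nabla) v = 0 \quad \text{(inviscid Burgers equation)},
\qquad
\partial_t \rho + \nabla \cdot (\rho\, v) = 0 \quad \text{(mass transport)}
```

    so that minimizing the expected work amounts to transporting the initial density to the final one along the Burgers characteristics, as in deterministic (Monge-Kantorovich) optimal transport.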

  5. Optimal protocols and optimal transport in stochastic thermodynamics.

    PubMed

    Aurell, Erik; Mejía-Monasterio, Carlos; Muratore-Ginanneschi, Paolo

    2011-06-24

    Thermodynamics of small systems has become an important field of statistical physics. Such systems are driven out of equilibrium by a control, and the question is naturally posed how such a control can be optimized. We show that optimization problems in small system thermodynamics are solved by (deterministic) optimal transport, for which very efficient numerical methods have been developed, and of which there are applications in cosmology, fluid mechanics, logistics, and many other fields. We show, in particular, that minimizing expected heat released or work done during a nonequilibrium transition in finite time is solved by the Burgers equation and mass transport by the Burgers velocity field. Our contribution hence considerably extends the range of solvable optimization problems in small system thermodynamics.

  6. Memory effects in funnel ratchet of self-propelled particles

    NASA Astrophysics Data System (ADS)

    Hu, Cai-Tian; Wu, Jian-Chun; Ai, Bao-Quan

    2017-05-01

    The transport of self-propelled particles with memory effects is investigated in a two-dimensional periodic channel. Funnel-shaped barriers are regularly arrayed in the channel. Due to the asymmetry of the barriers, the self-propelled particles can be rectified. It is found that the memory effects of the rotational diffusion can strongly affect the rectified transport. The memory effects do not always break the rectified transport, and there exists an optimal finite value of correlation time at which the rectified efficiency takes its maximal value. We also find that the optimal values of parameters (the self-propulsion speed, the translocation diffusion coefficient, the rotational noise intensity, and the self-rotational diffusion coefficient) can facilitate the rectified transport. When introducing a finite load, particles with different self-propulsion speeds move to different directions and can be separated.

  7. Closed-form solutions for a class of optimal quadratic regulator problems with terminal constraints

    NASA Technical Reports Server (NTRS)

    Juang, J.-N.; Turner, J. D.; Chun, H. M.

    1984-01-01

    Closed-form solutions are derived for coupled Riccati-like matrix differential equations describing the solution of a class of optimal finite time quadratic regulator problems with terminal constraints. Analytical solutions are obtained for the feedback gains and the closed-loop response trajectory. A computational procedure is presented which introduces new variables for efficient computation of the terminal control law. Two examples are given to illustrate the validity and usefulness of the theory.
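
    As a hedged numerical companion (the unconstrained finite-time LQR case only, without the paper's terminal-constraint coupling or closed-form gains), the sketch below integrates the matrix Riccati differential equation backward in time and applies the resulting time-varying feedback; the plant and weights are placeholders.

```python
import numpy as np

# Finite-time LQR: minimize  integral(x'Qx + u'Ru) dt + x(T)'S_T x(T)  for  x' = Ax + Bu
A = np.array([[0.0, 1.0], [0.0, 0.0]])      # double integrator (placeholder plant)
B = np.array([[0.0], [1.0]])
Q, R, S_T = np.eye(2), np.array([[1.0]]), 10 * np.eye(2)
T, n_steps = 5.0, 5000
dt = T / n_steps

# Backward Euler sweep of  -dS/dt = A'S + SA - S B R^{-1} B' S + Q,  S(T) = S_T
S = S_T.copy()
gains = []
for _ in range(n_steps):
    K = np.linalg.solve(R, B.T @ S)          # K(t) = R^{-1} B' S(t)
    gains.append(K)
    Sdot = A.T @ S + S @ A - S @ B @ K + Q
    S = S + dt * Sdot                        # stepping backward in time
gains.reverse()                              # gains[k] now corresponds to t ~ k*dt

# Closed-loop simulation with the time-varying feedback u = -K(t) x
x = np.array([1.0, 0.0])
for k in range(n_steps):
    u = -gains[k] @ x
    x = x + dt * (A @ x + B @ u)
print(x)                                     # state driven toward the origin
```

    Terminal equality constraints, as in the paper, add a second coupled matrix equation and a feedforward term; the closed-form solutions derived there avoid repeating this backward integration online.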

  8. Study on Edge Thickening Flow Forming Using the Finite Elements Analysis

    NASA Astrophysics Data System (ADS)

    Kim, Young Jin; Park, Jin Sung; Cho, Chongdu

    2011-08-01

    This study is to examine the forming features of flow stress property and the incremental forming method with increasing the thickness of material. Recently, the optimized forming method is widely studied through the finite element analysis to optimize forming process conditions in many different forming fields. The optimal forming method should be adopted to meet geometric requirements as the reduction in volume per unit length of material such as forging, rolling, spinning etc. However conventional studies have not dealt with issue regarding volume per unit length. For the study we use the finite element method and model a gear part of an automotive engine flywheel as the study model, which is a weld assembly of a plate and a gear with respective different thickness. In simulation of the present study, a optimized forming condition for gear machining, considering the thickness of the outer edge of flywheel is studied using the finite elements analysis for the increasing thickness of the forming method. It is concluded from the study that forming method to increase the thickness per unit length for gear machining is reasonable using the finite elements analysis and forming test.

  9. POSTPROCESSING MIXED FINITE ELEMENT METHODS FOR SOLVING CAHN-HILLIARD EQUATION: METHODS AND ERROR ANALYSIS

    PubMed Central

    Wang, Wansheng; Chen, Long; Zhou, Jie

    2015-01-01

    A postprocessing technique for mixed finite element methods for the Cahn-Hilliard equation is developed and analyzed. Once the mixed finite element approximations have been computed at a fixed time on the coarser mesh, the approximations are postprocessed by solving two decoupled Poisson equations in an enriched finite element space (either on a finer grid or a higher-order space) for which many fast Poisson solvers can be applied. The nonlinear iteration is applied only to a much smaller problem, and the computational cost using Newton and direct solvers is negligible compared with the cost of the linear problem. The analysis presented here shows that this technique retains the optimal rate of convergence for both the concentration and the chemical potential approximations. The corresponding error estimates obtained in our paper, especially the negative-norm error estimates, are non-trivial and differ from existing results in the literature. PMID:27110063

  10. Deep Vein Thrombosis After Complex Posterior Spine Surgery: Does Staged Surgery Make a Difference?

    PubMed

    Edwards, Charles C; Lessing, Noah L; Ford, Lisa; Edwards, Charles C

    Retrospective review of a prospectively collected database. To assess the incidence of deep vein thrombosis (DVT) associated with single- versus multistage posterior-only complex spinal surgeries. Dividing the physiologic burden of spinal deformity surgery into multiple stages has been suggested as a potential means of reducing perioperative complications. DVT is a worrisome complication owing to its potential to lead to pulmonary embolism. Whether or not staging affects DVT incidence in this population is unknown. Consecutive patients undergoing either single- or multistage posterior complex spinal surgeries over a 12-year period at a single institution were eligible. All patients received lower extremity venous duplex ultrasonographic (US) examinations 2 to 4 days postoperatively in the single-stage group and 2 to 4 days postoperatively after each stage in the multistage group. Multivariate logistic regression was used to assess the independent contribution of staging to developing a DVT. A total of 107 consecutive patients were enrolled-26 underwent multistage surgery and 81 underwent single-stage surgery. The single-stage group was older (63 years vs. 45 years; p < .01) and had a higher Charlson comorbidity index (2.25 ± 1.27 vs. 1.23 ± 1.58; p < .01). More multistage patients had positive US tests than single-stage patients (5 of 26 vs. 6 of 81; 19% vs. 7%; p = .13). Adjusting for all the above-mentioned covariates, a multistage surgery was 8.17 (95% CI 0.35-250.6) times more likely to yield a DVT than a single-stage surgery. Patients who undergo multistage posterior complex spine surgery are at a high risk for developing a DVT compared to those who undergo single-stage procedures. The difference in DVT incidence may be understated as the multistage group had a lower pre- and intraoperative risk profile with a younger age, lower medical comorbidities, and less per-stage blood loss. Copyright © 2017 Scoliosis Research Society. Published by Elsevier Inc. All rights reserved.

  11. The Optimal Capital Stock and Consumption Evolution for Non Zero Consumers Growth Rate in the Framework of Ramsey Model on Finite Horizon

    NASA Astrophysics Data System (ADS)

    Bonchiş, N.; Balint, Şt.

    2010-09-01

    In this paper the Ramsey optimal growth of the capital stock and consumption on a finite horizon is analyzed when the growth rate of consumers is strictly positive. The main purpose is to establish the dependence of the optimal capital stock and consumption evolution on the growth rate of consumers. The analysis reveals: for any initial value k_0 ≥ 0 there exists a unique optimal evolution path of length N+1 for the capital stock; if k_0 is strictly positive then all the elements of the optimal capital stock evolution path are strictly positive except the last one, which is zero; the optimal capital stock evolution of length N+1 starting from k_0 ≥ 0 satisfies the Euler equation; the value function V_N is strictly increasing, strictly concave and continuous on R_+. The family of functions {V_{N−T}}, T = 0, …, N−1, satisfies the Bellman equation and is its unique solution that is both continuous and satisfies the transversality condition. The Mangasarian Lemma is also satisfied. For N tending to infinity the optimal evolution path of length N of the capital stock tends to the one on the infinite time horizon. For any k_0 > 0 the value function at k_0 decreases when the consumers' growth rate increases.
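
    To make the backward-induction structure concrete, here is a hedged sketch (a generic discretized finite-horizon Ramsey problem with log utility and Cobb-Douglas production, not the paper's exact specification) that computes the value functions V_{N−T} and an optimal capital path by dynamic programming on a grid; every parameter is illustrative.

```python
import numpy as np

# Toy finite-horizon Ramsey problem (all parameters are illustrative):
#   V_t(k) = max_{0 <= k' <= f(k)}  u(f(k) - k') + beta * V_{t+1}(k'),   V_N = 0
beta, alpha, N = 0.95, 0.33, 20
f = lambda k: k ** alpha                     # production (depreciation folded in)
u = lambda c: np.log(np.maximum(c, 1e-12))   # log utility; guard penalizes c <= 0

grid = np.linspace(1e-3, 2.0, 400)           # capital grid
V = np.zeros(len(grid))                      # V_N = 0: leftover capital is worthless
policy = []
for t in range(N - 1, -1, -1):
    # candidate value for every (k, k') pair; infeasible c <= 0 is heavily penalized
    cand = u(f(grid)[:, None] - grid[None, :]) + beta * V[None, :]
    policy.append(np.argmax(cand, axis=1))   # optimal next-period capital index
    V = np.max(cand, axis=1)                 # this is V_t on the grid
policy.reverse()

# Recover the optimal capital path from k_0
k_idx = np.argmin(np.abs(grid - 0.5))        # start near k_0 = 0.5
path = [grid[k_idx]]
for t in range(N):
    k_idx = policy[t][k_idx]
    path.append(grid[k_idx])
print(path[-3:])   # final capital is driven to (near) zero, matching the analysis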

  12. Multi-stage approach for structural damage detection problem using basis pursuit and particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Gerist, Saleheh; Maheri, Mahmoud R.

    2016-12-01

    In order to solve the structural damage detection problem, a multi-stage method using particle swarm optimization is presented. First, a new sparse recovery method, named Basis Pursuit (BP), is utilized to preliminarily identify structural damage locations. The BP method solves a system of equations which relates the damage parameters to the structural modal responses using the sensitivity matrix. The results of this stage are then refined to the exact damage locations and extents using the PSO search engine. Finally, the search space is reduced by elimination of some low-damage variables using a micro search (MS) operator embedded in the PSO algorithm. To overcome the noise present in structural responses, a method known as Basis Pursuit De-Noising (BPDN) is also used. The efficiency of the proposed method is investigated by three numerical examples: a cantilever beam, a plane truss and a portal plane frame. The frequency response is used to detect damage in the examples. The simulation results demonstrate the accuracy and efficiency of the proposed method in detecting multiple damage cases and exhibit its robustness to noise and its advantages compared to other reported solution algorithms.
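
    The first (basis pursuit) stage can be illustrated with a small, hedged sketch: recover a sparse damage vector from a linear sensitivity relation by l1 minimization with cvxpy. The sensitivity matrix and damage pattern below are synthetic, and the PSO refinement stage is omitted.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n_modes, n_elements = 30, 60
S = rng.normal(size=(n_modes, n_elements))   # synthetic sensitivity matrix
d_true = np.zeros(n_elements)
d_true[[7, 23, 41]] = [0.3, 0.15, 0.2]       # a few damaged elements (placeholder)
residual = S @ d_true                        # modal response change (noise-free)

# Basis pursuit: minimize ||d||_1 subject to S d = residual
d = cp.Variable(n_elements)
cp.Problem(cp.Minimize(cp.norm1(d)), [S @ d == residual]).solve()
print(np.flatnonzero(d.value > 0.05))        # ideally flags elements 7, 23, 41
```

    With measurement noise, the equality constraint would be relaxed to a residual bound (the BPDN variant mentioned in the abstract), and the flagged candidates would seed the PSO search.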

  13. Mixed H2/H-infinity control with output feedback compensators using parameter optimization

    NASA Technical Reports Server (NTRS)

    Schoemig, Ewald; Ly, Uy-Loi

    1992-01-01

    Among the many possible norm-based optimization methods, the concept of H-infinity optimal control has gained enormous attention in the past few years. Here the H-infinity framework, based on the Small Gain Theorem and the Youla Parameterization, effectively treats system uncertainties in the control law synthesis. A design approach involving a mixed H2/H-infinity norm strives to combine the advantages of both methods. This advantage motivates researchers toward finding solutions to the mixed H2/H-infinity control problem. The approach developed in this research is based on a finite-time cost functional that depicts an H-infinity-bound control problem in an H2-optimization setting. The goal is to define a time-domain cost function that optimizes the H2-norm of a system with an H-infinity constraint function.

  14. Mixed H2/H(infinity)-Control with an output-feedback compensator using parameter optimization

    NASA Technical Reports Server (NTRS)

    Schoemig, Ewald; Ly, Uy-Loi

    1992-01-01

    Among the many possible norm-based optimization methods, the concept of H-infinity optimal control has gained enormous attention in the past few years. Here the H-infinity framework, based on the Small Gain Theorem and the Youla Parameterization, effectively treats system uncertainties in the control law synthesis. A design approach involving a mixed H2/H-infinity norm strives to combine the advantages of both methods. This advantage motivates researchers toward finding solutions to the mixed H2/H-infinity control problem. The approach developed in this research is based on a finite-time cost functional that depicts an H-infinity-bound control problem in an H2-optimization setting. The goal is to define a time-domain cost function that optimizes the H2-norm of a system with an H-infinity constraint function.

  15. SU-E-T-480: Radiobiological Dose Comparison of Single Fraction SRS, Multi-Fraction SRT and Multi-Stage SRS of Large Target Volumes Using the Linear-Quadratic Formula

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, C; Hrycushko, B; Jiang, S

    2014-06-01

    Purpose: To compare the radiobiological effect on large tumors and surrounding normal tissues from single-fraction SRS, multi-fractionated SRT, and multi-staged SRS treatment. Methods: An anthropomorphic head phantom with a centrally located large volume target (18.2 cm^3) was scanned using a 16-slice large-bore CT simulator. Scans were imported to the MultiPlan treatment planning system, where a total prescription dose of 20 Gy was used for a single, a three-staged, and a three-fractionated treatment. CyberKnife treatment plans were inversely optimized for the target volume to achieve at least 95% coverage of the prescription dose. For the multistage plan, the target was segmented into three subtargets having similar volume and shape. Staged plans for individual subtargets were generated based on a planning technique where the beam MUs of the original plan on the total target volume are changed by weighting the MUs based on projected beam lengths within each subtarget. Dose matrices for each plan were exported in DICOM format and used to calculate equivalent dose distributions in 2 Gy fractions using an alpha/beta ratio of 10 for the target and 3 for normal tissue. Results: The single-fraction SRS, multi-stage, and multi-fractionated SRT plans had an average 2 Gy dose equivalent to the target of 62.89 Gy, 37.91 Gy and 33.68 Gy, respectively. The normal tissue within the 12 Gy physical dose region had an average 2 Gy dose equivalent of 29.55 Gy, 16.08 Gy and 13.93 Gy, respectively. Conclusion: The single-fraction SRS plan had the largest predicted biological effect for the target and the surrounding normal tissue. The multi-stage treatment provided a more potent biological effect on the target than the multi-fraction SRT treatment, with less biological effect on normal tissue than the single-fraction SRS treatment.
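
    The 2 Gy equivalent doses quoted above follow from the standard linear-quadratic conversion; a minimal hedged sketch (the generic EQD2 formula, not the planning-system computation) is:

```python
def eqd2(total_dose: float, n_fractions: int, alpha_beta: float) -> float:
    """Equivalent dose in 2 Gy fractions from the linear-quadratic model."""
    d = total_dose / n_fractions                     # dose per fraction
    return total_dose * (d + alpha_beta) / (2.0 + alpha_beta)

# 20 Gy prescription delivered in 1 vs. 3 sessions (alpha/beta = 10 for tumor)
print(eqd2(20.0, 1, 10.0))   # single-fraction SRS: 50 Gy EQD2
print(eqd2(20.0, 3, 10.0))   # three fractions:     ~27.8 Gy EQD2
```

    The averages reported in the abstract are larger than these prescription-based values, presumably because they are computed voxel-wise over the heterogeneous planned dose distributions rather than from the uniform prescription dose.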

  16. On use of the multistage dose-response model for assessing laboratory animal carcinogenicity

    PubMed Central

    Nitcheva, Daniella; Piegorsch, Walter W.; West, R. Webster

    2007-01-01

    We explore how well a statistical multistage model describes dose-response patterns in laboratory animal carcinogenicity experiments from a large database of quantal response data. The data are collected from the U.S. EPA’s publicly available IRIS data warehouse and examined statistically to determine how often higher-order values in the multistage predictor yield significant improvements in explanatory power over lower-order values. Our results suggest that the addition of a second-order parameter to the model only improves the fit about 20% of the time, while adding even higher-order terms apparently does not contribute to the fit at all, at least with the study designs we captured in the IRIS database. Also included is an examination of statistical tests for assessing significance of higher-order terms in a multistage dose-response model. It is noted that bootstrap testing methodology appears to offer greater stability for performing the hypothesis tests than a more-common, but possibly unstable, “Wald” test. PMID:17490794
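
    For reference, the quantal multistage model discussed above has the standard form P(d) = 1 − exp(−(q0 + q1 d + … + qK d^K)) with nonnegative coefficients qk; the hedged sketch below evaluates it and the corresponding extra risk for illustrative coefficients (not values taken from the IRIS analyses).

```python
import numpy as np

def multistage_prob(dose, q):
    """P(response at dose d) = 1 - exp(-(q0 + q1*d + ... + qK*d^K)), q >= 0."""
    q = np.asarray(q, dtype=float)
    poly = sum(qk * np.asarray(dose, dtype=float) ** k for k, qk in enumerate(q))
    return 1.0 - np.exp(-poly)

def extra_risk(dose, q):
    """Extra risk over background: (P(d) - P(0)) / (1 - P(0))."""
    p0 = multistage_prob(0.0, q)
    return (multistage_prob(dose, q) - p0) / (1.0 - p0)

q = [0.01, 0.08, 0.005]                    # illustrative q0, q1, q2
doses = np.array([0.0, 1.0, 5.0, 10.0])
print(multistage_prob(doses, q))
print(extra_risk(doses, q))
```

    The paper's question of whether the second-order (and higher) terms help amounts to testing whether coefficients such as q2 here can be set to zero without a significant loss of fit.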

  17. Optimizing Metabolite Production Using Periodic Oscillations

    PubMed Central

    Sowa, Steven W.; Baldea, Michael; Contreras, Lydia M.

    2014-01-01

    Methods for improving microbial strains for metabolite production remain the subject of constant research. Traditionally, metabolic tuning has been mostly limited to knockouts or overexpression of pathway genes and regulators. In this paper, we establish a new method to control metabolism by inducing optimally tuned time-oscillations in the levels of selected clusters of enzymes, as an alternative strategy to increase the production of a desired metabolite. Using an established kinetic model of the central carbon metabolism of Escherichia coli, we formulate this concept as a dynamic optimization problem over an extended, but finite time horizon. Total production of a metabolite of interest (in this case, phosphoenolpyruvate, PEP) is established as the objective function and time-varying concentrations of the cellular enzymes are used as decision variables. We observe that by varying, in an optimal fashion, levels of key enzymes in time, PEP production increases significantly compared to the unoptimized system. We demonstrate that oscillations can improve metabolic output in experimentally feasible synthetic circuits. PMID:24901332

  18. Higher and lowest order mixed finite element approximation of subsurface flow problems with solutions of low regularity

    NASA Astrophysics Data System (ADS)

    Bause, Markus

    2008-02-01

    In this work we study mixed finite element approximations of Richards' equation for simulating variably saturated subsurface flow and simultaneous reactive solute transport. Whereas higher order schemes have proved their ability to approximate reliably reactive solute transport (cf., e.g. [Bause M, Knabner P. Numerical simulation of contaminant biodegradation by higher order methods and adaptive time stepping. Comput Visual Sci 7;2004:61-78]), the Raviart- Thomas mixed finite element method ( RT0) with a first order accurate flux approximation is popular for computing the underlying water flow field (cf. [Bause M, Knabner P. Computation of variably saturated subsurface flow by adaptive mixed hybrid finite element methods. Adv Water Resour 27;2004:565-581, Farthing MW, Kees CE, Miller CT. Mixed finite element methods and higher order temporal approximations for variably saturated groundwater flow. Adv Water Resour 26;2003:373-394, Starke G. Least-squares mixed finite element solution of variably saturated subsurface flow problems. SIAM J Sci Comput 21;2000:1869-1885, Younes A, Mosé R, Ackerer P, Chavent G. A new formulation of the mixed finite element method for solving elliptic and parabolic PDE with triangular elements. J Comp Phys 149;1999:148-167, Woodward CS, Dawson CN. Analysis of expanded mixed finite element methods for a nonlinear parabolic equation modeling flow into variably saturated porous media. SIAM J Numer Anal 37;2000:701-724]). This combination might be non-optimal. Higher order techniques could increase the accuracy of the flow field calculation and thereby improve the prediction of the solute transport. Here, we analyse the application of the Brezzi- Douglas- Marini element ( BDM1) with a second order accurate flux approximation to elliptic, parabolic and degenerate problems whose solutions lack the regularity that is assumed in optimal order error analyses. For the flow field calculation a superiority of the BDM1 approach to the RT0 one is observed, which however is less significant for the accompanying solute transport.

  19. Optimal trading strategies—a time series approach

    NASA Astrophysics Data System (ADS)

    Bebbington, Peter A.; Kühn, Reimer

    2016-05-01

    Motivated by recent advances in the spectral theory of auto-covariance matrices, we are led to revisit a reformulation of Markowitz’ mean-variance portfolio optimization approach in the time domain. In its simplest incarnation it applies to a single traded asset and allows an optimal trading strategy to be found which—for a given return—is minimally exposed to market price fluctuations. The model is initially investigated for a range of synthetic price processes, taken to be either second order stationary, or to exhibit second order stationary increments. Attention is paid to consequences of estimating auto-covariance matrices from small finite samples, and auto-covariance matrix cleaning strategies to mitigate against these are investigated. Finally we apply our framework to real world data.
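
    A hedged sketch of the single-asset, time-domain mean-variance idea (generic Markowitz algebra on an estimated auto-covariance matrix, not the paper's spectral machinery): find trading weights w over a horizon that minimize w'Cw subject to a target expected return. The synthetic increment series and the target are placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic stationary price-increment series and its expected increments
T, n = 2000, 30
increments = 0.01 + 0.1 * rng.standard_normal(T)     # placeholder price changes
mu = np.full(n, increments.mean())                   # expected increment per step

# Estimate the auto-covariance matrix C_ij = Cov(x_{t+i}, x_{t+j}) from the sample
x = increments - increments.mean()
acov = np.array([x[:T - k] @ x[k:] / (T - k) for k in range(n)])
C = np.array([[acov[abs(i - j)] for j in range(n)] for i in range(n)])

# Minimize w'Cw subject to mu'w = r_target (closed-form Lagrange solution)
r_target = 0.5
Cinv_mu = np.linalg.solve(C, mu)
w = r_target * Cinv_mu / (mu @ Cinv_mu)
print(w[:5], w @ C @ w)                              # weights and residual variance
```

    With small samples the estimated C is noisy and possibly ill-conditioned, which is exactly why the abstract emphasizes auto-covariance matrix cleaning before inverting it.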

  20. Stochastic Adaptive Estimation and Control.

    DTIC Science & Technology

    1994-10-26

    Marcus, "Language Stability and Stabilizability of Discrete Event Dynamical Systems ," SIAM Journal on Control and Optimization, 31, September 1993...in the hierarchical control of flexible manufacturing systems ; in this problem, the model involves a hybrid process in continuous time whose state is...of the average cost control problem for discrete- time Markov processes. Our exposition covers from finite to Borel state and action spaces and

  1. The optimal design support system for shell components of vehicles using the methods of artificial intelligence

    NASA Astrophysics Data System (ADS)

    Szczepanik, M.; Poteralski, A.

    2016-11-01

    The paper is devoted to an application of evolutionary methods and the finite element method to the optimization of shell structures. Optimization of the thickness of a car wheel (shell) by minimization of a stress functional is considered. The car wheel geometry is built from three surfaces of revolution: the central surface with the holes for the fastening bolts, the surface of the ring of the wheel, and the surface connecting the two mentioned earlier. The last of these is subjected to the optimization process. The structures are discretized by triangular finite elements and subjected to volume constraints. Using the proposed method, material properties or thicknesses of finite elements are changed evolutionarily and some of them are eliminated. As a result, the optimal shape, topology, and material or thickness of the structures are obtained. The numerical examples demonstrate that the method based on evolutionary computation is an effective technique for computer-aided optimal design.

  2. Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.

    PubMed

    Wei, Qinglai; Li, Benkai; Song, Ruizhuo

    2018-04-01

    In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure of discrete-time iterative adaptive dynamic programming algorithms, by which most of the discrete-time reinforcement learning algorithms can be described using the GPI structure. It is for the first time that approximation errors are explicitly considered in the GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, which shows that the iterative value function is convergent to a finite neighborhood of the optimal performance index function, if the approximate errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.

  3. Efficiencies of power plants, quasi-static models and the geometric-mean temperature

    NASA Astrophysics Data System (ADS)

    Johal, Ramandeep S.

    2017-02-01

    Observed efficiencies of industrial power plants are often approximated by the square-root formula 1 − √(T−/T+), where T+ (T−) is the highest (lowest) temperature achieved in the plant. This expression can be derived within finite-time thermodynamics, or by entropy generation minimization, based on finite rates for the processes. In these analyses, a closely related quantity is the optimal value of the intermediate temperature for the hot stream, given by the geometric-mean value √(T+ T−). In this paper, instead of finite-time models, we propose to model the operation of plants by quasi-static work extraction models, with one reservoir (source/sink) treated as finite and the other as practically infinite. No simplifying assumption is made on the nature of the finite system. This description is consistent with two model hypotheses, each yielding a specific value of the intermediate temperature, say T1 and T2. The lack of additional information on which hypothesis may actually be realized motivates treating the problem as an exercise in inductive inference. Thus we define an expected value of the intermediate temperature as the equally weighted mean (T1 + T2)/2. It is shown that the expected value is very closely given by the geometric-mean value for almost all of the observed power plants.
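
    A hedged numerical illustration of the quantities discussed (generic endoreversible-style formulas, not the paper's inference argument): the square-root efficiency and the geometric-mean intermediate temperature for a plant operating between T+ and T−, with placeholder temperatures.

```python
import math

def sqrt_efficiency(t_hot: float, t_cold: float) -> float:
    """Square-root (Curzon-Ahlborn-type) efficiency estimate: 1 - sqrt(T-/T+)."""
    return 1.0 - math.sqrt(t_cold / t_hot)

def geometric_mean_temperature(t_hot: float, t_cold: float) -> float:
    """Geometric mean of the extreme temperatures, sqrt(T+ * T-)."""
    return math.sqrt(t_hot * t_cold)

# Illustrative figures loosely typical of a steam plant (placeholders)
T_hot, T_cold = 800.0, 300.0          # kelvin
print(sqrt_efficiency(T_hot, T_cold))             # ~0.39, vs. Carnot ~0.625
print(geometric_mean_temperature(T_hot, T_cold))  # ~490 K
```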

  4. Pulse shaping system research of CdZnTe radiation detector for high energy x-ray diagnostic

    NASA Astrophysics Data System (ADS)

    Li, Miao; Zhao, Mingkun; Ding, Keyu; Zhou, Shousen; Zhou, Benjie

    2018-02-01

    As one of the typical wide band-gap semiconductor materials, CdZnTe has high detection efficiency and excellent energy resolution for hard X-rays and gamma rays. The signal generated by the CdZnTe detector needs to be transformed into a pseudo-Gaussian pulse with a small pulse width so that the downstream nuclear spectrometry data acquisition system can remove noise and improve the energy resolution. In this paper, a multi-stage pseudo-Gaussian shaping filter is investigated based on nuclear electronics principles. Optimized circuit parameters were obtained from an analysis of the characteristics of the pseudo-Gaussian shaping filter in the accompanying simulations. The simulation results show that the falling time of the output pulse decreases, and a faster response is obtained, as the shaping time τs-k is reduced. The undershoot was also removed when the ratio of the input resistors was set to 1:2.5. Moreover, a two-stage Sallen-Key Gaussian shaping filter was designed and fabricated using the low-noise voltage-feedback operational amplifier LMH6628. A detection experiment platform was built using the precision pulse generator CAKE831 to emulate the radiation pulse, as an equivalent of the signal from the CdZnTe semiconductor detector. The experimental results show that the output pulse of the two-stage pseudo-Gaussian shaping filter has a minimum pulse width of 200 ns (FWHM), and the output pulse of each stage is consistent with the simulation results. Based on this performance, the multi-stage pseudo-Gaussian shaping filter can reduce the event loss caused by pile-up in the CdZnTe semiconductor detector and effectively improve the energy resolution.
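
    As a hedged, generic illustration of pseudo-Gaussian (semi-Gaussian) shaping (a CR-RC^n chain rather than the paper's Sallen-Key realization; the stage count is a placeholder), the sketch below computes the shaper's response to a step-like detector/preamplifier signal with scipy and reports its FWHM in units of the shaping time.

```python
import numpy as np
from scipy import signal

n_int = 4                                    # number of RC integrator stages
# CR-RC^n semi-Gaussian shaper in normalized time (t in units of the shaping time):
#   H(s) = s / (1 + s)^(n_int + 1)
shaper = signal.ZerosPolesGain([0.0], [-1.0] * (n_int + 1), 1.0)

t = np.linspace(0.0, 20.0, 4000)
t, y = signal.step(shaper, T=t)              # response to a step-like detector pulse

peak = y.max()
above = t[y >= 0.5 * peak]
print("peak at t = %.2f, FWHM = %.2f (shaping-time units)"
      % (t[np.argmax(y)], above[-1] - above[0]))
```

    Shortening the shaping time narrows the output pulse and reduces pile-up, at the cost of noise performance, which is the trade-off the shaping-time study in the abstract explores.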

  5. Optimal fixed-finite-dimensional compensator for Burgers' equation with unbounded input/output operators

    NASA Technical Reports Server (NTRS)

    Burns, John A.; Marrekchi, Hamadi

    1993-01-01

    The problem of using reduced order dynamic compensators to control a class of nonlinear parabolic distributed parameter systems was considered. Concentration was on a system with unbounded input and output operators governed by Burgers' equation. A linearized model was used to compute low-order-finite-dimensional control laws by minimizing certain energy functionals. Then these laws were applied to the nonlinear model. Standard approaches to this problem employ model/controller reduction techniques in conjunction with linear quadratic Gaussian (LQG) theory. The approach used is based on the finite dimensional Bernstein/Hyland optimal projection theory which yields a fixed-finite-order controller.

  6. Thermodynamic metrics and optimal paths.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sivak, David; Crooks, Gavin

    A fundamental problem in modern thermodynamics is how a molecular-scale machine performs useful work, while operating away from thermal equilibrium without excessive dissipation. To this end, we derive a friction tensor that induces a Riemannian manifold on the space of thermodynamic states. Within the linear-response regime, this metric structure controls the dissipation of finite-time transformations, and bestows optimal protocols with many useful properties. We discuss the connection to the existing thermodynamic length formalism, and demonstrate the utility of this metric by solving for optimal control parameter protocols in a simple nonequilibrium model.

  7. Gradient-Based Aerodynamic Shape Optimization Using ADI Method for Large-Scale Problems

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Baysal, Oktay

    1997-01-01

    A gradient-based shape optimization methodology, intended for practical three-dimensional aerodynamic applications, has been developed. It is based on quasi-analytical sensitivities. The flow analysis is rendered by a fully implicit, finite volume formulation of the Euler equations. The aerodynamic sensitivity equation is solved using the alternating-direction-implicit (ADI) algorithm for memory efficiency. A flexible wing geometry model, based on surface parameterization and planform schedules, is utilized. The present methodology and its components have been tested via several comparisons. Initially, the flow analysis for a wing is compared with those obtained using an unfactored, preconditioned conjugate gradient (PCG) approach and an extensively validated CFD code. Then, the sensitivities computed with the present method are compared with those obtained using the finite-difference and PCG approaches. Effects of grid refinement and convergence tolerance on the analysis and shape optimization have been explored. Finally, the new procedure is demonstrated in the design of a cranked-arrow wing at Mach 2.4. Despite the expected increase in computational time, the results indicate that shape optimization problems, which require large numbers of grid points, can be resolved with a gradient-based approach.

  8. Enabling Quantitative Optical Imaging for In-die-capable Critical Dimension Targets

    PubMed Central

    Barnes, B.M.; Henn, M.-A.; Sohn, M. Y.; Zhou, H.; Silver, R. M.

    2017-01-01

    Dimensional scaling trends will eventually bring semiconductor critical dimensions (CDs) down to only a few atoms in width. New optical techniques are required to address the measurement and variability for these CDs using sufficiently small in-die metrology targets. Recently, Qin et al. [Light Sci Appl, 5, e16038 (2016)] demonstrated quantitative model-based measurements of finite sets of lines with features as small as 16 nm using 450 nm wavelength light. This paper uses simulation studies, augmented with experiments at 193 nm wavelength, to adapt and optimize the finite sets of features that work as in-die-capable metrology targets with minimal increases in parametric uncertainty. A finite element based solver for time-harmonic Maxwell's equations yields two- and three-dimensional simulations of the electromagnetic scattering for optimizing the design of such targets as functions of reduced line lengths, fewer number of lines, fewer focal positions, smaller critical dimensions, and shorter illumination wavelength. Metrology targets that exceeded performance requirements are as short as 3 μm for 193 nm light, feature as few as eight lines, and are extensible to sub-10 nm CDs. Target areas measured at 193 nm can be fifteen times smaller in area than current state-of-the-art scatterometry targets described in the literature. This new methodology is demonstrated to be a promising alternative for optical model-based in-die CD metrology. PMID:28757674

  9. Partial differential equation methods for stochastic dynamic optimization: an application to wind power generation with energy storage.

    PubMed

    Johnson, Paul; Howell, Sydney; Duck, Peter

    2017-08-13

    A mixed financial/physical partial differential equation (PDE) can optimize the joint earnings of a single wind power generator (WPG) and a generic energy storage device (ESD). Physically, the PDE includes constraints on the ESD's capacity, efficiency and maximum speeds of charge and discharge. There is a mean-reverting daily stochastic cycle for WPG power output. Physically, energy can only be produced or delivered at finite rates. All suppliers must commit hourly to a finite rate of delivery C, which is a continuous control variable that is changed hourly. Financially, we assume heavy 'system balancing' penalties in continuous time for deviations of output rate from the commitment C. Also, the electricity spot price follows a mean-reverting stochastic cycle with a strong evening peak, when system balancing penalties also peak. Hence the economic goal of the WPG plus ESD, at each decision point, is to maximize the expected net present value (NPV) of all earnings (arbitrage) minus the NPV of all expected system balancing penalties, along all financially/physically feasible future paths through state space. Given the capital costs for the various combinations of the physical parameters, the design and operating rules for a WPG plus ESD in a finite market may be jointly optimizable. This article is part of the themed issue 'Energy management: flexibility, risk and optimization'. © 2017 The Author(s).

  10. Structural Analysis Methods for Structural Health Management of Future Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander

    2007-01-01

    Two finite element based computational methods, Smoothing Element Analysis (SEA) and the inverse Finite Element Method (iFEM), are reviewed, and examples of their use for structural health monitoring are discussed. Due to their versatility, robustness, and computational efficiency, the methods are well suited for real-time structural health monitoring of future space vehicles, large space structures, and habitats. The methods may be effectively employed to enable real-time processing of sensing information, specifically for identifying three-dimensional deformed structural shapes as well as the internal loads. In addition, they may be used in conjunction with evolutionary algorithms to design optimally distributed sensors. These computational tools have demonstrated substantial promise for utilization in future Structural Health Management (SHM) systems.

  11. Memory-optimized shift operator alternating direction implicit finite difference time domain method for plasma

    NASA Astrophysics Data System (ADS)

    Song, Wanjun; Zhang, Hou

    2017-11-01

    Through introducing the alternating direction implicit (ADI) technique and a memory-optimized algorithm into the shift operator (SO) finite difference time domain (FDTD) method, a memory-optimized SO-ADI FDTD method for nonmagnetized collisional plasma is proposed and the corresponding formulae for programming are deduced. In order to further improve the computational efficiency, an iterative method rather than Gaussian elimination is employed to solve the equation set in the derivation of the formulae. Complicated transformations and convolutions are avoided in the proposed method compared with the Z-transform (ZT) ADI FDTD method and the piecewise linear JE recursive convolution (PLJERC) ADI FDTD method. The numerical dispersion of the SO-ADI FDTD method with different plasma frequencies and electron collision frequencies is analyzed and the appropriate ratio of grid size to the minimum wavelength is given. The accuracy of the proposed method is validated by a reflection coefficient test on a nonmagnetized collisional plasma sheet. The testing results show that the proposed method is advantageous for improving computational efficiency and saving computer memory. The reflection coefficient of a perfect electric conductor (PEC) sheet covered by multilayer plasma and the RCS of objects coated by plasma are calculated by the proposed method and the simulation results are analyzed.

  12. Optimal protocol for maximum work extraction in a feedback process with a time-varying potential

    NASA Astrophysics Data System (ADS)

    Kwon, Chulan

    2017-12-01

    The nonequilibrium nature of information thermodynamics is characterized by the inequality or non-negativity of the total entropy change of the system, memory, and reservoir. Mutual information change plays a crucial role in the inequality, in particular if work is extracted and the paradox of Maxwell's demon is raised. We consider the Brownian information engine where the protocol set of the harmonic potential is initially chosen by the measurement and varies in time. We confirm the inequality of the total entropy change by calculating, in detail, the entropic terms including the mutual information change. We rigorously find the optimal values of the time-dependent protocol for maximum extraction of work both for the finite-time and the quasi-static process.

  13. Symmetric tridiagonal structure preserving finite element model updating problem for the quadratic model

    NASA Astrophysics Data System (ADS)

    Rakshit, Suman; Khare, Swanand R.; Datta, Biswa Nath

    2018-07-01

    One of the most important yet difficult aspects of the Finite Element Model Updating Problem is to preserve the finite element inherited structures in the updated model. Finite element matrices are in general symmetric, positive definite (or semi-definite) and banded (tridiagonal, diagonal, penta-diagonal, etc.). Though a large number of papers have been published in recent years on various aspects of this problem, papers dealing with structure preservation are almost nonexistent. A novel optimization-based approach that preserves the symmetric tridiagonal structures of the stiffness and damping matrices is proposed in this paper. An analytical expression for the global minimum solution of the associated optimization problem is presented, along with the results of numerical experiments obtained both from the analytical expressions and from an appropriate numerical optimization algorithm. The results of the numerical experiments support the validity of the proposed method.

  14. Weak Galerkin method for the Biot’s consolidation model

    DOE PAGES

    Hu, Xiaozhe; Mu, Lin; Ye, Xiu

    2017-08-23

    In this study, we develop a weak Galerkin (WG) finite element method for the Biot's consolidation model in the classical displacement–pressure two-field formulation. Weak Galerkin linear finite elements are used for both displacement and pressure approximations in spatial discretizations. A backward Euler scheme is used for temporal discretization in order to obtain an implicit fully discretized scheme. We study the well-posedness of the linear system at each time step and also derive the overall optimal-order convergence of the WG formulation. Such a WG scheme is designed on general shape-regular polytopal meshes and provides stable and oscillation-free approximation for the pressure without special treatment. Lastly, numerical experiments are presented to demonstrate the efficiency and accuracy of the proposed weak Galerkin finite element method.

  15. Weak Galerkin method for the Biot’s consolidation model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Xiaozhe; Mu, Lin; Ye, Xiu

    In this study, we develop a weak Galerkin (WG) finite element method for the Biot's consolidation model in the classical displacement–pressure two-field formulation. Weak Galerkin linear finite elements are used for both displacement and pressure approximations in spatial discretizations. A backward Euler scheme is used for temporal discretization in order to obtain an implicit fully discretized scheme. We study the well-posedness of the linear system at each time step and also derive the overall optimal-order convergence of the WG formulation. Such a WG scheme is designed on general shape-regular polytopal meshes and provides stable and oscillation-free approximation for the pressure without special treatment. Lastly, numerical experiments are presented to demonstrate the efficiency and accuracy of the proposed weak Galerkin finite element method.

  16. Lack of a thermodynamic finite-temperature spin-glass phase in the two-dimensional randomly coupled ferromagnet

    NASA Astrophysics Data System (ADS)

    Zhu, Zheng; Ochoa, Andrew J.; Katzgraber, Helmut G.

    2018-05-01

    The search for problems where quantum adiabatic optimization might excel over classical optimization techniques has sparked a recent interest in inducing a finite-temperature spin-glass transition in quasiplanar topologies. We have performed large-scale finite-temperature Monte Carlo simulations of a two-dimensional square-lattice bimodal spin glass with next-nearest ferromagnetic interactions claimed to exhibit a finite-temperature spin-glass state for a particular relative strength of the next-nearest to nearest interactions [Phys. Rev. Lett. 76, 4616 (1996), 10.1103/PhysRevLett.76.4616]. Our results show that the system is in a paramagnetic state in the thermodynamic limit, despite zero-temperature simulations [Phys. Rev. B 63, 094423 (2001), 10.1103/PhysRevB.63.094423] suggesting the existence of a finite-temperature spin-glass transition. Therefore, deducing the finite-temperature behavior from zero-temperature simulations can be dangerous when corrections to scaling are large.

  17. Eyeglasses Lens Contour Extraction from Facial Images Using an Efficient Shape Description

    PubMed Central

    Borza, Diana; Darabant, Adrian Sergiu; Danescu, Radu

    2013-01-01

    This paper presents a system that automatically extracts the position of the eyeglasses and the accurate shape and size of the frame lenses in facial images. The novelty of this paper lies in three key contributions. The first is an original model for representing the shape of the eyeglasses lens, using Fourier descriptors. The second is a method for generating the search space starting from a finite, relatively small number of representative lens shapes based on Fourier morphing. Finally, we propose an accurate lens contour extraction algorithm using a multi-stage Monte Carlo sampling technique. Multiple experiments demonstrate the effectiveness of our approach. PMID:24152926
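
    As a minimal sketch of the descriptor idea (on a synthetic contour, not the paper's lens model), a closed boundary can be encoded as complex coordinates, transformed with an FFT, and truncated to a few harmonics; blending the truncated coefficient sets of two shapes would give the kind of morphed intermediate shapes used to build a search space.

    ```python
    import numpy as np

    # Fourier descriptors of a synthetic closed contour (illustrative only).
    theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
    x = 1.6 * np.cos(theta)                       # ellipse-like "lens" outline
    y = 1.0 * np.sin(theta) + 0.1 * np.sin(2 * theta)
    z = x + 1j * y                                # complex boundary representation

    Z = np.fft.fft(z) / len(z)                    # Fourier descriptors

    # Keep only the K lowest harmonics and reconstruct the shape
    K = 5
    Z_trunc = np.zeros_like(Z)
    Z_trunc[:K + 1] = Z[:K + 1]                   # DC term and positive frequencies
    Z_trunc[-K:] = Z[-K:]                         # negative frequencies
    z_rec = np.fft.ifft(Z_trunc) * len(z)

    print("max reconstruction error:", np.max(np.abs(z - z_rec)))
    ```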

  18. Computations of unsteady multistage compressor flows in a workstation environment

    NASA Technical Reports Server (NTRS)

    Gundy-Burlet, Karen L.

    1992-01-01

    High-end graphics workstations are becoming a necessary tool in the computational fluid dynamics environment. In addition to their graphic capabilities, workstations of the latest generation have powerful floating-point-operation capabilities. As workstations become common, they could provide valuable computing time for such applications as turbomachinery flow calculations. This report discusses the issues involved in implementing an unsteady, viscous multistage-turbomachinery code (STAGE-2) on workstations. It then describes work in which the workstation version of STAGE-2 was used to study the effects of axial-gap spacing on the time-averaged and unsteady flow within a 2 1/2-stage compressor. The results included time-averaged surface pressures, time-averaged pressure contours, standard deviation of pressure contours, pressure amplitudes, and force polar plots.

  19. Optimal Limited Contingency Planning

    NASA Technical Reports Server (NTRS)

    Meuleau, Nicolas; Smith, David E.

    2003-01-01

    For a given problem, the optimal Markov policy over a finite horizon is a conditional plan containing a potentially large number of branches. However, there are applications where it is desirable to strictly limit the number of decision points and branches in a plan. This raises the question of how one goes about finding optimal plans containing only a limited number of branches. In this paper, we present an any-time algorithm for optimal k-contingency planning. It is the first optimal algorithm for limited contingency planning that is not an explicit enumeration of possible contingent plans. By modelling the problem as a partially observable Markov decision process, it implements the Bellman optimality principle and prunes the solution space. We present experimental results of applying this algorithm to some simple test cases.
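
    For context, the fully observable, unrestricted version of this problem, an optimal Markov policy over a finite horizon, reduces to backward induction; the sketch below uses random toy transition and reward tables and does not implement the paper's POMDP-based k-contingency pruning.

    ```python
    import numpy as np

    # Finite-horizon MDP solved by backward induction (toy data, illustrative only).
    n_states, n_actions, horizon = 4, 2, 5
    rng = np.random.default_rng(1)
    P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # P[a, s, :]
    R = rng.uniform(0.0, 1.0, size=(n_actions, n_states))             # R[a, s]

    V = np.zeros(n_states)                         # terminal value function
    policy = np.zeros((horizon, n_states), dtype=int)
    for t in reversed(range(horizon)):
        Q = R + P @ V                              # Q[a, s] = R[a, s] + E[V(s')]
        policy[t] = np.argmax(Q, axis=0)           # best action per state at stage t
        V = np.max(Q, axis=0)
    print("optimal expected value per start state:", V)
    ```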

  20. Development of thermal control methods for specialized components and scientific instruments at very low temperatures (follow-on)

    NASA Technical Reports Server (NTRS)

    Wright, J. P.; Wilson, D. E.

    1976-01-01

    Many payloads currently proposed to be flown by the space shuttle system require long-duration cooling in the 3 to 200 K temperature range. Common requirements also exist for certain DOD payloads. Parametric design and optimization studies are reported for multistage and diode heat pipe radiator systems designed to operate in this temperature range. Also optimized are ground test systems for two long-life passive thermal control concepts operating under specified space environmental conditions. The ground test systems evaluated are ultimately intended to evolve into flight test qualification prototypes for early shuttle flights.

  1. Finite-difference simulation and visualization of elastodynamics in time-evolving generalized curvilinear coordinates

    NASA Technical Reports Server (NTRS)

    Kaul, Upender K. (Inventor)

    2009-01-01

    Modeling and simulation of free and forced structural vibrations is essential to an overall structural health monitoring capability. In the various embodiments, a first principles finite-difference approach is adopted in modeling a structural subsystem such as a mechanical gear by solving elastodynamic equations in generalized curvilinear coordinates. Such a capability to generate a dynamic structural response is widely applicable in a variety of structural health monitoring systems. This capability (1) will lead to an understanding of the dynamic behavior of a structural system and hence its improved design, (2) will generate a sufficiently large space of normal and damage solutions that can be used by machine learning algorithms to detect anomalous system behavior and achieve a system design optimization and (3) will lead to an optimal sensor placement strategy, based on the identification of local stress maxima all over the domain.

  2. 78 FR 1206 - Notice of Availability of Government-Owned Inventions; Available for Licensing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-08

    .... Patent No. 8,238,924: Real-Time Optimization of Allocation of Resources//U.S. Patent No. 7,685,207: Adaptive Web-Based Asset Control System. ADDRESSES: Requests for copies of the patents cited should be...: Patent application 12/650,413: Finite State Machine Architecture for Software Development (a system for...

  3. Automating Structural Analysis of Spacecraft Vehicles

    NASA Technical Reports Server (NTRS)

    Hrinda, Glenn A.

    2004-01-01

    A major effort within NASA's vehicle analysis discipline has been to automate structural analysis and sizing optimization during conceptual design studies of advanced spacecraft. Traditional spacecraft structural sizing has involved detailed finite element analysis (FEA) requiring large degree-of-freedom (DOF) finite element models (FEM). Creation and analysis of these models can be time-consuming and limit model size during conceptual designs. The goal is to find an optimal design that meets the mission requirements but produces the lightest structure. A structural sizing tool called HyperSizer has been successfully used in the conceptual design phase of a reusable launch vehicle and a planetary exploration spacecraft. The program couples with FEA to enable system-level performance assessments and weight predictions, including design optimization of material selections and sizing of spacecraft members. The software's analysis capabilities are based on established aerospace structural methods for strength, stability and stiffness that produce adequately sized members and reliable structural weight estimates. The software also helps to identify potential structural deficiencies early in the conceptual design so changes can be made without wasted time. HyperSizer's automated analysis and sizing optimization increases productivity and brings standardization to a systems study. These benefits are illustrated by examining two different types of conceptual spacecraft designed using the software: a hypersonic air-breathing, single-stage-to-orbit (SSTO), reusable launch vehicle (RLV), and an aeroshell for a planetary exploration vehicle used for aerocapture at Mars. By showing the two different types of vehicles, the software's flexibility is demonstrated, with an emphasis on reducing aeroshell structural weight. Member sizes, concepts and material selections are discussed, as well as the HyperSizer-based analysis methods used in optimizing the structure, and the design trades required to optimize structural weight are presented.

  4. Optimization of Adaboost Algorithm for Sonar Target Detection in a Multi-Stage ATR System

    NASA Technical Reports Server (NTRS)

    Lin, Tsung Han (Hank)

    2011-01-01

    JPL has developed a multi-stage Automated Target Recognition (ATR) system to locate objects in images. First, input images are preprocessed and sent to a Grayscale Optical Correlator (GOC) filter to identify possible regions-of-interest (ROIs). Second, feature extraction operations are performed using Texton filters and Principal Component Analysis (PCA). Finally, the features are fed to a classifier, to identify ROIs that contain the targets. Previous work used the Feed-forward Back-propagation Neural Network for classification. In this project we investigate a version of Adaboost as a classifier for comparison. The version we used is known as GentleBoost. We used the boosted decision tree as the weak classifier. We have tested our ATR system against real-world sonar images using the Adaboost approach. Results indicate an improvement in performance over a single Neural Network design.
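
    A compact way to see what GentleBoost does is the sketch below: each round fits a shallow regression tree to the ±1 labels by weighted least squares, adds it to the additive model, and reweights the data. This is a generic illustration on synthetic data, not the JPL ATR classifier or its Texton/PCA features.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeRegressor

    # Toy GentleBoost with shallow regression trees as weak learners.
    X, y01 = make_classification(n_samples=400, n_features=10, random_state=0)
    y = 2 * y01 - 1                                # labels in {-1, +1}

    n_rounds = 50
    w = np.full(len(y), 1.0 / len(y))              # sample weights
    F = np.zeros(len(y))                           # additive model scores
    learners = []
    for _ in range(n_rounds):
        tree = DecisionTreeRegressor(max_depth=2, random_state=0)
        tree.fit(X, y, sample_weight=w)            # weighted least-squares fit
        f_m = tree.predict(X)
        F += f_m
        learners.append(tree)
        w *= np.exp(-y * f_m)                      # GentleBoost weight update
        w /= w.sum()

    print("training accuracy:", np.mean(np.sign(F) == y))
    ```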

  5. Evaluation method of membrane performance in membrane distillation process for seawater desalination.

    PubMed

    Chung, Seungjoon; Seo, Chang Duck; Choi, Jae-Hoon; Chung, Jinwook

    2014-01-01

    Membrane distillation (MD) is an emerging desalination technology and an energy-saving alternative to conventional distillation and reverse osmosis methods. The selection of an appropriate membrane is a prerequisite for the design of an optimized MD process. We proposed a simple approximation method to evaluate the performance of membranes for the MD process. Three hollow fibre-type commercial membranes with different thicknesses and pore sizes were tested. Experimental results showed that one membrane was advantageous due to its highest flux, whereas another was advantageous due to its lowest feed temperature drop. Regression analyses and multi-stage calculations were used to account for the trade-off effects of flux and feed temperature drop. The most desirable membrane was selected from the tested membranes in terms of the mean flux in a multi-stage process. This method would be useful for selecting membranes without complicated simulation techniques.

  6. Multi-stage fuel cell system method and apparatus

    DOEpatents

    George, Thomas J.; Smith, William C.

    2000-01-01

    A high efficiency, multi-stage fuel cell system method and apparatus is provided. The fuel cell system is comprised of multiple fuel cell stages, whereby the temperatures of the fuel and oxidant gas streams and the percentage of fuel consumed in each stage are controlled to optimize fuel cell system efficiency. The stages are connected in a serial, flow-through arrangement such that the oxidant gas and fuel gas flowing through an upstream stage is conducted directly into the next adjacent downstream stage. The fuel cell stages are further arranged such that unspent fuel and oxidant laden gases too hot to continue within an upstream stage because of material constraints are conducted into a subsequent downstream stage which comprises a similar cell configuration, however, which is constructed from materials having a higher heat tolerance and designed to meet higher thermal demands. In addition, fuel is underutilized in each stage, resulting in a higher overall fuel cell system efficiency.

  7. Optimization of power utilization in multimobile robot foraging behavior inspired by honeybees system.

    PubMed

    Ahmad, Faisul Arif; Ramli, Abd Rahman; Samsudin, Khairulmizam; Hashim, Shaiful Jahari

    2014-01-01

    Deploying large numbers of mobile robots which can interact with each other produces swarm intelligent behavior. However, mobile robots normally run on a finite energy resource, supplied from a finite battery. This energy limitation has traditionally required human intervention for recharging the batteries. Sharing information among the mobile robots is one potential way to overcome the limitations of previous recharging systems. A new approach is proposed based on an integrated intelligent system inspired by the foraging of honeybees and applied to a multimobile robot scenario. This integrated approach caters for both working and foraging stages with known/unknown power station locations. A swarm of mobile robots inspired by honeybees is simulated to explore and identify the power station for battery recharging. The mobile robots share the location information of the power station with each other. The results showed that the mobile robots consume less energy and less time when they cooperate with each other during the foraging process. Optimizing the foraging behavior results in the mobile robots spending more time doing real work.

  8. Optimization of Power Utilization in Multimobile Robot Foraging Behavior Inspired by Honeybees System

    PubMed Central

    Ahmad, Faisul Arif; Ramli, Abd Rahman; Samsudin, Khairulmizam; Hashim, Shaiful Jahari

    2014-01-01

    Deploying large numbers of mobile robots which can interact with each other produces swarm intelligent behavior. However, mobile robots normally run on a finite energy resource, supplied from a finite battery. This energy limitation has traditionally required human intervention for recharging the batteries. Sharing information among the mobile robots is one potential way to overcome the limitations of previous recharging systems. A new approach is proposed based on an integrated intelligent system inspired by the foraging of honeybees and applied to a multimobile robot scenario. This integrated approach caters for both working and foraging stages with known/unknown power station locations. A swarm of mobile robots inspired by honeybees is simulated to explore and identify the power station for battery recharging. The mobile robots share the location information of the power station with each other. The results showed that the mobile robots consume less energy and less time when they cooperate with each other during the foraging process. Optimizing the foraging behavior results in the mobile robots spending more time doing real work. PMID:24949491

  9. Optimal routing and buffer allocation for a class of finite capacity queueing systems

    NASA Technical Reports Server (NTRS)

    Towsley, Don; Sparaggis, Panayotis D.; Cassandras, Christos G.

    1992-01-01

    The problem of routing jobs to K parallel queues with identical exponential servers and unequal finite buffer capacities is considered. Routing decisions are taken by a controller which has buffering space available to it and may delay routing of a customer to a queue. Using ideas from weak majorization, it is shown that the shorter nonfull queue delayed (SNQD) policy minimizes both the total number of customers in the system at any time and the number of customers that are rejected by that time. The SNQD policy always delays routing decisions as long as all servers are busy. Only when all the buffers at the controller are occupied is a customer routed to the queue with the shortest queue length that is not at capacity. Moreover, it is shown that, if a fixed number of buffers is to be distributed among the K queues, then the optimal allocation scheme is the one in which the difference between the maximum and minimum queue capacities is minimized, i.e., becomes either 0 or 1.

  10. Modeling and design optimization of adhesion between surfaces at the microscale.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sylves, Kevin T.

    2008-08-01

    This research applies design optimization techniques to structures in adhesive contact where the dominant adhesive mechanism is the van der Waals force. Interface finite elements are developed for domains discretized by beam elements, quadrilateral elements or triangular shell elements. Example analysis problems comparing finite element results to analytical solutions are presented. These examples are then optimized, where the objective is matching a force-displacement relationship and the optimization variables are the interface element energy of adhesion or the width of beam elements in the structure. Several parameter studies are conducted and discussed.

  11. Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.

    2016-01-01

    A new engine cycle analysis tool, called Pycycle, was recently built using the OpenMDAO framework. This tool uses equilibrium chemistry based thermodynamics, and provides analytic derivatives. This allows for stable and efficient use of gradient-based optimization and sensitivity analysis methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a multi-point turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
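
    The benefit being claimed, fewer function evaluations when exact gradients replace finite-difference approximations, can be seen with any gradient-based optimizer; the sketch below uses a Rosenbrock test function and SciPy's BFGS, not PyCycle or OpenMDAO.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def rosen(x):
        return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

    def rosen_grad(x):
        # analytic gradient of the Rosenbrock function
        g = np.zeros_like(x)
        g[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1] ** 2) - 2.0 * (1.0 - x[:-1])
        g[1:] += 200.0 * (x[1:] - x[:-1] ** 2)
        return g

    x0 = np.zeros(8)
    res_analytic = minimize(rosen, x0, jac=rosen_grad, method="BFGS")
    res_fd = minimize(rosen, x0, method="BFGS")   # gradient via finite differences
    print("function evaluations, analytic vs finite-difference:",
          res_analytic.nfev, "vs", res_fd.nfev)
    ```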

  12. Stability of finite difference numerical simulations of acoustic logging-while-drilling with different perfectly matched layer schemes

    NASA Astrophysics Data System (ADS)

    Wang, Hua; Tao, Guo; Shang, Xue-Feng; Fang, Xin-Ding; Burns, Daniel R.

    2013-12-01

    In acoustic logging-while-drilling (ALWD) finite difference time domain (FDTD) simulations, the large drill collar occupies most of the fluid-filled borehole and divides the borehole fluid into two thin fluid columns (radius ˜27 mm). Fine grids and large computational models are required to model the thin fluid region between the tool and the formation. As a result, a small time step and more iterations are needed, which increases the cumulative numerical error. Furthermore, due to the high impedance contrast between the drill collar and the fluid in the borehole (the difference is >30 times), the stability and efficiency of the perfectly matched layer (PML) scheme is critical for simulating complicated wave modes accurately. In this paper, we compared four different PML implementations in a staggered grid FDTD scheme for the ALWD simulation: field-splitting PML (SPML), multiaxial PML (M-PML), non-splitting PML (NPML), and complex frequency-shifted PML (CFS-PML). The comparison indicated that NPML and CFS-PML can absorb the guided wave reflection from the computational boundaries more efficiently than SPML and M-PML. For long simulation times, SPML, M-PML, and NPML are numerically unstable; however, the stability of M-PML can be improved further to some extent. Based on this analysis, we propose that the CFS-PML method be used in FDTD to eliminate the numerical instability and to improve the efficiency of absorption in the PML layers for LWD modeling. The optimal values of the CFS-PML parameters in the LWD simulation were investigated based on thousands of 3D simulations. For typical LWD cases, the best maximum value of the quadratic damping profile was obtained using a single value of d0. The optimal parameter space for the maximum value of the linear frequency-shifted factor (α0) and the scaling factor (β0) depended on the thickness of the PML layer. For typical formations, if the PML thickness is 10 grid points, the global error can be reduced to <1% using the optimal PML parameters, and the error decreases as the PML thickness increases.

  13. Three-dimensional geoelectric modelling with optimal work/accuracy rate using an adaptive wavelet algorithm

    NASA Astrophysics Data System (ADS)

    Plattner, A.; Maurer, H. R.; Vorloeper, J.; Dahmen, W.

    2010-08-01

    Despite the ever-increasing power of modern computers, realistic modelling of complex 3-D earth models is still a challenging task and requires substantial computing resources. The overwhelming majority of current geophysical modelling approaches includes either finite difference or non-adaptive finite element algorithms and variants thereof. These numerical methods usually require the subsurface to be discretized with a fine mesh to accurately capture the behaviour of the physical fields. However, this may result in excessive memory consumption and computing times. A common feature of most of these algorithms is that the modelled data discretizations are independent of the model complexity, which may be wasteful when there are only minor to moderate spatial variations in the subsurface parameters. Recent developments in the theory of adaptive numerical solvers have the potential to overcome this problem. Here, we consider an adaptive wavelet-based approach that is applicable to a large range of problems, also including nonlinear problems. In comparison with earlier applications of adaptive solvers to geophysical problems, we employ here a new adaptive scheme whose core ingredients arose from a rigorous analysis of the overall asymptotically optimal computational complexity, including, in particular, an optimal work/accuracy rate. Our adaptive wavelet algorithm offers several attractive features: (i) for a given subsurface model, it allows the forward modelling domain to be discretized with a quasi minimal number of degrees of freedom, (ii) sparsity of the associated system matrices is guaranteed, which makes the algorithm memory efficient and (iii) the modelling accuracy scales linearly with computing time. We have implemented the adaptive wavelet algorithm for solving 3-D geoelectric problems. To test its performance, numerical experiments were conducted with a series of conductivity models exhibiting varying degrees of structural complexity. Results were compared with a non-adaptive finite element algorithm, which incorporates an unstructured mesh fitted to the subsurface boundaries. Such algorithms represent the current state-of-the-art in geoelectric modelling. An analysis of the numerical accuracy as a function of the number of degrees of freedom revealed that the adaptive wavelet algorithm outperforms the finite element solver for simple and moderately complex models, whereas the results become comparable for models with high spatial variability of electrical conductivities. The linear dependence of the modelling error and the computing time proved to be model-independent. This feature will allow very efficient computations using large-scale models as soon as our experimental code is optimized in terms of its implementation.

  14. Numerical analysis of three-dimensional viscous internal flows

    NASA Technical Reports Server (NTRS)

    Chima, Rodrick V.; Yokota, Jeffrey W.

    1988-01-01

    A 3-D Navier-Stokes code has been developed for analysis of turbomachinery blade rows and other internal flows. The Navier-Stokes equations are written in a Cartesian coordinate system rotating about the x-axis, and then mapped to a general body-fitted coordinate system. Streamwise viscous terms are neglected using the thin-layer assumption, and turbulence effects are modeled using the Baldwin-Lomax turbulence model. The equations are discretized using finite differences on stacked C-type grids and are solved using a multistage Runge-Kutta algorithm with a spatially-varying time step and implicit residual smoothing. Calculations have been made of a horseshoe vortex formed in front of a flat plate with a round leading edge standing in a turbulent endwall boundary layer. Comparisons are made with experimental data taken by Eckerle and Langston for a circular cylinder under similar conditions. Computed and measured results are compared in terms of endwall flow visualization pictures and total pressure loss contours and vector plots on the symmetry plane. Calculated details of the primary vortex show excellent agreement with the experimental data. The calculations also show a small secondary vortex not seen experimentally.
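
    The multistage Runge-Kutta update mentioned above has the simple form u^(k) = u^n + α_k Δt R(u^(k-1)); the sketch below shows that skeleton on a toy residual, with illustrative coefficients and without the residual smoothing or spatially varying time step used in the actual solver.

    ```python
    import numpy as np

    # Jameson-style multistage Runge-Kutta skeleton (coefficients are illustrative).
    alphas = [0.25, 1.0 / 3.0, 0.5, 1.0]

    def residual(u):
        # toy residual standing in for the discretized Navier-Stokes operator
        return -u

    def rk_multistage(u, dt):
        u0, uk = u.copy(), u.copy()
        for a in alphas:
            uk = u0 + a * dt * residual(uk)    # each stage restarts from u^n
        return uk

    u = np.ones(8)
    for _ in range(100):
        u = rk_multistage(u, dt=0.1)
    print(u[:3])                               # decays toward zero for du/dt = -u
    ```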

  15. Algorithms for Maneuvering Spacecraft Around Small Bodies

    NASA Technical Reports Server (NTRS)

    Acikmese, A. Bechet; Bayard, David

    2006-01-01

    A document describes mathematical derivations and applications of autonomous guidance algorithms for maneuvering spacecraft in the vicinities of small astronomical bodies like comets or asteroids. These algorithms compute fuel- or energy-optimal trajectories for typical maneuvers by solving the associated optimal-control problems with relevant control and state constraints. In the derivations, these problems are converted from their original continuous (infinite-dimensional) forms to finite-dimensional forms through (1) discretization of the time axis and (2) spectral discretization of control inputs via a finite number of Chebyshev basis functions. In these doubly discretized problems, the Chebyshev coefficients are the variables. These problems are, variously, either convex programming problems or programming problems that can be convexified. The resulting discrete problems are convex parameter-optimization problems; this is desirable because one can take advantage of very efficient and robust algorithms that have been developed previously and are well established for solving such problems. These algorithms are fast, do not require initial guesses, and always converge to global optima. Following the derivations, the algorithms are demonstrated by applying them to numerical examples of flyby, descent-to-hover, and ascent-from-hover maneuvers.
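
    A stripped-down version of the spectral discretization idea, with a toy double-integrator maneuver, a handful of Chebyshev coefficients as variables, and a general-purpose NLP solver in place of the convex programming machinery described above, might look like the following (all numbers and names are assumptions for illustration).

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C
    from scipy.optimize import minimize

    # Parameterize a 1-D control history with Chebyshev coefficients and minimize
    # control energy subject to reaching x = 1 with zero velocity at time T.
    T, N, n_cheb = 10.0, 200, 6
    t = np.linspace(0.0, T, N)
    s = 2.0 * t / T - 1.0                      # map [0, T] onto the Chebyshev domain
    dt = t[1] - t[0]

    def simulate(coeffs):
        u = C.chebval(s, coeffs)               # control from spectral coefficients
        x = v = 0.0
        for uk in u[:-1]:                      # explicit Euler double integrator
            v += uk * dt
            x += v * dt
        return x, v, u

    def objective(coeffs):
        _, _, u = simulate(coeffs)
        return float(np.sum(u[:-1] ** 2) * dt)  # control "energy"

    def end_constraints(coeffs):
        x, v, _ = simulate(coeffs)
        return [x - 1.0, v]                     # terminal position and velocity

    res = minimize(objective, np.zeros(n_cheb),
                   constraints={"type": "eq", "fun": end_constraints})
    print("final (x, v):", simulate(res.x)[:2])
    ```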

  16. Optimal control of energy extraction in LES of large wind farms

    NASA Astrophysics Data System (ADS)

    Meyers, Johan; Goit, Jay; Munters, Wim

    2014-11-01

    We investigate the use of optimal control combined with Large-Eddy Simulations (LES) of wind-farm boundary layer interaction to increase total energy extraction in very large ``infinite'' wind farms and in finite farms. We consider the individual wind turbines as flow actuators, whose energy extraction can be dynamically regulated in time so as to optimally influence the turbulent flow field, maximizing the wind farm power. For the simulation of wind-farm boundary layers we use large-eddy simulations in combination with an actuator-disk representation of wind turbines. Simulations are performed in our in-house pseudo-spectral code SP-Wind. For the optimal control study, we consider the dynamic control of turbine-thrust coefficients in the actuator-disk model. They represent the effect of turbine blades that can actively pitch in time, changing the lift and drag coefficients of the turbine blades. In a first infinite wind-farm case, we find that farm power increases by approximately 16% over one hour of operation. This comes at the cost of a deceleration of the outer layer of the boundary layer. A detailed analysis of energy balances is presented, and a comparison is made between infinite and finite farm cases, for which boundary layer entrainment plays an important role. The authors acknowledge support from the European Research Council (FP7-Ideas, Grant No. 306471). Simulations were performed on the computing infrastructure of the VSC Flemish Supercomputer Center, funded by the Hercules Foundation and the Flemish Government.

  17. Multistage Stochastic Programming and its Applications in Energy Systems Modeling and Optimization

    NASA Astrophysics Data System (ADS)

    Golari, Mehdi

    Electric energy constitutes one of the most crucial elements of almost every aspect of people's lives. Modern electric power systems face several challenges regarding efficiency, economics, sustainability, and reliability. Increases in electrical energy demand, distributed generation, integration of uncertain renewable energy resources, and demand side management are among the main underlying reasons for this growing complexity. Additionally, the elements of power systems are often vulnerable to failures for many reasons, such as system limits, weak conditions, unexpected events, hidden failures, human errors, terrorist attacks, and natural disasters. One common factor complicating the operation of electrical power systems is the underlying uncertainty in the demands, supplies and failures of system components. Stochastic programming provides a mathematical framework for decision making under uncertainty. It enables a decision maker to incorporate some knowledge of the intrinsic uncertainty into the decision making process. In this dissertation, we focus on the application of two-stage and multistage stochastic programming approaches to electric energy systems modeling and optimization. In particular, we develop models and algorithms addressing the sustainability and reliability issues in power systems. First, we consider how to improve the reliability of power systems under severe failures or contingencies prone to cascading blackouts through so-called islanding operations. We present a two-stage stochastic mixed-integer model to find optimal islanding operations as a powerful preventive action against cascading failures in case of extreme contingencies. Further, we study the properties of this problem and propose efficient solution methods to solve it for large-scale power systems. We present numerical results showing the effectiveness of the model and investigate the performance of the solution methods. Next, we address the sustainability issue by considering the integration of renewable energy resources into the production planning of energy-intensive manufacturing industries. Recently, a growing number of manufacturing companies have been considering renewable energies to meet their energy requirements, both to move toward green manufacturing and to decrease their energy costs. However, the intermittent nature of renewable energies imposes several difficulties on long-term planning of how to efficiently exploit renewables. In this study, we propose a scheme for manufacturing companies to use onsite and grid renewable energies, provided by their own investments and by energy utilities, as well as conventional grid energy to satisfy their energy requirements. We propose a multistage stochastic programming model and study an efficient solution method for this problem. We examine the proposed framework on a test case simulated from a real-world semiconductor company. Moreover, we evaluate the long-term profitability of such a scheme via the so-called value of multistage stochastic programming.
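
    The two-stage structure underlying these models can be illustrated with a toy capacity-versus-recourse problem solved through its deterministic equivalent; the scenario data, costs, and LP formulation below are assumptions for illustration, not the dissertation's mixed-integer islanding or production-planning models.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy two-stage stochastic program: choose capacity x now (first stage), buy
    # recourse energy y_s after demand d_s is revealed (second stage).
    demand = np.array([80.0, 100.0, 130.0])      # scenario demands (assumed)
    prob = np.array([0.3, 0.5, 0.2])             # scenario probabilities
    c_cap, c_rec = 1.0, 2.5                      # first-/second-stage unit costs

    # Deterministic equivalent: min c_cap*x + sum_s prob_s * c_rec * y_s
    cost = np.concatenate(([c_cap], prob * c_rec))
    # Coverage constraints x + y_s >= d_s, written as -x - y_s <= -d_s
    n_s = len(demand)
    A_ub = np.zeros((n_s, 1 + n_s))
    A_ub[:, 0] = -1.0
    A_ub[np.arange(n_s), 1 + np.arange(n_s)] = -1.0
    b_ub = -demand

    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n_s))
    print("optimal first-stage capacity:", res.x[0], "expected cost:", res.fun)
    ```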

  18. Finite-Dimensional Representations for Controlled Diffusions with Delay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Federico, Salvatore, E-mail: salvatore.federico@unimi.it; Tankov, Peter, E-mail: tankov@math.univ-paris-diderot.fr

    2015-02-15

    We study stochastic delay differential equations (SDDE) where the coefficients depend on the moving averages of the state process. As a first contribution, we provide sufficient conditions under which the solution of the SDDE and a linear path functional of it admit a finite-dimensional Markovian representation. As a second contribution, we show how approximate finite-dimensional Markovian representations may be constructed when these conditions are not satisfied, and provide an estimate of the error corresponding to these approximations. These results are applied to optimal control and optimal stopping problems for stochastic systems with delay.

  19. Development of JSTAMP-Works/NV and HYSTAMP for Multipurpose Multistage Sheet Metal Forming Simulation

    NASA Astrophysics Data System (ADS)

    Umezu, Yasuyoshi; Watanabe, Yuko; Ma, Ninshu

    2005-08-01

    Since 1996, Japan Research Institute Limited (JRI) has been providing a sheet metal forming simulation system called JSTAMP-Works, packaging the FEM solvers LS-DYNA and JOH/NIKE; it might have been the first multistage system at the time and has enjoyed a good reputation among users in Japan. To match the recent needs of process designers and CAE engineers, "faster, more accurate and easier", a new metal forming simulation system, JSTAMP-Works/NV, has been developed. JSTAMP-Works/NV packages an automatic CAD healing function and offers many new capabilities, such as prediction of 3D trimming lines for flanging or hemming, remote control of solver execution for multi-stage forming processes, and shape evaluation between FEM and CAD. Separately, a multi-stage, multi-purpose inverse FEM solver, HYSTAMP, has been developed and will soon be put on the market; it has proved to be very fast, accurate and robust. Lastly, the authors give some application examples of a user-defined ductile damage subroutine in LS-DYNA for the estimation of material failure and springback in metal forming simulation.

  20. Departures From Optimality When Pursuing Multiple Approach or Avoidance Goals

    PubMed Central

    2016-01-01

    This article examines how people depart from optimality during multiple-goal pursuit. The authors operationalized optimality using dynamic programming, which is a mathematical model used to calculate expected value in multistage decisions. Drawing on prospect theory, they predicted that people are risk-averse when pursuing approach goals and are therefore more likely to prioritize the goal in the best position than the dynamic programming model suggests is optimal. The authors predicted that people are risk-seeking when pursuing avoidance goals and are therefore more likely to prioritize the goal in the worst position than is optimal. These predictions were supported by results from an experimental paradigm in which participants made a series of prioritization decisions while pursuing either 2 approach or 2 avoidance goals. This research demonstrates the usefulness of using decision-making theories and normative models to understand multiple-goal pursuit. PMID:26963081

  1. Adjoint-Based Methodology for Time-Dependent Optimization

    NASA Technical Reports Server (NTRS)

    Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.

    2008-01-01

    This paper presents a discrete adjoint method for a broad class of time-dependent optimization problems. The time-dependent adjoint equations are derived in terms of the discrete residual of an arbitrary finite volume scheme which approximates unsteady conservation law equations. Although only the 2-D unsteady Euler equations are considered in the present analysis, this time-dependent adjoint method is applicable to the 3-D unsteady Reynolds-averaged Navier-Stokes equations with minor modifications. The discrete adjoint operators involving the derivatives of the discrete residual and the cost functional with respect to the flow variables are computed using a complex-variable approach, which provides discrete consistency and drastically reduces the implementation and debugging cycle. The implementation of the time-dependent adjoint method is validated by comparing the sensitivity derivative with that obtained by forward mode differentiation. Our numerical results show that O(10) optimization iterations of the steepest descent method are needed to reduce the objective functional by 3-6 orders of magnitude for test problems considered.
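
    The mechanics of a discrete adjoint, march the state forward, then sweep the adjoint variable backward through the same discrete residual to accumulate the sensitivity, can be shown on a scalar toy model; the recurrence, cost function, and parameter below are assumptions for illustration, not the unsteady Euler discretization used in the paper.

    ```python
    import numpy as np

    # Forward model: u_{n+1} = u_n + dt*a*u_n; cost J = u_N**2; sensitivity dJ/da.
    dt, N, a = 0.01, 100, -1.3

    def forward(a):
        u = np.empty(N + 1)
        u[0] = 1.0
        for n in range(N):
            u[n + 1] = u[n] + dt * a * u[n]
        return u

    u = forward(a)
    J = u[-1] ** 2

    # Adjoint sweep: lam_N = dJ/du_N, lam_n = lam_{n+1} * du_{n+1}/du_n,
    # with dJ/da accumulating lam_{n+1} * (partial u_{n+1} / partial a).
    lam = 2.0 * u[-1]
    dJda = 0.0
    for n in reversed(range(N)):
        dJda += lam * dt * u[n]
        lam *= 1.0 + dt * a
    print("adjoint sensitivity    :", dJda)

    eps = 1e-6
    print("finite-difference check:", (forward(a + eps)[-1] ** 2 - J) / eps)
    ```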

  2. An image morphing technique based on optimal mass preserving mapping.

    PubMed

    Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen

    2007-06-01

    Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L(2) mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods.

  3. An Image Morphing Technique Based on Optimal Mass Preserving Mapping

    PubMed Central

    Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen

    2013-01-01

    Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L2 mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods. PMID:17547128

  4. Analysis and optimization of hybrid excitation permanent magnet synchronous generator for stand-alone power system

    NASA Astrophysics Data System (ADS)

    Wang, Huijun; Qu, Zheng; Tang, Shaofei; Pang, Mingqi; Zhang, Mingju

    2017-08-01

    In this paper, electromagnetic design and permanent magnet shape optimization for permanent magnet synchronous generator with hybrid excitation are investigated. Based on generator structure and principle, design outline is presented for obtaining high efficiency and low voltage fluctuation. In order to realize rapid design, equivalent magnetic circuits for permanent magnet and iron poles are developed. At the same time, finite element analysis is employed. Furthermore, by means of design of experiment (DOE) method, permanent magnet is optimized to reduce voltage waveform distortion. Finally, the validity of proposed design methods is validated by the analytical and experimental results.

  5. Multiscale Concrete Modeling of Aging Degradation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammi, Yousseff; Gullett, Philipp; Horstemeyer, Mark F.

    In this work a numerical finite element framework is implemented to enable the integration of coupled multiscale and multiphysics transport processes. A User Element subroutine (UEL) in Abaqus is used to simultaneously solve stress equilibrium, heat conduction, and multiple diffusion equations for 2D and 3D linear and quadratic elements. Transport processes in concrete structures and their degradation mechanisms are presented along with the discretization of the governing equations. The multiphysics modeling framework is theoretically extended to the linear elastic fracture mechanics (LEFM) by introducing the eXtended Finite Element Method (XFEM) and based on the XFEM user element implementation of Giner et al. [2009]. A damage model that takes into account the damage contribution from the different degradation mechanisms is theoretically developed. The total contribution of damage is forwarded to a Multi-Stage Fatigue (MSF) model to enable the assessment of the fatigue life and the deterioration of reinforced concrete structures in a nuclear power plant. Finally, two examples are presented to illustrate the developed multiphysics user element implementation and the XFEM implementation of Giner et al. [2009].

  6. Rapid Optimization of External Quantum Efficiency of Thin Film Solar Cells Using Surrogate Modeling of Absorptivity.

    PubMed

    Kaya, Mine; Hajimirza, Shima

    2018-05-25

    This paper uses surrogate modeling for very fast design of thin film solar cells with improved solar-to-electricity conversion efficiency. We demonstrate that the wavelength-specific optical absorptivity of a thin film multi-layered amorphous-silicon-based solar cell can be modeled accurately with Neural Networks and can be efficiently approximated as a function of cell geometry and wavelength. Consequently, the external quantum efficiency can be computed by averaging surrogate absorption and carrier recombination contributions over the entire irradiance spectrum in an efficient way. Using this framework, we optimize a multi-layer structure consisting of ITO front coating, metallic back-reflector and oxide layers for achieving maximum efficiency. Our required computation time for an entire model fitting and optimization is 5 to 20 times less than the best previous optimization results based on direct Finite Difference Time Domain (FDTD) simulations, therefore proving the value of surrogate modeling. The resulting optimization solution suggests at least 50% improvement in the external quantum efficiency compared to bare silicon, and 25% improvement compared to a random design.
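
    The surrogate-then-optimize loop can be sketched generically: sample an expensive response, fit a small neural network to the samples, and run the optimizer on the cheap surrogate. Everything below (the stand-in "absorptivity" function, the sklearn MLP, the single design variable) is an assumption for illustration, not the paper's FDTD data or cell geometry.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    def absorptivity(thickness, wavelength):
        # stand-in for an expensive electromagnetic simulation
        return np.exp(-((thickness - 0.3 * wavelength) ** 2) / 0.01)

    # Sample the expensive model and fit the surrogate
    t_s = rng.uniform(0.1, 0.9, 500)
    w_s = rng.uniform(0.4, 1.1, 500)
    X = np.column_stack([t_s, w_s])
    y = absorptivity(t_s, w_s)
    surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                             random_state=0).fit(X, y)

    # Optimize the design variable against the surrogate, averaged over wavelength
    wavelengths = np.linspace(0.4, 1.1, 50)
    def neg_avg_absorption(design):
        Xq = np.column_stack([np.full_like(wavelengths, design[0]), wavelengths])
        return -surrogate.predict(Xq).mean()

    res = minimize(neg_avg_absorption, x0=[0.5], bounds=[(0.1, 0.9)])
    print("surrogate-optimal thickness:", res.x[0])
    ```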

  7. Online adaptive optimal control for continuous-time nonlinear systems with completely unknown dynamics

    NASA Astrophysics Data System (ADS)

    Lv, Yongfeng; Na, Jing; Yang, Qinmin; Wu, Xing; Guo, Yu

    2016-01-01

    An online adaptive optimal control is proposed for continuous-time nonlinear systems with completely unknown dynamics, which is achieved by developing a novel identifier-critic-based approximate dynamic programming algorithm with a dual neural network (NN) approximation structure. First, an adaptive NN identifier is designed to obviate the requirement of complete knowledge of system dynamics, and a critic NN is employed to approximate the optimal value function. Then, the optimal control law is computed based on the information from the identifier NN and the critic NN, so that the actor NN is not needed. In particular, a novel adaptive law design method with the parameter estimation error is proposed to online update the weights of both identifier NN and critic NN simultaneously, which converge to small neighbourhoods around their ideal values. The closed-loop system stability and the convergence to small vicinity around the optimal solution are all proved by means of the Lyapunov theory. The proposed adaptation algorithm is also improved to achieve finite-time convergence of the NN weights. Finally, simulation results are provided to exemplify the efficacy of the proposed methods.

  8. Optimization methods and silicon solar cell numerical models

    NASA Technical Reports Server (NTRS)

    Girardini, K.; Jacobsen, S. E.

    1986-01-01

    An optimization algorithm for use with numerical silicon solar cell models was developed. By coupling an optimization algorithm with a solar cell model, it is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm was developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAP1D). SCAP1D uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the performance of a solar cell. A major obstacle is that the numerical methods used in SCAP1D require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem was alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution.

  9. Finite Element Modeling, Simulation, Tools, and Capabilities at Superform

    NASA Astrophysics Data System (ADS)

    Raman, Hari; Barnes, A. J.

    2010-06-01

    Over the past thirty years Superform has been a pioneer in the SPF arena, having developed a keen understanding of the process and a range of unique forming techniques to meet varying market needs. Superform’s high-profile list of customers includes Boeing, Airbus, Aston Martin, Ford, and Rolls Royce. One of the more recent additions to Superform’s technical know-how is finite element modeling and simulation. Finite element modeling is a powerful numerical technique which when applied to SPF provides a host of benefits including accurate prediction of strain levels in a part, presence of wrinkles and predicting pressure cycles optimized for time and part thickness. This paper outlines a brief history of finite element modeling applied to SPF and then reviews some of the modeling tools and techniques that Superform have applied and continue to do so to successfully superplastically form complex-shaped parts. The advantages of employing modeling at the design stage are discussed and illustrated with real-world examples.

  10. A New Linearized Crank-Nicolson Mixed Element Scheme for the Extended Fisher-Kolmogorov Equation

    PubMed Central

    Wang, Jinfeng; Li, Hong; He, Siriguleng; Gao, Wei

    2013-01-01

    We present a new mixed finite element method for solving the extended Fisher-Kolmogorov (EFK) equation. We first decompose the EFK equation as the two second-order equations, then deal with a second-order equation employing finite element method, and handle the other second-order equation using a new mixed finite element method. In the new mixed finite element method, the gradient ∇u belongs to the weaker (L²(Ω))² space taking the place of the classical H(div; Ω) space. We prove some a priori bounds for the solution for semidiscrete scheme and derive a fully discrete mixed scheme based on a linearized Crank-Nicolson method. At the same time, we get the optimal a priori error estimates in L² and H¹-norm for both the scalar unknown u and the diffusion term w = −Δu and a priori error estimates in (L²)²-norm for its gradient χ = ∇u for both semi-discrete and fully discrete schemes. PMID:23864831

  11. Lattice study of finite volume effect in HVP for muon g-2

    NASA Astrophysics Data System (ADS)

    Izubuchi, Taku; Kuramashi, Yoshinobu; Lehner, Christoph; Shintani, Eigo

    2018-03-01

    We study the finite volume effect of the hadronic vacuum polarization contribution to muon g-2, aμhvp, in lattice QCD by comparing two different volumes, L⁴ = (5.4)⁴ and (8.1)⁴ fm⁴, at the physical pion mass. We perform the lattice computation of a highly precise vector-vector current correlator with the optimized AMA technique on Nf = 2 + 1 PACS gauge configurations with a Wilson-clover fermion and stout-smeared gluon action at one lattice cut-off, a⁻¹ = 2.33 GeV. We compare two ways of evaluating aμhvp, momentum integration and time-slice summation, on the lattice and numerically show that the size of the finite volume effect differs between the two methods. We also discuss the effect of backward-state propagation on the result for aμhvp with different boundary conditions. Our model-independent study suggests that lattice computation at the physical pion mass is important for a correct estimate of the finite volume effect and other lattice systematics in aμhvp.

  12. A new linearized Crank-Nicolson mixed element scheme for the extended Fisher-Kolmogorov equation.

    PubMed

    Wang, Jinfeng; Li, Hong; He, Siriguleng; Gao, Wei; Liu, Yang

    2013-01-01

    We present a new mixed finite element method for solving the extended Fisher-Kolmogorov (EFK) equation. We first decompose the EFK equation into two second-order equations, then deal with one second-order equation employing the finite element method, and handle the other second-order equation using a new mixed finite element method. In the new mixed finite element method, the gradient ∇u belongs to the weaker (L²(Ω))² space taking the place of the classical H(div; Ω) space. We prove some a priori bounds for the solution of the semidiscrete scheme and derive a fully discrete mixed scheme based on a linearized Crank-Nicolson method. At the same time, we get the optimal a priori error estimates in the L²- and H¹-norms for both the scalar unknown u and the diffusion term w = -Δu, and a priori error estimates in the (L²)²-norm for its gradient χ = ∇u, for both semidiscrete and fully discrete schemes.

  13. Finite element design procedure for correcting the coining die profiles

    NASA Astrophysics Data System (ADS)

    Alexandrino, Paulo; Leitão, Paulo J.; Alves, Luis M.; Martins, Paulo A. F.

    2018-05-01

    This paper presents a new finite element based design procedure for correcting the coining die profiles in order to optimize the distribution of pressure and the alignment of the resultant vertical force at the end of the die stroke. The procedure avoids time-consuming and costly try-outs, does not interfere with the creative process of the sculptors and extends the service life of the coining dies by significantly decreasing the applied pressure and bending moments. The numerical simulations were carried out in a computer program based on the finite element flow formulation that is currently being developed by the authors in collaboration with the Portuguese Mint. A new experimental procedure based on the stack compression test is also proposed for determining the stress-strain curve of the materials directly from the coin blanks.

  14. Coordinated Search for a Random Walk Target Motion

    NASA Astrophysics Data System (ADS)

    El-Hadidy, Mohamed Abd Allah; Abou-Gabal, Hamdy M.

    This paper presents the cooperation between two searchers at the origin to find a Random Walk moving target on the real line. No information is available about the target’s position at any time. Rather than finding the conditions that make the expected value of the first meeting time between one of the searchers and the target finite, we show the existence of the optimal search strategy which minimizes this first meeting time. The effectiveness of this model is illustrated using a numerical example.
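
    A minimal Monte Carlo illustration of the setting (not the paper's optimal strategy) is sketched below; the searcher rule of sweeping outward at unit speed and the target's starting point are assumptions made purely for illustration.

```python
# Illustrative Monte Carlo estimate of the first meeting time for two
# searchers that start at the origin and sweep the real line in opposite
# directions, while the target performs a simple +/-1 random walk.
# The searcher strategy here (constant unit speed outward) is an
# assumption for illustration, not the optimal strategy of the paper.
import random

def first_meeting_time(max_steps=10_000, start=5):
    target = start
    left, right = 0, 0             # searcher positions
    for t in range(1, max_steps + 1):
        left -= 1                  # one searcher sweeps left ...
        right += 1                 # ... the other sweeps right
        target += random.choice((-1, 1))
        if target <= left or target >= right:
            return t
    return None                    # not found within the horizon

samples = [first_meeting_time() for _ in range(2000)]
found = [t for t in samples if t is not None]
print("mean first meeting time:", sum(found) / len(found))
```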

  15. Power of a Finite Speed Carnot Engine

    ERIC Educational Resources Information Center

    Agrawal, D. C.; Menon, V. J.

    2009-01-01

    A model of an endoreversible Carnot engine is considered where the piston moves with a constant speed u. Expressions for the cycle time τ for the four branches, as well as the output power, P_W, are derived and the optimized root for maximum power is obtained in closed form. Our results are discussed in terms of the isothermal…
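
    For context, the classical benchmark that finite-time and finite-speed Carnot models are usually compared against is the Curzon-Ahlborn efficiency at maximum power; the expression below is that standard result, not a formula taken from this particular paper.

```latex
% Efficiency at maximum power of the classical endoreversible Carnot engine
% (Curzon--Ahlborn), often used as the benchmark for finite-time models:
\begin{equation}
  \eta_{\mathrm{CA}} = 1 - \sqrt{\frac{T_c}{T_h}}
  \qquad\text{compared with}\qquad
  \eta_{\mathrm{Carnot}} = 1 - \frac{T_c}{T_h}.
\end{equation}
```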

  16. Unified control/structure design and modeling research

    NASA Technical Reports Server (NTRS)

    Mingori, D. L.; Gibson, J. S.; Blelloch, P. A.; Adamian, A.

    1986-01-01

    To demonstrate the applicability of the control theory for distributed systems to large flexible space structures, research was focused on a model of a space antenna which consists of a rigid hub, flexible ribs, and a mesh reflecting surface. The space antenna model used is discussed along with the finite element approximation of the distributed model. The basic control problem is to design an optimal or near-optimal compensator to suppress the linear vibrations and rigid-body displacements of the structure. The application of infinite dimensional Linear Quadratic Gaussian (LQG) control theory to flexible structures is discussed. Two basic approaches for robustness enhancement were investigated: loop transfer recovery and sensitivity optimization. A third approach synthesized from elements of these two basic approaches is currently under development. The control-driven finite element approximation of flexible structures is discussed. Three sets of finite element basis vectors for computing functional control gains are compared. The possibility of constructing a finite element scheme to approximate the infinite dimensional Hamiltonian system directly, instead of indirectly, is discussed.

  17. Finite-dimensional compensators for infinite-dimensional systems via Galerkin-type approximation

    NASA Technical Reports Server (NTRS)

    Ito, Kazufumi

    1990-01-01

    In this paper, the existence and construction of stabilizing compensators for linear time-invariant systems defined on Hilbert spaces are discussed. An existence result is established using Galerkin-type approximations in which independent basis elements are used instead of the complete set of eigenvectors. A design procedure based on approximate solutions of the optimal regulator and optimal observer via Galerkin-type approximation is given, and the Schumacher approach is used to reduce the dimension of the compensators. A detailed discussion for parabolic and hereditary differential systems is included.

  18. Safe-trajectory optimization and tracking control in ultra-close proximity to a failed satellite

    NASA Astrophysics Data System (ADS)

    Zhang, Jingrui; Chu, Xiaoyu; Zhang, Yao; Hu, Quan; Zhai, Guang; Li, Yanyan

    2018-03-01

    This paper presents a trajectory-optimization method for a chaser spacecraft operating in ultra-close proximity to a failed satellite. Based on the combination of active and passive trajectory protection, the constraints in the optimization framework are formulated for collision avoidance and successful docking in the presence of any thruster failure. The constraints are then handled by an adaptive Gauss pseudospectral method, in which the dynamic residuals are used as the metric to determine the distribution of collocation points. A finite-time feedback control is further employed in tracking the optimized trajectory. In particular, the stability and convergence of the controller are proved. Numerical results are given to demonstrate the effectiveness of the proposed methods.

  19. Second order tensor finite element

    NASA Technical Reports Server (NTRS)

    Oden, J. Tinsley; Fly, J.; Berry, C.; Tworzydlo, W.; Vadaketh, S.; Bass, J.

    1990-01-01

    The results of a research and software development effort are presented for the finite element modeling of the static and dynamic behavior of anisotropic materials, with emphasis on single crystal alloys. Various versions of two dimensional and three dimensional hybrid finite elements were implemented and compared with displacement-based elements. Both static and dynamic cases are considered. The hybrid elements developed in the project were incorporated into the SPAR finite element code. In an extension of the first phase of the project, optimization of experimental tests for anisotropic materials was addressed. In particular, the problems of calculating material properties from tensile tests and of calculating stresses from strain measurements were considered. For both cases, numerical procedures and software for the optimization of strain gauge and material axes orientation were developed.

  20. A dynamic multi-level optimal design method with embedded finite-element modeling for power transformers

    NASA Astrophysics Data System (ADS)

    Zhang, Yunpeng; Ho, Siu-lau; Fu, Weinong

    2018-05-01

    This paper proposes a dynamic multi-level optimal design method for power transformer design optimization (TDO) problems. A response surface generated by second-order polynomial regression analysis is updated dynamically by adding more design points, which are selected by the Shifted Hammersley Method (SHM) and evaluated by the finite-element method (FEM). The updating stops when the accuracy requirement is satisfied, and optimized solutions of the preliminary design are derived simultaneously. The optimal design level is modulated by changing the level of error tolerance. Based on the response surface of the preliminary design, a refined optimal design step is added using a multi-objective genetic algorithm (MOGA). The effectiveness of the proposed optimal design method is validated through a classic three-phase power TDO problem.
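
    The response-surface step can be illustrated with a minimal least-squares fit of a second-order polynomial; the design points and the placeholder objective below are assumptions standing in for the FEM evaluations and the SHM point selection.

```python
# Minimal sketch of a second-order polynomial response surface fitted by
# least squares. `fem_objective` is a hypothetical stand-in for a
# finite-element evaluation of a transformer design; new design points
# would be appended and the fit repeated until the error tolerance is met.
import numpy as np

def quadratic_features(X):
    """Build [1, x_i, x_i*x_j, x_i^2] feature columns for each design point."""
    n, d = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

def fem_objective(X):
    # Placeholder for an FEM-computed objective (e.g. total transformer loss).
    return 1.0 + (X[:, 0] - 0.3) ** 2 + 0.5 * (X[:, 1] + 0.2) ** 2 + 0.1 * X[:, 0] * X[:, 1]

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(30, 2))      # sampled design points
y = fem_objective(X)                          # one "FEM solve" per point

A = quadratic_features(X)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # response-surface coefficients

X_test = rng.uniform(-1.0, 1.0, size=(5, 2))
print("surrogate:", quadratic_features(X_test) @ coef)
print("objective:", fem_objective(X_test))
```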

  1. Two-warehouse system for non-instantaneous deterioration products with promotional effort and inflation over a finite time horizon

    NASA Astrophysics Data System (ADS)

    Palanivel, M.; Priyan, S.; Mala, P.

    2017-11-01

    In the current global market, organizations use many promotional tools to increase their sales. One such tool is sales teams' initiatives or promotional policies, i.e., free gifts, discounts, packaging, etc. This phenomenon motivates the retailer or buyer to order a large inventory lot so as to take full benefit of the promotional policies. In view of this, the present paper considers a two-warehouse (owned and rented) inventory problem for a non-instantaneous deteriorating item with inflation and the time value of money over a finite planning horizon. Here, demand depends on the sales team's initiatives, and shortages are partially backlogged at a rate dependent on the duration of waiting time up to the arrival of the next lot. We design an algorithm to obtain the optimal replenishment strategies. Numerical analysis is also given to show the applicability of the proposed model in real-world two-warehouse inventory problems.

  2. CUDA Fortran acceleration for the finite-difference time-domain method

    NASA Astrophysics Data System (ADS)

    Hadi, Mohammed F.; Esmaeili, Seyed A.

    2013-05-01

    A detailed description of programming the three-dimensional finite-difference time-domain (FDTD) method to run on graphical processing units (GPUs) using CUDA Fortran is presented. Two FDTD-to-CUDA thread-block mapping designs are investigated and their performances compared. A comparative assessment of the trade-offs between the GPU's shared memory and L1 cache is also discussed. This presentation is for the benefit of FDTD programmers who work exclusively with Fortran and are reluctant to port their codes to C in order to utilize GPU computing. The derived CUDA Fortran code is compared with an optimized CPU version that runs on a workstation-class CPU to present a realistic GPU-to-CPU run-time comparison and thus help in making better-informed investment decisions on FDTD code redesigns and equipment upgrades. All analyses are mirrored with CUDA C simulations to put in perspective the present state of CUDA Fortran development.
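
    For readers unfamiliar with the underlying stencil, a minimal serial 1D FDTD update loop is sketched below in NumPy; it only illustrates the kind of nearest-neighbour update that the CUDA Fortran kernels parallelize, and the grid size, time-step count and source are illustrative choices.

```python
# Minimal serial 1D FDTD (Yee) update loop in NumPy, showing the stencil
# that GPU kernels parallelize across the grid. Grid size, time steps and
# the soft source are illustrative choices, not values from the paper.
import numpy as np

nx, nt = 400, 800
ez = np.zeros(nx)          # electric field
hy = np.zeros(nx - 1)      # magnetic field (staggered half-cell)
c = 0.5                    # Courant number (<= 1 for 1D stability)

for n in range(nt):
    hy += c * (ez[1:] - ez[:-1])           # update H from the curl of E
    ez[1:-1] += c * (hy[1:] - hy[:-1])     # update E from the curl of H
    ez[nx // 4] += np.exp(-((n - 60) / 15.0) ** 2)   # soft Gaussian source

print("peak |Ez| after", nt, "steps:", np.abs(ez).max())
```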

  3. Optimization of Support Vector Machine (SVM) for Object Classification

    NASA Technical Reports Server (NTRS)

    Scholten, Matthew; Dhingra, Neil; Lu, Thomas T.; Chao, Tien-Hsin

    2012-01-01

    The Support Vector Machine (SVM) is a powerful algorithm, useful in classifying data into species. The SVMs implemented in this research were used as classifiers for the final stage in a Multistage Automatic Target Recognition (ATR) system. A single-kernel SVM known as SVMlight and a modified version known as an SVM with K-Means Clustering were used. These SVM algorithms were tested as classifiers under varying conditions: image noise levels varied, and the orientation of the targets changed. The classifiers were then optimized to demonstrate their maximum potential. Results demonstrate the reliability of SVM as a method for classification. From trial to trial, SVM produces consistent results.
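
    A hedged sketch of the classification stage is shown below using scikit-learn's SVC in place of SVMlight, with synthetic two-class data standing in for the ATR feature vectors; none of the parameters are taken from the study.

```python
# Hedged sketch of the final classification stage using scikit-learn's SVC
# in place of SVMlight; the synthetic two-class data stands in for the ATR
# feature vectors extracted in the earlier stages of the pipeline.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
targets = rng.normal(loc=+1.0, scale=0.8, size=(200, 8))   # "target" features
clutter = rng.normal(loc=-1.0, scale=0.8, size=(200, 8))   # "clutter" features
X = np.vstack([targets, clutter])
y = np.array([1] * 200 + [0] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))
```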

  4. Multi-stage learning aids applied to hands-on software training.

    PubMed

    Rother, Kristian; Rother, Magdalena; Pleus, Alexandra; Upmeier zu Belzen, Annette

    2010-11-01

    Delivering hands-on tutorials on bioinformatics software and web applications is a challenging didactic scenario. The main reason is that trainees have heterogeneous backgrounds, different previous knowledge and vary in learning speed. In this article, we demonstrate how multi-stage learning aids can be used to allow all trainees to progress at a similar speed. In this technique, the trainees can utilize cards with hints and answers to guide themselves independently through a complex task. We have successfully conducted a tutorial for the molecular viewer PyMOL using two sets of learning aid cards. The trainees responded positively, were able to complete the task, and the trainer had spare time to respond to individual questions. This encourages us to conclude that multi-stage learning aids overcome many disadvantages of established forms of hands-on software training.

  5. A Linear Electromagnetic Piston Pump

    NASA Astrophysics Data System (ADS)

    Hogan, Paul H.

    Advancements in mobile hydraulics for human-scale applications have increased demand for a compact hydraulic power supply. Conventional designs couple a rotating electric motor to a hydraulic pump, which increases the package volume and requires several energy conversions. This thesis investigates the use of a free piston as the moving element in a linear motor to eliminate multiple energy conversions and decrease the overall package volume. A coupled model used a quasi-static magnetic equivalent circuit to calculate the motor inductance and the electromagnetic force acting on the piston. The force was an input to a time domain model to evaluate the mechanical and pressure dynamics. The magnetic circuit model was validated with finite element analysis and an experimental prototype linear motor. The coupled model was optimized using a multi-objective genetic algorithm to explore the parameter space and maximize power density and efficiency. An experimental prototype linear pump coupled pistons to an off-the-shelf linear motor to validate the mechanical and pressure dynamics models. The magnetic circuit force calculation agreed within 3% of finite element analysis, and within 8% of experimental data from the unoptimized prototype linear motor. The optimized motor geometry also had good agreement with FEA; at zero piston displacement, the magnetic circuit model calculates the optimized motor force within 10% of FEA in less than 1/1000 of the computational time. This makes it well suited to genetic optimization algorithms. The mechanical model agrees very well with the experimental piston pump position data when tuned for additional unmodeled mechanical friction. Optimized results suggest that a 400% improvement over state-of-the-art power density is attainable with as high as 85% net efficiency. This demonstrates that a linear electromagnetic piston pump has the potential to serve as a more compact and efficient supply of fluid power for the human scale.

  6. Finite Element Analysis and Optimization of Flexure Bearing for Linear Motor Compressor

    NASA Astrophysics Data System (ADS)

    Khot, Maruti; Gawali, Bajirao

    Nowadays linear motor compressors are commonly used in miniature cryocoolers instead of rotary compressors, because rotary compressors apply large radial forces to the piston, which provide no useful work, cause a large amount of wear and usually require lubrication. Recent trends favour flexure-supported configurations for long life. The present work aims at the design and geometrical optimization of flexure bearings using finite element analysis and the development of design charts for selection purposes. The work also covers the manufacturing of flexures using different materials and the experimental validation of the finite element analysis results.

  7. Multirate sampled-data yaw-damper and modal suppression system design

    NASA Technical Reports Server (NTRS)

    Berg, Martin C.; Mason, Gregory S.

    1990-01-01

    A multirate control law synthesis algorithm, based on an infinite-time quadratic cost function, was developed along with a method for analyzing the robustness of multirate systems. A generalized multirate sampled-data control law structure (GMCLS) was introduced. A new infinite-time-based parameter optimization multirate sampled-data control law synthesis method and solution algorithm were developed. A singular-value-based method for determining gain and phase margins for multirate systems was also developed. The finite-time-based parameter optimization multirate sampled-data control law synthesis algorithm originally intended to be applied to the aircraft problem was instead demonstrated by application to a simpler problem involving the control of the tip position of a two-link robot arm. The GMCLS, the infinite-time-based parameter optimization multirate control law synthesis method and solution algorithm, and the singular-value-based method for determining gain and phase margins were all demonstrated by application to the aircraft control problem originally proposed for this project.

  8. The optimal manufacturing batch size with rework under time-varying demand process for a finite time horizon

    NASA Astrophysics Data System (ADS)

    Musa, Sarah; Supadi, Siti Suzlin; Omar, Mohd

    2014-07-01

    Rework is one of the solutions to some of the main issues in reverse logistics and green supply chains, as it reduces production cost and environmental problems. Many researchers have focused on developing rework models, but to the best of the authors' knowledge, none of them has developed a model for a time-varying demand rate. In this paper, we extend previous works and develop a multiple-batch production system with rework for a time-varying demand rate. In this model, the rework is done within the same production cycle.

  9. Guidance and flight control law development for hypersonic vehicles

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Markopoulos, N.

    1993-01-01

    During the third reporting period our efforts were focused on a reformulation of the optimal control problem involving active state-variable inequality constraints. In the reformulated problem the optimization is carried out not with respect to all controllers, but only with respect to asymptotic controllers leading to the state constraint boundary. Intimately connected with the traditional formulation is the fact that when the reduced solution for such problems lies on a state constraint boundary, the corresponding boundary layer transitions are of finite time in the stretched time scale. Thus, it has been impossible so far to apply the classical asymptotic boundary layer theory to such problems. Moreover, the traditional formulation leads to optimal controllers that are one-sided, that is, they break down when a disturbance throws the system on the prohibited side of the state constraint boundary.

  10. Probabilistic Finite Element Analysis & Design Optimization for Structural Designs

    NASA Astrophysics Data System (ADS)

    Deivanayagam, Arumugam

    This study focuses on implementing the probabilistic nature of material properties (Kevlar® 49) in the existing deterministic finite element analysis (FEA) of fabric-based engine containment systems through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering designs through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis focusing on probabilistic distribution models which characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure pattern and exit velocities of the models. The solutions are then compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on the results than a single deterministic simulation. The next part of the research is to implement the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions. However, in a deterministic optimization problem, even though the structures are cost-effective, they become highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. Reliable and optimal solutions can be obtained by performing reliability optimization along with deterministic optimization, which is RBDO. In the RBDO problem formulation, in addition to structural performance constraints, reliability constraints are also considered. This part of the research starts with an introduction to reliability analysis, such as first-order and second-order reliability analysis, followed by simulation techniques that are performed to obtain the probability of failure and reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation with sensitivity analysis, which is performed to remove the highly reliable constraints in the RBDO, thereby reducing the computational time and function evaluations. Finally, the implementation of the reliability analysis concepts and RBDO in 2D finite element truss problems and a planar beam problem is presented and discussed.
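
    The MCS step can be illustrated with a toy limit-state function; the distributions and parameters below are assumptions for illustration, not the Kevlar 49 data or the LS-DYNA model discussed in the abstract.

```python
# Minimal Monte Carlo estimate of a probability of failure for a toy
# limit state g(R, S) = R - S (failure when g < 0). The lognormal/normal
# distributions and their parameters are illustrative assumptions only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 1_000_000
R = rng.lognormal(mean=np.log(50.0), sigma=0.10, size=n)   # resistance
S = rng.normal(loc=35.0, scale=5.0, size=n)                # load effect

g = R - S                      # limit-state function
pf = np.mean(g < 0.0)          # Monte Carlo probability of failure
print("probability of failure:", pf)
print("reliability index beta:", -norm.ppf(pf))
```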

  11. Numerical simulation of conservation laws

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; To, Wai-Ming

    1992-01-01

    A new numerical framework for solving conservation laws is being developed. This new approach differs substantially from the well established methods, i.e., finite difference, finite volume, finite element and spectral methods, in both concept and methodology. The key features of the current scheme include: (1) direct discretization of the integral forms of conservation laws, (2) treating space and time on the same footing, (3) flux conservation in space and time, and (4) unified treatment of the convection and diffusion fluxes. The model equation considered in the initial study is the standard one dimensional unsteady constant-coefficient convection-diffusion equation. In a stability study, it is shown that the principal and spurious amplification factors of the current scheme, respectively, are structurally similar to those of the leapfrog/DuFort-Frankel scheme. As a result, the current scheme has no numerical diffusion in the special case of pure convection and is unconditionally stable in the special case of pure diffusion. Assuming smooth initial data, it will be shown theoretically and numerically that, by using an easily determined optimal time step, the accuracy of the current scheme may reach a level which is several orders of magnitude higher than that of the MacCormack scheme, with virtually identical operation count.

  12. Multiple-copy state discrimination: Thinking globally, acting locally

    NASA Astrophysics Data System (ADS)

    Higgins, B. L.; Doherty, A. C.; Bartlett, S. D.; Pryde, G. J.; Wiseman, H. M.

    2011-05-01

    We theoretically investigate schemes to discriminate between two nonorthogonal quantum states given multiple copies. We consider a number of state discrimination schemes as applied to nonorthogonal, mixed states of a qubit. In particular, we examine the difference that local and global optimization of local measurements makes to the probability of obtaining an erroneous result, in the regime of finite numbers of copies N, and in the asymptotic limit as N→∞. Five schemes are considered: optimal collective measurements over all copies, locally optimal local measurements in a fixed single-qubit measurement basis, globally optimal fixed local measurements, locally optimal adaptive local measurements, and globally optimal adaptive local measurements. Here an adaptive measurement is one in which the measurement basis can depend on prior measurement results. For each of these measurement schemes we determine the probability of error (for finite N) and the scaling of this error in the asymptotic limit. In the asymptotic limit, it is known analytically (and we verify numerically) that adaptive schemes have no advantage over the optimal fixed local scheme. Here we show moreover that, in this limit, the most naive scheme (locally optimal fixed local measurements) is as good as any noncollective scheme except for states with less than 2% mixture. For finite N, however, the most sophisticated local scheme (globally optimal adaptive local measurements) is better than any other noncollective scheme for any degree of mixture.
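
    For context, the benchmark for the first scheme (optimal collective measurements) is the Helstrom bound on the N-copy states; the expression below is that standard result stated in generic notation.

```latex
% Helstrom bound: minimum error probability for discriminating the N-copy
% states \rho_0^{\otimes N} and \rho_1^{\otimes N} with prior probabilities
% p_0 and p_1, attained by the optimal collective measurement:
\begin{equation}
  P_{\mathrm{err}}^{\min}(N)
  = \frac{1}{2}\left(1 - \bigl\| p_0\,\rho_0^{\otimes N} - p_1\,\rho_1^{\otimes N} \bigr\|_1\right).
\end{equation}
```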

  13. Multiple-copy state discrimination: Thinking globally, acting locally

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Higgins, B. L.; Pryde, G. J.; Wiseman, H. M.

    2011-05-15

    We theoretically investigate schemes to discriminate between two nonorthogonal quantum states given multiple copies. We consider a number of state discrimination schemes as applied to nonorthogonal, mixed states of a qubit. In particular, we examine the difference that local and global optimization of local measurements makes to the probability of obtaining an erroneous result, in the regime of finite numbers of copies N, and in the asymptotic limit as N→∞. Five schemes are considered: optimal collective measurements over all copies, locally optimal local measurements in a fixed single-qubit measurement basis, globally optimal fixed local measurements, locally optimal adaptive local measurements, and globally optimal adaptive local measurements. Here an adaptive measurement is one in which the measurement basis can depend on prior measurement results. For each of these measurement schemes we determine the probability of error (for finite N) and the scaling of this error in the asymptotic limit. In the asymptotic limit, it is known analytically (and we verify numerically) that adaptive schemes have no advantage over the optimal fixed local scheme. Here we show moreover that, in this limit, the most naive scheme (locally optimal fixed local measurements) is as good as any noncollective scheme except for states with less than 2% mixture. For finite N, however, the most sophisticated local scheme (globally optimal adaptive local measurements) is better than any other noncollective scheme for any degree of mixture.

  14. A Multi-Stage Optimization Model for Air Force Reserve Officer Training Corps Officer Candidate Selection

    DTIC Science & Technology

    2012-03-01

    HSSP), the In-College Scholarship Program (ICSP), and the Enlisted Commissioning Program (ECP) [1]. The entire scholarship program is managed by...and for which they are interested in volunteering. AFROTC is currently interested in developing techniques to better allocate scholarships and...institutions are also concerned with ensuring that they enroll the most qualified students into their programs. Camarena-Anthony [8] examines scholarship

  15. Development of a Multistage Reliability-Based Design Optimization Method

    DTIC Science & Technology

    2014-01-01

    expressed using Eq. (7), where n_x is the number of design variables: $P[a_0 + \sum_{i=1}^{n_x} (a_i n_i + b_i) x_i \le 0] \ge a$ (7). Figures 3(a)–3(c) illustrate the...constraint equation can be expressed in the general form of Eq. (16), where again n_x is the number of design variables: $P[a_{0,j} + \sum_{i=1}^{n_x} a_{i,j} n_i x_i$

  16. Forest inventory using multistage sampling with probability proportional to size. [Brazil

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Lee, D. C. L.; Hernandezfilho, P.; Shimabukuro, Y. E.; Deassis, O. R.; Demedeiros, J. S.

    1984-01-01

    A multistage sampling technique, with probability proportional to size, for forest volume inventory using remote sensing data is developed and evaluated. The study area is located in Southeastern Brazil. The LANDSAT 4 digital data of the study area are used in the first stage for automatic classification of reforested areas. Four classes of pine and eucalypt with different tree volumes are classified utilizing a maximum likelihood classification algorithm. Color infrared aerial photographs are utilized in the second stage of sampling. In the third stage (ground level) the tree volume of each class is determined. The total tree volume of each class is expanded through a statistical procedure taking into account all three stages of sampling. This procedure results in an accurate tree volume estimate with a smaller number of aerial photographs and reduced time in field work.

  17. Thermomechanical analysis of fast-burst reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, J.D.

    1994-08-01

    Fast-burst reactors are designed to provide intense, short-duration pulses of neutrons. The fission reaction also produces extreme time-dependent heating of the nuclear fuel. An existing transient-dynamic finite element code was modified specifically to compute the time-dependent stresses and displacements due to thermal shock loads of reactors. Thermomechanical analysis was then applied to determine structural feasibility of various concepts for an EDNA-type reactor and to optimize the mechanical design of the new SPR III-M reactor.

  18. Optimal Alignment of Structures for Finite and Periodic Systems.

    PubMed

    Griffiths, Matthew; Niblett, Samuel P; Wales, David J

    2017-10-10

    Finding the optimal alignment between two structures is important for identifying the minimum root-mean-square distance (RMSD) between them and as a starting point for calculating pathways. Most current algorithms for aligning structures are stochastic, scale exponentially with the size of the structure, and can be unreliable. We present two complementary methods for aligning structures corresponding to isolated clusters of atoms and to condensed matter described by a periodic cubic supercell. The first method (Go-PERMDIST), a branch and bound algorithm, locates the global minimum RMSD deterministically in polynomial time; the run time increases for larger RMSDs. The second method (FASTOVERLAP) is a heuristic algorithm that aligns structures by finding the global maximum kernel correlation between them using fast Fourier transforms (FFTs) and fast SO(3) transforms (SOFTs). For periodic systems, FASTOVERLAP scales with the square of the number of identical atoms in the system, reliably finds the best alignment between structures that are not too distant, and shows significantly better performance than existing algorithms. The expected run time for Go-PERMDIST is longer than for FASTOVERLAP for periodic systems. For finite clusters, the FASTOVERLAP algorithm is competitive with existing algorithms. The expected run time for Go-PERMDIST to find the global RMSD between two structures deterministically is generally longer than for existing stochastic algorithms. However, with an earlier exit condition, Go-PERMDIST exhibits similar or better performance.
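
    The inner RMSD evaluation that such alignment searches build on is the standard Kabsch rotation for a fixed atom correspondence; the sketch below shows that step only and is not an implementation of Go-PERMDIST or FASTOVERLAP.

```python
# Standard Kabsch algorithm: the optimal rotation (and resulting RMSD) for
# two point sets with a *fixed* atom correspondence. This is the inner RMSD
# evaluation that alignment searches build on, not the papers' algorithms.
import numpy as np

def kabsch_rmsd(P, Q):
    """Minimum RMSD between (n, 3) coordinate arrays P and Q over rigid rotations."""
    P = P - P.mean(axis=0)                   # remove translations
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                              # covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid improper rotations
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # optimal rotation: Q ~ R P
    return np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1)))

rng = np.random.default_rng(3)
P = rng.normal(size=(20, 3))
angle = 0.7
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([1.0, -2.0, 0.5])    # rotated + translated copy of P
print("RMSD after optimal alignment:", kabsch_rmsd(P, Q))   # ~0
```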

  19. Turbomachinery CFD on parallel computers

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.; Milner, Edward J.; Quealy, Angela; Townsend, Scott E.

    1992-01-01

    The role of multistage turbomachinery simulation in the development of propulsion system models is discussed. Particularly, the need for simulations with higher fidelity and faster turnaround time is highlighted. It is shown how such fast simulations can be used in engineering-oriented environments. The use of parallel processing to achieve the required turnaround times is discussed. Current work by several researchers in this area is summarized. Parallel turbomachinery CFD research at the NASA Lewis Research Center is then highlighted. These efforts are focused on implementing the average-passage turbomachinery model on MIMD, distributed memory parallel computers. Performance results are given for inviscid, single blade row and viscous, multistage applications on several parallel computers, including networked workstations.

  20. Temperature Scaling Law for Quantum Annealing Optimizers.

    PubMed

    Albash, Tameem; Martin-Mayor, Victor; Hen, Itay

    2017-09-15

    Physical implementations of quantum annealing unavoidably operate at finite temperatures. We point to a fundamental limitation of fixed finite temperature quantum annealers that prevents them from functioning as competitive scalable optimizers and show that, to serve as optimizers, annealer temperatures must be appropriately scaled down with problem size. We derive a temperature scaling law dictating that the temperature must drop at the very least in a logarithmic manner, but possibly also as a power law with problem size. We corroborate our results by experiments and simulations and discuss the implications of these results for practical annealers.

  1. Powered Descent Guidance with General Thrust-Pointing Constraints

    NASA Technical Reports Server (NTRS)

    Carson, John M., III; Acikmese, Behcet; Blackmore, Lars

    2013-01-01

    The Powered Descent Guidance (PDG) algorithm and software for generating Mars pinpoint or precision landing guidance profiles has been enhanced to incorporate thrust-pointing constraints. Pointing constraints would typically be needed for onboard sensor and navigation systems that have specific field-of-view requirements to generate valid ground proximity and terrain-relative state measurements. The original PDG algorithm was designed to enforce both control and state constraints, including maximum and minimum thrust bounds, avoidance of the ground or descent within a glide slope cone, and maximum speed limits. The thrust-bound and thrust-pointing constraints within PDG are non-convex, which in general requires nonlinear optimization methods to generate solutions. The short duration of Mars powered descent requires guaranteed PDG convergence to a solution within a finite time; however, nonlinear optimization methods have no guarantees of convergence to the global optimum or convergence within finite computation time. A lossless convexification developed for the original PDG algorithm relaxed the non-convex thrust bound constraints. This relaxation was theoretically proven to provide valid and optimal solutions for the original, non-convex problem within a convex framework. As with the thrust bound constraint, a relaxation of the thrust-pointing constraint also provides a lossless convexification that ensures the enhanced relaxed PDG algorithm remains convex and retains validity for the original nonconvex problem. The enhanced PDG algorithm provides guidance profiles for pinpoint and precision landing that minimize fuel usage, minimize landing error to the target, and ensure satisfaction of all position and control constraints, including thrust bounds and now thrust-pointing constraints.
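
    Schematically, the relaxation follows the standard lossless-convexification pattern shown below (generic notation: T is the thrust vector, Γ a slack variable, n̂ the pointing direction, θ the maximum pointing angle); the exact constraint set used in the flight software may differ.

```latex
% Schematic form of the thrust constraints and their relaxation; the lower
% bound \rho_1 on the thrust magnitude is what makes the original set
% non-convex, and the relaxation can be shown to be tight at the optimum:
\begin{align}
  \text{original:}\quad & \rho_1 \le \|T(t)\| \le \rho_2, \qquad
    \hat{n}^{\mathsf T} T(t) \ge \|T(t)\|\cos\theta, \\
  \text{relaxed:}\quad & \|T(t)\| \le \Gamma(t), \quad
    \rho_1 \le \Gamma(t) \le \rho_2, \quad
    \hat{n}^{\mathsf T} T(t) \ge \Gamma(t)\cos\theta,
\end{align}
% with propellant use penalized through the integral of \Gamma(t) over the flight time.
```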

  2. An expert system for integrated structural analysis and design optimization for aerospace structures

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The results of a research study on the development of an expert system for integrated structural analysis and design optimization are presented. An Object Representation Language (ORL) was developed first in conjunction with a rule-based system. This ORL/AI shell was then used to develop expert systems to provide assistance with a variety of structural analysis and design optimization tasks, in conjunction with procedural modules for finite element structural analysis and design optimization. The main goal of the research study was to provide expertise, judgment, and reasoning capabilities in the aerospace structural design process. This will allow engineers performing structural analysis and design, even without extensive experience in the field, to develop error-free, efficient and reliable structural designs very rapidly and cost-effectively. This would not only improve the productivity of design engineers and analysts, but also significantly reduce time to completion of structural design. An extensive literature survey in the field of structural analysis, design optimization, artificial intelligence, and database management systems and their application to the structural design process was first performed. A feasibility study was then performed, and the architecture and the conceptual design for the integrated 'intelligent' structural analysis and design optimization software were then developed. An Object Representation Language (ORL), in conjunction with a rule-based system, was then developed using C++. Such an approach would improve the expressiveness for knowledge representation (especially for structural analysis and design applications), provide the ability to build very large and practical expert systems, and provide an efficient way for storing knowledge. Functional specifications for the expert systems were then developed. The ORL/AI shell was then used to develop a variety of modules of expert systems for a variety of modeling, finite element analysis, and design optimization tasks in the integrated aerospace structural design process. These expert systems were developed to work in conjunction with procedural finite element structural analysis and design optimization modules (developed in-house at SAT, Inc.). The complete software, AutoDesign, so developed, can be used for integrated 'intelligent' structural analysis and design optimization. The software was beta-tested at a variety of companies, used by a range of engineers with different levels of background and expertise. Based on the feedback obtained from such users, conclusions were developed and are provided.

  3. An expert system for integrated structural analysis and design optimization for aerospace structures

    NASA Astrophysics Data System (ADS)

    1992-04-01

    The results of a research study on the development of an expert system for integrated structural analysis and design optimization are presented. An Object Representation Language (ORL) was developed first in conjunction with a rule-based system. This ORL/AI shell was then used to develop expert systems to provide assistance with a variety of structural analysis and design optimization tasks, in conjunction with procedural modules for finite element structural analysis and design optimization. The main goal of the research study was to provide expertise, judgment, and reasoning capabilities in the aerospace structural design process. This will allow engineers performing structural analysis and design, even without extensive experience in the field, to develop error-free, efficient and reliable structural designs very rapidly and cost-effectively. This would not only improve the productivity of design engineers and analysts, but also significantly reduce time to completion of structural design. An extensive literature survey in the field of structural analysis, design optimization, artificial intelligence, and database management systems and their application to the structural design process was first performed. A feasibility study was then performed, and the architecture and the conceptual design for the integrated 'intelligent' structural analysis and design optimization software were then developed. An Object Representation Language (ORL), in conjunction with a rule-based system, was then developed using C++. Such an approach would improve the expressiveness for knowledge representation (especially for structural analysis and design applications), provide the ability to build very large and practical expert systems, and provide an efficient way for storing knowledge. Functional specifications for the expert systems were then developed. The ORL/AI shell was then used to develop a variety of modules of expert systems for a variety of modeling, finite element analysis, and design optimization tasks in the integrated aerospace structural design process. These expert systems were developed to work in conjunction with procedural finite element structural analysis and design optimization modules (developed in-house at SAT, Inc.). The complete software, AutoDesign, so developed, can be used for integrated 'intelligent' structural analysis and design optimization. The software was beta-tested at a variety of companies, used by a range of engineers with different levels of background and expertise. Based on the feedback obtained from such users, conclusions were developed and are provided.

  4. Stochastic optimal control of ultradiffusion processes with application to dynamic portfolio management

    NASA Astrophysics Data System (ADS)

    Marcozzi, Michael D.

    2008-12-01

    We consider theoretical and approximation aspects of the stochastic optimal control of ultradiffusion processes in the context of a prototype model for the selling price of a European call option. Within a continuous-time framework, the dynamic management of a portfolio of assets is effected through continuous or point control, activation costs, and phase delay. The performance index is derived from the unique weak variational solution to the ultraparabolic Hamilton-Jacobi equation; the value function is the optimal realization of the performance index relative to all feasible portfolios. An approximation procedure based upon a temporal box scheme/finite element method is analyzed; numerical examples are presented in order to demonstrate the viability of the approach.

  5. A predictive control framework for optimal energy extraction of wind farms

    NASA Astrophysics Data System (ADS)

    Vali, M.; van Wingerden, J. W.; Boersma, S.; Petrović, V.; Kühn, M.

    2016-09-01

    This paper proposes an adjoint-based model predictive control for optimal energy extraction of wind farms. It employs the axial induction factor of wind turbines to influence their aerodynamic interactions through the wake. The performance index is defined here as the total power production of the wind farm over a finite prediction horizon. A medium-fidelity wind farm model is utilized to predict the inflow propagation in advance. The adjoint method is employed to solve the formulated optimization problem in a cost-effective way and the first part of the optimal solution is implemented over the control horizon. This procedure is repeated at the next controller sample time, providing feedback into the optimization. The effectiveness and some key features of the proposed approach are studied for a two-turbine test case through simulations.

  6. Carnot cycle at finite power: attainability of maximal efficiency.

    PubMed

    Allahverdyan, Armen E; Hovhannisyan, Karen V; Melkikh, Alexey V; Gevorkian, Sasun G

    2013-08-02

    We want to understand whether and to what extent the maximal (Carnot) efficiency for heat engines can be reached at a finite power. To this end we generalize the Carnot cycle so that it is not restricted to slow processes. We show that for realistic (i.e., not purposefully designed) engine-bath interactions, the work-optimal engine performing the generalized cycle close to the maximal efficiency has a long cycle time and hence vanishing power. This aspect is shown to relate to the theory of computational complexity. A physical manifestation of the same effect is Levinthal's paradox in the protein folding problem. The resolution of this paradox for realistic proteins allows one to construct engines that can extract, at finite power, 40% of the maximally possible work while reaching 90% of the maximal efficiency. For purposefully designed engine-bath interactions, the Carnot efficiency is achievable at a large power.

  7. Linear finite-difference bond graph model of an ionic polymer actuator

    NASA Astrophysics Data System (ADS)

    Bentefrit, M.; Grondel, S.; Soyer, C.; Fannir, A.; Cattan, E.; Madden, J. D.; Nguyen, T. M. G.; Plesse, C.; Vidal, F.

    2017-09-01

    With the recent growing interest in soft actuation, many new types of ionic polymers working in air have been developed. Due to the interrelated mechanical, electrical, and chemical properties which greatly influence the characteristics of such actuators, their behavior is complex and difficult to understand, predict and optimize. In light of this challenge, an original linear multiphysics finite difference bond graph model was derived to characterize this ionic actuation. This finite difference scheme was divided into two coupled subparts, each related to a specific physical, electrochemical or mechanical domain, and then converted into a bond graph model, as this language is particularly suited for systems from multiple energy domains. Simulations were then conducted and a good agreement with the experimental results was obtained. Furthermore, an analysis of the power efficiency of such actuators as a function of space and time was proposed, allowing their performance to be evaluated.

  8. Optimal design of composite hip implants using NASA technology

    NASA Technical Reports Server (NTRS)

    Blake, T. A.; Saravanos, D. A.; Davy, D. T.; Waters, S. A.; Hopkins, D. A.

    1993-01-01

    Using an adaptation of NASA software, we have investigated the use of numerical optimization techniques for the shape and material optimization of fiber composite hip implants. The original NASA in-house codes were developed for the optimization of aerospace structures. The adapted code, called OPORIM, couples numerical optimization algorithms with finite element analysis and composite laminate theory to perform design optimization using both shape and material design variables. The external and internal geometry of the implant and the surrounding bone is described with quintic spline curves. This geometric representation is then used to create an equivalent 2-D finite element model of the structure. Using laminate theory and the 3-D geometric information, equivalent stiffnesses are generated for each element of the 2-D finite element model, so that the 3-D stiffness of the structure can be approximated. The geometric information to construct the model of the femur was obtained from a CT scan. A variety of test cases were examined, incorporating several implant constructions and design variable sets. Typically the code was able to produce optimized shape and/or material parameters which substantially reduced stress concentrations in the bone adjacent to the implant. The results indicate that this technology can provide meaningful insight into the design of fiber composite hip implants.

  9. Optimal control of singularly perturbed nonlinear systems with state-variable inequality constraints

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Corban, J. E.

    1990-01-01

    The established necessary conditions for optimality in nonlinear control problems that involve state-variable inequality constraints are applied to a class of singularly perturbed systems. The distinguishing feature of this class of two-time-scale systems is a transformation of the state-variable inequality constraint, present in the full order problem, to a constraint involving states and controls in the reduced problem. It is shown that, when a state constraint is active in the reduced problem, the boundary layer problem can be of finite time in the stretched time variable. Thus, the usual requirement for asymptotic stability of the boundary layer system is not applicable, and cannot be used to construct approximate boundary layer solutions. Several alternative solution methods are explored and illustrated with simple examples.

  10. Multi-objective aerodynamic shape optimization of small livestock trailers

    NASA Astrophysics Data System (ADS)

    Gilkeson, C. A.; Toropov, V. V.; Thompson, H. M.; Wilson, M. C. T.; Foxley, N. A.; Gaskell, P. H.

    2013-11-01

    This article presents a formal optimization study of the design of small livestock trailers, within which the majority of animals are transported to market in the UK. The benefits of employing a headboard fairing to reduce aerodynamic drag without compromising the ventilation of the animals' microclimate are investigated using a multi-stage process involving computational fluid dynamics (CFD), optimal Latin hypercube (OLH) design of experiments (DoE) and moving least squares (MLS) metamodels. Fairings are parameterized in terms of three design variables and CFD solutions are obtained at 50 permutations of design variables. Both global and local search methods are employed to locate the global minimum from metamodels of the objective functions and a Pareto front is generated. The importance of carefully selecting an objective function is demonstrated and optimal fairing designs, offering drag reductions in excess of 5% without compromising animal ventilation, are presented.
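
    The DoE-plus-metamodel stage can be sketched as below, using scipy's Latin hypercube sampler and an RBF surrogate in place of the moving least squares metamodel; the three design variables and the placeholder drag objective are illustrative assumptions.

```python
# Hedged sketch of the DoE-plus-metamodel step: Latin hypercube samples of
# three fairing design variables, an assumed (placeholder) drag objective,
# and an RBF surrogate standing in for the moving least squares metamodel.
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator

def drag_objective(X):
    # Placeholder for the CFD-computed drag coefficient of each design.
    return 0.45 - 0.05 * X[:, 0] + 0.08 * (X[:, 1] - 0.5) ** 2 + 0.02 * X[:, 2]

sampler = qmc.LatinHypercube(d=3, seed=0)
X = sampler.random(n=50)                     # 50 designs in the unit cube
y = drag_objective(X)                        # one "CFD solve" per design

surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")

# Cheap global search on the surrogate using dense space-filling candidates.
candidates = qmc.LatinHypercube(d=3, seed=1).random(n=20_000)
pred = surrogate(candidates)
best = candidates[np.argmin(pred)]
print("best candidate design:", best, "predicted drag:", pred.min())
```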

  11. Optimization for Guitar Fingering on Single Notes

    NASA Astrophysics Data System (ADS)

    Itoh, Masaru; Hayashida, Takumi

    This paper presents an optimization method for guitar fingering. Fingering determines a unique combination of string, fret and finger for each note. The method aims to generate the best fingering pattern for guitar robots rather than for beginners. Furthermore, it can be applied to any musical score consisting of single notes. A fingering action can be decomposed into three motions, that is, pressing a string, releasing a string and moving the fretting hand. The cost of moving the hand is estimated on the basis of the Manhattan distance, which is the sum of the distances along the fret and string directions. The objective is to minimize the total fingering cost, subject to fret, string and finger constraints. Since the sequence of notes in the score forms a time series, the optimization of guitar fingering can be cast as a multistage decision problem, and dynamic programming is exceedingly effective for solving such a problem (see the sketch below). A level concept is introduced into the rendering states so that, when multiple DP solutions have equal cost, the backward pass selects a unique one. For example, if two fingerings have the same cost at different states on a stage, then the low position takes precedence over the high position, and the index finger over the middle finger.
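
    A minimal dynamic-programming sketch of this multistage decision problem is given below; the candidate positions and the cost model (pure Manhattan distance, ties broken by a fixed preference order) are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal DP sketch of the fingering problem: each note has several
# candidate (string, fret, finger) positions, the transition cost is a
# Manhattan distance along the fret and string directions, and DP picks
# the cheapest sequence. Candidates and cost weights are illustrative.
def optimal_fingering(candidates_per_note):
    """candidates_per_note: list of lists of (string, fret, finger) tuples."""
    def move_cost(a, b):
        return abs(a[1] - b[1]) + abs(a[0] - b[0])   # fret + string distance

    # best[i][j] = minimum cost to reach candidate j of note i
    best = [[0.0] * len(candidates_per_note[0])]
    back = []
    for i in range(1, len(candidates_per_note)):
        row, ptr = [], []
        for cur in candidates_per_note[i]:
            costs = [best[i - 1][k] + move_cost(prev, cur)
                     for k, prev in enumerate(candidates_per_note[i - 1])]
            k_best = min(range(len(costs)), key=costs.__getitem__)
            row.append(costs[k_best])
            ptr.append(k_best)
        best.append(row)
        back.append(ptr)

    # Backtrack the cheapest path; ties break toward the lowest index,
    # i.e. a fixed preference order in the spirit of the "level" concept.
    j_end = min(range(len(best[-1])), key=best[-1].__getitem__)
    total = best[-1][j_end]
    path = [j_end]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    path.reverse()
    fingering = [candidates_per_note[i][j] for i, j in enumerate(path)]
    return fingering, total

# Three notes, each with two candidate (string, fret, finger) positions.
notes = [[(1, 5, 1), (2, 0, 0)],
         [(1, 7, 3), (2, 2, 1)],
         [(1, 8, 4), (2, 3, 2)]]
fingering, cost = optimal_fingering(notes)
print(fingering, "total cost:", cost)
```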

  12. Dynamic analysis of concentrated solar supercritical CO2-based power generation closed-loop cycle

    DOE PAGES

    Osorio, Julian D.; Hovsapian, Rob; Ordonez, Juan C.

    2016-01-01

    Here, the dynamic behavior of a concentrated solar power (CSP) supercritical CO2 cycle is studied under different seasonal conditions. The system analyzed is composed of a central receiver, hot and cold thermal energy storage units, a heat exchanger, a recuperator, and multi-stage compression-expansion subsystems with intercoolers and reheaters between compressors and turbines respectively. Energy models for each component of the system are developed in order to optimize operating and design parameters such as mass flow rate, intermediate pressures and the effective area of the recuperator to lead to maximum efficiency. Our results show that the parametric optimization leads the system to a process efficiency of about 21% and a maximum power output close to 1.5 MW. The thermal energy storage allows the system to operate for several hours after sunset. This operating time is increased from approximately 220 to 480 minutes after optimization. The hot and cold thermal energy storage also lessens the temperature fluctuations by providing smooth changes of temperature at the turbine and compressor inlets. Our results indicate that concentrated solar systems using supercritical CO2 could be a viable alternative for satisfying energy needs in desert areas with scarce water and fossil fuel resources.

  13. Parallel-vector computation for structural analysis and nonlinear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.

    1990-01-01

    Practical engineering applications can often be formulated in the form of a constrained optimization problem. There are several solution algorithms for solving a constrained optimization problem. One approach is to convert a constrained problem into a series of unconstrained problems. Furthermore, unconstrained solution algorithms can be used as part of the constrained solution algorithms. Structural optimization is an iterative process where one starts with an initial design, and a finite element structural analysis is then performed to calculate the response of the system (such as displacements, stresses, eigenvalues, etc.). Based upon the sensitivity information on the objective and constraint functions, an optimizer, such as ADS or IDESIGN, can be used to find the new, improved design. For the structural analysis phase, the equation solver for the system of simultaneous, linear equations plays a key role since it is needed for static, eigenvalue, or dynamic analysis. For practical, large-scale structural analysis-synthesis applications, computational time can be excessively large. Thus, it is necessary to have a new structural analysis-synthesis code which employs new solution algorithms to exploit both the parallel and vector capabilities offered by modern, high-performance computers such as the Convex, Cray-2 and Cray-YMP computers. The objective of this research project is, therefore, to incorporate the latest developments in the parallel-vector equation solver PVSOLVE into the widely popular finite-element production code SAP-4. Furthermore, several nonlinear unconstrained optimization subroutines have also been developed and tested under a parallel computer environment. The unconstrained optimization subroutines are not only useful in their own right, but they can also be incorporated into a more popular constrained optimization code, such as ADS.

  14. Numerical Methodology for Coupled Time-Accurate Simulations of Primary and Secondary Flowpaths in Gas Turbines

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Athavale, M. M.; Hendricks, R. C.; Steinetz, B. M.

    2006-01-01

    Detailed information on the flow-fields in the secondary flowpaths and their interaction with the primary flows in gas turbine engines is necessary for successful designs with optimized secondary flow streams. The present work is focused on the development of a simulation methodology for coupled time-accurate solutions of the two flowpaths. The secondary flowstream is treated using SCISEAL, an unstructured adaptive Cartesian grid code developed for secondary flows and seals, while the mainpath flow is solved using TURBO, a density-based code with the capability of resolving rotor-stator interaction in multi-stage machines. An interface is being tested that links the two codes at the rim seal to allow data exchange between the two codes for parallel, coupled execution. A description of the coupling methodology and the current status of the interface development is presented. Representative steady-state solutions of the secondary flow in the UTRC HP Rig disc cavity are also presented.

  15. General theory of the multistage geminate reactions of the isolated pairs of reactants. II. Detailed balance and universal asymptotes of kinetics.

    PubMed

    Kipriyanov, Alexey A; Doktorov, Alexander B

    2014-10-14

    The analysis of general (matrix) kinetic equations for the mean survival probabilities of any of the species in a sample (or mean concentrations) has been made for a wide class of the multistage geminate reactions of the isolated pairs. These kinetic equations (obtained in the frame of the kinetic approach based on the concept of "effective" particles in Paper I) take into account various possible elementary reactions (stages of a multistage reaction) excluding monomolecular, but including physical and chemical processes of the change in internal quantum states carried out with the isolated pairs of reactants (or isolated reactants). The general basic principles of total and detailed balance have been established. The behavior of the reacting system has been considered on macroscopic time scales, and the universal long-term kinetics has been determined.

  16. Optimal stimulus scheduling for active estimation of evoked brain networks.

    PubMed

    Kafashan, MohammadMehdi; Ching, ShiNung

    2015-12-01

    We consider the problem of optimal probing to learn connections in an evoked dynamic network. Such a network, in which each edge measures an input-output relationship between sites in sensor/actuator-space, is relevant to emerging applications in neural mapping and neural connectivity estimation. We show that the problem of scheduling nodes to probe (i.e., stimulate) amounts to a problem of optimal sensor scheduling. By formulating the evoked network in state-space, we show that the solution to the greedy probing strategy has a convenient form and, under certain conditions, is optimal over a finite horizon. We adopt an expectation maximization technique to update the state-space parameters in an online fashion and demonstrate the efficacy of the overall approach in a series of detailed numerical examples. The proposed method provides a principled means to actively probe time-varying connections in neuronal networks. The overall method can be implemented in real time and is particularly well-suited to applications in stimulation-based cortical mapping in which the underlying network dynamics are changing over time.
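
    As a rough illustration of the sensor-scheduling view taken in this abstract, the sketch below greedily picks, at each time step, the single node whose measurement most reduces the trace of a Kalman posterior covariance for a linear state-space model. The matrices are random placeholders rather than an identified evoked-network model, and the optimality conditions and EM parameter updating discussed in the paper are not reproduced.

```python
# Hedged sketch of a greedy probe/sensor-scheduling loop on a placeholder model.
import numpy as np

rng = np.random.default_rng(0)
n = 6                                   # number of network nodes / states
A = 0.9 * np.eye(n) + 0.05 * rng.standard_normal((n, n))
Q = 0.01 * np.eye(n)                    # process noise covariance
R = 0.1                                 # scalar measurement noise variance
P = np.eye(n)                           # prior state covariance

def posterior_trace(P_pred, c):
    """Trace of the covariance after observing node c (row of identity as C)."""
    C = np.zeros((1, n)); C[0, c] = 1.0
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T / S
    return np.trace((np.eye(n) - K @ C) @ P_pred)

schedule = []
for t in range(10):
    P_pred = A @ P @ A.T + Q                       # time update
    best = min(range(n), key=lambda c: posterior_trace(P_pred, c))
    schedule.append(best)
    C = np.zeros((1, n)); C[0, best] = 1.0         # measure the chosen node
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T / S
    P = (np.eye(n) - K @ C) @ P_pred
print("greedy probing schedule:", schedule)
```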

  17. Optimal stimulus scheduling for active estimation of evoked brain networks

    NASA Astrophysics Data System (ADS)

    Kafashan, MohammadMehdi; Ching, ShiNung

    2015-12-01

    Objective. We consider the problem of optimal probing to learn connections in an evoked dynamic network. Such a network, in which each edge measures an input-output relationship between sites in sensor/actuator-space, is relevant to emerging applications in neural mapping and neural connectivity estimation. Approach. We show that the problem of scheduling nodes to probe (i.e., stimulate) amounts to a problem of optimal sensor scheduling. Main results. By formulating the evoked network in state-space, we show that the solution to the greedy probing strategy has a convenient form and, under certain conditions, is optimal over a finite horizon. We adopt an expectation maximization technique to update the state-space parameters in an online fashion and demonstrate the efficacy of the overall approach in a series of detailed numerical examples. Significance. The proposed method provides a principled means to actively probe time-varying connections in neuronal networks. The overall method can be implemented in real time and is particularly well-suited to applications in stimulation-based cortical mapping in which the underlying network dynamics are changing over time.

  18. Rapid Growth of Large Single-Crystalline Graphene via Second Passivation and Multistage Carbon Supply.

    PubMed

    Lin, Li; Sun, Luzhao; Zhang, Jincan; Sun, Jingyu; Koh, Ai Leen; Peng, Hailin; Liu, Zhongfan

    2016-06-01

    A second passivation and a multistage carbon-source supply (CSS) allow a 50-fold enhancement of the growth rate of large single-crystalline graphene with a record growth rate of 101 μm min⁻¹, almost 10 times higher than for pure copper. To this end the CSS is tailored at separate stages of graphene growth on copper foil, combined with an effective suppression of new spontaneous nucleation via second passivation. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. A Weak Galerkin Method for the Reissner–Mindlin Plate in Primary Form

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mu, Lin; Wang, Junping; Ye, Xiu

    We developed a new finite element method for the Reissner–Mindlin equations in its primary form by using the weak Galerkin approach. Like other weak Galerkin finite element methods, this one is highly flexible and robust by allowing the use of discontinuous approximating functions on arbitrary shape of polygons and, at the same time, is parameter independent on its stability and convergence. Furthermore, error estimates of optimal order in mesh size h are established for the corresponding weak Galerkin approximations. Numerical experiments are conducted for verifying the convergence theory, as well as suggesting some superconvergence and a uniform convergence of the method with respect to the plate thickness.

  20. A Weak Galerkin Method for the Reissner–Mindlin Plate in Primary Form

    DOE PAGES

    Mu, Lin; Wang, Junping; Ye, Xiu

    2017-10-04

    We developed a new finite element method for the Reissner–Mindlin equations in its primary form by using the weak Galerkin approach. Like other weak Galerkin finite element methods, this one is highly flexible and robust by allowing the use of discontinuous approximating functions on arbitrary shape of polygons and, at the same time, is parameter independent on its stability and convergence. Furthermore, error estimates of optimal order in mesh size h are established for the corresponding weak Galerkin approximations. Numerical experiments are conducted for verifying the convergence theory, as well as suggesting some superconvergence and a uniform convergence of the method with respect to the plate thickness.

  1. Semi-analytic valuation of stock loans with finite maturity

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoping; Putri, Endah R. M.

    2015-10-01

    In this paper we study stock loans of finite maturity with different dividend distributions semi-analytically using the analytical approximation method in Zhu (2006). Stock loan partial differential equations (PDEs) are established under the Black-Scholes framework. The Laplace transform method is used to solve the PDEs. The optimal exit price and the stock loan value are obtained in Laplace space. Values in the original time space are recovered by numerical Laplace inversion. To demonstrate the efficiency and accuracy of our semi-analytic method, several examples are presented and the results are compared with those calculated using existing methods. We also present a calculation of the fair service fee charged by the lender for different loan parameters.
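
    The abstract recovers time-domain values by numerical Laplace inversion without naming a specific algorithm; the sketch below uses the Gaver-Stehfest method, one common choice, and checks it on a known transform pair (1/(s+1) and e^(-t)) rather than on the stock-loan solution itself.

```python
# Hedged sketch of numerical Laplace inversion by the Gaver-Stehfest method.
import math

def stehfest_coefficients(N):
    """Stehfest weights V_k for an even number of terms N."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)) / (
                math.factorial(N // 2 - j) * math.factorial(j)
                * math.factorial(j - 1) * math.factorial(k - j)
                * math.factorial(2 * j - k))
        V.append((-1) ** (k + N // 2) * s)
    return V

def invert_laplace(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s)."""
    ln2_t = math.log(2.0) / t
    V = stehfest_coefficients(N)
    return ln2_t * sum(V[k - 1] * F(k * ln2_t) for k in range(1, N + 1))

F = lambda s: 1.0 / (s + 1.0)               # transform of exp(-t), used as a check
for t in (0.5, 1.0, 2.0):
    print(t, invert_laplace(F, t), math.exp(-t))
```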

  2. A Discontinuous Galerkin Method for Parabolic Problems with Modified hp-Finite Element Approximation Technique

    NASA Technical Reports Server (NTRS)

    Kaneko, Hideaki; Bey, Kim S.; Hou, Gene J. W.

    2004-01-01

    A recent paper is generalized to a case where the spatial region is taken in R^3. The region is assumed to be a thin body, such as a panel on the wing or fuselage of an aerospace vehicle. The traditional h- as well as hp-finite element methods are applied to the surface defined in the x - y variables, while, through the thickness, the technique of the p-element is employed. A time and spatial discretization scheme, based upon an assumption of a certain weak singularity of ||u_t||_2, is used to derive an optimal a priori error estimate for the current method.

  3. A majorized Newton-CG augmented Lagrangian-based finite element method for 3D restoration of geological models

    NASA Astrophysics Data System (ADS)

    Tang, Peipei; Wang, Chengjing; Dai, Xiaoxia

    2016-04-01

    In this paper, we propose a majorized Newton-CG augmented Lagrangian-based finite element method for 3D elastic frictionless contact problems. In this scheme, we discretize the restoration problem via the finite element method and reformulate it to a constrained optimization problem. Then we apply the majorized Newton-CG augmented Lagrangian method to solve the optimization problem, which is very suitable for the ill-conditioned case. Numerical results demonstrate that the proposed method is a very efficient algorithm for various large-scale 3D restorations of geological models, especially for the restoration of geological models with complicated faults.

  4. Cook-Levin Theorem Algorithmic-Reducibility/Completeness = Wilson Renormalization-(Semi)-Group Fixed-Points; ``Noise''-Induced Phase-Transitions (NITs) to Accelerate Algorithmics (``NIT-Picking'') REPLACING CRUTCHES!!!: Models: Turing-machine, finite-state-models, finite-automata

    NASA Astrophysics Data System (ADS)

    Young, Frederic; Siegel, Edward

    Cook-Levin theorem algorithmic computational-complexity(C-C) algorithmic-equivalence reducibility/completeness equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited via Siegel FUZZYICS =CATEGORYICS = ANALOGYICS =PRAGMATYICS/CATEGORY-SEMANTICS ONTOLOGY COGNITION ANALYTICS-Aristotle ``square-of-opposition'' tabular list-format truth-table matrix analytics predicts and implements ''noise''-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics (1987)]-Sipser[Intro.Thy. Computation(`97)] algorithmic C-C: ''NIT-picking''(!!!), to optimize optimization-problems optimally(OOPO). Versus iso-''noise'' power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, ''NIT-picking'' is ''noise'' power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-''science''/SEANCE algorithmic C-C models: Turing-machine, finite-state-models, finite-automata,..., discrete-maths graph-theory equivalence to physics Feynman-diagrams are identified as early-days once-workable valid but limiting IMPEDING CRUTCHES(!!!), ONLY IMPEDE latter-days new-insights!!!

  5. Optimal Tikhonov Regularization in Finite-Frequency Tomography

    NASA Astrophysics Data System (ADS)

    Fang, Y.; Yao, Z.; Zhou, Y.

    2017-12-01

    The last decade has witnessed a progressive transition in seismic tomography from ray theory to finite-frequency theory which overcomes the resolution limit of the high-frequency approximation in ray theory. In addition to approximations in wave propagation physics, a main difference between ray-theoretical tomography and finite-frequency tomography is the sparseness of the associated sensitivity matrix. It is well known that seismic tomographic problems are ill-posed and regularizations such as damping and smoothing are often applied to analyze the tradeoff between data misfit and model uncertainty. The regularizations depend on the structure of the matrix as well as the noise level of the data. Cross-validation has been used to constrain data uncertainties in body-wave finite-frequency inversions when measurements at multiple frequencies are available to invert for a common structure. In this study, we explore an optimal Tikhonov regularization in surface-wave phase-velocity tomography based on minimization of an empirical Bayes risk function using theoretical training datasets. We exploit the structure of the sensitivity matrix in the framework of singular value decomposition (SVD), which also allows for the calculation of the complete resolution matrix. We compare the optimal Tikhonov regularization in finite-frequency tomography with traditional tradeoff analysis using surface wave dispersion measurements from global as well as regional studies.
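
    A minimal sketch of Tikhonov regularization expressed through the SVD, the structure exploited above; the sensitivity matrix, data and regularization parameters are synthetic placeholders rather than the surface-wave data set, and the empirical Bayes risk minimization used to pick the parameter is not shown.

```python
# Hedged sketch: Tikhonov-regularized least squares via SVD filter factors.
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((80, 40))            # placeholder sensitivity matrix
m_true = rng.standard_normal(40)
d = G @ m_true + 0.05 * rng.standard_normal(80)

U, s, Vt = np.linalg.svd(G, full_matrices=False)

def tikhonov_solution(alpha):
    # filter factors f_i = s_i^2 / (s_i^2 + alpha^2) damp small singular values
    f = s**2 / (s**2 + alpha**2)
    return Vt.T @ (f / s * (U.T @ d))

for alpha in (1e-3, 1e-1, 1.0):              # trace out the misfit/model-norm tradeoff
    m = tikhonov_solution(alpha)
    print(alpha, np.linalg.norm(G @ m - d), np.linalg.norm(m))
```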

  6. The continuous adjoint approach to the k-ε turbulence model for shape optimization and optimal active control of turbulent flows

    NASA Astrophysics Data System (ADS)

    Papoutsis-Kiachagias, E. M.; Zymaris, A. S.; Kavvadias, I. S.; Papadimitriou, D. I.; Giannakoglou, K. C.

    2015-03-01

    The continuous adjoint to the incompressible Reynolds-averaged Navier-Stokes equations coupled with the low Reynolds number Launder-Sharma k-ε turbulence model is presented. Both shape and active flow control optimization problems in fluid mechanics are considered, aiming at minimum viscous losses. In contrast to the frequently used assumption of frozen turbulence, the adjoint to the turbulence model equations together with appropriate boundary conditions are derived, discretized and solved. This is the first time that the adjoint equations to the Launder-Sharma k-ε model have been derived. Compared to the formulation that neglects turbulence variations, the impact of additional terms and equations is evaluated. Sensitivities computed using direct differentiation and/or finite differences are used for comparative purposes. To demonstrate the need for formulating and solving the adjoint to the turbulence model equations, instead of merely relying upon the 'frozen turbulence assumption', the gain in the optimization turnaround time offered by the proposed method is quantified.

  7. Optimization methods and silicon solar cell numerical models

    NASA Technical Reports Server (NTRS)

    Girardini, K.

    1986-01-01

    The goal of this project is the development of an optimization algorithm for use with a solar cell model. It is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm has been developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAPID). SCAPID uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the operation of a solar cell. A major obstacle is that the numerical methods used in SCAPID require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem has been alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution. Adapting SCAPID so that it could be called iteratively by the optimization code provided another means of reducing the CPU time required to complete an optimization. Instead of calculating the entire I-V curve, as is usually done in SCAPID, only the efficiency is calculated (maximum power voltage and current) and the solution from previous calculations is used to initiate the next solution.

  8. Optimal timing in biological processes

    USGS Publications Warehouse

    Williams, B.K.; Nichols, J.D.

    1984-01-01

    A general approach for obtaining solutions to a class of biological optimization problems is provided. The general problem is one of determining the appropriate time to take some action, when the action can be taken only once during some finite time frame. The approach can also be extended to cover a number of other problems involving animal choice (e.g., mate selection, habitat selection). Returns (assumed to index fitness) are treated as random variables with time-specific distributions, and can be either observable or unobservable at the time action is taken. In the case of unobservable returns, the organism is assumed to base decisions on some ancillary variable that is associated with returns. Optimal policies are derived for both situations and their properties are discussed. Various extensions are also considered, including objective functions based on functions of returns other than the mean; nonmonotonic relationships between the observable variable and returns; possible death of the organism before action is taken; and discounting of future returns. A general feature of the optimal solutions for many of these problems is that an organism should be very selective (i.e., should act only when returns or expected returns are relatively high) at the beginning of the time frame and should become less and less selective as time progresses. An example of the application of optimal timing to a problem involving the timing of bird migration is discussed, and a number of other examples for which the approach is applicable are described.
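
    For the observable-returns case described above, a small backward-induction sketch: the acceptance threshold at each time equals the expected value of waiting, so the decision maker acts only when the observed return exceeds it. The declining normal return distributions and the zero value of never acting are illustrative assumptions, not those of the migration example.

```python
# Hedged sketch of finite-horizon optimal stopping by backward induction.
import numpy as np

rng = np.random.default_rng(2)
T = 10
means = np.linspace(1.0, 0.2, T)        # placeholder: returns tend to decline over time
sigma = 0.3
samples = [rng.normal(means[t], sigma, 10000) for t in range(T)]

V = np.empty(T + 1)
V[T] = 0.0                              # assumption: taking no action by the deadline yields 0
threshold = np.empty(T)
for t in range(T - 1, -1, -1):
    threshold[t] = V[t + 1]             # act only if today's return beats waiting
    V[t] = np.mean(np.maximum(samples[t], V[t + 1]))

print("acceptance thresholds:", np.round(threshold, 3))
# thresholds fall toward the end of the horizon: selective early, less selective later
```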

  9. MSW Time to Tumor Model and Supporting Documentation

    EPA Science Inventory

    The multistage Weibull (MSW) time-to-tumor model and related documentation were developed principally (but not exclusively) for conducting time-to-tumor analyses to support risk assessments under the IRIS program. These programs and related docum...

  10. Empirical evidence for resource-rational anchoring and adjustment.

    PubMed

    Lieder, Falk; Griffiths, Thomas L; M Huys, Quentin J; Goodman, Noah D

    2018-04-01

    People's estimates of numerical quantities are systematically biased towards their initial guess. This anchoring bias is usually interpreted as a sign of human irrationality, but it has recently been suggested that the anchoring bias instead results from people's rational use of their finite time and limited cognitive resources. If this were true, then adjustment should decrease with the relative cost of time. To test this hypothesis, we designed a new numerical estimation paradigm that controls people's knowledge and varies the cost of time and error independently while allowing people to invest as much or as little time and effort into refining their estimate as they wish. Two experiments confirmed the prediction that adjustment decreases with time cost but increases with error cost regardless of whether the anchor was self-generated or provided. These results support the hypothesis that people rationally adapt their number of adjustments to achieve a near-optimal speed-accuracy tradeoff. This suggests that the anchoring bias might be a signature of the rational use of finite time and limited cognitive resources rather than a sign of human irrationality.

  11. Optimal management strategies in variable environments: Stochastic optimal control methods

    USGS Publications Warehouse

    Williams, B.K.

    1985-01-01

    Dynamic optimization was used to investigate the optimal defoliation of salt desert shrubs in north-western Utah. Management was formulated in the context of optimal stochastic control theory, with objective functions composed of discounted or time-averaged biomass yields. Climatic variability and community patterns of salt desert shrublands make the application of stochastic optimal control both feasible and necessary. A primary production model was used to simulate shrub responses and harvest yields under a variety of climatic regimes and defoliation patterns. The simulation results then were used in an optimization model to determine optimal defoliation strategies. The latter model encodes an algorithm for finite state, finite action, infinite discrete time horizon Markov decision processes. Three questions were addressed: (i) What effect do changes in weather patterns have on optimal management strategies? (ii) What effect does the discounting of future returns have? (iii) How do the optimal strategies perform relative to certain fixed defoliation strategies? An analysis was performed for the three shrub species, winterfat (Ceratoides lanata), shadscale (Atriplex confertifolia) and big sagebrush (Artemisia tridentata). In general, the results indicate substantial differences among species in optimal control strategies, which are associated with differences in physiological and morphological characteristics. Optimal policies for big sagebrush varied less with variation in climate, reserve levels and discount rates than did either shadscale or winterfat. This was attributed primarily to the overwintering of photosynthetically active tissue and to metabolic activity early in the growing season. Optimal defoliation of shadscale and winterfat generally was more responsive to differences in plant vigor and climate, reflecting the sensitivity of these species to utilization and replenishment of carbohydrate reserves. Similarities could be seen in the influence of both the discount rate and the climatic patterns on optimal harvest strategies. In general, decreases in either the discount rate or in the frequency of favorable weather patterns led to a more conservative defoliation policy. This did not hold, however, for plants in states of low vigor. Optimal control for shadscale and winterfat tended to stabilize on a policy of heavy defoliation stress, followed by one or more seasons of rest. Big sagebrush required a policy of heavy summer defoliation when sufficient active shoot material is present at the beginning of the growing season. The comparison of fixed and optimal strategies indicated considerable improvement in defoliation yields when optimal strategies are followed. The superior performance was attributable to increased defoliation of plants in states of high vigor. Improvements were found for both discounted and undiscounted yields.
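
    A minimal sketch of the finite-state, finite-action, discounted Markov decision process machinery referred to above, using plain value iteration on a tiny synthetic model (states standing in for vigor classes, actions for defoliation levels); the transition probabilities and yields are random placeholders rather than output of the primary production model.

```python
# Hedged sketch: value iteration for a small discounted MDP.
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.95
rng = np.random.default_rng(3)
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # P[a, s, s']
R = rng.uniform(0.0, 1.0, size=(n_actions, n_states))             # yield R[a, s]

V = np.zeros(n_states)
for _ in range(500):                                   # value iteration
    Q = R + gamma * np.einsum("asn,n->as", P, V)       # action values
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
policy = Q.argmax(axis=0)
print("optimal action per state:", policy)
```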

  12. Perfect coupling of light to a periodic dielectric/metal/dielectric structure

    NASA Astrophysics Data System (ADS)

    Wang, Zhengling; Li, Shiqiang; Chang, R. P. H.; Ketterson, John B.

    2014-07-01

    Using the finite difference time domain method, it is demonstrated that perfect coupling can be achieved between normally incident light and a periodic dielectric/metal/dielectric structure. The structure serves as a diffraction grating that excites modes related to the long range surface plasmon and short range surface plasmon modes that propagate on continuous metallic films. By optimizing the structural dimensions, perfect coupling is achieved between the incident light and these modes. A high Q of 697 and an accompanying ultrasharp linewidth of 0.8 nm are predicted for a 10 nm silver film for optimal conditions.

  13. Robust optimization of front members in a full frontal car impact

    NASA Astrophysics Data System (ADS)

    Aspenberg (né Lönn), David; Jergeus, Johan; Nilsson, Larsgunnar

    2013-03-01

    In the search for lightweight automobile designs, it is necessary to assure that robust crashworthiness performance is achieved. Structures that are optimized to handle a finite number of load cases may perform poorly when subjected to various dispersions. Thus, uncertainties must be accounted for in the optimization process. This article presents an approach to optimization where all design evaluations include an evaluation of the robustness. Metamodel approximations are applied both to the design space and the robustness evaluations, using artificial neural networks and polynomials, respectively. The features of the robust optimization approach are displayed in an analytical example, and further demonstrated in a large-scale design example of front side members of a car. Different optimization formulations are applied and it is shown that the proposed approach works well. It is also concluded that a robust optimization puts higher demands on the finite element model performance than usual.

  14. Multi-stage ranking of emergency technology alternatives for water source pollution accidents using a fuzzy group decision making tool.

    PubMed

    Qu, Jianhua; Meng, Xianlin; You, Hong

    2016-06-05

    Due to the increasing number of unexpected water source pollution events, selection of the most appropriate disposal technology for a specific pollution scenario is of crucial importance to the security of urban water supplies. However, the formulation of the optimum option is considerably difficult owing to the substantial uncertainty of such accidents. In this research, a multi-stage technical screening and evaluation tool is proposed to determine the optimal technique scheme, considering the areas of pollutant elimination both in drinking water sources and water treatment plants. In stage 1, a CBR-based group decision tool was developed to screen available technologies for different scenarios. Then, the threat degree caused by the pollution was estimated in stage 2 using a threat evaluation system and was partitioned into four levels. For each threat level, a corresponding set of technique evaluation criteria weights was obtained using Group-G1. To identify the optimization alternatives corresponding to the different threat levels, an extension of TOPSIS, a multi-criteria interval-valued trapezoidal fuzzy decision making technique containing the four arrays of criteria weights, to a group decision environment was investigated in stage 3. The effectiveness of the developed tool was elaborated by two actual thallium-contaminated scenarios associated with different threat levels. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Mixed Integer PDE Constrained Optimization for the Control of a Wildfire Hazard

    DTIC Science & Technology

    2017-01-01

    are nodes suitable for extinguishing the fire. We introduce a discretization of the time horizon [0, T] by the set of times T := {0, Δt, ..., n_t Δt = T} ... of the constraints and objective with a discrete counterpart. The PDE is replaced by a linear system obtained from a convergent finite difference ... method [5] and the integral is replaced by a quadrature formula. The domain is discretized by replacing Ω with an equidistant grid of length Δx

  16. Optimal rotated staggered-grid finite-difference schemes for elastic wave modeling in TTI media

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Yan, Hongyong; Liu, Hong

    2015-11-01

    The rotated staggered-grid finite-difference (RSFD) is an effective approach for numerical modeling to study the wavefield characteristics in tilted transversely isotropic (TTI) media. But it suffers from serious numerical dispersion, which directly affects the modeling accuracy. In this paper, we propose two different optimal RSFD schemes based on the sampling approximation (SA) method and the least-squares (LS) method respectively to overcome this problem. We first briefly introduce the RSFD theory, based on which we respectively derive the SA-based RSFD scheme and the LS-based RSFD scheme. Then different forms of analysis are used to compare the SA-based RSFD scheme and the LS-based RSFD scheme with the conventional RSFD scheme, which is based on the Taylor-series expansion (TE) method. The contrast in numerical accuracy analysis verifies the greater accuracy of the two proposed optimal schemes, and indicates that these schemes can effectively widen the wavenumber range with great accuracy compared with the TE-based RSFD scheme. Further comparisons between these two optimal schemes show that at small wavenumbers, the SA-based RSFD scheme performs better, while at large wavenumbers, the LS-based RSFD scheme leads to a smaller error. Finally, the modeling results demonstrate that for the same operator length, the SA-based RSFD scheme and the LS-based RSFD scheme can achieve greater accuracy than the TE-based RSFD scheme, while for the same accuracy, the optimal schemes can adopt shorter difference operators to save computing time.
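
    A small sketch of the least-squares idea behind the LS-based scheme: choose staggered-grid first-derivative coefficients so that the discrete wavenumber response matches the exact one over a target band. The operator length and band limit below are illustrative assumptions, and the rotated-grid, TTI elastic-wave context is omitted.

```python
# Hedged sketch: least-squares design of staggered-grid FD coefficients.
import numpy as np

M = 4                                        # half operator length (taps per side)
beta = np.linspace(1e-3, 0.6 * np.pi, 400)   # k*h samples inside the target band

# response of a staggered-grid first derivative: 2 * sum_m c_m sin((m - 1/2) * beta)
A = 2.0 * np.sin(np.outer(beta, np.arange(1, M + 1) - 0.5))
c, *_ = np.linalg.lstsq(A, beta, rcond=None)  # match the exact response beta in LS sense

print("LS coefficients:", np.round(c, 5))
print("max response error in band:", np.max(np.abs(A @ c - beta)))
```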

  17. Output-Feedback Control of Unknown Linear Discrete-Time Systems With Stochastic Measurement and Process Noise via Approximate Dynamic Programming.

    PubMed

    Wang, Jun-Sheng; Yang, Guang-Hong

    2017-07-25

    This paper studies the optimal output-feedback control problem for unknown linear discrete-time systems with stochastic measurement and process noise. A dithered Bellman equation with the innovation covariance matrix is constructed via the expectation operator given in the form of a finite summation. On this basis, an output-feedback-based approximate dynamic programming method is developed, where the terms depending on the innovation covariance matrix are available with the aid of the innovation covariance matrix identified beforehand. Therefore, by iterating the Bellman equation, the resulting value function can converge to the optimal one in the presence of the aforementioned noise, and the nearly optimal control laws are delivered. To show the effectiveness and the advantages of the proposed approach, a simulation example and a velocity control experiment on a dc machine are employed.

  18. Complexity and approximability of quantified and stochastic constraint satisfaction problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunt, H. B.; Stearns, R. L.; Marathe, M. V.

    2001-01-01

    Let D be an arbitrary (not necessarily finite) nonempty set, let C be a finite set of constant symbols denoting arbitrary elements of D, and let S be an arbitrary finite set of finite-arity relations on D. We denote the problem of determining the satisfiability of finite conjunctions of relations in S applied to variables (to variables and symbols in C) by SAT(S) (by SAT_c(S)). Here, we study simultaneously the complexity of, and the existence of efficient approximation algorithms for, a number of variants of the problems SAT(S) and SAT_c(S), for many different D, C, and S. These problem variants include decision and optimization problems for formulas, quantified formulas, and stochastically-quantified formulas. We denote these problems by Q-SAT(S), MAX-Q-SAT(S), S-SAT(S), MAX-S-SAT(S), MAX-NSF-Q-SAT(S) and MAX-NSF-S-SAT(S). The main contribution is the development of a unified predictive theory for characterizing the complexity of these problems. Our unified approach is based on two basic concepts: (i) strongly-local replacements/reductions and (ii) relational/algebraic representability. Let k ≥ 2, and let S be a finite set of finite-arity relations on Σ_k with the following condition on S: all finite-arity relations on Σ_k can be represented as finite existentially-quantified conjunctions of relations in S applied to variables (to variables and constant symbols in C). Then we prove the following new results: (1) The problems SAT(S) and SAT_c(S) are both NQL-complete and ≤^bw_logn-complete for NP. (2) The problems Q-SAT(S) and Q-SAT_c(S) are PSPACE-complete. Letting k = 2, the problems S-SAT(S) and S-SAT_c(S) are PSPACE-complete. (3) There exists ε > 0 for which approximating the problem MAX-Q-SAT(S) within ε times optimum is PSPACE-hard. Letting k = 2, there exists ε > 0 for which approximating the problem MAX-S-SAT(S) within ε times optimum is PSPACE-hard. (4) For all ε > 0, the problems MAX-NSF-Q-SAT(S) and MAX-NSF-S-SAT(S) are PSPACE-hard to approximate within a factor of n^ε times optimum. These results significantly extend the earlier results of (i) Papadimitriou [Pa851] on the complexity of stochastic satisfiability and (ii) Condon, Feigenbaum, Lund and Shor [CF+93, CF+94], by identifying natural classes of PSPACE-hard optimization problems with provably PSPACE-hard ε-approximation problems. Moreover, most of our results hold not just for Boolean relations; most previous results were done only in the context of Boolean domains. The results also constitute a significant step towards obtaining dichotomy theorems for the problems MAX-S-SAT(S) and MAX-Q-SAT(S), a research area of recent interest [CF+93, CF+94, Cr95, KSW97, LMP99].

  19. Multichannel-Sensing Scheduling and Transmission-Energy Optimizing in Cognitive Radio Networks with Energy Harvesting.

    PubMed

    Hoan, Tran-Nhut-Khai; Hiep, Vu-Van; Koo, In-Soo

    2016-03-31

    This paper considers cognitive radio networks (CRNs) utilizing multiple time-slotted primary channels in which cognitive users (CUs) are powered by energy harvesters. The CUs are subject to hardware constraints on radio devices that allow them to sense and transmit on only one channel at a time. For a scenario where the arrival of harvested energy packets and the battery capacity are finite, we propose a scheme to optimize (i) the channel-sensing schedule (consisting of finding the optimal action (silent or active) and sensing order of channels) and (ii) the optimal transmission energy set corresponding to the channels in the sensing order for the operation of the CU in order to maximize the expected throughput of the CRN over multiple time slots. Frequency-switching delay, energy-switching cost, correlation in spectrum occupancy across time and frequency and errors in spectrum sensing are also considered in this work. The performance of the proposed scheme is evaluated via simulation. The simulation results show that the throughput of the proposed scheme is greatly improved, in comparison to related schemes in the literature. The collision ratio on the primary channels is also investigated.

  20. A viscoelastic model for dielectric elastomers based on a continuum mechanical formulation and its finite element implementation

    NASA Astrophysics Data System (ADS)

    Bueschel, A.; Klinkel, S.; Wagner, W.

    2011-04-01

    Smart materials are active and multifunctional materials, which play an important part in sensor and actuator applications. These materials have the potential to transform passive structures into adaptive systems. However, a prerequisite for the design and the optimization of these materials is that reliable models exist which incorporate the interaction between the different combinations of thermal, electrical, magnetic, optical and mechanical effects. Polymeric electroelastic materials, so-called electroactive polymers (EAPs), have the characteristic of deforming when an electric field is applied. EAPs possess the benefit that they share the characteristics of polymers: they are lightweight, inexpensive, fracture tolerant and elastic, and their chemical and physical structure is well understood. However, the description "electroactive polymer" is a generic term for many kinds of different microscopic mechanisms and polymeric materials. Based on the laws of electromagnetism and elasticity, a visco-electroelastic model is developed and implemented into the finite element method (FEM). The presented three-dimensional solid element has eight nodes and trilinear interpolation functions for the displacement and the electric potential. The continuum mechanics model accounts for finite deformations, time dependency and the nearly incompressible behavior of the material. To describe the possible large time-dependent deformations, a finite viscoelastic model with a split of the deformation gradient is used. Thereby the time-dependent characteristic of polymeric materials is incorporated through the free energy function. The electromechanical interactions are considered by the electrostatic forces and inside the energy function.

  1. Time and frequency constrained sonar signal design for optimal detection of elastic objects.

    PubMed

    Hamschin, Brandon; Loughlin, Patrick J

    2013-04-01

    In this paper, the task of model-based transmit signal design for optimizing detection is considered. Building on past work that designs the spectral magnitude for optimizing detection, two methods for synthesizing minimum duration signals with this spectral magnitude are developed. The methods are applied to the design of signals that are optimal for detecting elastic objects in the presence of additive noise and self-noise. Elastic objects are modeled as linear time-invariant systems with known impulse responses, while additive noise (e.g., ocean noise or receiver noise) and acoustic self-noise (e.g., reverberation or clutter) are modeled as stationary Gaussian random processes with known power spectral densities. The first approach finds the waveform that preserves the optimal spectral magnitude while achieving the minimum temporal duration. The second approach yields a finite-length time-domain sequence by maximizing temporal energy concentration, subject to the constraint that the spectral magnitude is close (in a least-squares sense) to the optimal spectral magnitude. The two approaches are then connected analytically, showing the former is a limiting case of the latter. Simulation examples that illustrate the theory are accompanied by discussions that address practical applicability and how one might satisfy the need for target and environmental models in the real-world.

  2. An Inventory Model for Special Display Goods with Seasonal Demand

    NASA Astrophysics Data System (ADS)

    Kawakatsu, Hidefumi

    2010-10-01

    The present study discusses the retailer's optimal replenishment policy for seasonal products. The demand rate of seasonal merchandise such as clothes, sporting goods, children's toys and electrical home appliances tends to decrease with time after reaching its maximum value. In this study, we focus on "Special Display Goods", which are heaped up in end displays or special areas at retail stores. They sell quickly when the quantity displayed is large, but slowly once the quantity becomes small. We develop the model with a finite time horizon (selling period) to determine the optimal replenishment policy, which maximizes the retailer's total profit. Numerical examples are presented to illustrate the theoretical underpinnings of the proposed model.

  3. Thickness optimization of auricular silicone scaffold based on finite element analysis.

    PubMed

    Jiang, Tao; Shang, Jianzhong; Tang, Li; Wang, Zhuo

    2016-01-01

    An optimized thickness of a transplantable auricular silicone scaffold was researched. The original image data were acquired from CT scans, and reverse modeling technology was used to build a digital 3D model of an auricle. The transplant process was simulated in ANSYS Workbench by finite element analysis (FEA), solid scaffolds were manufactured based on the FEA results, and the transplantable artificial auricle was finally obtained with an optimized thickness, as well as sufficient intensity and hardness. This paper provides a reference for clinical transplant surgery. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Cooperative combinatorial optimization: evolutionary computation case study.

    PubMed

    Burgin, Mark; Eberbach, Eugene

    2008-01-01

    This paper presents a formalization of the notion of cooperation and competition of multiple systems that work toward a common optimization goal of the population using evolutionary computation techniques. It is proved that evolutionary algorithms are more expressive than conventional recursive algorithms, such as Turing machines. Three classes of evolutionary computations are introduced and studied: bounded finite, unbounded finite, and infinite computations. Universal evolutionary algorithms are constructed. Such properties of evolutionary algorithms as completeness, optimality, and search decidability are examined. A natural extension of evolutionary Turing machine (ETM) model is proposed to properly reflect phenomena of cooperation and competition in the whole population.

  5. Optimal Assignment Problem Applications of Finite Mathematics to Business and Economics. [and] Difference Equations with Applications. Applications of Difference Equations to Economics and Social Sciences. [and] Selected Applications of Mathematics to Finance and Investment. Applications of Elementary Algebra to Finance. [and] Force of Interest. Applications of Calculus to Finance. UMAP Units 317, 322, 381, 382.

    ERIC Educational Resources Information Center

    Gale, David; And Others

    Four units make up the contents of this document. The first examines applications of finite mathematics to business and economics. The user is expected to learn the method of optimization in optimal assignment problems. The second module presents applications of difference equations to economics and social sciences, and shows how to: 1) interpret…

  6. Tooth shape optimization of brushless permanent magnet motors for reducing torque ripples

    NASA Astrophysics Data System (ADS)

    Hsu, Liang-Yi; Tsai, Mi-Ching

    2004-11-01

    This paper presents a tooth shape optimization method based on a genetic algorithm to reduce the torque ripple of brushless permanent magnet motors under two different magnetization directions. The analysis of this design method mainly focuses on magnetic saturation and cogging torque, and the computation of the optimization process is based on an equivalent magnetic network circuit. The simulation results, obtained from the finite element analysis, are used to confirm the accuracy and performance. Finite element analysis results from different tooth shapes are compared to show the effectiveness of the proposed method.

  7. Finite-Time Performance of Local Search Algorithms: Theory and Application

    DTIC Science & Technology

    2010-06-10

    security devices deployed at airport security checkpoints are used to detect prohibited items (e.g., guns, knives, explosives). Each security device...security devices are deployed, the practical issue of determining how to optimally use them can be difficult. For an airport security system design...checked baggage), explosive detection systems (designed to detect explosives in checked baggage), and detailed hand search by an airport security official

  8. Power performance of nonisentropic Brayton cycle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, C.; Kiang, R.L.

    In this paper, work and power optimization of a Brayton cycle are analyzed with a finite-time heat transfer analysis. This work extends the recent flurry of publications in heat engine efficiency under the maximum power condition by incorporating nonisentropic compression and expansion. As expected, these nonisentropic processes lower the power output as well as the cycle efficiency when compared with an endoreversible Brayton cycle under the same conditions.

  9. Tapered holey fibers for spot-size and numerical-aperture conversion.

    PubMed

    Town, G E; Lizier, J T

    2001-07-15

    Adiabatically tapered holey fibers are shown to be potentially useful for guided-wave spot-size and numerical-aperture conversion. Conditions for adiabaticity and design guidelines are provided in terms of the effective-index model. We also present finite-difference time-domain calculations of downtapered holey fiber, showing that large spot-size conversion factors are obtainable with minimal loss by use of short, optimally shaped tapers.

  10. Numerical simulation of rotating stall and surge alleviation in axial compressors

    NASA Astrophysics Data System (ADS)

    Niazi, Saeid

    Axial compression systems are widely used in many aerodynamic applications. However, the operability of such systems is limited at low-mass flow rates by fluid dynamic instabilities. These instabilities lead the compressor to rotating stall or surge. In some instances, a combination of rotating stall and surge, called modified surge, has also been observed. Experimental and computational methods are two approaches for investigating these adverse aerodynamic phenomena. In this study, numerical investigations have been performed to study these phenomena, and to develop control strategies for alleviation of rotating stall and surge. A three-dimensional unsteady Navier-Stokes analysis capable of modeling multistage turbomachinery components has been developed. This method uses a finite volume approach that is third order accurate in space, and first or second order in time. The scheme is implicit in time, permitting the use of large time steps. A one-equation Spalart-Allmaras model is used to model the effects of turbulence. The analysis is cast in a very general form so that a variety of configurations---centrifugal compressors and multistage compressors---may be analyzed with minor modifications to the analysis. Calculations have been done both at design and off-design conditions for an axial compressor tested at NASA Glenn Research Center. At off-design conditions the calculations show that the tip leakage flow becomes strong, and its interaction with the tip shock leads to compressor rotating stall and modified surge. Both global variations to the mass flow rate, associated with surge, and azimuthal variations in flow conditions indicative of rotating stall, were observed. It is demonstrated that these adverse phenomena may be eliminated, and stable operation restored, by the use of bleed valves located on the diffuser walls. Two types of controls were examined: open-loop and closed-loop. In the open-loop case mass is removed at a fixed, preset rate from the diffuser. In the closed-loop case, the rate of bleed is linked to pressure fluctuations upstream of the compressor face. The bleed valve is activated when the amplitude of pressure fluctuations sensed by the probes exceeds a certain range. Calculations show that both types of bleeding eliminate both rotating stall and modified surge, and suppress the precursor disturbances upstream of the compressor face. It is observed that smaller amounts of compressed air need to be removed with the closed-loop control, as compared to open-loop control.

  11. Small Body GN&C Research Report: A Robust Model Predictive Control Algorithm with Guaranteed Resolvability

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet A.; Carson, John M., III

    2005-01-01

    A robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems is developed that guarantees the resolvability of the associated finite-horizon optimal control problem in a receding-horizon implementation. The control consists of two components: (i) a feed-forward part and (ii) a feedback part. Feed-forward control is obtained by online solution of a finite-horizon optimal control problem for the nominal system dynamics. The feedback control policy is designed off-line based on a bound on the uncertainty in the system model. The entire controller is shown to be robustly stabilizing with a region of attraction composed of initial states for which the finite-horizon optimal control problem is feasible. The controller design for this algorithm is demonstrated on a class of systems with uncertain nonlinear terms that have norm-bounded derivatives, and derivatives in polytopes. An illustrative numerical example is also provided.

  12. A robust model predictive control algorithm for uncertain nonlinear systems that guarantees resolvability

    NASA Technical Reports Server (NTRS)

    Acikmese, Ahmet Behcet; Carson, John M., III

    2006-01-01

    A robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems is developed that guarantees resolvability. With resolvability, initial feasibility of the finite-horizon optimal control problem implies future feasibility in a receding-horizon framework. The control consists of two components: (i) a feed-forward part and (ii) a feedback part. Feed-forward control is obtained by online solution of a finite-horizon optimal control problem for the nominal system dynamics. The feedback control policy is designed off-line based on a bound on the uncertainty in the system model. The entire controller is shown to be robustly stabilizing with a region of attraction composed of initial states for which the finite-horizon optimal control problem is feasible. The controller design for this algorithm is demonstrated on a class of systems with uncertain nonlinear terms that have norm-bounded derivatives and derivatives in polytopes. An illustrative numerical example is also provided.
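
    A minimal sketch of the receding-horizon structure shared by the two entries above: at each step a finite-horizon optimal control problem is solved for the nominal model and only the first input is applied. Here the finite-horizon problem is an unconstrained LQ problem solved by a backward Riccati sweep for a placeholder double integrator; the robust feedback component and feasibility guarantees of the papers are not reproduced.

```python
# Hedged sketch: receding-horizon control with a nominal finite-horizon LQ solve.
import numpy as np

rng = np.random.default_rng(6)
A = np.array([[1.0, 0.1], [0.0, 1.0]])       # nominal double-integrator dynamics
B = np.array([[0.005], [0.1]])
Q, R, N = np.eye(2), np.array([[0.1]]), 20   # stage weights and horizon length

def first_lq_input(x0):
    """Finite-horizon LQ for the nominal model: backward Riccati sweep,
    returning only the control for the first step of the horizon."""
    P = Q.copy()
    K = None
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return -K @ x0          # after N sweeps, K is the gain for the first step

x = np.array([1.0, 0.0])
for t in range(50):                          # receding-horizon loop
    u = first_lq_input(x)                    # re-solve at every step, apply first input
    w = 0.001 * rng.standard_normal(2)       # small unmodeled disturbance
    x = A @ x + B @ u + w
print("state after 50 steps:", x)
```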

  13. Electronic and geometric properties of ETS-10: QM/MM studies of cluster models.

    PubMed

    Zimmerman, Anne Marie; Doren, Douglas J; Lobo, Raul F

    2006-05-11

    Hybrid DFT/MM methods have been used to investigate the electronic and geometric properties of the microporous titanosilicate ETS-10. A comparison of finite length and periodic models demonstrates that band gap energies for ETS-10 can be well represented with relatively small cluster models. Optimization of finite clusters leads to different local geometries for bulk and end sites, where the local bulk TiO6 geometry is in good agreement with recent experimental results. Geometry optimizations reveal that any asymmetry within the axial O-Ti-O chain is negligible. The band gap in the optimized model corresponds to a O(2p) --> Tibulk(3d) transition. The results suggest that the three Ti atom, single chain, symmetric, finite cluster is an effective model for the geometric and electronic properties of bulk and end TiO6 groups in ETS-10.

  14. A Numerical Approximation Framework for the Stochastic Linear Quadratic Regulator on Hilbert Spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levajković, Tijana, E-mail: tijana.levajkovic@uibk.ac.at, E-mail: t.levajkovic@sf.bg.ac.rs; Mena, Hermann, E-mail: hermann.mena@uibk.ac.at; Tuffaha, Amjad, E-mail: atufaha@aus.edu

    We present an approximation framework for computing the solution of the stochastic linear quadratic control problem on Hilbert spaces. We focus on the finite horizon case and the related differential Riccati equations (DREs). Our approximation framework is concerned with the so-called “singular estimate control systems” (Lasiecka in Optimal control problems and Riccati equations for systems with unbounded controls and partially analytic generators: applications to boundary and point control problems, 2004) which model certain coupled systems of parabolic/hyperbolic mixed partial differential equations with boundary or point control. We prove that the solutions of the approximate finite-dimensional DREs converge to the solution of the infinite-dimensional DRE. In addition, we prove that the optimal state and control of the approximate finite-dimensional problem converge to the optimal state and control of the corresponding infinite-dimensional problem.

  15. Reducing Delay in Diagnosis: Multistage Recommendation Tracking.

    PubMed

    Wandtke, Ben; Gallagher, Sarah

    2017-11-01

    The purpose of this study was to determine whether a multistage tracking system could improve communication between health care providers, reducing the risk of delay in diagnosis related to inconsistent communication and tracking of radiology follow-up recommendations. Unconditional recommendations for imaging follow-up of all diagnostic imaging modalities excluding mammography (n = 589) were entered into a database and tracked through a multistage tracking system for 13 months. Tracking interventions were performed for patients for whom completion of recommended follow-up imaging could not be identified 1 month after the recommendation due date. Postintervention compliance with the follow-up recommendation required examination completion or clinical closure (i.e., biopsy, limited life expectancy or death, or subspecialist referral). Baseline radiology information system checks performed 1 month after the recommendation due date revealed timely completion of 43.1% of recommended imaging studies at our institution before intervention. Three separate tracking interventions were studied, showing effectiveness between 29.0% and 57.8%. The multistage tracking system increased the examination completion rate to 70.5% (a 52% increase) and reduced the rate of unknown follow-up compliance and the associated risk of delay in diagnosis to 13.9% (a 74% decrease). Examinations completed after tracking intervention generated revenue 4.1 times greater than the labor cost. Performing sequential radiology recommendation tracking interventions can substantially reduce the rate of unknown follow-up compliance and add value to the health system. Unknown follow-up compliance is a risk factor for delay in diagnosis, a form of preventable medical error commonly identified in malpractice claims involving radiologists and office-based practitioners.

  16. A multi-stage heuristic algorithm for matching problem in the modified miniload automated storage and retrieval system of e-commerce

    NASA Astrophysics Data System (ADS)

    Wang, Wenrui; Wu, Yaohua; Wu, Yingying

    2016-05-01

    E-commerce, as an emerging marketing mode, has attracted more and more attention and gradually changed the way we live. However, the existing layout of distribution centers cannot sufficiently fulfill the storage and picking demands of e-commerce. In this paper, a modified miniload automated storage/retrieval system is designed to fit these new characteristics of e-commerce logistics. Meanwhile, a matching problem, concerning the improvement of picking efficiency in the new system, is studied in this paper. The problem is how to reduce the travelling distance of totes between aisles and picking stations. A multi-stage heuristic algorithm is proposed based on a statement and model of this problem. The main idea of this algorithm is, with some heuristic strategies based on similarity coefficients, to minimize the transport of items that cannot reach their destination picking stations through direct conveyors alone. The experimental results based on computer-generated cases show that the average reduction rate of indirect transport times can reach 14.36% with the application of the multi-stage heuristic algorithm. For the cases from a real e-commerce distribution center, the order processing time can be reduced from 11.20 h to 10.06 h with the help of the modified system and the proposed algorithm. In summary, this research proposed a modified system and a multi-stage heuristic algorithm that can reduce the travelling distance of totes effectively and improve the overall performance of an e-commerce distribution center.

  17. Weighted finite impulse response filter for chromatic dispersion equalization in coherent optical fiber communication systems

    NASA Astrophysics Data System (ADS)

    Zeng, Ziyi; Yang, Aiying; Guo, Peng; Feng, Lihui

    2018-01-01

    Time-domain CD equalization using a finite impulse response (FIR) filter is now a common approach for coherent optical fiber communication systems. The complex weights of the FIR taps are calculated from a truncated impulse response of the CD transfer function, and the modulus of the complex weights is constant. In our work, we take the limited bandwidth of a single channel signal into account and propose weighted FIRs to improve the performance of CD equalization. The key in weighted FIR filters is the selection and optimization of the weighting functions. In order to present the performance of different types of weighted FIR filters, a square-root raised cosine FIR (SRRC-FIR) and a Gaussian FIR (GS-FIR) are investigated. The optimization of the square-root raised cosine FIR and the Gaussian FIR is made in terms of the bit error rate (BER) of QPSK and 16QAM coherent detection signals. The results demonstrate that the optimized parameters of the weighted filters are independent of the modulation format, the symbol rate and the length of the transmission fiber. With the optimized weighted FIRs, the BER of the CD-equalized signal is decreased significantly. Although this paper has investigated two types of weighted FIR filters, i.e. the SRRC-FIR filter and the GS-FIR filter, the principle of weighted FIR can also be extended to other symmetric functions such as the super-Gaussian function, the hyperbolic secant function, etc.
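
    A short sketch of the tap construction discussed above: constant-modulus taps are computed from the truncated impulse response of the CD transfer function (using the closed form commonly quoted for coherent receivers) and then tapered with a Gaussian weight, in the spirit of the GS-FIR. The fiber, sampling and weighting parameters are illustrative assumptions, not the optimized values reported in the paper.

```python
# Hedged sketch: CD-compensating FIR taps with a Gaussian weighting applied.
import numpy as np

c = 299792458.0                    # speed of light, m/s
lam = 1550e-9                      # wavelength, m
D = 17e-6                          # dispersion, s/m^2 (= 17 ps/(nm*km))
z = 500e3                          # fiber length, m
T = 1.0 / 32e9                     # sampling period for a 32 GSa/s receiver, s

# commonly used tap count and constant-modulus taps from the truncated CD impulse response
N = 2 * int(np.floor(abs(D) * lam**2 * z / (2.0 * c * T**2))) + 1
k = np.arange(-(N // 2), N // 2 + 1)
taps = np.sqrt(1j * c * T**2 / (D * lam**2 * z)) \
       * np.exp(-1j * np.pi * c * T**2 * k**2 / (D * lam**2 * z))

# Gaussian weighting (GS-FIR idea): taper the tap modulus toward the filter edges
sigma = 0.6 * (N // 2)             # illustrative width; the paper optimizes this
gs_taps = taps * np.exp(-0.5 * (k / sigma) ** 2)

print("tap count:", N)
print("edge/center modulus before weighting:", abs(taps[0]) / abs(taps[N // 2]))
print("edge/center modulus after weighting:", abs(gs_taps[0]) / abs(gs_taps[N // 2]))
```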

  18. Application of firefly algorithm to the dynamic model updating problem

    NASA Astrophysics Data System (ADS)

    Shabbir, Faisal; Omenzetter, Piotr

    2015-04-01

    Model updating can be considered as a branch of optimization problems in which calibration of the finite element (FE) model is undertaken by comparing the modal properties of the actual structure with these of the FE predictions. The attainment of a global solution in a multi dimensional search space is a challenging problem. The nature-inspired algorithms have gained increasing attention in the previous decade for solving such complex optimization problems. This study applies the novel Firefly Algorithm (FA), a global optimization search technique, to a dynamic model updating problem. This is to the authors' best knowledge the first time FA is applied to model updating. The working of FA is inspired by the flashing characteristics of fireflies. Each firefly represents a randomly generated solution which is assigned brightness according to the value of the objective function. The physical structure under consideration is a full scale cable stayed pedestrian bridge with composite bridge deck. Data from dynamic testing of the bridge was used to correlate and update the initial model by using FA. The algorithm aimed at minimizing the difference between the natural frequencies and mode shapes of the structure. The performance of the algorithm is analyzed in finding the optimal solution in a multi dimensional search space. The paper concludes with an investigation of the efficacy of the algorithm in obtaining a reference finite element model which correctly represents the as-built original structure.
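
    A compact sketch of the basic firefly moves underlying the search described above: brightness follows the objective value, and dimmer fireflies move toward brighter ones with an attractiveness that decays with distance. The objective here is a generic test function standing in for the frequency and mode-shape residual of the bridge model.

```python
# Hedged sketch of the standard Firefly Algorithm update rules.
import numpy as np

def objective(x):                       # placeholder for the FE/test-data misfit
    return np.sum(x**2)

rng = np.random.default_rng(4)
n_fireflies, dim, n_iter = 20, 5, 200
alpha, beta0, gamma = 0.2, 1.0, 1.0
X = rng.uniform(-5, 5, size=(n_fireflies, dim))

for _ in range(n_iter):
    f = np.array([objective(x) for x in X])
    for i in range(n_fireflies):
        for j in range(n_fireflies):
            if f[j] < f[i]:                           # j is brighter (lower misfit)
                r2 = np.sum((X[i] - X[j])**2)
                beta = beta0 * np.exp(-gamma * r2)    # attractiveness decays with distance
                X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
    alpha *= 0.98                                     # gradually reduce the random step

best = X[np.argmin([objective(x) for x in X])]
print("best candidate:", np.round(best, 4))
```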

  19. Multistage bioassociation of uranium onto an extremely halophilic archaeon revealed by a unique combination of spectroscopic and microscopic techniques

    DOE PAGES

    Bader, Miriam; Müller, Katharina; Foerstendorf, Harald; ...

    2016-12-27

    The interactions of two extremely halophilic archaea with uranium were investigated in this paper at high ionic strength as a function of time, pH and uranium concentration. Halobacterium noricense DSM-15987 and Halobacterium sp. putatively noricense, isolated from the Waste Isolation Pilot Plant repository, were used for these investigations. The kinetics of U(VI) bioassociation with both strains showed an atypical multistage behavior, meaning that after an initial phase of U(VI) sorption, an unexpected interim period of U(VI) release was observed, followed by a slow reassociation of uranium with the cells. By applying in situ attenuated total reflection Fourier-transform infrared spectroscopy, the involvement of phosphoryl and carboxylate groups in U(VI) complexation during the first biosorption phase was shown. Differences in cell morphology and uranium localization become visible at different stages of the bioassociation process, as shown with scanning electron microscopy in combination with energy dispersive X-ray spectroscopy. Finally, our results demonstrate for the first time that association of uranium with the extremely halophilic archaeon is a multistage process, beginning with sorption and followed by another process, probably biomineralization.

  20. Multistage bioassociation of uranium onto an extremely halophilic archaeon revealed by a unique combination of spectroscopic and microscopic techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bader, Miriam; Müller, Katharina; Foerstendorf, Harald

    The interactions of two extremely halophilic archaea with uranium were investigated in this paper at high ionic strength as a function of time, pH and uranium concentration. Halobacterium noricense DSM-15987 and Halobacterium sp. putatively noricense, isolated from the Waste Isolation Pilot Plant repository, were used for these investigations. The kinetics of U(VI) bioassociation with both strains showed an atypical multistage behavior, meaning that after an initial phase of U(VI) sorption, an unexpected interim period of U(VI) release was observed, followed by a slow reassociation of uranium with the cells. By applying in situ attenuated total reflection Fourier-transform infrared spectroscopy, the involvement of phosphoryl and carboxylate groups in U(VI) complexation during the first biosorption phase was shown. Differences in cell morphology and uranium localization become visible at different stages of the bioassociation process, as shown with scanning electron microscopy in combination with energy dispersive X-ray spectroscopy. Finally, our results demonstrate for the first time that association of uranium with the extremely halophilic archaeon is a multistage process, beginning with sorption and followed by another process, probably biomineralization.

  1. Reinforcement learning in supply chains.

    PubMed

    Valluri, Annapurna; North, Michael J; Macal, Charles M

    2009-10-01

    Effective management of supply chains creates value and can strategically position companies. In practice, human beings have been found to be both surprisingly successful and disappointingly inept at managing supply chains. The related fields of cognitive psychology and artificial intelligence have postulated a variety of potential mechanisms to explain this behavior. One of the leading candidates is reinforcement learning. This paper applies agent-based modeling to investigate the comparative behavioral consequences of three simple reinforcement learning algorithms in a multi-stage supply chain. For the first time, our findings show that the specific algorithm that is employed can have dramatic effects on the results obtained. Reinforcement learning is found to be valuable in multi-stage supply chains with several learning agents, as independent agents can learn to coordinate their behavior. However, learning in multi-stage supply chains using these postulated approaches from cognitive psychology and artificial intelligence takes extremely long time periods to achieve stability, which raises questions about their ability to explain behavior in real supply chains. The fact that it takes thousands of periods for agents to learn in this simple multi-agent setting provides new evidence that real world decision makers are unlikely to be using strict reinforcement learning in practice.
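
    As an illustration of the kind of learning rule involved (the paper compares three specific reinforcement-learning algorithms, not necessarily this one), a tabular epsilon-greedy Q-learning agent for a single supply-chain stage might look like the sketch below; the state/action discretization and the reward convention are assumptions made for the example.

      import numpy as np

      class QLearningOrderingAgent:
          """Tabular epsilon-greedy Q-learning agent for one supply-chain stage.
          State: discretized inventory level; action: discretized order quantity.
          Generic illustration only, not the specific algorithms of the paper."""

          def __init__(self, n_inventory_levels, n_order_levels,
                       alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
              self.q = np.zeros((n_inventory_levels, n_order_levels))
              self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
              self.rng = np.random.default_rng(seed)

          def act(self, state):
              if self.rng.random() < self.epsilon:        # explore
                  return int(self.rng.integers(self.q.shape[1]))
              return int(np.argmax(self.q[state]))         # exploit

          def update(self, state, action, reward, next_state):
              # reward would typically be the negative of holding plus backlog costs
              td_target = reward + self.gamma * np.max(self.q[next_state])
              self.q[state, action] += self.alpha * (td_target - self.q[state, action])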

  2. Multistage bioassociation of uranium onto an extremely halophilic archaeon revealed by a unique combination of spectroscopic and microscopic techniques.

    PubMed

    Bader, Miriam; Müller, Katharina; Foerstendorf, Harald; Drobot, Björn; Schmidt, Matthias; Musat, Niculina; Swanson, Juliet S; Reed, Donald T; Stumpf, Thorsten; Cherkouk, Andrea

    2017-04-05

    The interactions of two extremely halophilic archaea with uranium were investigated at high ionic strength as a function of time, pH and uranium concentration. Halobacterium noricense DSM-15987 and Halobacterium sp. putatively noricense, isolated from the Waste Isolation Pilot Plant repository, were used for these investigations. The kinetics of U(VI) bioassociation with both strains showed an atypical multistage behavior, meaning that after an initial phase of U(VI) sorption, an unexpected interim period of U(VI) release was observed, followed by a slow reassociation of uranium with the cells. By applying in situ attenuated total reflection Fourier-transform infrared spectroscopy, the involvement of phosphoryl and carboxylate groups in U(VI) complexation during the first biosorption phase was shown. Differences in cell morphology and uranium localization become visible at different stages of the bioassociation process, as shown with scanning electron microscopy in combination with energy dispersive X-ray spectroscopy. Our results demonstrate for the first time that association of uranium with the extremely halophilic archaeon is a multistage process, beginning with sorption and followed by another process, probably biomineralization. Copyright © 2016. Published by Elsevier B.V.

  3. Topology optimization for three-dimensional electromagnetic waves using an edge element-based finite-element method.

    PubMed

    Deng, Yongbo; Korvink, Jan G

    2016-05-01

    This paper develops a topology optimization procedure for three-dimensional electromagnetic waves with an edge element-based finite-element method. In contrast to the two-dimensional case, three-dimensional electromagnetic waves must include an additional divergence-free condition for the field variables. The edge element-based finite-element method is used to both discretize the wave equations and enforce the divergence-free condition. For wave propagation described in terms of the magnetic field in the widely used class of non-magnetic materials, the divergence-free condition is imposed on the magnetic field. This naturally leads to a nodal topology optimization method. When wave propagation is described using the electric field, the divergence-free condition must be imposed on the electric displacement. In this case, the material in the design domain is assumed to be piecewise homogeneous to impose the divergence-free condition on the electric field. This results in an element-wise topology optimization algorithm. The topology optimization problems are regularized using a Helmholtz filter and a threshold projection method and are analysed using a continuous adjoint method. In order to ensure the applicability of the filter in the element-wise topology optimization version, a regularization method is presented to project the nodal density variable into an element-wise physical density variable.
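
    For reference, the Helmholtz-type filter and smoothed-Heaviside threshold projection mentioned above are commonly written in the standard form below, where r is the filter radius, beta the projection sharpness and eta the threshold; this is the generic formulation used widely in topology optimization, not necessarily the exact variant adopted by the authors.

      % Helmholtz-type PDE filter of the design field \rho, then threshold projection
      -r^{2}\,\nabla^{2}\tilde{\rho} + \tilde{\rho} = \rho,
      \qquad \partial_{n}\tilde{\rho}\,\big|_{\partial\Omega} = 0,
      \qquad
      \bar{\rho} = \frac{\tanh(\beta\eta) + \tanh\!\bigl(\beta(\tilde{\rho}-\eta)\bigr)}
                        {\tanh(\beta\eta) + \tanh\!\bigl(\beta(1-\eta)\bigr)}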

  4. Topology optimization for three-dimensional electromagnetic waves using an edge element-based finite-element method

    PubMed Central

    Korvink, Jan G.

    2016-01-01

    This paper develops a topology optimization procedure for three-dimensional electromagnetic waves with an edge element-based finite-element method. In contrast to the two-dimensional case, three-dimensional electromagnetic waves must include an additional divergence-free condition for the field variables. The edge element-based finite-element method is used to both discretize the wave equations and enforce the divergence-free condition. For wave propagation described in terms of the magnetic field in the widely used class of non-magnetic materials, the divergence-free condition is imposed on the magnetic field. This naturally leads to a nodal topology optimization method. When wave propagation is described using the electric field, the divergence-free condition must be imposed on the electric displacement. In this case, the material in the design domain is assumed to be piecewise homogeneous to impose the divergence-free condition on the electric field. This results in an element-wise topology optimization algorithm. The topology optimization problems are regularized using a Helmholtz filter and a threshold projection method and are analysed using a continuous adjoint method. In order to ensure the applicability of the filter in the element-wise topology optimization version, a regularization method is presented to project the nodal density variable into an element-wise physical density variable. PMID:27279766

  5. Highly accurate adaptive finite element schemes for nonlinear hyperbolic problems

    NASA Astrophysics Data System (ADS)

    Oden, J. T.

    1992-08-01

    This document is a final report of research activities supported under General Contract DAAL03-89-K-0120 between the Army Research Office and the University of Texas at Austin from July 1, 1989 through June 30, 1992. The project supported several Ph.D. students over the contract period, two of which are scheduled to complete dissertations during the 1992-93 academic year. Research results produced during the course of this effort led to 6 journal articles, 5 research reports, 4 conference papers and presentations, 1 book chapter, and two dissertations (nearing completion). It is felt that several significant advances were made during the course of this project that should have an impact on the field of numerical analysis of wave phenomena. These include the development of high-order, adaptive, hp-finite element methods for elastodynamic calculations and high-order schemes for linear and nonlinear hyperbolic systems. Also, a theory of multi-stage Taylor-Galerkin schemes was developed and implemented in the analysis of several wave propagation problems, and was configured within a general hp-adaptive strategy for these types of problems. Further details on research results and on areas requiring additional study are given in the Appendix.

  6. Optimal groundwater remediation design of pump and treat systems via a simulation-optimization approach and firefly algorithm

    NASA Astrophysics Data System (ADS)

    Javad Kazemzadeh-Parsi, Mohammad; Daneshmand, Farhang; Ahmadfard, Mohammad Amin; Adamowski, Jan; Martel, Richard

    2015-01-01

    In the present study, an optimization approach based on the firefly algorithm (FA) is combined with a finite element simulation method (FEM) to determine the optimum design of pump and treat remediation systems. Three multi-objective functions in which pumping rate and clean-up time are design variables are considered and the proposed FA-FEM model is used to minimize operating costs, total pumping volumes and total pumping rates in three scenarios while meeting water quality requirements. The groundwater lift and contaminant concentration are also minimized through the optimization process. The obtained results show the applicability of the FA in conjunction with the FEM for the optimal design of groundwater remediation systems. The performance of the FA is also compared with the genetic algorithm (GA) and the FA is found to have a better convergence rate than the GA.

  7. Optimization design and analysis of the pavement planer scraper structure

    NASA Astrophysics Data System (ADS)

    Fang, Yuanbin; Sha, Hongwei; Yuan, Dajun; Xie, Xiaobing; Yang, Shibo

    2018-03-01

    Using LS-DYNA, a finite element model of the road milling machine scraper is established and its dynamic behavior is simulated. Through optimization of the scraper structure and the scraper angle, the optimal structure of the milling machine scraper is obtained, and the simulation results are verified. The results show that the scraper structure is improved by locating the cemented carbide in the front part of the scraper substrate: compared with the working resistance before the improvement, the resistance becomes smoother and its peak value is smaller. The cutting front angle and the cutting back angle are optimized to 6 degrees and 9 degrees, respectively, for which the resultant of the working resistance and the impact force is smallest. This confirms the accuracy of the simulation results and provides guidance for further optimization work.

  8. High resolution solutions of the Euler equations for vortex flows

    NASA Technical Reports Server (NTRS)

    Murman, E. M.; Powell, K. G.; Rizzi, A.

    1985-01-01

    Solutions of the Euler equations are presented for M = 1.5 flow past a 70-degree-swept delta wing. At an angle of attack of 10 degrees, strong leading-edge vortices are produced. Two computational approaches are taken, based upon fully three-dimensional and conical flow theory. Both methods utilize a finite-volume discretization solved by a pseudounsteady multistage scheme. Results from the two approaches are in good agreement. Computations have been done on a 16-million-word CYBER 205 using 196 x 56 x 96 and 128 x 128 cells for the two methods. A sizable data base is generated, and some of the practical aspects of manipulating it are mentioned. The results reveal many interesting physical features of the compressible vortical flow field and also suggest new areas needing research.

  9. Efficient sensitivity analysis and optimization of a helicopter rotor

    NASA Technical Reports Server (NTRS)

    Lim, Joon W.; Chopra, Inderjit

    1989-01-01

    Aeroelastic optimization of a system essentially consists of the determination of the optimum values of design variables which minimize the objective function and satisfy certain aeroelastic and geometric constraints. The process of aeroelastic optimization analysis is illustrated. To carry out aeroelastic optimization effectively, one needs a reliable analysis procedure to determine steady response and stability of a rotor system in forward flight. The rotor dynamic analysis used in the present study, developed in-house at the University of Maryland, is based on finite elements in space and time. The analysis consists of two major phases: vehicle trim and rotor steady response (coupled trim analysis), and aeroelastic stability of the blade. For a reduction of helicopter vibration, the optimization process requires the sensitivity derivatives of the objective function and aeroelastic stability constraints. For this, the derivatives of steady response, hub loads and blade stability roots are calculated using a direct analytical approach. An automated optimization procedure is developed by coupling the rotor dynamic analysis, design sensitivity analysis and constrained optimization code CONMIN.

  10. Optimization of the Conical Angle Design in Conical Implant-Abutment Connections: A Pilot Study Based on the Finite Element Method.

    PubMed

    Yao, Kuang-Ta; Chen, Chen-Sheng; Cheng, Cheng-Kung; Fang, Hsu-Wei; Huang, Chang-Hung; Kao, Hung-Chan; Hsu, Ming-Lun

    2018-02-01

    Conical implant-abutment connections are popular for their excellent connection stability, which is attributable to frictional resistance in the connection. However, conical angles, the inherent design parameter of conical connections, exert opposing effects on 2 influencing factors of the connection stability: frictional resistance and abutment rigidity. This pilot study employed an optimization approach through the finite element method to obtain an optimal conical angle for the highest connection stability in an Ankylos-based conical connection system. A nonlinear 3-dimensional finite element parametric model was developed according to the geometry of the Ankylos system (conical half angle = 5.7°) by using the ANSYS 11.0 software. Optimization algorithms were conducted to obtain the optimal conical half angle and achieve the minimal value of maximum von Mises stress in the abutment, which represents the highest connection stability. The optimal conical half angle obtained was 10.1°. Compared with the original design (5.7°), the optimal design demonstrated an increased rigidity of abutment (36.4%) and implant (25.5%), a decreased microgap at the implant-abutment interface (62.3%), a decreased contact pressure (37.9%) with a more uniform stress distribution in the connection, and a decreased stress in the cortical bone (4.5%). In conclusion, the methodology of design optimization to determine the optimal conical angle of the Ankylos-based system is feasible. Because of the heterogeneity of different systems, more studies should be conducted to define the optimal conical angle in various conical connection designs.

  11. Direct Numerical Simulation of Turbulent Multi-Stage Autoignition Relevant to Engine Conditions

    NASA Astrophysics Data System (ADS)

    Chen, Jacqueline

    2017-11-01

    Due to the unrivaled energy density of liquid hydrocarbon fuels, combustion will continue to provide over 80% of the world's energy for at least the next fifty years. Hence, combustion needs to be understood and controlled to optimize combustion systems for efficiency, to prevent further climate change, to reduce emissions and to ensure U.S. energy security. In this talk I will discuss recent progress in direct numerical simulations of turbulent combustion focused on providing fundamental insights into key `turbulence-chemistry' interactions that underpin the development of next generation fuel efficient, fuel flexible engines for transportation and power generation. Petascale direct numerical simulation (DNS) of multi-stage mixed-mode turbulent combustion in canonical configurations has elucidated key physics that govern autoignition and flame stabilization in engines and provides benchmark data for combustion model development under the conditions of advanced engines which operate near combustion limits to maximize efficiency and minimize emissions. Mixed-mode combustion refers to premixed or partially-premixed flames propagating into stratified autoignitive mixtures. Multi-stage ignition refers to hydrocarbon fuels with negative temperature coefficient behavior that undergo sequential low- and high-temperature autoignition. Key issues that will be discussed include: 1) the role of mixing in shear driven turbulence on the dynamics of multi-stage autoignition and cool flame propagation in diesel environments, 2) the role of thermal and composition stratification on the evolution of the balance of mixed combustion modes - flame propagation versus spontaneous ignition - which determines the overall combustion rate in autoignition processes, and 3) the role of cool flames on lifted flame stabilization. Finally, prospects for DNS of turbulent combustion at the exascale will be discussed in the context of anticipated heterogeneous machine architectures. This work was sponsored by the DOE Office of Basic Energy Sciences, with computing resources provided by the Oak Ridge Leadership Computing Facility through the DOE INCITE Program.

  12. Optimal tuning of a confined Brownian information engine.

    PubMed

    Park, Jong-Min; Lee, Jae Sung; Noh, Jae Dong

    2016-03-01

    A Brownian information engine is a device extracting mechanical work from a single heat bath by exploiting the information on the state of a Brownian particle immersed in the bath. As with any engine, it is important to find the optimal operating condition that yields the maximum extracted work or power. The optimal condition for a Brownian information engine with a finite cycle time τ has rarely been studied because of the difficulty in finding the nonequilibrium steady state. In this study, we introduce a model for the Brownian information engine and develop an analytic formalism for its steady-state distribution for any τ. We find that the extracted work per engine cycle is maximum when τ approaches infinity, while the power is maximum when τ approaches zero.

  13. Optimal pulse design for communication-oriented slow-light pulse detection.

    PubMed

    Stenner, Michael D; Neifeld, Mark A

    2008-01-21

    We present techniques for designing pulses for linear slow-light delay systems which are optimal in the sense that they maximize the signal-to-noise ratio (SNR) and signal-to-noise-plus-interference ratio (SNIR) of the detected pulse energy. Given a communication model in which input pulses are created in a finite temporal window and output pulse energy is measured in a temporally-offset output window, the SNIR-optimal pulses achieve typical improvements of 10 dB compared to traditional pulse shapes for a given output window offset. Alternatively, for fixed SNR or SNIR, the window offset (detection delay) can be increased by 0.3 times the window width. This approach also invites a communication-based model for delay and signal fidelity.

  14. Learning the dynamics of objects by optimal functional interpolation.

    PubMed

    Ahn, Jong-Hoon; Kim, In Young

    2012-09-01

    Many areas of science and engineering rely on functional data and their numerical analysis. The need to analyze time-varying functional data raises the general problem of interpolation, that is, how to learn a smooth time evolution from a finite number of observations. Here, we introduce optimal functional interpolation (OFI), a numerical algorithm that interpolates functional data over time. Unlike the usual interpolation or learning algorithms, the OFI algorithm obeys the continuity equation, which describes the transport of some types of conserved quantities, and its implementation shows smooth, continuous flows of quantities. Without the need to take into account equations of motion such as the Navier-Stokes equation or the diffusion equation, OFI is capable of learning the dynamics of objects such as those represented by mass, image intensity, particle concentration, heat, spectral density, and probability density.

  15. High-Performance AC Power Source by Applying Robust Stability Control Technology for Precision Material Machining

    NASA Astrophysics Data System (ADS)

    Chang, En-Chih

    2018-02-01

    This paper presents a high-performance AC power source by applying robust stability control technology for precision material machining (PMM). The proposed technology combines the benefits of a finite-time convergent sliding function (FTCSF) and the firefly optimization algorithm (FOA). The FTCSF maintains the robustness of the conventional sliding mode and simultaneously speeds up the convergence of the system state. Unfortunately, when a highly nonlinear load is applied, chatter occurs. The chatter results in high total harmonic distortion (THD) in the output voltage of the AC power source, and even deteriorates the stability of the PMM. The FOA is therefore used to remove the chatter, while the FTCSF still preserves finite system-state convergence time. By combining the FTCSF with the FOA, the AC power source of the PMM can yield good steady-state and transient performance. Experimental results are presented in support of the proposed technology.

  16. Practical performance of real-time shot-noise measurement in continuous-variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Huang, Peng; Zhou, Yingming; Liu, Weiqi; Zeng, Guihua

    2018-01-01

    In a practical continuous-variable quantum key distribution (CVQKD) system, real-time shot-noise measurement (RTSNM) is an essential procedure for preventing an eavesdropper from exploiting practical security loopholes. However, the performance of this procedure itself has not been analyzed under real-world conditions. Therefore, we characterize the practical performance of RTSNM and investigate its effects on the CVQKD system. In particular, due to the finite-size effect, the shot-noise measurement at the receiver's side may decrease the precision of parameter estimation and consequently result in a tight security bound. To mitigate that, we optimize the block size for RTSNM under the ensemble size limitation to maximize the secure key rate. Moreover, the effect of the finite dynamics of the amplitude modulator in this scheme is studied and a mitigation method is also proposed. Our work quantifies the practical performance of RTSNM and provides the real secret key rate achievable under it.

  17. Computation of the acoustic radiation force using the finite-difference time-domain method.

    PubMed

    Cai, Feiyan; Meng, Long; Jiang, Chunxiang; Pan, Yu; Zheng, Hairong

    2010-10-01

    The computational details related to calculating the acoustic radiation force on an object using a 2-D grid finite-difference time-domain (FDTD) method are presented. The method is based on propagating the stress and velocity fields through the grid and determining the energy flow with and without the object. The axial and radial acoustic radiation forces predicted by the FDTD method are in excellent agreement with the results obtained by analytical evaluation of the scattering method. In particular, the results indicate that it is possible to trap a steel cylinder in the radial direction by optimizing the width of the Gaussian source and the operating frequency. As the sizes of the relevant objects are smaller than or comparable to the wavelength, the algorithm presented here can easily be extended to 3-D and can include torque computation algorithms, thus providing a highly flexible and universally usable computation engine.

  18. A graphic approach to include dissipative-like effects in reversible thermal cycles

    NASA Astrophysics Data System (ADS)

    Gonzalez-Ayala, Julian; Arias-Hernandez, Luis Antonio; Angulo-Brown, Fernando

    2017-05-01

    Since the 1980s, a connection between a family of maximum-work reversible thermal cycles and maximum-power finite-time endoreversible cycles has been established. The endoreversible cycles produce entropy at their couplings with the external heat baths. Thus, this kind of cycle can be optimized under criteria of merit that involve entropy-production terms. While the relation between the concepts of work and power is quite direct, the finite-time objective functions involving entropy production apparently have no reversible counterparts. In the present paper we show that it is also possible to establish a connection between irreversible cycle models and reversible ones by means of the concept of "geometric dissipation", in which a deficit of areas between certain reversible cycles and the Carnot cycle plays a role equivalent to the actual dissipative terms of a Curzon-Ahlborn engine.

  19. Cross-stage immunity for malaria vaccine development.

    PubMed

    Nahrendorf, Wiebke; Scholzen, Anja; Sauerwein, Robert W; Langhorne, Jean

    2015-12-22

    A vaccine against malaria is urgently needed for control and eventual eradication. Different approaches are pursued to induce either sterile immunity directed against pre-erythrocytic parasites or to mimic naturally acquired immunity by controlling blood-stage parasite densities and disease severity. Pre-erythrocytic and blood-stage malaria vaccines are often seen as opposing tactics, but it is likely that they have to be combined into a multi-stage malaria vaccine to be optimally safe and effective. Since many antigenic targets are shared between liver- and blood-stage parasites, malaria vaccines have the potential to elicit cross-stage protection with immune mechanisms against both stages complementing and enhancing each other. Here we discuss evidence from pre-erythrocytic and blood-stage subunit and whole parasite vaccination approaches that show that protection against malaria is not necessarily stage-specific. Parasites arresting at late liver-stages especially, can induce powerful blood-stage immunity, and similarly exposure to blood-stage parasites can afford pre-erythrocytic immunity. The incorporation of a blood-stage component into a multi-stage malaria vaccine would hence not only combat breakthrough infections in the blood should the pre-erythrocytic component fail to induce sterile protection, but would also actively enhance the pre-erythrocytic potency of this vaccine. We therefore advocate that future studies should concentrate on the identification of cross-stage protective malaria antigens, which can empower multi-stage malaria vaccine development. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. Coilgun Acceleration Model Containing Interactions Between Multiple Coils

    NASA Technical Reports Server (NTRS)

    Liu, Connie; Polzin, Kurt; Martin, Adam

    2017-01-01

    Electromagnetic (EM) accelerators have the potential to fill a performance range not currently being met by conventional chemical and electric propulsion systems by providing a specific impulse of 600-1000 seconds and a thrust-to-power ratio greater than 200 mN/kW. A propulsion system based on EM acceleration of small projectiles has the traditional advantages of using a pulsed system, including precise control over a range of thrust and power levels as well as rapid response and repetition rates. Furthermore, EM accelerators have lower power requirements than conventional electric propulsion systems since no plasma creation is necessary. A coilgun is a specific type of EM device where a high-current pulse through a coil of wire interacts with a conductive projectile via an induced magnetic field to accelerate the projectile. There are no physical or electrical connections to the projectile, which leads to less system degradation and a longer life expectancy. Multi-staging a coilgun by adding multiple turns on a single coil or on the projectile increases the inductance, thus permitting acceleration of the projectile to higher velocities. Previously, a simplified problem of modeling an inductively-coupled, single-coil coilgun using a circuit-based analysis coupled to the one-dimensional momentum equation through Lenz's law was solved; however, the analysis was only conducted on uncoupled coils. The problem is significantly more complicated when multiple, independently-powered coils simultaneously operate and interact with each other and the projectile through induced magnetic fields. This paper presents a multi-coil model developed with the magnetostatic finite element solver QuickField. In the model, mutual inductance values between pairs of conductors were found by first computing the magnetic field energy for different cases where individual coils or multiple coils carry current, then integrating over the entire finite element domain for each case, and finally using the definition of inductive energy storage to solve for the self and mutual inductance. The electric circuit model is coupled to the projectile through Lenz's law, with the coils coupled through mutual inductance but able to be independently triggered at different times to optimize the acceleration profile. This initial model to predict the behavior of a projectile's acceleration through a coupled, multi-coil coilgun increases the potential of building a highly efficient coilgun thruster with key advantages over other EM thruster systems, thus making it a promising candidate for satellite main propulsion or attitude control thrusters.
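
    The energy bookkeeping described above, in which self- and mutual inductances are recovered from magnetostatic field-energy solutions, reduces to a small amount of arithmetic once the field solver returns the total energy for prescribed coil currents. The sketch below assumes exactly that; the function name and the example numbers are illustrative, not values from the paper.

      def inductances_from_energy(W1, W2, W12, I1, I2):
          """Recover self- and mutual inductances from three magnetostatic energy
          solutions: coil 1 alone (W1), coil 2 alone (W2), and both coils (W12),
          using W = 0.5*L1*I1**2 + 0.5*L2*I2**2 + M*I1*I2."""
          L1 = 2.0 * W1 / I1**2
          L2 = 2.0 * W2 / I2**2
          M = (W12 - W1 - W2) / (I1 * I2)
          return L1, L2, M

      # Example with purely illustrative numbers (joules and amperes):
      # L1, L2, M = inductances_from_energy(W1=0.8, W2=0.5, W12=1.6, I1=100.0, I2=100.0)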

  1. Quantitative Pointwise Estimate of the Solution of the Linearized Boltzmann Equation

    NASA Astrophysics Data System (ADS)

    Lin, Yu-Chu; Wang, Haitao; Wu, Kung-Chien

    2018-04-01

    We study the quantitative pointwise behavior of the solutions of the linearized Boltzmann equation for hard potentials, Maxwellian molecules and soft potentials, with Grad's angular cutoff assumption. More precisely, for solutions inside the finite Mach number region (time like region), we obtain the pointwise fluid structure for hard potentials and Maxwellian molecules, and optimal time decay in the fluid part and sub-exponential time decay in the non-fluid part for soft potentials. For solutions outside the finite Mach number region (space like region), we obtain sub-exponential decay in the space variable. The singular wave estimate, regularization estimate and refined weighted energy estimate play important roles in this paper. Our results extend the classical results of Liu and Yu (Commun Pure Appl Math 57:1543-1608, 2004), (Bull Inst Math Acad Sin 1:1-78, 2006), (Bull Inst Math Acad Sin 6:151-243, 2011) and Lee et al. (Commun Math Phys 269:17-37, 2007) to hard and soft potentials by imposing suitable exponential velocity weight on the initial condition.

  2. Quantitative Pointwise Estimate of the Solution of the Linearized Boltzmann Equation

    NASA Astrophysics Data System (ADS)

    Lin, Yu-Chu; Wang, Haitao; Wu, Kung-Chien

    2018-06-01

    We study the quantitative pointwise behavior of the solutions of the linearized Boltzmann equation for hard potentials, Maxwellian molecules and soft potentials, with Grad's angular cutoff assumption. More precisely, for solutions inside the finite Mach number region (time like region), we obtain the pointwise fluid structure for hard potentials and Maxwellian molecules, and optimal time decay in the fluid part and sub-exponential time decay in the non-fluid part for soft potentials. For solutions outside the finite Mach number region (space like region), we obtain sub-exponential decay in the space variable. The singular wave estimate, regularization estimate and refined weighted energy estimate play important roles in this paper. Our results extend the classical results of Liu and Yu (Commun Pure Appl Math 57:1543-1608, 2004), (Bull Inst Math Acad Sin 1:1-78, 2006), (Bull Inst Math Acad Sin 6:151-243, 2011) and Lee et al. (Commun Math Phys 269:17-37, 2007) to hard and soft potentials by imposing suitable exponential velocity weight on the initial condition.

  3. Quadratic obstructions to small-time local controllability for scalar-input systems

    NASA Astrophysics Data System (ADS)

    Beauchard, Karine; Marbach, Frédéric

    2018-03-01

    We consider nonlinear finite-dimensional scalar-input control systems in the vicinity of an equilibrium. When the linearized system is controllable, the nonlinear system is smoothly small-time locally controllable: for any m > 0 and T > 0, the state can reach a whole neighborhood of the equilibrium at time T with controls arbitrarily small in C^m-norm. When the linearized system is not controllable, we prove that either the state is constrained to live within a smooth strict manifold, up to a cubic residual, or the quadratic order adds a signed drift with respect to it. This drift holds along a Lie bracket of length (2k + 1), is quantified in terms of an H^{-k}-norm of the control, and holds for controls small in W^{2k,∞}-norm; these spaces are optimal. Our proof requires only C^3 regularity of the vector field. This work underlines the importance of the norm used in the smallness assumption on the control, even in finite dimension.

  4. Finite-time adaptive sliding mode force control for electro-hydraulic load simulator based on improved GMS friction model

    NASA Astrophysics Data System (ADS)

    Kang, Shuo; Yan, Hao; Dong, Lijing; Li, Changchun

    2018-03-01

    This paper addresses the force tracking problem of an electro-hydraulic load simulator under the influence of nonlinear friction and uncertain disturbance. A nonlinear system model combined with the improved generalized Maxwell-slip (GMS) friction model is first derived to describe the characteristics of the load simulator system more accurately. Then, by using a particle swarm optimization (PSO) algorithm combined with an analysis of the system hysteresis characteristics, the GMS friction parameters are identified. To compensate for nonlinear friction and uncertain disturbance, a finite-time adaptive sliding mode control method is proposed based on the accurate system model. This controller ensures that the system state moves along the nonlinear sliding surface to the steady state in a short time and provides good dynamic properties under the influence of parametric uncertainties and disturbances, which further improves the force-loading accuracy and rapidity. At the end of this work, simulation and experimental results are employed to demonstrate the effectiveness of the proposed sliding mode control strategy.
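
    The PSO-based identification step can be illustrated with a generic particle-swarm loop; the cost function would compare the measured friction force with the improved GMS model's prediction for the recorded motion history. This is a minimal generic sketch, not the authors' implementation, and all names and parameter values are assumptions.

      import numpy as np

      def pso_identify(cost, bounds, n_particles=30, n_iter=200,
                       w=0.7, c1=1.5, c2=1.5, seed=0):
          """Minimal particle-swarm sketch for identifying model parameters by
          minimizing a scalar fit error (e.g. RMS error between measured and
          simulated friction force)."""
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(bounds, dtype=float).T
          dim = len(lo)
          x = rng.uniform(lo, hi, (n_particles, dim))
          v = np.zeros_like(x)
          pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
          g = pbest[np.argmin(pbest_f)].copy()
          for _ in range(n_iter):
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              f = np.array([cost(p) for p in x])
              better = f < pbest_f
              pbest[better], pbest_f[better] = x[better], f[better]
              g = pbest[np.argmin(pbest_f)].copy()
          return g, pbest_f.min()

      # cost(params) is assumed to return the fit error of the GMS friction model
      # for a given parameter vector; its definition depends on the measured data.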

  5. Joint optimization of maintenance, buffers and machines in manufacturing lines

    NASA Astrophysics Data System (ADS)

    Nahas, Nabil; Nourelfath, Mustapha

    2018-01-01

    This article considers a series manufacturing line composed of several machines separated by intermediate buffers of finite capacity. The goal is to find the optimal number of preventive maintenance actions performed on each machine, the optimal selection of machines and the optimal buffer allocation plan that minimize the total system cost, while providing the desired system throughput level. The mean times between failures of all machines are assumed to increase when applying periodic preventive maintenance. To estimate the production line throughput, a decomposition method is used. The decision variables in the formulated optimal design problem are buffer levels, types of machines and times between preventive maintenance actions. Three heuristic approaches are developed to solve the formulated combinatorial optimization problem. The first heuristic consists of a genetic algorithm, the second is based on the nonlinear threshold accepting metaheuristic and the third is an ant colony system. The proposed heuristics are compared and their efficiency is shown through several numerical examples. It is found that the nonlinear threshold accepting algorithm outperforms the genetic algorithm and ant colony system, while the genetic algorithm provides better results than the ant colony system for longer manufacturing lines.

  6. Forecasting outpatient visits using empirical mode decomposition coupled with back-propagation artificial neural networks optimized by particle swarm optimization

    PubMed Central

    Huang, Daizheng; Wu, Zhihui

    2017-01-01

    Accurately predicting the trend of outpatient visits by mathematical modeling can help policy makers manage hospitals effectively, reasonably organize schedules for human resources and finances, and appropriately distribute hospital material resources. In this study, a hybrid method based on empirical mode decomposition and back-propagation artificial neural networks optimized by particle swarm optimization is developed to forecast outpatient visits on the basis of monthly numbers. The outpatient-visit data from January 2005 to December 2013 are first retrieved as the original time series. Second, the original time series is decomposed into a finite and often small number of intrinsic mode functions by the empirical mode decomposition technique. Third, a three-layer back-propagation artificial neural network is constructed to forecast each intrinsic mode function. To improve network performance and avoid falling into a local minimum, particle swarm optimization is employed to optimize the weights and thresholds of the back-propagation artificial neural networks. Finally, the superposition of the forecasting results of the intrinsic mode functions is regarded as the ultimate forecasting value. Simulation indicates that the proposed method attains a better performance index than the other four methods. PMID:28222194
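
    The decompose-forecast-recombine pipeline can be sketched as follows. The sketch assumes the PyEMD package is available for the decomposition and, for brevity, trains each per-IMF network with scikit-learn's gradient-based MLPRegressor instead of the paper's PSO-optimized back-propagation network; all names and hyperparameters are illustrative.

      import numpy as np
      from PyEMD import EMD                      # assumes the PyEMD package is installed
      from sklearn.neural_network import MLPRegressor

      def lagged(series, n_lags):
          """Build (X, y) pairs mapping n_lags past values to the next value."""
          X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
          y = series[n_lags:]
          return X, y

      def emd_mlp_forecast(series, n_lags=12, seed=0):
          """Decompose the series into IMFs, fit one small network per IMF on
          lagged values, and sum the one-step-ahead forecasts."""
          imfs = EMD().emd(np.asarray(series, dtype=float))
          forecast = 0.0
          for imf in imfs:
              X, y = lagged(imf, n_lags)
              model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                                   random_state=seed).fit(X, y)
              forecast += model.predict(imf[-n_lags:].reshape(1, -1))[0]
          return forecast

      # Example usage with a hypothetical monthly series:
      # next_month = emd_mlp_forecast(monthly_outpatient_visits)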

  7. Forecasting outpatient visits using empirical mode decomposition coupled with back-propagation artificial neural networks optimized by particle swarm optimization.

    PubMed

    Huang, Daizheng; Wu, Zhihui

    2017-01-01

    Accurately predicting the trend of outpatient visits by mathematical modeling can help policy makers manage hospitals effectively, reasonably organize schedules for human resources and finances, and appropriately distribute hospital material resources. In this study, a hybrid method based on empirical mode decomposition and back-propagation artificial neural networks optimized by particle swarm optimization is developed to forecast outpatient visits on the basis of monthly numbers. The outpatient-visit data from January 2005 to December 2013 are first retrieved as the original time series. Second, the original time series is decomposed into a finite and often small number of intrinsic mode functions by the empirical mode decomposition technique. Third, a three-layer back-propagation artificial neural network is constructed to forecast each intrinsic mode function. To improve network performance and avoid falling into a local minimum, particle swarm optimization is employed to optimize the weights and thresholds of the back-propagation artificial neural networks. Finally, the superposition of the forecasting results of the intrinsic mode functions is regarded as the ultimate forecasting value. Simulation indicates that the proposed method attains a better performance index than the other four methods.

  8. Optimizing for Large Planar Fractures in Multistage Horizontal Wells in Enhanced Geothermal Systems Using a Coupled Fluid and Geomechanics Simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Xiexiaomen; Tutuncu, Azra; Eustes, Alfred

    Enhanced Geothermal Systems (EGS) could potentially use technological advancements in coupled implementation of horizontal drilling and multistage hydraulic fracturing techniques in tight oil and shale gas reservoirs along with improvements in reservoir simulation techniques to design and create EGS reservoirs. In this study, a commercial hydraulic fracture simulation package, Mangrove by Schlumberger, was used in an EGS model with largely distributed pre-existing natural fractures to model fracture propagation during the creation of a complex fracture network. The main goal of this study is to investigate optimum treatment parameters in creating multiple large, planar fractures to hydraulically connect a horizontal injection well and a horizontal production well that are 10,000 ft. deep and spaced 500 ft. apart from each other. A matrix of simulations for this study was carried out to determine the influence of reservoir and treatment parameters on preventing (or aiding) the creation of large planar fractures. The reservoir parameters investigated during the matrix simulations include the in-situ stress state and properties of the natural fracture set such as the primary and secondary fracture orientation, average fracture length, and average fracture spacing. The treatment parameters investigated during the simulations were fluid viscosity, proppant concentration, pump rate, and pump volume. A final simulation with optimized design parameters was performed. The optimized design simulation indicated that high fluid viscosity, high proppant concentration, large pump volume and pump rate tend to minimize the complexity of the created fracture network. Additionally, a reservoir with 'friendly' formation characteristics such as large stress anisotropy, natural fractures set parallel to the maximum horizontal principal stress (SHmax), and large natural fracture spacing also promote the creation of large planar fractures while minimizing fracture complexity.

  9. A coaxial-output capacitor-loaded annular pulse forming line.

    PubMed

    Li, Rui; Li, Yongdong; Su, Jiancang; Yu, Binxiong; Xu, Xiudong; Zhao, Liang; Cheng, Jie; Zeng, Bo

    2018-04-01

    A coaxial-output capacitor-loaded annular pulse forming line (PFL) is developed in order to reduce the flat-top fluctuation amplitude of the formed quasi-square pulse and improve the quality of the pulse waveform produced by a Tesla-pulse forming network (PFN) type pulse generator. A single module composed of three involute dual-plate PFNs is designed, with a characteristic impedance of 2.44 Ω, an electrical length of 15 ns, and a sustaining voltage of 60 kV. The three involute dual-plate PFNs connected in parallel have the same impedance and electrical length. Due to the small inductance and capacitance per unit length of each involute dual-plate PFN, the upper cut-off frequency of the PFN is increased. As a result, the entire annular PFL has better high-frequency response capability. Meanwhile, the three dual-plate PFNs discharge in parallel, which brings the arrangement much closer to a coaxial output. The series connecting inductance between two adjacent modules is significantly reduced when the annular PFL modules are connected in series, and the pulse waveform distortion is reduced as the pulse propagates along the modules. Finally, the shielding electrode structure is applied on both sides of the module. The electromagnetic field is confined within the module when a single module discharges, and the electromagnetic coupling between the multi-stage annular PFLs is eliminated. Based on the principle of impedance matching between the multi-stage annular PFL and the coaxial PFL, the structural optimization design of a mixed PFL in a Tesla type pulse generator is completed with the transient field-circuit co-simulation method. The multi-stage annular PFL consists of 18 annular PFL modules in series, with a characteristic impedance of 44 Ω, an electrical length of 15 ns, and a sustaining voltage of 1 MV. The mixed PFL can generate quasi-square electrical pulses with a pulse width of 43 ns, and the fluctuation ratio of the pulse flat top is less than 8% when the pulse rise time is about 5 ns.

  10. A coaxial-output capacitor-loaded annular pulse forming line

    NASA Astrophysics Data System (ADS)

    Li, Rui; Li, Yongdong; Su, Jiancang; Yu, Binxiong; Xu, Xiudong; Zhao, Liang; Cheng, Jie; Zeng, Bo

    2018-04-01

    A coaxial-output capacitor-loaded annular pulse forming line (PFL) is developed in order to reduce the flat-top fluctuation amplitude of the formed quasi-square pulse and improve the quality of the pulse waveform produced by a Tesla-pulse forming network (PFN) type pulse generator. A single module composed of three involute dual-plate PFNs is designed, with a characteristic impedance of 2.44 Ω, an electrical length of 15 ns, and a sustaining voltage of 60 kV. The three involute dual-plate PFNs connected in parallel have the same impedance and electrical length. Due to the small inductance and capacitance per unit length of each involute dual-plate PFN, the upper cut-off frequency of the PFN is increased. As a result, the entire annular PFL has better high-frequency response capability. Meanwhile, the three dual-plate PFNs discharge in parallel, which brings the arrangement much closer to a coaxial output. The series connecting inductance between two adjacent modules is significantly reduced when the annular PFL modules are connected in series, and the pulse waveform distortion is reduced as the pulse propagates along the modules. Finally, the shielding electrode structure is applied on both sides of the module. The electromagnetic field is confined within the module when a single module discharges, and the electromagnetic coupling between the multi-stage annular PFLs is eliminated. Based on the principle of impedance matching between the multi-stage annular PFL and the coaxial PFL, the structural optimization design of a mixed PFL in a Tesla type pulse generator is completed with the transient field-circuit co-simulation method. The multi-stage annular PFL consists of 18 annular PFL modules in series, with a characteristic impedance of 44 Ω, an electrical length of 15 ns, and a sustaining voltage of 1 MV. The mixed PFL can generate quasi-square electrical pulses with a pulse width of 43 ns, and the fluctuation ratio of the pulse flat top is less than 8% when the pulse rise time is about 5 ns.

  11. Minimizing finite-volume discretization errors on polyhedral meshes

    NASA Astrophysics Data System (ADS)

    Mouly, Quentin; Evrard, Fabien; van Wachem, Berend; Denner, Fabian

    2017-11-01

    Tetrahedral meshes are widely used in CFD to simulate flows in and around complex geometries, as automatic generation tools now allow tetrahedral meshes to represent arbitrary domains in a relatively accessible manner. Polyhedral meshes, however, are an increasingly popular alternative. While tetrahedra have at most four neighbours, the higher number of neighbours per polyhedral cell leads to a more accurate evaluation of gradients, essential for the numerical resolution of PDEs. The use of polyhedral meshes, nonetheless, introduces discretization errors for finite-volume methods: skewness and non-orthogonality, which occur with all sorts of unstructured meshes, as well as errors due to non-planar faces, specific to polygonal faces with more than three vertices. Indeed, polyhedral mesh generation algorithms cannot, in general, guarantee to produce meshes free of non-planar faces. The presented work focuses on the quantification and optimization of discretization errors on polyhedral meshes in the context of finite-volume methods. A quasi-Newton method is employed to optimize the relevant mesh quality measures. Various meshes are optimized and CFD results of cases with known solutions are presented to assess the improvements the optimization approach can provide.

  12. Optimal second order sliding mode control for linear uncertain systems.

    PubMed

    Das, Madhulika; Mahanta, Chitralekha

    2014-11-01

    In this paper an optimal second order sliding mode controller (OSOSMC) is proposed to track a linear uncertain system. The optimal controller based on the linear quadratic regulator method is designed for the nominal system. An integral sliding mode controller is combined with the optimal controller to ensure robustness of the linear system, which is affected by parametric uncertainties and external disturbances. To achieve finite time convergence of the sliding mode, a nonsingular terminal sliding surface is combined with the integral sliding surface, giving rise to a second order sliding mode controller. The main advantage of the proposed OSOSMC is that the control input is substantially reduced and it becomes chattering free. Simulation results confirm the superiority of the proposed OSOSMC over some existing controllers. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  13. An Optimal Order Nonnested Mixed Multigrid Method for Generalized Stokes Problems

    NASA Technical Reports Server (NTRS)

    Deng, Qingping

    1996-01-01

    A multigrid algorithm is developed and analyzed for generalized Stokes problems discretized by various nonnested mixed finite elements within a unified framework. It is abstractly proved by an element-independent analysis that the multigrid algorithm converges with an optimal order if there exists a 'good' prolongation operator. A technique to construct a 'good' prolongation operator for nonnested multilevel finite element spaces is proposed. Its basic idea is to introduce a sequence of auxiliary nested multilevel finite element spaces and define a prolongation operator as a composite operator of two single grid level operators. This makes not only the construction of a prolongation operator much easier (the final explicit forms of such prolongation operators are fairly simple), but the verification of the approximate properties for prolongation operators is also simplified. Finally, as an application, the framework and technique are applied to seven typical nonnested mixed finite elements.

  14. Conceptual Design Oriented Wing Structural Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    Lau, May Yuen

    1996-01-01

    Airplane optimization has always been the goal of airplane designers. In the conceptual design phase, a designer's goal could be tradeoffs between maximum structural integrity, minimum aerodynamic drag, or maximum stability and control, many times achieved separately. Bringing all of these factors into an iterative preliminary design procedure was time consuming, tedious, and not always accurate. For example, the final weight estimate would often be based upon statistical data from past airplanes. The new design would be classified based on gross characteristics, such as number of engines, wingspan, etc., to see which airplanes of the past most closely resembled the new design. This procedure works well for conventional airplane designs, but not very well for new innovative designs. With the computing power of today, new methods are emerging for the conceptual design phase of airplanes. Using finite element methods, computational fluid dynamics, and other computer techniques, designers can make very accurate disciplinary analyses of an airplane design. These tools are computationally intensive, and when used repeatedly, they consume a great deal of computing time. In order to reduce the time required to analyze a design and still bring together all of the disciplines (such as structures, aerodynamics, and controls) into the analysis, simplified design computer analyses are linked together into one computer program. These design codes are very efficient for conceptual design. The work in this thesis is focused on a finite element based conceptual design oriented structural synthesis capability (CDOSS) tailored to be linked into ACSYNT.

  15. Multistage variable probability forest volume inventory. [the Defiance Unit of the Navajo Nation

    NASA Technical Reports Server (NTRS)

    Anderson, J. E. (Principal Investigator)

    1979-01-01

    An inventory scheme based on the use of computer processed LANDSAT MSS data was developed. Output from the inventory scheme provides an estimate of the standing net saw timber volume of a major timber species on a selected forested area of the Navajo Nation. Such estimates are based on the values of parameters currently used for scaled sawlog conversion to mill output. Multistage variable probability sampling appears capable of producing estimates which compare favorably with those produced using conventional techniques. In addition, the reduction in time, manpower, and overall costs lends it to numerous applications.
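
    For orientation, a generic two-stage variable-probability (probability-proportional-to-size) estimator of the total volume takes the textbook form below, where π_i is the inclusion probability of primary unit i (e.g. proportional to a LANDSAT-derived size measure), π_{j|i} is the conditional inclusion probability of secondary unit j within unit i, and v_{ij} is the measured volume; this is the standard form, not necessarily the exact estimator used in the report.

      % Two-stage variable-probability (PPS) estimator of the total volume
      \hat{V} \;=\; \sum_{i \in s_1} \frac{1}{\pi_i}
                    \sum_{j \in s_{2i}} \frac{v_{ij}}{\pi_{j \mid i}}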

  16. Linearly Adjustable International Portfolios

    NASA Astrophysics Data System (ADS)

    Fonseca, R. J.; Kuhn, D.; Rustem, B.

    2010-09-01

    We present an approach to multi-stage international portfolio optimization based on the imposition of a linear structure on the recourse decisions. Multiperiod decision problems are traditionally formulated as stochastic programs. Scenario-tree-based solutions, however, can become intractable as the number of stages increases. By restricting the space of decision policies to linear rules, we obtain a conservative tractable approximation to the original problem. Local asset prices and foreign exchange rates are modelled separately, which allows for a direct measure of their impact on the final portfolio value.
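
    In schematic form, restricting the recourse decisions to linear rules means writing the stage-t portfolio decision as an affine function of the random data observed up to that stage (the notation here is generic, not the paper's):

      % Stage-t recourse decision restricted to an affine rule in the observed data
      x_t(\xi) \;=\; x_t^{0} \;+\; \sum_{s=1}^{t-1} X_{t,s}\,\xi_s

    so that the vectors x_t^0 and matrices X_{t,s}, rather than arbitrary functions of the scenario history, become the finite-dimensional decision variables of a tractable conservative approximation.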

  17. Spatial filters for high-peak-power multistage laser amplifiers.

    PubMed

    Potemkin, A K; Barmashova, T V; Kirsanov, A V; Martyanov, M A; Khazanov, E A; Shaykin, A A

    2007-07-10

    We describe spatial filters used in a Nd:glass laser with an output pulse energy up to 300 J and a pulse duration of 1 ns. This laser is designed for pumping of a chirped-pulse optical parametric amplifier. We present data required to choose the shape and diameter of a spatial filter lens, taking into account aberrations caused by spherical surfaces. Calculation of the optimal pinhole diameter is presented. Design features of the spatial filters and the procedure of their alignment are discussed in detail.

  18. The decision tree approach to classification

    NASA Technical Reports Server (NTRS)

    Wu, C.; Landgrebe, D. A.; Swain, P. H.

    1975-01-01

    A class of multistage decision tree classifiers is proposed and studied relative to the classification of multispectral remotely sensed data. The decision tree classifiers are shown to have the potential for improving both the classification accuracy and the computation efficiency. Dimensionality in pattern recognition is discussed and two theorems on the lower bound of logic computation for multiclass classification are derived. The automatic or optimization approach is emphasized. Experimental results on real data are reported, which clearly demonstrate the usefulness of decision tree classifiers.

  19. Constitutive Model Calibration via Autonomous Multiaxial Experimentation (Postprint)

    DTIC Science & Technology

    2016-09-17

    ... test machine. Experimental data is reduced and finite element simulations are conducted in parallel with the test based on experimental strain conditions. Optimization methods ... be used directly in finite element simulations of more complex geometries. Keywords: Axial/torsional experimentation • Plasticity • Constitutive model
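
    Although the record above is truncated, the calibration idea it describes, adjusting constitutive parameters until simulations reproduce the measured response, can be sketched as a least-squares fit. The Voce-type hardening law and the synthetic stress-strain data below are assumptions for illustration; the paper couples the optimization to finite element simulations of multiaxial tests.

```python
# Hedged sketch: least-squares calibration of a simple hardening law to
# synthetic "measured" stress-strain data.
import numpy as np
from scipy.optimize import least_squares

def voce_stress(params, plastic_strain):
    """Voce-type hardening: sigma = s0 + R * (1 - exp(-b * eps_p))."""
    s0, R, b = params
    return s0 + R * (1.0 - np.exp(-b * plastic_strain))

# Synthetic "experimental" data (assumed values, illustration only).
rng = np.random.default_rng(3)
eps_p = np.linspace(0.0, 0.05, 40)
sigma_meas = voce_stress([300.0, 150.0, 60.0], eps_p) + rng.normal(0, 2.0, eps_p.size)

def residuals(params):
    return voce_stress(params, eps_p) - sigma_meas

fit = least_squares(residuals, x0=[250.0, 100.0, 30.0])
print("calibrated (s0, R, b):", np.round(fit.x, 2))
```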

  20. Securing stent during multi-stage laryngotracheoplasty--an evolved technique.

    PubMed

    Siegel, Bianca; Bent, John P

    2015-09-01

    Multi-stage laryngotracheoplasty (LTP) typically requires that a stent be secured to the airway for 2-6 weeks. Our technique has evolved over time: the stent is now secured to the strap muscles with a series of knots tied long enough to leave the suture tail protruding through the skin incision, which simplifies stent removal. Retrospective chart review. Twenty-four patients underwent multi-stage LTP at our institution from 2007 to 2013. Eight patients were excluded from the study because they either did not have a stent placed (n=4), or they had a t-tube placed which was not sutured in place (n=4). Of the remaining 16 patients, 62.5% (n=10) had their stent secured via sutures which were buried below the skin, and 37.5% (n=6) via a long suture tail which was left protruding through the end of the skin incision. An incision was required for stent removal in 100% of buried-suture patients and in 33% of exposed-suture patients (p=0.0009). Average operative time for stent removal was 60 min in the buried-suture group and 25 min in the exposed-suture group (p=0.0075). Securing stents via an exposed suture technique decreases the need for making a skin incision during the second stage of the operation, and significantly decreases the operative time of the second stage. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  1. A fictitious domain finite element method for simulations of fluid-structure interactions: The Navier-Stokes equations coupled with a moving solid

    NASA Astrophysics Data System (ADS)

    Court, Sébastien; Fournié, Michel

    2015-05-01

    The paper extends a stabilized fictitious domain finite element method, initially developed for the Stokes problem, to the incompressible Navier-Stokes equations coupled with a moving solid. This method has the advantage of predicting an optimal approximation of the normal stress tensor at the interface. The dynamics of the solid are governed by Newton's laws, and the interface between the fluid and the structure is represented by a level-set which cuts the elements of the mesh. An algorithm is proposed in order to treat the time evolution of the geometry, and numerical results are presented on a classical benchmark of the motion of a disk falling in a channel.

  2. Efficient calculation of higher-order optical waveguide dispersion.

    PubMed

    Mores, J A; Malheiros-Silveira, G N; Fragnito, H L; Hernández-Figueroa, H E

    2010-09-13

    An efficient numerical strategy to compute the higher-order dispersion parameters of optical waveguides is presented. For the first time to our knowledge, a systematic study of the errors involved in the higher-order dispersions' numerical calculation process is made, showing that the present strategy can accurately model those parameters. Such strategy combines a full-vectorial finite element modal solver and a proper finite difference differentiation algorithm. Its performance has been carefully assessed through the analysis of several key geometries. In addition, the optimization of those higher-order dispersion parameters can also be carried out by coupling to the present scheme a genetic algorithm, as shown here through the design of a photonic crystal fiber suitable for parametric amplification applications.
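
    The post-processing step described here, differentiating the computed propagation constant to obtain higher-order dispersion, can be sketched with plain finite differences. The effective-index model below is a toy assumption standing in for the full-vectorial modal solver.

```python
# Sketch of the differentiation step: given propagation constants beta(omega),
# estimate beta1..beta3 by finite differences on the frequency grid.
import numpy as np

c = 299792458.0
lam = np.linspace(1.2e-6, 1.8e-6, 401)               # wavelength grid, m
omega = np.sort(2 * np.pi * c / lam)                 # ascending angular frequencies

n_eff = 1.45 + 1e-32 * (omega - omega.mean()) ** 2   # assumed smooth effective index (toy model)
beta = n_eff * omega / c

beta1 = np.gradient(beta, omega)                     # inverse group velocity
beta2 = np.gradient(beta1, omega)                    # group-velocity dispersion
beta3 = np.gradient(beta2, omega)                    # third-order dispersion

i = omega.size // 2
print(f"beta2 at mid-band: {beta2[i] * 1e27:.2f} ps^2/km")
print(f"beta3 at mid-band: {beta3[i] * 1e39:.3f} ps^3/km")
```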

  3. Strain-Based Damage Determination Using Finite Element Analysis for Structural Health Management

    NASA Technical Reports Server (NTRS)

    Hochhalter, Jacob D.; Krishnamurthy, Thiagaraja; Aguilo, Miguel A.

    2016-01-01

    A damage determination method is presented that relies on in-service strain sensor measurements. The method employs a gradient-based optimization procedure combined with the finite element method for solution to the forward problem. It is demonstrated that strains, measured at a limited number of sensors, can be used to accurately determine the location, size, and orientation of damage. Numerical examples are presented to demonstrate the general procedure. This work is motivated by the need to provide structural health management systems with a real-time damage characterization. The damage cases investigated herein are characteristic of point-source damage, which can attain critical size during flight. The procedure described can be used to provide prognosis tools with the current damage configuration.
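
    The inverse problem can be sketched as a misfit minimization over damage parameters, with a cheap analytic surrogate standing in for the finite element forward solve used in the work; the sensor locations, the surrogate strain model, and the "true" damage below are assumed for illustration.

```python
# Hedged sketch: recover damage parameters from sparse strain measurements by
# minimizing a misfit, with a surrogate forward model in place of FEM.
import numpy as np
from scipy.optimize import minimize

sensors = np.array([[0.2, 0.2], [0.8, 0.2], [0.5, 0.5], [0.2, 0.8], [0.8, 0.8]])

def forward_strain(params, xy):
    """Surrogate: damage of 'size' at (x0, y0) perturbs a uniform strain field."""
    x0, y0, size = params
    r2 = np.sum((xy - [x0, y0]) ** 2, axis=1)
    return 1e-3 * (1.0 + size * np.exp(-r2 / 0.02))

true_params = np.array([0.62, 0.37, 0.8])
measured = forward_strain(true_params, sensors)

def misfit(params):
    return np.sum((forward_strain(params, sensors) - measured) ** 2)

res = minimize(misfit, x0=[0.5, 0.5, 0.1], method="L-BFGS-B",
               bounds=[(0, 1), (0, 1), (0, 2)])
print("recovered (x0, y0, size):", np.round(res.x, 3))
```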

  4. Updating the Finite Element Model of the Aerostructures Test Wing Using Ground Vibration Test Data

    NASA Technical Reports Server (NTRS)

    Lung, Shun-Fat; Pak, Chan-Gi

    2009-01-01

    Improved and/or accelerated decision making is a crucial step during flutter certification processes. Unfortunately, most finite element structural dynamics models have uncertainties associated with model validity. Tuning the finite element model using measured data to minimize the model uncertainties is a challenging task in the area of structural dynamics. The model tuning process requires not only satisfactory correlations between analytical and experimental results, but also the retention of the mass and stiffness properties of the structures. Minimizing the difference between analytical and experimental results is a type of optimization problem. By utilizing the multidisciplinary design, analysis, and optimization (MDAO) tool to optimize the objective function and constraints, the mass properties, the natural frequencies, and the mode shapes can be matched to the target data while retaining mass matrix orthogonality. This approach has been applied to minimize the model uncertainties for the structural dynamics model of the aerostructures test wing (ATW), which was designed and tested at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California). This study has shown that natural frequencies and corresponding mode shapes from the updated finite element model have excellent agreement with corresponding measured data.
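
    The optimization idea can be sketched on a small spring-mass model: adjust stiffness parameters until the computed natural frequencies match measured targets. The matrices and "measured" frequencies below are invented; the actual ATW update also matches mode shapes and mass properties through the MDAO tool.

```python
# Minimal model-updating sketch: tune stiffnesses of a 3-DOF spring-mass chain
# so its natural frequencies match synthetic "ground vibration test" targets.
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import minimize

M = np.diag([1.0, 1.5, 1.0])                       # assumed mass matrix (kept fixed)

def frequencies(k):
    k1, k2, k3 = k
    K = np.array([[k1 + k2, -k2, 0.0],
                  [-k2, k2 + k3, -k3],
                  [0.0, -k3, k3]])
    w2 = eigh(K, M, eigvals_only=True)             # generalized eigenvalues
    return np.sqrt(w2) / (2 * np.pi)

f_measured = frequencies([120.0, 80.0, 60.0])      # synthetic "measurements"

def objective(k):
    return np.sum((frequencies(k) - f_measured) ** 2)

res = minimize(objective, x0=[100.0, 100.0, 100.0], method="Nelder-Mead")
print("updated stiffnesses:", np.round(res.x, 1))
print("frequency match (Hz):", np.round(frequencies(res.x), 3), "vs", np.round(f_measured, 3))
```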

  5. Updating the Finite Element Model of the Aerostructures Test Wing using Ground Vibration Test Data

    NASA Technical Reports Server (NTRS)

    Lung, Shun-fat; Pak, Chan-gi

    2009-01-01

    Improved and/or accelerated decision making is a crucial step during flutter certification processes. Unfortunately, most finite element structural dynamics models have uncertainties associated with model validity. Tuning the finite element model using measured data to minimize the model uncertainties is a challenging task in the area of structural dynamics. The model tuning process requires not only satisfactory correlations between analytical and experimental results, but also the retention of the mass and stiffness properties of the structures. Minimizing the difference between analytical and experimental results is a type of optimization problem. By utilizing the multidisciplinary design, analysis, and optimization (MDAO) tool to optimize the objective function and constraints, the mass properties, the natural frequencies, and the mode shapes can be matched to the target data while retaining mass matrix orthogonality. This approach has been applied to minimize the model uncertainties for the structural dynamics model of the Aerostructures Test Wing (ATW), which was designed and tested at the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center (DFRC) (Edwards, California). This study has shown that natural frequencies and corresponding mode shapes from the updated finite element model have excellent agreement with corresponding measured data.

  6. Finite volume treatment of dispersion-relation-preserving and optimized prefactored compact schemes for wave propagation

    NASA Astrophysics Data System (ADS)

    Popescu, Mihaela; Shyy, Wei; Garbey, Marc

    2005-12-01

    In developing suitable numerical techniques for computational aero-acoustics, the dispersion-relation-preserving (DRP) scheme by Tam and co-workers and the optimized prefactored compact (OPC) scheme by Ashcroft and Zhang have shown desirable properties of reducing both dissipative and dispersive errors. These schemes, originally based on the finite difference, attempt to optimize the coefficients for better resolution of short waves with respect to the computational grid while maintaining pre-determined formal orders of accuracy. In the present study, finite volume formulations of both schemes are presented to better handle the nonlinearity and complex geometry encountered in many engineering applications. Linear and nonlinear wave equations, with and without viscous dissipation, have been adopted as the test problems. Highlighting the principal characteristics of the schemes and utilizing linear and nonlinear wave equations with different wavelengths as the test cases, the performance of these approaches is documented. For the linear wave equation, there is no major difference between the DRP and OPC schemes. For the nonlinear wave equations, the finite volume version of both DRP and OPC schemes offers substantially better solutions in regions of high gradient or discontinuity.
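
    The dispersion-relation-preserving idea, in its original finite-difference form, can be sketched by optimizing the free coefficients of an antisymmetric 7-point stencil so that the numerical wavenumber tracks the exact one, subject to formal-order constraints. This is an illustrative re-derivation under those assumptions, not the published DRP or OPC coefficients.

```python
# Sketch of the DRP idea: choose antisymmetric 7-point stencil coefficients
# that are 4th-order accurate and minimize the integrated wavenumber error.
import numpy as np
from scipy.optimize import minimize
from scipy.integrate import quad

def numerical_wavenumber(a, kdx):
    # kbar*dx = 2 * sum_j a_j sin(j*k*dx) for an antisymmetric stencil a_{-j} = -a_j
    return 2.0 * sum(a[j - 1] * np.sin(j * kdx) for j in (1, 2, 3))

def dispersion_error(a):
    err = lambda kdx: (numerical_wavenumber(a, kdx) - kdx) ** 2
    return quad(err, 0.0, 0.5 * np.pi)[0]

constraints = (
    {"type": "eq", "fun": lambda a: 2 * (1 * a[0] + 2 * a[1] + 3 * a[2]) - 1.0},  # consistency
    {"type": "eq", "fun": lambda a: 1**3 * a[0] + 2**3 * a[1] + 3**3 * a[2]},     # 4th-order accuracy
)
res = minimize(dispersion_error, x0=[0.75, -0.15, 0.02], constraints=constraints, method="SLSQP")
print("optimized coefficients a1, a2, a3:", np.round(res.x, 5))
```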

  7. Finite-time resilient decentralized control for interconnected impulsive switched systems with neutral delay.

    PubMed

    Ren, Hangli; Zong, Guangdeng; Hou, Linlin; Yang, Yi

    2017-03-01

    This paper is concerned with the problem of finite-time control for a class of interconnected impulsive switched systems with neutral delay in which the time-varying delay appears in both the state and the state derivative. The concepts of finite-time boundedness and finite-time stability are respectively extended to interconnected impulsive switched systems with neutral delay for the first time. By applying the average dwell time method, sufficient conditions are first derived to cope with the problem of finite-time boundedness and finite-time stability for interconnected impulsive switched systems with neutral delay. In addition, the purpose of finite-time resilient decentralized control is to construct a resilient decentralized state-feedback controller such that the closed-loop system is finite-time bounded and finite-time stable. All the conditions are formulated in terms of linear matrix inequalities to ensure finite-time boundedness and finite-time stability of the given system. Finally, an example is presented to illustrate the effectiveness of the proposed approach. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
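
    The core finite-time-boundedness estimate can be illustrated on a plain linear system with a Lyapunov-type argument (no switching, impulses, or neutral delay): if A'P + PA - alpha*P is negative semidefinite for some positive definite P, the state norm grows at most exponentially at rate alpha/2 over a finite horizon. The system matrix, alpha, and horizon below are assumed example values, not the paper's LMI conditions.

```python
# Finite-time bound for xdot = A x via a Lyapunov-type inequality (illustrative only).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, eigvalsh

A = np.array([[0.3, 1.0],
              [-2.0, 0.1]])        # assumed example system (not asymptotically stable)
alpha, T, x0_norm = 1.0, 2.0, 1.0

# Find P > 0 with A'P + PA - alpha*P = -I, i.e. (A - alpha/2 I)'P + P(A - alpha/2 I) = -I.
A_shift = A - 0.5 * alpha * np.eye(2)
P = solve_continuous_lyapunov(A_shift.T, -np.eye(2))
lam = eigvalsh(P)
assert lam.min() > 0, "condition fails for this alpha"

# Then V = x'Px satisfies Vdot <= alpha*V, so over [0, T]:
bound = np.sqrt(lam.max() / lam.min() * np.exp(alpha * T)) * x0_norm
print(f"||x(t)|| <= {bound:.2f} for t in [0, {T}] whenever ||x(0)|| <= {x0_norm}")
```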

  8. GPU-based ultra-fast dose calculation using a finite size pencil beam model.

    PubMed

    Gu, Xuejun; Choi, Dongju; Men, Chunhua; Pan, Hubert; Majumdar, Amitava; Jiang, Steve B

    2009-10-21

    Online adaptive radiation therapy (ART) is an attractive concept that promises the ability to deliver an optimal treatment in response to the inter-fraction variability in patient anatomy. However, it has yet to be realized due to technical limitations. Fast dose deposit coefficient calculation is a critical component of the online planning process that is required for plan optimization of intensity-modulated radiation therapy (IMRT). Computer graphics processing units (GPUs) are well suited to provide the requisite fast performance for the data-parallel nature of dose calculation. In this work, we develop a dose calculation engine based on a finite-size pencil beam (FSPB) algorithm and a GPU parallel computing framework. The developed framework can accommodate any FSPB model. We test our implementation in the case of a water phantom and the case of a prostate cancer patient with varying beamlet and voxel sizes. All testing scenarios achieved speedup ranging from 200 to 400 times when using an NVIDIA Tesla C1060 card in comparison with a 2.27 GHz Intel Xeon CPU. The computational time for calculating dose deposition coefficients for a nine-field prostate IMRT plan with this new framework is less than 1 s. This indicates that the GPU-based FSPB algorithm is well suited for online re-planning for adaptive radiotherapy.

  9. Application of shifted Jacobi pseudospectral method for solving (in)finite-horizon min-max optimal control problems with uncertainty

    NASA Astrophysics Data System (ADS)

    Nikooeinejad, Z.; Delavarkhalafi, A.; Heydari, M.

    2018-03-01

    The difficulty of solving the min-max optimal control problems (M-MOCPs) with uncertainty using generalised Euler-Lagrange equations is caused by the combination of split boundary conditions, nonlinear differential equations and the manner in which the final time is treated. In this investigation, the shifted Jacobi pseudospectral method (SJPM) is proposed as a numerical technique for solving two-point boundary value problems (TPBVPs) in M-MOCPs for several boundary states. At first, a novel framework of approximate solutions which satisfy the split boundary conditions automatically for various boundary states is presented. Then, by applying the generalised Euler-Lagrange equations and expanding the required approximate solutions as elements of shifted Jacobi polynomials, finding a solution of TPBVPs in nonlinear M-MOCPs with uncertainty is reduced to the solution of a system of algebraic equations. Moreover, the Jacobi polynomials are particularly useful for boundary value problems in unbounded domains, which allows us to solve infinite- as well as finite- and free-final-time problems by the domain truncation method. Some numerical examples are given to demonstrate the accuracy and efficiency of the proposed method. A comparative study between the proposed method and other existing methods shows that the SJPM is simple and accurate.
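
    The central reduction, expanding the unknown in orthogonal polynomials and collocating so the TPBVP becomes a system of algebraic equations, can be sketched on a linear test problem. Chebyshev polynomials (Jacobi polynomials with alpha = beta = -1/2, up to scaling) are used here for brevity instead of general shifted Jacobi polynomials, and the test equation is assumed, not taken from the paper.

```python
# Spectral collocation reduces a TPBVP to algebraic equations solved numerically.
import numpy as np
from numpy.polynomial import Chebyshev
from scipy.optimize import fsolve

# Test problem: y'' = y on [0, 1], y(0) = 0, y(1) = 1 (exact: sinh(x)/sinh(1)).
N = 10
x_col = 0.5 * (1 - np.cos(np.pi * np.arange(1, N - 1) / (N - 1)))  # interior collocation points

def algebraic_system(coeffs):
    y = Chebyshev(coeffs, domain=[0, 1])
    r_ode = y.deriv(2)(x_col) - y(x_col)          # residual of y'' - y at interior points
    r_bc = [y(0.0) - 0.0, y(1.0) - 1.0]           # boundary conditions
    return np.concatenate([r_ode, r_bc])

coeffs = fsolve(algebraic_system, np.zeros(N))
y = Chebyshev(coeffs, domain=[0, 1])
xs = np.linspace(0, 1, 5)
print("max error:", np.max(np.abs(y(xs) - np.sinh(xs) / np.sinh(1.0))))
```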

  10. Design of materials with prescribed nonlinear properties

    NASA Astrophysics Data System (ADS)

    Wang, F.; Sigmund, O.; Jensen, J. S.

    2014-09-01

    We systematically design materials using topology optimization to achieve prescribed nonlinear properties under finite deformation. Instead of a formal homogenization procedure, a numerical experiment is proposed to evaluate the material performance in longitudinal and transverse tensile tests under finite deformation, i.e. stress-strain relations and Poisson's ratio. By minimizing errors between actual and prescribed properties, materials are tailored to achieve the target. Both two-dimensional (2D) truss-based and continuum materials are designed with various prescribed nonlinear properties. The numerical examples illustrate optimized materials with rubber-like behavior and also optimized materials with extreme strain-independent Poisson's ratio for axial strain intervals of εi∈[0.00, 0.30].

  11. A Scalable, Parallel Approach for Multi-Point, High-Fidelity Aerostructural Optimization of Aircraft Configurations

    NASA Astrophysics Data System (ADS)

    Kenway, Gaetan K. W.

    This thesis presents new tools and techniques developed to address the challenging problem of high-fidelity aerostructural optimization with respect to large numbers of design variables. A new mesh-movement scheme is developed that is both computationally efficient and sufficiently robust to accommodate large geometric design changes and aerostructural deformations. A fully coupled Newton-Krylov method is presented that accelerates the convergence of aerostructural systems, provides a 20% performance improvement over the traditional nonlinear block Gauss-Seidel approach, and can handle more flexible structures. A coupled adjoint method is used that efficiently computes derivatives for a gradient-based optimization algorithm. The implementation uses only machine-accurate derivative techniques and is verified to yield fully consistent derivatives by comparing against the complex step method. The fully coupled large-scale adjoint solution method is shown to have 30% better performance than the segregated approach. The parallel scalability of the coupled adjoint technique is demonstrated on an Euler Computational Fluid Dynamics (CFD) model with more than 80 million state variables coupled to a detailed structural finite-element model of the wing with more than 1 million degrees of freedom. Multi-point high-fidelity aerostructural optimizations of a long-range wide-body, transonic transport aircraft configuration are performed using the developed techniques. The aerostructural analysis employs Euler CFD with a 2 million cell mesh and a structural finite element model with 300 000 DOF. Two design optimization problems are solved: one where takeoff gross weight is minimized, and another where fuel burn is minimized. Each optimization uses a multi-point formulation with 5 cruise conditions and 2 maneuver conditions. The optimization problems have 476 design variables, and optimal results are obtained within 36 hours of wall time using 435 processors. The TOGW minimization results in a 4.2% reduction in TOGW with a 6.6% fuel burn reduction, while the fuel burn optimization results in an 11.2% fuel burn reduction with no change to the takeoff gross weight.
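
    The baseline coupling strategy that the thesis improves upon, nonlinear block Gauss-Seidel between the aerodynamic and structural solvers, can be sketched with algebraic surrogates: alternate between a load evaluation and a structural solve until the displacements stop changing. The stiffness matrix and load model below are toy assumptions, not the thesis's CFD or finite element models.

```python
# Toy block Gauss-Seidel aerostructural iteration with algebraic surrogates.
import numpy as np

K = np.array([[4.0, -1.0],
              [-1.0, 2.0]])             # assumed structural stiffness (2 DOF)

def aero_loads(u):
    # Assumed load model: loads grow mildly with deflection (static aeroelastic effect).
    return np.array([1.0 + 0.3 * u[0], 0.5 + 0.2 * u[1]])

u = np.zeros(2)
for it in range(1, 51):
    f = aero_loads(u)                   # "CFD" surrogate evaluated at the current shape
    u_new = np.linalg.solve(K, f)       # structural solve with the updated loads
    delta = np.linalg.norm(u_new - u)
    u = u_new
    if delta < 1e-10:
        break

print(f"converged in {it} iterations, deflections = {np.round(u, 6)}")
```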

  12. Optimal remediation of unconfined aquifers: Numerical applications and derivative calculations

    NASA Astrophysics Data System (ADS)

    Mansfield, Christopher M.; Shoemaker, Christine A.

    1999-05-01

    This paper extends earlier work on derivative-based optimization for cost-effective remediation to unconfined aquifers, which have more complex, nonlinear flow dynamics than confined aquifers. Most previous derivative-based optimization of contaminant removal has been limited to consideration of confined aquifers; however, contamination is more common in unconfined aquifers. Exact derivative equations are presented, and two computationally efficient approximations, the quasi-confined (QC) and head independent from previous (HIP) unconfined-aquifer finite element equation derivative approximations, are presented and demonstrated to be highly accurate. The derivative approximations can be used with any nonlinear optimization method requiring derivatives for computation of either time-invariant or time-varying pumping rates. The QC and HIP approximations are combined with the nonlinear optimal control algorithm SALQR into the unconfined-aquifer algorithm, which is shown to compute solutions for unconfined aquifers in CPU times that were not significantly longer than those required by the confined-aquifer optimization model. Two of the three example unconfined-aquifer cases considered obtained pumping policies with substantially lower objective function values with the unconfined model than were obtained with the confined-aquifer optimization, even though the mean differences in hydraulic heads predicted by the unconfined- and confined-aquifer models were small (less than 0.1%). We suggest a possible geophysical index based on differences in drawdown predictions between unconfined- and confined-aquifer models to estimate which aquifers require unconfined-aquifer optimization and which can be adequately approximated by the simpler confined-aquifer analysis.

  13. Optimization of structures undergoing harmonic or stochastic excitation. Ph.D. Thesis; [atmospheric turbulence and white noise

    NASA Technical Reports Server (NTRS)

    Johnson, E. H.

    1975-01-01

    The optimal design of simple structures subjected to dynamic loads, with constraints on the structures' responses, was investigated. Optimal designs were examined for one-dimensional structures excited by harmonically oscillating loads, similar structures excited by white noise, and a wing in the presence of continuous atmospheric turbulence. The first has constraints on the maximum allowable stress, while the last two place bounds on the probability of failure of the structure. Approximations were made to replace the time parameter with a frequency parameter. For the first problem, this involved the steady-state response, and in the remaining cases, power spectral techniques were employed to find the root mean square values of the responses. Optimal solutions were found by using computer algorithms which combined finite element methods with optimization techniques based on mathematical programming. It was found that the inertial loads for these dynamic problems result in optimal structures that are radically different from those obtained for structures loaded statically by forces of comparable magnitude.

  14. Neural network for nonsmooth pseudoconvex optimization with general convex constraints.

    PubMed

    Bian, Wei; Ma, Litao; Qin, Sitian; Xue, Xiaoping

    2018-05-01

    In this paper, a one-layer recurrent neural network is proposed for solving a class of nonsmooth, pseudoconvex optimization problems with general convex constraints. Based on the smoothing method, we construct a new regularization function, which does not depend on any information of the feasible region. Thanks to the special structure of the regularization function, we prove the global existence, uniqueness and "slow solution" character of the state of the proposed neural network. Moreover, the state solution of the proposed network is proved to be convergent to the feasible region in finite time and to the optimal solution set of the related optimization problem subsequently. In particular, the convergence of the state to an exact optimal solution is also considered in this paper. Numerical examples with simulation results are given to show the efficiency and good characteristics of the proposed network. In addition, some preliminary theoretical analysis and application of the proposed network for a wider class of dynamic portfolio optimization are included. Copyright © 2018 Elsevier Ltd. All rights reserved.
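
    The "network as dynamical system" idea can be sketched by integrating a gradient flow whose right-hand side combines the objective gradient with a penalty term that drives the state into the feasible set. The toy problem below is smooth and convex, unlike the nonsmooth pseudoconvex class treated in the paper, and the penalty weight is an assumed value.

```python
# Gradient-flow "network" for a small constrained problem (illustrative only).
import numpy as np
from scipy.integrate import solve_ivp

# minimize f(x) = (x1 - 2)^2 + (x2 + 1)^2  subject to  x1 + x2 <= 0
def rhs(t, x, c=20.0):
    grad_f = np.array([2 * (x[0] - 2.0), 2 * (x[1] + 1.0)])
    g = x[0] + x[1]                                     # constraint value
    grad_pen = c * max(g, 0.0) * np.array([1.0, 1.0])   # exterior penalty gradient
    return -(grad_f + grad_pen)

sol = solve_ivp(rhs, (0.0, 20.0), y0=[3.0, 3.0], rtol=1e-8)
x_final = sol.y[:, -1]
# Constrained minimizer is (1.5, -1.5); the finite penalty leaves a small bias.
print("state converges to approximately:", np.round(x_final, 3))
```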

  15. A Multistage Fluidized Bed for the Deep Removal of Sour Gases: Proof of Concept and Tray Efficiencies

    PubMed Central

    2018-01-01

    Currently there are significant amounts of natural gas that cannot be produced and treated to meet pipeline specifications, because that would not be economically viable. This work investigates a bench scale multistage fluidized bed (MSFB) with shallow beds for sour gas removal from natural gas using a commercially available supported amine sorbent. A MSFB is regarded as a promising adsorber type for deep sour gas removal to parts per million concentrations. A series of experiments was conducted using carbon dioxide as sour gas and nitrogen to mimic natural gas. Removal below 3 mol ppm was successfully demonstrated. This indicates that gas bypassing is minor (that is, good gas–solid contacting) and that apparent adsorption kinetics are fast for the amine sorbent applied. Tray efficiencies for a chemisorption/adsorption system were reported for one of the first times. Current experiments performed at atmospheric pressure strongly indicate that deep removal is possible at higher pressures in a multistage fluidized bed. PMID:29606794

  16. A Multistage Fluidized Bed for the Deep Removal of Sour Gases: Proof of Concept and Tray Efficiencies.

    PubMed

    Driessen, Rick T; Bos, Martin J; Brilman, Derk W F

    2018-03-21

    Currently there are significant amounts of natural gas that cannot be produced and treated to meet pipeline specifications, because that would not be economically viable. This work investigates a bench scale multistage fluidized bed (MSFB) with shallow beds for sour gas removal from natural gas using a commercially available supported amine sorbent. A MSFB is regarded as a promising adsorber type for deep sour gas removal to parts per million concentrations. A series of experiments was conducted using carbon dioxide as sour gas and nitrogen to mimic natural gas. Removal below 3 mol ppm was successfully demonstrated. This indicates that gas bypassing is minor (that is, good gas-solid contacting) and that apparent adsorption kinetics are fast for the amine sorbent applied. Tray efficiencies for a chemisorption/adsorption system were reported for one of the first times. Current experiments performed at atmospheric pressure strongly indicate that deep removal is possible at higher pressures in a multistage fluidized bed.

  17. Stabilization and control of distributed systems with time-dependent spatial domains

    NASA Technical Reports Server (NTRS)

    Wang, P. K. C.

    1990-01-01

    This paper considers the problem of the stabilization and control of distributed systems with time-dependent spatial domains. The evolution of the spatial domains with time is described by a finite-dimensional system of ordinary differential equations, while the distributed systems are described by first-order or second-order linear evolution equations defined on appropriate Hilbert spaces. First, results pertaining to the existence and uniqueness of solutions of the system equations are presented. Then, various optimal control and stabilization problems are considered. The paper concludes with some examples which illustrate the application of the main results.

  18. The Pasinetti-Solow Growth Model with Optimal Saving Behaviour: A Local Bifurcation Analysis

    NASA Astrophysics Data System (ADS)

    Commendatore, P.; Palmisani, C.

    We present a discrete time version of the Pasinetti-Solow economic growth model. Workers and capitalists are assumed to save on the basis of rational choices. Workers face a finite time horizon and base their consumption choices on a life-cycle motive, whereas capitalists behave like an infinitely-lived dynasty. The accumulation of both capitalists' and workers' wealth through time is reduced to a two-dimensional map whose local asymptotic stability properties are studied. Various types of bifurcation emerge (flip, Neimark-Sacker, saddle-node and transcritical): a precondition for chaotic dynamics.

  19. Kuhn-Tucker optimization based reliability analysis for probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Besterfield, G.; Lawrence, M.; Belytschko, T.

    1988-01-01

    The fusion of the probabilistic finite element method (PFEM) and reliability analysis for fracture mechanics is considered. Reliability analysis with specific application to fracture mechanics is presented, and computational procedures are discussed. Explicit expressions for the optimization procedure with regard to fracture mechanics are given. The results show that the PFEM is a very powerful tool for determining second-moment statistics. The method can determine the probability of failure or fracture subject to randomness in load, material properties, and crack length, orientation, and location.
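
    The reliability step can be sketched with the classical first-order procedure: find the most probable failure point in standard normal space by constrained (Kuhn-Tucker) minimization, take beta as its distance from the origin, and estimate the failure probability as Phi(-beta). The linear limit state and the load/resistance statistics below are assumed, not taken from the paper.

```python
# First-order reliability sketch: most probable point via constrained minimization.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Limit state in physical variables: g = R - S (failure when g <= 0),
# with R ~ N(mu_R, sd_R) resistance and S ~ N(mu_S, sd_S) load effect (assumed).
mu_R, sd_R = 200.0, 20.0
mu_S, sd_S = 120.0, 30.0

def g_of_u(u):
    R = mu_R + sd_R * u[0]
    S = mu_S + sd_S * u[1]
    return R - S

# Most probable point: minimize ||u||^2 subject to g(u) = 0.
res = minimize(lambda u: np.dot(u, u), x0=[-1.0, 1.0], method="SLSQP",
               constraints={"type": "eq", "fun": g_of_u})
beta = np.linalg.norm(res.x)
print(f"reliability index beta = {beta:.3f}, Pf ~ {norm.cdf(-beta):.2e}")
# Exact for this linear Gaussian case: beta = (mu_R - mu_S) / sqrt(sd_R^2 + sd_S^2) ~ 2.219
```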

  20. Finite element solution of optimal control problems with state-control inequality constraints

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Hodges, Dewey H.

    1992-01-01

    It is demonstrated that the weak Hamiltonian finite-element formulation is amenable to the solution of optimal control problems with inequality constraints which are functions of both state and control variables. Difficult problems can be treated on account of the ease with which algebraic equations can be generated before having to specify the problem. These equations yield very accurate solutions. Owing to the sparse structure of the resulting Jacobian, computer solutions can be obtained quickly when the sparsity is exploited.
