Sample records for simulate optimal adjustments

  1. Numerical simulation and optimal design of Segmented Planar Imaging Detector for Electro-Optical Reconnaissance

    NASA Astrophysics Data System (ADS)

    Chu, Qiuhui; Shen, Yijie; Yuan, Meng; Gong, Mali

    2017-12-01

    Segmented Planar Imaging Detector for Electro-Optical Reconnaissance (SPIDER) is a cutting-edge electro-optical imaging technology for realizing miniaturized, planar imaging systems. In this paper, the principle of SPIDER is numerically demonstrated based on partially coherent light theory, and a novel concept of an adjustable baseline-pairing SPIDER system is further proposed. The simulation results verify that imaging quality can be effectively improved by adjusting the Nyquist sampling density, optimizing the baseline-pairing method and increasing the number of spectral channels in the demultiplexer. An adjustable baseline-pairing algorithm is therefore established to further enhance image quality, and the optimal design procedure of SPIDER for arbitrary targets is summarized. A SPIDER system with the adjustable baseline-pairing method can broaden the technology's range of applications and reduce cost while maintaining the same imaging quality.

  2. Computationally efficient optimization of radiation drives

    NASA Astrophysics Data System (ADS)

    Zimmerman, George; Swift, Damian

    2017-06-01

    For many applications of pulsed radiation, the temporal pulse shape is designed to induce a desired time-history of conditions. This optimization is normally performed using multi-physics simulations of the system, adjusting the shape until the desired response is induced. These simulations may be computationally intensive, and iterative forward optimization is then expensive and slow. In principle, a simulation program could be modified to adjust the radiation drive automatically until the desired instantaneous response is achieved, but this may be impracticable in a complicated multi-physics program. However, the computational time increment is typically much shorter than the time scale of changes in the desired response, so the radiation intensity can be adjusted so that the response tends toward the desired value. This relaxed in-situ optimization method can give an adequate design for a pulse shape in a single forward simulation, giving a typical gain in computational efficiency of tens to thousands. This approach was demonstrated for the design of laser pulse shapes to induce ramp loading to high pressure in target assemblies where different components had significantly different mechanical impedance, requiring careful pulse shaping. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
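
    The core idea lends itself to a compact illustration. The sketch below is a minimal toy version of such relaxed in-situ optimization, assuming a hypothetical first-order system whose response relaxes toward the drive; the gain, time constant and ramp target are illustrative stand-ins, not the paper's multi-physics setup.

```python
# Toy stand-in for the multi-physics code: the response relaxes toward the
# drive with time constant TAU; all constants here are hypothetical.
TAU, DT = 1.0, 0.01      # response time scale and simulation timestep
GAIN = 0.5               # per-step relaxation gain for the drive adjustment

def desired(t):
    """Hypothetical target time-history: a linear ramp."""
    return 10.0 * t

t, response, drive = 0.0, 0.0, 0.0
for step in range(2000):
    # The timestep is much shorter than changes in the target, so a small
    # proportional nudge per step is enough for the response to track it.
    drive += GAIN * (desired(t) - response)
    response += DT / TAU * (drive - response)    # toy system update
    t += DT
print(f"final response {response:.2f} vs desired {desired(t):.2f}")
```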

  3. Humans make near-optimal adjustments of control to initial body configuration in vertical squat jumping.

    PubMed

    Bobbert, Maarten F; Richard Casius, L J; Kistemaker, Dinant A

    2013-05-01

    We investigated adjustments of control to initial posture in squat jumping. Eleven male subjects jumped from three initial postures: preferred initial posture (PP), a posture in which the trunk was rotated 18° more backward (BP) and a posture in which it was rotated 15° more forward (FP) than in PP. Kinematics, ground reaction forces and electromyograms (EMG) were collected. EMG was rectified and smoothed to obtain smoothed rectified EMG (srEMG). Subjects showed adjustments in srEMG histories, most conspicuously a shift in srEMG-onset of rectus femoris (REC): from early in BP to late in FP. Jumps from the subjects' initial postures were simulated with a musculoskeletal model comprising four segments and six Hill-type muscles, which had muscle stimulation (STIM) over time as input. STIM of each muscle changed from initial to maximal at STIM-onset, and STIM-onsets were optimized using jump height as criterion. Optimal simulated jumps from BP, PP and FP were similar to jumps of the subjects. Optimal solutions primarily differed in STIM-onset of REC: from early in BP to late in FP. Because the subjects' adjustments in srEMG-onsets were similar to adjustments of the model's optimal STIM-onsets, it was concluded that the former were near-optimal. With the model we also showed that near-maximum jumps from BP, PP and FP could be achieved when STIM-onset of REC depended on initial hip joint angle and STIM-onsets of the other muscles were posture-independent. A control theory that relies on a mapping from initial posture to STIM-onsets seems a parsimonious alternative to theories relying on internal optimal control models. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.

  4. The Study of an Optimal Robust Design and Adjustable Ordering Strategies in the HSCM

    PubMed Central

    Liao, Hung-Chang; Chen, Yan-Kwang; Wang, Ya-huei

    2015-01-01

    The purpose of this study was to establish a hospital supply chain management (HSCM) model in which three kinds of drugs in the same class and with the same indications were used in creating an optimal robust design and adjustable ordering strategies to deal with a drug shortage. The main assumption was that although each doctor has his/her own prescription pattern, when there is a shortage of a particular drug, the doctor may choose a similar drug with the same indications as a replacement. Four steps were used to construct and analyze the HSCM model. The computation technology used included a simulation, a neural network (NN), and a genetic algorithm (GA). The mathematical methods of the simulation and the NN were used to construct a relationship between the factor levels and performance, while the GA was used to obtain the optimal combination of factor levels from the NN. A sensitivity analysis was also used to assess the change in the optimal factor levels. Adjustable ordering strategies were also developed to prevent drug shortages. PMID:26451162

  5. Scalability of surrogate-assisted multi-objective optimization of antenna structures exploiting variable-fidelity electromagnetic simulation models

    NASA Astrophysics Data System (ADS)

    Koziel, Slawomir; Bekasiewicz, Adrian

    2016-10-01

    Multi-objective optimization of antenna structures is a challenging task owing to the high computational cost of evaluating the design objectives as well as the large number of adjustable parameters. Design speed-up can be achieved by means of surrogate-based optimization techniques. In particular, a combination of variable-fidelity electromagnetic (EM) simulations, design space reduction techniques, response surface approximation models and design refinement methods permits identification of the Pareto-optimal set of designs within a reasonable timeframe. Here, a study concerning the scalability of surrogate-assisted multi-objective antenna design is carried out based on a set of benchmark problems, with the dimensionality of the design space ranging from six to 24 and a CPU cost of the EM antenna model from 10 to 20 min per simulation. Numerical results indicate that the computational overhead of the design process increases more or less quadratically with the number of adjustable geometric parameters of the antenna structure at hand, which is a promising result from the point of view of handling even more complex problems.

  6. Epidemic spreading on random surfer networks with optimal interaction radius

    NASA Astrophysics Data System (ADS)

    Feng, Yun; Ding, Li; Hu, Ping

    2018-03-01

    In this paper, the optimal control problem of epidemic spreading on random surfer heterogeneous networks is considered. An epidemic spreading model is established according to the classification of individuals' initial interaction radii. Then, a control strategy is proposed based on adjusting individuals' interaction radii. The global stability of the disease-free and endemic equilibria of the model is investigated. We prove that an optimal solution exists for the optimal control problem and present its explicit form. Numerical simulations are conducted to verify the correctness of the theoretical results. It is proved that the optimal control strategy is effective in minimizing the density of infected individuals and the cost associated with the adjustment of interaction radii.

  7. Implementing dynamic root optimization in Noah-MP for simulating phreatophytic root water uptake

    USDA-ARS?s Scientific Manuscript database

    Plants are known to adjust their root systems to adapt to changing subsurface water conditions. However, most current land surface models (LSMs) use a prescribed, static root profile, which cuts off the interactions between soil moisture and root dynamics. In this paper, we implemented an optimality...

  8. Optimization analysis of thermal management system for electric vehicle battery pack

    NASA Astrophysics Data System (ADS)

    Gong, Huiqi; Zheng, Minxin; Jin, Peng; Feng, Dong

    2018-04-01

    Temperature rise in an electric vehicle battery pack can affect the power battery system's cycle life, charge acceptance, power, energy, safety and reliability. Computational Fluid Dynamics (CFD) simulations and experiments on the charging and discharging process of the battery pack were carried out for its thermal management system under continuous charging. The simulation results and the experimental data were used to verify the rationality of the CFD calculation model. In view of the large temperature difference across the battery module in a high-temperature environment, three optimizations of the existing thermal management system of the battery pack were put forward: adjusting the installation position of the fan, optimizing the arrangement of the battery pack and reducing the fan opening temperature threshold. The feasibility of the optimization methods is proved by simulation and experiment on the optimized thermal management system of the battery pack.

  9. Simulation optimization of PSA-threshold based prostate cancer screening policies

    PubMed Central

    Zhang, Jingyu; Denton, Brian T.; Shah, Nilay D.; Inman, Brant A.

    2013-01-01

    We describe a simulation optimization method to design PSA screening policies based on expected quality-adjusted life years (QALYs). Our method integrates a simulation model into a genetic algorithm that uses a probabilistic method to select the best policy. We present computational results on the efficiency of our algorithm. The best policy generated by our algorithm is compared to previously recommended screening policies. Using the policies determined by our model, we present evidence that patients should be screened more aggressively but for a shorter length of time than previously published guidelines recommend. PMID:22302420

  10. Design and Development of Wireless Power Transmission for Unmanned Air Vehicles

    DTIC Science & Technology

    2012-09-01

    …investigated by a series of simulations using Agilent Advanced Design System (ADS). Tuning elements were added and adjusted in order to optimize the efficiency. A maximum efficiency of 57% was…

  11. Method and system for fault accommodation of machines

    NASA Technical Reports Server (NTRS)

    Goebel, Kai Frank (Inventor); Subbu, Rajesh Venkat (Inventor); Rausch, Randal Thomas (Inventor); Frederick, Dean Kimball (Inventor)

    2011-01-01

    A method for multi-objective fault accommodation using predictive modeling is disclosed. The method includes using a simulated machine that simulates a faulted actual machine, and using a simulated controller that simulates an actual controller. A multi-objective optimization process is performed, based on specified control settings for the simulated controller and specified operational scenarios for the simulated machine controlled by the simulated controller, to generate a Pareto frontier-based solution space relating performance of the simulated machine to settings of the simulated controller, including adjustment to the operational scenarios to represent a fault condition of the simulated machine. Control settings of the actual controller (represented by the simulated controller) are adjusted for controlling the actual machine (represented by the simulated machine) in response to a fault condition of the actual machine, based on the Pareto frontier-based solution space, to maximize desirable operational conditions and minimize undesirable operational conditions while operating the actual machine in a region of the solution space defined by the Pareto frontier.

  12. Equation-based languages – A new paradigm for building energy modeling, simulation and optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wetter, Michael; Bonvini, Marco; Nouidui, Thierry S.

    Most of the state-of-the-art building simulation programs implement models in imperative programming languages. This complicates modeling and excludes the use of certain efficient methods for simulation and optimization. In contrast, equation-based modeling languages declare relations among variables, thereby allowing the use of computer algebra to enable much simpler schematic modeling and to generate efficient code for simulation and optimization. We contrast the two approaches in this paper. We explain how such manipulations support new use cases. In the first of two examples, we couple models of the electrical grid, multiple buildings, HVAC systems and controllers to test a controller that adjusts building room temperatures and PV inverter reactive power to maintain power quality. In the second example, we contrast the computing time for solving an optimal control problem for a room-level model predictive controller with and without symbolic manipulations. As a result, exploiting the equation-based language led to a 2,200-times faster solution.

  13. Equation-based languages – A new paradigm for building energy modeling, simulation and optimization

    DOE PAGES

    Wetter, Michael; Bonvini, Marco; Nouidui, Thierry S.

    2016-04-01

    Most of the state-of-the-art building simulation programs implement models in imperative programming languages. This complicates modeling and excludes the use of certain efficient methods for simulation and optimization. In contrast, equation-based modeling languages declare relations among variables, thereby allowing the use of computer algebra to enable much simpler schematic modeling and to generate efficient code for simulation and optimization. We contrast the two approaches in this paper. We explain how such manipulations support new use cases. In the first of two examples, we couple models of the electrical grid, multiple buildings, HVAC systems and controllers to test a controller that adjusts building room temperatures and PV inverter reactive power to maintain power quality. In the second example, we contrast the computing time for solving an optimal control problem for a room-level model predictive controller with and without symbolic manipulations. As a result, exploiting the equation-based language led to a 2,200-times faster solution.

  14. Adaptive temperature-accelerated dynamics

    NASA Astrophysics Data System (ADS)

    Shim, Yunsic; Amar, Jacques G.

    2011-02-01

    We present three adaptive methods for optimizing the high temperature T_high on the fly in temperature-accelerated dynamics (TAD) simulations. In all three methods, the high temperature is adjusted periodically in order to maximize performance. While in the first two methods the adjustment depends on the number of observed events, the third method depends on the minimum activation barrier observed so far and requires a priori knowledge of the optimal high temperature T_high^opt(E_a) as a function of the activation barrier E_a for each accepted event. In order to determine the functional form of T_high^opt(E_a), we have carried out extensive simulations of submonolayer annealing on the (100) surface for a variety of metals (Ag, Cu, Ni, Pd, and Au). While the results for all five metals are different, when they are scaled with the melting temperature T_m, we find that they all lie on a single scaling curve. Similar results have also been obtained for (111) surfaces, although in this case the scaling function is slightly different. In order to test the performance of all three methods, we have also carried out adaptive TAD simulations of Ag/Ag(100) annealing and growth at T = 80 K and compared with fixed high-temperature TAD simulations for different values of T_high. We find that the performance of all three adaptive methods is typically as good as or better than that obtained in fixed high-temperature TAD simulations carried out using the effective optimal fixed high temperature. In addition, we find that the final high temperatures obtained in our adaptive TAD simulations are very close to our results for T_high^opt(E_a). The applicability of the adaptive methods to a variety of TAD simulations is also briefly discussed.
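
    As a rough illustration of the event-count-based adaptation in the spirit of the first two methods, the sketch below periodically raises or lowers T_high depending on how many events were observed in the last period. All constants are hypothetical; the paper ties the adjustment to maximizing TAD performance rather than to these exact numbers.

```python
T_HIGH_MIN, T_HIGH_MAX = 300.0, 1500.0   # kelvin, hypothetical bounds
TARGET_EVENTS = 5                        # desired events per adjustment period
FACTOR = 1.1                             # multiplicative adjustment step

def adjust_t_high(t_high, n_events):
    """Return an updated high temperature given events seen this period."""
    if n_events < TARGET_EVENTS:
        t_high *= FACTOR                 # too few events: heat up to accelerate
    elif n_events > TARGET_EVENTS:
        t_high /= FACTOR                 # too many: cool down to cut overhead
    return min(max(t_high, T_HIGH_MIN), T_HIGH_MAX)

# Example: one adjustment period observed only 2 events at T_high = 800 K.
print(adjust_t_high(800.0, 2))           # -> 880.0
```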

  15. Model-data fusion across ecosystems: from multisite optimizations to global simulations

    NASA Astrophysics Data System (ADS)

    Kuppel, S.; Peylin, P.; Maignan, F.; Chevallier, F.; Kiely, G.; Montagnani, L.; Cescatti, A.

    2014-11-01

    This study uses a variational data assimilation framework to simultaneously constrain a global ecosystem model with eddy covariance measurements of daily net ecosystem exchange (NEE) and latent heat (LE) fluxes from a large number of sites grouped in seven plant functional types (PFTs). It is an attempt to bridge the gap between the numerous site-specific parameter optimization works found in the literature and the generic parameterization used by most land surface models within each PFT. The present multisite approach allows deriving PFT-generic sets of optimized parameters enhancing the agreement between measured and simulated fluxes at most of the sites considered, with performances often comparable to those of the corresponding site-specific optimizations. Besides reducing the PFT-averaged model-data root-mean-square difference (RMSD) and the associated daily output uncertainty, the optimization improves the simulated CO2 balance at tropical and temperate forest sites. The major site-level NEE adjustments at the seasonal scale are reduced amplitude in C3 grasslands and boreal forests, increased seasonality in temperate evergreen forests, and better model-data phasing in temperate deciduous broadleaf forests. Conversely, the poorer performances in tropical evergreen broadleaf forests point to deficiencies in the modelling of phenology and soil water stress for this PFT. An evaluation with data-oriented estimates of photosynthesis (GPP, gross primary productivity) and ecosystem respiration (Reco) rates indicates distinctly improved simulations of both gross fluxes. The multisite parameter sets are then tested against CO2 concentrations measured at 53 locations around the globe, showing significant adjustments of the modelled seasonality of atmospheric CO2 concentration, whose relevance seems PFT-dependent, along with an improved interannual variability. Lastly, a global-scale evaluation with remote sensing NDVI (normalized difference vegetation index) measurements indicates an improvement of the simulated seasonal variations of the foliar cover for all considered PFTs.

  16. Processing of Cells' Trajectories Data for Blood Flow Simulation Model*

    NASA Astrophysics Data System (ADS)

    Slavík, Martin; Kovalčíková, Kristína; Bachratý, Hynek; Bachratá, Katarína; Smiešková, Monika

    2018-06-01

    Simulations of red blood cell (RBC) flow, modelled as the movement of elastic objects in a fluid, are developed to optimize microfluidic devices used for blood sample analysis for diagnostic purposes in medicine. Tracking cell behaviour during simulation helps to improve the model and adjust its parameters. For the optimization of microfluidic devices, it is also necessary to analyse cell trajectories as well as the likelihood and frequency of their occurrence in particular device areas, especially in the parts where they can affect circulating tumour cell capture. In this article, we propose and verify several ways of processing and analysing the typology and stability of trajectories in simulations with a single RBC or with a large number of RBCs in devices with different topologies containing cylindrical obstacles.

  17. Harmony search optimization for HDR prostate brachytherapy

    NASA Astrophysics Data System (ADS)

    Panchal, Aditya

    In high dose-rate (HDR) prostate brachytherapy, multiple catheters are inserted interstitially into the target volume. The process of treating the prostate involves calculating and determining the best dose distribution to the target and organs-at-risk by optimizing the time that the radioactive source dwells at specified positions within the catheters. The goal of this work is to investigate the use of a new optimization algorithm, known as Harmony Search, to optimize dwell times for HDR prostate brachytherapy. The new algorithm was tested on 9 different patients and compared with the genetic algorithm. Simulations were performed to determine the optimal values of the Harmony Search parameters. Finally, multithreading of the simulation was examined to determine potential benefits. First, a simulation environment was created using the Python programming language and the wxPython graphical interface toolkit, which was necessary to run repeated optimizations. DICOM RT data from Varian BrachyVision was parsed and used to obtain patient anatomy and HDR catheter information. Once the structures were indexed, the volume of each structure was determined and compared to the original volume calculated in BrachyVision for validation. Dose was calculated using the AAPM TG-43 point source model of the GammaMed 192Ir HDR source and was validated against Varian BrachyVision. A DVH-based objective function was created and used for the optimization simulation. Harmony Search and the genetic algorithm were implemented as optimization algorithms for the simulation and were compared against each other. The optimal values for the Harmony Search parameters (Harmony Memory Size [HMS], Harmony Memory Considering Rate [HMCR], and Pitch Adjusting Rate [PAR]) were also determined. Lastly, the simulation was modified to use multiple threads of execution in order to achieve faster computational times. Experimental results show that the volume calculation implemented in this thesis was within 2% of the values computed by Varian BrachyVision for the prostate, within 3% for the rectum and bladder, and within 6% for the urethra. The calculated dose differed from BrachyVision by only 0.38%. Isodose curves were also generated and found to be similar to BrachyVision's. The comparison between Harmony Search and the genetic algorithm showed that Harmony Search was over 4 times faster when compared over multiple data sets. The optimal Harmony Memory Size was found to be 5 or lower, the Harmony Memory Considering Rate was determined to be 0.95, and the Pitch Adjusting Rate was found to be 0.9. Ultimately, the multithreading results showed that for intensive computations such as optimization and dose calculation, the threads of execution scale with the number of processors, achieving a speed increase proportional to the number of processor cores. In conclusion, this work showed that Harmony Search is a viable alternative to existing algorithms for HDR prostate brachytherapy optimization. Coupled with the optimal parameters for the algorithm and a multithreaded simulation, this combination can significantly decrease the time spent in the clinic on time-intensive optimization problems such as brachytherapy, IMRT and beam angle optimization.
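
    For readers unfamiliar with the algorithm, the sketch below is a generic Harmony Search minimizer wired with the parameter values reported above (HMS = 5, HMCR = 0.95, PAR = 0.9). The objective function is a stand-in, since the thesis scores DVH-based dose distributions rather than this toy quadratic.

```python
import random

def harmony_search(objective, dim, bounds, hms=5, hmcr=0.95, par=0.9,
                   bandwidth=0.1, iterations=5000):
    """Minimize `objective` over `dim` variables with Harmony Search.

    hms  : Harmony Memory Size (5 or lower reported optimal above)
    hmcr : Harmony Memory Considering Rate (0.95 reported optimal)
    par  : Pitch Adjusting Rate (0.9 reported optimal)
    """
    lo, hi = bounds
    memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iterations):
        new = []
        for i in range(dim):
            if random.random() < hmcr:
                value = random.choice(memory)[i]       # draw from memory
                if random.random() < par:              # pitch adjustment
                    value += random.uniform(-1, 1) * bandwidth
            else:
                value = random.uniform(lo, hi)         # random consideration
            new.append(min(max(value, lo), hi))
        score = objective(new)
        worst = max(range(hms), key=lambda k: scores[k])
        if score < scores[worst]:                      # replace worst harmony
            memory[worst], scores[worst] = new, score
    best = min(range(hms), key=lambda k: scores[k])
    return memory[best], scores[best]

# Usage with a stand-in objective (a real run would score DVH-based dose):
times, cost = harmony_search(lambda x: sum((t - 3.0) ** 2 for t in x),
                             dim=10, bounds=(0.0, 10.0))
```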

  18. Simplex-method based transmission performance optimization for 100G PDM-QPSK systems with non-identical spans

    NASA Astrophysics Data System (ADS)

    Li, Yuanyuan; Gao, Guanjun; Zhang, Jie; Zhang, Kai; Chen, Sai; Yu, Xiaosong; Gu, Wanyi

    2015-06-01

    A simplex-method based optimizing (SMO) strategy is proposed to improve transmission performance for dispersion-uncompensated (DU) coherent optical systems with non-identical spans. Using an analytical expression for the quality of transmission (QoT), this strategy improves the Q factors effectively while minimizing the number of erbium-doped fiber amplifiers (EDFAs) that need to be optimized. Numerical simulations are performed for 100 Gb/s polarization-division multiplexed quadrature phase shift keying (PDM-QPSK) channels over a 10-span standard single mode fiber (SSMF) link with randomly distributed span lengths. Compared to EDFA configurations with complete span-loss compensation, the Q factor of the SMO strategy is improved by approximately 1 dB at the optimal transmitter launch power. Moreover, instead of adjusting the gains of all the EDFAs to their optimal values, the number of EDFAs that need to be adjusted for SMO is reduced from 8 to 2, showing much lower tuning cost and almost negligible performance degradation.
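
    A minimal sketch of the simplex-based tuning idea, using SciPy's Nelder-Mead implementation in place of the authors' own: only the gains of the few EDFAs selected by the strategy are varied, and q_factor_model is a hypothetical stand-in for the paper's analytical QoT expression.

```python
import numpy as np
from scipy.optimize import minimize

def q_factor_model(gains_db):
    """Toy concave Q surrogate; a real study evaluates the analytical QoT."""
    return -np.sum((gains_db - 18.0) ** 2)

subset = np.array([17.0, 19.0])              # gains (dB) of the 2 tuned EDFAs
result = minimize(lambda g: -q_factor_model(g), subset,
                  method="Nelder-Mead",      # the simplex method
                  options={"xatol": 0.01, "fatol": 0.001})
print("optimized gains (dB):", result.x)
```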

  19. A Depth-Adjustment Deployment Algorithm Based on Two-Dimensional Convex Hull and Spanning Tree for Underwater Wireless Sensor Networks.

    PubMed

    Jiang, Peng; Liu, Shuai; Liu, Jun; Wu, Feng; Zhang, Le

    2016-07-14

    Most existing node depth-adjustment deployment algorithms for underwater wireless sensor networks (UWSNs) consider only how to optimize network coverage and connectivity rate. However, these studies do not discuss full network connectivity, even though optimization of network energy efficiency and network reliability are vital topics for UWSN deployment. Therefore, in this study, a depth-adjustment deployment algorithm based on a two-dimensional (2D) convex hull and spanning tree (NDACS) for UWSNs is proposed. First, the algorithm uses the geometric characteristics of a 2D convex hull and an empty circle to find the optimal location of a sleep node and activate it, minimizing the network coverage overlaps on the 2D plane, and then increases the coverage rate until the first-layer coverage threshold is reached. Second, the sink node acts as the root node of all active nodes on the 2D convex hull, gradually forming a small spanning tree. Finally, a depth-adjustment strategy based on a time marker is used to achieve the three-dimensional overall network deployment. Compared with existing depth-adjustment deployment algorithms, the simulation results show that the NDACS algorithm maintains full network connectivity with a high network coverage rate, as well as an improved network average node degree, thus increasing network reliability.

  20. A Depth-Adjustment Deployment Algorithm Based on Two-Dimensional Convex Hull and Spanning Tree for Underwater Wireless Sensor Networks

    PubMed Central

    Jiang, Peng; Liu, Shuai; Liu, Jun; Wu, Feng; Zhang, Le

    2016-01-01

    Most existing node depth-adjustment deployment algorithms for underwater wireless sensor networks (UWSNs) consider only how to optimize network coverage and connectivity rate. However, these studies do not discuss full network connectivity, even though optimization of network energy efficiency and network reliability are vital topics for UWSN deployment. Therefore, in this study, a depth-adjustment deployment algorithm based on a two-dimensional (2D) convex hull and spanning tree (NDACS) for UWSNs is proposed. First, the algorithm uses the geometric characteristics of a 2D convex hull and an empty circle to find the optimal location of a sleep node and activate it, minimizing the network coverage overlaps on the 2D plane, and then increases the coverage rate until the first-layer coverage threshold is reached. Second, the sink node acts as the root node of all active nodes on the 2D convex hull, gradually forming a small spanning tree. Finally, a depth-adjustment strategy based on a time marker is used to achieve the three-dimensional overall network deployment. Compared with existing depth-adjustment deployment algorithms, the simulation results show that the NDACS algorithm maintains full network connectivity with a high network coverage rate, as well as an improved network average node degree, thus increasing network reliability. PMID:27428970
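
    The two geometric ingredients of NDACS can be sketched compactly. The code below, with random stand-in coordinates, computes the 2D convex hull of the nodes' surface projections and a spanning tree over them; a minimum spanning tree stands in here for the tree the algorithm grows gradually from the sink.

```python
import numpy as np
from scipy.spatial import ConvexHull
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)
nodes = rng.uniform(0, 100, size=(20, 2))   # stand-in (x, y) surface positions

# Ingredient 1: the 2D convex hull of the node layout (boundary nodes).
hull = ConvexHull(nodes)
print("convex-hull node indices:", hull.vertices)

# Ingredient 2: a spanning tree over the nodes; NDACS grows its tree
# gradually from the sink, which a minimum spanning tree approximates here.
dist = squareform(pdist(nodes))
tree = minimum_spanning_tree(dist).toarray()
links = np.transpose(np.nonzero(tree))      # (parent, child) index pairs
print("first tree links:", links[:5])
```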

  21. Ancient village fire escape path planning based on improved ant colony algorithm

    NASA Astrophysics Data System (ADS)

    Xia, Wei; Cao, Kang; Hu, QianChuan

    2017-06-01

    The roadways in ancient villages are narrow and labyrinthine, which makes it difficult for people to choose an escape route when a fire occurs. In this paper, a fire escape path planning method based on an ant colony algorithm is presented to address this problem. Factors in the fire environment that influence escape speed are introduced to improve the heuristic function of the algorithm and optimize the transfer strategy, and the pheromone volatility factor is adjusted to improve the pheromone update strategy adaptively, improving the algorithm's dynamic search ability and search speed. Through simulation, dynamic adjustment of the optimal escape path is obtained, and the method is shown to be feasible.
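
    A minimal sketch of the kind of fire-aware state-transition rule described above: the heuristic term is scaled by a hypothetical per-segment hazard factor so that segments that slow escape look less attractive to the ants. The weighting exponents and data structures are illustrative, not the paper's exact formulation.

```python
import random

ALPHA, BETA = 1.0, 2.0    # pheromone vs heuristic weighting exponents

def choose_next(current, candidates, pheromone, length, hazard):
    """Pick the next node; hazard[j] in [0, 1) slows escape speed."""
    weights = []
    for j in candidates:
        eta = (1.0 - hazard[j]) / length[(current, j)]   # fire-adjusted
        weights.append((pheromone[(current, j)] ** ALPHA) * (eta ** BETA))
    total = sum(weights)
    r, acc = random.uniform(0, total), 0.0
    for j, w in zip(candidates, weights):
        acc += w
        if acc >= r:
            return j
    return candidates[-1]

# Usage with a hypothetical 3-way junction:
nxt = choose_next(
    current=0, candidates=[1, 2],
    pheromone={(0, 1): 1.0, (0, 2): 1.0},
    length={(0, 1): 10.0, (0, 2): 12.0},
    hazard={1: 0.6, 2: 0.1},      # node 1 is close to the fire
)
```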

  22. Adaptive adjustment of interval predictive control based on combined model and application in Shell-brand petroleum distillation tower

    NASA Astrophysics Data System (ADS)

    Sun, Chao; Zhang, Chunran; Gu, Xinfeng; Liu, Bin

    2017-10-01

    Constraints on the optimization objective often cannot be met when predictive control is applied to an industrial production process; the online predictive controller then fails to find a feasible, or globally optimal, solution. To solve this problem, based on a Back Propagation-Auto Regressive with exogenous inputs (BP-ARX) combined control model, the nonlinear programming method is used to discuss the feasibility of constrained predictive control, a feasibility decision theorem for the optimization objective is proposed, and a solution method for the soft-constraint slack variables is given for the case in which the optimization objective is infeasible. On this basis, for interval control requirements on the controlled variables, the solved slack variables are introduced and an adaptive weighted interval predictive control algorithm is proposed, achieving adaptive regulation of the optimization objective and automatic adjustment of the infeasible interval range, expanding the feasible region, and ensuring the feasibility of the interval optimization objective. Finally, the feasibility and effectiveness of the algorithm are validated through comparative simulation experiments.

  23. Simulation and Automation of Microwave Frequency Control in Dynamic Nuclear Polarization for Solid Polarized Targets

    NASA Astrophysics Data System (ADS)

    Perera, Gonaduwage; Johnson, Ian; Keller, Dustin

    2017-09-01

    Dynamic Nuclear Polarization (DNP) is used in most solid polarized target scattering experiments. The target materials must be irradiated with microwaves at a frequency determined by the difference between the nuclear Larmor and electron paramagnetic resonance (EPR) frequencies. However, the resonance frequency changes with time as a result of radiation damage, so the microwave frequency must be adjusted accordingly. Manually adjusting the frequency can be difficult, and improper adjustments negatively impact the polarization. To overcome these difficulties, two controllers were developed that automate the process of seeking and maintaining the optimal frequency: a standalone controller for a traditional DC motor, and a LabVIEW VI for a stepper motor configuration. Furthermore, a Monte Carlo simulation was developed that can accurately model the polarization over time as a function of microwave frequency. In this talk, analysis of the simulated data and recent improvements to the automated system will be presented. Supported by the DOE.

  24. Flexible Fusion Structure-Based Performance Optimization Learning for Multisensor Target Tracking

    PubMed Central

    Ge, Quanbo; Wei, Zhongliang; Cheng, Tianfa; Chen, Shaodong; Wang, Xiangfeng

    2017-01-01

    Compared with a fixed fusion structure, a flexible fusion structure with mixed fusion methods has better adjustment performance for complex air task network systems, and it can effectively help the system achieve its goal under the given constraints. Because of the time-varying situation of the task network system induced by moving nodes and non-cooperative targets, and limitations such as communication bandwidth and measurement distance, it is necessary to dynamically adjust the system fusion structure, including sensors and fusion methods, in a given adjustment period. To this end, this paper studies the design of a flexible fusion algorithm using an optimization learning technology. The purpose is to dynamically determine the number of sensors and the associated sensors taking part in the centralized and distributed fusion processes, respectively, herein termed sensor subset selection. Firstly, two system performance indexes are introduced; in particular, the survivability index is presented and defined. Secondly, based on the two indexes and considering other conditions such as communication bandwidth and measurement distance, optimization models for both single-target tracking and multi-target tracking are established. Correspondingly, solution steps are given for the two optimization models in detail. Simulation examples are presented to validate the proposed algorithms. PMID:28481243

  25. Design and simulation of MEMS-actuated adjustable optical wedge for laser beam scanners

    NASA Astrophysics Data System (ADS)

    Bahgat, Ahmed S.; Zaki, Ahmed H.; Abdo Mohamed, Mohamed; El Sherif, Ashraf Fathy

    2018-01-01

    This paper introduces the optical and mechanical design and simulation of a large-static-deflection MOEMS actuator. The designed device is an adjustable optical wedge (AOW) laser scanner. The AOW is formed of a 1.5-mm-diameter plano-convex lens separated by an air gap from a fixed plano-concave lens. The convex lens is actuated by a staggered vertical comb drive and suspended by a torsion beam of rectangular cross-section. An optical analysis and simulation of the air-separated AOW, as well as a detailed design, analysis, and static simulation of the comb drive, are introduced. The dynamic step response of the full system is also presented. The analytical solution showed good agreement with the simulation results. A general global-minimum optimization algorithm is applied to the comb-drive design to minimize the driving voltage. A maximum comb-drive mechanical deflection angle of 12 deg in each direction was obtained under a DC actuation voltage of 32 V with a settling time of 90 ms, leading to 1-mm one-dimensional (1-D) steering of the laser beam with a continuous optical scan angle of 5 deg in each direction. This optimization process provided a design with larger deflection and smaller driving voltage compared with other conventional devices. This enhancement could lead to better performance of MOEMS-based laser beam scanners for imaging and low-speed applications.

  26. OPTIMIZING THROUGH CO-EVOLUTIONARY AVALANCHES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. BOETTCHER; A. PERCUS

    2000-08-01

    We explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by "self-organized criticality," a concept introduced to describe emergent complexity in many physical systems. In contrast to genetic algorithms, which operate on an entire "gene pool" of possible solutions, extremal optimization successively replaces extremely undesirable elements of a sub-optimal solution with new, random ones. Large fluctuations, called "avalanches," ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements approximation methods inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Such phase transitions are found in the parameter space of most optimization problems, and have recently been conjectured to be the origin of some of the hardest instances in computational complexity. We demonstrate how extremal optimization can be implemented for a variety of combinatorial optimization problems. We believe that extremal optimization will be a useful tool in the investigation of phase transitions in combinatorial optimization problems, hence valuable in elucidating the origin of computational complexity.
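
    The abstract's description maps onto a short implementation. The sketch below is tau-EO on a toy antiferromagnet, where the single adjustable parameter TAU biases selection toward the worst-fit variables; the ranking-and-replacement loop follows the generic published scheme, not code from this report.

```python
import random

TAU = 1.4   # the method's single adjustable parameter

def tau_eo(neighbors, steps=20000):
    """Toy tau-EO for an antiferromagnet: minimize edges whose ends agree."""
    n = len(neighbors)
    spins = [random.choice([-1, 1]) for _ in range(n)]

    def cost(s):
        return sum(s[i] == s[j] for i in range(n) for j in neighbors[i] if j > i)

    best, best_cost = spins[:], cost(spins)
    ranks = list(range(1, n + 1))
    weights = [k ** -TAU for k in ranks]          # heavy tail on the worst rank
    for _ in range(steps):
        # Per-variable fitness: fraction of incident edges that are satisfied.
        fit = [sum(spins[i] != spins[j] for j in neighbors[i])
               / max(len(neighbors[i]), 1) for i in range(n)]
        order = sorted(range(n), key=lambda i: fit[i])    # worst variable first
        k = random.choices(ranks, weights)[0]             # rank-biased pick
        spins[order[k - 1]] *= -1                         # replace that element
        c = cost(spins)
        if c < best_cost:
            best, best_cost = spins[:], c                 # track best-so-far
    return best, best_cost

# Usage on a 4-cycle, which can be perfectly anti-aligned (cost 0):
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
state, unsatisfied = tau_eo(ring, steps=200)
```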

  27. SOM neural network fault diagnosis method of polymerization kettle equipment optimized by improved PSO algorithm.

    PubMed

    Wang, Jie-sheng; Li, Shu-xia; Gao, Jie

    2014-01-01

    To meet the real-time fault diagnosis and optimization monitoring requirements of the polymerization kettle in the polyvinyl chloride (PVC) resin production process, a fault diagnosis strategy based on the self-organizing map (SOM) neural network is proposed. Firstly, a mapping between the polymerization process data and the fault pattern is established by analyzing the production technology of the polymerization kettle equipment. The particle swarm optimization (PSO) algorithm with a new dynamic adjustment method for the inertia weights is adopted to optimize the structural parameters of the SOM neural network. The fault pattern classification of the polymerization kettle equipment realizes the nonlinear mapping from the symptom set to the fault set according to the given symptom set. Finally, fault diagnosis simulation experiments are conducted using industrial on-site historical data from the polymerization kettle, and the simulation results show that the proposed PSO-SOM fault diagnosis strategy is effective.
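
    For context, the sketch below shows where the inertia weight enters a standard PSO velocity update. The linearly decreasing schedule is a common stand-in; the paper's own dynamic adjustment rule is not reproduced here.

```python
import random

W_MAX, W_MIN, C1, C2 = 0.9, 0.4, 2.0, 2.0   # illustrative PSO constants

def velocity_update(v, x, pbest, gbest, it, max_it):
    """One velocity update with a time-decaying inertia weight."""
    w = W_MAX - (W_MAX - W_MIN) * it / max_it   # dynamic inertia adjustment
    return [w * vi
            + C1 * random.random() * (pb - xi)   # pull toward personal best
            + C2 * random.random() * (gb - xi)   # pull toward global best
            for vi, xi, pb, gb in zip(v, x, pbest, gbest)]

# Example: one particle, iteration 10 of 100.
new_v = velocity_update([0.1, -0.2], [1.0, 2.0], [0.8, 1.5], [0.5, 1.0], 10, 100)
```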

  28. Real-Time Optimization for use in a Control Allocation System to Recover from Pilot Induced Oscillations

    NASA Technical Reports Server (NTRS)

    Leonard, Michael W.

    2013-01-01

    Integration of the Control Allocation technique to recover from Pilot Induced Oscillations (CAPIO) System into the control system of a Short Takeoff and Landing Mobility Concept Vehicle simulation presents a challenge because the CAPIO formulation requires that constrained optimization problems be solved at the controller operating frequency. We present a solution that utilizes a modified version of the well-known L-BFGS-B solver. Despite the iterative nature of the solver, the method is seen to converge in real time with sufficient reliability to support three weeks of piloted runs at the NASA Ames Vertical Motion Simulator (VMS) facility. The results of the optimization are seen to be excellent in the vast majority of real-time frames. Deficiencies in the quality of the results in some frames are shown to be improvable with simple termination criteria adjustments, though more real-time optimization iterations would be required.
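
    A minimal sketch of a per-frame bound-constrained solve of this kind, using SciPy's L-BFGS-B with a hard iteration cap playing the role of a termination-criterion adjustment; the effectiveness matrix, commanded moments and bounds are illustrative stand-ins, not the CAPIO formulation itself.

```python
import numpy as np
from scipy.optimize import minimize

B = np.array([[1.0, 0.5, 0.2],
              [0.1, 1.0, 0.4]])              # stand-in effectiveness matrix
desired = np.array([0.8, -0.3])              # commanded moments this frame

def cost(u):
    """Quadratic allocation error for control vector u."""
    r = B @ u - desired
    return 0.5 * r @ r

result = minimize(cost, x0=np.zeros(3), method="L-BFGS-B",
                  bounds=[(-1.0, 1.0)] * 3,  # actuator limits
                  options={"maxiter": 20})   # terminate in time for the frame
print(result.x, result.nit)
```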

  29. Empirical analyses on the development trend of non-ferrous metal industry under China’s new normal

    NASA Astrophysics Data System (ADS)

    Li, C. X.; Liu, C. X.; Zhang, Q. L.

    2017-08-01

    A computable general equilibrium (CGE) model of Yunnan's macro economy was constructed based on Yunnan's 2012 input-output data, and the development trend of the non-ferrous metals industry (NMI) under China's new normal was simulated. Accordingly, the impact on the development of Yunnan's NMI was simulated under different expected economic growth rates and an optimized economic structure. The results show that the NMI growth rate is expected to decline when economic growth trends downward, but the change in its share is relatively small. Moreover, when the structural proportions are adjusted to optimize the economic structure, the proportion of the NMI in GDP will decline. By contrast, the biggest influence on the NMI is the change in economic structure. Statistics from the last two years show that the NMI is growing while its share is declining, which is consistent with the simulation results. But the adjustment of the economic structure will take a long time: it is necessary to increase the proportion of the deep-processing industry, extend the industrial chain and enhance the value chain, so as to make good use of the resource advantage.

  30. Optimization of startup and shutdown operation of simulated moving bed chromatographic processes.

    PubMed

    Li, Suzhou; Kawajiri, Yoshiaki; Raisch, Jörg; Seidel-Morgenstern, Andreas

    2011-06-24

    This paper presents new multistage optimal startup and shutdown strategies for simulated moving bed (SMB) chromatographic processes. The proposed concept allows transient operating conditions to be adjusted stage-wise, and provides the capability to improve transient performance and fulfill product quality specifications simultaneously. A specially tailored decomposition algorithm is developed to ensure computational tractability of the resulting dynamic optimization problems. By examining the transient operation of a literature separation example characterized by a nonlinear competitive isotherm, the feasibility of the solution approach is demonstrated, and the performance of the conventional and multistage optimal transient regimes is evaluated systematically. The quantitative results clearly show that the optimal operating policies not only significantly reduce both the duration of the transient phase and desorbent consumption, but also enable on-spec production even during startup and shutdown periods. With the aid of the developed transient procedures, short-term separation campaigns with small batch sizes can be performed more flexibly and efficiently by SMB chromatography. Copyright © 2011 Elsevier B.V. All rights reserved.

  31. Node Deployment Algorithm Based on Connected Tree for Underwater Sensor Networks

    PubMed Central

    Jiang, Peng; Wang, Xingmin; Jiang, Lurong

    2015-01-01

    Designing an efficient deployment method to guarantee optimal monitoring quality is one of the key topics in underwater sensor networks. At present, a realistic approach to deployment involves adjusting the depths of nodes in water. One of the typical algorithms used in such a process is the self-deployment depth adjustment algorithm (SDDA). This algorithm mainly focuses on maximizing network coverage by constantly adjusting node depths to reduce coverage overlaps between two neighboring nodes, and thus achieves good performance. However, the connectivity performance of SDDA is inconsistent. In this paper, we propose a depth adjustment algorithm based on a connected tree (CTDA). In CTDA, the sink node is used as the first root node to start building a connected tree, and the network can finally be organized as a forest to maintain network connectivity. Coverage overlaps between parent and child nodes are then reduced within each sub-tree to optimize coverage. A hierarchical strategy is used to adjust the distance between parent and child nodes to reduce node movement, and silent mode is adopted to reduce communication cost. Simulations show that, compared with SDDA, CTDA can achieve high connectivity with various communication ranges and different numbers of nodes, and can realize coverage as high as that of SDDA with various sensing ranges and numbers of nodes but with less energy consumption. Simulations under sparse environments show that the connectivity and energy consumption of CTDA are considerably better than those of SDDA. Meanwhile, the connectivity and coverage of CTDA are close to those of the depth adjustment algorithm based on a connected dominating set (CDA), which is similar to CTDA, but the energy consumption of CTDA is less than that of CDA, particularly in sparse underwater environments. PMID:26184209

  32. CLFs-based optimization control for a class of constrained visual servoing systems.

    PubMed

    Song, Xiulan; Miaomiao, Fu

    2017-03-01

    In this paper, we use the control Lyapunov function (CLF) technique to present an optimized visual servo control method for constrained eye-in-hand robot visual servoing systems. With knowledge of the camera intrinsic parameters and the depth of target changes, visual servo control laws (i.e., translation speed) with adjustable parameters are derived from image point features and a known CLF of the visual servoing system. The Fibonacci method is employed to compute online the optimal values of those adjustable parameters, which yields an optimized control law satisfying the constraints of the visual servoing system. Lyapunov's theorem and the properties of CLFs are used to establish stability of the constrained visual servoing system in closed loop with the optimized control law. One merit of the presented method is that it does not require online calculation of the pseudo-inverse of the image Jacobian matrix or of the homography matrix. Simulation and experimental results illustrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
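
    Since the Fibonacci method is the optimization engine here, the sketch below shows a generic Fibonacci search minimizing a hypothetical unimodal cost over one adjustable parameter; the paper applies the same idea to its controller gains.

```python
def fibonacci_search(f, a, b, n=20):
    """Fibonacci search for the minimum of a unimodal f on [a, b]."""
    fib = [0, 1]
    while len(fib) <= n:
        fib.append(fib[-1] + fib[-2])      # fib[k] is the k-th Fibonacci number
    x1 = a + fib[n - 2] / fib[n] * (b - a)
    x2 = a + fib[n - 1] / fib[n] * (b - a)
    f1, f2 = f(x1), f(x2)
    for i in range(1, n - 2):
        if f1 > f2:                        # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + fib[n - i - 1] / fib[n - i] * (b - a)
            f2 = f(x2)
        else:                              # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + fib[n - i - 2] / fib[n - i] * (b - a)
            f1 = f(x1)
    return (a + b) / 2.0

# Hypothetical unimodal cost with its minimum at 0.7:
gain = fibonacci_search(lambda g: (g - 0.7) ** 2, 0.0, 2.0)
print(f"optimal parameter: {gain:.3f}")
```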

  33. Combining local search with co-evolution in a remarkably simple way

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boettcher, S.; Percus, A.

    2000-05-01

    The authors explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by self-organized criticality, a concept introduced to describe emergent complexity in physical systems. In contrast to genetic algorithms, which operate on an entire gene pool of possible solutions, extremal optimization successively replaces extremely undesirable elements of a single sub-optimal solution with new, random ones. Large fluctuations, or avalanches, ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements heuristics inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Phase transitions are found in many combinatorial optimization problems, and have been conjectured to occur in the region of parameter space containing the hardest instances. We demonstrate how extremal optimization can be implemented for a variety of hard optimization problems. We believe that this will be a useful tool in the investigation of phase transitions in combinatorial optimization, thereby helping to elucidate the origin of computational complexity.

  34. Optimizing Treatment of Lung Cancer Patients with Comorbidities

    DTIC Science & Technology

    2017-10-01

    …of treatment options, comorbid illness, age, sex, histology, and tumor size. We will simulate base case scenarios for stage I NSCLC for all possible…fitting adjusted logistic regression models controlling for age, sex and cancer stage. Results: Overall, 5,644 (80.4%) and 1,377 (19.6%) patients…

  35. On Distributed PV Hosting Capacity Estimation, Sensitivity Study, and Improvement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Fei; Mather, Barry

    This paper first studies the estimated distributed PV hosting capacities of seventeen utility distribution feeders using Monte Carlo simulation-based stochastic analysis, and then analyzes the sensitivity of PV hosting capacity to both feeder and photovoltaic system characteristics. Furthermore, an active distribution network management approach is proposed to maximize PV hosting capacity by optimally switching capacitors, adjusting voltage regulator taps, managing controllable branch switches and controlling smart PV inverters. The approach is formulated as a mixed-integer nonlinear optimization problem and a genetic algorithm is developed to obtain the solution. Multiple simulation cases are studied and the effectiveness of the proposed approach in increasing PV hosting capacity is demonstrated.

  36. A transmission power optimization with a minimum node degree for energy-efficient wireless sensor networks with full-reachability.

    PubMed

    Chen, Yi-Ting; Horng, Mong-Fong; Lo, Chih-Cheng; Chu, Shu-Chuan; Pan, Jeng-Shyang; Liao, Bin-Yih

    2013-03-20

    Transmission power optimization is the most significant factor in prolonging the lifetime and maintaining the connection quality of wireless sensor networks. Un-optimized transmission power either causes interference between neighboring nodes or fails to link them. The optimization of transmission power depends on the expected node degree and the node distribution. In this study, an optimization approach for an energy-efficient, fully reachable wireless sensor network is proposed. In the proposed approach, an adjustment model of the transmission range with a minimum node degree is developed that focuses on topology control and optimization of the transmission range according to node degree and node density. The model adjusts the tradeoff between energy efficiency and full reachability to obtain an ideal transmission range. In addition, connectivity and reachability are used as performance indices to evaluate the connection quality of a network. The two indices are compared to demonstrate the practicability of the framework through simulation results. Furthermore, the relationship between the indices under various node degrees is analyzed to generalize the characteristics of node densities. The research results on the reliability and feasibility of the proposed approach will benefit future real deployments.
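
    A back-of-the-envelope version of the range/degree relationship is sketched below: under a uniform random deployment of density rho, the expected node degree is rho·π·r², so the minimum range for expected degree k follows in closed form. This textbook special case is an assumption for illustration, not the paper's full adjustment model.

```python
import math

def min_range_for_degree(k, n_nodes, area):
    """Smallest range giving expected degree k under uniform deployment."""
    rho = n_nodes / area              # node density, nodes per unit area
    return math.sqrt(k / (math.pi * rho))

# 200 nodes over a 1 km x 1 km field, targeting expected degree 6:
r = min_range_for_degree(k=6, n_nodes=200, area=1000.0 * 1000.0)
print(f"transmission range: {r:.1f} m")
```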

  37. A Transmission Power Optimization with a Minimum Node Degree for Energy-Efficient Wireless Sensor Networks with Full-Reachability

    PubMed Central

    Chen, Yi-Ting; Horng, Mong-Fong; Lo, Chih-Cheng; Chu, Shu-Chuan; Pan, Jeng-Shyang; Liao, Bin-Yih

    2013-01-01

    Transmission power optimization is the most significant factor in prolonging the lifetime and maintaining the connection quality of wireless sensor networks. Un-optimized transmission power either causes interference between neighboring nodes or fails to link them. The optimization of transmission power depends on the expected node degree and the node distribution. In this study, an optimization approach for an energy-efficient, fully reachable wireless sensor network is proposed. In the proposed approach, an adjustment model of the transmission range with a minimum node degree is developed that focuses on topology control and optimization of the transmission range according to node degree and node density. The model adjusts the tradeoff between energy efficiency and full reachability to obtain an ideal transmission range. In addition, connectivity and reachability are used as performance indices to evaluate the connection quality of a network. The two indices are compared to demonstrate the practicability of the framework through simulation results. Furthermore, the relationship between the indices under various node degrees is analyzed to generalize the characteristics of node densities. The research results on the reliability and feasibility of the proposed approach will benefit future real deployments. PMID:23519351

  38. Mono and multi-objective optimization techniques applied to a large range of industrial test cases using Metamodel assisted Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Fourment, Lionel; Ducloux, Richard; Marie, Stéphane; Ejday, Mohsen; Monnereau, Dominique; Massé, Thomas; Montmitonnet, Pierre

    2010-06-01

    The use of numerical simulation of material processing allows a strategy of trial and error to improve virtual processes without incurring material costs or interrupting production, and can therefore save a lot of money, but it requires user time to analyze the results, adjust the operating conditions and restart the simulation. Automatic optimization is the perfect complement to simulation. An Evolutionary Algorithm coupled with metamodelling makes it possible to obtain industrially relevant results on a very large range of applications within a few tens of simulations and without any specific knowledge of automatic optimization techniques. Ten industrial partners have been selected to cover the different areas of the mechanical forging industry and to provide different examples of forming simulation tools. The large computational time is handled by a metamodel approach, which interpolates the objective function over the entire parameter space from the exact function values at a reduced number of "master points". Two algorithms are used: an evolution strategy combined with a Kriging metamodel, and a genetic algorithm combined with a Meshless Finite Difference Method. The latter approach is extended to multi-objective optimization: the set of solutions corresponding to the best possible compromises between the different objectives is computed in the same way. The population-based approach exploits the parallel capabilities of the computer with high efficiency. An optimization module, fully embedded within the Forge2009 GUI, makes it possible to cover all the defined examples, and the use of new multi-core hardware to run several simulations at the same time reduces the required time dramatically. The presented examples demonstrate the method's versatility. They include billet shape optimization of a common rail, the cogging of a bar, and a wire drawing problem.

  39. Improving the performance of surgery-based clinical pathways: a simulation-optimization approach.

    PubMed

    Ozcan, Yasar A; Tànfani, Elena; Testi, Angela

    2017-03-01

    This paper aims to improve the performance of clinical processes using clinical pathways (CPs). The specific goal of this research is to develop a decision support tool, based on a simulation-optimization approach, which identifies the proper adjustment and alignment of resources to achieve better performance for both the patients and the health-care facility. When multiple perspectives are present in a decision problem, critical issues arise and often require the balancing of goals. In our approach, we assess the level of resources appropriate to meet patients' clinical needs in a timely manner and to avoid worsening of clinical conditions. The simulation-optimization model seeks and evaluates alternative resource configurations aimed at balancing the two main objectives: meeting patient needs and optimal utilization of beds and operating rooms. Using primary data collected at the Department of Surgery of a public hospital in Genoa, Italy, the simulation-optimization modelling approach has been applied to evaluate the thyroid surgical treatment pathway together with the other surgery-based CPs. The low rate of bed utilization and the long elective waiting lists of the specialty under study indicate that the wards were oversized while operating room capacity was the bottleneck of the system. The model enables hospital managers to determine which objective has to be given priority, as well as the corresponding opportunity costs.

  40. Impact and cost-effectiveness of snail control to achieve disease control targets for schistosomiasis.

    PubMed

    Lo, Nathan C; Gurarie, David; Yoon, Nara; Coulibaly, Jean T; Bendavid, Eran; Andrews, Jason R; King, Charles H

    2018-01-23

    Schistosomiasis is a parasitic disease that affects over 240 million people globally. To improve population-level disease control, there is growing interest in adding chemical-based snail control interventions to interrupt the lifecycle of Schistosoma in its snail host to reduce parasite transmission. However, this approach is not widely implemented, and given environmental concerns, the optimal conditions for when snail control is appropriate are unclear. We assessed the potential impact and cost-effectiveness of various snail control strategies. We extended previously published dynamic, age-structured transmission and cost-effectiveness models to simulate mass drug administration (MDA) and focal snail control interventions against Schistosoma haematobium across a range of low-prevalence (5-20%) and high-prevalence (25-50%) rural Kenyan communities. We simulated strategies over a 10-year period of MDA targeting school children or entire communities, snail control, and combined strategies. We measured incremental cost-effectiveness in 2016 US dollars per disability-adjusted life year and defined a strategy as optimally cost-effective when maximizing health gains (averted disability-adjusted life years) with an incremental cost-effectiveness below a Kenya-specific economic threshold. In both low- and high-prevalence settings, community-wide MDA with additional snail control reduced total disability by an additional 40% compared with school-based MDA alone. The optimally cost-effective scenario included the addition of snail control to MDA in over 95% of simulations. These results support inclusion of snail control in global guidelines and national schistosomiasis control strategies for optimal disease control, especially in settings with high prevalence, "hot spots" of transmission, and noncompliance to MDA. Copyright © 2018 the Author(s). Published by PNAS.
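
    The cost-effectiveness ranking above rests on a simple quantity. The sketch below computes an incremental cost-effectiveness ratio and compares it against a threshold; the numbers, including the threshold, are placeholders rather than the study's results.

```python
def icer(cost_new, cost_base, dalys_averted_new, dalys_averted_base):
    """Incremental cost (2016 USD) per additional DALY averted."""
    return (cost_new - cost_base) / (dalys_averted_new - dalys_averted_base)

THRESHOLD = 2000.0   # hypothetical country-specific threshold, USD per DALY
value = icer(cost_new=5.0e5, cost_base=3.0e5,
             dalys_averted_new=900.0, dalys_averted_base=700.0)
print(value, "within threshold" if value < THRESHOLD else "not cost-effective")
```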

  2. CPU time optimization and precise adjustment of the Geant4 physics parameters for a VARIAN 2100 C/D gamma radiotherapy linear accelerator simulation using GAMOS.

    PubMed

    Arce, Pedro; Lagares, Juan Ignacio

    2018-01-25

    We have verified the GAMOS/Geant4 simulation model of a 6 MV VARIAN Clinac 2100 C/D linear accelerator by the procedure of adjusting the initial beam parameters to fit the percentage depth dose and cross-profile dose experimental data at different depths in a water phantom. Thanks to the use of a wide range of field sizes, from 2 × 2 cm² to 40 × 40 cm², a small phantom voxel size and high statistics, fine precision in the determination of the beam parameters has been achieved. This precision has allowed us to make a thorough study of the different physics models and parameters that Geant4 offers. The three Geant4 electromagnetic physics sets of models, i.e. Standard, Livermore and Penelope, have been compared to the experiment, testing the four different models of angular bremsstrahlung distributions as well as the three available multiple-scattering models, and optimizing the most relevant Geant4 electromagnetic physics parameters. Before the fitting, a comprehensive CPU time optimization has been done, using several of the Geant4 efficiency improvement techniques plus a few more developed in GAMOS.

  3. Optimization Control of the Color-Coating Production Process for Model Uncertainty

    PubMed Central

    He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong

    2016-01-01

    Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results. PMID:27247563
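
    The iterative learning control step can be illustrated with the standard first-order update law u_{k+1} = u_k + L*e_k, applied batch after batch until the tracked target is met. The one-line plant below is a stand-in for the mechanistic CCPP model, and the gain and target values are assumptions:

        import numpy as np

        def plant(u):
            # Stand-in for the film-thickness response; the real process is
            # the mechanistic CCPP model described in the abstract.
            return 0.8 * u + 0.05 * np.sin(u)

        def ilc(reference, n_trials=30, gain=0.9):
            u = np.zeros_like(reference)        # control profile over one batch
            for _ in range(n_trials):
                e = reference - plant(u)        # tracking error of this trial
                u = u + gain * e                # first-order ILC update
            return u, np.abs(reference - plant(u)).max()

        ref = np.full(50, 1.2)                  # constant thickness target
        u_final, max_err = ilc(ref)
        print(f"max tracking error after learning: {max_err:.2e}")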

  5. OPTESIM, a Versatile Toolbox for Numerical Simulation of Electron Spin Echo Envelope Modulation (ESEEM) that Features Hybrid Optimization and Statistical Assessment of Parameters

    PubMed Central

    Sun, Li; Hernandez-Guzman, Jessica; Warncke, Kurt

    2009-01-01

    Electron spin echo envelope modulation (ESEEM) is a technique of pulsed-electron paramagnetic resonance (EPR) spectroscopy. The analysis of ESEEM data to extract information about the nuclear and electronic structure of a disordered (powder) paramagnetic system requires accurate and efficient numerical simulations. A single coupled nucleus of known nuclear g value (gN) and spin I=1 can have up to eight adjustable parameters in the nuclear part of the spin Hamiltonian. We have developed OPTESIM, an ESEEM simulation toolbox, for automated numerical simulation of powder two- and three-pulse one-dimensional ESEEM for an arbitrary number (N) and type (I, gN) of coupled nuclei, and arbitrary mutual orientations of the hyperfine tensor principal axis systems for N>1. OPTESIM is based in the Matlab environment, and includes the following features: (1) a fast algorithm for translation of the spin Hamiltonian into simulated ESEEM, (2) different optimization methods that can be hybridized to achieve an efficient coarse-to-fine grained search of the parameter space and convergence to a global minimum, and (3) statistical analysis of the simulation parameters, which allows the identification of simultaneous confidence regions at specific confidence levels. OPTESIM also includes a geometry-preserving spherical averaging algorithm as default for N>1, and global optimization over multiple experimental conditions, such as the dephasing time (τ) for three-pulse ESEEM, and external magnetic field values. Application examples for simulation of 14N coupling (N=1, N=2) in biological and chemical model paramagnets are included. Automated, optimized simulations by using OPTESIM lead to convergence on dramatically shorter time scales, relative to manual simulations. PMID:19553148

  6. Q-adjusting technique applied to vertical deflections estimation in a single-axis rotation INS/GPS integrated system

    NASA Astrophysics Data System (ADS)

    Zhu, Jing; Wang, Xingshu; Wang, Jun; Dai, Dongkai; Xiong, Hao

    2016-10-01

    Former studies have proved that the attitude error in a single-axis rotation INS/GPS integrated system tracks the high-frequency component of the deflections of the vertical (DOV) with a fixed delay and tracking error. This paper analyses the influence of the nominal process noise covariance matrix Q on the tracking error as well as the response delay, and proposes a Q-adjusting technique to obtain an attitude error that tracks the DOV better. Simulation results show that different settings of Q lead to different response delays and tracking errors; there exists an optimal Q that leads to a minimum tracking error and a comparatively short response delay; and for systems with different accuracy, different Q-adjusting strategies should be adopted. In this way, the accuracy of DOV estimation using the attitude error as the observation can be improved. According to the simulation results, the DOV estimation accuracy after using the Q-adjusting technique is improved by approximately 23% and 33%, respectively, compared with the Earth model EGM2008 and the direct attitude difference method.
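
    A scalar Kalman filter is enough to show the trade-off the Q-adjusting technique exploits: a larger process noise q shortens the response delay but adds jitter, so an intermediate q minimizes the tracking error. The signal and noise levels below are illustrative, not those of the INS/GPS system:

        import numpy as np

        def track(signal, q, r=0.04):
            # Scalar Kalman filter: larger process noise q follows fast
            # changes sooner (less delay) but passes more measurement noise.
            x, p, out = 0.0, 1.0, []
            for z in signal:
                p = p + q                   # predict
                k = p / (p + r)             # Kalman gain
                x = x + k * (z - x)         # update
                p = (1.0 - k) * p
                out.append(x)
            return np.array(out)

        rng = np.random.default_rng(1)
        t = np.linspace(0, 10, 500)
        truth = np.sin(1.5 * t)             # stands in for the DOV component
        meas = truth + rng.normal(0, 0.2, t.size)
        for q in (1e-5, 1e-3, 1e-1):
            err = np.sqrt(np.mean((track(meas, q) - truth) ** 2))
            print(f"q = {q:g}: rms tracking error = {err:.3f}")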

  7. Supersensitive ancilla-based adaptive quantum phase estimation

    NASA Astrophysics Data System (ADS)

    Larson, Walker; Saleh, Bahaa E. A.

    2017-10-01

    The supersensitivity attained in quantum phase estimation is known to be compromised in the presence of decoherence. This is particularly patent at blind spots—phase values at which sensitivity is totally lost. One remedy is to use a precisely known reference phase to shift the operation point to a less vulnerable phase value. Since this is not always feasible, we present here an alternative approach based on combining the probe with an ancillary degree of freedom containing adjustable parameters to create an entangled quantum state of higher dimension. We validate this concept by simulating a configuration of a Mach-Zehnder interferometer with a two-photon probe and a polarization ancilla of adjustable parameters, entangled at a polarizing beam splitter. At the interferometer output, the photons are measured after an adjustable unitary transformation in the polarization subspace. Through calculation of the Fisher information and simulation of an estimation procedure, we show that optimizing the adjustable polarization parameters using an adaptive measurement process provides globally supersensitive unbiased phase estimates for a range of decoherence levels, without prior information or a reference phase.

  8. An Optimized Handover Scheme with Movement Trend Awareness for Body Sensor Networks

    PubMed Central

    Sun, Wen; Zhang, Zhiqiang; Ji, Lianying; Wong, Wai-Choong

    2013-01-01

    When a body sensor network (BSN) that is linked to the backbone via a wireless network interface moves from one coverage zone to another, a handover is required to maintain network connectivity. This paper presents an optimized handover scheme with movement trend awareness for BSNs. The proposed scheme predicts the future position of a BSN user using the movement trend extracted from historical positions, and adjusts the handover decision accordingly. Handover initiation time is optimized when the unnecessary handover rate is estimated to meet the requirement and the outage probability is minimized. The proposed handover scheme is simulated in a BSN deployment area in a hospital environment in the UK. Simulation results show that the proposed scheme reduces the outage probability by 22% as compared with the existing hysteresis-based handover scheme under the constraint of an acceptable handover rate. PMID:23736852

  9. Extremal Optimization: Methods Derived from Co-Evolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boettcher, S.; Percus, A.G.

    1999-07-13

    We describe a general-purpose method for finding high-quality solutions to hard optimization problems, inspired by self-organized critical models of co-evolution such as the Bak-Sneppen model. The method, called Extremal Optimization, successively eliminates extremely undesirable components of sub-optimal solutions, rather than "breeding" better components. In contrast to Genetic Algorithms, which operate on an entire "gene pool" of possible solutions, Extremal Optimization improves on a single candidate solution by treating each of its components as species co-evolving according to Darwinian principles. Unlike Simulated Annealing, its non-equilibrium approach effects an algorithm requiring few parameters to tune. With only one adjustable parameter, its performance proves competitive with, and often superior to, more elaborate stochastic optimization procedures. We demonstrate it here on two classic hard optimization problems: graph partitioning and the traveling salesman problem.
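
    A minimal sketch of tau-EO on balanced graph bipartitioning, assuming the usual rank-based selection P(k) proportional to k^(-tau); the graph, step count, and tau value are illustrative:

        import random

        def tau_eo(adj, tau=1.4, steps=2000, seed=0):
            # tau-EO: repeatedly pick a poorly adapted vertex with
            # rank-dependent probability and swap it across the partition.
            rng = random.Random(seed)
            n = len(adj)
            side = [i % 2 for i in range(n)]        # balanced start
            def fitness(v):                         # fraction of uncut edges at v
                good = sum(1 for u in adj[v] if side[u] == side[v])
                return good / max(1, len(adj[v]))
            def cut():
                return sum(1 for v in range(n) for u in adj[v]
                           if u > v and side[u] != side[v])
            weights = [(k + 1) ** -tau for k in range(n)]
            best, best_side = cut(), side[:]
            for _ in range(steps):
                ranked = sorted(range(n), key=fitness)   # worst adapted first
                v = rng.choices(ranked, weights=weights)[0]
                u = rng.choice([w for w in range(n) if side[w] != side[v]])
                side[v], side[u] = side[u], side[v]      # keeps sides balanced
                c = cut()
                if c < best:
                    best, best_side = c, side[:]
            return best, best_side

        # Tiny random graph for demonstration.
        rng = random.Random(42)
        n = 24
        adj = [set() for _ in range(n)]
        for _ in range(60):
            a, b = rng.sample(range(n), 2)
            adj[a].add(b); adj[b].add(a)
        print("best cut size:", tau_eo(adj)[0])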

  10. Numerical Simulation and Optimization of Directional Solidification Process of Single Crystal Superalloy Casting

    PubMed Central

    Zhang, Hang; Xu, Qingyan; Liu, Baicheng

    2014-01-01

    The rapid development of numerical modeling techniques has led to more accurate results in modeling metal solidification processes. In this study, the cellular automaton-finite difference (CA-FD) method was used to simulate the directional solidification (DS) process of single crystal (SX) superalloy blade samples. Experiments were carried out to validate the simulation results. Meanwhile, an intelligent model based on fuzzy control theory was built to optimize the complicated DS process. Several key parameters, such as mushy zone width and temperature difference at the cast-mold interface, were recognized as the input variables. The input variables were processed with the multivariable fuzzy rule to obtain the output adjustment of the withdrawal rate (v), a key technological parameter. The multivariable fuzzy rule was built based on structural features of the casting, such as the relationship between section area and the delay time of the temperature response to changes in v, as well as the professional experience of the operator. The fuzzy control model coupled with the CA-FD method could then be used to optimize v in real time during the manufacturing process. The optimized process was proven to be more flexible and adaptive for a steady and stray-grain-free DS process. PMID:28788535

  11. A statistical data assimilation method for seasonal streamflow forecasting to optimize hydropower reservoir management in data-scarce regions

    NASA Astrophysics Data System (ADS)

    Arsenault, R.; Mai, J.; Latraverse, M.; Tolson, B.

    2017-12-01

    Probabilistic ensemble forecasts generated by the ensemble streamflow prediction (ESP) methodology are subject to biases due to errors in the hydrological model's initial states. In day-to-day operations, hydrologists must compensate for discrepancies between observed and simulated states such as streamflow. However, in data-scarce regions, little to no information is available to guide the streamflow assimilation process. The manual assimilation process can then lead to more uncertainty due to the numerous options available to the forecaster. Furthermore, the model's mass balance may be compromised and could affect future forecasts. In this study we propose a data-driven approach in which specific variables that may be adjusted during assimilation are defined. The underlying principle was to identify key variables that would be the most appropriate to modify during streamflow assimilation depending on the initial conditions, such as the time period of the assimilation, the snow water equivalent of the snowpack and the meteorological conditions. The variables to adjust were determined by performing an automatic variational data assimilation on individual (or combinations of) model state variables and meteorological forcing. The assimilation aimed to simultaneously optimize: (1) the error between the observed and simulated streamflow at the time point where the forecast starts and (2) the bias between medium- to long-term observed and simulated flows, which were simulated by running the model with the observed meteorological data over a hindcast period. The optimal variables were then classified according to the initial conditions at the time period where the forecast is initiated. The proposed method was evaluated by measuring the average electricity generation of a hydropower complex in Québec, Canada, driven by this method. A test-bed which simulates the real-world assimilation, forecasting, water release optimization and decision-making of a hydropower cascade was developed to assess the performance of each individual process in the reservoir management chain. Here the proposed method was compared to the PF algorithm while keeping all other elements intact. Preliminary results are encouraging in terms of power generation and robustness for the proposed approach.

  12. Wireless Sensor Network Congestion Control Based on Standard Particle Swarm Optimization and Single Neuron PID

    PubMed Central

    Yang, Xiaoping; Chen, Xueying; Xia, Riting; Qian, Zhihong

    2018-01-01

    Aiming at the problem of network congestion caused by the large number of data transmissions in wireless routing nodes of a wireless sensor network (WSN), this paper puts forward an algorithm based on standard particle swarm-neural PID congestion control (PNPID). Firstly, PID control theory was applied to the queue management of wireless sensor nodes. Then, the self-learning and self-organizing ability of neurons was used to achieve online adjustment of weights to adjust the proportional, integral and differential parameters of the PID controller. Finally, standard particle swarm optimization was used for online optimization of the initial values of the proportional, integral and differential parameters and the neuron learning rates of the neural PID (NPID) algorithm. This paper describes experiments and simulations which show that the PNPID algorithm effectively stabilized queue length near the expected value. At the same time, network performance, such as throughput and packet loss rate, was greatly improved, which alleviated network congestion and improved network QoS. PMID:29671822
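
    A compact sketch of the single-neuron PID idea: three inputs play the proportional, integral, and differential roles, and the neuron's weights are adapted online by a Hebbian rule. In the paper the initial parameters and learning rates are tuned by standard PSO; fixed guesses and a toy first-order queue model are assumed here instead:

        import numpy as np

        def single_neuron_pid(setpoint, n_steps=400, K=0.4,
                              eta=(0.3, 0.3, 0.3), w0=(0.3, 0.4, 0.3)):
            # Incremental PID: du = K * (normalized weights . [P, I, D] inputs).
            w = np.array(w0, float)
            e1 = e2 = u = y = 0.0
            for _ in range(n_steps):
                y = 0.9 * y + 0.1 * u                       # toy queue dynamics
                e = setpoint - y
                x = np.array([e - e1, e, e - 2 * e1 + e2])  # P, I, D inputs
                u += K * float(np.dot(w / np.abs(w).sum(), x))
                w += np.array(eta) * e * u * x              # Hebbian adaptation
                e2, e1 = e1, e
            return y

        print(f"queue length after adaptation: {single_neuron_pid(50.0):.2f} (target 50)")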

  14. Automatic Parameter Tuning for the Morpheus Vehicle Using Particle Swarm Optimization

    NASA Technical Reports Server (NTRS)

    Birge, B.

    2013-01-01

    A high-fidelity simulation using a PC-based Trick framework has been developed for Johnson Space Center's Morpheus test bed flight vehicle. There is an iterative development loop of refining and testing the hardware, refining the software, comparing the software simulation to hardware performance, and adjusting either or both the hardware and the simulation to extract the best performance from the hardware as well as the most realistic representation of the hardware from the software. A Particle Swarm Optimization (PSO) based technique has been developed that increases the speed and accuracy of the iterative development cycle. Parameters in software can be automatically tuned to make the simulation match real-world subsystem data from test flights. Special considerations for scale, linearity, and discontinuities can be all but ignored with this technique, allowing fast turnaround both for simulation tune-up to match hardware changes and during the test and validation phase to help identify hardware issues. Software models with insufficient control authority to match hardware test data can be immediately identified, and using this technique requires little to no specialized knowledge of optimization, freeing model developers to concentrate on spacecraft engineering. Integration of the PSO into the Morpheus development cycle will be discussed, as well as a case study highlighting the tool's effectiveness.

  15. Optimization of a Circularly Polarized Patch Antenna for Two Frequency Bands

    DTIC Science & Technology

    2015-09-01

    the various techniques that can be used to improve the performance of a circularly polarized microstrip patch antenna. These adjustments include... microstrip antenna. ... Frequency Structural Simulator (HFSS) has allowed engineers to create scalable multiband microstrip antennas. Several factors were taken into...

  16. Correcting for Optimistic Prediction in Small Data Sets

    PubMed Central

    Smith, Gordon C. S.; Seaman, Shaun R.; Wood, Angela M.; Royston, Patrick; White, Ian R.

    2014-01-01

    The C statistic is a commonly reported measure of screening test performance. Optimistic estimation of the C statistic is a frequent problem because of overfitting of statistical models in small data sets, and methods exist to correct for this issue. However, many studies do not use such methods, and those that do correct for optimism use diverse methods, some of which are known to be biased. We used clinical data sets (United Kingdom Down syndrome screening data from Glasgow (1991–2003), Edinburgh (1999–2003), and Cambridge (1990–2006), as well as Scottish national pregnancy discharge data (2004–2007)) to evaluate different approaches to adjustment for optimism. We found that sample splitting, cross-validation without replication, and leave-1-out cross-validation produced optimism-adjusted estimates of the C statistic that were biased and/or associated with greater absolute error than other available methods. Cross-validation with replication, bootstrapping, and a new method (leave-pair-out cross-validation) all generated unbiased optimism-adjusted estimates of the C statistic and had similar absolute errors in the clinical data set. Larger simulation studies confirmed that all 3 methods performed similarly with 10 or more events per variable, or when the C statistic was 0.9 or greater. However, with lower events per variable or lower C statistics, bootstrapping tended to be optimistic but with lower absolute and mean squared errors than both methods of cross-validation. PMID:24966219
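
    The bootstrap correction evaluated above can be sketched as follows (a Harrell-style optimism estimate: average the gap between each bootstrap model's apparent AUC and its AUC on the original data, then subtract that gap from the full-sample AUC). The synthetic data set, model, and replicate count are assumptions for illustration, and scikit-learn is assumed to be available:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        def bootstrap_corrected_auc(X, y, n_boot=200, seed=0):
            rng = np.random.default_rng(seed)
            fit = lambda X_, y_: LogisticRegression(max_iter=1000).fit(X_, y_)
            apparent = roc_auc_score(y, fit(X, y).decision_function(X))
            optimism = []
            for _ in range(n_boot):
                idx = rng.integers(0, len(y), len(y))     # resample with replacement
                if len(np.unique(y[idx])) < 2:            # need both classes present
                    continue
                m = fit(X[idx], y[idx])
                optimism.append(roc_auc_score(y[idx], m.decision_function(X[idx]))
                                - roc_auc_score(y, m.decision_function(X)))
            return apparent - float(np.mean(optimism))

        # Small, overfitting-prone data set: 5 noisy predictors, 60 subjects.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(60, 5))
        y = (X[:, 0] + rng.normal(scale=2.0, size=60) > 0).astype(int)
        print(f"optimism-corrected C statistic: {bootstrap_corrected_auc(X, y):.3f}")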

  17. Reinforcement learning design-based adaptive tracking control with less learning parameters for nonlinear discrete-time MIMO systems.

    PubMed

    Liu, Yan-Jun; Tang, Li; Tong, Shaocheng; Chen, C L Philip; Li, Dong-Juan

    2015-01-01

    Based on the neural network (NN) approximator, an online reinforcement learning algorithm is proposed for a class of affine multiple input and multiple output (MIMO) nonlinear discrete-time systems with unknown functions and disturbances. In the design procedure, two networks are provided: one is an action network to generate an optimal control signal and the other is a critic network to approximate the cost function. An optimal control signal and adaptation laws can be generated based on the two NNs. In previous approaches, the weights of the critic and action networks are updated based on the gradient descent rule and the estimates of the optimal weight vectors are directly adjusted in the design. Consequently, compared with the existing results, the main contributions of this paper are: 1) only two parameters need to be adjusted, and thus the number of adaptation laws is smaller than in previous results; and 2) the updated parameters do not depend on the number of subsystems for MIMO systems, and the tuning rules are replaced by adjusting the norms of the optimal weight vectors in both the action and critic networks. It is proven that the tracking errors, the adaptation laws, and the control inputs are uniformly bounded using the Lyapunov analysis method. Simulation examples are employed to illustrate the effectiveness of the proposed algorithm.

  18. Theory of Random Copolymer Fractionation in Columns

    NASA Astrophysics Data System (ADS)

    Enders, Sabine

    Random copolymers show polydispersity both with respect to molecular weight and with respect to chemical composition, and their physical and chemical properties depend on both polydispersities. For special applications, the two-dimensional distribution function must be adjusted to the application purpose. The adjustment can be achieved by polymer fractionation. From the thermodynamic point of view, the distribution function can be adjusted by the successive establishment of liquid-liquid equilibria (LLE) for suitable solutions of the polymer to be fractionated. The fractionation column is divided into theoretical stages. Assuming an LLE on each theoretical stage, the polymer fractionation can be modeled using phase equilibrium thermodynamics. As examples, simulations of stepwise fractionation in one direction, cross-fractionation in two directions, and two different column fractionations (Baker-Williams fractionation and continuous polymer fractionation) have been investigated. The simulation delivers the distribution according to the molecular weight and chemical composition in every obtained fraction, depending on the operating conditions, and is able to optimize the fractionation effectively.

  19. Alternative evaluation metrics for risk adjustment methods.

    PubMed

    Park, Sungchul; Basu, Anirban

    2018-06-01

    Risk adjustment is instituted to counter risk selection by accurately equating payments with expected expenditures. Traditional risk-adjustment methods are designed to estimate accurate payments at the group level. However, this generates residual risks at the individual level, especially for high-expenditure individuals, thereby inducing health plans to avoid those with high residual risks. To identify an optimal risk-adjustment method, we perform a comprehensive comparison of prediction accuracies at the group level, at the tail distributions, and at the individual level across 19 estimators: 9 parametric regression, 7 machine learning, and 3 distributional estimators. Using the 2013-2014 MarketScan database, we find that no one estimator performs best in all prediction accuracies. Generally, machine learning and distribution-based estimators achieve higher group-level prediction accuracy than parametric regression estimators. However, parametric regression estimators show higher tail distribution prediction accuracy and individual-level prediction accuracy, especially at the tails of the distribution. This suggests that there is a trade-off in selecting an appropriate risk-adjustment method between estimating accurate payments at the group level and lower residual risks at the individual level. Our results indicate that an optimal method cannot be determined solely on the basis of statistical metrics, but rather needs to account for plans' risk-selective behaviors via simulation. Copyright © 2018 John Wiley & Sons, Ltd.

  20. A Novel Harmony Search Algorithm Based on Teaching-Learning Strategies for 0-1 Knapsack Problems

    PubMed Central

    Tuo, Shouheng; Yong, Longquan; Deng, Fang'an

    2014-01-01

    To enhance the performance of the harmony search (HS) algorithm on discrete optimization problems, this paper proposes a novel harmony search algorithm based on teaching-learning (HSTL) strategies to solve 0-1 knapsack problems. In the HSTL algorithm, a method is first presented to dynamically adjust the dimension of the selected harmony vector during the optimization procedure. In addition, four strategies (harmony memory consideration, teaching-learning strategy, local pitch adjusting, and random mutation) are employed to improve the performance of the HS algorithm. Another improvement in the HSTL method is that dynamic strategies are adopted to change the parameters, which effectively maintains the proper balance between global exploration power and local exploitation power. Finally, simulation experiments with 13 knapsack problems show that the HSTL algorithm can be an efficient alternative for solving 0-1 knapsack problems. PMID:24574905
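
    For reference, a plain binary harmony search on a 0-1 knapsack, without the teaching-learning, dynamic-dimension, or dynamic-parameter strategies that HSTL adds on top; all parameter values here are illustrative:

        import random

        def hs_knapsack(values, weights, capacity,
                        hms=20, hmcr=0.9, par=0.3, iters=5000, seed=0):
            rng = random.Random(seed)
            n = len(values)
            def fitness(x):  # infeasible solutions score zero
                w = sum(wi for wi, xi in zip(weights, x) if xi)
                return sum(vi for vi, xi in zip(values, x) if xi) if w <= capacity else 0
            memory = [[rng.randint(0, 1) for _ in range(n)] for _ in range(hms)]
            for _ in range(iters):
                new = []
                for j in range(n):
                    if rng.random() < hmcr:
                        bit = rng.choice(memory)[j]   # harmony memory consideration
                        if rng.random() < par:
                            bit ^= 1                  # pitch adjustment (bit flip)
                    else:
                        bit = rng.randint(0, 1)       # random selection
                    new.append(bit)
                worst = min(range(hms), key=lambda i: fitness(memory[i]))
                if fitness(new) > fitness(memory[worst]):
                    memory[worst] = new               # replace worst harmony
            return max(memory, key=fitness)

        vals, wts = [60, 100, 120, 30], [10, 20, 30, 5]
        best = hs_knapsack(vals, wts, capacity=50)
        print(best, "value:", sum(v for v, b in zip(vals, best) if b))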

  2. Control Systems with Normalized and Covariance Adaptation by Optimal Control Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T. (Inventor); Burken, John J. (Inventor); Hanson, Curtis E. (Inventor)

    2016-01-01

    Disclosed is a novel adaptive control method and system called optimal control modification with normalization and covariance adjustment. The invention specifically addresses current challenges with adaptive control in these areas: 1) persistent excitation, 2) complex nonlinear input-output mapping, 3) large inputs and persistent learning, and 4) the lack of stability analysis tools for certification. The invention has been the subject of many simulations and flight tests. The results substantiate the effectiveness of the invention and demonstrate its technical feasibility for use in modern aircraft flight control systems.

  3. Simulation Model for Scenario Optimization of the Ready-Mix Concrete Delivery Problem

    NASA Astrophysics Data System (ADS)

    Galić, Mario; Kraus, Ivan

    2016-12-01

    This paper introduces a discrete simulation model for solving routing and network material flow problems in construction projects. Before the description of the model, a detailed literature review is provided. The model is verified using a case study of solving the ready-mix concrete network flow and routing problem in a metropolitan area in Croatia. Within this study, real-time input parameters were taken into account. The simulation model is structured in Enterprise Dynamics simulation software and Microsoft Excel linked with Google Maps. The model is dynamic, easily managed and adjustable, and also provides good estimates for minimizing costs and realization time in solving discrete routing and material network flow problems.

  4. Cost-effectiveness of angiographic imaging in isolated perimesencephalic subarachnoid hemorrhage.

    PubMed

    Kalra, Vivek B; Wu, Xiao; Forman, Howard P; Malhotra, Ajay

    2014-12-01

    The purpose of this study is to perform a comprehensive cost-effectiveness analysis of all possible permutations of computed tomographic angiography (CTA) and digital subtraction angiography imaging strategies for both initial diagnosis and follow-up imaging in patients with perimesencephalic subarachnoid hemorrhage on noncontrast CT. Each possible imaging strategy was evaluated in a decision tree created with TreeAge Pro Suite 2014, with parameters derived from a meta-analysis of 40 studies and literature values. Base case and sensitivity analyses were performed to assess the cost-effectiveness of each strategy. A Monte Carlo simulation was conducted with distributional variables to evaluate the robustness of the optimal strategy. The base case scenario showed performing initial CTA with no follow-up angiographic studies in patients with perimesencephalic subarachnoid hemorrhage to be the most cost-effective strategy ($5422/quality adjusted life year). Using a willingness-to-pay threshold of $50 000/quality adjusted life year, the most cost-effective strategy based on net monetary benefit is CTA with no follow-up when the sensitivity of initial CTA is >97.9%, and CTA with CTA follow-up otherwise. The Monte Carlo simulation reported CTA with no follow-up to be the optimal strategy at willingness-to-pay of $50 000 in 99.99% of the iterations. Digital subtraction angiography, whether at initial diagnosis or as part of follow-up imaging, is never the optimal strategy in our model. CTA without follow-up imaging is the optimal strategy for evaluation of patients with perimesencephalic subarachnoid hemorrhage when modern CT scanners and a strict definition of perimesencephalic subarachnoid hemorrhage are used. Digital subtraction angiography and follow-up imaging are not optimal as they carry complications and associated costs. © 2014 American Heart Association, Inc.

  5. Heat transfer simulation and retort program adjustment for thermal processing of wheat based Haleem in semi-rigid aluminum containers.

    PubMed

    Vatankhah, Hamed; Zamindar, Nafiseh; Shahedi Baghekhandan, Mohammad

    2015-10-01

    A mixed computational strategy was used to simulate and optimize the thermal processing of Haleem, an ancient eastern food, in semi-rigid aluminum containers. Average temperature values of the experiments showed no significant difference (α = 0.05) from the predicted temperatures at the same positions. According to the model, the slowest heating zone was located at the geometrical center of the container, where F0 was estimated to be 23.8 min. A 19 min decrease in the holding time of the treatment was estimated to optimize the heating operation, since the preferred F0 of some starch- or meat-based fluid foods is about 4.8-7.5 min.

  6. Design and experimentally measure a high performance metamaterial filter

    NASA Astrophysics Data System (ADS)

    Xu, Ya-wen; Xu, Jing-cheng

    2018-03-01

    The metamaterial filter is a promising optoelectronic device. In this paper, a metal/dielectric/metal (M/D/M) structure metamaterial filter is simulated and measured. Simulated results indicate that the perfect impedance matching condition between the metamaterial filter and free space leads to the transmission band. Measured results show that the proposed metamaterial filter achieves high transmission performance in both the TM and TE polarization directions. Moreover, a high transmission rate can still be obtained when the incident angle reaches 45°. Further measured results show that the transmission band can be broadened by optimizing the structural parameters, and the central frequency of the transmission band can likewise be adjusted. The physical mechanism behind the central frequency shift is explained by establishing an equivalent resonant circuit model.

  7. Evaluation of Model Specification, Variable Selection, and Adjustment Methods in Relation to Propensity Scores and Prognostic Scores in Multilevel Data

    ERIC Educational Resources Information Center

    Yu, Bing; Hong, Guanglei

    2012-01-01

    This study uses simulation examples representing three types of treatment assignment mechanisms in data generation (the random intercept and slopes setting, the random intercept setting, and a third setting with a cluster-level treatment and an individual-level outcome) in order to determine optimal procedures for reducing bias and improving…

  8. Finite element design for the HPHT synthesis of diamond

    NASA Astrophysics Data System (ADS)

    Li, Rui; Ding, Mingming; Shi, Tongfei

    2018-06-01

    The finite element method is used to simulate the steady-state temperature field in a diamond synthesis cell. 2D and 3D models of the China-type cubic press with large deformation of the synthesis cell were established successfully and verified by in situ measurements of the synthesis cell. The assembly design, component design and process design for the HPHT synthesis of diamond based on the finite element simulation are presented one by one. The temperature field in a high-pressure synthesis cavity for diamond production is optimized by adjusting the cavity assembly, and a series of analyses of the influence of the pressure-medium parameters on the temperature field are carried out by adjusting the model parameters. Furthermore, the formation mechanism of the wasteland is studied in detail. The results indicate that the wasteland inevitably exists in the synthesis sample, that the growth region of hex-octahedral diamond moves from near the heater toward the center of the synthesis sample as the power increases, and that the growth conditions for high-quality diamond are located at the center of the synthesis sample. This work can offer suggestions and advice for the development and optimization of a diamond production process.

  9. Pattern optimization of compound optical film for uniformity improvement in liquid-crystal displays

    NASA Astrophysics Data System (ADS)

    Huang, Bing-Le; Lin, Jin-tang; Ye, Yun; Xu, Sheng; Chen, En-guo; Guo, Tai-Liang

    2017-12-01

    The density dynamic adjustment algorithm (DDAA) is designed to efficiently improve the uniformity of the integrated backlight module (IBLM) by adjusting the distribution of microstructures on the compound optical film (COF); the COF is constructed in SolidWorks and simulated in TracePro. To demonstrate the universality of the proposed algorithm, the initial distribution is allocated by a Bezier curve instead of an empirical value. Simulation results show that the uniformity of the IBLM reaches over 90% after only four rounds. Moreover, the vertical and horizontal full widths at half maximum of the angular intensity are collimated to 24 deg and 14 deg, respectively. Compared with the current industry requirement, the IBLM has an 85% higher luminance uniformity of the emerging light, which demonstrates the feasibility and universality of the proposed algorithm.

  10. Swarm size and iteration number effects to the performance of PSO algorithm in RFID tag coverage optimization

    NASA Astrophysics Data System (ADS)

    Prathabrao, M.; Nawawi, Azli; Sidek, Noor Azizah

    2017-04-01

    Radio Frequency Identification (RFID) systems have multiple benefits which can improve the operational efficiency of an organization. The advantages are the ability to record data systematically and quickly, reducing human and system errors, and updating the database automatically and efficiently. Often, several readers are needed for the installation of an RFID system, which makes the system more complex. As a result, an RFID network planning process is needed to ensure the RFID system works properly. The planning process is also an optimization and power-adjustment process, because the coordinates of each RFID reader have to be determined. Therefore, nature-inspired algorithms are often used. In this study, the PSO algorithm is used because it has a small number of parameters, fast simulation times, and is easy and practical to use. However, PSO parameters must be adjusted correctly for robust and efficient use of PSO; failure to do so may degrade performance and yield poorer optimization results. To ensure the efficiency of PSO, this study examines the effects of two parameters on the performance of the PSO algorithm in RFID tag coverage optimization: the swarm size and the iteration number. In addition, the study recommends the most suitable setting for both parameters, that is, 200 for the number of iterations and 800 for the swarm size. These results will enable PSO to operate more efficiently when optimizing RFID network planning.

  11. Simulating storage part of application with Simgrid

    NASA Astrophysics Data System (ADS)

    Wang, Cong

    2017-10-01

    We present the design of a file system simulation and visualization system that uses the SimGrid API and visualization techniques to help users understand and improve the file system portion of their applications. The core of the simulator is the API provided by SimGrid; cluefs tracks and captures the I/O operations. Running the simulator on an application generates an output visualization file, which visualizes the proportion and time series of I/O actions. Users can also change parameters in the configuration file to alter the storage system, such as the read and write bandwidth, adjust the storage strategy, and test the performance, making it much easier to optimize the storage system. We have tested all aspects of the simulator, and the results suggest that the simulator's performance is believable.

  12. Optimism, Social Support, and Adjustment in African American Women with Breast Cancer

    PubMed Central

    Shelby, Rebecca A.; Crespin, Tim R.; Wells-Di Gregorio, Sharla M.; Lamdan, Ruth M.; Siegel, Jamie E.; Taylor, Kathryn L.

    2013-01-01

    Past studies show that optimism and social support are associated with better adjustment following breast cancer treatment. Most studies have examined these relationships in predominantly non-Hispanic White samples. The present study included 77 African American women treated for nonmetastatic breast cancer. Women completed measures of optimism, social support, and adjustment within 10-months of surgical treatment. In contrast to past studies, social support did not mediate the relationship between optimism and adjustment in this sample. Instead, social support was a moderator of the optimism-adjustment relationship, as it buffered the negative impact of low optimism on psychological distress, well-being, and psychosocial functioning. Women with high levels of social support experienced better adjustment even when optimism was low. In contrast, among women with high levels of optimism, increasing social support did not provide an added benefit. These data suggest that perceived social support is an important resource for women with low optimism. PMID:18712591

  13. Optimization of a middle atmosphere diagnostic scheme

    NASA Astrophysics Data System (ADS)

    Akmaev, Rashid A.

    1997-06-01

    A new assimilative diagnostic scheme based on the use of a spectral model was recently tested on the CIRA-86 empirical model. It reproduced the observed climatology with an annual global rms temperature deviation of 3.2 K in the 15-110 km layer. The most important new component of the scheme is that the zonal forcing necessary to maintain the observed climatology is diagnosed from empirical data and subsequently substituted into the simulation model at the prognostic stage of the calculation in an annual cycle mode. The simulation results are then quantitatively compared with the empirical model, and the above mentioned rms temperature deviation provides an objective measure of the `distance' between the two climatologies. This quantitative criterion makes it possible to apply standard optimization procedures to the whole diagnostic scheme and/or the model itself. The estimates of the zonal drag have been improved in this study by introducing a nudging (Newtonian-cooling) term into the thermodynamic equation at the diagnostic stage. A proper optimal adjustment of the strength of this term makes it possible to further reduce the rms temperature deviation of simulations down to approximately 2.7 K. These results suggest that direct optimization can successfully be applied to atmospheric model parameter identification problems of moderate dimensionality.
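
    The nudging (Newtonian-cooling) term amounts to adding -k*(T - T_emp) to the thermodynamic tendency, with the relaxation strength k as the knob being optimized. A toy relaxation model, with entirely illustrative physics and numbers, shows why an adjustable k shrinks the deviation from the empirical state:

        def integrate(t_emp, k_nudge, dt=0.01, n_steps=5000):
            # Toy thermodynamic tendency plus a nudging term relaxing the
            # model state toward the empirical temperature t_emp.
            T = 180.0                               # arbitrary initial state, K
            forcing = lambda T: 0.5 * (200.0 - T)   # stand-in model physics
            for _ in range(n_steps):
                T += dt * (forcing(T) - k_nudge * (T - t_emp))
            return T

        t_emp = 220.0                               # "empirical" temperature
        for k in (0.0, 0.5, 2.0):
            T = integrate(t_emp, k)
            print(f"k = {k:3.1f}: final T = {T:6.1f} K, deviation = {abs(T - t_emp):4.1f} K")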

  14. High-precision method of binocular camera calibration with a distortion model.

    PubMed

    Li, Weimin; Shan, Siyu; Liu, Hui

    2017-03-10

    A high-precision camera calibration method for binocular stereo vision system based on a multi-view template and alternative bundle adjustment is presented in this paper. The proposed method could be achieved by taking several photos on a specially designed calibration template that has diverse encoded points in different orientations. In this paper, the method utilized the existing algorithm used for monocular camera calibration to obtain the initialization, which involves a camera model, including radial lens distortion and tangential distortion. We created a reference coordinate system based on the left camera coordinate to optimize the intrinsic parameters of left camera through alternative bundle adjustment to obtain optimal values. Then, optimal intrinsic parameters of the right camera can be obtained through alternative bundle adjustment when we create a reference coordinate system based on the right camera coordinate. We also used all intrinsic parameters that were acquired to optimize extrinsic parameters. Thus, the optimal lens distortion parameters and intrinsic and extrinsic parameters were obtained. Synthetic and real data were used to test the method. The simulation results demonstrate that the maximum mean absolute relative calibration errors are about 3.5e-6 and 1.2e-6 for the focal length and the principal point, respectively, under zero-mean Gaussian noise with 0.05 pixels standard deviation. The real result shows that the reprojection error of our model is about 0.045 pixels with the relative standard deviation of 1.0e-6 over the intrinsic parameters. The proposed method is convenient, cost-efficient, highly precise, and simple to carry out.

  15. Blade pitch optimization methods for vertical-axis wind turbines

    NASA Astrophysics Data System (ADS)

    Kozak, Peter

    Vertical-axis wind turbines (VAWTs) offer an inherently simpler design than horizontal-axis machines, while their lower blade speed mitigates safety and noise concerns, potentially allowing for installation closer to populated and ecologically sensitive areas. While VAWTs do offer significant operational advantages, development has been hampered by the difficulty of modeling the aerodynamics involved, further complicated by their rotating geometry. This thesis presents results from a simulation of a baseline VAWT computed using Star-CCM+, a commercial finite-volume (FVM) code. VAWT aerodynamics are shown to be dominated at low tip-speed ratios by dynamic stall phenomena and at high tip-speed ratios by wake-blade interactions. Several optimization techniques have been developed for the adjustment of blade pitch based on finite-volume simulations and streamtube models. The effectiveness of the optimization procedure is evaluated and the basic architecture for a feedback control system is proposed. Implementation of variable blade pitch is shown to increase a baseline turbine's power output between 40%-100%, depending on the optimization technique, improving the turbine's competitiveness when compared with a commercially-available horizontal-axis turbine.

  16. Beam-steering efficiency optimization method based on a rapid-search algorithm for liquid crystal optical phased array.

    PubMed

    Xiao, Feng; Kong, Lingjiang; Chen, Jian

    2017-06-01

    A rapid-search algorithm to improve the beam-steering efficiency for a liquid crystal optical phased array was proposed and experimentally demonstrated in this paper. This proposed algorithm, in which the value of steering efficiency is taken as the objective function and the controlling voltage codes are considered as the optimization variables, consisted of a detection stage and a construction stage. It optimized the steering efficiency in the detection stage and adjusted its search direction adaptively in the construction stage to avoid getting caught in a wrong search space. Simulations had been conducted to compare the proposed algorithm with the widely used pattern-search algorithm using criteria of convergence rate and optimized efficiency. Beam-steering optimization experiments had been performed to verify the validity of the proposed method.

  17. Development of free-piston Stirling engine performance and optimization codes based on Martini simulation technique

    NASA Technical Reports Server (NTRS)

    Martini, William R.

    1989-01-01

    A FORTRAN computer code is described that could be used to design and optimize a free-displacer, free-piston Stirling engine similar to the RE-1000 engine made by Sunpower. The code contains options for specifying displacer and power piston motion or for allowing these motions to be calculated by a force balance. The engine load may be a dashpot, inertial compressor, hydraulic pump or linear alternator. Cycle analysis may be done by isothermal analysis or adiabatic analysis. Adiabatic analysis may be done using the Martini moving gas node analysis or the Rios second-order Runge-Kutta analysis. Flow loss and heat loss equations are included. Graphical displays of engine motions, pressures and temperatures are included. Programming for optimizing up to 15 independent dimensions is included. Sample performance results are shown for both specified and unconstrained piston motions; these results are shown as generated by each of the two Martini analyses. Two sample optimization searches are shown using specified-piston-motion isothermal analysis, one for three adjustable inputs and one for four. Also, two optimization searches for calculated piston motion are presented, for three and for four adjustable inputs. The effect of leakage is evaluated. Suggestions for further work are given.

  18. Focus determination for the James Webb Space Telescope Science Instruments: A Survey of Methods

    NASA Technical Reports Server (NTRS)

    Davila, Pamela S.; Bolcar, Matthew R.; Boss, B.; Dean, B.; Hapogian, J.; Howard, J.; Unger, B.; Wilson, M.

    2006-01-01

    The James Webb Space Telescope (JWST) is a segmented deployable telescope that will require on-orbit alignment using the Near Infrared Camera as a wavefront sensor. The telescope will be aligned by adjusting seven degrees of freedom on each of 18 primary mirror segments and five degrees of freedom on the secondary mirror to optimize the performance of the telescope and camera at a wavelength of 2 microns. With the completion of these adjustments, the telescope focus is set and the optical performance of each of the other science instruments should then be optimal without making further telescope focus adjustments for each individual instrument. This alignment approach requires confocality of the instruments after integration and alignment to the composite metering structure, which will be verified during instrument level testing at Goddard Space Flight Center with a telescope optical simulator. In this paper, we present the results from a study of several analytical approaches to determine the focus for each instrument. The goal of the study is to compare the accuracies obtained for each method, and to select the most feasible for use during optical testing.

  19. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheng, Zheng, E-mail: 19994035@sina.com; Wang, Jun; Zhou, Bihua

    2014-03-15

    This paper introduces a novel hybrid optimization algorithm to establish the parameters of chaotic systems. In order to deal with the weaknesses of the traditional cuckoo search algorithm, the proposed adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates the adaptive parameter adjusting operation and the simulated annealing operation in the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may decrease the efficiency of the algorithm. For the purpose of balancing and enhancing the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is presented to tune the parameters properly. Besides, the local search capability of the cuckoo search algorithm is relatively weak, which may decrease the quality of optimization. So the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated through the Lorenz chaotic system under noiseless and noise conditions, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately in both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, genetic algorithm, and particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.
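
    A condensed sketch of the hybrid scheme, scaled down to a one-parameter chaotic system: Levy flights propose new nests as in cuckoo search, a Metropolis test with a cooling temperature supplies the simulated annealing acceptance, and the logistic map stands in for the Lorenz system to keep the example small. Step sizes, schedules, and bounds are illustrative:

        import math, random

        def levy(rng, beta=1.5):
            # Mantegna's algorithm for a heavy-tailed Levy step length.
            num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
            den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
            sigma = (num / den) ** (1 / beta)
            return rng.gauss(0, sigma) / abs(rng.gauss(0, 1)) ** (1 / beta)

        def estimate_r(x_obs, n_nests=15, iters=300, seed=0):
            rng = random.Random(seed)
            def cost(r):  # one-step prediction error, smooth in r
                return sum((b - r * a * (1 - a)) ** 2
                           for a, b in zip(x_obs, x_obs[1:]))
            nests = [rng.uniform(2.5, 4.0) for _ in range(n_nests)]
            temp = 1.0
            for _ in range(iters):
                for i, r in enumerate(nests):
                    cand = min(4.0, max(2.5, r + 0.01 * levy(rng)))
                    d = cost(cand) - cost(r)
                    if d < 0 or rng.random() < math.exp(-d / temp):
                        nests[i] = cand           # SA-style acceptance
                temp *= 0.98                      # annealing schedule
            return min(nests, key=cost)

        # Synthetic observations from a logistic map with r = 3.7.
        x = [0.2]
        for _ in range(200):
            x.append(3.7 * x[-1] * (1 - x[-1]))
        print(f"estimated r = {estimate_r(x):.4f} (true value 3.7)")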

  20. Improving the Process of Adjusting the Parameters of Finite Element Models of Healthy Human Intervertebral Discs by the Multi-Response Surface Method.

    PubMed

    Gómez, Fátima Somovilla; Lorza, Rubén Lostado; Bobadilla, Marina Corral; García, Rubén Escribano

    2017-09-21

    The kinematic behavior of models that are based on the finite element method (FEM) for modeling the human body depends greatly on an accurate estimate of the parameters that define such models. This task is complex, and any small difference between the actual biomaterial model and the simulation model based on FEM can be amplified enormously in the presence of nonlinearities. The current paper attempts to demonstrate how a combination of the FEM and the MRS methods with desirability functions can be used to obtain the material parameters that are most appropriate for use in defining the behavior of Finite Element (FE) models of the healthy human lumbar intervertebral disc (IVD). The FE model parameters were adjusted on the basis of experimental data from selected standard tests (compression, flexion, extension, shear, lateral bending, and torsion) and were developed as follows: First, three-dimensional parameterized FE models were generated on the basis of the mentioned standard tests. Then, 11 parameters were selected to define the proposed parameterized FE models. For each of the standard tests, regression models were generated using MRS to model the six stiffness and nine bulges of the healthy IVD models that were created by changing the parameters of the FE models. The optimal combination of the 11 parameters was based on three different adjustment criteria. The latter, in turn, were based on the combination of stiffness and bulges that were obtained from the standard test FE simulations. The first adjustment criteria considered stiffness and bulges to be equally important in the adjustment of FE model parameters. The second adjustment criteria considered stiffness as most important, whereas the third considered the bulges to be most important. The proposed adjustment methods were applied to a medium-sized human IVD that corresponded to the L3-L4 lumbar level with standard dimensions of width = 50 mm, depth = 35 mm, and height = 10 mm. Agreement between the kinematic behavior that was obtained with the optimized parameters and that obtained from the literature demonstrated that the proposed method is a powerful tool with which to adjust healthy IVD FE models when there are many parameters, stiffnesses, and bulges to which the models must adjust.
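
    The multi-response surface step combines each response's desirability into a single objective, typically the geometric mean of Derringer-type functions. The sketch below uses hypothetical stiffness and bulge targets, not the paper's experimental values:

        import math

        def d_target(y, low, target, high, s=1.0, t=1.0):
            # Two-sided Derringer desirability: 1 at the target, falling
            # to 0 at the admissible limits.
            if y <= low or y >= high:
                return 0.0
            if y <= target:
                return ((y - low) / (target - low)) ** s
            return ((high - y) / (high - target)) ** t

        def overall(ds):
            # Geometric mean; any response with d = 0 vetoes the setting.
            return math.prod(ds) ** (1 / len(ds))

        # Hypothetical responses of one candidate FE parameter set:
        # a stiffness (N/mm) and a bulge (mm) with target values.
        d1 = d_target(y=1450.0, low=1000.0, target=1500.0, high=2000.0)
        d2 = d_target(y=0.55, low=0.0, target=0.50, high=1.00)
        print(f"overall desirability: {overall([d1, d2]):.3f}")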

  2. Large scale cardiac modeling on the Blue Gene supercomputer.

    PubMed

    Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U; Weiss, Daniel L; Seemann, Gunnar; Dössel, Olaf; Pitman, Michael C; Rice, John J

    2008-01-01

    Multi-scale, multi-physical heart models have not yet achieved a high degree of accuracy and resolution in model detail and spatial resolution, owing to the computational limitations of current systems. We propose a framework to compute large-scale cardiac models. Decomposition of the anatomical data into segments to be distributed on a parallel computer is carried out by optimal recursive bisection (ORB). The algorithm takes into account a computational load parameter which has to be adjusted according to the cell models used. The diffusion term is realized by the monodomain equations. The anatomical data set was given by both ventricles of the Visible Female data set at 0.2 mm resolution. Heterogeneous anisotropy was included in the computation. Model weights as input for the decomposition and load balancing were set to (a) 1 for tissue and 0 for non-tissue elements; (b) 10 for tissue and 1 for non-tissue elements. Scaling results for 512, 1024, 2048, 4096 and 8192 computational nodes were obtained for 10 ms simulation time. The simulations were carried out on an IBM Blue Gene/L parallel computer. A 1 s simulation was then carried out on 2048 nodes for the optimal model load. Load balances did not differ significantly across computational nodes even when the number of data elements distributed to each node differed greatly. Since the ORB algorithm did not take into account the computational load due to communication cycles, the speedup is close to optimal for the computation time but not optimal overall, due to the communication overhead. However, the simulation times were reduced from 87 minutes on 512 nodes to 11 minutes on 8192 nodes. This work demonstrates that it is possible to run simulations of the presented detailed cardiac model within hours for the simulation of a single heartbeat.
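
    Optimal recursive bisection with load weights (10 for tissue, 1 for non-tissue, as in case b) can be sketched as follows; the toy grid and the longest-axis cutting rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def orb(labels, weights, region, part_ids):
    """Recursively bisect `region` (a tuple of slices into the voxel grid) so
    that the summed weight is as balanced as possible between the two halves.
    len(part_ids) must be a power of two."""
    if len(part_ids) == 1:
        labels[region] = part_ids[0]
        return
    sub = weights[region]
    axis = int(np.argmax(sub.shape))               # cut along the longest axis
    other = tuple(i for i in range(sub.ndim) if i != axis)
    profile = sub.sum(axis=other)                  # weight per slab along the axis
    cum = np.cumsum(profile)
    cut = int(np.argmin(np.abs(cum - cum[-1] / 2))) + 1
    cut = max(1, min(cut, profile.size - 1))       # keep both halves non-empty
    lo = region[axis].start
    left, right = list(region), list(region)
    left[axis] = slice(lo, lo + cut)
    right[axis] = slice(lo + cut, region[axis].stop)
    half = len(part_ids) // 2
    orb(labels, weights, tuple(left), part_ids[:half])
    orb(labels, weights, tuple(right), part_ids[half:])

# Toy anatomical grid: weight 10 for tissue voxels, 1 for non-tissue (case b)
rng = np.random.default_rng(0)
tissue = rng.random((40, 40, 40)) > 0.6
w = np.where(tissue, 10.0, 1.0)
labels = np.full(w.shape, -1, dtype=int)
orb(labels, w, (slice(0, 40),) * 3, list(range(8)))       # 8 partitions
print([w[labels == p].sum() for p in range(8)])            # near-equal loads
```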

  3. The application of immune genetic algorithm in main steam temperature of PID control of BP network

    NASA Astrophysics Data System (ADS)

    Li, Han; Zhen-yu, Zhang

    In order to overcome the uncertainty, large delay, large inertia, and nonlinearity of the main steam temperature process in a power plant, an intelligent neural-network PID control system based on an immune genetic algorithm and a BP neural network is designed. The global search capability and good convergence of the immune genetic algorithm are used to optimize the weights of the neural network, while the PID parameters are adjusted by the BP network. The simulation results show that the system is superior to a conventional PID control system in control quality and robustness.
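
    The discrete incremental PID law that such a scheme tunes can be sketched as follows; the gains are fixed here for illustration, whereas in the paper they would be supplied by the BP network, and the first-order plant is only a stand-in.

```python
class IncrementalPID:
    """Discrete incremental PID: du = Kp*(e - e1) + Ki*e + Kd*(e - 2*e1 + e2)."""
    def __init__(self):
        self.e1 = 0.0  # error at step k-1
        self.e2 = 0.0  # error at step k-2

    def step(self, error, kp, ki, kd):
        du = (kp * (error - self.e1)
              + ki * error
              + kd * (error - 2.0 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, error
        return du

# Toy usage: gains would come from the BP network (fixed here for illustration)
pid, u, y = IncrementalPID(), 0.0, 0.0
setpoint = 540.0                      # main steam temperature target, deg C (assumed)
for _ in range(100):
    u += pid.step(setpoint - y, kp=0.8, ki=0.05, kd=0.1)
    y += 0.02 * (u - y)               # crude first-order stand-in for the plant
print(f"temperature after 100 steps: {y:.1f}")
```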

  4. Node Depth Adjustment Based Target Tracking in UWSNs Using Improved Harmony Search.

    PubMed

    Liu, Meiqin; Zhang, Duo; Zhang, Senlin; Zhang, Qunfei

    2017-12-04

    Underwater wireless sensor networks (UWSNs) can provide a promising solution to underwater target tracking. Due to limited computation and bandwidth resources, only a small subset of nodes is selected to track the target at each interval, and how to improve tracking accuracy with a small number of nodes is a key problem. In recent years, node depth adjustment systems have been developed and applied to network deployment and routing protocols. As far as we know, all existing tracking schemes keep underwater nodes static or moving with the water flow; node depth adjustment has not yet been utilized for underwater target tracking. This paper studies a node depth adjustment method for target tracking in UWSNs. Firstly, since the Fisher Information Matrix (FIM) can quantify the estimation accuracy, its relation to node depth is derived as a metric. Secondly, we formulate node depth adjustment as an optimization problem that determines the moving depth of each activated node under a moving-range constraint; the FIM-based metric is used as the objective function, minimized over the moving distance of the nodes. Thirdly, to solve the optimization problem efficiently, an improved Harmony Search (HS) algorithm is proposed, in which the generating probability is modified to improve search speed and accuracy. Finally, simulation results are presented to verify the performance of our scheme.
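
    A compact sketch of the harmony search loop such a scheme could use is given below; the decaying pitch-adjusting rate is an assumed stand-in for the paper's modified generating probability, and the quadratic toy objective replaces the FIM-derived tracking metric.

```python
import numpy as np

def harmony_search(obj, lb, ub, hms=10, hmcr=0.9, iters=500, seed=1):
    """Minimize obj over the box [lb, ub]. The pitch-adjusting rate decays
    with iteration (an assumed schedule, not the paper's exact rule)."""
    rng = np.random.default_rng(seed)
    dim = lb.size
    hm = rng.uniform(lb, ub, (hms, dim))           # harmony memory
    cost = np.apply_along_axis(obj, 1, hm)
    for t in range(iters):
        par = 0.9 - 0.8 * t / iters                # assumed adaptive schedule
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:                # memory consideration
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:             # pitch adjustment
                    new[j] += rng.normal(0.0, 0.05 * (ub[j] - lb[j]))
            else:                                  # random selection
                new[j] = rng.uniform(lb[j], ub[j])
        new = np.clip(new, lb, ub)
        worst = int(np.argmax(cost))
        c = obj(new)
        if c < cost[worst]:                        # replace the worst harmony
            hm[worst], cost[worst] = new, c
    return hm[int(np.argmin(cost))], float(cost.min())

# Toy objective standing in for the FIM-based tracking metric (preferred depths)
best, val = harmony_search(lambda d: np.sum((d - 35.0) ** 2),
                           lb=np.zeros(4), ub=np.full(4, 100.0))
```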

  6. MAC Protocol for Ad Hoc Networks Using a Genetic Algorithm

    PubMed Central

    Elizarraras, Omar; Panduro, Marco; Méndez, Aldo L.

    2014-01-01

    The problem of obtaining the transmission rate in an ad hoc network consists of adjusting the power of each node so that the signal-to-interference ratio (SIR) and the energy required to transmit from one node to another are satisfied at the same time. An optimal transmission rate for each node in a medium access control (MAC) protocol based on CSMA-CDMA (carrier sense multiple access-code division multiple access) for ad hoc networks can therefore be obtained using evolutionary optimization. This work proposes a genetic algorithm for transmission rate selection assuming perfect power control; our proposal achieves a 10% improvement over the scheme that uses the handshaking phase to adjust the transmission rate. Furthermore, this paper proposes a genetic algorithm that solves the combined problem of power, interference, data rate, and energy while ensuring the signal-to-interference ratio in an ad hoc network. The proposed genetic algorithm performs better (by 15%) than the CSMA-CDMA protocol without optimization. We therefore show by simulation the effectiveness of the proposed protocol in terms of throughput. PMID:25140339
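
    For illustration, a minimal integer-coded GA of the kind the abstract describes is sketched below; the rate levels, the crude SIR surrogate in the fitness function, and all constants are assumptions, not the paper's link model.

```python
import numpy as np

def ga(fitness, n_genes, n_levels, pop=30, gens=100, pc=0.9, pm=0.02, seed=0):
    """Integer-coded GA: each gene is a node's transmission-rate level.
    `fitness` must reward throughput and penalize SIR violations."""
    rng = np.random.default_rng(seed)
    P = rng.integers(0, n_levels, (pop, n_genes))
    for _ in range(gens):
        f = np.array([fitness(ind) for ind in P])
        a, b = rng.integers(0, pop, (2, pop))          # tournament selection
        parents = P[np.where(f[a] > f[b], a, b)]
        children = parents.copy()
        for i in range(0, pop - 1, 2):                 # one-point crossover
            if rng.random() < pc:
                cut = rng.integers(1, n_genes)
                children[i, cut:], children[i + 1, cut:] = \
                    parents[i + 1, cut:].copy(), parents[i, cut:].copy()
        mask = rng.random(children.shape) < pm         # mutation
        children[mask] = rng.integers(0, n_levels, mask.sum())
        P = children
    f = np.array([fitness(ind) for ind in P])
    return P[int(np.argmax(f))]

# Toy fitness: total throughput minus a penalty for crudely estimated SIR violations
levels = np.array([1.0, 2.0, 4.0, 8.0])                # Mb/s per level (illustrative)
def fit(ind):
    rates = levels[ind]
    interference = rates.sum() - rates                 # crude SIR surrogate
    return rates.sum() - 10.0 * np.sum(interference > 12.0)

best = ga(fit, n_genes=8, n_levels=4)
```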

  7. Research and application of an intelligent control system in central air-conditioning based on energy consumption simulation

    NASA Astrophysics Data System (ADS)

    Cao, Ling; Che, Wenbin

    2018-05-01

    For central air-conditioning energy saving, it is common engineering practice to use PID controllers to optimize energy consumption. However, the shortcomings of the PID controller are magnified on this problem: its calculation accuracy is insufficient and its computation time is too long. Particle swarm optimization, by contrast, converges quickly. This paper applies particle swarm optimization to tune the PID controller parameters so as to save energy while ensuring comfort. The proposed algorithm adjusts the weights according to changes in population fitness, reducing the weights of particles with lower fitness and enhancing the weights of particles with higher fitness, thereby fully releasing the population's vitality. The method is validated with a TRNSYS model of the central air-conditioning system. The experimental results show small room-temperature fluctuation, small overshoot, fast adjustment, and energy savings fluctuating around 10%.
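
    The fitness-dependent weight adjustment can be sketched as follows; the linear mapping from relative fitness to inertia weight and the toy tuning objective are assumptions, since the abstract does not state the exact rule.

```python
import numpy as np

def adaptive_pso(obj, lb, ub, n=20, iters=200, seed=0):
    """PSO whose inertia weight is set per particle from its relative fitness
    (better particles keep more momentum; the linear mapping is assumed)."""
    rng = np.random.default_rng(seed)
    dim = lb.size
    x = rng.uniform(lb, ub, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pcost = np.apply_along_axis(obj, 1, x)
    for _ in range(iters):
        cost = np.apply_along_axis(obj, 1, x)
        improved = cost < pcost
        pbest[improved], pcost[improved] = x[improved], cost[improved]
        g = pbest[np.argmin(pcost)]
        rank = (cost - cost.min()) / (np.ptp(cost) + 1e-12)   # 0 = best, 1 = worst
        w = 0.4 + 0.5 * (1.0 - rank)                          # fitness-dependent weight
        r1, r2 = rng.random((2, n, dim))
        v = w[:, None] * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
    return pbest[np.argmin(pcost)], float(pcost.min())

# Toy stand-in for the PID-tuning cost (e.g., an integrated temperature-error index)
gains, cost = adaptive_pso(lambda k: np.sum((k - np.array([2.0, 0.5, 0.1])) ** 2),
                           lb=np.zeros(3), ub=np.array([10.0, 5.0, 1.0]))
```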

  8. Selecting and optimizing eco-physiological parameters of Biome-BGC to reproduce observed woody and leaf biomass growth of Eucommia ulmoides plantation in China using Dakota optimizer

    NASA Astrophysics Data System (ADS)

    Miyauchi, T.; Machimura, T.

    2013-12-01

    In simulations using an ecosystem process model, parameter adjustment is indispensable for improving the accuracy of prediction. This procedure, however, requires much time and effort to bring the simulation results close to the measurements in models consisting of many ecosystem processes. In this study, we applied a general-purpose optimization tool to the parameter optimization of an ecosystem model and examined its validity by comparing simulated and measured biomass growth of a woody plantation. A biometric survey of tree biomass growth was performed in 2009 in an 11-year-old Eucommia ulmoides plantation in Henan Province, China. The climate of the site was dry temperate. Leaf, above- and below-ground woody biomass were measured from three cut trees and converted into carbon mass per area using measured carbon contents and stem density. Yearly woody biomass growth of the plantation was calculated according to allometric relationships determined by tree-ring analysis of seven cut trees. We used Biome-BGC (Thornton, 2002) to reproduce the biomass growth of the plantation. Air temperature and humidity from 1981 to 2010 were used as the input climate conditions. The plant functional type was deciduous broadleaf, and non-optimized parameters were left at their defaults. 11-year normal simulations were performed following a spin-up run. To select the parameters to optimize, we analyzed the sensitivity of leaf, above- and below-ground woody biomass to the eco-physiological parameters. Following the selection, the parameters were optimized using the Dakota optimizer. Dakota is an optimizer developed by Sandia National Laboratories to provide a systematic and rapid means of obtaining optimal designs using simulation-based models. As the object function, we calculated the sum of relative errors between simulated and measured leaf, above- and below-ground woody carbon in each of the eleven years. In an alternative run, errors in the last year (at the field survey) were weighted for priority. We compared several gradient-based global optimization methods in Dakota, starting from the default parameters of Biome-BGC. In the sensitivity analysis, the carbon allocation parameters between coarse root and leaf and between stem and leaf, together with the SLA, contributed strongly to both leaf and woody biomass changes; these parameters were selected for optimization. The measured leaf, above- and below-ground woody biomass carbon densities in the last year were 0.22, 1.81 and 0.86 kgC m-2, respectively, whereas those simulated in the non-optimized control case using all default parameters were 0.12, 2.26 and 0.52 kgC m-2. After optimizing the parameters, the simulated values improved to 0.19, 1.81 and 0.86 kgC m-2. The coliny global optimization method gave better fitness than the efficient global and NCSU DIRECT methods. The optimized parameters showed higher carbon allocation rates to coarse roots and leaves and lower SLA than the defaults, consistent with the general water-physiological response in a dry climate. The simulation using the weighted object function produced results closer to the measurements in the last year, at the cost of lower fitness in the earlier years.
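
    The object function described above is simple to state in code. A sketch under the stated definition follows (sum of relative errors over leaf, above- and below-ground woody carbon, with optional up-weighting of the survey year); the array shapes are illustrative.

```python
import numpy as np

def calibration_objective(sim, obs, last_year_weight=1.0):
    """Sum of relative errors between simulated and measured leaf, above- and
    below-ground woody carbon over all years; the final (survey) year can be
    up-weighted, as in the paper's alternative run. sim, obs: (years, 3)."""
    rel_err = np.abs(sim - obs) / np.abs(obs)
    w = np.ones(sim.shape[0])
    w[-1] = last_year_weight
    return float((w[:, None] * rel_err).sum())

# Check with the reported final-year values (kgC m-2): leaf, wood, coarse root
obs_last = np.array([0.22, 1.81, 0.86])
sim_default = np.array([0.12, 2.26, 0.52])   # non-optimized control case
print(calibration_objective(sim_default[None, :], obs_last[None, :]))
```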

  9. Optimization of Geothermal Well Placement under Geological Uncertainty

    NASA Astrophysics Data System (ADS)

    Schulte, Daniel O.; Arnold, Dan; Demyanov, Vasily; Sass, Ingo; Geiger, Sebastian

    2017-04-01

    Well placement optimization is critical to the commercial success of geothermal projects. However, uncertainties in geological parameters prohibit optimization based on a single scenario of the subsurface, particularly when few expensive wells are to be drilled. The optimization of borehole locations is usually based on numerical reservoir models to predict reservoir performance and entails the choice of objectives to optimize (total enthalpy, minimum enthalpy rate, production temperature) and the development options to adjust (well location, pump rate, difference in production and injection temperature). Optimization traditionally means trying different development options on a single geological realization, yet many different interpretations of the subsurface are possible. Therefore, we aim to optimize across a range of representative geological models to account for geological uncertainty in geothermal optimization. We present an approach that uses a response surface methodology based on a large number of geological realizations selected by experimental design to optimize the placement of geothermal wells in a realistic field example. A large number of geological scenarios and design options were simulated, and the response surfaces were constructed using polynomial proxy models, which consider both geological uncertainties and design parameters. The polynomial proxies were validated against additional simulation runs and shown to provide an adequate representation of the model response for the cases tested. The resulting proxy models allow the identification of the optimal borehole locations given the mean response of the geological scenarios from the proxy (i.e. maximizing or minimizing the mean response). The approach is demonstrated on the realistic Watt field example by optimizing the borehole locations to maximize the mean heat extraction from the reservoir under geological uncertainty. The training simulations are based on a comprehensive semi-synthetic data set of a hierarchical benchmark case study for a hydrocarbon reservoir, which specifically considers the interpretational uncertainty in the modeling workflow. The optimal choice of boreholes prolongs the time to cold-water breakthrough and allows for higher pump rates and increased water production temperatures.
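
    A minimal sketch of the proxy workflow: fit one polynomial response surface per geological realization, then pick the design that maximizes the mean proxy response. The quadratic basis in two design variables and the toy data are assumptions, not the paper's proxy specification.

```python
import numpy as np

def quad_features(X):
    """Quadratic polynomial basis in two design variables (e.g., well x, y)."""
    x, y = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])

def fit_proxies(X_train, Y_train):
    """One least-squares proxy per realization. Y_train: (runs, realizations)."""
    Phi = quad_features(X_train)
    return np.linalg.lstsq(Phi, Y_train, rcond=None)[0]    # (6, realizations)

def optimal_design(coefs, candidates):
    """Candidate maximizing the response averaged over geological scenarios."""
    mean_resp = (quad_features(candidates) @ coefs).mean(axis=1)
    return candidates[int(np.argmax(mean_resp))]

# Toy workflow: 40 training designs, 9 geological realizations
rng = np.random.default_rng(1)
X_train = rng.uniform(0.0, 1.0, (40, 2))
Y_train = rng.normal(size=(40, 9))            # stand-in for simulated heat output
coefs = fit_proxies(X_train, Y_train)
grid = np.array([[x, y] for x in np.linspace(0, 1, 21) for y in np.linspace(0, 1, 21)])
print(optimal_design(coefs, grid))
```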

  10. Lipid and Creatinine Adjustment to Evaluate Health Effects of Environmental Exposures.

    PubMed

    O'Brien, Katie M; Upson, Kristen; Buckley, Jessie P

    2017-03-01

    Urine- and serum-based biomarkers are useful for assessing individuals' exposure to environmental factors. However, variations in urinary creatinine (a measure of dilution) or serum lipid levels, if not adequately corrected for, can directly impact biomarker concentrations and bias exposure-disease association measures. Recent methodological literature has considered the complex relationships between creatinine or serum lipid levels, exposure biomarkers, outcomes, and other potentially relevant factors using directed acyclic graphs and simulation studies. The optimal measures of urinary dilution and serum lipids have also been investigated. Existing evidence supports the use of covariate-adjusted standardization plus creatinine adjustment for urinary biomarkers and standardization plus serum lipid adjustment for lipophilic, serum-based biomarkers. It is unclear which urinary dilution measure is best, but all serum lipid measures performed similarly. Future research should assess methods for pooled biomarkers and for studying diseases and exposures that affect creatinine or serum lipids directly.
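
    One common reading of covariate-adjusted standardization for urinary biomarkers is sketched below, assuming a linear model for log creatinine; this is illustrative, not the authors' exact specification.

```python
import numpy as np

def covariate_adjusted_ratio(creatinine, covariates):
    """Ratio of observed to covariate-predicted urinary creatinine. The
    prediction comes from an OLS fit of log-creatinine on covariates
    (e.g., age, sex, BMI); the variable names are hypothetical."""
    X = np.column_stack([np.ones(len(creatinine)), covariates])
    beta, *_ = np.linalg.lstsq(X, np.log(creatinine), rcond=None)
    predicted = np.exp(X @ beta)
    return creatinine / predicted
```

    Under this reading, the resulting ratio would then enter the exposure-outcome model alongside the unstandardized biomarker concentration, rather than dividing the biomarker directly.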

  11. Quantifying the effects of on-the-fly changes of seating configuration on the stability of a manual wheelchair.

    PubMed

    Thomas, Louise; Borisoff, Jaimie; Sparrey, Carolyn J

    2017-07-01

    In general, manual wheelchairs are designed with a fixed frame, which is not optimal for every situation. Adjustable on-the-fly seating allows users to rapidly adapt their wheelchair configuration to suit different tasks. These changes move the center of gravity (CoG) of the system, altering the wheelchair's stability and maneuverability. To assess these changes, a computer simulation of a manual wheelchair with adjustable seat, backrest, rear-axle position, and user position was created and validated with experimental testing. The stability of the wheelchair was most affected by the position of the rear axle, but adjustments to the backrest and seat angles also resulted in stability improvements that could be used when wheeling in the community. These findings identify the most influential parameters for wheelchair stability and maneuverability and provide quantitative guidelines for the use of manual wheelchairs with on-the-fly adjustable seats.

  13. [Study of pretreatment on microfiltration of huanglian jiedu decoction with ceramic membranes based on solution environment regulation theory].

    PubMed

    Li, Bo; Zhang, Lian-Jun; Guo, Li-Wei; Fu, Ting-Ming; Zhu, Hua-Xu

    2014-01-01

    To optimize the pretreatment of Huanglian Jiedu decoction before ceramic-membrane microfiltration and to verify the effect of different pretreatments on the multiple model systems present in aqueous extracts of Chinese herbs, the solution environment of Huanglian Jiedu decoction was adjusted by different pretreatments. Microfiltration flux, transmittance of the index ingredients, and removal rate of common polymers were used as indicators to study the effect of the different solution environments. Flocculation gave the highest stable permeate flux, followed by vacuum filtration and adjusting the pH to 9. The removal rate of common polymers was comparatively high, while the removal rate of protein was slightly lower than in the simulated solution. The transmittance of the index components was higher with pH adjustment and flocculation. Membrane-blocking resistance was the major factor in membrane fouling. Based on the above indicators, flocculation was the most effective pretreatment, followed by adjusting the pH to 9.

  14. Design of a monitor and simulation terminal (master) for space station telerobotics and telescience

    NASA Technical Reports Server (NTRS)

    Lopez, L.; Konkel, C.; Harmon, P.; King, S.

    1989-01-01

    Based on Space Station and planetary spacecraft communication time delays and bandwidth limitations, it will be necessary to develop an intelligent, general purpose ground monitor terminal capable of sophisticated data display and control of on-orbit facilities and remote spacecraft. The basic elements that make up a Monitor and Simulation Terminal (MASTER) include computer overlay video, data compression, forward simulation, mission resource optimization and high level robotic control. Hardware and software elements of a MASTER are being assembled for testbed use. Applications of Neural Networks (NNs) to some key functions of a MASTER are also discussed. These functions are overlay graphics adjustment, object correlation and kinematic-dynamic characterization of the manipulator.

  15. Nonsequential modeling of laser diode stacks using Zemax: simulation, optimization, and experimental validation.

    PubMed

    Coluccelli, Nicola

    2010-08-01

    The modeling of a real laser diode stack with the Zemax ray-tracing software operating in nonsequential mode is reported. The implementation of the model is presented, together with the geometric and optical parameters that must be adjusted to calibrate the model and match the simulated irradiance profiles with the experimental ones. The calibration of the model is based on one near-field and one far-field measurement. The model was validated by comparing simulated and experimental transverse irradiance profiles at different positions along the caustic formed by a lens. Spot sizes and waist location are predicted with a maximum error below 6%.

  16. Feed-Forward Neural Network Soft-Sensor Modeling of Flotation Process Based on Particle Swarm Optimization and Gravitational Search Algorithm

    PubMed Central

    Wang, Jie-Sheng; Han, Shuang

    2015-01-01

    For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, a feed-forward neural network (FNN) based soft-sensor model optimized by a hybrid algorithm combining particle swarm optimization (PSO) and the gravitational search algorithm (GSA) is proposed. Although GSA has good optimization capability, it converges slowly and easily falls into local optima. In this paper, the velocity and position vectors of GSA are therefore adjusted by the PSO algorithm to improve its convergence speed and prediction accuracy. Finally, the proposed hybrid algorithm is adopted to optimize the parameters of the FNN soft-sensor model. Simulation results show that the model has good generalization and prediction accuracy for the concentrate grade and tailings recovery rate, meeting the online soft-sensor requirements of real-time control in the flotation process. PMID:26583034

  17. [Optimization of the parameters of microcirculatory structural adaptation model based on improved quantum-behaved particle swarm optimization algorithm].

    PubMed

    Pan, Qing; Yao, Jialiang; Wang, Ruofan; Cao, Ping; Ning, Gangmin; Fang, Luping

    2017-08-01

    The vessels in the microcirculation keep adjusting their structure to meet the functional requirements of the different tissues. A previously developed theoretical model can reproduce the process of vascular structural adaptation and thus aid the study of microcirculatory physiology. Until now, however, the model has lacked appropriate methods for setting its parameters, which has limited further applications. This study proposes an improved quantum-behaved particle swarm optimization (QPSO) algorithm for setting the parameter values of this model. The optimization was performed on a real mesenteric microvascular network of the rat. The results showed that the improved QPSO was superior to standard particle swarm optimization, standard QPSO, and the previously reported Downhill algorithm. We conclude that the improved QPSO leads to better agreement between mathematical simulation and animal experiment, rendering the model more reliable for future physiological studies.

  18. Property-process relations in simulated clinical abrasive adjusting of dental ceramics.

    PubMed

    Yin, Ling

    2012-12-01

    This paper reports on property-process correlations in simulated clinical abrasive adjusting of a wide range of dental restorative ceramics using a dental handpiece and diamond burs. The seven materials studied included four mica-containing glass ceramics, a feldspathic porcelain, a glass-infiltrated alumina, and a yttria-stabilized tetragonal zirconia. The abrasive adjusting process was conducted under simulated clinical conditions using diamond burs and a clinical dental handpiece. An attempt was made to establish correlations between process characteristics (removal rate, chipping damage, and surface finish) and the material mechanical properties of hardness, fracture toughness, and Young's modulus. The results show that the removal rate is mainly a function of hardness, decreasing nonlinearly as hardness increases. No correlations were noted between the removal rates and more complex combinations of hardness, Young's modulus, and fracture toughness. Surface roughness was primarily a linear function of diamond grit size and was relatively independent of material. Chipping damage, in terms of average chipping width, decreased with fracture toughness except for the glass-infiltrated alumina. It also had higher linear correlations with critical strain energy release rates (R²=0.66) and brittleness (R²=0.62) and a lower linear correlation with indices of brittleness (R²=0.32). These results can guide the microstructural design of dental ceramics, help optimize performance, and inform the proper selection of technical parameters in clinical abrasive adjusting by dental practitioners. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. A kicking simulator to investigate the foot-ball interaction during a rugby place kick.

    PubMed

    Minnaar, Nick; van den Heever, Dawie J

    2015-01-01

    Foot-ball interaction is an important aspect of rugby place kicking but has received very little attention in the literature. This preliminary study presents an adjustable mechanical kicking simulator used to investigate the effect of different foot positions and orientations during foot-ball interaction on the resultant ball motion. It was found that changes in foot position and orientation during ball contact can have a large influence on ball motion. It is believed that, with further research, an optimal place-kicking technique can be found that maximizes energy transfer to the ball while still maintaining accuracy.

  20. User-preferred color temperature adjustment for smartphone display under varying illuminants

    NASA Astrophysics Data System (ADS)

    Choi, Kyungah; Suk, Hyeon-Jeong

    2014-06-01

    The study investigates user-preferred color temperature adjustment for smartphone displays by observing the effect of the illuminant's chromaticity and intensity on the optimal white points preferred by users. For visual examination, subjects evaluated 14 display stimuli presented on a Samsung Galaxy S3 under 19 ambient illuminants. The display stimuli were composed of 14 nuanced whites varying in color temperature from 2900 to 18,900 K. The illuminant conditions varied in combinations of color temperature (2600 to 20,100 K) and illuminance level (30 to 3100 lx) that simulated daily lighting experiences. The subjects were asked to assess the optimal level of the display color temperature based on their mental representation of the ideal white point. The study observed a positive correlation between the illuminant color temperatures and the optimal display color temperatures (r = 0.89, p < 0.05). However, the range of display color temperatures was much narrower than that of the illuminants. Based on the assessments by 100 subjects, a regression formula was derived to predict the user-preferred color temperature adjustment under changing illuminant chromaticity: Display Tcp = 6534.75 log(Illuminant Tcp) - 16304.68 (R = 0.87, p < 0.05). Moreover, supporting previous studies on color reproduction, the effect of illuminant chromaticity was relatively weaker under lower illuminance. The results of this experiment could serve as a theoretical basis for designers and manufacturers to adjust user-preferred color temperature for smartphone displays under various illuminant conditions.
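
    The reported regression is straightforward to apply in code; a sketch follows, assuming the logarithm is base 10 (the abstract does not state the base).

```python
import math

def preferred_display_cct(illuminant_cct):
    """Regression from the abstract (R = 0.87): user-preferred display color
    temperature (K) as a function of the ambient illuminant color temperature (K)."""
    return 6534.75 * math.log10(illuminant_cct) - 16304.68

# The illuminant range from the study maps onto a much narrower display range
for cct in (2600, 6500, 20100):
    print(cct, round(preferred_display_cct(cct)))
```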

  1. Evaluation of ride quality measurement procedures by subjective experiments using simulators

    NASA Technical Reports Server (NTRS)

    Klauder, L. T., Jr.; Clevenson, S. A.

    1975-01-01

    Since ride quality is, by definition, a matter of passenger response, a qualification procedure (QP) is needed to establish the degree to which any particular ride quality measurement procedure (RQMP) correlates with passenger responses. Once established, such a QP will provide very useful guidance for optimal adjustment of the various parameters that any given RQMP contains. A QP is proposed based on the use of a ride motion simulator and on test-subject responses to recordings of actual vehicle motions. Test-subject responses are used to determine simulator gain settings for the individual recordings such that all of the simulated rides are equally uncomfortable to the test subjects. Simulator platform accelerations versus time are recorded for each ride at its equal-discomfort gain setting. The equal-discomfort platform acceleration recordings are then digitized.

  2. Optimal Robust Matching of Engine Models to Test Data

    DTIC Science & Technology

    2009-02-28

    [Report front-matter residue, figure titles only: Monte Carlo process; Flowchart of SVD Calculations; Schematic Diagram of NPSS Engine Model Components; PW2037...] ...Numerical Propulsion System Simulation (NPSS). NPSS is an object-oriented modeling environment widely used throughout industry and the USAF. With NPSS, the engine is ... modifiers are available for adjusting the component representations. The scripting language in NPSS allowed for easy implementation of each solution.

  3. Study on numerical simulation of asymmetric structure aluminum profile extrusion based on ALE method

    NASA Astrophysics Data System (ADS)

    Chen, Kun; Qu, Yuan; Ding, Siyi; Liu, Changhui; Yang, Fuyong

    2018-05-01

    Using the HyperXtrude module, based on the Arbitrary Lagrangian-Eulerian (ALE) finite element method, this paper successfully simulates the steady-state extrusion process of an asymmetric-structure aluminum profile die. A verification experiment confirms the simulation results. Analysis of the stress-strain field, temperature field, and extrusion velocity of the metal shows that the simulation predictions and the experimental results are consistent. Schemes for die correction and optimization are then discussed: by adjusting the bearing length and core thickness, and by adopting a feeder-plate protection structure, a short shunt bridge in the upper die, and a three-level bonding container in the lower die to control the metal flow, a qualified aluminum profile can be obtained.

  4. Model Refinement and Simulation of Groundwater Flow in Clinton, Eaton, and Ingham Counties, Michigan

    USGS Publications Warehouse

    Luukkonen, Carol L.

    2010-01-01

    A groundwater-flow model that was constructed in 1996 of the Saginaw aquifer was refined to better represent the regional hydrologic system in the Tri-County region, which consists of Clinton, Eaton, and Ingham Counties, Michigan. With increasing demand for groundwater, the need to manage withdrawals from the Saginaw aquifer has become more important, and the 1996 model could not adequately address issues of water quality and quantity. An updated model was needed to better address potential effects of drought, locally high water demands, reduction of recharge by impervious surfaces, and issues affecting water quality, such as contaminant sources, on water resources and the selection of pumping rates and locations. The refinement of the groundwater-flow model allows simulations to address these issues of water quantity and quality and provides communities with a tool that will enable them to better plan for expansion and protection of their groundwater-supply systems. Model refinement included representation of the system under steady-state and transient conditions, adjustments to the estimated regional groundwater-recharge rates to account for both temporal and spatial differences, adjustments to the representation and hydraulic characteristics of the glacial deposits and Saginaw Formation, and updates to groundwater-withdrawal rates to reflect changes from the early 1900s to 2005. Simulations included steady-state conditions (in which stresses remained constant and changes in storage were not included) and transient conditions (in which stresses changed in annual and monthly time scales and changes in storage within the system were included). These simulations included investigation of the potential effects of reduced recharge due to impervious areas or to low-rainfall/drought conditions, delineation of contributing areas with recent pumping rates, and optimization of pumping subject to various quantity and quality constraints. Simulation results indicate potential declines in water levels in both the upper glacial aquifer and the upper sandstone bedrock aquifer under steady-state and transient conditions when recharge was reduced by 20 and 50 percent in urban areas. Transient simulations were done to investigate reduced recharge due to low rainfall and increased pumping to meet anticipated future demand with 24 months (2 years) of modified recharge or modified recharge and pumping rates. During these two simulation years, monthly recharge rates were reduced by about 30 percent, and monthly withdrawal rates for Lansing area production wells were increased by 15 percent. The reduction in the amount of water available to recharge the groundwater system affects the upper model layers representing the glacial aquifers more than the deeper bedrock layers. However, with a reduction in recharge and an increase in withdrawals from the bedrock aquifer, water levels in the bedrock layers are affected more than those in the glacial layers. Differences in water levels between simulations with reduced recharge and reduced recharge with increased pumping are greatest in the Lansing area and least away from pumping centers, as expected. Additionally, the increases in pumping rates had minimal effect on most simulated streamflows. Additional simulations included updating the estimated 10-year wellhead-contributing areas for selected Lansing-area wells under 2006-7 pumping conditions. 
Optimization of groundwater withdrawals with a water-resource management model was done to determine withdrawal rates while minimizing operational costs and to determine withdrawal locations to achieve additional capacity while meeting specified head constraints. In these optimization scenarios, the desired groundwater withdrawals are achieved by simulating managed wells (where pumping rates can be optimized) and unmanaged wells (where pumping rates are not optimized) and by using various combinations of existing and proposed well locations.

  5. An Improved Dynamic Model for the Respiratory Response to Exercise

    PubMed Central

    Serna, Leidy Y.; Mañanas, Miguel A.; Hernández, Alher M.; Rabinovich, Roberto A.

    2018-01-01

    Respiratory system modeling has been studied extensively in steady-state conditions to simulate sleep disorders, to predict the system's behavior under ventilatory diseases or stimuli, and to simulate its interaction with mechanical ventilation. Nevertheless, studies focused on the instantaneous response are limited, which restricts their application in clinical practice. The aim of this study is twofold: first, to analyze both the dynamic and static responses of two known respiratory models under exercise stimuli, using an incremental exercise stimulus sequence (to analyze the model responses when step inputs are applied) and experimental data (to assess the prediction capability of each model); and second, to propose changes in the models' structures to improve their transient and stationary responses. The versatility of the resulting model versus the other two is shown by its ability to simulate ventilatory stimuli, such as exercise, with proper regulation of the arterial blood gases, suitable time constants, and a better fit to experimental data. The proposed model adjusts the breathing pattern every respiratory cycle using an optimization criterion based on minimization of the work of breathing through regulation of respiratory frequency. PMID:29467674

  6. Design and optimization analysis of dual material gate on DG-IMOS

    NASA Astrophysics Data System (ADS)

    Singh, Sarabdeep; Raman, Ashish; Kumar, Naveen

    2017-12-01

    The impact ionization MOSFET (IMOS) evolved to overcome the 60 mV/decade sub-threshold slope (SS) limit of the conventional MOSFET at room temperature. In this work, the device performance of the p-type double-gate impact ionization MOSFET (DG-IMOS) is first optimized by adjusting the device design parameters: the ratio of gate to intrinsic length, the gate dielectric thickness, and the gate work function. Second, the dual-material-gate (DMG) DG-IMOS is proposed and investigated, and it is further optimized to obtain the best possible performance. Simulation results reveal that the DMG DG-IMOS, compared with the DG-IMOS, shows better I_ON, I_ON/I_OFF ratio, and RF parameters. By properly tuning the lengths of the two gate materials at a ratio of 1.5, the DMG DG-IMOS achieves an optimized I_ON/I_OFF ratio of 2.87 × 10^9 with I_ON of 11.87 × 10^-4 A/μm and a transconductance of 1.06 × 10^-3 S/μm. The analysis shows that the length of the drain-side material should be greater than that of the source-side material to attain higher transconductance in the DMG DG-IMOS.

  7. Automated divertor target design by adjoint shape sensitivity analysis and a one-shot method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dekeyser, W., E-mail: Wouter.Dekeyser@kuleuven.be; Reiter, D.; Baelmans, M.

    As magnetic confinement fusion progresses towards the development of first reactor-scale devices, computational tokamak divertor design is a topic of high priority. Presently, edge plasma codes are used in a forward approach, where magnetic field and divertor geometry are manually adjusted to meet design requirements. Due to the complex edge plasma flows and large number of design variables, this method is computationally very demanding. On the other hand, efficient optimization-based design strategies have been developed in computational aerodynamics and fluid mechanics. Such an optimization approach to divertor target shape design is elaborated in the present paper. A general formulation of the design problem is given, and conditions characterizing the optimal designs are formulated. Using a continuous adjoint framework, design sensitivities can be computed at the cost of only two edge plasma simulations, independent of the number of design variables. Furthermore, by using a one-shot method the entire optimization problem can be solved at an equivalent cost of only a few forward simulations. The methodology is applied to target shape design for uniform power load, in simplified edge plasma geometry.

  8. Shape optimization of pulsatile ventricular assist devices using FSI to minimize thrombotic risk

    NASA Astrophysics Data System (ADS)

    Long, C. C.; Marsden, A. L.; Bazilevs, Y.

    2014-10-01

    In this paper we perform shape optimization of a pediatric pulsatile ventricular assist device (PVAD). The device simulation is carried out using fluid-structure interaction (FSI) modeling techniques within a computational framework that combines FEM for fluid mechanics and isogeometric analysis for structural mechanics modeling. The PVAD FSI simulations are performed under realistic conditions (i.e., flow speeds, pressure levels, boundary conditions, etc.), and account for the interaction of air, blood, and a thin structural membrane separating the two fluid subdomains. The shape optimization study is designed to reduce thrombotic risk, a major clinical problem in PVADs. Thrombotic risk is quantified in terms of particle residence time in the device blood chamber. Methods to compute particle residence time in the context of moving spatial domains are presented in a companion paper published in the same issue (Comput Mech, doi: 10.1007/s00466-013-0931-y, 2013). The surrogate management framework, a derivative-free pattern search optimization method that relies on surrogates for increased efficiency, is employed in this work. For the optimization study shown here, particle residence time is used to define a suitable cost or objective function, while four adjustable design optimization parameters are used to define the device geometry. The FSI-based optimization framework is implemented in a parallel computing environment, and deployed with minimal user intervention. Using five SEARCH/POLL steps, the optimization scheme identifies a PVAD design with significantly better throughput efficiency than the original device.

  9. Optomechanical design and analysis of a self-adaptive mounting method for optimizing phase matching of large potassium dihydrogen phosphate converter

    NASA Astrophysics Data System (ADS)

    Zhang, Zheng; Tian, Menjiya; Quan, Xusong; Pei, Guoqing; Wang, Hui; Liu, Tianye; Long, Kai; Xiong, Zhao; Rong, Yiming

    2017-11-01

    Surface control and phase matching of large laser conversion optics are urgent requirements and huge challenges in high-power solid-state laser facilities. A self-adaptive, nanocompensating mounting configuration for a large-aperture potassium dihydrogen phosphate (KDP) frequency doubler is proposed, based on a lever-type surface-correction mechanism. A combined mechanical, numerical, and optical model is developed and employed to evaluate the comprehensive performance of this mounting method. The results validate the method's advantages in surface adjustment and phase-matching improvement. In addition, the optimal value of the modulation force is determined through a series of simulations and calculations.

  10. Error analysis of mechanical system and wavelength calibration of monochromator

    NASA Astrophysics Data System (ADS)

    Zhang, Fudong; Chen, Chen; Liu, Jie; Wang, Zhihong

    2018-02-01

    This study focuses on improving the accuracy of a grating monochromator on the basis of the grating diffraction equation in combination with an analysis of the mechanical transmission relationship between the grating, the sine bar, and the screw of the scanning mechanism. First, the relationship between the mechanical error in the monochromator with the sine drive and the wavelength error is analyzed. Second, a mathematical model of the wavelength error and mechanical error is developed, and an accurate wavelength calibration method based on the sine bar's length adjustment and error compensation is proposed. Based on the mathematical model and calibration method, experiments using a standard light source with known spectral lines and a pre-adjusted sine bar length are conducted. The model parameter equations are solved, and subsequent parameter optimization simulations are performed to determine the optimal length ratio. Lastly, the length of the sine bar is adjusted. The experimental results indicate that the wavelength accuracy is ±0.3 nm, which is better than the original accuracy of ±2.6 nm. The results confirm the validity of the error analysis of the mechanical system of the monochromator as well as the validity of the calibration method.
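
    To make the sine-drive geometry concrete, the sketch below models the wavelength as linear in screw position via the grating equation and estimates an effective sine-bar length from standard spectral lines by least squares; the groove density, diffraction order, and fitting scheme are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def wavelength_nm(screw_pos_mm, sine_bar_mm, grooves_per_mm=1200, order=1):
    """Sine-drive monochromator: sin(theta) = s / L, and the grating equation
    m * lambda = d * sin(theta) makes the wavelength linear in screw position."""
    d_nm = 1e6 / grooves_per_mm          # groove spacing in nm
    return (d_nm / order) * screw_pos_mm / sine_bar_mm

def calibrate_bar_length(screw_pos_mm, known_lines_nm, grooves_per_mm=1200, order=1):
    """Effective sine-bar length from standard spectral lines:
    lambda = (d/m) * s / L, so a through-origin fit of lambda vs s gives L."""
    d_nm = 1e6 / grooves_per_mm
    s = np.asarray(screw_pos_mm, float)
    lam = np.asarray(known_lines_nm, float)
    slope = (s @ lam) / (s @ s)          # least-squares slope through the origin
    return (d_nm / order) / slope
```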

  11. ORBSIM- ESTIMATING GEOPHYSICAL MODEL PARAMETERS FROM PLANETARY GRAVITY DATA

    NASA Technical Reports Server (NTRS)

    Sjogren, W. L.

    1994-01-01

    The ORBSIM program was developed for the accurate extraction of geophysical model parameters from Doppler radio tracking data acquired from orbiting planetary spacecraft. The model of the proposed planetary structure is used in a numerical integration of the spacecraft along simulated trajectories around the primary body. Using line-of-sight (LOS) Doppler residuals, ORBSIM applies fast and efficient modelling and optimization procedures which avoid the traditional complex dynamic reduction of data. ORBSIM produces quantitative geophysical results such as size, depth, and mass. ORBSIM has been used extensively to investigate topographic features on the Moon, Mars, and Venus. The program has proven particularly suitable for modelling gravitational anomalies and mascons. The basic observable for spacecraft-based gravity data is the Doppler frequency shift of a transponded radio signal. The time derivative of this signal carries information regarding the gravity field acting on the spacecraft in the LOS direction (the LOS direction being the path between the spacecraft and the receiving station, either Earth or another satellite). Many dynamic factors are taken into account: Earth rotation, solar radiation, acceleration from planetary bodies, tracking station time and location adjustments, etc. The actual trajectories of the spacecraft are simulated using least squares fits to conic motion. The theoretical Doppler readings from the simulated orbits are compared to actual Doppler observations, and another least squares adjustment is made. ORBSIM has three modes of operation: trajectory simulation, optimization, and gravity modelling. In all cases, an initial gravity model of curved and/or flat disks, harmonics, and/or a force table is required as input. ORBSIM is written in FORTRAN 77 for batch execution and has been implemented on a DEC VAX 11/780 computer operating under VMS. This program was released in 1985.

  12. A Novel Adjustment Method for Shearer Traction Speed through Integration of T-S Cloud Inference Network and Improved PSO

    PubMed Central

    Si, Lei; Wang, Zhongbin; Yang, Yinwei

    2014-01-01

    In order to adjust the shearer traction speed efficiently and accurately, a novel approach based on the Takagi-Sugeno (T-S) cloud inference network (CIN) and improved particle swarm optimization (IPSO) is proposed. The T-S CIN is built by combining the cloud model with the T-S fuzzy neural network. The IPSO algorithm employs a parameter-automation adjustment strategy and velocity resetting to significantly improve the performance of the basic PSO algorithm in global search and fine-tuning of solutions, and the flowchart of the proposed approach is designed. Furthermore, simulation examples are carried out, and comparison results indicate that the proposed method is feasible, efficient, and outperforms others. Finally, an industrial application example from a coal-mining face demonstrates the effectiveness of the proposed system. PMID:25506358

  13. A fuzzy discrete harmony search algorithm applied to annual cost reduction in radial distribution systems

    NASA Astrophysics Data System (ADS)

    Ameli, Kazem; Alfi, Alireza; Aghaebrahimi, Mohammadreza

    2016-09-01

    Like other optimization algorithms, harmony search (HS) is quite sensitive to its tuning parameters, and several variants of the HS algorithm have been developed to reduce this parameter dependence. This article proposes a novel version of the discrete harmony search (DHS) algorithm, namely fuzzy discrete harmony search (FDHS), for optimizing capacitor placement in distribution systems. In the FDHS, a fuzzy system dynamically adjusts two parameter values, the harmony memory considering rate and the pitch-adjusting rate, with respect to the normalized mean fitness of the harmony memory. The key advantage of FDHS is that it needs substantially fewer iterations to reach convergence than classical discrete harmony search (CDHS). To the authors' knowledge, this is the first application of DHS to determining appropriate capacitor locations and sizes in distribution systems. Simulations are provided for 10-, 34-, 85- and 141-bus distribution systems using CDHS and FDHS. The results show the effectiveness of FDHS over previous related studies.
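
    A simplified stand-in for the fuzzy adjustment is sketched below: triangular memberships over the normalized mean fitness of the harmony memory are defuzzified into HMCR and PAR values. The membership shapes and output levels are assumptions; the abstract does not give the actual rule base.

```python
def fuzzy_hs_parameters(norm_mean_fitness):
    """Map the normalized mean fitness of the harmony memory (0 = converged,
    1 = dispersed) to (HMCR, PAR) via three triangular memberships."""
    f = min(max(norm_mean_fitness, 0.0), 1.0)
    low = max(0.0, 1.0 - 2.0 * f)            # membership: memory converged
    mid = 1.0 - abs(2.0 * f - 1.0)           # membership: intermediate
    high = max(0.0, 2.0 * f - 1.0)           # membership: memory dispersed
    total = low + mid + high
    # converged memory -> exploit (high HMCR, low PAR); dispersed -> explore
    hmcr = (0.98 * low + 0.90 * mid + 0.70 * high) / total
    par = (0.10 * low + 0.30 * mid + 0.50 * high) / total
    return hmcr, par

# The two rates would be recomputed each iteration before improvising a harmony
print(fuzzy_hs_parameters(0.1), fuzzy_hs_parameters(0.9))
```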

  14. Directional variance adjustment: bias reduction in covariance matrices based on factor analysis with an application to portfolio optimization.

    PubMed

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, similar to the well-known systematic error in the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes this systematic error. In a thorough empirical study of the US, European, and Hong Kong stock markets, we show that the proposed method leads to improved portfolio allocation.
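
    The class of estimators the paper builds on can be sketched as follows: a factor-model covariance Sigma = B B^T + Psi, here obtained via scikit-learn's FactorAnalysis, feeding a minimum-variance allocation. The DVA correction itself is not specified in the abstract and is not reproduced; the toy data are assumptions.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Factor-model covariance estimate: Sigma = B B^T + diag(Psi)
rng = np.random.default_rng(0)
returns = rng.standard_normal((250, 50))      # toy data: 250 days, 50 assets

fa = FactorAnalysis(n_components=5).fit(returns)
sigma_fa = fa.get_covariance()                # loadings term plus noise variances

# Such an estimate would then feed, e.g., a minimum-variance portfolio
w = np.linalg.solve(sigma_fa, np.ones(50))
w /= w.sum()
```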

  16. Dynamic Network Selection for Multicast Services in Wireless Cooperative Networks

    NASA Astrophysics Data System (ADS)

    Chen, Liang; Jin, Le; He, Feng; Cheng, Hanwen; Wu, Lenan

    In next-generation mobile multimedia communications, different wireless access networks are expected to cooperate. However, choosing an optimal transmission path in this scenario is a challenging task. This paper focuses on the problem of selecting the optimal access network for multicast services in cooperative mobile and broadcasting networks. An algorithm is proposed that considers multiple decision factors and multiple optimization objectives. An analytic hierarchy process (AHP) method is applied to schedule the service queue, and an artificial neural network (ANN) is used to improve the flexibility of the algorithm. Simulation results show that applying the AHP method yields a set of weight ratios that improves performance across the multiple objectives, and that the ANN method effectively adapts the weight ratios when new user waiting thresholds arise.

  17. Coupling control and optimization at the Canadian Light Source

    NASA Astrophysics Data System (ADS)

    Wurtz, W. A.

    2018-06-01

    We present a detailed study using the skew quadrupoles in the Canadian Light Source storage ring lattice to control the parameters of a coupled lattice. We calculate the six-dimensional beam envelope matrix and use it to produce a variety of objective functions for optimization with the Multi-Objective Particle Swarm Optimization (MOPSO) algorithm. MOPSO produces a number of skew quadrupole configurations that we apply to the storage ring. We use the X-ray synchrotron radiation diagnostic beamline to image the beam, and we measure the vertical dispersion and beam lifetime. We observe satisfactory agreement between the measurements and simulations. These methods can be used to adjust phase-space coupling in a rational way and have applications to fine-tuning the vertical emittance and Touschek lifetime and to measuring the gas-scattering lifetime.

  18. Ion acceleration enhanced by target ablation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, S.; State Key Laboratory of Nuclear Physics and Technology, and Key Lab of HEDPS, CAPT, Peking University, Beijing 100871; Institute of Radiation, Helmholtz-Zentrum Dresden-Rossendorf, 01314 Dresden

    2015-07-15

    Laser proton acceleration can be enhanced by target ablation, owing to the energetic electrons generated in the ablation preplasma. When the ablation pulse matches the main pulse, the enhancement is optimized because the electrons' energy density is highest. A scaling law between the ablation pulse and the main pulse is confirmed by simulation, showing that for a given CPA pulse and target, a several-fold improvement in proton energy can be achieved by adjusting the target ablation.

  19. Numerical Optimization of Density Functional Tight Binding Models: Application to Molecules Containing Carbon, Hydrogen, Nitrogen, and Oxygen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnapriyan, A.; Yang, P.; Niklasson, A. M. N.

    New parametrizations for semiempirical density functional tight binding (DFTB) theory have been developed by the numerical optimization of adjustable parameters to minimize errors in the atomization energy and interatomic forces with respect to ab initio calculated data. Initial guesses for the radial dependences of the Slater-Koster bond integrals and overlap integrals were obtained from minimum basis density functional theory calculations. The radial dependences of the pair potentials and the bond and overlap integrals were represented by simple analytic functions. The adjustable parameters in these functions were optimized by simulated annealing and steepest descent algorithms to minimize the value of an objective function that quantifies the error between the DFTB model and ab initio calculated data. The accuracy and transferability of the resulting DFTB models for the C, H, N, and O system were assessed by comparing the DFTB-predicted atomization energies and equilibrium molecular geometries of small molecules not included in the training data with ab initio data. The DFTB models provide accurate predictions of the properties of hydrocarbons and more complex molecules containing C, H, N, and O.
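
    A generic simulated-annealing driver of the kind described is sketched below; the Gaussian proposal, geometric cooling schedule, and constants are assumptions, and the objective passed in would quantify the atomization-energy and force errors against the ab initio training data.

```python
import numpy as np

def simulated_annealing(obj, x0, steps=5000, t0=1.0, cooling=0.999, scale=0.05, seed=0):
    """Minimize obj over the adjustable parameters by simulated annealing."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    fx = obj(x)
    best, fbest, t = x.copy(), fx, t0
    for _ in range(steps):
        cand = x + rng.normal(0.0, scale, x.size)   # Gaussian proposal (assumed)
        fc = obj(cand)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if fc < fx or rng.random() < np.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
        t *= cooling                                # geometric cooling (assumed)
    return best, fbest
```

    Per the abstract, a steepest-descent polish would follow the annealing stage to refine the returned parameters.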

  20. Celestial Object Imaging Model and Parameter Optimization for an Optical Navigation Sensor Based on the Well Capacity Adjusting Scheme.

    PubMed

    Wang, Hao; Jiang, Jie; Zhang, Guangjun

    2017-04-21

    The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. A single optical navigation sensor generally cannot produce well-exposed images of both the target celestial body and stars because their irradiance difference is large, so multi-sensor integration or complex image processing algorithms are commonly used instead. This study analyzes and demonstrates the feasibility of imaging both the target celestial body and stars, well exposed, within a single exposure through a single field of view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and the star spot imaging model are established for the case where the WCA scheme is applied. Furthermore, the effect of the exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed models. Optimal exposure parameters are derived by Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratory and night sky experiments validate the correctness of the proposed models and optimal exposure parameters.

  1. Numerical Optimization of Density Functional Tight Binding Models: Application to Molecules Containing Carbon, Hydrogen, Nitrogen, and Oxygen

    DOE PAGES

    Krishnapriyan, A.; Yang, P.; Niklasson, A. M. N.; ...

    2017-10-17

    New parametrizations for semiempirical density functional tight binding (DFTB) theory have been developed by the numerical optimization of adjustable parameters to minimize errors in the atomization energy and interatomic forces with respect to ab initio calculated data. Initial guesses for the radial dependences of the Slater-Koster bond integrals and overlap integrals were obtained from minimum basis density functional theory calculations. The radial dependences of the pair potentials and the bond and overlap integrals were represented by simple analytic functions. The adjustable parameters in these functions were optimized by simulated annealing and steepest descent algorithms to minimize the value of an objective function that quantifies the error between the DFTB model and ab initio calculated data. The accuracy and transferability of the resulting DFTB models for the C, H, N, and O system were assessed by comparing the DFTB predictions of atomization energies and equilibrium molecular geometries of small molecules not included in the training data to ab initio data. The DFTB models provide accurate predictions of the properties of hydrocarbons and more complex molecules containing C, H, N, and O.

  2. Celestial Object Imaging Model and Parameter Optimization for an Optical Navigation Sensor Based on the Well Capacity Adjusting Scheme

    PubMed Central

    Wang, Hao; Jiang, Jie; Zhang, Guangjun

    2017-01-01

    The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. A single optical navigation sensor generally cannot produce well-exposed images of both the target celestial body and stars because their irradiance difference is large, so multi-sensor integration or complex image processing algorithms are commonly used instead. This study analyzes and demonstrates the feasibility of imaging both the target celestial body and stars, well exposed, within a single exposure through a single field of view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and the star spot imaging model are established for the case where the WCA scheme is applied. Furthermore, the effect of the exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed models. Optimal exposure parameters are derived by Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratory and night sky experiments validate the correctness of the proposed models and optimal exposure parameters. PMID:28430132

  3. Liner Optimization Studies Using the Ducted Fan Noise Prediction Code TBIEM3D

    NASA Technical Reports Server (NTRS)

    Dunn, M. H.; Farassat, F.

    1998-01-01

    In this paper we demonstrate the usefulness of the ducted fan noise prediction code TBIEM3D as a liner optimization design tool. Boundary conditions on the interior duct wall allow for hard walls or a locally reacting liner with axially segmented, circumferentially uniform impedance. Two liner optimization studies are considered in which the farfield noise attenuation due to the presence of a liner is maximized by adjusting the liner impedance. In the first example, the dependence of the optimal liner impedance on frequency and liner length is examined. Results show that both the optimal impedance and the attenuation levels are significantly influenced by liner length and frequency. In the second example, TBIEM3D is used to compare radiated sound pressure levels between optimal and non-optimal liner cases at conditions designed to simulate take-off. Significant noise reduction is achieved for most of the sound field by selecting the optimal or near-optimal liner impedance. Our results also indicate that there is a relatively large region of the impedance plane over which optimal or near-optimal liner behavior is attainable, an important conclusion for the designer, since manufacturing imprecision causes variations in liner characteristics.

  4. Study of components and statistical reaction mechanism in simulation of nuclear process for optimized production of 64Cu and 67Ga medical radioisotopes using TALYS, EMPIRE and LISE++ nuclear reaction and evaporation codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nasrabadi, M. N., E-mail: mnnasrabadi@ast.ui.ac.ir; Sepiani, M.

    2015-03-30

    Production of medical radioisotopes is one of the most important tasks in the field of nuclear technology. These radioactive isotopes are mainly produced through a variety of nuclear processes. In this research, excitation functions and nuclear reaction mechanisms are studied to simulate the production of these radioisotopes with the TALYS, EMPIRE and LISE++ reaction codes; the parameters and the different models of nuclear level density, one of the most important components in statistical reaction models, are then adjusted for optimum production of the desired radioactive yields.

  5. Study of components and statistical reaction mechanism in simulation of nuclear process for optimized production of 64Cu and 67Ga medical radioisotopes using TALYS, EMPIRE and LISE++ nuclear reaction and evaporation codes

    NASA Astrophysics Data System (ADS)

    Nasrabadi, M. N.; Sepiani, M.

    2015-03-01

    Production of medical radioisotopes is one of the most important tasks in the field of nuclear technology. These radioactive isotopes are mainly produced through a variety of nuclear processes. In this research, excitation functions and nuclear reaction mechanisms are studied to simulate the production of these radioisotopes with the TALYS, EMPIRE and LISE++ reaction codes; the parameters and the different models of nuclear level density, one of the most important components in statistical reaction models, are then adjusted for optimum production of the desired radioactive yields.

  6. Introducing folding stability into the score function for computational design of RNA-binding peptides boosts the probability of success.

    PubMed

    Xiao, Xingqing; Agris, Paul F; Hall, Carol K

    2016-05-01

    A computational strategy that integrates our peptide search algorithm with atomistic molecular dynamics simulation was used to design rational peptide drugs that recognize and bind to the anticodon stem and loop domain (ASL(Lys3)) of human tRNA(Lys3) (anticodon UUU) for the purpose of interrupting HIV replication. The score function of the search algorithm was improved by adding a peptide stability term, weighted by an adjustable factor λ, to the peptide binding free energy. The five best peptide sequences associated with five different values of λ were determined using the search algorithm and then input into atomistic simulations to examine the stability of the peptides' folded conformations and their ability to bind to ASL(Lys3). Simulation results demonstrated that an intermediate value of λ achieves a good balance between optimizing the peptide's binding ability and stabilizing its folded conformation during the sequence evolution process, and hence leads to optimal binding to the target ASL(Lys3). Thus, the addition of a peptide stability term significantly improves the success rate of our peptide design search. © 2016 Wiley Periodicals, Inc.
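
    The λ-weighted score can be illustrated with a toy sketch; both energy functions below are placeholders (assumptions) for the real binding free energy and folding-stability estimators used by the search algorithm.

      def binding_energy(seq):
          # Placeholder binding score: lower (more negative) is better.
          return -sum(ord(c) % 7 for c in seq) / len(seq)

      def stability_energy(seq):
          # Placeholder folding-stability term.
          return -sum(ord(c) % 5 for c in seq) / len(seq)

      def score(seq, lam):
          # Total score = binding term + lambda * stability term; an
          # intermediate lambda balances binding against fold stability.
          return binding_energy(seq) + lam * stability_energy(seq)

      candidates = ["RKWQHL", "KKRPLW", "HRWWQK"]
      for lam in (0.0, 0.5, 1.0):
          print(lam, min(candidates, key=lambda s: score(s, lam)))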

  7. Adjoint-Based Climate Model Tuning: Application to the Planet Simulator

    NASA Astrophysics Data System (ADS)

    Lyu, Guokun; Köhl, Armin; Matei, Ion; Stammer, Detlef

    2018-01-01

    The adjoint method is used to calibrate the medium complexity climate model "Planet Simulator" through parameter estimation. Identical twin experiments demonstrate that this method can retrieve default values of the control parameters when using a long assimilation window of the order of 2 months. Chaos synchronization through nudging, required to overcome limits in the temporal assimilation window in the adjoint method, is employed successfully to reach this assimilation window length. When assimilating ERA-Interim reanalysis data, the observations of air temperature and the radiative fluxes are the most important data for adjusting the control parameters. The global mean net longwave fluxes at the surface and at the top of the atmosphere are significantly improved by tuning two model parameters controlling the absorption of clouds and water vapor. The global mean net shortwave radiation at the surface is improved by optimizing three model parameters controlling cloud optical properties. The optimized parameters improve the free model (without nudging terms) simulation in a way similar to that in the assimilation experiments. Results suggest a promising way for tuning uncertain parameters in nonlinear coupled climate models.

  8. A structural topological optimization method for multi-displacement constraints and any initial topology configuration

    NASA Astrophysics Data System (ADS)

    Rong, J. H.; Yi, J. H.

    2010-10-01

    In density-based topological design, one expects the final result to consist of elements that are either black (solid material) or white (void), without grey areas, and one also expects the optimal topology to be obtainable from any initial topology configuration. An improved structural topological optimization method for multi-displacement constraints is proposed in this paper. In the proposed method, the whole optimization process is divided into two optimization adjustment phases and a phase transferring step. First, an optimization model is built to handle the varied displacement limits, design space adjustments, and reasonable relations between the element stiffness matrix and mass and its element topology variable. Second, a procedure is proposed to solve the optimization problem formulated in the first optimization adjustment phase, starting with a small design space and advancing to a larger design space. The design space adjustments are automatic when the design domain needs expansion, and they do not affect the convergence of the proposed method. The topology obtained by the proposed procedure in the first optimization phase can approach the vicinity of the optimum topology. A heuristic algorithm is then given to improve efficiency and make the designed structural topology black/white in both the phase transferring step and the second optimization adjustment phase, in which the optimum topology is finally obtained. Two examples show that the topologies obtained by the proposed method have very good 0/1 design distribution properties, and that computational efficiency is enhanced by reducing the number of elements in the structural finite element model during the two optimization adjustment phases. The examples also show that the method is robust and practicable.

  9. Simulation on an optimal combustion control strategy for 3-D temperature distributions in tangentially pc-fired utility boiler furnaces.

    PubMed

    Wang, Xi-fen; Zhou, Huai-chun

    2005-01-01

    The control of the 3-D temperature distribution in a utility boiler furnace is essential for the safe, economic and clean operation of a pc-fired furnace with a multi-burner system. The development of visualization of 3-D temperature distributions in pc-fired furnaces makes possible a new combustion control strategy that takes the furnace temperature directly as its goal, improving control quality for the combustion processes. This paper studies such a strategy: the furnace is divided into several parts in the vertical direction, and the average temperature and its bias from the center of every cross section are extracted from the visualization results of the 3-D temperature distributions. In the simulation stage, a computational fluid dynamics (CFD) code served to calculate the 3-D temperature distributions in a furnace, and a linear model was then set up to relate the features of the temperature distributions to the inputs of the combustion processes, such as the flow rates of fuel and air fed into the furnace through all the burners. An adaptive genetic algorithm was adopted to find the optimal combination of input parameters that forms the optimal 3-D temperature field desired for boiler operation. Simulation results showed that the strategy could quickly identify the factors driving the temperature distribution away from the optimal state and give correct adjustment suggestions.

  10. Photoinjector optimization using a derivative-free, model-based trust-region algorithm for the Argonne Wakefield Accelerator

    NASA Astrophysics Data System (ADS)

    Neveu, N.; Larson, J.; Power, J. G.; Spentzouris, L.

    2017-07-01

    Model-based, derivative-free, trust-region algorithms are increasingly popular for optimizing computationally expensive numerical simulations. A strength of such methods is their efficient use of function evaluations. In this paper, we use one such algorithm to optimize the beam dynamics in two cases of interest at the Argonne Wakefield Accelerator (AWA) facility. First, we minimize the emittance of a 1 nC electron bunch produced by the AWA rf photocathode gun by adjusting three parameters: rf gun phase, solenoid strength, and laser radius. The algorithm converges to a set of parameters that yield an emittance of 1.08 μm. Second, we expand the number of optimization parameters to model the complete AWA rf photoinjector (the gun and six accelerating cavities) at 40 nC. The optimization algorithm is used in a Pareto study that compares the trade-off between emittance and bunch length for the AWA 70 MeV photoinjector.
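
    The class of method used here can be sketched as follows (a generic model-based, derivative-free trust-region step, not the authors' actual algorithm): fit a quadratic surrogate to objective samples inside the trust region, minimize it there, and grow or shrink the region according to how well the surrogate predicted the true decrease. The objective is a placeholder for an expensive beam-dynamics simulation.

      import numpy as np
      from scipy.optimize import minimize

      def expensive_objective(x):
          # Placeholder for a beam-dynamics run returning, e.g., emittance.
          return (x[0] - 0.3) ** 2 + 2.0 * (x[1] + 0.1) ** 2 + 0.5 * x[0] * x[1]

      def quad_features(S):
          # Features of a full 2-D quadratic: 1, s1, s2, s1^2/2, s2^2/2, s1*s2.
          return np.column_stack([np.ones(len(S)), S[:, 0], S[:, 1],
                                  0.5 * S[:, 0] ** 2, 0.5 * S[:, 1] ** 2,
                                  S[:, 0] * S[:, 1]])

      def trust_region_dfo(x0, delta=0.5, iters=25, seed=0):
          rng = np.random.default_rng(seed)
          x = np.asarray(x0, float)
          fx = expensive_objective(x)
          for _ in range(iters):
              # Fit a quadratic surrogate to samples inside the trust region.
              S = rng.uniform(-delta, delta, size=(12, 2))
              F = np.array([expensive_objective(x + s) for s in S])
              c, *_ = np.linalg.lstsq(quad_features(S), F, rcond=None)
              m = lambda s: float(quad_features(s[None, :])[0] @ c)
              # Minimize the surrogate within the trust-region box.
              res = minimize(m, np.zeros(2), bounds=[(-delta, delta)] * 2)
              f_new = expensive_objective(x + res.x)
              predicted = m(np.zeros(2)) - res.fun        # predicted decrease
              rho = (fx - f_new) / max(predicted, 1e-12)  # agreement ratio
              if rho > 0.1:                               # accept the step
                  x, fx = x + res.x, f_new
              delta *= 2.0 if rho > 0.75 else 0.5
          return x, fx

      print(trust_region_dfo([1.0, 1.0]))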

  11. A Method of Dynamic Extended Reactive Power Optimization in Distribution Network Containing Photovoltaic-Storage System

    NASA Astrophysics Data System (ADS)

    Wang, Wu; Huang, Wei; Zhang, Yongjun

    2018-03-01

    The grid integration of a Photovoltaic-Storage System (PSS) introduces uncertainty into the network. To make full use of the adjusting ability of the PSS, this paper puts forward a reactive power optimization model whose objective function is based on power loss and device adjusting cost, including the energy storage adjusting cost. A Cataclysmic Genetic Algorithm is used to solve the optimization problem. Comparison with other optimization methods proves that the proposed dynamic extended reactive power optimization enhances the effect of reactive power optimization, reducing both power loss and device adjusting cost, while maintaining voltage safety.

  12. Optimized Assistive Human-Robot Interaction Using Reinforcement Learning.

    PubMed

    Modares, Hamidreza; Ranatunga, Isura; Lewis, Frank L; Popa, Dan O

    2016-03-01

    An intelligent human-robot interaction (HRI) system with adjustable robot behavior is presented. The proposed HRI system assists the human operator to perform a given task with minimum workload demands and optimizes the overall human-robot system performance. Motivated by human factor studies, the presented control structure consists of two control loops. First, a robot-specific neuro-adaptive controller is designed in the inner loop to make the unknown nonlinear robot behave like a prescribed robot impedance model as perceived by a human operator. In contrast to existing neural network and adaptive impedance-based control methods, no information of the task performance or the prescribed robot impedance model parameters is required in the inner loop. Then, a task-specific outer-loop controller is designed to find the optimal parameters of the prescribed robot impedance model to adjust the robot's dynamics to the operator skills and minimize the tracking error. The outer loop includes the human operator, the robot, and the task performance details. The problem of finding the optimal parameters of the prescribed robot impedance model is transformed into a linear quadratic regulator (LQR) problem which minimizes the human effort and optimizes the closed-loop behavior of the HRI system for a given task. To obviate the requirement of the knowledge of the human model, integral reinforcement learning is used to solve the given LQR problem. Simulation results on an x - y table and a robot arm, and experimental implementation results on a PR2 robot confirm the suitability of the proposed method.
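
    The outer loop reduces to a standard LQR computation when the model is known; a minimal sketch with illustrative matrices follows (assumption: a double-integrator task model). The paper itself uses integral reinforcement learning precisely so that no such model is required.

      import numpy as np
      from scipy.linalg import solve_continuous_are

      # Illustrative task dynamics: state = [tracking error, error rate].
      A = np.array([[0.0, 1.0],
                    [0.0, 0.0]])
      B = np.array([[0.0],
                    [1.0]])
      Q = np.diag([10.0, 1.0])   # penalize tracking error (human-effort proxy)
      R = np.array([[0.1]])      # penalize control effort

      # Solve the continuous-time algebraic Riccati equation; u = -K x then
      # minimizes the quadratic cost, which in the paper's setting would fix
      # the prescribed impedance-model parameters.
      P = solve_continuous_are(A, B, Q, R)
      K = np.linalg.solve(R, B.T @ P)
      print(K)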

  13. An application of PSO algorithm for multi-criteria geometry optimization of printed low-pass filters based on conductive periodic structures

    NASA Astrophysics Data System (ADS)

    Steckiewicz, Adam; Butrylo, Boguslaw

    2017-08-01

    In this paper we discuss the results of a multi-criteria optimization scheme and numerical calculations of periodic conductive structures with selected geometries. Thin printed structures embedded on a flexible dielectric substrate may be applied as simple, cheap, passive low-pass filters with an adjustable cutoff frequency in the low radio frequency range (up to 1 MHz). The electromagnetic phenomena in the presented structures were analyzed using a three-dimensional numerical model of three proposed geometries of periodic elements. The finite element method (FEM) was used to solve the electromagnetic harmonic field. The equivalent lumped electrical parameters of the printed cells obtained in this manner determine the shape of the amplitude transmission characteristic of the low-pass filter. The nonlinear influence of the printed cell geometry on the equivalent parameters of the cell's electrical model makes it difficult to find the desired optimal solution. The problem of estimating the optimal cell geometry, posed as approximating a target amplitude transmission characteristic with an adjusted cutoff frequency, was therefore solved with the particle swarm optimization (PSO) algorithm. A dynamically adjusted inertia factor was introduced into the algorithm to improve convergence to the global extremum of the multimodal objective function. Numerical and PSO simulation results were characterized in terms of the approximation accuracy of the predefined amplitude characteristics in the pass-band, the stop-band and at the cutoff frequency. Three geometries of varying complexity were considered and their use in signal processing systems was evaluated.
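
    A generic PSO loop with a dynamically decreasing inertia factor, the device mentioned above, can be sketched as follows; the objective is a stand-in (assumption) for the filter-characteristic approximation error, here a standard multimodal test function.

      import numpy as np

      def objective(x):
          # Rastrigin function: a multimodal stand-in for the fitting error.
          return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

      def pso(dim=3, n=30, iters=200, w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
          rng = np.random.default_rng(1)
          x = rng.uniform(-5, 5, (n, dim))
          v = np.zeros((n, dim))
          pbest = x.copy()
          pbest_f = np.array([objective(p) for p in x])
          g = pbest[np.argmin(pbest_f)].copy()
          for t in range(iters):
              # Inertia decays linearly: broad exploration early, finer
              # exploitation late, improving convergence to the global optimum.
              w = w_max - (w_max - w_min) * t / iters
              r1, r2 = rng.random((n, dim)), rng.random((n, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = x + v
              f = np.array([objective(p) for p in x])
              better = f < pbest_f
              pbest[better], pbest_f[better] = x[better], f[better]
              g = pbest[np.argmin(pbest_f)].copy()
          return g, objective(g)

      print(pso())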

  14. Optimizing human activity patterns using global sensitivity analysis.

    PubMed

    Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M

    2014-12-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
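
    For reference, the SampEn statistic tuned above has a compact direct implementation: SampEn(m, r) = -ln(A/B), where B counts pairs of length-m templates that match within tolerance r and A counts the corresponding length-(m+1) matches. The sketch below is a plain O(N^2) version.

      import numpy as np

      def sampen(x, m=2, r=0.2):
          x = np.asarray(x, float)
          tol = r * np.std(x)            # tolerance scaled by std, as usual
          def matches(mm):
              t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
              count = 0
              for i in range(len(t) - 1):
                  # Chebyshev distance to all later templates (self excluded).
                  d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
                  count += int(np.sum(d <= tol))
              return count
          b, a = matches(m), matches(m + 1)
          return -np.log(a / b) if a > 0 and b > 0 else np.inf

      rng = np.random.default_rng(0)
      print(sampen(np.sin(np.linspace(0, 20, 400))))  # regular: low SampEn
      print(sampen(rng.normal(size=400)))             # irregular: higher SampEn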

  15. Optimizing human activity patterns using global sensitivity analysis

    PubMed Central

    Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.

    2014-01-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations. PMID:25580080

  16. Fuzzy Mixed Assembly Line Sequencing and Scheduling Optimization Model Using Multiobjective Dynamic Fuzzy GA

    PubMed Central

    Tahriri, Farzad; Dawal, Siti Zawiah Md; Taha, Zahari

    2014-01-01

    A new multiobjective dynamic fuzzy genetic algorithm is applied to solve a fuzzy mixed-model assembly line sequencing problem in which the primary goals are to minimize the total make-span and minimize the setup number simultaneously. Trapezoidal fuzzy numbers are implemented for variables such as operation and travelling time in order to generate results with higher accuracy and representative of real-case data. An improved genetic algorithm called fuzzy adaptive genetic algorithm (FAGA) is proposed in order to solve this optimization model. In establishing the FAGA, five dynamic fuzzy parameter controllers are devised in which fuzzy expert experience controller (FEEC) is integrated with automatic learning dynamic fuzzy controller (ALDFC) technique. The enhanced algorithm dynamically adjusts the population size, number of generations, tournament candidate, crossover rate, and mutation rate compared with using fixed control parameters. The main idea is to improve the performance and effectiveness of existing GAs by dynamic adjustment and control of the five parameters. Verification and validation of the dynamic fuzzy GA are carried out by developing test-beds and testing using a multiobjective fuzzy mixed production assembly line sequencing optimization problem. The simulation results highlight that the performance and efficacy of the proposed novel optimization algorithm are more efficient than the performance of the standard genetic algorithm in mixed assembly line sequencing model. PMID:24982962

  17. Energy configuration optimization of submerged propeller in oxidation ditch based on CFD

    NASA Astrophysics Data System (ADS)

    Wu, S. Y.; Zhou, D. Q.; Zheng, Y.

    2012-11-01

    The submerged propeller is an important dynamic source in an oxidation ditch. To keep the activated sludge from depositing, adequate drive power is required; otherwise problems arise such as poor mixing flow and excessive energy consumption. At present, optimizing the installation of submerged propellers in oxidation ditches depends mostly on experience, so it is necessary to use modern design methods to optimize the installation position and number of submerged propellers and to study the flow field characteristics. The flow driven by the submerged propeller is simulated using the CFD software FLUENT 6.3, based on the Navier-Stokes equations and the standard k-ε turbulence model with the SIMPLE algorithm. The results indicate that changing the installation position of the submerged propeller can avoid the back-mixing caused by an overly strong drive, and can solve the problems of sludge deposition and low velocity in the bend caused by drive power attenuation. By adjusting the number of submerged propellers, the minimum power density required for mixing can be determined, achieving energy savings. The study provides theoretical guidance for optimizing the installation position and number of submerged propellers.

  18. Cross-entropy optimization for neuromodulation.

    PubMed

    Brar, Harleen K; Yunpeng Pan; Mahmoudi, Babak; Theodorou, Evangelos A

    2016-08-01

    This study presents a reinforcement learning approach for optimizing the proportional-integral gains of the feedback controller represented in a computational model of epilepsy. The chaotic oscillator model provides a feedback control systems view of the dynamics of an epileptic brain, with an internal feedback controller representing the natural seizure suppression mechanism within the brain circuitry. Normal and pathological brain activity is simulated in this model by adjusting the feedback gain values of the internal controller. With insufficient gains, the internal controller cannot provide enough feedback to the brain dynamics, causing an increase in correlation between different brain sites. This increase in synchronization destabilizes the brain dynamics, which is representative of an epileptic seizure. To compensate for an insufficient internal controller, an external controller is designed using a proportional-integral feedback control strategy. A cross-entropy optimization algorithm is applied to the chaotic oscillator network model to learn the optimal feedback gains for the external controller, instead of hand-tuning the gains, in order to provide sufficient control to the pathological brain and prevent seizure generation. The correlation between neural activity at different brain sites is calculated for experimental data and shows dynamics of epileptic neural activity similar to those simulated by the network of chaotic oscillators.
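
    The cross-entropy optimization of the two PI gains can be sketched generically: sample gain vectors from a Gaussian, score them, and refit the distribution to the elite samples. The cost function below is a placeholder (assumption) for the synchronization measure computed from the chaotic oscillator network.

      import numpy as np

      def cost(gains):
          kp, ki = gains
          # Placeholder cost: quadratic bowl around fictitious optimal gains;
          # the real cost would score seizure suppression in the network model.
          return (kp - 2.0) ** 2 + (ki - 0.5) ** 2

      def cross_entropy(n=50, n_elite=10, iters=40, seed=0):
          rng = np.random.default_rng(seed)
          mu, sigma = np.zeros(2), np.full(2, 2.0)
          for _ in range(iters):
              samples = rng.normal(mu, sigma, size=(n, 2))
              scores = np.array([cost(s) for s in samples])
              elite = samples[np.argsort(scores)[:n_elite]]
              # Refit the sampling distribution to the best-performing gains.
              mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
          return mu

      print(cross_entropy())   # converges near the assumed optimum (2.0, 0.5)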

  19. Using deep neural networks to augment NIF post-shot analysis

    NASA Astrophysics Data System (ADS)

    Humbird, Kelli; Peterson, Luc; McClarren, Ryan; Field, John; Gaffney, Jim; Kruse, Michael; Nora, Ryan; Spears, Brian

    2017-10-01

    Post-shot analysis of National Ignition Facility (NIF) experiments is the process of determining which simulation inputs yield results consistent with experimental observations. This analysis is typically accomplished by running suites of manually adjusted simulations, or Monte Carlo sampling surrogate models that approximate the response surfaces of the physics code. These approaches are expensive and often find simulations that match only a small subset of observables simultaneously. We demonstrate an alternative method for performing post-shot analysis using inverse models, which map directly from experimental observables to simulation inputs with quantified uncertainties. The models are created using a novel machine learning algorithm which automates the construction and initialization of deep neural networks to optimize predictive accuracy. We show how these neural networks, trained on large databases of post-shot simulations, can rigorously quantify the agreement between simulation and experiment. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
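
    The inverse-model idea can be sketched with synthetic data: train a network that maps observables back to the simulation inputs that produced them. Everything below (the forward model, dimensions, network size) is an assumption for illustration; the real models are trained on large databases of post-shot simulations.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      inputs = rng.uniform(0, 1, size=(5000, 3))       # e.g. drive multipliers

      # Synthetic forward model: observables as nonlinear functions of inputs.
      observables = np.column_stack([
          inputs @ np.array([1.0, -0.5, 2.0]) + 0.01 * rng.normal(size=5000),
          np.sin(inputs[:, 0]) * inputs[:, 2],
      ])

      # Inverse model: observables -> inputs, trained by simple regression.
      net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0)
      net.fit(observables, inputs)

      # Given a new observation, recover plausible simulation inputs.
      print(net.predict(observables[:1]), inputs[0])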

  20. Numerical simulations in the development of propellant management devices

    NASA Astrophysics Data System (ADS)

    Gaulke, Diana; Winkelmann, Yvonne; Dreyer, Michael

    Propellant management devices (PMDs) are used for positioning the propellant at the propellant port. It is important to provide propellant without gas bubbles, since gas bubbles can cause cavitation and may, in the worst case, lead to system failures. Therefore, the reliable operation of such devices must be guaranteed. Testing these complex systems is a very intricate process, and in most cases only tests with downscaled geometries are possible. Numerical simulations are used here as an aid to optimize the tests and to predict certain results. Based on these simulations, parameters can be determined in advance and parts of the equipment can be adjusted in order to minimize the number of experiments. In return, the simulations are validated against the test results. Furthermore, once the accuracy of the numerical prediction is verified, numerical simulations can be used to validate the scaling of the experiments. This presentation demonstrates selected numerical simulations for the development of PMDs at ZARM.

  1. Optimal four-impulse rendezvous between coplanar elliptical orbits

    NASA Astrophysics Data System (ADS)

    Wang, JianXia; Baoyin, HeXi; Li, JunFeng; Sun, FuChun

    2011-04-01

    Rendezvous in circular or near-circular orbits has been investigated in great detail, while rendezvous in elliptical orbits of arbitrary eccentricity is not sufficiently explored. Among the various optimization methods proposed for fuel-optimal orbital rendezvous, Lawden's primer vector theory is favored by many researchers for its clear physical concept and simplicity of solution. Prussing applied primer vector theory to minimum-fuel, multiple-impulse, time-fixed orbital rendezvous in a near-circular orbit with great success. Extending Prussing's work, this paper employs primer vector theory to study trajectory optimization for elliptical-orbit rendezvous of arbitrary eccentricity. Based on the linearized equations of relative motion on an elliptical reference orbit (the T-H equations), primer vector theory is used to treat time-fixed multiple-impulse optimal rendezvous between two coplanar, coaxial elliptical orbits with arbitrarily large eccentricity. A parameter adjustment method is developed for the primer vector to satisfy Lawden's necessary conditions for the optimal solution. Finally, the optimal multiple-impulse rendezvous solution, including the times, directions and magnitudes of the impulses, is obtained by solving the two-point boundary value problem. The rendezvous error of the linearized equations is also analyzed: the simulations confirm that the error is small for small eccentricities and large for higher eccentricities. For better rendezvous accuracy in high-eccentricity orbits, a multiplier penalty function combined with the simplex search method is used for local optimization. The simplex search method is sensitive to the initial values of the optimization variables, but the simulations show that initializing the local optimization with the primer vector solution improves the rendezvous accuracy effectively with fast convergence, because the primer vector results are already very close to the true optimal solution; with randomly chosen initial values, it is difficult to converge to the optimal solution.

  2. GTE blade injection moulding modeling and verification of models during process approbation

    NASA Astrophysics Data System (ADS)

    Stepanenko, I. S.; Khaimovich, A. I.

    2017-02-01

    The simulation model for filling the mould was developed using Moldex3D and experimentally verified to enable further optimization calculations of the moulding process conditions. The method described in the article adjusts the finite-element model by minimizing the difference between the designed and experimentally observed melt front profiles through differentiated changes of the power supplied to the heating elements of the injection mould in the simulation. As a result of calibrating the injection mould for a gas-turbine engine blade, a mean difference between the designed melt front profile and the experimental airfoil profile of no more than 4% was achieved.

  3. Simulation of Optimal Decision-Making Under the Impacts of Climate Change.

    PubMed

    Møller, Lea Ravnkilde; Drews, Martin; Larsen, Morten Andreas Dahl

    2017-07-01

    Climate change transforms the conditions of existing agricultural practices, requiring farmers to continuously evaluate their agricultural strategies, e.g., towards optimising revenue. In this light, this paper presents a framework that applies Bayesian updating to simulate decision-making, reaction patterns and the updating of beliefs among farmers in a developing country faced with the complexity of adapting agricultural systems to climate change. We apply the approach to a case study from Ghana, where farmers seek to decide on the most profitable of three agricultural systems (dryland crops, irrigated crops and livestock) by continuously updating beliefs relative to realised climate trajectories, represented by projections of temperature and precipitation. The climate data are based on combinations of output from three global/regional climate model combinations and two future scenarios (RCP4.5 and RCP8.5), representing moderate and unsubstantial greenhouse gas reduction policies, respectively. The results indicate that the climate scenario (input) strongly influences the development of beliefs, net revenues and thereby optimal farming practices. Further, despite uncertainties in the underlying net revenue functions, the study shows that when the beliefs of the farmer (decision-maker) oppose the development of the realised climate, the Bayesian methodology allows such beliefs to adjust as improved information becomes available. The framework can therefore help facilitate the optimal choice between agricultural systems under the influence of climate change.
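
    A minimal sketch of the Bayesian updating step: a belief over climate states is updated from each season's realized temperature, and the farming system with the highest expected net revenue is chosen. The states, likelihoods and revenue table below are illustrative assumptions, not the paper's calibrated values.

      import numpy as np

      belief = np.array([0.5, 0.5])        # prior over two climate states
      temp_means = np.array([27.0, 29.0])  # assumed mean seasonal temperature

      # Assumed net revenue of each system (rows) under each state (columns).
      revenue = np.array([[10.0, 4.0],     # dryland crops
                          [12.0, 8.0],     # irrigated crops
                          [7.0,  9.0]])    # livestock
      systems = ["dryland", "irrigated", "livestock"]

      def likelihood(obs, mean, sd=1.0):
          # Gaussian observation model for the realized temperature.
          return np.exp(-0.5 * ((obs - mean) / sd) ** 2)

      for observed_temp in [28.1, 28.7, 29.2]:   # realized climate trajectory
          belief = belief * likelihood(observed_temp, temp_means)
          belief = belief / belief.sum()          # posterior after the season
          best = int(np.argmax(revenue @ belief)) # maximize expected revenue
          print(belief.round(3), systems[best])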

  4. A Systems Approach to Designing Effective Clinical Trials Using Simulations

    PubMed Central

    Fusaro, Vincent A.; Patil, Prasad; Chi, Chih-Lin; Contant, Charles F.; Tonellato, Peter J.

    2013-01-01

    Background Pharmacogenetics in warfarin clinical trials has failed to show a significant benefit compared to standard clinical therapy. This study demonstrates a computational framework to systematically evaluate pre-clinical trial designs of target population, pharmacogenetic algorithms, and dosing protocols to optimize primary outcomes. Methods and Results We programmatically created an end-to-end framework that systematically evaluates warfarin clinical trial designs. The framework includes options to create a patient population, multiple dosing strategies including genetic-based and non-genetic clinical-based, multiple dose adjustment protocols, pharmacokinetic/pharmacodynamic (PK/PD) modeling and international normalized ratio (INR) prediction, as well as various types of outcome measures. We validated the framework by conducting 1,000 simulations of the CoumaGen clinical trial primary endpoints. The simulation predicted a mean time in therapeutic range (TTR) of 70.6% and 72.2% (P = 0.47) in the standard and pharmacogenetic arms, respectively. Then, we evaluated another dosing protocol under the same original conditions and found a significant difference in TTR between the pharmacogenetic and standard arms (78.8% vs. 73.8%; P = 0.0065), respectively. Conclusions We demonstrate that this simulation framework is useful in the pre-clinical assessment phase to study and evaluate design options and provide evidence to optimize the clinical trial for patient efficacy and reduced risk. PMID:23261867

  5. The Fusion of Membranes and Vesicles: Pathway and Energy Barriers from Dissipative Particle Dynamics

    PubMed Central

    Grafmüller, Andrea; Shillcock, Julian; Lipowsky, Reinhard

    2009-01-01

    The fusion of lipid bilayers is studied with dissipative particle dynamics simulations. First, to achieve control over membrane properties, the effects of individual simulation parameters are studied and optimized. Then, a large number of fusion events for a vesicle and a planar bilayer are simulated using the optimized parameter set. In the observed fusion pathway, configurations of individual lipids play an important role. Fusion starts with individual lipids assuming a splayed tail configuration with one tail inserted in each membrane. To determine the corresponding energy barrier, we measure the average work for interbilayer flips of a lipid tail, i.e., the average work to displace one lipid tail from one bilayer to the other. This energy barrier is found to depend strongly on a certain dissipative particle dynamics parameter, and, thus, can be adjusted in the simulations. Overall, three subprocesses have been identified in the fusion pathway. Their energy barriers are estimated to lie in the range 8–15 kBT. The fusion probability is found to possess a maximum at intermediate tension values. As one decreases the tension, the fusion probability seems to vanish before the tensionless membrane state is attained. This would imply that the tension has to exceed a certain threshold value to induce fusion. PMID:19348749

  6. Optimization of Borehole Thermal Energy Storage System Design Using Comprehensive Coupled Simulation Models

    NASA Astrophysics Data System (ADS)

    Welsch, Bastian; Rühaak, Wolfram; Schulte, Daniel O.; Formhals, Julian; Bär, Kristian; Sass, Ingo

    2017-04-01

    Large-scale borehole thermal energy storage (BTES) is a promising technology in the development of sustainable, renewable and low-emission district heating concepts. Such systems consist of several components and assemblies, such as the borehole heat exchangers (BHE), other heat sources (e.g. solar thermal collectors, combined heat and power plants, peak load boilers, heat pumps), distribution networks and heating installations. The complexity of these systems necessitates numerical simulations in the design and planning phase. Generally, the subsurface components are simulated separately from the above-ground components of the district heating system. However, as fluid and heat are exchanged, the subsystems interact and mutually affect each other's performance. For a proper design of the overall system, it is therefore imperative to take the interdependencies of the subsystems into account. Based on TCP/IP communication, we have developed an interface for coupling a simulation package for heating installations with finite element software for modeling the heat flow in the subsurface and the underground installations. This allows for a co-simulation of all system components in which the interaction of the different subsystems is considered. Furthermore, the concept allows for mathematical optimization of the components and the operational parameters. Consequently, a finer adjustment of the system can be ensured and a more precise prognosis of the system's performance can be realized.

  7. Binary optimization for source localization in the inverse problem of ECG.

    PubMed

    Potyagaylo, Danila; Cortés, Elisenda Gil; Schulze, Walther H W; Dössel, Olaf

    2014-09-01

    The goal of ECG imaging (ECGI) is to reconstruct heart electrical activity from body surface potential maps. The problem is ill-posed, meaning that it is extremely sensitive to measurement and modeling errors. The most commonly used method to tackle this obstacle is Tikhonov regularization, which converts the original problem into a well-posed one by adding a penalty term. Despite its practical advantages, the method has a serious drawback: the obtained solution is often over-smoothed, which can hinder precise clinical diagnosis and treatment planning. In this paper, we apply a binary optimization approach to the transmembrane voltage (TMV)-based problem, assuming the TMV takes one of two possible values according to the heart abnormality under consideration. We investigate the localization of simulated ischemic areas and ectopic foci and one clinical infarction case. The abnormality affects only the choice of the binary values, while the core of the algorithms remains the same, making the approach easily adjustable to the application's needs. Two methods were tested: a hybrid metaheuristic approach and the difference of convex functions (DC) algorithm. For this purpose, we performed realistic heart simulations for a complex thorax model and applied the proposed techniques to the obtained ECG signals. Both methods enabled localization of the areas of interest, showing their potential for application in ECGI. For the metaheuristic algorithm, it was necessary to subdivide the heart into regions in order to obtain a stable solution unsusceptible to errors, while the analytical DC scheme can be efficiently applied to higher-dimensional problems. With the DC method, we also successfully reconstructed the activation pattern and origin of a simulated extrasystole. In addition, the DC algorithm enables iterative adjustment of the binary values, ensuring robust performance.

  8. New method for springback compensation for the stamping of sheet metal components

    NASA Astrophysics Data System (ADS)

    Birkert, A.; Hartmann, B.; Straub, M.

    2017-09-01

    The need for car body structures of higher strength and at the same time lower weight creates serious challenges for the stamping process. The use of high-strength steel and aluminium sheets in particular causes growing problems with elastic springback. To produce accurate parts, the stamping dies must be adjusted, more or less by the amount of the springback, in the opposite direction. For this purpose, well-known software solutions use the Displacement Adjustment Method or algorithms closely based on it. A crucial issue of this method is that the generated die surfaces deviate from those of the target geometry with regard to surface area. A new Physical Compensation Method has been developed and validated which takes geometrical nonlinearity into account and creates compensated die geometries with equal-in-area die surfaces. In contrast to the standard mathematical/geometrical approach, the adjusted geometry is generated by a physical approach that makes use of the virtual part stiffness: the target geometry is deformed mechanically in a virtual process, based on the springback simulation results, by applying virtual forces in an additional elastic simulation. In this way better part dimensions can be obtained in fewer tool optimization loops.
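
    For contrast with the new method, the classical Displacement Adjustment iteration can be sketched in a few lines: the die is displaced opposite to the simulated springback until the sprung part matches the target. The springback "simulation" below is a toy stand-in (assumption) for a forming-plus-springback finite element run.

      import numpy as np

      target = np.linspace(0.0, 1.0, 50) ** 2    # toy target part profile

      def simulate_springback(die):
          # Toy model (assumption): the formed part springs back by 8% of its
          # deviation from a flat reference; a real FE solver goes here.
          return die + 0.08 * (die - np.linspace(0.0, 1.0, 50))

      die = target.copy()
      for k in range(8):
          sprung = simulate_springback(die)
          error = sprung - target
          die = die - error        # displace die opposite to the springback
          print(k, float(np.abs(error).max()))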

  9. Quantum versus simulated annealing in wireless interference network optimization.

    PubMed

    Wang, Chi; Chen, Huo; Jonckheere, Edmond

    2016-05-16

    Quantum annealing (QA) serves as a specialized optimizer that is able to solve many NP-hard problems and that is believed to have a theoretical advantage over simulated annealing (SA) via quantum tunneling. With the introduction of the D-Wave programmable quantum annealer, a considerable amount of effort has been devoted to detect and quantify quantum speedup. While the debate over speedup remains inconclusive as of now, instead of attempting to show general quantum advantage, here, we focus on a novel real-world application of D-Wave in wireless networking: more specifically, the scheduling of the activation of the air-links for maximum throughput subject to interference avoidance near network nodes. In addition, D-Wave implementation is made error insensitive by a novel Hamiltonian extra penalty weight adjustment that enlarges the gap and substantially reduces the occurrence of interference violations resulting from inevitable spin bias and coupling errors. The major result of this paper is that quantum annealing benefits more than simulated annealing from this gap expansion process, both in terms of ST99 speedup and network queue occupancy. It is the hope that this could become a real-world application niche where potential benefits of quantum annealing could be objectively assessed.
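
    The penalty-weight mechanism can be illustrated with a toy QUBO for the scheduling problem: activate as many links as possible while penalizing interfering pairs, where a larger penalty widens the energy gap between valid schedules and violating ones. The interference graph is an assumption, and the toy is solved by brute force rather than on an annealer.

      import itertools

      n_links = 5
      # Assumed interference graph: pairs of links that must not be co-active.
      conflicts = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]

      def energy(x, penalty):
          # Maximize active links (-sum x_i) with a penalty on conflicts.
          return -sum(x) + penalty * sum(x[i] * x[j] for i, j in conflicts)

      for penalty in (0.5, 2.0):
          best = min(itertools.product([0, 1], repeat=n_links),
                     key=lambda x: energy(x, penalty))
          # With penalty 0.5 the minimum-energy states violate constraints;
          # with 2.0 the gap is wide enough that only valid schedules win.
          print(penalty, best, energy(best, penalty))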

  10. Quantum versus simulated annealing in wireless interference network optimization

    PubMed Central

    Wang, Chi; Chen, Huo; Jonckheere, Edmond

    2016-01-01

    Quantum annealing (QA) serves as a specialized optimizer that is able to solve many NP-hard problems and that is believed to have a theoretical advantage over simulated annealing (SA) via quantum tunneling. With the introduction of the D-Wave programmable quantum annealer, a considerable amount of effort has been devoted to detect and quantify quantum speedup. While the debate over speedup remains inconclusive as of now, instead of attempting to show general quantum advantage, here, we focus on a novel real-world application of D-Wave in wireless networking—more specifically, the scheduling of the activation of the air-links for maximum throughput subject to interference avoidance near network nodes. In addition, D-Wave implementation is made error insensitive by a novel Hamiltonian extra penalty weight adjustment that enlarges the gap and substantially reduces the occurrence of interference violations resulting from inevitable spin bias and coupling errors. The major result of this paper is that quantum annealing benefits more than simulated annealing from this gap expansion process, both in terms of ST99 speedup and network queue occupancy. It is the hope that this could become a real-world application niche where potential benefits of quantum annealing could be objectively assessed. PMID:27181056

  11. Quantum versus simulated annealing in wireless interference network optimization

    NASA Astrophysics Data System (ADS)

    Wang, Chi; Chen, Huo; Jonckheere, Edmond

    2016-05-01

    Quantum annealing (QA) serves as a specialized optimizer that is able to solve many NP-hard problems and that is believed to have a theoretical advantage over simulated annealing (SA) via quantum tunneling. With the introduction of the D-Wave programmable quantum annealer, a considerable amount of effort has been devoted to detect and quantify quantum speedup. While the debate over speedup remains inconclusive as of now, instead of attempting to show general quantum advantage, here, we focus on a novel real-world application of D-Wave in wireless networking—more specifically, the scheduling of the activation of the air-links for maximum throughput subject to interference avoidance near network nodes. In addition, D-Wave implementation is made error insensitive by a novel Hamiltonian extra penalty weight adjustment that enlarges the gap and substantially reduces the occurrence of interference violations resulting from inevitable spin bias and coupling errors. The major result of this paper is that quantum annealing benefits more than simulated annealing from this gap expansion process, both in terms of ST99 speedup and network queue occupancy. It is the hope that this could become a real-world application niche where potential benefits of quantum annealing could be objectively assessed.

  12. Simulation of load traffic and stepped speed control of conveyor

    NASA Astrophysics Data System (ADS)

    Reutov, A. A.

    2017-10-01

    The article examines the simulation of stepped conveyor speed control within the Mathcad, Simulink and Stateflow software environments. To check the efficiency of the control algorithms and to determine the characteristics of the control system more accurately, the speed control process must be simulated with real load traffic values for a work shift or for a day, while evaluating belt workload and the absence of spillage requires empirical load flow values over shorter periods. Analytical formulas for the optimal speed step values were derived using the empirical load values. The simulation checks the acceptability of an algorithm and determines the optimal regulation parameters corresponding to the load flow characteristics. The average speed and the number of speed switchings during the simulation are adopted as the criteria of regulation efficiency. A simulation example within Mathcad is implemented. The average conveyor speed decreases substantially with two-step and three-step control; a further increase in the number of regulation steps decreases the average speed only slightly while considerably increasing the frequency of speed switching. An incremental speed regulation algorithm uses different numbers of stages for growing and decreasing load traffic, allowing smooth control of conveyor speed changes under monotonic variation of the load flow; load flow oscillation, however, leads to unjustified increases or decreases of speed. The results can be applied to the design of belt conveyors with adjustable drives.

  13. Comparison of existing models to simulate anaerobic digestion of lipid-rich waste.

    PubMed

    Béline, F; Rodriguez-Mendez, R; Girault, R; Bihan, Y Le; Lessard, P

    2017-02-01

    Models for anaerobic digestion of lipid-rich waste that take inhibition into account were reviewed and, where necessary, adjusted to the ADM1 model framework in order to compare them. Experimental data from anaerobic digestion of slaughterhouse waste at organic loading rates (OLR) ranging from 0.3 to 1.9 kg VS m⁻³ d⁻¹ were used to compare and evaluate the models. Experimental data obtained at low OLRs were accurately modeled by all the models, validating the stoichiometric parameters used and the influent fractionation. At higher OLRs, however, although the inhibition parameters were optimized to reduce the differences between experimental and simulated data, no model was able to accurately simulate the accumulation of substrates and intermediates, mainly owing to incorrect simulation of pH. A simulation using pH taken from experimental data showed that acetogenesis and methanogenesis are the steps most sensitive to LCFA inhibition, and it enabled identification of the inhibition parameters of both steps. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Robust source and mask optimization compensating for mask topography effects in computational lithography.

    PubMed

    Li, Jia; Lam, Edmund Y

    2014-04-21

    Mask topography effects need to be taken into consideration for a more accurate solution of source mask optimization (SMO) in advanced optical lithography. However, rigorous 3D mask models generally involve intensive computation and conventional SMO fails to manipulate the mask-induced undesired phase errors that degrade the usable depth of focus (uDOF) and process yield. In this work, an optimization approach incorporating pupil wavefront aberrations into SMO procedure is developed as an alternative to maximize the uDOF. We first design the pupil wavefront function by adding primary and secondary spherical aberrations through the coefficients of the Zernike polynomials, and then apply the conjugate gradient method to achieve an optimal source-mask pair under the condition of aberrated pupil. We also use a statistical model to determine the Zernike coefficients for the phase control and adjustment. Rigorous simulations of thick masks show that this approach provides compensation for mask topography effects by improving the pattern fidelity and increasing uDOF.
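
    The wavefront design step can be sketched directly: the adjustable coefficients scale the rotationally symmetric Zernike terms for primary, Z(4,0), and secondary, Z(6,0), spherical aberration (standard Noll-normalized forms; the coefficient values are assumptions).

      import numpy as np

      def wavefront(rho, c4, c6):
          # Primary spherical aberration, Z(4,0).
          z40 = np.sqrt(5.0) * (6 * rho ** 4 - 6 * rho ** 2 + 1)
          # Secondary spherical aberration, Z(6,0).
          z60 = np.sqrt(7.0) * (20 * rho ** 6 - 30 * rho ** 4
                                + 12 * rho ** 2 - 1)
          return c4 * z40 + c6 * z60

      # Aberrated pupil phase on a normalized pupil radius grid.
      rho = np.linspace(0.0, 1.0, 101)
      phase = wavefront(rho, c4=0.05, c6=-0.02)   # coefficients in waves
      print(phase[:3], phase[-3:])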

  15. Optimization of diode-pumped doubly QML laser with neodymium-doped vanadate crystals at 1.34 μm

    NASA Astrophysics Data System (ADS)

    Zhang, Gang; Jiao, Zhiyong

    2018-05-01

    We present a theoretical model for a diode-pumped laser at 1.34 μm that is simultaneously Q-switched and mode-locked (QML) by an acousto-optic modulator and a V3+:YAG saturable absorber. The model includes the loss introduced by the acousto-optic modulator combined with the physical properties of the laser resonator, the neodymium-doped vanadate crystals and the output coupler. The parameters are adjusted within a reasonable range to optimize the pulse output characteristics. A typical Q-switched and mode-locked Nd:Lu0.15Y0.85VO4 laser at 1.34 μm with an acousto-optic modulator and V3+:YAG is set up, and the experimental output characteristics are consistent with the theoretical simulation results.

  16. Seller's dilemma due to social interactions between customers

    NASA Astrophysics Data System (ADS)

    Gordon, Mirta B.; Nadal, Jean-Pierre; Phan, Denis; Vannimenus, Jean

    2005-10-01

    In this paper, we consider a discrete choice model in which heterogeneous agents are subject to mutual influences. We explore some consequences for the market's behaviour in the simplest case of a uniform willingness-to-pay distribution. We exhibit a first-order phase transition in the monopolist's profit optimization: if the social influence is strong enough, there is a regime where, as the mean willingness to pay increases or the production costs decrease, the optimal solution for the monopolist jumps from a high price with a small number of buyers to a low price with a large number of buyers. Depending on the path of price adjustments by the monopolist, simulations show hysteresis effects in the fraction of buyers.
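
    A toy simulation of the monopolist's problem (assumptions: willingness to pay uniform on [0, 1], zero production cost, social term J times the fraction of buyers added to each agent's willingness): the buyer fraction at price p is a fixed point, and scanning the price shows qualitatively different optima for weak and strong social influence.

      import numpy as np

      def buyer_fraction(p, J, iters=500):
          eta = 0.5
          for _ in range(iters):
              # Buy if willingness + J * eta >= p, willingness ~ U[0, 1].
              eta = float(np.clip(1.0 - (p - J * eta), 0.0, 1.0))
          return eta

      for J in (0.2, 0.8):
          prices = np.linspace(0.01, 2.0, 400)
          profits = [p * buyer_fraction(p, J) for p in prices]
          p_star = prices[int(np.argmax(profits))]
          print(J, round(p_star, 3), round(buyer_fraction(p_star, J), 3))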

  17. Parameter learning for performance adaptation

    NASA Technical Reports Server (NTRS)

    Peek, Mark D.; Antsaklis, Panos J.

    1990-01-01

    A parameter learning method is introduced and used to broaden the region of operability of the adaptive control system of a flexible space antenna. The learning system guides the selection of control parameters in a process leading to optimal system performance. A grid search procedure is used to estimate an initial set of parameter values. The optimization search procedure uses a variation of the Hooke and Jeeves multidimensional search algorithm. The method is applicable to any system where performance depends on a number of adjustable parameters. A mathematical model is not necessary, as the learning system can be used whenever the performance can be measured via simulation or experiment. The results of two experiments, the transient regulation and the command following experiment, are presented.
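
    A compact sketch of the Hooke and Jeeves pattern search named above: exploratory moves along each coordinate, a pattern move along the improving direction, and step-size reduction on failure. The performance function is a placeholder (assumption) for a measured or simulated system response.

      import numpy as np

      def performance(x):
          # Placeholder cost: distance from fictitious optimal parameters.
          return (x[0] - 1.5) ** 2 + (x[1] + 0.7) ** 2

      def explore(x, step):
          x = x.copy()
          for i in range(len(x)):
              for delta in (step, -step):
                  trial = x.copy()
                  trial[i] += delta
                  if performance(trial) < performance(x):
                      x = trial
                      break
          return x

      def hooke_jeeves(x0, step=1.0, shrink=0.5, tol=1e-6):
          base = np.asarray(x0, float)
          while step > tol:
              new = explore(base, step)
              if performance(new) < performance(base):
                  # Pattern move: extrapolate along the successful direction,
                  # then explore around the extrapolated point.
                  trial = explore(new + (new - base), step)
                  base = trial if performance(trial) < performance(new) else new
              else:
                  step *= shrink      # no improvement: refine the step size
          return base

      print(hooke_jeeves([0.0, 0.0]))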

  18. Traveling-Wave Tube Efficiency Enhancement

    NASA Technical Reports Server (NTRS)

    Dayton, James A., Jr.

    2011-01-01

    Traveling-wave tubes (TWT's) are used to amplify microwave communication signals on virtually all NASA and commercial spacecraft. Because TWT's are a primary power user, increasing their power efficiency is important for reducing spacecraft weight and cost. NASA Glenn Research Center has played a major role in increasing TWT efficiency over the last thirty years. In particular, two types of efficiency optimization algorithms have been developed for coupled-cavity TWT's. The first is the phase-adjusted taper which was used to increase the RF power from 420 to 1000 watts and the RF efficiency from 9.6% to 22.6% for a Ka-band (29.5 GHz) TWT. This was a record efficiency at this frequency level. The second is an optimization algorithm based on simulated annealing. This improved algorithm is more general and can be used to optimize efficiency over a frequency bandwidth and to provide a robust design for very high frequency TWT's in which dimensional tolerance variations are significant.
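
    The annealing-based optimizer described here follows a standard pattern. Below is a generic simulated-annealing skeleton, a sketch only: the cost function, neighbor move, and cooling schedule are placeholders, not the NASA design code.

```python
import math, random

def simulated_annealing(cost, x0, neighbor, t0=1.0, cooling=0.995, steps=20000):
    """Generic simulated-annealing skeleton: accept uphill moves with
    Boltzmann probability exp(-dc/t) so the search can escape local
    optima; the temperature decays geometrically."""
    x, c = x0, cost(x0)
    best, cbest = x, c
    t = t0
    for _ in range(steps):
        xn = neighbor(x)
        cn = cost(xn)
        if cn < c or random.random() < math.exp(-(cn - c) / t):
            x, c = xn, cn
            if c < cbest:
                best, cbest = x, c
        t *= cooling
    return best, cbest

# Toy usage: a 1-D multimodal cost with a random-walk neighbor.
best, cbest = simulated_annealing(
    cost=lambda x: math.sin(5 * x) + 0.1 * x * x,
    x0=3.0,
    neighbor=lambda x: x + random.uniform(-0.5, 0.5))
```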

  19. High-frequency AC/DC converter with unity power factor and minimum harmonic distortion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wernekinch, E.R.

    1987-01-01

    The power factor is controlled by adjusting the relative position of the fundamental component of an optimized PWM-type voltage with respect to the supply voltage. Current harmonic distortion is minimized by the use of optimized firing angles for the converter at a frequency where GTO's can be used. This feature makes the approach very attractive at power levels of 100 to 600 kW. To obtain the optimized PWM pattern, a steepest-descent digital computer algorithm is used. Digital-computer simulations are performed, and a low-power model is constructed and tested to verify the concepts and the behavior of the model. Experimental results show that unity power factor is achieved and that the distortion in the phase currents is 10.4% at 90% of full load. This is less than achievable with sinusoidal PWM, harmonic elimination, hysteresis control, and deadbeat control at the same switching frequency.

  20. Optimization of LED light spectrum to enhance colorfulness of illuminated objects with white light constraints.

    PubMed

    Wu, Haining; Dong, Jianfei; Qi, Gaojin; Zhang, Guoqi

    2015-07-01

    Enhancing the colorfulness of illuminated objects is a promising application of LED lighting for commercial, exhibition, and scientific purposes. This paper proposes a method to enhance the color of objects illuminated by a given polychromatic lamp while restricting the light color to white; we further relax the white light constraints by introducing soft margins. Based on the spectral and electrical characteristics of the LEDs and the object surface properties, we determine the optimal mixing of the LED light spectrum by solving a numerical optimization problem, which takes the form of a quadratic fractional program. Simulation studies show that the trade-off between the white light constraint and the level of color enhancement can be adjusted by tuning the upper limit of the soft margin. Furthermore, visual evaluation experiments are performed to assess human perception of the color enhancement. The experiments verify the effectiveness of the proposed method.

  1. Deployable reflector antenna performance optimization using automated surface correction and array-feed compensation

    NASA Technical Reports Server (NTRS)

    Schroeder, Lyle C.; Bailey, M. C.; Mitchell, John L.

    1992-01-01

    Methods for increasing the electromagnetic (EM) performance of reflectors with rough surfaces were tested and evaluated. First, one quadrant of the 15-meter hoop-column antenna was retrofitted with computer-driven and controlled motors to allow automated adjustment of the reflector surface. The surface errors, measured with metric photogrammetry, were used in a previously verified computer code to calculate control motor adjustments. With this system, a rough antenna surface (rms of approximately 0.180 inch) was corrected in two iterations to approximately the structural surface smoothness limit of 0.060 inch rms. The antenna pattern and gain improved significantly as a result of these surface adjustments. The EM performance was evaluated with a computer program for distorted reflector antennas which had been previously verified with experimental data. Next, the effects of the surface distortions were compensated for in computer simulations by superimposing excitation from an array feed to maximize antenna performance relative to an undistorted reflector. Results showed that a 61-element array could produce EM performance improvements equal to surface adjustments. When both mechanical surface adjustment and feed compensation techniques were applied, the equivalent operating frequency increased from approximately 6 to 18 GHz.

  2. Field-Based Optimal Placement of Antennas for Body-Worn Wireless Sensors

    PubMed Central

    Januszkiewicz, Łukasz; Di Barba, Paolo; Hausman, Sławomir

    2016-01-01

    We investigate a case of automated energy-budget-aware optimization of the physical position of nodes (sensors) in a Wireless Body Area Network (WBAN). This problem has not yet been presented in the literature, as opposed to antenna and routing optimization, which are relatively well addressed. In our research, which was inspired by a safety-critical application for firefighters, the sensor network consists of three nodes located on the human body. The nodes communicate over a radio link operating in the 2.4 GHz or 5.8 GHz ISM frequency band. Two sensors have a fixed location: one on the head (earlobe pulse oximetry) and one on the arm (with accelerometers, temperature and humidity sensors, and a GPS receiver), while the position of the third sensor can be adjusted within a predefined region on the wearer’s chest. The path loss between each node pair strongly depends on the location of the nodes and is difficult to predict without performing a full-wave electromagnetic simulation. Our optimization scheme employs evolutionary computing. The novelty of our approach lies not only in the formulation of the problem but also in linking a fully automated optimization procedure with an electromagnetic simulator and a simplified human body model. This combination turns out to be a computationally effective solution which, depending on the initial placement, has the potential to improve the performance of our example sensor network setup by up to about 20 dB with respect to the path loss between selected nodes. PMID:27196911

  3. Optimization of wearable microwave antenna with simplified electromagnetic model of the human body

    NASA Astrophysics Data System (ADS)

    Januszkiewicz, Łukasz; Barba, Paolo Di; Hausman, Sławomir

    2017-12-01

    In this paper the optimization design of a microwave wearable antenna is investigated. Reference is made to a specific design, a wideband Vee antenna whose geometry is characterized by six parameters. These parameters were automatically adjusted with the evolution-strategy-based algorithm EStra to obtain impedance matching of the antenna located in the proximity of the human body. The antenna was designed to operate in the ISM (industrial, scientific, medical) band covering the frequency range from 2.4 GHz up to 2.5 GHz. The optimization procedure used a full-wave finite-difference time-domain simulator with a simplified human body model, and considered small movements of the antenna toward or away from the body that are likely to occur during real use. The stability of the antenna parameters irrespective of the movements of the user's body is an important factor in wearable antenna design. The optimization yielded good impedance matching over the given range of antenna distances from the human body.

  4. Calibration of misalignment errors in the non-null interferometry based on reverse iteration optimization algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Xinmu; Hao, Qun; Hu, Yao; Wang, Shaopu; Ning, Yan; Li, Tengfei; Chen, Shufen

    2017-10-01

    With no necessity of compensating the whole aberration introduced by the aspheric surface, the non-null test has an advantage over the null test in applicability. However, retrace error, caused by the path difference between the rays reflected from the surface under test (SUT) and the incident rays, is introduced into the measurement and, together with surface figure error (SFE), misalignment error and other influences, makes up the residual wavefront aberrations (RWAs). Being difficult to separate from the RWAs, the misalignment error may remain after measurement, and it is hard to identify whether it has been removed. Studying the removal of misalignment error is therefore a primary task. A brief demonstration of the digital Moiré interferometric technique is presented, and a calibration method for misalignment error based on a reverse iteration optimization (RIO) algorithm in the non-null test is addressed. The proposed method operates mostly in the virtual system and requires no accurate adjustment in the real interferometer, which significantly reduces the errors introduced by repeated, complicated manual adjustment and thereby improves the accuracy of aspheric surface testing. Simulation verification is carried out in this paper. The calibration accuracy of position and attitude reaches at least the order of 10^-5 mm and 0.0056×10^-6 rad, respectively. The simulation demonstrates that the influence of misalignment error can be precisely calculated and removed after calibration.

  5. Double-adjustment in propensity score matching analysis: choosing a threshold for considering residual imbalance.

    PubMed

    Nguyen, Tri-Long; Collins, Gary S; Spence, Jessica; Daurès, Jean-Pierre; Devereaux, P J; Landais, Paul; Le Manach, Yannick

    2017-04-28

    Double-adjustment can be used to remove confounding if imbalance exists after propensity score (PS) matching. However, it is not always possible to include all covariates in adjustment. We aimed to find the optimal imbalance threshold for entering covariates into regression. We conducted a series of Monte Carlo simulations on virtual populations of 5,000 subjects. We performed PS 1:1 nearest-neighbor matching on each sample. We calculated standardized mean differences across groups to detect any remaining imbalance in the matched samples. We examined 25 thresholds (from 0.01 to 0.25, stepwise 0.01) for considering residual imbalance. The treatment effect was estimated using logistic regression that contained only those covariates considered to be unbalanced by these thresholds. We showed that regression adjustment could dramatically remove residual confounding bias when it included all of the covariates with a standardized difference greater than 0.10. The additional benefit was negligible when we also adjusted for covariates with less imbalance. We found that the mean squared error of the estimates was minimized under the same conditions. If covariate balance is not achieved, we recommend reiterating PS modeling until standardized differences below 0.10 are achieved on most covariates. In case of remaining imbalance, a double adjustment might be worth considering.
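
    The paper's 0.10 threshold is straightforward to operationalize. The sketch below computes absolute standardized mean differences in a matched sample and returns the covariates that would enter the double-adjustment regression; array shapes and names are assumptions for illustration.

```python
import numpy as np

def std_diff(xt, xc):
    """Absolute standardized mean difference for one covariate."""
    pooled = np.sqrt((xt.var(ddof=1) + xc.var(ddof=1)) / 2.0)
    return abs(xt.mean() - xc.mean()) / pooled

def covariates_to_adjust(treated, control, names, threshold=0.10):
    """Names of covariates whose post-matching imbalance exceeds the
    threshold; per the paper's findings, only these need to enter the
    outcome regression. `treated` and `control` are
    (n_subjects, n_covariates) arrays from the matched sample."""
    return [names[j] for j in range(treated.shape[1])
            if std_diff(treated[:, j], control[:, j]) > threshold]
```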

  6. The development of a volume element model for energy systems engineering and integrative thermodynamic optimization

    NASA Astrophysics Data System (ADS)

    Yang, Sam

    The dissertation presents the mathematical formulation, experimental validation, and application of a volume element model (VEM) devised for modeling, simulation, and optimization of energy systems in their early design stages. The proposed model combines existing modeling techniques and experimental adjustment to formulate a reduced-order model, while retaining sufficient accuracy to serve as a practical system-level design analysis and optimization tool. In the VEM, the physical domain under consideration is discretized in space using lumped hexahedral elements (i.e., volume elements), and the governing equations for the variable of interest are applied to each element to quantify diverse types of flows that cross it. Subsequently, a system of algebraic and ordinary differential equations is solved with respect to time and scalar (e.g., temperature, relative humidity, etc.) fields are obtained in both spatial and temporal domains. The VEM is capable of capturing and predicting dynamic physical behaviors in the entire system domain (i.e., at system level), including mutual interactions among system constituents, as well as with their respective surroundings and cooling systems, if any. The VEM is also generalizable; that is, the model can be easily adapted to simulate and optimize diverse systems of different scales and complexity and attain numerical convergence with sufficient accuracy. Both the capability and generalizability of the VEM are demonstrated in the dissertation via thermal modeling and simulation of an Off-Grid Zero Emissions Building, an all-electric ship, and a vapor compression refrigeration (VCR) system. Furthermore, the potential of the VEM as an optimization tool is presented through the integrative thermodynamic optimization of a VCR system, whose results are used to evaluate the trade-offs between various objective functions, namely, coefficient of performance, second law efficiency, pull-down time, and refrigerated space temperature, in both transient and steady-state operations.

  7. Optimizing human activity patterns using global sensitivity analysis

    DOE PAGES

    Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; ...

    2013-12-10

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
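
    Since the tuning target is the sample entropy statistic, a direct implementation of the standard SampEn(m, r) definition may clarify what is being adjusted; this sketch is independent of the DASim tooling.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r) of a 1-D series: -ln(A/B), where B and
    A count template matches of length m and m+1 (self-matches excluded),
    using the Chebyshev distance with tolerance r * std(x)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= tol)
        return count
    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```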

  8. Users manual for an expert system (HSPEXP) for calibration of the hydrological simulation program; Fortran

    USGS Publications Warehouse

    Lumb, A.M.; McCammon, R.B.; Kittle, J.L.

    1994-01-01

    Expert system software was developed to assist less experienced modelers with calibration of a watershed model and to facilitate the interaction between the modeler and the modeling process not provided by mathematical optimization. A prototype was developed with artificial intelligence software tools, a knowledge engineer, and two domain experts. The manual procedures used by the domain experts were identified and the prototype was then coded by the knowledge engineer. The expert system consists of a set of hierarchical rules designed to guide the calibration of the model through a systematic evaluation of model parameters. When the prototype was completed and tested, it was rewritten for portability and operational use and was named HSPEXP. The watershed model Hydrological Simulation Program--Fortran (HSPF) is used in the expert system. This report is the users manual for HSPEXP and contains a discussion of the concepts and detailed steps and examples for using the software. The system has been tested on watersheds in the States of Washington and Maryland, and the system correctly identified the model parameters to be adjusted and the adjustments led to improved calibration.

  9. Simulating correction of adjustable optics for an x-ray telescope

    NASA Astrophysics Data System (ADS)

    Aldcroft, Thomas L.; Schwartz, Daniel A.; Reid, Paul B.; Cotroneo, Vincenzo; Davis, William N.

    2012-10-01

    The next generation of large X-ray telescopes with sub-arcsecond resolution will require very thin, highly nested grazing incidence optics. To correct the low order figure errors resulting from initial manufacture, the mounting process, and the effects of going from 1 g during ground alignment to zero g on-orbit, we plan to adjust the shapes via piezoelectric "cells" deposited on the backs of the reflecting surfaces. This presentation investigates how well the corrections might be made. We take a benchmark conical glass element, 410×205 mm, with a 20×20 array of piezoelectric cells 19×9 mm in size. We use finite element analysis to calculate the influence function of each cell. We then simulate the correction via pseudo matrix inversion to calculate the stress to be applied by each cell, considering distortion due to gravity as calculated by finite element analysis, and by putative low order manufacturing distortions described by Legendre polynomials. We describe our algorithm and its performance, and the implications for the sensitivity of the resulting slope errors to the optimization strategy.
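
    A minimal sketch of the pseudo-matrix-inversion step: with an influence matrix holding one column per piezoelectric cell (assumed precomputed by finite element analysis) and a measured figure-error vector, least squares yields the cell stresses. The stress clipping is a hypothetical stand-in for actuator limits, not part of the paper's algorithm.

```python
import numpy as np

def correction_stresses(influence, measured_error, max_stress=None):
    """Least-squares actuator commands: solve A s = -e for the stresses s
    that best cancel the measured figure error e, via the pseudo-inverse
    of the influence matrix A (one column per piezoelectric cell)."""
    s = np.linalg.pinv(influence) @ (-measured_error)
    if max_stress is not None:
        s = np.clip(s, -max_stress, max_stress)   # crude actuator limit
    residual = measured_error + influence @ s     # figure error left over
    return s, residual
```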

  10. Optimism and Pessimism in Social Context: An Interpersonal Perspective on Resilience and Risk

    PubMed Central

    Smith, Timothy W.; Ruiz, John M.; Cundiff, Jenny M.; Baron, Kelly G.; Nealey-Moore, Jill B.

    2016-01-01

    Using the interpersonal perspective, we examined social correlates of dispositional optimism. In Study 1, optimism and pessimism were associated with warm-dominant and hostile-submissive interpersonal styles, respectively, across four samples, and had expected associations with social support and interpersonal stressors. In 300 married couples, Study 2 replicated these findings regarding interpersonal styles, using self-reports and spouse ratings. Optimism-pessimism also had significant actor and partner associations with marital quality. In Study 3 (120 couples), husbands’ and wives’ optimism predicted increases in their own marital adjustment over time, and husbands’ optimism predicted increases in wives’ marital adjustment. Thus, the interpersonal perspective is a useful integrative framework for examining social processes that could contribute to associations of optimism-pessimism with physical health and emotional adjustment. PMID:27840458

  11. The Analysis of Fixed Final State Optimal Control in Bilinear System Applied to Bone Marrow by Cell-Cycle Specific (CCS) Chemotherapy

    NASA Astrophysics Data System (ADS)

    Rainarli, E.; E Dewi, K.

    2017-04-01

    The research conducted by Fister & Panetta presented an optimal control model of bone marrow cells under Cell Cycle Specific (CCS) chemotherapy. The model used was a bilinear system, and their work proved the existence, uniqueness, and characteristics of the optimal control (the chemotherapy effect). However, under this model the amount of bone marrow at the final time can fall below 50 percent of its pre-treatment level. This could harm patients, because depleted bone marrow causes the leukocyte count to decline, leaving patients leukopenic. This research examines the optimal control of a bilinear system subject to a fixed final state, used to determine the optimal duration of chemotherapy while simultaneously keeping bone marrow cells at the allowed level. Before the simulations, the paper shows that the system is controllable using Lie algebra theory, and then derives the characteristics of the optimal control. The simulations indicate that a strong chemotherapy dose delivered in a short time frame is the optimal condition for keeping bone marrow cells at the allowed level while still providing an effective treatment; the preference is expressed through the weighting assigned to preserving bone marrow cells. The resulting chemotherapy effect (u) cannot reach its maximum value; in other words, the drug dosage must be adjusted so that the final-state condition is satisfied, i.e., the number of bone marrow cells remains at the allowed level.

  12. Interrelations of stress, optimism and control in older people's psychological adjustment.

    PubMed

    Bretherton, Susan Jane; McLean, Louise Anne

    2015-06-01

    To investigate the influence of perceived stress, optimism and perceived control of internal states on the psychological adjustment of older adults. The sample consisted of 212 older adults, aged between 58 and 103 (M = 80.42 years, SD = 7.31 years), living primarily in retirement villages in Melbourne, Victoria. Participants completed the Perceived Stress Scale, Life Orientation Test-Revised, Perceived Control of Internal States Scale and the World Health Organisation Quality of Life-Bref. Optimism significantly mediated the relationship between older people's perceived stress and psychological health, and perceived control of internal states mediated the relationships among stress, optimism and psychological health. The variables explained 49% of the variance in older people's psychological adjustment. It is suggested that strategies to improve optimism and perceived control may improve the psychological adjustment of older people struggling to adapt to life's stressors. © 2014 ACOTA.

  13. Sitting biomechanics, part II: optimal car driver's seat and optimal driver's spinal model.

    PubMed

    Harrison, D D; Harrison, S O; Croft, A C; Harrison, D E; Troyanovich, S J

    2000-01-01

    Driving has been associated with signs and symptoms caused by vibrations. Sitting causes the pelvis to rotate backwards and the lumbar lordosis to reduce. Lumbar support and armrests reduce disc pressure and electromyographically recorded values. However, the ideal driver's seat and an optimal seated spinal model have not been described. To determine an optimal automobile seat and an ideal spinal model of a driver. Information was obtained from peer-reviewed scientific journals and texts, automotive engineering reports, and the National Library of Medicine. Driving predisposes vehicle operators to low-back pain and degeneration. The optimal seat would have an adjustable seat back incline of 100 degrees from horizontal, a changeable depth of seat back to front edge of seat bottom, adjustable height, an adjustable seat bottom incline, firm (dense) foam in the seat bottom cushion, horizontally and vertically adjustable lumbar support, adjustable bilateral arm rests, adjustable head restraint with lordosis pad, seat shock absorbers to dampen frequencies in the 1 to 20 Hz range, and linear front-back travel of the seat enabling drivers of all sizes to reach the pedals. The lumbar support should be pulsating in depth to reduce static load. The seat back should be damped to reduce rebounding of the torso in rear-end impacts. The optimal driver's spinal model would be the average Harrison model in a 10 degrees posterior inclining seat back angle.

  14. Predictions of Energy Savings in HVAC Systems by Lumped Models (Preprint)

    DTIC Science & Technology

    2010-04-14

    Only fragments of the abstract are available. They describe incorporating various control devices into a simulated HVAC system, with controls holding a setpoint of 26.7°C; the adjustable damper, variable chiller work input, and variable fan speed take αP values of -1.0, 0.1, and 1.0, respectively. The approach concerns optimizing the energy use of HVAC systems with lumped models, and the results suggest an order-of-magnitude greater energy savings using a variable chiller power control approach compared with control-damper and variable-drive approaches.

  15. Investigations on KONUS beam dynamics using the pre-stripper drift tube linac at GSI

    NASA Astrophysics Data System (ADS)

    Xiao, C.; Du, X. N.; Groening, L.

    2018-04-01

    Interdigital H-mode (IH) drift tube linacs (DTLs) based on KONUS beam dynamics are very sensitive to the rf-phases and voltages at the gaps between tubes. In order to design these DTLs, a deep understanding of the underlying longitudinal beam dynamics is mandatory. The report presents tracking simulations along an IH-DTL using the PARTRAN and BEAMPATH codes together with MATHCAD and CST. Simulation results illustrate that the beam dynamics design of the pre-stripper IH-DTL at GSI is sensitive to slight deviations of rf-phase and gap voltages, with an impact on the mean beam energy at the DTL exit. Keeping the existing geometrical design, the rf-voltages and rf-phases of the DTL were re-adjusted. In simulations, this re-optimized design provides more than 90% transmission of an intense 15 emA beam while keeping the reduction of beam brilliance below 25%.

  16. Research of vibration control based on current mode piezoelectric shunt damping circuit

    NASA Astrophysics Data System (ADS)

    Liu, Weiwei; Mao, Qibo

    2017-12-01

    A piezoelectric shunt damping circuit based on a current-mode approach is employed to control the vibration of a cantilever beam. First, simulated inductors with large inductance values are designed for the corresponding series RL shunt circuits. Then, taking a cantilever beam as an example, the second natural frequency of the beam is targeted for control in the experiment. By adjusting the equivalent inductance and equivalent resistance of the shunt circuit, the optimal damping of the shunt circuit is obtained. The stability of the designed piezoelectric shunt damping circuit is also verified experimentally. Experimental results show that the proposed current-mode piezoelectric shunt damping circuit has good vibration control performance; however, the control performance degrades if the equivalent inductance and resistance deviate from their optimal values.
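
    A first-cut version of the tuning arithmetic, assuming a series RL shunt across the piezo capacitance Cp: the inductance is chosen so that the electrical LC resonance matches the targeted structural mode, after which the resistance is tuned for optimal damping. The numbers are illustrative and show why synthetic (simulated) inductors with large values are needed.

```python
import math

def shunt_inductance(f_mode_hz, c_piezo_farad):
    """Tune a series RL shunt: choose L so the electrical LC resonance
    with the piezo capacitance matches the targeted structural mode,
    L = 1 / ((2*pi*f)^2 * Cp). R is then adjusted (numerically or, as in
    the paper, experimentally) for optimal damping."""
    return 1.0 / ((2.0 * math.pi * f_mode_hz) ** 2 * c_piezo_farad)

# e.g. a 200 Hz bending mode with Cp = 47 nF requires
L = shunt_inductance(200.0, 47e-9)   # ~13.5 H, hence simulated inductors
```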

  17. Optimal Combinations of Diagnostic Tests Based on AUC.

    PubMed

    Huang, Xin; Qin, Gengsheng; Fang, Yixin

    2011-06-01

    When several diagnostic tests are available, one can combine them to achieve better diagnostic accuracy. This article considers the optimal linear combination that maximizes the area under the receiver operating characteristic curve (AUC); the estimates of the combination's coefficients can be obtained via a nonparametric procedure. However, for estimating the AUC associated with the estimated coefficients, the apparent estimation by re-substitution is too optimistic. To adjust for the upward bias, several methods are proposed. Among them the cross-validation approach is especially advocated, and an approximated cross-validation is developed to reduce the computational cost. Furthermore, these proposed methods can be applied for variable selection to select important diagnostic tests. The proposed methods are examined through simulation studies and applications to three real examples. © 2010, The International Biometric Society.
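
    To make the combination step concrete, the sketch below computes the empirical AUC of a linear combination of two tests and grid-searches the direction that maximizes the apparent AUC on synthetic data. As the paper stresses, this re-substitution value is optimistic, so cross-validation should follow.

```python
import numpy as np

def auc(pos, neg):
    """Empirical AUC: P(diseased score > healthy score), ties count 1/2."""
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

rng = np.random.default_rng(0)
x_neg = rng.normal(0.0, 1.0, size=(200, 2))            # healthy, 2 tests
x_pos = rng.normal([0.8, 0.5], 1.0, size=(200, 2))     # diseased

# Search combination direction (cos t, sin t); the apparent
# (re-substitution) AUC is biased upward, so cross-validate it afterward.
def combo_auc(t):
    w = np.array([np.cos(t), np.sin(t)])
    return auc(x_pos @ w, x_neg @ w)

thetas = np.linspace(0.0, np.pi, 361)
best_theta = max(thetas, key=combo_auc)
```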

  18. Study on optimized decision-making model of offshore wind power projects investment

    NASA Astrophysics Data System (ADS)

    Zhao, Tian; Yang, Shangdong; Gao, Guowei; Ma, Li

    2018-02-01

    China’s offshore wind energy has great potential and plays an important role in promoting the adjustment of China’s energy structure. However, the current development of offshore wind power in China is inadequate and lags far behind that of onshore wind power. Taking into account the various risks faced by offshore wind power development, this paper establishes an optimized investment decision model for offshore wind power projects by proposing a risk-benefit assessment method. To prove the practicability of this method in improving the selection of wind power projects, Python programming is used to simulate the investment analysis of a large number of projects. The paper thereby provides decision-making support for the sound development of the offshore wind power industry.

  19. Consideration of Optimal Input on Semi-Active Shock Control System

    NASA Astrophysics Data System (ADS)

    Kawashima, Takeshi

    In press working, unidirectional transmission of mechanical energy is desirable in order to maximize the life of the dies. To realize this transmission, the author has developed a shock control system based on the sliding mode control technique. The controller makes the collision-receiving object deform plastically in an effective manner by adjusting the force of an actuator inserted between the colliding objects, while the deformation of the colliding object is held to the necessary minimum. However, the actuator has to generate a large force corresponding to the impulsive force, and developing such an actuator is a formidable challenge. The author has therefore proposed a semi-active shock control system in which the impulsive force is adjusted by a brake mechanism, although this system exhibits inferior performance. The author has also designed an actuator using a friction device for semi-active shock control and proposed an active seatbelt system as an application; the effectiveness has been confirmed by numerical simulation and a model experiment. In this study, the optimal deformation history of the colliding object is examined theoretically for the case in which the collision-receiving object is perfectly plastic and the colliding object is perfectly elastic. As a result, the optimal input condition is obtained such that the ratio of the maximum deformation of the collision-receiving object to the maximum deformation of the colliding object is maximized. Additionally, the energy balance is examined.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. Zhou; H. Huang; M. Deo

    Log and seismic data indicate that most shale formations have strong heterogeneity. Conventional analytical and semi-analytical fracture models are insufficient to simulate the complex fracture propagation in these highly heterogeneous formations. Without considering the intrinsic heterogeneity, the predicted morphology of a hydraulic fracture may be biased and misleading when optimizing the completion strategy. In this paper, a fully coupled fluid-flow and geomechanics hydraulic fracture simulator based on a dual-lattice Discrete Element Method (DEM) is used to predict hydraulic fracture propagation in heterogeneous reservoirs. The heterogeneity of the rock is simulated by assigning different material force constants and critical strains to different particles, and is adjusted by conditioning to the measured data and observed geological features. Based on the proposed model, the effects of heterogeneity at different scales on micromechanical behavior and induced macroscopic fractures are examined. The numerical results show that microcracks are more inclined to form at weaker grain interfaces. A conventional simulator with a homogeneity assumption is not applicable to highly heterogeneous shale formations.

  1. Optimizing radiotherapy protocols using computer automata to model tumour cell death as a function of oxygen diffusion processes.

    PubMed

    Paul-Gilloteaux, Perrine; Potiron, Vincent; Delpon, Grégory; Supiot, Stéphane; Chiavassa, Sophie; Paris, François; Costes, Sylvain V

    2017-05-23

    The concept of hypofractionation is gaining momentum in radiation oncology centres, enabled by recent advances in radiotherapy apparatus. The gain of efficacy of this innovative treatment must be defined. We present a computer model based on translational murine data for in silico testing and optimization of various radiotherapy protocols with respect to tumour resistance and the microenvironment heterogeneity. This model combines automata approaches with image processing algorithms to simulate the cellular response of tumours exposed to ionizing radiation, modelling the alteration of oxygen permeabilization in blood vessels against repeated doses, and introducing mitotic catastrophe (as opposed to arbitrary delayed cell-death) as a means of modelling radiation-induced cell death. Published data describing cell death in vitro as well as tumour oxygenation in vivo are used to inform parameters. Our model is validated by comparing simulations to in vivo data obtained from the radiation treatment of mice transplanted with human prostate tumours. We then predict the efficacy of untested hypofractionation protocols, hypothesizing that tumour control can be optimized by adjusting daily radiation dosage as a function of the degree of hypoxia in the tumour environment. Further biological refinement of this tool will permit the rapid development of more sophisticated strategies for radiotherapy.

  2. Computer simulation of storm runoff for three watersheds in Albuquerque, New Mexico

    USGS Publications Warehouse

    Knutilla, R.L.; Veenhuis, J.E.

    1994-01-01

    Rainfall-runoff data from three watersheds were selected for calibration and verification of the U.S. Geological Survey's Distributed Routing Rainfall-Runoff Model. The watersheds chosen are residentially developed. The conceptually based model uses an optimization process that adjusts selected parameters to achieve the best fit between measured and simulated runoff volumes and peak discharges. Three of these optimization parameters represent soil-moisture conditions, three represent infiltration, and one accounts for effective impervious area. Each watershed modeled was divided into overland-flow segments and channel segments. The overland-flow segments were further subdivided to reflect pervious and impervious areas. Each overland-flow and channel segment was assigned representative values of area, slope, percentage of imperviousness, and roughness coefficients. Rainfall-runoff data for each watershed were separated into two sets for use in calibration and verification. For model calibration, seven input parameters were optimized to attain a best fit of the data. For model verification, parameter values were set using values from model calibration. The standard error of estimate for calibration of runoff volumes ranged from 19 to 34 percent, and for peak discharge calibration ranged from 27 to 44 percent. The standard error of estimate for verification of runoff volumes ranged from 26 to 31 percent, and for peak discharge verification ranged from 31 to 43 percent.

  3. Image quality comparison between single energy and dual energy CT protocols for hepatic imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Yuan, E-mail: yuanyao@stanford.edu; Pelc, Nor

    Purpose: Multi-detector computed tomography (MDCT) enables volumetric scans in a single breath hold and is clinically useful for hepatic imaging. For simple tasks, conventional single energy (SE) computed tomography (CT) images acquired at the optimal tube potential are known to have better quality than dual energy (DE) blended images. However, liver imaging is complex and often requires imaging of both structures containing iodinated contrast media, where atomic number differences are the primary contrast mechanism, and other structures, where density differences are the primary contrast mechanism. Hence it is conceivable that the broad spectrum used in a dual energy acquisition may be an advantage. In this work we are interested in comparing these two imaging strategies at equal dose and in more complex settings. Methods: We developed numerical anthropomorphic phantoms to mimic realistic clinical CT scans for medium size and large size patients. MDCT images based on the defined phantoms were simulated using various SE and DE protocols at pre- and post-contrast stages. For SE CT, images from 60 kVp through 140 kVp in 10 kVp steps were considered; for DE CT, both 80/140 and 100/140 kVp scans were simulated and linearly blended at the optimal weights. To make a fair comparison, the mAs of each scan was adjusted to match the reference radiation dose (120 kVp, 200 mAs for medium size patients and 140 kVp, 400 mAs for large size patients). Contrast-to-noise ratio (CNR) of liver against other soft tissues was used to evaluate and compare the SE and DE protocols, and multiple pre- and post-contrast liver-tissue pairs were used to define a composite CNR. To help validate the simulation results, we conducted a small clinical study. Eighty-five 120 kVp images and 81 blended 80/140 kVp images were collected and compared through both quantitative image quality analysis and an observer study. Results: In the simulation study, we found that the CNR of pre-contrast SE images mostly increased with increasing kVp, while for post-contrast imaging 90 kVp or lower yielded higher CNR images, depending on the differential iodine concentration of each tissue. Similar trends were seen in DE blended CNR and those from SE protocols. In the presence of differential iodine concentration (i.e., post-contrast), the CNR curves peak at lower kVps (80–120), with the peak shifted rightward for larger patients. The combined pre- and post-contrast composite CNR study demonstrated that an optimal SE protocol performs better than blended DE images, and that the optimal tube potential for an SE scan is around 90 kVp for medium size patients and between 90 and 120 kVp for large size patients (although low-kVp imaging requires high x-ray tube power to avoid photon starvation). Also, a tin filter added to the high-kVp beam is not only beneficial for material decomposition but also improves the CNR of the DE blended images. The dose-adjusted CNR of the clinical images showed the same trend, and radiologists favored the SE scans over blended DE images. Conclusions: Our simulation showed that an optimized SE protocol produces up to 5% higher CNR for a range of clinical tasks. The clinical study also suggested 120 kVp SE scans have better image quality than blended DE images. Hence, blended DE images do not have a fundamental CNR advantage over optimized SE images.

  4. Mechano-electrical feedback explains T-wave morphology and optimizes cardiac pump function: insight from a multi-scale model.

    PubMed

    Hermeling, Evelien; Delhaas, Tammo; Prinzen, Frits W; Kuijpers, Nico H L

    2012-01-01

    In the ECG, T- and R-wave are concordant during normal sinus rhythm (SR), but discordant after a period of ventricular pacing (VP). Experiments showed that the latter phenomenon, called T-wave memory, is mediated by a mechanical stimulus. By means of a mathematical model, we investigated the hypothesis that slow acting mechano-electrical feedback (MEF) explains T-wave memory. In our model, electromechanical behavior of the left ventricle (LV) was simulated using a series of mechanically and electrically coupled segments. Each segment comprised ionic membrane currents, calcium handling, and excitation-contraction coupling. MEF was incorporated by locally adjusting conductivity of L-type calcium current (g(CaL)) to local external work. In our set-up, g(CaL) could vary up to 25%, 50%, 100% or unlimited amount around its default value. Four consecutive simulations were performed: normal SR (with MEF), acute VP, sustained VP (with MEF), and acutely restored SR. MEF led to T-wave concordance in normal SR and to discordant T-waves acutely after restoring SR. Simulated ECGs with a maximum of 25-50% adaptation closely resembled those during T-wave memory experiments in vivo and also provided the best compromise between optimal systolic and diastolic function. In conclusion, these simulation results indicate that slow acting MEF in the LV can explain a) the relatively small differences in systolic shortening and mechanical work during SR, b) the small dispersion in repolarization time, c) the concordant T-wave during SR, and d) T-wave memory. The physiological distribution in electrophysiological properties, reflected by the concordant T-wave, may serve to optimize cardiac pump function. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Modeling the influence of the VV delay for CRT on the electrical activation patterns in absence of conduction through the AV node

    NASA Astrophysics Data System (ADS)

    Romero, D. A.; Sebastián, Rafael; Plank, Gernot; Vigmond, Edward J.; Frangi, Alejandro F.

    2008-03-01

    From epidemiological studies, it has been shown that 0.2% of men and 0.1% of women suffer from a degree of atrioventricular (AV) block. In recent years, the palliative treatment for third degree AV block has included Cardiac Resynchronization Therapy (CRT). It was found that patients show more clinical improvement in the long term with CRT compared with single chamber devices. Still, an important group of patients does not improve their hemodynamic function as much as could be expected. A better understanding of the basis for optimizing the devices settings (among which the VV delay) will help to increase the number of responders. In this work, a finite element model of the left and right ventricles was generated using an atlas-based approach for their segmentation, which includes fiber orientation. The electrical activity was simulated with the electrophysiological solver CARP, using the Ten Tusscher et al. ionic model for the myocardium, and the DiFrancesco-Noble for Purkinje fibers. The model is representative of a patient without dilated or ischemic cardiomyopathy. The simulation results were analyzed for total activation times and latest activated regions at different VV delays and pre-activations (RV pre-activated, LV pre-activated). To optimize the solution, simulations are compared against the His-Purkinje network activation (normal physiological conduction), and interventricular septum activation (as collision point for the two wave fronts). The results were analyzed using Pearson's coefficient of correlation for point to point comparisons between simulation cases. The results of this study contribute to gain insight on the VV delay and how its adjustment might influence response to CRT and how it can be used to optimize the treatment.

  6. Calibrating the orientation between a microlens array and a sensor based on projective geometry

    NASA Astrophysics Data System (ADS)

    Su, Lijuan; Yan, Qiangqiang; Cao, Jun; Yuan, Yan

    2016-07-01

    We demonstrate a method for calibrating a microlens array (MLA) with a sensor component by building a plenoptic camera with a conventional prime lens. This calibration method includes a geometric model, a setup to adjust the distance (L) between the prime lens and the MLA, a calibration procedure for determining the subimage centers, and an optimization algorithm. The geometric model introduces nine unknown parameters regarding the centers of the microlenses and their images, whereas the distance adjustment setup provides an initial guess for the distance L. The simulation results verify the effectiveness and accuracy of the proposed method. The experimental results demonstrate the calibration process can be performed with a commercial prime lens and the proposed method can be used to quantitatively evaluate whether a MLA and a sensor is assembled properly for plenoptic systems.

  7. Proximity matching for ArF and KrF scanners

    NASA Astrophysics Data System (ADS)

    Kim, Young Ki; Pohling, Lua; Hwee, Ng Teng; Kim, Jeong Soo; Benyon, Peter; Depre, Jerome; Hong, Jongkyun; Serebriakov, Alexander

    2009-03-01

    IC manufacturers around the world use various exposure systems and work to very tight requirements in order to establish and maintain stable lithographic processes at 65 nm, 45 nm, and below. Once a process is established, the manufacturer wants to be able to run it on whichever tools are available, which is why proximity matching plays a key role in maximizing tool utilization and productivity across different types of exposure tools. In this paper, we investigate the sources of error that cause optical proximity mismatch and evaluate several approaches for proximity matching of different types of 193 nm and 248 nm scanner systems, such as set-get sigma calibration, contrast adjustment, and, finally, tuning imaging parameters by optimization with Manual Scanner Matcher. First, to monitor the proximity mismatch, we collect CD measurement data for the reference tool and for the tool-to-be-matched; normally, the measurement is performed for a set of line or space through-pitch structures. Secondly, by simulation or experiment, we determine the sensitivity of the critical structures with respect to small adjustments of exposure settings such as NA, sigma inner, sigma outer, dose, and focus scan range, which are called 'proximity tuning knobs'. Then, with the help of special optimization software, we compute the proximity knob adjustment that has to be applied to the tool-to-be-matched to match the reference tool. Finally, we verify successful matching by exposing on the tool-to-be-matched with the tuned exposure settings. This procedure is applicable to inter- and intra-scanner-type matching, and possibly also to process transfers to the design targets. To illustrate the approach we show experimental data as well as results of imaging simulations, which demonstrate successful matching of critical structures for ArF scanners of different tool generations.
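
    The knob-tuning step can be pictured as a small linear-algebra problem: a sensitivity matrix maps knob adjustments to CD changes, and least squares picks the adjustment that best cancels the measured through-pitch mismatch. The matrix and mismatch values below are purely hypothetical.

```python
import numpy as np

# Hypothetical sensitivity matrix S: d(CD_i)/d(knob_j) in nm per unit of
# knobs such as [NA, sigma_outer, sigma_inner, dose]; one row per
# monitored pitch. All values are illustrative, not measured data.
S = np.array([[12.0, -8.0,  3.0, 1.5],
              [ 6.0, -2.0,  5.0, 1.2],
              [-4.0,  7.0, -6.0, 1.0]])
cd_mismatch = np.array([1.8, -0.6, 1.1])   # tool minus reference, nm

# Least-squares knob adjustment that best cancels the through-pitch
# mismatch on the tool-to-be-matched.
delta_knobs, *_ = np.linalg.lstsq(S, -cd_mismatch, rcond=None)
predicted_residual = cd_mismatch + S @ delta_knobs
```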

  8. Adaptive grid based multi-objective Cauchy differential evolution for stochastic dynamic economic emission dispatch with wind power uncertainty

    PubMed Central

    Lei, Xiaohui; Wang, Chao; Yue, Dong; Xie, Xiangpeng

    2017-01-01

    Since wind power is integrated into the thermal power operation system, dynamic economic emission dispatch (DEED) has become a new challenge due to its uncertain characteristics. This paper proposes an adaptive grid based multi-objective Cauchy differential evolution (AGB-MOCDE) for solving stochastic DEED with wind power uncertainty. To properly deal with wind power uncertainty, scenarios are generated to simulate possible situations by dividing the uncertainty domain into different intervals; the probability of each interval can be calculated using the cumulative distribution function, and a stochastic DEED model can be formulated under the different scenarios. For enhancing the optimization efficiency, a Cauchy mutation operation is utilized to improve differential evolution by adjusting the population diversity during the population evolution process, and an adaptive grid is constructed for retaining the diversity distribution of the Pareto front. In view of the large number of generated scenarios, a reduction mechanism based on covariance relationships is carried out to decrease the number of scenarios, which greatly reduces the computational complexity. Moreover, a constraint-handling technique is utilized to deal with the system load balance while considering transmission loss among thermal units and wind farms; all constraint limits can be satisfied within the permitted accuracy. After the proposed method is simulated on three test systems, the obtained results reveal that, in comparison with other alternatives, the proposed AGB-MOCDE can optimize the DEED problem while handling all constraint limits, and the optimal scheme of stochastic DEED reduces the conservatism of interval optimization, providing a more valuable optimal scheme for real-world applications. PMID:28961262
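
    The scenario-generation step described above can be sketched directly: divide the uncertainty domain into intervals, take interval midpoints as scenarios, and assign probabilities from the cumulative distribution function. The normal forecast-error distribution and its parameters are assumptions for illustration.

```python
import numpy as np
from scipy import stats

def interval_scenarios(dist, lo, hi, n_intervals):
    """Divide the wind-power uncertainty domain [lo, hi] into intervals;
    each scenario is an interval midpoint with probability taken from the
    distribution's CDF, renormalized over the domain."""
    edges = np.linspace(lo, hi, n_intervals + 1)
    probs = np.diff(dist.cdf(edges))
    probs /= probs.sum()
    midpoints = 0.5 * (edges[:-1] + edges[1:])
    return midpoints, probs

# e.g. wind power around a 60 MW forecast, assumed normally distributed:
scenarios, p = interval_scenarios(stats.norm(loc=60, scale=10), 30, 90, 10)
```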

  9. Development of gradient descent adaptive algorithms to remove common mode artifact for improvement of cardiovascular signal quality.

    PubMed

    Ciaccio, Edward J; Micheli-Tzanakou, Evangelia

    2007-07-01

    Common-mode noise degrades cardiovascular signal quality and diminishes measurement accuracy. Filtering to remove noise components in the frequency domain often distorts the signal. Two adaptive noise canceling (ANC) algorithms were tested to adjust weighted reference signals for optimal subtraction from a primary signal. Update of the weight w was based upon the gradient term ∇(ε²) of the steepest-descent equation Δw = −μ∇(ε²), where the error ε is the difference between the primary and weighted reference signals. ∇ was estimated from Δε² and Δw without using a variable Δw in the denominator, which can cause instability. The Parallel Comparison (PC) algorithm computed Δε² using fixed finite differences ±Δw in parallel during each discrete time k. The ALOPEX algorithm computed Δε² × Δw from time k to k+1 to estimate ∇, with a random number added to account for Δε²·Δw → 0 near the optimal weighting. Using simulated data, both algorithms stably converged to the optimal weighting within 50-2000 discrete sample points k, even with an SNR of 1:8 and weights initialized far from the optimum. Using a sharply pulsatile cardiac electrogram signal with added noise such that the SNR was 1:5, both algorithms exhibited stable convergence within 100 ms (100 sample points). Fourier spectral analysis revealed minimal distortion when comparing the signal without added noise to the ANC-restored signal. ANC algorithms based upon difference calculations can rapidly and stably converge to the optimal weighting in simulated and real cardiovascular data. Signal quality is restored with minimal distortion, increasing the accuracy of biophysical measurement.
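
    A sketch of the Parallel Comparison flavor of the update is given below: the squared error is evaluated at w + Δw and w - Δw in parallel at each sample, and the weight steps against the finite-difference gradient estimate. The fixed denominator 2Δw is this sketch's simplification; the paper's exact update avoids a variable Δw in the denominator.

```python
import numpy as np

def anc_parallel_comparison(primary, reference, w0=0.0, dw=0.01, mu=0.05):
    """Parallel-comparison ANC sketch: at each sample, evaluate the squared
    error at w + dw and w - dw, form a finite-difference gradient estimate
    (fixed, constant denominator), and step the weight against it."""
    w = w0
    weights, out = [], []
    for d, x in zip(primary, reference):
        e_plus = (d - (w + dw) * x) ** 2
        e_minus = (d - (w - dw) * x) ** 2
        grad = (e_plus - e_minus) / (2 * dw)   # gradient estimate
        w -= mu * grad
        weights.append(w)
        out.append(d - w * x)                  # noise-canceled output
    return np.array(weights), np.array(out)
```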

  10. Ultra-wideband, Wide Angle and Polarization-insensitive Specular Reflection Reduction by Metasurface based on Parameter-adjustable Meta-Atoms

    PubMed Central

    Su, Jianxun; Lu, Yao; Zhang, Hui; Li, Zengrui; (Lamar) Yang, Yaoqing; Che, Yongxing; Qi, Kainan

    2017-01-01

    In this paper, an ultra-wideband, wide angle and polarization-insensitive metasurface is designed, fabricated, and characterized for suppressing the specular electromagnetic wave reflection or backward radar cross section (RCS). Square ring structure is chosen as the basic meta-atoms. A new physical mechanism based on size adjustment of the basic meta-atoms is proposed for ultra-wideband manipulation of electromagnetic (EM) waves. Based on hybrid array pattern synthesis (APS) and particle swarm optimization (PSO) algorithm, the selection and distribution of the basic meta-atoms are optimized simultaneously to obtain the ultra-wideband diffusion scattering patterns. The metasurface can achieve an excellent RCS reduction in an ultra-wide frequency range under x- and y-polarized normal incidences. The new proposed mechanism greatly extends the bandwidth of RCS reduction. The simulation and experiment results show the metasurface can achieve ultra-wideband and polarization-insensitive specular reflection reduction for both normal and wide-angle incidences. The proposed methodology opens up a new route for realizing ultra-wideband diffusion scattering of EM wave, which is important for stealth and other microwave applications in the future. PMID:28181593

  12. Dynamic Resource Allocation and Access Class Barring Scheme for Delay-Sensitive Devices in Machine to Machine (M2M) Communications.

    PubMed

    Li, Ning; Cao, Chao; Wang, Cong

    2017-06-15

    Supporting simultaneous access of machine-type devices is a critical challenge in machine-to-machine (M2M) communications. In this paper, we propose an optimal scheme to dynamically adjust the Access Class Barring (ACB) factor and the number of random access channel (RACH) resources for clustered machine-to-machine (M2M) communications, in which Delay-Sensitive (DS) devices coexist with Delay-Tolerant (DT) ones. In M2M communications, since delay-sensitive devices share random access resources with delay-tolerant devices, reducing the resources consumed by delay-sensitive devices means that there will be more resources available to delay-tolerant ones. Our goal is to optimize the random access scheme, which can not only satisfy the requirements of delay-sensitive devices, but also take the communication quality of delay-tolerant ones into consideration. We discuss this problem from the perspective of delay-sensitive services by adjusting the resource allocation and ACB scheme for these devices dynamically. Simulation results show that our proposed scheme realizes good performance in satisfying the delay-sensitive services as well as increasing the utilization rate of the random access resources allocated to them.
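
    For intuition, a classical throughput-oriented barring rule (not necessarily the paper's exact optimization) admits on average about one device per available preamble; the sketch below implements that approximation.

```python
def acb_factor(n_preambles, n_backlogged):
    """Throughput-oriented ACB barring factor for slotted random access:
    admit on average about one device per preamble, i.e.
    p = min(1, n_preambles / n_backlogged)."""
    return min(1.0, n_preambles / max(n_backlogged, 1))

# e.g. 54 contention preambles and an estimated 600 backlogged devices:
p = acb_factor(54, 600)   # 0.09: each device attempts with probability 0.09
```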

  13. Insights into the use of time-lapse GPR data as observations for inverse multiphase flow simulations of DNAPL migration

    USGS Publications Warehouse

    Johnson, R.H.; Poeter, E.P.

    2007-01-01

    Perchloroethylene (PCE) saturations determined from GPR surveys were used as observations for inversion of multiphase flow simulations of a PCE injection experiment (Borden 9??m cell), allowing for the estimation of optimal bulk intrinsic permeability values. The resulting fit statistics and analysis of residuals (observed minus simulated PCE saturations) were used to improve the conceptual model. These improvements included adjustment of the elevation of a permeability contrast, use of the van Genuchten versus Brooks-Corey capillary pressure-saturation curve, and a weighting scheme to account for greater measurement error with larger saturation values. A limitation in determining PCE saturations through one-dimensional GPR modeling is non-uniqueness when multiple GPR parameters are unknown (i.e., permittivity, depth, and gain function). Site knowledge, fixing the gain function, and multiphase flow simulations assisted in evaluating non-unique conceptual models of PCE saturation, where depth and layering were reinterpreted to provide alternate conceptual models. Remaining bias in the residuals is attributed to the violation of assumptions in the one-dimensional GPR interpretation (which assumes flat, infinite, horizontal layering) resulting from multidimensional influences that were not included in the conceptual model. While the limitations and errors in using GPR data as observations for inverse multiphase flow simulations are frustrating and difficult to quantify, simulation results indicate that the error and bias in the PCE saturation values are small enough to still provide reasonable optimal permeability values. The effort to improve model fit and reduce residual bias decreases simulation error even for an inversion based on biased observations and provides insight into alternate GPR data interpretations. Thus, this effort is warranted and provides information on bias in the observation data when this bias is otherwise difficult to assess. ?? 2006 Elsevier B.V. All rights reserved.

  14. Application of Layered Perforation Profile Control Technique to Low Permeable Reservoir

    NASA Astrophysics Data System (ADS)

    Wei, Sun

    2018-01-01

    It is difficult to satisfy the profile control demands of complex well sections and multi-layer reservoirs with conventional profile control technology; therefore, this research investigates adjusting the injection-production profile through optimization of layered perforating parameters. That is, in the case of commingled production from multiple layers, the water absorption of each layer is adjusted by tuning its perforating parameters, so as to balance the injection-production profile over the whole well section and ultimately enhance the oil displacement efficiency of water flooding. Applying oil-water two-phase percolation theory and the relationship between perforating damage and productivity, a mathematical model for adjusting the injection-production profile through layered perforating parameter optimization is established, and perforating parameter optimization software is programmed. Different types of optimization designs are carried out for different geological conditions and construction purposes using this software. An application test in a low-permeability reservoir shows that the water injection profile becomes significantly more balanced after perforation with the optimized parameters, demonstrating a good field application effect.

  15. Partnership for Edge Physics (EPSI), University of Texas Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moser, Robert; Carey, Varis; Michoski, Craig

    Simulations of tokamak plasmas require a number of inputs whose values are uncertain. The effects of these input uncertainties on the reliability of model predictions are of great importance when validating predictions by comparison to experimental observations, and when using the predictions for design and operation of devices. However, high fidelity simulations of tokamak plasmas, particularly those aimed at characterization of the edge plasma physics, are computationally expensive, so lower cost surrogates are required to enable practical uncertainty estimates. Two surrogate modeling techniques have been explored in the context of tokamak plasma simulations using the XGC family of plasma simulation codes. The first is a response surface surrogate, and the second is an augmented surrogate relying on scenario extrapolation. In addition, to reduce the costs of the XGC simulations, a particle resampling algorithm was developed, which allows marker particle distributions to be adjusted to maintain optimal importance sampling. This means that the total number of particles, and therefore the cost of a simulation, can be reduced while maintaining the same accuracy.
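
    The report does not specify the resampling algorithm, but systematic resampling is a standard way to thin a weighted marker-particle population while keeping the importance weights near-uniform; the NumPy sketch below illustrates the idea with hypothetical weights.

    ```python
    import numpy as np

    def systematic_resample(weights: np.ndarray, n_out: int) -> np.ndarray:
        """Return indices of resampled particles; output weights become uniform."""
        w = weights / weights.sum()                  # normalize importance weights
        positions = (np.arange(n_out) + np.random.uniform()) / n_out
        return np.searchsorted(np.cumsum(w), positions)

    rng = np.random.default_rng(0)
    weights = rng.exponential(size=10_000)           # stand-in marker weights
    idx = systematic_resample(weights, n_out=5_000)  # halve the particle count
    print(idx.shape, np.bincount(idx).max())
    ```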

  16. Resource planning and scheduling of payload for satellite with particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Li, Jian; Wang, Cheng

    2007-11-01

    The resource planning and scheduling technology of payloads is a key technology for realizing automated control of an earth observing satellite with limited onboard resources; it arranges the work states of the various payloads to carry out missions by optimizing the use of those resources. The scheduling task is a difficult constrained optimization problem with diverse and changing requests and constraints. Based on an analysis of the satellite's functions and the payloads' resource constraints, a proactive planning and scheduling strategy based on the availability of consumable and replenishable resources in time order is introduced, along with dividing the planning and scheduling period into several pieces. A particle swarm optimization algorithm with adaptive mutation operator selection is proposed to address the problem: the swarm is divided into groups that employ various mutation operators, viz., differential evolution, Gaussian, and random mutation operators, with different probabilities. The probabilities are adjusted adaptively by comparing the effectiveness of the groups so as to select a proper operator. The simulation results show the feasibility and effectiveness of the method.
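
    As a rough illustration of the adaptive operator selection described above, the sketch below keeps a smoothed success count per mutation operator and renormalizes it into selection probabilities; the operator forms, rates, and the toy sphere fitness are assumptions, not the paper's implementation.

    ```python
    import random

    def sphere(x):                          # toy fitness: minimize sum of squares
        return sum(xi * xi for xi in x)

    # Three mutation operators mirroring the abstract's pool (illustrative forms).
    def de_mutation(x, pop):                # differential-evolution-style step
        a, b = random.sample(pop, 2)
        return [xi + 0.5 * (ai - bi) for xi, ai, bi in zip(x, a, b)]

    def gaussian_mutation(x, pop):
        return [xi + random.gauss(0.0, 0.1) for xi in x]

    def random_mutation(x, pop):
        return [xi + random.uniform(-1.0, 1.0) for xi in x]

    random.seed(0)
    operators = [de_mutation, gaussian_mutation, random_mutation]
    success = [1.0, 1.0, 1.0]               # smoothed per-operator success counts
    pop = [[random.uniform(-5, 5) for _ in range(4)] for _ in range(20)]

    for _ in range(500):
        probs = [s / sum(success) for s in success]      # adaptive probabilities
        i = random.choices(range(3), weights=probs)[0]   # pick an operator group
        parent = min(pop, key=sphere)                    # mutate the current best
        child = operators[i](parent, pop)
        worst = max(pop, key=sphere)
        if sphere(child) < sphere(worst):                # replace the worst member
            pop[pop.index(worst)] = child
            if sphere(child) < sphere(parent):
                success[i] += 1.0                        # reward improving operators

    print("operator probabilities:", [round(p, 2) for p in probs])
    print("best fitness:", round(sphere(min(pop, key=sphere)), 5))
    ```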

  17. Network congestion control algorithm based on Actor-Critic reinforcement learning model

    NASA Astrophysics Data System (ADS)

    Xu, Tao; Gong, Lina; Zhang, Wei; Li, Xuhong; Wang, Xia; Pan, Wenwen

    2018-04-01

    Aiming at the network congestion control problem, a congestion control algorithm based on the Actor-Critic reinforcement learning model is designed. By incorporating a genetic algorithm into the congestion control strategy, network congestion can be detected and prevented more effectively. A simulation experiment of the network congestion control algorithm is designed according to Actor-Critic reinforcement learning. The simulation experiments verify that the AQM controller can predict the dynamic characteristics of the network system. Moreover, the learning strategy is adopted to optimize network performance, and the packet dropping probability is adjusted adaptively so as to improve network performance and avoid congestion. Based on the above findings, it is concluded that the network congestion control algorithm based on the Actor-Critic reinforcement learning model can effectively avoid the occurrence of TCP network congestion.
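
    A tabular actor-critic loop of the kind the abstract describes can be sketched as follows; the toy queue dynamics, reward, and learning rates are assumptions chosen only to show how the critic's TD error drives the actor's drop-probability adjustments.

    ```python
    import random

    # Minimal tabular actor-critic sketch for AQM-style drop-probability control.
    N_STATES, TARGET = 10, 3                 # queue-length buckets, target bucket
    value = [0.0] * N_STATES                 # critic: state-value estimates
    pref = [[0.0, 0.0] for _ in range(N_STATES)]   # actor: [lower p, raise p]
    drop_p, alpha, gamma = 0.1, 0.05, 0.9

    def step(queue, drop_p):
        """Toy queue: random arrivals thinned by the drop probability, service 2."""
        arrivals = sum(1 for _ in range(random.randint(0, 5))
                       if random.random() > drop_p)
        return max(0, min(N_STATES - 1, queue + arrivals - 2))

    random.seed(1)
    queue = 0
    for _ in range(20000):
        s = queue
        if random.random() < 0.1:                        # 10% exploration
            a = random.randrange(2)
        else:                                            # greedy on preferences
            a = max(range(2), key=lambda i: pref[s][i])
        drop_p = min(0.9, max(0.0, drop_p + (0.02 if a == 1 else -0.02)))
        queue = step(queue, drop_p)
        reward = -abs(queue - TARGET)                    # penalize queue deviation
        td_error = reward + gamma * value[queue] - value[s]
        value[s] += alpha * td_error                     # critic update
        pref[s][a] += alpha * td_error                   # actor update

    print(f"drop probability settles near {drop_p:.2f}")
    ```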

  18. An Airway Network Flow Assignment Approach Based on an Efficient Multiobjective Optimization Framework

    PubMed Central

    Zhang, Xuejun; Lei, Jiaxing

    2015-01-01

    To reduce airspace congestion and flight delays simultaneously, this paper formulates the airway network flow assignment (ANFA) problem as a multiobjective optimization model and presents a new multiobjective optimization framework to solve it. Firstly, an effective multi-island parallel evolution algorithm with multiple evolution populations is employed to improve the optimization capability. Secondly, the nondominated sorting genetic algorithm II is applied for each population. In addition, a cooperative coevolution algorithm is adapted to divide the ANFA problem into several low-dimensional biobjective optimization problems that are easier to deal with. Finally, in order to maintain the diversity of solutions and to avoid prematurity, a dynamic adjustment operator based on solution congestion degree is specifically designed for the ANFA problem. Simulation results using real traffic data from the China air route network and daily flight plans demonstrate that the proposed approach can improve solution quality effectively, showing superiority to existing approaches such as the multiobjective genetic algorithm, the well-known multiobjective evolutionary algorithm based on decomposition, and a cooperative coevolution multiobjective algorithm, as well as other parallel evolution algorithms with different migration topologies. PMID:26180840

  19. Efficient and Accurate Optimal Linear Phase FIR Filter Design Using Opposition-Based Harmony Search Algorithm

    PubMed Central

    Saha, S. K.; Dutta, R.; Choudhury, R.; Kar, R.; Mandal, D.; Ghoshal, S. P.

    2013-01-01

    In this paper, opposition-based harmony search has been applied to the optimal design of linear phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent one, and the opposition-based approach is applied: during initialization, a randomly generated population of solutions is chosen, opposite solutions are also considered, and the fitter one is selected as an a priori guess. In harmony memory, each such solution passes through the memory consideration rule, the pitch adjustment rule, and then opposition-based reinitialization generation jumping, which gives the optimum result corresponding to the least error fitness in the multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm balances the exploration and exploitation of the search space. Low pass, high pass, band pass, and band stop FIR filters are designed with the proposed OHS and the other aforementioned algorithms individually for comparison of optimization performance. A comparison of simulation results reveals the optimization efficacy of OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problems. PMID:23844390

  20. Efficient and accurate optimal linear phase FIR filter design using opposition-based harmony search algorithm.

    PubMed

    Saha, S K; Dutta, R; Choudhury, R; Kar, R; Mandal, D; Ghoshal, S P

    2013-01-01

    In this paper, opposition-based harmony search has been applied to the optimal design of linear phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent one, and the opposition-based approach is applied: during initialization, a randomly generated population of solutions is chosen, opposite solutions are also considered, and the fitter one is selected as an a priori guess. In harmony memory, each such solution passes through the memory consideration rule, the pitch adjustment rule, and then opposition-based reinitialization generation jumping, which gives the optimum result corresponding to the least error fitness in the multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm balances the exploration and exploitation of the search space. Low pass, high pass, band pass, and band stop FIR filters are designed with the proposed OHS and the other aforementioned algorithms individually for comparison of optimization performance. A comparison of simulation results reveals the optimization efficacy of OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problems.
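
    The core OHS loop can be sketched generically; in the sketch below the paper's FIR error fitness is replaced by a toy sphere function, and the harmony memory size, HMCR, and PAR values are assumed, not taken from the paper.

    ```python
    import random

    def ohs(fitness, dim, lo, hi, hms=20, hmcr=0.9, par=0.3, iters=2000):
        """Opposition-based harmony search sketch for generic minimization."""
        rand_vec = lambda: [random.uniform(lo, hi) for _ in range(dim)]
        opposite = lambda x: [lo + hi - xi for xi in x]
        # Opposition-based initialization: keep the fitter of each pair.
        memory = [min(v, opposite(v), key=fitness)
                  for v in (rand_vec() for _ in range(hms))]
        for _ in range(iters):
            new = []
            for d in range(dim):
                if random.random() < hmcr:            # memory consideration rule
                    x = random.choice(memory)[d]
                    if random.random() < par:         # pitch adjustment rule
                        x += random.uniform(-0.05, 0.05) * (hi - lo)
                else:
                    x = random.uniform(lo, hi)        # random re-selection
                new.append(min(hi, max(lo, x)))
            new = min(new, opposite(new), key=fitness)   # generation jumping
            worst = max(memory, key=fitness)
            if fitness(new) < fitness(worst):
                memory[memory.index(worst)] = new
        return min(memory, key=fitness)

    random.seed(0)
    best = ohs(lambda x: sum(xi * xi for xi in x), dim=5, lo=-1.0, hi=1.0)
    print([round(b, 4) for b in best])
    ```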

  1. CT dose minimization using personalized protocol optimization and aggressive bowtie

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Yin, Zhye; Jin, Yannan; Wu, Mingye; Yao, Yangyang; Tao, Kun; Kalra, Mannudeep K.; De Man, Bruno

    2016-03-01

    In this study, we propose to use patient-specific x-ray fluence control to reduce the radiation dose to sensitive organs while still achieving the desired image quality (IQ) in the region of interest (ROI). The mA modulation profile is optimized view by view, based on the sensitive organs and the ROI, which are obtained from an ultra-low-dose volumetric CT scout scan [1]. We use a clinical chest CT scan to demonstrate the feasibility of the proposed concept: the breast region is selected as the sensitive organ region while the cardiac region is selected as the IQ ROI. Two groups of simulations are performed based on the clinical CT dataset: (1) a constant mA scan adjusted to the patient attenuation (120 kVp, 300 mA), which serves as the baseline; (2) an optimized scan with an aggressive bowtie and ROI centering combined with patient-specific mA modulation. The results show that the combination of the aggressive bowtie and the optimized mA modulation can yield a 40% dose reduction in the breast region while the IQ in the cardiac region is maintained. More generally, this paper demonstrates the general concept of using a 3D scout scan for optimal scan planning.

  2. Performance improvement of an active vibration absorber subsystem for an aircraft model using a bees algorithm based on multi-objective intelligent optimization

    NASA Astrophysics Data System (ADS)

    Zarchi, Milad; Attaran, Behrooz

    2017-11-01

    This study develops a mathematical model to investigate the behaviour of adaptable shock absorber dynamics for the six-degree-of-freedom aircraft model in the taxiing phase. The purpose of this research is to design a proportional-integral-derivative technique for control of an active vibration absorber system using a hydraulic nonlinear actuator based on the bees algorithm. This optimization algorithm is inspired by the natural intelligent foraging behaviour of honey bees. The neighbourhood search strategy is used to find better solutions around the previous one. The parameters of the controller are adjusted by minimizing the aircraft's acceleration and impact force as the multi-objective function. The major advantages of this algorithm over other optimization algorithms are its simplicity, flexibility and robustness. The results of the numerical simulation indicate that the active suspension increases the comfort of the ride for passengers and the fatigue life of the structure. This is achieved by decreasing the impact force, displacement and acceleration significantly.

  3. Compensating temperature-induced ultrasonic phase and amplitude changes

    NASA Astrophysics Data System (ADS)

    Gong, Peng; Hay, Thomas R.; Greve, David W.; Junker, Warren R.; Oppenheim, Irving J.

    2016-04-01

    In ultrasonic structural health monitoring (SHM), environmental and operational conditions, especially temperature, can significantly affect the propagation of ultrasonic waves and thus degrade damage detection. Typically, temperature effects are compensated using optimal baseline selection (OBS) or optimal signal stretch (OSS). The OSS method achieves compensation by adjusting phase shifts caused by temperature, but it does not fully compensate phase shifts and it does not compensate for accompanying signal amplitude changes. In this paper, we develop a new temperature compensation strategy to address both phase shifts and amplitude changes. In this strategy, OSS is first used to compensate some of the phase shifts and to quantify the temperature effects by stretching factors. Based on stretching factors, empirical adjusting factors for a damage indicator are then applied to compensate for the temperature induced remaining phase shifts and amplitude changes. The empirical adjusting factors can be trained from baseline data with temperature variations in the absence of incremental damage. We applied this temperature compensation approach to detect volume loss in a thick-walled aluminum tube with multiple damage levels and temperature variations. Our specimen is a thick-walled short tube, with dimensions closely comparable to the outlet region of a frac iron elbow where flow-induced erosion produces the volume loss that governs the service life of that component, and our experimental sequence simulates the erosion process by removing material in small damage steps. Our results show that damage detection is greatly improved when this new temperature compensation strategy, termed modified-OSS, is implemented.
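
    The OSS step can be illustrated compactly: resample the monitored signal by candidate stretch factors and keep the factor that best correlates with the baseline. The sketch below uses synthetic sinusoids and does not include the paper's empirical amplitude correction (the modified-OSS stage).

    ```python
    import numpy as np

    def optimal_stretch(baseline, signal, factors):
        """Optimal signal stretch: resample the signal by each candidate factor
        and keep the one best correlated with the baseline."""
        idx = np.arange(len(baseline), dtype=float)
        score = lambda a: np.corrcoef(np.interp(idx * a, idx, signal),
                                      baseline)[0, 1]
        return max(factors, key=score)

    t = np.linspace(0.0, 50.0, 2000)
    baseline = np.sin(2.0 * t)
    signal = np.sin(2.0 * 1.002 * t)          # "temperature" dilates the time base
    alpha = optimal_stretch(baseline, signal, np.linspace(0.99, 1.01, 401))
    print(f"estimated stretch factor: {alpha:.4f}")   # ~1/1.002, i.e. about 0.998
    ```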

  4. Research on hybrid transmission mode for HVDC with optimal thermal power and renewable energy combination

    NASA Astrophysics Data System (ADS)

    Zhang, Jinfang; Yan, Xiaoqing; Wang, Hongfu

    2018-02-01

    With the rapid development of renewable energy in Northwest China, curtailment is becoming more and more severe owing to a lack of adjustment capability and sufficient transmission capacity. Building on existing HVDC projects, exploring a hybrid transmission mode that combines thermal power and renewable power is therefore necessary and important. This paper proposes a method for the optimal combination of thermal power and renewable energy on HVDC lines, based on multi-scheme comparison. After establishing a mathematical model for electric power balance in time series mode, ten different schemes were evaluated by test simulation to identify the most suitable one. Using the proposed discrimination criteria, including generation device utilization hours, renewable energy electricity proportion, and curtailment level, a recommended scheme was found. The result also validates the efficiency of the method.

  5. Charge transfer efficiency improvement of 4T pixel for high speed CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Jin, Xiangliang; Liu, Weihui; Yang, Hongjiao; Tang, Lizhen; Yang, Jia

    2015-03-01

    A charge transfer efficiency improvement method is proposed by optimizing the electrical potential distribution along the transfer path from the PPD to the FD. In this work, we present a non-uniformly doped transfer transistor channel, with adjustments to the overlap length between the CPIA layer and the transfer gate, and to the overlap length between the SEN layer and the transfer gate. Theoretical analysis and TCAD simulation results show that the density of the residual charge is reduced from 1e11/cm3 to 1e9/cm3, the transfer time is reduced from 500 ns to 143 ns, and the charge transfer efficiency is about 77 e-/ns. This optimized design effectively improves the charge transfer efficiency of the 4T pixel and the performance of 4T high speed CMOS image sensors.

  6. Control strategies for wind farm power optimization: LES study

    NASA Astrophysics Data System (ADS)

    Ciri, Umberto; Rotea, Mario; Leonardi, Stefano

    2017-11-01

    Turbines in wind farms operate in off-design conditions as wake interactions occur for particular wind directions. Advanced wind farm control strategies aim at coordinating and adjusting turbine operations to mitigate power losses in such conditions. Coordination is achieved by controlling on upstream turbines either the wake intensity, through the blade pitch angle or the generator torque, or the wake direction, through yaw misalignment. Downstream turbines can be adapted to work in waked conditions and limit power losses, using the blade pitch angle or the generator torque. As wind conditions in wind farm operations may change significantly, it is difficult to determine and parameterize the variations of the coordinated optimal settings. An alternative is model-free control and optimization of wind farms, which does not require any parameterization and can track the optimal settings as conditions vary. In this work, we employ a model-free optimization algorithm, extremum-seeking control, to find the optimal set-points of generator torque, blade pitch and yaw angle for a three-turbine configuration. Large-Eddy Simulations are used to provide a virtual environment to evaluate the performance of the control strategies under realistic, unsteady incoming wind. This work was supported by the National Science Foundation, Grants No. 1243482 (the WINDINSPIRE project) and IIP 1362033 (I/UCRC WindSTAR). TACC is acknowledged for providing computational time.
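
    A minimal sketch of perturbation-based extremum seeking, the model-free scheme named above: add a sinusoidal dither to the set-point, high-pass and demodulate the measured objective, and integrate the product as a gradient estimate. The quadratic stand-in for the LES-evaluated farm power and all gains are assumptions.

    ```python
    import numpy as np

    def farm_power(yaw):
        """Stand-in for the LES-evaluated objective: total power vs. a set-point."""
        return 1.0 - 0.002 * (yaw - 12.0) ** 2     # hypothetical optimum at 12 deg

    # Dither, high-pass, demodulate, integrate; gains are illustrative only.
    theta, a, omega, k, dt = 0.0, 0.5, 2.0, 40.0, 0.05
    j_mean = farm_power(theta)                     # slow low-pass, used to high-pass J
    for n in range(4000):
        t = n * dt
        J = farm_power(theta + a * np.sin(omega * t))
        j_mean += (dt / 5.0) * (J - j_mean)        # remove the DC component of J
        theta += k * (J - j_mean) * np.sin(omega * t) * dt   # gradient-ascent step
    print(f"converged set-point ≈ {theta:.1f} deg")
    ```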

  7. Numerical study of Si nanoparticle formation by SiCl4 hydrogenation in RF plasma

    NASA Astrophysics Data System (ADS)

    Rehmet, Christophe; Cao, Tengfei; Cheng, Yi

    2016-04-01

    Nanocrystalline silicon (nc-Si) is a promising material for many applications related to electronics and optoelectronics. This work performs numerical simulations in order to understand a new process for high-deposition-rate production of nc-Si in a radio-frequency plasma reactor. Inductive plasma formation, reaction kinetics and nanoparticle formation are considered in a sophisticated model. Results show that the plasma parameters can be adjusted to improve the selectivity between nanoparticle and molecule formation and, thus, the deposition rate. A parametric study also helps to optimize the system with appropriate operating conditions.

  8. Theoretical analysis of two nonpolarizing beam splitters in asymmetrical glass cubes.

    PubMed

    Shi, Jin Hui; Wang, Zheng Ping

    2008-05-01

    The design principle for a nonpolarizing beam splitter based on the Brewster condition in a cube is introduced. Nonpolarizing beam splitters in an asymmetrical glass cube are proposed and theoretically investigated, and applied examples are given. To realize 50% reflectance and 50% transmittance at specified wavelengths for both polarization components with an error of less than 2%, two measures are taken in the design procedure: adjusting the refractive index of the substrate material and optimizing the thicknesses of each film. The simulated results show that the targets are achieved using the method reported here.

  9. Ascent guidance algorithm using lidar wind measurements

    NASA Technical Reports Server (NTRS)

    Cramer, Evin J.; Bradt, Jerre E.; Hardtla, John W.

    1990-01-01

    The formulation of a general nonlinear programming guidance algorithm that incorporates wind measurements in the computation of ascent guidance steering commands is discussed. A nonlinear programming (NLP) algorithm that is designed to solve a very general problem has the potential to address the diversity demanded by future launch systems. Using B-splines for the command functional form allows the NLP algorithm to adjust the shape of the command profile to achieve optimal performance. The algorithm flexibility is demonstrated by simulation of ascent with dynamic loading constraints through a set of random wind profiles with and without wind sensing capability.

  10. Hybrid ARQ Scheme with Autonomous Retransmission for Multicasting in Wireless Sensor Networks.

    PubMed

    Jung, Young-Ho; Choi, Jihoon

    2017-02-25

    A new hybrid automatic repeat request (HARQ) scheme for multicast service in wireless sensor networks is proposed in this study. In the proposed algorithm, the HARQ operation is combined with an autonomous retransmission method that ensures a data packet is transmitted irrespective of whether or not the packet is successfully decoded at the receivers. The optimal number of autonomous retransmissions is determined to ensure maximum spectral efficiency, and a practical method that adjusts the number of autonomous retransmissions under realistic conditions is developed. Simulation results show that the proposed method achieves higher spectral efficiency than existing HARQ techniques.

  11. Applying operations research to optimize a novel population management system for cancer screening.

    PubMed

    Zai, Adrian H; Kim, Seokjin; Kamis, Arnold; Hung, Ken; Ronquillo, Jeremiah G; Chueh, Henry C; Atlas, Steven J

    2014-02-01

    To optimize a new visit-independent, population-based cancer screening system (TopCare) by using operations research techniques to simulate changes in patient outreach staffing levels (delegates, navigators), modifications to user workflow within the information technology (IT) system, and changes in cancer screening recommendations. TopCare was modeled as a multiserver, multiphase queueing system. Simulation experiments implemented the queueing network model following a next-event time-advance mechanism, in which systematic adjustments were made to staffing levels, IT workflow settings, and cancer screening frequency in order to assess their impact on overdue screenings per patient. TopCare reduced the average number of overdue screenings per patient from 1.17 at inception to 0.86 during simulation to 0.23 at steady state. Increases in the workforce improved the effectiveness of TopCare. In particular, increasing the delegate or navigator staff level by one person improved screening completion rates by 1.3% or 12.2%, respectively. In contrast, changes in the amount of time a patient entry stays on delegate and navigator lists had little impact on overdue screenings. Finally, lengthening the screening interval increased efficiency within TopCare by decreasing overdue screenings at the patient level, resulting in a smaller number of overdue patients needing delegates for screening and a higher fraction of screenings completed by delegates. Simulating the impact of changes in staffing, system parameters, and clinical inputs on the effectiveness and efficiency of care can inform the allocation of limited resources in population management.
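
    A next-event time-advance simulation of the kind described can be sketched as a small M/M/c queue, with the staffing sweep standing in for delegate/navigator levels; arrival and service rates are hypothetical.

    ```python
    import heapq, random

    def mm_c_queue(arrival_rate, service_rate, servers, horizon):
        """Next-event time-advance simulation of an M/M/c queue."""
        t, busy, waiting, served = 0.0, 0, 0, 0
        events = [(random.expovariate(arrival_rate), "arrival")]
        while events:
            t, kind = heapq.heappop(events)          # advance to the next event
            if t > horizon:
                break
            if kind == "arrival":
                heapq.heappush(events,
                               (t + random.expovariate(arrival_rate), "arrival"))
                if busy < servers:
                    busy += 1
                    heapq.heappush(events,
                                   (t + random.expovariate(service_rate), "departure"))
                else:
                    waiting += 1                     # join the overdue backlog
            else:                                    # departure
                served += 1
                if waiting > 0:
                    waiting -= 1
                    heapq.heappush(events,
                                   (t + random.expovariate(service_rate), "departure"))
                else:
                    busy -= 1
        return served, waiting

    random.seed(7)
    for staff in (2, 3, 4):        # sweep the staffing level, as in the study
        print(staff, mm_c_queue(arrival_rate=5.0, service_rate=2.0,
                                servers=staff, horizon=1000.0))
    ```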

  12. A discrete twin-boundary approach for simulating the magneto-mechanical response of Ni-Mn-Ga

    NASA Astrophysics Data System (ADS)

    Faran, Eilon; Shilo, Doron

    2016-09-01

    The design and optimization of ferromagnetic shape memory alloys (FSMA)-based devices require quantitative understanding of the dynamics of twin boundaries within these materials. Here, we present a discrete twin boundary modeling approach for simulating the behavior of an FSMA Ni-Mn-Ga crystal under combined magneto-mechanical loading conditions. The model is based on experimentally measured kinetic relations that describe the motion of individual twin boundaries over a wide range of velocities. The resulting calculations capture the dynamic response of Ni-Mn-Ga and reveal the relations between fundamental material parameters and actuation performance at different frequencies of the magnetic field. In particular, we show that at high field rates, the magnitude of the lattice barrier that resists twin boundary motion is the important property that determines the level of actuation strain, while the contribution of twinning stress property is minor. Consequently, type II twin boundaries, whose lattice barrier is smaller compared to type I, are expected to show better actuation performance at high rates, irrespective of the differences in the twinning stress property between the two boundary types. In addition, the simulation enables optimization of the actuation strain of a Ni-Mn-Ga crystal by adjusting the magnitude of the bias mechanical stress, thus providing direct guidelines for the design of actuating devices. Finally, we show that the use of a linear kinetic law for simulating the twinning-based response is inadequate and results in incorrect predictions.

  13. Optimization of 6LiF:ZnS(Ag) scintillator light yield using GEANT4

    NASA Astrophysics Data System (ADS)

    Yehuda-Zada, Y.; Pritchard, K.; Ziegler, J. B.; Cooksey, C.; Siebein, K.; Jackson, M.; Hurlbut, C.; Kadmon, Y.; Cohen, Y.; Ibberson, R. M.; Majkrzak, C. F.; Maliszewskyj, N. C.; Orion, I.; Osovizky, A.

    2018-06-01

    A new cold neutron detector has been developed at the NIST Center for Neutron Research (NCNR) for the CANDoR (Chromatic Analysis Neutron Diffractometer or Reflectometer) project. Geometric and performance constraints dictate that this detector be exceptionally thin (∼ 2 mm). For this reason, the design of the detector consists of a 6LiF:ZnS(Ag) scintillator with embedded wavelength shifting (WLS) fibers. We used the GEANT4 package to simulate neutron capture and light transport in the detector to optimize the composition and arrangement of materials to satisfy the competing requirements of high neutron capture probability and light production and transport. In the process, we have developed a method for predicting light collection and total neutron detection efficiency for different detector configurations. The simulation was performed by adjusting crucial parameters such as the scintillator stoichiometry, light yield, component grain size, WLS fiber geometry, and reflectors at the outside edges of the scintillator volume. Three different detector configurations were fabricated and their test results were correlated with the simulations. Through this correlation we found a common photon threshold for the different detector configurations, which was then used to simulate and predict the efficiencies of many other detector configurations. New detectors fabricated based on the simulation results yield the desired sensitivity of 90% for 3.27 meV (5 Å) cold neutrons. The simulation has proven to be a useful tool, dramatically reducing the development period and the required number of detector prototypes. It can be used to test new designs with different thicknesses and different target neutron energies.

  14. Simulation modeling of an automatic control system for steam pressure in the main steam collector acting on the main servomotor of a steam turbine

    NASA Astrophysics Data System (ADS)

    Andriushin, A. V.; Zverkov, V. P.; Kuzishchin, V. F.; Ryzhkov, O. S.; Sabanin, V. R.

    2017-11-01

    The results of research and tuning of the "do itself" automatic control system (ACS) for steam pressure in the main steam collector, with high-speed feedback on steam pressure in the turbine regulating stage, are presented. The ACS is tuned on a simulation model of the controlled object developed for this purpose, with load-dependent static and dynamic characteristics and a nonlinear control algorithm with pulse control of the turbine's main servomotor. A method for tuning the nonlinear ACS with a numerical algorithm for multiparametric optimization, and a procedure for separate dynamic adjustment of the control devices in a two-loop ACS, are proposed and implemented. It is shown that the nonlinear ACS tuned with the proposed method, with constant regulator parameters, ensures reliable and high-quality operation without oscillations in the transient processes over the operating range of turbine loads.

  15. A Probabilistic and Highly Efficient Topology Control Algorithm for Underwater Cooperating AUV Networks

    PubMed Central

    Li, Ning; Cürüklü, Baran; Bastos, Joaquim; Sucasas, Victor; Fernandez, Jose Antonio Sanchez; Rodriguez, Jonathan

    2017-01-01

    The aim of the Smart and Networking Underwater Robots in Cooperation Meshes (SWARMs) project is to make autonomous underwater vehicles (AUVs), remote operated vehicles (ROVs) and unmanned surface vehicles (USVs) more accessible and useful. To achieve cooperation and communication between different AUVs, these must be able to exchange messages, so an efficient and reliable communication network is necessary for SWARMs. In order to provide an efficient and reliable communication network for mission execution, one of the important and necessary issues is the topology control of the network of AUVs that are cooperating underwater. However, due to the specific properties of an underwater AUV cooperation network, such as the high mobility of AUVs, large transmission delays, low bandwidth, etc., the traditional topology control algorithms primarily designed for terrestrial wireless sensor networks cannot be used directly in the underwater environment. Moreover, these algorithms, in which the nodes adjust their transmission power once the current transmission power does not equal an optimal one, are costly in an underwater cooperating AUV network. Considering these facts, in this paper, we propose a Probabilistic Topology Control (PTC) algorithm for an underwater cooperating AUV network. In PTC, when the transmission power of an AUV is not equal to the optimal transmission power, then whether the transmission power needs to be adjusted or not will be determined based on the AUV’s parameters. Each AUV determines their own transmission power adjustment probability based on the parameter deviations. The larger the deviation, the higher the transmission power adjustment probability is, and vice versa. For evaluating the performance of PTC, we combine the PTC algorithm with the Fuzzy logic Topology Control (FTC) algorithm and compare the performance of these two algorithms. The simulation results have demonstrated that the PTC is efficient at reducing the transmission power adjustment ratio while improving the network performance. PMID:28471387

  16. A Probabilistic and Highly Efficient Topology Control Algorithm for Underwater Cooperating AUV Networks.

    PubMed

    Li, Ning; Cürüklü, Baran; Bastos, Joaquim; Sucasas, Victor; Fernandez, Jose Antonio Sanchez; Rodriguez, Jonathan

    2017-05-04

    The aim of the Smart and Networking Underwater Robots in Cooperation Meshes (SWARMs) project is to make autonomous underwater vehicles (AUVs), remote operated vehicles (ROVs) and unmanned surface vehicles (USVs) more accessible and useful. To achieve cooperation and communication between different AUVs, these must be able to exchange messages, so an efficient and reliable communication network is necessary for SWARMs. In order to provide an efficient and reliable communication network for mission execution, one of the important and necessary issues is the topology control of the network of AUVs that are cooperating underwater. However, due to the specific properties of an underwater AUV cooperation network, such as the high mobility of AUVs, large transmission delays, low bandwidth, etc., the traditional topology control algorithms primarily designed for terrestrial wireless sensor networks cannot be used directly in the underwater environment. Moreover, these algorithms, in which the nodes adjust their transmission power once the current transmission power does not equal an optimal one, are costly in an underwater cooperating AUV network. Considering these facts, in this paper, we propose a Probabilistic Topology Control (PTC) algorithm for an underwater cooperating AUV network. In PTC, when the transmission power of an AUV is not equal to the optimal transmission power, then whether the transmission power needs to be adjusted or not will be determined based on the AUV's parameters. Each AUV determines their own transmission power adjustment probability based on the parameter deviations. The larger the deviation, the higher the transmission power adjustment probability is, and vice versa. For evaluating the performance of PTC, we combine the PTC algorithm with the Fuzzy logic Topology Control (FTC) algorithm and compare the performance of these two algorithms. The simulation results have demonstrated that the PTC is efficient at reducing the transmission power adjustment ratio while improving the network performance.
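
    The deviation-to-probability mapping at the heart of PTC can be sketched simply; the linear rule and sensitivity constant below are assumptions, chosen only to show that larger deviations yield higher adjustment probabilities.

    ```python
    import random

    def adjust_probability(current_power, optimal_power, sensitivity=1.0):
        """PTC-style rule: larger deviation means higher adjustment probability."""
        deviation = abs(current_power - optimal_power) / max(optimal_power, 1e-9)
        return min(1.0, sensitivity * deviation)

    def maybe_adjust(current_power, optimal_power):
        """Adjust transmission power only with the deviation-based probability."""
        if random.random() < adjust_probability(current_power, optimal_power):
            return optimal_power          # pay the adjustment cost, jump to optimum
        return current_power              # keep current power, save energy

    random.seed(3)
    power = 10.0
    for optimal in (10.5, 12.0, 20.0):    # growing deviation from the optimum
        p = adjust_probability(power, optimal)
        print(f"optimal={optimal:5.1f}  adjust probability={p:.2f}")
        power = maybe_adjust(power, optimal)
    ```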

  17. Individual-based versus aggregate meta-analysis in multi-database studies of pregnancy outcomes: the Nordic example of selective serotonin reuptake inhibitors and venlafaxine in pregnancy.

    PubMed

    Selmer, Randi; Haglund, Bengt; Furu, Kari; Andersen, Morten; Nørgaard, Mette; Zoëga, Helga; Kieler, Helle

    2016-10-01

    Compare analyses of a pooled data set on the individual level with aggregate meta-analysis in a multi-database study. We reanalysed data on 2.3 million births in a Nordic register based cohort study. We compared estimated odds ratios (OR) for the effect of selective serotonin reuptake inhibitors (SSRI) and venlafaxine use in pregnancy on any cardiovascular birth defect and the rare outcome right ventricular outflow tract obstructions (RVOTO). Common covariates included maternal age, calendar year, birth order, maternal diabetes, and co-medication. Additional covariates were added in analyses with country-optimized adjustment. Country adjusted OR (95%CI) for any cardiovascular birth defect in the individual-based pooled analysis was 1.27 (1.17-1.39), 1.17 (1.07-1.27) adjusted for common covariates and 1.15 (1.05-1.26) adjusted for all covariates. In fixed effects meta-analyses pooled OR was 1.29 (1.19-1.41) based on crude country specific ORs, 1.19 (1.09-1.29) adjusted for common covariates, and 1.16 (1.06-1.27) for country-optimized adjustment. In a random effects model the adjusted OR was 1.07 (0.87-1.32). For RVOTO, OR was 1.48 (1.15-1.89) adjusted for all covariates in the pooled data set, and 1.53 (1.19-1.96) after country-optimized adjustment. Country-specific adjusted analyses at the substance level were not possible for RVOTO. Results of fixed effects meta-analysis and individual-based analyses of a pooled dataset were similar in this study on the association of SSRI/venlafaxine and cardiovascular birth defects. Country-optimized adjustment attenuated the estimates more than adjustment for common covariates only. When data are sparse pooled data on the individual level are needed for adjusted analyses. Copyright © 2016 John Wiley & Sons, Ltd.

  18. CAMS as a tool for human factors research in spaceflight

    NASA Astrophysics Data System (ADS)

    Sauer, Juergen

    2004-01-01

    The paper reviews a number of research studies that were carried out with a PC-based task environment called Cabin Air Management System (CAMS) simulating the operation of a spacecraft's life support system. As CAMS was a multiple task environment, it allowed the measurement of performance at different levels. Four task components of different priority were embedded in the task environment: diagnosis and repair of system faults, maintaining atmospheric parameters in a safe state, acknowledgement of system alarms (reaction time), and keeping a record of critical system resources (prospective memory). Furthermore, the task environment permitted the examination of different task management strategies and changes in crew member state (fatigue, anxiety, mental effort). A major goal of the research programme was to examine how crew members adapted to various forms of sub-optimal working conditions, such as isolation and confinement, sleep deprivation and noise. None of the studies provided evidence for decrements in primary task performance. However, the results showed a number of adaptive responses of crew members to adjust to the different sub-optimal working conditions. There was evidence for adjustments in information sampling strategies (usually reductions in sampling frequency) as a result of unfavourable working conditions. The results also showed selected decrements in secondary task performance. Prospective memory seemed to be somewhat more vulnerable to sub-optimal working conditions than performance on the reaction time task. Finally, suggestions are made for future research with the CAMS environment.

  19. Simulation technology used for risk assessment in deep exploration projects in China

    NASA Astrophysics Data System (ADS)

    jiao, J.; Huang, D.; Liu, J.

    2013-12-01

    Deep exploration has been carried out in China for five years, employing various heavy-duty instruments and equipment for gravity, magnetic, seismic, and electromagnetic prospecting, as well as an ultra-deep drilling rig for obtaining deep samples. Deep exploration is a large, complex systems engineering effort crossing multiple disciplines and requiring great investment, so advanced technical means are necessary for the verification, appraisal, and optimization of geophysical prospecting equipment under development. To reduce the risk of application and exploration, efficient management concepts and skills must be strengthened, consolidating management measures and workflows to benefit this ambitious project. Evidence, prediction, evaluation, and related decision strategies must therefore be considered together to meet practical scientific requirements, technical limits, and extendable attempts. Simulation is proposed as a tool for carrying out dynamic tests on actual or imagined systems; in practice, it is combined with the instruments and equipment to accomplish R&D tasks. In this paper, simulation is introduced into the R&D process for heavy-duty equipment and high-end engineering technology. Based on information recently provided by a drilling group, a digital model is constructed by combining geographic data, 3D visualization, database management, and virtual reality technologies. This supports an R&D strategy in which data processing, instrument application, expected results and uncertainties, and even the effects of the operational workflow and environment are simulated systematically or simultaneously, in order to obtain an optimal outcome and an equipment updating strategy. The simulation can adjust, verify, appraise, and optimize the primary plan as the real world or process changes, providing new insight into the equipment and facilitating direct perception and understanding of the installation, debugging, and experimental processes of key deep exploration equipment; in this way, project cost savings and risk reduction can reasonably be approached. Risk assessment is used to quantitatively evaluate the possible degree of impact. During the research and development stage, information from the installation, debugging, and simulated demonstration of the experimental process of the key instruments and equipment is used to evaluate the fatigue and safety of the devices. This requires fully understanding the controllable and uncontrollable risk factors in the process, and then adjusting and improving the unsafe risk factors identified in risk assessment and prediction. In combination with professional geoscience software to process and interpret the environment and obtain evaluation parameters, simulation modeling can be brought closer to the exploration target, for which more detailed evaluations are needed. Safety and risk assessment from both micro and macro perspectives can thus be achieved, reducing the risk of equipment development and avoiding unnecessary losses along the way.

  20. Inverse modeling of surface-water discharge to achieve restoration salinity performance measures in Florida Bay, Florida

    USGS Publications Warehouse

    Swain, E.D.; James, D.E.

    2008-01-01

    The use of numerical modeling to evaluate regional water-management practices involves the simulation of various alternative water-delivery scenarios, which typically are designed intuitively rather than analytically. These scenario simulations are used to analyze how specific water-management practices affect factors such as water levels, flows, and salinities. In lieu of testing a variety of scenario simulations in a trial-and-error manner, an optimization technique may be used to more precisely and directly define good water-management alternatives. A numerical model application in the coastal regions of Florida Bay and Everglades National Park (ENP), representing the surface- and ground-water hydrology for the region, is a good example of a tool used to evaluate restoration scenarios. The Southern Inland and Coastal System (SICS) model simulates this area with a two-dimensional hydrodynamic surface-water model and a three-dimensional ground-water model, linked to represent the interaction of the two systems with salinity transport. This coastal wetland environment is of great interest in restoration efforts, and the SICS model is used to analyze the effects of alternative water-management scenarios. The SICS model is run within an inverse modeling program called UCODE. In this application, UCODE adjusts the regulated inflows to ENP while SICS is run iteratively. UCODE creates parameters that define inflow within an allowable range for the SICS model based on SICS model output statistics, with the objective of matching user-defined target salinities that meet ecosystem restoration criteria. Preliminary results obtained using two different parameterization methods illustrate the ability of the model to achieve the goals of adjusting the range and reducing the variance of salinity values in the target area. The salinity variance in the primary zone of interest was reduced from an original value of 0.509 psu² to values 0.418 psu² and 0.342 psu² using different methods. Simulations with one, two, and three target areas indicate that optimization is limited near model boundaries and the target location nearest the tidal boundary may not be improved. These experiments indicate that this method can be useful for designing water-delivery schemes to achieve certain water-quality objectives. Additionally, this approach avoids much of the intuitive type of experimentation with different flow schemes that has often been used to develop restoration scenarios. © 2007 Elsevier B.V. All rights reserved.

  1. Hitting the Optimal Vaccination Percentage and the Risks of Error: Why to Miss Right.

    PubMed

    Harvey, Michael J; Prosser, Lisa A; Messonnier, Mark L; Hutton, David W

    2016-01-01

    To determine the optimal level of vaccination coverage, defined as the level that minimizes total costs, and explore how economic results change with marginal changes to this level of coverage. A susceptible-infected-recovered-vaccinated model designed to represent theoretical infectious diseases was created to simulate disease spread. Parameter inputs were defined to include ranges that could represent a variety of possible vaccine-preventable conditions. Costs included vaccine costs and disease costs. Health benefits were quantified as monetized quality-adjusted life years lost from disease. Primary outcomes were the number of infected people and the total costs of vaccination. Optimization methods were used to determine the population vaccination coverage that achieved a minimum cost given disease and vaccine characteristics. Sensitivity analyses explored the effects of changes in reproductive rates, costs and vaccine efficacies on primary outcomes. Further analysis examined the additional cost incurred if the optimal coverage levels were not achieved. Results indicate that the relationship between vaccine and disease cost is the main driver of the optimal vaccination level. Under a wide range of assumptions, vaccination beyond the optimal level is less expensive than vaccination below the optimal level. This observation did not hold when the cost of the vaccine becomes approximately equal to the cost of disease. These results suggest that vaccination below the optimal level of coverage is more costly than vaccinating beyond the optimal level. This work helps provide information for assessing the impact of changes in vaccination coverage at a societal level.
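
    A minimal sketch of the cost-minimization experiment, assuming a discrete-time SIR model with vaccination moving people directly to the removed pool; all costs and epidemiological parameters are illustrative, not the paper's.

    ```python
    import numpy as np

    def total_cost(coverage, r0=2.5, vax_cost=50.0, disease_cost=2000.0,
                   population=1_000_000, days=365):
        """Discrete-time SIR with initial vaccination: vaccine spending plus
        monetized disease burden (all parameter values are illustrative)."""
        gamma = 1.0 / 10.0                     # 10-day infectious period
        beta = r0 * gamma
        s = population * (1.0 - coverage)      # vaccinated start in the removed pool
        i = 100.0                              # initial seed infections
        infections = 0.0
        for _ in range(days):
            new_inf = beta * s * i / population
            infections += new_inf
            s -= new_inf
            i += new_inf - gamma * i
        return coverage * population * vax_cost + infections * disease_cost

    coverages = np.linspace(0.0, 1.0, 101)
    costs = [total_cost(c) for c in coverages]
    best = coverages[int(np.argmin(costs))]
    print(f"cost-minimizing coverage ≈ {best:.2f}")
    # Consistent with the abstract: costs climb gently above the optimum
    # (extra vaccine) but steeply below it (renewed outbreaks), so "miss right".
    ```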

  2. Finite Element Analysis of Walking Beam of a New Compound Adjustment Balance Pumping Unit

    NASA Astrophysics Data System (ADS)

    Wu, Jufei; Wang, Qian; Han, Yunfei

    2017-12-01

    In this paper, taking the walking beam of the new compound adjustment balance pumping unit as the research object, a three-dimensional model is established in SolidWorks and the loads and constraints are determined. ANSYS Workbench is used to analyze the tail section and the whole of the beam; the resulting stress and deformation satisfy the strength requirements. Finite element simulation and theoretical calculation of the moment about the beam's central axis are carried out, and the finite element results are compared with those of the theoretical mechanics model to verify the correctness of the theoretical calculation. The finite element analysis proves consistent with the theoretical results, and the calculated bending moment provides a theoretical reference for follow-up optimization and design research.

  3. A simulated annealing approach for redesigning a warehouse network problem

    NASA Astrophysics Data System (ADS)

    Khairuddin, Rozieana; Marlizawati Zainuddin, Zaitul; Jiun, Gan Jia

    2017-09-01

    Nowadays, many companies consider downsizing their distribution networks in ways that involve consolidation or phase-out of some of their current warehousing facilities, due to increasing competition, mounting cost pressure, and the advantages of economies of scale. Consequently, changes in the economic situation after a certain period of time require an adjustment of the network model in order to obtain the optimal cost under current economic conditions. This paper develops a mixed-integer linear programming model for a two-echelon warehouse network redesign problem with capacitated plants and uncapacitated warehouses. The main contribution of this study is the consideration of capacity constraints for existing warehouses. A simulated annealing algorithm is proposed to tackle the model. The numerical results show that the proposed model and solution method are practical.
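
    A generic simulated annealing loop for a facility open/close decision of this flavor might look as follows; the cost function and random data are stand-ins, not the paper's warehouse model.

    ```python
    import math, random

    def simulated_annealing(cost, neighbor, x0, t0=100.0, cooling=0.995, iters=5000):
        """Generic SA loop: accept worse moves with Boltzmann probability."""
        x, best, t = x0, x0, t0
        for _ in range(iters):
            y = neighbor(x)
            delta = cost(y) - cost(x)
            if delta < 0 or random.random() < math.exp(-delta / t):
                x = y
                if cost(x) < cost(best):
                    best = x
            t *= cooling                      # geometric cooling schedule
        return best

    # Toy stand-in for the warehouse redesign: choose which of 10 candidate
    # warehouses to keep open; cost = fixed costs plus a service penalty.
    random.seed(42)
    fixed = [random.uniform(10, 30) for _ in range(10)]
    penalty = lambda x: 500 / max(1, sum(x))   # fewer open sites, worse service
    cost = lambda x: sum(f for f, o in zip(fixed, x) if o) + penalty(x)

    def neighbor(x):
        y = list(x)
        i = random.randrange(len(y))
        y[i] = 1 - y[i]                       # open or close one warehouse
        return tuple(y)

    best = simulated_annealing(cost, neighbor, tuple([1] * 10))
    print(best, round(cost(best), 1))
    ```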

  4. An efficient and scalable deformable model for virtual reality-based medical applications.

    PubMed

    Choi, Kup-Sze; Sun, Hanqiu; Heng, Pheng-Ann

    2004-09-01

    Modeling of tissue deformation is of great importance to virtual reality (VR)-based medical simulations. Considerable effort has been dedicated to the development of interactively deformable virtual tissues. In this paper, an efficient and scalable deformable model is presented for virtual-reality-based medical applications. It considers deformation as a localized force transmittal process which is governed by algorithms based on breadth-first search (BFS). The computational speed is scalable to facilitate real-time interaction by adjusting the penetration depth. Simulated annealing (SA) algorithms are developed to optimize the model parameters by using the reference data generated with the linear static finite element method (FEM). The mechanical behavior and timing performance of the model have been evaluated. The model has been applied to simulate the typical behavior of living tissues and anisotropic materials. Integration with a haptic device has also been achieved on a generic personal computer (PC) platform. The proposed technique provides a feasible solution for VR-based medical simulations and has the potential for multi-user collaborative work in virtual environment.
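
    The localized force-transmittal idea can be sketched as a depth-bounded BFS over the mesh graph, with the penetration depth bounding the computation exactly as the scalability argument requires; the 1-D chain and decay factor are hypothetical.

    ```python
    from collections import deque

    def propagate_force(adjacency, contact, force, depth, decay=0.5):
        """BFS-based force transmittal: displace nodes within `depth` hops of
        the contact node, attenuating the applied force at each layer."""
        displacement = {contact: force}
        visited = {contact}
        queue = deque([(contact, 0)])
        while queue:
            node, d = queue.popleft()
            if d == depth:                    # penetration depth bounds the work
                continue
            for nbr in adjacency[node]:
                if nbr not in visited:
                    visited.add(nbr)
                    displacement[nbr] = displacement[node] * decay
                    queue.append((nbr, d + 1))
        return displacement

    # Hypothetical 1-D chain of mass points; contact at node 0.
    chain = {i: [j for j in (i - 1, i + 1) if 0 <= j < 8] for i in range(8)}
    print(propagate_force(chain, contact=0, force=1.0, depth=3))
    ```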

  5. Locally adaptive parallel temperature accelerated dynamics method

    NASA Astrophysics Data System (ADS)

    Shim, Yunsic; Amar, Jacques G.

    2010-03-01

    The recently-developed temperature-accelerated dynamics (TAD) method [M. Sørensen and A.F. Voter, J. Chem. Phys. 112, 9599 (2000)] along with the more recently developed parallel TAD (parTAD) method [Y. Shim et al, Phys. Rev. B 76, 205439 (2007)] allow one to carry out non-equilibrium simulations over extended time and length scales. The basic idea behind TAD is to speed up transitions by carrying out a high-temperature MD simulation and then use the resulting information to obtain event times at the desired low temperature. In a typical implementation, a fixed high temperature Thigh is used. However, in general one expects that for each configuration there exists an optimal value of Thigh which depends on the particular transition pathways and activation energies for that configuration. Here we present a locally adaptive high-temperature TAD method in which instead of using a fixed Thigh the high temperature is dynamically adjusted in order to maximize simulation efficiency. Preliminary results of the performance obtained from parTAD simulations of Cu/Cu(100) growth using the locally adaptive Thigh method will also be presented.

  6. Multi-time scale Climate Informed Stochastic Hybrid Simulation-Optimization Model (McISH model) for Multi-Purpose Reservoir System

    NASA Astrophysics Data System (ADS)

    Lu, M.; Lall, U.

    2013-12-01

    In order to mitigate the impacts of climate change, proactive management strategies to operate reservoirs and dams are needed. A multi-time scale climate informed stochastic model is developed to optimize the operations for a multi-purpose single reservoir by simulating decadal, interannual, seasonal and sub-seasonal variability. We apply the model to a setting motivated by the largest multi-purpose dam in N. India, the Bhakhra reservoir on the Sutlej River, a tributary of the Indus. This leads to a focus on timing and amplitude of the flows for the monsoon and snowmelt periods. The flow simulations are constrained by multiple sources of historical data and GCM future projections, which are being developed through an NSF-funded project titled 'Decadal Prediction and Stochastic Simulation of Hydroclimate Over Monsoon Asia'. The model presented is a multilevel, nonlinear programming model that aims to optimize the reservoir operating policy on a decadal horizon and the operation strategy on an updated annual basis. The model is hierarchical: two optimization models designed for different time scales are nested like matryoshka dolls. The two optimization models have similar mathematical formulations, with some modifications to meet the constraints within each time frame. The first level provides an optimization solution for policy makers to determine contracted annual releases to different uses with a prescribed reliability; the second level is a within-the-period (e.g., year) operation optimization scheme that allocates the contracted annual releases on a subperiod (e.g. monthly) basis, with additional benefit for extra release and penalty for failure. The model maximizes the net benefit of irrigation, hydropower generation and flood control in each of the periods. The model design thus facilitates the consistent application of weather and climate forecasts to improve operations of reservoir systems. The decadal flow simulations are re-initialized every year with updated climate projections to improve the reliability of the operation rules for the next year, within which the seasonal operation strategies are nested. The multi-level structure can be repeated for monthly operation with weekly subperiods to take advantage of evolving weather forecasts and seasonal climate forecasts. As a result of the hierarchical structure, updates and adjustments at sub-seasonal and even weather time scales can be achieved. Given an ensemble of these scenarios, the McISH reservoir simulation-optimization model is able to derive the desired reservoir storage levels, including minimum and maximum, as a function of calendar date, and the associated release patterns. The multi-time scale approach allows adaptive management of water supplies acknowledging the changing risks, meeting both the objectives over the decade in expected value and controlling the near term and planning period risk through probabilistic reliability constraints. For the applications presented, the target season is the monsoon season from June to September. The model also includes a monthly flood volume forecast model, based on a Copula density fit to the monthly flow and the flood volume flow. This is used to guide dynamic allocation of the flood control volume given the forecasts.

  7. Development and validation of an improved mechanical thorax for simulating cardiopulmonary resuscitation with adjustable chest stiffness and simulated blood flow.

    PubMed

    Eichhorn, Stefan; Spindler, Johannes; Polski, Marcin; Mendoza, Alejandro; Schreiber, Ulrich; Heller, Michael; Deutsch, Marcus Andre; Braun, Christian; Lange, Rüdiger; Krane, Markus

    2017-05-01

    Investigations of compressive frequency, duty cycle, or waveform during CPR are typically rooted in animal research or computer simulations. Our goal was to generate a mechanical model incorporating alternate stiffness settings and an integrated blood flow system, enabling defined, reproducible comparisons of CPR efficacy. Based on thoracic stiffness data measured in human cadavers, such a model was constructed using valve-controlled pneumatic pistons and an artificial heart. This model offers two realistic levels of chest elasticity, with a blood flow apparatus that reflects compressive depth and waveform changes. We conducted CPR at opposing levels of physiologic stiffness, using a LUCAS device, a motor-driven plunger, and a group of volunteers. In high-stiffness mode, blood flow generated by volunteers was significantly less after just 2min of CPR, whereas flow generated by LUCAS device was superior by comparison. Optimal blood flow was obtained via motor-driven plunger, with trapezoidal waveform. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  8. Design and evaluation of a Stochastic Optimal Feed-forward and Feedback Technology (SOFFT) flight control architecture

    NASA Technical Reports Server (NTRS)

    Ostroff, Aaron J.; Proffitt, Melissa S.

    1994-01-01

    This paper describes the design and evaluation of a stochastic optimal feed-forward and feedback technology (SOFFT) control architecture with emphasis on the feed-forward controller design. The SOFFT approach allows the designer to independently design the feed-forward and feedback controllers to meet separate objectives and then integrate the two controllers. The feed-forward controller has been integrated with an existing high-angle-of-attack (high-alpha) feedback controller. The feed-forward controller includes a variable command model with parameters selected to satisfy level 1 flying qualities with a high-alpha adjustment to achieve desired agility guidelines, a nonlinear interpolation approach that scales entire matrices for approximation of the plant model, and equations for calculating feed-forward gains developed for perfect plant-model tracking. The SOFFT design was applied to a nonlinear batch simulation model of an F/A-18 aircraft modified for thrust vectoring. Simulation results show that agility guidelines are met and that the SOFFT controller filters undesired pilot-induced frequencies more effectively during a tracking task than a flight controller that has the same feedback control law but does not have the SOFFT feed-forward control.

  9. An interactive tool for outdoor computer controlled cultivation of microalgae in a tubular photobioreactor system.

    PubMed

    Dormido, Raquel; Sánchez, José; Duro, Natividad; Dormido-Canto, Sebastián; Guinaldo, María; Dormido, Sebastián

    2014-03-06

    This paper describes an interactive virtual laboratory for experimenting with an outdoor tubular photobioreactor (henceforth PBR for short). This virtual laboratory makes it possible to: (a) accurately reproduce the structure of a real plant (the PBR designed and built by the Department of Chemical Engineering of the University of Almería, Spain); (b) simulate a generic tubular PBR by changing the PBR geometry; (c) simulate the effects of changing different operating parameters such as the conditions of the culture (pH, biomass concentration, dissolved O2, injected CO2, etc.); (d) simulate the PBR in its environmental context; it is possible to change the geographic location of the system or the solar irradiation profile; (e) apply different control strategies to adjust variables such as the CO2 injection, culture circulation rate or culture temperature in order to maximize biomass production; (f) simulate the harvesting. In this way, users can learn in an intuitive way how productivity is affected by any change in the design. The laboratory facilitates learning how to manipulate the essential variables for microalgae growth in order to design an optimal PBR. The simulator has been developed with Easy Java Simulations, a freeware open-source tool developed in Java, specifically designed for the creation of interactive dynamic simulations.

  10. An Interactive Tool for Outdoor Computer Controlled Cultivation of Microalgae in a Tubular Photobioreactor System

    PubMed Central

    Dormido, Raquel; Sánchez, José; Duro, Natividad; Dormido-Canto, Sebastián; Guinaldo, María; Dormido, Sebastián

    2014-01-01

    This paper describes an interactive virtual laboratory for experimenting with an outdoor tubular photobioreactor (henceforth PBR for short). This virtual laboratory makes it possible to: (a) accurately reproduce the structure of a real plant (the PBR designed and built by the Department of Chemical Engineering of the University of Almería, Spain); (b) simulate a generic tubular PBR by changing the PBR geometry; (c) simulate the effects of changing different operating parameters such as the conditions of the culture (pH, biomass concentration, dissolved O2, injected CO2, etc.); (d) simulate the PBR in its environmental context; it is possible to change the geographic location of the system or the solar irradiation profile; (e) apply different control strategies to adjust variables such as the CO2 injection, culture circulation rate or culture temperature in order to maximize biomass production; (f) simulate the harvesting. In this way, users can learn in an intuitive way how productivity is affected by any change in the design. The laboratory facilitates learning how to manipulate the essential variables for microalgae growth in order to design an optimal PBR. The simulator has been developed with Easy Java Simulations, a freeware open-source tool developed in Java, specifically designed for the creation of interactive dynamic simulations. PMID:24662450

  11. Constraining neutron guide optimizations with phase-space considerations

    NASA Astrophysics Data System (ADS)

    Bertelsen, Mads; Lefmann, Kim

    2016-09-01

    We introduce a method named the Minimalist Principle that serves to reduce the parameter space for neutron guide optimization when the required beam divergence is limited. The reduced parameter space restricts the optimization to guides with a minimal neutron intake that are still theoretically able to deliver the maximal possible performance. The geometrical constraints are derived using phase-space propagation from moderator to guide and from guide to sample, while assuming that the optimized guides will achieve perfect transport of the limited neutron intake. Guide systems optimized using these constraints are shown to provide performance close to that of guides optimized without any constraints; however, the divergence received at the sample is limited to the desired interval even when the neutron transport is not limited by the supermirrors used in the guide. As the constraints strongly limit the parameter space for the optimizer, two control parameters are introduced that can be used to adjust the selected subspace, effectively balancing between maximizing neutron transport and avoiding background from unnecessary neutrons. One parameter describes the expected focusing ability of the guide to be optimized, going from perfect focusing to no correlation between position and velocity. The second parameter controls the neutron intake into the guide, so that one can select exactly how aggressively the background should be limited. We show examples of guides optimized using these constraints, which demonstrate higher signal-to-noise ratios than conventional optimizations. Furthermore, the parameter controlling neutron intake is explored, showing that the simulated optimal neutron intake is close to the analytical prediction when the guide is assumed to be dominated by multiple scattering events.

  12. The Optimal Timing of Hepatitis C Therapy in Transplant Eligible Patients With Child B and C Cirrhosis: A Cost-Effectiveness Analysis.

    PubMed

    Tapper, Elliot B; Hughes, Michael S; Buti, Maria; Dufour, Jean-Francois; Flamm, Steve; Firdoos, Saima; Curry, Michael P; Afdhal, Nezam H

    2017-05-01

    Ledipasvir (LDV)/sofosbuvir (SOF) has demonstrated high efficacy, safety, and tolerability in hepatitis C virus (HCV)-infected patients. There are limited data, however, regarding the optimal timing of therapy in the context of possible liver transplantation (LT). We compared the cost-effectiveness of 12 weeks of HCV therapy before or after LT, or no treatment, using a decision-analytic microsimulation state-transition model for a simulated cohort of 10 000 patients with HCV genotype 1 or 4 and Child B or C cirrhosis. All model parameters regarding the efficacy of therapy, adverse events, and the effect of therapy on changes in model for end-stage liver disease (MELD) scores were derived from the SOLAR-1 and 2 trials. The simulations were repeated with 10 000 samples from the parameter distributions. The primary outcome was cost (2014 US dollars) per quality-adjusted life-year. Treatment before LT yielded more quality-adjusted life-years for less money than treatment after LT or no treatment. Treatment before LT was cost-effective in 100% of samples at a willingness-to-pay threshold of US $100 000 in the base case, and when the analysis was restricted to Child B alone, Child C alone, or MELD > 15. Treatment before transplant was not cost-effective when MELD was 6-10. In sensitivity analyses, the MELD above which treatment before transplant was cost-effective was 13, and the maximum cost of LDV/SOF therapy at which treatment before LT remains cost-effective is US $177 381. From a societal perspective, HCV therapy using LDV/SOF with ribavirin before LT is the most cost-effective strategy for patients with decompensated cirrhosis and a MELD score greater than 13.

  13. Optimal threshold estimator of a prognostic marker by maximizing a time-dependent expected utility function for a patient-centered stratified medicine.

    PubMed

    Dantan, Etienne; Foucher, Yohann; Lorent, Marine; Giral, Magali; Tessier, Philippe

    2018-06-01

    Defining thresholds of prognostic markers is essential for stratified medicine. Such thresholds are mostly estimated from purely statistical measures, regardless of patient preferences, potentially leading to unacceptable medical decisions. Quality-Adjusted Life-Years are a widely used preference-based measure of health outcomes. We develop a time-dependent Quality-Adjusted Life-Years-based expected utility function for censored data that is maximized to estimate an optimal threshold. We performed a simulation study to compare the thresholds estimated by the proposed expected utility approach and by purely statistical estimators. Two applications illustrate the usefulness of the proposed methodology, which was implemented in the R package ROCt (www.divat.fr). First, by reanalysing data from a randomized clinical trial comparing the efficacy of prednisone vs. placebo in patients with chronic liver cirrhosis, we demonstrate the utility of treating patients with a prothrombin level higher than 89%. Second, we reanalyze the data of an observational cohort of kidney transplant recipients and conclude that the Kidney Transplant Failure Score is not useful for adapting the frequency of clinical visits. Applying such a patient-centered methodology may improve the future transfer of novel prognostic scoring systems or markers into clinical practice.
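    The shape of such an estimator can be sketched numerically: for each candidate threshold, average a utility that depends on treatment status and outcome, then pick the maximizing threshold. The sketch below uses an uncensored toy cohort with invented utilities, so it omits the paper's censoring machinery.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical cohort: a prognostic marker raising the risk of an adverse event
n = 5000
marker = rng.normal(size=n)
event = rng.random(n) < 1.0 / (1.0 + np.exp(-2.0 * marker))

# Invented QALY-like utilities: treatment helps those who would have the
# event, and slightly harms those who would not (side effects)
u_treat = np.where(event, 0.8, 0.9)
u_none = np.where(event, 0.5, 1.0)

# Expected utility of the rule "treat if marker > c", over a threshold grid
thresholds = np.quantile(marker, np.linspace(0.05, 0.95, 50))
eu = [np.where(marker > c, u_treat, u_none).mean() for c in thresholds]
print(f"utility-maximizing threshold: {thresholds[int(np.argmax(eu))]:.2f}")
```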

  14. Optimal design and critical analysis of a high resolution video plenoptic demonstrator

    NASA Astrophysics Data System (ADS)

    Drazic, Valter; Sacré, Jean-Jacques; Bertrand, Jérôme; Schubert, Arno; Blondé, Etienne

    2011-03-01

    A plenoptic camera is a natural multi-view acquisition device, also capable of measuring distances by correlating a set of images acquired under different parallaxes. Its single-lens, single-sensor architecture has two downsides: limited resolution and limited depth sensitivity. As a first step, and in order to circumvent those shortcomings, we investigated how the basic design parameters of a plenoptic camera optimize both the resolution of each view and its depth-measuring capability. In a second step, we built a prototype based on a very high resolution Red One® movie camera with an external plenoptic adapter and a relay lens. The prototype delivered 5 video views of 820x410. The main limitation in our prototype is view crosstalk due to optical aberrations, which reduces the depth accuracy performance. We simulated some limiting optical aberrations and predicted their impact on the performance of the camera. In addition, we developed adjustment protocols based on a simple pattern, and analysis programs that investigate the view mapping and the amount of parallax crosstalk on the sensor on a pixel basis. The results of these developments enabled us to adjust the lenslet array with sub-micrometer precision and to mark the pixels of the sensor where the views do not register properly.

  15. Optimal design and critical analysis of a high-resolution video plenoptic demonstrator

    NASA Astrophysics Data System (ADS)

    Drazic, Valter; Sacré, Jean-Jacques; Schubert, Arno; Bertrand, Jérôme; Blondé, Etienne

    2012-01-01

    A plenoptic camera is a natural multiview acquisition device also capable of measuring distances by correlating a set of images acquired under different parallaxes. Its single lens and single sensor architecture have two downsides: limited resolution and limited depth sensitivity. As a first step and in order to circumvent those shortcomings, we investigated how the basic design parameters of a plenoptic camera optimize both the resolution of each view and its depth-measuring capability. In a second step, we built a prototype based on a very high resolution Red One® movie camera with an external plenoptic adapter and a relay lens. The prototype delivered five video views of 820 × 410. The main limitation in our prototype is view crosstalk due to optical aberrations that reduce the depth accuracy performance. We simulated some limiting optical aberrations and predicted their impact on the performance of the camera. In addition, we developed adjustment protocols based on a simple pattern and analysis programs that investigated the view mapping and amount of parallax crosstalk on the sensor on a pixel basis. The results of these developments enabled us to adjust the lenslet array with a submicrometer precision and to mark the pixels of the sensor where the views do not register properly.

  16. Wavefront Control Toolbox for James Webb Space Telescope Testbed

    NASA Technical Reports Server (NTRS)

    Shiri, Ron; Aronstein, David L.; Smith, Jeffery Scott; Dean, Bruce H.; Sabatke, Erin

    2007-01-01

    We have developed a Matlab toolbox for wavefront control of optical systems. We have applied this toolbox to the optical models of the James Webb Space Telescope (JWST) in general and to the JWST Testbed Telescope (TBT) in particular, implementing both unconstrained and constrained wavefront optimization to correct for possible misalignments present on the segmented primary mirror or the monolithic secondary mirror. The optical models are implemented in the Zemax optical design program, and information is exchanged between Matlab and Zemax via the Dynamic Data Exchange (DDE) interface. The model configuration is managed using the XML protocol. The optimization algorithm uses influence functions for each adjustable degree of freedom of the optical model. Iterative and non-iterative algorithms have been developed that converge to a local minimum of the root-mean-square (rms) wavefront error using a singular value decomposition of the control matrix of influence functions. The toolkit is highly modular and allows the user to choose control strategies for the degrees of freedom to be adjusted on a given iteration and the wavefront convergence criterion. As the influence functions are nonlinear over the control parameter space, the toolkit also allows for trade-offs between the frequency of updating the local influence functions and execution speed. The functionality of the toolbox and the validity of the underlying algorithms have been verified through extensive simulations.
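    The SVD-based correction step can be sketched compactly: given a control matrix of influence functions (the wavefront response per unit motion of each degree of freedom), a truncated pseudo-inverse yields the commands that minimize the rms wavefront error. The sizes, values, and truncation threshold below are illustrative assumptions, not the toolbox's actual interfaces.

```python
import numpy as np

rng = np.random.default_rng(0)
rms = lambda v: np.sqrt(np.mean(v**2))

n_pix, n_dof = 500, 20
A = rng.normal(size=(n_pix, n_dof))     # influence functions (wavefront per unit move)
# Wavefront error produced by actual misalignments, plus sensor noise
w = A @ rng.normal(size=n_dof) + 0.1 * rng.normal(size=n_pix)

# Truncated-SVD pseudo-inverse: discard small singular values, which
# correspond to poorly sensed, noise-amplifying control modes
U, s, Vt = np.linalg.svd(A, full_matrices=False)
s_inv = np.where(s > 1e-3 * s[0], 1.0 / s, 0.0)
commands = -(Vt.T * s_inv) @ (U.T @ w)  # least-squares correction

residual = w + A @ commands             # wavefront after applying the commands
print(f"rms before: {rms(w):.3f}, after: {rms(residual):.3f}")
```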

  17. Design of advanced ultrasonic transducers for welding devices.

    PubMed

    Parrini, L

    2001-11-01

    A new high frequency ultrasonic transducer has been conceived, designed, prototyped, and tested. In the design phase, an advanced approach was used and established. The method is based on an initial design estimate obtained with finite element method (FEM) simulations. The simulated ultrasonic transducers and resonators are then built and characterized experimentally through laser interferometry and electrical resonance spectra. The comparison of simulation results with experimental data allows the parameters of the FEM models to be adjusted and optimized. The resulting FEM simulations exhibit a remarkably high predictive potential and allow full control of the vibration behavior of the transducer. The new transducer is mounted on a wire bonder with a flange whose special geometry was calculated by means of FEM simulations. This flange allows the transducer to be attached to the wire bonder not only at longitudinal nodes, but also at radial nodes of the ultrasonic field excited in the horn. This leads to a total decoupling of the transducer from the wire bonder, which had not been achieved before. The new approach to mounting ultrasonic transducers on a welding device is of major importance, not only for wire bonding but for all high power ultrasound applications, and has been patented.

  18. SGC Tests for Influence of Material Composition on Compaction Characteristic of Asphalt Mixtures

    PubMed Central

    Chen, Qun

    2013-01-01

    The compaction characteristic of the surface layer asphalt mixture (13-type gradation mixture) was studied using Superpave gyratory compactor (SGC) simulative compaction tests. Based on analysis of the densification curve of gyratory compaction, the influence of the contents of mineral aggregates of all sizes and of asphalt on the compaction characteristic of asphalt mixtures was obtained. The SGC tests show that the density of a mixture with a higher asphalt content increases faster, that there is an optimal amount of fine aggregates for compaction, and that an appropriate amount of mineral powder improves the workability of mixtures, whereas too much mineral powder makes them dry and hard. These conclusions provide a basis for adjusting material composition to improve the compaction performance of asphalt mixtures; they also allow the compaction performance of a designed mixture to be predicted, which aids the choice of compaction schemes. PMID:23818830

  19. SGC tests for influence of material composition on compaction characteristic of asphalt mixtures.

    PubMed

    Chen, Qun; Li, Yuzhi

    2013-01-01

    The compaction characteristic of the surface layer asphalt mixture (13-type gradation mixture) was studied using Superpave gyratory compactor (SGC) simulative compaction tests. Based on analysis of the densification curve of gyratory compaction, the influence of the contents of mineral aggregates of all sizes and of asphalt on the compaction characteristic of asphalt mixtures was obtained. The SGC tests show that the density of a mixture with a higher asphalt content increases faster, that there is an optimal amount of fine aggregates for compaction, and that an appropriate amount of mineral powder improves the workability of mixtures, whereas too much mineral powder makes them dry and hard. These conclusions provide a basis for adjusting material composition to improve the compaction performance of asphalt mixtures; they also allow the compaction performance of a designed mixture to be predicted, which aids the choice of compaction schemes.

  20. Optimization of conditions for thermal smoothing GaAs surfaces

    NASA Astrophysics Data System (ADS)

    Akhundov, I. O.; Kazantsev, D. M.; Kozhuhov, A. S.; Alperovich, V. L.

    2018-03-01

    GaAs thermal smoothing by annealing under conditions close to equilibrium between the surface and the As and Ga vapors was earlier shown to be effective for forming step-terraced surfaces on epi-ready substrates with a small root-mean-square roughness (Rq ≤ 0.15 nm). In the present study, this technique is developed further in order to reduce the annealing duration and to smooth GaAs samples with a larger initial roughness. To this end, we propose a two-stage anneal, with a first high-temperature stage aimed at smoothing "coarse" relief features and a second stage focused on "fine" smoothing at a lower temperature. The optimal temperatures and durations of the two-stage annealing are found by Monte Carlo simulations and refined experimentally. It is shown that the temperature and duration of the first high-temperature stage are restricted by surface roughening, which occurs due to deviations from equilibrium conditions.

  1. Optimal trajectories for the aeroassisted flight experiment. Part 3: Formulation, results, and analysis

    NASA Technical Reports Server (NTRS)

    Miele, A.; Wang, T.; Lee, W. Y.; Zhao, Z. G.

    1989-01-01

    The determination of optimal trajectories for the aeroassisted flight experiment (AFE) is investigated. The intent of this experiment is to simulate a GEO-to-LEO transfer, where GEO denotes a geosynchronous Earth orbit and LEO denotes a low Earth orbit. The trajectories of an AFE spacecraft are analyzed in 3D space, employing the full system of six ODEs describing the atmospheric pass. The atmospheric entry conditions are given, and the atmospheric exit conditions are adjusted in such a way that the following conditions are satisfied: (1) the atmospheric velocity depletion is such that, after exiting, the AFE spacecraft first ascends to a specified apogee and then descends to a specified perigee; and (2) the exit orbital plane is identical to the entry orbital plane. The final maneuver, not analyzed here, includes the rendezvous with and capture by the Space Shuttle.

  2. Study of the properties of new SPM detectors

    NASA Astrophysics Data System (ADS)

    Stewart, A. G.; Greene-O'Sullivan, E.; Herbert, D. J.; Saveliev, V.; Quinlan, F.; Wall, L.; Hughes, P. J.; Mathewson, A.; Jackson, J. C.

    2006-02-01

    The operation and performance of multi-pixel, Geiger-mode APD structures referred to as Silicon Photomultipliers (SPM) are reported. The SPM is a solid-state device that has emerged over the last decade as a promising alternative to vacuum PMTs, owing to its comparable performance in addition to its lower bias operation and power consumption, insensitivity to magnetic fields and ambient light, smaller size, and ruggedness. Applications for these detectors are numerous and include the life sciences, nuclear medicine, particle physics, microscopy, and general instrumentation. With SPM devices, many geometrical and device parameters can be adjusted to optimize performance for a particular application. In this paper, Monte Carlo simulations and experimental results for 1 mm² SPM structures are reported. In addition, the trade-offs involved in optimizing the SPM in terms of the number and size of pixels for a given light intensity, and their effect on the dynamic range, are discussed.

  3. Development of a prosthesis shoulder mechanism for upper limb amputees: application of an original design methodology to optimize functionality and wearability.

    PubMed

    Troncossi, Marco; Borghi, Corrado; Chiossi, Marco; Davalli, Angelo; Parenti-Castelli, Vincenzo

    2009-05-01

    The application of a design methodology for determining the optimal prosthesis architecture for a given upper limb amputee is presented in this paper, along with a discussion of its results. In particular, a novel procedure was used to provide the main guidelines for the design of an actuated shoulder articulation for externally powered prostheses. The topology and the geometry of the new articulation were determined as the optimal compromise between wearability (for ease of use and the patient's comfort) and functionality of the device (in terms of mobility, velocity, payload, etc.). This choice was based on kinematic and kinetostatic analyses of different upper limb prosthesis models and on purpose-built indices that were set up to evaluate the models from different viewpoints. Only 12 of the 31 simulated prostheses achieved a sufficient level of functionality: among these, the optimal solution was an articulation having two actuated revolute joints with orthogonal axes for the elevation of the upper arm in any vertical plane and a frictional joint for the passive adjustment of the humeral intra-extra rotation. A prototype of the mechanism is at the clinical test stage.

  4. Optimal management of a stochastically varying population when policy adjustment is costly.

    PubMed

    Boettiger, Carl; Bode, Michael; Sanchirico, James N; Lariviere, Jacob; Hastings, Alan; Armsworth, Paul R

    2016-04-01

    Ecological systems are dynamic and policies to manage them need to respond to that variation. However, policy adjustments will sometimes be costly, which means that fine-tuning a policy to track variability in the environment very tightly will only sometimes be worthwhile. We use a classic fisheries management problem, how to manage a stochastically varying population using annually varying quotas in order to maximize profit, to examine how costs of policy adjustment change optimal management recommendations. Costs of policy adjustment (changes in fishing quotas through time) could take different forms. For example, these costs may respond to the size of the change being implemented, or there could be a fixed cost any time a quota change is made. We show how different forms of policy costs have contrasting implications for optimal policies. Though it is frequently assumed that costs to adjusting policies will dampen variation in the policy, we show that certain cost structures can actually increase variation through time. We further show that failing to account for adjustment costs has a consistently worse economic impact than would assuming these costs are present when they are not.
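    The core of such an analysis can be sketched as a stochastic dynamic program whose state carries the previous quota, so that adjustment costs enter the Bellman recursion directly. The growth, price, and cost parameters below are illustrative stand-ins, not the paper's calibration.

```python
import numpy as np

# Logistic stock growth with multiplicative shocks (hypothetical parameters)
r, K, price, beta = 0.8, 100.0, 1.0, 0.95
shocks, probs = np.array([0.9, 1.0, 1.1]), np.array([0.25, 0.5, 0.25])
stocks = np.linspace(1.0, 150.0, 40)
quotas = np.linspace(0.0, 30.0, 16)

def adj_cost(q, q_prev):
    return 0.5 * abs(q - q_prev)          # cost proportional to the policy change

V = np.zeros((stocks.size, quotas.size))  # value over (stock, previous quota)
for _ in range(200):                      # value iteration
    V_new = np.empty_like(V)
    for i, s in enumerate(stocks):
        for j, q_prev in enumerate(quotas):
            best = -np.inf
            for k, q in enumerate(quotas):
                h = min(q, s)             # cannot harvest more than the stock
                esc = s - h
                s_next = (esc + r * esc * (1.0 - esc / K)) * shocks
                idx = np.clip(np.searchsorted(stocks, s_next), 0, stocks.size - 1)
                value = price * h - adj_cost(q, q_prev) + beta * probs @ V[idx, k]
                best = max(best, value)
            V_new[i, j] = best
    if np.max(np.abs(V_new - V)) < 1e-4:
        break
    V = V_new
```

Swapping adj_cost for a fixed charge whenever q differs from q_prev illustrates the kind of contrast the paper describes: proportional costs tend to smooth the quota path, while fixed costs favor infrequent, larger policy changes.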

  5. Parent driver characteristics associated with sub-optimal restraint of child passengers.

    PubMed

    Winston, Flaura K; Chen, Irene G; Smith, Rebecca; Elliott, Michael R

    2006-12-01

    To identify parent driver demographic and socioeconomic characteristics associated with the use of sub-optimal restraints for child passengers under nine years. Cross-sectional study using in-depth, validated telephone interviews with parent drivers in a probability sample of 3,818 vehicle crashes involving 5,146 children. Sub-optimal restraint was defined as use of forward-facing child safety seats for infants under one or weighing under 20 lbs, and any seat-belt use for children under 9. Sub-optimal restraint was more common among children under one and between four and eight years than among children aged one to three years (18%, 65%, and 5%, respectively). For children under nine, independent risk factors for sub-optimal restraint were: non-Hispanic black parent drivers (with non-Hispanic white parents as reference, adjusted relative risk, adjusted RR = 1.24, 95% CI: 1.09-1.41); less educated parents (with college graduate or above as reference: high school, adjusted RR = 1.27, 95% CI: 1.12-1.44; less than high school graduate, adjusted RR = 1.36, 95% CI: 1.13-1.63); and lower family income (with $50,000 or more as reference: <$20,000, adjusted RR = 1.23, 95% CI: 1.07-1.40). Multivariate analysis revealed the following independent risk factors for sub-optimal restraint among four-to-eight-year-olds: older parent age, limited education, black race, and income below $20,000. Parents with low educational levels or of non-Hispanic black background may require additional anticipatory guidance regarding child passenger safety. The importance of poverty in predicting sub-optimal restraint underscores the importance of child restraint and booster seat disbursement and education programs, potentially through Medicaid.

  6. Pumping strategies for management of a shallow water table: The value of the simulation-optimization approach

    USGS Publications Warehouse

    Barlow, P.M.; Wagner, B.J.; Belitz, K.

    1996-01-01

    The simulation-optimization approach is used to identify ground-water pumping strategies for control of the shallow water table in the western San Joaquin Valley, California, where shallow ground water threatens continued agricultural productivity. The approach combines the use of ground-water flow simulation with optimization techniques to build on and refine pumping strategies identified in previous research that used flow simulation alone. Use of the combined simulation-optimization model resulted in a 20 percent reduction in the area subject to a shallow water table over that identified by use of the simulation model alone. The simulation-optimization model identifies increasingly more effective pumping strategies for control of the water table as the complexity of the problem increases; that is, as the number of subareas in which pumping is to be managed increases, the simulation-optimization model is better able to discriminate areally among subareas to determine optimal pumping locations. The simulation-optimization approach provides an improved understanding of controls on the ground-water flow system and management alternatives that can be implemented in the valley. In particular, results of the simulation-optimization model indicate that optimal pumping strategies are constrained by the existing distribution of wells between the semiconfined and confined zones of the aquifer, by the distribution of sediment types (and associated hydraulic conductivities) in the western valley, and by the historical distribution of pumping throughout the western valley.

  7. Using optimal interpolation to assimilate surface measurements and satellite AOD for ozone and PM2.5: A case study for July 2011.

    PubMed

    Tang, Youhua; Chai, Tianfeng; Pan, Li; Lee, Pius; Tong, Daniel; Kim, Hyun-Cheol; Chen, Weiwei

    2015-10-01

    We employed an optimal interpolation (OI) method to assimilate AIRNow ozone/PM2.5 and MODIS (Moderate Resolution Imaging Spectroradiometer) aerosol optical depth (AOD) data into the Community Multi-scale Air Quality (CMAQ) model to improve the ozone and total aerosol concentrations in the CMAQ simulation over the contiguous United States (CONUS). AIRNow data assimilation was applied to the boundary layer, and MODIS AOD data were used to adjust the total column aerosol. Four OI cases were designed to examine the effects of the uncertainty settings and the assimilation time; two of these cases used uncertainties that varied in time and location, or "dynamic uncertainties." More frequent assimilation and higher model uncertainties pushed the modeled results closer to the observations. Our comparison over a 24-hr period showed that ozone and PM2.5 mean biases could be reduced from 2.54 ppbV to 1.06 ppbV and from -7.14 µg/m³ to -0.11 µg/m³, respectively, over CONUS, while their correlations were also improved. Comparison to DISCOVER-AQ 2011 aircraft measurements showed that surface ozone assimilation applied to the CMAQ simulation improves regional low-altitude (below 2 km) ozone simulation. This paper describes an application of the optimal interpolation method to improve the model's ozone and PM2.5 estimates using surface measurements and satellite AOD. It highlights the use of the operational AIRNow data set, which is available in near real time, and of MODIS AOD. With a similar method, other satellite products, such as the latest VIIRS products, can also be used to improve PM2.5 prediction.
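    The OI analysis step itself is compact: the background state is nudged toward the observations with a gain built from assumed background and observation error covariances. Below is a minimal sketch with invented covariances and a single monitor; it is not the CMAQ/AIRNow configuration, just the update equation.

```python
import numpy as np

def oi_update(xb, B, y, H, R):
    """Optimal interpolation: xa = xb + K (y - H xb), K = B H^T (H B H^T + R)^-1."""
    S = H @ B @ H.T + R                 # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)      # gain weighing background vs. observations
    return xb + K @ (y - H @ xb)

# Three model grid cells; one surface monitor observes the middle cell
xb = np.array([40.0, 55.0, 48.0])       # background ozone (ppbV)
dist = np.abs(np.subtract.outer(np.arange(3), np.arange(3)))
B = 25.0 * np.exp(-dist)                # spatially correlated background errors
H = np.array([[0.0, 1.0, 0.0]])         # observation operator
y = np.array([48.0])                    # AIRNow-style measurement
R = np.array([[4.0]])                   # observation error variance

print(oi_update(xb, B, y, H, R))        # neighboring cells adjust too, via B
```

Because B carries spatial correlation, the single observation corrects not only the observed cell but also its neighbors, which is how sparse surface monitors can adjust a whole model field.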

  8. Subjective Invulnerability, Optimism Bias and Adjustment in Emerging Adulthood

    ERIC Educational Resources Information Center

    Lapsley, Daniel K.; Hill, Patrick L.

    2010-01-01

    The relationship between subjective invulnerability and optimism bias in risk appraisal, and their comparative association with indices of risk activity, substance use and college adjustment problems was assessed in a sample of 350 (M [subscript age] = 20.17; 73% female; 93% White/European American) emerging adults. Subjective invulnerability was…

  9. The combination of simulation and response methodology and its application in an aggregate production plan

    NASA Astrophysics Data System (ADS)

    Chen, Zhiming; Feng, Yuncheng

    1988-08-01

    This paper describes an algorithmic structure for combining simulation and optimization techniques in both theory and practice. Response surface methodology is used to optimize the decision variables in the simulation environment. Simulation-optimization software has been developed and successfully implemented, and its application to an aggregate production planning simulation-optimization model is reported. The model's objective is to minimize the production cost and to generate an optimal production plan and inventory control strategy for an aircraft factory.
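    The essence of the combination is to fit a low-order response surface to noisy simulation output and then optimize the fitted surface instead of the simulation itself. A minimal sketch with a stand-in stochastic "simulation" follows; the cost function and design grid are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_cost(x):
    """Stand-in for a stochastic production-plan simulation (hypothetical)."""
    return (x[0] - 3.0) ** 2 + 2.0 * (x[1] - 1.0) ** 2 + rng.normal(scale=0.5)

# Evaluate the simulation over a small factorial design
grid = np.array([(a, b) for a in np.linspace(0, 6, 5) for b in np.linspace(-2, 4, 5)])
y = np.array([simulate_cost(p) for p in grid])

# Fit a second-order response surface by least squares:
# y ~ b0 + b1 x1 + b2 x2 + b3 x1^2 + b4 x2^2 + b5 x1 x2
X = np.column_stack([np.ones(len(grid)), grid, grid**2, grid[:, 0] * grid[:, 1]])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# Stationary point of the fitted quadratic (set its gradient to zero)
Q = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
x_star = np.linalg.solve(Q, -b[1:3])
print("estimated optimal decision variables:", x_star.round(2))  # near (3, 1)
```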

  10. Assessing risk-adjustment approaches under non-random selection.

    PubMed

    Luft, Harold S; Dudley, R Adams

    2004-01-01

    Various approaches have been proposed to adjust for differences in enrollee risk in health plans. Because risk-selection strategies may have different effects on enrollment, we simulated three types of selection--dumping, skimming, and stinting. Concurrent diagnosis-based risk adjustment, and a hybrid using concurrent adjustment for about 8% of the cases and prospective adjustment for the rest, perform markedly better than prospective or demographic adjustments, both in terms of R2 and the extent to which plans experience unwarranted gains or losses. The simulation approach offers a valuable tool for analysts in assessing various risk-adjustment strategies under different selection situations.

  11. A Framework for the Optimization of Discrete-Event Simulation Models

    NASA Technical Reports Server (NTRS)

    Joshi, B. D.; Unal, R.; White, N. H.; Morris, W. D.

    1996-01-01

    With the growing use of computer modeling and simulation in all aspects of engineering, the scope of traditional optimization has to be extended to include simulation models. Some unique aspects have to be addressed when optimizing via stochastic simulation models: the optimization procedure has to explicitly account for the randomness inherent in the stochastic measures predicted by the model. This paper outlines a general-purpose framework for the optimization of terminating discrete-event simulation models. The methodology combines a chance-constraint approach for problem formulation with standard statistical estimation and analysis techniques. The applicability of the optimization framework is illustrated by minimizing the operation and support resources of a launch vehicle through a simulation model.

  12. Curvature sensor for ocular wavefront measurement.

    PubMed

    Díaz-Doutón, Fernando; Pujol, Jaume; Arjona, Montserrat; Luque, Sergio O

    2006-08-01

    We describe a new wavefront sensor for ocular aberration determination, based on the curvature sensing principle, which adapts the classical system used in astronomy to measurements of the living eye. The experimental setup is presented; its design followed a process guided by computer simulations to adjust the design parameters for optimal performance. We present results for artificial and real young eyes, compared with Hartmann-Shack estimations. Both methods show a similar performance for these cases. This system will allow the measurement of higher order aberrations than the currently used wavefront sensors in situations in which they are expected to be significant, such as postsurgery eyes.

  13. Response analysis of holography-based modal wavefront sensor.

    PubMed

    Dong, Shihao; Haist, Tobias; Osten, Wolfgang; Ruppel, Thomas; Sawodny, Oliver

    2012-03-20

    The crosstalk problem of holography-based modal wavefront sensing (HMWS) becomes more severe with increasing aberration. In this paper, crosstalk effects on the sensor response are analyzed statistically for typical aberrations due to atmospheric turbulence. For a specific turbulence strength, we optimized the sensor by adjusting the detector radius and the encoded phase bias for each Zernike mode. Calibrated response curves of the low-order Zernike modes were further utilized to improve the sensor accuracy. The simulation results validated our strategy: the number of iterations required to reach a residual RMS wavefront error of 0.1λ is reduced from 18 to 3. © 2012 Optical Society of America

  14. The power grid AGC frequency bias coefficient online identification method based on wide area information

    NASA Astrophysics Data System (ADS)

    Wang, Zian; Li, Shiguang; Yu, Ting

    2015-12-01

    This paper proposes an online identification method for the regional frequency deviation coefficient, based on an analysis of the AGC adjustment response mechanism of the interconnected grid and on the real-time operating state of generators obtained from PMU measurements. The optimization of the regional frequency deviation coefficient is analyzed for the actual operating state of the power system, enabling more accurate and efficient automatic generation control. The validity of the online identification method is verified by establishing a long-term frequency control simulation model of a two-region interconnected power system.

  15. On approaches to analyze the sensitivity of simulated hydrologic fluxes to model parameters in the community land model

    DOE PAGES

    Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; ...

    2015-12-04

    Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.

  16. Applying operations research to optimize a novel population management system for cancer screening

    PubMed Central

    Zai, Adrian H; Kim, Seokjin; Kamis, Arnold; Hung, Ken; Ronquillo, Jeremiah G; Chueh, Henry C; Atlas, Steven J

    2014-01-01

    Objective To optimize a new visit-independent, population-based cancer screening system (TopCare) by using operations research techniques to simulate changes in patient outreach staffing levels (delegates, navigators), modifications to user workflow within the information technology (IT) system, and changes in cancer screening recommendations. Materials and methods TopCare was modeled as a multiserver, multiphase queueing system. Simulation experiments implemented the queueing network model following a next-event time-advance mechanism, in which systematic adjustments were made to staffing levels, IT workflow settings, and cancer screening frequency in order to assess their impact on overdue screenings per patient. Results TopCare reduced the average number of overdue screenings per patient from 1.17 at inception to 0.86 during simulation to 0.23 at steady state. Increases in the workforce improved the effectiveness of TopCare. In particular, increasing the delegate or navigator staff level by one person improved screening completion rates by 1.3% or 12.2%, respectively. In contrast, changes in the amount of time a patient entry stays on delegate and navigator lists had little impact on overdue screenings. Finally, lengthening the screening interval increased efficiency within TopCare by decreasing overdue screenings at the patient level, resulting in a smaller number of overdue patients needing delegates for screening and a higher fraction of screenings completed by delegates. Conclusions Simulating the impact of changes in staffing, system parameters, and clinical inputs on the effectiveness and efficiency of care can inform the allocation of limited resources in population management. PMID:24043318
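    The next-event time-advance mechanism at the heart of such a model can be sketched with a single-stage M/M/c queue, a drastically simplified stand-in for TopCare's multiserver, multiphase network (think of c navigators working through arriving overdue screenings; all rates are invented):

```python
import heapq, random

random.seed(0)

def mmc_next_event(lam, mu, c, horizon):
    """Next-event time-advance simulation of an M/M/c queue."""
    busy, queue, served = 0, 0, 0
    events = [(random.expovariate(lam), "arrival")]   # (time, kind) heap
    while events:
        t, kind = heapq.heappop(events)               # advance to the next event
        if t > horizon:
            break
        if kind == "arrival":
            heapq.heappush(events, (t + random.expovariate(lam), "arrival"))
            if busy < c:                              # a server is free
                busy += 1
                heapq.heappush(events, (t + random.expovariate(mu), "departure"))
            else:
                queue += 1                            # joins the overdue backlog
        else:                                         # departure
            served += 1
            if queue > 0:
                queue -= 1                            # next waiting item starts
                heapq.heappush(events, (t + random.expovariate(mu), "departure"))
            else:
                busy -= 1
    return served, queue

print(mmc_next_event(lam=5.0, mu=1.0, c=6, horizon=1000.0))
```

Rerunning with c=7 instead of c=6 is the same kind of experiment as the paper's staffing-level changes: one added server shifts both throughput and the remaining backlog.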

  17. Optimal Predictive Control for Path Following of a Full Drive-by-Wire Vehicle at Varying Speeds

    NASA Astrophysics Data System (ADS)

    SONG, Pan; GAO, Bolin; XIE, Shugang; FANG, Rui

    2017-05-01

    Current research on the global chassis control problem for the full drive-by-wire vehicle focuses on the control allocation (CA) of the four-wheel-distributed traction/braking/steering systems. However, the path following performance and the handling stability of the vehicle can be further enhanced by automatically adjusting the vehicle speed to its optimal value. The optimal solution for the combined longitudinal and lateral motion control (MC) problem is given. First, a new variable step-size spatial transformation method is proposed and utilized in the prediction model to derive the dynamics of the vehicle with respect to the road, such that the tracking errors can be explicitly obtained over the prediction horizon at varying speeds. Second, a nonlinear model predictive control (NMPC) algorithm is introduced to handle the nonlinear coupling between any two directions of the vehicular planar motion and to compute the sequence of optimal motion states for following the desired path. Third, a hierarchical control structure is proposed to separate the motion controller into an NMPC-based path planner and a terminal sliding mode control (TSMC)-based path follower. As revealed through off-line simulations, the hierarchical methodology brings a nearly 1700% improvement in computational efficiency without loss of control performance. Finally, the control algorithm is verified through a hardware-in-the-loop simulation system. Double-lane-change (DLC) test results show that, by using the optimal predictive controller, the root-mean-square (RMS) values of the lateral deviations and the orientation errors can be reduced by 41% and 30%, respectively, compared with those of the optimal preview acceleration (OPA) driver model with the non-preview speed-tracking method. Additionally, the average vehicle speed is increased by 0.26 km/h with the peak sideslip angle suppressed to 1.9°. This research proposes a novel motion controller, which provides the full drive-by-wire vehicle with better lane-keeping and collision-avoidance capabilities during autonomous driving.

  18. Optimal Congestion Management in Electricity Market Using Particle Swarm Optimization with Time Varying Acceleration Coefficients

    NASA Astrophysics Data System (ADS)

    Boonyaritdachochai, Panida; Boonchuay, Chanwit; Ongsakul, Weerakorn

    2010-06-01

    This paper proposes an optimal power redispatching approach for congestion management in a deregulated electricity market. Generator sensitivities are used to select the generators to redispatch, which reduces the number of participating generators. The power adjustment cost and the total redispatched power are minimized by particle swarm optimization with time-varying acceleration coefficients (PSO-TVAC). The IEEE 30-bus and IEEE 118-bus systems are used to illustrate the proposed approach. Test results show that the proposed optimization scheme provides the lowest adjustment cost and redispatched power compared to the other schemes. The proposed approach is useful for the system operator in managing transmission congestion.
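    The TVAC idea is that the cognitive coefficient decays while the social coefficient grows over the run, moving the swarm from exploration toward exploitation. The sketch below applies it to a toy redispatch problem (quadratic adjustment costs with a power-balance penalty); the 2.5 → 0.5 and 0.5 → 2.5 schedules are common TVAC choices, and the cost data are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_tvac(f, lb, ub, n_particles=30, n_iter=200):
    dim = lb.size
    x = rng.uniform(lb, ub, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pbest_f.argmin()].copy()
    for t in range(n_iter):
        frac = t / n_iter
        w = 0.9 - 0.5 * frac             # inertia weight, 0.9 -> 0.4
        c1 = 2.5 - 2.0 * frac            # cognitive coefficient, 2.5 -> 0.5
        c2 = 0.5 + 2.0 * frac            # social coefficient, 0.5 -> 2.5
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

def cost(dp):                            # dp: power adjustments of 3 generators
    quad = np.array([2.0, 1.5, 3.0]) @ dp**2          # adjustment cost
    return quad + 1e3 * abs(dp.sum() - 10.0)          # must redispatch 10 MW total

best, best_f = pso_tvac(cost, lb=np.full(3, -5.0), ub=np.full(3, 5.0))
print(best.round(2), round(best_f, 2))   # cheaper generators take larger shares
```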

  19. A dynamic feedforward neural network based on gaussian particle swarm optimization and its application for predictive control.

    PubMed

    Han, Min; Fan, Jianchao; Wang, Jun

    2011-09-01

    A dynamic feedforward neural network (DFNN) is proposed for predictive control, whose adaptive parameters are adjusted by using Gaussian particle swarm optimization (GPSO) in the training process. Adaptive time-delay operators are added in the DFNN to improve its generalization for poorly known nonlinear dynamic systems with long time delays. Furthermore, GPSO adopts a chaotic map with Gaussian function to balance the exploration and exploitation capabilities of particles, which improves the computational efficiency without compromising the performance of the DFNN. The stability of the particle dynamics is analyzed, based on the robust stability theory, without any restrictive assumption. A stability condition for the GPSO+DFNN model is derived, which ensures a satisfactory global search and quick convergence, without the need for gradients. The particle velocity ranges could change adaptively during the optimization process. The results of a comparative study show that the performance of the proposed algorithm can compete with selected algorithms on benchmark problems. Additional simulation results demonstrate the effectiveness and accuracy of the proposed combination algorithm in identifying and controlling nonlinear systems with long time delays.

  20. Huygens probe entry, descent, and landing trajectory reconstruction using the Program to Optimize Simulated Trajectories II

    NASA Astrophysics Data System (ADS)

    Striepe, Scott Allen

    The objectives of this research were to develop a reconstruction capability using the Program to Optimize Simulated Trajectories II (POST2), apply this capability to reconstruct the Huygens Titan probe entry, descent, and landing (EDL) trajectory, evaluate the newly developed POST2 reconstruction module, analyze the reconstructed trajectory, and assess the pre-flight simulation models used for Huygens EDL simulation. An extended Kalman filter (EKF) module was developed and integrated into POST2 to enable trajectory reconstruction (especially when using POST2-based mission-specific simulations). Several validation cases, ranging from a single, constant-parameter estimate to multivariable estimation cases similar to an actual mission flight, were executed to test the POST2 reconstruction module. Trajectory reconstruction of the Huygens entry probe at Titan was accomplished using accelerometer measurements taken during flight to adjust an estimated state (e.g., position, velocity, parachute drag, wind velocity, etc.) in a POST2-based simulation developed to support EDL analyses and design prior to entry. Although the main emphasis of the trajectory reconstruction was to evaluate models used in the NASA pre-entry trajectory simulation, the resulting reconstructed trajectory was also assessed to provide an independent evaluation of the ESA result. Major findings from this analysis include: altitude profiles agree well with other NASA and ESA results but not with the radar data, although a scale factor of about 0.93 would bring the radar measurements into agreement with these results; the entry capsule aerodynamics predictions (axial component only) were well within the 3-sigma bounds established pre-flight for most of the entry when compared to reconstructed values; a main parachute drag 9% to 19% above the ESA model was determined from the reconstructed trajectory; and, based on the tilt sensor and accelerometer data, the probe was tilted about 10 degrees during the drogue parachute phase.
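    The reconstruction module's core is the extended Kalman filter recursion. A generic predict/update step looks like the sketch below; this is a minimal textbook EKF, not the POST2 implementation, and the function arguments are placeholders for the mission-specific dynamics and measurement models.

```python
import numpy as np

def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
    """One EKF cycle: propagate the state estimate, then correct it with a
    measurement (e.g., flight accelerometer data adjusting position, velocity,
    parachute drag, and wind states)."""
    # Predict: propagate the state and its covariance through the dynamics
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # Update: weigh the measurement against the prediction
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(x.size) - K @ H) @ P_pred
    return x_new, P_new
```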

  1. Design of high-speed burst mode clock and data recovery IC for passive optical network

    NASA Astrophysics Data System (ADS)

    Yan, Minhui; Hong, Xiaobin; Huang, Wei-Ping; Hong, Jin

    2005-09-01

    The design of a high bit rate burst-mode clock and data recovery (BMCDR) circuit for gigabit passive optical networks (GPON) is described. A top-down design flow is established, and some of the key issues related to behavioural-level modeling are addressed in consideration of the complexity of the BMCDR integrated circuit (IC). A precise Simulink behavioural model accounting for the saturation of the frequency control voltage is therefore developed for the BMCDR, so that the parameters of the circuit blocks can be readily adjusted and optimized based on the behavioural model. The newly designed BMCDR utilizes a standard 0.18 µm CMOS technology and is shown in our simulations to be capable of operating at a bit rate of 2.5 Gbps with a recovery time of one bit period.

  2. Neural integration underlying a time-compensated sun compass in the migratory monarch butterfly

    PubMed Central

    Shlizerman, Eli; Phillips-Portillo, James; Reppert, Steven M.

    2016-01-01

    Migrating Eastern North American monarch butterflies use a time-compensated sun compass to adjust their flight to the southwest direction. While the antennal genetic circadian clock and the azimuth of the sun are instrumental for proper function of the compass, it is unclear how these signals are represented on a neuronal level and how they are integrated to produce flight control. To address these questions, we constructed a receptive field model of the compound eye that encodes the solar azimuth. We then derived a neural circuit model, which integrates azimuthal and circadian signals to correct flight direction. The model demonstrates an integration mechanism, which produces robust trajectories reaching the southwest regardless of the time of day and includes a configuration for remigration. Comparison of model simulations with flight trajectories of butterflies in a flight simulator shows analogous behaviors and affirms the prediction that midday is the optimal time for migratory flight. PMID:27149852

  3. Nonlinear dynamics of autonomous vehicles with limits on acceleration

    NASA Astrophysics Data System (ADS)

    Davis, L. C.

    2014-07-01

    The stability of autonomous vehicle platoons with limits on acceleration and deceleration is determined. If the leading-vehicle acceleration remains within the limits, all vehicles in the platoon remain within the limits when the relative-velocity feedback coefficient equals the reciprocal of the headway time constant (k = 1/h). Furthermore, if the sensitivity α > 1/h, no collisions occur. String stability for small perturbations is assumed, and the initial condition is taken as the equilibrium state. Other values of k and α that give stability with no collisions are found from simulations. For vehicles with non-negligible mechanical response, simulations indicate that the acceleration-feedback-control gain might have to be dynamically adjusted to obtain optimal performance as the response time changes with engine speed. Stability is demonstrated for some perturbations that cause initial acceleration or deceleration beyond the limits, yet do not cause collisions.
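    A minimal sketch of such a platoon, using a constant-time-headway spacing policy with gains α and k and clipped accelerations (the disturbance profile and parameter values are illustrative; α > 1/h and k = 1/h match the no-collision conditions above):

```python
import numpy as np

h, k, alpha = 1.0, 1.0, 1.5            # headway time constant and feedback gains
a_min, a_max = -3.0, 2.0               # deceleration/acceleration limits (m/s^2)
dt, N = 0.05, 10                       # time step (s) and platoon size

x = -np.arange(N) * 20.0               # equilibrium spacing = h * v = 20 m
v = np.full(N, 20.0)                   # initial speeds (m/s)

for step in range(int(60.0 / dt)):
    t = step * dt
    a = np.zeros(N)
    a[0] = -2.5 if 5.0 < t < 8.0 else 0.0                # leader brakes briefly
    gap = x[:-1] - x[1:]
    a[1:] = alpha * (gap - h * v[1:]) + k * (v[:-1] - v[1:])
    a = np.clip(a, a_min, a_max)                         # acceleration limits
    v = np.maximum(v + a * dt, 0.0)
    x = x + v * dt
    assert np.all(x[:-1] > x[1:]), "collision"
print("no collisions; final gaps (m):", np.round(x[:-1] - x[1:], 1))
```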

  4. Design of a force reflecting hand controller for space telemanipulation studies

    NASA Technical Reports Server (NTRS)

    Paines, J. D. B.

    1987-01-01

    The potential importance of space telemanipulator systems is reviewed, along with past studies of master-slave manipulation using a generalized force reflecting master arm. Problems concerning their dynamic interaction with the human operator have been revealed in the use of these systems, with marked differences between 1-g and simulated weightless conditions. A study is outlined to investigate the optimization of the man machine dynamics of master-slave manipulation, and a set of specifications is determined for the apparatus necessary to perform this investigation. This apparatus is a one degree of freedom force reflecting hand controller with closed loop servo control which enables it to simulate arbitrary dynamic properties to high bandwidth. Design of the complete system and its performance is discussed. Finally, the experimental adjustment of the hand controller dynamics for smooth manual control performance with good operator force perception is described, resulting in low inertia, viscously damped hand controller dynamics.

  5. Double propensity-score adjustment: A solution to design bias or bias due to incomplete matching.

    PubMed

    Austin, Peter C

    2017-02-01

    Propensity-score matching is frequently used to reduce the effects of confounding when using observational data to estimate the effects of treatments. Matching allows one to estimate the average effect of treatment in the treated. Rosenbaum and Rubin coined the term "bias due to incomplete matching" to describe the bias that can occur when some treated subjects are excluded from the matched sample because no appropriate control subject was available. The presence of incomplete matching raises important questions around the generalizability of estimated treatment effects to the entire population of treated subjects. We describe an analytic solution to address the bias due to incomplete matching. Our method is based on using optimal or nearest neighbor matching, rather than caliper matching (which frequently results in the exclusion of some treated subjects). Within the sample matched on the propensity score, covariate adjustment using the propensity score is then employed to impute missing potential outcomes under lack of treatment for each treated subject. Using Monte Carlo simulations, we found that the proposed method resulted in estimates of treatment effect that were essentially unbiased. This method resulted in decreased bias compared to caliper matching alone and compared to either optimal matching or nearest neighbor matching alone. Caliper matching alone resulted in design bias or bias due to incomplete matching, while optimal matching or nearest neighbor matching alone resulted in bias due to residual confounding. The proposed method also tended to result in estimates with decreased mean squared error compared to when caliper matching was used.
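    A numerical sketch of the two-stage idea follows: propensity-score matching that retains every treated subject (nearest-neighbor, no caliper), then covariate adjustment on the propensity score within the matched sample to impute each treated subject's untreated outcome. The data-generating process and effect size are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(1)

# Simulated observational data: confounders X affect both treatment and outcome
n = 2000
X = rng.normal(size=(n, 3))
t = rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ [0.8, -0.5, 0.3])))
y = X @ [1.0, 0.5, -0.7] + 2.0 * t + rng.normal(size=n)   # true effect = 2.0

# Stage 1: propensity scores and nearest-neighbor matching (with replacement);
# no caliper is used, so no treated subject is excluded (no incomplete matching)
ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]
controls = np.where(~t)[0]
matched = controls[np.abs(ps[controls][None, :] - ps[t][:, None]).argmin(axis=1)]

# Stage 2: within the matched sample, regress the control outcomes on the
# propensity score and impute the untreated potential outcome of each treated
reg = LinearRegression().fit(ps[matched].reshape(-1, 1), y[matched])
y0_hat = reg.predict(ps[t].reshape(-1, 1))
print(f"estimated effect in the treated: {(y[t] - y0_hat).mean():.2f}")
```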

  6. Double propensity-score adjustment: A solution to design bias or bias due to incomplete matching

    PubMed Central

    2016-01-01

    Propensity-score matching is frequently used to reduce the effects of confounding when using observational data to estimate the effects of treatments. Matching allows one to estimate the average effect of treatment in the treated. Rosenbaum and Rubin coined the term “bias due to incomplete matching” to describe the bias that can occur when some treated subjects are excluded from the matched sample because no appropriate control subject was available. The presence of incomplete matching raises important questions around the generalizability of estimated treatment effects to the entire population of treated subjects. We describe an analytic solution to address the bias due to incomplete matching. Our method is based on using optimal or nearest neighbor matching, rather than caliper matching (which frequently results in the exclusion of some treated subjects). Within the sample matched on the propensity score, covariate adjustment using the propensity score is then employed to impute missing potential outcomes under lack of treatment for each treated subject. Using Monte Carlo simulations, we found that the proposed method resulted in estimates of treatment effect that were essentially unbiased. This method resulted in decreased bias compared to caliper matching alone and compared to either optimal matching or nearest neighbor matching alone. Caliper matching alone resulted in design bias or bias due to incomplete matching, while optimal matching or nearest neighbor matching alone resulted in bias due to residual confounding. The proposed method also tended to result in estimates with decreased mean squared error compared to when caliper matching was used. PMID:25038071

  7. Methods and devices for optimizing the operation of a semiconductor optical modulator

    DOEpatents

    Zortman, William A.

    2015-07-14

    A semiconductor-based optical modulator includes a control loop to control and optimize the modulator's operation for relatively high data rates (above 1 GHz) and/or relatively high voltage levels. Both the amplitude of the modulator's driving voltage and the bias of the driving voltage may be adjusted using the control loop. Such adjustments help to optimize the operation of the modulator by reducing the number of errors present in a modulated data stream.

  8. A design of calibration single star simulator with adjustable magnitude and optical spectrum output system

    NASA Astrophysics Data System (ADS)

    Hu, Guansheng; Zhang, Tao; Zhang, Xuan; Shi, Gentai; Bai, Haojie

    2018-03-01

    In order to achieve multi-color-temperature and multi-magnitude output, with the magnitude and temperature adjustable in real time, a new type of calibration single star simulator with adjustable magnitude and optical spectrum output was designed in this article. A xenon lamp and a halogen tungsten lamp were used as the light sources. Control of the spectral band and color temperature of the simulated star was realized by combining multiple narrow-band spectral beams of varying intensity. When light sources with different spectral characteristics and color temperatures enter the magnitude regulator, the attenuation of the light energy is controlled by adjusting the luminosity. This method fully satisfies the requirements of a calibration single star simulator with adjustable magnitude and optical spectrum output, achieving the goal of adjustable magnitude and spectrum.

  9. Tidal Turbine Array Optimization Based on the Discrete Particle Swarm Algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Guo-wei; Wu, He; Wang, Xiao-yong; Zhou, Qing-wei; Liu, Xiao-man

    2018-06-01

    Because an unreasonable layout of tidal current turbines wastes resources and worsens the ratio of cost to power output, a particle swarm optimization algorithm is introduced and improved in this paper. In order to solve the problem of optimally arraying tidal turbines, a discrete particle swarm optimization (DPSO) algorithm is constructed by redefining the updating strategies for particle velocity and position. This paper analyzes the micrositing optimization problem for tidal current turbines by adjusting each turbine's position, where the maximum total electric power is obtained at the maximum speeds of the flood and ebb tides. First, the best number of installed turbines is determined by maximizing the output energy of the given tidal farm using the Farm/Flux and empirical methods. Second, considering the wake effect, the reasonable distance between turbines, and the factors influencing tidal velocities in the tidal farm, the Jensen wake model and an elliptic distribution model are selected to calculate the turbines' total generating capacity at the maximum flood-tide and ebb-tide speeds. Finally, the total generating capacity, regarded as the objective function, is calculated in the final simulation, so that the DPSO can guide the individuals to the feasible region and the optimal positions. The results show that the optimization algorithm, which yielded 6.19% more resource output than the empirical method, is a good tool for the engineering design of tidal energy demonstrations.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hay, J.; Schwender, J.

    Computational simulation of large-scale biochemical networks can be used to analyze and predict the metabolic behavior of an organism, such as a developing seed. Based on the biochemical literature, pathways databases and decision rules defining reaction directionality we reconstructed bna572, a stoichiometric metabolic network model representing Brassica napus seed storage metabolism. In the highly compartmentalized network about 25% of the 572 reactions are transport reactions interconnecting nine subcellular compartments and the environment. According to known physiological capabilities of developing B. napus embryos, four nutritional conditions were defined to simulate heterotrophy or photoheterotrophy, each in combination with the availability of inorganic nitrogen (ammonia, nitrate) or amino acids as nitrogen sources. Based on mathematical linear optimization the optimal solution space was comprehensively explored by flux variability analysis, thereby identifying for each reaction the range of flux values allowable under optimality. The range and variability of flux values was then categorized into flux variability types. Across the four nutritional conditions, approximately 13% of the reactions have variable flux values and 10-11% are substitutable (can be inactive), both indicating metabolic redundancy given, for example, by isoenzymes, subcellular compartmentalization or the presence of alternative pathways. About one-third of the reactions are never used and are associated with pathways that are suboptimal for storage synthesis. Fifty-seven reactions change flux variability type among the different nutritional conditions, indicating their function in metabolic adjustments. This predictive modeling framework allows analysis and quantitative exploration of storage metabolism of a developing B. napus oilseed.
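
    As a minimal sketch of flux variability analysis (on a toy four-reaction network, not bna572) using scipy's linear programming: the objective flux is first maximized, then pinned at its optimum while each reaction flux is minimized and maximized in turn. The isoenzyme pair v2/v3 comes out as "substitutable", mirroring the flux variability types described above.

      import numpy as np
      from scipy.optimize import linprog

      # Stoichiometry (2 metabolites x 4 reactions), steady state S @ v = 0:
      # v1: -> A;  v2: A -> B;  v3: A -> B (isoenzyme);  v4: B -> (storage proxy)
      S = np.array([[1.0, -1.0, -1.0,  0.0],
                    [0.0,  1.0,  1.0, -1.0]])
      bounds = [(0.0, 10.0)] * 4
      c_obj = np.array([0.0, 0.0, 0.0, -1.0])   # maximize v4 via minimizing -v4

      v_opt = -linprog(c_obj, A_eq=S, b_eq=np.zeros(2), bounds=bounds).fun

      # FVA: add the optimality constraint v4 = v_opt, then scan each flux.
      A_eq = np.vstack([S, -c_obj])
      b_eq = np.append(np.zeros(2), v_opt)
      for j in range(4):
          e = np.zeros(4)
          e[j] = 1.0
          lo = linprog(e, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
          hi = -linprog(-e, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
          print(f"v{j+1}: allowable range under optimality [{lo:.1f}, {hi:.1f}]")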

  11. Multiple wavelength spectral system simulating background light noise environment in satellite laser communications

    NASA Astrophysics Data System (ADS)

    Lu, Wei; Sun, Jianfeng; Hou, Peipei; Xu, Qian; Xi, Yueli; Zhou, Yu; Zhu, Funan; Liu, Liren

    2017-08-01

    Performance of satellite laser communications between GEO and LEO satellites can be influenced by background light noise appearing in the field of view due to sunlight, planets, or comets. Such influences should be studied on a ground testing platform before space application. In this paper, we introduce a simulator that reproduces realistic background light noise in the space environment during laser-beam data transmission between two distant satellites. The simulator can reproduce not only the effect of a multi-wavelength spectrum but also adjustable field-of-view angles, a large range of adjustable optical power, and adjustable deflection speeds of the light noise encountered in the space environment. These functions are integrated into a small, compact device for easy mobile use, and software control via a personal computer allows them to be adjusted arbitrarily.

  12. SIM_ADJUST -- A computer code that adjusts simulated equivalents for observations or predictions

    USGS Publications Warehouse

    Poeter, Eileen P.; Hill, Mary C.

    2008-01-01

    This report documents the SIM_ADJUST computer code. SIM_ADJUST surmounts an obstacle that is sometimes encountered when using universal model analysis computer codes such as UCODE_2005 (Poeter and others, 2005), PEST (Doherty, 2004), and OSTRICH (Matott, 2005; Fredrick and others, 2007). These codes often read simulated equivalents from a list in a file produced by a process model such as MODFLOW that represents a system of interest. At times values needed by the universal code are missing or assigned default values because the process model could not produce a useful solution. SIM_ADJUST can be used to (1) read a file that lists expected observation or prediction names and possible alternatives for the simulated values; (2) read a file produced by a process model that contains space or tab delimited columns, including a column of simulated values and a column of related observation or prediction names; (3) identify observations or predictions that have been omitted or assigned a default value by the process model; and (4) produce an adjusted file that contains a column of simulated values and a column of associated observation or prediction names. The user may provide alternatives that are constant values or that are alternative simulated values. The user may also provide a sequence of alternatives. For example, the heads from a series of cells may be specified to ensure that a meaningful value is available to compare with an observation located in a cell that may become dry. SIM_ADJUST is constructed using modules from the JUPITER API, is intended for use on any computer operating system, and consists of algorithms programmed in Fortran90 that perform the numerical calculations efficiently.
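
    A minimal Python sketch of the substitution logic described above (the real SIM_ADJUST is Fortran90 built on JUPITER API modules; the names and the default flag value here are hypothetical):

      DEFAULT = -999.0  # hypothetical flag the process model writes for, e.g., a dry cell

      def adjust(expected, simulated, alternatives):
          """expected: observation/prediction names the universal code will ask for.
          simulated: dict name -> value parsed from the process-model output columns.
          alternatives: dict name -> sequence of fallbacks (other names or constants)."""
          adjusted = {}
          for name in expected:
              value = simulated.get(name, DEFAULT)
              for alt in alternatives.get(name, []):
                  if value != DEFAULT:
                      break
                  # An alternative may be another simulated name or a constant value.
                  value = simulated.get(alt, alt) if isinstance(alt, str) else alt
              adjusted[name] = value
          return adjusted

      simulated = {"h1": 12.3, "h2": DEFAULT, "h2_neighbor": 11.8}
      print(adjust(["h1", "h2", "h3"], simulated,
                   {"h2": ["h2_neighbor"], "h3": [10.0]}))
      # -> {'h1': 12.3, 'h2': 11.8, 'h3': 10.0}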

  13. Node Redeployment Algorithm Based on Stratified Connected Tree for Underwater Sensor Networks

    PubMed Central

    Liu, Jun; Jiang, Peng; Wu, Feng; Yu, Shanen; Song, Chunyue

    2016-01-01

    During operation of underwater sensor networks (UWSNs), nodes drift with the water, causing network topology changes. Periodic examination and adjustment of node locations are needed to maintain good network monitoring quality for as long as possible. In this paper, a node redeployment algorithm based on a stratified connected tree for UWSNs is proposed. At every network adjustment moment, each node first examines and adjusts its own location: if a node is outside the monitored space, it returns along a straight line to the last location recorded in its memory. The network topology is then stratified into a connected tree rooted at the sink node by broadcasting ready information level by level, which improves the network connectivity rate. Finally, jointly considering the network coverage rate, the connectivity rate, and the node movement distance, the sink node performs a centralized optimization of the locations of the leaf nodes in the stratified connected tree. Simulation results show that the proposed redeployment algorithm not only keeps as many nodes as possible in the monitored space and maintains good network coverage and connectivity rates during network operation, but also reduces node movement distance during redeployment and prolongs the network lifetime. PMID:28029124

  14. Dosing algorithm to target a predefined AUC in patients with primary central nervous system lymphoma receiving high dose methotrexate.

    PubMed

    Joerger, Markus; Ferreri, Andrés J M; Krähenbühl, Stephan; Schellens, Jan H M; Cerny, Thomas; Zucca, Emanuele; Huitema, Alwin D R

    2012-02-01

    There is no consensus regarding optimal dosing of high dose methotrexate (HDMTX) in patients with primary CNS lymphoma. Our aim was to develop a convenient dosing algorithm to target AUC(MTX) in the range between 1000 and 1100 µmol l(-1) h. A population covariate model from a pooled dataset of 131 patients receiving HDMTX was used to simulate concentration-time curves of 10,000 patients and test the efficacy of a dosing algorithm based on 24 h MTX plasma concentrations to target the prespecified AUC(MTX). These data simulations included interindividual, interoccasion and residual unidentified variability. Patients received a total of four simulated cycles of HDMTX and adjusted MTX dosages were given for cycles two to four. The dosing algorithm proposes MTX dose adaptations ranging from +75% in patients with MTX C(24) < 0.5 µmol l(-1) up to -35% in patients with MTX C(24) > 12 µmol l(-1). The proposed dosing algorithm resulted in a marked improvement of the proportion of patients within the AUC(MTX) target between 1000 and 1100 µmol l(-1) h (11% with standard MTX dose, 35% with the adjusted dose) and a marked reduction of the interindividual variability of MTX exposure. A simple and practical dosing algorithm for HDMTX has been developed based on MTX 24 h plasma concentrations, and its potential efficacy in improving the proportion of patients within a prespecified target AUC(MTX) and reducing the interindividual variability of MTX exposure has been shown by data simulations. The clinical benefit of this dosing algorithm should be assessed in patients with primary central nervous system lymphoma (PCNSL). © 2011 The Authors. British Journal of Clinical Pharmacology © 2011 The British Pharmacological Society.
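
    The abstract reports only the two end-points of the adaptation rule (+75% below 0.5 µmol l(-1) and -35% above 12 µmol l(-1)); the sketch below is therefore heavily hedged, with the intermediate tiers filled in by a purely hypothetical log-linear interpolation between those end-points:

      import math

      def mtx_dose_adjustment(c24):
          """Fractional dose change for the next HDMTX cycle from the previous
          cycle's 24 h plasma concentration (umol/L). End-points per the abstract;
          the in-between rule is a hypothetical log-linear interpolation."""
          if c24 < 0.5:
              return +0.75
          if c24 > 12.0:
              return -0.35
          t = (math.log(c24) - math.log(0.5)) / (math.log(12.0) - math.log(0.5))
          return 0.75 + t * (-0.35 - 0.75)

      for c in (0.3, 1.0, 4.0, 15.0):
          print(f"C24 = {c:5.1f} umol/L -> dose change {mtx_dose_adjustment(c):+.0%}")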

  15. Optimization of the K-edge imaging for vulnerable plaques using gold nanoparticles and energy-resolved photon counting detectors: a simulation study

    PubMed Central

    Alivov, Yahya; Baturin, Pavlo; Le, Huy Q.; Ducote, Justin; Molloi, Sabee

    2014-01-01

    We investigated the effect of different imaging parameters, such as dose, beam energy, energy resolution, and number of energy bins, on the image quality of K-edge spectral computed tomography (CT) of gold nanoparticles (GNP) accumulated in an atherosclerotic plaque. A maximum-likelihood technique was employed to estimate the concentration of GNP, which served as a targeted intravenous contrast material intended to detect the degree of plaque inflammation. The simulation studies used a single-slice parallel-beam CT geometry with an X-ray beam energy ranging between 50 and 140 kVp. The synthetic phantoms included a small cylinder (3 cm in diameter) and a chest phantom (33×24 cm²), both containing tissue, calcium, and gold. In the simulation studies, GNP quantification and background (calcium and tissue) suppression tasks were pursued. The X-ray detection sensor was represented by an energy-resolved photon counting detector (e.g., CdZnTe) with adjustable energy bins. Both an ideal and a more realistic (12% FWHM energy resolution) implementation of the photon counting detector were simulated. The simulations were performed for a CdZnTe detector with a pixel pitch of 0.5-1 mm, which corresponds to performance without significant charge-sharing and cross-talk effects. The Rose model was employed to estimate the minimum detectable concentration of GNPs. A figure of merit (FOM) was used to optimize the X-ray beam energy (kVp) to achieve the highest signal-to-noise ratio (SNR) with respect to patient dose. As a result, successful identification of gold and background suppression was demonstrated. The highest FOM was observed at 125 kVp X-ray beam energy. The minimum detectable GNP concentration was determined to be approximately 1.06 μmol/mL (0.21 mg/mL) for the ideal detector and about 2.5 μmol/mL (0.49 mg/mL) for the more realistic (12% FWHM) detector. The studies identify the optimal imaging parameters at the lowest patient dose when using an energy-resolved photon counting detector to image GNP in an atherosclerotic plaque. PMID:24334301

  16. Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdel-Khalik, Hany S.; Turinsky, Paul J.

    2005-07-15

    Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. A meaningful adaption will result in high-fidelity and robust adapted core simulator models. To perform adaption, we propose an inverse theory approach in which the multitudes of input data to core simulators, i.e., reactor physics and thermal-hydraulic data, are to be adjusted to improve agreement with measured observables while keeping core simulator models unadapted. At first glance, devising such adaption for typical core simulators with millions of input and observables data would spawn not only several prohibitive challenges but also numerous disparaging concerns. The challenges include the computational burdens of the sensitivity-type calculations required to construct Jacobian operators for the core simulator models. Also, the computational burdens of the uncertainty-type calculations required to estimate the uncertainty information of core simulator input data present a demanding challenge. The concerns however are mainly related to the reliability of the adjusted input data. The methodologies of adaptive simulation are well established in the literature of data adjustment. We adopt the same general framework for data adjustment; however, we refrain from solving the fundamental adjustment equations in a conventional manner. We demonstrate the use of our so-called Efficient Subspace Methods (ESMs) to overcome the computational and storage burdens associated with the core adaption problem. We illustrate the successful use of ESM-based adaptive techniques for a typical boiling water reactor core simulator adaption problem.

  17. A hybrid approach to modeling and control of vehicle height for electronically controlled air suspension

    NASA Astrophysics Data System (ADS)

    Sun, Xiaoqiang; Cai, Yingfeng; Wang, Shaohua; Liu, Yanling; Chen, Long

    2016-01-01

    The control problems associated with vehicle height adjustment of electronically controlled air suspension (ECAS) still pose theoretical challenges for researchers, as reflected in recent publications on the subject. This paper deals with modeling and control of a vehicle height adjustment system for ECAS, which is an example of a hybrid dynamical system due to the coexistence and coupling of continuous variables and discrete events. A mixed logical dynamical (MLD) modeling approach is chosen to capture sufficient detail of the vehicle height adjustment process. The hybrid dynamic model is constructed on the basis of some assumptions and a piecewise-linear approximation of the component nonlinearities. Then, the on-off statuses of the solenoid valves and the piecewise approximation are described by propositional logic, and the hybrid system is transformed automatically by HYSDEL into a set of linear mixed-integer equalities and inequalities, denoted the MLD model. Using this model, a hybrid model predictive controller (HMPC) is tuned based on online mixed-integer quadratic optimization (MIQP). Two different scenarios are considered in the simulation, whose results verify the height adjustment effectiveness of the proposed approach. Explicit solutions of the controller are computed to control the vehicle height adjustment system in real time using offline multi-parametric programming technology (MPT), thus converting the controller into an equivalent explicit piecewise affine form. Finally, bench experiments for vehicle height lifting, holding and lowering procedures are conducted, which demonstrate that the HMPC can adjust the vehicle height by controlling the on-off statuses of the solenoid valves directly. This research proposes a new modeling and control method for vehicle height adjustment of ECAS, which leads to a closed-loop system with favorable dynamical properties.

  18. USMC Inventory Control Using Optimization Modeling and Discrete Event Simulation

    DTIC Science & Technology

    2016-09-01

    Distribution is unlimited. This thesis, by Timothy A. Curling, combines optimization modeling and discrete-event simulation for USMC inventory control; this construct can potentially provide an effective means of improving order management decisions.

  19. ConvAn: a convergence analyzing tool for optimization of biochemical networks.

    PubMed

    Kostromins, Andrejs; Mozga, Ivars; Stalidzans, Egils

    2012-01-01

    Dynamic models of biochemical networks are usually described as systems of nonlinear differential equations. When models are optimized for parameter estimation or for the design of new properties, mainly numerical methods are used. This makes the predictability of optimization a problem: most numerical optimization methods have stochastic properties, and convergence of the objective function to the global optimum is hardly predictable. Determining a suitable optimization method and the necessary duration of optimization becomes critical when a high number of combinations of adjustable parameters must be evaluated or when dynamic models are large. This task is complex due to the variety of optimization methods and software tools and the nonlinearity features of models in different parameter spaces. The software tool ConvAn analyzes the statistical properties of convergence dynamics for optimization runs with a particular optimization method, model, software tool, set of optimization-method parameters, and number of adjustable model parameters. Convergence curves can be normalized automatically to enable comparison of different methods and models on the same scale. With the biochemistry-adapted graphical user interface of ConvAn, it is possible to compare different optimization methods in terms of their ability to find the global optimum, or values close to it, as well as the computational time necessary to reach it, and to estimate optimization performance for different numbers of adjustable parameters. The functionality of ConvAn also enables statistical assessment of the necessary optimization time depending on the required optimization accuracy. Optimization methods that are unsuitable for a particular optimization task can be rejected if they show poor repeatability or convergence properties. The software ConvAn is freely available at www.biosystems.lv/convan. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  20. Development of a Groundwater Transport Simulation Tool for Remedial Process Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ivarson, Kristine A.; Hanson, James P.; Tonkin, M.

    2015-01-14

    The groundwater remedy for hexavalent chromium at the Hanford Site includes operation of five large pump-and-treat systems along the Columbia River. The systems at the 100-HR-3 and 100-KR-4 groundwater operable units treat a total of about 9,840 liters per minute (2,600 gallons per minute) of groundwater to remove hexavalent chromium, and cover an area of nearly 26 square kilometers (10 square miles). The pump-and-treat systems result in large scale manipulation of groundwater flow direction, velocities, and most importantly, the contaminant plumes. Tracking of the plumes and predicting needed system modifications is part of the remedial process optimization, and is a continual process with the goal of reducing costs and shortening the timeframe to achieve the cleanup goals. While most of the initial system evaluations are conducted by assessing performance (e.g., reduction in contaminant concentration in groundwater and changes in inferred plume size), changes to the well field are often recommended. To determine the placement for new wells, well realignments, and modifications to pumping rates, it is important to be able to predict resultant plume changes. In smaller systems, it may be effective to make small scale changes periodically and adjust modifications based on groundwater monitoring results. Due to the expansive nature of the remediation systems at Hanford, however, additional tools were needed to predict the plume reactions to system changes. A computer simulation tool was developed to support pumping rate recommendations for optimization of large pump-and-treat groundwater remedy systems. This tool, called the Pumping Optimization Model, or POM, is based on a 1-layer derivation of a multi-layer contaminant transport model using MODFLOW and MT3D.

  1. Ultimate open pit stochastic optimization

    NASA Astrophysics Data System (ADS)

    Marcotte, Denis; Caron, Josiane

    2013-02-01

    Classical open pit optimization (the maximum closure problem) is made on block estimates, without directly considering the uncertainty of block grades. We propose an alternative approach of stochastic optimization. The stochastic optimization is taken as the optimal pit computed on the block expected profits, rather than expected grades, computed from a series of conditional simulations. The stochastic optimization generates, by construction, larger ore and waste tonnages than the classical optimization. Contrary to the classical approach, the stochastic optimization is conditionally unbiased for the realized profit given the predicted profit. A series of simulated deposits with different variograms are used to compare the stochastic approach, the classical approach, and the simulated approach that maximizes expected profit among simulated designs. Profits obtained with the stochastic optimization are generally larger than with the classical or simulated pit. The main factor controlling the relative gain of stochastic optimization compared to the classical approach and the simulated pit is shown to be the information level, as measured by the borehole spacing/range ratio. The relative gains of the stochastic approach over the classical approach increase with the treatment costs but decrease with mining costs. The relative gains of the stochastic approach over the simulated pit approach increase with both the treatment and mining costs. At early stages of an open pit project, when uncertainty is large, the stochastic optimization approach appears preferable to the classical approach or the simulated pit approach for fair comparison of the values of alternative projects and for the initial design and planning of the open pit.
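
    A single-block toy example of why the two approaches differ (illustrative only; the paper optimizes a full maximum-closure pit on block expected profits): because block profit is a convex function of grade (the block is processed only if that beats sending it to waste), the profit averaged over conditional simulations is at least the profit of the expected grade, so ignoring grade uncertainty systematically misvalues blocks.

      import numpy as np

      rng = np.random.default_rng(2)
      grades = rng.lognormal(mean=-0.3, sigma=0.8, size=50)  # 50 conditional simulations

      price, recovery, treat_cost, mine_cost = 50.0, 0.9, 12.0, 4.0

      def profit(g):
          # Process only if processing beats sending the block to waste.
          return max(price * recovery * g - treat_cost, 0.0) - mine_cost

      p_classical = profit(grades.mean())                  # profit of the expected grade
      p_stochastic = np.mean([profit(g) for g in grades])  # expected profit

      print(f"profit of expected grade: {p_classical:.2f}")
      print(f"expected profit:          {p_stochastic:.2f}")  # >= by Jensen's inequality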

  2. Method for optimizing output in ultrashort-pulse multipass laser amplifiers with selective use of a spectral filter

    DOEpatents

    Backus, Sterling J [Erie, CO; Kapteyn, Henry C [Boulder, CO

    2007-07-10

    A method for optimizing multipass laser amplifier output utilizes a spectral filter in early passes but not in later passes. The pulses shift position slightly on each pass through the amplifier, and the filter is placed such that early passes intersect the filter while later passes bypass it. The filter position may be adjusted offline to set the number of passes in each category. The filter may be optimized for use in a cryogenic amplifier.

  3. Adiabatic two-qubit state preparation in a superconducting qubit system

    NASA Astrophysics Data System (ADS)

    Filipp, Stefan; Ganzhorn, Marc; Egger, Daniel; Fuhrer, Andreas; Moll, Nikolaj; Mueller, Peter; Roth, Marco; Schmidt, Sebastian

    The adiabatic transport of a quantum system from an initial eigenstate to its final state while remaining in the instantaneous eigenstate of the driving Hamiltonian can be used for robust state preparation. With control over both qubit frequencies and qubit-qubit couplings this method can be used to drive the system from initially trivial eigenstates of the uncoupled qubits to complex entangled multi-qubit states. In the context of quantum simulation, the final state may encode a non-trivial ground-state of a complex molecule or, in the context of adiabatic quantum computing, the solution to an optimization problem. Here, we present experimental results on a system comprising fixed-frequency superconducting transmon qubits and a tunable coupler to adjust the qubit-qubit coupling via parametric frequency modulation. We realize different types of interaction by adjusting the frequency of the modulation. A slow variation of drive amplitude and phase leads to an adiabatic steering of the system to its final state showing entanglement between the qubits.

  4. A Novel Adjustable Concept for Permeable Gas/Vapor Protective Clothing: Balancing Protection and Thermal Strain.

    PubMed

    Bogerd, Cornelis Peter; Langenberg, Johannes Pieter; DenHartog, Emiel A

    2018-02-13

    Armed forces typically have personal protective clothing (PPC) in place to offer protection against chemical, biological, radiological and nuclear (CBRN) agents. The regular soldier is equipped with permeable CBRN-PPC. However, depending on the operational task, these PPCs impose too much thermal strain on the wearer, which results in a higher risk of uncompensable heat stress. This study investigates the possibilities of adjustable CBRN-PPC, consisting of different layers that can be worn separately or in combination with each other. This novel concept aims to achieve an optimal balance between protection and thermal strain during operations. Two CBRN-PPC (protective) layers were obtained from two separate manufacturers: (i) a next-to-skin (NTS) layer and (ii) a low-burden battle dress uniform (protective BDU). In addition to these layers, a standard (non-CBRN protective) BDU (sBDU) was also made available. The effect of combining clothing layers on the level of protection was investigated with a Man-In-Simulant Test. Finally, a mechanistic numerical model was employed to give insight into the thermal burden of the evaluated CBRN-PPC concepts. Combining layers results in substantially higher protection, more than the sum of the individual layers. Reducing the airflow over the protective layer closest to the skin seems to play an important role in this, since combining the NTS with the sBDU also resulted in substantially higher protection. As expected, the thermal strain posed by the different clothing layer combinations decreases as the level of protection decreases. This study has shown that the concept of adjustable protection and thermal strain through multiple layers of CBRN-PPC works. Adjustable CBRN-PPC allows for optimization of the CBRN-PPC in relation to the threat level, thermal environment, and tasks at hand in an operational setting. © The Author(s) 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  5. Simulation and optimization of a 10 A electron gun with electrostatic compression for the electron beam ion source.

    PubMed

    Pikin, A; Beebe, E N; Raparia, D

    2013-03-01

    Increasing the current density of the electron beam in the ion trap of the Electron Beam Ion Source (EBIS) in BNL's Relativistic Heavy Ion Collider facility would confer several essential benefits. They include increasing the ions' charge states, and therefore, the ions' energy out of the Booster for NASA applications, reducing the influx of residual ions in the ion trap, lowering the average power load on the electron collector, and possibly also reducing the emittance of the extracted ion beam. Here, we discuss our findings from a computer simulation of an electron gun with electrostatic compression for electron current up to 10 A that can deliver a high-current-density electron beam for EBIS. The magnetic field in the cathode-anode gap is formed with a magnetic shield surrounding the gun electrodes, and the residual magnetic field on the cathode is (5-6) Gs. It was demonstrated that for the optimized gun geometry, within the electron beam current range of (0.5-10) A, the amplitude of radial beam oscillations can be maintained close to 4% of the beam radius by adjusting the injection magnetic field generated by a separate magnetic coil. Simulating the performance of the gun by varying geometrical parameters indicated that the original gun model is close to optimum and the requirements on the precision of positioning the gun elements can easily be met with conventional technology.

  7. A Continuum Poisson-Boltzmann Model for Membrane Channel Proteins

    PubMed Central

    Xiao, Li; Diao, Jianxiong; Greene, D'Artagnan; Wang, Junmei; Luo, Ray

    2017-01-01

    Membrane proteins constitute a large portion of the human proteome and perform a variety of important functions as membrane receptors, transport proteins, enzymes, signaling proteins, and more. Computational studies of membrane proteins are usually much more complicated than those of globular proteins. Here we propose a new continuum model for Poisson-Boltzmann calculations of membrane channel proteins. Major improvements over the existing continuum slab model are as follows: (1) the location and thickness of the slab model are fine-tuned based on explicit-solvent MD simulations; (2) the highly different accessibilities in the membrane and water regions are addressed with a two-step, two-probe grid-labeling procedure; and (3) the water pores/channels are automatically identified. The new continuum membrane model is optimized (by adjusting the membrane probe, as well as the slab thickness and center) to best reproduce the distributions of buried water molecules in the membrane region as sampled in explicit water simulations. Our optimization also shows that the widely adopted water probe of 1.4 Å for globular proteins is a very reasonable default value for membrane protein simulations. It gives the best compromise in reproducing the explicit water distributions in membrane channel proteins, at least in the water-accessible pore/channel regions that we focus on. Finally, we validate the new membrane model by carrying out binding affinity calculations for a potassium channel, and we observe good agreement with experimental results. PMID:28564540

  8. Subject-specific left ventricular dysfunction modeling using composite material mechanics approach

    NASA Astrophysics Data System (ADS)

    Haddad, Seyed Mohammad Hassan; Karami, Elham; Samani, Abbas

    2017-03-01

    Diverse cardiac conditions such as myocardial infarction and hypertension can lead to diastolic dysfunction, a prevalent cardiac condition. Diastolic dysfunction can arise through different adverse mechanisms such as abnormal left ventricle (LV) relaxation, filling, and diastolic stiffness. This paper is geared towards evaluating diastolic stiffness and measuring the LV blood pressure non-invasively. Diastolic stiffness is an important parameter that can be exploited for more accurate diagnosis of diastolic dysfunction. For this purpose, a finite element (FE) LV mechanical model, which works based on a novel composite material model of the cardiac tissue, was utilized. Here, the model was tested in an inversion-based application, where it was used to estimate the passive stiffness of the cardiac tissue as well as the diastolic LV blood pressure. To this end, the model was applied to simulate diastolic inflation of the human LV. The start-of-diastole LV geometry was obtained from MR image data segmentation of a healthy human volunteer, and the obtained LV geometry was discretized into an FE mesh before the FE simulation was conducted. The LV tissue stiffness and diastolic LV blood pressure were adjusted through optimization to achieve the best match between the calculated LV geometry and the one obtained from the imaging data. The performance of the LV mechanical simulations using the optimal values of tissue stiffness and blood pressure was validated by comparing the geometrical parameters of the dilated LV model, as well as the stress and strain distributions through the model, with available measurements reported for LV dilation.

  9. Automatic efficiency optimization of an axial compressor with adjustable inlet guide vanes

    NASA Astrophysics Data System (ADS)

    Li, Jichao; Lin, Feng; Nie, Chaoqun; Chen, Jingyi

    2012-04-01

    The inlet attack angle of the rotor blades can be adjusted by changing the stagger angle of the inlet guide vanes (IGVs), which affects the efficiency at each operating condition. To improve efficiency, a DSP (digital signal processor) controller is designed to adjust the IGV stagger angle automatically so as to optimize the efficiency at any operating condition. The A/D inputs include inlet static pressure, outlet static pressure, outlet total pressure, rotor speed, and torque; the efficiency is calculated in the DSP, and the angle command for the stepping motor that drives the IGVs is sent out through the D/A. Experimental investigations are performed in a three-stage, low-speed axial compressor with variable inlet guide vanes. It is demonstrated that the designed DSP can adjust the IGV stagger angle online and optimize the efficiency under different conditions. This online DSP adjustment scheme may provide a practical solution for improving the performance of multi-stage axial-flow compressors under varying operating conditions.
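
    Purely as an illustration of the online adjustment idea (the paper implements it on a DSP with real sensor inputs; the efficiency curve below is a made-up stand-in), a hill-climbing loop can step the IGV stagger angle and keep only the moves that raise the measured efficiency:

      def measured_efficiency(igv_angle_deg, operating_point=0.0):
          # Hypothetical plant response: the efficiency peak shifts with the
          # operating condition; in reality this comes from the A/D channels
          # (pressures, rotor speed, torque) and is computed in the DSP.
          peak = 5.0 + 2.0 * operating_point
          return 0.88 - 0.001 * (igv_angle_deg - peak) ** 2

      def optimize_igv(angle=0.0, step=1.0, operating_point=0.0, iters=50):
          """Hill climbing: command the stepping motor one step at a time and keep
          moves that raise the efficiency; shrink the step when neither direction helps."""
          eta = measured_efficiency(angle, operating_point)
          for _ in range(iters):
              for delta in (+step, -step):
                  trial = measured_efficiency(angle + delta, operating_point)
                  if trial > eta:
                      angle, eta = angle + delta, trial
                      break
              else:
                  step /= 2.0
                  if step < 0.01:
                      break
          return angle, eta

      print(optimize_igv(operating_point=1.5))  # settles near the 8-degree peak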

  10. A comparative study on stress and compliance based structural topology optimization

    NASA Astrophysics Data System (ADS)

    Hailu Shimels, G.; Dereje Engida, W.; Fakhruldin Mohd, H.

    2017-10-01

    Most structural topology optimization problems have been formulated and solved either to minimize compliance under a volume constraint or to minimize weight under stress constraints. Although much research has been conducted on these two formulations separately, there is no clear comparative study between them. This paper compares the two formulation techniques so that an end user or designer can choose the better one for the problem at hand. Benchmark problems under the same boundary and loading conditions are defined, solved, and compared under both formulations. Simulation results show that the two formulation techniques depend on the type of loading and boundary conditions defined. The maximum stress induced in the design domain is higher when the problem is formulated for compliance minimization. Optimal layouts from compliance minimization are more complex than stress-based ones, which may make manufacturing them challenging. Optimal layouts from compliance-based formulations depend on the amount of material to be distributed, whereas optimal layouts from stress-based formulations depend on the type of material used to define the design domain. The high computational time of stress-based topology optimization remains a challenge because the stress constraints are defined at the element level. The results also show that adjusting the convergence criteria can be an alternative way to reduce the maximum stress developed in optimal layouts. A designer or end user should therefore choose a formulation based on the design domain defined and the boundary conditions considered.

  11. Three-Axis Time-Optimal Attitude Maneuvers of a Rigid-Body

    NASA Astrophysics Data System (ADS)

    Wang, Xijing; Li, Jisheng

    With modern satellites trending toward both macro-scale and micro-scale designs, new demands are placed on attitude adjustment. Precise pointing control and rapid maneuvering capabilities have long been part of many space missions, and advances in computing continually enable new optimal algorithms, providing a powerful tool for solving the problem. Many papers on attitude adjustment have been published, in which the spacecraft is modeled as a rigid body with flexible parts or as a gyrostat-type system, and the objective function is usually minimum time or minimum fuel. In earlier satellite missions, attitude acquisition was achieved with momentum-exchange devices, performed through a sequential single-axis slewing strategy. More recently, the simultaneous three-axis minimum-time maneuver (reorientation) problem has been studied by many researchers. The minimum-time maneuver of a rigid spacecraft within onboard power limits is important both for potential space applications, such as surveying multiple targets, and for its academic value; it is also a basic problem because the solutions for maneuvering flexible spacecraft build on the solution of the rigid-body slew problem. A new method for the open-loop solution of a rigid-spacecraft maneuver is presented. Neglecting all perturbation torques, the necessary conditions for transferring the spacecraft from one state to another can be determined. The single-axis and multi-axis cases differ: for a single axis, an analytical solution is possible and the switching curve passing through the state-space origin is parabolic, whereas for multiple axes an analytical solution is impossible owing to the dynamic coupling between the axes, and the problem must be solved numerically. Modern research has shown that Euler-axis rotations are, in general, quasi-time-optimal. On the basis of the minimum principle, the reorientation of an inertially symmetric spacecraft with a time cost function, from an initial state of rest to a final state of rest, is derived. The solution proceeds as follows. First, the necessary conditions for solving the problem are deduced from the minimum principle; they yield a two-point boundary-value problem (TPBVP) which, when solved, produces the control history that minimizes the time performance index. In the nonsingular case, the solution is a bang-bang maneuver, with saturated controls for the entire maneuver. Singular control may exist, but it is singular only in a mathematical sense; physically, the larger the magnitude of the control torque, the shorter the time, so saturated controls are also used in the singular case. Second, since the controls are always at their maximum, the key problem is to determine the switching points, turning the original problem into one of finding the switching times. By adjusting the switch on/off times, a genetic algorithm, a robust modern method, is used to determine the switching structure without the gyroscopic coupling; the traditional GA is improved upon in this research. The homotopy method for solving the nonlinear algebraic equations rests on rigorous topological continuum theory; following the homotopy idea, relaxation parameters are introduced and the switching points are computed with simulated annealing.
Computer simulation results using a rigid body show that the new method is feasible and efficient. A practical method for computing approximate solutions to the time-optimal control switch times for rigid-body reorientation has been developed.

  12. Optimization design of submerged propeller in oxidation ditch by computational fluid dynamics and comparison with experiments.

    PubMed

    Zhang, Yuquan; Zheng, Yuan; Fernandez-Rodriguez, E; Yang, Chunxia; Zhu, Yantao; Liu, Huiwen; Jiang, Hao

    The operating condition of a submerged propeller has a significant impact on the flow field and energy consumption of an oxidation ditch. An experimentally validated numerical model, based on computational fluid dynamics (CFD), is presented to optimize the operating condition by considering two important factors: flow field and energy consumption. Performance demonstration and comparison of different operating conditions were carried out in a Carrousel oxidation ditch at the Yingtang wastewater treatment plant in Anhui Province, China. By adjusting the position, rotating speed, and number of submerged propellers, the problems of sludge deposition and low velocity in the bend could be solved in the most cost-effective way. The simulated results agreed acceptably with the experimental data, and the following results were obtained. The CFD model characterized the flow pattern and energy consumption in the full-scale oxidation ditch; the predicted flow-field values were within -1.28 ± 7.14% of the measured values. With three sets of propellers operating at a rotating speed of 6.50 rad/s, one of them located 5 m from the first curved wall, both the lowest power density and the required flow pattern could be realized, as confirmed by numerical simulation and field measurement.

  13. Feasibility study of patient-specific surgical templates for the fixation of pedicle screws.

    PubMed

    Salako, F; Aubin, C-E; Fortin, C; Labelle, H

    2002-01-01

    Surgery for scoliosis, like other posterior spinal surgeries, frequently uses pedicle screws to fix instrumentation to the spine. Misplacement of a screw can lead to intra- and post-operative complications. The objective of this study is to design patient-specific surgical templates to guide the drilling operation. From the CT scan of a vertebra, the optimal drilling direction and limit angles are computed from an inverse projection of the pedicle limits. The first template design uses a surface-to-surface registration method and was constructed in a CAD system by subtracting the vertebra from a rectangular prism and a cylinder with the optimal orientation; this template and the vertebra were built using rapid prototyping. The second design uses a point-to-surface registration method and has six adjustable screws to set the orientation and length of the drilling support device; a mechanism was designed to hold it in place on the spinous process, and a virtual prototype was built with the CATIA software. During the operation, the surgeon places either template on the patient's vertebra until a perfect match is obtained before drilling. The second design appears better than the first because it can be reused on different vertebrae and is less sensitive to registration errors. The next step is to build the second design and perform experimental and simulation tests to evaluate the benefits of this template during scoliosis surgery.

  14. A VVWBO-BVO-based GM (1,1) and its parameter optimization by GRA-IGSA integration algorithm for annual power load forecasting

    PubMed Central

    Wang, Hongguang

    2018-01-01

    Annual power load forecasting is not only the premise of reasonable macro power planning but also an important guarantee of the safe and economic operation of a power system. Given the characteristics of annual power load forecasting, the grey model GM (1,1) is widely applied. Introducing a buffer operator into GM (1,1) to pre-process the historical annual power load data is one approach to improving forecasting accuracy. To overcome the non-adjustable action intensity of the traditional weakening buffer operator, a variable-weight weakening buffer operator (VWWBO) and background value optimization (BVO) are used to dynamically pre-process the historical annual power load data, and a VWWBO-BVO-based GM (1,1) is proposed. To find the optimal values of the variable-weight buffer coefficient and the background value weight generating coefficient of the proposed model, grey relational analysis (GRA) and an improved gravitational search algorithm (IGSA) are integrated into a GRA-IGSA integration algorithm that maximizes the grey relational grade between the simulated and actual value sequences. Through the adjustable action intensity of the buffer operator, the proposed model optimized by the GRA-IGSA integration algorithm obtains better forecasting accuracy, as demonstrated by the case studies, and can provide an optimized solution for annual power load forecasting. PMID:29768450
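
    For reference, a plain GM (1,1) forecaster is only a few lines. The sketch below omits the paper's VWWBO pre-processing and BVO (it hard-codes the conventional 0.5 background-value weight, which is exactly the coefficient the paper optimizes), and the load series is hypothetical:

      import numpy as np

      def gm11_forecast(x0, horizon=3):
          """Plain GM (1,1): fit dx1/dt + a*x1 = b to the accumulated series x1,
          then forecast by restoring first differences."""
          x0 = np.asarray(x0, dtype=float)
          x1 = np.cumsum(x0)                      # accumulating generation operator
          z1 = 0.5 * (x1[1:] + x1[:-1])           # background values (weight 0.5)
          B = np.column_stack([-z1, np.ones_like(z1)])
          a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
          k = np.arange(len(x0) + horizon)
          x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time response function
          return np.diff(x1_hat, prepend=0.0)[len(x0):]

      load = [112, 119, 127, 136, 147, 158]       # hypothetical annual loads, TWh
      print(gm11_forecast(load, horizon=3))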

  15. A Class of Prediction-Correction Methods for Time-Varying Convex Optimization

    NASA Astrophysics Data System (ADS)

    Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro

    2016-09-01

    This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists either of one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods in the gradient-correction step. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotical error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
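
    A scalar toy example of the prediction-correction idea (illustrative only, not the paper's GTT/NTT algorithms verbatim): track the minimizer of f(x; t) = 0.5 (x - sin t)^2, sampling every h seconds and taking one gradient step per sample, with and without a prediction step that accounts for the known drift of the optimizer. In this toy problem the predicted variant tracks about an order of magnitude more closely, in the spirit of the O(h) versus O(h^2) bounds quoted above.

      import numpy as np

      h, alpha, T = 0.1, 0.5, 300   # sampling interval, step size, number of samples
      x_c = x_pc = 0.0              # correction-only vs prediction-correction iterates
      err_c, err_pc = [], []

      for k in range(1, T):
          t_prev, t = (k - 1) * h, k * h
          # Correction only: one gradient step on the objective sampled at time t.
          x_c -= alpha * (x_c - np.sin(t))
          # Prediction-correction: first predict the optimizer drift over [t_prev, t]
          # (d/dt of the optimal trajectory sin(t) is cos(t)), then correct.
          x_pc += h * np.cos(t_prev)
          x_pc -= alpha * (x_pc - np.sin(t))
          err_c.append(abs(x_c - np.sin(t)))
          err_pc.append(abs(x_pc - np.sin(t)))

      print(f"mean tracking error, correction only:         {np.mean(err_c):.2e}")
      print(f"mean tracking error, prediction + correction: {np.mean(err_pc):.2e}")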

  16. Electronic circuitry development in a micropyrotechnic system for micropropulsion applications

    NASA Astrophysics Data System (ADS)

    Puig-Vidal, Manuel; Lopez, Jaime; Miribel, Pere; Montane, Enric; Lopez-Villegas, Jose M.; Samitier, Josep; Rossi, Carole; Camps, Thierry; Dumonteuil, Maxime

    2003-04-01

    Electronic circuitry is proposed and implemented to optimize the ignition process and the robustness of a microthruster. The principle is based on the integration of propellant material within a micromachined system; the operational concept is simply the combustion of an energetic propellant stored in a micromachined chamber. Each thruster contains three parts (heater, chamber, nozzle). Because of their one-shot character, microthrusters are fabricated in a 2D array configuration. For the functioning of this kind of system, one critical point is the optimization of the ignition process as a function of the power schedule delivered by the electronic devices. Particular attention has been paid to the design and implementation of an electronic chip to control and optimize the ignition. Ignition is triggered by electrical power delivered to a polysilicon resistor in contact with the propellant; the resistor is also used to sense the temperature of the propellant it touches. The temperature of the microthruster node before ignition is monitored via the electronic circuitry. Pre-heating before ignition appears to be a good way to optimize the ignition process, with the pre-heating temperature and pre-heating time being the critical parameters to adjust. Simulation and experimental results will contribute substantially to improving the micropyrotechnic system. This paper discusses all of these points.

  17. Mission planning optimization of video satellite for ground multi-object staring imaging

    NASA Astrophysics Data System (ADS)

    Cui, Kaikai; Xiang, Junhua; Zhang, Yulin

    2018-03-01

    This study investigates the emergency scheduling problem of ground multi-object staring imaging for a single video satellite. In the proposed mission scenario, the ground objects require a specified duration of staring imaging by the video satellite. The planning horizon is not long, i.e., it is usually shorter than one orbit period. A binary decision variable and the imaging order are used as the design variables, and the total observation revenue combined with the influence of the total attitude maneuvering time is regarded as the optimization objective. Based on the constraints of the observation time windows, satellite attitude adjustment time, and satellite maneuverability, a constraint satisfaction mission planning model is established for ground object staring imaging by a single video satellite. Further, a modified ant colony optimization algorithm with tabu lists (Tabu-ACO) is designed to solve this problem. The proposed algorithm can fully exploit the intelligence and local search ability of ACO. Based on full consideration of the mission characteristics, the design of the tabu lists can reduce the search range of ACO and improve the algorithm efficiency significantly. The simulation results show that the proposed algorithm outperforms the conventional algorithm in terms of optimization performance, and it can obtain satisfactory scheduling results for the mission planning problem.

  18. The role of ozone pretreatment on optimization of membrane bioreactor for treatment of oil sands process-affected water.

    PubMed

    Zhang, Yanyan; Xue, Jinkai; Liu, Yang; Gamal El-Din, Mohamed

    2018-04-05

    Previously, an anoxic-aerobic membrane bioreactor (MBR) coupled with mild ozonation pretreatment has been applied to remove toxic naphthenic acids (NAs) in oil sands process-affected water (OSPW). To further improve MBR performance, the optimal operating conditions, including hydraulic retention time (HRT) and initial ammonia nitrogen (NH4+-N), need to be explored. In this study, the role of ozone pretreatment in MBR optimization was investigated. Compared with the MBR treating raw OSPW, the MBR treating ozonated OSPW had the same optimal operating conditions (HRT of 12 h and NH4+-N concentration of 25 mg/L). Nevertheless, MBR performance benefited more from HRT adjustment after ozone pretreatment: HRT adjustment resulted in NA removal in the range of 33-50% for the treatment of ozonated OSPW, whereas NA removal for raw OSPW only fluctuated between 27% and 38%. Compared with the removal of classical NAs, the degradation of oxidized NAs was more sensitive to the adjustment of the operating conditions; adjusting HRT increased the removal of oxidized NAs in ozonated OSPW substantially (from 6% to 35%). It was also noticed that the microbial communities in the MBR treating ozonated OSPW were more responsive to the adjustment of operating conditions, as indicated by the noticeable increase of the Shannon index and the extended genetic distances. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. Determination of full piezoelectric complex parameters using gradient-based optimization algorithm

    NASA Astrophysics Data System (ADS)

    Kiyono, C. Y.; Pérez, N.; Silva, E. C. N.

    2016-02-01

    At present, numerical techniques allow the precise simulation of mechanical structures, but the results are limited by the knowledge of the material properties. In the case of piezoelectric ceramics, the full model determination in the linear range involves five elastic, three piezoelectric, and two dielectric complex parameters. A successful solution to obtaining piezoceramic properties consists of comparing the experimental measurement of the impedance curve and the results of a numerical model by using the finite element method (FEM). In the present work, a new systematic optimization method is proposed to adjust the full piezoelectric complex parameters in the FEM model. Once implemented, the method only requires the experimental data (impedance modulus and phase data acquired by an impedometer), material density, geometry, and initial values for the properties. This method combines a FEM routine implemented using an 8-noded axisymmetric element with a gradient-based optimization routine based on the method of moving asymptotes (MMA). The main objective of the optimization procedure is minimizing the quadratic difference between the experimental and numerical electrical conductance and resistance curves (to consider resonance and antiresonance frequencies). To assure the convergence of the optimization procedure, this work proposes restarting the optimization loop whenever the procedure ends in an undesired or an unfeasible solution. Two experimental examples using PZ27 and APC850 samples are presented to test the precision of the method and to check the dependency of the frequency range used, respectively.
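
    As a much-simplified stand-in for the paper's procedure (a lumped Butterworth-Van Dyke resonator instead of the axisymmetric FEM, and scipy's Levenberg-Marquardt instead of MMA), the sketch below recovers circuit parameters by minimizing the quadratic misfit of the conductance and resistance curves, the same choice of objective that weights the resonance and antiresonance regions:

      import numpy as np
      from scipy.optimize import least_squares

      f = np.linspace(90e3, 110e3, 400)            # frequency sweep around resonance
      w = 2 * np.pi * f

      def admittance(params, w):
          R1, L1, C1, C0 = params                  # motional branch + shunt capacitance
          return 1.0 / (R1 + 1j * w * L1 + 1.0 / (1j * w * C1)) + 1j * w * C0

      true = np.array([50.0, 0.02, 1.2e-10, 2.0e-9])
      Y_meas = admittance(true, w)                 # synthetic "experimental" data

      def residuals(params):
          Y = admittance(params, w)
          # Misfit on conductance (Re Y) and resistance (Re 1/Y) together.
          return np.concatenate([Y.real - Y_meas.real,
                                 (1 / Y).real - (1 / Y_meas).real])

      x0 = true * [1.2, 0.9, 1.1, 0.95]            # perturbed initial guess
      fit = least_squares(residuals, x0, method="lm")
      print(fit.x / true)                          # close to [1, 1, 1, 1] on success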

  20. Is temporary employment related to health status? Analysis of the Northern Swedish Cohort.

    PubMed

    Waenerlund, Anna-Karin; Virtanen, Pekka; Hammarström, Anne

    2011-07-01

    The aim of this study was to investigate whether temporary employment was related to non-optimal self-rated health and psychological distress at age 42 after adjustment for the same indicators at age 30, and to analyze the effects of job insecurity, low cash margin and high job strain on this relationship. A subcohort of the Northern Swedish Cohort that was employed at the 2007 follow-up survey (n = 907, response rate of 94%) was analyzed using data from 1995 and 2007 questionnaires. Temporary employees had a higher risk of both non-optimal self-rated health and psychological distress. After adjustment for non-optimal self-rated health at age 30 and psychological distress at age 30 as well as for sociodemographic variables, the odds ratios decreased but remained significant. However, after adjustment for job insecurity, high job strain and low cash margin the odds ratio dropped for non-optimal self-rated health but remained significant for psychological distress. Temporary employment may have adverse effects on self-rated health and psychological health after adjustment for previous health status and sociodemographic variables. Our findings indicate that low cash margin and job insecurity may partially mediate the association between temporary employment and health status.

  1. Assessing the applicability of WRF optimal parameters under the different precipitation simulations in the Greater Beijing Area

    NASA Astrophysics Data System (ADS)

    Di, Zhenhua; Duan, Qingyun; Wang, Chen; Ye, Aizhong; Miao, Chiyuan; Gong, Wei

    2018-03-01

    Forecasting skills of complex weather and climate models have been improved by tuning the sensitive parameters that exert the greatest impact on simulated results using effective optimization methods. However, whether the optimal parameter values still work when the model simulation conditions vary remains a scientific question deserving study. In this study, a highly effective optimization method, adaptive surrogate model-based optimization (ASMO), was first used to tune nine sensitive parameters from four physical parameterization schemes of the Weather Research and Forecasting (WRF) model to obtain better summer precipitation forecasts over the Greater Beijing Area in China. Then, to assess the applicability of the optimal parameter values, simulation results from the WRF model with default and optimal parameter values were compared across precipitation events, boundary conditions, spatial scales, and physical processes in the Greater Beijing Area. Summer precipitation events from six years were used to calibrate and evaluate the optimal parameter values of the WRF model. Three boundary datasets and two spatial resolutions were adopted to evaluate the superiority of the calibrated optimal parameters over the default parameters under WRF simulations with different boundary conditions and spatial resolutions. Physical interpretations of the optimal parameters, indicating how they improve precipitation simulation results, were also examined. All the results showed that the optimal parameters obtained by ASMO are superior to the default parameters for predicting summer precipitation in the Greater Beijing Area because the optimal parameters are not constrained by specific precipitation events, boundary conditions, or spatial resolutions. The optimal values of the nine parameters were determined from only 127 parameter samples, showing that the ASMO method is highly efficient for optimizing WRF model parameters.

  2. Revisiting the cape cod bacteria injection experiment using a stochastic modeling approach

    USGS Publications Warehouse

    Maxwell, R.M.; Welty, C.; Harvey, R.W.

    2007-01-01

    Bromide and resting-cell bacteria tracer tests conducted in a sandy aquifer at the U.S. Geological Survey Cape Cod site in 1987 were reinterpreted using a three-dimensional stochastic approach. Bacteria transport was coupled to colloid filtration theory through functional dependence of local-scale colloid transport parameters upon hydraulic conductivity and seepage velocity in a stochastic advection-dispersion/attachment-detachment model. Geostatistical information on the hydraulic conductivity (K) field that was unavailable at the time of the original test was utilized as input. Using geostatistical parameters, a groundwater flow and particle-tracking model of conservative solute transport was calibrated to the bromide-tracer breakthrough data. An optimization routine was employed over 100 realizations to adjust the mean and variance of the natural logarithm of hydraulic conductivity (lnK) field to achieve the best fit of a simulated, average bromide breakthrough curve. A stochastic particle-tracking model for the bacteria was run without adjustments to the local-scale colloid transport parameters. Good predictions of mean bacteria breakthrough were achieved using several approaches for modeling components of the system. Simulations incorporating the recent Tufenkji and Elimelech (Environ. Sci. Technol. 2004, 38, 529-536) correlation equation for estimating single collector efficiency were compared to those using the older Rajagopalan and Tien (AIChE J. 1976, 22, 523-533) model. Both appeared to work equally well at predicting mean bacteria breakthrough using a constant mean bacteria diameter for this set of field conditions. Simulations using a distribution of bacterial cell diameters available from original field notes yielded a slight improvement in model-data agreement compared to simulations using an average bacterial diameter. The stochastic approach based on estimates of local-scale parameters for the bacteria-transport process reasonably captured the mean bacteria transport behavior and calculated an envelope of uncertainty that bracketed the observations in most simulation cases. © 2007 American Chemical Society.
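
    The coupling of colloid filtration theory to local hydraulic properties rests on a standard relation between seepage velocity and the first-order attachment rate. The sketch below illustrates that relation only; the porosity, grain diameter, and efficiency values are illustrative assumptions, and in the study the single-collector efficiency would come from the Tufenkji-Elimelech or Rajagopalan-Tien correlations.

```python
# Sketch of the colloid-filtration-theory link between local hydraulic
# properties and a first-order bacterial attachment rate.
import numpy as np

def attachment_rate(v, theta=0.39, d_c=5.9e-4, alpha=0.01, eta0=0.02):
    """First-order attachment rate k_att [1/s] for seepage velocity v [m/s].

    k_att = 3 (1 - theta) / (2 d_c) * alpha * eta0 * v
    theta: porosity; d_c: mean collector (grain) diameter [m];
    alpha: sticking efficiency; eta0: single-collector efficiency.
    All parameter values here are illustrative assumptions.
    """
    return 3.0 * (1.0 - theta) / (2.0 * d_c) * alpha * eta0 * v

# Higher-K (faster) zones attach more colloids per unit time in this model:
for v in (1e-5, 1e-4):
    print(f"v = {v:.0e} m/s -> k_att = {attachment_rate(v):.2e} 1/s")
```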

  3. Determining β-lactam exposure threshold to suppress resistance development in Gram-negative bacteria.

    PubMed

    Tam, Vincent H; Chang, Kai-Tai; Zhou, Jian; Ledesma, Kimberly R; Phe, Kady; Gao, Song; Van Bambeke, Françoise; Sánchez-Díaz, Ana María; Zamorano, Laura; Oliver, Antonio; Cantón, Rafael

    2017-05-01

    β-Lactams are commonly used for nosocomial infections and resistance to these agents among Gram-negative bacteria is increasing rapidly. Optimized dosing is expected to reduce the likelihood of resistance development during antimicrobial therapy, but the target for clinical dose adjustment is not well established. We examined the likelihood that various dosing exposures would suppress resistance development in an in vitro hollow-fibre infection model. Two strains of Klebsiella pneumoniae and two strains of Pseudomonas aeruginosa (baseline inocula of ∼10⁸ cfu/mL) were examined. Various dosing exposures of cefepime, ceftazidime and meropenem were simulated in the hollow-fibre infection model. Serial samples were obtained to ascertain the pharmacokinetic simulations and viable bacterial burden for up to 120 h. Drug concentrations were determined by a validated LC-MS/MS assay and the simulated exposures were expressed as Cmin/MIC ratios. Resistance development was detected by quantitative culture on drug-supplemented media plates (at 3× the corresponding baseline MIC). The Cmin/MIC breakpoint threshold to prevent bacterial regrowth was identified by classification and regression tree (CART) analysis. For all strains, the bacterial burden declined initially with the simulated exposures, but regrowth was observed in 9 out of 31 experiments. CART analysis revealed that a Cmin/MIC ratio ≥3.8 was significantly associated with regrowth prevention (100% versus 44%, P = 0.001). The development of β-lactam resistance during therapy could be suppressed by an optimized dosing exposure. Validation of the proposed target in a well-designed clinical study is warranted. © The Author 2017. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
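
    As a hedged illustration of how a CART analysis isolates such a breakpoint, the depth-1 decision tree below recovers a single split on Cmin/MIC from synthetic data; the reported 3.8 threshold came from the study's 31 hollow-fibre experiments, not from this toy example.

```python
# Toy illustration of locating a Cmin/MIC breakpoint with a depth-1 CART tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
cmin_mic = rng.uniform(0.5, 10.0, size=31).reshape(-1, 1)  # simulated exposures
regrowth = (cmin_mic.ravel() < 3.8).astype(int)            # assumed true rule

tree = DecisionTreeClassifier(max_depth=1).fit(cmin_mic, regrowth)
# A depth-1 tree is a single split; its threshold is the CART breakpoint.
print("estimated breakpoint:", tree.tree_.threshold[0])
```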

  4. Novel Dynamic Framed-Slotted ALOHA Using Litmus Slots in RFID Systems

    NASA Astrophysics Data System (ADS)

    Yim, Soon-Bin; Park, Jongho; Lee, Tae-Jin

    Dynamic Framed Slotted ALOHA (DFSA) is one of the most popular protocols to resolve tag collisions in RFID systems. In DFSA, it is widely known that the optimal performance is achieved when the frame size is equal to the number of tags. So, a reader dynamically adjusts the next frame size according to the current number of tags. Thus it is important to estimate the number of tags exactly. In this paper, we propose a novel tag estimation and identification method using litmus (test) slots for DFSA. We compare the performance of the proposed method with those of existing methods by analysis. We conduct simulations and show that our scheme improves the speed of tag identification.
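
    For context, conventional DFSA frame-size adaptation, the baseline against which a litmus-slot scheme would be compared, can be sketched as follows; Schoute's classical backlog estimate of roughly 2.39 tags per collision slot is used, and the function name is our own.

```python
# Minimal sketch of conventional DFSA frame-size adaptation: estimate the tag
# backlog from the observed slot outcomes, then set the next frame size equal
# to it (frame size ~ number of remaining tags is the optimal operating point).
def next_frame_size(singles: int, collisions: int) -> int:
    """Schoute's estimate: each collision slot hides ~2.39 tags on average,
    so the unread backlog after a frame is about 2.39 * collisions."""
    backlog = round(2.39 * collisions)
    return max(backlog, 1)

# Example: a frame with 40 singleton slots and 25 collision slots
print(next_frame_size(40, 25))   # -> about 60 tags estimated to remain
```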

  5. Neural basis of quasi-rational decision making.

    PubMed

    Lee, Daeyeol

    2006-04-01

    Standard economic theories conceive homo economicus as a rational decision maker capable of maximizing utility. In reality, however, people tend to approximate optimal decision-making strategies through a collection of heuristic routines. Some of these routines are driven by emotional processes, and others are adjusted iteratively through experience. In addition, routines specialized for social decision making, such as inference about the mental states of other decision makers, might share their origins and neural mechanisms with the ability to simulate or imagine outcomes expected from alternative actions that an individual can take. A recent surge of collaborations across economics, psychology and neuroscience has provided new insights into how such multiple elements of decision making interact in the brain.

  6. Image quality, threshold contrast and mean glandular dose in CR mammography

    NASA Astrophysics Data System (ADS)

    Jakubiak, R. R.; Gamba, H. R.; Neves, E. B.; Peixoto, J. E.

    2013-09-01

    In many countries, computed radiography (CR) systems represent the majority of equipment used in digital mammography. This study presents a method for optimizing image quality and dose in CR mammography of patients with breast thicknesses between 45 and 75 mm. Initially, clinical images of 67 patients (group 1) were analyzed by three experienced radiologists, who reported on anatomical structures, noise and contrast in low and high pixel value areas, and image sharpness and contrast. The exposure parameters (kV, mAs and target/filter combination) used in the examinations of these patients were reproduced to determine the contrast-to-noise ratio (CNR) and mean glandular dose (MGD). The parameters were also used to radiograph a CDMAM (version 3.4) phantom (Artinis Medical Systems, The Netherlands) for image threshold contrast evaluation. After that, different breast thicknesses were simulated with polymethylmethacrylate layers and various sets of exposure parameters were used in order to determine optimal radiographic parameters. For each simulated breast thickness, the optimal beam quality was defined as that giving a target CNR reaching the threshold contrast of CDMAM images at an acceptable MGD. These results were used by the maintenance team to adjust the automatic exposure control (AEC). Using the optimized exposure parameters, clinical images of 63 patients (group 2) were evaluated as described above. Threshold contrast, CNR and MGD for these exposure parameters were also determined. Results showed that the proposed optimization method was effective for all breast thicknesses studied in phantoms. The best result was found for breasts of 75 mm: while in group 1 the 0.1 mm critical diameter detail was never detected with threshold contrast below 23%, after the optimization it was detected in 47.6% of the images, with an average MGD reduction of 7.5%. The clinical image quality criteria were met in 91.7% of cases for all breast thicknesses evaluated in both patient groups. Finally, this study also concluded that operating the AEC of the x-ray unit on the basis of a constant dose to the detector may make it difficult for CR systems to operate under optimal conditions. More studies must be performed so that the compatibility between systems and optimization methodologies, including the present one, can be evaluated; most methods are developed for phantoms, so comparative studies including clinical images are needed.
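
    The target-CNR criterion above rests on a simple region-of-interest computation. Below is a minimal sketch with an EUREF-style CNR definition; the ROI coordinates and image are illustrative, not the study's phantom geometry.

```python
# Sketch of a contrast-to-noise ratio computation over rectangular ROIs.
import numpy as np

def cnr(image: np.ndarray, roi_obj, roi_bg) -> float:
    """CNR = (mean_bg - mean_obj) / std_bg for ROIs given as (r0, r1, c0, c1)."""
    obj = image[roi_obj[0]:roi_obj[1], roi_obj[2]:roi_obj[3]]
    bg = image[roi_bg[0]:roi_bg[1], roi_bg[2]:roi_bg[3]]
    return (bg.mean() - obj.mean()) / bg.std()

img = np.random.default_rng(2).normal(100.0, 5.0, (256, 256))
img[100:140, 100:140] -= 20.0                      # simulated contrast object
print(f"CNR = {cnr(img, (100, 140, 100, 140), (10, 60, 10, 60)):.1f}")
```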

  7. Optimal Spatial Design of Capacity and Quantity of Rainwater Catchment Systems for Urban Flood Mitigation

    NASA Astrophysics Data System (ADS)

    Huang, C.; Hsu, N.

    2013-12-01

    This study incorporates Low-Impact Development (LID) rainwater catchment technology into a Storm Water Management Model (SWMM) to design the spatial capacity and quantity of rain barrels for urban flood mitigation, and proposes a simulation-optimization model for effectively searching for the optimal design. In the simulation method, we design a series of regular spatial distributions of capacity and quantity of rainwater catchment facilities, and the reduced flooding under each design form is simulated by SWMM. We then calculate the net benefit, equal to the reduction in inundation loss minus the facility cost; the best solution of the simulation method serves as the initial solution for the optimization model. In the optimization method, we first use the simulation results and a Back-Propagation Neural Network (BPNN) to develop a water-level simulation model of the urban drainage system, replacing SWMM, whose graphical-user-interface-based operation is hard to couple with an optimization model and method. We then embed the BPNN-based simulation model into the optimization model, whose objective function minimizes the negative net benefit, and establish a tabu search-based algorithm to optimize the planning solution. The developed method is applied to Zhonghe Dist., Taiwan. Results showed that applying tabu search and the BPNN-based simulation model within the optimization model not only finds solutions 12.75% better than the simulation method but also resolves the limitations of previous studies. Furthermore, the optimized spatial rain barrel design can reduce inundation loss by 72% for historical flood events.
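
    A minimal sketch of the tabu-search loop over discrete rain-barrel designs follows; `net_benefit` is a cheap placeholder for the BPNN surrogate of SWMM described above, and the move structure and tabu tenure are assumptions.

```python
# Hedged sketch of tabu search over integer rain-barrel allocations per site.
import numpy as np

rng = np.random.default_rng(3)
n_sites, capacities = 20, np.arange(0, 5)       # 0-4 barrel units per site

def net_benefit(design):                        # placeholder surrogate objective
    return float(design.sum() * 10 - (design ** 2).sum() * 3)

x = rng.integers(0, 5, n_sites)                 # initial design
best_x, best_val, tabu = x.copy(), net_benefit(x), []

for _ in range(200):
    moves = [(i, c) for i in range(n_sites) for c in capacities
             if c != x[i] and (i, c) not in tabu]
    i, c = max(moves, key=lambda m: net_benefit(
        np.where(np.arange(n_sites) == m[0], m[1], x)))  # best non-tabu neighbour
    x[i] = c
    tabu.append((i, c)); tabu = tabu[-15:]      # fixed-length tabu list
    if net_benefit(x) > best_val:
        best_x, best_val = x.copy(), net_benefit(x)

print("best net benefit:", best_val)
```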

  8. Optimization Model for Web Based Multimodal Interactive Simulations.

    PubMed

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2015-07-15

    This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications where visual quality and the performance of simulations directly influence user experience, overloading of hardware resources may result in unsatisfactory reduction in the quality of the simulation and user satisfaction. However, optimization of simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize the performance of graphical rendering and simulation performance while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user specified design requirements in the optimization phase to ensure best possible computational resource allocation. The optimum solution is used for rendering (e.g. texture size, canvas resolution) and simulation parameters (e.g. simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach.

  9. Optimization Model for Web Based Multimodal Interactive Simulations

    PubMed Central

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2015-01-01

    This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications where visual quality and the performance of simulations directly influence user experience, overloading of hardware resources may result in unsatisfactory reduction in the quality of the simulation and user satisfaction. However, optimization of simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize the performance of graphical rendering and simulation performance while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user specified design requirements in the optimization phase to ensure best possible computational resource allocation. The optimum solution is used for rendering (e.g. texture size, canvas resolution) and simulation parameters (e.g. simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach. PMID:26085713

  10. Improving cell mixture deconvolution by identifying optimal DNA methylation libraries (IDOL).

    PubMed

    Koestler, Devin C; Jones, Meaghan J; Usset, Joseph; Christensen, Brock C; Butler, Rondi A; Kobor, Michael S; Wiencke, John K; Kelsey, Karl T

    2016-03-08

    Confounding due to cellular heterogeneity represents one of the foremost challenges currently facing Epigenome-Wide Association Studies (EWAS). Statistical methods leveraging the tissue-specificity of DNA methylation for deconvoluting the cellular mixture of heterogeneous biospecimens offer a promising solution; however, the performance of such methods depends entirely on the library of methylation markers used for deconvolution. Here, we introduce a novel algorithm for Identifying Optimal Libraries (IDOL) that dynamically scans a candidate set of cell-specific methylation markers to find libraries that optimize the accuracy of cell fraction estimates obtained from cell mixture deconvolution. Application of IDOL to a training set consisting of samples with both whole-blood DNA methylation data (Illumina HumanMethylation450 BeadArray (HM450)) and flow cytometry measurements of cell composition revealed an optimized library comprising 300 CpG sites. When compared with existing libraries, the library identified by IDOL demonstrated significantly better overall discrimination of the entire immune cell landscape (p = 0.038), and resulted in improved discrimination of 14 out of the 15 pairs of leukocyte subtypes. Estimates of cell composition across the samples in the training set using the IDOL library were highly correlated with their respective flow cytometry measurements, with all cell-specific R² > 0.99 and root mean square errors (RMSEs) ranging from 0.97% to 1.33% across leukocyte subtypes. Independent validation of the optimized IDOL library using two additional HM450 data sets showed similarly strong prediction performance, with all cell-specific R² > 0.90 and RMSE < 4.00%. In simulation studies, adjustments for cell composition using the IDOL library resulted in uniformly lower false positive rates compared to competing libraries, while also demonstrating an improved capacity to explain epigenome-wide variation in DNA methylation within two large publicly available HM450 data sets. Despite consisting of half as many CpGs as existing libraries for whole-blood mixture deconvolution, the optimized IDOL library identified herein resulted in outstanding prediction performance across all considered data sets and demonstrated potential to improve the operating characteristics of EWAS involving adjustments for cell distribution. In addition to providing the EWAS community with an optimized library for whole-blood mixture deconvolution, our work establishes a systematic and generalizable framework for the assembly of libraries that improve the accuracy of cell mixture deconvolution.
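
    The deconvolution step that such a library feeds can be illustrated with constrained least squares: given reference methylation profiles of leukocyte subtypes at the selected CpGs, cell fractions are estimated under non-negativity and then normalized. All values below are synthetic, not IDOL's actual reference data.

```python
# Sketch of reference-based cell-mixture deconvolution by constrained least
# squares: solve min ||M w - b|| subject to w >= 0, then normalize to sum 1.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
n_cpgs, n_celltypes = 300, 6                       # e.g. a 300-CpG library
M = rng.uniform(0.0, 1.0, (n_cpgs, n_celltypes))   # reference methylation profiles
w_true = np.array([0.55, 0.25, 0.08, 0.06, 0.04, 0.02])
b = M @ w_true + rng.normal(0.0, 0.01, n_cpgs)     # observed sample betas

w, _ = nnls(M, b)                                  # non-negativity constraint
w /= w.sum()                                       # fractions sum to one
print(np.round(w, 3))                              # approximately recovers w_true
```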

  11. Cost-Effectiveness Analysis of Bariatric Surgery for Morbid Obesity.

    PubMed

    Alsumali, Adnan; Eguale, Tewodros; Bairdain, Sigrid; Samnaliev, Mihail

    2018-01-15

    In the USA, three types of bariatric surgeries are widely performed: laparoscopic sleeve gastrectomy (LSG), laparoscopic Roux-en-Y gastric bypass (LRYGB), and laparoscopic adjustable gastric banding (LAGB). However, few economic evaluations of bariatric surgery have been published, and studies focusing on LSG alone are scarce. This study therefore evaluates the cost-effectiveness of bariatric surgery using LRYGB, LAGB, and LSG as treatments for morbid obesity. A microsimulation model was developed over a lifetime horizon to simulate weight change, health consequences, and costs of bariatric surgery for morbid obesity, from a US health care perspective. The model was built on data from the first report of the American College of Surgeons. Incremental cost-effectiveness ratios (ICERs) in terms of cost per quality-adjusted life-year (QALY) gained were used in the model, and model parameters were estimated from publicly available databases and published literature. LRYGB was cost-effective, with higher QALYs (17.07) at a cost of $138,632, compared with LSG (16.56 QALYs; $138,925), LAGB (16.10 QALYs; $135,923), and no surgery (15.17 QALYs; $128,284). Sensitivity analysis showed that the results were most sensitive to the initial cost of surgery and the weight-regain assumption. Across patient groups, LRYGB remained the optimal bariatric technique, except for patients with class 1 morbid obesity (BMI 35-39.9 kg/m²), for whom LSG was the optimal choice. LRYGB is the optimal bariatric technique, being the most cost-effective compared with LSG, LAGB, and no surgery for most subgroups; however, LSG was the most cost-effective choice when initial BMI ranged between 35 and 39.9 kg/m².
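
    The ICER arithmetic behind these results is straightforward; the worked example below uses the point estimates quoted above for LRYGB versus no surgery, whereas the full analysis would embed this in the microsimulation with sensitivity analyses.

```python
# Worked incremental cost-effectiveness ratio (ICER) arithmetic.
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """ICER: extra cost per QALY gained relative to the comparator."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# LRYGB ($138,632; 17.07 QALYs) vs. no surgery ($128,284; 15.17 QALYs)
print(f"${icer(138632, 17.07, 128284, 15.17):,.0f} per QALY gained")
# LRYGB also costs less and yields more QALYs than LSG, i.e. it dominates LSG.
```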

  12. Phase retrieval from intensity-only data by relative entropy minimization.

    PubMed

    Deming, Ross W

    2007-11-01

    A recursive algorithm, which appears to be new, is presented for estimating the amplitude and phase of a wave field from intensity-only measurements on two or more scan planes at different axial positions. The problem is framed as a nonlinear optimization, in which the angular spectrum of the complex field model is adjusted in order to minimize the relative entropy, or Kullback-Leibler divergence, between the measured and reconstructed intensities. The most common approach to this so-called phase retrieval problem is a variation of the well-known Gerchberg-Saxton algorithm devised by Misell (J. Phys. D6, L6, 1973), which is efficient and extremely simple to implement. The new algorithm has a computational structure that is very similar to Misell's approach, despite the fundamental difference in the optimization criteria used for each. Based upon results from noisy simulated data, the new algorithm appears to be more robust than Misell's approach and to produce better results from low signal-to-noise ratio data. The convergence of the new algorithm is examined.
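
    For comparison, a Misell-type two-plane iteration, the baseline the new algorithm is measured against, can be written in a few lines; the wavelength, plane spacing, grid, and test field below are illustrative assumptions.

```python
# Minimal numpy sketch of a Misell-type two-plane phase-retrieval iteration
# using angular-spectrum propagation between the measurement planes.
import numpy as np

lam, dz, n = 633e-9, 0.05, 256                 # wavelength, plane spacing, grid
dx = 10e-6                                     # pixel pitch [m]
fx = np.fft.fftfreq(n, dx)
FX, FY = np.meshgrid(fx, fx)
kz = 2 * np.pi * np.sqrt(np.maximum(0.0, 1 / lam**2 - FX**2 - FY**2))
H = np.exp(1j * kz * dz)                       # angular-spectrum propagator

def propagate(u, forward=True):
    h = H if forward else np.conj(H)
    return np.fft.ifft2(np.fft.fft2(u) * h)

def misell(amp1, amp2, n_iter=200):
    """Recover the plane-1 phase from amplitudes measured on two planes."""
    u = amp1.astype(complex)                   # start with flat phase
    for _ in range(n_iter):
        u2 = propagate(u, True)
        u2 = amp2 * np.exp(1j * np.angle(u2))  # impose plane-2 amplitude
        u = propagate(u2, False)
        u = amp1 * np.exp(1j * np.angle(u))    # impose plane-1 amplitude
    return np.angle(u)

# Synthetic test: a Gaussian beam carrying a quadratic phase
xg = np.linspace(-1.28e-3, 1.28e-3, n)
X, Y = np.meshgrid(xg, xg)
u_true = np.exp(-(X**2 + Y**2) / (0.3e-3)**2) * np.exp(1j * 3e6 * (X**2 + Y**2))
amp1, amp2 = np.abs(u_true), np.abs(propagate(u_true))
phase_est = misell(amp1, amp2)
```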

  13. Development of a 5.5 m diameter vertical axis wind turbine, phase 3

    NASA Astrophysics Data System (ADS)

    Dekitsch, A.; Etzler, C. C.; Fritzsche, A.; Lorch, G.; Mueller, W.; Rogalla, K.; Schmelzle, J.; Schuhwerk, W.; Vollan, A.; Welte, D.

    1982-06-01

    In continuation of the development of a 5.5 m diameter vertical axis windmill (conception, construction, and wind tunnel testing), a Darrieus rotor wind-powered generator feeding an isolated network under different wind velocity conditions and with optimal energy conversion efficiency was designed, built, and field tested. The three-bladed Darrieus rotor tested in the wind tunnel was equipped with two variable-pitch Savonius rotors 2 m in diameter. By measuring the aerodynamic factors and the energy consumption separately, the effect of revisions and optimizations of different elements was assessed. Pitch adjustment of the Savonius blades, lubrication of the speed reducer, rotor speed at cut-in of generator field excitation, the time constant of field excitation, stability conditions, and the switch points of ohmic resistors (which, combined with a small electric battery, simulated a larger isolated network connected to a large storage battery) were investigated. Fundamentals for the economic series production of wind-powered generators with Darrieus rotors, and for the control and electric conversion systems, are presented.

  14. Three-dimensional polarization marked multiple-QR code encryption by optimizing a single vectorial beam

    NASA Astrophysics Data System (ADS)

    Lin, Chao; Shen, Xueju; Hua, Binbin; Wang, Zhisong

    2015-10-01

    We demonstrate the feasibility of three-dimensional (3D) polarization multiplexing by optimizing a single vectorial beam using a multiple-signal-window multiple-plane (MSW-MP) phase retrieval algorithm. Original messages represented with multiple quick response (QR) codes are first partitioned into a series of subblocks. Then, each subblock is marked with a specific polarization state and randomly distributed in 3D space with adjustable longitudinal and transverse degrees of freedom. A generalized 3D polarization mapping protocol is established to generate a 3D polarization key. Finally, the multiple-QR code is encrypted into one phase-only mask and one polarization-only mask based on the modified Gerchberg-Saxton (GS) algorithm. We take the polarization mask as the ciphertext and the phase-only mask as an additional key dimension. Only when both the phase key and the 3D polarization key are correct can the original messages be recovered. We verify our proposal with both simulation and experimental evidence.

  15. Cloud Model-Based Artificial Immune Network for Complex Optimization Problem

    PubMed Central

    Wang, Mingan; Li, Jianming; Guo, Dongliang

    2017-01-01

    This paper proposes an artificial immune network based on cloud model (AINet-CM) for complex function optimization problems. Three key immune operators—cloning, mutation, and suppression—are redesigned with the help of the cloud model. To be specific, an increasing half cloud-based cloning operator is used to adjust the dynamic clone multipliers of antibodies, an asymmetrical cloud-based mutation operator is used to control the adaptive evolution of antibodies, and a normal similarity cloud-based suppressor is used to keep the diversity of the antibody population. To quicken the searching convergence, a dynamic searching step length strategy is adopted. For comparative study, a series of numerical simulations are arranged between AINet-CM and the other three artificial immune systems, that is, opt-aiNet, IA-AIS, and AAIS-2S. Furthermore, two industrial applications—finite impulse response (FIR) filter design and proportional-integral-differential (PID) controller tuning—are investigated and the results demonstrate the potential searching capability and practical value of the proposed AINet-CM algorithm. PMID:28630620

  16. Fabrication of embedded microball lens in PMMA with high repetition rate femtosecond fiber laser.

    PubMed

    Zheng, Chong; Hu, Anming; Li, Ruozhou; Bridges, Denzel; Chen, Tao

    2015-06-29

    Embedded microball lenses with superior optical properties, functioning as convex microball lenses (VMBLs) and concave microball lenses (CMBLs), were fabricated inside a PMMA substrate with a high repetition rate femtosecond fiber laser. The VMBL was created by femtosecond laser-induced refractive index change, while the CMBL was fabricated via the heat accumulation effect of successive laser pulses at a high repetition rate. The processing window for both types of lenses was studied and optimized, and the optical properties were tested by imaging a remote object with an inverted microscope. In order to obtain microball lenses with adjustable focal lengths and suppressed optical aberration, a shape control method was proposed and examined with experiments and ZEMAX® simulations. Applying the optimized fabrication conditions, two types of embedded microball lens arrays were fabricated and then tested with imaging experiments. This technology allows the direct fabrication of microlenses inside a transparent bulk polymer and has great application potential in multi-function integrated microfluidic devices.

  17. Cloud Model-Based Artificial Immune Network for Complex Optimization Problem.

    PubMed

    Wang, Mingan; Feng, Shuo; Li, Jianming; Li, Zhonghua; Xue, Yu; Guo, Dongliang

    2017-01-01

    This paper proposes an artificial immune network based on cloud model (AINet-CM) for complex function optimization problems. Three key immune operators-cloning, mutation, and suppression-are redesigned with the help of the cloud model. To be specific, an increasing half cloud-based cloning operator is used to adjust the dynamic clone multipliers of antibodies, an asymmetrical cloud-based mutation operator is used to control the adaptive evolution of antibodies, and a normal similarity cloud-based suppressor is used to keep the diversity of the antibody population. To quicken the searching convergence, a dynamic searching step length strategy is adopted. For comparative study, a series of numerical simulations are arranged between AINet-CM and the other three artificial immune systems, that is, opt-aiNet, IA-AIS, and AAIS-2S. Furthermore, two industrial applications-finite impulse response (FIR) filter design and proportional-integral-differential (PID) controller tuning-are investigated and the results demonstrate the potential searching capability and practical value of the proposed AINet-CM algorithm.

  18. Simulator for multilevel optimization research

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Young, K. C.

    1986-01-01

    A computer program designed to simulate and improve multilevel optimization techniques is described. By using simple analytic functions to represent complex engineering analyses, the simulator can generate and test a large variety of multilevel decomposition strategies in a relatively short time. This type of research is an essential step toward routine optimization of large aerospace systems. The paper discusses the types of optimization problems handled by the simulator and gives input and output listings and plots for a sample problem. It also describes multilevel implementation techniques which have value beyond the present computer program. Thus, this document serves as a user's manual for the simulator and as a guide for building future multilevel optimization applications.

  19. Anticipating on amplifying water stress: Optimal crop production supported by anticipatory water management

    NASA Astrophysics Data System (ADS)

    Bartholomeus, Ruud; van den Eertwegh, Gé; Simons, Gijs

    2015-04-01

    Agricultural crop yields depend largely on the soil moisture conditions in the root zone. Drought, but especially an excess of water in the root zone and the resulting limited availability of soil oxygen, reduces crop yield. With ongoing climate change, more prolonged dry periods alternate with more intensive rainfall events, which changes soil moisture dynamics. With unaltered water management practices, reduced crop yield due to both drought stress and waterlogging will increase. Therefore, both farmers and water management authorities need opportunities to reduce the risk of decreasing crop yields. In The Netherlands, agricultural crop production represents a market exceeding 2 billion euros annually. Given the increased variability in meteorological conditions and the resulting larger variations in soil moisture contents, it is of large economic importance to provide farmers and water management authorities with tools to mitigate the risk of reduced crop yield through anticipatory water management, both at field and at regional scale. We present the development and field application of a decision support system (DSS) that makes it possible to optimize crop yield through timely anticipation of drought and waterlogging. Using this DSS, we minimize plant water stress through automated drainage and irrigation management. In order to optimize soil moisture conditions for crop growth, the interacting processes in the soil-plant-atmosphere system need to be considered explicitly. Our study comprises both the set-up and the application of the DSS on a pilot plot in The Netherlands, in order to evaluate its implementation in daily agricultural practice. The DSS focuses on anticipatory water management at the field scale, i.e. the unit scale of interest to a farmer. We combine parallel field measurements ('observe'), process-based model simulations ('predict'), and the novel Climate Adaptive Drainage (CAD) system ('adjust') to optimize soil moisture conditions. CAD is used both for controlled drainage and for sub-irrigation. The DSS has at its core the plot-scale SWAP (soil-water-atmosphere-plant) model, extended with a process-based module for the simulation of oxygen stress for plant roots. This module involves macro-scale and micro-scale gas diffusion, as well as the plant physiological demand for oxygen, to simulate transpiration reduction due to limited oxygen availability. Continuous measurements of soil moisture content, groundwater level, and drainage level are used to calibrate the SWAP model each day. This leads to an optimal reproduction of the actual soil moisture conditions by data assimilation in the first step of the DSS process. In the next step, near-future (+10 days) soil moisture conditions and drought and oxygen stress are predicted using weather forecasts. Finally, the optimal drainage levels to minimize stress are simulated, which can then be established by CAD. Linkage to a grid-based hydrological simulation model (SPHY) facilitates studying the spatial dynamics of soil moisture and the associated implications for management at the regional scale. Thus, by using local-scale measurements, process-based models and weather forecasts to anticipate near-future conditions, not only field-scale water management but also regional surface water management can be optimized in both space and time.

  20. Conditioning geostatistical simulations of a heterogeneous paleo-fluvial bedrock aquifer using lithologs and pumping tests

    NASA Astrophysics Data System (ADS)

    Niazi, A.; Bentley, L. R.; Hayashi, M.

    2016-12-01

    Geostatistical simulations are used to construct heterogeneous aquifer models. Optimally, such simulations should be conditioned with both lithologic and hydraulic data. We introduce an approach to condition lithologic geostatistical simulations of a paleo-fluvial bedrock aquifer consisting of relatively high permeable sandstone channels embedded in relatively low permeable mudstone using hydraulic data. The hydraulic data consist of two-hour single well pumping tests extracted from the public water well database for a 250-km2 watershed in Alberta, Canada. First, lithologic models of the entire watershed are simulated and conditioned with hard lithological data using transition probability - Markov chain geostatistics (TPROGS). Then, a segment of the simulation around a pumping well is used to populate a flow model (FEFLOW) with either sand or mudstone. The values of the hydraulic conductivity and specific storage of sand and mudstone are then adjusted to minimize the difference between simulated and actual pumping test data using the parameter estimation program PEST. If the simulated pumping test data do not adequately match the measured data, the lithologic model is updated by locally deforming the lithology distribution using the probability perturbation method and the model parameters are again updated with PEST. This procedure is repeated until the simulated and measured data agree within a pre-determined tolerance. The procedure is repeated for each well that has pumping test data. The method creates a local groundwater model that honors both the lithologic model and pumping test data and provides estimates of hydraulic conductivity and specific storage. Eventually, the simulations will be integrated into a watershed-scale groundwater model.

  1. Optimal In-Hospital and Discharge Medical Therapy in Acute Coronary Syndromes in Kerala: Results from the Kerala ACS Registry

    PubMed Central

    Huffman, Mark D; Prabhakaran, Dorairaj; Abraham, AK; Krishnan, Mangalath Narayanan; Nambiar, C. Asokan; Mohanan, Padinhare Purayil

    2013-01-01

    Background: In-hospital and post-discharge treatment rates for acute coronary syndrome (ACS) remain low in India. However, little is known about the prevalence and predictors of the package of optimal ACS medical care in India. Our objective was to define the prevalence, predictors, and impact of optimal in-hospital and discharge medical therapy in the Kerala ACS Registry of 25,718 admissions. Methods and Results: We defined optimal in-hospital ACS medical therapy as receiving the following five medications: aspirin, clopidogrel, heparin, beta-blocker, and statin. We defined optimal discharge ACS medical therapy as receiving all of the above therapies except heparin. Comparisons by optimal vs. non-optimal ACS care were made via Student’s t test for continuous variables and chi-square test for categorical variables. We created random effects logistic regression models to evaluate the association between GRACE risk score variables and optimal in-hospital or discharge medical therapy. Optimal in-hospital and discharge medical care was delivered in 40% and 46% of admissions, respectively. Wide variability in both in-hospital and discharge medical care was present, with few hospitals reaching consistently high (>90%) levels. Patients receiving optimal in-hospital medical therapy had an adjusted OR (95% CI) = 0.93 (0.71, 1.22) for in-hospital death and an adjusted OR (95% CI) = 0.79 (0.63, 0.99) for MACE. Patients who received optimal in-hospital medical care were far more likely to receive optimal discharge care (adjusted OR [95% CI] = 10.48 [9.37, 11.72]). Conclusions: Strategies to improve in-hospital and discharge medical therapy are needed to improve local process-of-care measures and improve ACS outcomes in Kerala. PMID:23800985

  2. Optimization of Operations Resources via Discrete Event Simulation Modeling

    NASA Technical Reports Server (NTRS)

    Joshi, B.; Morris, D.; White, N.; Unal, R.

    1996-01-01

    The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to solve such optimization problems involving integer valued decision variables are the pattern search and statistical methods. However, in a simulation environment that is characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we have explored the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle, through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.
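
    The GA-over-DES idea can be sketched briefly: integer resource levels are evolved against a simulated cost, with `simulate_ops` standing in for the actual discrete event simulation (the objective, rates, and GA settings below are all assumptions for illustration).

```python
# Hedged sketch of a genetic algorithm minimizing a simulated operations cost.
import numpy as np

rng = np.random.default_rng(5)
n_resources, pop_size, n_gen = 8, 40, 60

def simulate_ops(levels):                      # placeholder DES objective
    cost = levels.sum() * 10.0                 # resource cost
    delay_penalty = np.sum(100.0 / (levels + 1))   # understaffing penalty
    return cost + delay_penalty

pop = rng.integers(1, 20, size=(pop_size, n_resources))
for _ in range(n_gen):
    fit = np.array([simulate_ops(ind) for ind in pop])
    parents = pop[np.argsort(fit)[: pop_size // 2]]        # truncation selection
    cut = rng.integers(1, n_resources, pop_size // 2)
    kids = np.array([np.r_[parents[i % len(parents)][:c],
                           parents[(i + 1) % len(parents)][c:]]
                     for i, c in enumerate(cut)])          # one-point crossover
    mask = rng.random(kids.shape) < 0.1                    # mutation
    kids[mask] = rng.integers(1, 20, mask.sum())
    pop = np.vstack([parents, kids])

best = pop[np.argmin([simulate_ops(ind) for ind in pop])]
print("best resource levels:", best)
```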

  3. The “Dry-Run” Analysis: A Method for Evaluating Risk Scores for Confounding Control

    PubMed Central

    Wyss, Richard; Hansen, Ben B.; Ellis, Alan R.; Gagne, Joshua J.; Desai, Rishi J.; Glynn, Robert J.; Stürmer, Til

    2017-01-01

    A propensity score (PS) model's ability to control confounding can be assessed by evaluating covariate balance across exposure groups after PS adjustment. The optimal strategy for evaluating a disease risk score (DRS) model's ability to control confounding is less clear. DRS models cannot be evaluated through balance checks within the full population, and they are usually assessed through prediction diagnostics and goodness-of-fit tests. A proposed alternative is the “dry-run” analysis, which divides the unexposed population into “pseudo-exposed” and “pseudo-unexposed” groups so that differences on observed covariates resemble differences between the actual exposed and unexposed populations. With no exposure effect separating the pseudo-exposed and pseudo-unexposed groups, a DRS model is evaluated by its ability to retrieve an unconfounded null estimate after adjustment in this pseudo-population. We used simulations and an empirical example to compare traditional DRS performance metrics with the dry-run validation. In simulations, the dry run often improved assessment of confounding control, compared with the C statistic and goodness-of-fit tests. In the empirical example, PS and DRS matching gave similar results and showed good performance in terms of covariate balance (PS matching) and controlling confounding in the dry-run analysis (DRS matching). The dry-run analysis may prove useful in evaluating confounding control through DRS models. PMID:28338910

  4. Historically hottest summers projected to be the norm for more than half of the world’s population within 20 years

    DOE PAGES

    Mueller, Brigitte; Zhang, Xuebin; Zwiers, Francis W.

    2016-04-07

    We project that within the next two decades, half of the world's population will regularly (every second summer on average) experience regional summer mean temperatures that exceed those of the historically hottest summer, even under the moderate RCP4.5 emissions pathway. This frequency threshold for hot temperatures over land, which have adverse effects on human health, society and economy, might be broached in little more than a decade under the RCP8.5 emissions pathway. These hot summer frequency projections are based on adjusted RCP4.5 and 8.5 temperature projections, where the adjustments are performed with scaling factors determined by regularized optimal fingerprinting analyses that compare historical model simulations with observations over the period 1950-2012. A temperature reconstruction technique is then used to simulate a multitude of possible past and future temperature evolutions, from which the probability of a hot summer is determined for each region, with a hot summer being defined as the historically warmest summer on record in that region. Probabilities with and without external forcing show that hot summers are now about ten times more likely (fraction of attributable risk 0.9) in many regions of the world than they would have been in the absence of past greenhouse gas increases. In conclusion, the adjusted future projections suggest that the Mediterranean, Sahara, large parts of Asia and the Western US and Canada will be among the first regions for which hot summers will become the norm (i.e. occur on average every other year), and that this will occur within the next 1-2 decades.

  5. Historically hottest summers projected to be the norm for more than half of the world’s population within 20 years

    NASA Astrophysics Data System (ADS)

    Mueller, Brigitte; Zhang, Xuebin; Zwiers, Francis W.

    2016-04-01

    We project that within the next two decades, half of the world’s population will regularly (every second summer on average) experience regional summer mean temperatures that exceed those of the historically hottest summer, even under the moderate RCP4.5 emissions pathway. This frequency threshold for hot temperatures over land, which have adverse effects on human health, society and economy, might be broached in little more than a decade under the RCP8.5 emissions pathway. These hot summer frequency projections are based on adjusted RCP4.5 and 8.5 temperature projections, where the adjustments are performed with scaling factors determined by regularized optimal fingerprinting analyses that compare historical model simulations with observations over the period 1950-2012. A temperature reconstruction technique is then used to simulate a multitude of possible past and future temperature evolutions, from which the probability of a hot summer is determined for each region, with a hot summer being defined as the historically warmest summer on record in that region. Probabilities with and without external forcing show that hot summers are now about ten times more likely (fraction of attributable risk 0.9) in many regions of the world than they would have been in the absence of past greenhouse gas increases. The adjusted future projections suggest that the Mediterranean, Sahara, large parts of Asia and the Western US and Canada will be among the first regions for which hot summers will become the norm (i.e. occur on average every other year), and that this will occur within the next 1-2 decades.

  6. The alarming problems of confounding equivalence using logistic regression models in the perspective of causal diagrams.

    PubMed

    Yu, Yuanyuan; Li, Hongkai; Sun, Xiaoru; Su, Ping; Wang, Tingting; Liu, Yi; Yuan, Zhongshang; Liu, Yanxun; Xue, Fuzhong

    2017-12-28

    Confounders can produce spurious associations between exposure and outcome in observational studies. For the majority of epidemiologists, adjusting for confounders with a logistic regression model is the habitual method, though it has problems in accuracy and precision. It is therefore important to highlight these problems and to search for an alternative method. Four causal diagram models were defined to summarize confounding equivalence. Both theoretical proofs and simulation studies were performed to verify whether conditioning on different confounding-equivalent sets had the same bias-reducing potential, and then to select the optimal adjustment strategy, comparing the logistic regression model with the inverse probability weighting based marginal structural model (IPW-based MSM). The "do-calculus" was used to calculate the true causal effect of exposure on outcome, and the bias and standard error were used to evaluate the performance of the different strategies. Adjusting for different confounding-equivalent sets, as judged by identical Markov boundaries, produced different bias-reducing potential in the logistic regression model. For the sets satisfying G-admissibility, adjusting for the set including all the confounders reduced the bias to the same level as the set containing the parent nodes of the outcome, while the bias after adjusting for the parent nodes of exposure was not equivalent to them. In addition, all causal effect estimates from logistic regression were biased, although the estimate after adjusting for the parent nodes of exposure was nearest to the true causal effect. However, conditioning on different confounding-equivalent sets had the same bias-reducing potential under the IPW-based MSM. Compared with logistic regression, the IPW-based MSM obtained unbiased causal effect estimates when the adjusted confounders satisfied G-admissibility, and the optimal strategy was to adjust for the parent nodes of the outcome, which obtained the highest precision. All adjustment strategies using logistic regression were biased for causal effect estimation, while the IPW-based MSM always obtained unbiased estimates when the adjusted set satisfied G-admissibility; thus, the IPW-based MSM is recommended for adjusting for confounder sets.
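
    A minimal sketch of the recommended IPW-based MSM estimate, on synthetic data with a single binary exposure, follows: fit a propensity model, weight each subject by the inverse probability of the exposure actually received, and compare weighted outcome means.

```python
# Hedged sketch of inverse-probability weighting for a marginal structural
# model; the data-generating process below is a synthetic assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 5000
C = rng.normal(size=(n, 2))                          # confounders
p_a = 1.0 / (1.0 + np.exp(-(C @ np.array([0.8, -0.5]))))
A = rng.binomial(1, p_a)                             # exposure depends on C
p_y = 1.0 / (1.0 + np.exp(-(0.7 * A + C @ np.array([1.0, 0.6]))))
Y = rng.binomial(1, p_y)                             # outcome depends on A and C

ps = LogisticRegression().fit(C, A).predict_proba(C)[:, 1]
w = np.where(A == 1, 1.0 / ps, 1.0 / (1.0 - ps))     # inverse-probability weights

# Weighted outcome means emulate a pseudo-population unconfounded by C
risk1 = np.average(Y[A == 1], weights=w[A == 1])
risk0 = np.average(Y[A == 0], weights=w[A == 0])
print("IPW risk difference:", round(risk1 - risk0, 3))
```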

  7. The design of transfer trajectory for Ivar asteroid exploration mission

    NASA Astrophysics Data System (ADS)

    Qiao, Dong; Cui, Hutao; Cui, Pingyuan

    2009-12-01

    An impending demand for exploring small bodies such as comets and asteroids motivated the Chinese deep space exploration mission to the near-Earth asteroid Ivar. A design and optimization method for the transfer trajectory to asteroid Ivar is discussed in this paper. The transfer trajectory for rendezvous with Ivar is designed by means of Earth gravity assist with deep space maneuver (Delta-VEGA) technology. A Delta-VEGA transfer trajectory is realized by several trajectory segments, which connect the deep space maneuver and the swingby point; each trajectory segment is found by solving Lambert's problem. By adjusting the deep space maneuver and the arrival time, the swingby matching condition is satisfied. To further reduce the total mission velocity increments, a procedure is developed that minimizes the total velocity increments for this transfer trajectory scheme. The trajectory optimization problem is solved with a quasi-Newton algorithm using analytic first derivatives, which are derived from the transversality conditions associated with the optimization formulation and primer vector theory. The simulation results show that this transfer trajectory scheme reduces C3 and the total velocity increments by 48.80% and 13.20%, respectively.

  8. Design optimization of PVDF-based piezoelectric energy harvesters.

    PubMed

    Song, Jundong; Zhao, Guanxing; Li, Bo; Wang, Jin

    2017-09-01

    Energy harvesting is a promising technology that powers electronic devices by scavenging ambient energy. Piezoelectric energy harvesters have attracted considerable interest for their high conversion efficiency and easy fabrication in miniaturized sensors and transducers. To improve the output capability of energy harvesters, the properties of the piezoelectric material are an influential factor, but the potential of the material is unlikely to be fully exploited without an optimized configuration. In this paper, an optimization strategy for PVDF-based cantilever-type energy harvesters is proposed to achieve the highest output power density for a given frequency and acceleration of the vibration source. It is shown that the maximum output power density depends only on the maximum allowable stress of the beam and the working frequency of the device, and that these two factors can be tuned by adjusting the geometry of the piezoelectric layers. The strategy is validated by coupled finite-element-circuit simulation and a practical device. The fabricated device, within a volume of 13.1 mm³, shows an output power of 112.8 μW, which is comparable to that of the best-performing piezoceramic-based energy harvesters of similar volume reported so far.

  9. Organic antireflective coatings for 193-nm lithography

    NASA Astrophysics Data System (ADS)

    Trefonas, Peter, III; Blacksmith, Robert F.; Szmanda, Charles R.; Kavanagh, Robert J.; Adams, Timothy G.; Taylor, Gary N.; Coley, Suzanne; Pohlers, Gerd

    1999-06-01

    Organic anti-reflective coatings (ARCs) continue to play an important role in semiconductor manufacturing. These materials provide a convenient means of greatly reducing the resist photospeed swing and reflective notching. In this paper, we describe a novel class of ARC materials optimized for lithographic applications using 193 nm exposure tools. These ARCs are based upon polymers containing hydroxyl-alkyl methacrylate monomers for crosslinkable sites, styrene as a chromophore at 193 nm, and additional alkyl methacrylate monomers as property modifiers. A glycoluril crosslinker and a thermally activated acidic catalyst provide a route to forming an impervious crosslinked film, activated at high bake temperatures. ARC compositions can be adjusted to optimize the film's real and imaginary refractive indices. The selection of optimal target indices for 193 nm lithographic processing through simulations is described. Potential chromophores for 193 nm were explored using ZNDO modeling. We show how these theoretical studies were combined with material selection criteria to yield a versatile organic anti-reflectant film, Shipley 193 G0 ARC. Lithographic process data indicate the material is capable of supporting high-resolution patterning, with line features displaying a sharp resist/ARC interface and low line edge roughness. The resist Eo swing is successfully reduced from 43 percent to 6 percent.
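
    The simulation-driven choice of target indices comes down to thin-film interference arithmetic. The sketch below computes normal-incidence reflectance at a resist/ARC/silicon stack as a function of ARC thickness; the complex indices are illustrative assumptions, not Shipley's measured values.

```python
# Sketch of single-layer ARC reflectance at 193 nm via Fresnel coefficients.
import numpy as np

lam = 193e-9
n_resist, n_arc, n_sub = 1.70 + 0.02j, 1.75 + 0.35j, 0.88 + 2.78j  # assumed

def reflectance(d):
    """|r|^2 for resist -> ARC (thickness d) -> substrate, normal incidence."""
    r01 = (n_resist - n_arc) / (n_resist + n_arc)
    r12 = (n_arc - n_sub) / (n_arc + n_sub)
    beta = 2 * np.pi * n_arc * d / lam          # complex phase thickness
    r = (r01 + r12 * np.exp(-2j * beta)) / (1 + r01 * r12 * np.exp(-2j * beta))
    return np.abs(r) ** 2

for d in np.arange(20e-9, 101e-9, 20e-9):
    print(f"ARC thickness {d*1e9:3.0f} nm -> reflectance {reflectance(d):.4f}")
```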

  10. Constrained optimization via simulation models for new product innovation

    NASA Astrophysics Data System (ADS)

    Pujowidianto, Nugroho A.

    2017-11-01

    We consider the problem of constrained optimization in which decision makers aim to optimize the primary performance measure while constraining the secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete event simulation. Most review papers tend to be methodology-based; this review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models because there are usually constraints on secondary performance measures as trade-offs in new product development. The paper starts by laying out the different possible methods and the reasons for using constrained optimization via simulation models. This is followed by a review of different simulation optimization approaches to constrained optimization, depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.

  11. Closed loop models for analyzing the effects of simulator characteristics. [digital simulation of human operators

    NASA Technical Reports Server (NTRS)

    Baron, S.; Muralidharan, R.; Kleinman, D. L.

    1978-01-01

    The optimal control model of the human operator is used to develop closed loop models for analyzing the effects of (digital) simulator characteristics on predicted performance and/or workload. Two approaches are considered: the first utilizes a continuous approximation to the discrete simulation in conjunction with the standard optimal control model; the second involves a more exact discrete description of the simulator in a closed loop multirate simulation in which the optimal control model simulates the pilot. Both models predict that simulator characteristics can have significant effects on performance and workload.

  12. Modeling the economic and health consequences of managing chronic osteoarthritis pain with opioids in Germany: comparison of extended-release oxycodone and OROS hydromorphone.

    PubMed

    Ward, Alexandra; Bozkaya, Duygu; Fleischmann, Jochen; Dubois, Dominique; Sabatowski, Rainer; Caro, J Jaime

    2007-10-01

    The Osmotic controlled-Release Oral delivery System (OROS) hydromorphone ensures continuous release of hydromorphone over 24 hours. It is anticipated that this will facilitate optimal pain relief and improve quality of sleep and compliance. This simulation compared managing chronic osteoarthritis pain with once-daily OROS hydromorphone versus an equianalgesic dose of extended-release (ER) oxycodone administered two or three times a day. The discrete event simulation follows patients for a year after initiating opioid treatment. Pairs of identical patients are created: one receives OROS hydromorphone, the other ER oxycodone; each undergoes dose adjustments and, after titration, can be dissatisfied or satisfied, suffer adverse events or pain recurrence, or discontinue the opioid. Each is assigned an initial sleep-problems score and, at the end of titration, an improved score drawn from a treatment-dependent distribution; these are translated to a utility value. Utilities are assigned pre-treatment and updated until the patient reaches the optimal dose or is non-compliant or dissatisfied. The OROS hydromorphone and ER oxycodone doses are converted to equianalgesic morphine doses using the following ratios: hydromorphone:morphine, 1:5; oxycodone:morphine, 1:2. Sensitivity analyses explored uncertainty in the conversion ratios and other key parameters. Direct medical costs are in 2005 euros. Over 1 year on a mean daily morphine-equivalent dose of 90 mg, 14% were estimated to be dissatisfied with each opioid. OROS hydromorphone was predicted to yield 0.017 additional quality-adjusted life-years (QALYs)/patient for a small additional annual cost (€141/patient), yielding an incremental cost-effectiveness ratio (ICER) of €8343/QALY gained. Changing the assumed oxycodone:morphine conversion ratio to 1:1.5 led to lower net costs of €68 per patient and €3979/QALY, and changing the hydromorphone ratio to 1:7.5 led to savings. Based on these analyses, OROS hydromorphone is expected to yield health benefits at reasonable cost in Germany.

  13. Response Adjusted for Days of Antibiotic Risk (RADAR): evaluation of a novel method to compare strategies to optimize antibiotic use.

    PubMed

    Schweitzer, V A; van Smeden, M; Postma, D F; Oosterheert, J J; Bonten, M J M; van Werkhoven, C H

    2017-12-01

    The Response Adjusted for Days of Antibiotic Risk (RADAR) statistic was proposed to improve the efficiency of trials comparing antibiotic stewardship strategies to optimize antibiotic use. We studied the behaviour of RADAR in a non-inferiority trial in which a β-lactam monotherapy strategy (n = 656) was non-inferior to fluoroquinolone monotherapy (n = 888) for patients with moderately severe community-acquired pneumonia. Patients were ranked according to clinical outcome, using five or eight categories, and antibiotic use. RADAR was calculated as the probability that the β-lactam group had a more favourable ranking than the fluoroquinolone group. To investigate the sensitivity of RADAR to detrimental clinical outcome we simulated increasing rates of 90-day mortality in the β-lactam group and performed the RADAR and non-inferiority analysis. The RADAR of the β-lactam group compared with the fluoroquinolone group was 60.3% (95% CI 57.9%-62.7%) using five and 58.4% (95% CI 56.0%-60.9%) using eight clinical outcome categories, all in favour of β-lactam. Sample sizes for RADAR were 38% (250/653) and 89% (580/653) of the non-inferiority sample size calculation, using five or eight clinical outcome categories, respectively. With simulated mortality rates, loss of non-inferiority of the β-lactam group occurred at a relative risk of 1.125 in the conventional analysis, whereas using RADAR the β-lactam group lost superiority at a relative risk of mortality of 1.25 and 1.5, with eight and five clinical outcome categories, respectively. RADAR favoured β-lactam over fluoroquinolone therapy for community-acquired pneumonia. Although RADAR required fewer patients than conventional non-inferiority analysis, the statistic was less sensitive to detrimental outcomes. Copyright © 2017 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
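
    At its core, RADAR is a win probability over composite ranks, which can be illustrated with a Mann-Whitney-type computation on synthetic rank scores; the ranking construction below is an assumption for illustration, not the trial's actual composite of clinical outcome and antibiotic use.

```python
# Hedged sketch of a RADAR-style win probability between two trial arms.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(7)
# Composite rank scores (lower = more favourable), synthetic by assumption:
scores_beta = rng.normal(0.0, 1.0, 656)        # beta-lactam arm
scores_fq = rng.normal(0.2, 1.0, 888)          # fluoroquinolone arm

# U counts pairs where the fluoroquinolone score exceeds the beta-lactam score,
# so U / (n1 * n2) is the probability that beta-lactam ranks more favourably.
u, _ = mannwhitneyu(scores_fq, scores_beta)
p_beta_better = u / (len(scores_fq) * len(scores_beta))
print(f"win probability for beta-lactam: {p_beta_better:.1%}")
```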

  14. Joint-layer encoder optimization for HEVC scalable extensions

    NASA Astrophysics Data System (ADS)

    Tsai, Chia-Ming; He, Yuwen; Dong, Jie; Ye, Yan; Xiu, Xiaoyu; He, Yong

    2014-09-01

    Scalable video coding provides an efficient solution to support video playback on heterogeneous devices with various channel conditions in heterogeneous networks. SHVC is the latest scalable video coding standard based on the HEVC standard. To improve enhancement layer coding efficiency, inter-layer prediction including texture and motion information generated from the base layer is used for enhancement layer coding. However, the overall performance of the SHVC reference encoder is not fully optimized because rate-distortion optimization (RDO) processes in the base and enhancement layers are independently considered. It is difficult to directly extend the existing joint-layer optimization methods to SHVC due to the complicated coding tree block splitting decisions and in-loop filtering process (e.g., deblocking and sample adaptive offset (SAO) filtering) in HEVC. To solve those problems, a joint-layer optimization method is proposed by adjusting the quantization parameter (QP) to optimally allocate the bit resource between layers. Furthermore, to make more proper resource allocation, the proposed method also considers the viewing probability of base and enhancement layers according to packet loss rate. Based on the viewing probability, a novel joint-layer RD cost function is proposed for joint-layer RDO encoding. The QP values of those coding tree units (CTUs) belonging to lower layers referenced by higher layers are decreased accordingly, and the QP values of those remaining CTUs are increased to keep total bits unchanged. Finally the QP values with minimal joint-layer RD cost are selected to match the viewing probability. The proposed method was applied to the third temporal level (TL-3) pictures in the Random Access configuration. Simulation results demonstrate that the proposed joint-layer optimization method can improve coding performance by 1.3% for these TL-3 pictures compared to the SHVC reference encoder without joint-layer optimization.

  15. An optimal control approach to the design of moving flight simulators

    NASA Technical Reports Server (NTRS)

    Sivan, R.; Ish-Shalom, J.; Huang, J.-K.

    1982-01-01

    An abstract flight simulator design problem is formulated in the form of an optimal control problem, which is solved for the linear-quadratic-Gaussian special case using a mathematical model of the vestibular organs. The optimization criterion used is the mean-square difference between the physiological outputs of the vestibular organs of the pilot in the aircraft and the pilot in the simulator. The dynamical equations are linearized, and the output signal is modeled as a random process with rational power spectral density. The method described yields the optimal structure of the simulator's motion generator, or 'washout filter'. A two-degree-of-freedom flight simulator design, including single output simulations, is presented.
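
    In symbols (our own labels, not the paper's notation), the stated optimization criterion is the mean-square mismatch between the vestibular outputs of the pilot in the aircraft and of the pilot in the simulator:

        J = \mathbb{E}\left[ \lVert y_{\mathrm{vest}}^{\mathrm{aircraft}}(t) - y_{\mathrm{vest}}^{\mathrm{sim}}(t) \rVert^{2} \right]

    Minimizing J over the linear motion-generator dynamics, with the aircraft output modeled as a random process with rational power spectral density, is what casts the washout-filter design as a linear-quadratic-Gaussian problem.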

  16. Fusion of Hard and Soft Information in Nonparametric Density Estimation

    DTIC Science & Technology

    2015-06-10

    Density estimation is needed for generation of input densities to simulation and stochastic optimization models, in analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum... an essential step in simulation analysis and stochastic optimization is the generation of probability densities for input random variables; see for

  17. Optical fringe-reflection deflectometry with bundle adjustment

    NASA Astrophysics Data System (ADS)

    Xiao, Yong-Liang; Li, Sikun; Zhang, Qican; Zhong, Jianxin; Su, Xianyu; You, Zhisheng

    2018-06-01

    Liquid crystal display (LCD) screens are located outside the camera's field of view in fringe-reflection deflectometry. Therefore, the fringes displayed on the LCD screen are captured by a fixed camera through specular reflection. Thus, the pose calibration between the camera and the LCD screen is one of the main challenges in fringe-reflection deflectometry. A markerless planar mirror is used to reflect the LCD screen more than three times, and the fringes are imaged by the fixed camera. The geometrical calibration can be accomplished by estimating the pose between the camera and the virtual image of the fringes. Considering the relation between these poses, the incidence and reflection rays can be unified in the camera frame, and a forward triangulation intersection can be performed in the camera frame to measure the three-dimensional (3D) coordinates of the specular surface. In the final optimization, a constraint bundle adjustment is performed to simultaneously refine the camera intrinsic parameters, including distortion coefficients, the estimated geometrical pose between the LCD screen and the camera, and the 3D coordinates of the specular surface, with the help of the absolute-phase collinearity constraint. Simulation and experimental results demonstrate that the pose calibration with planar mirror reflection is simple and feasible, and that the constraint bundle adjustment enhances the 3D coordinate measurement accuracy in fringe-reflection deflectometry.
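
    As a rough illustration of the final refinement step, the sketch below sets up a reprojection-error least-squares problem over one camera pose and the 3D points, in the spirit of bundle adjustment. It assumes a simple pinhole model with known intrinsics K and omits the distortion coefficients and the absolute-phase collinearity constraint that the paper's constraint bundle adjustment includes; function and variable names are illustrative.

        import numpy as np
        from scipy.optimize import least_squares
        from scipy.spatial.transform import Rotation

        def residuals(params, K, pts2d, n_pts):
            """Reprojection residuals for one pose (rotation vector +
            translation) and n_pts free 3D points."""
            rvec, tvec = params[:3], params[3:6]
            pts3d = params[6:].reshape(n_pts, 3)
            R = Rotation.from_rotvec(rvec).as_matrix()
            proj = (K @ (R @ pts3d.T + tvec[:, None])).T
            proj = proj[:, :2] / proj[:, 2:3]     # perspective divide
            return (proj - pts2d).ravel()

        # least_squares(residuals, x0, args=(K, observed_2d, n_pts)) would
        # then jointly refine the pose and the 3D points.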

  18. Force Field for Peptides and Proteins based on the Classical Drude Oscillator

    PubMed Central

    Lopes, Pedro E.M.; Huang, Jing; Shim, Jihyun; Luo, Yun; Li, Hui; Roux, Benoît; MacKerell, Alexander D.

    2013-01-01

    Presented is a polarizable force field based on a classical Drude oscillator framework, currently implemented in the programs CHARMM and NAMD, for modeling and molecular dynamics (MD) simulation studies of peptides and proteins. Building upon parameters for model compounds representative of the functional groups in proteins, the development of the force field focused on the optimization of the parameters for the polypeptide backbone and the connectivity between the backbone and side chains. Optimization of the backbone electrostatic parameters targeted quantum mechanical conformational energies, interactions with water, molecular dipole moments and polarizabilities, and experimental condensed-phase data for short polypeptides such as (Ala)5. Additional optimization of the backbone φ, ψ conformational preferences included adjustments of the tabulated two-dimensional spline function through the CMAP term. Validation of the model included simulations of a collection of peptides and proteins. This first-generation polarizable model is shown to maintain the folded state of the studied systems on the 100 ns timescale in explicit-solvent MD simulations. The Drude model typically yields larger RMS differences than the additive CHARMM36 force field (C36) and shows additional flexibility compared to the additive model. Comparison with NMR chemical shift data shows a small degradation of the polarizable model with respect to the additive one, though the level of agreement may be considered satisfactory; for residues shown to have significantly underestimated S2 order parameters in the additive model, improvements are calculated with the polarizable model. Analysis of dipole moments associated with the peptide backbone and tryptophan side chains shows the Drude model to have significantly larger values than those present in C36, with the dipole moments of the peptide backbone enhanced to a greater extent in sheets than in helices, and the dipoles of individual moieties observed to undergo significant variations during the MD simulations. Although there are still some limitations, the presented model, termed Drude-2013, is anticipated to yield a molecular picture of peptide and protein structure and function of increased physical validity and internal consistency in a computationally accessible fashion. PMID:24459460

  19. Physical activity after myocardial infarction: is it related to mental health?

    PubMed

    Rius-Ottenheim, Nathaly; Geleijnse, Johanna M; Kromhout, Daan; van der Mast, Roos C; Zitman, Frans G; Giltay, Erik J

    2013-06-01

    Physical inactivity and poor mental wellbeing are associated with poorer prognoses in patients with cardiovascular disease. We aimed to analyse the cross-sectional and prospective associations between physical activity and mental wellbeing in patients with a history of myocardial infarction. Longitudinal, observational study. We investigated 600 older subjects with a history of myocardial infarction (age range 60-80 years) who participated in the Alpha Omega Trial (AOT). They were assessed twice, at baseline and at 40 months follow-up, for physical activity (Physical Activity Scale for the Elderly; PASE), depressive symptoms (Geriatric Depression Scale; GDS-15) and dispositional optimism (Life Orientation Test; LOT-R). Linear (multilevel) and logistic regression analyses were used to examine cross-sectional and longitudinal associations. Physical activity was cross-sectionally associated with depressive symptoms (adjusted beta = -0.143; p = 0.001), but not with dispositional optimism (adjusted beta = 0.074; p = 0.07). We found a synchrony of change between physical activity and depressive symptoms (adjusted beta = -0.155; p < 0.001), but not with dispositional optimism (adjusted beta = 0.049; p = 0.24). Baseline physical activity did not predict depressive symptoms at 40 months follow-up. Concordant inverse associations were observed for (changes in) physical activity and depressive symptoms. Physical activity did not predict depressive symptoms or low optimism.

  20. Study on Operation Optimization of Pumping Station's 24 Hours Operation under Influences of Tides and Peak-Valley Electricity Prices

    NASA Astrophysics Data System (ADS)

    Yi, Gong; Jilin, Cheng; Lihua, Zhang; Rentian, Zhang

    2010-06-01

    According to different processes of tides and peak-valley electricity prices, this paper determines the optimal start-up time for a pumping station's 24-hour operation in the rated state and in the blade-angle-adjusting state, based on the optimization objective function and optimization model for a single pump unit's 24-hour operation, taking JiangDu No. 4 Pumping Station as an example. The paper also proposes the following regularities between the optimal start-up time of the pumping station and the daily processes of tides and peak-valley electricity prices within a month: (1) In both the rated and blade-angle-adjusting states, the optimal start-up time, which depends on the tide generation of the day, varies with the tidal process; it mainly takes one of two values, the time of tide generation or 12 hours after it. (2) In the rated state, the optimal start-up time on each day of a month exhibits a symmetry from the 29th of one lunar month to the 28th of the next. The time of tide generation usually falls within a period of peak or valley electricity price, and a higher electricity price corresponds to a higher minimum unit cost of water pumping; that is, the minimum unit cost depends on the peak-valley electricity price at the time of tide generation on the same day. (3) In the blade-angle-adjusting state, the minimum unit cost of water pumping in 24-hour operation depends on the process of peak-valley electricity prices, and 4.85%-5.37% of the minimum unit cost is saved compared with the rated state.

  1. Heat transfer measurements for Stirling machine cylinders

    NASA Technical Reports Server (NTRS)

    Kornhauser, Alan A.; Kafka, B. C.; Finkbeiner, D. L.; Cantelmi, F. C.

    1994-01-01

    The primary purpose of this study was to measure the effects of inflow-produced turbulence on heat transfer in Stirling machine cylinders. A secondary purpose was to provide new experimental information on heat transfer in gas springs without inflow. The apparatus for the experiment consisted of a varying-volume piston-cylinder space connected to a fixed-volume space by an orifice. The orifice size could be varied to adjust the level of inflow-produced turbulence, or the orifice plate could be removed completely so as to merge the two spaces into a single gas spring space. Speed, cycle mean pressure, overall volume ratio, and varying-volume space clearance ratio could also be adjusted. Volume, pressure in both spaces, and local heat flux at two locations were measured. The pressure and volume measurements were used to calculate area-averaged heat flux, heat transfer hysteresis loss, and other heat transfer-related effects. Experiments in the one-space arrangement extended the range of previous gas spring tests to lower volume ratio and higher nondimensional speed. The tests corroborated previous results and showed that analytic models for heat transfer and loss based on volume ratios approaching 1 were valid for volume ratios ranging from 1 to 2, a range covering most gas springs in Stirling machines. Data from experiments in the two-space arrangement were first analyzed by lumping the two spaces together and examining total loss and averaged heat transfer as a function of overall nondimensional parameters. Heat transfer and loss were found to be significantly increased by inflow-produced turbulence. These increases could be modeled by appropriate adjustment of empirical coefficients in an existing semi-analytic model. An attempt was made to use an inverse, parameter optimization procedure to find the heat transfer in each of the two spaces. This procedure was successful in retrieving this information from simulated pressure-volume data with artificially generated noise, but it failed with the actual experimental data. This is evidence that the models used in the parameter optimization procedure (and to generate the simulated data) were not correct. Data from the surface heat flux sensors indicated that the primary shortcoming of these models was that they assumed turbulence levels to be constant over the cycle. Sensor data in the varying-volume space showed a large increase in heat flux, probably due to turbulence, during the expansion stroke.

  2. Program to Optimize Simulated Trajectories (POST). Volume 1: Formulation manual

    NASA Technical Reports Server (NTRS)

    Brauer, G. L.; Cornick, D. E.; Habeger, A. R.; Petersen, F. M.; Stevenson, R.

    1975-01-01

    A general purpose FORTRAN program for simulating and optimizing point mass trajectories (POST) of aerospace vehicles is described. The equations and the numerical techniques used in the program are documented. Topics discussed include: coordinate systems, planet model, trajectory simulation, auxiliary calculations, and targeting and optimization.

  3. Simulation and optimization of an experimental membrane wastewater treatment plant using computational intelligence methods.

    PubMed

    Ludwig, T; Kern, P; Bongards, M; Wolf, C

    2011-01-01

    The optimization of relaxation and filtration times of submerged microfiltration flat modules in membrane bioreactors used for municipal wastewater treatment is essential for efficient plant operation. However, the optimization and control of such plants and their filtration processes is a challenging problem due to the underlying highly nonlinear and complex processes. This paper presents the use of genetic algorithms for this optimization problem in conjunction with a fully calibrated simulation model, as computational intelligence methods are perfectly suited to the nonconvex multi-objective nature of the optimization problems posed by these complex systems. The simulation model is developed and calibrated using membrane modules from the wastewater simulation software GPS-X based on the Activated Sludge Model No.1 (ASM1). Simulation results have been validated at a technical reference plant. They clearly show that filtration process costs for cleaning and energy can be reduced significantly by intelligent process optimization.
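
    As a hedged illustration of the optimization layer, the toy real-coded genetic algorithm below searches a two-dimensional space of filtration and relaxation times against any user-supplied cost function; in the study that cost would come from the calibrated GPS-X simulation model, which is not reproduced here. The operator choices (mean crossover, uniform mutation) and all numbers are illustrative assumptions.

        import random

        def genetic_optimize(cost, bounds, pop=20, gens=40, mut=0.2):
            """Toy real-coded GA; `cost` maps a candidate
            (filtration_s, relaxation_s) to a scalar to be minimized."""
            def rand_ind():
                return [random.uniform(lo, hi) for lo, hi in bounds]
            population = [rand_ind() for _ in range(pop)]
            for _ in range(gens):
                population.sort(key=cost)
                parents = population[: pop // 2]          # elitist selection
                children = []
                while len(children) < pop - len(parents):
                    a, b = random.sample(parents, 2)
                    child = [(x + y) / 2 for x, y in zip(a, b)]   # crossover
                    for i, (lo, hi) in enumerate(bounds):
                        if random.random() < mut:                 # mutation
                            child[i] = random.uniform(lo, hi)
                    children.append(child)
                population = parents + children
            return min(population, key=cost)

        # e.g. minimize a made-up proxy cost over filtration/relaxation times:
        best = genetic_optimize(lambda x: (x[0] - 480) ** 2 + (x[1] - 60) ** 2,
                                bounds=[(60, 900), (10, 300)])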

  4. Voltage oriented control of self-excited induction generator for wind energy system with MPPT

    NASA Astrophysics Data System (ADS)

    Amieur, Toufik; Taibi, Djamel; Amieur, Oualid

    2018-05-01

    This paper presents the study and simulation of a self-excited induction generator for wind power production in isolated sites. To this end, a model of the wind turbine was established. An extremum-seeking control method using Maximum Power Point Tracking (MPPT) is proposed; the control solution aims at driving the average position of the operating point close to the optimum. The reference turbine rotor speed is adjusted so that the turbine operates around maximum power for the current wind speed. After a brief review of the concepts of converting wind energy into electrical energy, the proposed modeling tools are used to study the performance of standalone induction generators connected to a capacitor bank. The purpose of this technique is to maintain a constant voltage at the rectifier output regardless of load and speed. The system studied in this work is developed and tested in the MATLAB/Simulink environment. Simulation results validate the performance and effectiveness of the proposed control methods.
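
    The speed-reference adjustment described here is, in spirit, a hill-climbing search on the turbine power curve. As an illustration only, the sketch below implements perturb-and-observe, a common MPPT variant related to (but simpler than) the extremum-seeking control used in the paper; the names and the initial step size are assumptions.

        def po_mppt_step(p_now, p_prev, w_ref, dw_prev):
            """One perturb-and-observe update of the rotor-speed reference:
            repeat the last speed perturbation if it increased the measured
            power, otherwise reverse it."""
            dw = dw_prev if p_now >= p_prev else -dw_prev
            return w_ref + dw, dw

        # usage: start with a small nonzero perturbation, e.g. dw_prev = 0.05
        # rad/s, and call once per control period with the measured power.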

  5. Finite Element Analysis and Vibration Control of Lorry’s Shift Mechanism

    NASA Astrophysics Data System (ADS)

    Qiangwei, Li

    2017-11-01

    The transmission is one of the important parts of an automobile's drivetrain. The main function of the shift mechanism is to adjust the position of the shift fork and toggle the synchronizer's tooth ring so that the gears are separated and engaged to achieve the shift. Therefore, to ensure the reliability and stability of the shift process, the vibration characteristics of the shift mechanism cannot be ignored. In this paper, a static analysis of the shift fork is carried out, and the stress distribution of the fork is obtained according to the operating characteristics of the shift mechanism of a lorry transmission. A modal analysis of the shift mechanism yields the low-order vibration frequencies and the corresponding mode shapes, and a vibration control analysis is carried out based on the simulation results. The simulation results provide a theoretical basis for the rational optimization design of the shift mechanism of the lorry transmission.

  6. Predictive process simulation of cryogenic implants for leading edge transistor design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gossmann, Hans-Joachim; Zographos, Nikolas; Park, Hugh

    2012-11-06

    Two cryogenic implant TCAD-modules have been developed: (i) A continuum-based compact model targeted towards a TCAD production environment, calibrated against an extensive data-set for all common dopants. Ion-specific calibration parameters related to damage generation and dynamic annealing were used and resulted in excellent fits to the calibration data-set. (ii) A Kinetic Monte Carlo (kMC) model including the full time dependence of the ion exposure that a particular spot on the wafer experiences, as well as the resulting temperature-versus-time profile of this spot. It was calibrated by adjusting damage generation and dynamic annealing parameters. The kMC simulations clearly demonstrate the importance of the time-structure of the beam for the amorphization process: assuming an average dose-rate does not capture all of the physics and may lead to incorrect conclusions. The model enables optimization of the amorphization process through tool parameters such as scan speed or beam height.

  7. Comparison of Flight Simulators Based on Human Motion Perception Metrics

    NASA Technical Reports Server (NTRS)

    Valente Pais, Ana R.; Correia Gracio, Bruno J.; Kelly, Lon C.; Houck, Jacob A.

    2015-01-01

    In flight simulation, motion filters are used to transform aircraft motion into simulator motion. When looking for the best match between visual and inertial amplitude in a simulator, researchers have found that there is a range of inertial amplitudes, rather than a single inertial value, that is perceived by subjects as optimal. This zone, hereafter referred to as the optimal zone, seems to correlate to the perceptual coherence zones measured in flight simulators. However, no studies were found in which these two zones were compared. This study investigates the relation between the optimal and the coherence zone measurements within and between different simulators. Results show that for the sway axis, the optimal zone lies within the lower part of the coherence zone. In addition, it was found that, whereas the width of the coherence zone depends on the visual amplitude and frequency, the width of the optimal zone remains constant.

  8. Application of simulation models for the optimization of business processes

    NASA Astrophysics Data System (ADS)

    Jašek, Roman; Sedláček, Michal; Chramcov, Bronislav; Dvořák, Jiří

    2016-06-01

    The paper deals with applications of modeling and simulation tools to the optimization of business processes, especially to optimizing the signal flow in a security company. Simul8 was selected as the modeling tool; it supports process modeling based on discrete-event simulation and enables the creation of visual models of production and distribution processes.

  9. Optimal design and uncertainty quantification in blood flow simulations for congenital heart disease

    NASA Astrophysics Data System (ADS)

    Marsden, Alison

    2009-11-01

    Recent work has demonstrated substantial progress in capabilities for patient-specific cardiovascular flow simulations. Recent advances include increasingly complex geometries, physiological flow conditions, and fluid-structure interaction. However, inputs to these simulations, including medical image data, catheter-derived pressures and material properties, can have significant uncertainties associated with them. For simulations to predict clinically useful and reliable output information, it is necessary to quantify the effects of input uncertainties on outputs of interest. In addition, blood flow simulation tools can now be efficiently coupled to shape optimization algorithms for surgery design applications, and these tools should incorporate uncertainty information. We present a unified framework to systematically and efficiently account for uncertainties in simulations using adaptive stochastic collocation. In addition, we present a framework for derivative-free optimization of cardiovascular geometries, and combine these tools to perform optimization under uncertainty. These methods are demonstrated using simulations and surgery optimization to improve hemodynamics in pediatric cardiology applications.
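
    To make the collocation idea concrete, the following minimal sketch propagates a single Gaussian-uncertain input through a model using Gauss-Hermite quadrature and returns the output mean and variance. The paper's framework is adaptive and multi-dimensional; this one-dimensional, non-adaptive version and the toy resistance model in it are illustrative assumptions.

        import numpy as np

        def collocate(model, mu, sigma, n_nodes=7):
            """Estimate mean and variance of model(X) for X ~ N(mu, sigma^2)
            by evaluating the model at Gauss-Hermite collocation nodes."""
            x, w = np.polynomial.hermite_e.hermegauss(n_nodes)  # nodes for N(0,1)
            w = w / w.sum()                                     # normalize weights
            y = np.array([model(mu + sigma * xi) for xi in x])
            mean = np.dot(w, y)
            var = np.dot(w, (y - mean) ** 2)
            return mean, var

        # e.g. uncertainty in a vessel radius r propagated to a Poiseuille-like
        # resistance ~ r**-4:
        print(collocate(lambda r: r ** -4.0, mu=1.0, sigma=0.05))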

  10. Optimizing Chromatographic Separation: An Experiment Using an HPLC Simulator

    ERIC Educational Resources Information Center

    Shalliker, R. A.; Kayillo, S.; Dennis, G. R.

    2008-01-01

    Optimization of a chromatographic separation within the time constraints of a laboratory session is practically impossible. However, by employing an HPLC simulator, experiments can be designed that allow students to develop an appreciation of the complexities involved in optimization procedures. In the present exercise, an HPLC simulator from "JCE…

  11. Cost effective simulation-based multiobjective optimization in the performance of an internal combustion engine

    NASA Astrophysics Data System (ADS)

    Aittokoski, Timo; Miettinen, Kaisa

    2008-07-01

    Solving real-life engineering problems can be difficult because they often have multiple conflicting objectives, the objective functions involved are highly nonlinear and they contain multiple local minima. Furthermore, function values are often produced via a time-consuming simulation process. These facts suggest the need for an automated optimization tool that is efficient (in terms of number of objective function evaluations) and capable of solving global and multiobjective optimization problems. In this article, the requirements on a general simulation-based optimization system are discussed and such a system is applied to optimize the performance of a two-stroke combustion engine. In the example of a simulation-based optimization problem, the dimensions and shape of the exhaust pipe of a two-stroke engine are altered, and values of three conflicting objective functions are optimized. These values are derived from power output characteristics of the engine. The optimization approach involves interactive multiobjective optimization and provides a convenient tool to balance between conflicting objectives and to find good solutions.

  12. Intelligent Systems Approach for Automated Identification of Individual Control Behavior of a Human Operator

    NASA Technical Reports Server (NTRS)

    Zaychik, Kirill B.; Cardullo, Frank M.

    2012-01-01

    Results have been obtained using conventional techniques to model the generic human operator's control behavior; however, little research has been done on identifying an individual based on control behavior. The hypothesis investigated is that different operators exhibit different control behavior when performing a given control task. Two enhancements to existing human operator models, which allow personalization of the modeled control behavior, are presented. One enhancement accounts for the testing control signals that an operator introduces for more accurate control of the system and/or to adjust the control strategy; it uses an Artificial Neural Network that can be fine-tuned to model the testing control. The other enhancement takes the form of an equiripple filter that conditions the control system power spectrum. A novel automated parameter identification technique was developed to facilitate the identification of the parameters of the selected models; it utilizes a Genetic Algorithm based optimization engine called the Bit-Climbing Algorithm. The enhancements were validated using experimental data obtained from three different sources: Manual Control Laboratory software experiments, an Unmanned Aerial Vehicle simulation, and NASA Langley Research Center Visual Motion Simulator studies. This manuscript also addresses applying human operator models to evaluate the effectiveness of motion feedback when simulating actual pilot control behavior in a flight simulator.
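
    The optimization engine is named but not specified in the abstract; the sketch below shows a generic bit-climbing loop of the kind the name suggests: flip one bit of a candidate parameter encoding at a time and keep the flip only if fitness does not decrease. The decoding of bits into model parameters is application-specific and omitted; all names are illustrative assumptions.

        import random

        def bit_climb(fitness, n_bits, iters=1000):
            """Generic bit-climbing search over a binary parameter encoding;
            `fitness` maps a bit list to a score to be maximized."""
            x = [random.randint(0, 1) for _ in range(n_bits)]
            best = fitness(x)
            for _ in range(iters):
                i = random.randrange(n_bits)
                x[i] ^= 1                    # flip one bit
                f = fitness(x)
                if f >= best:
                    best = f                 # keep the improvement
                else:
                    x[i] ^= 1                # revert the flip
            return x, best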

  13. Full-scale simulation of seawater reverse osmosis desalination processes for boron removal: Effect of membrane fouling.

    PubMed

    Park, Pyung-Kyu; Lee, Sangho; Cho, Jae-Seok; Kim, Jae-Hong

    2012-08-01

    The objective of this study is to further develop a previously reported mechanistic predictive model that simulates boron removal in full-scale seawater reverse osmosis (RO) desalination processes so that it takes into account the effect of membrane fouling. The decrease in boron removal and the reduction in water production rate caused by membrane fouling, due to enhanced concentration polarization, were simulated as a decrease in the solute mass transfer coefficient in the boundary layer on the membrane surface. Various design and operating options under fouling conditions were examined, including single- versus double-pass configurations, different numbers of RO elements per vessel, use of RO membranes with enhanced boron rejection, and pH adjustment. These options were quantitatively compared by normalizing the performance of the system in terms of E(min), the minimum energy cost per unit of product water. Simulation results suggested that the most viable options to enhance boron rejection among those tested in this study include: i) minimizing fouling, ii) exchanging the existing SWRO elements for boron-specific ones, and iii) increasing pH in the second pass. The model developed in this study is expected to help design and optimize RO processes to achieve the target boron removal at the target water recovery under realistic conditions where membrane fouling occurs during operation. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Computational Fluid Dynamics (CFD) Simulation of Drag Reduction by Riblets on Automobile

    NASA Astrophysics Data System (ADS)

    Ghazali, N. N. N.; Yau, Y. H.; Badarudin, A.; Lim, Y. C.

    2010-05-01

    One of the ongoing automotive technological developments is the reduction of aerodynamic drag, because this has a direct impact on fuel consumption, which is a major topic due to its influence on many other requirements. Passive drag-reduction techniques are the most portable and feasible to implement in real applications. One such passive technique is longitudinal microgrooves aligned in the flow direction, known as riblets. In this study, turbulent flow over an automobile in a virtual wind tunnel was simulated by computational fluid dynamics (CFD). Three aspects are examined: the drag-reduction effect of riblets on a smooth-surface automobile, and the influence of riblet position and of riblet geometry on drag reduction. The simulation involves three stages: geometry modeling, meshing, and solving with analysis. The results show that attaching riblets to the rear roof surface reduces the drag coefficient by 2.74%. By adjusting the attachment position of the riblet film, reduction rates in the range 0.5%-9.51% are obtained, with the top middle roof position giving the strongest effect. Four riblet geometries are investigated, among which the semi-hexagonal trapezoidal riblet is the most effective. Drag reduction rates ranging from -3.34% to 6.36% are found.

  15. Simulations of Fuel Assembly and Fast-Electron Transport in Integrated Fast-Ignition Experiments on OMEGA

    NASA Astrophysics Data System (ADS)

    Solodov, A. A.; Theobald, W.; Anderson, K. S.; Shvydky, A.; Epstein, R.; Betti, R.; Myatt, J. F.; Stoeckl, C.; Jarrott, L. C.; McGuffey, C.; Qiao, B.; Beg, F. N.; Wei, M. S.; Stephens, R. B.

    2013-10-01

    Integrated fast-ignition experiments on OMEGA benefit from improved performance of the OMEGA EP laser, including higher contrast, higher energy, and a smaller focus. Recent 8-keV, Cu-Kα flash radiography of cone-in-shell implosions and cone-tip breakout measurements showed good agreement with the 2-D radiation-hydrodynamic simulations using the code DRACO. DRACO simulations show that the fuel assembly can be further improved by optimizing the compression laser pulse, evacuating air from the shell, and by adjusting the material of the cone tip. This is found to delay the cone-tip breakout by ~220 ps and increase the core areal density from ~80 mg/cm2 in the current experiments to ~500 mg/cm2 at the time of the OMEGA EP beam arrival before the cone-tip breakout. Simulations using the code LSP of fast-electron transport in the recent integrated OMEGA experiments with Cu-doped shells will be presented. Cu-doping is added to probe the transport of fast electrons via their induced Cu K-shell fluorescent emission. This material is based upon work supported by the Department of Energy National Nuclear Security Administration DE-NA0001944 and the Office of Science under DE-FC02-04ER54789.

  16. Numerical Simulation of Callus Healing for Optimization of Fracture Fixation Stiffness

    PubMed Central

    Steiner, Malte; Claes, Lutz; Ignatius, Anita; Simon, Ulrich; Wehner, Tim

    2014-01-01

    The stiffness of fracture fixation devices together with musculoskeletal loading defines the mechanical environment within a long bone fracture, and can be quantified by the interfragmentary movement. In vivo results suggested that this can have acceleratory or inhibitory influences, depending on direction and magnitude of motion, indicating that some complications in fracture treatment could be avoided by optimizing the fixation stiffness. However, general statements are difficult to make due to the limited number of experimental findings. The aim of this study was therefore to numerically investigate healing outcomes under various combinations of shear and axial fixation stiffness, and to detect the optimal configuration. A calibrated and established numerical model was used to predict fracture healing for numerous combinations of axial and shear fixation stiffness under physiological, superimposed, axial compressive and translational shear loading in sheep. Characteristic maps of healing outcome versus fixation stiffness (axial and shear) were created. The results suggest that delayed healing of 3 mm transversal fracture gaps will occur for highly flexible or very rigid axial fixation, which was corroborated by in vivo findings. The optimal fixation stiffness for ovine long bone fractures was predicted to be 1000–2500 N/mm in the axial and >300 N/mm in the shear direction. In summary, an optimized, moderate axial stiffness together with certain shear stiffness enhances fracture healing processes. The negative influence of one improper stiffness can be compensated by adjustment of the stiffness in the other direction. PMID:24991809

  17. Corner smoothing of 2D milling toolpath using b-spline curve by optimizing the contour error and the feedrate

    NASA Astrophysics Data System (ADS)

    Özcan, Abdullah; Rivière-Lorphèvre, Edouard; Ducobu, François

    2018-05-01

    In part manufacturing, an efficient process should minimize the cycle time needed to reach the prescribed quality of the part. To optimize it, the machining time needs to be as low as possible while the quality meets the requirements. For a 2D milling toolpath defined by sharp corners, the programmed feedrate differs from the reachable feedrate due to the kinematic limits of the motor drives, and this phenomenon leads to a loss of productivity. Smoothing the toolpath reduces the machining time significantly, but dimensional accuracy must not be neglected. Therefore, a way to address toolpath optimization in part manufacturing is to take into account both the manufacturing time and the part quality: on one hand, maximizing the feedrate minimizes the manufacturing time; on the other hand, the maximum contour error must be kept under a threshold to meet the quality requirements. This paper presents a method to optimize sharp-corner smoothing using b-spline curves by adjusting the control points defining the curve. The objective function used in the optimization process is based on the contour error and on the difference between the programmed feedrate and an estimate of the reachable feedrate, where the estimate of the reachable feedrate is based on geometrical information. Simulation results are presented in the paper and the machining times are compared in each case.
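
    A hedged sketch of the kind of objective the abstract describes is given below: one term penalizes the contour error of the candidate b-spline and the other the gap between the programmed feedrate and the estimated reachable feedrate. The weighting scheme and the placeholder functions contour_error and reachable_feedrate are assumptions, not the paper's exact formulation.

        # Two-term corner-smoothing cost over the b-spline control points `cp`.
        # contour_error() and reachable_feedrate() stand in for the geometric
        # evaluations of the candidate curve.
        def corner_cost(cp, f_prog, w_err, w_feed, contour_error, reachable_feedrate):
            return (w_err * contour_error(cp) ** 2
                    + w_feed * (f_prog - reachable_feedrate(cp)) ** 2)

        # scipy.optimize.minimize(corner_cost, cp0, args=(...)) could then adjust
        # the control points, subject to contour_error(cp) <= tolerance.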

  18. Numerical simulation of callus healing for optimization of fracture fixation stiffness.

    PubMed

    Steiner, Malte; Claes, Lutz; Ignatius, Anita; Simon, Ulrich; Wehner, Tim

    2014-01-01

    The stiffness of fracture fixation devices together with musculoskeletal loading defines the mechanical environment within a long bone fracture, and can be quantified by the interfragmentary movement. In vivo results suggested that this can have acceleratory or inhibitory influences, depending on direction and magnitude of motion, indicating that some complications in fracture treatment could be avoided by optimizing the fixation stiffness. However, general statements are difficult to make due to the limited number of experimental findings. The aim of this study was therefore to numerically investigate healing outcomes under various combinations of shear and axial fixation stiffness, and to detect the optimal configuration. A calibrated and established numerical model was used to predict fracture healing for numerous combinations of axial and shear fixation stiffness under physiological, superimposed, axial compressive and translational shear loading in sheep. Characteristic maps of healing outcome versus fixation stiffness (axial and shear) were created. The results suggest that delayed healing of 3 mm transversal fracture gaps will occur for highly flexible or very rigid axial fixation, which was corroborated by in vivo findings. The optimal fixation stiffness for ovine long bone fractures was predicted to be 1000-2500 N/mm in the axial and >300 N/mm in the shear direction. In summary, an optimized, moderate axial stiffness together with certain shear stiffness enhances fracture healing processes. The negative influence of one improper stiffness can be compensated by adjustment of the stiffness in the other direction.

  19. Optimization of K-edge imaging for vulnerable plaques using gold nanoparticles and energy resolved photon counting detectors: a simulation study.

    PubMed

    Alivov, Yahya; Baturin, Pavlo; Le, Huy Q; Ducote, Justin; Molloi, Sabee

    2014-01-06

    We investigated the effect of different imaging parameters, such as dose, beam energy, energy resolution and the number of energy bins, on the image quality of K-edge spectral computed tomography (CT) of gold nanoparticles (GNP) accumulated in an atherosclerotic plaque. A maximum likelihood technique was employed to estimate the concentration of GNP, which served as a targeted intravenous contrast material intended to detect the degree of the plaque's inflammation. The simulation studies used a single-slice parallel beam CT geometry with an x-ray beam energy ranging between 50 and 140 kVp. The synthetic phantoms included small (3 cm in diameter) cylinder and chest (33 × 24 cm(2)) phantoms, where both phantoms contained tissue, calcium and gold. In the simulation studies, GNP quantification and background (calcium and tissue) suppression tasks were pursued. The x-ray detection sensor was represented by an energy resolved photon counting detector (e.g., CdZnTe) with adjustable energy bins. Both ideal and more realistic (12% full width at half maximum (FWHM) energy resolution) implementations of the photon counting detector were simulated. The simulations were performed for the CdZnTe detector with a pixel pitch of 0.5-1 mm, which corresponds to a performance without significant charge sharing and cross-talk effects. The Rose model was employed to estimate the minimum detectable concentration of GNPs. A figure of merit (FOM) was used to optimize the x-ray beam energy (kVp) to achieve the highest signal-to-noise ratio with respect to the patient dose. As a result, the successful identification of gold and background suppression was demonstrated. The highest FOM was observed at the 125 kVp x-ray beam energy. The minimum detectable GNP concentration was determined to be approximately 1.06 µmol/mL (0.21 mg/mL) for an ideal detector and about 2.5 µmol/mL (0.49 mg/mL) for a more realistic (12% FWHM) detector. The studies show the optimal imaging parameters at the lowest patient dose using an energy resolved photon counting detector to image GNP in an atherosclerotic plaque.

  20. Monte Carlo simulations of mixtures involving ketones and aldehydes by a direct bubble pressure calculation.

    PubMed

    Ferrando, Nicolas; Lachet, Véronique; Boutin, Anne

    2010-07-08

    Ketone and aldehyde molecules are involved in a large variety of industrial applications. Because they mostly occur mixed with other compounds, the prediction of phase equilibrium of mixtures involving these classes of molecules is of primary interest, particularly for designing and optimizing separation processes. The main goal of this work is to propose a transferable force field for ketones and aldehydes that allows accurate molecular simulations of not only pure compounds but also complex mixtures. The proposed force field is based on the anisotropic united-atoms AUA4 potential developed for hydrocarbons, and it introduces only one new atom, the carbonyl oxygen. The Lennard-Jones parameters of this oxygen atom were adjusted against saturated-phase thermodynamic properties of both acetone and acetaldehyde. To simulate mixtures, Monte Carlo simulations are carried out in a specific pseudo-ensemble which allows a direct calculation of the bubble pressure. For the polar mixtures involved in this study, we show that this approach is an interesting alternative to classical calculations in the isothermal-isobaric Gibbs ensemble. The pressure-composition diagrams of polar + polar and polar + nonpolar binary mixtures are well reproduced. Mutual solubilities as well as azeotrope locations, if present, are accurately predicted without any empirical binary interaction parameters or readjustment. This result highlights the transferability of the proposed force field, which is an essential feature toward the simulation of complex oxygenated mixtures of industrial interest.
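
    For reference, the 12-6 Lennard-Jones pair energy whose epsilon and sigma parameters were adjusted for the carbonyl oxygen has the standard form sketched below; the parameter values one would pass in are placeholders, not the published AUA4 values.

        def lj(r, epsilon, sigma):
            """12-6 Lennard-Jones pair energy at separation r: the well depth
            epsilon and size sigma are the kind of parameters adjusted in the
            abstract (values supplied by the caller are placeholders here)."""
            sr6 = (sigma / r) ** 6
            return 4.0 * epsilon * (sr6 * sr6 - sr6)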

  1. Three dimensional design, simulation and optimization of a novel, universal diabetic foot offloading orthosis

    NASA Astrophysics Data System (ADS)

    Sukumar, Chand; Ramachandran, K. I.

    2016-09-01

    Leg amputation is a major consequence of aggravated foot ulceration in diabetic patients. A common-sense treatment approach for diabetic foot ulceration is foot offloading, in which the patient wears a foot-offloading orthosis during the entire course of treatment. A removable walker is an excellent foot-offloading modality compared with the gold-standard solutions of total contact casting and felt padding. Commercially available foot offloaders are generally customized, at high cost and with poor patient compliance. This work proposes an optimized 3D model of a new type of lightweight, removable foot-offloading orthosis for diabetic patients. The device has simple adjustable features which make it suitable for a wide range of patients, with weights of 35 to 74 kg and heights of 137 to 180 cm. The foot plate of this orthosis is unisex, with a size adjustability of (US size) 6 to 10. Materials such as aluminum alloy 6061-T6, acrylonitrile butadiene styrene (ABS) and polyurethane were key to reducing the weight of the device to 0.804 kg. Static analysis indicated that the maximum stress developed in the device under a load of 1000 N is only 37.8 MPa, with a small deflection of 0.150 cm and a factor of safety of 3.28, within the safety limits, while the dynamic analysis results assure the load-bearing capacity of the device. Thus, the proposed device can be safely used as an orthosis for offloading the diabetic ulcerated foot.

  2. The performance evaluation model of mining project founded on the weight optimization entropy value method

    NASA Astrophysics Data System (ADS)

    Mao, Chao; Chen, Shou

    2017-01-01

    Because the traditional entropy value method still has low accuracy when evaluating the performance of mining projects, a performance evaluation model for mining projects founded on an improved entropy value method is proposed. First, a new weight-assignment model is established, based on compatibility-matrix analysis from the analytic hierarchy process (AHP) and on the entropy value method: once the compatibility-matrix analysis achieves the consistency requirement, if the subjective and objective weights differ, both proportions are adjusted moderately, and on this basis a fuzzy evaluation matrix is built for the performance evaluation. Simulation experiments show that, compared with the traditional entropy value and compatibility-matrix analysis methods, the proposed performance evaluation model based on the improved entropy value method has higher assessment accuracy.
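
    For reference, the classical entropy-value weighting that the improved method builds on can be sketched as follows; the AHP compatibility-matrix step and the blending of subjective and objective weights described in the abstract are not shown, and the decision-matrix values are toy numbers.

        import numpy as np

        def entropy_weights(X):
            """Classical entropy-value weighting for an m x n decision matrix
            X (m alternatives, n benefit-type criteria): normalize each
            column, compute its information entropy, and weight each
            criterion by its degree of diversification 1 - entropy."""
            P = X / X.sum(axis=0)
            k = 1.0 / np.log(X.shape[0])
            logP = np.zeros_like(P)
            np.log(P, out=logP, where=P > 0)   # convention: 0 * log 0 = 0
            e = -k * (P * logP).sum(axis=0)    # entropy per criterion
            d = 1.0 - e                        # degree of diversification
            return d / d.sum()

        X = np.array([[0.8, 120.0, 3.0],
                      [0.6,  90.0, 5.0],
                      [0.9, 150.0, 4.0]])
        print(entropy_weights(X))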

  3. Cooperation and competition between two symmetry breakings in a coupled ratchet

    NASA Astrophysics Data System (ADS)

    Li, Chen-Pu; Chen, Hong-Bin; Fan, Hong; Xie, Ge-Ying; Zheng, Zhi-Gang

    2018-03-01

    We investigate the collective mechanism of coupled Brownian motors in a flashing ratchet in the presence of coupling symmetry breaking and space symmetry breaking. The dependence of the directed current on various parameters is extensively studied by means of numerical simulations and theoretical analysis. Reversed motion can be achieved by modulating multiple parameters, including the spatial asymmetry coefficient, the coupling asymmetry coefficient, the coupling free length and the coupling strength. The dynamical mechanism of these transport properties can be reasonably explained by the effective potential theory and by the cooperation or competition between the two symmetry breakings. Moreover, adjusting the Gaussian white noise intensity, which can induce weak reversed motion under certain conditions, can optimize and manipulate the directed transport of the ratchet system.

  4. Power combination of a self-coherent high power microwave source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Xiaolu, E-mail: yanxl-dut@163.com; Zhang, Xiaoping; Li, Yangmei

    2015-09-15

    In our previous work, the generation of two phase-locked high-power microwaves (HPMs) in a single self-coherent HPM device was demonstrated. In this paper, after optimizing the structure of the previous self-coherent source, we design a power combiner with a folded phase-adjustment waveguide to realize power combination between its two sub-sources. Further particle-in-cell simulation of the combined source shows that, when the diode voltage is 687 kV and the axial magnetic field is 0.8 T, a combined output microwave of 3.59 GW at 9.72 GHz is generated. The impedance of the combined device is 36 Ω and the total power conversion efficiency is 28%.

  5. Tuning the stability and the skyrmion Hall effect in magnetic skyrmions by adjusting their exchange strengths with magnetic disks

    NASA Astrophysics Data System (ADS)

    Sun, L.; Wu, H. Z.; Miao, B. F.; Wu, D.; Ding, H. F.

    2018-06-01

    The magnetic skyrmion is a promising candidate for future information technology due to its small size, topological protection and the ultralow current density needed to displace it. Applications, however, are currently limited by its narrow phase diagram and by the skyrmion Hall effect, which prevents skyrmion motion at high speed. In this work, we use micromagnetic simulation to study Dzyaloshinskii-Moriya-interaction-induced magnetic skyrmions exchange-coupled with magnetic nano-disks. We find that the stability and the skyrmion Hall effect of the created skyrmion can be tuned effectively through the coupling strength, thus opening space to optimize the performance of skyrmion-based devices.

  6. Water hammer prediction and control: the Green's function method

    NASA Astrophysics Data System (ADS)

    Xuan, Li-Jun; Mao, Feng; Wu, Jie-Zhi

    2012-04-01

    Using the Green's function method, we show that water hammer (WH) can be analytically predicted for both laminar and turbulent flows (for the latter, with an eddy viscosity depending solely on the space coordinates), and thus its hazardous effect can be rationally controlled and minimized. To this end, we generalize the laminar water hammer equation of Wang et al. (J. Hydrodynamics, B2, 51, 1995) to include an arbitrary initial condition and variable viscosity, and obtain its solution by the Green's function method. The characteristic WH behaviors predicted by the solutions are in excellent agreement both with direct numerical simulation of the original governing equations and, after adjusting the eddy viscosity coefficient, with experimentally measured turbulent flow data. An optimal WH control principle is thereby constructed and demonstrated.

  7. Complex Systems Simulation and Optimization | Computational Science | NREL

    Science.gov Websites

    Stochastic Optimization and Control: formulation and implementation of advanced optimization and control methods that take uncertainty into account. Contact: Wesley Jones, Group Manager, Complex Systems Simulation and Optimization.

  8. The simulation study on optical target laser active detection performance

    NASA Astrophysics Data System (ADS)

    Li, Ying-chun; Hou, Zhao-fei; Fan, Youchen

    2014-12-01

    According to the working principle of a laser active detection system, this paper establishes an optical-target laser active detection simulation system and carries out simulation studies of the system's detection process and detection performance. The system includes performance models for laser emission, laser propagation in the atmosphere, reflection from the optical target, the receiver detection system, and signal processing and recognition. We focus on analyzing and modeling the relationship between the laser emission angle, the defocus amount, and the "cat-eye" effect echo laser in the reflection from the optical target. Furthermore, performance indices such as operating range, SNR and detection probability are simulated. Parameters including the laser emission parameters, the reflection of the optical target and the laser propagation in the atmosphere have a great influence on the performance of the optical-target laser active detection system. Finally, using object-oriented software design methods, a laser active detection simulation system with an open architecture, complete functionality and an operating platform is realized; it simulates the process by which the detection system detects and recognizes the optical target, completes the performance simulation of each subsystem, and generates data reports and graphs. The visible simulation process makes the performance models of the laser active detection system more intuitive, and the simulation data obtained from the system provide a reference for adjusting the structural parameters of the system. The work thus provides theoretical and technical support for the top-level design of optical-target laser active detection systems and the optimization of their performance indices.

  9. Effects of Systematic Group Counseling on Work Adjustment Clients

    ERIC Educational Resources Information Center

    Roessler, Richard; And Others

    1977-01-01

    When compared with a group of clients who had received work adjustment services and a placebo treatment (personal hygiene training), experimental clients given Personal Achievement Skills (PAS) and work adjustment services reported greater gains on self-ratings of life perspective (optimism), work-related attitudes, and goal attainment. (Author)

  10. Development of a Numerical Method for Patient-Specific Cerebral Circulation Using 1D-0D Simulation of the Entire Cardiovascular System with SPECT Data.

    PubMed

    Zhang, Hao; Fujiwara, Naoya; Kobayashi, Masaharu; Yamada, Shigeki; Liang, Fuyou; Takagi, Shu; Oshima, Marie

    2016-08-01

    The detailed flow information in the circle of Willis (CoW) can facilitate a better understanding of disease progression and provide useful references for disease treatment. We have been developing a one-dimensional-zero-dimensional (1D-0D) simulation method for the entire cardiovascular system to obtain hemodynamic information in the CoW. This paper presents a new method for applying 1D-0D simulation to an individual patient using patient-specific data. The key issue is how to adjust the deviation of physiological parameters, such as peripheral resistance, from literature data when patient-specific geometry is used. To overcome this problem, we utilized flow information from single photon emission computed tomography (SPECT) data. A numerical method was developed to optimize the physiological parameters by adjusting the peripheral cerebral resistances so as to minimize the difference between the computed flow rates and the SPECT data in the efferent arteries of the CoW. The method was applied to three cases using different sets of patient-specific data in order to investigate the hemodynamics of the CoW. The resulting flow rates in the afferent arteries were compared with phase-contrast magnetic resonance angiography (PC-MRA) data. Utilization of the SPECT data, combined with the PC-MRA data, showed good agreement in flow rates in the afferent arteries of the CoW for all three cases. The results also demonstrated that applying the SPECT data alone could provide information on the ratios of flow distribution among the arteries in the CoW.
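
    Conceptually, the calibration step can be sketched as a least-squares fit of the peripheral resistances to the SPECT-derived efferent flows, as below. The placeholder predict_flows stands in for the full 1D-0D cardiovascular simulation, and the toy inverse-proportional flow model is an assumption for illustration only.

        import numpy as np
        from scipy.optimize import least_squares

        def calibrate_resistances(R0, q_spect, predict_flows):
            """Adjust peripheral resistances R so the model's efferent flows
            match SPECT-derived flows in a least-squares sense."""
            res = least_squares(lambda R: predict_flows(R) - q_spect,
                                R0, bounds=(1e-6, np.inf))
            return res.x

        # toy stand-in: flow inversely proportional to resistance
        q_target = np.array([6.0, 4.0, 2.5])
        print(calibrate_resistances(np.ones(3), q_target, lambda R: 6.0 / R))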

  11. Application of Physiologically-Based Pharmacokinetic Modeling for the Prediction of Tofacitinib Exposure in Japanese.

    PubMed

    Suzuki, Misaki; Tse, Susanna; Hirai, Midori; Kurebayashi, Yoichi

    2017-05-09

    Tofacitinib (3-[(3R,4R)-4-methyl-3-[methyl(7H-pyrrolo[2,3-d]pyrimidin-4-yl)amino]piperidin-1-yl]-3 -oxopropanenitrile) is an oral Janus kinase inhibitor that is approved in countries including Japan and the United States for the treatment of rheumatoid arthritis, and is being developed across the globe for the treatment of inflammatory diseases. In the present study, a physiologically-based pharmacokinetic model was applied to compare the pharmacokinetics of tofacitinib in Japanese and Caucasians to assess the potential impact of ethnicity on the dosing regimen in the two populations. Simulated plasma concentration profiles and pharmacokinetic parameters, i.e. maximum concentration and area under plasma concentration-time curve, in Japanese and Caucasian populations after single or multiple doses of 1 to 30 mg tofacitinib were in agreement with clinically observed data. The similarity in simulated exposure between Japanese and Caucasian populations supports the currently approved dosing regimen in Japan and the United States, where there is no recommendation for dose adjustment according to race. Simulated results for single (1 to 100 mg) or multiple doses (5 mg twice daily) of tofacitinib in extensive and poor metabolizers of CYP2C19, an enzyme which has been shown to contribute in part to tofacitinib elimination and is known to exhibit higher frequency in Japanese compared to Caucasians, were also in support of no recommendation for dose adjustment in CYP2C19 poor metabolizers. This study demonstrated a successful application of physiologically-based pharmacokinetic modeling in evaluating ethnic sensitivity in pharmacokinetics at early stages of development, presenting its potential value as an efficient and scientific method for optimal dose setting in the Japanese population.

  12. Application of Physiologically-Based Pharmacokinetic Modeling for the Prediction of Tofacitinib Exposure in Japanese

    PubMed Central

    SUZUKI, MISAKI; TSE, SUSANNA; HIRAI, MIDORI; KUREBAYASHI, YOICHI

    2016-01-01

    Tofacitinib (3-[(3R,4R)-4-methyl-3-[methyl(7H-pyrrolo[2,3-d]pyrimidin-4-yl)amino]piperidin-1-yl]-3 -oxopropanenitrile) is an oral Janus kinase inhibitor that is approved in countries including Japan and the United States for the treatment of rheumatoid arthritis, and is being developed across the globe for the treatment of inflammatory diseases. In the present study, a physiologically-based pharmacokinetic model was applied to compare the pharmacokinetics of tofacitinib in Japanese and Caucasians to assess the potential impact of ethnicity on the dosing regimen in the two populations. Simulated plasma concentration profiles and pharmacokinetic parameters, i.e. maximum concentration and area under plasma concentration-time curve, in Japanese and Caucasian populations after single or multiple doses of 1 to 30 mg tofacitinib were in agreement with clinically observed data. The similarity in simulated exposure between Japanese and Caucasian populations supports the currently approved dosing regimen in Japan and the United States, where there is no recommendation for dose adjustment according to race. Simulated results for single (1 to 100 mg) or multiple doses (5 mg twice daily) of tofacitinib in extensive and poor metabolizers of CYP2C19, an enzyme which has been shown to contribute in part to tofacitinib elimination and is known to exhibit higher frequency in Japanese compared to Caucasians, were also in support of no recommendation for dose adjustment in CYP2C19 poor metabolizers. This study demonstrated a successful application of physiologically-based pharmacokinetic modeling in evaluating ethnic sensitivity in pharmacokinetics at early stages of development, presenting its potential value as an efficient and scientific method for optimal dose setting in the Japanese population. PMID:28490712

  13. Using a multinomial tree model for detecting mixtures in perceptual detection

    PubMed Central

    Chechile, Richard A.

    2014-01-01

    In the area of memory research there have been two rival approaches for memory measurement: signal detection theory (SDT) and multinomial processing trees (MPT). Both approaches provide measures for the quality of the memory representation, and both provide corrections for response bias. In recent years a strong case has been advanced for the MPT approach because of the finding of stochastic mixtures on both target-present and target-absent tests. In this paper, a case is made that perceptual detection, like memory recognition, involves a mixture of processes that are readily represented as an MPT model. The Chechile (2004) 6P memory measurement model is modified in order to apply to the case of perceptual detection. This new MPT model is called the Perceptual Detection (PD) model. The properties of the PD model are developed, and the model is applied to some existing data of a radiologist examining CT scans. The PD model brings out novel features that were absent from a standard SDT analysis. The topic of optimal parameter estimation on an individual-observer basis is also explored with Monte Carlo simulations. These simulations reveal that the mean of the Bayesian posterior distribution is a more accurate estimator than the corresponding maximum likelihood estimator (MLE). Monte Carlo simulations also indicate that model estimates based only on the data from an individual observer can be improved upon (in the sense of being more accurate) by an adjustment that takes into account the parameter estimate based on the data pooled across all the observers. The adjustment of the estimate for an individual is discussed as a statistical effect analogous to the improvement over the individual MLE demonstrated by the James-Stein shrinkage estimator in the case of the multiple-group normal model. PMID:25018741
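
    The adjustment described amounts to pulling each individual observer's estimate toward the pooled estimate, in the manner of a James-Stein shrinkage estimator. The minimal sketch below assumes a fixed shrinkage weight w; in practice the optimal weight depends on the relative within- and between-observer variances.

        def shrink_estimate(theta_i, theta_pool, w):
            """James-Stein-style adjustment: pull an individual observer's
            estimate theta_i toward the pooled estimate theta_pool.
            w in [0, 1] is the shrinkage weight (larger = more pooling);
            its value here is an assumption, not an optimal derivation."""
            return (1.0 - w) * theta_i + w * theta_pool

        print(shrink_estimate(0.82, 0.70, w=0.3))   # -> 0.784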

  14. Scale-up on basis of structured mixing models: A new concept.

    PubMed

    Mayr, B; Moser, A; Nagy, E; Horvat, P

    1994-02-05

    A new scale-up concept based upon mixing models for bioreactors equipped with Rushton turbines using the tanks-in-series concept is presented. The physical mixing model includes four adjustable parameters, i.e., radial and axial circulation time, number of ideally mixed elements in one cascade, and the volume of the ideally mixed turbine region. The values of the model parameters were adjusted with the application of a modified Monte-Carlo optimization method, which fitted the simulated response function to the experimental curve. The number of cascade elements turned out to be constant (N = 4). The model parameter radial circulation time is in good agreement with the one obtained by the pumping capacity. In case of remaining parameters a first or second order formal equation was developed, including four operational parameters (stirring and aeration intensity, scale, viscosity). This concept can be extended to several other types of bioreactors as well, and it seems to be a suitable tool to compare the bioprocess performance of different types of bioreactors. (c) 1994 John Wiley & Sons, Inc.

  15. An Evolutionary Optimization of the Refueling Simulation for a CANDU Reactor

    NASA Astrophysics Data System (ADS)

    Do, Q. B.; Choi, H.; Roh, G. H.

    2006-10-01

    This paper presents a multi-cycle and multi-objective optimization method for the refueling simulation of a 713 MWe Canada deuterium uranium (CANDU-6) reactor based on a genetic algorithm, an elitism strategy and a heuristic rule. The proposed algorithm searches for the optimal refueling patterns for a single cycle that maximize the average discharge burnup, minimize the maximum channel power and minimize the change in the zone controller unit water fills while satisfying the most important safety-related neutronic parameters of the reactor core. The heuristic rule generates an initial population of individuals very close to a feasible solution, which reduces the computing time of the optimization process. The multi-cycle optimization is carried out based on a single-cycle refueling simulation. The proposed approach was verified by a refueling simulation of a natural uranium CANDU-6 reactor for an operation period of 6 months at an equilibrium state and compared with the experience-based automatic refueling simulation and the generalized perturbation theory. The comparison has shown that the simulation results are consistent with each other and that the proposed approach is a reasonable optimization method for the refueling simulation, controlling all the safety-related parameters of the reactor core during the simulation.
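
    The combination of elitism and heuristic initialization described here follows a standard genetic-algorithm pattern. The sketch below uses a toy fitness function as a stand-in for the reactor-physics simulation (the channel layout, objectives and all constants are hypothetical), but shows how elites are carried forward unchanged and how a near-feasible seed pattern initializes the population:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_channels, pop_size, n_elite = 40, 60, 4

    def fitness(pattern):
        # Toy surrogate for the core simulator: reward coverage (burnup-like),
        # penalize clustered selections (stand-in for channel-power peaking).
        return pattern.sum() - 2.0 * np.sum(pattern[:-1] * pattern[1:])

    # Heuristic initialization: start near a feasible, evenly spaced pattern.
    base = np.zeros(n_channels, dtype=int); base[::4] = 1
    pop = np.array([np.where(rng.random(n_channels) < 0.1, 1 - base, base)
                    for _ in range(pop_size)])

    for _ in range(100):
        order = np.argsort([fitness(p) for p in pop])[::-1]
        elites = pop[order[:n_elite]]                    # elitism: best carried forward
        parents = pop[order[: pop_size // 2]]
        children = []
        while len(children) < pop_size - n_elite:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_channels)            # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_channels) < 0.02         # mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.vstack([elites, children])

    print("best fitness:", max(fitness(p) for p in pop))
    ```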

  16. Calibration of a biome-biogeochemical cycles model for modeling the net primary production of teak forests through inverse modeling of remotely sensed data

    NASA Astrophysics Data System (ADS)

    Imvitthaya, Chomchid; Honda, Kiyoshi; Lertlum, Surat; Tangtham, Nipon

    2011-01-01

    In this paper, we present the results of net primary production (NPP) modeling of teak (Tectona grandis Lin F.), an important species in tropical deciduous forests. The biome-biogeochemical cycles (Biome-BGC) model was calibrated to estimate NPP through an inverse modeling approach. A genetic algorithm (GA) was linked with Biome-BGC to determine the optimal ecophysiological model parameters. Biome-BGC was calibrated by adjusting the ecophysiological model parameters to fit the simulated LAI to the satellite LAI (SPOT-Vegetation), and the best fitness confirmed the high accuracy of the ecophysiological parameters generated by the GA. The modeled NPP, using the optimized parameters from the GA as input data, was evaluated against daily NPP derived from the MODIS satellite and annual field data in northern Thailand. The results showed that NPP obtained using the optimized ecophysiological parameters was more accurate than that obtained using default literature parameterization. This improvement occurred mainly because the optimized parameters reduced the bias by reducing systematic underestimation in the model. These Biome-BGC results can be effectively applied to teak forests in tropical areas. The study proposes a more effective method of using a GA to determine ecophysiological parameters at the site level and represents a first step toward the analysis of the carbon budget of teak plantations at the regional scale.

  17. Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay

    2012-01-01

    An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.
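
    The steady-state mean squared estimation error that drives a tuner selection of this kind can be computed from the filtering algebraic Riccati equation. A minimal sketch with a hypothetical 3-state, 2-sensor linear model (not an actual engine model, and not the paper's formulation) is shown below; candidate tuner sets would be compared by recomputing this error for each parameterization:

    ```python
    import numpy as np
    from scipy.linalg import solve_discrete_are

    # Hypothetical 3-state linear model with 2 sensors (illustrative only).
    A = np.array([[0.95, 0.02, 0.0],
                  [0.01, 0.90, 0.03],
                  [0.0,  0.02, 0.97]])
    C = np.array([[1.0, 0.0, 0.5],
                  [0.0, 1.0, 0.2]])
    Q = 1e-4 * np.eye(3)   # process noise covariance
    R = 1e-3 * np.eye(2)   # measurement noise covariance

    # Steady-state a priori error covariance from the filtering Riccati equation
    # (obtained via duality with the control-form discrete algebraic Riccati eq.).
    P = solve_discrete_are(A.T, C.T, Q, R)
    print("steady-state estimation MSE per state:", np.diag(P))
    # A tuner selection would compare np.trace(P) (or the error in the parameters
    # of interest) across candidate tuning-parameter subsets and keep the best.
    print("total MSE:", np.trace(P))
    ```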

  18. Validation of a Novel Laparoscopic Adjustable Gastric Band Simulator

    PubMed Central

    Sankaranarayanan, Ganesh; Adair, James D.; Halic, Tansel; Gromski, Mark A.; Lu, Zhonghua; Ahn, Woojin; Jones, Daniel B.; De, Suvranu

    2011-01-01

    Background Morbid obesity accounts for more than 90,000 deaths per year in the United States. Laparoscopic adjustable gastric banding (LAGB) is the second most common weight loss procedure performed in the US and the most common in Europe and Australia. Simulation in surgical training is a rapidly advancing field that has been adopted by many to prepare surgeons for surgical techniques and procedures. Study Aim The aim of our study was to determine face, construct and content validity for a novel virtual reality laparoscopic adjustable gastric band simulator. Methods Twenty-eight subjects were categorized into two groups (Expert and Novice), determined by their skill level in laparoscopic surgery. Experts consisted of subjects who had at least four years of laparoscopic training and operative experience. Novices consisted of subjects with medical training, but with less than four years of laparoscopic training. The subjects used the virtual reality laparoscopic adjustable band surgery simulator and were automatically scored according to various tasks. The subjects then completed a questionnaire to evaluate face and content validity. Results On a 5-point Likert scale (1 – lowest score, 5 – highest score), the mean score for visual realism was 4.00 ± 0.67 and the mean score for realism of the interface and tool movements was 4.07 ± 0.77 [Face Validity]. There were significant differences in the performance of the two subject groups (Expert and Novice), based on total scores (p<0.001) [Construct Validity]. The mean score for utility of the simulator, as assessed by the Expert group, was 4.50 ± 0.71 [Content Validity]. Conclusion We created a virtual reality laparoscopic adjustable gastric band simulator. Our initial results demonstrate excellent face, construct and content validity findings. To our knowledge, this is the first virtual reality simulator with haptic feedback for training residents and surgeons in the laparoscopic adjustable gastric banding procedure. PMID:20734069

  19. Validation of a novel laparoscopic adjustable gastric band simulator.

    PubMed

    Sankaranarayanan, Ganesh; Adair, James D; Halic, Tansel; Gromski, Mark A; Lu, Zhonghua; Ahn, Woojin; Jones, Daniel B; De, Suvranu

    2011-04-01

    Morbid obesity accounts for more than 90,000 deaths per year in the United States. Laparoscopic adjustable gastric banding (LAGB) is the second most common weight loss procedure performed in the US and the most common in Europe and Australia. Simulation in surgical training is a rapidly advancing field that has been adopted by many to prepare surgeons for surgical techniques and procedures. The aim of our study was to determine face, construct, and content validity for a novel virtual reality laparoscopic adjustable gastric band simulator. Twenty-eight subjects were categorized into two groups (expert and novice), determined by their skill level in laparoscopic surgery. Experts consisted of subjects who had at least 4 years of laparoscopic training and operative experience. Novices consisted of subjects with medical training but with less than 4 years of laparoscopic training. The subjects used the virtual reality laparoscopic adjustable band surgery simulator. They were automatically scored according to various tasks. The subjects then completed a questionnaire to evaluate face and content validity. On a 5-point Likert scale (1 = lowest score, 5 = highest score), the mean score for visual realism was 4.00 ± 0.67 and the mean score for realism of the interface and tool movements was 4.07 ± 0.77 (face validity). There were significant differences in the performances of the two subject groups (expert and novice) based on total scores (p < 0.001) (construct validity). The mean score for utility of the simulator, as assessed by the expert group, was 4.50 ± 0.71 (content validity). We created a virtual reality laparoscopic adjustable gastric band simulator. Our initial results demonstrate excellent face, construct, and content validity findings. To our knowledge, this is the first virtual reality simulator with haptic feedback for training residents and surgeons in the laparoscopic adjustable gastric banding procedure.

  20. Enhanced nonlinearity interval mapping scheme for high-performance simulation-optimization of watershed-scale BMP placement

    NASA Astrophysics Data System (ADS)

    Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn

    2015-03-01

    Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practice (BMP) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CPs)—each with multiple Total Maximum Daily Load (TMDL) targets—were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all the CPs were met at the lowest possible BMP implementation cost. A genetic algorithm (GA) and NIMS were both applied and compared. The results showed that NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near-optimal solutions. The best solution obtained among all the GA executions had a minimized cost of 67.7 million—marginally higher than, but approximately equal to, that of the NIMS solution. The results highlight the utility of this approach for decision making in large-scale watershed simulation-optimization formulations.

  1. Transferability of optimally-selected climate models in the quantification of climate change impacts on hydrology

    NASA Astrophysics Data System (ADS)

    Chen, Jie; Brissette, François P.; Lucas-Picher, Philippe

    2016-11-01

    Given the ever-increasing number of climate change simulations being carried out, it has become impractical to use all of them to cover the uncertainty of climate change impacts. Various methods have been proposed to optimally select subsets of a large ensemble of climate simulations for impact studies. However, the behaviour of optimally-selected subsets of climate simulations for climate change impacts is unknown, since the transfer process from climate projections to the impact study world is usually highly non-linear. Consequently, this study investigates the transferability of optimally-selected subsets of climate simulations in the case of hydrological impacts. Two different methods were used for the optimal selection of subsets of climate scenarios, and both were found to be capable of adequately representing the spread of selected climate model variables contained in the original large ensemble. However, in both cases, the optimal subsets had limited transferability to hydrological impacts. To capture a similar variability in the impact model world, many more simulations have to be used than are needed to simply cover the variability from the climate model variables' perspective. Overall, both optimal subset selection methods were better than random selection when small subsets were selected from a large ensemble for impact studies. However, as the number of selected simulations increased, random selection often performed better than the two optimal methods. To ensure adequate uncertainty coverage, the results of this study imply that selecting as many climate change simulations as possible is the best avenue. Where this is not possible, the two optimal methods were found to perform adequately.
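
    Optimal subset selection of this kind is often approached as a space-filling problem over the climate variables. The sketch below is one simple stand-in for such methods (not the two methods evaluated in the paper): a greedy maximin rule picks simulations that spread across a synthetic ensemble, and the subset's coverage is then checked against a nonlinearly derived "impact" variable:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    # Rows: climate simulations; columns: standardized summary variables
    # (e.g., changes in mean temperature and precipitation). Synthetic data.
    ensemble = rng.normal(size=(50, 2))

    def greedy_maximin(points, k):
        """Greedily pick k points that spread across the ensemble."""
        chosen = [int(np.argmax(np.linalg.norm(points - points.mean(0), axis=1)))]
        while len(chosen) < k:
            d = np.min(np.linalg.norm(points[:, None] - points[chosen], axis=2), axis=1)
            chosen.append(int(np.argmax(d)))
        return chosen

    subset = greedy_maximin(ensemble, 8)
    # Transferability check: compare subset spread vs. full-ensemble spread for
    # an impact variable derived nonlinearly from the climate variables.
    impact = ensemble[:, 0] ** 2 + 0.5 * ensemble[:, 1]
    print("full-ensemble impact range:", impact.min(), impact.max())
    print("subset impact range:      ", impact[subset].min(), impact[subset].max())
    ```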

  2. Combining Simulation and Optimization Models for Hardwood Lumber Production

    Treesearch

    G.A. Mendoza; R.J. Meimban; W.G. Luppold; Philip A. Araman

    1991-01-01

    Published literature contains a number of optimization and simulation models dealing with the primary processing of hardwood and softwood logs. Simulation models have been developed primarily as descriptive models for characterizing the general operations and performance of a sawmill. Optimization models, on the other hand, were developed mainly as analytical tools for...

  3. A Simulation of Alternatives for Wholesale Inventory Replenishment

    DTIC Science & Technology

    2016-03-01

    The last method is a mixed-integer, linear optimization model. Comparative Inventory Simulation, a discrete event simulation model, is designed to find fill rates achieved for each National Item... Keywords: simulation; event graphs; reorder point; fill-rate; backorder; discrete event simulation; wholesale inventory optimization model.

  4. Optimized constants for an ultraviolet light-adjustable intraocular lens.

    PubMed

    Conrad-Hengerer, Ina; Dick, H Burkhard; Hütz, Werner W; Haigis, Wolfgang; Hengerer, Fritz H

    2011-12-01

    To determine the accuracy of intraocular lens (IOL) power calculations and to suggest adjusted constants for implantation of ultraviolet light-adjustable IOLs. Center for Vision Science, Ruhr University Eye Clinic, Bochum, Germany. Cohort study. Eyes with a visually significant cataract that had phacoemulsification with implantation of a light-adjustable IOL were evaluated. IOLMaster measurements were performed before phacoemulsification and IOL implantation and 4 weeks after surgery, before the first adjustment of the IOL. The expected refraction and the estimation error were studied. The study evaluated 125 eyes. Using the surgical constants provided by the manufacturer of the light-adjustable IOL, the SRK/T formula gave a more hyperopic refraction than the Hoffer Q and Holladay 1 formulas. The mean error of prediction was 0.93 diopter (D) ± 0.69 (SD), 0.91 ± 0.63 D, and 0.86 ± 0.65 D, respectively. The corresponding mean absolute error of prediction was 0.98 ± 0.61 D, 0.93 ± 0.61 D, and 0.90 ± 0.59 D, respectively. With optimized constants for the formulas, the mean error of prediction was 0.00 ± 0.63 D for Hoffer Q, 0.00 ± 0.64 D for Holladay 1, and 0.00 ± 0.66 D for SRK/T. When the optimized constants are used with all formulas, the expected refraction after phacoemulsification and implantation of a light-adjustable IOL can be targeted toward the hyperopic side of the desired refraction. Copyright © 2011 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  5. Artificial Hip Simulator with Crystal Models

    NASA Image and Video Library

    1966-06-21

    Robert Johnson, top, sets the lubricant flow while Donald Buckley adjusts the bearing specimen on an artificial hip simulator at the National Aeronautics and Space Administration (NASA) Lewis Research Center. The simulator was supplemented by large crystal lattice models to demonstrate the composition of different bearing alloys. This image by NASA photographer Paul Riedel was used for the cover of the August 15, 1966 edition of McGraw-Hill Product Engineering. Johnson was chief of the Lubrication Branch and Buckley head of the Space Environment Lubrication Section in the Fluid System Components Division. In 1962 they began studying the molecular structure of metals. Their friction and wear testing revealed that the optimal structure for metal bearings was a hexagonal crystal structure with proper molecular spacing. Bearing manufacturers traditionally preferred cubic structures over hexagonal arrangements. Buckley and Johnson found that even though the hexagonal structure was not as inherently strong as its cubic counterpart, it was less likely to cause a catastrophic failure. The Lewis researchers concentrated their efforts on cobalt-molybdenum and titanium alloys for high-temperature applications. The alloys had a number of possible uses, including prosthetics. The alloys were similar in composition to the commercial alloys used for prosthetics, but employed the longer-lasting hexagonal structure.

  6. Transcostal high-intensity focused ultrasound treatment using phased array with geometric correction.

    PubMed

    Qiao, Shan; Shen, Guofeng; Bai, Jingfeng; Chen, Yazhu

    2013-08-01

    In high-intensity focused ultrasound treatment of liver tumors, ultrasound propagation is affected by the rib cage. Because of diffraction and absorption by the bone, the sound distribution at the focal plane is altered and, more importantly, overheating of the rib surface might occur. To overcome these problems, a geometric correction method is applied to turn off the elements blocked by the ribs. The potential of steering the focus of the phased array along the propagation direction to improve transcostal treatment was investigated by simulations and experiments using different rib models and transducers. The ultrasound propagation through the ribs was computed by a hybrid method combining the Rayleigh-Sommerfeld integral, the k-space method, and the angular spectrum method. A modified correction method was proposed to adjust the output of the elements based on their relative area in the projected "shadow" of the ribs. The simulation results showed that an increase of up to 300% in the specific absorption rate gain was obtained by varying the focal length, although the optimal value varied in each situation. Therefore, acoustic simulation is required for each clinical case to determine a satisfactory treatment plan.
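
    The modified correction method scales each element by its exposed (non-shadowed) area rather than switching it off entirely. A minimal 1-D sketch with hypothetical array and rib geometry (not the paper's transducer or rib models) contrasts the two rules:

    ```python
    import numpy as np

    # 1-D sketch: element centers along the array aperture (mm) and the
    # projected rib "shadow" intervals on the array plane (hypothetical).
    elements = np.linspace(-40, 40, 64)
    element_width = 80 / 64
    ribs = [(-35, -25), (-10, 0), (15, 25)]          # shadow intervals in mm

    def shadow_fraction(center, width, intervals):
        lo, hi = center - width / 2, center + width / 2
        covered = sum(max(0.0, min(hi, b) - max(lo, a)) for a, b in intervals)
        return covered / width

    frac = np.array([shadow_fraction(c, element_width, ribs) for c in elements])
    amp_binary = (frac == 0).astype(float)  # geometric correction: off if blocked
    amp_scaled = 1.0 - frac                 # modified rule: scale by exposed area
    print("elements fully on:", int(amp_binary.sum()), "of", len(elements))
    print("mean drive level with area scaling:", amp_scaled.mean().round(3))
    ```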

  7. a Simulation Tool Assisting the Design of a Close Range Photogrammetry System for the Sardinia Radio Telescope

    NASA Astrophysics Data System (ADS)

    Buffa, F.; Pinna, A.; Sanna, G.

    2016-06-01

    The Sardinia Radio Telescope (SRT) is a 64 m diameter antenna whose primary mirror is equipped with an active surface capable of correcting its deformations by means of a dense network of actuators. Close range photogrammetry (CRP) was used to measure the self-load deformations of the SRT primary reflector from its optimal shape, which must be minimized for the radio telescope to operate at full efficiency. In the attempt to achieve such performance, we conceived a near real-time CRP system that requires the cameras to be installed in fixed positions while avoiding any interference with the antenna's operation. The design of such a system is not a trivial task, and to assist our decisions we therefore developed a simulation pipeline to realistically reproduce and evaluate photogrammetric surveys of large structures. The described simulation environment consists of (i) a detailed description of the SRT model, including the measurement points and the camera parameters, (ii) a tool capable of generating realistic images according to the above model, and (iii) a self-calibrating bundle adjustment to evaluate the performance, in terms of RMSE, of the camera configurations.

  8. A Novel Temporal Bone Simulation Model Using 3D Printing Techniques.

    PubMed

    Mowry, Sarah E; Jammal, Hachem; Myer, Charles; Solares, Clementino Arturo; Weinberger, Paul

    2015-09-01

    An inexpensive temporal bone model for use in a temporal bone dissection laboratory setting can be made using a commercially available, consumer-grade 3D printer. Several models for a simulated temporal bone have been described, but they use commercial-grade printers and materials. The goal of this project was to produce a plastic simulated temporal bone on an inexpensive 3D printer that recreates the visual and haptic experience associated with drilling a human temporal bone. Images from a high-resolution CT of a normal temporal bone were converted into stereolithography files via commercially available software, with image conversion and print settings adjusted to achieve optimal print quality. The temporal bone model was printed using acrylonitrile butadiene styrene (ABS) plastic filament on a MakerBot 2x 3D printer. Simulated temporal bones were drilled by seven expert temporal bone surgeons, who assessed the fidelity of the model as compared with a human cadaveric temporal bone. Using a four-point scale, the simulated bones were assessed for haptic experience and recreation of the temporal bone anatomy. The created model was felt to be an accurate representation of a human temporal bone. All raters felt strongly that this would be a good training model for junior residents or for simulating difficult surgical anatomy. Material cost for each model was $1.92. A realistic, inexpensive, and easily reproducible temporal bone model can be created on a consumer-grade desktop 3D printer.

  9. Use of Multi-class Empirical Orthogonal Function for Identification of Hydrogeological Parameters and Spatiotemporal Pattern of Multiple Recharges in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Huang, C. L.; Hsu, N. S.; Yeh, W. W. G.; Hsieh, I. H.

    2017-12-01

    This study develops an innovative calibration method for regional groundwater modeling using multi-class empirical orthogonal functions (EOFs). The developed method is an iterative approach. Prior to carrying out the iterative procedures, the groundwater storage hydrographs associated with the observation wells are calculated. The combined multi-class EOF amplitudes and EOF expansion coefficients of the storage hydrographs are then used to compute the initial guess of the temporal and spatial pattern of the multiple recharges. The initial guesses of the hydrogeological parameters are assigned according to in-situ pumping experiments. The recharges include net rainfall recharge and boundary recharge, and the hydrogeological parameters are riverbed leakage conductivity, horizontal hydraulic conductivity, vertical hydraulic conductivity, storage coefficient, and specific yield. The first step of the iterative algorithm is to run the numerical model (i.e., MODFLOW) with the initial or adjusted values of the recharges and parameters. Second, in order to determine the best EOF combination of the error storage hydrographs for determining the correction vectors, the objective function is devised as minimizing the root mean square error (RMSE) of the simulated storage hydrographs. The error storage hydrographs are the differences between the storage hydrographs computed from observed and simulated groundwater level fluctuations. Third, the values of the recharges and parameters are adjusted, and the iterative procedures are repeated until the stopping criterion is reached. The established methodology was applied to the groundwater system of the Ming-Chu Basin, Taiwan. The study period is from January 1 to December 2, 2012. Results showed that the optimal EOF combination for the multiple recharges and hydrogeological parameters can decrease the RMSE of the simulated storage hydrographs dramatically within three calibration iterations. This indicates that the iterative approach using EOF techniques can capture the groundwater flow tendency and detect the correction vectors of the simulated error sources. Hence, the established EOF-based methodology can effectively and accurately identify the multiple recharges and hydrogeological parameters.
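
    The EOF machinery here amounts to a singular value decomposition of the centered error-hydrograph matrix. A minimal sketch on synthetic data (standing in for observed-minus-MODFLOW storage hydrographs; well counts and signal shapes are hypothetical):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    # Error storage hydrographs: (time steps x observation wells), i.e.,
    # observed minus simulated groundwater storage. Synthetic data here.
    n_t, n_wells = 365, 12
    signal = np.outer(np.sin(np.linspace(0, 4 * np.pi, n_t)), rng.normal(size=n_wells))
    errors = signal + 0.1 * rng.normal(size=(n_t, n_wells))

    # EOF analysis = SVD of the centered data matrix.
    anom = errors - errors.mean(axis=0)
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    expansion_coeffs = u * s        # temporal expansion coefficients
    eofs = vt                       # spatial EOF patterns over the wells
    explained = s**2 / np.sum(s**2)
    print("variance explained by leading EOFs:", explained[:3].round(3))
    # The leading EOFs/expansion coefficients would drive the correction
    # vectors for recharges and parameters in the iterative calibration loop.
    ```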

  10. 10 CFR 430.24 - Units to be tested.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... the method includes an ARM/simulation adjustment factor(s), determine the value(s) of the factors(s... process. (v) If request for approval is for an updated ARM, manufacturers must identify modifications made to the ARM since the last submittal, including any ARM/simulation adjustment factor(s) added since...

  11. Estimation of in-situ bioremediation system cost using a hybrid Extreme Learning Machine (ELM)-particle swarm optimization approach

    NASA Astrophysics Data System (ADS)

    Yadav, Basant; Ch, Sudheer; Mathur, Shashi; Adamowski, Jan

    2016-12-01

    In-situ bioremediation is the most common groundwater remediation procedure used for treating organically contaminated sites. A simulation-optimization approach, which incorporates a simulation model for groundwater flow and transport processes within an optimization program, can help engineers design a remediation system that best satisfies management objectives as well as regulatory constraints. In-situ bioremediation is a highly complex, non-linear process, and the modelling of such a complex system requires significant computational effort. Soft computing techniques have a flexible mathematical structure that can generalize complex nonlinear processes. In in-situ bioremediation management, a physically-based model is used for the simulation, and the simulated data are utilized by the optimization model to optimize the remediation cost. Repeatedly calling the simulator to satisfy the constraints is an extremely tedious and time-consuming process, and thus there is a need for a simulator that can reduce the computational burden. This study presents a simulation-optimization approach to achieve an accurate and cost-effective in-situ bioremediation system design for groundwater contaminated with BTEX (benzene, toluene, ethylbenzene, and xylenes) compounds. In this study, the Extreme Learning Machine (ELM) is used as a proxy simulator to replace BIOPLUME III in the simulation. The ELM was selected based on a comparative analysis with an Artificial Neural Network (ANN) and a Support Vector Machine (SVM), as both were successfully used in previous studies of in-situ bioremediation system design. Further, a single-objective optimization problem is solved by a coupled Extreme Learning Machine (ELM)-Particle Swarm Optimization (PSO) technique to achieve the minimum cost for the in-situ bioremediation system design. The results indicate that the ELM is a faster and more accurate proxy simulator than the ANN and SVM. The total cost obtained by the ELM-PSO approach is held to a minimum while successfully satisfying all the regulatory constraints of the contaminated site.
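
    An ELM surrogate is attractive precisely because training is a single linear solve: hidden-layer weights are random and only the output weights are fit. A minimal sketch with synthetic input-output data standing in for BIOPLUME III runs (the dimensions and response function are hypothetical):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    # Training data standing in for simulator runs: inputs are, e.g.,
    # pumping/injection rates; the output is a concentration response.
    X = rng.uniform(0, 1, size=(200, 4))
    w_true = np.array([3.0, 1.0, 2.0, 0.5])
    y = np.sin(X @ w_true) + 0.05 * rng.normal(size=200)

    # Extreme Learning Machine: random hidden layer, output weights by lstsq.
    n_hidden = 50
    W = rng.normal(size=(4, n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                          # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)    # one linear solve, no backprop

    def elm_predict(Xnew):
        return np.tanh(Xnew @ W + b) @ beta

    Xtest = rng.uniform(0, 1, size=(50, 4))
    ytest = np.sin(Xtest @ w_true)
    print("surrogate RMSE:", np.sqrt(np.mean((elm_predict(Xtest) - ytest) ** 2)))
    # A PSO loop would then call elm_predict instead of the full simulator when
    # evaluating candidate remediation designs against cost and constraints.
    ```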

  12. Automated parameterization of intermolecular pair potentials using global optimization techniques

    NASA Astrophysics Data System (ADS)

    Krämer, Andreas; Hülsmann, Marco; Köddermann, Thorsten; Reith, Dirk

    2014-12-01

    In this work, different global optimization techniques are assessed for the automated development of molecular force fields, as used in molecular dynamics and Monte Carlo simulations. The quest of finding suitable force field parameters is treated as a mathematical minimization problem. Intricate problem characteristics such as extremely costly and even abortive simulations, noisy simulation results, and especially multiple local minima naturally lead to the use of sophisticated global optimization algorithms. Five diverse algorithms (pure random search, recursive random search, CMA-ES, differential evolution, and taboo search) are compared to our own tailor-made solution named CoSMoS. CoSMoS is an automated workflow. It models the parameters' influence on the simulation observables to detect a globally optimal set of parameters. It is shown how and why this approach is superior to other algorithms. Applied to suitable test functions and simulations for phosgene, CoSMoS effectively reduces the number of required simulations and real time for the optimization task.

  13. Noninvasive, automatic optimization strategy in cardiac resynchronization therapy.

    PubMed

    Reumann, Matthias; Osswald, Brigitte; Doessel, Olaf

    2007-07-01

    Optimization of cardiac resynchronization therapy (CRT) is still unsolved. It has been shown that optimal electrode position, atrioventricular (AV) and interventricular (VV) delays improve the success of CRT and reduce the number of non-responders. However, no automatic, noninvasive optimization strategy exists to date. Cardiac resynchronization therapy was simulated on the Visible Man and a patient data set including fiber orientation and ventricular heterogeneity. A cellular automaton was used for fast computation of ventricular excitation. An AV block and a left bundle branch block were simulated with 100%, 80% and 60% interventricular conduction velocity. A right apical and 12 left ventricular lead positions were set. Sequential optimization and optimization with the downhill simplex algorithm (DSA) were carried out. The minimal error between the isochrones of the physiologic excitation and those of the therapy is computed automatically and leads to an optimal lead position and timing. Up to 1512 simulations were carried out per pathology per patient. One simulation took 4 minutes on an Apple Macintosh 2 GHz PowerPC G5. For each electrode pair an optimal pacemaker delay was found. The DSA reduced the number of simulations by an order of magnitude, and the AV-delay and VV-delay were determined with a much higher resolution. The findings compare well with clinical studies. The presented computer model of CRT automatically evaluates an optimal lead position, AV-delay and VV-delay, which can be used to noninvasively plan an optimal therapy for an individual patient. The application of the DSA reduces the simulation time so that the strategy is suitable for pre-operative planning in clinical routine. Future work will focus on clinical evaluation of the computer models and integration of patient data for individualized therapy planning and optimization.
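
    The downhill simplex (Nelder-Mead) search over pacing delays needs only function evaluations, which is what makes it attractive when each evaluation is a full excitation simulation. A minimal sketch with a toy error surface in place of the cellular-automaton run (the optimum and scaling are invented for illustration):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def activation_error(delays):
        """Toy stand-in for the simulation: error between simulated and
        physiologic activation isochrones for given AV and VV delays (ms)."""
        av, vv = delays
        return (av - 120.0) ** 2 / 1e3 + (vv - 20.0) ** 2 / 1e2  # hypothetical

    res = minimize(activation_error, x0=np.array([160.0, 0.0]),
                   method="Nelder-Mead", options={"xatol": 0.5, "fatol": 1e-3})
    print("optimal AV/VV delays (ms):", res.x.round(1),
          "after", res.nfev, "evaluations")
    ```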

  14. Surrogate Model Application to the Identification of Optimal Groundwater Exploitation Scheme Based on Regression Kriging Method—A Case Study of Western Jilin Province

    PubMed Central

    An, Yongkai; Lu, Wenxi; Cheng, Weiguo

    2015-01-01

    This paper introduces a surrogate model to identify an optimal exploitation scheme, with the western Jilin Province selected as the study area. A numerical simulation model of groundwater flow was established first, and four exploitation wells were set in Tongyu County and Qian Gorlos County, respectively, so as to supply water to Daan County. Second, the Latin Hypercube Sampling (LHS) method was used to collect data in the feasible region for the input variables. A surrogate model of the numerical simulation model of groundwater flow was developed using the regression kriging method. An optimization model was established to search for an optimal groundwater exploitation scheme using the minimum average drawdown of the groundwater table and the minimum cost of groundwater exploitation as multi-objective functions. Finally, the surrogate model was invoked by the optimization model in the process of solving the optimization problem. Results show that the relative error and root mean square error of the groundwater table drawdown between the simulation model and the surrogate model for 10 validation samples are both lower than 5%, which is a high approximation accuracy. A comparison of the surrogate-based simulation-optimization model and the conventional simulation-optimization model on the same optimization problem shows that the former needs only 5.5 hours while the latter needs 25 days. The above results indicate that the surrogate model developed in this study can not only considerably reduce the computational burden of the simulation-optimization process but also maintain high computational accuracy. This can thus provide an effective method for identifying an optimal groundwater exploitation scheme quickly and accurately. PMID:26264008
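
    The LHS-plus-kriging workflow can be sketched compactly. Below, a Latin hypercube design feeds a synthetic drawdown function standing in for the groundwater flow model, and a Gaussian-process regressor (a close relative of the regression kriging used in the paper) is fit as the surrogate; the well count, bounds and coefficients are all hypothetical:

    ```python
    import numpy as np
    from scipy.stats import qmc
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    # Latin hypercube sample of exploitation rates for 4 wells (m^3/day).
    sampler = qmc.LatinHypercube(d=4, seed=2)
    X = qmc.scale(sampler.random(n=40), [0] * 4, [5000] * 4)

    def drawdown(x):
        """Synthetic stand-in for the groundwater flow simulation model."""
        return 1e-4 * x.sum() + 1e-8 * (x ** 2).sum()

    y = np.array([drawdown(x) for x in X])

    # Kriging-style surrogate fit to the sampled simulator responses.
    gp = GaussianProcessRegressor(
        kernel=ConstantKernel() * RBF(length_scale=2000.0),
        normalize_y=True).fit(X, y)
    x_new = np.array([[2500.0, 2500.0, 2500.0, 2500.0]])
    pred, std = gp.predict(x_new, return_std=True)
    print(f"predicted drawdown {pred[0]:.3f} m (+/- {std[0]:.3f})")
    ```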

  15. Predicting shrinkage and warpage in injection molding: Towards automatized mold design

    NASA Astrophysics Data System (ADS)

    Zwicke, Florian; Behr, Marek; Elgeti, Stefanie

    2017-10-01

    It is an inevitable part of any plastics molding process that the material undergoes some shrinkage during solidification. Mainly due to unavoidable inhomogeneities in the cooling process, the overall shrinkage cannot be assumed to be homogeneous in all volumetric directions. The direct consequence is warpage. The accurate prediction of such shrinkage and warpage effects has been the subject of a considerable amount of research, but it is important to note that this behavior depends greatly on the type of material used as well as on the process details. Without limiting ourselves to any specific properties of certain materials or process designs, we aim to develop a method for the automatized design of a mold cavity that will produce correctly shaped moldings after solidification. Essentially, this can be stated as a shape optimization problem, where the cavity shape is optimized to fulfill some objective function that measures defects in the molding shape. In order to be able to develop and evaluate such a method, we first require simulation methods for the different steps involved in the injection molding process that can represent the phenomena responsible for shrinkage and warpage in a sufficiently accurate manner. As a starting point, we consider the solidification of purely amorphous materials. In this case, the material slowly transitions from fluid-like to solid-like behavior as it cools down. This behavior is modeled using adjusted viscoelastic material models. Once the material has passed a certain temperature threshold during cooling, any viscous effects are neglected and the behavior is assumed to be fully elastic. Non-linear elastic laws are used to predict the shrinkage and warpage that occur after this point. We present the current state of these simulation methods and show some first approaches toward optimizing the mold cavity shape based on these methods.

  16. Optimization of Dish Solar Collectors with and without Secondary Concentrators

    NASA Technical Reports Server (NTRS)

    Jaffe, L. D.

    1982-01-01

    Methods for optimizing parabolic dish solar collectors and the consequent effects of various optical, thermal, mechanical, and cost variables are examined. The most important performance optimization is adjusting the receiver aperture to maximize collector efficiency. Other parameters that can be adjusted to optimize efficiency include focal length, and, if a heat engine is used, the receiver temperature. The efficiency maxima associated with focal length and receiver temperature are relatively broad; it may, accordingly, be desirable to design somewhat away from the maxima. Performance optimization is sensitive to the slope and specularity errors of the concentrator. Other optical and thermal variables affecting optimization are the reflectance and blocking factor of the concentrator, the absorptance and losses of the receiver, and, if a heat engine is used, the shape of the engine efficiency versus temperature curve. Performance may sometimes be improved by use of an additional optical element (a secondary concentrator) or a receiver window if the errors of the primary concentrator are large or the receiver temperature is high.

  17. Using Quantile and Asymmetric Least Squares Regression for Optimal Risk Adjustment.

    PubMed

    Lorenz, Normann

    2017-06-01

    In this paper, we analyze optimal risk adjustment for direct risk selection (DRS). Integrating insurers' activities for risk selection into a discrete choice model of individuals' health insurance choice shows that DRS has the structure of a contest. For the contest success function (csf) used in most of the contest literature (the Tullock csf), optimal transfers for a risk adjustment scheme have to be determined by means of a restricted quantile regression, irrespective of whether insurers are primarily engaged in positive DRS (attracting low risks) or negative DRS (repelling high risks). This is at odds with the common practice of determining transfers by means of a least squares regression. However, this common practice can be rationalized for a new csf, but only if positive and negative DRS are equally important; if they are not, optimal transfers have to be calculated by means of a restricted asymmetric least squares regression. Using data from German and Swiss health insurers, we find considerable differences between the three types of regressions. Optimal transfers therefore critically depend on which csf represents insurers' incentives for DRS and, if it is not the Tullock csf, whether insurers are primarily engaged in positive or negative DRS. Copyright © 2016 John Wiley & Sons, Ltd.
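
    Asymmetric least squares (expectile) regression differs from ordinary least squares only in how residuals are weighted. The sketch below, on synthetic cost data rather than the insurer data used in the paper, fits expectile regressions by iteratively reweighted least squares; tau = 0.5 recovers OLS, while other values tilt the fit the way a restricted asymmetric least squares would:

    ```python
    import numpy as np

    def expectile_regression(X, y, tau=0.8, n_iter=50):
        """Asymmetric least squares: weight tau on positive residuals,
        1 - tau on negative ones; tau = 0.5 reduces to OLS."""
        Xd = np.column_stack([np.ones(len(y)), X])
        beta = np.linalg.lstsq(Xd, y, rcond=None)[0]
        for _ in range(n_iter):
            w = np.where(y - Xd @ beta >= 0, tau, 1 - tau)
            WX = Xd * w[:, None]
            beta = np.linalg.solve(Xd.T @ WX, WX.T @ y)  # weighted normal eqs.
        return beta

    rng = np.random.default_rng(4)
    risk = rng.normal(size=(500, 1))                      # standardized risk factor
    cost = 100 + 40 * risk[:, 0] + rng.gamma(2, 10, 500)  # skewed cost data
    for tau in (0.5, 0.8):
        print(f"tau={tau}: intercept, slope =",
              expectile_regression(risk, cost, tau).round(2))
    ```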

  18. Challenges of NDE simulation tool validation, optimization, and utilization for composites

    NASA Astrophysics Data System (ADS)

    Leckey, Cara A. C.; Seebo, Jeffrey P.; Juarez, Peter

    2016-02-01

    Rapid, realistic nondestructive evaluation (NDE) simulation tools can aid in inspection optimization and prediction of inspectability for advanced aerospace materials and designs. NDE simulation tools may someday aid in the design and certification of aerospace components, potentially shortening the time from material development to implementation by industry and government. Furthermore, ultrasound modeling and simulation are expected to play a significant future role in validating the capabilities and limitations of guided-wave-based structural health monitoring (SHM) systems. The current state of the art in ultrasonic NDE/SHM simulation is still far from the goal of rapidly simulating damage detection techniques for large-scale, complex-geometry composite components and vehicles containing realistic damage types. Ongoing work at NASA Langley Research Center is focused on advanced ultrasonic simulation tool development. This paper discusses challenges of simulation tool validation, optimization, and utilization for composites. Ongoing simulation tool development work is described, along with examples of simulation validation and optimization challenges that are more broadly applicable to all NDE simulation tools. The paper also discusses examples of simulation tool utilization at NASA to develop new damage characterization methods for composites, and the associated challenges in experimentally validating those methods.

  19. iTOUGH2: A multiphysics simulation-optimization framework for analyzing subsurface systems

    NASA Astrophysics Data System (ADS)

    Finsterle, S.; Commer, M.; Edmiston, J. K.; Jung, Y.; Kowalsky, M. B.; Pau, G. S. H.; Wainwright, H. M.; Zhang, Y.

    2017-11-01

    iTOUGH2 is a simulation-optimization framework for the TOUGH suite of nonisothermal multiphase flow models and related simulators of geophysical, geochemical, and geomechanical processes. After appropriate parameterization of subsurface structures and their properties, iTOUGH2 runs simulations for multiple parameter sets and analyzes the resulting output for parameter estimation through automatic model calibration, local and global sensitivity analyses, data-worth analyses, and uncertainty propagation analyses. Development of iTOUGH2 is driven by scientific challenges and user needs, with new capabilities continually added to both the forward simulator and the optimization framework. This review article provides a summary description of methods and features implemented in iTOUGH2, and discusses the usefulness and limitations of an integrated simulation-optimization workflow in support of the characterization and analysis of complex multiphysics subsurface systems.

  20. Analysis of adjusting effects of mounting force on frequency conversion of mounted nonlinear optics.

    PubMed

    Su, Ruifeng; Liu, Haitao; Liang, Yingchun; Lu, Lihua

    2014-01-10

    Motivated by the need to increase the second harmonic generation (SHG) efficiency of nonlinear optics with large apertures, a novel mounting configuration with an active adjusting function for the SHG efficiency is proposed and studied mechanically and optically. The adjusting effects of the mounting force on distortion and stress are analyzed by the finite element method (FEM), and the contributions of distortion and stress to the change in phase mismatch, and hence to the SHG efficiency, are stated theoretically. Furthermore, the SHG efficiency is calculated as a function of the mounting force. The trends of the distortion, stress, and SHG efficiency with varying mounting force are obtained, and the optimal values are identified. Moreover, the mechanism behind the occurrence of the optimal values is studied, and an adjusting strategy is put forward. Numerical results show the robust adjustment afforded by the mounting force, as well as the effectiveness of the mounting configuration, in increasing the SHG efficiency.

  1. Optimization of Collision Detection in Surgical Simulations

    NASA Astrophysics Data System (ADS)

    Custură-Crăciun, Dan; Cochior, Daniel; Neagu, Corneliu

    2014-11-01

    Just as flight and spacecraft simulators already represent a standard, we expect that surgical simulators will soon become a standard in medical applications. A simulation's quality is strongly related to the image quality as well as to the degree of realism of the simulation. Increased quality requires increased resolution and increased rendering speed but, more importantly, the solution of a larger number of mathematical equations. To make this possible we need not only more efficient computers but, above all, more optimization of the calculation process. A simulator executes one of its most complex sets of calculations each time it detects a contact between virtual objects; the optimization of collision detection is therefore critical for the working speed of a simulator and hence for its quality.
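
    A common first optimization is a broad phase that prunes object pairs cheaply before any exact contact test is run. A minimal sketch with hypothetical scene data, using axis-aligned bounding boxes and a sort-and-sweep pass along one axis:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n = 200
    centers = rng.uniform(0, 100, size=(n, 3))
    half = rng.uniform(0.5, 2.0, size=(n, 3))
    lo, hi = centers - half, centers + half   # axis-aligned bounding boxes

    def aabb_overlap(i, j):
        return bool(np.all(lo[i] <= hi[j]) and np.all(lo[j] <= hi[i]))

    # Broad phase: sort-and-sweep on the x-axis skips most of the n*(n-1)/2
    # pairs; only boxes whose x-intervals overlap are tested on the other axes.
    order = np.argsort(lo[:, 0])
    candidates = []
    for a, i in enumerate(order):
        for j in order[a + 1:]:
            if lo[j, 0] > hi[i, 0]:
                break                         # no later box can overlap i on x
            if aabb_overlap(i, j):
                candidates.append((i, j))
    print(f"{len(candidates)} potentially colliding pairs of {n*(n-1)//2} total")
    ```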

  2. An optimization model to agroindustrial sector in antioquia (Colombia, South America)

    NASA Astrophysics Data System (ADS)

    Fernandez, J.

    2015-06-01

    This paper develops a proposal for a general optimization model for the flower industry, defined using discrete simulation and nonlinear optimization; the mathematical models are solved using ProModel simulation tools and GAMS optimization. It defines the operations that constitute the production and marketing of the sector, with statistically validated data taken directly from each operation through field work, and formulates the discrete simulation model of the operations and the linear optimization model of the entire industry chain. The model is solved with the tools described above, and the results are validated in a case study.

  3. Simulation-based optimization of lattice support structures for offshore wind energy converters with the simultaneous perturbation algorithm

    NASA Astrophysics Data System (ADS)

    Molde, H.; Zwick, D.; Muskulus, M.

    2014-12-01

    Support structures for offshore wind turbines contribute a large part of the total project cost, and a cost saving of a few percent would have considerable impact. At present, support structures are designed with simplified methods, e.g., spreadsheet analysis, before more detailed load calculations are performed. Due to the large number of load cases, only a few semi-manual design iterations are typically executed. Computer-assisted optimization algorithms could help to further explore design limits and avoid unnecessary conservatism. In this study the simultaneous perturbation stochastic approximation method developed by Spall in the 1990s was assessed with respect to its suitability for support structure optimization. The method depends on a few parameters and an objective function that need to be chosen carefully. In each iteration the structure is evaluated by time-domain analyses, and joint fatigue lifetimes and ultimate strength utilization are computed from stress concentration factors. A pseudo-gradient is determined from only two analysis runs, and the design is adjusted in the direction that improves it the most. The algorithm is able to generate considerably improved designs, compared to other methods, in a few hundred iterations, as demonstrated for the NOWITECH 10 MW reference turbine.
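
    SPSA's key property, a gradient estimate from just two analysis runs regardless of the number of design variables, can be shown in a few lines. The sketch below uses the standard SPSA gain exponents and a noisy toy objective in place of the time-domain structural analyses; all constants are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def cost(x):
        """Toy stand-in for the time-domain structural analysis: a noisy
        mass-plus-penalty objective over a vector of member dimensions."""
        return np.sum((x - 1.5) ** 2) + 0.01 * rng.normal()

    x = np.full(10, 3.0)                    # initial design (e.g., leg diameters)
    for k in range(1, 301):
        a_k = 0.1 / k ** 0.602              # step-size gain (exponent 0.602)
        c_k = 0.1 / k ** 0.101              # perturbation gain (exponent 0.101)
        delta = rng.choice([-1.0, 1.0], size=x.size)  # simultaneous perturbation
        # Pseudo-gradient from only two analysis runs, whatever the dimension.
        g = (cost(x + c_k * delta) - cost(x - c_k * delta)) / (2 * c_k * delta)
        x -= a_k * g
    print("final design:", x.round(3))
    ```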

  4. Optimized mixed Markov models for motif identification

    PubMed Central

    Huang, Weichun; Umbach, David M; Ohler, Uwe; Li, Leping

    2006-01-01

    Background Identifying functional elements, such as transcriptional factor binding sites, is a fundamental step in reconstructing gene regulatory networks and remains a challenging issue, largely due to limited availability of training samples. Results We introduce a novel and flexible model, the Optimized Mixture Markov model (OMiMa), and related methods to allow adjustment of model complexity for different motifs. In comparison with other leading methods, OMiMa can incorporate more than the NNSplice's pairwise dependencies; OMiMa avoids model over-fitting better than the Permuted Variable Length Markov Model (PVLMM); and OMiMa requires smaller training samples than the Maximum Entropy Model (MEM). Testing on both simulated and actual data (regulatory cis-elements and splice sites), we found OMiMa's performance superior to the other leading methods in terms of prediction accuracy, required size of training data or computational time. Our OMiMa system, to our knowledge, is the only motif finding tool that incorporates automatic selection of the best model. OMiMa is freely available at [1]. Conclusion Our optimized mixture of Markov models represents an alternative to the existing methods for modeling dependent structures within a biological motif. Our model is conceptually simple and effective, and can improve prediction accuracy and/or computational speed over other leading methods. PMID:16749929

  5. Design of a correlated validated CFD and genetic algorithm model for optimized sensors placement for indoor air quality monitoring

    NASA Astrophysics Data System (ADS)

    Mousavi, Monireh Sadat; Ashrafi, Khosro; Motlagh, Majid Shafie Pour; Niksokhan, Mohhamad Hosein; Vosoughifar, HamidReza

    2018-02-01

    In this study, a coupled method for simulating flow patterns, based on computational fluid dynamics (CFD) combined with an optimization technique using genetic algorithms, is presented to determine the optimal location and number of sensors in an enclosed residential-complex parking garage in Tehran. The main objective of this research is cost reduction and maximum coverage with respect to the distribution of concentrations in different scenarios. Considering all possible scenarios for the simulation of pollution distribution using CFD was challenging due to the extent of the parking garage and the number of cars present. To solve this problem, a set of scenarios was selected at random, and the maximum concentrations from these scenarios were chosen for the optimization. The CFD simulation outputs are used as input to the genetic-algorithm optimization model. The results give the optimal number and locations of the sensors.

  6. Performance optimization and validation of ADM1 simulations under anaerobic thermophilic conditions.

    PubMed

    Atallah, Nabil M; El-Fadel, Mutasem; Ghanimeh, Sophia; Saikaly, Pascal; Abou-Najm, Majdi

    2014-12-01

    In this study, two experimental data sets, each involving two thermophilic anaerobic digesters treating food waste, were simulated using the Anaerobic Digestion Model No. 1 (ADM1). A sensitivity analysis was conducted, using both data sets of one digester, for parameter optimization based on five measured performance indicators (methane generation, pH, acetate, total COD, and ammonia) as well as an equally weighted combination of the five. The simulation results revealed that while optimization with respect to methane alone, a commonly adopted approach, succeeded in simulating the methane experimental results, it predicted the other intermediary outputs less accurately. The multi-objective optimization, on the other hand, has the advantage of providing better overall results than methane-only optimization, even where individual intermediary outputs are not fully captured. The results from the parameter optimization were validated by their independent application to the data sets of the second digester. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Postaudit of optimal conjunctive use policies

    USGS Publications Warehouse

    Nishikawa, Tracy; Martin, Peter; ,

    1998-01-01

    A simulation-optimization model was developed for the optimal management of the city of Santa Barbara's water resources during a drought; however, this model addressed only groundwater flow and not the advective-dispersive, density-dependent transport of seawater. Zero-m freshwater head constraints at the coastal boundary were used as surrogates for the control of seawater intrusion. In this study, the strategies derived from the simulation-optimization model using two surface water supply scenarios are evaluated using a two-dimensional, density-dependent groundwater flow and transport model. Comparisons of simulated chloride mass fractions are made between maintaining the actual pumping policies of the 1987-91 drought and implementing the optimal pumping strategies for each scenario. The results indicate that using 0-m freshwater head constraints allowed no more seawater intrusion than under actual 1987-91 drought conditions and that the simulation-optimization model yields least-cost strategies that deliver more water than under actual drought conditions while controlling seawater intrusion.

  8. Direct Method Transcription for a Human-Class Translunar Injection Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Witzberger, Kevin E.; Zeiler, Tom

    2012-01-01

    This paper presents a new trajectory optimization software package developed in the framework of a low-to-high fidelity 3-degrees-of-freedom (DOF)/6-DOF vehicle simulation program named Mission Analysis Simulation Tool in Fortran (MASTIF), and its application to a translunar trajectory optimization problem. The functionality of the developed optimization package is implemented as a new "mode" in generalized settings to make it applicable to general trajectory optimization problems. In doing so, a direct optimization method using collocation is employed for solving the problem. Trajectory optimization problems in MASTIF are transcribed into a constrained nonlinear programming (NLP) problem and solved with SNOPT, a commercially available NLP solver. A detailed description of the optimization software developed is provided, as well as the transcription specifics for the translunar injection (TLI) problem. The analysis includes a 3-DOF TLI trajectory optimization and a 3-DOF vehicle TLI simulation using closed-loop guidance.

  9. Adaptive optimal training of animal behavior

    NASA Astrophysics Data System (ADS)

    Bak, Ji Hyun; Choi, Jung Yoon; Akrami, Athena; Witten, Ilana; Pillow, Jonathan

    Neuroscience experiments often require training animals to perform tasks designed to elicit various sensory, cognitive, and motor behaviors. Training typically involves a series of gradual adjustments of stimulus conditions and rewards in order to bring about learning. However, training protocols are usually hand-designed, and often require weeks or months to achieve a desired level of task performance. Here we combine ideas from reinforcement learning and adaptive optimal experimental design to formulate methods for efficient training of animal behavior. Our work addresses two intriguing problems at once: first, it seeks to infer the learning rules underlying an animal's behavioral changes during training; second, it seeks to exploit these rules to select stimuli that will maximize the rate of learning toward a desired objective. We develop and test these methods using data collected from rats during training on a two-interval sensory discrimination task. We show that we can accurately infer the parameters of a learning algorithm that describes how the animal's internal model of the task evolves over the course of training. We also demonstrate by simulation that our method can provide a substantial speedup over standard training methods.

  10. [Conceptual approach to formation of a modern system of medical provision].

    PubMed

    Belevitin, A B; Miroshnichenko, Iu V; Bunin, S A; Goriachev, A B; Krasavin, K D

    2009-09-01

    Within the framework of forming a new profile of the medical service of the Armed Forces, the principal approaches to optimizing the development of the medical supply system were determined. It was proposed to apply the following principles: hierarchic structuring, purposeful orientation, vertical task sharing, horizontal task sharing, complex simulation, and permanent improvement. The main directions for optimizing the structure and composition of the medical supply system of the Armed Forces are: the formation of modern medical supply institutions (centers for support with equipment and materiel, based on the central and regional storehouses) and the assignment to them of several functions of the military administration bodies; and the creation of medical supply offices based in military hospitals serving as base treatment-and-prophylaxis institutions in designated territorial zones of responsibility, in order to carry out the tasks of supplying the attached units and institutions with medical equipment. The medical supply system is built on three levels: Center - military region (Navy region) - territorial zone of responsibility.

  11. Spatiotemporal topology and temporal sequence identification with an adaptive time-delay neural network

    NASA Astrophysics Data System (ADS)

    Lin, Daw-Tung; Ligomenides, Panos A.; Dayhoff, Judith E.

    1993-08-01

    Inspired by the time delays that occur in neurobiological signal transmission, we describe an adaptive time-delay neural network (ATNN), a powerful dynamic learning technique for spatiotemporal pattern transformation and temporal sequence identification. The dynamic properties of this network are formulated through the adaptation of time delays and synapse weights, which are adjusted on-line by gradient descent rules according to the evolution of observed inputs and outputs. We have applied the ATNN to examples that possess spatiotemporal complexity, in which temporal sequences are completed by the network, demonstrating its applicability to pattern completion. Simulation results show that the ATNN learns the topology of circular and figure-eight trajectories within 500 on-line training iterations and reproduces the trajectories dynamically with very high accuracy. The ATNN was also trained to model the Fourier series expansion of a sum of different odd harmonics. The resulting network provides more flexibility and efficiency than the TDNN, since it allows the network to seek optimal values for the time delays as well as for the synapse weights.

  12. Assessing the Optimal Position for Vedolizumab in the Treatment of Ulcerative Colitis: A Simulation Model.

    PubMed

    Scott, Frank I; Shah, Yash; Lasch, Karen; Luo, Michelle; Lewis, James D

    2018-01-18

    Vedolizumab, an α4β7 integrin monoclonal antibody inhibiting gut lymphocyte trafficking, is an effective treatment for ulcerative colitis (UC). We evaluated the optimal position of vedolizumab in the UC treatment paradigm. Using Markov modeling, we assessed multiple algorithms for the treatment of UC. The base case was a 35-year-old male with steroid-dependent moderately to severely active UC without previous immunomodulator or biologic use. The model included 4 different algorithms over 1 year, with vedolizumab use prior to: initiating azathioprine (Algorithm 1), combination therapy with infliximab and azathioprine (Algorithm 2), combination therapy with an alternative anti-tumor necrosis factor (anti-TNF) and azathioprine (Algorithm 3), and colectomy (Algorithm 4). Transition probabilities and quality-adjusted life-year (QALY) estimates were derived from the published literature. Primary analyses included simulating 100 trials of 100,000 individuals, assessing clinical outcomes, and QALYs. Sensitivity analyses employed longer time horizons and ranges for all variables. Algorithm 1 (vedolizumab use prior to all other therapies) was the preferred strategy, resulting in 8981 additional individuals in remission, 18 fewer cases of lymphoma, and 1087 fewer serious infections per 100,000 patients compared with last-line use (A4). Algorithm 1 also resulted in 0.0197 to 0.0205 more QALYs compared with other algorithms. This benefit increased with longer time horizons. Algorithm 1 was preferred in all sensitivity analyses. The model suggests that treatment algorithms positioning vedolizumab prior to other therapies should be considered for individuals with moderately to severely active steroid-dependent UC. Further prospective research is needed to confirm these simulated results. © 2018 Crohn’s & Colitis Foundation of America. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
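
    A Markov cohort model of this kind advances a state-occupancy vector through a transition matrix each cycle and accumulates quality-adjusted time. The sketch below uses hypothetical states, weekly transition probabilities and utilities, not the published model inputs; comparing treatment algorithms would amount to re-running it with algorithm-specific matrices:

    ```python
    import numpy as np

    # Hypothetical 1-year Markov cohort sketch with weekly cycles.
    states = ["active_UC", "remission", "post_colectomy"]
    P = np.array([[0.95, 0.04, 0.01],       # weekly transition matrix
                  [0.03, 0.965, 0.005],
                  [0.00, 0.00, 1.00]])
    utility = np.array([0.60, 0.88, 0.72])  # quality weights per state

    cohort = np.array([1.0, 0.0, 0.0])      # all start with active disease
    qaly = 0.0
    for _ in range(52):
        cohort = cohort @ P
        qaly += np.dot(cohort, utility) / 52.0
    print("state occupancy at 1 year:", dict(zip(states, cohort.round(3))))
    print("expected QALYs over 1 year:", round(qaly, 4))
    ```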

  13. Miniature Microwave Applicator for Murine Bladder Hyperthermia Studies

    PubMed Central

    Salahi, Sara; Maccarini, Paolo F.; Rodrigues, Dario B.; Etienne, Wiguins; Landon, Chelsea D.; Inman, Brant A.; Dewhirst, Mark W.; Stauffer, Paul R.

    2012-01-01

    Purpose: Novel combinations of heat with chemotherapeutic agents are often studied in murine tumor models. Currently, no device exists to selectively heat small tumors at depth in mice. In this project, we modelled, built and tested a miniature microwave heat applicator, the physical dimensions of which can be scaled to adjust the volume and depth of heating to focus on the tumor volume. Of particular interest is a device that can selectively heat murine bladder. Materials and Methods: Using Avizo® segmentation software, we created a numerical mouse model based on micro-MRI scan data. The model was imported into HFSS™ simulation software and parametric studies were performed to optimize the dimensions of a water-loaded circular waveguide for selective power deposition inside a 0.15 ml bladder. A working prototype was constructed operating at 2.45 GHz. Heating performance was characterized by mapping fiber-optic temperature sensors along catheters inserted at depths of 0-1 mm (subcutaneous), 2-3 mm (vaginal), and 4-5 mm (rectal) below the abdominal wall, with the mid-depth catheter adjacent to the bladder. Core temperature was monitored orally. Results: Thermal measurements confirm the simulations, which demonstrate that this applicator can provide local heating at depth in small animals. Measured temperatures in the murine pelvis show well-localized bladder heating to 42-43 °C while maintaining normothermic skin and core temperatures. Conclusions: Simulation techniques facilitate the design optimization of microwave antennas for use in pre-clinical applications such as localized tumor heating in small animals. Laboratory measurements demonstrate the effectiveness of a new miniature water-coupled microwave applicator for localized heating of murine bladder. PMID:22690856

  14. Post Pareto optimization-A case

    NASA Astrophysics Data System (ADS)

    Popov, Stoyan; Baeva, Silvia; Marinova, Daniela

    2017-12-01

    Simulation performance may be evaluated according to multiple quality measures that compete with one another, so that their simultaneous consideration poses a conflict. In the current study we propose a practical framework for investigating such simulation performance criteria, exploring the inherent conflicts amongst them and identifying the best available tradeoffs, based upon multi-objective Pareto optimization. This approach necessitates the rigorous derivation of performance criteria to serve as objective functions that undergo vector optimization. We demonstrate the effectiveness of the proposed approach by applying it to multiple stochastic quality measures. We formulate the performance criteria of this use-case, pose an optimization problem, and solve it by means of a simulation-based Pareto approach. Upon attainment of the underlying Pareto frontier, we analyze it and prescribe preference-dependent configurations for optimal simulation training.
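
    A compact sketch of the non-dominated filtering that underlies such a Pareto frontier, applied to toy scores for two competing performance criteria; the criteria and data are placeholders.

    ```python
    import numpy as np

    def pareto_front(points):
        """Return the non-dominated rows of `points`, assuming every
        objective is to be minimized (flip signs for maximization)."""
        pts = np.asarray(points)
        keep = np.ones(len(pts), dtype=bool)
        for i, p in enumerate(pts):
            if not keep[i]:
                continue
            # q dominates p if q <= p in all objectives and q < p in one.
            dominated = np.all(pts <= p, axis=1) & np.any(pts < p, axis=1)
            if dominated.any():
                keep[i] = False
        return pts[keep]

    # Toy values: column 0 = tracking error, column 1 = training time.
    scores = np.random.default_rng(0).random((200, 2))
    front = pareto_front(scores)
    print(len(front), "trade-off configurations on the Pareto frontier")
    ```

    Preference-dependent configurations are then chosen by applying a decision-maker's weighting or constraint to the retained frontier points only.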

  15. Adjustable Autonomy Testbed

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Schreckenghost, Debra K.

    2001-01-01

    The Adjustable Autonomy Testbed (AAT) is a simulation-based testbed located in the Intelligent Systems Laboratory in the Automation, Robotics and Simulation Division at NASA Johnson Space Center. The purpose of the testbed is to support evaluation and validation of prototypes of adjustable autonomous agent software for control and fault management for complex systems. The AAT project has developed prototype adjustable autonomous agent software and human interfaces for cooperative fault management. This software builds on current autonomous agent technology by altering the architecture, components and interfaces for effective teamwork between autonomous systems and human experts. Autonomous agents include a planner, flexible executive, low level control and deductive model-based fault isolation. Adjustable autonomy is intended to increase the flexibility and effectiveness of fault management with an autonomous system. The test domain for this work is control of advanced life support systems for habitats for planetary exploration. The CONFIG hybrid discrete event simulation environment provides flexible and dynamically reconfigurable models of the behavior of components and fluids in the life support systems. Both discrete event and continuous (discrete time) simulation are supported, and flows and pressures are computed globally. This provides fast dynamic simulations of interacting hardware systems in closed loops that can be reconfigured during operations scenarios, producing complex cascading effects of operations and failures. Current object-oriented model libraries support modeling of fluid systems, and models have been developed of physico-chemical and biological subsystems for processing advanced life support gases. In FY01, water recovery system models will be developed.

  16. Field development planning using simulated annealing - optimal economic well scheduling and placement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckner, B.L.; Xong, X.

    1995-12-31

    A method for optimizing the net present value of a full field development by varying the placement and sequence of production wells is presented. This approach is automated and combines an economics package and Mobil's in-house simulator, PEGASUS, within a simulated annealing optimization engine. A novel framing of the well placement and scheduling problem as a classic "travelling salesman problem" is required before optimization via simulated annealing can be applied practically. An example of a full field development using this technique shows that non-uniform well spacings are optimal (from an NPV standpoint) when the effects of well interference and variable reservoir properties are considered. Examples of optimizing field NPV with variable well costs also show that non-uniform well spacings are optimal. Project NPV increases of 25 to 30 million dollars were shown using the optimal, nonuniform development versus reasonable, uniform developments. The ability of this technology to deduce these non-uniform well spacings opens up many potential applications that should materially impact the economic performance of field developments.
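
    A toy sketch of the simulated annealing loop for such a well-ordering problem. The NPV function below is an invented stand-in for the economics package and the PEGASUS simulator, and the swap move mirrors the travelling-salesman framing.

    ```python
    import math, random

    random.seed(42)
    n_wells = 12
    # Hypothetical per-well production rates; in the paper these would come
    # from reservoir simulation with interference effects.
    rate = [random.uniform(0.5, 2.0) for _ in range(n_wells)]

    def npv(order, discount=0.12):
        """Discounted value of drilling one well per period in this order."""
        return sum(rate[w] / (1 + discount) ** t for t, w in enumerate(order))

    def anneal(order, t0=1.0, cooling=0.995, steps=20_000):
        best = cur = list(order)
        temp = t0
        for _ in range(steps):
            i, j = random.sample(range(n_wells), 2)
            cand = list(cur)
            cand[i], cand[j] = cand[j], cand[i]   # swap two wells (TSP-style move)
            delta = npv(cand) - npv(cur)
            # Accept improvements always; accept worse moves with Boltzmann prob.
            if delta > 0 or random.random() < math.exp(delta / temp):
                cur = cand
                if npv(cur) > npv(best):
                    best = list(cur)
            temp *= cooling
        return best

    schedule = anneal(list(range(n_wells)))
    print("best drilling order:", schedule, "NPV:", round(npv(schedule), 3))
    ```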

  17. Sequential use of simulation and optimization in analysis and planning

    Treesearch

    Hans R. Zuuring; Jimmie D. Chew; J. Greg Jones

    2000-01-01

    Management activities are analyzed at landscape scales employing both simulation and optimization. SIMPPLLE, a stochastic simulation modeling system, is initially applied to assess the risks associated with a specific natural process occurring on the current landscape without management treatments, but with fire suppression. These simulation results are input into...

  18. Relationship Between Optimal Gain and Coherence Zone in Flight Simulation

    NASA Technical Reports Server (NTRS)

    Gracio, Bruno Jorge Correia; Pais, Ana Rita Valente; van Paassen, M. M.; Mulder, Max; Kelly, Lon C.; Houck, Jacob A.

    2011-01-01

    In motion simulation the inertial information generated by the motion platform usually differs from the visual information in the simulator displays, due to the physical limits of the motion platform. However, for small motions that are within the physical limits of the motion platform, one-to-one motion, i.e. visual information equal to inertial information, is possible. It has been shown in previous studies that one-to-one motion is often judged as too strong, causing researchers to lower the inertial amplitude. When trying to measure the optimal inertial gain for a visual amplitude, we found a zone of optimal gains instead of a single value. Such a result seems related to the coherence zones that have been measured in flight simulation studies. However, the optimal gain results were never directly related to the coherence zones. In this study we investigated whether the optimal gain measurements are the same as the coherence zone measurements. We also tried to infer whether the results obtained from the two measurements can be used to differentiate between simulators with different configurations. An experiment was conducted at the NASA Langley Research Center which used both the Cockpit Motion Facility and the Visual Motion Simulator. The results show that the inertial gains obtained with the optimal gain are different from the ones obtained with the coherence zone measurements. The optimal gain is within the coherence zone. The point of mean optimal gain was lower and further away from the one-to-one line than the point of mean coherence. The zone width obtained for the coherence zone measurements was dependent on the visual amplitude and frequency. For the optimal gain, the zone width remained constant when the visual amplitude and frequency were varied. We found no effect of the simulator configuration in either the coherence zone or the optimal gain measurements.

  19. A comparison of automated dispensing cabinet optimization methods.

    PubMed

    O'Neil, Daniel P; Miller, Adam; Cronin, Daniel; Hatfield, Chad J

    2016-07-01

    Results of a study comparing two methods of optimizing automated dispensing cabinets (ADCs) are reported. Eight nonprofiled ADCs were optimized over six months. Optimization of each cabinet involved three steps: (1) removal of medications that had not been dispensed for at least 180 days, (2) movement of ADC stock to better suit end-user needs and available space, and (3) adjustment of par levels (desired on-hand inventory levels). The par levels of four ADCs (the Day Supply group) were adjusted according to average daily usage; the par levels of the other four ADCs (the Formula group) were adjusted using a standard inventory formula. The primary outcome was the vend:fill ratio, while secondary outcomes included total inventory, inventory cost, quantity of expired medications, and ADC stockout percentage. The total number of medications stocked in the eight machines was reduced from 1,273 in a designated two-month preoptimization period to 1,182 in a designated two-month postoptimization period, yielding a carrying cost savings of $44,981. The mean vend:fill ratios before and after optimization were 4.43 and 4.46, respectively. The vend:fill ratio for ADCs in the Formula group increased from 4.33 before optimization to 5.2 after optimization; in the Day Supply group, the ratio declined (from 4.52 to 3.90). The postoptimization interaction difference between the Formula and Day Supply groups was found to be significant (p = 0.0477). ADC optimization via a standard inventory formula had a positive impact on inventory costs, refills, vend:fill ratios, and stockout percentages. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  20. Sleep and Adjustment in Preschool Children: Sleep Diary Reports by Mothers Relate to Behavior Reports by Teachers.

    ERIC Educational Resources Information Center

    Bates, John E.; Viken, Richard J.; Alexander, Douglas B.; Beyers, Jennifer; Stockton, Lesley

    2002-01-01

    Investigated the relationship between sleep patterns and behavioral adjustment with 4- to 5-year-old children from low-income families. Found that disrupted child sleep patterns, including variability in parentally reported amount of sleep, variability in bedtime, and lateness of bedtime, predicted less optimal adjustment in preschool, even after…

  1. A linked simulation-optimization model for solving the unknown groundwater pollution source identification problems.

    PubMed

    Ayvaz, M Tamer

    2010-09-20

    This study proposes a linked simulation-optimization model for solving unknown groundwater pollution source identification problems. In the proposed model, the MODFLOW and MT3DMS packages are used to simulate the flow and transport processes in the groundwater system. These models are then integrated with an optimization model based on the heuristic harmony search (HS) algorithm. In the proposed simulation-optimization model, the locations and release histories of the pollution sources are treated as the explicit decision variables and determined through the optimization model. An implicit solution procedure is also proposed to determine the optimum number of pollution sources, which is an advantage of this model. The performance of the proposed model is evaluated on two hypothetical examples covering simple and complex aquifer geometries, measurement error conditions, and different HS solution parameter sets. The results indicate that the proposed simulation-optimization model is effective and may be used to solve inverse pollution source identification problems. Copyright (c) 2010 Elsevier B.V. All rights reserved.
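
    A minimal harmony search sketch; the misfit function below is a toy stand-in for the MODFLOW/MT3DMS forward simulation, with a hidden source location and release strength to recover. The HS constants (HMS, HMCR, PAR, bandwidth) are conventional illustrative values, not the paper's parameter sets.

    ```python
    import random
    random.seed(1)

    # Hidden "true" source: x, y, release strength. The misfit plays the
    # role of the simulated-vs-observed concentration comparison.
    true = (3.2, 7.5, 40.0)
    def misfit(v):
        return sum((a - b) ** 2 for a, b in zip(v, true))

    bounds = [(0, 10), (0, 10), (0, 100)]
    HMS, HMCR, PAR, BW, iters = 20, 0.9, 0.3, 0.5, 5000

    # Initialize the harmony memory with random candidate sources.
    memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(HMS)]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if random.random() < HMCR:             # draw from harmony memory
                val = random.choice(memory)[d]
                if random.random() < PAR:          # pitch adjustment
                    val = min(hi, max(lo, val + random.uniform(-BW, BW)))
            else:                                  # random re-initialization
                val = random.uniform(lo, hi)
            new.append(val)
        worst = max(range(HMS), key=lambda i: misfit(memory[i]))
        if misfit(new) < misfit(memory[worst]):    # replace the worst harmony
            memory[worst] = new

    best = min(memory, key=misfit)
    print("identified source (x, y, strength):", [round(v, 2) for v in best])
    ```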

  2. Concurrently adjusting interrelated control parameters to achieve optimal engine performance

    DOEpatents

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2015-12-01

    Methods and systems for real-time engine control optimization are provided. A value of an engine performance variable is determined, a value of a first operating condition and a value of a second operating condition of a vehicle engine are detected, and initial values for a first engine control parameter and a second engine control parameter are determined based on the detected first operating condition and the detected second operating condition. The initial values for the first engine control parameter and the second engine control parameter are adjusted based on the determined value of the engine performance variable to cause the engine performance variable to approach a target engine performance variable. In order to cause the engine performance variable to approach the target engine performance variable, adjusting the initial value for the first engine control parameter necessitates a corresponding adjustment of the initial value for the second engine control parameter.
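
    A hedged toy of the concurrent-adjustment idea: a quadratic stand-in for the engine map, not the patented controller. The cross-term between the two control parameters is what forces each adjustment of one parameter to be accompanied by a corresponding adjustment of the other.

    ```python
    # Invented engine map: performance y depends jointly on two coupled
    # control parameters u1, u2 (say, spark timing and EGR fraction).
    def performance(u1, u2):
        return (30.0 + 4.0 * u1 - 0.5 * u1**2
                + 3.0 * u2 - 0.4 * u2**2 - 0.6 * u1 * u2)

    target = 38.0                # target engine performance variable
    u1, u2 = 1.0, 1.0            # initial values from operating-condition lookup
    gain = 0.02
    for _ in range(200):
        err = target - performance(u1, u2)
        # Concurrent update: both parameters move together along the local
        # sensitivities; the -0.6*u1*u2 coupling links their adjustments.
        g1 = 4.0 - 1.0 * u1 - 0.6 * u2      # dy/du1
        g2 = 3.0 - 0.8 * u2 - 0.6 * u1      # dy/du2
        u1 += gain * err * g1
        u2 += gain * err * g2

    print(f"u1={u1:.2f}, u2={u2:.2f}, "
          f"y={performance(u1, u2):.2f} (target {target})")
    ```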

  3. Contact angle adjustment in equation-of-state-based pseudopotential model.

    PubMed

    Hu, Anjie; Li, Longjian; Uddin, Rizwan; Liu, Dong

    2016-05-01

    The single component pseudopotential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000. However, the application of the model is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the original contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle is dependent on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on the geometry analysis is proposed and numerically compared with the original method. Simulation results show that, with our contact angle adjustment method, the stability of the model is highly improved when the density ratio is relatively large, and it is independent of the surface tension.

  5. Application of an Evolution Strategy in Planetary Ephemeris Optimization

    NASA Astrophysics Data System (ADS)

    Mai, E.

    2016-12-01

    Classical planetary ephemeris construction comprises three major steps, which are performed iteratively: simultaneous numerical integration of the coupled equations of motion of a multi-body system (propagator step), reduction of thousands of observations (reduction step), and optimization of various selected model parameters (adjustment step). This traditional approach is challenged by ongoing refinements in force modeling, e.g. inclusion of many more significant minor bodies, and an ever-growing number of planetary observations, e.g. a vast amount of spacecraft tracking data. To master the high computational burden and to circumvent the need for inversion of huge normal equation matrices, we propose an alternative ephemeris construction method. The main idea is to solve the overall optimization problem by a straightforward direct evaluation of the whole set of mathematical formulas involved, rather than to solve it as an inverse problem with all its tacit mathematical assumptions and numerical difficulties. We replace the usual gradient search by a stochastic search, namely an evolution strategy, which is also well suited to exploiting parallel computing capabilities. Furthermore, this new approach enables multi-criteria optimization and time-varying optima. This issue will become important in the future once ephemeris construction is just one part of even larger optimization problems, e.g. the combined and consistent determination of the physical state (orbit, size, shape, rotation, gravity,…) of celestial bodies (planets, satellites, asteroids, or comets), and if one seeks near real-time solutions. Here we outline the general idea and discuss first results. As an example, we present a simultaneous optimization of highly correlated asteroidal ring model parameters (total mass and heliocentric radius), based on simulations.
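
    A minimal (mu + lambda) evolution strategy sketch for the two ring parameters. The residual function is a synthetic stand-in for the propagator and reduction steps, which are not reproduced here; the population sizes and step-size schedule are assumptions.

    ```python
    import numpy as np
    rng = np.random.default_rng(7)

    # Synthetic "observations" of a perturbation depending on a ring's
    # total mass M and heliocentric radius R (invented functional form).
    M_true, R_true = 1.0, 2.8
    x = np.linspace(1.0, 5.0, 50)
    data = M_true / (x**2 + R_true**2)

    def residual(theta):
        M, R = theta
        return np.sum((M / (x**2 + R**2) - data) ** 2)

    mu, lam, sigma = 5, 20, 0.3           # (mu + lambda) strategy settings
    parents = rng.uniform(0.1, 5.0, size=(mu, 2))
    for _ in range(200):
        # Each offspring mutates a randomly chosen parent with Gaussian noise.
        idx = rng.integers(0, mu, size=lam)
        offspring = parents[idx] + sigma * rng.normal(size=(lam, 2))
        pool = np.vstack([parents, offspring])
        pool = pool[np.argsort([residual(p) for p in pool])]
        parents = pool[:mu]               # plus-selection: keep the best mu
        sigma *= 0.99                     # simple step-size annealing

    print("estimated (M, R):", parents[0].round(3))
    ```

    The lambda offspring evaluations are independent, which is what makes this class of search embarrassingly parallel.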

  6. Energy-efficient approach to minimizing the energy consumption in an extended job-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Tang, Dunbing; Dai, Min

    2015-09-01

    The traditional production planning and scheduling problems consider performance indicators like time, cost and quality as optimization objectives in manufacturing processes. However, environmentally-friendly factors like the energy consumption of production have not been fully taken into consideration. Against this background, this paper addresses an approach to modify a given schedule generated by a production planning and scheduling system in a job shop floor, where machine tools can work at different cutting speeds. It adjusts the cutting speeds of the operations while keeping the original assignment and processing sequence of operations of each job fixed in order to obtain energy savings. First, the proposed approach, based on a mixed integer programming mathematical model, changes the total idle time of the given schedule to minimize energy consumption in the job shop floor while retaining the optimal value of the scheduling objective, makespan. Then, a genetic-simulated annealing algorithm is used to explore the optimal solution, since the problem is strongly NP-hard. Finally, the effectiveness of the approach is evaluated on small- and large-size instances, respectively. The experimental results show that the approach can save 5%-10% of the average energy consumption while retaining the optimal makespan in small-size instances. In addition, the average maximum energy saving ratio can reach 13%. The approach can also save approximately 1%-4% of the average energy consumption and approximately 2.4% of the average maximum energy while accepting a near-optimal makespan in large-size instances. The proposed research provides an interesting starting point for exploring energy-aware schedule optimization for traditional production planning and scheduling problems.

  7. Competitive Swarm Optimizer Based Gateway Deployment Algorithm in Cyber-Physical Systems.

    PubMed

    Huang, Shuqiang; Tao, Ming

    2017-01-22

    Wireless sensor network topology optimization is a highly important issue, and topology control through node selection can improve the efficiency of data forwarding, while saving energy and prolonging the lifetime of the network. To address the problem of connecting a wireless sensor network to the Internet in cyber-physical systems, here we propose a geometric gateway deployment based on a competitive swarm optimizer algorithm. The particle swarm optimization (PSO) algorithm has a continuous search feature in the solution space, which makes it suitable for finding the geometric center of gateway deployment; however, its search mechanism is limited to the individual optimum (pbest) and the population optimum (gbest); thus, it easily falls into local optima. In order to improve the particle search mechanism and enhance the search efficiency of the algorithm, we introduce a new competitive swarm optimizer (CSO) algorithm. The CSO search algorithm is based on an inter-particle competition mechanism and can effectively prevent the population from falling into a local optimum. With the addition of an adaptive opposition-based search and dynamic parameter adjustment, this algorithm can maintain the diversity of the entire swarm to solve geometric K-center gateway deployment problems. The simulation results show that this CSO algorithm has good global explorative ability as well as convergence speed and can improve the network quality of service (QoS) level of cyber-physical systems by obtaining a minimum network coverage radius. We also find that the CSO algorithm is more stable, robust and effective in solving the problem of geometric gateway deployment as compared to the PSO or K-medoids algorithms.
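
    A short sketch of the core CSO mechanics (random pairwise contests in which only the loser learns from the winner and from the swarm mean), applied to a toy geometric K-center gateway placement. The instance data and constants are assumptions, and the paper's opposition-based search extension is omitted.

    ```python
    import numpy as np
    rng = np.random.default_rng(3)

    # Toy instance: place K gateways to minimize the network coverage
    # radius (max sensor-to-nearest-gateway distance) in the unit square.
    sensors = rng.random((60, 2))
    K, dim = 3, 6                       # each particle = K gateway (x, y) pairs

    def coverage_radius(p):
        g = p.reshape(K, 2)
        d = np.linalg.norm(sensors[:, None, :] - g[None, :, :], axis=2)
        return d.min(axis=1).max()

    n, phi = 40, 0.1                    # swarm size (even) and social factor
    X = rng.random((n, dim))
    V = np.zeros((n, dim))
    for _ in range(300):
        xbar = X.mean(axis=0)           # swarm mean position
        order = rng.permutation(n)
        for a, b in zip(order[::2], order[1::2]):   # random pairwise contests
            w, l = (a, b) if coverage_radius(X[a]) < coverage_radius(X[b]) else (b, a)
            r1, r2, r3 = rng.random((3, dim))
            # Only the loser updates; the winner passes to the next round.
            V[l] = r1 * V[l] + r2 * (X[w] - X[l]) + phi * r3 * (xbar - X[l])
            X[l] = np.clip(X[l] + V[l], 0.0, 1.0)

    best = min(X, key=coverage_radius)
    print("minimum coverage radius:", round(coverage_radius(best), 3))
    ```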

  8. SfM with MRFs: discrete-continuous optimization for large-scale structure from motion.

    PubMed

    Crandall, David J; Owens, Andrew; Snavely, Noah; Huttenlocher, Daniel P

    2013-12-01

    Recent work in structure from motion (SfM) has built 3D models from large collections of images downloaded from the Internet. Many approaches to this problem use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the image collection grows, and can suffer from drift or local minima. We present an alternative framework for SfM based on finding a coarse initial solution using hybrid discrete-continuous optimization and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and points, including noisy geotags and vanishing point (VP) estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it produces models that are similar to or better than those produced by incremental bundle adjustment, but more robustly and in a fraction of the time.

  9. Optimization Research of Generation Investment Based on Linear Programming Model

    NASA Astrophysics Data System (ADS)

    Wu, Juan; Ge, Xueqian

    Linear programming is an important branch of operational research and a mathematical method to assist people in carrying out scientific management. GAMS is an advanced simulation and optimization modeling language that combines complex mathematical programming formulations, such as linear programming (LP), nonlinear programming (NLP), and mixed-integer programming (MIP), with system simulation. In this paper, based on a linear programming model, optimized generation investment decisions are simulated and analyzed. Finally, the optimal installed capacity of power plants and the final total cost are obtained, which provides a rational decision-making basis for optimized investments.
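
    A small illustrative LP in the same spirit, solved with SciPy rather than GAMS; the plant types, costs, and constraints are invented for demonstration.

    ```python
    from scipy.optimize import linprog

    # Toy generation-investment LP: choose installed capacity of three
    # plant types to meet a peak-demand and an energy requirement at
    # minimum annualized cost. x = [coal_MW, gas_MW, wind_MW].
    cost = [1.2, 0.9, 1.5]                 # cost per MW (arbitrary units)
    A_ub = [[-1.0, -1.0, -1.0],            # total capacity >= 500 MW peak
            [-0.8, -0.6, -0.3]]            # derated energy contribution >= 300
    b_ub = [-500.0, -300.0]
    bounds = [(0, 400), (0, 400), (0, 400)]

    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    print("optimal capacities (MW):", [round(v, 1) for v in res.x])
    print("minimum total cost:", round(res.fun, 1))
    ```

    The >= constraints are written as <= rows with flipped signs because linprog expects upper-bound form.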

  10. Mixed Criticality Scheduling for Industrial Wireless Sensor Networks

    PubMed Central

    Jin, Xi; Xia, Changqing; Xu, Huiting; Wang, Jintao; Zeng, Peng

    2016-01-01

    Wireless sensor networks (WSNs) have been widely used in industrial systems. Their real-time performance and reliability are fundamental to industrial production. Many works have studied these two aspects, but they focus only on single-criticality WSNs. Mixed criticality requirements exist in many advanced applications in which different data flows have different levels of importance (or criticality). In this paper, we first propose a scheduling algorithm which guarantees the real-time performance and reliability requirements of data flows with different levels of criticality. The algorithm supports centralized optimization and adaptive adjustment, and is able to improve both scheduling performance and flexibility. We then provide a schedulability test through rigorous theoretical analysis. We conduct extensive simulations, and the results demonstrate that the proposed scheduling algorithm and analysis significantly outperform existing ones. PMID:27589741

  11. Research on the Complexity of Dual-Channel Supply Chain Model in Competitive Retailing Service Market

    NASA Astrophysics Data System (ADS)

    Ma, Junhai; Li, Ting; Ren, Wenbo

    2017-06-01

    This paper examines the optimal decisions of a dual-channel game model considering the inputs of retailing service. We analyze how the adjustment speed of service inputs affects system complexity and market performance, and explore the stability of the equilibrium points by parameter basin diagrams. Chaos control is realized by the variable feedback method. The numerical simulation shows that complex behavior, such as period-doubling bifurcation and chaos, would trigger the system to become unstable. We measure the performance of the model in different periods by analyzing the variation of an average profit index. The theoretical results show that the percentage share of the demand and the cross-service coefficients have an important influence on the stability of the system and its feasible basin of attraction.
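
    A toy illustration of how adjustment speed can destabilize such a system: a bounded-rationality update of a service input under an invented concave profit function (not the paper's dual-channel model), showing a fixed point, a period-2 cycle, and a chaotic orbit as the speed k grows.

    ```python
    # Bounded-rationality adjustment: s(t+1) = s(t) + k * s(t) * dpi/ds,
    # with illustrative profit pi(s) = a*s - b*s^2, so dpi/ds = a - 2*b*s.
    a, b = 2.0, 1.0
    def step(s, k):
        return s + k * s * (a - 2 * b * s)

    for k in (0.4, 1.1, 1.3):              # increasing adjustment speed
        s = 0.3
        for _ in range(500):               # discard the transient
            s = step(s, k)
        orbit = []
        for _ in range(8):                 # sample the long-run behavior
            s = step(s, k)
            orbit.append(round(s, 4))
        print(f"k={k}: orbit -> {sorted(set(orbit))}")
    ```

    Linearizing at the fixed point s* = a/(2b) gives a map slope of 1 - 2k, so the equilibrium loses stability at k = 1 and the familiar period-doubling cascade follows, which is the qualitative behavior reported above.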

  12. Cooperative Position Aware Mobility Pattern of AUVs for Avoiding Void Zones in Underwater WSNs.

    PubMed

    Javaid, Nadeem; Ejaz, Mudassir; Abdul, Wadood; Alamri, Atif; Almogren, Ahmad; Niaz, Iftikhar Azim; Guizani, Nadra

    2017-03-13

    In this paper, we propose two schemes: a position-aware mobility pattern (PAMP) and cooperative PAMP (Co PAMP). The first is an optimization scheme that avoids void hole occurrence and minimizes the uncertainty in the position estimation of gliders. The second is a cooperative routing scheme that reduces the packet drop ratio by using relay cooperation. Both techniques use gliders that stay at sojourn positions for a predefined time; at each sojourn position, self-confidence (s-confidence) and neighbor-confidence (n-confidence) regions are estimated for balanced energy consumption. The transmission power of a glider is adjusted according to those confidence regions. Simulation results show that our proposed schemes outperform the existing scheme compared against in terms of packet delivery ratio, void zones and energy consumption.

  13. Slow light effect with high group index and wideband by saddle-like mode in PC-CROW

    NASA Astrophysics Data System (ADS)

    Wan, Yong; Jiang, Li-Jun; Xu, Sheng; Li, Meng-Xue; Liu, Meng-Nan; Jiang, Cheng-Yi; Yuan, Feng

    2018-04-01

    Slow light with a high group index and wide bandwidth is achieved in photonic crystal coupled-resonator optical waveguides (PC-CROWs). Using eye-shaped scatterers and various microcavities, saddle-like curves between the normalized frequency f and wave number k can be obtained by adjusting the parameters of the scatterers, the parameters of the coupling microcavities, and the positions of the scatterers. Slow light with a reasonably flat band and group index can then be achieved by optimizing these parameters. Simulations prove that the maximal value of the group index is > 10^4, and the normalized delay-bandwidth product within a new varying range of n_g > 10^2 or n_g > 10^3 can serve as a new and effective evaluation criterion for slow light in PC-CROWs.

  14. Design and Dynamic Modeling of Flexible Rehabilitation Mechanical Glove

    NASA Astrophysics Data System (ADS)

    Lin, M. X.; Ma, G. Y.; Liu, F. Q.; Sun, Q. S.; Song, A. Q.

    2018-03-01

    Rehabilitation gloves are equipment that helps rehabilitation doctors perform finger rehabilitation training, which can greatly reduce the labour intensity of rehabilitation doctors and allow more people to receive finger rehabilitation training. In light of the defects of existing rehabilitation gloves, such as complicated structure and stiff movement, a rehabilitation mechanical glove is designed which provides the driving force with an air cylinder and adopts a rope-spring mechanism to ensure flexibility of movement. To fit hands of different sizes, an adjustable bandage ring is used to fix the mechanism in place. To avoid solving the complex dynamic equations directly, a dynamic simulation is carried out in Adams to obtain the motion curves, which makes it easier to optimize the ring positions in the structure.

  15. Load leveling on industrial refrigeration systems

    NASA Astrophysics Data System (ADS)

    Bierenbaum, H. S.; Kraus, A. D.

    1982-01-01

    A computer model was constructed of a brewery with a 2000 horsepower compressor/refrigeration system. The various conservation and load management options were simulated using the validated model. The savings available from implementing the most promising options were verified by trials in the brewery. Results show that an optimized methodology for implementing load leveling and energy conservation consisted of: (1) adjusting (or tuning) refrigeration system controller variables to minimize unnecessary compressor starts, (2) carefully controlling (modulating) the primary refrigeration system operating parameters, compressor suction pressure and discharge pressure, to satisfy product quality constraints as well as in-process material cooling rates and temperature levels, (3) evaluating the energy cost savings associated with reject heat recovery, and (4) making the decision to implement the reject heat recovery system based on a cost/benefits analysis.

  16. Evaluation of traffic signal timing optimization methods using a stochastic and microscopic simulation program.

    DOT National Transportation Integrated Search

    2003-01-01

    This study evaluated existing traffic signal optimization programs including Synchro, TRANSYT-7F, and genetic algorithm optimization using real-world data collected in Virginia. As a first step, a microscopic simulation model, VISSIM, was extensively ...

  17. Modularized Parallel Neutron Instrument Simulation on the TeraGrid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Meili; Cobb, John W; Hagen, Mark E

    2007-01-01

    In order to build a bridge between the TeraGrid (TG), a national scale cyberinfrastructure resource, and neutron science, the Neutron Science TeraGrid Gateway (NSTG) is focused on introducing productive HPC usage to the neutron science community, primarily the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL). Monte Carlo simulations are used as a powerful tool for instrument design and optimization at SNS. One of the successful efforts of a collaboration team composed of NSTG HPC experts and SNS instrument scientists is the development of a software facility named PSoNI, Parallelizing Simulations of Neutron Instruments. Parallelizing the traditional serial instrument simulation on TeraGrid resources, PSoNI quickly computes full instrument simulations at sufficient statistical levels for instrument design. Upon successful SNS commissioning, by the end of 2007 three out of five commissioned instruments in the SNS target station will be available for initial users. Advanced instrument study, proposal feasibility evaluation, and experiment planning are on the immediate schedule of SNS, which pose further requirements such as flexibility and high runtime efficiency on fast instrument simulation. PSoNI has been redesigned to meet the new challenges and a preliminary version has been developed on TeraGrid. This paper explores the motivation and goals of the new design, and the improved software structure. Further, it describes the realized new features seen from MPI-parallelized McStas running high resolution design simulations of the SEQUOIA and BSS instruments at SNS. A discussion regarding future work, which is targeted at fast simulation for automated experiment adjustment and comparing models to data in analysis, is also presented.

  18. EMU Suit Performance Simulation

    NASA Technical Reports Server (NTRS)

    Cowley, Matthew S.; Benson, Elizabeth; Harvill, Lauren; Rajulu, Sudhakar

    2014-01-01

    Introduction: Designing a planetary suit is very complex and often requires difficult trade-offs between performance, cost, mass, and system complexity. To verify that new suit designs meet requirements, full prototypes must be built and tested with human subjects. However, numerous design iterations will occur before the hardware meets those requirements. Traditional draw-prototype-test paradigms for research and development are prohibitively expensive with today's shrinking Government budgets. Personnel at NASA are developing modern simulation techniques that focus on a human-centric design paradigm. These new techniques make use of virtual prototype simulations and fully adjustable physical prototypes of suit hardware. This is extremely advantageous and enables comprehensive design down-selections to be made early in the design process. Objectives: The primary objective was to test modern simulation techniques for evaluating the human performance component of two EMU suit concepts, pivoted and planar style hard upper torso (HUT). Methods: This project simulated variations in EVA suit shoulder joint design and subject anthropometry and then measured the differences in shoulder mobility caused by the modifications. These estimations were compared to human-in-the-loop test data gathered during past suited testing using four subjects (two large males, two small females). Results: Results demonstrated that EVA suit modeling and simulation are feasible design tools for evaluating and optimizing suit design based on simulated performance. The suit simulation model was found to be advantageous in its ability to visually represent complex motions and volumetric reach zones in three dimensions, giving designers a faster and deeper comprehension of suit component performance vs. human performance. Suit models were able to discern differing movement capabilities between EMU HUT configurations, generic suit fit concerns, and specific suit fit concerns for crewmembers based on individual anthropometry.

  19. Teaching Simulation and Computer-Aided Separation Optimization in Liquid Chromatography by Means of Illustrative Microsoft Excel Spreadsheets

    ERIC Educational Resources Information Center

    Fasoula, S.; Nikitas, P.; Pappa-Louisi, A.

    2017-01-01

    A series of Microsoft Excel spreadsheets were developed to simulate the process of separation optimization under isocratic and simple gradient conditions. The optimization procedure is performed in a stepwise fashion using simple macros for an automatic application of this approach. The proposed optimization approach involves modeling of the peak…

  20. Microgrids and distributed generation systems: Control, operation, coordination and planning

    NASA Astrophysics Data System (ADS)

    Che, Liang

    Distributed Energy Resources (DERs), which include distributed generation (DG), distributed energy storage systems, and adjustable loads, are key components in microgrid operations. A microgrid is a small electric power system integrated with on-site DERs to serve all or some portion of the local load and connected to the utility grid through the point of common coupling (PCC). Microgrids can operate in both grid-connected mode and island mode. The structure and components of hierarchical control for a microgrid at the Illinois Institute of Technology (IIT) are discussed and analyzed. Case studies address the reliable and economic operation of the IIT microgrid. The simulation results of IIT microgrid operation demonstrate that hierarchical control and the coordination strategy of distributed energy resources (DERs) are an effective way of optimizing the economic operation and the reliability of microgrids. The benefits and challenges of DC microgrids are addressed with a DC model for the IIT microgrid. We present a hierarchical control strategy including the primary, secondary, and tertiary controls for economic operation and the resilience of a DC microgrid. The simulation results verify that the proposed coordinated strategy is an effective way of ensuring the resilient response of DC microgrids to emergencies and optimizing their economic operation at steady state. The concept and prototype of a community microgrid that interconnects multiple microgrids in a community are proposed. Two studies are conducted. For coordination, a novel three-level hierarchical coordination strategy to coordinate the optimal power exchanges among neighboring microgrids is proposed. For planning, a multi-microgrid interconnection planning framework using a probabilistic minimal cut-set (MCS) based iterative methodology is proposed for enhancing the economy, resilience, and reliability of multi-microgrid operations. The implementation of high-reliability microgrids requires proper protection schemes that function effectively in both grid-connected and island modes. This chapter presents a communication-assisted four-level hierarchical protection strategy for high-reliability microgrids, and tests the proposed protection strategy on a loop-structured microgrid. The simulation results demonstrate the proposed strategy to be an effective and efficient option for microgrid protection. Additionally, the microgrid topology ought to be optimally planned. To address microgrid topology planning, a graph-partitioning and integer-programming integrated methodology is proposed. This work is not included in the dissertation; interested readers can refer to our related publication.

  1. How to deal with climate change uncertainty in the planning of engineering systems

    NASA Astrophysics Data System (ADS)

    Spackova, Olga; Dittes, Beatrice; Straub, Daniel

    2016-04-01

    The effect of extreme events such as floods on infrastructure and the built environment is associated with significant uncertainties: these include the uncertain effect of climate change, uncertainty in extreme event frequency estimation due to limited historic data and imperfect models, and, not least, uncertainty about future socio-economic developments, which determine the damage potential. One option for dealing with these uncertainties is the use of adaptable (flexible) infrastructure that can easily be adjusted in the future without excessive costs. The challenge is in quantifying the value of adaptability and in finding the optimal sequence of decisions. Is it worthwhile to build a (potentially more expensive) adaptable system that can be adjusted in the future depending on the future conditions? Or is it more cost-effective to make a conservative design without accounting for possible future changes to the system? What is the optimal timing of the decision to build/adjust the system? We develop a quantitative decision-support framework for the evaluation of alternative infrastructure designs under uncertainty, which: • probabilistically models the uncertain future (through a Bayesian approach) • includes the adaptability of the systems (the costs of future changes) • takes into account the fact that future decisions will be made under uncertainty as well (using pre-posterior decision analysis) • allows identification of the optimal capacity and optimal timing to build/adjust the infrastructure. Application of the decision framework is demonstrated on an example of flood mitigation planning in Bavaria.

  2. A framework for optimization and quantification of uncertainty and sensitivity for developing carbon capture systems

    DOE PAGES

    Eslick, John C.; Ng, Brenda; Gao, Qianwen; ...

    2014-12-31

    Under the auspices of the U.S. Department of Energy's Carbon Capture Simulation Initiative (CCSI), a Framework for Optimization and Quantification of Uncertainty and Sensitivity (FOQUS) has been developed. This tool enables carbon capture systems to be rapidly synthesized and rigorously optimized, in an environment that accounts for and propagates uncertainties in parameters and models. FOQUS currently enables (1) the development of surrogate algebraic models utilizing the ALAMO algorithm, which can be used for superstructure optimization to identify optimal process configurations, (2) simulation-based optimization utilizing derivative free optimization (DFO) algorithms with detailed black-box process models, and (3) rigorous uncertainty quantification through PSUADE. FOQUS utilizes another CCSI technology, the Turbine Science Gateway, to manage the thousands of simulated runs necessary for optimization and UQ. Thus, this computational framework has been demonstrated for the design and analysis of a solid sorbent based carbon capture system.

  3. Optimizing Cognitive Load for Learning from Computer-Based Science Simulations

    ERIC Educational Resources Information Center

    Lee, Hyunjeong; Plass, Jan L.; Homer, Bruce D.

    2006-01-01

    How can cognitive load in visual displays of computer simulations be optimized? Middle-school chemistry students (N = 257) learned with a simulation of the ideal gas law. Visual complexity was manipulated by separating the display of the simulations in two screens (low complexity) or presenting all information on one screen (high complexity). The…

  4. Evaluating large-scale propensity score performance through real-world and synthetic data experiments.

    PubMed

    Tian, Yuxi; Schuemie, Martijn J; Suchard, Marc A

    2018-06-22

    Propensity score adjustment is a popular approach for confounding control in observational studies. Reliable frameworks are needed to determine relative propensity score performance in large-scale studies, and to establish optimal propensity score model selection methods. We detail a propensity score evaluation framework that includes synthetic and real-world data experiments. Our synthetic experimental design extends the 'plasmode' framework and simulates survival data under known effect sizes, and our real-world experiments use a set of negative control outcomes with presumed null effect sizes. In reproductions of two published cohort studies, we compare two propensity score estimation methods that contrast in their model selection approach: L1-regularized regression that conducts a penalized likelihood regression, and the 'high-dimensional propensity score' (hdPS) that employs a univariate covariate screen. We evaluate methods on a range of outcome-dependent and outcome-independent metrics. L1-regularized propensity score methods achieve superior model fit, covariate balance and negative control bias reduction compared with the hdPS. Simulation results are mixed and fluctuate with simulation parameters, revealing a limitation of simulation under the proportional hazards framework. Including regularization with the hdPS reduces commonly reported non-convergence issues but has little effect on propensity score performance. L1-regularization incorporates all covariates simultaneously into the propensity score model and offers propensity score performance superior to the hdPS marginal screen.
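
    A minimal sketch of the L1-regularized propensity score approach on synthetic data, with inverse-probability-of-treatment weights derived from the fitted scores; the data-generating process and penalty strength are assumptions, not the paper's settings.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic cohort: 20 candidate confounders, treatment driven by a few.
    n, p = 5000, 20
    X = rng.normal(size=(n, p))
    logit_t = 0.8 * X[:, 0] - 0.6 * X[:, 1] + 0.4 * X[:, 2]
    T = rng.binomial(1, 1 / (1 + np.exp(-logit_t)))

    # L1 penalty performs the variable selection that the hdPS attempts
    # with its univariate screen. C is the inverse regularization
    # strength; here fixed, in practice tuned by cross-validation.
    ps_model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    ps = ps_model.fit(X, T).predict_proba(X)[:, 1]

    # Inverse-probability-of-treatment weights for confounding control.
    w = np.where(T == 1, 1 / ps, 1 / (1 - ps))
    kept = np.abs(ps_model.coef_).ravel() > 0
    print("covariates retained by the L1 penalty:", kept.sum(), "of", p)
    print("weight range:", w.min().round(2), "-", w.max().round(2))
    ```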

  5. A new Monte Carlo-based treatment plan optimization approach for intensity modulated radiation therapy.

    PubMed

    Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2015-04-07

    Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet. Yet this is not the optimal use of MC in this problem. In fact, there are beamlets that have very small intensities after solving the plan optimization problem. For those beamlets, it may be possible to use fewer particles in dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for beamlets were adjusted based on the beamlet intensities obtained through solving the plan optimization problem in the last iteration step. We modified a GPU-based MC dose engine to allow simultaneous computations of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam and the summed beamlet doses in this beam in an inhomogeneous phantom. Agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization schemes in one lung IMRT case. It was found that the conventional scheme required 10^6 particles from each beamlet to achieve an optimization result within 3% of the ground truth in the fluence map and 1% in dose. In contrast, the proposed scheme achieved the same level of accuracy with on average 1.2 × 10^5 particles per beamlet. Correspondingly, the computation time including both MC dose calculations and plan optimizations was reduced by a factor of 4.4, from 494 to 113 s, using only one GPU card.
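
    A hedged sketch of the allocation idea only: particle counts per beamlet are rescaled in proportion to the current fluence-map intensities, with a floor so that no beamlet is starved. The MC dose engine and the fluence-map optimizer are faked placeholders here, not the authors' GPU implementation.

    ```python
    import numpy as np

    def allocate(intensities, total_particles, floor=1_000):
        """Distribute a particle budget in proportion to beamlet intensity."""
        frac = intensities / intensities.sum()
        return np.maximum(floor, (frac * total_particles).astype(int))

    rng = np.random.default_rng(1)
    intensity = np.ones(100)                 # start from a uniform fluence map
    for iteration in range(3):
        particles = allocate(intensity, total_particles=5_000_000)
        # ... run MC dose calculation with particles[i] histories per
        # beamlet, then re-solve the fluence-map optimization; faked here:
        intensity = np.maximum(1e-6, rng.normal(1.0, 0.8, size=100))
        print(f"iter {iteration}: min/max particles per beamlet:",
              particles.min(), particles.max())
    ```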

  6. Stromal-epithelial dynamics in response to fractionated radiotherapy

    NASA Astrophysics Data System (ADS)

    Rong, Panying

    The speech of individuals with velopharyngeal incompetency (VPI) is characterized by hypernasality, a speech quality related to excessive emission of acoustic energy through the nose, as caused by failure of velopharyngeal closure. As an attempt to reduce hypernasality and, in turn, improve the quality of VPI-related hypernasal speech, this study is dedicated to developing an approach that uses speech-dependent articulatory adjustments to reduce hypernasality caused by excessive velopharyngeal opening. A preliminary study has been done to derive such articulatory adjustments for hypernasal /i/ vowels based on the simulation of an articulatory model (Speech Processing and Synthesis Toolboxes, Childers (2000)). Both nasal /i/ vowels with and without articulatory adjustments were synthesized by the model. Spectral analysis found that nasal acoustic features were attenuated and oral formant structures were restored after articulatory adjustments. In addition, comparisons of perceptual ratings of nasality between the two types of nasal vowels showed that the articulatory adjustments generated by the model significantly reduced the perception of nasality for nasal /i/ vowels. Such articulatory adjustments for nasal /i/ have two patterns: 1) a consistent adjustment pattern, which corresponds to an expansion at the velopharynx, and 2) some speech-dependent fine-tuning adjustment patterns, including adjustments in the lip area and the upper pharynx. The long-term goal of this study is to apply this approach of articulatory adjustment as a therapeutic tool in clinical speech treatment to detect and correct the maladaptive articulatory behaviors developed spontaneously by speakers with VPI on an individual basis. This study constructed a speaker-adaptive articulatory model on the basis of the framework of Childers's vocal tract model to simulate articulatory adjustments aimed at compensating for the acoustic outcome caused by velopharyngeal opening and reducing nasality. To construct such a speaker-adaptive articulatory model, (1) an articulatory-acoustic-aerodynamic database was recorded using articulography and aerodynamic instruments to provide point-wise articulatory data to be fitted into the framework of Childers's standard vocal tract model; (2) the length and transverse dimension of the vocal tract were adjusted to fit the individual speaker by minimizing the acoustic discrepancy between the model simulation and the target derived from the acoustic signal in the database using the simulated annealing algorithm; (3) the articulatory space of the model was adjusted to fit individual articulatory features by adapting the movement ranges of all articulators. With the speaker-adaptive articulatory model, the articulatory configurations of the oral and nasal vowels in the database were simulated and synthesized. Given the acoustic targets derived from the oral vowels in the database, speech-dependent articulatory adjustments were simulated to compensate for the acoustic outcome caused by VPO. The resultant articulatory configurations correspond to nasal vowels with articulatory adjustment, which were synthesized to serve as the perceptual stimuli for a listening task of nasality rating. The oral and nasal vowels synthesized based on the oral and nasal vowel targets in the database also served as perceptual stimuli. The results suggest both acoustic and perceptual effects of the model-generated articulatory adjustment on the nasal vowels /a/, /i/ and /u/.
    In terms of acoustics, the articulatory adjustment (1) restores the altered formant structures due to nasal coupling, including shifted formant frequency, attenuated formant intensity and expanded formant bandwidth, and (2) attenuates the peaks and zeros caused by nasal resonances. Perceptually, the articulatory adjustment generated by the speaker-adaptive model significantly reduces the perceived nasality for all three vowels (/a/, /i/, /u/). The acoustic and perceptual effects of articulatory adjustment suggest achievement of the acoustic goal of compensating for the acoustic discrepancy caused by VPO and the auditory goal of reducing the perception of nasality. Such a finding is consistent with motor equivalence (Hughes and Abbs, 1976; Maeda, 1990), which enables inter-articulator coordination to compensate for the deviation from the acoustic/auditory goal caused by the shifted position of an articulator. The articulatory adjustment responsible for the acoustic and perceptual effects described above was decomposed into a set of empirical orthogonal modes (Story and Titze, 1998). Both gross articulatory patterns and fine-tuning adjustments were found in the principal orthogonal modes, which lead to the acoustic compensation and reduction of nasality. For /a/ and /i/, a direct relationship was found among the acoustic features, nasality, and articulatory adjustment patterns. Specifically, the articulatory adjustments indicated by the principal orthogonal modes of the adjusted nasal /a/ and /i/ were directly correlated with the attenuation of the acoustic cues of nasality (i.e., shifting of F1 and F2 frequencies) and the reduction of the nasality rating. For /u/, such a direct relationship among the acoustic features, nasality and articulatory adjustment was not as prominent, suggesting the possibility of additional acoustic correlates of nasality other than F1 and F2. The findings of this study demonstrate the possibility of using articulatory adjustment to reduce the perception of nasality through model simulation. A speaker-adaptive articulatory model is able to simulate individual-based articulatory adjustment strategies that can be applied in clinical settings to serve as the articulatory targets for correction of the maladaptive articulatory behaviors developed spontaneously by speakers with hypernasal speech. Such a speaker-adaptive articulatory model provides an intuitive way of articulatory learning and self-training for speakers with VPI to learn appropriate articulatory strategies through model-speaker interaction.

  7. Prediction of the optimum surface orientation angles to achieve maximum solar radiation using Particle Swarm Optimization in Sabha City Libya

    NASA Astrophysics Data System (ADS)

    Mansour, F. A.; Nizam, M.; Anwar, M.

    2017-02-01

    This research aims to predict the optimum surface orientation angles for solar panel installation to achieve maximum solar radiation. Incident solar radiation is calculated using the Koronakis mathematical model. Particle Swarm Optimization (PSO) is used as the computational method to find the optimum orientation angles for solar panel installation in order to obtain maximum solar radiation. A series of simulations was carried out to calculate solar radiation based on monthly, seasonal, semi-yearly and yearly adjustment periods. South-facing installation, which corresponds to an azimuth of 0°, was also calculated as a comparison to the proposed method. The proposed method attains higher incident predictions than south-facing, recording 2511.03 kWh/m2 for the monthly period, and about 2486.49 kWh/m2, 2482.13 kWh/m2 and 2367.68 kWh/m2 for the seasonal, semi-yearly and yearly periods. South-facing predicted approximately 2496.89 kWh/m2, 2472.40 kWh/m2, 2468.96 kWh/m2 and 2356.09 kWh/m2 for the monthly, seasonal, semi-yearly and yearly periods, respectively. Semi-yearly is the best choice because it requires only two adjustments of the solar panel per year; adjusting the panel every season or every month is considered inefficient, since it yields no significant increase in solar radiation over the semi-yearly schedule, and solar tracking devices are still considered costly in solar energy systems. PSO was able to predict accurately, and it is conceptually simple, easy to implement and computationally efficient. This has been proven by finding the best fitness faster.
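
    A compact PSO sketch for the two orientation angles. The irradiance function below is a smooth toy surface standing in for the Koronakis model, and the swarm constants are conventional defaults, not the paper's settings.

    ```python
    import numpy as np
    rng = np.random.default_rng(5)

    # Toy annual radiation as a function of tilt (beta, deg) and azimuth
    # (gamma, deg from south), peaking near beta=27, gamma=0 (invented).
    def radiation(beta, gamma):
        return 2500 - 0.05 * (beta - 27.0) ** 2 - 0.02 * gamma ** 2

    n, dim = 30, 2
    lo, hi = np.array([0.0, -90.0]), np.array([90.0, 90.0])
    X = lo + rng.random((n, dim)) * (hi - lo)
    V = np.zeros((n, dim))
    pbest, pval = X.copy(), np.array([radiation(*x) for x in X])
    gbest = pbest[pval.argmax()]

    w, c1, c2 = 0.7, 1.5, 1.5               # inertia and acceleration constants
    for _ in range(200):
        r1, r2 = rng.random((2, n, dim))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = np.clip(X + V, lo, hi)
        vals = np.array([radiation(*x) for x in X])
        better = vals > pval
        pbest[better], pval[better] = X[better], vals[better]
        gbest = pbest[pval.argmax()]

    print(f"optimal tilt={gbest[0]:.1f} deg, azimuth={gbest[1]:.1f} deg, "
          f"radiation={radiation(*gbest):.1f} kWh/m2")
    ```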

  8. Automatic CT simulation optimization for radiation therapy: A general strategy.

    PubMed

    Li, Hua; Yu, Lifeng; Anastasio, Mark A; Chen, Hsin-Chen; Tan, Jun; Gay, Hiram; Michalski, Jeff M; Low, Daniel A; Mutic, Sasa

    2014-03-01

    In radiation therapy, x-ray computed tomography (CT) simulation protocol specifications should be driven by the treatment planning requirements in lieu of duplicating diagnostic CT screening protocols. The purpose of this study was to develop a general strategy that allows for automatically, prospectively, and objectively determining the optimal patient-specific CT simulation protocols based on radiation-therapy goals, namely, maintenance of contouring quality and integrity while minimizing patient CT simulation dose. The authors proposed a general prediction strategy that provides automatic optimal CT simulation protocol selection as a function of patient size and treatment planning task. The optimal protocol is the one that delivers the minimum dose required to provide a CT simulation scan that yields accurate contours. Accurate treatment plans depend on accurate contours in order to conform the dose to actual tumor and normal organ positions. An image quality index, defined to characterize how simulation scan quality affects contour delineation, was developed and used to benchmark the contouring accuracy and treatment plan quality within the prediction strategy. A clinical workflow was developed to select the optimal CT simulation protocols incorporating patient size, target delineation, and radiation dose efficiency. An experimental study using an anthropomorphic pelvis phantom with added-bolus layers was used to demonstrate how the proposed prediction strategy could be implemented and how the optimal CT simulation protocols could be selected for prostate cancer patients based on patient size and treatment planning task. Clinical IMRT prostate treatment plans for seven CT scans with varied image quality indices were separately optimized and compared to verify the trace of target and organ dosimetry coverage. Based on the phantom study, the optimal image quality index for accurate manual prostate contouring was 4.4. The optimal tube potentials for patient sizes of 38, 43, 48, 53, and 58 cm were 120, 140, 140, 140, and 140 kVp, respectively, and the corresponding minimum CTDIvol for achieving the optimal image quality index 4.4 were 9.8, 32.2, 100.9, 241.4, and 274.1 mGy, respectively. For patients with lateral sizes of 43-58 cm, 120-kVp scan protocols yielded up to 165% greater radiation dose relative to 140-kVp protocols, and 140-kVp protocols always yielded a greater image quality index compared to the same dose-level 120-kVp protocols. The trace of target and organ dosimetry coverage and the γ passing rates of seven IMRT dose distribution pairs indicated the feasibility of the proposed image quality index for the prediction strategy. A general strategy to predict the optimal CT simulation protocols in a flexible and quantitative way was developed that takes into account patient size, treatment planning task, and radiation dose. The experimental study indicated that the optimal CT simulation protocol and the corresponding radiation dose varied significantly for different patient sizes, contouring accuracy, and radiation treatment planning tasks.

  9. Environmental Chemicals in Urine and Blood: Improving Methods for Creatinine and Lipid Adjustment.

    PubMed

    O'Brien, Katie M; Upson, Kristen; Cook, Nancy R; Weinberg, Clarice R

    2016-02-01

    Investigators measuring exposure biomarkers in urine typically adjust for creatinine to account for dilution-dependent sample variation in urine concentrations. Similarly, it is standard to adjust for serum lipids when measuring lipophilic chemicals in serum. However, there is controversy regarding the best approach, and existing methods may not effectively correct for measurement error. We compared adjustment methods, including novel approaches, using simulated case-control data. Using a directed acyclic graph framework, we defined six causal scenarios for epidemiologic studies of environmental chemicals measured in urine or serum. The scenarios include variables known to influence creatinine (e.g., age and hydration) or serum lipid levels (e.g., body mass index and recent fat intake). Over a range of true effect sizes, we analyzed each scenario using seven adjustment approaches and estimated the corresponding bias and confidence interval coverage across 1,000 simulated studies. For urinary biomarker measurements, our novel method, which incorporates both covariate-adjusted standardization and the inclusion of creatinine as a covariate in the regression model, had low bias and possessed 95% confidence interval coverage of nearly 95% for most simulated scenarios. For serum biomarker measurements, a similar approach involving standardization plus serum lipid level adjustment generally performed well. To control measurement error bias caused by variations in serum lipids or by urinary diluteness, we recommend improved methods for standardizing exposure concentrations across individuals.
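    A rough sketch of one plausible reading of the recommended urinary method (covariate-adjusted standardization plus creatinine as a regression covariate); the simulated covariates, coefficients, and data-generating model below are invented for illustration:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 1000
    age = rng.normal(50, 10, n)
    hydration = rng.normal(0, 1, n)                       # dilution driver
    creatinine = np.exp(0.2 + 0.01 * age - 0.3 * hydration
                        + rng.normal(0, 0.2, n))
    true_exposure = rng.lognormal(0, 0.5, n)
    measured = true_exposure * creatinine                 # varies with diluteness
    p = 1 / (1 + np.exp(-0.5 * np.log(true_exposure)))
    y = (rng.random(n) < p).astype(float)                 # case/control outcome

    # Step 1: covariate-adjusted standardization -- divide the measured
    # concentration by the ratio of observed to covariate-predicted creatinine.
    X_cr = sm.add_constant(np.column_stack([age, hydration]))
    cr_fit = sm.OLS(np.log(creatinine), X_cr).fit()
    predicted_cr = np.exp(cr_fit.fittedvalues)
    standardized = measured / (creatinine / predicted_cr)

    # Step 2: also include creatinine itself as a covariate in the outcome model.
    X_out = sm.add_constant(np.column_stack([np.log(standardized),
                                             np.log(creatinine)]))
    print(sm.Logit(y, X_out).fit(disp=0).params)
    ```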

  10. Program to Optimize Simulated Trajectories (POST). Volume 2: Utilization manual

    NASA Technical Reports Server (NTRS)

    Bauer, G. L.; Cornick, D. E.; Habeger, A. R.; Petersen, F. M.; Stevenson, R.

    1975-01-01

    Information pertinent to users of the program to optimize simulated trajectories (POST) is presented. The input required and output available is described for each of the trajectory and targeting/optimization options. A sample input listing and resulting output are given.

  11. Minimizing pre- and post-defibrillation pauses increases the likelihood of return of spontaneous circulation (ROSC).

    PubMed

    Sell, Rebecca E; Sarno, Renee; Lawrence, Brenna; Castillo, Edward M; Fisher, Roger; Brainard, Criss; Dunford, James V; Davis, Daniel P

    2010-07-01

    The three-phase model of ventricular fibrillation (VF) arrest suggests a period of compressions to "prime" the heart prior to defibrillation attempts. In addition, post-shock compressions may increase the likelihood of return of spontaneous circulation (ROSC). The optimal intervals for shock delivery following cessation of compressions (pre-shock interval) and resumption of compressions following a shock (post-shock interval) remain unclear. To define optimal pre- and post-defibrillation compression pauses for out-of-hospital cardiac arrest (OOHCA). All patients suffering OOHCA from VF were identified over a 1-month period. Defibrillator data were abstracted and analyzed using the combination of ECG, impedance, and audio recording. Receiver operating characteristic (ROC) analysis was used to define the optimal pre- and post-shock compression intervals. Multiple logistic regression analysis was used to quantify the relationship between these intervals and ROSC. Covariates included cumulative number of defibrillation attempts, intubation status, and administration of epinephrine in the immediate pre-shock compression cycle. Cluster adjustment was performed due to the possibility of multiple defibrillation attempts for each patient. A total of 36 patients with 96 defibrillation attempts were included. The ROC analysis identified an optimal pre-shock interval of <3 s and an optimal post-shock interval of <6 s. Increased likelihood of ROSC was observed with a pre-shock interval <3 s (adjusted OR 6.7, 95% CI 2.0-22.3, p=0.002) and a post-shock interval <6 s (adjusted OR 10.7, 95% CI 2.8-41.4, p=0.001). Likelihood of ROSC was substantially increased with the optimization of both pre- and post-shock intervals (adjusted OR 13.1, 95% CI 3.4-49.9, p<0.001). Decreasing pre- and post-shock compression intervals increases the likelihood of ROSC in OOHCA from VF.
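    The optimal-cutoff search can be illustrated with a simple Youden-index scan over candidate pause thresholds; the synthetic pause/ROSC data are invented, and the sketch ignores the covariates and cluster adjustment used in the actual analysis:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic per-shock data: pre-shock pause (s) and ROSC outcome, with
    # shorter pauses made more likely to yield ROSC.
    pause = rng.exponential(4.0, 96)
    rosc = (rng.random(96) < 1 / (1 + np.exp(0.8 * (pause - 3.0)))).astype(int)

    def youden_optimal_cutoff(x, y):
        """Scan cutoffs; maximize sensitivity + specificity - 1 for the
        classifier 'predict ROSC when the pause is below the cutoff'."""
        best_c, best_j = None, -1.0
        for c in np.unique(x):
            pred = (x < c).astype(int)
            sens = (pred[y == 1] == 1).mean()
            spec = (pred[y == 0] == 0).mean()
            j = sens + spec - 1.0
            if j > best_j:
                best_c, best_j = c, j
        return best_c, best_j

    print(youden_optimal_cutoff(pause, rosc))   # cutoff lands near 3 s
    ```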

  12. Improving Efficiency of Passive RFID Tag Anti-Collision Protocol Using Dynamic Frame Adjustment and Optimal Splitting.

    PubMed

    Memon, Muhammad Qasim; He, Jingsha; Yasir, Mirza Ammar; Memon, Aasma

    2018-04-12

    Radio frequency identification is a wireless communication technology that enables data gathering and identification of any tagged object. The collisions produced during wireless communication lead to a variety of problems, including an unwanted number of iterations, reader-induced idle slots, and computational complexity in estimating and recognizing the number of tags. In this work, dynamic frame adjustment and optimal splitting are employed together in the proposed algorithm. In the dynamic frame adjustment method, the frame length is based on the quantity of tags so as to yield optimal efficiency. The optimal splitting method shortens the duration of idle slots by using an optimal value for the splitting level M_opt (with M > 2) to vary slot sizes, minimizing the identification time lost to idle slots. The proposed algorithm offers the advantages of avoiding the cumbersome estimation of the number of tags, and the number of tags has no effect on its performance efficiency. Our experimental results show that with the proposed algorithm the efficiency curve remains constant as the number of tags varies from 50 to 450, giving an overall theoretical efficiency gain of 0.032 over a system efficiency of 0.441 and thus outperforming both the dynamic binary tree slotted ALOHA (DBTSA) and binary splitting protocols.
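    The dynamic frame adjustment idea alone (without the paper's optimal splitting component) can be simulated compactly: setting each frame length to the number of still-unidentified tags keeps framed-slotted-ALOHA efficiency near 1/e regardless of tag population. All numbers here are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def read_cycle(n_tags, frame_len):
        """One framed-slotted-ALOHA frame: each tag picks a slot uniformly.
        Returns the number of singleton (identified) slots."""
        slots = rng.integers(0, frame_len, n_tags)
        counts = np.bincount(slots, minlength=frame_len)
        return int((counts == 1).sum())

    def system_efficiency(n_tags):
        total_slots, remaining = 0, n_tags
        while remaining > 0:
            frame = max(remaining, 1)       # frame length ~= tag estimate
            identified = read_cycle(remaining, frame)
            total_slots += frame
            remaining -= identified
        return n_tags / total_slots

    for n in (50, 250, 450):                # efficiency stays ~0.36 for all n
        print(n, round(system_efficiency(n), 3))
    ```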

  13. Surrogates for numerical simulations; optimization of eddy-promoter heat exchangers

    NASA Technical Reports Server (NTRS)

    Patera, Anthony T.

    1993-01-01

    Although the advent of fast and inexpensive parallel computers has rendered numerous previously intractable calculations feasible, many numerical simulations remain too resource-intensive to be directly inserted in engineering optimization efforts. An attractive alternative to direct insertion considers models for computational systems: the expensive simulation is evoked only to construct and validate a simplified, input-output model; this simplified input-output model then serves as a simulation surrogate in subsequent engineering optimization studies. A simple 'Bayesian-validated' statistical framework for the construction, validation, and purposive application of static computer simulation surrogates is presented. As an example, dissipation-transport optimization of laminar-flow eddy-promoter heat exchangers are considered: parallel spectral element Navier-Stokes calculations serve to construct and validate surrogates for the flowrate and Nusselt number; these surrogates then represent the originating Navier-Stokes equations in the ensuing design process.
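    A bare-bones version of the surrogate workflow, with a polynomial fit and a held-out validation check standing in for the paper's Bayesian-validated framework; the "expensive simulation", sample sizes, and tolerance logic are assumptions for illustration:

    ```python
    import numpy as np

    def expensive_simulation(x):
        # Stand-in for a costly evaluation such as a Navier-Stokes solve
        # returning, e.g., a Nusselt number for design parameter x.
        return np.sin(3 * x) + 0.5 * x ** 2

    rng = np.random.default_rng(2)
    x_train = rng.uniform(0, 2, 30)
    y_train = expensive_simulation(x_train)

    # Construct the surrogate: a cheap input-output model of the simulation.
    surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=7))

    # Validate on fresh points before trusting it in optimization.
    x_val = rng.uniform(0, 2, 20)
    err = np.abs(surrogate(x_val) - expensive_simulation(x_val))
    print("max validation error:", err.max())   # accept only if within tolerance

    # The validated surrogate replaces the simulation in the design loop.
    grid = np.linspace(0, 2, 1001)
    print("surrogate optimum at x =", grid[np.argmin(surrogate(grid))])
    ```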

  14. Pilot-scale treatment of atrazine production wastewater by UV/O3/ultrasound: Factor effects and system optimization.

    PubMed

    Jing, Liang; Chen, Bing; Wen, Diya; Zheng, Jisi; Zhang, Baiyu

    2017-12-01

    This study shed light on removing atrazine from pesticide production wastewater using a pilot-scale UV/O₃/ultrasound flow-through system. A significant quadratic polynomial prediction model with an adjusted R² of 0.90 was obtained from a central composite design with response surface methodology. The optimal atrazine removal rate (97.68%) was obtained at 75 W UV power, 10.75 g h⁻¹ O₃ flow rate and 142.5 W ultrasound power. A Monte Carlo simulation aided artificial neural networks model was further developed to quantify the importance of O₃ flow rate (40%), UV power (30%) and ultrasound power (30%). Their individual and interaction effects were also discussed in terms of reaction kinetics. UV and ultrasound could both enhance the decomposition of O₃ and promote hydroxyl radical (OH·) formation. Nonetheless, the dose of O₃ was the dominant factor and must be optimized because excess O₃ can react with OH·, thereby reducing the rate of atrazine degradation. The presence of other organic compounds in the background matrix appreciably inhibited the degradation of atrazine, while the effects of Cl⁻, CO₃²⁻ and HCO₃⁻ were comparatively negligible. It was concluded that the optimization of system performance using response surface methodology and neural networks would be beneficial for scaling up UV/O₃/ultrasound treatment to the industrial level. Copyright © 2017 Elsevier Ltd. All rights reserved.
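    The response-surface step can be sketched as fitting a full quadratic model over the three factors from a small factorial design and reading off the best setting; the synthetic response is shaped to peak near the reported optimum, and the design, spans and noise level are invented:

    ```python
    import numpy as np
    from itertools import combinations_with_replacement

    rng = np.random.default_rng(3)
    center = np.array([75.0, 10.75, 142.5])      # UV (W), O3 (g/h), US (W)
    span = np.array([25.0, 4.0, 40.0])

    def removal(uv, o3, us):
        # Synthetic stand-in response peaking near the reported optimum.
        return (97.7 - 0.002 * (uv - 75) ** 2 - 0.5 * (o3 - 10.75) ** 2
                - 0.001 * (us - 142.5) ** 2)

    # Three-level factorial design in coded units (-1, 0, +1).
    levels = (-1.0, 0.0, 1.0)
    X = np.array([[a, b, c] for a in levels for b in levels for c in levels])
    y = np.array([removal(*(center + x * span)) for x in X])
    y += rng.normal(0, 0.1, len(y))              # measurement noise

    def quad_features(x):
        # Intercept, linear, and all second-order (square + interaction) terms.
        return [1.0, *x] + [x[i] * x[j] for i, j in
                            combinations_with_replacement(range(3), 2)]

    A = np.array([quad_features(x) for x in X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    best = X[np.argmax(A @ beta)]                # best design point; a continuous
    print(center + best * span)                  # optimizer could refine this
    ```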

  15. Instrumentation and optimization of intra-cavity fiber laser gas absorption sensing system

    NASA Astrophysics Data System (ADS)

    Liu, Kun; Liu, Tiegen; Jiang, Junfeng; Liang, Xiao; Zhang, Yimo

    2011-11-01

    Detection of polluting, inflammable and explosive gases such as methane, acetylene and carbon monoxide is very important in many areas, such as the environmental, mining and petrochemical industries. The intra-cavity gas absorption sensing technique (ICGAST) based on an Erbium-doped fiber ring laser (EDFRL) is a novel method for trace-gas detection with high precision. It has attracted considerable attention, and many research institutes focus on it. The instrumentation and optimization of ICGAST are reported in this paper. The system consists of five parts: a variable gain module, an intelligent frequency-selection module, a gas cell, a DAQ module and a computer. The variable gain module and the intelligent frequency-selection module are combined to establish the intra-cavity of the ring laser. The gas cell is used as the gas sensor. The DAQ module performs synchronous data acquisition, and gas demodulation is finally carried out in the computer. The system was optimized by adjusting the sequence of the components. In an experimental simulation, for example, the gas absorptance was increased five-fold after optimization, and the sensitivity enhancement factor can reach more than twenty. By using a Fabry-Perot (F-P) etalon, the absorption wavelength of the detected gas can be obtained with an error of less than 20 pm. The spectrum of the detected gas can be swept continuously to obtain several absorption lines in one loop. The coefficient of variation (CV) was used to show the repeatability of gas concentration detection, and CV values of less than 0.014 were achieved.

  16. Additional double-wall roof in single-wall, closed, convective incubators: Impact on body heat loss from premature infants and optimal adjustment of the incubator air temperature.

    PubMed

    Delanaud, Stéphane; Decima, Pauline; Pelletier, Amandine; Libert, Jean-Pierre; Stephan-Blanchard, Erwan; Bach, Véronique; Tourneux, Pierre

    2016-09-01

    Radiant heat loss is high in low-birth-weight (LBW) neonates. Double-wall incubators, or single-wall incubators with an additional double-wall roof panel that can be removed during phototherapy, are used to reduce radiant heat loss. There are no data on how the incubators should be used when this second roof panel is removed. The aim of the study was to assess the heat exchanges of LBW neonates in a single-wall incubator with and without the additional roof panel, and to determine the optimal thermoneutral incubator air temperature. The influence of the additional double-wall roof was assessed by using a thermal mannequin simulating an LBW neonate. We then calculated the optimal incubator air temperature for a cohort of human LBW neonates in the absence of the additional roof panel. Twenty-three LBW neonates (birth weight: 750-1800 g; gestational age: 28-32 weeks) were included. With the additional roof panel, radiant heat loss was lower but convective and evaporative skin heat losses were greater. This difference can be overcome by increasing the incubator air temperature by 0.15-0.20°C. The benefit of the additional roof panel was cancelled out by greater body heat losses through other routes. Understanding the heat transfers between the neonate and the environment is essential for optimizing incubators. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.

  17. A computational approach to compare regression modelling strategies in prediction research.

    PubMed

    Pajouheshnia, Romin; Pestman, Wiebe R; Teerenstra, Steven; Groenwold, Rolf H H

    2016-08-25

    It is often unclear which approach to fit, assess and adjust a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in clinical prediction research. A framework for comparing logistic regression modelling strategies by their likelihoods was formulated using a wrapper approach. Five different strategies for modelling, including simple shrinkage methods, were compared in four empirical data sets to illustrate the concept of a priori strategy comparison. Simulations were performed in both randomly generated data and empirical data to investigate the influence of data characteristics on strategy performance. We applied the comparison framework in a case study setting. Optimal strategies were selected based on the results of a priori comparisons in a clinical data set and the performance of models built according to each strategy was assessed using the Brier score and calibration plots. The performance of modelling strategies was highly dependent on the characteristics of the development data in both linear and logistic regression settings. A priori comparisons in four empirical data sets found that no strategy consistently outperformed the others. The percentage of times that a model adjustment strategy outperformed a logistic model ranged from 3.9 to 94.9 %, depending on the strategy and data set. However, in our case study setting the a priori selection of optimal methods did not result in detectable improvement in model performance when assessed in an external data set. The performance of prediction modelling strategies is a data-dependent process and can be highly variable between data sets within the same clinical domain. A priori strategy comparison can be used to determine an optimal logistic regression modelling strategy for a given data set before selecting a final modelling approach.

  18. Effects of running with backpack loads during simulated gravitational transitions: Improvements in postural control

    NASA Astrophysics Data System (ADS)

    Brewer, Jeffrey David

    The National Aeronautics and Space Administration is planning for long-duration manned missions to the Moon and Mars. For feasible long-duration space travel, improvements in exercise countermeasures are necessary to maintain cardiovascular fitness, bone mass throughout the body and the ability to perform coordinated movements in a constant gravitational environment that is six orders of magnitude higher than the "near weightlessness" condition experienced during transit to and/or orbit of the Moon, Mars, and Earth. In such gravitational transitions feedback and feedforward postural control strategies must be recalibrated to ensure optimal locomotion performance. In order to investigate methods of improving postural control adaptation during these gravitational transitions, a treadmill based precision stepping task was developed to reveal changes in neuromuscular control of locomotion following both simulated partial gravity exposure and post-simulation exercise countermeasures designed to speed lower extremity impedance adjustment mechanisms. The exercise countermeasures included a short period of running with or without backpack loads immediately after partial gravity running. A novel suspension type partial gravity simulator incorporating spring balancers and a motor-driven treadmill was developed to facilitate body weight off loading and various gait patterns in both simulated partial and full gravitational environments. Studies have provided evidence that suggests: the environmental simulator constructed for this thesis effort does induce locomotor adaptations following partial gravity running; the precision stepping task may be a helpful test for illuminating these adaptations; and musculoskeletal loading in the form of running with or without backpack loads may improve the locomotor adaptation process.

  19. Risk Selection, Risk Adjustment and Choice: Concepts and Lessons from the Americas

    PubMed Central

    Ellis, Randall P.; Fernandez, Juan Gabriel

    2013-01-01

    Interest has grown worldwide in risk adjustment and risk sharing due to their potential to contain costs, improve fairness, and reduce selection problems in health care markets. Significant steps have been made in the empirical development of risk adjustment models, and in the theoretical foundations of risk adjustment and risk sharing. This literature has often modeled the effects of risk adjustment without highlighting the institutional setting, regulations, and diverse selection problems that risk adjustment is intended to fix. Perhaps because of this, the existing literature and its recommendations for optimal risk adjustment or optimal payment systems are sometimes confusing. In this paper, we present a unified way of thinking about the organizational structure of health care systems, which enables us to focus on two key dimensions of markets that have received less attention: what choices are available that may lead to selection problems, and what financial or regulatory tools other than risk adjustment are used to influence these choices. We specifically examine the health care systems, choices, and problems in four countries (the US, Canada, Chile, and Colombia), examine the relationship between selection-related efficiency and fairness problems and the choices that are allowed in each country, and discuss recent regulatory reforms that affect choices and selection problems. In this sample, countries and insurance programs with more choices have more selection problems. PMID:24284351

  20. Beam-energy-spread minimization using cell-timing optimization

    NASA Astrophysics Data System (ADS)

    Rose, C. R.; Ekdahl, C.; Schulze, M.

    2012-04-01

    Beam energy spread, and related beam motion, increase the difficulty in tuning for multipulse radiographic experiments at the dual-axis radiographic hydrodynamic test facility’s axis-II linear induction accelerator (LIA). In this article, we describe an optimization method to reduce the energy spread by adjusting the timing of the cell voltages (both unloaded and loaded), either advancing or retarding, such that the injector voltage and summed cell voltages in the LIA result in a flatter energy profile. We developed a nonlinear optimization routine which accepts as inputs the 74 cell-voltage, injector voltage, and beam current waveforms. It optimizes cell timing per user-selected groups of cells and outputs timing adjustments, one for each of the selected groups. To verify the theory, we acquired and present data for both unloaded and loaded cell-timing optimizations. For the unloaded cells, the preoptimization baseline energy spread was reduced by 34% and 31% for two shots as compared to baseline. For the loaded-cell case, the measured energy spread was reduced by 49% compared to baseline.
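    The optimization can be sketched with idealized waveforms: shift the trigger times of a few cell groups so that the summed accelerating voltage is flattest over a tuning window. The pulse shape, group times, and window below are invented stand-ins for the 74 measured cell-voltage waveforms:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    t = np.linspace(0.0, 100.0, 1001)                  # time axis (arbitrary ns)

    def cell_waveform(t0):
        # Idealized cell-voltage pulse: smooth rise at t0, slight droop, fall.
        rise = 0.5 * (1 + np.tanh((t - t0) / 3.0))
        fall = 0.5 * (1 - np.tanh((t - t0 - 60.0) / 3.0))
        droop = 1.0 - 0.002 * np.clip(t - t0, 0.0, None)
        return rise * fall * droop

    group_t0 = np.array([10.0, 12.0, 9.0, 14.0])       # 4 cell groups (invented)
    injector = 4.0 * cell_waveform(8.0)

    def flatness(shifts):
        total = injector + sum(cell_waveform(t0 + dt)
                               for t0, dt in zip(group_t0, shifts))
        window = (t > 30.0) & (t < 60.0)               # flat-top tuning region
        return float(np.std(total[window]))            # proxy for energy spread

    res = minimize(flatness, np.zeros(len(group_t0)), method="Nelder-Mead")
    print("timing adjustments:", np.round(res.x, 2))
    print("spread before/after:", flatness(np.zeros(4)), res.fun)
    ```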

  1. OptFlux: an open-source software platform for in silico metabolic engineering.

    PubMed

    Rocha, Isabel; Maia, Paulo; Evangelista, Pedro; Vilaça, Paulo; Soares, Simão; Pinto, José P; Nielsen, Jens; Patil, Kiran R; Ferreira, Eugénio C; Rocha, Miguel

    2010-04-19

    Over the last few years a number of methods have been proposed for the phenotype simulation of microorganisms under different environmental and genetic conditions. These have been used as the basis to support the discovery of successful genetic modifications of the microbial metabolism to address industrial goals. However, the use of these methods has been restricted to bioinformaticians or other expert researchers. The main aim of this work is, therefore, to provide a user-friendly computational tool for Metabolic Engineering applications. OptFlux is an open-source and modular software aimed at being the reference computational application in the field. It is the first tool to incorporate strain optimization tasks, i.e., the identification of Metabolic Engineering targets, using Evolutionary Algorithms/Simulated Annealing metaheuristics or the previously proposed OptKnock algorithm. It also allows the use of stoichiometric metabolic models for (i) phenotype simulation of both wild-type and mutant organisms, using the methods of Flux Balance Analysis, Minimization of Metabolic Adjustment or Regulatory on/off Minimization of Metabolic flux changes, (ii) Metabolic Flux Analysis, computing the admissible flux space given a set of measured fluxes, and (iii) pathway analysis through the calculation of Elementary Flux Modes. OptFlux also contemplates several methods for model simplification and other pre-processing operations aimed at reducing the search space for optimization algorithms. The software supports importing/exporting to several flat file formats and it is compatible with the SBML standard. OptFlux has a visualization module that allows the analysis of the model structure that is compatible with the layout information of Cell Designer, allowing the superimposition of simulation results with the model graph. The OptFlux software is freely available, together with documentation and other resources, thus bridging the gap from research in strain optimization algorithms and the final users. It is a valuable platform for researchers in the field that have available a number of useful tools. Its open-source nature invites contributions by all those interested in making their methods available for the community. Given its plug-in based architecture it can be extended with new functionalities. Currently, several plug-ins are being developed, including network topology analysis tools and the integration with Boolean network based regulatory models.

  3. [Simulation on remediation of benzene contaminated groundwater by air sparging].

    PubMed

    Fan, Yan-Ling; Jiang, Lin; Zhang, Dan; Zhong, Mao-Sheng; Jia, Xiao-Yang

    2012-11-01

    Air sparging (AS) is an in situ remediation technology used in groundwater remediation for pollution with volatile organic compounds (VOCs). At present, the field design of air sparging systems is mainly based on experience due to the lack of field data. In order to obtain rational design parameters, the TMVOC module in the Petrasim software package, combined with field test results from a coking plant in Beijing, was used to optimize the design parameters and simulate the remediation process. The pilot test showed that the optimal injection rate was 23.2 m³·h⁻¹, while the optimal radius of influence (ROI) was 5 m. The simulation results revealed that the pressure response simulated by the model matched the field test results well, indicating a good representation by the simulation. The optimization results indicated that the optimal injection location was at the bottom of the aquifer. Furthermore, when simulated at the optimized injection location, the optimal injection rate was 20 m³·h⁻¹, in accordance with the field test result. The optimal ROI was 3 m, less than the field test result, mainly because the field test reflected the flow behavior in the upper part of the groundwater and the unsaturated zone, where the width of the air flow increases rapidly and becomes larger than the actual one. With the above optimized operation parameters, in addition to the hydro-geological parameters measured on site, the model simulation revealed that 90 days were needed to remediate the benzene at the site from 371,000 µg·L⁻¹ to 1 µg·L⁻¹, and that the operation mode in which injection wells were progressively turned off once the groundwater around them was "clean" was better than keeping all the wells operating throughout the remediation process.

  4. Simulation as a learning strategy: supporting undergraduate nursing students with disabilities.

    PubMed

    Azzopardi, Toni; Johnson, Amanda; Phillips, Kirrilee; Dickson, Cathy; Hengstberger-Sims, Cecily; Goldsmith, Mary; Allan, Trevor

    2014-02-01

    To promote simulation as a learning strategy to support undergraduate nursing students with disabilities. Supporting undergraduate nursing students with disabilities has gained further momentum because of amendments to the Disability Discrimination Act in 2009. Providers of higher education must now ensure that proactive steps to prevent discrimination against students with a disability are implemented to assist in course progression. Simulation allows the impact of a student's disability to be assessed and informs the determination of reasonable adjustments to be implemented. Further suitable adjustments can then be determined in a safe environment and evaluated prior to scheduled placement. Auditing in this manner offers a risk management strategy for all while maintaining the academic integrity of the program. Discursive. Low-, medium- and high-fidelity simulation activities were critically analysed and their application to supporting undergraduate nursing students with disabilities was assessed. With advancing technology and new pedagogical approaches, simulation as a learning strategy can play a significant role. In this role, simulation supports undergraduate nursing students with disabilities in meeting course requirements, while offering higher education providers an important risk management strategy. The discussion recommends that simulation be used to inform the determination of reasonable adjustments for undergraduate nursing students with disabilities as an effective, contemporary curriculum practice. Adoption of simulation in this way will meet three imperatives: comply with current legislative requirements, embrace advances in learning technologies and embed one of the six principles of inclusive curriculum. Achieving these imperatives is likely to increase accessibility for all students and offer students with a disability a supportive learning experience. It provides the capacity to systematically assess, monitor, evaluate and support students with a disability. The students' reasonable adjustments can be determined prior to attending clinical practice to minimise risks and ensure the safety of all. © 2013 Blackwell Publishing Ltd.

  5. Improved ant colony optimization for optimal crop and irrigation water allocation by incorporating domain knowledge

    USDA-ARS?s Scientific Manuscript database

    An improved ant colony optimization (ACO) formulation for the allocation of crops and water to different irrigation areas is developed. The formulation enables dynamic adjustment of decision variable options and makes use of visibility factors (VFs, the domain knowledge that can be used to identify ...

  6. Brief report: Assessing dispositional optimism in adolescence--factor structure and concurrent validity of the Life Orientation Test--Revised.

    PubMed

    Monzani, Dario; Steca, Patrizia; Greco, Andrea

    2014-02-01

    Dispositional optimism is an individual difference promoting psychosocial adjustment and well-being during adolescence. Dispositional optimism was originally defined as a one-dimensional construct; however, empirical evidence suggests two correlated factors in the Life Orientation Test - Revised (LOT-R). The main aim of the study was to evaluate the dimensionality of the LOT-R. This study is the first attempt to identify the best factor structure, comparing congeneric, two correlated-factor, and two orthogonal-factor models in a sample of adolescents. Concurrent validity was also assessed. The results demonstrated the superior fit of the two orthogonal-factor model thus reconciling the one-dimensional definition of dispositional optimism with the bi-dimensionality of the LOT-R. Moreover, the results of correlational analyses proved the concurrent validity of this self-report measure: optimism is moderately related to indices of psychosocial adjustment and well-being. Thus, the LOT-R is a useful, valid, and reliable self-report measure to properly assess optimism in adolescence. Copyright © 2013 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  7. EVA Suit R and D for Performance Optimization

    NASA Technical Reports Server (NTRS)

    Cowley, Matthew S.; Harvill, Lauren; Benson, Elizabeth; Rajulu, Sudhakar

    2014-01-01

    Designing a planetary suit is very complex and often requires difficult trade-offs between performance, cost, mass, and system complexity. To verify that new suit designs meet requirements, full prototypes must be built and tested with human subjects. However, numerous design iterations will occur before the hardware meets those requirements. Traditional draw-prototype-test paradigms for R&D are prohibitively expensive with today's shrinking Government budgets. Personnel at NASA are developing modern simulation techniques which focus on human-centric designs by creating virtual prototype simulations and fully adjustable physical prototypes of suit hardware. During the R&D design phase, these easily modifiable representations of an EVA suit's hard components will allow designers to think creatively and exhaust design possibilities before they build and test working prototypes with human subjects. This allows scientists to comprehensively benchmark current suit capabilities and limitations for existing suit sizes and for sizes that do not yet exist. It is extremely advantageous, as it enables comprehensive design down-selections to be made early in the design process, enables the use of human performance as a design criterion, and enables designs to target specific populations.

  8. Low-cost soft-copy display accuracy in the detection of pulmonary nodules by single-exposure dual-energy subtraction: comparison with hard-copy viewing.

    PubMed

    Kido, S; Kuriyama, K; Hosomi, N; Inoue, E; Kuroda, C; Horai, T

    2000-02-01

    This study endeavored to clarify the usefulness of single-exposure dual-energy subtraction computed radiography (CR) of the chest and the ability of soft-copy images to detect low-contrast simulated pulmonary nodules. Conventional and bone-subtracted CR images of 25 chest phantom image sets with a low-contrast nylon nodule and 25 without a nodule were interpreted by 12 observers (6 radiologists, 6 chest physicians) who rated each on a continuous confidence scale and marked the position of the nodule if one was present. Hard-copy images were 7 x 7-inch laser-printed CR films, and soft-copy images were displayed on a 21-inch noninterlaced color CRT monitor with an optimized dynamic range. Soft-copy images were adjusted to the same size as hard-copy images and were viewed under darkened illumination in the reading room. No significant differences were found between hard- and soft-copy images. In conclusion, the soft-copy images were found to be useful in detecting low-contrast simulated pulmonary nodules.

  9. Thermodynamic models for vapor-liquid equilibria of nitrogen + oxygen + carbon dioxide at low temperatures

    NASA Astrophysics Data System (ADS)

    Vrabec, Jadran; Kedia, Gaurav Kumar; Buchhauser, Ulrich; Meyer-Pittroff, Roland; Hasse, Hans

    2009-02-01

    For the design and optimization of CO₂ recovery from alcoholic fermentation processes by distillation, models for vapor-liquid equilibria (VLE) are needed. Two such thermodynamic models, the Peng-Robinson equation of state (EOS) and a model based on Henry's law constants, are proposed for the ternary mixture N₂ + O₂ + CO₂. Pure substance parameters of the Peng-Robinson EOS are taken from the literature, whereas the binary parameters of the Van der Waals one-fluid mixing rule are adjusted to experimental binary VLE data. The Peng-Robinson EOS describes both binary and ternary experimental data well, except at high pressures approaching the critical region. A molecular model is validated by simulation using binary and ternary experimental VLE data. On the basis of this model, the Henry's law constants of N₂ and O₂ in CO₂ are predicted by molecular simulation. An easy-to-use thermodynamic model, based on those Henry's law constants, is developed to reliably describe the VLE in the CO₂-rich region.

  10. Simulation, Analysis, and Design of the Princeton Adaptable Stellarator for Education and Outreach (PASEO)

    NASA Astrophysics Data System (ADS)

    Carlson, Jared; Dominguez, Arturo; N/A Collaboration

    2017-10-01

    The PPPL Science Education Department, in collaboration with IPP, is currently developing a versatile small-scale stellarator for education and outreach purposes. The Princeton Adaptable Stellarator for Education and Outreach (PASEO) will provide visual demonstrations of stellarator physics and serve as a lab platform for undergraduate and graduate students. Based on the Columbia Non-Neutral Torus (CNT) (1) and mini-CNTs (2), PASEO will create pure electron plasmas to study magnetic surfaces. PASEO uses similar geometries to these, but has an adjustable coil configuration to increase its versatility and conform to a highly visible vacuum chamber geometry. To simulate the magnetic surfaces in these new configurations, a MATLAB code utilizing the Biot-Savart law and a fourth-order Runge-Kutta method was developed, leading to new optimal current ratios. The design for PASEO and its predicted plasma confinement are presented. (1) T.S. Pedersen et al., Fusion Science and Technology Vol. 46, July 2004. (2) C. Dugan, et al., American Physical Society, 48th Annual Meeting of the Division of Plasma Physics, October 30-November 3, 2006.
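    A minimal Python analogue of the described approach: evaluate the field of a coil discretized into straight segments with the Biot-Savart law, and trace a field line with classical fourth-order Runge-Kutta. The coil geometry, current, and step sizes are arbitrary illustrative choices:

    ```python
    import numpy as np

    MU0_4PI = 1e-7  # mu_0 / (4 pi) in SI units

    def coil_segments(radius=1.0, n=200):
        """Discretize a circular coil in the x-y plane into straight segments."""
        th = np.linspace(0.0, 2 * np.pi, n + 1)
        pts = radius * np.stack([np.cos(th), np.sin(th), np.zeros_like(th)], axis=1)
        return pts[:-1], pts[1:]

    def biot_savart(point, starts, ends, current=1.0):
        """Field at `point` summed over straight segments (midpoint rule)."""
        dl = ends - starts
        r = point - 0.5 * (starts + ends)
        rn = np.linalg.norm(r, axis=1, keepdims=True)
        return (MU0_4PI * current * np.cross(dl, r) / rn ** 3).sum(axis=0)

    def trace_field_line(p0, starts, ends, step=0.01, n_steps=2000):
        """Integrate dr/ds = B/|B| with classical 4th-order Runge-Kutta."""
        def f(p):
            B = biot_savart(p, starts, ends)
            return B / np.linalg.norm(B)
        p, path = np.asarray(p0, float), [np.asarray(p0, float)]
        for _ in range(n_steps):
            k1 = f(p); k2 = f(p + 0.5 * step * k1)
            k3 = f(p + 0.5 * step * k2); k4 = f(p + step * k3)
            p = p + (step / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
            path.append(p)
        return np.array(path)

    s, e = coil_segments()
    line = trace_field_line([0.3, 0.0, 0.0], s, e)
    print(line[-1])   # end point of the traced field line
    ```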

  11. Multiobjective optimization of low impact development stormwater controls

    NASA Astrophysics Data System (ADS)

    Eckart, Kyle; McPhee, Zach; Bolisetti, Tirupati

    2018-07-01

    Green infrastructure such as Low Impact Development (LID) controls is being employed to manage urban stormwater and restore predevelopment hydrological conditions, besides improving stormwater runoff quality. Since runoff generation and infiltration processes are nonlinear, there is a need to identify the optimal combination of LID controls. A coupled optimization-simulation model was developed by linking the U.S. EPA Stormwater Management Model (SWMM) to the Borg Multiobjective Evolutionary Algorithm (Borg MOEA). The coupled model is capable of performing multiobjective optimization, using SWMM simulations to evaluate potential solutions to the optimization problem. The optimization-simulation tool was used to evaluate low impact development (LID) stormwater controls. A SWMM model was developed, calibrated, and validated for a sewershed in Windsor, Ontario, and LID stormwater controls were tested for three different return periods. LID implementation strategies were optimized using the optimization-simulation model for five different implementation scenarios for each of the three storm events, with the objectives of minimizing peak flow in the storm sewers, reducing total runoff, and minimizing cost. For the sewershed in Windsor, Ontario, the peak runoff and total runoff volume were found to be reduced by 13% and 29%, respectively.
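    The coupling can be caricatured generically: a stub evaluator stands in for a SWMM run, candidate LID mixes are scored on the three objectives, and non-dominated designs are retained, in the spirit of what the Borg MOEA does far more efficiently. All numbers are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def evaluate(lid_fracs):
        """Stub for a SWMM simulation: returns (peak flow, total runoff, cost).
        More LID coverage lowers flow and runoff with diminishing returns."""
        coverage = lid_fracs.sum()
        peak = 100.0 * np.exp(-1.5 * coverage)
        runoff = 500.0 * np.exp(-1.0 * coverage)
        cost = float(lid_fracs @ np.array([80.0, 120.0, 60.0]))  # unit costs
        return np.array([peak, runoff, cost])

    # Random candidate designs: area fractions assigned to 3 LID types.
    cands = rng.dirichlet([1.0, 1.0, 1.0], 500) * rng.uniform(0, 1, (500, 1))
    objs = np.array([evaluate(c) for c in cands])

    def pareto_indices(objs):
        """Indices of non-dominated points, all objectives minimized."""
        keep = []
        for i, oi in enumerate(objs):
            dominated = np.any(np.all(objs <= oi, axis=1)
                               & np.any(objs < oi, axis=1))
            if not dominated:
                keep.append(i)
        return keep

    front = pareto_indices(objs)
    print(len(front), "non-dominated LID designs")
    ```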

  12. Using applet-servlet communication for optimizing window, level and crop for DICOM to JPEG conversion.

    PubMed

    Kamauu, Aaron W C; DuVall, Scott L; Wiggins, Richard H; Avrin, David E

    2008-09-01

    In the creation of interesting radiological cases in a digital teaching file, it is necessary to adjust the window and level settings of an image to effectively display the educational focus. The web-based applet described in this paper presents an effective solution for real-time window and level adjustments without leaving the picture archiving and communications system workstation. Optimized images are created, as user-defined parameters are passed between the applet and a servlet on the Health Insurance Portability and Accountability Act-compliant teaching file server.

  13. A Simulation Study on the Performance of the Simple Difference and Covariance-Adjusted Scores in Randomized Experimental Designs

    ERIC Educational Resources Information Center

    Petscher, Yaacov; Schatschneider, Christopher

    2011-01-01

    Research by Huck and McLean (1975) demonstrated that the covariance-adjusted score is more powerful than the simple difference score, yet recent reviews indicate researchers are equally likely to use either score type in two-wave randomized experimental designs. A Monte Carlo simulation was conducted to examine the conditions under which the…

  14. On the Importance of Reliable Covariate Measurement in Selection Bias Adjustments Using Propensity Scores

    ERIC Educational Resources Information Center

    Steiner, Peter M.; Cook, Thomas D.; Shadish, William R.

    2011-01-01

    The effect of unreliability of measurement on propensity score (PS) adjusted treatment effects has not been previously studied. The authors report on a study simulating different degrees of unreliability in the multiple covariates that were used to estimate the PS. The simulation uses the same data as two prior studies. Shadish, Clark, and Steiner…

  15. Some Results of Weak Anticipative Concept Applied in Simulation Based Decision Support in Enterprise

    NASA Astrophysics Data System (ADS)

    Kljajić, Miroljub; Kofjač, Davorin; Kljajić Borštnar, Mirjana; Škraba, Andrej

    2010-11-01

    Simulation models are used for decision support and learning in enterprises and in schools. Three cases of successful application demonstrate the usefulness of weak anticipative information. Job-shop scheduling of production with a makespan criterion presents a real case of customized flexible furniture production optimization; the genetic algorithm for job-shop scheduling optimization is presented. Simulation-based inventory control describes inventory optimization for products with stochastic lead time and demand; dynamic programming and fuzzy control algorithms reduce the total cost without producing stock-outs in most cases. The value of decision-making information based on simulation is also discussed. All cases are discussed from the optimization, modeling and learning points of view.

  16. Optimization of GATE and PHITS Monte Carlo code parameters for uniform scanning proton beam based on simulation with FLUKA general-purpose code

    NASA Astrophysics Data System (ADS)

    Kurosu, Keita; Takashina, Masaaki; Koizumi, Masahiko; Das, Indra J.; Moskvin, Vadim P.

    2014-10-01

    Although the three general-purpose Monte Carlo (MC) simulation tools Geant4, FLUKA and PHITS have been used extensively, differences in their calculation results have been reported. The major causes are the implementation of the physical models, the preset value of the ionization potential, and the definition of the maximum step size. In order to achieve artifact-free MC simulation, an optimized parameter list for each simulation system is required. Several authors have already proposed such optimized lists, but those studies were performed with simple systems such as a water phantom only. Since particle beams undergo transport, interaction and electromagnetic processes during beam delivery, establishing an optimized parameter list for the whole beam delivery system is therefore of major importance. The purpose of this study was to determine the optimized parameter lists for GATE and PHITS using a computational model of a proton treatment nozzle. The simulation was performed with a broad scanning proton beam. The influences of the customizable parameters on the percentage depth dose (PDD) profile and the proton range were investigated by comparison with the results of FLUKA, and the optimal parameters were then determined. The PDD profile and the proton range obtained with our optimized parameter lists showed different characteristics from the results obtained with the simple system. This led to the conclusion that the physical model, particle transport mechanics and different geometry-based descriptions need accurate customization in planning computational experiments for artifact-free MC simulation.

  17. Design of underwater robot lines based on a hybrid automatic optimization strategy

    NASA Astrophysics Data System (ADS)

    Lyu, Wenjing; Luo, Weilin

    2014-09-01

    In this paper, a hybrid automatic optimization strategy is proposed for the design of underwater robot lines. Isight is introduced as an integration platform. The construction of this platform is based on user programming and several commercial software packages, including UG 6.0, GAMBIT 2.4.6 and FLUENT 12.0. An intelligent parameter optimization method, particle swarm optimization, is incorporated into the platform. To verify the proposed strategy, a simulation is conducted on the underwater robot model 5470, which originates from the DTRC SUBOFF project. With the automatic optimization platform, minimal resistance is taken as the optimization goal, the wetted surface area as the constraint condition, and the length of the fore-body, the maximum body radius and the after-body's minimum radius as the design variables. For the CFD calculation, the RANS equations and the standard turbulence model are used for direct numerical simulation. Analysis of the simulation results shows that the platform is highly efficient and feasible. Through the platform, a variety of schemes for the design of the lines are generated and the optimal solution is achieved. The combination of the intelligent optimization algorithm and the numerical simulation ensures a global optimal solution and improves the efficiency of searching for solutions.

  18. Conceptualizing and measuring illness self-concept: a comparison with self-esteem and optimism in predicting fibromyalgia adjustment.

    PubMed

    Morea, Jessica M; Friend, Ronald; Bennett, Robert M

    2008-12-01

    Illness self-concept (ISC), or the extent to which individuals are consumed by their illness, was theoretically described and evaluated with the Illness Self-Concept Scale (ISCS), a new 23-item scale, to predict adjustment in fibromyalgia. To establish convergent and discriminant validity, illness self-concept was compared to self-esteem and optimism in predicting health status, illness intrusiveness, depression, and life satisfaction. The ISCS demonstrated good reliability (alpha = .94; test-retest r = .80) and was a strong predictor of outcomes, even after controlling for optimism or self-esteem. The ISCS predicted unique variance in health-related outcomes; optimism and self-esteem did not, providing construct validation. Illness self-concept may play a significant role in coping with fibromyalgia and may prove useful in the evaluation of other chronic illnesses. (c) 2008 Wiley Periodicals, Inc.

  19. The role of root distribution in eco-hydrological modeling in semi-arid regions

    NASA Astrophysics Data System (ADS)

    Sivandran, G.; Bras, R. L.

    2010-12-01

    In semi-arid regions, the rooting strategies employed by vegetation can be critical to its survival. Arid regions are characterized by high variability in the arrival of rainfall, and species found in these areas have adapted mechanisms to ensure the capture of this scarce resource. Niche separation, through rooting strategies, is one manner in which different species coexist. At present, land surface models prescribe rooting profiles as a function of only the plant functional type of interest, with no consideration for the soil texture or rainfall regime of the region being modeled. These models do not incorporate the ability of vegetation to dynamically alter their rooting strategies in response to transient changes in environmental forcings and therefore tend to underestimate the resilience of many of these ecosystems. A coupled, dynamic vegetation and hydrologic model, tRIBS+VEGGIE, was used to explore the role of vertical root distribution on hydrologic fluxes. Point-scale simulations were carried out using two vertical root distribution schemes: (i) Static - a temporally invariant root distribution; and (ii) Dynamic - a temporally variable allocation of assimilated carbon at any depth within the root zone in order to minimize the soil moisture-induced stress on the vegetation. The simulations were forced with a stochastic climate generator calibrated to weather stations and rain gauges in the semi-arid Walnut Gulch Experimental Watershed in Arizona. For the static root distribution scheme, a series of simulations was carried out varying the shape of the rooting profile. The optimal distribution for the simulation was defined as the root distribution with the maximum mean transpiration over a 200-year period. This optimal distribution was determined for 5 soil textures and 2 plant functional types, and the results varied from case to case. The dynamic rooting simulations allow vegetation the freedom to adjust the allocation of assimilated carbon to different rooting depths in response to changes in stress caused by the redistribution and uptake of soil moisture. The results obtained from these experiments elucidate the strong link between plant functional type, soil texture and climate, and highlight the potential errors in modeling hydrologic fluxes that arise from imposing a static root profile.

  20. Water-resources optimization model for Santa Barbara, California

    USGS Publications Warehouse

    Nishikawa, Tracy

    1998-01-01

    A simulation-optimization model has been developed for the optimal management of the city of Santa Barbara's water resources during a drought. The model, which links groundwater simulation with linear programming, has a planning horizon of 5 years. The objective is to minimize the cost of water supply subject to water demand constraints, hydraulic head constraints to control seawater intrusion, and water capacity constraints. The decision variables are monthly water deliveries from surface water and groundwater. The state variables are hydraulic heads. The drought of 1947-51 is the city's worst drought on record, and simulated surface-water supplies for this period were used as a basis for testing optimal management of current water resources under drought conditions. The simulation-optimization model was applied using three reservoir operation rules. In addition, the model's sensitivity to demand, carryover [the storage of water in one year for use in later year(s)], head constraints, and capacity constraints was tested.
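    The linked simulation/linear-programming structure can be sketched with a small LP: choose monthly surface-water and groundwater deliveries that minimize cost subject to demand and capacity constraints, with a cumulative-pumping cap standing in for the hydraulic head (seawater intrusion) constraints that the groundwater model would supply. All coefficients are invented:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    months = 12
    demand = np.full(months, 10.0)                    # monthly demand
    cost = np.concatenate([np.full(months, 1.0),      # surface-water unit cost
                           np.full(months, 3.0)])     # groundwater unit cost

    # Supply must meet demand each month: s_t + g_t = demand_t.
    A_eq = np.hstack([np.eye(months), np.eye(months)])
    b_eq = demand

    # Head proxy: cap cumulative groundwater pumping over the horizon.
    A_ub = np.hstack([np.zeros((1, months)), np.ones((1, months))])
    b_ub = [60.0]

    bounds = [(0.0, 8.0)] * months + [(0.0, 12.0)] * months  # capacities
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds)
    print("surface deliveries:    ", res.x[:months])
    print("groundwater deliveries:", res.x[months:])
    ```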

  1. A noisy chaotic neural network for solving combinatorial optimization problems: stochastic chaotic simulated annealing.

    PubMed

    Wang, Lipo; Li, Sa; Tian, Fuyu; Fu, Xiuju

    2004-10-01

    Recently Chen and Aihara have demonstrated both experimentally and mathematically that their chaotic simulated annealing (CSA) has better search ability for solving combinatorial optimization problems compared to both the Hopfield-Tank approach and stochastic simulated annealing (SSA). However, CSA may not find a globally optimal solution no matter how slowly annealing is carried out, because the chaotic dynamics are completely deterministic. In contrast, SSA tends to settle down to a global optimum if the temperature is reduced sufficiently slowly. Here we combine the best features of both SSA and CSA, thereby proposing a new approach for solving optimization problems, i.e., stochastic chaotic simulated annealing, by using a noisy chaotic neural network. We show the effectiveness of this new approach with two difficult combinatorial optimization problems, i.e., a traveling salesman problem and a channel assignment problem for cellular mobile communications.
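    A single-neuron sketch of the stochastic CSA idea: a transiently chaotic (Chen-Aihara-style) update with an added noise term, where decaying self-feedback anneals the chaos away. The parameter values are illustrative guesses, not those used in the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Illustrative noisy chaotic neuron parameters (not from the paper).
    k, eps, I0 = 0.9, 0.004, 0.65        # damping, sigmoid steepness, bias
    alpha, ext_in = 0.015, 0.1           # input scaling and constant input
    z, beta = 0.08, 0.001                # self-feedback and its decay rate
    noise_amp = 0.002                    # the "stochastic" ingredient

    y, trace = 0.283, []
    for step in range(3000):
        x = 1.0 / (1.0 + np.exp(-y / eps))            # neuron output in (0, 1)
        y = k * y - z * (x - I0) + alpha * ext_in + noise_amp * rng.normal()
        z *= (1.0 - beta)                             # annealing: chaos dies out
        trace.append(x)

    # Early outputs wander chaotically; late outputs settle as z -> 0.
    print("early variance:", np.var(trace[:500]))
    print("late variance: ", np.var(trace[-500:]))
    ```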

  2. Automated IMRT planning with regional optimization using planning scripts

    PubMed Central

    Wong, Eugene; Bzdusek, Karl; Lock, Michael; Chen, Jeff Z.

    2013-01-01

    Intensity-modulated radiation therapy (IMRT) has become a standard technique in radiation therapy for treating different types of cancers. Various class solutions have been developed to generate IMRT plans efficiently for simple cases (e.g., localized prostate, whole breast). However, for more complex cases (e.g., head and neck, pelvic nodes), it can be time-consuming for a planner to generate optimized IMRT plans. To generate optimal plans in these more complex cases, which generally have multiple target volumes and organs at risk, it is often necessary to add IMRT optimization structures such as dose-limiting ring structures, adjust the beam geometry, select inverse planning objectives and their associated weights, and add further IMRT objectives to reduce cold and hot spots in the dose distribution. These parameters are generally adjusted manually, with a repeated trial-and-error approach during the optimization process. To improve IMRT planning efficiency in these more complex cases, an iterative method that incorporates some of these adjustment processes automatically in a planning script is designed, implemented, and validated. In particular, regional optimization has been implemented iteratively to reduce various hot or cold spots during the optimization process: it begins with defining and automatically segmenting hot and cold spots, introduces new objectives and their relative weights into the inverse planning, and turns this into an iterative process with termination criteria. The method has been applied to three clinical sites (prostate with pelvic nodes, head and neck, and anal canal cancers) and has been shown to reduce IMRT planning time significantly for clinical applications with improved plan quality. The IMRT planning scripts have been used for more than 500 clinical cases. PACS numbers: 87.55.D, 87.55.de PMID:23318393

  3. Optimization of coronary attenuation in coronary computed tomography angiography using diluted contrast material.

    PubMed

    Kawaguchi, Naoto; Kurata, Akira; Kido, Teruhito; Nishiyama, Yoshiko; Kido, Tomoyuki; Miyagawa, Masao; Ogimoto, Akiyoshi; Mochizuki, Teruhito

    2014-01-01

    The purpose of this study was to evaluate a personalized protocol with diluted contrast material (CM) for coronary computed tomography angiography (CTA). One hundred patients with suspected coronary artery disease underwent retrospective electrocardiogram-gated coronary CTA on a 256-slice multidetector-row CT scanner. In the diluted CM protocol (n=50), the optimal scan timing and CM dilution rate were determined by the timing bolus scan, with 20% CM dilution (5 ml/s for 10 s) considered suitable to achieve the target arterial attenuation of 350 Hounsfield units (HU). In the body weight (BW)-adjusted protocol (n=50, 222 mg iodine/kg), only the optimal scan timing was determined by the timing bolus scan. The injection rate and volume in the timing bolus scan and the real scan were identical between the 2 protocols. We compared the means and variations of coronary attenuation between the 2 protocols. Coronary attenuation (mean±SD) in the diluted CM and BW-adjusted protocols was 346.1±23.9 HU and 298.8±45.2 HU, respectively. The diluted CM protocol provided significantly higher coronary attenuation and lower variance than the BW-adjusted protocol (P<0.05 for each). The diluted CM protocol facilitates more uniform attenuation on coronary CTA in comparison with the BW-adjusted protocol.

  4. Objective calibration of regional climate models

    NASA Astrophysics Data System (ADS)

    Bellprat, O.; Kotlarski, S.; Lüthi, D.; Schär, C.

    2012-12-01

    Climate models are subject to high parametric uncertainty induced by poorly confined parameters of parameterized physical processes. Uncertain model parameters are typically calibrated in order to increase the agreement of the model with available observations. The common practice is to adjust uncertain model parameters manually, often referred to as expert tuning, which lacks objectivity and transparency in the use of observations. These shortcomings often hamper model inter-comparisons and hinder the implementation of new model parameterizations. Methods which would allow one to systematically calibrate model parameters are unfortunately often not applicable to state-of-the-art climate models, due to computational constraints arising from the high dimensionality and non-linearity of the problem. Here we present an approach to objectively calibrate a regional climate model, using reanalysis-driven simulations and building upon a quadratic metamodel presented by Neelin et al. (2010) that serves as a computationally cheap surrogate of the model. Five model parameters originating from different parameterizations are selected for the optimization according to their influence on model performance. The metamodel accurately estimates spatial averages of 2 m temperature, precipitation and total cloud cover, with an uncertainty of similar magnitude to the internal variability of the regional climate model. The non-linearities of the parameter perturbations are well captured, such that only a limited number of 20-50 simulations are needed to estimate optimal parameter settings. Parameter interactions are small, which allows the number of simulations to be reduced further. In comparison to an ensemble of the same model which has undergone expert tuning, the calibration yields similar optimal model configurations but leads to an additional reduction of the model error. The performance range captured is much wider than that sampled with the expert-tuned ensemble, and the presented methodology is effective and objective. It is argued that objective calibration is an attractive tool and could become standard procedure after introducing new model implementations, or after spatially transferring a regional climate model. Objective calibration of parameterizations with regional models could also serve as a strategy toward improving the parameterization packages of global climate models.
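    The metamodel step can be sketched as follows: run the expensive model at a few dozen parameter settings, fit a quadratic surface to the resulting error scores, and minimize the cheap surface instead of the model itself. The stub error function, sample count, and parameter ranges are invented:

    ```python
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(6)

    def model_error(params):
        """Stub for an expensive RCM run scored against observations; the
        assumed true optimum is at p* = (0.3, -0.2, 0.1) in scaled units."""
        d = params - np.array([0.3, -0.2, 0.1])
        return 1.0 + d @ d + 0.3 * d[0] * d[1] + 0.05 * rng.normal()

    def quad_features(p):
        # Intercept, linear, square, and interaction terms of the parameters.
        return ([1.0, *p, *(p ** 2)]
                + [p[i] * p[j] for i, j in combinations(range(len(p)), 2)])

    # A few dozen simulations suffice for a quadratic fit in 3 parameters.
    samples = rng.uniform(-1.0, 1.0, (30, 3))
    scores = np.array([model_error(p) for p in samples])
    A = np.array([quad_features(p) for p in samples])
    beta, *_ = np.linalg.lstsq(A, scores, rcond=None)

    # Minimize the cheap metamodel on a dense random grid.
    grid = rng.uniform(-1.0, 1.0, (20000, 3))
    G = np.array([quad_features(p) for p in grid])
    print("calibrated parameters:", grid[np.argmin(G @ beta)])
    ```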

  5. Using a coupled inductor controlled by fuzzy logic to improve the efficiency of a Buck converter in a PV system

    NASA Astrophysics Data System (ADS)

    Abouchabana, Nabil; Haddadi, Mourad; Rabhi, Abdelhamid; El Hajjaji, Ahmed

    2017-11-01

    Photovoltaic generators (PVG) produce a variable power according to the solar radiation (G) and temperature (T). This variation affects the sizing of the components of DC/DC converters powered by such PVGs and makes it difficult. The effects may differ from one component to another; the main and most critical one is the inductor, the element that stores energy during the switching periods. In this work we propose an auto-adaptation of the inductor value to maintain optimal power-conversion efficiency in these converters. Our idea is to replace the inductor with a coupled inductor, in which the adjustment is made by adding an adjustable magnetic bias field in the magnetic core. Low currents drawn from the PVG supply the second winding of the coupled inductor through a circuit controlled by a fuzzy controller (FC). The whole system is modeled and simulated in MATLAB/SIMULINK for the control part and in PSPICE for the power part. The obtained results show the good performance of the proposed converter compared with the standard one.
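
    A toy fuzzy controller of the kind described (triangular memberships and singleton outputs; the rule set and scaling are invented for illustration, not taken from the paper):

```python
import numpy as np

def tri(x, a, b, c):
    # triangular membership function over [a, c] peaking at b
    return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def fuzzy_aux_current(power_error):
    """Map a normalized power-tracking error to an auxiliary-winding
    current adjustment (centroid defuzzification of singleton outputs)."""
    mu = {"neg": tri(power_error, -1.0, -0.5, 0.0),
          "zero": tri(power_error, -0.5, 0.0, 0.5),
          "pos": tri(power_error, 0.0, 0.5, 1.0)}
    out = {"neg": -0.2, "zero": 0.0, "pos": 0.2}   # decrease / hold / increase
    den = sum(mu.values()) + 1e-12
    return sum(mu[k] * out[k] for k in mu) / den

print(fuzzy_aux_current(0.3))   # small positive error -> small current increase
```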

  6. Channel Acquisition for Massive MIMO-OFDM With Adjustable Phase Shift Pilots

    NASA Astrophysics Data System (ADS)

    You, Li; Gao, Xiqi; Swindlehurst, A. Lee; Zhong, Wen

    2016-03-01

    We propose adjustable phase shift pilots (APSPs) for channel acquisition in wideband massive multiple-input multiple-output (MIMO) systems employing orthogonal frequency division multiplexing (OFDM) to reduce the pilot overhead. Based on a physically motivated channel model, we first establish a relationship between channel space-frequency correlations and the channel power angle-delay spectrum in the massive antenna array regime, which reveals the channel sparsity in massive MIMO-OFDM. With this channel model, we then investigate channel acquisition, including channel estimation and channel prediction, for massive MIMO-OFDM with APSPs. We show that channel acquisition performance in terms of sum mean square error can be minimized if the user terminals' channel power distributions in the angle-delay domain can be made non-overlapping with proper phase shift scheduling. A simplified pilot phase shift scheduling algorithm is developed based on this optimal channel acquisition condition. The performance of APSPs is investigated for both one-symbol and multiple-symbol data models. Simulations demonstrate that the proposed APSP approach can provide substantial performance gains in terms of achievable spectral efficiency over the conventional phase shift orthogonal pilot approach in typical mobility scenarios.
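
    The core mechanism — a linear phase ramp across subcarriers shifts a user into a distinct delay-domain window, so properly scheduled shifts keep users non-overlapping — can be seen in a few lines (idealized, noiseless, unit channels; not the paper's estimator):

```python
import numpy as np

N = 256                                   # subcarriers
n = np.arange(N)
base = np.ones(N, dtype=complex)          # flat base pilot sequence
shift = {0: 0, 1: N // 2}                 # per-user phase-shift (delay) offsets

# user k transmits the base pilot with a linear phase ramp across subcarriers
pilots = {k: base * np.exp(-2j * np.pi * n * shift[k] / N) for k in shift}

# superposition received on one antenna; the IDFT maps each ramp to its delay bin
rx = sum(pilots.values())
delay = np.fft.ifft(rx)
print(np.round(np.abs(delay[[0, N // 2]]), 3))   # unit peaks at delays 0 and N/2
```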

  7. Model predictive control of an air suspension system with damping multi-mode switching damper based on hybrid model

    NASA Astrophysics Data System (ADS)

    Sun, Xiaoqiang; Yuan, Chaochun; Cai, Yingfeng; Wang, Shaohua; Chen, Long

    2017-09-01

    This paper presents the hybrid modeling and model predictive control of an air suspension system with a damping multi-mode switching damper. Unlike a traditional damper with continuously adjustable damping, in this study a new damper with four discrete damping modes is applied to a vehicle semi-active air suspension. The new damper can achieve the different damping modes by controlling only the on-off statuses of two solenoid valves, which makes its damping adjustment more efficient and more reliable. However, since the damping mode switching induces different modes of operation, the air suspension system with the new damper poses a challenging hybrid control problem. To model both the continuous/discrete dynamics and the switching between different damping modes, the framework of mixed logical dynamical (MLD) systems is used to establish the system hybrid model. Based on the resulting hybrid dynamical model, the system control problem is recast as a model predictive control (MPC) problem, which allows the switching sequences of the damping modes to be optimized while taking into account the suspension performance requirements. Numerical simulation results demonstrate the efficacy of the proposed control method.
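
    Because there are only four modes, a short-horizon MPC can simply enumerate mode sequences; a toy one-degree-of-freedom version (parameters and cost weights invented, far simpler than the paper's MLD model):

```python
import numpy as np
from itertools import product

m, k = 300.0, 15000.0                        # sprung mass, spring rate (illustrative)
C_MODES = [800.0, 1500.0, 2500.0, 4000.0]    # four discrete damping coefficients
dt, H = 0.005, 4                             # step size, prediction horizon

def step(x, v, c):
    a = (-k * x - c * v) / m                 # body dynamics, explicit Euler
    return x + dt * v, v + dt * a, a

def mpc_mode(x0, v0):
    """Enumerate all 4**H damping-mode sequences; apply the first mode of the best."""
    best_cost, best_seq = np.inf, None
    for seq in product(range(4), repeat=H):
        x, v, cost = x0, v0, 0.0
        for mode in seq:
            x, v, a = step(x, v, C_MODES[mode])
            cost += a * a + 50.0 * x * x     # comfort + displacement weights
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq[0]

print(mpc_mode(0.05, 0.0))
```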

  8. The effects of variable biome distribution on global climate.

    PubMed

    Noever, D A; Brittain, A; Matsos, H C; Baskaran, S; Obenhuber, D

    1996-01-01

    In projecting climatic adjustments to anthropogenically elevated atmospheric carbon dioxide, most global climate models fix biome distribution to current geographic conditions. Previous biome maps either remain unchanging or shift without taking into account climatic feedbacks such as radiation and temperature. We develop a model that examines the albedo-related effects of biome distribution on global temperature. The model was tested on historical biome changes since 1860 and the results fit both the observed temperature trend and the order of magnitude of the change. The model is then used to generate an optimized future biome distribution that minimizes projected greenhouse effects on global temperature. Because of the complexity of this combinatorial search, an artificial intelligence method, the genetic algorithm, was employed. The method is to adjust biome areas subject to a constant global temperature and a total surface area constraint. For regulating global temperature, oceans are found to dominate continental biomes. Algal beds are significant radiative levers, as are other carbon-intensive biomes including estuaries and tropical deciduous forests. To hold global temperature constant over the next 70 years, this simulation requires that deserts decrease and forested areas increase. The effect of biome change on global temperature is revealed as a significant forecasting factor.
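
    A heavily simplified genetic-algorithm sketch of the constrained search (the albedo values and the fitness proxy are invented; the paper's model couples biomes to radiation and temperature in far more detail):

```python
import numpy as np

rng = np.random.default_rng(1)
ALBEDO = np.array([0.08, 0.12, 0.16, 0.25, 0.35])  # ocean, forest, estuary, desert, ice (illustrative)
TOTAL_AREA = 1.0

def fitness(frac):
    # penalize departure of area-weighted albedo from a fixed target
    # (a crude proxy for holding global temperature constant)
    return -abs(frac @ ALBEDO - 0.15)

def normalize(frac):
    frac = np.clip(frac, 0.01, None)
    return frac / frac.sum() * TOTAL_AREA          # constant total surface area

pop = np.apply_along_axis(normalize, 1, rng.random((50, 5)))
for _ in range(200):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-25:]]        # truncation selection
    children = parents[rng.integers(0, 25, 25)] + 0.02 * rng.standard_normal((25, 5))
    pop = np.vstack([parents, np.apply_along_axis(normalize, 1, children)])

best = pop[np.argmax([fitness(p) for p in pop])]
print("optimized area fractions:", np.round(best, 3))
```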

  9. Flat-Band Slow Light in a Photonic Crystal Slab Waveguide by Vertical Geometry Adjustment and Selective Infiltration of Optofluidics

    NASA Astrophysics Data System (ADS)

    Mansouri-Birjandi, Mohammad Ali; Janfaza, Morteza; Tavousi, Alireza

    2017-11-01

    In this paper, a photonic crystal slab waveguide (PhCSW) for slow light applications is presented. To obtain the widest possible flat bands in the slow-light regions—regions with a large group index (n_g) and very low group velocity dispersion (GVD)—two core parameters of the PhCSW structure are investigated. The design procedure is based on vertical shifting of the first row of air holes adjacent to the waveguide center and concurrent selective optofluidic infiltration of the second row. The criterion of <n_g> ± 10% variation is used for ease of definition and comparison of the flat-band regions. By applying various geometry optimizations to the first row, our results suggest that a waveguide core of W1.09 would provide a reasonably wide flat band. Furthermore, infiltration of optofluidics in the second row alongside geometry adjustments of the first row results in flexible control of 10 < n_g < 32 and provides flat-band regions with large bandwidth (10 nm < Δλ < 21.5 nm). Also, negligible GVD as low as β_2 = 10^-24 s^2/m is achieved. Numerical simulations are performed by means of the three-dimensional plane wave expansion method.
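
    Extracting n_g and the ±10% flat-band width from a computed band is a small post-processing step; a sketch on synthetic dispersion data (the tanh band shape and all numbers are placeholders, not plane-wave-expansion output):

```python
import numpy as np

c = 299792458.0
a = 420e-9                                   # assumed lattice constant
# k (1/m) and omega (rad/s) samples along a guided band -- placeholder dispersion
k = np.linspace(0.30, 0.50, 200) * 2 * np.pi / a
omega = 2 * np.pi * c / 1550e-9 * (1 + 0.02 * np.tanh((k - k.mean()) * a))

vg = np.gradient(omega, k)                   # group velocity d(omega)/dk
ng = c / vg                                  # group index
target = ng[len(ng) // 2]
flat = np.abs(ng - target) <= 0.10 * np.abs(target)   # the <n_g> +/- 10% criterion

wavelengths = 2 * np.pi * c / omega
print("n_g at band center: %.1f" % target)
print("flat-band width: %.1f nm" % (1e9 * np.ptp(wavelengths[flat])))
```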

  10. An efficient surrogate-based simulation-optimization method for calibrating a regional MODFLOW model

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.

    2017-01-01

    Simulation-optimization methods entail a large number of model simulations, which is computationally intensive or even prohibitive if each model simulation is extremely time-consuming. Statistical models have been examined as surrogates of the high-fidelity physical model during the simulation-optimization process to tackle this problem. Among them, Multivariate Adaptive Regression Splines (MARS), a non-parametric adaptive regression method, is superior in overcoming problems of high dimensionality and discontinuities in the data. Furthermore, the stability and accuracy of a MARS model can be improved by bootstrap aggregating, namely bagging. In this paper, the Bagging MARS (BMARS) method is integrated into a surrogate-based simulation-optimization framework to calibrate a three-dimensional MODFLOW model, which is developed to simulate the groundwater flow in an arid hardrock-alluvium region in northwestern Oman. The physical MODFLOW model is surrogated by the statistical model developed using the BMARS algorithm. The surrogate model, which is fitted and validated using a training dataset generated by the physical model, can approximate solutions rapidly. An efficient Sobol' method is employed to calculate global sensitivities of head outputs to input parameters, which are used to analyze their importance for the model outputs spatiotemporally. Only sensitive parameters are included in the calibration process to further improve the computational efficiency. The normalized root mean square error (NRMSE) between measured and simulated heads at observation wells is used as the objective function to be minimized during optimization. The reasonable history match between the simulated and observed heads demonstrates the feasibility of this highly efficient calibration framework.
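
    The framework's core loop — fit a bagged surrogate to a batch of expensive runs, then optimize on the surrogate — can be sketched as follows. Bagged regression trees stand in for BMARS here (MARS is not in scikit-learn), and the NRMSE surface is synthetic:

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(300, 6))         # sampled MODFLOW parameter sets
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(300)  # stand-in NRMSE

# bagging stabilizes the surrogate, analogous to what BMARS does for MARS
surrogate = BaggingRegressor(DecisionTreeRegressor(max_depth=6),
                             n_estimators=50, random_state=0).fit(X, y)

result = differential_evolution(lambda p: float(surrogate.predict(p[None, :])[0]),
                                bounds=[(0, 1)] * 6, seed=0, maxiter=50)
print("calibrated parameters:", np.round(result.x, 3))
```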

  11. A comparison of renewable energy technologies using two simulation softwares: HOMER and RETScreen

    NASA Astrophysics Data System (ADS)

    Ramli, Mohd Sufian; Wahid, Siti Sufiah Abd; Hassan, Khairul Kamarudin

    2017-08-01

    This paper concerns the modelling of renewable energy technologies, including a PV standalone system (PVSS), a wind standalone system (WSS), and a PV-wind hybrid system (PVWHS). To evaluate the performance of all power system configurations in terms of economic analysis and optimization, the simulation tools HOMER and RETScreen are used in this paper. The HOMER energy modeling software is a powerful tool for designing and analyzing hybrid power systems, which contain a mix of conventional generators, wind turbines, solar photovoltaics, hydropower, batteries, and other inputs. RETScreen uses a Microsoft Excel-based spreadsheet model consisting of a set of workbooks which calculates the annual average energy flows, with adjustment factors to account for temporal effects such as solar-load coincidence. Equipment sizes are calculated and inserted as inputs to HOMER and RETScreen. The results obtained are analyzed and discussed. The cost per kWh of generating electricity using the PVSS to supply the average demand of 8.4 kWh/day ranges between RM 1.953/kWh and RM 3.872/kWh. It was found that the PVSS gives the lowest cost of energy compared with the other two proposed technologies simulated using HOMER and RETScreen.
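
    A cost-per-kWh figure of this kind is essentially a levelized cost of energy; a minimal sketch of the arithmetic (the capital and O&M figures are invented, not taken from the paper):

```python
def cost_per_kwh(capital_rm, annual_om_rm, lifetime_yr, discount_rate, daily_load_kwh=8.4):
    """Levelized cost: annualized capital (via the capital recovery factor)
    plus O&M, divided by annual energy delivered."""
    crf = (discount_rate * (1 + discount_rate) ** lifetime_yr
           / ((1 + discount_rate) ** lifetime_yr - 1))
    annual_cost = capital_rm * crf + annual_om_rm
    return annual_cost / (daily_load_kwh * 365)

# e.g., RM 75,000 capital, RM 300/yr O&M, 20-year life, 6% discount rate
print(round(cost_per_kwh(75000, 300, 20, 0.06), 3), "RM/kWh")
```

    With these illustrative inputs the result (~RM 2.2/kWh) falls inside the range quoted above, but the actual figures depend on the sized equipment and financing assumptions.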

  12. One-Dimensional Transport with Inflow and Storage (OTIS): A Solute Transport Model for Streams and Rivers

    USGS Publications Warehouse

    Runkel, Robert L.

    1998-01-01

    OTIS is a mathematical simulation model used to characterize the fate and transport of water-borne solutes in streams and rivers. The governing equation underlying the model is the advection-dispersion equation with additional terms to account for transient storage, lateral inflow, first-order decay, and sorption. This equation and the associated equations describing transient storage and sorption are solved using a Crank-Nicolson finite-difference solution. OTIS may be used in conjunction with data from field-scale tracer experiments to quantify the hydrologic parameters affecting solute transport. This application typically involves a trial-and-error approach wherein parameter estimates are adjusted to obtain an acceptable match between simulated and observed tracer concentrations. Additional applications include analyses of nonconservative solutes that are subject to sorption processes or first-order decay. OTIS-P, a modified version of OTIS, couples the solution of the governing equation with a nonlinear regression package. OTIS-P determines an optimal set of parameter estimates that minimize the squared differences between the simulated and observed concentrations, thereby automating the parameter estimation process. This report details the development and application of OTIS and OTIS-P. Sections of the report describe model theory, input/output specifications, sample applications, and installation instructions.
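
    For reference, the main-channel and storage-zone balances solved by OTIS take the following standard form (transcribed from memory of Runkel's report, with the sorption terms omitted for brevity; consult the report for the exact equations):

```latex
% main-channel balance: advection, dispersion, lateral inflow,
% transient-storage exchange, and first-order decay
\frac{\partial C}{\partial t} = -\frac{Q}{A}\frac{\partial C}{\partial x}
  + \frac{1}{A}\frac{\partial}{\partial x}\!\left(A D \frac{\partial C}{\partial x}\right)
  + \frac{q_L}{A}\left(C_L - C\right) + \alpha\left(C_S - C\right) - \lambda C
% storage-zone balance
\frac{d C_S}{d t} = \alpha \frac{A}{A_S}\left(C - C_S\right) - \lambda_S C_S
```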

  13. Two-dimensional vocal tracts with three-dimensional behavior in the numerical generation of vowels.

    PubMed

    Arnela, Marc; Guasch, Oriol

    2014-01-01

    Two-dimensional (2D) numerical simulations of vocal tract acoustics may provide a good balance between the high quality of three-dimensional (3D) finite element approaches and the low computational cost of one-dimensional (1D) techniques. However, 2D models are usually generated by considering the 2D vocal tract as a midsagittal cut of a 3D version, i.e., using the same radius function, wall impedance, glottal flow, and radiation losses as in 3D, which leads to strong discrepancies in the resulting vocal tract transfer functions. In this work, a four-step methodology is proposed to match the behavior of 2D simulations with that of 3D vocal tracts with circular cross-sections. First, the 2D vocal tract profile is modified to tune the formant locations. Second, the 2D wall impedance is adjusted to fit the formant bandwidths. Third, the 2D glottal flow is scaled to recover 3D pressure levels. Fourth and last, the 2D radiation model is tuned to match the 3D model following an optimization process. The procedure is tested for the vowels /a/, /i/, and /u/, and the obtained results are compared with those of a full 3D simulation, a conventional 2D approach, and a 1D chain matrix model.

  14. Microstructure based procedure for process parameter control in rolling of aluminum thin foils

    NASA Astrophysics Data System (ADS)

    Kronsteiner, Johannes; Kabliman, Evgeniya; Klimek, Philipp-Christoph

    2018-05-01

    In the present work, a microstructure-based procedure is used for the numerical prediction of strength properties of Al-Mg-Sc thin foils during a hot rolling process. For this purpose, the following techniques were developed and implemented. First, a toolkit was developed for the numerical analysis of experimental stress-strain curves obtained during hot compression testing with a deformation dilatometer. The implemented techniques allow for the correction of the temperature increase in samples due to adiabatic heating and for the determination of the yield strength needed to separate the elastic and plastic deformation regimes during numerical simulation of multi-pass hot rolling. Next, an asymmetric Hot Rolling Simulator (adjustable table inlet/outlet height as well as separate roll infeed) was developed in order to match the exact processing conditions of a semi-industrial rolling procedure. At each element of the finite element mesh, the total strength is calculated by an in-house flow stress model based on the evolution of the mean dislocation density. The strength values obtained by numerical modelling were found to be in reasonable agreement with the results of tensile tests on thin Al-Mg-Sc foils. The proposed simulation procedure should thus allow the processing parameters to be optimized with respect to the microstructure development.
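
    The adiabatic-heating correction mentioned here is commonly estimated from the plastic work; a sketch under standard assumptions (Taylor-Quinney coefficient ≈ 0.9 and typical aluminum property values; the flow curve is invented):

```python
import numpy as np

def adiabatic_temperature_rise(strain, stress_mpa, rho=2700.0, cp=900.0, beta=0.9):
    """Temperature rise from plastic work: dT = beta * integral(sigma d_eps) / (rho * cp).

    beta: Taylor-Quinney coefficient (fraction of plastic work converted to heat)
    rho, cp: density (kg/m^3) and specific heat (J/kg/K), here aluminum-like values
    """
    # trapezoidal integration of the flow curve, in J/m^3
    work = np.sum(0.5 * (stress_mpa[1:] + stress_mpa[:-1]) * 1e6 * np.diff(strain))
    return beta * work / (rho * cp)

eps = np.linspace(0, 0.5, 100)
sigma = 150 + 100 * eps**0.2                 # illustrative flow curve, MPa
print(round(adiabatic_temperature_rise(eps, sigma), 1), "K")
```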

  15. Design and experiment of vehicular charger AC/DC system based on predictive control algorithm

    NASA Astrophysics Data System (ADS)

    He, Guangbi; Quan, Shuhai; Lu, Yuzhang

    2018-06-01

    For a vehicle charger whose front-end rectifier stage is uncontrolled, this paper proposes a predictive control algorithm for the DC/DC converter. A prediction model is established by the state-space averaging method, and the prediction algorithm is analyzed through Simulink simulation. In the proposed charger structure, designed for the rated output power with an adjustable output voltage, the first stage is a three-phase uncontrolled rectifier whose DC voltage Ud is smoothed by a filter capacitor; a two-phase interleaved buck-boost circuit then provides the required wide-range output voltage. The working principle of this circuit is analyzed, and the parameters for the design and selection of its components are derived. Analysis of the current ripple shows that the two-phase interleaved connection reduces both the output current ripple and the losses. A simulation of the complete charging circuit meets the design requirements of the system. Finally, combining the software with the hardware circuit to implement charging as required, an experimental platform demonstrated the feasibility and effectiveness of the proposed predictive control algorithm for the vehicle charger, consistent with the simulation results.
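
    The predictive idea — propagate a state-space averaged model one step ahead and pick the input that minimizes the predicted output error — can be sketched generically (the matrices and normalized units are illustrative, not the paper's identified converter model):

```python
import numpy as np

# discrete-time state-space averaged model x[k+1] = A x[k] + B d[k]
A = np.array([[0.95, -0.10], [0.08, 0.97]])
B = np.array([[0.50], [0.02]])
C = np.array([[0.0, 1.0]])                     # output = (normalized) output voltage

def predictive_duty(x, v_ref, candidates=np.linspace(0.1, 0.9, 81)):
    """One-step predictive control: choose the duty cycle whose predicted
    output voltage is closest to the reference."""
    preds = (C @ (A @ x + B * candidates)).ravel()
    return candidates[np.argmin((preds - v_ref) ** 2)]

x = np.zeros((2, 1))
for _ in range(200):
    d = predictive_duty(x, v_ref=3.0)
    x = A @ x + B * d
print("duty:", round(float(d), 3), "v_out:", round(float(x[1, 0]), 3))
```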

  16. Program optimizations: The interplay between power, performance, and energy

    DOE PAGES

    Leon, Edgar A.; Karlin, Ian; Grant, Ryan E.; ...

    2016-05-16

    Practical considerations for future supercomputer designs will impose limits on both instantaneous power consumption and total energy consumption. Working within these constraints while providing the maximum possible performance, application developers will need to optimize their code for speed alongside power and energy concerns. This paper analyzes the effectiveness of several code optimizations including loop fusion, data structure transformations, and global allocations. A per-component measurement and analysis of different architectures is performed, enabling the examination of code optimizations on different compute subsystems. Using an explicit hydrodynamics proxy application from the U.S. Department of Energy, LULESH, we show how code optimizations impact different computational phases of the simulation. This provides insight for simulation developers into the best optimizations to use during particular simulation compute phases when optimizing code for future supercomputing platforms. Here, we examine and contrast both x86 and Blue Gene architectures with respect to these optimizations.

  17. The Effects of Longitudinal Control-System Dynamics on Pilot Opinion and Response Characteristics as Determined from Flight Tests and from Ground Simulator Studies

    NASA Technical Reports Server (NTRS)

    Sadoff, Melvin

    1958-01-01

    The results of a fixed-base simulator study of the effects of variable longitudinal control-system dynamics on pilot opinion are presented and compared with flight-test data. The control-system variables considered in this investigation included stick force per g, time constant, and dead-band, or stabilizer breakout force. In general, the fairly good correlation between flight and simulator results for two pilots demonstrates the validity of fixed-base simulator studies which are designed to complement and supplement flight studies and serve as a guide in control-system preliminary design. However, in the investigation of certain problem areas (e.g., sensitive control-system configurations associated with pilot-induced oscillations in flight), fixed-base simulator results did not predict the occurrence of an instability, although the pilots noted the system was extremely sensitive and unsatisfactory. If it is desired to predict pilot-induced-oscillation tendencies, tests in moving-base simulators may be required. It was found possible to represent the human pilot by a linear pilot analog for the tracking task assumed in the present study. The criterion used to adjust the pilot analog was the root-mean-square tracking error of one of the human pilots on the fixed-base simulator. Matching the tracking error of the pilot analog to that of the human pilot gave an approximation to the variation of human-pilot behavior over a range of control-system dynamics. Results of the pilot-analog study indicated that both for optimized control-system dynamics (for poor airplane dynamics) and for a region of good airplane dynamics, the pilot response characteristics are approximately the same.
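
    The matching procedure described — adjust a linear pilot analog until its RMS tracking error equals the human pilot's — can be illustrated with a toy loop (the controlled-element dynamics, lag, and target RMS value are all invented):

```python
import numpy as np

rng = np.random.default_rng(2)
target = np.cumsum(0.02 * rng.standard_normal(2000))   # random-walk tracking target

def rms_error(gain, lag=0.9):
    """Simple linear pilot analog: a first-order lag acting on gain-scaled error."""
    y, u, errs = 0.0, 0.0, []
    for t in target:
        e = t - y
        u = lag * u + (1 - lag) * gain * e     # pilot output with neuromuscular lag
        y += 0.1 * u                            # simplified controlled-element response
        errs.append(e)
    return float(np.sqrt(np.mean(np.square(errs))))

human_rms = 0.15                                # measured human-pilot RMS error (stand-in)
gains = np.linspace(0.1, 5.0, 50)
best = gains[np.argmin([(rms_error(g) - human_rms) ** 2 for g in gains])]
print("matched pilot-analog gain:", round(best, 2))
```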

  18. Capacity improvement using simulation optimization approaches: A case study in the thermotechnology industry

    NASA Astrophysics Data System (ADS)

    Yelkenci Köse, Simge; Demir, Leyla; Tunalı, Semra; Türsel Eliiyi, Deniz

    2015-02-01

    In manufacturing systems, optimal buffer allocation has a considerable impact on capacity improvement. This study presents a simulation optimization procedure to solve the buffer allocation problem in a heat exchanger production plant so as to improve the capacity of the system. For optimization, three metaheuristic-based search algorithms, i.e. a binary genetic algorithm (B-GA), a binary simulated annealing algorithm (B-SA) and a binary tabu search algorithm (B-TS), are proposed. These algorithms are integrated with the simulation model of the production line. The simulation model, which captures the stochastic and dynamic nature of the production line, is used as the evaluation function for the proposed metaheuristics. Experiments with benchmark problem instances from the literature and with the real-life problem show that the proposed B-TS algorithm outperforms B-GA and B-SA in terms of solution quality.
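
    The general pattern — metaheuristic proposals scored by a stochastic simulation under a fixed total-buffer constraint — looks like this simulated-annealing sketch (the throughput model is a crude stand-in for the paper's discrete-event simulation):

```python
import numpy as np

rng = np.random.default_rng(3)
N_BUFFERS, TOTAL_SLOTS = 5, 12

def simulate_throughput(alloc, n_rep=20):
    """Stand-in stochastic evaluation of a six-machine line with given buffers."""
    rates = np.array([1.0, 0.9, 1.1, 0.95, 1.05, 1.0])
    vals = []
    for _ in range(n_rep):
        eff = rates * rng.uniform(0.9, 1.0, size=6)
        b = int(np.argmin(eff))                       # bottleneck machine
        relief = 0.01 * (alloc[max(b - 1, 0)] + alloc[min(b, N_BUFFERS - 1)])
        vals.append(eff[b] + relief)                  # nearby buffers soften the loss
    return float(np.mean(vals))

def neighbor(alloc):
    a = alloc.copy()
    i, j = rng.choice(N_BUFFERS, size=2, replace=False)
    if a[i] > 0:
        a[i] -= 1
        a[j] += 1                                     # move one slot; total stays fixed
    return a

alloc = np.full(N_BUFFERS, TOTAL_SLOTS // N_BUFFERS)
alloc[0] += TOTAL_SLOTS % N_BUFFERS
cur_val = simulate_throughput(alloc)
best, best_val, temp = alloc.copy(), cur_val, 0.1
for _ in range(300):
    cand = neighbor(alloc)
    val = simulate_throughput(cand)
    if val > cur_val or rng.random() < np.exp((val - cur_val) / temp):
        alloc, cur_val = cand, val
        if val > best_val:
            best, best_val = cand.copy(), val
    temp *= 0.99                                      # geometric cooling schedule
print("best allocation:", best, "estimated throughput:", round(best_val, 3))
```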

  19. Optimal segmentation and packaging process

    DOEpatents

    Kostelnik, Kevin M.; Meservey, Richard H.; Landon, Mark D.

    1999-01-01

    A process for improving packaging efficiency uses three-dimensional, computer-simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D&D), and remediating a nuclear facility, involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer-simulated facility model of the contaminated items is created. The contaminated items are differentiated. The optimal location, orientation and sequence of the segmentation and packaging of the contaminated items are determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are then actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded.
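
    Packing under container limits is, at heart, a bin-packing problem; a first-fit-decreasing sketch (a simple heuristic stand-in for the patent's optimization algorithms, with invented volumes):

```python
def pack_segments(volumes, container_capacity):
    """First-fit decreasing: place each segment, largest first, into the first
    container with room; open a new container only when none fits."""
    containers = []
    for v in sorted(volumes, reverse=True):
        for c in containers:
            if sum(c) + v <= container_capacity:
                c.append(v)
                break
        else:
            containers.append([v])
    return containers

segments = [0.8, 0.5, 0.5, 0.4, 0.3, 0.3, 0.2]   # segmented-item volumes (m^3)
print(pack_segments(segments, container_capacity=1.0))
```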

  20. Magnetic levitation-based Martian and Lunar gravity simulator

    NASA Technical Reports Server (NTRS)

    Valles, J. M. Jr; Maris, H. J.; Seidel, G. M.; Tang, J.; Yao, W.

    2005-01-01

    Missions to Mars will subject living specimens to a range of low gravity environments. Deleterious biological effects of prolonged exposure to Martian gravity (0.38 g), Lunar gravity (0.17 g), and microgravity are expected, but the mechanisms involved and potential for remedies are unknown. We are proposing the development of a facility that provides a simulated Martian and Lunar gravity environment for experiments on biological systems in a well controlled laboratory setting. The magnetic adjustable gravity simulator will employ intense, inhomogeneous magnetic fields to exert magnetic body forces on a specimen that oppose the body force of gravity. By adjusting the magnetic field, it is possible to continuously adjust the total body force acting on a specimen. The simulator system considered consists of a superconducting solenoid with a room temperature bore sufficiently large to accommodate small whole organisms, cell cultures, and gravity sensitive bio-molecular solutions. It will have good optical access so that the organisms can be viewed in situ. This facility will be valuable for experimental observations and public demonstrations of systems in simulated reduced gravity. ©2005 Published by Elsevier Ltd on behalf of COSPAR.
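
    The balance being exploited can be written down directly. For a weakly diamagnetic specimen of susceptibility magnitude |χ| and density ρ in a suitably oriented field gradient, the standard magneto-Archimedes relation gives the effective gravity (a textbook form, not the authors' design equations):

```latex
g_{\mathrm{eff}} = g - \frac{|\chi|}{\mu_0\,\rho}\left|B\,\frac{dB}{dz}\right|
```

    For water-like tissue (|χ| ≈ 9×10⁻⁶, ρ ≈ 10³ kg/m³), full levitation requires |B·dB/dz| on the order of 1.4×10³ T²/m, so roughly 62% and 83% of that product would yield simulated Martian (0.38 g) and Lunar (0.17 g) body forces, respectively.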
