Sample records for simulation reveals optimal

  1. [Simulation on remediation of benzene contaminated groundwater by air sparging].

    PubMed

    Fan, Yan-Ling; Jiang, Lin; Zhang, Dan; Zhong, Mao-Sheng; Jia, Xiao-Yang

    2012-11-01

    Air sparging (AS) is an in situ remediation technology used to treat groundwater contaminated with volatile organic compounds (VOCs). At present, the field design of air sparging systems is based mainly on experience, owing to the lack of field data. To obtain rational design parameters, the TMVOC module of the Petrasim software package, combined with field test results from a coking plant in Beijing, was used to optimize the design parameters and simulate the remediation process. The pilot test showed that the optimal injection rate was 23.2 m3 x h(-1), while the optimal radius of influence (ROI) was 5 m. The pressure response simulated by the model matched the field test results well, indicating that the simulation represented the site adequately. The optimization results indicated that the optimal injection location was at the bottom of the aquifer. With injection at this optimized location, the simulated optimal injection rate was 20 m3 x h(-1), in accordance with the field test result. The simulated optimal ROI was 3 m, less than the field test value, mainly because the field test reflects the flow behavior in the upper part of the groundwater and the unsaturated zone, where the flow width increases rapidly and becomes larger than the actual one. With these optimized operating parameters, together with the hydro-geological parameters measured on site, the model simulation indicated that 90 days were needed to remediate benzene at the site from 371 000 microg x L(-1) to 1 microg x L(-1), and that an operating mode in which injection wells were progressively turned off once the surrounding groundwater was "clean" outperformed one in which all wells were kept running throughout the remediation.

  2. Performance optimization and validation of ADM1 simulations under anaerobic thermophilic conditions.

    PubMed

    Atallah, Nabil M; El-Fadel, Mutasem; Ghanimeh, Sophia; Saikaly, Pascal; Abou-Najm, Majdi

    2014-12-01

    In this study, two experimental sets of data, each involving two thermophilic anaerobic digesters treating food waste, were simulated using the Anaerobic Digestion Model No. 1 (ADM1). A sensitivity analysis was conducted, using both data sets of one digester, for parameter optimization based on five measured performance indicators (methane generation, pH, acetate, total COD, and ammonia) as well as an equally weighted combination of the five. The simulation results revealed that while optimization with respect to methane alone, a commonly adopted approach, succeeded in reproducing the methane measurements, it predicted the other intermediary outputs less accurately. The multi-objective optimization, by contrast, provided better overall results, even though it did not capture every intermediary output exactly. The optimized parameters were validated by applying them independently to the data sets of the second digester. Copyright © 2014 Elsevier Ltd. All rights reserved.
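
    The equally weighted multi-objective criterion described above can be sketched in a few lines. This is a toy stand-in, not ADM1: the first-order model, the two "indicators", their measurements, and the parameter grid are all invented for illustration.

```python
import math

# Toy illustration: fit one kinetic parameter k so that a simple
# first-order model matches several measured indicators at once.
# Each indicator's error is normalised by its measurement scale and the
# errors are combined with equal weights, mirroring the equally weighted
# multi-objective criterion used in the study.

def model(k, t):
    """First-order decay stand-in for a digester output."""
    return math.exp(-k * t)

# Hypothetical measurements for two indicators at three time points.
data = {
    "methane": [(1.0, 0.60), (2.0, 0.38), (3.0, 0.22)],
    "acetate": [(1.0, 0.55), (2.0, 0.30), (3.0, 0.18)],
}

def objective(k, weights=None):
    weights = weights or {name: 1.0 for name in data}
    total = 0.0
    for name, points in data.items():
        scale = max(abs(y) for _, y in points)
        err = sum(((model(k, t) - y) / scale) ** 2 for t, y in points)
        total += weights[name] * err / len(points)
    return total

# Crude 1-D grid search over k; a real calibration would use a proper
# optimizer, but the weighting idea is the same.
best_k = min((0.01 * i for i in range(1, 200)), key=objective)
```

    Optimizing against a single indicator (one weight set to zero) instead reproduces the methane-only behaviour the abstract warns about: a good fit to that indicator, with no guarantee for the others.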

  3. A Monte Carlo simulation and setup optimization of output efficiency to PGNAA thermal neutron using 252Cf neutrons

    NASA Astrophysics Data System (ADS)

    Zhang, Jin-Zhao; Tuo, Xian-Guo

    2014-07-01

    We present the design and optimization of a prompt γ-ray neutron activation analysis (PGNAA) thermal neutron output setup based on Monte Carlo simulations using the MCNP5 computer code. In these simulations, the moderator materials, reflector materials, and structure of the 252Cf-based thermal neutron output setup are optimized. The simulation results reveal that a thin layer of paraffin combined with a thick layer of heavy water moderates the 252Cf neutron spectrum most effectively. Our new design shows significantly improved performance: the thermal neutron flux and flux rate are increased by factors of 3.02 and 3.27, respectively, compared with the conventional neutron source design.

  4. Vestibular models for design and evaluation of flight simulator motion

    NASA Technical Reports Server (NTRS)

    Bussolari, S. R.; Sullivan, R. B.; Young, L. R.

    1986-01-01

    The use of spatial orientation models in the design and evaluation of control systems for motion-base flight simulators is investigated experimentally. The development of a high-fidelity motion drive controller using an optimal control approach based on human vestibular models is described. The formulation and implementation of the optimal washout system are discussed. The effectiveness of motion washout was evaluated by studying the response of six motion washout systems on the NASA/AMES Vertical Motion Simulator during a single dash-quick-stop maneuver. The effects of the motion washout system on pilot performance and simulator acceptability are examined. The data reveal that human spatial orientation models are useful for the design and evaluation of flight simulator motion fidelity.
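
    The washout notion at the heart of such motion drive controllers can be illustrated with a classical first-order high-pass filter. This is a simplification: the paper's controller is derived by optimal control with vestibular models, and the filter form and time constant below are assumptions for illustration only.

```python
# Minimal washout sketch: a discrete first-order high-pass filter.
# Sustained accelerations are "washed out" so the motion platform drifts
# back to neutral (respecting its travel limits), while transient cues
# pass through to the pilot.

def washout(signal, dt=0.01, tau=2.0):
    """Discrete first-order high-pass: y[n] = a*(y[n-1] + x[n] - x[n-1])."""
    a = tau / (tau + dt)
    out = [signal[0]]
    for n in range(1, len(signal)):
        out.append(a * (out[-1] + signal[n] - signal[n - 1]))
    return out

# A step in commanded acceleration: the filtered cue rises at the onset,
# then decays toward zero, so a sustained input cannot saturate the
# platform's travel.
step = [0.0] * 10 + [1.0] * 500
cue = washout(step)
```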

  5. Numerical Simulations of SCR DeNOx System for a 660MW coal-fired power station

    NASA Astrophysics Data System (ADS)

    Yongqiang, Deng; Zhongming, Mei; Yijun, Mao; Nianping, Liu; Guoming, Yin

    2018-06-01

    The selective catalytic reduction (SCR) DeNOx system of a 660 MW coal-fired power station suffers from low denitrification efficiency, large ammonia consumption and an excessive ammonia escape rate. To investigate these limitations, numerical simulations were conducted using the CFD tool STAR-CCM+. The simulation results revealed the problems existing in the SCR DeNOx system. The factors affecting its denitrification performance, including the structural parameters and the ammonia injected through the ammonia nozzles, were then optimized. Under the optimized operating conditions, the denitrification efficiency of the SCR system was enhanced, while the ammonia escape rate was reduced below 3 ppm. This study serves as a reference for the optimization and modification of SCR systems.

  6. Optimization of automotive Rankine cycle waste heat recovery under various engine operating condition

    NASA Astrophysics Data System (ADS)

    Punov, Plamen; Milkov, Nikolay; Danel, Quentin; Perilhon, Christelle; Podevin, Pierre; Evtimov, Teodossi

    2017-02-01

    An optimization study of the Rankine cycle as a function of diesel engine operating mode is presented. The Rankine cycle is studied here as a waste heat recovery system that uses the engine exhaust gases as its heat source. The exhaust gas parameters (temperature, mass flow and composition) were defined by means of numerical simulation in the advanced simulation software AVL Boost. The engine simulation model had previously been validated, and the Vibe function parameters were defined as a function of engine load. The Rankine cycle output power and efficiency were numerically estimated by means of a simulation code in Python(x,y). This code includes a discretized heat-exchanger model and simplified models of the pump and the expander based on their isentropic efficiencies. The Rankine cycle simulation revealed the optimal working-fluid mass flow and evaporation pressure for each heat-source condition. Thus, the optimal Rankine cycle performance was obtained over the engine operating map.
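
    The final step, choosing the working-fluid mass flow and evaporation pressure that maximise output for a given heat source, can be sketched as a constrained grid search. Every number and property correlation below is an invented stand-in, not taken from the paper or from AVL Boost.

```python
import math

# Toy Rankine-cycle optimization: pick the working-fluid mass flow and
# evaporation pressure that maximise net expander power, subject to the
# heat available from the exhaust stream. All values are assumptions.

Q_EXHAUST = 50_000.0   # W, heat recoverable from exhaust (assumed)
ETA_EXP = 0.7          # expander isentropic efficiency (assumed)
ETA_PUMP = 0.6         # pump efficiency (assumed)

def net_power(mdot, p_evap):
    """Toy cycle model: enthalpy drop grows with evaporation pressure."""
    h_in = 2.0e5 + 5.0e4 * math.log(p_evap)  # J/kg at expander inlet (toy)
    h_out = 2.0e5                            # J/kg at expander outlet (toy)
    q_needed = mdot * (h_in - h_out) / 0.8   # boiler duty with margin (toy)
    if q_needed > Q_EXHAUST:                 # heat source cannot supply it
        return float("-inf")
    w_pump = mdot * 1.0e3 * (p_evap - 1.0) / ETA_PUMP
    return ETA_EXP * mdot * (h_in - h_out) - w_pump

# Grid over mass flow (kg/s) and evaporation pressure (bar): the optimum
# balances a larger enthalpy drop against the heat-supply constraint and
# the rising pump work.
grid = [(m / 100.0, p) for m in range(1, 40) for p in range(2, 31)]
best = max(grid, key=lambda mp: net_power(*mp))
```

    The interior optimum (neither the largest flow nor the highest pressure wins outright) mirrors the trade-off the paper resolves per engine operating point.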

  7. Predictive simulations and optimization of nanowire field-effect PSA sensors including screening

    NASA Astrophysics Data System (ADS)

    Baumgartner, Stefan; Heitzinger, Clemens; Vacic, Aleksandar; Reed, Mark A.

    2013-06-01

    We apply our self-consistent PDE model for the electrical response of field-effect sensors to the 3D simulation of nanowire PSA (prostate-specific antigen) sensors. The charge concentration in the biofunctionalized boundary layer at the semiconductor-electrolyte interface is calculated using the propka algorithm, and the screening of the biomolecules by the free ions in the liquid is modeled by a sensitivity factor. This comprehensive approach yields excellent agreement with experimental current-voltage characteristics without any fitting parameters. Having verified the numerical model in this manner, we study the sensitivity of nanowire PSA sensors by changing device parameters, making it possible to optimize the devices and revealing the attributes of the optimal field-effect sensor.

  8. Geometrical optimization of a local ballistic magnetic sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanda, Yuhsuke; Hara, Masahiro; Nomura, Tatsuya

    2014-04-07

    We have developed a highly sensitive local magnetic sensor by using a ballistic transport property in a two-dimensional conductor. A semiclassical simulation reveals that the sensitivity increases when the geometry of the sensor and the spatial distribution of the local field are optimized. We have also experimentally demonstrated a clear observation of a magnetization process in a permalloy dot whose size is much smaller than the size of an optimized ballistic magnetic sensor fabricated from a GaAs/AlGaAs two-dimensional electron gas.

  9. SU-D-218-05: Material Quantification in Spectral X-Ray Imaging: Optimization and Validation.

    PubMed

    Nik, S J; Thing, R S; Watts, R; Meyer, J

    2012-06-01

    To develop and validate a multivariate statistical method to optimize scanning parameters for material quantification in spectral x-ray imaging. An optimization metric was constructed by extensively sampling the thickness space for the expected number of counts for m (two or three) materials. This resulted in an m-dimensional confidence region of material quantities, e.g. thicknesses. Minimization of the ellipsoidal confidence region leads to the optimization of energy bins. For a given spectrum, the minimum counts required for effective material separation can be determined by predicting the signal-to-noise ratio (SNR) of the quantification. A Monte Carlo (MC) simulation framework using BEAM was developed to validate the metric. Projection data for the m materials were generated, and material decomposition was performed for combinations of iodine, calcium and water by minimizing the z-score between the expected spectrum and the binned measurements. The mean square error (MSE) and variance were calculated to measure the accuracy and precision of this approach, respectively. The minimum MSE corresponds to the optimal energy bins in the BEAM simulations; in the optimization metric, this is equivalent to the smallest confidence region. The SNR of the simulated images was also compared to the predictions from the metric. The MSE was dominated by the variance for the given material combinations, which demonstrates accurate material quantification. The BEAM simulations revealed that the optimization of energy bins was accurate to within 1 keV. The SNRs predicted by the optimization metric yielded satisfactory agreement but were expectedly higher for the BEAM simulations due to the inclusion of scattered radiation. The validation showed that the multivariate statistical method provides accurate material quantification, correct location of optimal energy bins and adequate prediction of image SNR. The BEAM code system is suitable for generating spectral x-ray imaging simulations. © 2012 American Association of Physicists in Medicine.
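
    The z-score minimisation used for material decomposition can be sketched for two materials and two energy bins. The attenuation coefficients, incident counts, and thickness grid below are invented for illustration; the study's BEAM-based framework is far more detailed.

```python
import math

# Toy two-material decomposition in two energy bins: recover material
# thicknesses by minimising a chi-square-like sum of squared z-scores
# between measured and expected counts (Poisson variance ~ expected).
MU = {"low":  {"iodine": 2.0, "water": 0.30},   # 1/cm, invented values
      "high": {"iodine": 0.8, "water": 0.20}}
N0 = {"low": 1.0e5, "high": 1.0e5}              # incident counts (assumed)

def expected(t_iodine, t_water, bin_):
    """Beer-Lambert expected counts for the given thicknesses (cm)."""
    mu = MU[bin_]
    return N0[bin_] * math.exp(-mu["iodine"] * t_iodine - mu["water"] * t_water)

# Simulated noiseless "measurement" for a known ground truth.
truth = (0.5, 10.0)
measured = {b: expected(*truth, b) for b in MU}

def z2(t_i, t_w):
    """Sum of squared z-scores over the energy bins."""
    s = 0.0
    for b in MU:
        e = expected(t_i, t_w, b)
        s += (measured[b] - e) ** 2 / e
    return s

# Exhaustive grid search over (iodine, water) thicknesses.
grid = [(i / 100.0, w / 10.0) for i in range(0, 101) for w in range(0, 151)]
t_i, t_w = min(grid, key=lambda tw: z2(*tw))
```

    With two distinct attenuation vectors the two-bin system is invertible, so the noiseless minimiser recovers the ground-truth thicknesses exactly; with Poisson noise the spread of this minimiser is what the paper's confidence-region metric quantifies.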

  10. Implications of optimization cost for balancing exploration and exploitation in global search and for experimental optimization

    NASA Astrophysics Data System (ADS)

    Chaudhuri, Anirban

    Global optimization based on expensive and time-consuming simulations or experiments usually cannot be carried out to convergence, but must be stopped because of time constraints, or because the cost of additional function evaluations exceeds the benefit of improving the objective(s). This dissertation sets out to explore the implications of such budget and time constraints for the balance between exploration and exploitation and for the decision of when to stop. Three aspects are considered in terms of their effects on the balance between exploration and exploitation: 1) the history of the optimization, 2) a fixed evaluation budget, and 3) cost as part of the objective function. To this end, this research develops modifications to the surrogate-based Efficient Global Optimization (EGO) algorithm that better control the balance between exploration and exploitation, together with stopping criteria facilitated by these modifications. The focus then shifts to experimental optimization, which shares the issues of cost and time constraints. Through a study on optimizing thrust and power for a small flapping wing for micro air vehicles, important differences and similarities between experimental and simulation-based optimization are identified. The most important difference is that reduction of noise in experiments becomes a major time and cost issue; a second is that parallelism as a way to cut cost is more challenging. The experimental optimization reveals the tendency of the surrogate to display optimistic bias near the surrogate optimum, a tendency then verified to occur in simulation-based optimization as well.
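
    In Efficient Global Optimization, the exploration-exploitation balance discussed above is mediated by the expected-improvement acquisition function. A minimal sketch of standard EI for minimisation follows; it is the textbook form, not the dissertation's modified criterion.

```python
import math

# Expected improvement (EI): given the surrogate's prediction mu and
# uncertainty sigma at a candidate point, EI trades off exploitation
# (mu below the best observed value f_best) against exploration
# (large sigma), assuming a Gaussian predictive distribution.

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, f_best):
    """EI for minimisation; zero uncertainty reduces to plain improvement."""
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm_cdf(z) + sigma * norm_pdf(z)
```

    A point predicted no better than f_best but highly uncertain still has positive EI; that exploration term is exactly what budget-aware modifications of EGO rebalance.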

  11. Improving Simulated Annealing by Replacing Its Variables with Game-Theoretic Utility Maximizers

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Bandari, Esfandiar; Tumer, Kagan

    2001-01-01

    The game-theory field of COllective INtelligence (COIN) concerns the design of computer-based players engaged in a non-cooperative game so that, as those players pursue their self-interests, a pre-specified global goal for the collective computational system is achieved as a side-effect. Previous implementations of COIN algorithms have outperformed conventional techniques by up to several orders of magnitude, on domains ranging from telecommunications control to optimization in congestion problems. Recent mathematical developments have revealed that these previously developed algorithms were based on only two of the three factors determining performance. Consideration of only the third factor would instead lead to conventional optimization techniques like simulated annealing that have little to do with non-cooperative games. In this paper we present an algorithm based on all three terms at once. This algorithm can be viewed as a way to modify simulated annealing by recasting it as a non-cooperative game, with each variable replaced by a player. This recasting allows us to leverage the intelligent behavior of the individual players to substantially improve the exploration step of simulated annealing. Experiments demonstrate that this recasting significantly improves simulated annealing for a model of an economic process run over an underlying small-worlds topology. Furthermore, these experiments reveal novel small-worlds phenomena and highlight the shortcomings of conventional mechanism design in bounded-rationality domains.
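
    For reference, this is the conventional simulated annealing skeleton that the COIN approach recasts, with each variable's update rule replaced by a utility-maximising player. The example objective, neighbourhood, and cooling schedule below are arbitrary illustrative choices, not from the paper.

```python
import math
import random

# Baseline simulated annealing with Metropolis acceptance: always take
# improvements, sometimes accept uphill moves so the search can escape
# local minima; the acceptance probability shrinks as the temperature t
# is cooled geometrically.

def anneal(cost, state, neighbour, t0=1.0, cooling=0.995, steps=5000, seed=0):
    rng = random.Random(seed)
    best = cur = state
    t = t0
    for _ in range(steps):
        cand = neighbour(cur, rng)
        delta = cost(cand) - cost(cur)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cur = cand
        if cost(cur) < cost(best):
            best = cur
        t *= cooling
    return best

# Example: minimise a 1-D multimodal function from a poor start.
f = lambda x: math.sin(3 * x) + 0.1 * x * x
move = lambda x, rng: x + rng.uniform(-0.5, 0.5)
x_star = anneal(f, 4.0, move)
```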

  12. Quantum state transfer in double-quantum-well devices

    NASA Technical Reports Server (NTRS)

    Jakumeit, Jurgen; Tutt, Marcel; Pavlidis, Dimitris

    1994-01-01

    A Monte Carlo simulation of double-quantum-well (DQW) devices is presented to analyze the quantum state transfer (QST) effect. Different structures, based on the AlGaAs/GaAs system, were simulated at 77 and 300 K and optimized in terms of electron transfer and device speed. The analysis revealed the dominant role of impurity scattering in the QST. Different approaches were used to optimize QST devices, and basic physical limitations were found in the electron transfer between the QWs. The maximum transfer of electrons from a high- to a low-mobility well was at best 20%. Negative differential resistance is hampered by the almost linear, rather than threshold-dependent, relation of electron transfer to electric field. By optimizing the doping profile, the operation frequency limit could be extended to 260 GHz.

  13. Optimization of cladding parameters for resisting corrosion on low carbon steels using simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Balan, A. V.; Shivasankaran, N.; Magibalan, S.

    2018-04-01

    Low carbon steels used in chemical industries are frequently affected by corrosion. Cladding is a surfacing process in which a thick layer of filler metal is deposited on a highly corrosive material to achieve corrosion resistance. Flux cored arc welding (FCAW) is preferred for cladding owing to its augmented efficiency and higher deposition rate. In this cladding process, the effect of corrosion can be minimized by controlling the output responses: minimizing dilution and penetration while maximizing bead width, reinforcement and ferrite number. This paper deals with the multi-objective optimization of flux cored arc welding responses for a super duplex stainless steel material, by controlling process parameters such as wire feed rate, welding speed, nozzle-to-plate distance and welding gun angle, using the simulated annealing technique. A regression equation was developed and validated using the ANOVA technique. The multi-objective optimization of weld bead parameters was carried out using simulated annealing to obtain the optimum bead geometry for reducing corrosion. The potentiodynamic polarization test reveals the balanced formation of fine particles of ferrite and austenite content, with a desensitized microstructure, in the optimized clad bead.

  14. Three-link Swimming in Sand

    NASA Astrophysics Data System (ADS)

    Hatton, R. L.; Ding, Yang; Masse, Andrew; Choset, Howie; Goldman, Daniel

    2011-11-01

    Many animals move within granular media such as desert sand. Recent biological experiments have revealed that the sandfish lizard uses an undulatory gait to swim within sand. Models reveal that swimming occurs in a frictional fluid in which inertial effects are small and kinematics dominate. To understand the fundamental mechanics of swimming in granular media (GM), we examine a model system that has been well-studied in Newtonian fluids: the three-link swimmer. We create a physical model driven by two servo-motors, and a discrete element simulation of the swimmer. To predict optimal gaits we use a recent geometric mechanics theory combined with empirically determined resistive force laws for GM. We develop a kinematic relationship between the swimmer's shape and position velocities and construct connection vector field and constraint curvature function visualizations of the system dynamics. From these we predict optimal gaits for forward, lateral and rotational motion. Experiment and simulation are in accord with the theoretical predictions; thus geometric tools can be used to study locomotion in GM.

  15. Joint Optimization of Vertical Component Gravity and Seismic P-wave First Arrivals by Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Louie, J. N.; Basler-Reeder, K.; Kent, G. M.; Pullammanappallil, S. K.

    2015-12-01

    Simultaneous joint seismic-gravity optimization improves P-wave velocity models in areas with sharp lateral velocity contrasts. Optimization is achieved using simulated annealing, a metaheuristic global optimization algorithm that does not require an accurate initial model. Balancing the seismic-gravity objective function is accomplished by a novel approach based on analysis of Pareto charts. Gravity modeling uses a newly developed convolution algorithm, while seismic modeling utilizes the highly efficient Vidale eikonal equation traveltime generation technique. Synthetic tests show that joint optimization improves velocity model accuracy and provides velocity control below the deepest headwave raypath. Detailed first arrival picking followed by trial velocity modeling remediates inconsistent data. We use a set of highly refined first arrival picks to compare results of a convergent joint seismic-gravity optimization to the Plotrefa™ and SeisOpt® Pro™ velocity modeling packages. Plotrefa™ uses a nonlinear least squares approach that is initial-model dependent and produces shallow velocity artifacts. SeisOpt® Pro™ utilizes the simulated annealing algorithm and is limited to depths above the deepest raypath. Joint optimization increases the depth of constrained velocities, improving reflector coherency at depth. Kirchhoff prestack depth migrations reveal that joint optimization ameliorates shallow velocity artifacts caused by limitations in refraction ray coverage. Seismic and gravity data from the San Emidio Geothermal field of the northwest Basin and Range province demonstrate that joint optimization changes interpretation outcomes. The prior shallow-valley interpretation gives way to a deep valley model, while shallow reflectors that could have been interpreted as antiformal folds are flattened. Furthermore, joint optimization provides a clearer image of the rangefront fault. This technique can readily be applied to existing datasets and could replace the existing strategy of forward modeling to match gravity data.
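
    The Pareto-chart balancing of the joint objective can be illustrated by sweeping a weight between two single-data misfits. The quadratic misfit functions below are toy stand-ins, not traveltime or gravity kernels.

```python
# Toy joint-objective balancing: the combined misfit is a weighted sum of
# a "seismic" and a "gravity" term, and sweeping the weight w traces the
# trade-off curve the Pareto-chart analysis examines. The two misfits
# deliberately prefer different answers so the trade-off is visible.

def seismic_misfit(v):
    return (v - 3.0) ** 2          # toy: seismic data prefer v ~ 3

def gravity_misfit(v):
    return 0.5 * (v - 4.0) ** 2    # toy: gravity data prefer v ~ 4

def joint_best(w, grid=None):
    """Grid minimiser of w*seismic + (1-w)*gravity over a 1-D model."""
    grid = grid or [2.0 + 0.01 * i for i in range(300)]
    return min(grid, key=lambda v: w * seismic_misfit(v) + (1 - w) * gravity_misfit(v))

v_grav = joint_best(0.0)   # gravity-only answer
v_seis = joint_best(1.0)   # seismic-only answer
```

    Intermediate weights land between the two single-data answers; the paper picks its balance from where this trade-off curve bends.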

  16. Exploration of optimal dosing regimens of haloperidol, a D2 Antagonist, via modeling and simulation analysis in a D2 receptor occupancy study.

    PubMed

    Lim, Hyeong-Seok; Kim, Su Jin; Noh, Yook-Hwan; Lee, Byung Chul; Jin, Seok-Joon; Park, Hyun Soo; Kim, Soohyeon; Jang, In-Jin; Kim, Sang Eun

    2013-03-01

    To evaluate the potential usage of D(2) receptor occupancy (D2RO) measured by positron emission tomography (PET) in antipsychotic development, eight healthy male volunteers in this randomized, parallel-group study received oral doses of 0.5 (n = 3), 1 (n = 2), or 3 mg (n = 3) of haloperidol once daily for 7 days. PET scans were acquired before haloperidol administration and on days 8 and 12, with serial pharmacokinetic sampling on day 7. The pharmacokinetics and the binding potential to the D(2) receptor in the putamen and caudate nucleus over time were analyzed using NONMEM, and simulations of the D2RO profiles over time under various haloperidol regimens were conducted to find the optimal dosing regimens. A one-compartment model with a saturable binding compartment and an inhibitory E(max) model in the effect compartment best described the data. Plasma haloperidol concentrations at half-maximal inhibition were 0.791 and 0.650 ng/ml in the putamen and caudate nucleus, respectively. Simulation suggested that haloperidol 2 mg every 12 h is near the optimal dose. This study showed that sparse D2RO measurements in a steady-state pharmacodynamic design after multiple dosing can reveal the potential treatment effect of a D(2) antagonist and identify candidate optimal doses for later clinical studies by modeling and simulation.
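
    At its simplest, the inhibitory E(max) relation underlying this analysis reduces to a saturable concentration-occupancy curve. This is a sketch only; the fitted model also includes an effect compartment and saturable binding, and the E(max) of 1.0 here is an assumption.

```python
# Sketch of the E(max) concentration-occupancy relation: receptor
# occupancy rises with plasma concentration toward complete blockade,
# with half-maximal occupancy at the IC50 (0.791 ng/ml in the putamen,
# per the abstract).

def d2_occupancy(conc_ng_ml, ic50=0.791, emax=1.0):
    """Fraction of D2 receptors occupied at a given plasma concentration."""
    return emax * conc_ng_ml / (ic50 + conc_ng_ml)

half = d2_occupancy(0.791)  # occupancy exactly at the IC50
```

    Plugging simulated steady-state concentration profiles into such a curve is what lets sparse occupancy data discriminate between candidate regimens.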

  17. Monte Carlo simulations within avalanche rescue

    NASA Astrophysics Data System (ADS)

    Reiweger, Ingrid; Genswein, Manuel; Schweizer, Jürg

    2016-04-01

    Refining concepts for avalanche rescue involves calculating suitable settings for rescue strategies such as an adequate probing depth for probe line searches or an optimal time for performing resuscitation for a recovered avalanche victim in case of additional burials. In the latter case, treatment decisions have to be made in the context of triage. However, given the low number of incidents it is rarely possible to derive quantitative criteria based on historical statistics in the context of evidence-based medicine. For these rare, but complex rescue scenarios, most of the associated concepts, theories, and processes involve a number of unknown "random" parameters which have to be estimated in order to calculate anything quantitatively. An obvious approach for incorporating a number of random variables and their distributions into a calculation is to perform a Monte Carlo (MC) simulation. We here present Monte Carlo simulations for calculating the most suitable probing depth for probe line searches depending on search area and an optimal resuscitation time in case of multiple avalanche burials. The MC approach reveals, e.g., new optimized values for the duration of resuscitation that differ from previous, mainly case-based assumptions.
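
    The probing-depth question lends itself to a compact Monte Carlo sketch: sample burial depths from an assumed distribution and read off the shallowest probing depth that reaches a target fraction of victims. The lognormal parameters below are placeholders, not values from the study.

```python
import random

# Monte Carlo estimate of a suitable probing depth: draw burial depths
# (metres) from an assumed lognormal distribution and return the
# empirical quantile that covers the target fraction of victims.

def probing_depth(target=0.9, n=100_000, seed=1):
    rng = random.Random(seed)
    depths = sorted(rng.lognormvariate(0.0, 0.5) for _ in range(n))
    return depths[int(target * n)]  # empirical target-quantile of depth

d90 = probing_depth()  # depth reaching ~90% of simulated burials
```

    The same pattern (sample the unknowns, read a quantile or expectation off the simulated ensemble) carries over to the resuscitation-time question in the abstract.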

  18. Vibrational, spectroscopic, molecular docking and density functional theory studies on N-(5-aminopyridin-2-yl)acetamide

    NASA Astrophysics Data System (ADS)

    Asath, R. Mohamed; Rekha, T. N.; Premkumar, S.; Mathavan, T.; Benial, A. Milton Franklin

    2016-12-01

    Conformational analysis was carried out for the N-(5-aminopyridin-2-yl)acetamide (APA) molecule. The most stable optimized structure was predicted by density functional theory calculations using the B3LYP functional with the cc-pVQZ basis set. The optimized structural parameters and vibrational frequencies were calculated. The experimental and theoretical vibrational frequencies were assigned and compared. The ultraviolet-visible spectrum was simulated and validated experimentally. The molecular electrostatic potential surface was simulated. Frontier molecular orbitals and related molecular properties were computed, which reveal the high molecular reactivity and stability of the APA molecule; the density of states spectrum was also simulated. Natural bond orbital analysis was performed to confirm the bioactivity of the APA molecule. Antidiabetic activity was studied by molecular docking analysis, which identified the APA molecule as a promising inhibitor against diabetic nephropathy.

  19. Employing static excitation control and tie line reactance to stabilize wind turbine generators

    NASA Technical Reports Server (NTRS)

    Hwang, H. H.; Mozeico, H. V.; Guo, T.

    1978-01-01

    An analytical representation of a wind turbine generator is presented which employs blade pitch angle feedback control. A mathematical model was formulated. With the functioning MOD-0 wind turbine serving as a practical case study, results of computer simulations of the model as applied to the problem of dynamic stability at rated load are also presented. The effect of the tower shadow was included in the input to the system. Different configurations of the drive train, and optimal values of the tie line reactance were used in the simulations. Computer results revealed that a static excitation control system coupled with optimal values of the tie line reactance would effectively reduce oscillations of the power output, without the use of a slip clutch.

  20. A modelling framework for mitigating customers' waiting time at a vehicle inspection centre

    NASA Astrophysics Data System (ADS)

    Ahmad, Norazura; Abidin, Norhaslinda Zainal; Ilyas, Khibtiyah; Abduljabbar, Waleed Khalid

    2017-11-01

    In Malaysia, the agency entrusted by the Government to perform mandatory inspection of public, commercial and private vehicles receives many customers daily. Complaints received from customers are often associated with waiting time, which leads to loss of business and customer dissatisfaction. To address this issue, we propose a framework for modelling a vehicle inspection system using an integration of simulation and optimization approaches. The strengths of simulation and optimization are briefly reviewed, with the aim of revealing the synergy between these established methods in determining an appropriate customer waiting time at a vehicle inspection centre. Relevant concepts and preliminary results are also presented and discussed in this paper.

  1. Cost versus life cycle assessment-based environmental impact optimization of drinking water production plants.

    PubMed

    Capitanescu, F; Rege, S; Marvuglia, A; Benetto, E; Ahmadi, A; Gutiérrez, T Navarrete; Tiruta-Barna, L

    2016-07-15

    Empowering decision makers with cost-effective solutions for reducing the environmental burden of industrial processes, at both the design and operation stages, is nowadays a major worldwide concern. The paper addresses this issue for the sector of drinking water production plants (DWPPs), seeking optimal solutions that trade off operating cost against life cycle assessment (LCA)-based environmental impact while satisfying outlet water quality criteria. This leads to a challenging bi-objective constrained optimization problem, which relies on a computationally expensive, intricate process-modelling simulator of the DWPP and has to be solved within a limited computational budget. Since mathematical programming methods are unusable in this case, the paper examines how six off-the-shelf state-of-the-art global meta-heuristic optimization algorithms, suitable for such simulation-based optimization, perform on these challenges, namely the Strength Pareto Evolutionary Algorithm (SPEA2), Non-dominated Sorting Genetic Algorithm (NSGA-II), Indicator-Based Evolutionary Algorithm (IBEA), Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), Differential Evolution (DE), and Particle Swarm Optimization (PSO). The optimization results reveal that good reductions in both the operating cost and the environmental impact of the DWPP can be obtained. Furthermore, NSGA-II outperforms the other competing algorithms, while MOEA/D and DE perform unexpectedly poorly. Copyright © 2016 Elsevier Ltd. All rights reserved.
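
    All six algorithms compared above ultimately report a non-dominated set for the two objectives. A minimal Pareto-front filter for (cost, impact) pairs, both to be minimised, with invented example points:

```python
# Extract the non-dominated (Pareto) set from a list of bi-objective
# points (cost, environmental impact), both to be minimised: a point is
# kept unless some other point is at least as good in both objectives.

def pareto_front(points):
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical (cost, impact) trade-off candidates.
solutions = [(3.0, 9.0), (4.0, 7.0), (5.0, 8.0), (6.0, 4.0), (7.0, 5.0)]
front = pareto_front(solutions)
```

    Quality indicators such as hypervolume, used to rank algorithms like NSGA-II against SPEA2 or MOEA/D, are computed over exactly this kind of front.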

  2. New human-centered linear and nonlinear motion cueing algorithms for control of simulator motion systems

    NASA Astrophysics Data System (ADS)

    Telban, Robert J.

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. To address this, new human-centered motion cueing algorithms were developed. A revised "optimal algorithm" uses time-invariant filters developed by optimal control, incorporating human vestibular system models. The "nonlinear algorithm" is a novel approach that is also formulated by optimal control, but can also be updated in real time. It incorporates a new integrated visual-vestibular perception model that includes both visual and vestibular sensation and the interaction between the stimuli. A time-varying control law requires the matrix Riccati equation to be solved in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. As a result of unsatisfactory sensation, an augmented turbulence cue was added to the vertical mode for both the optimal and nonlinear algorithms. The relative effectiveness of the algorithms, in simulating aircraft maneuvers, was assessed with an eleven-subject piloted performance test conducted on the NASA Langley Visual Motion Simulator (VMS). Two methods, the quasi-objective NASA Task Load Index (TLX) and power spectral density analysis of pilot control, were used to assess pilot workload. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows pilot-induced oscillations on a straight-in approach are less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.
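
    The Riccati step in the time-varying control law can be illustrated in the scalar case, where the continuous algebraic Riccati equation has a closed form. This is only an illustration of the equation being solved; the dissertation solves the matrix version in real time with a neurocomputing approach.

```python
import math

# Scalar continuous algebraic Riccati equation (state dimension 1):
#   2*a*p - (b*p)**2 / r + q = 0
# Treating it as a quadratic in p, the stabilising root gives the LQR
# gain k = b*p/r, which makes the closed loop a - b*k strictly negative.

def care_scalar(a, b, q, r):
    """Stabilising p for 2*a*p - (b*p)**2/r + q = 0, and the gain k."""
    disc = math.sqrt(a * a + q * b * b / r)
    p = (a + disc) * r / (b * b)
    return p, b * p / r

p, k = care_scalar(a=1.0, b=1.0, q=1.0, r=1.0)
# Closed-loop pole: a - b*k = -sqrt(a**2 + q*b**2/r), always stable.
```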

  3. Investigation of Cost and Energy Optimization of Drinking Water Distribution Systems.

    PubMed

    Cherchi, Carla; Badruzzaman, Mohammad; Gordon, Matthew; Bunn, Simon; Jacangelo, Joseph G

    2015-11-17

    Holistic management of water and energy resources through energy and water quality management systems (EWQMSs) has traditionally aimed at energy cost reduction, with limited or no emphasis on energy efficiency or greenhouse gas minimization. This study expanded the existing EWQMS framework and determined the impact of different management strategies for reducing energy cost and energy consumption (and, with it, carbon footprint) on system performance at two drinking water utilities in California (United States). The results showed that optimizing for cost led to cost reductions of 4% (Utility B, summer) to 48% (Utility A, winter). The energy optimization strategy successfully found the lowest-energy operation and achieved energy usage reductions of 3% (Utility B, summer) to 10% (Utility A, winter). The findings of this study revealed that there may be a trade-off between cost optimization (dollars) and energy use (kilowatt-hours), particularly in the summer, when minimizing energy use incurred cost increases of 64% and 184% compared with the cost-optimization scenario. Water age simulations through hydraulic modeling did not reveal any adverse effects on water quality in the distribution system or in tanks from pump schedule optimization targeting either cost or energy minimization.
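    The cost-versus-energy trade-off reported above can be reproduced on a toy pump schedule: brute-force enumeration of pump modes over a few tariff periods shows that the cheapest schedule is not the lowest-energy one. All numbers (tariffs, pump curves, demand) below are invented for illustration and are not the utilities' data.

```python
from itertools import product

tariff = [0.30, 0.10, 0.30, 0.30]            # $/kWh per hour (hypothetical)
modes = {"off": (0.0, 0.0),                  # (m3 delivered, kWh consumed)
         "low": (50.0, 4.5),                 # slower pumping is more efficient
         "high": (100.0, 10.0)}
DEMAND = 100.0                               # m3 that must be delivered

best_cost = best_energy = None               # (cost, energy, schedule) tuples
for sched in product(modes, repeat=len(tariff)):
    volume = sum(modes[m][0] for m in sched)
    if volume < DEMAND:
        continue                             # infeasible schedule
    energy = sum(modes[m][1] for m in sched)
    cost = sum(modes[m][1] * t for m, t in zip(sched, tariff))
    if best_cost is None or cost < best_cost[0]:
        best_cost = (cost, energy, sched)
    if best_energy is None or energy < best_energy[1]:
        best_energy = (cost, energy, sched)
```

    With only one cheap tariff hour, the cost optimum runs the inefficient high-speed mode in that hour, while the energy optimum spreads efficient low-speed pumping into an expensive hour: the same tension the study observed between dollars and kilowatt-hours.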

  4. Honing process optimization algorithms

    NASA Astrophysics Data System (ADS)

    Kadyrov, Ramil R.; Charikov, Pavel N.; Pryanichnikova, Valeria V.

    2018-03-01

    This article considers the relevance of honing processes for creating high-quality mechanical engineering products. The features of the honing process are described, and key concepts are emphasized: the formulation of the optimization task for honing operations, the optimal structure of honing working cycles, stepped and stepless honing cycles, and the simulation of processing and its purpose. It is noted that the reliability of the mathematical model determines the quality parameters of the honing process control. An algorithm for continuous control of the honing process is proposed. The process model reliably describes the machining of a workpiece over a sufficiently wide region and can be used to operate the CC743 CNC machine.

  5. Experimental research on safety impacts of the inside shoulder based on driving simulation.

    PubMed

    Zhao, Xiaohua; Ding, Han; Wu, Yiping; Ma, Jianming; Zhong, Liande

    2015-03-01

    Statistical data show that single-vehicle crashes account for half of all traffic crashes on expressways in China, and previous research has indicated that the main contributing factors relate to whether and how the inside shoulder is paved. The inside shoulder provides space for drivers to make evasive maneuvers and accommodates driver errors. However, lower-cost construction solutions in China have resulted in numerous urban expressway segments designed without inside shoulders. This paper has two objectives. The first is to reveal the safety impacts of inside shoulders on urban expressways through a driving simulator experiment. The second is to propose an optimal range and a recommended value of inside shoulder width for the design of urban expressway inside shoulders. The empirical data, including subjects' eye movement data, heart rate (HR) and the lateral position of vehicles, were collected in a driving simulator and analyzed to evaluate the safety impacts of the inside shoulder. The results revealed that the inside shoulder affects drivers' visual perception, behavior, and psychology; in particular, it has a significant effect on vehicle operations. In addition, this paper recommends the desired and optimal inside shoulder widths for eight-lane, two-way divided expressways. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Optimization of an electrokinetic mixer for microfluidic applications.

    PubMed

    Bockelmann, Hendryk; Heuveline, Vincent; Barz, Dominik P J

    2012-06-01

    This work is concerned with the investigation of the concentration fields in an electrokinetic micromixer and its optimization in order to achieve high mixing rates. The mixing concept is based on the combination of an alternating electrical excitation applied to a pressure-driven base flow in a meandering microchannel geometry. The electrical excitation induces a secondary electrokinetic velocity component, which results in a complex flow field within the meander bends. A mathematical model describing the physicochemical phenomena present within the micromixer is implemented in an in-house finite-element-method code. We first perform simulations comparable to experiments concerned with the investigation of the flow field in the bends. The comparison of the complex flow topology found in simulation and experiment reveals excellent agreement. Hence, the validated model and numerical schemes are employed for a numerical optimization of the micromixer performance. In detail, we optimize the secondary electrokinetic flow by finding the best electrical excitation parameters, i.e., frequency and amplitude, for a given waveform. Two optimized electrical excitations featuring a discrete and a continuous waveform are discussed with respect to characteristic time scales of our mixing problem. The results demonstrate that the micromixer is able to achieve high mixing degrees very rapidly.
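    The excitation-parameter search described above can be sketched as a grid search over frequency and amplitude of a toy mixing-degree response surface. The function below is a made-up surface with an interior optimum, standing in for the paper's physicochemical model, whose evaluation requires a full finite-element simulation.

```python
import math

def mixing_degree(freq_hz, amp_v):
    """Hypothetical response surface peaking at 2 Hz and 5 V (invented)."""
    return (freq_hz * math.exp(-freq_hz / 2.0)) * (amp_v * math.exp(-amp_v / 5.0))

freqs = [0.5 * k for k in range(1, 11)]      # candidate frequencies, 0.5..5 Hz
amps = [float(k) for k in range(1, 11)]      # candidate amplitudes, 1..10 V
# exhaustive search over the excitation-parameter grid
best = max((mixing_degree(f, a), f, a) for f in freqs for a in amps)
best_f, best_a = best[1], best[2]
```

    In practice each grid point would trigger a full flow simulation, which is why the authors restrict the optimization to a few excitation parameters (frequency and amplitude) for a fixed waveform.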

  7. Optimization of an electrokinetic mixer for microfluidic applications

    PubMed Central

    Bockelmann, Hendryk; Heuveline, Vincent; Barz, Dominik P. J.

    2012-01-01

    This work is concerned with the investigation of the concentration fields in an electrokinetic micromixer and its optimization in order to achieve high mixing rates. The mixing concept is based on the combination of an alternating electrical excitation applied to a pressure-driven base flow in a meandering microchannel geometry. The electrical excitation induces a secondary electrokinetic velocity component, which results in a complex flow field within the meander bends. A mathematical model describing the physicochemical phenomena present within the micromixer is implemented in an in-house finite-element-method code. We first perform simulations comparable to experiments concerned with the investigation of the flow field in the bends. The comparison of the complex flow topology found in simulation and experiment reveals excellent agreement. Hence, the validated model and numerical schemes are employed for a numerical optimization of the micromixer performance. In detail, we optimize the secondary electrokinetic flow by finding the best electrical excitation parameters, i.e., frequency and amplitude, for a given waveform. Two optimized electrical excitations featuring a discrete and a continuous waveform are discussed with respect to characteristic time scales of our mixing problem. The results demonstrate that the micromixer is able to achieve high mixing degrees very rapidly. PMID:22712034

  8. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    PubMed

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by a simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the optimization results, a multiple linear regression model with the topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented, and the two models were then compared. The results revealed that the proposed approach is practicable for optimizing soil sampling schemes. The optimal configuration was capable of capturing soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, providing an effective means, as well as a theoretical basis, for determining sampling configurations and mapping the spatial distribution of soil organic matter at low cost and with high efficiency.
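    The simulated annealing step can be sketched on a toy version of the sampling problem: choose K sample sites from road-accessible candidates so as to minimize the mean distance from field points to their nearest sample. The coordinates, cost function and cooling schedule below are all illustrative, not the study's data.

```python
import math
import random

random.seed(7)
# candidate sites along a hypothetical "road" and field points to cover
candidates = [(float(x), 0.0) for x in range(20)]
field = [(float(x), float(y)) for x in range(0, 20, 2) for y in range(1, 4)]
K = 5

def cost(selected):
    # mean distance from each field point to its nearest selected site
    return sum(min(math.dist(p, s) for s in selected) for p in field) / len(field)

current = random.sample(candidates, K)
current_cost = initial_cost = best_cost = cost(current)
temp = 1.0
for step in range(2000):
    cand = current[:]
    # neighborhood move: swap one selected site for an unselected one
    cand[random.randrange(K)] = random.choice(
        [c for c in candidates if c not in cand])
    c = cost(cand)
    # Metropolis acceptance: always take improvements, sometimes take
    # uphill moves while the temperature is still high
    if c < current_cost or random.random() < math.exp((current_cost - c) / temp):
        current, current_cost = cand, c
    best_cost = min(best_cost, current_cost)
    temp *= 0.995                      # geometric cooling
```

    The uphill acceptances early on let the search escape poor local configurations; as the temperature decays, the walk freezes into a near-optimal layout.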

  9. Molecular Dynamics Modeling and Simulation of Diamond Cutting of Cerium.

    PubMed

    Zhang, Junjie; Zheng, Haibing; Shuai, Maobing; Li, Yao; Yang, Yang; Sun, Tao

    2017-12-01

    The coupling between structural phase transformations and dislocations makes the deformation behavior of metallic cerium at the nanoscale challenging to understand. In the present work, we elucidate the underlying mechanism of cerium under ultra-precision diamond cutting by means of molecular dynamics modeling and simulations. The molecular dynamics model of diamond cutting of cerium is established by assigning empirical potentials to describe the atomic interactions and by evaluating the properties of two face-centered cubic cerium phases. Subsequent molecular dynamics simulations reveal that dislocation slip dominates the plastic deformation of cerium during the cutting process. In addition, analysis based on atomic radial distribution functions demonstrates that minor phase transformations from γ-Ce to δ-Ce occurred in both the machined surface and the formed chip. Investigations of the machining-parameter dependence then reveal the optimal machining conditions for achieving a high-quality machined surface on cerium.
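    The radial distribution function analysis mentioned above can be sketched on a toy configuration: for a small simple-cubic cluster with unit spacing (arbitrary units, nothing to do with cerium's actual lattice), g(r) computed from the pair-distance histogram with a shell-volume correction peaks at the nearest-neighbor spacing.

```python
import itertools
import math

# simple-cubic 4x4x4 cluster with unit spacing (illustrative, not Ce)
pos = [tuple(map(float, p)) for p in itertools.product(range(4), repeat=3)]
n = len(pos)
dr = 0.05                                   # histogram bin width
nbins = 60                                  # covers r up to 3.0
hist = [0] * nbins
for i in range(n):
    for j in range(i + 1, n):
        r = math.dist(pos[i], pos[j])
        b = int(r / dr)
        if b < nbins:
            hist[b] += 1
# unnormalized g(r) ~ counts / r^2 (shell-volume correction; constant
# prefactors are dropped since only peak positions matter here)
g = [c / ((k + 0.5) * dr) ** 2 for k, c in enumerate(hist)]
peak_bin = max(range(nbins), key=g.__getitem__)
peak_r = (peak_bin + 0.5) * dr              # first (and highest) peak near r = 1
```

    In a phase-identification workflow, the positions of such peaks would be compared against the known shell distances of the candidate γ-Ce and δ-Ce structures.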

  10. Molecular Dynamics Modeling and Simulation of Diamond Cutting of Cerium

    NASA Astrophysics Data System (ADS)

    Zhang, Junjie; Zheng, Haibing; Shuai, Maobing; Li, Yao; Yang, Yang; Sun, Tao

    2017-07-01

    The coupling between structural phase transformations and dislocations makes the deformation behavior of metallic cerium at the nanoscale challenging to understand. In the present work, we elucidate the underlying mechanism of cerium under ultra-precision diamond cutting by means of molecular dynamics modeling and simulations. The molecular dynamics model of diamond cutting of cerium is established by assigning empirical potentials to describe the atomic interactions and by evaluating the properties of two face-centered cubic cerium phases. Subsequent molecular dynamics simulations reveal that dislocation slip dominates the plastic deformation of cerium during the cutting process. In addition, analysis based on atomic radial distribution functions demonstrates that minor phase transformations from γ-Ce to δ-Ce occurred in both the machined surface and the formed chip. Investigations of the machining-parameter dependence then reveal the optimal machining conditions for achieving a high-quality machined surface on cerium.

  11. Predictive simulation of gait at low gravity reveals skipping as the preferred locomotion strategy

    PubMed Central

    Ackermann, Marko; van den Bogert, Antonie J.

    2012-01-01

    The investigation of gait strategies in low-gravity environments has gained momentum recently as manned missions to the Moon and to Mars are reconsidered. Although reports by astronauts of the Apollo missions indicate alternative gait strategies might be favored on the Moon, computational simulations and experimental investigations have been almost exclusively limited to the study of either walking or running, the locomotion modes preferred under Earth's gravity. In order to investigate the gait strategies likely to be favored at low gravity, a series of predictive computational simulations of gait are performed using a physiological model of the musculoskeletal system, without assuming any particular type of gait. A computationally efficient optimization strategy is utilized, allowing for multiple simulations. The results reveal skipping as more efficient and less fatiguing than walking or running and suggest the existence of a walk-skip rather than a walk-run transition at low gravity. The results are expected to serve as a background to the design of experimental investigations of gait under simulated low gravity. PMID:22365845

  12. Predictive simulation of gait at low gravity reveals skipping as the preferred locomotion strategy.

    PubMed

    Ackermann, Marko; van den Bogert, Antonie J

    2012-04-30

    The investigation of gait strategies in low-gravity environments has gained momentum recently as manned missions to the Moon and to Mars are reconsidered. Although reports by astronauts of the Apollo missions indicate alternative gait strategies might be favored on the Moon, computational simulations and experimental investigations have been almost exclusively limited to the study of either walking or running, the locomotion modes preferred under Earth's gravity. In order to investigate the gait strategies likely to be favored at low gravity, a series of predictive computational simulations of gait are performed using a physiological model of the musculoskeletal system, without assuming any particular type of gait. A computationally efficient optimization strategy is utilized, allowing for multiple simulations. The results reveal skipping as more efficient and less fatiguing than walking or running and suggest the existence of a walk-skip rather than a walk-run transition at low gravity. The results are expected to serve as a background to the design of experimental investigations of gait under simulated low gravity. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Membrane Sculpting by F-BAR Domains Studied by Molecular Dynamics Simulations

    PubMed Central

    Yu, Hang; Schulten, Klaus

    2013-01-01

    Interplay between cellular membranes and their peripheral proteins drives many processes in eukaryotic cells. Proteins of the Bin/Amphiphysin/Rvs (BAR) domain family, in particular, play a role in cellular morphogenesis, for example by curving planar membranes into tubular membranes. However, it is still unclear how F-BAR domain proteins act on membranes. Electron microscopy revealed that, in vitro, F-BAR proteins form regular lattices on cylindrically deformed membrane surfaces. Using all-atom and coarse-grained (CG) molecular dynamics simulations, we show that such lattices indeed induce tubes of the observed radii. A 250 ns all-atom simulation reveals that the F-BAR domain curves membranes via the so-called scaffolding mechanism. Plasticity of the F-BAR domain permits conformational change in response to membrane interaction, via partial unwinding of the domain's 3-helix bundle structure. A CG simulation covering more than 350 µs provides a dynamic picture of membrane tubulation by lattices of F-BAR domains. A series of CG simulations identified the optimal lattice type for membrane sculpting, which closely matches the lattices seen through cryo-electron microscopy. PMID:23382665

  14. Model-Based Speech Signal Coding Using Optimized Temporal Decomposition for Storage and Broadcasting Applications

    NASA Astrophysics Data System (ADS)

    Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret

    2003-12-01

    A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance of the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that improved TD model accuracy can be achieved. A methodology for incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.
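    The dynamic-programming optimization of event locations can be sketched on a toy 1-D feature track: place K segment boundaries ("events") to minimize the total within-segment squared error, which DP solves exactly. The signal and K below are invented; the real algorithm operates on spectral parameter trajectories.

```python
def seg_cost(x, i, j):
    """Squared error of approximating x[i:j] by its mean."""
    seg = x[i:j]
    m = sum(seg) / len(seg)
    return sum((v - m) ** 2 for v in seg)

def optimal_events(x, k):
    """Exact K-segment split of x minimizing total within-segment error."""
    n = len(x)
    INF = float("inf")
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    back = [[0] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for seg in range(1, k + 1):
        for j in range(seg, n + 1):
            for i in range(seg - 1, j):
                c = dp[seg - 1][i] + seg_cost(x, i, j)
                if c < dp[seg][j]:
                    dp[seg][j], back[seg][j] = c, i
    # backtrack the optimal boundaries
    bounds, j = [], n
    for seg in range(k, 0, -1):
        j = back[seg][j]
        bounds.append(j)
    return sorted(bounds)[1:], dp[k][n]   # drop the leading 0

events, total_cost = optimal_events([0, 0, 0, 5, 5, 5, 9, 9, 9], 3)
```

    Unlike the greedy stability criterion, the DP search is guaranteed to return the globally optimal boundary placement for the chosen cost, which is the "assurance of optimality" the abstract refers to.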

  15. Improved identification of cranial nerves using paired-agent imaging: topical staining protocol optimization through experimentation and simulation

    NASA Astrophysics Data System (ADS)

    Torres, Veronica C.; Wilson, Todd; Staneviciute, Austeja; Byrne, Richard W.; Tichauer, Kenneth M.

    2018-03-01

    Skull base tumors are particularly difficult to visualize and access for surgeons because of the crowded environment and close proximity of vital structures, such as cranial nerves. As a result, accidental nerve damage is a significant concern and the likelihood of tumor recurrence is increased because of more conservative resections that attempt to avoid injuring these structures. In this study, a paired-agent imaging method with direct administration of fluorophores is applied to enhance cranial nerve identification. Here, a control imaging agent (ICG) accounts for non-specific uptake of the nerve-targeting agent (Oxazine 4), and ratiometric data analysis is employed to approximate binding potential (BP, a surrogate of targeted biomolecule concentration). For clinical relevance, animal experiments and simulations were conducted to identify parameters for an optimized stain and rinse protocol using the developed paired-agent method. Numerical methods were used to model the diffusive and kinetic behavior of the imaging agents in tissue, and simulation results revealed that there are various combinations of stain time and rinse number that provide improved contrast of cranial nerves, as suggested by optimal measures of BP and contrast-to-noise ratio.
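    The ratiometric analysis described above can be sketched as follows: with a control agent tracking nonspecific uptake, the ratio of targeted to control signal, minus one, approximates binding potential (BP), and a contrast-to-noise ratio compares nerve to background. The uptake values and noise level below are synthetic, not the study's measurements.

```python
import random
import statistics as st

random.seed(1)
BP_NERVE = 0.8                      # synthetic "true" binding potential
PIXELS = 1000

def noisy(v):
    # multiplicative measurement noise (hypothetical 5% level)
    return v * random.gauss(1.0, 0.05)

# control agent = nonspecific uptake; targeted agent adds specific binding
control_nerve = [noisy(1.0) for _ in range(PIXELS)]
target_nerve = [noisy(1.0 + BP_NERVE) for _ in range(PIXELS)]
control_bg = [noisy(1.0) for _ in range(PIXELS)]
target_bg = [noisy(1.0) for _ in range(PIXELS)]

# pixelwise ratiometric BP estimate: targeted/control - 1
bp_nerve = [t / c - 1.0 for t, c in zip(target_nerve, control_nerve)]
bp_bg = [t / c - 1.0 for t, c in zip(target_bg, control_bg)]
cnr = (st.mean(bp_nerve) - st.mean(bp_bg)) / st.stdev(bp_bg)
```

    Because the control agent cancels the nonspecific uptake shared by both agents, the BP image highlights nerve tissue even when absolute staining intensity varies across the field.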

  16. Optimized radiation-hardened erbium doped fiber amplifiers for long space missions

    NASA Astrophysics Data System (ADS)

    Ladaci, A.; Girard, S.; Mescia, L.; Robin, T.; Laurent, A.; Cadier, B.; Boutillier, M.; Ouerdane, Y.; Boukenter, A.

    2017-04-01

    In this work, we developed and exploited simulation tools to optimize the performance of rare-earth-doped fiber amplifiers (REDFAs) for space missions. To describe these systems, a state-of-the-art model based on the rate equations and the particle swarm optimization technique was developed, in which we also consider the main radiation effect on REDFAs: the radiation-induced attenuation (RIA). After validating this tool set by comparing theoretical and experimental results, we investigated how the deleterious effects of radiation on amplifier performance can be mitigated through adequate strategies for conceiving the REDFA architecture. The tool set was validated by comparing the calculated erbium-doped fiber amplifier (EDFA) gain degradation under X-rays at ~300 krad(SiO2) with the corresponding experimental results. Two versions of the same fiber were used in this work: a standard optical fiber and a radiation-hardened fiber, obtained by loading the former with hydrogen gas. Based on these fibers, standard and radiation-hardened EDFAs were manufactured and tested in different operating configurations, and the obtained data were compared with simulations of the same EDFA structure and fiber properties. This comparison reveals good agreement between simulated gain and experimental data (<10% maximum error at the highest doses). Compared to our previous results on Er/Yb amplifiers, these results reveal the importance of the photo-bleaching mechanism competing with the RIA, which cannot be neglected when modeling the radiation-induced gain degradation of EDFAs. This implies that the RIA at the pump and signal wavelengths, used as input parameters for the simulation, must be measured under representative conditions. The validated numerical codes were then used to evaluate the potential of some EDFA architecture evolutions for amplifier performance during a space mission. 
Optimizing both the fiber length and the EDFA pumping scheme allows us to strongly reduce the amplifier's radiation-induced gain vulnerability. The presented approach is a complementary and effective hardening-by-device tool and opens new perspectives for the applications of REDFAs and lasers in harsh environments.
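    Particle swarm optimization, the search technique named above, can be sketched on a hypothetical smooth gain model: a quadratic with a known optimum stands in for the real REDFA rate-equation simulation, whose evaluation is expensive. The "fiber length" and "pump power" optimum below are invented numbers.

```python
import random

random.seed(3)

def neg_gain(x):
    """Hypothetical stand-in for the amplifier model: best at a fiber
    length of 12 m and pump power of 0.25 W (invented numbers)."""
    length_m, pump_w = x
    return (length_m - 12.0) ** 2 + 40.0 * (pump_w - 0.25) ** 2

BOUNDS = [(0.0, 30.0), (0.0, 1.0)]
N, ITERS, W, C1, C2 = 15, 120, 0.7, 1.5, 1.5   # standard PSO coefficients

pts = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(N)]
vel = [[0.0, 0.0] for _ in range(N)]
pbest = [p[:] for p in pts]
pbest_f = [neg_gain(p) for p in pts]
g = pbest[min(range(N), key=pbest_f.__getitem__)][:]   # global best

for _ in range(ITERS):
    for i in range(N):
        for d in range(2):
            # inertia + cognitive pull (personal best) + social pull (global best)
            vel[i][d] = (W * vel[i][d]
                         + C1 * random.random() * (pbest[i][d] - pts[i][d])
                         + C2 * random.random() * (g[d] - pts[i][d]))
            pts[i][d] = min(max(pts[i][d] + vel[i][d], BOUNDS[d][0]), BOUNDS[d][1])
        f = neg_gain(pts[i])
        if f < pbest_f[i]:
            pbest[i], pbest_f[i] = pts[i][:], f
            if f < neg_gain(g):
                g = pts[i][:]
```

    The same loop applies unchanged when neg_gain wraps a full rate-equation solve; PSO only needs function evaluations, not gradients, which is why it suits simulation-based amplifier design.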

  17. Simulation of a Novel Single-column Cryogenic Air Separation Process Using LNG Cold Energy

    NASA Astrophysics Data System (ADS)

    Jieyu, Zheng; Yanzhong, Li; Guangpeng, Li; Biao, Si

    In this paper, a novel single-column air separation process is proposed with the implementation of a heat pump technique and the introduction of LNG cold energy. The proposed process is verified and optimized through simulation on the Aspen Hysys® platform. Simulation results reveal that the power consumption per unit mass of liquid product is around 0.218 kWh/kg, and the total exergy efficiency of the system is 0.575. According to the latest literature, an energy saving of 39.1% is achieved compared with conventional double-column air separation units. The introduction of LNG cold energy is an effective way to increase the system efficiency.

  18. Design and analysis of metal-dielectric nonpolarizing beam splitters in a glass cube.

    PubMed

    Shi, Jin Hui; Guan, Chun Ying; Wang, Zheng Ping

    2009-06-20

    A novel design of a 25-layer metal-dielectric nonpolarizing beam splitter in a cube is proposed by use of the optimization method and is theoretically investigated. The simulations of the reflectance and differential phases induced by reflection and transmission are presented. The simulation results reveal that both the amplitude and the phase characteristics of the nonpolarizing beam splitter could realize the design targets, the differences between the simulated and the target reflectance of 50% are less than 2%, and the differential phases are less than 3 degrees in the range of 530 nm-570 nm for both p and s components.

  19. Simulations of nanocrystals under pressure: combining electronic enthalpy and linear-scaling density-functional theory.

    PubMed

    Corsini, Niccolò R C; Greco, Andrea; Hine, Nicholas D M; Molteni, Carla; Haynes, Peter D

    2013-08-28

    We present an implementation in a linear-scaling density-functional theory code of an electronic enthalpy method, which has been found to be natural and efficient for the ab initio calculation of finite systems under hydrostatic pressure. Based on a definition of the system volume as that enclosed within an electronic density isosurface [M. Cococcioni, F. Mauri, G. Ceder, and N. Marzari, Phys. Rev. Lett. 94, 145501 (2005)], it supports both geometry optimizations and molecular dynamics simulations. We introduce an approach for calibrating the parameters defining the volume in the context of geometry optimizations and discuss their significance. Results in good agreement with simulations using explicit solvents are obtained, validating our approach. Size-dependent pressure-induced structural transformations and variations in the energy gap of hydrogenated silicon nanocrystals are investigated, including one comparable in size to recent experiments. A detailed analysis of the polyamorphic transformations reveals three types of amorphous structures and their persistence on depressurization is assessed.

  20. Simulations of nanocrystals under pressure: Combining electronic enthalpy and linear-scaling density-functional theory

    NASA Astrophysics Data System (ADS)

    Corsini, Niccolò R. C.; Greco, Andrea; Hine, Nicholas D. M.; Molteni, Carla; Haynes, Peter D.

    2013-08-01

    We present an implementation in a linear-scaling density-functional theory code of an electronic enthalpy method, which has been found to be natural and efficient for the ab initio calculation of finite systems under hydrostatic pressure. Based on a definition of the system volume as that enclosed within an electronic density isosurface [M. Cococcioni, F. Mauri, G. Ceder, and N. Marzari, Phys. Rev. Lett. 94, 145501 (2005)], 10.1103/PhysRevLett.94.145501, it supports both geometry optimizations and molecular dynamics simulations. We introduce an approach for calibrating the parameters defining the volume in the context of geometry optimizations and discuss their significance. Results in good agreement with simulations using explicit solvents are obtained, validating our approach. Size-dependent pressure-induced structural transformations and variations in the energy gap of hydrogenated silicon nanocrystals are investigated, including one comparable in size to recent experiments. A detailed analysis of the polyamorphic transformations reveals three types of amorphous structures and their persistence on depressurization is assessed.

  1. Simulation-based optimization framework for reuse of agricultural drainage water in irrigation.

    PubMed

    Allam, A; Tawfik, A; Yoshimura, C; Fleifle, A

    2016-05-01

    A simulation-based optimization framework for agricultural drainage water (ADW) reuse has been developed through the integration of a water quality model (QUAL2Kw) and a genetic algorithm. This framework was applied to the Gharbia drain in the Nile Delta, Egypt, in summer and winter 2012. First, the water quantity and quality of the drain was simulated using the QUAL2Kw model. Second, uncertainty analysis and sensitivity analysis based on Monte Carlo simulation were performed to assess QUAL2Kw's performance and to identify the most critical variables for determination of water quality, respectively. Finally, a genetic algorithm was applied to maximize the total reuse quantity from seven reuse locations with the condition not to violate the standards for using mixed water in irrigation. The water quality simulations showed that organic matter concentrations are critical management variables in the Gharbia drain. The uncertainty analysis showed the reliability of QUAL2Kw to simulate water quality and quantity along the drain. Furthermore, the sensitivity analysis showed that the 5-day biochemical oxygen demand, chemical oxygen demand, total dissolved solids, total nitrogen and total phosphorous are highly sensitive to point source flow and quality. Additionally, the optimization results revealed that the reuse quantities of ADW can reach 36.3% and 40.4% of the available ADW in the drain during summer and winter, respectively. These quantities meet 30.8% and 29.1% of the drainage basin requirements for fresh irrigation water in the respective seasons. Copyright © 2016 Elsevier Ltd. All rights reserved.
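    The genetic-algorithm step can be sketched on a simplified version of the reuse problem: maximize total drainage-water reuse over seven locations subject to a cap on total pollutant load, with infeasible candidates repaired by scaling. The flows, concentrations and cap below are invented, and the real study evaluates candidates through QUAL2Kw rather than a closed-form constraint.

```python
import random

random.seed(5)
QMAX = [12.0, 8.0, 15.0, 6.0, 10.0, 9.0, 5.0]   # available ADW per site (hypothetical)
CONC = [0.4, 0.9, 0.6, 1.2, 0.5, 0.8, 1.0]      # pollutant conc. per site (hypothetical)
LOAD_CAP = 20.0                                 # allowed total pollutant load

def repair(x):
    """Scale the whole reuse plan down if it violates the load cap."""
    load = sum(xi * q * c for xi, q, c in zip(x, QMAX, CONC))
    if load > LOAD_CAP:
        s = LOAD_CAP / load
        x = [xi * s for xi in x]
    return x

def fitness(x):
    return sum(xi * q for xi, q in zip(x, QMAX))   # total reuse volume

# x_i in [0, 1] is the fraction of site i's ADW that is reused
pop = [repair([random.random() for _ in QMAX]) for _ in range(40)]
initial_best = max(fitness(x) for x in pop)
for _ in range(150):
    pop.sort(key=fitness, reverse=True)
    nxt = pop[:4]                                # elitism: keep the best plans
    while len(nxt) < len(pop):
        a, b = random.sample(pop[:20], 2)        # mate among the fitter half
        child = [(ai + bi) / 2.0 for ai, bi in zip(a, b)]   # blend crossover
        i = random.randrange(len(child))
        child[i] = min(1.0, max(0.0, child[i] + random.gauss(0.0, 0.2)))  # mutation
        nxt.append(repair(child))
    pop = nxt
best = max(pop, key=fitness)
best_load = sum(xi * q * c for xi, q, c in zip(best, QMAX, CONC))
```

    The repair operator keeps every individual feasible, so the GA searches only plans that respect the irrigation-quality constraint; elitism guarantees the best fitness never decreases between generations.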

  2. Optimal design of an electro-hydraulic valve for heavy-duty vehicle clutch actuator with certain constraints

    NASA Astrophysics Data System (ADS)

    Meng, Fei; Shi, Peng; Karimi, Hamid Reza; Zhang, Hui

    2016-02-01

    The main objective of this paper is to investigate the sensitivity analysis and optimal design of a proportional solenoid valve (PSV) operated pressure reducing valve (PRV) for heavy-duty automatic transmission clutch actuators. The nonlinear electro-hydraulic valve model is developed based on fluid dynamics. In order to implement the sensitivity analysis and optimization for the PRV, the PSV model is validated by comparing the results with data obtained from a real test-bench. The sensitivity of the PSV pressure response with regard to the structural parameters is investigated by using Sobol's method. Finally, simulations and experimental investigations are performed on the optimized prototype and the results reveal that the dynamical characteristics of the valve have been improved in comparison with the original valve.
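    Sobol's method, used above for the PRV sensitivity analysis, apportions output variance among the inputs. A minimal pick-freeze Monte Carlo estimator of first-order indices on a toy linear model (whose indices are known analytically) illustrates the computation; the model below is not the valve model.

```python
import random

def sobol_first_order(model, dim, n=5000, seed=0):
    """Pick-freeze estimate of first-order Sobol indices, inputs ~ U(0,1)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    yA = [model(r) for r in A]
    yB = [model(r) for r in B]
    mA, mB = sum(yA) / n, sum(yB) / n
    var = sum(y * y for y in yA) / n - mA * mA
    indices = []
    for i in range(dim):
        # C_i: rows of B with column i replaced by A's column i, so yA and
        # yC share only input i; their covariance isolates its effect
        yC = [model([ra[j] if j == i else rb[j] for j in range(dim)])
              for ra, rb in zip(A, B)]
        indices.append((sum(a * c for a, c in zip(yA, yC)) / n - mA * mB) / var)
    return indices

# toy model y = 3*x1 + x2: analytically S1 = 9/10 and S2 = 1/10
s1, s2 = sobol_first_order(lambda x: 3.0 * x[0] + x[1], 2)
```

    For the valve, the same estimator would rank structural parameters (spring stiffness, orifice areas, and so on) by their share of the pressure-response variance, identifying which dimensions are worth optimizing.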

  3. Superlattice structure modeling and simulation of High Electron Mobility Transistor for improved performance

    NASA Astrophysics Data System (ADS)

    Munusami, Ravindiran; Yakkala, Bhaskar Rao; Prabhakar, Shankar

    2013-12-01

    Magnetic tunnel junctions were made by inserting magnetic materials between the source, channel and drain of a High Electron Mobility Transistor (HEMT) to enhance its performance. The Materials Studio software package was used to design the superlattice layers. Different cases were analyzed to optimize the performance of the device by placing the magnetic material at different positions in the device. Simulation results based on conductivity reveal that the device has very good electron transport due to the magnetic materials and will amplify very low frequency signals.

  4. Design-Based Comparison of Spine Surgery Simulators: Optimizing Educational Features of Surgical Simulators.

    PubMed

    Ryu, Won Hyung A; Mostafa, Ahmed E; Dharampal, Navjit; Sharlin, Ehud; Kopp, Gail; Jacobs, W Bradley; Hurlbert, R John; Chan, Sonny; Sutherland, Garnette R

    2017-10-01

    Simulation-based education has made its entry into surgical residency training, particularly as an adjunct to hands-on clinical experience. However, one of the ongoing challenges to wide adoption is the capacity of simulators to incorporate the educational features required for effective learning. The aim of this study was to identify strengths and limitations of spine simulators and to characterize design elements that are essential in enhancing resident education. We performed a mixed qualitative and quantitative cohort study with a focused survey and interviews of stakeholders in spine surgery pertaining to their experiences with 3 spine simulators. Ten participants were recruited, spanning all levels of training and expertise, until qualitative analysis reached saturation of themes. Participants were asked to perform lumbar pedicle screw insertion on the 3 simulators. Afterward, a 10-item survey was administered and a focused interview was conducted to explore topics pertaining to the design features of the simulators. Overall impressions of the simulators were positive with regard to their educational benefit, but our qualitative analysis revealed differing strengths and limitations. The main design strengths of the computer-based simulators were the incorporation of procedural guidance and the provision of performance feedback. The synthetic model excelled in achieving more realistic haptic feedback and in incorporating the use of actual surgical tools. Stakeholders from trainees to experts acknowledge the growing role of simulation-based education in spine surgery. However, different simulation modalities have varying design elements that augment learning in distinct ways. Characterization of these design elements will allow for standardization of simulation curricula in spinal surgery, optimizing educational benefit. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Prediction of optimal safe ground water yield and land subsidence in the Los Banos-Kettleman City area, California, using a calibrated numerical simulation model

    NASA Astrophysics Data System (ADS)

    Larson, K. J.; Başaǧaoǧlu, H.; Mariño, M. A.

    2001-02-01

    Land subsidence caused by the excessive use of ground water resources has traditionally caused serious and costly damage to the Los Banos-Kettleman City area of California's San Joaquin Valley. Although the arrival of surface water from the Central Valley Project has reduced subsidence in recent decades, the growing instability of surface water supplies has refocused attention on the future of land subsidence in the region. This paper uses integrated numerical ground water and land subsidence models to simulate land subsidence caused by ground water overdraft. The simulation model is calibrated using observed data from 1972 to 1998, and the responsiveness of the model to variations in subsidence parameters is analyzed through a sensitivity analysis. A probable future drought scenario is used to evaluate the effect on land subsidence of three management alternatives over the next thirty years. The model reveals that maintaining present practices virtually eliminates unrecoverable land subsidence, but may not be a sustainable alternative because of a growing urban population to the south and concern over the ecological implications of water exportation from the north. The two other proposed management alternatives reduce the dependency on surface water by increasing ground water withdrawal. Land subsidence is confined to tolerable levels in the more moderate of these proposals, while the more aggressive one produces significant long-term subsidence. Finally, an optimization model is formulated to determine the maximum ground water withdrawal from nine pumping sub-basins that does not cause unrecoverable subsidence during the forecast period. The optimization model reveals that withdrawal can be increased in certain areas on the eastern side of the study area without causing significant inelastic subsidence.

  6. Simulated transcatheter aortic valve deformation: A parametric study on the impact of leaflet geometry on valve peak stress.

    PubMed

    Li, Kewei; Sun, Wei

    2017-03-01

    In this study, we developed a computational framework to investigate the impact of leaflet geometry of a transcatheter aortic valve (TAV) on the leaflet stress distribution, aiming at optimizing the TAV leaflet design to reduce its peak stress. Utilizing a generic TAV model developed previously [Li and Sun, Annals of Biomedical Engineering, 2010. 38(8): 2690-2701], we first parameterized the 2D leaflet geometry by mathematical equations; then, by perturbing the parameters of the equations, we could automatically generate a new leaflet design, remesh the 2D leaflet model, and build a 3D leaflet model from the 2D design via a Python script. Approximately 500 different leaflet designs were investigated by simulating TAV closure under the nominal circular deployment and physiological loading conditions. From the simulation results, we identified a new leaflet design that could reduce the previously reported valve peak stress by about 5%. The parametric analysis also revealed that increasing the free edge width had the highest overall impact on decreasing the peak stress. A similar computational analysis was further performed for a TAV deployed in an abnormal, asymmetric elliptical configuration. We found that a minimal free edge height of 0.46 mm should be adopted to prevent central backflow leakage. This increase of the free edge height resulted in an increase of the leaflet peak stress. Furthermore, the parametric study revealed a complex response surface for the impact of the leaflet geometric parameters on the peak stress, underscoring the importance of performing a numerical optimization to obtain the optimal TAV leaflet design. Copyright © 2016 John Wiley & Sons, Ltd.

  7. Optimal control of orientation and entanglement for two dipole-dipole coupled quantum planar rotors.

    PubMed

    Yu, Hongling; Ho, Tak-San; Rabitz, Herschel

    2018-05-09

    Optimal control simulations are performed for orientation and entanglement of two dipole-dipole coupled identical quantum rotors. The rotors at various fixed separations lie on a model non-interacting plane with an applied control field. It is shown that optimal control of orientation or entanglement represents two contrasting control scenarios. In particular, the maximally oriented state (MOS) of the two rotors has a zero entanglement entropy and is readily attainable at all rotor separations. In contrast, the maximally entangled state (MES) has a zero orientation expectation value and is most conveniently attainable at small separations where the dipole-dipole coupling is strong. It is demonstrated that the peak orientation expectation value attained by the MOS at large separations exhibits a long time revival pattern due to the small energy splittings arising from the extremely weak dipole-dipole coupling between the degenerate product states of the two free rotors. Moreover, it is found that the peak entanglement entropy value attained by the MES remains largely unchanged as the two rotors are transported to large separations after turning off the control field. Finally, optimal control simulations of transition dynamics between the MOS and the MES reveal the intricate interplay between orientation and entanglement.

  8. Parameter optimization for the visco-hyperelastic constitutive model of tendon using FEM.

    PubMed

    Tang, C Y; Ng, G Y F; Wang, Z W; Tsui, C P; Zhang, G

    2011-01-01

    Numerous constitutive models describing the mechanical properties of tendons have been proposed during the past few decades. However, few have been widely used, owing to the lack of implementation in general finite element (FE) software, and very few systematic studies have been done on selecting the most appropriate parameters for these constitutive laws. In this work, the visco-hyperelastic constitutive model of the tendon, implemented using a three-parameter Mooney-Rivlin form and a sixty-four-parameter Prony series, was first analyzed using ANSYS FE software. Afterwards, an integrated optimization scheme was developed by coupling the optimization toolboxes (OPTs) of ANSYS and MATLAB for estimating the unknown constitutive parameters of the tendon. Finally, a group of Sprague-Dawley rat tendons was used in experimental and numerical simulation investigations. The simulated results showed good agreement with the experimental data. An important finding was that a large number of Maxwell elements is not necessary to assure the accuracy of the model, a point often neglected in the open literature. These results demonstrate that the constitutive parameter optimization scheme is reliable and highly efficient. Furthermore, the approach can be extended to study other tendons or ligaments, as well as any visco-hyperelastic solid materials.

  9. Modeling and optimization of proton-conducting solid oxide electrolysis cell: Conversion of CO2 into value-added products

    NASA Astrophysics Data System (ADS)

    Namwong, Lawit; Authayanun, Suthida; Saebea, Dang; Patcharavorachot, Yaneeporn; Arpornwichanop, Amornchai

    2016-11-01

    Proton-conducting solid oxide electrolysis cells (SOEC-H+) are a promising technology that can utilize carbon dioxide to produce syngas. In this work, a detailed electrochemical model was developed to predict the behavior of SOEC-H+ and to prove the assumption that the syngas is produced through a reverse water-gas shift (RWGS) reaction. The simulation results obtained from the model, which took into account all of the cell voltage losses (i.e., ohmic, activation, and concentration losses), were validated using experimental data to evaluate the unknown parameters. The developed model was employed to examine the structural and operational parameters. It was found that the cathode-supported SOEC-H+ is the best configuration because it requires the lowest cell potential, and that the SOEC-H+ operates favorably at high temperatures and low pressures. Furthermore, the simulation results revealed that the optimal S/C molar ratio for syngas production, which can be used for methanol synthesis, is approximately 3.9 (at a constant temperature and pressure). The SOEC-H+ was optimized using a response surface methodology, which was used to determine the optimal operating conditions to minimize the cell potential and maximize the carbon dioxide flow rate.

  10. WE-C-217BCD-08: Rapid Monte Carlo Simulations of DQE(f) of Scintillator-Based Detectors.

    PubMed

    Star-Lack, J; Abel, E; Constantin, D; Fahrig, R; Sun, M

    2012-06-01

    Monte Carlo simulations of DQE(f) can greatly aid in the design of scintillator-based detectors by helping optimize key parameters including scintillator material and thickness, pixel size, surface finish, and septa reflectivity. However, the additional optical transport significantly increases simulation times, necessitating a large number of parallel processors to adequately explore the parameter space. To address this limitation, we have optimized the DQE(f) algorithm, reducing simulation times per design iteration to 10 minutes on a single CPU. DQE(f) is proportional to the ratio MTF(f)^2/NPS(f). The LSF-MTF simulation uses a slanted line source and is rapidly performed with relatively few gammas launched. However, the conventional NPS simulation for standard radiation exposure levels requires the acquisition of multiple flood fields (nRun), each requiring billions of input gamma photons (nGamma), many of which will scintillate, thereby producing thousands of optical photons (nOpt) per deposited MeV. The resulting execution time is proportional to the product nRun x nGamma x nOpt. In this investigation, we revisit the theoretical derivation of DQE(f) and reveal significant computation time savings through the optimization of nRun, nGamma, and nOpt. Using GEANT4, we determine optimal values for these three variables for a GOS scintillator-amorphous silicon portal imager. Both isotropic and Mie optical scattering processes were modeled. Simulation results were validated against the literature. We found that, depending on the radiative and optical attenuation properties of the scintillator, the NPS can be accurately computed using values for nGamma below 1000 and values for nOpt below 500/MeV. nRun should remain above 200. Using these parameters, typical computation times for a complete NPS ranged from 2-10 minutes on a single CPU. The number of launched particles and corresponding execution times for a DQE simulation can be dramatically reduced, allowing for accurate computation with modest computer hardware. NIH R01 CA138426. Several authors work for Varian Medical Systems. © 2012 American Association of Physicists in Medicine.
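
The cost scaling described above (execution time proportional to nRun x nGamma x nOpt) can be sketched as a back-of-the-envelope workload estimate; the function below and its assumed mean energy deposition per gamma are illustrative, not values from the abstract.

```python
# Rough cost model (illustrative assumptions, not the authors' code): the
# number of optical photons tracked in an NPS simulation scales with the
# product nRun x nGamma x nOpt discussed above.
def nps_tracked_photons(n_run, n_gamma, n_opt_per_mev, mev_per_gamma=0.05):
    """mev_per_gamma is a hypothetical mean energy deposited per gamma."""
    return n_run * n_gamma * n_opt_per_mev * mev_per_gamma

# Conventional exposure-level settings vs. the reduced values reported above:
conventional = nps_tracked_photons(200, 1_000_000_000, 5000)
reduced = nps_tracked_photons(200, 1000, 500)
print(f"workload reduction: {conventional / reduced:.1e}x")
```

With the reported parameter reductions, the tracked-photon count (and hence runtime) drops by roughly seven orders of magnitude.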

  11. Recycling production designs: the value of coordination and flexibility in aluminum recycling operations

    NASA Astrophysics Data System (ADS)

    Brommer, Tracey H.

    The growing motivation for aluminum recycling has prompted interest in recycling alternative and more challenging secondary materials. The nature of these alternative secondary materials necessitates the development of an intermediate recycling facility that can reprocess the secondary materials into a liquid product. Two downstream aluminum remelters will incorporate the liquid products into their aluminum alloy production schedules. Energy and environmental benefits result from delivering the products as liquid, but coordination challenges persist because of the energy cost to maintain the liquid. Further coordination challenges result from the necessity to establish a long term recycling production plan in the presence of long term downstream aluminum remelter production uncertainty and inherent variation in the daily order schedule of the downstream aluminum remelters. In this context, a fundamental question arises: considering the metallurgical complexities of dross reprocessing, what is the value of operating a coordinated set of by-product reprocessing plants and remelting cast houses? A methodology is presented to calculate the optimal recycling center production parameters, including 1) the number of recycled products, 2) the volume of recycled products, 3) the allocation of recycled materials across recycled products, 4) the allocation of recycled products across finished alloys, and 5) the level of flexibility for the recycling center to operate. The methods implemented include: 1) an optimization model to describe the long term operations of the recycling center, 2) an uncertainty simulation tool, 3) a simulation optimization method, and 4) a dynamic simulation tool with four embedded daily production optimization models of varying degrees of flexibility. This methodology is used to quantify the performance of several recycling center production designs of varying levels of coordination and flexibility. 
This analysis allowed the identification of the optimal recycling center production design based on maximizing liquid recycled product incorporation and minimizing cast sows. The long term production optimization model was used to evaluate the theoretical viability of the proposed two stage scrap and aluminum dross reprocessing operation, including the impact of reducing coordination on model performance. Reducing the coordination between the recycling center and downstream remelters by reducing the number of recycled products from ten to five resulted in only 1.3% less secondary materials incorporated into downstream production. The dynamic simulation tool was used to evaluate the performance of the calculated recycling center production plan when resolved on a daily timeframe for varying levels of operational flexibility. The dynamic simulation revealed that the optimal performance corresponded to the fixed-recipe, flexible-production daily optimization model formulation. Calculating recycled product characteristics using the proposed simulation optimization method increased profitability in cases of uncertain downstream remelter production and expensive aluminum dross and post-consumer secondary materials. (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs - docs@mit.edu)

  12. Flow Simulation of N2B Hybrid Wing Body Configuration

    NASA Technical Reports Server (NTRS)

    Kim, Hyoungjin; Liou, Meng-Sing

    2012-01-01

    The N2B hybrid wing body aircraft was conceptually designed to meet environmental and performance goals for the N+2 generation transport set by the subsonic fixed wing project. In this study, flow fields around the N2B configuration are simulated with a Reynolds-averaged Navier-Stokes flow solver using unstructured meshes. Boundary conditions at the engine fan face and nozzle exhaust planes are provided by response surfaces of the NPSS thermodynamic engine cycle model. The present flow simulations reveal challenging design issues arising from the boundary-layer-ingesting offset inlet and nacelle-airframe interference. The N2B configuration can be a good test bed for the application of multidisciplinary design optimization technology.

  13. Revealing the Effects of Nanoscale Membrane Curvature on Lipid Mobility.

    PubMed

    Kabbani, Abir Maarouf; Woodward, Xinxin; Kelly, Christopher V

    2017-10-18

    Recent advances in nanoengineering and super-resolution microscopy have enabled new capabilities for creating and observing membrane curvature. However, the effects of curvature on single-lipid diffusion have yet to be revealed. The simulations presented here describe the capabilities of varying experimental methods for revealing the effects of nanoscale curvature on single-molecule mobility. Traditionally, lipid mobility is revealed through fluorescence recovery after photobleaching (FRAP), fluorescence correlation spectroscopy (FCS), and single particle tracking (SPT). However, these techniques vary greatly in their ability to detect the effects of nanoscale curvature on lipid behavior. Traditionally, FRAP and FCS depend on diffraction-limited illumination and detection. A simulation of FRAP shows minimal effects on lipid diffusion due to a 50 nm radius membrane bud. Throughout the stages of the budding process, FRAP detected minimal changes in lipid recovery time for curved versus flat membranes. Simulated FCS demonstrated small effects due to a 50 nm radius membrane bud that were more apparent with curvature-dependent lipid mobility changes. However, SPT achieves a sub-diffraction-limited resolution of membrane budding and lipid mobility through the identification of single-lipid positions with ≤15 nm spatial and ≤20 ms temporal resolution. By mapping the single-lipid step lengths to locations on the membrane, the effects of membrane topography and curvature could be correlated to the effective membrane viscosity. Single-fluorophore localization techniques, such as SPT, can detect membrane curvature and its effects on lipid behavior. These simulations and discussion provide a guideline for optimizing the experimental procedures in revealing the effects of curvature on lipid mobility and effective local membrane viscosity.

  14. Dynamic analysis and optimal control for a model of hepatitis C with treatment

    NASA Astrophysics Data System (ADS)

    Zhang, Suxia; Xu, Xiaxia

    2017-05-01

    A model for hepatitis C is formulated to study the effects of treatment and public concern on HCV transmission dynamics. The stability of equilibria and persistence of the model are analyzed, and an optimal control measure is performed to prevent the spread of HCV with minimal infected individuals and cost. The dynamical analysis reveals that the disease-free equilibrium of the model is asymptotically stable if the basic reproductive number R0 is less than unity. On the other hand, if R0 > 1, the disease is uniformly persistent. Numerical simulations are conducted to investigate the influence of different vital parameters on R0. For the corresponding optimality system, the optimal solution is obtained via Pontryagin's Maximum Principle, and model-predicted outcomes with and without control are compared.
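
The R0 threshold behavior described above can be illustrated with a minimal SIS-type sketch (hypothetical rates, not the paper's HCV model): when R0 = β/γ < 1 the infection dies out, and when R0 > 1 it persists at an endemic level of 1 − 1/R0.

```python
# Toy SIS model (illustrative; the paper's HCV model includes treatment and
# public-concern terms not reproduced here). R0 = beta / gamma.
def simulate_sis(beta, gamma, i0=0.01, steps=5000, dt=0.01):
    i = i0
    for _ in range(steps):
        # forward-Euler step of di/dt = beta*(1-i)*i - gamma*i
        i += (beta * (1.0 - i) * i - gamma * i) * dt
    return i  # infected fraction at t = steps * dt

dying = simulate_sis(beta=0.2, gamma=0.5)    # R0 = 0.4 < 1: dies out
endemic = simulate_sis(beta=1.5, gamma=0.5)  # R0 = 3 > 1: persists near 1 - 1/3
```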

  15. Numerical simulation of the modulation transfer function (MTF) in infrared focal plane arrays: simulation methodology and MTF optimization

    NASA Astrophysics Data System (ADS)

    Schuster, J.

    2018-02-01

    Military requirements demand both single and dual-color infrared (IR) imaging systems with both high resolution and sharp contrast. To quantify the performance of these imaging systems, a key measure of performance, the modulation transfer function (MTF), describes how well an optical system reproduces an object's contrast in the image plane at different spatial frequencies. At the center of an IR imaging system is the focal plane array (FPA). IR FPAs are hybrid structures consisting of a semiconductor detector pixel array, typically fabricated from HgCdTe, InGaAs or III-V superlattice materials, hybridized with heat/pressure to a silicon read-out integrated circuit (ROIC) with indium bumps on each pixel providing the mechanical and electrical connection. Due to the growing sophistication of the pixel arrays in these FPAs, sophisticated modeling techniques are required to predict, understand, and benchmark the pixel array MTF that contributes to the total imaging system MTF. To model the pixel array MTF, computationally exhaustive 2D and 3D numerical simulation approaches are required to correctly account for complex architectures and effects such as lateral diffusion from the pixel corners. It is paramount to accurately model the lateral diffusion (pixel crosstalk), as it can become the dominant mechanism limiting the detector MTF if not properly mitigated. Once the detector MTF has been simulated, it is directly decomposed into its constituent contributions to reveal exactly what is limiting the total detector MTF, providing a path for optimization. An overview of the MTF will be given and the simulation approach will be discussed in detail, along with how different simulation parameters affect the MTF calculation. Finally, MTF optimization strategies (crosstalk mitigation) will be discussed.

  16. Surface temperature dataset for North America obtained by application of optimal interpolation algorithm merging tree-ring chronologies and climate model output

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Xing, Pei; Luo, Yong; Nie, Suping; Zhao, Zongci; Huang, Jianbin; Wang, Shaowu; Tian, Qinhua

    2017-02-01

    A new dataset of surface temperature over North America has been constructed by merging climate model results and empirical tree-ring data through the application of an optimal interpolation algorithm. Errors of both the Community Climate System Model version 4 (CCSM4) simulation and the tree-ring reconstruction were considered to optimize the combination of the two elements. Variance matching was used to reconstruct the surface temperature series. The model simulation provided the background field, and the error covariance matrix was estimated statistically using samples from the simulation results with a running 31-year window for each grid. Thus, the merging process could continue with a time-varying gain matrix. This merging method (MM) was tested using two types of experiment, and the results indicated that the standard deviation of errors was about 0.4 °C lower than the tree-ring reconstructions and about 0.5 °C lower than the model simulation. Because of internal variability and uncertainties in the external forcing data, the simulated decadal warm-cool periods were readjusted by the MM such that the decadal variability was more reliable (e.g., the 1940s-1960s cooling). During the two centuries (1601-1800 AD) of the preindustrial period, the MM results revealed a compromised spatial pattern of the linear trend of surface temperature, which is in accordance with the phase transition of the Pacific decadal oscillation and Atlantic multidecadal oscillation. Compared with pure CCSM4 simulations, it was demonstrated that the MM brought a significant improvement to the decadal variability of the gridded temperature via the merging of temperature-sensitive tree-ring records.
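
The error-weighted merging step has the standard optimal-interpolation form; a minimal scalar sketch at a single grid point (the variances below are illustrative, not the actual CCSM4/tree-ring error estimates):

```python
# Scalar optimal-interpolation update (illustrative numbers): the analysis
# weights the model background and the tree-ring observation by their error
# variances via the gain K = B / (B + R).
def oi_merge(background, observation, var_b, var_r):
    k = var_b / (var_b + var_r)    # larger model error -> trust observation more
    analysis = background + k * (observation - background)
    var_a = (1.0 - k) * var_b      # analysis error variance is below both inputs
    return analysis, var_a

temp_anomaly, err_var = oi_merge(background=0.8, observation=0.3,
                                 var_b=0.25, var_r=0.16)
```

Estimating `var_b` within a running window, as the abstract describes, is what makes the gain time-varying.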

  17. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    PubMed

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.

  18. Improving Simulated Annealing by Recasting it as a Non-Cooperative Game

    NASA Technical Reports Server (NTRS)

    Wolpert, David; Bandari, Esfandiar; Tumer, Kagan

    2001-01-01

    The game-theoretic field of COllective INtelligence (COIN) concerns the design of computer-based players engaged in a non-cooperative game so that as those players pursue their self-interests, a pre-specified global goal for the collective computational system is achieved "as a side-effect". Previous implementations of COIN algorithms have outperformed conventional techniques by up to several orders of magnitude, on domains ranging from telecommunications control to optimization in congestion problems. Recent mathematical developments have revealed that these previously developed game-theory-motivated algorithms were based on only two of the three factors determining performance. Consideration of only the third factor would instead lead to conventional optimization techniques like simulated annealing that have little to do with non-cooperative games. In this paper we present an algorithm based on all three terms at once. This algorithm can be viewed as a way to modify simulated annealing by recasting it as a non-cooperative game, with each variable replaced by a player. This recasting allows us to leverage the intelligent behavior of the individual players to substantially improve the exploration step of the simulated annealing. Experiments are presented demonstrating that this recasting improves simulated annealing by several orders of magnitude for spin glass relaxation and bin-packing.
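
As a baseline for comparison, conventional simulated annealing, the algorithm being recast above, can be sketched on a toy spin problem (a simple ferromagnetic ring rather than a true spin glass; all parameters and the cooling schedule are illustrative):

```python
import math
import random

# Conventional simulated annealing on a ferromagnetic spin ring with energy
# E = -sum_i s_i * s_{i+1}; the ground state (all spins aligned) has E = -n.
def anneal(n=20, steps=20000, t0=2.0, t1=0.01, seed=1):
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in range(n)]
    e = -sum(s[i] * s[(i + 1) % n] for i in range(n))
    for k in range(steps):
        t = t0 * (t1 / t0) ** (k / steps)            # geometric cooling
        i = rng.randrange(n)
        de = 2 * s[i] * (s[i - 1] + s[(i + 1) % n])  # energy change of a flip
        if de <= 0 or rng.random() < math.exp(-de / t):
            s[i] = -s[i]                             # Metropolis acceptance
            e += de
    return e
```

The COIN recasting replaces each such blindly flipped variable with a self-interested player, improving the exploration step.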

  19. Discriminating Among Probability Weighting Functions Using Adaptive Design Optimization

    PubMed Central

    Cavagnaro, Daniel R.; Pitt, Mark A.; Gonzalez, Richard; Myung, Jay I.

    2014-01-01

    Probability weighting functions relate objective probabilities and their subjective weights, and play a central role in modeling choices under risk within cumulative prospect theory. While several different parametric forms have been proposed, their qualitative similarities make it challenging to discriminate among them empirically. In this paper, we use both simulation and choice experiments to investigate the extent to which different parametric forms of the probability weighting function can be discriminated using adaptive design optimization, a computer-based methodology that identifies and exploits model differences for the purpose of model discrimination. The simulation experiments show that the correct (data-generating) form can be conclusively discriminated from its competitors. The results of an empirical experiment reveal heterogeneity between participants in terms of the functional form, with two models (Prelec-2, Linear in Log Odds) emerging as the most common best-fitting models. The findings shed light on assumptions underlying these models. PMID:24453406
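
The two best-fitting forms identified above can be written down directly; a sketch with illustrative parameter values (γ and δ are the free parameters fitted per participant):

```python
import math

# Two-parameter Prelec ("Prelec-2") and linear-in-log-odds probability
# weighting functions; parameter values here are illustrative.
def prelec2(p, gamma=0.6, delta=1.0):
    """w(p) = exp(-delta * (-ln p)**gamma)."""
    return math.exp(-delta * (-math.log(p)) ** gamma)

def lin_log_odds(p, gamma=0.6, delta=1.0):
    """w(p) = delta*p**gamma / (delta*p**gamma + (1-p)**gamma)."""
    return delta * p ** gamma / (delta * p ** gamma + (1 - p) ** gamma)

# Both produce the characteristic inverse-S shape: small probabilities are
# overweighted (w(p) > p) and large ones underweighted (w(p) < p).
for p in (0.01, 0.5, 0.99):
    print(p, round(prelec2(p), 3), round(lin_log_odds(p), 3))
```

Their qualitative similarity at typical parameter values is exactly why adaptive design optimization is needed to discriminate between them.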

  20. Parameterization of Ca+2-protein interactions for molecular dynamics simulations.

    PubMed

    Project, Elad; Nachliel, Esther; Gutman, Menachem

    2008-05-01

    Molecular dynamics simulations of Ca+2 ions near a protein were performed with three force fields: GROMOS96, OPLS-AA, and CHARMM22. The simulations reveal major, force-field-dependent inconsistencies in the interaction of the Ca+2 ions with the protein. The variations are attributed to the nonbonded parameterizations of the Ca+2-carboxylate interactions. The simulation results were compared to experimental data, using the Ca+2-HCOO- equilibrium as a model. The OPLS-AA force field grossly overestimates the binding affinity of the Ca+2 ions to the carboxylate, whereas the GROMOS96 and CHARMM22 force fields underestimate the stability of the complex. Optimization of the Lennard-Jones parameters for the Ca+2-carboxylate interactions was carried out, yielding new parameters that reproduce the experimental data. Copyright 2007 Wiley Periodicals, Inc.

  1. A Two-Step Bayesian Approach for Propensity Score Analysis: Simulations and Case Study.

    PubMed

    Kaplan, David; Chen, Jianshen

    2012-07-01

    A two-step Bayesian propensity score approach is introduced that incorporates prior information in the propensity score equation and outcome equation without the problems associated with simultaneous Bayesian propensity score approaches. The corresponding variance estimators are also provided. The two-step Bayesian propensity score is provided for three methods of implementation: propensity score stratification, weighting, and optimal full matching. Three simulation studies and one case study are presented to elaborate the proposed two-step Bayesian propensity score approach. Results of the simulation studies reveal that greater precision in the propensity score equation yields better recovery of the frequentist-based treatment effect. A slight advantage is shown for the Bayesian approach in small samples. Results also reveal that greater precision around the wrong treatment effect can lead to seriously distorted results. However, greater precision around the correct treatment effect parameter yields quite good results, with slight improvement seen with greater precision in the propensity score equation. A comparison of coverage rates for the conventional frequentist approach and proposed Bayesian approach is also provided. The case study reveals that credible intervals are wider than frequentist confidence intervals when priors are non-informative.
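
For reference, the weighting method being extended is, in its simplest frequentist form, inverse-propensity weighting; a minimal sketch with made-up data (the article's two-step Bayesian machinery and its variance estimators are not reproduced here):

```python
# Hajek-style inverse-propensity-weighted average treatment effect
# (illustrative only; data rows are (treated 0/1, propensity e(x), outcome y)).
def ipw_ate(data):
    t1 = sum(t * y / e for t, e, y in data)          # weighted treated outcomes
    w1 = sum(t / e for t, e, y in data)
    t0 = sum((1 - t) * y / (1 - e) for t, e, y in data)  # weighted controls
    w0 = sum((1 - t) / (1 - e) for t, e, y in data)
    return t1 / w1 - t0 / w0

sample = [(1, 0.8, 2.0), (0, 0.8, 1.0), (1, 0.4, 1.5), (0, 0.4, 0.5)]
print(ipw_ate(sample))
```

The Bayesian two-step approach puts priors on the propensity score equation, so the precision of that stage propagates into the treatment effect estimate, which is the sensitivity the simulations above examine.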

  2. Solubilization of Therapeutic Agents in Micellar Nanomedicines

    PubMed Central

    Vuković, Lela; Madriaga, Antonett; Kuzmis, Antonina; Banerjee, Amrita; Tang, Alan; Tao, Kevin; Shah, Neil; Král, Petr; Onyuksel, Hayat

    2014-01-01

    We use atomistic molecular dynamics simulations to reveal the binding mechanisms of therapeutic agents in PEG-ylated micellar nanocarriers (SSM). In our experiments, SSM in buffer solutions can solubilize either ≈ 11 small bexarotene molecules or ≈ 6 (2 in low ionic strength buffer) human vasoactive intestinal peptide (VIP) molecules. Free energy calculations reveal that molecules of the poorly water soluble drug bexarotene can reside at the micellar ionic interface of the PEG corona, with their polar ends pointing out. Alternatively, they can reside in the alkane core center, where several bexarotene molecules can self-stabilize by forming a cluster held together by a network of hydrogen bonds. We also show that highly charged molecules, such as VIP, can be stabilized at the SSM ionic interface by Coulombic coupling between their positively charged residues and the negatively charged phosphate head-groups of the lipids. The obtained results illustrate that atomistic simulations can reveal drug solubilization character in nanocarriers and be used in efficient optimization of novel nanomedicines. PMID:24283508

  3. Optimizing isotope substitution in graphene for thermal conductivity minimization by genetic algorithm driven molecular simulations

    NASA Astrophysics Data System (ADS)

    Davies, Michael; Ganapathysubramanian, Baskar; Balasubramanian, Ganesh

    2017-03-01

    We present results from a computational framework integrating genetic algorithm and molecular dynamics simulations to systematically design isotope engineered graphene structures for reduced thermal conductivity. In addition to the effect of mass disorder, our results reveal the importance of atomic distribution on thermal conductivity for the same isotopic concentration. Distinct groups of isotope-substituted graphene sheets are identified based on the atomic composition and distribution. Our results show that in structures with equiatomic compositions, the enhanced scattering by lattice vibrations results in lower thermal conductivities due to the absence of isotopic clusters.
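
The design loop described above can be caricatured in a few lines; in this toy sketch each individual is a 1D isotope string at fixed 50/50 composition, and a surrogate "conductivity" that simply counts same-isotope neighbor pairs stands in for the expensive molecular dynamics evaluation (entirely illustrative, not the authors' framework):

```python
import random

# Surrogate objective: same-isotope neighbor pairs, a crude stand-in for the
# clustering effect on thermal conductivity discussed above (lower is better).
def surrogate_k(genome):
    return sum(a == b for a, b in zip(genome, genome[1:]))

def evolve(n=20, pop_size=30, generations=60, seed=7):
    rng = random.Random(seed)
    base = [0] * (n // 2) + [1] * (n // 2)           # fixed equiatomic composition
    pop = [rng.sample(base, n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=surrogate_k)
        survivors = pop[: pop_size // 2]             # elitist truncation selection
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.randrange(n), rng.randrange(n)
            child[i], child[j] = child[j], child[i]  # swap keeps composition fixed
            children.append(child)
        pop = survivors + children
    return min(surrogate_k(g) for g in pop)

best = evolve()
```

The alternating pattern 0101... achieves zero same-isotope pairs, mirroring the finding that dispersed (cluster-free) equiatomic distributions minimize conductivity.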

  4. Perceptual control models of pursuit manual tracking demonstrate individual specificity and parameter consistency.

    PubMed

    Parker, Maximilian G; Tyson, Sarah F; Weightman, Andrew P; Abbott, Bruce; Emsley, Richard; Mansell, Warren

    2017-11-01

    Computational models that simulate individuals' movements in pursuit-tracking tasks have been used to elucidate mechanisms of human motor control. Whilst there is evidence that individuals demonstrate idiosyncratic control-tracking strategies, it remains unclear whether models can be sensitive to these idiosyncrasies. Perceptual control theory (PCT) provides a unique model architecture with an internally set reference value parameter, and can be optimized to fit an individual's tracking behavior. The current study investigated whether PCT models could show temporal stability and individual specificity over time. Twenty adults completed three blocks of 15 1-min, pursuit-tracking trials. Two blocks (training and post-training) were completed in one session and the third was completed after 1 week (follow-up). The target moved in a one-dimensional, pseudorandom pattern. PCT models were optimized to the training data using a least-mean-squares algorithm, and validated with data from post-training and follow-up. We found significant inter-individual variability (partial η 2 : .464-.697) and intra-individual consistency (Cronbach's α: .880-.976) in parameter estimates. Polynomial regression revealed that all model parameters, including the reference value parameter, contribute to simulation accuracy. Participants' tracking performances were significantly more accurately simulated by models developed from their own tracking data than by models developed from other participants' data. We conclude that PCT models can be optimized to simulate the performance of an individual and that the test-retest reliability of individual models is a necessary criterion for evaluating computational models of human performance.

  5. Pumping strategies for management of a shallow water table: The value of the simulation-optimization approach

    USGS Publications Warehouse

    Barlow, P.M.; Wagner, B.J.; Belitz, K.

    1996-01-01

    The simulation-optimization approach is used to identify ground-water pumping strategies for control of the shallow water table in the western San Joaquin Valley, California, where shallow ground water threatens continued agricultural productivity. The approach combines the use of ground-water flow simulation with optimization techniques to build on and refine pumping strategies identified in previous research that used flow simulation alone. Use of the combined simulation-optimization model resulted in a 20 percent reduction in the area subject to a shallow water table over that identified by use of the simulation model alone. The simulation-optimization model identifies increasingly more effective pumping strategies for control of the water table as the complexity of the problem increases; that is, as the number of subareas in which pumping is to be managed increases, the simulation-optimization model is better able to discriminate areally among subareas to determine optimal pumping locations. The simulation-optimization approach provides an improved understanding of controls on the ground-water flow system and management alternatives that can be implemented in the valley. In particular, results of the simulation-optimization model indicate that optimal pumping strategies are constrained by the existing distribution of wells between the semiconfined and confined zones of the aquifer, by the distribution of sediment types (and associated hydraulic conductivities) in the western valley, and by the historical distribution of pumping throughout the western valley.

  6. Local performance optimization for a class of redundant eight-degree-of-freedom manipulators

    NASA Technical Reports Server (NTRS)

    Williams, Robert L., II

    1994-01-01

    Local performance optimization for joint limit avoidance and manipulability maximization (singularity avoidance) is obtained by using the Jacobian matrix pseudoinverse and by projecting the gradient of an objective function into the Jacobian null space. Real-time redundancy optimization control is achieved for an eight-joint redundant manipulator having a three-axis spherical shoulder, a single elbow joint, and a four-axis spherical wrist. Symbolic solutions are used for both full-Jacobian and wrist-partitioned pseudoinverses, partitioned null-space projection matrices, and all objective function gradients. A kinematic limitation of this class of manipulators and the limitation's effect on redundancy resolution are discussed. Results obtained with graphical simulation are presented to demonstrate the effectiveness of local redundant manipulator performance optimization. Actual hardware experiments performed to verify the simulated results are also discussed. A major result is that the partitioned solution is desirable because of low computation requirements. The partitioned solution is suboptimal compared with the full solution because translational and rotational terms are optimized separately; however, the results show that the difference is not significant. Singularity analysis reveals that no algorithmic singularities exist for the partitioned solution. The partitioned and full solutions share the same physical manipulator singular conditions. When compared with the full solution, the partitioned solution is shown to be ill-conditioned in smaller neighborhoods of the shared singularities.
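    The gradient-projection scheme described above has a compact form: qdot = J+ xdot + k (I - J+ J) grad_H, where the first term satisfies the task-space velocity command and the second moves the redundant degrees of freedom along the objective gradient inside the Jacobian null space. A minimal numeric sketch (the Jacobian and gradient here are random stand-ins for illustration, not the manipulator of the paper):

```python
import numpy as np

def redundancy_resolve(J, xdot, grad_H, k=1.0):
    """Pseudoinverse solution plus null-space projection of an
    objective-function gradient (e.g., joint-limit avoidance or
    manipulability maximization):
        qdot = J+ @ xdot + k * (I - J+ @ J) @ grad_H
    The null-space term cannot disturb the end-effector motion."""
    J_pinv = np.linalg.pinv(J)
    null_proj = np.eye(J.shape[1]) - J_pinv @ J
    return J_pinv @ xdot + k * null_proj @ grad_H

# 6x8 Jacobian standing in for an eight-joint redundant arm
rng = np.random.default_rng(1)
J = rng.normal(size=(6, 8))
xdot = rng.normal(size=6)      # commanded end-effector velocity
grad_H = rng.normal(size=8)    # gradient of the performance objective
qdot = redundancy_resolve(J, xdot, grad_H)
```

    The wrist-partitioned variant in the paper applies the same construction separately to the translational and rotational sub-Jacobians, which lowers computation at the cost of the (small) suboptimality noted above.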

  7. The combination of simulation and response methodology and its application in an aggregate production plan

    NASA Astrophysics Data System (ADS)

    Chen, Zhiming; Feng, Yuncheng

    1988-08-01

    This paper describes an algorithmic structure for combining simulation and optimization techniques in both theory and practice. Response surface methodology is used to optimize the decision variables in the simulation environment. Simulation-optimization software has been developed and successfully implemented, and its application to an aggregate production planning model is reported. The model's objective is to minimize production cost and to generate an optimal production plan and inventory control strategy for an aircraft factory.
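    Response surface methodology works by fitting a low-order polynomial to simulation outputs at designed experiment points and then optimizing that fitted surface instead of the expensive simulation. A one-variable sketch, with a toy cost function standing in for a production-plan simulation (all values here are illustrative assumptions):

```python
import numpy as np

# Toy "simulation": a noisy cost with an unknown minimum at x = 3.0,
# a stand-in for an aggregate production planning simulation run.
def simulate_cost(x, rng):
    return (x - 3.0) ** 2 + 5.0 + rng.normal(0.0, 0.1)

rng = np.random.default_rng(0)
xs = np.linspace(0.0, 6.0, 13)                  # designed experiment points
ys = np.array([simulate_cost(x, rng) for x in xs])

b2, b1, b0 = np.polyfit(xs, ys, 2)              # second-order response surface
x_opt = -b1 / (2.0 * b2)                        # stationary point of the surface
```

    In practice the procedure is iterated: the surface is refitted in a shrinking region around the current estimate until the simulated optimum stabilizes.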

  8. An opinion formation based binary optimization approach for feature selection

    NASA Astrophysics Data System (ADS)

    Hamedmoghadam, Homayoun; Jalili, Mahdi; Yu, Xinghuo

    2018-02-01

    This paper proposes a novel optimization method based on opinion formation in complex network systems. The proposed technique mimics human-human interaction mechanisms based on a mathematical model derived from the social sciences. Our method encodes a subset of selected features as the opinion of an artificial agent and simulates the opinion formation process among a population of agents to solve the feature selection problem. The agents interact through an underlying interaction network structure and reach consensus in their opinions while finding better solutions to the problem. A number of mechanisms are employed to avoid getting trapped in local minima. We compare the performance of the proposed method with a number of classical population-based optimization methods and a state-of-the-art opinion formation based method. Our experiments on a number of high-dimensional datasets reveal that the proposed algorithm outperforms the others.

  9. Optimization, in vitro release and bioavailability of gamma-oryzanol-loaded calcium pectinate microparticles reinforced with chitosan.

    PubMed

    Kim, Jong Soo; Lee, Ji-Soo; Chang, Pahn-Shick; Lee, Hyeon Gyu

    2010-09-30

    Response surface methodology was used to optimize coating conditions, including chitosan concentration (X(1)) and coating time (X(2)), for sustained release of chitosan-coated Ca-pectinate (CP) microparticles containing oryzanol (OZ). The optimized values of X(1) and X(2) were found to be 1.48% and 69.92 min, respectively. These optimized values agreed favorably with the predicted results, indicating the utility of predictive models for the release of OZ in simulated intestinal fluid. In vitro release studies revealed that the chitosan-coated CP microparticles were quite stable under acidic conditions but swelled and disintegrated under alkaline conditions. An in vivo release study of OZ physically entrapped within chitosan-coated CP microcapsules demonstrated sustained release, indicating that the microcapsules could improve the bioavailability of OZ following oral administration. Copyright 2010 Elsevier B.V. All rights reserved.

  10. Fast engineering optimization: A novel highly effective control parameterization approach for industrial dynamic processes.

    PubMed

    Liu, Ping; Li, Guodong; Liu, Xinggao

    2015-09-01

    Control vector parameterization (CVP) is an important approach to engineering optimization for industrial dynamic processes. However, its major defect, the low optimization efficiency caused by repeatedly solving the relevant differential equations in the generated nonlinear programming (NLP) problem, limits its wide application. A novel, highly effective control parameterization approach, fast-CVP, is proposed to improve optimization efficiency for industrial dynamic processes: costate gradient formulae are employed, and a fast approximate scheme is presented to solve the differential equations in dynamic process simulation. Three well-known engineering optimization benchmark problems for industrial dynamic processes are demonstrated as illustrations. The results show that the proposed fast approach saves at least 90% of the computation time compared with the traditional CVP method, which reveals the effectiveness of the proposed fast engineering optimization approach for industrial dynamic processes. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
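    In CVP the control trajectory u(t) is parameterized by a finite set of values (here piecewise-constant segments), the state equations are integrated for each candidate, and a standard NLP solver searches over the segment values. A minimal sketch on a textbook linear-quadratic problem (min of the integral of x² + u² with x' = u, x(0) = 1, whose analytic optimum is tanh(1) ≈ 0.7616); this illustrates plain CVP, not the paper's fast approximate scheme:

```python
import numpy as np
from scipy.optimize import minimize

N, steps = 10, 1000          # control segments; integration steps on [0, 1]
dt = 1.0 / steps

def cost(u_seg):
    """Integrate x' = u by explicit Euler and accumulate the
    quadratic cost for one candidate piecewise-constant control."""
    x, J = 1.0, 0.0
    for i in range(steps):
        u = u_seg[i * N // steps]       # segment active at this step
        J += (x * x + u * u) * dt
        x += u * dt
    return J

# NLP over the N segment values; each evaluation re-integrates the ODE,
# which is exactly the repeated cost the fast-CVP scheme attacks.
res = minimize(cost, np.zeros(N), method="BFGS")
```

    The optimized cost should sit just above the continuous-time optimum tanh(1), the gap being the piecewise-constant discretization error.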

  11. Real-Time Model and Simulation Architecture for Half- and Full-Bridge Modular Multilevel Converters

    NASA Astrophysics Data System (ADS)

    Ashourloo, Mojtaba

    This work presents an equivalent model and simulation architecture for real-time electromagnetic transient analysis of either half-bridge or full-bridge modular multilevel converters (MMCs) with 400 sub-modules (SMs) per arm. The proposed CPU/FPGA-based architecture is optimized for parallel implementation of the presented MMC model on the FPGA and benefits from a high-throughput floating-point computational engine. The developed real-time simulation architecture is capable of simulating MMCs with 400 SMs per arm at a time step of 825 nanoseconds. To address the difficulties of implementing the sorting process, a modified odd-even bubble sort is presented in this work. Comparison of the results under various test scenarios reveals that the proposed real-time simulator reproduces the system responses of its corresponding off-line counterpart in the PSCAD/EMTDC program.

  12. A Framework for the Optimization of Discrete-Event Simulation Models

    NASA Technical Reports Server (NTRS)

    Joshi, B. D.; Unal, R.; White, N. H.; Morris, W. D.

    1996-01-01

    With the growing use of computer modeling and simulation in all aspects of engineering, the scope of traditional optimization has to be extended to include simulation models. Some unique aspects have to be addressed while optimizing via stochastic simulation models. The optimization procedure has to explicitly account for the randomness inherent in the stochastic measures predicted by the model. This paper outlines a general-purpose framework for optimization of terminating discrete-event simulation models. The methodology combines a chance-constraint approach for problem formulation with standard statistical estimation and analysis techniques. The applicability of the optimization framework is illustrated by minimizing the operation and support resources of a launch vehicle through a simulation model.
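    A chance constraint replaces a hard requirement with a probabilistic one, estimated from terminating replications: choose the smallest resource level whose estimated probability of violating a performance target stays below a risk level α. The sketch below uses a toy exponential "turnaround time" model as a stand-in for a discrete-event launch-vehicle processing simulation; the model, deadline, and capacities are all illustrative assumptions.

```python
import numpy as np

def replicate(r, n, rng):
    """n terminating replications of a toy model in which mean
    turnaround time shrinks as the resource level r grows."""
    return rng.exponential(scale=10.0 / r, size=n)

alpha, deadline, n = 0.05, 7.0, 2000
rng = np.random.default_rng(0)
for r in range(1, 20):
    # estimated probability of missing the deadline at resource level r
    p_hat = np.mean(replicate(r, n, rng) > deadline)
    if p_hat <= alpha:       # chance constraint satisfied: stop at minimal r
        break
```

    Real frameworks add confidence intervals around p_hat so the stopping decision accounts for estimation error as well, which is the role of the statistical analysis techniques mentioned above.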

  13. Protein-ligand docking using fitness learning-based artificial bee colony with proximity stimuli.

    PubMed

    Uehara, Shota; Fujimoto, Kazuhiro J; Tanaka, Shigenori

    2015-07-07

    Protein-ligand docking is an optimization problem, which aims to identify the binding pose of a ligand with the lowest energy in the active site of a target protein. In this study, we employed a novel optimization algorithm called fitness learning-based artificial bee colony with proximity stimuli (FlABCps) for docking. Simulation results revealed that FlABCps improved the success rate of docking, compared to four state-of-the-art algorithms. The present results also showed superior docking performance of FlABCps, in particular for dealing with highly flexible ligands and proteins with a wide and shallow binding pocket.

  14. Scheduling algorithm for data relay satellite optical communication based on artificial intelligent optimization

    NASA Astrophysics Data System (ADS)

    Zhao, Wei-hu; Zhao, Jing; Zhao, Shang-hong; Li, Yong-jun; Wang, Xiang; Dong, Yi; Dong, Chen

    2013-08-01

    Optical satellite communication, with the advantages of broadband, large capacity, and low power consumption, has broken the bottleneck of traditional microwave satellite communication. The formation of a space-based information system using high-performance optical inter-satellite communication, and the realization of global seamless coverage and mobile terminal access, are necessary trends in the development of optical satellite communication. Considering the resources, missions, and constraints of a data relay satellite optical communication system, a model of optical communication resource scheduling is established and a scheduling algorithm based on artificial intelligent optimization is put forward. For multiple relay satellites, user satellites, and optical antennas, and multiple missions with several priority weights, resources are scheduled reasonably through two operations: "Ascertain Current Mission Scheduling Time" and "Refresh Latter Mission Time-Window". The priority weight is used as a parameter of the fitness function, and the scheduling plan is optimized by a genetic algorithm. In a simulation scenario including 3 relay satellites with 6 optical antennas, 12 user satellites, and 30 missions, the results reveal that the algorithm obtains satisfactory efficiency and performance, and that the resource scheduling model and the optimization algorithm are suitable for multi-relay-satellite, multi-user-satellite, multi-optical-antenna resource scheduling problems.

  15. Revealing the Effects of Nanoscale Membrane Curvature on Lipid Mobility

    PubMed Central

    Kabbani, Abir Maarouf; Woodward, Xinxin

    2017-01-01

    Recent advances in nanoengineering and super-resolution microscopy have enabled new capabilities for creating and observing membrane curvature. However, the effects of curvature on single-lipid diffusion have yet to be revealed. The simulations presented here describe the capabilities of varying experimental methods for revealing the effects of nanoscale curvature on single-molecule mobility. Traditionally, lipid mobility is revealed through fluorescence recovery after photobleaching (FRAP), fluorescence correlation spectroscopy (FCS), and single particle tracking (SPT). However, these techniques vary greatly in their ability to detect the effects of nanoscale curvature on lipid behavior. Traditionally, FRAP and FCS depend on diffraction-limited illumination and detection. A simulation of FRAP shows minimal effects on lipid diffusion due to a 50 nm radius membrane bud. Throughout the stages of the budding process, FRAP detected minimal changes in lipid recovery time due to the curvature relative to a flat membrane. Simulated FCS demonstrated small effects due to a 50 nm radius membrane bud that were more apparent with curvature-dependent lipid mobility changes. However, SPT achieves a sub-diffraction-limited resolution of membrane budding and lipid mobility through the identification of single-lipid positions with ≤15 nm spatial and ≤20 ms temporal resolution. By mapping the single-lipid step lengths to locations on the membrane, the effects of membrane topography and curvature could be correlated to the effective membrane viscosity. Single-fluorophore localization techniques, such as SPT, can detect membrane curvature and its effects on lipid behavior. These simulations and discussion provide a guideline for optimizing the experimental procedures in revealing the effects of curvature on lipid mobility and effective local membrane viscosity. PMID:29057801

  16. An improved grey wolf optimizer algorithm for the inversion of geoelectrical data

    NASA Astrophysics Data System (ADS)

    Li, Si-Yu; Wang, Shu-Ming; Wang, Peng-Fei; Su, Xiao-Lu; Zhang, Xin-Song; Dong, Zhi-Hui

    2018-05-01

    The grey wolf optimizer (GWO) is a novel bionics algorithm inspired by the social rank and prey-seeking behaviors of grey wolves. The GWO algorithm is easy to implement because of its basic concept, simple formulae, and small number of parameters. This paper develops a GWO algorithm with a nonlinear convergence factor and an adaptive location-updating strategy and applies this improved grey wolf optimizer (IGWO) algorithm to geophysical inversion problems using magnetotelluric (MT), DC resistivity, and induced polarization (IP) methods. Numerical tests in MATLAB 2010b on forward modeling data and observed data show that the IGWO algorithm can find the global minimum and rarely becomes trapped in local minima. For further study, results inverted using the IGWO are contrasted with particle swarm optimization (PSO) and the simulated annealing (SA) algorithm. The comparison reveals that the IGWO and PSO perform similarly, and both balance exploration and exploitation better than the SA for a given number of iterations.
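    The GWO update drives each wolf toward the three current best solutions (alpha, beta, delta), with a convergence factor a shrinking from 2 to 0 to shift from exploration to exploitation. A minimal sketch on a sphere test function, using a quadratic decay of a in the spirit of the IGWO's nonlinear convergence factor (the paper's exact schedule and adaptive location-updating strategy are not reproduced here):

```python
import numpy as np

def gwo(f, dim=5, wolves=20, iters=200, lb=-10.0, ub=10.0, seed=0):
    """Minimal grey wolf optimizer minimizing f over a box."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (wolves, dim))
    for t in range(iters):
        fit = np.array([f(x) for x in X])
        alpha, beta, delta = X[np.argsort(fit)[:3]]   # three leaders
        a = 2.0 * (1.0 - t / iters) ** 2              # nonlinear convergence factor
        for i in range(wolves):
            x_new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2.0 * a * r1 - a, 2.0 * r2
                D = np.abs(C * leader - X[i])         # distance to leader
                x_new += (leader - A * D) / 3.0       # average of leader pulls
            X[i] = np.clip(x_new, lb, ub)
    fit = np.array([f(x) for x in X])
    return X[np.argmin(fit)], fit.min()

best_x, best_f = gwo(lambda x: np.sum(x * x))   # sphere function
```

    For geophysical inversion, f would be the data-misfit functional between observed MT/DC/IP responses and forward-modeled responses of a candidate layered model.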

  17. Evacuation dynamic and exit optimization of a supermarket based on particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Li, Lin; Yu, Zhonghai; Chen, Yang

    2014-12-01

    A modified particle swarm optimization algorithm is proposed in this paper to investigate the dynamics of pedestrian evacuation from a fire in a public building: a supermarket with multiple exits and configurations of counters. Two distinctive evacuation behaviours, featuring a shortest-path strategy and a following-up strategy, are simulated in the model, accounting for different categories of age and sex of the pedestrians along with the impact of the fire, including gases, heat and smoke. To examine the relationship between the progress of the overall evacuation and the layout and configuration of the site, a series of simulations is conducted in various settings: without a fire and with a fire at different locations. Those experiments reveal a general pattern of two-phase evacuation, i.e., a steep section and a flat section, in addition to the impact of the presence of multiple exits on the evacuation along with the geographic locations of the exits. For the study site, our simulations indicate deficiencies in the current layout and configuration during evacuation and verify that the proposed modifications resolve them. More specifically, to improve the effectiveness of evacuation from the site, adding an exit between Exit 6 and Exit 7 and expanding the corridor at the right side of Exit 7 would significantly reduce the evacuation time.

  18. USMC Inventory Control Using Optimization Modeling and Discrete Event Simulation

    DTIC Science & Technology

    2016-09-01

    Approved for public release; distribution is unlimited. Timothy A. Curling examines USMC inventory control using optimization modeling and discrete-event simulation. This construct can potentially provide an effective means of improving order management decisions.

  19. Multi-wavelength metal-dielectric nonpolarizing beam splitters in the near-infrared range

    NASA Astrophysics Data System (ADS)

    Hui Shi, Jin; Ping Wang, Zheng; Ying Guan, Chun; Yang, Jun; Shu Fu, Tian

    2011-04-01

    A 21-layer multi-wavelength metal-dielectric nonpolarizing cube beam splitter was designed by use of an optimization method and theoretically investigated in the near-infrared range. The angular dependence of the reflectance and the differential phases induced by reflection and transmission were presented. The simulation results revealed that the non-polarizing effect could be achieved for both the amplitude and phase characteristics at 1310 and 1550 nm. The differences between the simulated and target reflectance of 50% are less than 2%, and the differential phases are less than 5° in the ranges 1300-1320 nm and 1540-1550 nm for both p- and s-components.

  20. Ultimate open pit stochastic optimization

    NASA Astrophysics Data System (ADS)

    Marcotte, Denis; Caron, Josiane

    2013-02-01

    Classical open pit optimization (maximum closure problem) is made on block estimates, without directly considering block-grade uncertainty. We propose an alternative approach of stochastic optimization. The stochastic optimization is taken as the optimal pit computed on the block expected profits, rather than expected grades, computed from a series of conditional simulations. The stochastic optimization generates, by construction, larger ore and waste tonnages than the classical optimization. Contrary to the classical approach, the stochastic optimization is conditionally unbiased for the realized profit given the predicted profit. A series of simulated deposits with different variograms are used to compare the stochastic approach, the classical approach and the simulated approach that maximizes expected profit among simulated designs. Profits obtained with the stochastic optimization are generally larger than the classical or simulated pit. The main factor controlling the relative gain of stochastic optimization compared to the classical approach and the simulated pit is shown to be the information level as measured by the borehole spacing/range ratio. The relative gains of the stochastic approach over the classical approach increase with the treatment costs but decrease with mining costs. The relative gains of the stochastic approach over the simulated pit approach increase with both the treatment and mining costs. At early stages of an open pit project, when uncertainty is large, the stochastic optimization approach appears preferable to the classical approach or the simulated pit approach for fair comparison of the values of alternative projects and for the initial design and planning of the open pit.
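    The core of the distinction above is Jensen's inequality: because the ore/waste decision makes block profit a convex (max-type) function of grade, averaging profits over conditional simulations gives a different (larger) value than pricing the mean grade. A single-block sketch with hypothetical price, cost, and grade-distribution values:

```python
import numpy as np

# Illustrative economics: profit of a block is the better of treating it
# as ore (revenue minus treatment and mining cost) or leaving it as
# waste (mining cost only). All numbers are hypothetical.
price, treat_cost, mine_cost = 5.0, 2.0, 1.0

def block_profit(grade):
    return np.maximum(price * grade - treat_cost - mine_cost, -mine_cost)

rng = np.random.default_rng(0)
sim_grades = rng.lognormal(mean=-0.5, sigma=0.8, size=10_000)  # conditional sims

profit_of_mean = block_profit(sim_grades.mean())   # classical: price the mean grade
mean_of_profits = block_profit(sim_grades).mean()  # stochastic: average the profits
```

    The stochastic value exceeds the classical one whenever the grade distribution straddles the ore/waste cutoff, which is why the stochastic pit, built on expected profits, encloses larger tonnages.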

  1. Modelling hydrology of a single bioretention system with HYDRUS-1D.

    PubMed

    Meng, Yingying; Wang, Huixiao; Chen, Jiangang; Zhang, Shuhan

    2014-01-01

    A computer-simulation study was carried out on the effectiveness of bioretention systems in abating stormwater. The hydrologic performance of two bioretention cells was simulated using HYDRUS-1D, and the simulation results were verified against nearly four years of field data. Using the validated model, design parameters (rainfall return period, filter media depth and type, and surface area) were optimized, and the annual hydrologic performance of bioretention systems was further analyzed under the optimized parameters. The study reveals that bioretention systems with underdrains and impervious boundaries do have some detention capability, while their total water retention capability is extremely limited. Better detention capability is noted for smaller rainfall events, deeper filter media, and design storms with a return period smaller than 2 years, and a cost-effective filter media depth is recommended in bioretention design. Better hydrologic effectiveness is achieved with a higher hydraulic conductivity and a higher ratio of bioretention surface area to catchment area; filter media with a conductivity between that of loamy sand and sandy loam, and a surface area of 10% of the catchment area, are recommended. In the long-term simulation, both infiltration volume and evapotranspiration are critical for the total rainfall treatment in bioretention systems.

  2. Diversity of nursing student views about simulation design: a q-methodological study.

    PubMed

    Paige, Jane B; Morin, Karen H

    2015-05-01

    Education of future nurses benefits from well-designed simulation activities. Skillful teaching with simulation requires educators to be constantly aware of how students experience learning and perceive educators' actions. Because revision of simulation activities considers feedback elicited from students, it is crucial to understand the perspective from which students base their response. In a Q-methodological approach, 45 nursing students rank-ordered 60 opinion statements about simulation design into a distribution grid. Factor analysis revealed that nursing students hold five distinct and uniquely personal perspectives: Let Me Show You, Stand By Me, The Agony of Defeat, Let Me Think It Through, and I'm Engaging and So Should You. Results suggest that nurse educators need to reaffirm that students clearly understand the purpose of each simulation activity. Nurse educators should incorporate presimulation assignments to optimize learning and help allay anxiety. The five perspectives discovered in this study can serve as a tool to discern individual students' learning needs. Copyright 2015, SLACK Incorporated.

  3. Development of the vertical Bridgman technique for 6-inch diameter c-axis sapphire growth supported by numerical simulation

    NASA Astrophysics Data System (ADS)

    Miyagawa, Chihiro; Kobayashi, Takumi; Taishi, Toshinori; Hoshikawa, Keigo

    2014-09-01

    Based on the growth of 3-inch diameter c-axis sapphire using the vertical Bridgman (VB) technique, numerical simulations were made and used to guide the growth of a 6-inch diameter sapphire. A 2D model of the VB hot-zone was constructed, the seeding interface shape of the 3-inch diameter sapphire as revealed by green laser scattering was estimated numerically, and the temperature distributions of two VB hot-zone models designed for 6-inch diameter sapphire growth were numerically simulated to achieve the optimal growth of large crystals. The hot-zone model with one heater was selected and prepared, and 6-inch diameter c-axis sapphire boules were actually grown, as predicted by the numerical results.

  4. Numerical simulation of vortex pyrolysis reactors for condensable tar production from biomass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, R.S.; Bellan, J.

    1998-08-01

    A numerical study is performed in order to evaluate the performance and optimal operating conditions of vortex pyrolysis reactors used for condensable tar production from biomass. A detailed mathematical model of porous biomass particle pyrolysis is coupled with a compressible Reynolds stress transport model for the turbulent reactor swirling flow. An initial evaluation of particle dimensionality effects is made through comparisons of single- (1D) and multi-dimensional particle simulations and reveals that the 1D particle model results in conservative estimates for total pyrolysis conversion times and tar collection. The observed deviations are due predominantly to geometry effects, while directional effects from thermal conductivity and permeability variations are relatively small. Rapid ablative particle heating rates are attributed to mechanical fragmentation of the biomass particles, which is modeled using a critical porosity for matrix breakup. Optimal thermal conditions for tar production are observed at 900 K. Effects of biomass identity, particle size distribution, and reactor geometry and scale are discussed.

  5. Developments in Human Centered Cueing Algorithms for Control of Flight Simulator Motion Systems

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A.; Telban, Robert J.; Cardullo, Frank M.

    1997-01-01

    The authors conducted further research with cueing algorithms for control of flight simulator motion systems. A variation of the so-called optimal algorithm was formulated using simulated aircraft angular velocity input as a basis. Models of the human vestibular sensation system, i.e., the semicircular canals and otoliths, are incorporated within the algorithm. Comparisons of angular velocity cueing responses showed a significant improvement over a formulation using angular acceleration input. Results also compared favorably with the coordinated adaptive washout algorithm, yielding similar results for angular velocity cues while eliminating false cues and reducing the tilt rate for longitudinal cues. These results were confirmed in piloted tests on the current motion system at NASA-Langley, the Visual Motion Simulator (VMS). Proposed future developments in cueing algorithms are described. The new motion system, the Cockpit Motion Facility (CMF), where the final evaluation of the cueing algorithms will be conducted, is also described.

  6. Characterization of compression behaviors of fully covered biodegradable polydioxanone biliary stent for human body: A numerical approach by finite element model.

    PubMed

    Liu, Yanhui; Zhang, Peihua

    2016-09-01

    This paper presents a study of the compression behaviors of fully covered biodegradable polydioxanone biliary stents (FCBPBs) developed for the human body, using the finite element method. To investigate the relationship between compression force and structural parameters (monofilament diameter and braid-pin number), nine numerical models based on an actual biliary stent were established. The compression forces derived from simulation and experiment are in good agreement, indicating that the simulation results can provide a useful reference for the investigation of biliary stents. The stress distribution on FCBPBs was studied to optimize their structure. In addition, plastic dissipation analysis and the plastic strain of FCBPBs were obtained via the compression simulation, revealing the effect of the structural parameters on tolerance. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Real-time management of an urban groundwater well field threatened by pollution.

    PubMed

    Bauser, Gero; Franssen, Harrie-Jan Hendricks; Kaiser, Hans-Peter; Kuhlmann, Ulrich; Stauffer, Fritz; Kinzelbach, Wolfgang

    2010-09-01

    We present an optimal real-time control approach for the management of drinking water well fields. The methodology is applied to the Hardhof field in the city of Zurich, Switzerland, which is threatened by diffuse pollution. The risk of attracting pollutants is higher if the pumping rate is increased and can be reduced by increasing artificial recharge (AR) or by adaptive allocation of the AR. The method was first tested in offline simulations with a three-dimensional finite element variably saturated subsurface flow model for the period January 2004-August 2005. The simulations revealed that (1) optimal control results were more effective than the historical control results and (2) the spatial distribution of AR should be different from the historical one. Next, the methodology was extended to a real-time control method based on the Ensemble Kalman Filter method, using 87 online groundwater head measurements, and tested at the site. The real-time control of the well field resulted in a decrease of the electrical conductivity of the water at critical measurement points which indicates a reduced inflow of water originating from contaminated sites. It can be concluded that the simulation and the application confirm the feasibility of the real-time control concept.

  8. Closing loop base pairs in RNA loop-loop complexes: structural behavior, interaction energy and solvation analysis through molecular dynamics simulations.

    PubMed

    Golebiowski, Jérôme; Antonczak, Serge; Fernandez-Carmona, Juan; Condom, Roger; Cabrol-Bass, Daniel

    2004-12-01

    Nanosecond molecular dynamics using the Ewald summation method have been performed to elucidate the structural and energetic role of the closing base pair in loop-loop RNA duplexes neutralized by Mg2+ counterions in aqueous phases. Mismatches GA, CU and Watson-Crick GC base pairs have been considered for closing the loop of an RNA in complementary interaction with HIV-1 TAR. The simulations reveal that the mismatch GA base, mediated by a water molecule, leads to a complex that presents the best compromise between flexibility and energetic contributions. The mismatch CU base pair, in spite of the presence of an inserted water molecule, is too short to achieve a tight interaction at the closing-loop junction and seems to force TAR to reorganize upon binding. An energetic analysis has allowed us to quantify the strength of the interactions of the closing and the loop-loop pairs throughout the simulations. Although the water-mediated GA closing base pair presents an interaction energy similar to that found on fully geometry-optimized structure, the water-mediated CU closing base pair energy interaction reaches less than half the optimal value.

  9. Assessing the applicability of WRF optimal parameters under the different precipitation simulations in the Greater Beijing Area

    NASA Astrophysics Data System (ADS)

    Di, Zhenhua; Duan, Qingyun; Wang, Chen; Ye, Aizhong; Miao, Chiyuan; Gong, Wei

    2018-03-01

    Forecasting skills of complex weather and climate models have been improved by tuning the sensitive parameters that exert the greatest impact on simulated results, using increasingly effective optimization methods. However, whether the optimal parameter values still work when the model simulation conditions vary is a scientific question that deserves study. In this study, a highly effective optimization method, adaptive surrogate model-based optimization (ASMO), was first used to tune nine sensitive parameters from four physical parameterization schemes of the Weather Research and Forecasting (WRF) model to obtain better summer precipitation forecasts over the Greater Beijing Area in China. Then, to assess the applicability of the optimal parameter values, simulation results from the WRF model with default and optimal parameter values were compared across precipitation events, boundary conditions, spatial scales, and physical processes in the Greater Beijing Area. Summer precipitation events from 6 years were used to calibrate and evaluate the optimal parameter values of the WRF model. Three sets of boundary data and two spatial resolutions were adopted to evaluate the superiority of the calibrated optimal parameters over the default parameters under WRF simulations with different boundary conditions and spatial resolutions, respectively. Physical interpretations of the optimal parameters, indicating how they improve precipitation simulation results, were also examined. All the results showed that the optimal parameters obtained by ASMO are superior to the default parameters for WRF simulations of summer precipitation in the Greater Beijing Area because the optimal parameters are not constrained by specific precipitation events, boundary conditions, or spatial resolutions. The optimal values of the nine parameters were determined from only 127 parameter samples, which shows that the ASMO method is highly efficient for optimizing WRF model parameters.
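
    The surrogate-based idea behind ASMO can be sketched in a few lines: fit a cheap surrogate to all expensive model evaluations collected so far, evaluate the expensive model where the surrogate predicts the optimum, and refit. The one-parameter toy below (with an invented quadratic objective standing in for a scored WRF run) illustrates only the loop, not the ASMO algorithm itself:

```python
import numpy as np

# Minimal surrogate-assisted optimization loop (illustrative sketch only;
# the real ASMO algorithm handles many parameters and noisy responses).

def expensive_model(p):
    # Stand-in for a WRF run scored against observations.
    return (p - 0.6) ** 2

rng = np.random.default_rng(0)
samples = list(rng.uniform(0.0, 1.0, 5))          # initial design points
scores = [expensive_model(p) for p in samples]

for _ in range(20):                               # adaptive refinement
    X = np.vander(np.array(samples), 3)           # columns: p**2, p, 1
    a, b, _c = np.linalg.lstsq(X, np.array(scores), rcond=None)[0]
    # Sample at the surrogate's minimizer (fall back to a random draw if
    # the fitted quadratic is not convex), clipped to the bounds.
    p_next = -b / (2 * a) if a > 0 else rng.uniform(0.0, 1.0)
    p_next = float(np.clip(p_next, 0.0, 1.0))
    samples.append(p_next)
    scores.append(expensive_model(p_next))        # one new expensive run

best = samples[int(np.argmin(scores))]
```

    Because each iteration spends only one expensive evaluation, the sample budget stays small, which is the same economy the record reports (127 WRF samples for nine parameters).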

  10. Homogeneous Canine Chest Phantom Construction: A Tool for Image Quality Optimization.

    PubMed

    Pavan, Ana Luiza Menegatti; Rosa, Maria Eugênia Dela; Giacomini, Guilherme; Bacchim Neto, Fernando Antonio; Yamashita, Seizo; Vulcano, Luiz Carlos; Duarte, Sergio Barbosa; Miranda, José Ricardo de Arruda; de Pina, Diana Rodrigues

    2016-01-01

    Digital radiographic imaging is increasing in veterinary practice. The use of radiation demands responsibility for maintaining high image quality, and low doses are necessary because workers are required to restrain the animal. Optimizing digital systems is necessary to avoid the unnecessary exposure that causes the phenomenon known as dose creep. Homogeneous phantoms are widely used to optimize image quality and dose. We developed an automatic computational methodology to classify and quantify tissues (i.e., lung tissue, adipose tissue, muscle tissue, and bone) in canine chest computed tomography exams. The thickness of each tissue was converted to simulator materials (i.e., Lucite, aluminum, and air). Dogs were separated into groups of 20 animals each according to weight. Mean weights were 6.5 ± 2.0 kg, 15.0 ± 5.0 kg, 32.0 ± 5.5 kg, and 50.0 ± 12.0 kg for the small, medium, large, and giant groups, respectively. One-way analysis of variance revealed significant differences (p < 0.05) between groups in all quantified simulator material thicknesses. As a result, four phantoms were constructed for dorsoventral and lateral views. In conclusion, the present methodology allows the development of phantoms of the canine chest and possibly other body regions and/or animals. The proposed phantom is a practical tool that may be employed in future work to optimize veterinary X-ray procedures.

  12. Optimization of wind plant layouts using an adjoint approach

    DOE PAGES

    King, Ryan N.; Dykes, Katherine; Graf, Peter; ...

    2017-03-10

    Using adjoint optimization and three-dimensional steady-state Reynolds-averaged Navier–Stokes (RANS) simulations, we present a new gradient-based approach for optimally siting wind turbines within utility-scale wind plants. By solving the adjoint equations of the flow model, the gradients needed for optimization are found at a cost that is independent of the number of control variables, thereby permitting optimization of large wind plants with many turbine locations. Moreover, compared to the common approach of superimposing prescribed wake deficits onto linearized flow models, the computational efficiency of the adjoint approach allows the use of higher-fidelity RANS flow models which can capture nonlinear turbulent flow physics within a wind plant. The steady-state RANS flow model is implemented in the Python finite-element package FEniCS, and the derivation and solution of the discrete adjoint equations are automated within the dolfin-adjoint framework. Gradient-based optimization of wind turbine locations is demonstrated for idealized test cases that reveal new optimization heuristics such as rotational symmetry, local speedups, and nonlinear wake curvature effects. Layout optimization is also demonstrated on more complex wind rose shapes, including a full annual energy production (AEP) layout optimization over 36 inflow directions and 5 wind speed bins.
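
    The cost advantage of the adjoint method is concrete: however many turbine-location variables there are, the full gradient costs one extra linear solve. A minimal sketch for a generic discrete problem (not the FEniCS/dolfin-adjoint RANS setup itself; the matrices below are random stand-ins for a discretized flow model):

```python
import numpy as np

# State x solves A x = B u (a stand-in for discretized flow equations);
# we want dJ/du for J = 0.5 * x^T Q x. The adjoint method gets the
# gradient for ALL m controls from one extra solve, instead of m extra
# forward solves as in finite differencing.

rng = np.random.default_rng(1)
n, m = 8, 5                            # state size, number of controls
A = np.eye(n) * 4 + rng.normal(size=(n, n)) * 0.1
B = rng.normal(size=(n, m))
Q = np.eye(n)
u = rng.normal(size=m)

x = np.linalg.solve(A, B @ u)          # forward (state) solve
lam = np.linalg.solve(A.T, Q @ x)      # single adjoint solve
grad_adjoint = B.T @ lam               # gradient w.r.t. every control

# Finite-difference check (m extra forward solves, verification only).
def J(u):
    x = np.linalg.solve(A, B @ u)
    return 0.5 * x @ Q @ x

eps = 1e-6
grad_fd = np.array([(J(u + eps * e) - J(u - eps * e)) / (2 * eps)
                    for e in np.eye(m)])
```

    The same structure carries over when A is a nonlinear RANS residual Jacobian: one linearized adjoint solve per objective, regardless of how many turbine coordinates are being optimized.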

  14. Optimal Spatial Design of Capacity and Quantity of Rainwater Catchment Systems for Urban Flood Mitigation

    NASA Astrophysics Data System (ADS)

    Huang, C.; Hsu, N.

    2013-12-01

    This study incorporates Low-Impact Development (LID) rainwater catchment technology into the Storm Water Management Model (SWMM) to design the spatial capacity and quantity of rain barrels for urban flood mitigation. We propose a simulation-optimization model for effectively searching for the optimal design. In the simulation method, we design a series of regular spatial distributions of the capacity and quantity of rainwater catchment facilities, so that the reduction in flooding under a variety of design forms can be simulated by SWMM. We then calculate the net benefit, equal to the decrease in inundation loss minus the facility cost, and the best solution from the simulation method serves as the initial solution for the optimization model. In the optimization method, we first use the simulation results and a Back-Propagation Neural Network (BPNN) to develop a water-level simulation model of the urban drainage system, replacing SWMM, whose operation through a graphical user interface makes it difficult to couple with an optimization model. We then embed the BPNN-based simulation model into the optimization model, whose objective function minimizes the negative net benefit. Finally, we establish a tabu search-based algorithm to optimize the planning solution. The developed method is applied to Zhonghe District, Taiwan. Results showed that combining tabu search and the BPNN-based simulation model in the optimization model not only finds solutions 12.75% better than the simulation method alone, but also resolves the limitations of previous studies. Furthermore, the optimized spatial rain barrel design can reduce inundation loss by 72% in historical flood events.
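
    The record does not detail the tabu search itself, but its skeleton is generic: move to the best neighboring design not on a recency list, remembering the best design seen so far. A toy sketch over a discrete design vector (the cost function and its optimum are invented stand-ins for the negative net benefit from the SWMM/BPNN model):

```python
import itertools

# Tabu search over a discrete design vector, e.g. a rain-barrel capacity
# index at each of 4 sites. cost() is a toy separable objective.

def cost(design):
    target = (2, 0, 3, 1)                        # hypothetical optimum
    return sum((d - t) ** 2 for d, t in zip(design, target))

def neighbors(design, levels=5):
    # All designs differing by +-1 at a single site, within bounds.
    for i, step in itertools.product(range(len(design)), (-1, 1)):
        d = list(design)
        d[i] += step
        if 0 <= d[i] < levels:
            yield tuple(d)

current = (0, 0, 0, 0)
best = current
tabu = [current]                                 # recently visited designs
for _ in range(50):
    cands = [d for d in neighbors(current) if d not in tabu]
    if not cands:
        break
    current = min(cands, key=cost)               # best non-tabu neighbor
    tabu.append(current)
    tabu = tabu[-10:]                            # bounded tabu tenure
    if cost(current) < cost(best):
        best = current
```

    The tabu list is what lets the search accept uphill moves and escape local optima that would trap plain hill climbing.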

  15. Optimization Model for Web Based Multimodal Interactive Simulations.

    PubMed

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2015-07-15

    This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications, where visual quality and simulation performance directly influence user experience, overloading of hardware resources may result in an unsatisfactory reduction in simulation quality and user satisfaction. However, optimization of simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize graphical rendering and simulation performance while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization, and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. These data are utilized in conjunction with user specified design requirements in the optimization phase to ensure the best possible computational resource allocation. The optimum solution is used for rendering (e.g. texture size, canvas resolution) and simulation parameters (e.g. simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach.
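
    At toy scale the optimization phase can be illustrated without an MIP solver: enumerate the discrete setting combinations, discard those exceeding the measured budget, and keep the highest-quality feasible one. All option lists, scores, and costs below are hypothetical; a real formulation would hand the same structure to an MIP solver:

```python
import itertools

# Pick discrete rendering/simulation settings to maximize a quality
# score subject to a per-frame compute budget measured in the
# identification phase. Exhaustive enumeration suffices at this scale.

texture_sizes = [256, 512, 1024]        # pixels (hypothetical options)
resolutions = [(640, 480), (1280, 720)]
sim_nodes = [500, 1000, 2000]

def quality(tex, res, nodes):
    # Toy quality score: each setting contributes proportionally.
    return tex / 1024 + res[0] / 1280 + nodes / 2000

def load(tex, res, nodes):
    # Hypothetical per-frame cost (ms) from the identification phase.
    return tex / 512 + res[0] * res[1] / 4e5 + nodes / 400

budget_ms = 8.0
best = max(
    (c for c in itertools.product(texture_sizes, resolutions, sim_nodes)
     if load(*c) <= budget_ms),
    key=lambda c: quality(*c),
)
```

    The integer nature of the choices (texture sizes come in fixed steps) is what makes the paper's formulation a mixed integer program rather than a continuous one.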

  17. Simulator for multilevel optimization research

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Young, K. C.

    1986-01-01

    A computer program designed to simulate and improve multilevel optimization techniques is described. By using simple analytic functions to represent complex engineering analyses, the simulator can generate and test a large variety of multilevel decomposition strategies in a relatively short time. This type of research is an essential step toward routine optimization of large aerospace systems. The paper discusses the types of optimization problems handled by the simulator and gives input and output listings and plots for a sample problem. It also describes multilevel implementation techniques which have value beyond the present computer program. Thus, this document serves as a user's manual for the simulator and as a guide for building future multilevel optimization applications.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noé, F.; Daidone, Isabella; Löllmann, Marc

    There is a gap between kinetic experiment and simulation in their views of the dynamics of complex biomolecular systems. Whereas experiments typically reveal only a few readily discernible exponential relaxations, simulations often indicate complex multistate behavior. Here, a theoretical framework is presented that reconciles these two approaches. The central concept is dynamical fingerprints, which contain peaks at the time scales of the dynamical processes involved, with amplitudes determined by the experimental observable. Fingerprints can be generated from both experimental and simulation data, and their comparison by matching peaks permits assignment of structural changes present in the simulation to experimentally observed relaxation processes. The approach is applied here to a test case, interpreting single-molecule fluorescence correlation spectroscopy experiments on a set of fluorescent peptides with molecular dynamics simulations. The peptides exhibit complex kinetics shown to be consistent with the apparent simplicity of the experimental data. Moreover, the fingerprint approach can be used to design new experiments with site-specific labels that optimally probe specific dynamical processes in the molecule under investigation.
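
    The time-scale half of a dynamical fingerprint can be sketched directly: the implied relaxation times of a Markov model estimated from simulation follow from the transition matrix eigenvalues, t_i = -tau / ln(lambda_i); the amplitude of each process in an experiment depends on the observable's projection onto the corresponding eigenvector. The 3-state matrix below is invented for illustration:

```python
import numpy as np

# Implied relaxation time scales from a row-stochastic Markov transition
# matrix estimated at lag time tau. The stationary eigenvalue 1 carries
# no relaxation; each remaining eigenvalue gives one exponential process.

tau = 1.0                                    # lag time (arbitrary units)
T = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.85, 0.05],
              [0.02, 0.08, 0.90]])           # rows sum to 1

evals = np.sort(np.linalg.eigvals(T).real)[::-1]
timescales = -tau / np.log(evals[1:])        # skip the eigenvalue 1
```

    An experiment that projects mainly onto one eigenvector will show a single apparent relaxation even though several time scales exist, which is exactly the experiment/simulation gap the fingerprint comparison resolves.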

  19. Cellular traction force recovery: An optimal filtering approach in two-dimensional Fourier space.

    PubMed

    Huang, Jianyong; Qin, Lei; Peng, Xiaoling; Zhu, Tao; Xiong, Chunyang; Zhang, Youyi; Fang, Jing

    2009-08-21

    Quantitative estimation of cellular traction has significant physiological and clinical implications. As an inverse problem, traction force recovery is inherently susceptible to noise in the measured displacement data. In the traditional procedure of Fourier transform traction cytometry (FTTC), noise is amplified during force reconstruction, and small tractions cannot be recovered from displacement fields with a low signal-to-noise ratio (SNR). To improve the FTTC process, we develop an optimal filtering scheme to suppress the noise in the force reconstruction procedure. In the framework of Wiener filtering theory, four filtering parameters are introduced in two-dimensional Fourier space and their analytical expressions are derived according to the minimum mean squared error (MMSE) optimization criterion. The optimal filtering approach is validated with simulations and with experimental data on the adhesion of single cardiac myocytes to an elastic substrate. The results indicate that the proposed method can greatly enhance the SNR of the recovered forces, revealing tiny tractions in cell-substrate interaction.
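
    The underlying MMSE idea can be sketched with the textbook Wiener gain, |S(k)|² / (|S(k)|² + |N(k)|²), rather than the authors' specific four-parameter scheme. Here the true signal spectrum is assumed known, which a real traction experiment would have to estimate:

```python
import numpy as np

# 2D Wiener filtering in Fourier space: attenuate frequency components
# where expected noise power rivals signal power, which suppresses the
# noise amplification that plagues naive inversion.

rng = np.random.default_rng(2)
n = 64
x, y = np.meshgrid(np.linspace(0, 2 * np.pi, n),
                   np.linspace(0, 2 * np.pi, n))
signal = np.sin(x) + 0.5 * np.cos(2 * y)      # smooth "displacement" field
noise = rng.normal(scale=0.3, size=(n, n))
observed = signal + noise

S2 = np.abs(np.fft.fft2(signal)) ** 2          # signal power (known here)
N2 = n * n * 0.3 ** 2                          # expected white-noise power
H = S2 / (S2 + N2)                             # Wiener gain per frequency
recovered = np.fft.ifft2(H * np.fft.fft2(observed)).real

err_raw = np.mean((observed - signal) ** 2)
err_filt = np.mean((recovered - signal) ** 2)
```

    Because the smooth field concentrates its power in a few low frequencies, the gain H passes those nearly untouched and crushes the broadband noise elsewhere, sharply reducing the mean squared error.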

  20. Quantum optimal control pathways of ozone isomerization dynamics subject to competing dissociation: A two-state one-dimensional model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurosaki, Yuzuru, E-mail: kurosaki.yuzuru@jaea.go.jp; Ho, Tak-San, E-mail: tsho@Princeton.EDU; Rabitz, Herschel, E-mail: hrabitz@Princeton.EDU

    We construct a two-state one-dimensional reaction-path model for ozone open → cyclic isomerization dynamics. The model is based on the intrinsic reaction coordinate connecting the cyclic and open isomers with the O2 + O asymptote on the ground-state 1A′ potential energy surface obtained with a high-level ab initio method. Using this two-state model, time-dependent wave packet optimal control simulations are carried out. Two possible pathways are identified along with their respective band-limited optimal control fields; for pathway 1 the wave packet initially associated with the open isomer is first pumped into a shallow well on the excited electronic state potential curve and then driven back to the ground electronic state to form the cyclic isomer, whereas for pathway 2 the corresponding wave packet is excited directly to the primary well of the excited state potential curve. The simulations reveal that the optimal field for pathway 1 produces a final yield of nearly 100% with substantially smaller intensity than that obtained in a previous study [Y. Kurosaki, M. Artamonov, T.-S. Ho, and H. Rabitz, J. Chem. Phys. 131, 044306 (2009)] using a single-state one-dimensional model. Pathway 2, due to its strong coupling to the dissociation channel, is less effective than pathway 1. The simulations also show that nonlinear field effects due to molecular polarizability and hyperpolarizability are small for pathway 1 but could become significant for pathway 2 because much higher field intensity is involved in the latter. The results suggest that practical control may be feasible with the aid of a few low-lying excited electronic states for ozone isomerization.

  1. Stochastic Simulation of Dopamine Neuromodulation for Implementation of Fluorescent Neurochemical Probes in the Striatal Extracellular Space.

    PubMed

    Beyene, Abraham G; McFarlane, Ian R; Pinals, Rebecca L; Landry, Markita P

    2017-10-18

    Imaging the dynamic behavior of neuromodulatory neurotransmitters in the extracellular space arising from individual quantal release events would constitute a major advance in neurochemical imaging. Spatial and temporal resolution of these highly stochastic neuromodulatory events requires concurrent advances in the chemical development of optical nanosensors selective for neuromodulators, in concert with advances in imaging methodologies to capture millisecond neurotransmitter release. Herein, we develop and implement a stochastic model to describe dopamine dynamics in the extracellular space (ECS) of the brain dorsal striatum to guide the design and implementation of fluorescent neurochemical probes that record neurotransmitter dynamics in the ECS. Our model is developed from first principles and simulates release, diffusion, and reuptake of dopamine in a 3D simulation volume of striatal tissue. We find that in vivo imaging of neuromodulation requires simultaneous optimization of dopamine nanosensor reversibility and sensitivity: dopamine imaging in the striatum or nucleus accumbens requires nanosensors with an optimal dopamine dissociation constant (Kd) of 1 μM, whereas Kd values above 10 μM are required for dopamine imaging in the prefrontal cortex. Furthermore, as a result of the probabilistic nature of dopamine terminal activity in the striatum, our model reveals that imaging frame rates of 20 Hz are optimal for recording temporally resolved dopamine release events. Our work provides a modeling platform to probe how complex neuromodulatory processes can be studied with fluorescent nanosensors and enables direct evaluation of nanosensor chemistry and imaging hardware parameters. Our stochastic model is generic for evaluating fluorescent neurotransmission probes and is broadly applicable to the design of other neurotransmitter fluorophores and their optimization for implementation in vivo.
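
    A drastically simplified, well-mixed version of the release/reuptake dynamics conveys how sensor occupancy follows stochastic quantal events. All rate values below are illustrative, not the paper's calibrated parameters, and diffusion is ignored entirely:

```python
import numpy as np

# Poisson train of quantal release events raises extracellular dopamine;
# Michaelis-Menten reuptake removes it; a nanosensor with dissociation
# constant Kd reports fractional occupancy C / (C + Kd).

rng = np.random.default_rng(3)
dt, steps = 1e-3, 5000                 # 1 ms steps, 5 s of simulated time
firing_rate = 4.0                      # Hz, tonic terminal activity
quantum = 0.1                          # uM added per release event
vmax, km = 4.0, 0.2                    # reuptake parameters (uM/s, uM)
kd = 1.0                               # nanosensor Kd in uM

c = 0.0
occupancy = np.empty(steps)
for t in range(steps):
    if rng.random() < firing_rate * dt:          # quantal release
        c += quantum
    c -= vmax * c / (km + c) * dt                # Michaelis-Menten reuptake
    c = max(c, 0.0)
    occupancy[t] = c / (c + kd)                  # sensor response
```

    The hyperbolic occupancy term is why Kd matters so much in the record's conclusions: a sensor with Kd far below the transient concentration saturates and cannot resolve individual events, while one with Kd far above barely responds.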

  2. Optimal design and operation of solid oxide fuel cell systems for small-scale stationary applications

    NASA Astrophysics Data System (ADS)

    Braun, Robert Joseph

    The advent of maturing fuel cell technologies presents an opportunity to achieve significant improvements in energy conversion efficiencies at many scales, thereby simultaneously extending our finite resources and reducing "harmful" energy-related emissions to levels well below near-future regulatory standards. However, before the advantages of fuel cells can be realized, systems-level design issues regarding their application must be addressed. Using modeling and simulation, the present work offers optimal system design and operation strategies for stationary solid oxide fuel cell systems applied to single-family detached dwellings. A one-dimensional, steady-state finite-difference model of a solid oxide fuel cell (SOFC) is generated and verified against other mathematical SOFC models in the literature. Fuel cell system balance-of-plant components and costs are also modeled and used to provide an estimate of system capital and life cycle costs. The models are used to evaluate optimal cell-stack power output and the impact of cell operating and design parameters, fuel type, thermal energy recovery, system process design, and operating strategy on overall system energetic and economic performance. Optimal cell design voltage, fuel utilization, and operating temperature parameters are found by minimizing life cycle costs. System design evaluations reveal that hydrogen-fueled SOFC systems demonstrate lower system efficiencies than methane-fueled systems. The use of recycled cell exhaust gases in process design at the stack periphery is found to produce the highest system electric and cogeneration efficiencies while achieving the lowest capital costs. Annual simulations reveal that efficiencies of 45% electric (LHV basis) and 85% cogenerative, with simple economic paybacks of 5-8 years, are feasible for 1-2 kW SOFC systems in residential-scale applications. Design guidelines that offer additional suggestions on fuel cell-stack sizing and operating strategy (base-load or load-following, and cogeneration or electric-only) are also presented.

  3. Optimization of Operations Resources via Discrete Event Simulation Modeling

    NASA Technical Reports Server (NTRS)

    Joshi, B.; Morris, D.; White, N.; Unal, R.

    1996-01-01

    The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to such optimization problems with integer-valued decision variables are pattern search and statistical methods. However, in a simulation environment characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we explore the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.
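
    A bare-bones genetic algorithm over integer decision variables with a noisy, penalized objective (a toy stand-in for the discrete event simulation and its service constraint) can look like:

```python
import random

# Genetic algorithm over integer resource levels. The "simulation" is a
# toy cost: total resources, plus a large penalty whenever a stochastic
# throughput estimate falls below a requirement.

random.seed(4)
N_RES, LO, HI = 3, 0, 10

def simulate_cost(levels):
    throughput = sum(levels) + random.gauss(0, 0.5)   # noisy sim output
    penalty = 100.0 if throughput < 12 else 0.0       # service constraint
    return sum(levels) + penalty

def mutate(levels):
    # Perturb one resource level by +-1, clipped to the bounds.
    i = random.randrange(N_RES)
    child = list(levels)
    child[i] = min(HI, max(LO, child[i] + random.choice((-1, 1))))
    return child

pop = [[random.randint(LO, HI) for _ in range(N_RES)] for _ in range(20)]
for _ in range(40):                                   # generations
    scored = sorted(pop, key=simulate_cost)
    elite = scored[:5]                                # truncation selection
    pop = elite + [mutate(random.choice(elite)) for _ in range(15)]

best = min(pop, key=simulate_cost)
```

    Note that the fitness function is itself stochastic, mirroring the paper's point: the search needs no gradients or smoothness, only the ability to rank candidate resource levels by simulated cost.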

  4. ARTICLES: Thermohydrodynamic models of the interaction of pulse-periodic radiation with matter

    NASA Astrophysics Data System (ADS)

    Arutyunyan, R. V.; Baranov, V. Yu; Bol'shov, Leonid A.; Malyuta, D. D.; Mezhevov, V. S.; Pis'mennyĭ, V. D.

    1987-02-01

    Experimental and theoretical investigations were made of the processes of drilling and deep melting of metals by pulsed and pulse-periodic laser radiation. Direct photography of the surface revealed molten metal splashing due to interaction with single CO2 laser pulses. A proposed thermohydrodynamic model was used to account for the experimental results and to calculate the optimal parameters of pulse-periodic radiation needed for deep melting. The melt splashing processes were simulated numerically.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Weizhao; Ren, Huaqing; Lu, Jie

    This paper reports several methods for characterizing the properties of uncured woven prepreg during the preforming process. Uniaxial tension, bias-extension, and bending tests are conducted to measure the in-plane properties of the material. Friction tests are utilized to reveal the prepreg-prepreg and prepreg-forming-tool interactions. All tests are performed within the temperature range of the real manufacturing process, and the results serve as inputs to numerical simulation for product prediction and preforming process parameter optimization.

  6. Device characterization and optimization of small molecule organic solar cells assisted by modelling simulation of the current-voltage characteristics.

    PubMed

    Zuo, Yi; Wan, Xiangjian; Long, Guankui; Kan, Bin; Ni, Wang; Zhang, Hongtao; Chen, Yongsheng

    2015-07-15

    In order to understand the photovoltaic performance differences between the recently reported DR3TBDTT-HD and DR3TBDT2T based solar cells, a modified two-diode model with the Hecht equation was built to simulate the corresponding current-voltage characteristics. The simulation results reveal that the poor device performance of the DR3TBDTT-HD based device mainly originated from its insufficient charge transport ability: an average current of 5.79 mA cm(-2) was lost through this pathway at the maximum power point for the DR3TBDTT-HD device, nearly three times that of the DR3TBDT2T based device under the same fabrication conditions. The morphology studies support these simulation results, with both Raman and 2D-GIXD data revealing that the DR3TBDTT-HD based blend films exhibit lower crystallinity. Spin coating at low temperature was used to increase the crystallinity of the DR3TBDTT-HD based blend films, and the average current loss through insufficient charge transport at the maximum power point was suppressed to 2.08 mA cm(-2). As a result, the average experimental power conversion efficiency of the DR3TBDTT-HD based solar cells increased by over 40%.

  7. Lattice Boltzmann Simulation Optimization on Leading Multicore Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Samuel; Carter, Jonathan; Oliker, Leonid

    2008-02-01

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to a lattice Boltzmann application (LBMHD) that historically has made poor use of scalar microprocessors due to its complex data structures and memory access patterns. We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Clovertown, AMD Opteron X2, Sun Niagara2, and STI Cell, as well as the single-core Intel Itanium2. Rather than hand-tuning LBMHD for each system, we develop a code generator that allows us to identify a highly optimized version for each platform while amortizing the human programming effort. Results show that our auto-tuned LBMHD application achieves up to a 14x improvement compared with the original code. Additionally, we present detailed analysis of each optimization, revealing surprising hardware bottlenecks and software challenges for future multicore systems and applications.
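
    Auto-tuning reduces to a simple pattern: generate candidate implementations of the same kernel, time each on the target machine, and keep the fastest. The Python variants below are illustrative stand-ins for the generated LBMHD code versions (which would differ in loop order, blocking, vectorization, and so on):

```python
import timeit

# Time several functionally identical kernel variants and keep the
# fastest one for this machine; the search replaces hand-tuning.

data = list(range(10000))

def variant_loop():
    total = 0
    for v in data:
        total += v * v
    return total

def variant_comprehension():
    return sum(v * v for v in data)

def variant_map():
    return sum(map(lambda v: v * v, data))

variants = {
    "loop": variant_loop,
    "comprehension": variant_comprehension,
    "map": variant_map,
}

# min over repeats reduces timer noise from OS jitter.
timings = {name: min(timeit.repeat(fn, number=20, repeat=3))
           for name, fn in variants.items()}
best_name = min(timings, key=timings.get)
best_fn = variants[best_name]
```

    The key property is that correctness is checked once (all variants compute the same result), after which the selection is purely empirical, so the same generator can serve every architecture in the study.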

  9. Optimization in Cardiovascular Modeling

    NASA Astrophysics Data System (ADS)

    Marsden, Alison L.

    2014-01-01

    Fluid mechanics plays a key role in the development, progression, and treatment of cardiovascular disease. Advances in imaging methods and patient-specific modeling now reveal increasingly detailed information about blood flow patterns in health and disease. Building on these tools, there is now an opportunity to couple blood flow simulation with optimization algorithms to improve the design of surgeries and devices, incorporating more information about the flow physics in the design process to augment current medical knowledge. In doing so, a major challenge is the need for efficient optimization tools that are appropriate for unsteady fluid mechanics problems, particularly for the optimization of complex patient-specific models in the presence of uncertainty. This article reviews the state of the art in optimization tools for virtual surgery, device design, and model parameter identification in cardiovascular flow and mechanobiology applications. In particular, it reviews trade-offs between traditional gradient-based methods and derivative-free approaches, as well as the need to incorporate uncertainties. Key future challenges are outlined, which extend to the incorporation of biological response and the customization of surgeries and devices for individual patients.

  10. Constrained optimization via simulation models for new product innovation

    NASA Astrophysics Data System (ADS)

    Pujowidianto, Nugroho A.

    2017-11-01

    We consider the problem of constrained optimization in which decision makers aim to optimize a primary performance measure while constraining secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete event simulation. Most review papers tend to be methodology-based; this review attempts to be problem-based, as decision makers may have already settled on a problem formulation. We consider constrained optimization models because there are usually constraints on secondary performance measures as trade-offs in new product development. The paper starts by laying out the possible methods and the reasons for using constrained optimization via simulation models. It then reviews different simulation optimization approaches to constrained optimization depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.

  11. Artificial Neural Network-Based Three-dimensional Continuous Response Relationship Construction of 3Cr20Ni10W2 Heat-Resisting Alloy and Its Application in Finite Element Simulation

    NASA Astrophysics Data System (ADS)

    Li, Le; Wang, Li-yong

    2018-04-01

    The application of an accurate constitutive relationship in finite element simulation contributes significantly to accurate simulation results, which play a critical role in process design and optimization. In this investigation, true stress-strain data of 3Cr20Ni10W2 heat-resisting alloy were obtained from a series of isothermal compression tests conducted over a wide temperature range of 1203-1403 K and strain rate range of 0.01-10 s-1 on a Gleeble 1500 testing machine. The constitutive relationship was then modeled by an optimally constructed and well-trained back-propagation artificial neural network (BP-ANN). Evaluation of the BP-ANN model revealed that it has admirable performance in characterizing and predicting the flow behavior of 3Cr20Ni10W2 heat-resisting alloy. Meanwhile, a comparison between an improved Arrhenius-type constitutive equation and the BP-ANN model shows that the latter has higher accuracy. Consequently, the developed BP-ANN model was used to predict abundant stress-strain data beyond the limited experimental conditions and to construct the three-dimensional continuous response relationship among temperature, strain rate, strain, and stress. Finally, this three-dimensional continuous response relationship was applied to the numerical simulation of isothermal compression tests. The results show that such a constitutive relationship can significantly improve the accuracy of numerical simulations of hot forming processes.
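
    A one-hidden-layer back-propagation network of the kind described can be written out in a few lines. The data below are synthetic (the real model was trained on Gleeble compression measurements), and the architecture is a minimal sketch, not the authors' optimized topology:

```python
import numpy as np

# Tiny BP-ANN: normalized inputs (temperature, log strain rate, strain)
# map to a normalized flow stress through one tanh hidden layer, trained
# by full-batch gradient descent on a synthetic linear target.

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, (200, 3))
y = (0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.8 * X[:, 2])[:, None]

W1, b1 = rng.normal(0, 0.5, (3, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)

def mse(W1, b1, W2, b2):
    return float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())

mse_start = mse(W1, b1, W2, b2)
lr = 0.1
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                 # forward pass
    pred = h @ W2 + b2
    err = pred - y
    gW2 = h.T @ err / len(X)                 # backward pass
    gb2 = err.mean(0)
    gh = err @ W2.T * (1 - h ** 2)           # tanh derivative
    gW1 = X.T @ gh / len(X)
    gb1 = gh.mean(0)
    W2 -= lr * gW2
    b2 -= lr * gb2
    W1 -= lr * gW1
    b1 -= lr * gb1

mse_end = mse(W1, b1, W2, b2)
```

    Once trained, such a network can be evaluated at arbitrary (temperature, strain rate, strain) points, which is how the paper fills in the continuous response surface between the discrete experimental conditions.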

  12. Computational replication of the patient-specific stenting procedure for coronary artery bifurcations: From OCT and CT imaging to structural and hemodynamics analyses.

    PubMed

    Chiastra, Claudio; Wu, Wei; Dickerhoff, Benjamin; Aleiou, Ali; Dubini, Gabriele; Otake, Hiromasa; Migliavacca, Francesco; LaDisa, John F

    2016-07-26

    The optimal stenting technique for coronary artery bifurcations is still debated. With additional advances, computational simulations could soon be used to compare stent designs or strategies based on verified structural and hemodynamics results in order to identify the optimal solution for each individual's anatomy. In this study, patient-specific simulations of stent deployment were performed for 2 cases to replicate the complete procedure conducted by interventional cardiologists. Subsequent computational fluid dynamics (CFD) analyses were conducted to quantify hemodynamic quantities linked to restenosis. Patient-specific pre-operative models of coronary bifurcations were reconstructed from CT angiography and optical coherence tomography (OCT). Plaque location and composition were estimated from OCT and assigned to models, and structural simulations were performed in Abaqus. Artery geometries after virtual stent expansion of Xience Prime or Nobori stents created in SolidWorks were compared to post-operative geometry from OCT and CT before being extracted and used for CFD simulations in SimVascular. Inflow boundary conditions based on body surface area, and downstream vascular resistances and capacitances, were applied at branches to mimic physiology. Artery geometries obtained after virtual expansion were in good agreement with those reconstructed from patient images. Quantitative comparison of the distance between reconstructed and post-stent geometries revealed a maximum difference in area of 20.4%. Adverse indices of wall shear stress were more pronounced for the thicker Nobori stents in both patients. These findings verify structural analyses of stent expansion, introduce a workflow to combine software packages for solid and fluid mechanics analysis, and underscore important stent design features from prior idealized studies. The proposed approach may ultimately be useful in determining an optimal choice of stent and position for each patient.
Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Optimal Base Station Density of Dense Network: From the Viewpoint of Interference and Load.

    PubMed

    Feng, Jianyuan; Feng, Zhiyong

    2017-09-11

    Network densification is attracting increasing attention recently due to its ability to improve network capacity by spatial reuse and relieve congestion by offloading. However, excessive densification and aggressive offloading can also degrade network performance due to problems of interference and load. In this paper, with consideration of load issues, we study the optimal base station density that maximizes the throughput of the network. The expected link rate and the utilization ratio of the contention-based channel are derived as functions of base station density using the Poisson Point Process (PPP) and a Markov chain. These expressions reveal the rules governing deployment. Based on these results, we obtain the throughput of the network and indicate the optimal deployment density under different network conditions. Extensive simulations are conducted to validate our analysis and show the substantial performance gain obtained by the proposed deployment scheme. These results can provide guidance for network densification.
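The density trade-off can be caricatured with a crude Monte Carlo experiment. This is not the paper's PPP/Markov-chain analysis: the unit-square geometry, path-loss exponent of 4, noise level and load model below are all invented. Base stations are dropped as a Poisson Point Process, each user attaches to its nearest station, per-user rate falls with interference from the other stations, and each station splits its channel evenly among its attached users.

```python
import math, random

random.seed(2)

def sample_poisson(lam):
    # Knuth's inverse-transform Poisson sampler (fine for small lambda)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def mean_throughput(density, n_users=100, trials=30):
    """Toy network throughput on the unit square for a given BS density."""
    total = 0.0
    for _ in range(trials):
        n_bs = max(1, sample_poisson(density))
        bs = [(random.random(), random.random()) for _ in range(n_bs)]
        load = [0] * n_bs
        links = []
        for _ in range(n_users):
            u = (random.random(), random.random())
            d = [math.hypot(u[0] - b[0], u[1] - b[1]) + 1e-3 for b in bs]
            i = min(range(n_bs), key=d.__getitem__)
            load[i] += 1
            interference = sum(dj**-4 for j, dj in enumerate(d) if j != i)
            sinr = d[i]**-4 / (interference + 1.0)   # path-loss exponent 4
            links.append((i, math.log2(1.0 + sinr)))
        # each station divides its channel among its attached users
        total += sum(rate / load[i] for i, rate in links)
    return total / trials

densities = [5, 10, 20, 40, 80]
best = max(densities, key=mean_throughput)
```

Scanning the density list exposes the tension the abstract describes: more stations shorten links (raising SINR) but also add interferers, so throughput does not grow without bound.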

  14. Optimal laser pulse design for transferring the coherent nuclear wave packet of H₂⁺

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; He, Guang-Qiang; He, Feng

    2014-07-01

    Within the Franck-Condon approximation, the single ionisation of H₂ leaves H₂⁺ in a coherent superposition of 19 nuclear vibrational states. We numerically design an optimal laser pulse train to transfer such a coherent nuclear wave packet to the ground vibrational state of H₂⁺. Frequency analysis of the designed optimal pulse reveals that the transfer principle is mainly an anti-Stokes transition, i.e. the H₂⁺ in 1sσg with excited nuclear vibrational states is first pumped to the 2pσg state by the pulse at an appropriate time, and then dumped back to 1sσg with lower excited or ground vibrational states. The simulation results show that the population of the ground state after the transfer is more than 91%. To the best of our knowledge, this is the highest transition probability achieved with a driving laser field lasting only dozens of femtoseconds.

  15. Plug-in hybrid electric vehicles in smart grid

    NASA Astrophysics Data System (ADS)

    Yao, Yin

    In this thesis, a stochastic model is developed in Matlab to investigate the impact of the charging load from plug-in hybrid electric vehicles (PHEVs). In this model, two main types of PHEVs are defined: public transportation vehicles and private vehicles. Different charging schedules, charging speeds and battery capacities are considered for each type of vehicle. The simulation results reveal that there will be two load peaks (at noon and in the evening) when the penetration level of PHEVs increases continuously to 30% in 2030. Therefore, an optimization tool is utilized to shift the load peaks. This optimization process is based on real-time pricing and wind power output data. With the help of the smart grid, the power allocated to each vehicle can be controlled. As a result, this optimization can fulfill the goal of shifting load peaks to valley areas where the real-time price is low or wind output is high.
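A minimal sketch of the load-shifting idea follows; the hourly prices, fleet and per-hour capacity are made up, and the thesis model is stochastic and Matlab-based rather than this simple greedy rule. Each vehicle's energy demand is pushed into its cheapest available hours, subject to a shared per-hour cap on total PHEV charging load.

```python
# Toy greedy charging scheduler: assign each vehicle's required energy
# to its cheapest available hours, without exceeding a per-hour cap
# on aggregate PHEV load (all numbers illustrative).
prices = [30, 28, 25, 24, 24, 26, 35, 50, 55, 52, 48, 45,
          47, 46, 44, 42, 45, 55, 60, 58, 50, 42, 36, 32]  # $/MWh, made up

def schedule_fleet(vehicles, prices, hourly_cap):
    """vehicles: list of (energy_needed_kWh, available_hours)."""
    used = [0.0] * len(prices)        # PHEV load already placed per hour
    plans = []
    for energy, hours in vehicles:
        plan = {}
        for h in sorted(hours, key=lambda h: prices[h]):
            if energy <= 0:
                break
            take = min(energy, hourly_cap - used[h])
            if take > 0:
                plan[h] = take
                used[h] += take
                energy -= take
        plans.append(plan)
    return plans, used

# Two evening commuters and one night-shift vehicle (hypothetical)
fleet = [(20, range(18, 24)), (20, range(18, 24)), (15, range(0, 7))]
plans, used = schedule_fleet(fleet, prices, hourly_cap=25)
```

Because the cap binds, the second evening vehicle spills from the cheapest hour into the next-cheapest one, which is exactly the valley-filling behavior the thesis aims for.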

  16. Fabrication of polymer microlenses on single mode optical fibers for light coupling

    NASA Astrophysics Data System (ADS)

    Zaboub, Monsef; Guessoum, Assia; Demagh, Nacer-Eddine; Guermat, Abdelhak

    2016-05-01

    In this paper, we present a technique for producing fiber-optic micro-collimators composed of polydimethylsiloxane (PDMS) microlenses of different radii of curvature. The waist and working distance values obtained enable the optimization of optical coupling between optical fibers, between fibers and optical sources, and between fibers and detectors. The principle is based on the injection of PDMS into a conical micro-cavity chemically etched at the end of an optical fiber. A spherical microlens is then formed that is self-centered with respect to the axis of the fiber. Typically, an optimal radius of curvature of 10.08 μm is obtained. This optimized micro-collimator is characterized by a working distance of 19.27 μm and a waist equal to 2.28 μm for an SMF 9/125 μm fiber. The simulation and experimental results reveal an optical coupling efficiency that can reach 99.75%.

  17. Gamma-oryzanol-loaded calcium pectinate microparticles reinforced with chitosan: optimization and release characteristics.

    PubMed

    Lee, Ji-Soo; Kim, Jong Soo; Lee, Hyeon Gyu

    2009-05-01

    Response surface methodology was used to optimize microparticle preparation conditions, including the ratio of pectin:gamma-oryzanol (OZ) (X(1)), agitation speed (X(2)), and the concentration of emulsifier (X(3)), for maximal entrapment efficiency (EE) of OZ-loaded Ca pectinate microparticles. The optimized values of X(1), X(2), and X(3) were found to be 2.72:5.28, 1143.5 rpm, and 2.61%, respectively. Experimental results obtained for the optimum formulation agreed favorably with the predicted results, indicating the usefulness of the predictive models for EE. In order to evaluate the effect of chitosan coating and blending on the release pattern of the entrapped OZ from microparticles, chitosan-coated and chitosan-blended Ca pectinate microparticles were prepared. Release studies revealed that the chitosan treatments, especially the chitosan coating, were effective in suppressing the release in both simulated gastric fluid (SGF) and simulated intestinal fluid (SIF).
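The response-surface machinery can be illustrated in a single factor (the study optimizes three). Below, invented agitation-speed/EE data are fitted with a quadratic by least squares, and the stationary point of the fitted surface is taken as the predicted optimum; none of the numbers come from the paper.

```python
# One-factor sketch of response surface methodology: fit y = b0 + b1*x + b2*x^2
# to entrapment-efficiency measurements and take its stationary point.

def solve3(A, y):
    """Gauss-Jordan elimination for a 3x3 linear system."""
    m = [row[:] + [v] for row, v in zip(A, y)]
    for c in range(3):
        piv = max(range(c, 3), key=lambda r: abs(m[r][c]))
        m[c], m[piv] = m[piv], m[c]
        for r in range(3):
            if r != c:
                f = m[r][c] / m[c][c]
                m[r] = [a - f * b for a, b in zip(m[r], m[c])]
    return [m[i][3] / m[i][i] for i in range(3)]

def fit_quadratic(xs, ys):
    # Normal equations for least-squares y = b0 + b1*x + b2*x^2
    S = lambda p: sum(x**p for x in xs)
    A = [[len(xs), S(1), S(2)], [S(1), S(2), S(3)], [S(2), S(3), S(4)]]
    rhs = [sum(ys), sum(x * y for x, y in zip(xs, ys)),
           sum(x * x * y for x, y in zip(xs, ys))]
    return solve3(A, rhs)

# Hypothetical agitation speeds (rpm) vs. measured EE (%)
xs = [800, 1000, 1100, 1200, 1400]
ys = [78.0, 86.5, 88.0, 87.0, 80.5]
b0, b1, b2 = fit_quadratic(xs, ys)
optimum = -b1 / (2 * b2)   # stationary point of the fitted surface
```

With three factors the fit gains cross-terms and the stationary point comes from a small linear system, but the logic is the same: the predicted optimum sits where the fitted surface's gradient vanishes.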

  18. Closed loop models for analyzing the effects of simulator characteristics. [digital simulation of human operators

    NASA Technical Reports Server (NTRS)

    Baron, S.; Muralidharan, R.; Kleinman, D. L.

    1978-01-01

    The optimal control model of the human operator is used to develop closed loop models for analyzing the effects of (digital) simulator characteristics on predicted performance and/or workload. Two approaches are considered: the first utilizes a continuous approximation to the discrete simulation in conjunction with the standard optimal control model; the second involves a more exact discrete description of the simulator in a closed loop multirate simulation in which the optimal control model simulates the pilot. Both models predict that simulator characteristics can have significant effects on performance and workload.

  19. An optimal control approach to the design of moving flight simulators

    NASA Technical Reports Server (NTRS)

    Sivan, R.; Ish-Shalom, J.; Huang, J.-K.

    1982-01-01

    An abstract flight simulator design problem is formulated in the form of an optimal control problem, which is solved for the linear-quadratic-Gaussian special case using a mathematical model of the vestibular organs. The optimization criterion used is the mean-square difference between the physiological outputs of the vestibular organs of the pilot in the aircraft and the pilot in the simulator. The dynamical equations are linearized, and the output signal is modeled as a random process with rational power spectral density. The method described yields the optimal structure of the simulator's motion generator, or 'washout filter'. A two-degree-of-freedom flight simulator design, including single output simulations, is presented.

  20. Fusion of Hard and Soft Information in Nonparametric Density Estimation

    DTIC Science & Technology

    2015-06-10

    and stochastic optimization models, in analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum...an essential step in simulation analysis and stochastic optimization is the generation of probability densities for input random variables; see for

  1. Parallel Large-Scale Molecular Dynamics Simulation Opens New Perspective to Clarify the Effect of a Porous Structure on the Sintering Process of Ni/YSZ Multiparticles.

    PubMed

    Xu, Jingxiang; Higuchi, Yuji; Ozawa, Nobuki; Sato, Kazuhisa; Hashida, Toshiyuki; Kubo, Momoji

    2017-09-20

    Ni sintering in the Ni/YSZ porous anode of a solid oxide fuel cell changes the porous structure, leading to degradation. Preventing sintering and degradation during operation is a great challenge. Usually, a sintering molecular dynamics (MD) simulation model consisting of two particles on a substrate is used; however, the model cannot reflect the porous structure effect on sintering. In our previous study, a multi-nanoparticle sintering modeling method with tens of thousands of atoms revealed the effect of the particle framework and porosity on sintering. However, the method cannot reveal the effect of the particle size on sintering and the effect of sintering on the change in the porous structure. In the present study, we report a strategy to reveal them in the porous structure by using our multi-nanoparticle modeling method and a parallel large-scale multimillion-atom MD simulator. We used this method to investigate the effect of YSZ particle size and tortuosity on sintering and degradation in the Ni/YSZ anodes. Our parallel large-scale MD simulation showed that the sintering degree decreased as the YSZ particle size decreased. The gas fuel diffusion path, which reflects the overpotential, was blocked by pore coalescence during sintering. The degradation of gas diffusion performance increased as the YSZ particle size increased. Furthermore, the gas diffusion performance was quantified by a tortuosity parameter and an optimal YSZ particle size, which is equal to that of Ni, was found for good diffusion after sintering. These findings cannot be obtained by previous MD sintering studies with tens of thousands of atoms. The present parallel large-scale multimillion-atom MD simulation makes it possible to clarify the effects of the particle size and tortuosity on sintering and degradation.

  2. Theoretical and experimental investigations on the optimal match between compressor and cold finger of the Stirling-type pulse tube cryocooler

    NASA Astrophysics Data System (ADS)

    Dang, Haizheng; Tan, Jun; Zhang, Lei

    2016-06-01

    The match between the pulse tube cold finger (PTCF) and the linear compressor of the Stirling-type pulse tube cryocooler plays a vital role in optimizing the compressor efficiency and in improving the PTCF cooling performance. In this paper, their interaction is analyzed in detail to reveal the match mechanism, and systematic investigations on the two-way matching have been conducted. The design method of the PTCF to achieve the optimal match for a given compressor, and the counterpart design method of the compressor to achieve the optimal match for a given PTCF, are put forward. Specific experiments are then carried out to verify the theoretical analyses and modeling. For a given linear compressor, a new in-line PTCF that seeks to achieve the optimal match is simulated, designed and tested; and for a given coaxial PTCF, a new dual-opposed moving-coil linear compressor is developed to match with it. The simulated and experimental results are compared, and fairly good agreement is found between them in both cases. The matched in-line cooler with the newly-designed PTCF has capacities of 4-11.84 W at 80 K with higher than 17% of Carnot efficiency and a mean motor efficiency of 81.5%, and the matched coaxial cooler with the newly-designed compressor can provide 2-5.5 W at 60 K with higher than 9.6% of Carnot efficiency and a mean motor efficiency of 83%, which verifies the validity of the theoretical investigations on the optimal match and the proposed design methods.

  3. Effect of temperature on the adsorption of sulfanilamide onto aluminum oxide and its molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Ji, Ying-xue; Wang, Feng-he; Duan, Lun-chao; Zhang, Fan; Gong, Xue-dong

    2013-11-01

    The effect of temperature on the adsorption of sulfanilamide (SA) onto aluminum oxide was investigated through batch adsorption experiments and then simulated using the molecular dynamics (MD) method. The results show that SA can be adsorbed effectively by aluminum oxide owing to the interactions between the SA molecule and the surface of the aluminum oxide crystal, and that temperature is a key factor that markedly influences the adsorption efficiency. The removal ratio of SA at 298 K is the highest among the selected temperatures (293 K, 298 K, 303 K). MD simulations revealed the interactions between SA molecules and the (0 1 2) surface of the aluminum oxide crystal at the molecular level. The SA molecule clings to the (0 1 2) face of the aluminum oxide crystal, and its structure is deformed as it binds to the surface. Both binding energies (Eb) and deformation energies (ΔEdeform) in the SA-aluminum oxide system follow the same order: SA-Al2O3 (298 K) > SA-Al2O3 (293 K) > SA-Al2O3 (303 K). The deformation energies are far less than the non-bonding energies. Analysis of radial distribution functions (RDFs) indicates that SA is adsorbed by the aluminum oxide crystal mainly through non-bond interactions. The simulation results agree well with the experimental results, which verifies the rationality and reliability of the MD simulation. Further MD simulations provide a theoretically optimal temperature (301 K) for the adsorption of SA onto aluminum oxide. Such molecular dynamics simulations will be useful for better understanding the adsorption mechanism of antibiotics onto metal oxides and helpful for optimizing experimental conditions to improve the adsorptive removal efficiency of antibiotics.

  4. Determining the Influence of Granule Size on Simulation Parameters and Residual Shear Stress Distribution in Tablets by Combining the Finite Element Method into the Design of Experiments.

    PubMed

    Hayashi, Yoshihiro; Kosugi, Atsushi; Miura, Takahiro; Takayama, Kozo; Onuki, Yoshinori

    2018-01-01

    The influence of granule size on simulation parameters and residual shear stress in tablets was determined by combining the finite element method (FEM) with a design of experiments (DoE). Lactose granules were prepared using a wet granulation method with a high-shear mixer and sorted into small and large granules using sieves. To simulate the tableting process using the FEM, parameters simulating each granule were optimized using a DoE and a response surface method (RSM). The compaction behavior of each granule simulated by FEM was in reasonable agreement with the experimental findings. Higher coefficients of friction between powder and die/punch (μ) and lower internal friction angles (αy) were obtained for small granules. RSM revealed that die wall force was affected by αy. On the other hand, the pressure transmissibility rate of the punches was affected not only by αy but also by μ. The FEM revealed that the residual shear stress was greater for small granules than for large granules. These results suggest that the inner structure of a tablet comprising small granules is less homogeneous than that comprising large granules. To evaluate the contribution of the simulation parameters to residual stress, these parameters were assigned to a fractional factorial design and an ANOVA was applied. The result indicated that μ was the critical factor influencing residual shear stress. This study demonstrates the importance of combining simulation and statistical analysis to gain a deeper understanding of the tableting process.

  5. Improvement of the Processes of Liquid-Phase Epitaxial Growth of Nanoheteroepitaxial Structures

    NASA Astrophysics Data System (ADS)

    Maronchuk, I. I.; Sanikovich, D. D.; Potapkov, P. V.; Vel‧chenko, A. A.

    2018-05-01

    We have revealed the shortcomings of equipment and technological approaches in growing nanoheteroepitaxial structures with quantum dots by liquid-phase epitaxy. We have developed and fabricated a new vertical barrel-type cassette for growing quantum dots and epitaxial layers of various thicknesses in one technological process. A physico-mathematical simulation has been carried out of the processes of liquid-phase epitaxial growth of quantum-dimensional structures with the use of the SolidWorks software (Flow Simulation program). Analysis has revealed the presence of negative factors influencing the growth process of the above structures. The mathematical model has been optimized, and the equipment has been modernized without additional experiments and measurements. The flow dynamics of the process gas in the reactor at various flow rates has been investigated. A method for tuning the thermal equipment has been developed. The calculated and experimental temperature distributions in the process of growing structures with high reproducibility are in good agreement, which confirms the validity of the modernization made.

  6. Program to Optimize Simulated Trajectories (POST). Volume 1: Formulation manual

    NASA Technical Reports Server (NTRS)

    Brauer, G. L.; Cornick, D. E.; Habeger, A. R.; Petersen, F. M.; Stevenson, R.

    1975-01-01

    A general purpose FORTRAN program for simulating and optimizing point mass trajectories (POST) of aerospace vehicles is described. The equations and the numerical techniques used in the program are documented. Topics discussed include: coordinate systems, planet model, trajectory simulation, auxiliary calculations, and targeting and optimization.

  7. Simulation and optimization of an experimental membrane wastewater treatment plant using computational intelligence methods.

    PubMed

    Ludwig, T; Kern, P; Bongards, M; Wolf, C

    2011-01-01

    The optimization of relaxation and filtration times of submerged microfiltration flat modules in membrane bioreactors used for municipal wastewater treatment is essential for efficient plant operation. However, the optimization and control of such plants and their filtration processes is a challenging problem due to the underlying highly nonlinear and complex processes. This paper presents the use of genetic algorithms for this optimization problem in conjunction with a fully calibrated simulation model, as computational intelligence methods are perfectly suited to the nonconvex multi-objective nature of the optimization problems posed by these complex systems. The simulation model is developed and calibrated using membrane modules from the wastewater simulation software GPS-X based on the Activated Sludge Model No.1 (ASM1). Simulation results have been validated at a technical reference plant. They clearly show that filtration process costs for cleaning and energy can be reduced significantly by intelligent process optimization.

  8. Adaptive grid based multi-objective Cauchy differential evolution for stochastic dynamic economic emission dispatch with wind power uncertainty

    PubMed Central

    Lei, Xiaohui; Wang, Chao; Yue, Dong; Xie, Xiangpeng

    2017-01-01

    Since wind power is integrated into the thermal power operation system, dynamic economic emission dispatch (DEED) has become a new challenge due to its uncertain characteristics. This paper proposes an adaptive grid based multi-objective Cauchy differential evolution (AGB-MOCDE) for solving stochastic DEED with wind power uncertainty. To properly deal with wind power uncertainty, some scenarios are generated to simulate possible situations by dividing the uncertainty domain into different intervals; the probability of each interval can be calculated using the cumulative distribution function, and a stochastic DEED model can be formulated under different scenarios. For enhancing the optimization efficiency, a Cauchy mutation operation is utilized to improve differential evolution by adjusting the population diversity during the population evolution process, and an adaptive grid is constructed for retaining the diversity distribution of the Pareto front. In view of the large number of generated scenarios, a reduction mechanism is carried out to decrease the number of scenarios based on their covariance relationships, which greatly decreases the computational complexity. Moreover, a constraint-handling technique is also utilized to deal with the system load balance while considering transmission loss among thermal units and wind farms; all the constraint limits can be satisfied within the permitted accuracy. After the proposed method is simulated on three test systems, the obtained results reveal that, in comparison with other alternatives, the proposed AGB-MOCDE can optimize the DEED problem while handling all constraint limits, and the optimal scheme of stochastic DEED can decrease the conservatism of interval optimization, which can provide a more valuable optimal scheme for real-world applications. PMID:28961262
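The Cauchy-mutation ingredient can be sketched on a single-objective toy problem. The actual AGB-MOCDE is multi-objective, grid-based and scenario-driven; the population size, bounds, crossover rate and sphere test function below are invented. The sketch is a standard DE/rand/1 loop whose mutation factor is drawn from a heavy-tailed Cauchy distribution, which occasionally produces large jumps that help maintain population diversity.

```python
import math, random

random.seed(3)

def cauchy(scale=0.5):
    # Standard Cauchy draw via the inverse CDF
    return scale * math.tan(math.pi * (random.random() - 0.5))

def de_cauchy(f, dim=5, pop_size=20, gens=300, bounds=(-5.0, 5.0)):
    """Minimal differential evolution with a Cauchy-distributed
    mutation factor (a sketch of the mutation idea only)."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample(
                [j for j in range(pop_size) if j != i], 3)
            F = cauchy()                  # heavy-tailed mutation factor
            trial = [min(hi, max(lo, pop[a][d] + F * (pop[b][d] - pop[c][d])))
                     if random.random() < 0.9 else pop[i][d]
                     for d in range(dim)]
            ft = f(trial)
            if ft < fit[i]:               # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    return min(fit)

sphere = lambda x: sum(v * v for v in x)
best = de_cauchy(sphere)
```

In the paper's setting the same mutation feeds a Pareto-ranking selection and an adaptive grid rather than this greedy scalar comparison.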

  9. Development of gradient descent adaptive algorithms to remove common mode artifact for improvement of cardiovascular signal quality.

    PubMed

    Ciaccio, Edward J; Micheli-Tzanakou, Evangelia

    2007-07-01

    Common-mode noise degrades cardiovascular signal quality and diminishes measurement accuracy. Filtering to remove noise components in the frequency domain often distorts the signal. Two adaptive noise canceling (ANC) algorithms were tested to adjust weighted reference signals for optimal subtraction from a primary signal. Update of the weight w was based upon the gradient term ∇ of the steepest descent equation [see text], where the error ε is the difference between the primary and weighted reference signals. ∇ was estimated from Δε² and Δw without using a variable Δw in the denominator, which can cause instability. The Parallel Comparison (PC) algorithm computed Δε² using fixed finite differences ±Δw in parallel at each discrete time k. The ALOPEX algorithm computed Δε²·Δw from time k to k+1 to estimate ∇, with a random number added to account for Δε²·Δw → 0 near the optimal weighting. Using simulated data, both algorithms stably converged to the optimal weighting within 50-2000 discrete sample points k, even with an SNR of 1:8 and weights initialized far from the optimum. Using a sharply pulsatile cardiac electrogram signal with added noise such that the SNR was 1:5, both algorithms exhibited stable convergence within 100 ms (100 sample points). Fourier spectral analysis revealed minimal distortion when comparing the signal without added noise to the ANC-restored signal. ANC algorithms based upon difference calculations can rapidly and stably converge to the optimal weighting in simulated and real cardiovascular data. Signal quality is restored with minimal distortion, increasing the accuracy of biophysical measurement.
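The Parallel Comparison idea can be sketched for a single scalar weight; the signal model, step size and perturbation below are invented stand-ins, not the paper's parameters. The squared error is evaluated at w−Δw and w+Δw in parallel, their difference estimates the gradient, and w moves downhill, which drives the weighted reference toward the noise component of the primary signal.

```python
import math, random

random.seed(4)

# Primary = desired signal + w_true * reference (illustrative model)
n = 2000
reference = [math.sin(0.05 * k) for k in range(n)]    # noise source
signal = [random.gauss(0, 0.2) for _ in range(n)]     # cardiac-signal stand-in
w_true = 0.8                                          # true coupling
primary = [s + w_true * r for s, r in zip(signal, reference)]

w, dw, mu = 0.0, 0.05, 0.05
for k in range(n):
    # Evaluate the squared error at w - dw and w + dw "in parallel"
    e_minus = (primary[k] - (w - dw) * reference[k]) ** 2
    e_plus = (primary[k] - (w + dw) * reference[k]) ** 2
    grad = (e_plus - e_minus) / (2 * dw)   # finite-difference d(e^2)/dw
    w -= mu * grad                         # steepest-descent update
```

For this quadratic error surface the central difference is exact (it equals −2εr), so the sketch behaves like an LMS filter; the fixed ±Δw is what avoids the divide-by-Δw instability the abstract mentions.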

  10. Comparison of Flight Simulators Based on Human Motion Perception Metrics

    NASA Technical Reports Server (NTRS)

    Valente Pais, Ana R.; Correia Gracio, Bruno J.; Kelly, Lon C.; Houck, Jacob A.

    2015-01-01

    In flight simulation, motion filters are used to transform aircraft motion into simulator motion. When looking for the best match between visual and inertial amplitude in a simulator, researchers have found that there is a range of inertial amplitudes, rather than a single inertial value, that is perceived by subjects as optimal. This zone, hereafter referred to as the optimal zone, seems to correlate to the perceptual coherence zones measured in flight simulators. However, no studies were found in which these two zones were compared. This study investigates the relation between the optimal and the coherence zone measurements within and between different simulators. Results show that for the sway axis, the optimal zone lies within the lower part of the coherence zone. In addition, it was found that, whereas the width of the coherence zone depends on the visual amplitude and frequency, the width of the optimal zone remains constant.

  11. Application of simulation models for the optimization of business processes

    NASA Astrophysics Data System (ADS)

    Jašek, Roman; Sedláček, Michal; Chramcov, Bronislav; Dvořák, Jiří

    2016-06-01

    The paper deals with the application of modeling and simulation tools to the optimization of business processes, especially to optimizing signal flow in a security company. Simul8 was selected as the modeling tool; it performs process modeling based on discrete-event simulation and enables the creation of visual models of production and distribution processes.

  12. Optimal design and uncertainty quantification in blood flow simulations for congenital heart disease

    NASA Astrophysics Data System (ADS)

    Marsden, Alison

    2009-11-01

    Recent work has demonstrated substantial progress in capabilities for patient-specific cardiovascular flow simulations. Recent advances include increasingly complex geometries, physiological flow conditions, and fluid-structure interaction. However, inputs to these simulations, including medical image data, catheter-derived pressures and material properties, can have significant uncertainties associated with them. For simulations to predict clinically useful and reliable output information, it is necessary to quantify the effects of input uncertainties on outputs of interest. In addition, blood flow simulation tools can now be efficiently coupled to shape optimization algorithms for surgery design applications, and these tools should incorporate uncertainty information. We present a unified framework to systematically and efficiently account for uncertainties in simulations using adaptive stochastic collocation. In addition, we present a framework for derivative-free optimization of cardiovascular geometries, and we combine these tools to perform optimization under uncertainty. These methods are demonstrated using simulations and surgery optimization to improve hemodynamics in pediatric cardiology applications.
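Plain Monte Carlo sampling is far less efficient than the adaptive stochastic collocation the talk describes, but it shows the same uncertainty-propagation idea in a few lines. The Poiseuille-type pressure-drop model and the input distributions below are invented stand-ins for a full blood-flow simulation and its imaging- and catheter-derived inputs.

```python
import math, random, statistics

random.seed(6)

def pressure_drop(viscosity, radius, length=0.05, flow=1.0e-6):
    # Illustrative Poiseuille-type relation (stand-in for a CFD run):
    # dP = 8 * mu * L * Q / (pi * r^4), in Pa for SI inputs
    return 8 * viscosity * length * flow / (math.pi * radius**4)

# Uncertain inputs: blood viscosity (Pa*s) and vessel radius (m),
# with invented means and standard deviations
samples = [pressure_drop(random.gauss(3.5e-3, 3e-4),
                         random.gauss(2.0e-3, 1e-4))
           for _ in range(20000)]
mean = statistics.fmean(samples)
std = statistics.pstdev(samples)
```

Note how the r⁻⁴ dependence amplifies a 5% radius uncertainty into roughly a 20% output uncertainty; collocation methods recover such statistics from far fewer model evaluations by sampling the inputs at carefully chosen quadrature points.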

  13. Optimizing Chromatographic Separation: An Experiment Using an HPLC Simulator

    ERIC Educational Resources Information Center

    Shalliker, R. A.; Kayillo, S.; Dennis, G. R.

    2008-01-01

    Optimization of a chromatographic separation within the time constraints of a laboratory session is practically impossible. However, by employing a HPLC simulator, experiments can be designed that allow students to develop an appreciation of the complexities involved in optimization procedures. In the present exercise, a HPLC simulator from "JCE…

  14. Cost effective simulation-based multiobjective optimization in the performance of an internal combustion engine

    NASA Astrophysics Data System (ADS)

    Aittokoski, Timo; Miettinen, Kaisa

    2008-07-01

    Solving real-life engineering problems can be difficult because they often have multiple conflicting objectives, the objective functions involved are highly nonlinear and they contain multiple local minima. Furthermore, function values are often produced via a time-consuming simulation process. These facts suggest the need for an automated optimization tool that is efficient (in terms of number of objective function evaluations) and capable of solving global and multiobjective optimization problems. In this article, the requirements on a general simulation-based optimization system are discussed and such a system is applied to optimize the performance of a two-stroke combustion engine. In the example of a simulation-based optimization problem, the dimensions and shape of the exhaust pipe of a two-stroke engine are altered, and values of three conflicting objective functions are optimized. These values are derived from power output characteristics of the engine. The optimization approach involves interactive multiobjective optimization and provides a convenient tool to balance between conflicting objectives and to find good solutions.

  15. Profit maximization, industry structure, and competition: A critique of neoclassical theory

    NASA Astrophysics Data System (ADS)

    Keen, Steve; Standish, Russell

    2006-10-01

    Neoclassical economics has two theories of competition between profit-maximizing firms, Marshallian and Cournot-Nash, that start from different premises about the degree of strategic interaction between firms, yet reach the same result: that market price falls as the number of firms in an industry increases. The Marshallian argument is strictly false. We integrate the different premises, and establish that the optimal level of strategic interaction between competing firms is zero. Simulations support our analysis and reveal intriguing emergent behaviors.

  16. Complex Systems Simulation and Optimization | Computational Science | NREL

    Science.gov Websites

    Stochastic Optimization and Control: formulation and implementation of advanced optimization and control methods that take uncertainty into account. Contact: Wesley Jones, Group Manager, Complex Systems Simulation and Optimization

  17. Multiobjective evolutionary optimization of water distribution systems: Exploiting diversity with infeasible solutions.

    PubMed

    Tanyimboh, Tiku T; Seyoum, Alemtsehay G

    2016-12-01

    This article investigates the computational efficiency of constraint handling in multi-objective evolutionary optimization algorithms for water distribution systems. The methodology investigated here encourages the co-existence and simultaneous development, including crossbreeding, of subpopulations of cost-effective feasible and infeasible solutions based on Pareto dominance. This yields a boundary search approach that also promotes diversity in the gene pool throughout the progress of the optimization by exploiting the full spectrum of non-dominated infeasible solutions. The relative effectiveness of small and moderate population sizes with respect to the number of decision variables is also investigated. The results reveal the optimization algorithm to be efficient, stable and robust. It found optimal and near-optimal solutions reliably and efficiently. The optimization problem, based on a real-world system, involved multiple variable-head supply nodes, 29 fire-fighting flows, extended period simulation and multiple demand categories including water loss. The least-cost solutions found satisfied the flow and pressure requirements consistently. The best solutions achieved indicative savings of 48.1% and 48.2%, based on the cost of the pipes in the existing network, for populations of 200 and 1000, respectively. The population of 1000 achieved slightly better results overall. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
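The boundary-search idea rests on Pareto dominance over the pair (cost, constraint violation), so non-dominated infeasible solutions survive alongside feasible ones instead of being discarded. A minimal sketch with invented candidate designs:

```python
# Pareto-based constraint handling (illustrative data, not the paper's
# network): each candidate design is (cost in $M, constraint violation,
# e.g. total pressure deficit in m). Both objectives are minimized.

def dominates(a, b):
    """a dominates b if it is no worse in both objectives and the
    two candidates differ."""
    return a != b and a[0] <= b[0] and a[1] <= b[1]

def pareto_front(pop):
    # Keep every candidate not dominated by any other
    return [p for p in pop if not any(dominates(q, p) for q in pop)]

pop = [(6.1, 0.0), (5.2, 0.0), (4.8, 0.4), (4.1, 1.5),
       (5.9, 0.2), (4.5, 2.5), (3.9, 3.0)]
front = pareto_front(pop)
```

The surviving front contains the cheapest feasible design plus progressively cheaper infeasible ones; crossbreeding across that boundary is what keeps the search pressed against the feasibility limit, where least-cost solutions live.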

  18. Optimal Resonant Band Demodulation Based on an Improved Correlated Kurtosis and Its Application in Bearing Fault Diagnosis

    PubMed Central

    Chen, Xianglong; Zhang, Bingzhi; Feng, Fuzhou; Jiang, Pengcheng

    2017-01-01

    Kurtosis-based indexes are usually used to identify the optimal resonant frequency band. However, kurtosis can only describe the strength of transient impulses; it cannot differentiate impulse noises from the repetitive transient impulses cyclically generated in bearing vibration signals. As a result, it may lead to inaccurate results in identifying resonant frequency bands, in demodulating fault features and hence in fault diagnosis. In view of those drawbacks, this manuscript redefines the correlated kurtosis based on kurtosis and the auto-correlation function, and puts forward an improved correlated kurtosis based on the squared envelope spectrum of bearing vibration signals. Meanwhile, this manuscript proposes an optimal resonant band demodulation method, which can adaptively determine the optimal resonant frequency band and accurately demodulate transient fault features of rolling bearings, by combining the complex Morlet wavelet filter and the Particle Swarm Optimization algorithm. Analyses of both simulation and experimental data reveal that the improved correlated kurtosis can effectively remedy the drawbacks of kurtosis-based indexes and that the proposed optimal resonant band demodulation is more accurate in identifying the optimal central frequencies and bandwidths of resonant bands. Improved fault diagnosis results in experiments verify the validity and advantage of the proposed method over traditional kurtosis-based indexes. PMID:28208820
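    The distinction the abstract draws, kurtosis rewarding any strong impulse versus correlated kurtosis rewarding impulses that repeat at the fault period, can be illustrated numerically. The definitions below are common textbook forms (plain kurtosis and a first-order correlated kurtosis), not the paper's improved index, and the signals are synthetic:

```python
import numpy as np

def kurtosis(x):
    """Plain kurtosis: fourth central moment over squared second moment."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2)**2

def correlated_kurtosis(x, T):
    """First-order correlated kurtosis for a candidate impulse period T
    (one common definition; the paper's index refines this idea)."""
    num = np.sum(x[T:] * x[:-T]) ** 2
    return num / np.sum(x**2) ** 2

# repetitive impulses every 50 samples vs. a single large outlier
n = 1000
periodic = np.zeros(n); periodic[::50] = 1.0
one_spike = np.zeros(n); one_spike[100] = 4.0
# plain kurtosis rewards the lone spike; correlated kurtosis rewards periodicity
```

Run on these two signals, kurtosis ranks the single outlier highest while correlated kurtosis at T = 50 ranks the periodic impulse train highest, which is exactly the failure mode of kurtosis-based band selection described above.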

  19. An Evolutionary Optimization of the Refueling Simulation for a CANDU Reactor

    NASA Astrophysics Data System (ADS)

    Do, Q. B.; Choi, H.; Roh, G. H.

    2006-10-01

    This paper presents a multi-cycle, multi-objective optimization method for the refueling simulation of a 713 MWe Canada deuterium uranium (CANDU-6) reactor based on a genetic algorithm, an elitism strategy and a heuristic rule. The proposed algorithm searches for the optimal refueling patterns for a single cycle that maximize the average discharge burnup, minimize the maximum channel power and minimize the change in the zone controller unit water fills while satisfying the most important safety-related neutronic parameters of the reactor core. The heuristic rule generates an initial population of individuals very close to a feasible solution, which reduces the computing time of the optimization process. The multi-cycle optimization is carried out based on a single-cycle refueling simulation. The proposed approach was verified by a refueling simulation of a natural uranium CANDU-6 reactor for an operation period of 6 months at an equilibrium state, and compared with the experience-based automatic refueling simulation and the generalized perturbation theory. The comparison has shown that the simulation results are consistent with each other and that the proposed approach is a reasonable optimization method for the refueling simulation, controlling all the safety-related parameters of the reactor core during the simulation.

  20. Annual Report: Carbon Capture Simulation Initiative (CCSI) (30 September 2013)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, David C.; Syamlal, Madhava; Cottrell, Roger

    2013-09-30

    The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and academic institutions that is developing and deploying state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technologies from discovery to development, demonstration, and ultimately the widespread deployment to hundreds of power plants. The CCSI Toolset will provide end users in industry with a comprehensive, integrated suite of scientifically validated models, with uncertainty quantification (UQ), optimization, risk analysis and decision making capabilities. The CCSI Toolset incorporates commercial and open-source software currently in use by industry and is also developing new software tools as necessary to fill technology gaps identified during execution of the project. Ultimately, the CCSI Toolset will (1) enable promising concepts to be more quickly identified through rapid computational screening of devices and processes; (2) reduce the time to design and troubleshoot new devices and processes; (3) quantify the technical risk in taking technology from laboratory-scale to commercial-scale; and (4) stabilize deployment costs more quickly by replacing some of the physical operational tests with virtual power plant simulations. CCSI is led by the National Energy Technology Laboratory (NETL) and leverages the Department of Energy (DOE) national laboratories’ core strengths in modeling and simulation, bringing together the best capabilities at NETL, Los Alamos National Laboratory (LANL), Lawrence Berkeley National Laboratory (LBNL), Lawrence Livermore National Laboratory (LLNL), and Pacific Northwest National Laboratory (PNNL). The CCSI’s industrial partners provide representation from the power generation industry, equipment manufacturers, technology providers and engineering and construction firms.
The CCSI’s academic participants (Carnegie Mellon University, Princeton University, West Virginia University, Boston University and the University of Texas at Austin) bring unparalleled expertise in multiphase flow reactors, combustion, process synthesis and optimization, planning and scheduling, and process control techniques for energy processes. During Fiscal Year (FY) 13, CCSI announced the initial release of its first set of computational tools and models during the October 2012 meeting of its Industry Advisory Board. This initial release led to five companies licensing the CCSI Toolset under a Test and Evaluation Agreement this year. By the end of FY13, the CCSI Technical Team had completed development of an updated suite of computational tools and models. The list below summarizes the new and enhanced toolset components that were released following comprehensive testing during October 2013. 1. FOQUS. Framework for Optimization and Quantification of Uncertainty and Sensitivity. Package includes: FOQUS Graphic User Interface (GUI), simulation-based optimization engine, Turbine Client, and heat integration capabilities. There is also an updated simulation interface and new configuration GUI for connecting Aspen Plus or Aspen Custom Modeler (ACM) simulations to FOQUS and the Turbine Science Gateway. 2. A new MFIX-based Computational Fluid Dynamics (CFD) model to predict particle attrition. 3. A new dynamic reduced model (RM) builder, which generates computationally efficient RMs of the behavior of a dynamic system. 4. A completely re-written version of the algebraic surrogate model builder for optimization (ALAMO). The new version is several orders of magnitude faster than the initial release and eliminates the MATLAB dependency. 5. A new suite of high resolution filtered models for the hydrodynamics associated with horizontal cylindrical objects in a flow path. 6. 
The new Turbine Science Gateway (Cluster), which supports FOQUS for running multiple simulations for optimization or UQ using a local computer or cluster. 7. A new statistical tool (BSS-ANOVA-UQ) for calibration and validation of CFD models. 8. A new basic data submodel in Aspen Plus format for a representative high-viscosity capture solvent, the 2-MPZ system. 9. An updated RM tool for CFD (REVEAL) that can create a RM from MFIX; a new lightweight, stand-alone version will be available in late 2013. 10. An updated RM integration tool to convert the RM from REVEAL into a CAPE-OPEN or ACM model for use in a process simulator. 11. An updated suite of unified steady-state and dynamic process models for solid sorbent carbon capture, including bubbling fluidized bed and moving bed reactors. 12. An updated and unified set of compressor models, including a steady-state design point model and a dynamic model with surge detection. 13. A new framework for the synthesis and optimization of coal oxycombustion power plants using advanced optimization algorithms; this release focuses on modeling and optimization of a cryogenic air separation unit (ASU). 14. A new technical risk model in spreadsheet format. 15. An updated version of the sorbent kinetic/equilibrium model for parameter estimation for the 1st generation sorbent model. 16. An updated process synthesis superstructure model to determine optimal process configurations utilizing surrogate models from ALAMO for adsorption and regeneration in a solid sorbent process. 17. Validation models for the NETL Carbon Capture Unit utilizing sorbent AX; additional validation models will be available for sorbent 32D in 2014. 18. An updated hollow fiber membrane model and system example for carbon capture. 19. An updated reference power plant model in Thermoflex that includes additional steam extraction and reinjection points to enable the heat integration module. 20. An updated financial risk model in spreadsheet format.

  1. A robust hybrid fuzzy-simulated annealing-intelligent water drops approach for tuning a distribution static compensator nonlinear controller in a distribution system

    NASA Astrophysics Data System (ADS)

    Bagheri Tolabi, Hajar; Hosseini, Rahil; Shakarami, Mahmoud Reza

    2016-06-01

    This article presents a novel hybrid optimization approach for a nonlinear controller of a distribution static compensator (DSTATCOM). The DSTATCOM is connected to a distribution system with distributed generation units. The nonlinear control is based on partial feedback linearization. Two proportional-integral-derivative (PID) controllers regulate the voltage and track the output in this control system. In the conventional scheme, the trial-and-error method is used to determine the PID controller coefficients. This article uses a combination of a fuzzy system, simulated annealing (SA) and the intelligent water drops (IWD) algorithm to optimize the parameters of the controllers. The obtained results reveal that the response of the optimized controlled system is effectively improved by finding a high-quality solution. The results confirm that using the tuning method based on the fuzzy-SA-IWD can significantly decrease the settling and rising times, the maximum overshoot and the steady-state error of the voltage step response of the DSTATCOM. The proposed hybrid tuning method for the partial feedback linearizing (PFL) controller achieved better regulation of the direct current voltage for the capacitor within the DSTATCOM. Furthermore, in the event of a fault, the proposed controller tuned by the fuzzy-SA-IWD method showed better performance than either the conventional controller or the untuned PFL controller, with regard to both fault duration and clearing times.
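    Of the three stages in the hybrid tuner above, the simulated annealing core is the easiest to sketch in isolation. The fuzzy and IWD stages are omitted here, and the cost function below is a stand-in (distance of hypothetical PID gains from a known optimum), not the paper's plant model:

```python
import math
import random

def simulated_annealing(cost, x0, step=0.5, t0=1.0, cooling=0.95, iters=500, seed=1):
    """Generic SA minimizer: Gaussian perturbations, Metropolis acceptance,
    geometric cooling. Only the annealing core of the paper's hybrid."""
    rng = random.Random(seed)
    x, fx = list(x0), cost(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        fc = cost(cand)
        # accept improvements always; accept worse moves with Boltzmann probability
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest

# stand-in cost: distance of (Kp, Ki, Kd) from a hypothetical optimum (2, 1, 0.5)
cost = lambda g: (g[0] - 2) ** 2 + (g[1] - 1) ** 2 + (g[2] - 0.5) ** 2
gains, f = simulated_annealing(cost, [0.0, 0.0, 0.0])
```

In the paper's scheme, the cost would instead be computed by simulating the DSTATCOM step response and scoring settling time, overshoot and steady-state error.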

  2. Optical biosensor optimized for continuous in-line glucose monitoring in animal cell culture.

    PubMed

    Tric, Mircea; Lederle, Mario; Neuner, Lisa; Dolgowjasow, Igor; Wiedemann, Philipp; Wölfl, Stefan; Werner, Tobias

    2017-09-01

    Biosensors for continuous glucose monitoring in bioreactors could provide a valuable tool for optimizing culture conditions in biotechnological applications. We have developed an optical biosensor for long-term continuous glucose monitoring and demonstrated tight glucose level control during cell culture in disposable bioreactors. The in-line sensor is based on a commercially available oxygen sensor that is coated with cross-linked glucose oxidase (GOD). The dynamic range of the sensor was tuned by a hydrophilic perforated diffusion membrane with an optimized permeability for glucose and oxygen. The biosensor was thoroughly characterized by experimental data and numerical simulations, which enabled insights into the internal concentration profile of the deactivating by-product hydrogen peroxide. The simulations were carried out with a one-dimensional biosensor model and revealed that, in addition to the internal hydrogen peroxide concentration, the turnover rate of the enzyme GOD plays a crucial role in biosensor stability. In the light of this finding, the glucose sensor was optimized to reach long functional stability (>52 days) under continuous glucose monitoring conditions, with a dynamic range of 0-20 mM and a response time of t90 ≤ 10 min. In addition, we demonstrated that the sensor was sterilizable with beta and UV irradiation and subject to only minor cross-sensitivity to oxygen when an oxygen reference sensor was applied. Graphical abstract: Measuring setup of a glucose biosensor in a shake flask for continuous glucose monitoring in mammalian cell culture.

  3. Enhanced nonlinearity interval mapping scheme for high-performance simulation-optimization of watershed-scale BMP placement

    NASA Astrophysics Data System (ADS)

    Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn

    2015-03-01

    Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practice (BMP) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CP), each with multiple Total Maximum Daily Load (TMDL) targets, were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all the CP were met with the lowest possible BMP implementation cost. A Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near-optimal solutions. The best solution obtained among all the GA executions had a minimized cost of 67.7 million, marginally higher but approximately equal to that of the NIMS solution. The results highlight the utility of NIMS for decision making in large-scale watershed simulation-optimization formulations.

  4. Transferability of optimally-selected climate models in the quantification of climate change impacts on hydrology

    NASA Astrophysics Data System (ADS)

    Chen, Jie; Brissette, François P.; Lucas-Picher, Philippe

    2016-11-01

    Given the ever-increasing number of climate change simulations being carried out, it has become impractical to use all of them to cover the uncertainty of climate change impacts. Various methods have been proposed to optimally select subsets of a large ensemble of climate simulations for impact studies. However, the behaviour of optimally-selected subsets of climate simulations for climate change impacts is unknown, since the transfer process from climate projections to the impact study world is usually highly non-linear. Consequently, this study investigates the transferability of optimally-selected subsets of climate simulations in the case of hydrological impacts. Two different methods were used for the optimal selection of subsets of climate scenarios, and both were found to be capable of adequately representing the spread of selected climate model variables contained in the original large ensemble. However, in both cases, the optimal subsets had limited transferability to hydrological impacts. To capture a similar variability in the impact model world, many more simulations have to be used than are needed to simply cover the variability from the climate model variables' perspective. Overall, both optimal subset selection methods were better than random selection when small subsets were selected from a large ensemble for impact studies. However, as the number of selected simulations increased, random selection often performed better than the two optimal methods. To ensure adequate uncertainty coverage, the results of this study imply that selecting as many climate change simulations as possible is the best avenue. Where this was not possible, the two optimal methods were found to perform adequately.
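    One simple way to select a subset that covers the spread of an ensemble, in the spirit of the selection methods compared above, is greedy farthest-point (maximin) selection. The method and the toy climate-variable coordinates below are illustrative assumptions, not the paper's procedures:

```python
def maximin_subset(points, k, dist):
    """Greedy farthest-point selection: repeatedly add the candidate whose
    minimum distance to the already-chosen set is largest."""
    chosen = [0]                           # seed with the first simulation
    while len(chosen) < k:
        best = max((i for i in range(len(points)) if i not in chosen),
                   key=lambda i: min(dist(points[i], points[j]) for j in chosen))
        chosen.append(best)
    return chosen

# hypothetical simulations in a 2-D climate-variable space
# (e.g. normalized temperature change vs. precipitation change)
pts = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (0.0, 1.0), (1.0, 0.0)]
d = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
idx = maximin_subset(pts, 3, d)   # skips the near-duplicate at (0.1, 0.0)
```

The study's caveat applies directly: a subset chosen to spread over climate variables need not spread over hydrological responses, because the climate-to-impact mapping is non-linear.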

  5. Maintaining environmental quality while expanding biomass production: Sub-regional U.S. policy simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Egbendewe-Mondzozo, Aklesso; Swinton, S.; Izaurralde, Roberto C.

    2013-03-01

    This paper evaluates environmental policy effects on ligno-cellulosic biomass production and environmental outcomes using an integrated bioeconomic optimization model. The environmental policy integrated climate (EPIC) model is used to simulate crop yields and environmental indicators in current and potential future bioenergy cropping systems based on weather, topographic and soil data. The crop yield and environmental outcome parameters from EPIC are combined with biomass transport costs and economic parameters in a representative-farmer profit-maximizing mathematical optimization model. The model is used to predict the impact of alternative policies on biomass production and environmental outcomes. We find that without environmental policy, rising biomass prices initially trigger production of annual crop residues, resulting in increased greenhouse gas emissions, soil erosion, and nutrient losses to surface and ground water. At higher biomass prices, perennial bioenergy crops replace annual crop residues as biomass sources, resulting in lower environmental impacts. Simulations of three environmental policies, namely a carbon price, a no-till area subsidy, and a fertilizer tax, reveal that only the carbon price policy systematically mitigates environmental impacts. The fertilizer tax is ineffectual and too costly to farmers. The no-till subsidy is effective only at low biomass prices and is too costly to government.

  6. Discovery and study of novel protein tyrosine phosphatase 1B inhibitors

    NASA Astrophysics Data System (ADS)

    Zhang, Qian; Chen, Xi; Feng, Changgen

    2017-10-01

    Protein tyrosine phosphatase 1B (PTP1B) is considered to be a target for therapy of type II diabetes and obesity, so a computer-aided drug design protocol involving structure-based virtual screening with docking simulations is of great value for rapidly searching for small-molecule PTP1B inhibitors. Based on the optimized complex structure of PTP1B bound with the specific inhibitor IX1, structure-based virtual screening against a library of natural products containing 35308 molecules, constructed from the Traditional Chinese Medicine Database@Taiwan (TCM Database@Taiwan), was conducted to identify PTP1B inhibitors using the LibDock and CDOCKER modules from the Discovery Studio 3.1 software package. The results were further filtered by predictive ADME and toxicity simulations. As a result, 2 good drug-like molecules, namely the para-benzoquinone compound 1 and the Clavepictine analogue 2, were ultimately identified, with the dock score of the original inhibitor (IX1) against the receptor as a threshold. Binding model analyses revealed that these two candidate compounds have good interactions with PTP1B. The PTP1B inhibitory activity of compound 2 has not been reported before. The optimized compound 2 has higher scores and deserves further study.

  7. Combining Simulation and Optimization Models for Hardwood Lumber Production

    Treesearch

    G.A. Mendoza; R.J. Meimban; W.G. Luppold; Philip A. Araman

    1991-01-01

    Published literature contains a number of optimization and simulation models dealing with the primary processing of hardwood and softwood logs. Simulation models have been developed primarily as descriptive models for characterizing the general operations and performance of a sawmill. Optimization models, on the other hand, were developed mainly as analytical tools for...

  8. A Simulation of Alternatives for Wholesale Inventory Replenishment

    DTIC Science & Technology

    2016-03-01

    The last method is a mixed-integer linear optimization model. Comparative Inventory Simulation, a discrete event simulation model, is designed to find the fill rates achieved for each National Item... Keywords: simulation; event graphs; reorder point; fill rate; backorder; discrete event simulation; wholesale inventory optimization model.
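    The reorder-point and fill-rate concepts named in the keywords can be sketched as a small discrete-event-style simulation. The (s, S) policy, uniform demand, and all parameter values below are illustrative assumptions, not details of the report's models:

```python
import random

def fill_rate(s, S, demand_mean=5, lead_time=3, periods=2000, seed=7):
    """Periodic-review (s, S) policy: when inventory position (on-hand plus
    on-order) drops to s or below, order up to S. Fill rate = demand served
    immediately from stock / total demand."""
    rng = random.Random(seed)
    on_hand, pipeline = S, []            # pipeline holds (arrival_period, qty)
    served = total = 0
    for t in range(periods):
        on_hand += sum(q for (a, q) in pipeline if a == t)   # receive orders
        pipeline = [(a, q) for (a, q) in pipeline if a > t]
        d = rng.randint(0, 2 * demand_mean)                  # uniform demand
        total += d
        shipped = min(on_hand, d)
        served += shipped
        on_hand -= shipped
        position = on_hand + sum(q for (_, q) in pipeline)
        if position <= s:
            pipeline.append((t + lead_time, S - position))
    return served / total

low, high = fill_rate(s=5, S=20), fill_rate(s=20, S=60)
```

A reorder point below expected lead-time demand (the `low` case) produces frequent backorders; raising it (the `high` case) buys fill rate at the cost of holding more stock, which is the trade-off such replenishment models optimize.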

  9. Fragment-based virtual screening approach and molecular dynamics simulation studies for identification of BACE1 inhibitor leads.

    PubMed

    Manoharan, Prabu; Ghoshal, Nanda

    2018-05-01

    The traditional structure-based virtual screening method to identify drug-like small molecules for BACE1 has so far been unsuccessful. The location of BACE1, poor blood-brain barrier permeability and P-glycoprotein (Pgp) susceptibility of the inhibitors make it even more difficult. The fragment-based drug design method is suitable for efficient optimization of initial hit molecules for a target like BACE1. We have developed a fragment-based virtual screening approach to identify and optimize fragment molecules as a starting point. This method combines the shape, electrostatic, and pharmacophoric features of known fragment molecules, bound to protein conjugate crystal structure, and aims to identify both chemically and energetically feasible small fragment ligands that bind to the BACE1 active site. The two top-ranked fragment hits were subjected to a 53 ns MD simulation. Principal component analysis and free energy landscape analysis reveal that the new ligands show the characteristic features of established BACE1 inhibitors. The method employed in this study may serve the development of potential lead molecules for BACE1-directed Alzheimer's disease therapeutics.

  10. Modeling, simulation and optimization of a no-chamber solid oxide fuel cell operated with a flat-flame burner

    NASA Astrophysics Data System (ADS)

    Vogler, Marcel; Horiuchi, Michio; Bessler, Wolfgang G.

    A detailed computational model of a direct-flame solid oxide fuel cell (DFFC) is presented. The DFFC is based on a fuel-rich methane-air flame stabilized on a flat-flame burner and coupled to a solid oxide fuel cell (SOFC). The model consists of an elementary kinetic description of the premixed methane-air flame, a stagnation-point flow description of the coupled heat and mass transport within the gas phase, an elementary kinetic description of the electrochemistry, as well as heat, mass and charge transport within the SOFC. Simulated current-voltage characteristics show excellent agreement with experimental data published earlier (Kronemayer et al., 2007 [10]). The model-based analysis of loss processes reveals that ohmic resistance in the current collection wires dominates polarization losses, while electronic loss currents in the mixed conducting electrolyte have little influence on the polarized cell. The model was used to propose an optimized cell design; based on this analysis, power densities above 200 mW cm⁻² can be expected.

  11. Analyzing climate change impacts on water resources under uncertainty using an integrated simulation-optimization approach

    NASA Astrophysics Data System (ADS)

    Zhuang, X. W.; Li, Y. P.; Nie, S.; Fan, Y. R.; Huang, G. H.

    2018-01-01

    An integrated simulation-optimization (ISO) approach is developed for assessing climate change impacts on water resources. In the ISO, uncertainties presented as both interval numbers and probability distributions can be reflected. Moreover, the ISO permits in-depth analyses of various policy scenarios that are associated with different levels of economic consequences when the promised water-allocation targets are violated. A snowmelt-precipitation-driven watershed (the Kaidu watershed) in northwest China is selected as the study case for demonstrating the applicability of the proposed method. Results of the meteorological projections disclose that increasing trends in temperature (both minimum and maximum values) and precipitation exist. Results also reveal that (i) the system uncertainties would significantly affect the water resources allocation pattern (including targets and shortages); (ii) water shortage would increase from 2016 to 2070; and (iii) the more the inflow amount decreases, the higher the estimated water shortage rates become. The ISO method is useful for evaluating climate change impacts within a watershed system with complicated uncertainties and helps identify appropriate water resources management strategies hedging against drought.

  12. Numerical simulation of the helium gas spin-up channel performance of the relativity gyroscope

    NASA Technical Reports Server (NTRS)

    Karr, Gerald R.; Edgell, Josephine; Zhang, Burt X.

    1991-01-01

    The dependence of the spin-up system efficiency on each geometrical parameter of the spin-up channel and the exhaust passage of the Gravity Probe-B (GPB) is individually investigated. The spin-up model is coded into a computer program which simulates the spin-up process. Numerical results reveal optimal combinations of the geometrical parameters for the ultimate spin-up performance. Comparisons are also made between the numerical results and experimental data. The experimental leakage rate can only be matched when the gap between the channel lip and the rotor surface is increased beyond its physical limit. The computed rotation frequency is roughly twice as high as the measured one, although the spin-up torques match fairly well.

  13. CFD simulation of copper(II) extraction with TFA in non-dispersive hollow fiber membrane contactors.

    PubMed

    Muhammad, Amir; Younas, Mohammad; Rezakazemi, Mashallah

    2018-04-01

    This study presents a computational fluid dynamics (CFD) simulation of dispersion-free liquid-liquid extraction of copper(II) with trifluoroacetylacetone (TFA) in a hollow fiber membrane contactor (HFMC). Mass and momentum balance Navier-Stokes equations were coupled to address the transport of copper(II) solute across the membrane contactor. Model equations were simulated using COMSOL Multiphysics™. The simulation was run to study the detailed concentration distribution of copper(II) and to investigate the effects of various parameters, such as membrane characteristics, partition coefficient, and flow configuration, on extraction efficiency. Once-through extraction was found to increase from 10 to 100% when the partition coefficient was raised from 1 to 10. Similarly, the extraction efficiency was almost doubled when the porosity-to-tortuosity ratio of the membrane was increased from 0.05 to 0.81. Furthermore, the study revealed that CFD can be used as an effective optimization tool for the development of economical membrane-based dispersion-free extraction processes.

  14. Statistical image quantification toward optimal scan fusion and change quantification

    NASA Astrophysics Data System (ADS)

    Potesil, Vaclav; Zhou, Xiang Sean

    2007-03-01

    Recent advances in imaging technology have brought new challenges and opportunities for automatic and quantitative analysis of medical images. With broader accessibility of more imaging modalities for more patients, fusion of modalities/scans from one time point and longitudinal analysis of changes across time points have become the two most critical differentiators to support more informed, more reliable and more reproducible diagnosis and therapy decisions. Unfortunately, scan fusion and longitudinal analysis are both inherently plagued with increased levels of statistical errors. A lack of comprehensive analysis by imaging scientists and a lack of full awareness by physicians pose potential risks in clinical practice. In this paper, we discuss several key error factors affecting imaging quantification, study their interactions, and introduce a simulation strategy to establish general error bounds for change quantification across time. We quantitatively show that image resolution, voxel anisotropy, lesion size, eccentricity, and orientation are all contributing factors to quantification error, and that there is an intricate relationship between voxel anisotropy and lesion shape in affecting quantification error. Specifically, when two or more scans are to be fused at the feature level, optimal linear fusion analysis reveals that scans with voxel anisotropy aligned with lesion elongation should receive a higher weight than other scans. As a result of such optimal linear fusion, we achieve a lower variance than naïve averaging. Simulated experiments are used to validate the theoretical predictions. Future work based on the proposed simulation methods may lead to general guidelines and error lower bounds for quantitative image analysis and change detection.
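    The fusion claim above, that variance-aware weights beat naive averaging, is the standard inverse-variance weighting result for independent unbiased estimates and is easy to check. The two per-scan variances below are hypothetical:

```python
def fused_variance(variances, weights):
    """Variance of a linear combination of independent unbiased estimates."""
    assert abs(sum(weights) - 1.0) < 1e-12    # weights must sum to 1 (unbiased)
    return sum(w * w * v for w, v in zip(weights, variances))

def optimal_weights(variances):
    """Inverse-variance weighting: the minimum-variance unbiased combination."""
    inv = [1.0 / v for v in variances]
    s = sum(inv)
    return [i / s for i in inv]

# two scans measuring the same lesion with different per-scan noise levels
variances = [1.0, 4.0]
naive = fused_variance(variances, [0.5, 0.5])
optimal = fused_variance(variances, optimal_weights(variances))
```

Here the noisier scan gets weight 0.2 rather than 0.5, mirroring the paper's conclusion that scans whose voxel anisotropy aligns with lesion elongation (lower quantification variance) deserve higher weight.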

  15. Modeling of solar polygeneration plant

    NASA Astrophysics Data System (ADS)

    Leiva, Roberto; Escobar, Rodrigo; Cardemil, José

    2017-06-01

    In this work, an exergoeconomic analysis is carried out of the joint production of electricity, fresh water, cooling and process heat by a simulated concentrated solar power (CSP) plant based on parabolic trough collectors (PTC) with thermal energy storage (TES) and a backup energy system (BS), coupled to a multi-effect distillation (MED) module, an absorption refrigeration module, and a process heat module. The polygeneration plant is simulated in Crucero, northern Chile, with a yearly total DNI of 3,389 kWh/m2/year. The methodology includes designing and modeling the polygeneration plant, applying exergoeconomic evaluations and calculating levelized costs. The solar polygeneration plant is simulated hourly, over a typical meteorological year, for different solar multiples and hours of storage. This study reveals that the total exergy cost rate of the products (the sum of the exergy cost rates of electricity, water, cooling and process heat) is an alternative metric for optimizing a solar polygeneration plant.
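    The levelized cost mentioned in the methodology has a compact standard form: discounted lifetime costs divided by discounted lifetime output. A minimal sketch, with purely illustrative numbers rather than the plant's data:

```python
def levelized_cost(capex, opex_per_year, output_per_year, rate, years):
    """Levelized cost of a product stream: discounted costs over discounted
    output (the same construction used for LCOE or levelized water cost)."""
    disc = [(1 + rate) ** -t for t in range(1, years + 1)]
    costs = capex + opex_per_year * sum(disc)
    output = output_per_year * sum(disc)
    return costs / output

# illustrative units: capex and opex in $, output in MWh/year
lcoe = levelized_cost(capex=1000.0, opex_per_year=20.0,
                      output_per_year=400.0, rate=0.07, years=25)
```

In a polygeneration setting, the exergoeconomic approach described above generalizes this: each product (electricity, water, cooling, heat) carries its own exergy cost rate, and their sum is the quantity to minimize.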

  16. How Perturbing Ocean Floor Disturbs Tsunami Waves

    NASA Astrophysics Data System (ADS)

    Salaree, A.; Okal, E.

    2017-12-01

    Bathymetry maps play perhaps the most crucial role in optimal tsunami simulations. Regardless of the simulation method, on one hand it is desirable to include every detailed bathymetry feature in the simulation grids in order to predict tsunami amplitudes as accurately as possible, but on the other hand, large grids result in long simulation times. It is therefore of interest to investigate a "sufficiency" level - if any - for the amount of detail in bathymetry grids needed to reconstruct the most important features in tsunami simulations, as obtained from the actual bathymetry. In this context, we use a spherical harmonics series approach to decompose the bathymetry of the Pacific ocean into its components down to a resolution of 4 degrees (l=100) and create bathymetry grids by accumulating the resulting terms. We then use these grids to simulate the tsunami behavior from pure thrust events around the Pacific through the MOST algorithm (e.g. Titov & Synolakis, 1995; Titov & Synolakis, 1998). Our preliminary results reveal that one would only need to consider the sum of the first 40 coefficients (equivalent to a resolution of 1000 km) to reproduce the main components of the "real" results. This would result in simpler simulations, potentially allowing for more efficient tsunami warning algorithms.
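    Truncating a spherical harmonic expansion at degree l acts as a low-pass filter on the bathymetry. The same effect can be demonstrated in one dimension with a Fourier series on a synthetic depth profile (the profile below is invented, not Pacific bathymetry):

```python
import numpy as np

def truncate_spectrum(profile, keep):
    """Keep only the lowest `keep` Fourier modes of a periodic depth
    profile -- a 1-D stand-in for truncating a spherical-harmonic series."""
    coeffs = np.fft.rfft(profile)
    coeffs[keep:] = 0.0
    return np.fft.irfft(coeffs, n=len(profile))

x = np.linspace(0, 2 * np.pi, 512, endpoint=False)
# synthetic seafloor: mean depth, a broad ridge (mode 3), fine roughness (mode 80)
depth = -4000 + 800 * np.sin(3 * x) + 50 * np.sin(80 * x)
smooth = truncate_spectrum(depth, keep=40)
# the large-scale ridge survives; the mode-80 roughness is filtered out
```

This is the intuition behind the finding above: if tsunami behavior is controlled by the low-degree part of the spectrum, grids built from the first ~40 coefficients reproduce the main results.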

  17. Estimation of in-situ bioremediation system cost using a hybrid Extreme Learning Machine (ELM)-particle swarm optimization approach

    NASA Astrophysics Data System (ADS)

    Yadav, Basant; Ch, Sudheer; Mathur, Shashi; Adamowski, Jan

    2016-12-01

    In-situ bioremediation is the most common groundwater remediation procedure used for treating organically contaminated sites. A simulation-optimization approach, which incorporates a simulation model for groundwater flow and transport processes within an optimization program, could help engineers design a remediation system that best satisfies management objectives as well as regulatory constraints. In-situ bioremediation is a highly complex, non-linear process, and modelling such a complex system requires significant computational effort. Soft computing techniques have a flexible mathematical structure which can generalize complex nonlinear processes. In in-situ bioremediation management, a physically-based model is used for the simulation and the simulated data are utilized by the optimization model to minimize the remediation cost. Repeatedly calling the simulator to satisfy the constraints is an extremely tedious and time consuming process, so there is a need for a simulator which can reduce the computational burden. This study presents a simulation-optimization approach to achieve an accurate and cost effective in-situ bioremediation system design for groundwater contaminated with BTEX (Benzene, Toluene, Ethylbenzene, and Xylenes) compounds. In this study, the Extreme Learning Machine (ELM) is used as a proxy simulator to replace BIOPLUME III for the simulation. The selection of ELM is made through a comparative analysis with Artificial Neural Network (ANN) and Support Vector Machine (SVM) models, as these were successfully used in previous studies of in-situ bioremediation system design. Further, a single-objective optimization problem is solved by a coupled Extreme Learning Machine (ELM)-Particle Swarm Optimization (PSO) technique to achieve the minimum cost for the in-situ bioremediation system design. The results indicate that ELM is a faster and more accurate proxy simulator than ANN and SVM. The total cost obtained by the ELM-PSO approach is held to a minimum while successfully satisfying all the regulatory constraints of the contaminated site.
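    The proxy-simulator idea can be sketched with a minimal ELM: a random hidden layer whose output weights are solved in closed form by least squares. The training data below are a synthetic toy response, not BIOPLUME III output.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (fixed)
    b = rng.normal(size=n_hidden)                 # random biases (fixed)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy nonlinear response surface standing in for the simulator output.
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2

model = elm_fit(X, y)
rmse = np.sqrt(np.mean((elm_predict(model, X) - y) ** 2))
```

    Because only the output weights are trained, fitting is a single linear solve, which is the source of the speed advantage over iteratively trained ANNs reported above.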

  18. GRIFFIN: A versatile methodology for optimization of protein-lipid interfaces for membrane protein simulations

    PubMed Central

    Staritzbichler, René; Anselmi, Claudio; Forrest, Lucy R.; Faraldo-Gómez, José D.

    2014-01-01

    As new atomic structures of membrane proteins are resolved, they reveal increasingly complex transmembrane topologies and highly irregular surfaces with crevices and pores. In many cases, specific interactions formed with the lipid membrane are functionally crucial, as is the overall lipid composition. Compounded with increasing protein size, these characteristics pose a challenge for the construction of simulation models of membrane proteins in lipid environments; clearly, the realism of these models bears upon the reliability of simulation-based studies of such systems. Here, we introduce GRIFFIN, which uses a versatile framework to automate and improve a widely-used membrane-embedding protocol. Initially, GRIFFIN carves out lipid and water molecules from a volume equivalent to that of the protein, so as to conserve the system density. In the subsequent optimization phase, GRIFFIN adds an implicit grid-based protein force field to a molecular dynamics simulation of the pre-carved membrane. In this force field, atoms inside the implicit protein volume experience an outward force that expels them from that volume, whereas those outside are subject to electrostatic and van der Waals interactions with the implicit protein. At each step of the simulation, these forces are updated by GRIFFIN and combined with the intermolecular forces of the explicit lipid-water system. This procedure enables the construction of realistic and reproducible starting configurations of the protein-membrane interface within a reasonable timeframe and with minimal intervention. GRIFFIN is a standalone tool designed to work alongside any existing molecular dynamics package, such as NAMD or GROMACS. PMID:24707227

  19. Automated parameterization of intermolecular pair potentials using global optimization techniques

    NASA Astrophysics Data System (ADS)

    Krämer, Andreas; Hülsmann, Marco; Köddermann, Thorsten; Reith, Dirk

    2014-12-01

    In this work, different global optimization techniques are assessed for the automated development of molecular force fields, as used in molecular dynamics and Monte Carlo simulations. The quest of finding suitable force field parameters is treated as a mathematical minimization problem. Intricate problem characteristics such as extremely costly and even abortive simulations, noisy simulation results, and especially multiple local minima naturally lead to the use of sophisticated global optimization algorithms. Five diverse algorithms (pure random search, recursive random search, CMA-ES, differential evolution, and taboo search) are compared to our own tailor-made solution named CoSMoS. CoSMoS is an automated workflow. It models the parameters' influence on the simulation observables to detect a globally optimal set of parameters. It is shown how and why this approach is superior to other algorithms. Applied to suitable test functions and simulations for phosgene, CoSMoS effectively reduces the number of required simulations and real time for the optimization task.
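    As a generic illustration of why global optimizers are used on such multi-minima landscapes, the sketch below (not CoSMoS itself) runs SciPy's differential evolution against pure random search on the Rastrigin test function.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Rastrigin: a standard multimodal test function with many local minima
# and a global minimum of 0 at the origin.
def rastrigin(x):
    x = np.asarray(x)
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

bounds = [(-5.12, 5.12)] * 2
result = differential_evolution(rastrigin, bounds, seed=1, tol=1e-8)

# Pure random search with the same budget order of magnitude.
rng = np.random.default_rng(1)
samples = rng.uniform(-5.12, 5.12, size=(1000, 2))
best_random = min(rastrigin(s) for s in samples)
```

    On noisy, expensive simulation objectives the same ranking argument applies, which is why population-based methods such as differential evolution and CMA-ES appear in the comparison above.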

  20. Noninvasive, automatic optimization strategy in cardiac resynchronization therapy.

    PubMed

    Reumann, Matthias; Osswald, Brigitte; Doessel, Olaf

    2007-07-01

    Optimization of cardiac resynchronization therapy (CRT) is still unsolved. It has been shown that optimal electrode position, atrioventricular (AV) and interventricular (VV) delays improve the success of CRT and reduce the number of non-responders. However, no automatic, noninvasive optimization strategy exists to date. Cardiac resynchronization therapy was simulated on the Visible Man and a patient dataset including fiber orientation and ventricular heterogeneity. A cellular automaton was used for fast computation of ventricular excitation. An AV block and a left bundle branch block were simulated with 100%, 80% and 60% interventricular conduction velocity. A right apical and 12 left ventricular lead positions were set. Sequential optimization and optimization with the downhill simplex algorithm (DSA) were carried out. The minimal error between the isochrones of the physiologic excitation and the therapy was computed automatically, leading to an optimal lead position and timing. Up to 1512 simulations were carried out per pathology per patient. One simulation took 4 minutes on an Apple Macintosh 2 GHz PowerPC G5. For each electrode pair an optimal pacemaker delay was found. The DSA reduced the number of simulations by an order of magnitude, and the AV-delay and VV-delay were determined with a much higher resolution. The findings compare well with clinical studies. The presented computer model of CRT automatically evaluates an optimal lead position, AV-delay and VV-delay, which can be used to noninvasively plan an optimal therapy for an individual patient. The application of the DSA reduces the simulation time so that the strategy is suitable for pre-operative planning in clinical routine. Future work will focus on clinical evaluation of the computer models and integration of patient data for individualized therapy planning and optimization.
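    The DSA step can be sketched with SciPy's Nelder-Mead (downhill simplex) method; the error surface below is a hypothetical quadratic stand-in for the isochrone mismatch, not the cellular-automaton model.

```python
from scipy.optimize import minimize

# Hypothetical surrogate for the isochrone error as a function of the
# pacemaker AV and VV delays (ms); minimum placed at AV=120, VV=20.
def activation_error(delays):
    av, vv = delays
    return (av - 120.0) ** 2 / 100.0 + (vv - 20.0) ** 2 / 25.0 + 5.0

res = minimize(activation_error, x0=[160.0, 0.0], method="Nelder-Mead",
               options={"xatol": 0.1, "fatol": 0.01})
best_av, best_vv = res.x
```

    Each function evaluation here stands in for one 4-minute excitation simulation, which is why a simplex search that needs an order of magnitude fewer evaluations than a sequential grid sweep matters for pre-operative planning.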

  1. Surrogate Model Application to the Identification of Optimal Groundwater Exploitation Scheme Based on Regression Kriging Method—A Case Study of Western Jilin Province

    PubMed Central

    An, Yongkai; Lu, Wenxi; Cheng, Weiguo

    2015-01-01

    This paper introduces a surrogate model to identify an optimal exploitation scheme, with the western Jilin Province selected as the study area. A numerical simulation model of groundwater flow was established first, and four exploitation wells were set in Tongyu County and Qian Gorlos County, respectively, to supply water to Daan County. Second, the Latin Hypercube Sampling (LHS) method was used to collect data in the feasible region of the input variables. A surrogate of the numerical groundwater flow model was then developed using the regression kriging method. An optimization model was established to search for an optimal groundwater exploitation scheme, using the minimum average drawdown of the groundwater table and the minimum cost of groundwater exploitation as multi-objective functions. Finally, the surrogate model was invoked by the optimization model in the process of solving the optimization problem. Results show that the relative error and root mean square error of the groundwater table drawdown between the simulation model and the surrogate model for 10 validation samples are both lower than 5%, which represents a high approximation accuracy. A comparison between the surrogate-based simulation optimization model and the conventional simulation optimization model solving the same optimization problem shows that the former needs only 5.5 hours, while the latter needs 25 days. These results indicate that the surrogate model developed in this study can not only considerably reduce the computational burden of the simulation optimization process but also maintain high computational accuracy. It can thus provide an effective method for identifying an optimal groundwater exploitation scheme quickly and accurately. PMID:26264008
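    Kriging is closely related to Gaussian-process regression; a minimal NumPy sketch of such a surrogate, with a synthetic toy response in place of the groundwater model and an assumed squared-exponential covariance, looks like this:

```python
import numpy as np

def rbf_kernel(A, B, length=0.3):
    # Squared-exponential covariance between two point sets.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(40, 2))          # e.g. scaled pumping rates
y = np.sin(4 * X[:, 0]) * X[:, 1]            # toy "drawdown" response

K = rbf_kernel(X, X) + 1e-6 * np.eye(len(X))  # jitter for numerical stability
alpha = np.linalg.solve(K, y)

def predict(Xnew):
    return rbf_kernel(Xnew, X) @ alpha        # kriging-type predictor

Xtest = rng.uniform(0, 1, size=(10, 2))
err = np.max(np.abs(predict(Xtest) - np.sin(4 * Xtest[:, 0]) * Xtest[:, 1]))
```

    Once the weights are solved, each surrogate prediction costs one small matrix-vector product, which is what turns a 25-day optimization into hours.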

  2. Modelling irrigated maize with a combination of coupled-model simulation and uncertainty analysis, in the northwest of China

    NASA Astrophysics Data System (ADS)

    Li, Y.; Kinzelbach, W.; Zhou, J.; Cheng, G. D.; Li, X.

    2012-05-01

    The hydrologic model HYDRUS-1-D and the crop growth model WOFOST are coupled to efficiently manage water resources in agriculture and improve the prediction of crop production. The results of the coupled model are validated by experimental studies of irrigated maize in the middle reaches of northwest China's Heihe River, a semi-arid to arid region. Good agreement is achieved between the simulated evapotranspiration, soil moisture and crop production and their respective field measurements made under current maize irrigation and fertilization. Based on the calibrated model, the scenario analysis reveals that the optimal amount of irrigation is 500-600 mm in this region. However, for regions without detailed observations, the results of the numerical simulation can be unreliable for irrigation decision making owing to the shortage of calibrated model boundary conditions and parameters. We therefore develop a method combining model ensemble simulations with uncertainty/sensitivity analysis to estimate the probability distribution of crop production. In our studies, the uncertainty analysis is used to reveal the risk of a loss of crop production as irrigation decreases. The global sensitivity analysis is used to test the coupled model and to further quantitatively analyse the impact of the uncertainty in the coupled model parameters and environmental scenarios on crop production. This method can be used for estimation in regions with no or reduced data availability.
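    The uncertainty-analysis idea can be sketched as Monte Carlo propagation through a toy yield model; the yield function, parameter distribution and thresholds below are illustrative assumptions, not the coupled HYDRUS-WOFOST model.

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_yield(irrigation_mm, water_use_eff):
    # Diminishing returns to water; saturates near the optimum.
    return 10.0 * (1 - np.exp(-water_use_eff * irrigation_mm / 300.0))

n = 10_000
wue = rng.normal(1.0, 0.2, n)          # uncertain efficiency parameter
yield_550 = toy_yield(550.0, wue)      # near the reported optimal irrigation
yield_350 = toy_yield(350.0, wue)      # reduced irrigation

# Probability that cutting irrigation costs more than 10% of yield.
p_loss = np.mean((yield_550 - yield_350) / yield_550 > 0.10)
```

    Sampling the uncertain parameters and re-running the (here trivial) model yields a full distribution of outcomes, from which risk statements like the one above follow directly.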

  3. Merging tree ring chronologies and climate system model simulated temperature by optimal interpolation algorithm in North America

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Xing, Pei; Luo, Yong; Zhao, Zongci; Nie, Suping; Huang, Jianbin; Wang, Shaowu; Tian, Qinhua

    2015-04-01

    A new dataset of annual mean surface temperature over North America covering the past 500 years has been constructed using an optimal interpolation (OI) algorithm. In total, 149 series were screened from the International Tree Ring Data Bank (ITRDB), including 69 maximum latewood density (MXD) and 80 tree ring width (TRW) chronologies. The simulated annual mean surface temperature derives from the past1000 experiment of the Community Climate System Model version 4 (CCSM4). Unlike existing research that applies data assimilation approaches to General Circulation Model (GCM) simulations, the errors of both the climate model simulation and the tree ring reconstruction were considered, with a view to combining the two in an optimal way. Variance matching (VM) was employed to calibrate the tree ring chronologies against CRUTEM4v, and the corresponding errors were estimated through a leave-one-out process. The background error covariance matrix was estimated statistically from samples of simulation results in a running 30-year window; in practice, it was calculated locally within the scanning range (2000 km in this research), so the merging proceeded with a time-varying local gain matrix. The merging method (MM) was tested in two kinds of experiments, and the results indicated that the standard deviation of the errors can be reduced to about 0.3 degrees centigrade below that of the tree ring reconstructions and 0.5 degrees centigrade below that of the model simulation. Obvious decadal variability can be identified in the MM results, including the evident cooling (0.10 degrees per decade) in the 1940s-60s, where the model simulation instead exhibits a weak warming trend (0.05 degrees per decade). The MM results also revealed a compromise spatial pattern of the linear trend of surface temperature during a typical period (1601-1800 AD) of the Little Ice Age, which basically accorded with the phase transitions of the Pacific decadal oscillation (PDO) and the Atlantic multidecadal oscillation (AMO). Empirical orthogonal function and power spectrum analyses demonstrated that, compared with the pure CCSM4 simulations, MM significantly improved the decadal variability of the gridded temperature in North America by merging the temperature-sensitive tree ring records.
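    The OI update itself can be written compactly as x_a = x_b + K (y - H x_b) with gain K = B H^T (H B H^T + R)^(-1). Below is a toy three-gridpoint example with one observation; the values are illustrative, with a hand-specified background covariance rather than one estimated from 30-year model windows.

```python
import numpy as np

B = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.5],
              [0.2, 0.5, 1.0]])          # background-error covariance
H = np.array([[0.0, 1.0, 0.0]])          # observe the middle gridpoint only
R = np.array([[0.25]])                   # observation-error variance

x_b = np.array([14.0, 15.0, 16.0])       # background (model) temperatures
y = np.array([15.8])                     # tree-ring-derived observation

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
x_a = x_b + (K @ (y - H @ x_b)).ravel()        # analysis
```

    Because B correlates the gridpoints, the single observation also nudges the two unobserved neighbors, which is how sparse chronologies inform a full grid.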

  4. Matching soil grid unit resolutions with polygon unit scales for DNDC modelling of regional SOC pool

    NASA Astrophysics Data System (ADS)

    Zhang, H. D.; Yu, D. S.; Ni, Y. L.; Zhang, L. M.; Shi, X. Z.

    2015-03-01

    Matching soil grid unit resolution with polygon unit map scale is important for minimizing the uncertainty of regional soil organic carbon (SOC) pool simulation, as both strongly influence that uncertainty. A series of soil grid units at varying cell sizes were derived from soil polygon units at the six map scales of 1:50 000 (C5), 1:200 000 (D2), 1:500 000 (P5), 1:1 000 000 (N1), 1:4 000 000 (N4) and 1:14 000 000 (N14), respectively, in the Tai lake region of China. Both formats of soil units were used for regional SOC pool simulation with the DeNitrification-DeComposition (DNDC) process-based model, with runs spanning the period 1982 to 2000 at each of the six map scales. Four indices, namely the soil type number (STN), area (AREA), average SOC density (ASOCD) and total SOC stock (SOCS) of surface paddy soils simulated with the DNDC, were derived from all these soil polygon and grid units, respectively. Relative to the four index values (IV) from the parent polygon units, the variation of each index value (VIV, %) in the grid units was used to assess dataset accuracy and redundancy, which reflect uncertainty in the simulation of SOC. Optimal soil grid unit resolutions, matching the soil polygon unit map scales, were generated and suggested for DNDC simulation of the regional SOC pool. With the optimal raster resolution, the soil grid unit dataset holds the same accuracy as its parent polygon unit dataset without any redundancy, when VIV < 1% for all four indices is taken as the assessment criterion. A quadratic regression model, y = -8.0 × 10^-6 x^2 + 0.228x + 0.211 (R^2 = 0.9994, p < 0.05), was obtained, describing the relationship between the optimal soil grid unit resolution (y, km) and the soil polygon unit map scale (1:x). This knowledge may serve for grid partitioning of regions in the investigation and simulation of SOC pool dynamics at a given map scale.

  5. Study of the dynamics of poly(ethylene oxide) by combining molecular dynamic simulations and neutron scattering experiments

    NASA Astrophysics Data System (ADS)

    Brodeck, M.; Alvarez, F.; Arbe, A.; Juranyi, F.; Unruh, T.; Holderer, O.; Colmenero, J.; Richter, D.

    2009-03-01

    We performed quasielastic neutron scattering experiments and atomistic molecular dynamics simulations on a poly(ethylene oxide) (PEO) homopolymer system above the melting point. The excellent agreement found between both sets of data, together with a successful comparison with literature diffraction results, validates the condensed-phase optimized molecular potentials for atomistic simulation studies (COMPASS) force field used to produce our dynamic runs and gives support to their further analysis. This provided direct information on quantities which are not accessible from experiments, such as the radial probability distribution functions of specific atoms at different times and their moments. The results of our simulations on the H-motions and different experiments indicate that in the high-temperature range investigated the dynamics is Rouse-like for Q-values below ≈0.6 Å-1. We then addressed the single chain dynamic structure factor with the simulations. A mode analysis, not possible directly from experiment, reveals the limits of applicability of the Rouse model to PEO. We discuss the possible origins for the observed deviations.

  6. Study of the dynamics of poly(ethylene oxide) by combining molecular dynamic simulations and neutron scattering experiments.

    PubMed

    Brodeck, M; Alvarez, F; Arbe, A; Juranyi, F; Unruh, T; Holderer, O; Colmenero, J; Richter, D

    2009-03-07

    We performed quasielastic neutron scattering experiments and atomistic molecular dynamics simulations on a poly(ethylene oxide) (PEO) homopolymer system above the melting point. The excellent agreement found between both sets of data, together with a successful comparison with literature diffraction results, validates the condensed-phase optimized molecular potentials for atomistic simulation studies (COMPASS) force field used to produce our dynamic runs and gives support to their further analysis. This provided direct information on quantities which are not accessible from experiments, such as the radial probability distribution functions of specific atoms at different times and their moments. The results of our simulations on the H-motions and different experiments indicate that in the high-temperature range investigated the dynamics is Rouse-like for Q-values below approximately 0.6 A(-1). We then addressed the single chain dynamic structure factor with the simulations. A mode analysis, not possible directly from experiment, reveals the limits of applicability of the Rouse model to PEO. We discuss the possible origins for the observed deviations.

  7. Challenges of NDE simulation tool validation, optimization, and utilization for composites

    NASA Astrophysics Data System (ADS)

    Leckey, Cara A. C.; Seebo, Jeffrey P.; Juarez, Peter

    2016-02-01

    Rapid, realistic nondestructive evaluation (NDE) simulation tools can aid in inspection optimization and prediction of inspectability for advanced aerospace materials and designs. NDE simulation tools may someday aid in the design and certification of aerospace components; potentially shortening the time from material development to implementation by industry and government. Furthermore, ultrasound modeling and simulation are expected to play a significant future role in validating the capabilities and limitations of guided wave based structural health monitoring (SHM) systems. The current state-of-the-art in ultrasonic NDE/SHM simulation is still far from the goal of rapidly simulating damage detection techniques for large scale, complex geometry composite components/vehicles containing realistic damage types. Ongoing work at NASA Langley Research Center is focused on advanced ultrasonic simulation tool development. This paper discusses challenges of simulation tool validation, optimization, and utilization for composites. Ongoing simulation tool development work is described along with examples of simulation validation and optimization challenges that are more broadly applicable to all NDE simulation tools. The paper will also discuss examples of simulation tool utilization at NASA to develop new damage characterization methods for composites, and associated challenges in experimentally validating those methods.

  8. Optimization of the moving-bed biofilm sequencing batch reactor (MBSBR) to control aeration time by kinetic computational modeling: Simulated sugar-industry wastewater treatment.

    PubMed

    Faridnasr, Maryam; Ghanbari, Bastam; Sassani, Ardavan

    2016-05-01

    A novel approach was applied for the optimization of a moving-bed biofilm sequencing batch reactor (MBSBR) to treat sugar-industry wastewater (BOD5=500-2500 and COD=750-3750 mg/L) at 2-4 h of cycle time (CT). Although the experimental data showed that the MBSBR reached high BOD5 and COD removal performance, it failed to achieve the standard limits at the mentioned CTs. Thus, optimization of the reactor was performed by kinetic computational modeling, using the normalized root mean square error (NRMSE) as a statistical error indicator. The NRMSE results revealed that the Stover-Kincannon (error=6.40%) and Grau (error=6.15%) models provide better fits to the experimental data and may be used for CT optimization in the reactor. The models predicted required CTs of 4.5, 6.5, 7 and 7.5 h for effluent standardization at 500, 1000, 1500 and 2500 mg/L influent BOD5 concentrations, respectively. A similar pattern in the experimental data confirmed these findings. Copyright © 2016 Elsevier Ltd. All rights reserved.
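    The NRMSE used to rank the kinetic models can be computed as below; the definition assumed here normalizes RMSE by the mean of the observations (normalization by the observed range is also common), and the data are illustrative, not the study's measurements.

```python
import numpy as np

def nrmse(observed, predicted):
    """NRMSE in percent: RMSE normalized by the observation mean (assumed form)."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return 100.0 * rmse / observed.mean()

# Toy effluent-COD fit (mg/L); numbers are illustrative only.
obs = [750.0, 1500.0, 2250.0, 3000.0]
pred = [780.0, 1450.0, 2300.0, 2950.0]
error_pct = nrmse(obs, pred)
```

    Comparing this percentage across candidate kinetic models (Stover-Kincannon, Grau, etc.) is what identifies the best-fitting model for cycle-time optimization.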

  9. Efficient and Accurate Optimal Linear Phase FIR Filter Design Using Opposition-Based Harmony Search Algorithm

    PubMed Central

    Saha, S. K.; Dutta, R.; Choudhury, R.; Kar, R.; Mandal, D.; Ghoshal, S. P.

    2013-01-01

    In this paper, opposition-based harmony search has been applied to the optimal design of linear phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent, and the opposition-based approach is applied: during initialization, a randomly generated population of solutions is created, the opposite solutions are also considered, and the fitter of each pair is selected as the a priori guess. In harmony memory, each such solution passes through the memory consideration rule, the pitch adjustment rule, and then opposition-based reinitialization generation jumping, which gives the optimum result corresponding to the least error fitness in the multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm balances exploration and exploitation of the search space. Low pass, high pass, band pass, and band stop FIR filters are designed with the proposed OHS and the other aforementioned algorithms individually for comparison of optimization performance. A comparison of simulation results reveals the optimization efficacy of OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problems. PMID:23844390

  10. Efficient and accurate optimal linear phase FIR filter design using opposition-based harmony search algorithm.

    PubMed

    Saha, S K; Dutta, R; Choudhury, R; Kar, R; Mandal, D; Ghoshal, S P

    2013-01-01

    In this paper, opposition-based harmony search has been applied to the optimal design of linear phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent, and the opposition-based approach is applied: during initialization, a randomly generated population of solutions is created, the opposite solutions are also considered, and the fitter of each pair is selected as the a priori guess. In harmony memory, each such solution passes through the memory consideration rule, the pitch adjustment rule, and then opposition-based reinitialization generation jumping, which gives the optimum result corresponding to the least error fitness in the multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm balances exploration and exploitation of the search space. Low pass, high pass, band pass, and band stop FIR filters are designed with the proposed OHS and the other aforementioned algorithms individually for comparison of optimization performance. A comparison of simulation results reveals the optimization efficacy of OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problems.
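    The opposition-based initialization step can be sketched as follows: for each random candidate x in [lo, hi], the opposite lo + hi - x is also evaluated, and the fitter of the pair survives. Fitness here is a toy sphere function, not the FIR filter error.

```python
import numpy as np

rng = np.random.default_rng(7)

def sphere(x):
    # Toy fitness to minimize (stand-in for the filter design error).
    return np.sum(x ** 2, axis=-1)

lo, hi = 0.0, 10.0
pop = rng.uniform(lo, hi, size=(20, 4))   # random initial population
opp = lo + hi - pop                       # opposite population

keep_opp = sphere(opp) < sphere(pop)      # fitter member of each pair
init = np.where(keep_opp[:, None], opp, pop)
```

    The resulting initial population is never worse than the purely random one, which is the source of the faster early convergence claimed for opposition-based variants.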

  11. Modeling Filamentous Cyanobacteria Reveals the Advantages of Long and Fast Trichomes for Optimizing Light Exposure

    PubMed Central

    Tamulonis, Carlos; Postma, Marten; Kaandorp, Jaap

    2011-01-01

    Cyanobacteria form a very large and diverse phylum of prokaryotes that perform oxygenic photosynthesis. Many species of cyanobacteria live colonially in long trichomes of hundreds to thousands of cells. Of the filamentous species, many are also motile, gliding along their long axis, and display photomovement, by which a trichome modulates its gliding according to the incident light. The latter has been found to play an important role in guiding the trichomes to optimal lighting conditions, since the incident light can either inhibit the cells if too weak or damage them if too strong. We have developed a computational model for gliding filamentous photophobic cyanobacteria that allows us to perform simulations on the scale of a Petri dish using over 10^5 individual trichomes. Using the model, we quantify the effectiveness of one commonly observed photomovement strategy—photophobic responses—in distributing large populations of trichomes optimally over a light field. The model predicts that the typical observed length and gliding speeds of filamentous cyanobacteria are optimal for the photophobic strategy. Therefore, our results suggest that not just photomovement but also the trichome shape itself improves the ability of the cyanobacteria to optimize their light exposure. PMID:21789215

  12. Parameterized data-driven fuzzy model based optimal control of a semi-batch reactor.

    PubMed

    Kamesh, Reddi; Rani, K Yamuna

    2016-09-01

    A parameterized data-driven fuzzy (PDDF) model structure is proposed for semi-batch processes, and its application for optimal control is illustrated. The orthonormally parameterized input trajectories, initial states and process parameters are the inputs to the model, which predicts the output trajectories in terms of Fourier coefficients. Fuzzy rules are formulated based on the signs of a linear data-driven model, while the defuzzification step incorporates a linear regression model to shift the domain from input to output domain. The fuzzy model is employed to formulate an optimal control problem for single rate as well as multi-rate systems. Simulation study on a multivariable semi-batch reactor system reveals that the proposed PDDF modeling approach is capable of capturing the nonlinear and time-varying behavior inherent in the semi-batch system fairly accurately, and the results of operating trajectory optimization using the proposed model are found to be comparable to the results obtained using the exact first principles model, and are also found to be comparable to or better than parameterized data-driven artificial neural network model based optimization results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  13. iTOUGH2: A multiphysics simulation-optimization framework for analyzing subsurface systems

    NASA Astrophysics Data System (ADS)

    Finsterle, S.; Commer, M.; Edmiston, J. K.; Jung, Y.; Kowalsky, M. B.; Pau, G. S. H.; Wainwright, H. M.; Zhang, Y.

    2017-11-01

    iTOUGH2 is a simulation-optimization framework for the TOUGH suite of nonisothermal multiphase flow models and related simulators of geophysical, geochemical, and geomechanical processes. After appropriate parameterization of subsurface structures and their properties, iTOUGH2 runs simulations for multiple parameter sets and analyzes the resulting output for parameter estimation through automatic model calibration, local and global sensitivity analyses, data-worth analyses, and uncertainty propagation analyses. Development of iTOUGH2 is driven by scientific challenges and user needs, with new capabilities continually added to both the forward simulator and the optimization framework. This review article provides a summary description of methods and features implemented in iTOUGH2, and discusses the usefulness and limitations of an integrated simulation-optimization workflow in support of the characterization and analysis of complex multiphysics subsurface systems.

  14. Optimization of Collision Detection in Surgical Simulations

    NASA Astrophysics Data System (ADS)

    Custură-Crăciun, Dan; Cochior, Daniel; Neagu, Corneliu

    2014-11-01

    Just as flight and spacecraft simulators already represent a standard, we expect that surgical simulators will soon become a standard in medical applications. A simulation's quality is strongly related to its image quality as well as to its degree of realism. Increased quality requires increased resolution, increased rendering speed and, more importantly, a larger number of mathematical equations. To make this possible, we need not only more efficient computers but, above all, more optimization of the calculation process. A simulator executes one of its most complex sets of calculations each time it detects a contact between virtual objects; optimization of collision detection is therefore critical to the speed of a simulator and hence to its quality.
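    One standard optimization of collision detection, sketched below, is a broad phase using axis-aligned bounding boxes (AABBs): cheap box-overlap tests filter object pairs before any expensive mesh-level test. This is a generic technique, not necessarily the paper's specific method.

```python
def aabb_overlap(a_min, a_max, b_min, b_max):
    """Boxes given as (x, y, z) min/max corners; True if they intersect.

    Two boxes overlap exactly when their intervals overlap on every axis.
    """
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

# Hypothetical bounding boxes around two virtual objects in a surgical scene.
hit = aabb_overlap((0, 0, 0), (2, 2, 2), (1, 1, 1), (3, 3, 3))
miss = aabb_overlap((0, 0, 0), (1, 1, 1), (5, 5, 5), (6, 6, 6))
```

    Only pairs that pass this test proceed to the costly narrow-phase triangle checks, which is where most of the simulator's collision-detection time is saved.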

  15. An optimization model to agroindustrial sector in antioquia (Colombia, South America)

    NASA Astrophysics Data System (ADS)

    Fernandez, J.

    2015-06-01

    This paper proposes a general optimization model for the flower industry, defined using discrete simulation and nonlinear optimization, with the mathematical models solved using the ProModel simulation tool and the GAMS optimization software. It defines the operations that constitute the production and marketing of the sector; data taken directly from each operation through field work are statistically validated, and the discrete simulation model of the operations and the linear optimization model of the entire industry chain are presented. The model is solved with the tools described above, and the results are validated in a case study.

  16. Modeling, simulation, and concept design for hybrid-electric medium-size military trucks

    NASA Astrophysics Data System (ADS)

    Rizzoni, Giorgio; Josephson, John R.; Soliman, Ahmed; Hubert, Christopher; Cantemir, Codrin-Gruie; Dembski, Nicholas; Pisu, Pierluigi; Mikesell, David; Serrao, Lorenzo; Russell, James; Carroll, Mark

    2005-05-01

    A large scale design space exploration can provide valuable insight into vehicle design tradeoffs being considered for the U.S. Army's FMTV (Family of Medium Tactical Vehicles). Through a grant from TACOM (Tank-automotive and Armaments Command), researchers have generated detailed road, surface, and grade conditions representative of the performance criteria of this medium-sized truck and constructed a virtual powertrain simulator for both conventional and hybrid variants. The simulator incorporates the latest technology among vehicle design options, including scalable ultracapacitor and NiMH battery packs as well as a variety of generator and traction motor configurations. An energy management control strategy has also been developed to provide efficiency and performance. A design space exploration for the family of vehicles involves running a large number of simulations with systematically varied vehicle design parameters, where each variant is paced through several different mission profiles and multiple attributes of performance are measured. The resulting designs are filtered to remove dominated designs, exposing the multi-criterial surface of optimality (Pareto optimal designs) and revealing the design tradeoffs as they impact vehicle performance and economy. The results are not yet definitive because ride and drivability measures were not included, and work is not finished on fine-tuning the modeled dynamics of some powertrain components. However, the work completed so far demonstrates the effectiveness of the design space exploration approach, and the results to date suggest the powertrain configuration best suited to the FMTV mission.
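    The filtering of dominated designs can be sketched directly: a design is removed if some other design is at least as good on every objective and strictly better on at least one. Objectives below are minimized and the candidate data are synthetic, not FMTV results.

```python
import numpy as np

def pareto_mask(costs):
    """Boolean mask of non-dominated rows; all objectives are minimized."""
    keep = np.ones(len(costs), dtype=bool)
    for i in range(len(costs)):
        dominated = np.any(np.all(costs <= costs[i], axis=1) &
                           np.any(costs < costs[i], axis=1))
        keep[i] = not dominated
    return keep

# Toy (fuel consumption, acceleration time) pairs for four candidate powertrains.
costs = np.array([[8.0, 12.0],
                  [9.0, 10.0],
                  [9.5, 13.0],    # dominated by the first design
                  [7.5, 14.0]])
front = costs[pareto_mask(costs)]
```

    The surviving rows form the multi-criterial surface of optimality; the tradeoffs among them (here, fuel economy versus acceleration) are exactly what the exploration is meant to reveal.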

  17. A discrete twin-boundary approach for simulating the magneto-mechanical response of Ni-Mn-Ga

    NASA Astrophysics Data System (ADS)

    Faran, Eilon; Shilo, Doron

    2016-09-01

    The design and optimization of ferromagnetic shape memory alloy (FSMA)-based devices require quantitative understanding of the dynamics of twin boundaries within these materials. Here, we present a discrete twin-boundary modeling approach for simulating the behavior of an FSMA Ni-Mn-Ga crystal under combined magneto-mechanical loading conditions. The model is based on experimentally measured kinetic relations that describe the motion of individual twin boundaries over a wide range of velocities. The resulting calculations capture the dynamic response of Ni-Mn-Ga and reveal the relations between fundamental material parameters and actuation performance at different frequencies of the magnetic field. In particular, we show that at high field rates, the magnitude of the lattice barrier that resists twin boundary motion is the property that determines the level of actuation strain, while the contribution of the twinning stress property is minor. Consequently, type II twin boundaries, whose lattice barrier is smaller than that of type I, are expected to show better actuation performance at high rates, irrespective of the differences in the twinning stress property between the two boundary types. In addition, the simulation enables optimization of the actuation strain of a Ni-Mn-Ga crystal by adjusting the magnitude of the bias mechanical stress, thus providing direct guidelines for the design of actuating devices. Finally, we show that the use of a linear kinetic law for simulating the twinning-based response is inadequate and results in incorrect predictions.

  18. Design of a correlated validated CFD and genetic algorithm model for optimized sensors placement for indoor air quality monitoring

    NASA Astrophysics Data System (ADS)

    Mousavi, Monireh Sadat; Ashrafi, Khosro; Motlagh, Majid Shafie Pour; Niksokhan, Mohhamad Hosein; Vosoughifar, HamidReza

    2018-02-01

    In this study, a coupled method that combines computational fluid dynamics (CFD) simulation of the flow pattern with an optimization technique based on genetic algorithms is presented to determine the optimal location and number of sensors in an enclosed parking garage of a residential complex in Tehran. The main objectives of this research are cost reduction and maximum coverage with regard to the distribution of concentrations in different scenarios. Simulating the pollution distribution for all possible scenarios with CFD was challenging due to the extent of the parking area and the number of cars present. To address this problem, a subset of scenarios was selected at random, and the maximum concentrations from these scenarios were used for the optimization. The CFD simulation outputs serve as input to the optimization model based on a genetic algorithm. The results yield the optimal number and locations of the sensors.
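As a rough sketch of the optimization stage described above, a toy genetic algorithm can select sensor locations that cover the high-concentration cells identified by CFD. The coverage matrix, cost weight, and GA settings below are illustrative assumptions, not the authors' model:

```python
import random

# Minimal GA sketch for sensor placement: a chromosome has one bit per
# candidate location; fitness rewards covering pollution cells (stand-ins
# for high-concentration CFD cells) and penalizes sensor count.

random.seed(2)
# covers[i][j] = 1 if candidate sensor location i detects pollution cell j
covers = [[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1], [1, 0, 0, 1]]
COST = 0.5                                   # penalty per installed sensor

def fitness(bits):
    covered = sum(any(b and c[j] for b, c in zip(bits, covers))
                  for j in range(4))
    return covered - COST * sum(bits)

pop = [[random.randint(0, 1) for _ in range(4)] for _ in range(20)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                       # elitist selection
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, 4)
        child = a[:cut] + b[cut:]            # one-point crossover
        if random.random() < 0.2:            # bit-flip mutation
            k = random.randrange(4)
            child[k] ^= 1
        children.append(child)
    pop = parents + children

print(max(pop, key=fitness))  # a small sensor set covering all four cells
```

In the real problem the coverage matrix would be derived from the CFD concentration fields, and the chromosome length would equal the number of candidate sensor positions.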

  19. Postaudit of optimal conjunctive use policies

    USGS Publications Warehouse

    Nishikawa, Tracy; Martin, Peter; ,

    1998-01-01

    A simulation-optimization model was developed for the optimal management of the city of Santa Barbara's water resources during a drought; however, this model addressed only groundwater flow and not the advective-dispersive, density-dependent transport of seawater. Zero-m freshwater head constraints at the coastal boundary were used as surrogates for the control of seawater intrusion. In this study, the strategies derived from the simulation-optimization model using two surface water supply scenarios are evaluated using a two-dimensional, density-dependent groundwater flow and transport model. Comparisons of simulated chloride mass fractions are made between maintaining the actual pumping policies of the 1987-91 drought and implementing the optimal pumping strategies for each scenario. The results indicate that using 0-m freshwater head constraints allowed no more seawater intrusion than under actual 1987-91 drought conditions and that the simulation-optimization model yields least-cost strategies that deliver more water than under actual drought conditions while controlling seawater intrusion.

  20. MO-G-17A-07: Improved Image Quality in Brain F-18 FDG PET Using Penalized-Likelihood Image Reconstruction Via a Generalized Preconditioned Alternating Projection Algorithm: The First Patient Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidtlein, CR; Beattie, B; Humm, J

    2014-06-15

    Purpose: To investigate the performance of a new penalized-likelihood PET image reconstruction algorithm using the ℓ1-norm total-variation (TV) sum of the 1st- through 4th-order gradients as the penalty. Simulated and brain patient data sets were analyzed. Methods: This work represents an extension of the preconditioned alternating projection algorithm (PAPA) for emission-computed tomography. In this new generalized algorithm (GPAPA), the penalty term is expanded to allow multiple components, in this case the sum of the 1st- to 4th-order gradients, to reduce artificial piece-wise constant regions (“staircase” artifacts typical for TV) seen in PAPA images penalized with only the 1st-order gradient. Simulated data were used to test for “staircase” artifacts and to optimize the penalty hyper-parameter in the root-mean-squared error (RMSE) sense. Patient FDG brain scans were acquired on a GE D690 PET/CT (370 MBq at 1-hour post-injection for 10 minutes) in time-of-flight mode and in all cases were reconstructed using resolution-recovery projectors. GPAPA images were compared to PAPA and RMSE-optimally filtered OSEM (fully converged) in simulations and to clinical OSEM reconstructions (3 iterations, 32 subsets) with 2.6 mm XY Gaussian and standard 3-point axial smoothing post-filters. Results: The results from the simulated data show a significant reduction in the “staircase” artifact for GPAPA compared to PAPA and lower RMSE (up to 35%) compared to optimally filtered OSEM. A simple power-law relationship between the RMSE-optimal hyper-parameters and the noise equivalent counts (NEC) per voxel is revealed. Qualitatively, the patient images appear much sharper and with less noise than standard clinical images. The convergence rate is similar to OSEM. 
Conclusions: GPAPA reconstructions using the ℓ1-norm total-variation sum of the 1st- through 4th-order gradients as the penalty show great promise for the improvement of image quality over that currently achieved with clinical OSEM reconstructions.
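The generalized penalty the abstract describes, an ℓ1 total-variation sum over 1st- through 4th-order gradients, can be illustrated on a 1-D signal. The equal per-order weights, the 1-D setting, and the toy signals below are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

# Sum of l1-norms of 1st- through 4th-order finite-difference gradients,
# with per-order weights. Higher-order terms penalize the piecewise-constant
# "staircase" structure that a pure 1st-order TV penalty tends to produce.

def generalized_tv(x, weights=(1.0, 1.0, 1.0, 1.0)):
    return sum(w * np.abs(np.diff(x, n=k + 1)).sum()
               for k, w in enumerate(weights))

# A step (piecewise constant) is cheap under 1st-order TV alone but costly
# under the higher-order terms; a smooth ramp is cheap under all orders.
step = np.array([0., 0., 1., 1., 1.])
ramp = np.array([0., .25, .5, .75, 1.])
print(generalized_tv(step), generalized_tv(ramp))  # 9.0 1.0
```

Both signals have identical 1st-order TV (1.0); the higher-order terms are what distinguish the staircase from the ramp.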

  1. Direct Method Transcription for a Human-Class Translunar Injection Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Witzberger, Kevin E.; Zeiler, Tom

    2012-01-01

    This paper presents a new trajectory optimization software package developed in the framework of a low-to-high fidelity 3 degrees-of-freedom (DOF)/6-DOF vehicle simulation program named Mission Analysis Simulation Tool in Fortran (MASTIF) and its application to a translunar trajectory optimization problem. The functionality of the developed optimization package is implemented as a new "mode" in generalized settings to make it applicable for a general trajectory optimization problem. In doing so, a direct optimization method using collocation is employed for solving the problem. Trajectory optimization problems in MASTIF are transcribed to a constrained nonlinear programming (NLP) problem and solved with SNOPT, a commercially available NLP solver. A detailed description of the optimization software developed is provided as well as the transcription specifics for the translunar injection (TLI) problem. The analysis includes a 3-DOF trajectory TLI optimization and a 3-DOF vehicle TLI simulation using closed-loop guidance.

  2. Post Pareto optimization-A case

    NASA Astrophysics Data System (ADS)

    Popov, Stoyan; Baeva, Silvia; Marinova, Daniela

    2017-12-01

    Simulation performance may be evaluated according to multiple quality measures that are in competition and their simultaneous consideration poses a conflict. In the current study we propose a practical framework for investigating such simulation performance criteria, exploring the inherent conflicts amongst them and identifying the best available tradeoffs, based upon multi-objective Pareto optimization. This approach necessitates the rigorous derivation of performance criteria to serve as objective functions and undergo vector optimization. We demonstrate the effectiveness of our proposed approach by applying it with multiple stochastic quality measures. We formulate performance criteria of this use-case, pose an optimization problem, and solve it by means of a simulation-based Pareto approach. Upon attainment of the underlying Pareto Frontier, we analyze it and prescribe preference-dependent configurations for the optimal simulation training.

  3. Field development planning using simulated annealing - optimal economic well scheduling and placement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckner, B.L.; Xong, X.

    1995-12-31

    A method for optimizing the net present value of a full field development by varying the placement and sequence of production wells is presented. This approach is automated and combines an economics package and Mobil's in-house simulator, PEGASUS, within a simulated annealing optimization engine. A novel framing of the well placement and scheduling problem as a classic "travelling salesman problem" is required before optimization via simulated annealing can be applied practically. An example of a full field development using this technique shows that non-uniform well spacings are optimal (from an NPV standpoint) when the effects of well interference and variable reservoir properties are considered. Examples of optimizing field NPV with variable well costs also show that non-uniform well spacings are optimal. Project NPV increases of 25 to 30 million dollars were shown using the optimal, non-uniform development versus reasonable, uniform developments. The ability of this technology to deduce these non-uniform well spacings opens up many potential applications that should materially impact the economic performance of field developments.
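The annealing idea above, with the drilling sequence as the decision variable (the "travelling salesman" framing), can be sketched as follows. The per-well revenues, discount rate, and cooling schedule are invented for illustration and stand in for the economics package and reservoir simulator:

```python
import math, random

# Simulated annealing over well drilling order: NPV discounts each well's
# (hypothetical) revenue by the period in which it comes online, so
# high-revenue wells should be drilled early. All figures are toy values.

random.seed(1)
revenues = [5.0, 3.0, 8.0, 2.0, 6.0]   # illustrative per-well revenues
rate = 0.10                             # annual discount rate

def npv(order):
    return sum(revenues[w] / (1 + rate) ** t for t, w in enumerate(order))

order = list(range(len(revenues)))
best = order[:]
temp = 1.0
for step in range(2000):
    i, j = random.sample(range(len(order)), 2)
    cand = order[:]
    cand[i], cand[j] = cand[j], cand[i]  # swap two wells in the sequence
    delta = npv(cand) - npv(order)
    if delta > 0 or random.random() < math.exp(delta / temp):
        order = cand                     # accept improvements, and worse
        if npv(order) > npv(best):       # moves with Boltzmann probability
            best = order[:]
    temp *= 0.997                        # geometric cooling schedule

print(best, round(npv(best), 2))
```

In the paper's setting, evaluating a candidate order means running the reservoir simulator and economics package rather than this one-line NPV formula, which is why annealing's modest number of evaluations matters.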

  4. Sequential use of simulation and optimization in analysis and planning

    Treesearch

    Hans R. Zuuring; Jimmie D. Chew; J. Greg Jones

    2000-01-01

    Management activities are analyzed at landscape scales employing both simulation and optimization. SIMPPLLE, a stochastic simulation modeling system, is initially applied to assess the risks associated with a specific natural process occurring on the current landscape without management treatments, but with fire suppression. These simulation results are input into...

  5. Relationship Between Optimal Gain and Coherence Zone in Flight Simulation

    NASA Technical Reports Server (NTRS)

    Gracio, Bruno Jorge Correia; Pais, Ana Rita Valente; vanPaassen, M. M.; Mulder, Max; Kely, Lon C.; Houck, Jacob A.

    2011-01-01

    In motion simulation, the inertial information generated by the motion platform usually differs from the visual information in the simulator displays, due to the physical limits of the motion platform. However, for small motions that are within the physical limits of the motion platform, one-to-one motion, i.e. visual information equal to inertial information, is possible. Previous studies have shown that one-to-one motion is often judged as too strong, causing researchers to lower the inertial amplitude. When trying to measure the optimal inertial gain for a visual amplitude, we found a zone of optimal gains instead of a single value. This result seems related to the coherence zones that have been measured in flight simulation studies. However, the optimal gain results were never directly related to the coherence zones. In this study we investigated whether the optimal gain measurements are the same as the coherence zone measurements. We also tried to infer whether the results obtained from the two measurements can be used to differentiate between simulators with different configurations. An experiment was conducted at the NASA Langley Research Center using both the Cockpit Motion Facility and the Visual Motion Simulator. The results show that the inertial gains obtained with the optimal gain measurements differ from those obtained with the coherence zone measurements. The optimal gain is within the coherence zone. The point of mean optimal gain was lower and further from the one-to-one line than the point of mean coherence. The zone width obtained for the coherence zone measurements depended on the visual amplitude and frequency. For the optimal gain, the zone width remained constant when the visual amplitude and frequency were varied. We found no effect of the simulator configuration in either the coherence zone or the optimal gain measurements.

  6. A linked simulation-optimization model for solving the unknown groundwater pollution source identification problems.

    PubMed

    Ayvaz, M Tamer

    2010-09-20

    This study proposes a linked simulation-optimization model for solving unknown groundwater pollution source identification problems. In the proposed model, the MODFLOW and MT3DMS packages are used to simulate the flow and transport processes in the groundwater system. These models are then integrated with an optimization model based on the heuristic harmony search (HS) algorithm. In the proposed simulation-optimization model, the locations and release histories of the pollution sources are treated as explicit decision variables and determined through the optimization model. In addition, an implicit solution procedure is proposed to determine the optimum number of pollution sources, which is an advantage of this model. The performance of the proposed model is evaluated on two hypothetical examples with simple and complex aquifer geometries, measurement error conditions, and different HS solution parameter sets. The identification results indicated that the proposed simulation-optimization model is effective and may be used to solve inverse pollution source identification problems. Copyright (c) 2010 Elsevier B.V. All rights reserved.
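A minimal harmony-search loop in the spirit of the linked model above might look like this. The two-variable forward model is a crude stand-in for MODFLOW/MT3DMS, and all parameter values are illustrative assumptions:

```python
import random

# Harmony search for an inverse problem: tune two (hypothetical) source
# release rates to minimize misfit against observed concentrations.
# The linear "transport" model below merely stands in for a real simulator.

random.seed(0)
true_rates = [3.0, 7.0]
def forward(rates):
    return [0.6 * rates[0] + 0.2 * rates[1],
            0.1 * rates[0] + 0.5 * rates[1]]
observed = forward(true_rates)

def misfit(rates):
    return sum((m - o) ** 2 for m, o in zip(forward(rates), observed))

HMS, HMCR, PAR, BW = 10, 0.9, 0.3, 0.5   # standard HS control parameters
memory = [[random.uniform(0, 10) for _ in range(2)] for _ in range(HMS)]
for _ in range(3000):
    new = []
    for d in range(2):
        if random.random() < HMCR:        # draw from harmony memory...
            v = random.choice(memory)[d]
            if random.random() < PAR:     # ...with optional pitch adjustment
                v += random.uniform(-BW, BW)
        else:                             # or take a fresh random value
            v = random.uniform(0, 10)
        new.append(v)
    worst = max(memory, key=misfit)       # replace worst harmony if improved
    if misfit(new) < misfit(worst):
        memory[memory.index(worst)] = new

best = min(memory, key=misfit)
print(best)  # approaches the true rates [3.0, 7.0]
```

The paper's implicit procedure for the number of sources would correspond to letting the chromosome length itself vary, which this fixed-dimension sketch omits.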

  7. Modeling of optical mirror and electromechanical behavior

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Lu, Chao; Liu, Zishun; Liu, Ai Q.; Zhang, Xu M.

    2001-10-01

    This paper presents finite element (FE) simulation and theoretical analysis of novel MEMS fiber-optical switches actuated by electrostatic attraction. FE simulations of the switches under static and dynamic loading are first carried out to reveal the mechanical characteristics: the minimum or critical switching voltages, the natural frequencies, mode shapes, and response under different levels of electrostatic attraction load. To validate the FE simulation results, a theoretical (or analytical) model is then developed for one specific switch, i.e., Plate_40_104. Good agreement is found between the FE simulation and the analytical results. From both FE simulation and theoretical analysis, the critical switching voltage for Plate_40_104 is derived to be 238 V for a switching angle of 12°. The critical switching-on and switching-off times are 431 μs and 67 μs, respectively. The present study not only develops good FE and analytical models, but also demonstrates step by step a method to simplify a real optical switch structure with reference to the FE simulation results for analytical purposes. With the FE and analytical models, it is easy to obtain any information about the mechanical behavior of the optical switches, which is helpful in yielding optimized designs.

  8. The Optimization of Spacer Engineering for Capacitor-Less DRAM Based on the Dual-Gate Tunneling Transistor.

    PubMed

    Li, Wei; Liu, Hongxia; Wang, Shulong; Chen, Shupeng; Wang, Qianqiong

    2018-03-05

    The DRAM based on the dual-gate tunneling FET (DGTFET) has the advantages of a capacitor-less structure and a high retention time. In this paper, the optimization of spacer engineering for DGTFET DRAM is systematically investigated with the Silvaco-Atlas tool to further improve its performance, including reduction of the reading "0" current and extension of the retention time. The simulation results show that the spacers at the source and drain sides should use low-k and high-k dielectrics, respectively, which enhances the reading "1" current and reduces the reading "0" current. Applying this optimized spacer engineering, the DGTFET DRAM obtains optimum performance: an extremely low reading "0" current (10⁻¹⁴ A/μm) and a large retention time (10 s), which decreases its static power consumption and dynamic refresh rate. The low reading "0" current also enhances the current ratio (10⁷) of reading "1" to reading "0". Furthermore, the analysis of scalability reveals an inherent shortcoming, which offers a direction for further investigation of DGTFET DRAM.

  9. The Optimization of Spacer Engineering for Capacitor-Less DRAM Based on the Dual-Gate Tunneling Transistor

    NASA Astrophysics Data System (ADS)

    Li, Wei; Liu, Hongxia; Wang, Shulong; Chen, Shupeng; Wang, Qianqiong

    2018-03-01

    The DRAM based on the dual-gate tunneling FET (DGTFET) has the advantages of a capacitor-less structure and a high retention time. In this paper, the optimization of spacer engineering for DGTFET DRAM is systematically investigated with the Silvaco-Atlas tool to further improve its performance, including reduction of the reading "0" current and extension of the retention time. The simulation results show that the spacers at the source and drain sides should use low-k and high-k dielectrics, respectively, which enhances the reading "1" current and reduces the reading "0" current. Applying this optimized spacer engineering, the DGTFET DRAM obtains optimum performance: an extremely low reading "0" current (10⁻¹⁴ A/μm) and a large retention time (10 s), which decreases its static power consumption and dynamic refresh rate. The low reading "0" current also enhances the current ratio (10⁷) of reading "1" to reading "0". Furthermore, the analysis of scalability reveals an inherent shortcoming, which offers a direction for further investigation of DGTFET DRAM.

  10. Optimization Research of Generation Investment Based on Linear Programming Model

    NASA Astrophysics Data System (ADS)

    Wu, Juan; Ge, Xueqian

    Linear programming is an important branch of operations research and a mathematical method that assists people in scientific management. GAMS is an advanced simulation and optimization modeling language that combines many classes of mathematical programming, such as linear programming (LP), nonlinear programming (NLP), and mixed-integer programming (MIP), with system simulation. In this paper, based on a linear programming model, optimized generation-investment decisions are simulated and analyzed. Finally, the optimal installed capacity of the power plants and the total cost are obtained, which provides a rational basis for optimized investment decisions.
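A toy version of such a generation-investment LP, here solved with SciPy rather than GAMS, might look like the following. All cost and capacity figures are hypothetical:

```python
from scipy.optimize import linprog

# Illustrative LP: choose installed capacities x1 (baseload) and x2 (wind)
# to minimize total cost subject to meeting a firm-capacity requirement.
# Coefficients are invented for illustration.

c = [60.0, 80.0]                 # cost per MW of each plant type
A_ub = [[-1.0, -0.4]]            # wind derated to 40% firm capacity
b_ub = [-1000.0]                 # x1 + 0.4*x2 >= 1000 MW, negated for <=
bounds = [(0, 1500), (0, 800)]   # build limits per technology

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, res.fun)            # optimal capacities and minimum total cost
```

Here baseload costs 60 per firm MW versus 80/0.4 = 200 for derated wind, so the solver builds 1000 MW of baseload only; changing the derating or costs shifts the optimal mix, which is the kind of sensitivity the paper's GAMS model explores.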

  11. Mechanical models of sandfish locomotion reveal principles of high performance subsurface sand-swimming

    PubMed Central

    Maladen, Ryan D.; Ding, Yang; Umbanhowar, Paul B.; Kamor, Adam; Goldman, Daniel I.

    2011-01-01

    We integrate biological experiment, empirical theory, numerical simulation and a physical model to reveal principles of undulatory locomotion in granular media. High-speed X-ray imaging of the sandfish lizard, Scincus scincus, in 3 mm glass particles shows that it swims within the medium without using its limbs by propagating a single-period travelling sinusoidal wave down its body, resulting in a wave efficiency, η, the ratio of its average forward speed to the wave speed, of approximately 0.5. A resistive force theory (RFT) that balances granular thrust and drag forces along the body predicts η close to the observed value. We test this prediction against two other more detailed modelling approaches: a numerical model of the sandfish coupled to a discrete particle simulation of the granular medium, and an undulatory robot that swims within granular media. Using these models and analytical solutions of the RFT, we vary the ratio of undulation amplitude to wavelength (A/λ) and demonstrate an optimal condition for sand-swimming, which for a given A results from the competition between η and λ. The RFT, in agreement with the simulated and physical models, predicts that for a single-period sinusoidal wave, maximal speed occurs for A/λ ≈ 0.2, the same kinematics used by the sandfish. PMID:21378020
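The competition between wave efficiency η and wavelength λ described above can be illustrated numerically: forward speed is v = η·f·λ, and for fixed amplitude A a longer wavelength raises the wave speed but lowers the efficiency. The efficiency model below is a made-up stand-in chosen only to produce an interior optimum near the reported A/λ ≈ 0.2; it is not the paper's resistive force theory:

```python
import numpy as np

# Toy tradeoff: sweep the amplitude-to-wavelength ratio r = A/lam for fixed
# amplitude A and frequency f. Efficiency eta is a hypothetical saturating
# function of r; wave speed f*lam falls as r grows, giving an interior max.

A, f = 1.0, 1.0
ratios = np.linspace(0.05, 1.0, 200)          # sweep of A/lam
lam = A / ratios
eta = ratios**2 / (0.04 + ratios**2)          # assumed efficiency curve
speed = eta * f * lam                         # forward speed v = eta * f * lam
print(ratios[np.argmax(speed)])               # ≈ 0.2, an interior optimum
```

The point of the sketch is structural: any efficiency curve that rises with A/λ while wave speed falls with it yields a speed-maximizing ratio in between, which is the competition the abstract attributes to the sandfish kinematics.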

  12. Inference for Stochastic Chemical Kinetics Using Moment Equations and System Size Expansion.

    PubMed

    Fröhlich, Fabian; Thomas, Philipp; Kazeroonian, Atefeh; Theis, Fabian J; Grima, Ramon; Hasenauer, Jan

    2016-07-01

    Quantitative mechanistic models are valuable tools for disentangling biochemical pathways and for achieving a comprehensive understanding of biological systems. However, to be quantitative the parameters of these models have to be estimated from experimental data. In the presence of significant stochastic fluctuations this is a challenging task as stochastic simulations are usually too time-consuming and a macroscopic description using reaction rate equations (RREs) is no longer accurate. In this manuscript, we therefore consider moment-closure approximation (MA) and the system size expansion (SSE), which approximate the statistical moments of stochastic processes and tend to be more precise than macroscopic descriptions. We introduce gradient-based parameter optimization methods and uncertainty analysis methods for MA and SSE. Efficiency and reliability of the methods are assessed using simulation examples as well as by an application to data for Epo-induced JAK/STAT signaling. The application revealed that even if merely population-average data are available, MA and SSE improve parameter identifiability in comparison to RRE. Furthermore, the simulation examples revealed that the resulting estimates are more reliable for an intermediate volume regime. In this regime the estimation error is reduced and we propose methods to determine the regime boundaries. These results illustrate that inference using MA and SSE is feasible and possesses a high sensitivity.

  13. Inference for Stochastic Chemical Kinetics Using Moment Equations and System Size Expansion

    PubMed Central

    Thomas, Philipp; Kazeroonian, Atefeh; Theis, Fabian J.; Grima, Ramon; Hasenauer, Jan

    2016-01-01

    Quantitative mechanistic models are valuable tools for disentangling biochemical pathways and for achieving a comprehensive understanding of biological systems. However, to be quantitative the parameters of these models have to be estimated from experimental data. In the presence of significant stochastic fluctuations this is a challenging task as stochastic simulations are usually too time-consuming and a macroscopic description using reaction rate equations (RREs) is no longer accurate. In this manuscript, we therefore consider moment-closure approximation (MA) and the system size expansion (SSE), which approximate the statistical moments of stochastic processes and tend to be more precise than macroscopic descriptions. We introduce gradient-based parameter optimization methods and uncertainty analysis methods for MA and SSE. Efficiency and reliability of the methods are assessed using simulation examples as well as by an application to data for Epo-induced JAK/STAT signaling. The application revealed that even if merely population-average data are available, MA and SSE improve parameter identifiability in comparison to RRE. Furthermore, the simulation examples revealed that the resulting estimates are more reliable for an intermediate volume regime. In this regime the estimation error is reduced and we propose methods to determine the regime boundaries. These results illustrate that inference using MA and SSE is feasible and possesses a high sensitivity. PMID:27447730

  14. Evaluation of traffic signal timing optimization methods using a stochastic and microscopic simulation program.

    DOT National Transportation Integrated Search

    2003-01-01

    This study evaluated existing traffic signal optimization programs including Synchro,TRANSYT-7F, and genetic algorithm optimization using real-world data collected in Virginia. As a first step, a microscopic simulation model, VISSIM, was extensively ...

  15. Teaching childbirth with high-fidelity simulation. Is it better observing the scenario during the briefing session?

    PubMed

    Cuerva, Marcos J; Piñel, Carlos S; Martin, Lourdes; Espinosa, Jose A; Corral, Octavio J; Mendoza, Nicolás

    2018-02-12

    The design of optimal courses for obstetric undergraduate teaching is a relevant question. This study evaluates two different designs of simulator-based learning activity on childbirth with regard to respect for the patient, obstetric manoeuvres, interpretation of cardiotocography (CTG) tracings and infection prevention. In this randomised experimental study, two groups of undergraduate students performed two simulator-based learning activities on childbirth that differed in the content of their briefing sessions. For the first group, the briefing session included observation of a scenario performed properly by the teachers according to Spanish clinical practice guidelines on care in normal childbirth, whereas for the second group it did not, and the students observed the properly performed scenario only after the simulation process. The group that observed a properly performed scenario after the simulation obtained worse grades during the simulation, but better grades during the debriefing and evaluation. Simulator use in childbirth may be more fruitful when the medical students observe correct performance at the completion of the scenario rather than at the start. Impact statement What is already known on this subject? There is a scarcity of literature about the design of optimal high-fidelity simulation training in childbirth. It is known that preparing simulator-based learning activities is a complex process. Simulator-based learning includes the following steps: briefing, simulation, debriefing and evaluation. The most important part of high-fidelity simulations is the debriefing. A good briefing and simulation are of high relevance in order to have a fruitful debriefing session. What do the results of this study add? Our study describes a full simulator-based learning activity on childbirth that can be reproduced in similar facilities. 
The findings of this study add that high-fidelity simulation training in childbirth is favoured by a short briefing session and an abrupt start to the scenario, rather than a long briefing session that includes direct instruction in the scenario. What are the implications of these findings for clinical practice and/or further research? The findings of this study reveal what to include in the briefing of simulator-based learning activities on childbirth. These findings have implications in medical teaching and in medical practice.

  16. Teaching Simulation and Computer-Aided Separation Optimization in Liquid Chromatography by Means of Illustrative Microsoft Excel Spreadsheets

    ERIC Educational Resources Information Center

    Fasoula, S.; Nikitas, P.; Pappa-Louisi, A.

    2017-01-01

    A series of Microsoft Excel spreadsheets were developed to simulate the process of separation optimization under isocratic and simple gradient conditions. The optimization procedure is performed in a stepwise fashion using simple macros for an automatic application of this approach. The proposed optimization approach involves modeling of the peak…

  17. Assessment of Chlorophyll-a Algorithms Considering Different Trophic Statuses and Optimal Bands.

    PubMed

    Salem, Salem Ibrahim; Higa, Hiroto; Kim, Hyungjun; Kobayashi, Hiroshi; Oki, Kazuo; Oki, Taikan

    2017-07-31

    Numerous algorithms have been proposed to retrieve chlorophyll-a concentrations in Case 2 waters; however, the retrieval accuracy is far from satisfactory. In this research, seven algorithms are assessed with different band combinations of multispectral and hyperspectral bands using linear (LN), quadratic polynomial (QP) and power (PW) regression approaches, resulting in altogether 43 algorithmic combinations. These algorithms are evaluated by using simulated and measured datasets to understand the strengths and limitations of these algorithms. Two simulated datasets comprising 500,000 reflectance spectra each, both based on wide ranges of inherent optical properties (IOPs), are generated for the calibration and validation stages. Results reveal that the regression approach (i.e., LN, QP, and PW) has more influence on the simulated dataset than on the measured one. The algorithms that incorporated linear regression provide the highest retrieval accuracy for the simulated dataset. Results from the simulated datasets reveal that the 3-band (3b) algorithms that incorporate the 665-nm and 680-nm bands and the band-tuning selection approach outperformed other algorithms, with root mean square errors (RMSE) of 15.87 mg·m−3, 16.25 mg·m−3, and 19.05 mg·m−3, respectively. The spatial distribution of the best performing algorithms, for various combinations of chlorophyll-a (Chla) and non-algal particle (NAP) concentrations, shows that the 3b_tuning_QP and 3b_680_QP algorithms outperform other algorithms in terms of minimum RMSE frequency of 33.19% and 60.52%, respectively. However, the two algorithms failed to accurately retrieve Chla for many combinations of Chla and NAP, particularly for low Chla and NAP concentrations. In addition, the spatial distribution emphasizes that no single algorithm can provide outstanding accuracy for Chla retrieval and that multiple algorithms should be included to reduce the error. 
Comparing the results of the measured and simulated datasets reveals that the algorithms that incorporate the 665-nm band outperform other algorithms for the measured dataset (RMSE = 36.84 mg·m−3), while algorithms that incorporate the band-tuning approach provide the highest retrieval accuracy for the simulated dataset (RMSE = 25.05 mg·m−3).
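Three-band indices of the kind assessed above are commonly written in the Gitelson-type form X = (1/R(λ1) − 1/R(λ2))·R(λ3), which is then regressed against Chla. A sketch with hypothetical reflectances and assumed linear-regression coefficients (the paper tunes the bands and compares LN/QP/PW fits; nothing below is its calibration):

```python
# Three-band chlorophyll index and RMSE evaluation, with made-up spectra.

def three_band_index(r1, r2, r3):
    """Gitelson-type 3-band index: (1/R(l1) - 1/R(l2)) * R(l3)."""
    return (1.0 / r1 - 1.0 / r2) * r3

def rmse(pred, obs):
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

# Hypothetical spectra: (R665, R708, R753) with matching Chla (mg m^-3)
spectra = [(0.020, 0.030, 0.025), (0.015, 0.030, 0.028), (0.012, 0.032, 0.030)]
chla_obs = [10.4, 25.0, 45.6]

x = [three_band_index(*s) for s in spectra]
# Linear (LN) calibration Chla = a*X + b; coefficients a, b assumed here
# rather than fitted, purely to illustrate the RMSE comparison step.
a, b = 30.0, -2.0
chla_pred = [a * xi + b for xi in x]
print(rmse(chla_pred, chla_obs))
```

Swapping the index function (2-band ratios, tuned bands) or the regression form (QP, PW) and recomputing the RMSE is exactly the grid of 43 algorithmic combinations the study evaluates.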

  18. Assessment of Chlorophyll-a Algorithms Considering Different Trophic Statuses and Optimal Bands

    PubMed Central

    Higa, Hiroto; Kobayashi, Hiroshi; Oki, Kazuo

    2017-01-01

    Numerous algorithms have been proposed to retrieve chlorophyll-a concentrations in Case 2 waters; however, the retrieval accuracy is far from satisfactory. In this research, seven algorithms are assessed with different band combinations of multispectral and hyperspectral bands using linear (LN), quadratic polynomial (QP) and power (PW) regression approaches, resulting in altogether 43 algorithmic combinations. These algorithms are evaluated by using simulated and measured datasets to understand the strengths and limitations of these algorithms. Two simulated datasets comprising 500,000 reflectance spectra each, both based on wide ranges of inherent optical properties (IOPs), are generated for the calibration and validation stages. Results reveal that the regression approach (i.e., LN, QP, and PW) has more influence on the simulated dataset than on the measured one. The algorithms that incorporated linear regression provide the highest retrieval accuracy for the simulated dataset. Results from simulated datasets reveal that the 3-band (3b) algorithm that incorporate 665-nm and 680-nm bands and band tuning selection approach outperformed other algorithms with root mean square error (RMSE) of 15.87 mg·m−3, 16.25 mg·m−3, and 19.05 mg·m−3, respectively. The spatial distribution of the best performing algorithms, for various combinations of chlorophyll-a (Chla) and non-algal particles (NAP) concentrations, show that the 3b_tuning_QP and 3b_680_QP outperform other algorithms in terms of minimum RMSE frequency of 33.19% and 60.52%, respectively. However, the two algorithms failed to accurately retrieve Chla for many combinations of Chla and NAP, particularly for low Chla and NAP concentrations. In addition, the spatial distribution emphasizes that no single algorithm can provide outstanding accuracy for Chla retrieval and that multi-algorithms should be included to reduce the error. 
Comparing the results of the measured and simulated datasets reveals that the algorithms incorporating the 665-nm band outperform the other algorithms for the measured dataset (RMSE = 36.84 mg·m−3), while the algorithms incorporating the band-tuning approach provide the highest retrieval accuracy for the simulated dataset (RMSE = 25.05 mg·m−3). PMID:28758984
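As an illustration of the LN/QP/PW regression step, the sketch below fits all three forms to a synthetic 3-band index. The index form and the data are hypothetical stand-ins, not the paper's calibration spectra:

```python
import numpy as np

def three_band_index(r665, r708, r753):
    # A common 3-band form: (1/R(665) - 1/R(708)) * R(753)
    return (1.0 / r665 - 1.0 / r708) * r753

# Synthetic calibration data (illustrative only)
rng = np.random.default_rng(0)
chla = rng.uniform(1.0, 100.0, 500)                 # mg m^-3
index = 0.02 * chla + rng.normal(0.0, 0.05, 500)    # assumed index response

def rmse(pred, obs):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

# LN and QP regressions of Chla against the index
ln_fit = np.polyfit(index, chla, 1)
qp_fit = np.polyfit(index, chla, 2)
ln_rmse = rmse(np.polyval(ln_fit, index), chla)
qp_rmse = rmse(np.polyval(qp_fit, index), chla)

# PW regression: power law fitted in log-log space (positive indices only)
pos = index > 0
pw_fit = np.polyfit(np.log(index[pos]), np.log(chla[pos]), 1)
pw_pred = np.exp(np.polyval(pw_fit, np.log(index[pos])))
pw_rmse = rmse(pw_pred, chla[pos])
```

Each of the 43 combinations in the paper is one such pairing of a band arithmetic with a regression form, scored by RMSE against the validation set.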

  19. PLL application research of a broadband MEMS phase detector: Theory, measurement and modeling

    NASA Astrophysics Data System (ADS)

    Han, Juzheng; Liao, Xiaoping

    2017-06-01

    This paper evaluates the capability of a broadband MEMS phase detector in the application of phase-locked loops (PLLs) from the perspectives of theory, measurement and modeling. For the first time, it demonstrates how the broadband property and an optimized structure are realized through cascaded transmission lines and ANSYS simulations. The broadband MEMS phase detector shows potential in PLL applications owing to its dc voltage output and large power-handling ability, which is important for munition applications. S-parameters of the power combiner in the MEMS phase detector are measured, with S11 better than -15 dB and S23 better than -10 dB over the whole X-band. Compared to our previous works, extended phase-detection measurements are performed, focusing on signals at larger power levels up to 1 W. Cosine tendencies are revealed between the output voltage and the phase difference for both small and large signals. A simulation approach based on equivalent-circuit modeling is proposed to study the PLL application of the broadband MEMS phase detector. Synchronization and tracking properties are revealed.

  20. Multi-model groundwater-management optimization: reconciling disparate conceptual models

    NASA Astrophysics Data System (ADS)

    Timani, Bassel; Peralta, Richard

    2015-09-01

    Disagreement among policymakers often involves policy issues and differences between the decision makers' implicit utility functions. Significant disagreement can also exist concerning conceptual models of the physical system. Disagreement on the validity of a single simulation model delays discussion on policy issues and prevents the adoption of consensus management strategies. For such a contentious situation, the proposed multi-conceptual model optimization (MCMO) can help stakeholders reach a compromise strategy. MCMO computes mathematically optimal strategies that simultaneously satisfy analogous constraints and bounds in multiple numerical models that differ in boundary conditions, hydrogeologic stratigraphy, and discretization. Shadow prices and trade-offs guide the process of refining the first MCMO-developed multi-model strategy into a realistic compromise management strategy. By employing automated cycling, MCMO is practical for linear and nonlinear aquifer systems. In this reconnaissance study, MCMO is applied to the multilayer Cache Valley (Utah and Idaho, USA) river-aquifer system using two simulation models with analogous background conditions but different vertical discretization and boundary conditions. The objective is to maximize additional safe pumping (beyond current pumping), subject to constraints on groundwater head and seepage from the aquifer to surface waters. MCMO application reveals that, in order to protect the local ecosystem, increased groundwater pumping can satisfy only 40% of the projected increase in water demand. To explore the possibility of increasing that pumping while protecting the ecosystem, MCMO clearly identifies localities requiring additional field data. MCMO is applicable to areas and optimization problems beyond those considered here, although the steps to prepare comparable sub-models for MCMO use are area-dependent.

  1. Free Energy Minimization by Simulated Annealing with Applications to Lithospheric Slabs and Mantle Plumes

    NASA Astrophysics Data System (ADS)

    Bina, C. R.

    An optimization algorithm based upon the method of simulated annealing is of utility in calculating equilibrium phase assemblages as functions of pressure, temperature, and chemical composition. Operating by analogy to the statistical mechanics of the chemical system, it is applicable both to problems of strict chemical equilibrium and to problems involving metastability. The method reproduces known phase diagrams and illustrates the expected thermal deflection of phase transitions in thermal models of subducting lithospheric slabs and buoyant mantle plumes. It reveals temperature-induced changes in phase transition sharpness and the stability of Fe-rich γ phase within an α+γ field in cold slab thermal models, and it suggests that transitions such as the possible breakdown of silicate perovskite to mixed oxides can amplify velocity anomalies.
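The annealing procedure described above can be sketched for a one-dimensional toy free-energy surface with a metastable well; the potential, step size and cooling schedule are illustrative, not the chemical system of the abstract:

```python
import math
import random

def anneal(f, x0, t0=1.0, cooling=0.995, steps=5000, step=0.2, seed=1):
    """Minimal simulated annealing: Metropolis acceptance with geometric cooling."""
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best_x, best_f = x, fx
    for _ in range(steps):
        cand = x + rng.gauss(0.0, step)
        fc = f(cand)
        # Always accept downhill moves; accept uphill with Boltzmann probability
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
        if fx < best_f:
            best_x, best_f = x, fx
        t *= cooling
    return best_x, best_f

# Tilted double well: metastable minimum near x = +1, global minimum near x = -1
g = lambda x: (x**2 - 1.0)**2 + 0.3 * x
xmin, gmin = anneal(g, x0=1.0)   # start in the metastable basin
```

With the temperature high early on, the walker can climb out of the metastable basin before cooling freezes it near the global minimum, which is how the method handles the metastability mentioned above.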

  2. A framework for optimization and quantification of uncertainty and sensitivity for developing carbon capture systems

    DOE PAGES

    Eslick, John C.; Ng, Brenda; Gao, Qianwen; ...

    2014-12-31

    Under the auspices of the U.S. Department of Energy’s Carbon Capture Simulation Initiative (CCSI), a Framework for Optimization and Quantification of Uncertainty and Sensitivity (FOQUS) has been developed. This tool enables carbon capture systems to be rapidly synthesized and rigorously optimized, in an environment that accounts for and propagates uncertainties in parameters and models. FOQUS currently enables (1) the development of surrogate algebraic models utilizing the ALAMO algorithm, which can be used for superstructure optimization to identify optimal process configurations, (2) simulation-based optimization utilizing derivative free optimization (DFO) algorithms with detailed black-box process models, and (3) rigorous uncertainty quantification through PSUADE. FOQUS utilizes another CCSI technology, the Turbine Science Gateway, to manage the thousands of simulated runs necessary for optimization and UQ. Thus, this computational framework has been demonstrated for the design and analysis of a solid sorbent based carbon capture system.

  3. Optimizing Cognitive Load for Learning from Computer-Based Science Simulations

    ERIC Educational Resources Information Center

    Lee, Hyunjeong; Plass, Jan L.; Homer, Bruce D.

    2006-01-01

    How can cognitive load in visual displays of computer simulations be optimized? Middle-school chemistry students (N = 257) learned with a simulation of the ideal gas law. Visual complexity was manipulated by separating the display of the simulations in two screens (low complexity) or presenting all information on one screen (high complexity). The…

  4. Automatic CT simulation optimization for radiation therapy: A general strategy.

    PubMed

    Li, Hua; Yu, Lifeng; Anastasio, Mark A; Chen, Hsin-Chen; Tan, Jun; Gay, Hiram; Michalski, Jeff M; Low, Daniel A; Mutic, Sasa

    2014-03-01

    In radiation therapy, x-ray computed tomography (CT) simulation protocol specifications should be driven by the treatment planning requirements rather than duplicating diagnostic CT screening protocols. The purpose of this study was to develop a general strategy that allows for automatically, prospectively, and objectively determining the optimal patient-specific CT simulation protocols based on radiation-therapy goals, namely, maintenance of contouring quality and integrity while minimizing patient CT simulation dose. The authors proposed a general prediction strategy that provides automatic optimal CT simulation protocol selection as a function of patient size and treatment planning task. The optimal protocol is the one that delivers the minimum dose required to provide a CT simulation scan that yields accurate contours. Accurate treatment plans depend on accurate contours in order to conform the dose to actual tumor and normal organ positions. An image quality index, defined to characterize how simulation scan quality affects contour delineation, was developed and used to benchmark the contouring accuracy and treatment plan quality within the prediction strategy. A clinical workflow was developed to select the optimal CT simulation protocols incorporating patient size, target delineation, and radiation dose efficiency. An experimental study using an anthropomorphic pelvis phantom with added-bolus layers was used to demonstrate how the proposed prediction strategy could be implemented and how the optimal CT simulation protocols could be selected for prostate cancer patients based on patient size and treatment planning task. Clinical IMRT prostate treatment plans for seven CT scans with varied image quality indices were separately optimized and compared to verify the trace of target and organ dosimetry coverage. Based on the phantom study, the optimal image quality index for accurate manual prostate contouring was 4.4. 
The optimal tube potentials for patient sizes of 38, 43, 48, 53, and 58 cm were 120, 140, 140, 140, and 140 kVp, respectively, and the corresponding minimum CTDIvol for achieving the optimal image quality index 4.4 were 9.8, 32.2, 100.9, 241.4, and 274.1 mGy, respectively. For patients with lateral sizes of 43-58 cm, 120-kVp scan protocols yielded up to 165% greater radiation dose relative to 140-kVp protocols, and 140-kVp protocols always yielded a greater image quality index compared to the same dose-level 120-kVp protocols. The trace of target and organ dosimetry coverage and the γ passing rates of seven IMRT dose distribution pairs indicated the feasibility of the proposed image quality index for the prediction strategy. A general strategy to predict the optimal CT simulation protocols in a flexible and quantitative way was developed that takes into account patient size, treatment planning task, and radiation dose. The experimental study indicated that the optimal CT simulation protocol and the corresponding radiation dose varied significantly for different patient sizes, contouring accuracy, and radiation treatment planning tasks.
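The reported size-to-protocol mapping lends itself to a simple lookup; the nearest-size selection rule below is our illustrative assumption, not part of the published strategy:

```python
# Optimal protocol per lateral phantom size (cm), from the phantom study:
# size -> (tube potential in kVp, minimum CTDIvol in mGy for index 4.4)
protocols = {38: (120, 9.8), 43: (140, 32.2), 48: (140, 100.9),
             53: (140, 241.4), 58: (140, 274.1)}

def select_protocol(lateral_size_cm):
    """Pick the protocol measured for the nearest phantom size (assumed rule)."""
    size = min(protocols, key=lambda s: abs(s - lateral_size_cm))
    return protocols[size]
```

For example, a 45 cm patient maps to the 43 cm entry, i.e. 140 kVp at 32.2 mGy.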

  5. Program to Optimize Simulated Trajectories (POST). Volume 2: Utilization manual

    NASA Technical Reports Server (NTRS)

    Bauer, G. L.; Cornick, D. E.; Habeger, A. R.; Petersen, F. M.; Stevenson, R.

    1975-01-01

    Information pertinent to users of the program to optimize simulated trajectories (POST) is presented. The input required and output available are described for each of the trajectory and targeting/optimization options. A sample input listing and resulting output are given.

  6. Using Two Simulation Tools to Teach Concepts in Introductory Astronomy: A Design-Based Research Approach

    NASA Astrophysics Data System (ADS)

    Maher, Pamela A.

    Technology in college classrooms has gone from being an enhancement to the learning experience to being something expected by both instructors and students. This design-based research investigation takes technology one step further by putting the teaching tools directly in the hands of students. The study examined the affordances and constraints of two simulation tools for use in introductory astronomy courses: a virtual reality headset and a fulldome immersive planetarium simulation. Participants, recruited over one academic year from astronomy classes at a two-year college (N = 67), manipulated a lunar surface flyby with the virtual reality headset and with a motion sensor device in the college fulldome planetarium. Data were collected in the form of two post-treatment questionnaires using Likert-type scales and one small-group interview intended to elicit the range of experiences participants had using the tools. Responses were analyzed quantitatively for optimal flyby speed and qualitatively for salient themes, using data reduction informed by a methodological framework of phenomenography. Analysis of the Immersion Questionnaire and Simulator Sickness Questionnaire data in SPSS determined the optimal flyby speed for college students to be 0.04 × the radius of the Earth (3,959 miles), or about 160 miles per second. Coding positive and negative remarks in MAXQDA software revealed a variety of participant experiences with each tool. 
Both tools offer potential to actively engage students with astronomy content in college lecture and laboratory courses.

  7. Surrogates for numerical simulations; optimization of eddy-promoter heat exchangers

    NASA Technical Reports Server (NTRS)

    Patera, Anthony T.

    1993-01-01

    Although the advent of fast and inexpensive parallel computers has rendered numerous previously intractable calculations feasible, many numerical simulations remain too resource-intensive to be directly inserted in engineering optimization efforts. An attractive alternative to direct insertion considers models for computational systems: the expensive simulation is invoked only to construct and validate a simplified input-output model; this simplified input-output model then serves as a simulation surrogate in subsequent engineering optimization studies. A simple 'Bayesian-validated' statistical framework for the construction, validation, and purposive application of static computer simulation surrogates is presented. As an example, dissipation-transport optimization of laminar-flow eddy-promoter heat exchangers is considered: parallel spectral element Navier-Stokes calculations serve to construct and validate surrogates for the flowrate and Nusselt number; these surrogates then represent the originating Navier-Stokes equations in the ensuing design process.
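The construct-validate-optimize workflow can be sketched as follows, with a cheap analytic function standing in (purely hypothetically) for an expensive Navier-Stokes solve:

```python
import numpy as np

def expensive_sim(x):
    # Stand-in for a costly simulation run (illustrative only)
    return np.sin(3.0 * x) + 0.5 * x

rng = np.random.default_rng(2)

# Construction stage: fit a cheap polynomial surrogate to a few expensive runs
x_train = np.sort(rng.uniform(0.0, 2.0, 30))
surrogate = np.polynomial.Polynomial.fit(x_train, expensive_sim(x_train), deg=8)

# Validation stage: check the surrogate against fresh expensive evaluations
x_val = rng.uniform(0.0, 2.0, 20)
val_err = float(np.max(np.abs(surrogate(x_val) - expensive_sim(x_val))))

# Design stage: optimize on the cheap surrogate instead of the simulator
x_grid = np.linspace(0.0, 2.0, 2001)
x_opt = float(x_grid[np.argmin(surrogate(x_grid))])
```

The validation error bounds how far a surrogate-based optimum can be trusted; the Bayesian-validated framework of the abstract formalizes exactly this kind of check statistically.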

  8. Monte Carlo ray-tracing simulations of luminescent solar concentrators for building integrated photovoltaics

    NASA Astrophysics Data System (ADS)

    Leow, Shin Woei; Corrado, Carley; Osborn, Melissa; Carter, Sue A.

    2013-09-01

    Luminescent solar concentrators (LSCs) can receive light from a wide range of angles, concentrating the captured light onto small photoactive areas. This enables greater incorporation of LSCs into building designs as windows, skylights and wall claddings, in addition to rooftop installations of current solar panels. By using relatively cheap luminescent dyes and acrylic waveguides to concentrate light onto a smaller area of photovoltaic (PV) cells, this technology has the potential to approach grid price parity. We employ a panel design in which the front-facing PV cells collect both direct and concentrated light, ensuring a gain factor greater than one. This also allows flexibility in determining the placement and percentage coverage of PV cells during the design process, balancing reabsorption losses against the desired power output and level of light concentration. To aid design optimization, a Monte Carlo ray-tracing program was developed to study photon transport and loss mechanisms in LSC panels. The program imports measured absorption/emission spectra and transmission coefficients as simulation parameters, with photon interactions in the panel determined by comparing calculated event probabilities with generated random numbers. LSC panels with multiple dyes or layers can also be simulated. Analysis of the results reveals the optimal panel dimensions and PV cell layouts for maximum power output for a given dye concentration, absorption/emission spectrum and quantum efficiency.
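The core of such a ray tracer is deciding each photon's fate by comparing event probabilities with random numbers. A minimal sketch, with illustrative probabilities in place of measured dye spectra:

```python
import random

def photon_fate(rng, p_abs=0.8, qe=0.95, p_trap=0.75, p_reabs=0.2):
    """Decide one photon's fate by comparing event probabilities with
    uniform random numbers (all probabilities here are illustrative)."""
    if rng.random() > p_abs:
        return "transmitted"            # never absorbed by the dye
    while True:
        if rng.random() > qe:
            return "nonradiative_loss"  # absorbed but not re-emitted
        if rng.random() > p_trap:
            return "escape_cone_loss"   # re-emitted outside guided modes
        if rng.random() > p_reabs:
            return "collected"          # guided to an edge PV cell
        # otherwise: reabsorbed by another dye molecule, loop again

def optical_efficiency(n=100_000, seed=3):
    rng = random.Random(seed)
    hits = sum(photon_fate(rng) == "collected" for _ in range(n))
    return hits / n
```

The reabsorption loop mirrors the repeated absorption/emission events in a real panel; for these numbers the analytic collection probability is about 0.53, which the Monte Carlo estimate approaches as the photon count grows.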

  9. Design, biometric simulation and optimization of a nano-enabled scaffold device for enhanced delivery of dopamine to the brain.

    PubMed

    Pillay, Samantha; Pillay, Viness; Choonara, Yahya E; Naidoo, Dinesh; Khan, Riaz A; du Toit, Lisa C; Ndesendo, Valence M K; Modi, Girish; Danckwerts, Michael P; Iyuke, Sunny E

    2009-12-01

    This study focused on the design, biometric simulation and optimization of an intracranial nano-enabled scaffold device (NESD) for the site-specific delivery of dopamine (DA) as a strategy to minimize the peripheral side-effects of conventional forms of Parkinson's disease therapy. The NESD was modulated through biometric simulation and computational prototyping to produce a binary crosslinked alginate scaffold embedding stable DA-loaded cellulose acetate phthalate (CAP) nanoparticles optimized in accordance with Box-Behnken statistical designs. The physicomechanical properties of the NESD were characterized and in vitro and in vivo release studies performed. Prototyping predicted a 3D NESD model with enhanced internal micro-architecture. SEM and TEM revealed spherical, uniform and non-aggregated DA-loaded nanoparticles with the presence of CAP (FTIR bands at 1070, 1242 and 2926 cm(-1)). An optimum nanoparticle size of 197 nm (PdI=0.03), a zeta potential of -34.00 mV and a DEE of 63% was obtained. The secondary crosslinker BaCl(2) imparted crystallinity resulting in significant thermal shifts between native CAP (T(g)=160-170 degrees C; T(m)=192 degrees C) and CAP nanoparticles (T(g)=260 degrees C; T(m)=268 degrees C). DA release displayed an initial lag phase of 24 h and peaked after 3 days, maintaining favorable CSF (10 microg/mL) versus systemic concentrations (1-2 microg/mL) over 30 days and above the inherent baseline concentration of DA (1 microg/mL) following implantation in the parenchyma of the frontal lobe of the Sprague-Dawley rat model. The strategy of coupling polymeric scaffold science and nanotechnology enhanced the site-specific delivery of DA from the NESD.

  10. BEM-based simulation of lung respiratory deformation for CT-guided biopsy.

    PubMed

    Chen, Dong; Chen, Weisheng; Huang, Lipeng; Feng, Xuegang; Peters, Terry; Gu, Lixu

    2017-09-01

    Accurate and real-time prediction of lung and lung tumor deformation during respiration is an important consideration when performing a peripheral biopsy procedure. However, most existing work has focused on offline whole-lung simulation using 4D image data, which is not applicable to real-time image-guided biopsy with limited image resources. In this paper, we propose a patient-specific biomechanical model based on the boundary element method (BEM), computed from CT images, to estimate the respiratory motion of the local target lesion region, vessel tree and lung surface for real-time biopsy guidance. The approach pre-computes the various BEM parameters to meet the requirements of real-time lung motion simulation. The boundary condition at the end-inspiratory phase is obtained using nonparametric discrete registration with convex optimization, and the motion of internal tissue is simulated by a tetrahedron-based interpolation method that depends on expert-determined feature points on the vessel tree model. A reference needle is tracked to update the simulated lung motion during biopsy guidance. We evaluated the model by applying it to respiratory motion estimation for ten patients. The average symmetric surface distance (ASSD) and the mean target registration error (TRE) are employed to evaluate the proposed model. Results reveal that it is possible to predict the lung motion with an ASSD of [Formula: see text] mm and a mean TRE of [Formula: see text] mm at largest over the entire respiratory cycle. In the CT-/electromagnetic-guided biopsy experiment, the whole process was assisted by our BEM model, and the final puncture errors in the two studies were 3.1 and 2.0 mm, respectively. The experimental results reveal that both the simulation accuracy and the real-time performance meet the demands of clinical biopsy guidance.
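The tetrahedron-based interpolation step can be sketched with barycentric weights; the geometry and displacements below are made-up values, not patient data:

```python
import numpy as np

def interp_displacement(p, verts, disps):
    """Interpolate the four vertex displacements of a tetrahedron at an
    interior point p using barycentric coordinates."""
    verts = np.asarray(verts, dtype=float)   # shape (4, 3): tet vertices
    disps = np.asarray(disps, dtype=float)   # shape (4, 3): vertex motions
    # Columns of T are edge vectors from vertex 3 to vertices 0..2
    T = (verts[:3] - verts[3]).T
    lam = np.linalg.solve(T, np.asarray(p, dtype=float) - verts[3])
    w = np.append(lam, 1.0 - lam.sum())      # all four barycentric weights
    return w @ disps

# Unit tetrahedron with a displacement vector attached to each vertex
verts = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 0, 0)]
disps = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 0, 0)]
u = interp_displacement((0.25, 0.25, 0.25), verts, disps)
```

At the centroid all four weights are 0.25, so the interpolated motion is the mean of the vertex motions; at a vertex it reduces to that vertex's displacement exactly.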

  11. Optimal Predictive Control for Path Following of a Full Drive-by-Wire Vehicle at Varying Speeds

    NASA Astrophysics Data System (ADS)

    SONG, Pan; GAO, Bolin; XIE, Shugang; FANG, Rui

    2017-05-01

    Current research on the global chassis control problem for the full drive-by-wire vehicle focuses on the control allocation (CA) of the four-wheel-distributed traction/braking/steering systems. However, the path-following performance and handling stability of the vehicle can be further enhanced by automatically adjusting the vehicle speed to its optimal value. This paper gives the optimal solution for the combined longitudinal and lateral motion control (MC) problem. First, a new variable step-size spatial transformation method is proposed and utilized in the prediction model to derive the dynamics of the vehicle with respect to the road, such that the tracking errors can be explicitly obtained over the prediction horizon at varying speeds. Second, a nonlinear model predictive control (NMPC) algorithm is introduced to handle the nonlinear coupling between any two directions of the vehicular planar motion and to compute the sequence of optimal motion states for following the desired path. Third, a hierarchical control structure is proposed to separate the motion controller into an NMPC-based path planner and a terminal sliding mode control (TSMC) based path follower. As revealed through off-line simulations, the hierarchical methodology brings a nearly 1700% improvement in computational efficiency without loss of control performance. Finally, the control algorithm is verified through a hardware-in-the-loop simulation system. Double-lane-change (DLC) test results show that, with the optimal predictive controller, the root-mean-square (RMS) values of the lateral deviations and orientation errors are reduced by 41% and 30%, respectively, compared to those of the optimal preview acceleration (OPA) driver model with the non-preview speed-tracking method. Additionally, the average vehicle speed is increased by 0.26 km/h with the peak sideslip angle suppressed to 1.9°. 
This research proposes a novel motion controller, which provides the full drive-by-wire vehicle with better lane-keeping and collision-avoidance capabilities during autonomous driving.

  12. A detailed examination of laser-ion acceleration mechanisms in the relativistic transparency regime using tracers

    NASA Astrophysics Data System (ADS)

    Stark, David J.; Yin, Lin; Albright, Brian J.; Nystrom, William; Bird, Robert

    2018-04-01

    We present a particle-in-cell study of linearly polarized laser-ion acceleration systems, in which we use both two-dimensional (2D) and three-dimensional (3D) simulations to characterize the ion acceleration mechanisms in targets which become transparent to the laser pulse during irradiation. First, we perform a target length scan to optimize the peak ion energies in both 2D and 3D, and the predictive capabilities of 2D simulations are discussed. Tracer analysis allows us to isolate the acceleration into stages of target normal sheath acceleration (TNSA), hole boring (HB), and break-out afterburner (BOA) acceleration, which vary in effectiveness based on the simulation parameters. The thinnest targets reveal that enhanced TNSA is responsible for accelerating the most energetic ions, whereas the thickest targets have ions undergoing successive phases of HB and TNSA (in 2D) or BOA and TNSA (in 3D); HB is not observed to be a dominant acceleration mechanism in the 3D simulations. It is in the intermediate optimal regime, both when the laser breaks through the target with appreciable amplitude and when there is enough plasma to form a sustained high density flow, that BOA is most effective and is responsible for the most energetic ions. Eliminating the transverse laser spot size effects by performing a plane wave simulation, we can isolate with greater confidence the underlying physics behind the ion dynamics we observe. Specifically, supplemented by wavelet and FFT analyses, we match the post-transparency BOA acceleration with a wave-particle resonance with a high-amplitude low-frequency electrostatic wave of increasing phase velocity, consistent with that predicted by the Buneman instability.

  13. Multiobjective optimization of low impact development stormwater controls

    NASA Astrophysics Data System (ADS)

    Eckart, Kyle; McPhee, Zach; Bolisetti, Tirupati

    2018-07-01

    Green infrastructure such as Low Impact Development (LID) controls is being employed to manage urban stormwater and restore predevelopment hydrological conditions while improving stormwater runoff quality. Since runoff generation and infiltration processes are nonlinear, there is a need to identify the optimal combination of LID controls. A coupled optimization-simulation model was developed by linking the U.S. EPA Stormwater Management Model (SWMM) to the Borg Multiobjective Evolutionary Algorithm (Borg MOEA). The coupled model performs multiobjective optimization, using SWMM simulations to evaluate potential solutions to the optimization problem. The optimization-simulation tool was used to evaluate LID stormwater controls. A SWMM model was developed, calibrated, and validated for a sewershed in Windsor, Ontario, and LID stormwater controls were tested for three different return periods. LID implementation strategies were optimized for five different implementation scenarios for each of the three storm events, with the objectives of minimizing peak flow in the storm sewers, reducing total runoff, and minimizing cost. For the sewershed in Windsor, Ontario, the peak runoff and total runoff volume were reduced by 13% and 29%, respectively.
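The multiobjective evaluation can be sketched with a toy response surface standing in for SWMM and a simple non-dominance filter in place of the Borg MOEA (all numbers hypothetical):

```python
import itertools

def simulate(bioretention, permeable_pavement):
    """Toy stand-in for a SWMM run: more LID area lowers peak flow and
    runoff volume but raises cost (hypothetical response surface)."""
    lid = bioretention + permeable_pavement
    peak = 10.0 / (1 + 0.3 * lid)                    # peak flow
    volume = 50.0 / (1 + 0.2 * lid)                  # total runoff
    cost = 4 * bioretention + 3 * permeable_pavement  # capital cost
    return peak, volume, cost

def dominates(a, b):
    """a dominates b if it is no worse in every objective, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Enumerate candidate LID mixes (units of installed area) and keep the
# non-dominated (Pareto) set, the trade-off front a MOEA searches for
candidates = list(itertools.product(range(0, 6), repeat=2))
objs = {c: simulate(*c) for c in candidates}
pareto = [c for c in candidates
          if not any(dominates(objs[d], objs[c]) for d in candidates if d != c)]
```

In the real study each `simulate` call is a full SWMM run, which is why an efficient evolutionary search rather than exhaustive enumeration is needed.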

  14. Selecting and optimizing eco-physiological parameters of Biome-BGC to reproduce observed woody and leaf biomass growth of Eucommia ulmoides plantation in China using Dakota optimizer

    NASA Astrophysics Data System (ADS)

    Miyauchi, T.; Machimura, T.

    2013-12-01

    In simulations using an ecosystem process model, the adjustment of parameters is indispensable for improving the accuracy of prediction. This procedure, however, requires much time and effort to bring the simulation results close to the measurements in models consisting of various ecosystem processes. In this study, we applied a general-purpose optimization tool to the parameter optimization of an ecosystem model and examined its validity by comparing the simulated and measured biomass growth of a woody plantation. A biometric survey of tree biomass growth was performed in 2009 in an 11-year-old Eucommia ulmoides plantation in Henan Province, China. The site climate was dry temperate. Leaf, above- and below-ground woody biomass were measured from three cut trees and converted into carbon mass per area using measured carbon contents and stem density. Yearly woody biomass growth of the plantation was calculated according to allometric relationships determined by tree-ring analysis of seven cut trees. We used Biome-BGC (Thornton, 2002) to reproduce the biomass growth of the plantation. Air temperature and humidity from 1981 to 2010 were used as the input climate conditions. The plant functional type was deciduous broadleaf, and non-optimized parameters were left at their defaults. Eleven-year simulations were performed following a spin-up run. In order to select the parameters to optimize, we analyzed the sensitivity of leaf, above- and below-ground woody biomass to the eco-physiological parameters. Following the selection, parameter optimization was performed using the Dakota optimizer. Dakota is an optimizer developed by Sandia National Laboratories to provide a systematic and rapid means of obtaining optimal designs using simulation-based models. As the object function, we calculated the sum of relative errors between simulated and measured leaf, above- and below-ground woody carbon in each of the eleven years. 
In an alternative run, errors in the last year (the year of the field survey) were given priority weighting. We compared several global optimization methods in Dakota, starting from the default parameters of Biome-BGC. The sensitivity analysis showed that the carbon allocation parameters between coarse root and leaf and between stem and leaf, together with SLA, contributed most to both leaf and woody biomass changes; these parameters were selected for optimization. The measured leaf, above- and below-ground woody biomass carbon densities in the last year were 0.22, 1.81 and 0.86 kgC m-2, respectively, whereas those simulated in the non-optimized control case using all default parameters were 0.12, 2.26 and 0.52 kgC m-2, respectively. After optimizing the parameters, the simulated values improved to 0.19, 1.81 and 0.86 kgC m-2, respectively. The coliny global optimization method gave better fitness than the efficient global and ncsu direct methods. The optimized parameters showed higher carbon allocation rates to coarse roots and leaves and lower SLA than the defaults, which is consistent with the general water-physiological response in a dry climate. The simulation using the weighted object function produced closer agreement with the measurements in the last year at the cost of lower fitness in the previous years.
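The object function described above, a sum of relative errors with optional final-year weighting, can be written compactly; the single-year lists below reuse only the final-year pools quoted in the abstract:

```python
def relative_error_objective(sim, obs, final_year_weight=1.0):
    """Sum over years and carbon pools of |sim - obs| / obs, with an
    optional extra weight on the final (survey) year, as in the weighted run."""
    total = 0.0
    n_years = len(obs)
    for year, (sim_pools, obs_pools) in enumerate(zip(sim, obs)):
        w = final_year_weight if year == n_years - 1 else 1.0
        total += w * sum(abs(s - o) / o for s, o in zip(sim_pools, obs_pools))
    return total

# Final-year pools (leaf, above-ground wood, below-ground wood), kgC m^-2
obs = [(0.22, 1.81, 0.86)]
default_sim = [(0.12, 2.26, 0.52)]
optimized_sim = [(0.19, 1.81, 0.86)]
print(relative_error_objective(default_sim, obs))    # ~ 1.10
print(relative_error_objective(optimized_sim, obs))  # ~ 0.14
```

The drop in the objective from about 1.10 to about 0.14 reflects the improvement the abstract reports after Dakota's optimization.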

  15. Gradient gravitational search: An efficient metaheuristic algorithm for global optimization.

    PubMed

    Dash, Tirtharaj; Sahu, Prabhat K

    2015-05-30

    The adaptation of novel techniques developed in the field of computational chemistry to problems involving large and flexible molecules is taking center stage with regard to algorithmic efficiency, computational cost and accuracy. In this article, the gradient-based gravitational search (GGS) algorithm, which uses analytical gradients for fast minimization to the nearest local minimum, is reported. Its efficiency as a metaheuristic approach has been compared with Gradient Tabu Search and with the Gravitational Search, Cuckoo Search, and Backtracking Search algorithms for global optimization. Moreover, the GGS approach has also been applied to computational chemistry problems, namely finding the minimum potential energy of two-dimensional and three-dimensional off-lattice protein models. The simulation results reveal the relative stability and physical accuracy of the protein models at an efficient computational cost. © 2015 Wiley Periodicals, Inc.
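The local-refinement step that distinguishes GGS, following the analytical gradient to the nearest local minimum, can be sketched on a double-well potential (an illustrative surface, not an off-lattice protein energy):

```python
def local_minimize(f, grad, x0, lr=0.05, tol=1e-8, max_iter=10_000):
    """Gradient descent to the nearest local minimum: the analytical-gradient
    refinement a GGS-style method applies to each search candidate."""
    x = x0
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:     # gradient vanished: local minimum reached
            break
        x -= lr * g
    return x

# Double well: candidates from the global search land in either basin
f = lambda x: (x**2 - 1.0)**2
grad = lambda x: 4.0 * x * (x**2 - 1.0)
x_right = local_minimize(f, grad, 0.3)    # converges toward the +1 basin
x_left = local_minimize(f, grad, -2.0)    # converges toward the -1 basin
```

Each start point is pulled to its own basin's minimum; in the full algorithm, the global (gravitational) moves hop between basins while this step polishes each candidate cheaply.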

  16. Numerical calculation and analysis of radial force on the single-action vane pump

    NASA Astrophysics Data System (ADS)

    Y He, Y.; Y Kong, F.

    2013-12-01

    Unbalanced radial force is a serious problem that limits the working pressure and shortens the service life of the single-action vane pump. To reveal and predict the distribution of radial force on the rotor, a numerical simulation of its transient flow field was performed using the dynamic mesh method with the RNG k-ε turbulence model. Details of the transient flow characteristics and pressure fluctuations were obtained, from which the radial force and its periodic variation were calculated. The results show that the radial force is closely related to the pressure pulsation; that it can be reduced drastically by optimizing the angle of the port plate and installing a V-shaped cavity; and that choosing an odd number of vanes helps reduce the radial force on the rotor and smooth the pressure fluctuation.

  17. PSO-based PID Speed Control of Traveling Wave Ultrasonic Motor under Temperature Disturbance

    NASA Astrophysics Data System (ADS)

    Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Azmi, Nur Iffah Mohamed; Romlay, Fadhlur Rahman Mohd

    2018-03-01

    Traveling wave ultrasonic motors (TWUSMs) have time-varying dynamic characteristics. Temperature rise in TWUSMs remains a problem, particularly for sustaining optimum speed performance. In this study, a PID controller is used to control the speed of a TWUSM under temperature disturbance. Prior to developing the controller, a linear approximation model relating speed to temperature is developed from experimental data. Two tuning methods are used to determine the PID parameters: conventional Ziegler-Nichols (ZN) and particle swarm optimization (PSO). A comparison of the speed control performance of PSO-PID and ZN-PID is presented. Modelling, simulation and experimental work are carried out using the Fukoku-Shinsei USR60 as the chosen TWUSM. The results of the analyses and experimental work reveal that PSO-based PID tuning has an advantage over the conventional Ziegler-Nichols method.
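A minimal sketch of PSO-based PID tuning, assuming a toy first-order plant in place of the USR60 temperature-dependent dynamics:

```python
import random

def simulate_speed(kp, ki, kd, setpoint=100.0, dt=0.02, steps=500):
    """Toy first-order speed model (not the USR60 dynamics): tau*dv/dt = -v + u,
    driven by a PID controller; returns the integral of squared error (ISE)."""
    tau = 1.0
    v = integ = ise = 0.0
    prev_err = setpoint
    for _ in range(steps):
        err = setpoint - v
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integ + kd * deriv
        v += dt * (-v + u) / tau
        if abs(v) > 1e6:           # unstable gain set: return a large penalty
            return 1e9
        ise += err * err * dt
    return ise

def pso_tune(cost, bounds, n=20, iters=40, seed=4):
    """Minimal particle swarm optimization over the PID gain space."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pcost = [cost(*p) for p in pos]
    gi = min(range(n), key=lambda i: pcost[i])
    gbest, gcost = pbest[gi][:], pcost[gi]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(*pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost

(kp, ki, kd), best_ise = pso_tune(simulate_speed, [(0, 5), (0, 5), (0, 0.5)])
```

Each particle is a candidate (Kp, Ki, Kd) triple; the swarm minimizes the integral of squared speed error, with unstable gain sets penalized so the search is steered back to the stable region.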

  18. Challenges in reducing the computational time of QSTS simulations for distribution system analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deboever, Jeremiah; Zhang, Xiaochen; Reno, Matthew J.

    The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modelling and impact analysis. Unlike conventional scenario-based studies, quasi-static time-series (QSTS) simulations can realistically model time-dependent voltage controllers and the diversity of potential impacts that can occur at different times of year. However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1-second resolution is often required, which can take conventional computers 10 to 120 hours of computation when an actual unbalanced distribution feeder is modeled. This computational burden is a clear limitation to the adoption of QSTS simulations in interconnection studies and for determining optimal control solutions for utility operations. Our ongoing research to improve the speed of QSTS simulation has revealed many unique aspects of distribution system modelling and sequential power flow analysis that make fast QSTS a very difficult problem to solve. In this report, the most relevant challenges in reducing the computational time of QSTS simulations are presented: the number of power flows to solve, circuit complexity, time dependence between time steps, multiple valid power flow solutions, controllable element interactions, and extensive accurate simulation analysis.

  19. Some Results of Weak Anticipative Concept Applied in Simulation Based Decision Support in Enterprise

    NASA Astrophysics Data System (ADS)

    Kljajić, Miroljub; Kofjač, Davorin; Kljajić Borštnar, Mirjana; Škraba, Andrej

    2010-11-01

    Simulation models are used for decision support and learning both in enterprises and in schools. Three cases of successful application demonstrate the usefulness of weak anticipative information. Job shop scheduling with a makespan criterion presents a real case of customized flexible furniture production optimization, for which a genetic algorithm is presented. Simulation-based inventory control describes inventory optimization for products with stochastic lead time and demand, where dynamic programming and fuzzy control algorithms reduce the total cost without producing stock-outs in most cases. The value of simulation-based decision-making information is discussed as well. All cases are examined from the optimization, modeling and learning points of view.

  20. Optimization of GATE and PHITS Monte Carlo code parameters for uniform scanning proton beam based on simulation with FLUKA general-purpose code

    NASA Astrophysics Data System (ADS)

    Kurosu, Keita; Takashina, Masaaki; Koizumi, Masahiko; Das, Indra J.; Moskvin, Vadim P.

    2014-10-01

    Although the three general-purpose Monte Carlo (MC) simulation tools Geant4, FLUKA and PHITS have been used extensively, differences in their calculation results have been reported. The major causes are the implementation of the physical models, the preset value of the ionization potential, and the definition of the maximum step size. To achieve artifact-free MC simulation, an optimized parameter list for each simulation system is required. Several authors have already proposed optimized lists, but those studies were performed with simple systems such as a water phantom alone. Since particle beams undergo transport, interaction and electromagnetic processes during beam delivery, establishing an optimized parameter list for the whole beam delivery system is of major importance. The purpose of this study was to determine the optimized parameter lists for GATE and PHITS using a computational model of a proton treatment nozzle. The simulation was performed with the broad scanning proton beam. The influences of the customizable parameters on the percentage depth dose (PDD) profile and the proton range were investigated by comparison with the FLUKA results, and the optimal parameters were then determined. The PDD profile and the proton range obtained from our optimized parameter lists showed different characteristics from the results obtained with a simple system. This led to the conclusion that the physical models, particle transport mechanics and different geometry-based descriptions need accurate customization when planning computational experiments for artifact-free MC simulation.

  1. Design of underwater robot lines based on a hybrid automatic optimization strategy

    NASA Astrophysics Data System (ADS)

    Lyu, Wenjing; Luo, Weilin

    2014-09-01

    In this paper, a hybrid automatic optimization strategy is proposed for the design of underwater robot lines. Isight is introduced as the integration platform, built on user programming and several commercial software packages including UG6.0, GAMBIT2.4.6 and FLUENT12.0. An intelligent parameter optimization method, particle swarm optimization, is incorporated into the platform. To verify the proposed strategy, a simulation is conducted on the underwater robot model 5470, which originates from the DTRC SUBOFF project. With the automatic optimization platform, minimal resistance is taken as the optimization goal, the wetted surface area as the constraint condition, and the length of the fore-body, the maximum body radius and the after-body's minimum radius as the design variables. For the CFD calculation, the RANS equations and the standard turbulence model are used for direct numerical simulation. Analysis of the simulation results shows that the platform is efficient and feasible. Through the platform, a variety of schemes for the design of the lines are generated and the optimal solution is achieved. The combination of the intelligent optimization algorithm and the numerical simulation ensures a global optimal solution and improves the efficiency of the search for solutions.

  2. Water-resources optimization model for Santa Barbara, California

    USGS Publications Warehouse

    Nishikawa, Tracy

    1998-01-01

    A simulation-optimization model has been developed for the optimal management of the city of Santa Barbara's water resources during a drought. The model, which links groundwater simulation with linear programming, has a planning horizon of 5 years. The objective is to minimize the cost of water supply subject to: water demand constraints, hydraulic head constraints to control seawater intrusion, and water capacity constraints. The decision variables are monthly water deliveries from surface water and groundwater. The state variables are hydraulic heads. The drought of 1947-51 is the city's worst drought on record, and simulated surface-water supplies for this period were used as a basis for testing optimal management of current water resources under drought conditions. The simulation-optimization model was applied using three reservoir operation rules. In addition, the model's sensitivity to demand, carryover [the storage of water in one year for use in a later year], head constraints, and capacity constraints was tested.
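The cost-minimizing structure of such a model can be illustrated for a single period. With one demand constraint and independent source capacities, the linear-program optimum reduces to filling the cheapest sources first; the figures below are hypothetical, not Santa Barbara data.

```python
def cheapest_supply(demand, sources):
    """One-period minimum-cost allocation. With a single demand constraint
    and per-source capacities, the LP optimum fills cheapest sources first."""
    plan, total = {}, 0.0
    for name, unit_cost, capacity in sorted(sources, key=lambda s: s[1]):
        take = min(capacity, demand)
        if take > 0:
            plan[name] = take
            total += take * unit_cost
            demand -= take
    if demand > 0:
        raise ValueError("demand exceeds total capacity")
    return plan, total

# hypothetical monthly figures: (source, $/acre-foot, capacity in acre-feet)
sources = [("surface", 120.0, 800), ("groundwater", 300.0, 600),
           ("state water", 450.0, 400)]
plan, cost = cheapest_supply(1000, sources)   # surface 800, groundwater 200
```

The full model couples many such periods through carryover storage and head constraints, which is what makes an LP solver (rather than this greedy shortcut) necessary in practice.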

  3. A noisy chaotic neural network for solving combinatorial optimization problems: stochastic chaotic simulated annealing.

    PubMed

    Wang, Lipo; Li, Sa; Tian, Fuyu; Fu, Xiuju

    2004-10-01

    Recently Chen and Aihara have demonstrated both experimentally and mathematically that their chaotic simulated annealing (CSA) has better search ability for solving combinatorial optimization problems compared to both the Hopfield-Tank approach and stochastic simulated annealing (SSA). However, CSA may not find a globally optimal solution no matter how slowly annealing is carried out, because the chaotic dynamics are completely deterministic. In contrast, SSA tends to settle down to a global optimum if the temperature is reduced sufficiently slowly. Here we combine the best features of both SSA and CSA, thereby proposing a new approach for solving optimization problems, i.e., stochastic chaotic simulated annealing, by using a noisy chaotic neural network. We show the effectiveness of this new approach with two difficult combinatorial optimization problems, i.e., a traveling salesman problem and a channel assignment problem for cellular mobile communications.
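Only the stochastic-annealing half of the method can be sketched compactly; the chaotic neural network dynamics are omitted. Below, plain stochastic simulated annealing with 2-opt moves is applied to a small traveling salesman instance, as a baseline illustration of the noise-driven acceptance rule.

```python
import math
import random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def simulated_annealing(pts, t0=1.0, cooling=0.995, iters=20000, seed=0):
    """Stochastic simulated annealing on a small TSP with 2-opt moves."""
    rng = random.Random(seed)
    n = len(pts)
    tour = list(range(n))
    rng.shuffle(tour)
    cur_len = tour_length(tour, pts)
    best, best_len, t = tour[:], cur_len, t0
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt reversal
        cand_len = tour_length(cand, pts)
        # stochastic acceptance: uphill moves allowed at high temperature
        if cand_len < cur_len or rng.random() < math.exp((cur_len - cand_len) / t):
            tour, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        t = max(t * cooling, 1e-6)
    return best, best_len
```

The paper's contribution is to replace the purely random proposal mechanism with a noisy chaotic neural network, combining CSA's deterministic chaotic search with this kind of stochastic acceptance.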

  4. An Adaptive Cooperative Strategy for Underlay MIMO Cognitive Radio Networks: An Opportunistic and Low-Complexity Approach

    NASA Astrophysics Data System (ADS)

    Mazoochi, M.; Pourmina, M. A.; Bakhshi, H.

    2015-03-01

    The core aim of this work is to maximize the achievable data rate of the secondary user (SU) pairs while ensuring the QoS of the primary users (PUs). All users are assumed to be equipped with multiple antennas. It is assumed that when PUs are present, direct communication between SU pairs introduces intolerable interference to the PUs; SUs therefore transmit through the cooperation of other SUs and avoid the direct channel. In brief, an adaptive cooperative strategy for multiple-input/multiple-output (MIMO) cognitive radio networks is proposed. In the presence of PUs, the problem of joint relay selection and power allocation in Underlay MIMO Cooperative Cognitive Radio Networks (U-MIMO-CCRN) is addressed, and the optimal approach for determining the power allocation and the cooperating SU is proposed. The outage probability of the proposed communication protocol is also derived. Owing to the high complexity of the optimal approach, a low-complexity approach is further proposed and its performance is evaluated through simulations. The simulation results reveal that the performance loss due to the low-complexity approach is only about 14%, while the complexity is greatly reduced.

  5. Observability-Based Guidance and Sensor Placement

    NASA Astrophysics Data System (ADS)

    Hinson, Brian T.

    Control system performance is highly dependent on the quality of sensor information available. In a growing number of applications, however, the control task must be accomplished with limited sensing capabilities. This thesis addresses these types of problems from a control-theoretic point-of-view, leveraging system nonlinearities to improve sensing performance. Using measures of observability as an information quality metric, guidance trajectories and sensor distributions are designed to improve the quality of sensor information. An observability-based sensor placement algorithm is developed to compute optimal sensor configurations for a general nonlinear system. The algorithm utilizes a simulation of the nonlinear system as the source of input data, and convex optimization provides a scalable solution method. The sensor placement algorithm is applied to a study of gyroscopic sensing in insect wings. The sensor placement algorithm reveals information-rich areas on flexible insect wings, and a comparison to biological data suggests that insect wings are capable of acting as gyroscopic sensors. An observability-based guidance framework is developed for robotic navigation with limited inertial sensing. Guidance trajectories and algorithms are developed for range-only and bearing-only navigation that improve navigation accuracy. Simulations and experiments with an underwater vehicle demonstrate that the observability measure allows tuning of the navigation uncertainty.

  6. Optimal Inlet Shape Design of N2B Hybrid Wing Body Configuration

    NASA Technical Reports Server (NTRS)

    Kim, Hyoungjin; Liou, Meng-Sing

    2012-01-01

    The N2B hybrid wing body aircraft was conceptually designed to meet the environmental and performance goals for the N+2 generation transport set by the Subsonic Fixed Wing project of the NASA Fundamental Aeronautics Program. In the present study, flow simulations are conducted around the N2B configuration with a Reynolds-averaged Navier-Stokes flow solver using unstructured meshes. Boundary conditions at the engine fan face and nozzle exhaust planes are provided by the NPSS thermodynamic engine cycle model. The flow simulations reveal challenging design issues arising from the boundary-layer-ingesting offset inlet and airframe-propulsion integration. Adjoint-based optimal designs are then conducted for the inlet shape to minimize the airframe drag force and the flow distortion at the fan faces. Design surfaces are parameterized by NURBS, and the cowl lip geometry is modified by a spring analogy approach. In the drag minimization design, flow separation on the cowl surfaces is almost eliminated and the shock wave strength is markedly reduced. For the distortion minimization design, a circumferential distortion indicator, DPCP(sub avg), is adopted as the design objective, and the diffuser bottom and side wall surfaces are perturbed in the design. The distortion minimization results in a 12.5% reduction in the objective function.

  7. Design principles and optimal performance for molecular motors under realistic constraints

    NASA Astrophysics Data System (ADS)

    Tu, Yuhai; Cao, Yuansheng

    2018-02-01

    The performance of a molecular motor, characterized by its power output and energy efficiency, is investigated in the motor design space spanned by the stepping rate function and the motor-track interaction potential. Analytic results and simulations show that a gating mechanism that restricts forward stepping in a narrow window in configuration space is needed for generating high power at physiologically relevant loads. By deriving general thermodynamics laws for nonequilibrium motors, we find that the maximum torque (force) at stall is less than its theoretical limit for any realistic motor-track interactions due to speed fluctuations. Our study reveals a tradeoff for the motor-track interaction: while a strong interaction generates a high power output for forward steps, it also leads to a higher probability of wasteful spontaneous back steps. Our analysis and simulations show that this tradeoff sets a fundamental limit to the maximum motor efficiency in the presence of spontaneous back steps, i.e., loose-coupling. Balancing this tradeoff leads to an optimal design of the motor-track interaction for achieving a maximum efficiency close to 1 for realistic motors that are not perfectly coupled with the energy source. Comparison with existing data and suggestions for future experiments are discussed.

  8. Plate-impact loading of cellular structures formed by selective laser melting

    NASA Astrophysics Data System (ADS)

    Winter, R. E.; Cotton, M.; Harris, E. J.; Maw, J. R.; Chapman, D. J.; Eakins, D. E.; McShane, G.

    2014-03-01

    Porous materials are of great interest because of improved energy absorption over their solid counterparts. Their properties, however, have been difficult to optimize. Additive manufacturing has emerged as a potential technique to closely define the structure and properties of porous components, i.e. density, strut width and pore size; however, the behaviour of these materials at very high impact energies remains largely unexplored. We describe an initial study of the dynamic compression response of lattice materials fabricated through additive manufacturing. Lattices consisting of an array of intersecting stainless steel rods were fabricated into discs using selective laser melting. The resulting discs were impacted against solid stainless steel targets at velocities ranging from 300 to 700 m s-1 using a gas gun. Continuum CTH simulations were performed to identify key features in the measured wave profiles, while 3D simulations, in which the individual cells were modelled, revealed details of microscale deformation during collapse of the lattice structure. The validated computer models have been used to provide an understanding of the deformation processes in the cellular samples. The study supports the optimization of cellular structures for application as energy absorbers.

  9. An analysis of the surface-normal coupling efficiency of a metal grating coupler embedded in a Scotch tape optical waveguide

    NASA Astrophysics Data System (ADS)

    Barrios, Carlos Angulo; Canalejas-Tejero, Víctor

    2017-01-01

    The coupling efficiency at normal incidence of recently demonstrated aluminum grating couplers integrated in flexible Scotch tape waveguides has been analyzed theoretically and experimentally. Finite-difference time-domain (FDTD) and rigorous coupled-wave analysis (RCWA) methods have been used to optimize the dimensions (duty cycle and metal thickness) of Scotch tape-embedded 1D Al gratings for maximum coupling at a 635 nm wavelength. Good tolerances to the dimensions and the tape refractive index are predicted. FDTD simulations reveal the incident beam width and impinging position (alignment) values that avoid rediffraction and thus maximize the coupling efficiency. A 1D Al diffraction grating integrated into a Scotch tape optical waveguide has been fabricated and characterized. The fabrication process, based on pattern transfer, has been optimized to allow complete Al grating transfer onto the Scotch tape waveguide. A maximum coupling efficiency of 20% for TM-polarized normal incidence has been measured, in good agreement with the theoretical predictions. The measured coupling efficiency increases to 28% for TM polarization under oblique incidence. Temperature dependence measurements have also been performed and related to the simulation results and the fabrication procedure.

  10. An efficient surrogate-based simulation-optimization method for calibrating a regional MODFLOW model

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.

    2017-01-01

    The simulation-optimization method entails a large number of model simulations, which is computationally intensive or even prohibitive if each model simulation is extremely time-consuming. Statistical models have been examined as surrogates of the high-fidelity physical model during the simulation-optimization process to tackle this problem. Among them, Multivariate Adaptive Regression Splines (MARS), a non-parametric adaptive regression method, is superior in overcoming problems of high dimensionality and discontinuities in the data. Furthermore, the stability and accuracy of a MARS model can be improved by bootstrap aggregating, namely bagging. In this paper, the bagging MARS (BMARS) method is integrated into a surrogate-based simulation-optimization framework to calibrate a three-dimensional MODFLOW model, developed to simulate groundwater flow in an arid hardrock-alluvium region in northwestern Oman. The physical MODFLOW model is surrogated by a statistical model developed using the BMARS algorithm. The surrogate model, fitted and validated using a training dataset generated by the physical model, can approximate solutions rapidly. An efficient Sobol' method is employed to calculate the global sensitivities of head outputs to input parameters, which are used to analyze their spatiotemporal importance for the model outputs. Only sensitive parameters are included in the calibration process to further improve computational efficiency. The normalized root mean square error (NRMSE) between measured and simulated heads at observation wells is used as the objective function to be minimized during optimization. The reasonable history match between the simulated and observed heads demonstrates the feasibility of this highly efficient calibration framework.
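The NRMSE objective mentioned above is straightforward to state in code. Normalizing by the observation range is one common convention; the paper may define the normalization differently.

```python
import math

def nrmse(observed, simulated):
    """Root mean square error between observed and simulated heads,
    normalized by the range of the observations."""
    n = len(observed)
    rmse = math.sqrt(sum((o - s) ** 2 for o, s in zip(observed, simulated)) / n)
    return rmse / (max(observed) - min(observed))
```

During calibration, a search over the sensitive parameters repeatedly queries the surrogate and keeps the parameter set with the smallest NRMSE.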

  11. Data-Driven Modeling and Rendering of Force Responses from Elastic Tool Deformation

    PubMed Central

    Rakhmatov, Ruslan; Ogay, Tatyana; Jeon, Seokhee

    2018-01-01

    This article presents a new data-driven model design for rendering force responses from elastic tool deformation. The new design incorporates a six-dimensional input describing the initial position of the contact as well as the state of the tool deformation. The input-output relationship of the model was represented by a radial basis function network, which was optimized based on training data collected from real tool-surface contact. Since the input space of the model is represented in the local coordinate system of a tool, the model is independent of recording and rendering devices and can be easily deployed to an existing simulator. The model also supports complex interactions, such as self- and multi-contact collisions. In order to assess the proposed data-driven model, we built a custom data acquisition setup and developed a proof-of-concept rendering simulator. The simulator was evaluated through numerical and psychophysical experiments with four different real tools. The numerical evaluation demonstrated the perceptual soundness of the proposed model, while the user study revealed the force feedback of the proposed simulator to be realistic. PMID:29342964
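A radial basis function network of the kind described reduces, at its core, to solving a linear system for the basis weights. The 1-D Gaussian-RBF interpolant below is a minimal stand-in for the paper's six-dimensional network; the shape parameter and training points are illustrative.

```python
import math

def solve(a, b):
    """Gaussian elimination with partial pivoting (mutates its arguments)."""
    n = len(b)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(a[r][c] * x[c] for c in range(r + 1, n))) / a[r][r]
    return x

def rbf_fit(xs, ys, eps=1.0):
    """Interpolate (xs, ys) with Gaussian RBFs by solving Phi w = y,
    where Phi[i][j] = exp(-(eps*(x_i - x_j))^2)."""
    phi = [[math.exp(-(eps * (xi - xj)) ** 2) for xj in xs] for xi in xs]
    w = solve([row[:] for row in phi], list(ys))
    return lambda x: sum(wi * math.exp(-(eps * (x - xj)) ** 2)
                         for wi, xj in zip(w, xs))
```

By construction the interpolant passes through the training samples; the paper's network additionally optimizes centers and weights against recorded tool-surface contact data.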

  12. Program optimizations: The interplay between power, performance, and energy

    DOE PAGES

    Leon, Edgar A.; Karlin, Ian; Grant, Ryan E.; ...

    2016-05-16

    Practical considerations for future supercomputer designs will impose limits on both instantaneous power consumption and total energy consumption. Working within these constraints while providing the maximum possible performance, application developers will need to optimize their code for speed alongside power and energy concerns. This paper analyzes the effectiveness of several code optimizations including loop fusion, data structure transformations, and global allocations. A per-component measurement and analysis of different architectures is performed, enabling the examination of code optimizations on different compute subsystems. Using an explicit hydrodynamics proxy application from the U.S. Department of Energy, LULESH, we show how code optimizations impact different computational phases of the simulation. This provides insight for simulation developers into the best optimizations to use during particular simulation compute phases when optimizing code for future supercomputing platforms. Here, we examine and contrast both x86 and Blue Gene architectures with respect to these optimizations.

  13. Capacity improvement using simulation optimization approaches: A case study in the thermotechnology industry

    NASA Astrophysics Data System (ADS)

    Yelkenci Köse, Simge; Demir, Leyla; Tunalı, Semra; Türsel Eliiyi, Deniz

    2015-02-01

    In manufacturing systems, optimal buffer allocation has a considerable impact on capacity improvement. This study presents a simulation optimization procedure to solve the buffer allocation problem in a heat exchanger production plant so as to improve the capacity of the system. For optimization, three metaheuristic search algorithms are proposed: a binary genetic algorithm (B-GA), a binary simulated annealing algorithm (B-SA) and a binary tabu search algorithm (B-TS). These algorithms are integrated with the simulation model of the production line. The simulation model, which captures the stochastic and dynamic nature of the production line, is used as the evaluation function for the proposed metaheuristics. The experimental study, with benchmark problem instances from the literature and the real-life problem, shows that the proposed B-TS algorithm outperforms B-GA and B-SA in terms of solution quality.
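Of the three metaheuristics, tabu search is the simplest to sketch on a binary encoding. The toy below replaces the plant simulation with a cheap analytic scoring function (a made-up stand-in), keeping only the bit-flip neighbourhood and the tabu list that the record describes.

```python
import random

def tabu_search(evaluate, n_bits, iters=200, tenure=7, seed=3):
    """Binary tabu search: flip one bit per move, keep recently flipped
    bits tabu, always take the best admissible neighbour, track the best
    solution seen. `evaluate` stands in for the simulation model."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    best, best_val = x[:], evaluate(x)
    tabu = {}                                  # bit index -> expiry iteration
    for it in range(iters):
        moves = []
        for b in range(n_bits):
            if tabu.get(b, -1) > it:
                continue                       # bit is tabu this iteration
            x[b] ^= 1
            moves.append((evaluate(x), b))
            x[b] ^= 1                          # undo the trial flip
        if not moves:
            continue
        val, b = max(moves)                    # best admissible neighbour
        x[b] ^= 1
        tabu[b] = it + tenure
        if val > best_val:
            best, best_val = x[:], val
    return best, best_val

# stand-in for the simulation: buffers at stations 2 and 5 are worth
# more than they cost, all others are not (hypothetical numbers)
def throughput_proxy(bits):
    weights = [1, 1, 5, 1, 1, 5, 1, 1]
    return sum(w * b for w, b in zip(weights, bits)) - 2 * sum(bits)
```

In the paper, each call to `evaluate` is a full discrete-event simulation run, which is exactly why the number of evaluations the metaheuristic needs matters so much.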

  14. Optimal segmentation and packaging process

    DOEpatents

    Kostelnik, Kevin M.; Meservey, Richard H.; Landon, Mark D.

    1999-01-01

    A process for improving packaging efficiency uses three-dimensional, computer-simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D&D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer-simulated facility model of the contaminated items is created. The contaminated items are differentiated. The optimal location, orientation and sequence of the segmentation and packaging of the contaminated items are determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are then actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded.

  15. Stochastic optimization of GeantV code by use of genetic algorithms

    DOE PAGES

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; ...

    2017-10-01

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. Here, the goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.
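A minimal version of GA-based black-box tuning can be sketched as follows; the real-coded operators and the sphere-like test objective are illustrative choices, not details from the GeantV study.

```python
import random

def genetic_minimize(black_box, bounds, pop_size=30, gens=60, seed=7):
    """Minimal real-coded GA for black-box tuning: tournament selection,
    blend crossover, Gaussian mutation, elitism. `black_box` stands in
    for a full simulation run scored by its throughput or runtime."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clamp(v, d):
        lo, hi = bounds[d]
        return min(hi, max(lo, v))

    pop = [[rng.uniform(*bounds[d]) for d in range(dim)] for _ in range(pop_size)]
    fit = [black_box(ind) for ind in pop]
    for _ in range(gens):
        elite = min(range(pop_size), key=fit.__getitem__)
        nxt, nfit = [pop[elite][:]], [fit[elite]]        # elitism
        while len(nxt) < pop_size:
            # tournament selection of two parents (best of 3 each)
            p1 = min(rng.sample(range(pop_size), 3), key=fit.__getitem__)
            p2 = min(rng.sample(range(pop_size), 3), key=fit.__getitem__)
            child = []
            for d in range(dim):
                a = rng.random()                         # blend crossover
                v = a * pop[p1][d] + (1 - a) * pop[p2][d]
                if rng.random() < 0.2:                   # Gaussian mutation
                    v += rng.gauss(0, 0.1 * (bounds[d][1] - bounds[d][0]))
                child.append(clamp(v, d))
            nxt.append(child)
            nfit.append(black_box(child))
        pop, fit = nxt, nfit
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

The study's point is that when each fitness evaluation is an expensive simulation, operators that cut the number of generations needed (such as the multivariate analysis operator it introduces) dominate the overall tuning cost.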

  16. Stochastic optimization of GeantV code by use of genetic algorithms

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Behera, S. P.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Hariri, F.; Jun, S. Y.; Konstantinov, D.; Kumawat, H.; Ivantchenko, V.; Lima, G.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.

    2017-10-01

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. The goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.

  17. Stochastic optimization of GeantV code by use of genetic algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. Here, the goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.

  18. Trp zipper folding kinetics by molecular dynamics and temperature-jump spectroscopy

    PubMed Central

    Snow, Christopher D.; Qiu, Linlin; Du, Deguo; Gai, Feng; Hagen, Stephen J.; Pande, Vijay S.

    2004-01-01

    We studied the microsecond folding dynamics of three β hairpins (Trp zippers 1–3, TZ1–TZ3) by using temperature-jump fluorescence and atomistic molecular dynamics in implicit solvent. In addition, we studied TZ2 by using time-resolved IR spectroscopy. By using distributed computing, we obtained an aggregate simulation time of 22 ms. The simulations included 150, 212, and 48 folding events at room temperature for TZ1, TZ2, and TZ3, respectively. The all-atom optimized potentials for liquid simulations (OPLSaa) potential set predicted TZ1 and TZ2 properties well; the estimated folding rates agreed with the experimentally determined folding rates and native conformations were the global potential-energy minimum. The simulations also predicted reasonable unfolding activation enthalpies. This work, directly comparing large simulated folding ensembles with multiple spectroscopic probes, revealed both the surprising predictive ability of current models as well as their shortcomings. Specifically, for TZ1–TZ3, OPLS for united atom models had a nonnative free-energy minimum, and the folding rate for OPLSaa TZ3 was sensitive to the initial conformation. Finally, we characterized the transition state; all TZs fold by means of similar, native-like transition-state conformations. PMID:15020773

  19. Trp zipper folding kinetics by molecular dynamics and temperature-jump spectroscopy

    NASA Astrophysics Data System (ADS)

    Snow, Christopher D.; Qiu, Linlin; Du, Deguo; Gai, Feng; Hagen, Stephen J.; Pande, Vijay S.

    2004-03-01

    We studied the microsecond folding dynamics of three β hairpins (Trp zippers 1-3, TZ1-TZ3) by using temperature-jump fluorescence and atomistic molecular dynamics in implicit solvent. In addition, we studied TZ2 by using time-resolved IR spectroscopy. By using distributed computing, we obtained an aggregate simulation time of 22 ms. The simulations included 150, 212, and 48 folding events at room temperature for TZ1, TZ2, and TZ3, respectively. The all-atom optimized potentials for liquid simulations (OPLSaa) potential set predicted TZ1 and TZ2 properties well; the estimated folding rates agreed with the experimentally determined folding rates and native conformations were the global potential-energy minimum. The simulations also predicted reasonable unfolding activation enthalpies. This work, directly comparing large simulated folding ensembles with multiple spectroscopic probes, revealed both the surprising predictive ability of current models as well as their shortcomings. Specifically, for TZ1-TZ3, OPLS for united atom models had a nonnative free-energy minimum, and the folding rate for OPLSaa TZ3 was sensitive to the initial conformation. Finally, we characterized the transition state; all TZs fold by means of similar, native-like transition-state conformations.

  20. A parameters optimization method for planar joint clearance model and its application for dynamics simulation of reciprocating compressor

    NASA Astrophysics Data System (ADS)

    Hai-yang, Zhao; Min-qiang, Xu; Jin-dong, Wang; Yong-bo, Li

    2015-05-01

    In order to improve the accuracy of dynamic response simulation for mechanisms with joint clearance, a parameter optimization method for a planar joint clearance contact force model is presented in this paper, and the optimized parameters are applied to the dynamic response simulation of a mechanism with an oversized-joint-clearance fault. By studying the effect of increased clearance on the parameters of the joint clearance contact force model, a relation between the model parameters at different clearances was derived. The dynamic equation of a two-stage reciprocating compressor with four joint clearances was then developed using the Lagrange method, and a multi-body dynamic model built in ADAMS software was used to solve this equation. To obtain a simulated dynamic response much closer to that of experimental tests, the parameters of the joint clearance model were optimized by a genetic algorithm approach instead of using the designed values. Finally, the optimized parameters were applied, according to the derived parameter relation, to simulate the dynamic response of the model with the oversized-joint-clearance fault. The dynamic response of the experimental test verified the effectiveness of this application.
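Planar clearance joints are commonly described by a Lankarani-Nikravesh type contact force law; a minimal sketch follows, in which the stiffness, exponent, restitution coefficient, and initial impact velocity values are illustrative assumptions, not values from the paper.

```python
def contact_force(delta, d_delta, K=2.2e7, n=1.5, ce=0.9, d_delta0=0.5):
    """Lankarani-Nikravesh style contact force for a clearance joint.

    delta: penetration depth (m); d_delta: penetration velocity (m/s);
    K: contact stiffness; n: force exponent; ce: coefficient of
    restitution; d_delta0: initial impact velocity (m/s).
    All default values are illustrative.
    """
    if delta <= 0.0:
        return 0.0  # journal and bearing not in contact inside the clearance
    # hysteresis damping factor accounts for energy dissipated during impact
    hysteresis = 1.0 + 3.0 * (1.0 - ce ** 2) * d_delta / (4.0 * d_delta0)
    return K * delta ** n * hysteresis

# loading (approaching) produces a larger force than unloading (separating)
f_load = contact_force(1e-5, 0.1)
f_unload = contact_force(1e-5, -0.1)
```

Parameters such as K, n and ce are exactly the kind of quantities a genetic algorithm would tune against measured responses.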

  1. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design--part I. Model development.

    PubMed

    He, L; Huang, G H; Lu, H W

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can yield optimal solutions that differ from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM), and the associated solution method, for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This differs from previous modeling efforts, which focused on uncertainty in physical parameters (e.g. soil porosity), whereas this work deals with uncertainty in the mathematical simulator (arising from model residuals). Compared to existing modeling approaches (in which only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering system designers a confidence level for the optimal remediation strategies, and reducing the computational cost of the optimization process. 2009 Elsevier B.V. All rights reserved.

  2. Lateral Penumbra Modelling Based Leaf End Shape Optimization for Multileaf Collimator in Radiotherapy.

    PubMed

    Zhou, Dong; Zhang, Hui; Ye, Peiqing

    2016-01-01

    The lateral penumbra of a multileaf collimator plays an important role in radiotherapy treatment planning. Growing evidence has revealed that, for a single-focused multileaf collimator, lateral penumbra width is leaf-position dependent and largely attributed to the leaf end shape. In our study, an analytical method for modelling the leaf-end-induced lateral penumbra is formulated using Tangent Secant Theory. Compared with Monte Carlo simulation and a ray tracing algorithm, our model serves the purpose of cost-efficient penumbra evaluation well. Leaf ends represented in parametric forms of circular arc, elliptical arc, Bézier curve, and B-spline are implemented. With a biobjective function of penumbra mean and variance introduced, a genetic algorithm is carried out to approximate the Pareto frontier. Results show that for the circular-arc leaf end the objective function is convex, and convergence to the optimal solution is guaranteed using a gradient-based iterative method. It is found that the optimal leaf end in the shape of a Bézier curve achieves minimal standard deviation, while using a B-spline the minimum of the penumbra mean is obtained. For treatment modalities in clinical application, optimized leaf ends are in close agreement with actual shapes. Taken together, the method that we propose can provide insight into the leaf end shape design of multileaf collimators.
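The genetic algorithm's output is a Pareto-frontier approximation over the two objectives (penumbra mean, penumbra variance); a minimal non-dominated filter, with made-up candidate values, can be sketched as:

```python
def pareto_front(points):
    """Keep the points not weakly dominated by any other point
    (both objectives are minimised)."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]

# hypothetical (penumbra mean, penumbra variance) pairs for candidate leaf ends
candidates = [(3.0, 0.50), (2.5, 0.80), (4.0, 0.30), (3.5, 0.70), (2.8, 0.60)]
front = pareto_front(candidates)  # (3.5, 0.70) is dominated by (3.0, 0.50)
```

A biobjective GA such as the one described above maintains and refines exactly such a non-dominated set.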

  3. Large-eddy simulation of propeller wake at design operating conditions

    NASA Astrophysics Data System (ADS)

    Kumar, Praveen; Mahesh, Krishnan

    2016-11-01

    Understanding the propeller wake is crucial for efficient design and optimized performance. The dynamics of the propeller wake are also central to physical phenomena such as cavitation and acoustics. Large-eddy simulation is used to study the evolution of the wake of a five-bladed marine propeller from near to far field at design operating condition. The computed mean loads and phase-averaged flow field show good agreement with experiments. The propeller wake consisting of tip and hub vortices undergoes streamtube contraction, which is followed by the onset of instabilities as evident from the oscillations of the tip vortices. Simulation results reveal a mutual induction mechanism of instability where instead of the tip vortices interacting among themselves, they interact with the smaller vortices generated by the roll-up of the blade trailing edge wake in the near wake. Phase-averaged and ensemble-averaged flow fields are analyzed to explain the flow physics. This work is supported by ONR.

  4. a New Hybrid Yin-Yang Swarm Optimization Algorithm for Uncapacitated Warehouse Location Problems

    NASA Astrophysics Data System (ADS)

    Heidari, A. A.; Kazemizade, O.; Hakimpour, F.

    2017-09-01

    Yin-Yang-pair optimization (YYPO) is one of the latest metaheuristic algorithms (MA), proposed in 2015, which draws on the philosophy of balance between conflicting concepts. The particle swarm optimizer (PSO) is one of the first population-based MA, inspired by the social behaviors of birds. Unlike PSO, YYPO is not a nature-inspired optimizer. It has low complexity, starts with only two initial positions, and can produce more points with regard to the dimension of the target problem. Due to the unique advantages of these methodologies, and to mitigate the immature convergence and local optima (LO) stagnation problems in PSO, in this work a continuous hybrid strategy based on the behaviors of PSO and YYPO is proposed to attain suboptimal solutions of uncapacitated warehouse location (UWL) problems. This efficient hierarchical PSO-based optimizer (PSOYPO) can improve the effectiveness of PSO on spatial optimization tasks such as the family of UWL problems. The performance of the proposed PSOYPO is verified on several UWL benchmark cases, which have been used in several works to evaluate the efficacy of different MA. The PSOYPO is then compared to the standard PSO, genetic algorithm (GA), harmony search (HS), modified HS (OBCHS), and evolutionary simulated annealing (ESA). The experimental results demonstrate that the PSOYPO reveals a better or competitive efficacy compared to the PSO and other MA.
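The PSO half of such a hybrid can be sketched as a minimal global-best particle swarm; the continuous sphere function below is a stand-in test objective, not a UWL instance, and the inertia/acceleration coefficients are conventional defaults rather than the paper's settings.

```python
import random

def pso(f, bounds, n_particles=15, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimiser."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal best positions
    gbest = min(pbest, key=f)                    # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=f)
    return gbest

# stand-in test objective (3-D sphere), not a UWL instance
best = pso(lambda p: sum(x * x for x in p), [(-5.0, 5.0)] * 3)
```

Stagnation of gbest in such a loop is the local-optima problem that the YYPO component is intended to mitigate.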

  5. Combining gait optimization with passive system to increase the energy efficiency of a humanoid robot walking movement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pereira, Ana I.; ALGORITMI, University of Minho; Lima, José

    There are several approaches to humanoid robot gait planning. This problem presents a large number of unknown parameters that should be found to make the humanoid robot walk. Optimization in simulation models can be used to find the gait based on several criteria, such as energy minimization, acceleration, and step length, among others. The energy consumption can also be reduced with elastic elements coupled to each joint. The presented paper addresses an optimization method, Stretched Simulated Annealing, that runs in an accurate and stable simulation model to find the optimal gait combined with elastic elements. Final results demonstrate that optimization is a valid gait planning technique.

  6. Simulated parallel annealing within a neighborhood for optimization of biomechanical systems.

    PubMed

    Higginson, J S; Neptune, R R; Anderson, F C

    2005-09-01

    Optimization problems for biomechanical systems have become extremely complex. Simulated annealing (SA) algorithms have performed well in a variety of test problems and biomechanical applications; however, despite advances in computer speed, the time required for convergence to optimal solutions has remained prohibitive for systems of even moderate complexity. The objective of this study was to develop a portable parallel version of a SA algorithm for solving optimization problems in biomechanics. The algorithm for simulated parallel annealing within a neighborhood (SPAN) was designed to minimize interprocessor communication time and closely retain the heuristics of the serial SA algorithm. The computational speed of the SPAN algorithm scaled linearly with the number of processors on different computer platforms for a simple quadratic test problem and for a more complex forward dynamic simulation of human pedaling.
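The serial SA kernel that SPAN distributes across processors can be sketched as follows; the step size, geometric cooling schedule, and quadratic test objective are illustrative assumptions:

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=4000, seed=2):
    """Serial SA kernel of the kind SPAN parallelises across processors."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, fbest = x[:], fx
    temp = t0
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]   # random neighbour
        fc = f(cand)
        # accept improvements always; accept uphill moves with Boltzmann prob.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / temp):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x[:], fx
        temp *= cooling                                      # geometric cooling
    return best, fbest

best, fbest = simulated_annealing(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                                  [5.0, 5.0])
```

SPAN's contribution is evaluating many such candidate moves in parallel within a neighborhood while preserving these serial acceptance heuristics.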

  7. Simulation optimization of the cathode deposit growth in a coaxial electrolyzer-refiner

    NASA Astrophysics Data System (ADS)

    Smirnov, G. B.; Fokin, A. A.; Markina, S. E.; Vakhitov, A. I.

    2015-08-01

    The results of simulation of the cathode deposit growth in a coaxial electrolyzer-refiner are presented. The sizes of the initial cathode matrix are optimized. The data obtained by simulation and full-scale tests of the precipitation of platinum from a salt melt are compared.

  8. Swimming in a granular frictional fluid

    NASA Astrophysics Data System (ADS)

    Goldman, Daniel

    2012-02-01

    X-ray imaging reveals that the sandfish lizard swims within granular media (sand) using axial body undulations to propel itself without the use of limbs. To model the locomotion of the sandfish, we previously developed an empirical resistive force theory (RFT), a numerical sandfish model coupled to an experimentally validated Discrete Element Method (DEM) model of the granular medium, and a physical robot model. The models reveal that only grains close to the swimmer are fluidized, and that the thrust and drag forces are dominated by frictional interactions between grains and the intruder. In this talk I will use these models to discuss principles of swimming within these granular "frictional fluids". The empirical drag force laws are measured as the steady-state forces on a small cylinder oriented at different angles relative to the displacement direction. Unlike in Newtonian fluids, resistive forces are independent of speed. Drag forces resemble those in viscous fluids, while the ratio of thrust to drag forces is always larger in granular media than in viscous fluids. Using the force laws as inputs, the RFT overestimates swimming speed by approximately 20%. The simulation reveals that this is related to the non-instantaneous increase in force during reversals of body segments. Despite the inaccuracy of the steady-state assumption, we use the force laws and a recently developed geometric mechanics theory to predict optimal gaits for a model system that has been well studied in Newtonian fluids, the three-link swimmer. The combination of the geometric theory and the force laws allows us to generate a kinematic relationship between the swimmer's shape and position velocities and to construct connection vector field and constraint curvature function visualizations of the system dynamics. From these we predict optimal gaits for forward, lateral and rotational motion. Experiment and simulation are in accord with the theoretical prediction, and demonstrate that swimming in sand can be viewed as movement in a localized frictional fluid.

  9. Effects of optimized root water uptake parameterization schemes on water and heat flux simulation in a maize agroecosystem

    NASA Astrophysics Data System (ADS)

    Cai, Fu; Ming, Huiqing; Mi, Na; Xie, Yanbing; Zhang, Yushu; Li, Rongping

    2017-04-01

    As root water uptake (RWU) is an important link in the water and heat exchange between plants and the ambient air, improving its parameterization is key to enhancing the performance of land surface model simulations. Although different types of RWU functions have been adopted in land surface models, there is no evidence as to which scheme is most applicable to maize farmland ecosystems. Based on 2007-09 data collected at the farmland ecosystem field station in Jinzhou, the RWU function in the Common Land Model (CoLM) was optimized with scheme options in light of factors determining whether roots absorb water from a certain soil layer (Wx) and whether the baseline cumulative root efficiency required for maximum plant transpiration (Wc) is reached. The sensitivity of the parameters of the optimization scheme was investigated, and then the effects of the optimized RWU function on water and heat flux simulation were evaluated. The results indicate that the model simulation was not sensitive to Wx but was significantly impacted by Wc. With the original model, soil humidity was somewhat underestimated for precipitation-free days; soil temperature was simulated with obvious interannual and seasonal differences and remarkable underestimation for the late maize growth stage; and sensible and latent heat fluxes were overestimated and underestimated, respectively, for years with relatively less precipitation, while both were simulated with high accuracy for years with relatively more precipitation. The optimized RWU process resulted in a significant improvement in CoLM's performance in simulating soil humidity, temperature, sensible heat, and latent heat for dry years. In conclusion, the optimized RWU scheme available for the CoLM model is applicable to the simulation of water and heat flux for maize farmland ecosystems in arid areas.

  10. Topography-based Flood Planning and Optimization Capability Development Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judi, David R.; Tasseff, Byron A.; Bent, Russell W.

    2014-02-26

    Globally, water-related disasters are among the most frequent and costly natural hazards. Flooding inflicts catastrophic damage on critical infrastructure and population, resulting in substantial economic and social costs. NISAC is developing LeveeSim, a suite of nonlinear and network optimization models, to predict optimal barrier placement to protect critical regions and infrastructure during flood events. LeveeSim currently includes a high-performance flood model to simulate overland flow, as well as a network optimization model to predict optimal barrier placement during a flood event. The LeveeSim suite models the effects of flooding in predefined regions. By manipulating a domain’s underlying topography, developers altered flood propagation to reduce detrimental effects in areas of interest. This numerical altering of a domain’s topography is analogous to building levees, placing sandbags, etc. To induce optimal changes in topography, NISAC used a novel application of an optimization algorithm to minimize flooding effects in regions of interest. To develop LeveeSim, NISAC constructed and coupled hydrodynamic and optimization algorithms. NISAC first implemented its existing flood modeling software to use massively parallel graphics processing units (GPUs), which allowed for the simulation of larger domains and longer timescales. NISAC then implemented a network optimization model to predict optimal barrier placement based on output from flood simulations. As proof of concept, NISAC developed five simple test scenarios, and optimized topographic solutions were compared with intuitive solutions. Finally, as an early validation example, barrier placement was optimized to protect an arbitrary region in a simulation of the historic Taum Sauk dam breach.

  11. A comparative research of different ensemble surrogate models based on set pair analysis for the DNAPL-contaminated aquifer remediation strategy optimization.

    PubMed

    Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin

    2017-08-01

    The surrogate-based simulation-optimization technique is an effective approach for optimizing surfactant-enhanced aquifer remediation (SEAR) strategies for clearing DNAPLs. The performance of the surrogate model, which replaces the simulation model in order to reduce the computational burden, is central to such research. However, previous studies are generally based on a stand-alone surrogate model and rarely make an effort to sufficiently improve the approximation accuracy of the surrogate model by combining various methods. In this regard, we present set pair analysis (SPA) as a new method to build an ensemble surrogate (ES) model, and conducted a comparative study to select a better ES modeling pattern for SEAR strategy optimization problems. Surrogate models were developed using a radial basis function artificial neural network (RBFANN), support vector regression (SVR), and Kriging. One ES model assembles the RBFANN, SVR, and Kriging models using set pair weights according to their performance, and the other assembles several Kriging models (the best surrogate modeling method of the three) built with different training sample datasets. Finally, an optimization model, in which the ES model was embedded, was established to obtain the optimal remediation strategy. The results showed that the residuals of the outputs between the best ES model and the simulation model for 100 testing samples were lower than 1.5%. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of the simulation-optimization process while maintaining high computational accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.
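The ensemble idea (weight individual surrogates by how well they reproduce the simulator) can be sketched with a simple inverse-RMSE weighting; this stands in for the set-pair-analysis weights of the paper, and the toy surrogates below are assumptions:

```python
def ensemble_predict(surrogates, weights, x):
    """Weighted-sum prediction of an ensemble of surrogate models."""
    return sum(wt * s(x) for s, wt in zip(surrogates, weights))

def error_based_weights(surrogates, val_x, val_y):
    """Weight each surrogate by inverse validation RMSE (normalised to 1);
    a simple stand-in for the set-pair-analysis weighting."""
    inv = []
    for s in surrogates:
        mse = sum((s(x) - y) ** 2 for x, y in zip(val_x, val_y)) / len(val_x)
        inv.append(1.0 / (mse ** 0.5 + 1e-12))
    total = sum(inv)
    return [v / total for v in inv]

# toy "simulator" f(x) = x^2 and two imperfect surrogates of it
true_f = lambda x: x * x
s1 = lambda x: x * x + 0.1    # small bias -> low RMSE -> large weight
s2 = lambda x: x * x + 1.0    # large bias -> small weight
val_x = [0.0, 1.0, 2.0]
w = error_based_weights([s1, s2], val_x, [true_f(v) for v in val_x])
```

The ensemble prediction then leans toward the more accurate surrogate, which is the mechanism that keeps the residuals against the simulator small.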

  12. Intermodulation distortion and linearity performance assessment of 50-nm gate length L-DUMGAC MOSFET for RFIC design

    NASA Astrophysics Data System (ADS)

    Chaujar, Rishu; Kaur, Ravneet; Saxena, Manoj; Gupta, Mridula; Gupta, R. S.

    2008-08-01

    The distortion and linearity behaviour of MOSFETs is imperative for low-noise applications and RFIC design. In this paper, an extensive study of the RF-distortion and linearity behaviour of the Laterally Amalgamated DUal Material GAte Concave (L-DUMGAC) MOSFET is performed, and the influence of technology variations such as gate length, negative junction depth (NJD), substrate bias, drain bias and gate material workfunction is explored using the ATLAS device simulator. Simulation results reveal that the L-DUMGAC MOSFET significantly enhances the linearity and intermodulation distortion performance in terms of the figure-of-merit (FOM) metrics VIP2, VIP3, IIP3 and IMD3 and the higher-order transconductance coefficients gm1, gm2 and gm3, proving its efficacy for RFIC design. The work thus optimizes the device's bias point for RFICs with higher efficiency and better linearity performance.

  13. Modeling of the adsorption breakthrough behaviors of Pb2+ in a fixed bed of ETS-10 adsorbent.

    PubMed

    Lv, Lu; Zhang, Yan; Wang, Kean; Ray, Ajay K; Zhao, X S

    2008-09-01

    On the basis of experimental breakthrough curves of lead ion adsorption on ETS-10 particles in a fixed-bed column, we simulated the breakthrough curves using the two-phase homogeneous diffusion model (TPHDM). Three important model parameters, namely the external mass-transfer coefficient (k(f)), the effective intercrystal diffusivity (D(e)), and the axial dispersion coefficient (D(L)), were found by optimization to be 8.33x10(-5) m/s, 2.57x10(-10) m(2)/s, and 1.93x10(-10) m(2)/s, respectively. Good agreement was observed between the numerical simulation and the experimental results. Sensitivity analysis revealed that the value of D(e) dictates the model performance, while the magnitude of k(f) primarily affects the initial breakthrough point of the breakthrough curves.

  14. GOSA, a simulated annealing-based program for global optimization of nonlinear problems, also reveals transyears

    PubMed Central

    Czaplicki, Jerzy; Cornélissen, Germaine; Halberg, Franz

    2009-01-01

    Summary Transyears in biology have been documented thus far by the extended cosinor approach, including linear-nonlinear rhythmometry. We here confirm the existence of transyears by simulated annealing, a method originally developed for a much broader use, but described and introduced herein for validating its application to time series. The method is illustrated both on an artificial test case with known components and on biological data. We provide a table comparing results by the two methods and trust that the procedure will serve the budding sciences of chronobiology (the study of mechanisms underlying biological time structure), chronomics (the mapping of time structures in and around us), and chronobioethics, using the foregoing disciplines to add to concern for illnesses of individuals, and to budding focus on diseases of nations and civilizations. PMID:20414480
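The single-component cosinor underlying such analyses fits y(t) = M + A·cos(2πt/τ + φ); for evenly sampled data, a coarse period scan by projection onto cos/sin terms (a simple stand-in for the simulated-annealing or linear-nonlinear search, with an invented synthetic series) can be sketched as:

```python
import math

def cosinor_scan(t, y, periods):
    """Scan candidate periods tau; at each, project the mean-removed data
    onto cos/sin at frequency 2*pi/tau to estimate the amplitude
    (valid for evenly sampled data spanning several cycles)."""
    n = len(t)
    mesor = sum(y) / n                       # MESOR: rhythm-adjusted mean
    best_amp, best_tau = -1.0, None
    for tau in periods:
        w = 2.0 * math.pi / tau
        b = 2.0 / n * sum((yi - mesor) * math.cos(w * ti) for ti, yi in zip(t, y))
        c = 2.0 / n * sum((yi - mesor) * math.sin(w * ti) for ti, yi in zip(t, y))
        amp = math.hypot(b, c)
        if amp > best_amp:
            best_amp, best_tau = amp, tau
    return best_tau, best_amp, mesor

# synthetic series: period 13, amplitude 2, MESOR 5 (arbitrary units)
t = list(range(100))
y = [5.0 + 2.0 * math.cos(2.0 * math.pi * ti / 13.0 + 0.4) for ti in t]
period, amp, mesor = cosinor_scan(t, y, periods=range(8, 21))
```

Simulated annealing, as used in GOSA, replaces this grid scan with a stochastic search over the same nonlinear parameter (the period).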

  15. Lightning Damage of Carbon Fiber/Epoxy Laminates with Interlayers Modified by Nickel-Coated Multi-Walled Carbon Nanotubes

    NASA Astrophysics Data System (ADS)

    Dong, Qi; Wan, Guoshun; Xu, Yongzheng; Guo, Yunli; Du, Tianxiang; Yi, Xiaosu; Jia, Yuxi

    2017-12-01

    A numerical model of carbon fiber reinforced polymer (CFRP) laminates with electrically modified interlayers subjected to lightning strike is constructed through finite element simulation, in which both intra-laminar and inter-laminar lightning damage are considered by means of a coupled electrical-thermal-pyrolytic analysis method. The extent of lightning damage, including the damage volume and maximum damage depth, is then investigated. The results reveal that the simulated lightning damage can be qualitatively compared to the experimental counterparts for CFRP laminates with interlayers modified by nickel-coated multi-walled carbon nanotubes (Ni-MWCNTs). With higher electrical conductivity of the modified interlayer and a greater number of modified interlayers, both the damage volume and the maximum damage depth are reduced. This work provides effective guidance for the anti-lightning optimization of CFRP laminates.

  16. A Simulation of Readiness-Based Sparing Policies

    DTIC Science & Technology

    2017-06-01

    A variant of a greedy heuristic algorithm is used to set stock levels and estimate overall weapon system (WS) availability. Our discrete event simulation is then used to test the … available in the optimization tools. Subject terms: readiness-based sparing, discrete event simulation, optimization, multi-indenture…

  17. Computer-intensive simulation of solid-state NMR experiments using SIMPSON.

    PubMed

    Tošner, Zdeněk; Andersen, Rasmus; Stevensson, Baltzar; Edén, Mattias; Nielsen, Niels Chr; Vosegaard, Thomas

    2014-09-01

    Conducting large-scale solid-state NMR simulations requires fast computer software, potentially in combination with efficient computational resources, to complete within a reasonable time frame. Such simulations may involve large spin systems, multiple-parameter fitting of experimental spectra, or multiple-pulse experiment design using parameter scans, non-linear optimization, or optimal control procedures. To efficiently accommodate such simulations, we here present an improved version of the widely distributed open-source SIMPSON NMR simulation software package adapted to contemporary high-performance hardware setups. The software is optimized for fast performance on standard stand-alone computers, multi-core processors, and large clusters of identical nodes. We describe the novel features for fast computation, including internal matrix manipulations, propagator setups and acquisition strategies. For efficient calculation of powder averages, we implemented the interpolation method of Alderman, Solum, and Grant, as well as the recently introduced fast Wigner transform interpolation technique. The potential of the optimal control toolbox is greatly enhanced by higher-precision gradients in combination with the efficient optimization algorithm known as limited-memory Broyden-Fletcher-Goldfarb-Shanno. In addition, advanced parallelization can be used in all types of calculations, providing significant time reductions. SIMPSON thus reflects current knowledge in the field of numerical simulations of solid-state NMR experiments. The efficiency and novel features are demonstrated on representative simulations. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. CFD-based optimization in plastics extrusion

    NASA Astrophysics Data System (ADS)

    Eusterholz, Sebastian; Elgeti, Stefanie

    2018-05-01

    This paper presents novel ideas in the numerical design of mixing elements in single-screw extruders. The actual design process is reformulated as a shape optimization problem, given some functional, but possibly inefficient, initial design. Thereby automatic optimization can be incorporated, and the design process is advanced beyond the simulation-supported, but still experience-based, approach. This paper proposes concepts to extend a method which has been developed and validated for die design to the design of mixing elements. For simplicity, it focuses on single-phase flows only. The developed method conducts forward simulations to predict the quasi-steady melt behavior in the relevant part of the extruder. The result of each simulation is used in a black-box optimization procedure based on an efficient low-order parameterization of the geometry. To minimize user interaction, an objective function is formulated that quantifies the product's quality based on the forward simulation. This paper covers two aspects: (1) it reviews the set-up of the optimization framework as discussed in [1], and (2) it details the necessary extensions for the optimization of mixing elements in single-screw extruders. It concludes with a presentation of first advances in the unsteady flow simulation of a metering and mixing section with the SSMUM [2] using the Carreau material model.
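The black-box loop (low-order shape parameters in, forward-simulation quality measure out) can be sketched with a derivative-free pattern search; the two-parameter quadratic standing in for the expensive flow simulation, and its "ideal" shape (1.2, 0.8), are invented for illustration:

```python
def pattern_search(objective, x0, step=0.5, shrink=0.5, tol=1e-3, max_iter=200):
    """Derivative-free pattern search: probe +/- step along each shape
    parameter and halve the step when no probe improves the objective."""
    x = list(x0)
    fx = objective(x)
    for _ in range(max_iter):
        if step <= tol:
            break
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                cand = x[:]
                cand[i] += d
                fc = objective(cand)
                if fc < fx:
                    x, fx, improved = cand, fc, True
                    break
        if not improved:
            step *= shrink
    return x, fx

# stand-in for the expensive forward flow simulation: quality penalty around
# a hypothetical ideal pair of mixing-element shape parameters (1.2, 0.8)
forward_sim = lambda p: (p[0] - 1.2) ** 2 + 2.0 * (p[1] - 0.8) ** 2
shape, quality = pattern_search(forward_sim, [0.0, 0.0])
```

Each objective call here corresponds to one (expensive) forward simulation, which is why low-order geometry parameterizations matter.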

  19. Automated Calibration For Numerical Models Of Riverflow

    NASA Astrophysics Data System (ADS)

    Fernandez, Betsaida; Kopmann, Rebekka; Oladyshkin, Sergey

    2017-04-01

    Calibration has been fundamental to all types of hydro-system modeling since its beginnings, as it approximates the parameters that can mimic the overall system behavior. Thus, an assessment of different deterministic and stochastic optimization methods is undertaken to compare their robustness, computational feasibility, and global search capacity. Also, the uncertainty of the most suitable methods is analyzed. These optimization methods minimize an objective function that compares synthetic measurements and simulated data. Synthetic measurement data replace the observed data set to guarantee that an exact parameter solution exists. The input data for the objective function derive from a hydro-morphological dynamics numerical model which represents a 180-degree bend channel. The hydro-morphological numerical model exhibits a high level of ill-posedness in the mathematical problem. The minimization of the objective function by the different candidate optimization methods indicates failure for some of the gradient-based methods, such as Newton Conjugate Gradient and BFGS. Others reveal partial convergence, such as Nelder-Mead, Polak-Ribière, L-BFGS-B, Truncated Newton Conjugate Gradient, and Trust-Region Newton Conjugate Gradient. Further ones yield parameter solutions that range outside the physical limits, such as Levenberg-Marquardt and LeastSquareRoot. Moreover, there is a significant computational demand for global optimization methods, such as Differential Evolution and Basin-Hopping, as well as for brute-force methods. The deterministic Sequential Least Squares Programming method and the stochastic Bayesian inference approach present the best optimization results. Keywords: automated calibration of hydro-morphological dynamic numerical models, Bayesian inference theory, deterministic optimization methods.
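The synthetic-measurement setup can be sketched as follows: "observations" are generated from the forward model with known parameters, so the calibration objective is guaranteed to have an exact minimizer. The toy polynomial model and the plain gradient-descent minimizer are illustrative assumptions, not the hydro-morphological model or the methods compared in the study:

```python
def model(params, x):
    """Toy stand-in for the hydro-morphological forward model."""
    a, b = params
    return a * x + b * x * x

def make_objective(forward, xs, true_params):
    """Synthetic-measurement calibration objective: sum of squared residuals
    against observations generated by the forward model itself."""
    obs = [forward(true_params, x) for x in xs]
    return lambda p: sum((forward(p, x) - o) ** 2 for x, o in zip(xs, obs))

def numeric_gradient_descent(f, x0, lr=0.01, iters=2000, h=1e-6):
    """Plain gradient descent with forward-difference gradients."""
    x = list(x0)
    for _ in range(iters):
        fx = f(x)
        grad = []
        for i in range(len(x)):
            xp = x[:]
            xp[i] += h
            grad.append((f(xp) - fx) / h)
        x = [xi - lr * gi for xi, gi in zip(x, grad)]
    return x

xs = [0.5, 1.0, 1.5, 2.0]
objective = make_objective(model, xs, true_params=(2.0, -0.5))
est = numeric_gradient_descent(objective, [0.0, 0.0])
```

Because the observations come from the model itself, any failure of an optimizer to recover (2.0, -0.5) can be attributed to the method, not to model error, which is exactly the point of using synthetic measurements.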

  20. Highly immersive virtual reality laparoscopy simulation: development and future aspects.

    PubMed

    Huber, Tobias; Wunderling, Tom; Paschold, Markus; Lang, Hauke; Kneist, Werner; Hansen, Christian

    2018-02-01

    Virtual reality (VR) applications with head-mounted displays (HMDs) have had an impact on information and multimedia technologies. The current work aimed to describe the process of developing a highly immersive VR simulation for laparoscopic surgery. We combined a VR laparoscopy simulator (LapSim) and a VR-HMD to create a user-friendly VR simulation scenario. Continuous clinical feedback was an essential aspect of the development process. We created an artificial VR (AVR) scenario by integrating the simulator video output with VR game components of figures and equipment in an operating room. We also created a highly immersive VR surrounding (IVR) by integrating the simulator video output with a [Formula: see text] video of a standard laparoscopy scenario in the department's operating room. Clinical feedback led to optimization of the visualization, synchronization, and resolution of the virtual operating rooms (in both the IVR and the AVR). Preliminary testing results revealed that individuals experienced a high degree of exhilaration and presence, with rare events of motion sickness. The technical performance showed no significant difference compared to that achieved with the standard LapSim. Our results provided a proof of concept for the technical feasibility of a custom highly immersive VR-HMD setup. Future technical research is needed to improve the visualization, immersion, and capability of interacting within the virtual scenario.

  1. Numerical Optimization of a Bifacial Bi-Glass Thin-Film a-Si:H Solar Cell for Higher Conversion Efficiency

    NASA Astrophysics Data System (ADS)

    Berrian, Djaber; Fathi, Mohamed; Kechouane, Mohamed

    2018-02-01

    Bifacial solar cells that maximize the energy output per square meter have become a new trend in the field of photovoltaic cells. However, the application of thin-film materials in bifacial solar cells, viz., thin-film hydrogenated amorphous silicon (a-Si:H), is extremely rare. Therefore, this paper presents the optimization and influence of the band gap, thickness and doping on the performance of a glass/glass thin-film a-Si:H (n-i-p) bifacial solar cell, using a computer-aided simulation tool, Automat for simulation of hetero-structures (AFORS-HET). It is worth mentioning that the thickness and the band gap of the i-layer are the key parameters in achieving higher efficiency, and hence they have to be handled carefully during the fabrication process. Furthermore, an efficient thin-film a-Si:H bifacial solar cell requires thinner and heavily doped n and p emitter layers. On the other hand, the band gap of the p-layer showed a dramatic reduction of the efficiency at 2.3 eV. Moreover, a high bifaciality factor of more than 92% is attained, and a top efficiency of 10.9% is revealed under p-side illumination. These optimizations demonstrate significant enhancements over recent experimental work on thin-film a-Si:H bifacial solar cells and would also be useful for future experimental investigations of efficient a-Si:H thin-film bifacial solar cells.

  2. Numerical simulations on active shielding methods comparison and wrapped angle optimization for gradient coil design in MRI with enhanced shielding effect

    NASA Astrophysics Data System (ADS)

    Wang, Yaohui; Xin, Xuegang; Guo, Lei; Chen, Zhifeng; Liu, Feng

    2018-05-01

    The switching of a gradient coil current in magnetic resonance imaging will induce an eddy current in the surrounding conducting structures, and the secondary magnetic field produced by the eddy current is harmful to the imaging. To minimize the eddy current effects, the stray field shielding in the gradient coil design is usually realized by minimizing the magnetic fields on the cryostat surface or the secondary magnetic fields over the imaging region. In this work, we explicitly compared these two active shielding design methods. Both the stray field and eddy current on the cryostat inner surface were quantitatively discussed by setting the stray field constraint with an ultra-low maximum intensity of 2 G and setting the secondary field constraint with an extremely small shielding ratio of 0.000001. The investigation revealed that the secondary magnetic field control strategy can produce coils with a better performance. However, the former (minimizing the magnetic fields) is preferable when designing a gradient coil with an ultra-low eddy current that can also strictly control the stray field leakage at the edge of the cryostat inner surface. A wrapped-edge gradient coil design scheme was then optimized for a more effective control of the stray fields. The numerical simulation of the wrapped-edge coil design shows that the optimized wrapping angles for the x and z coils, in terms of our coil dimensions, are 40° and 90°, respectively.

  3. Simulation Modeling to Compare High-Throughput, Low-Iteration Optimization Strategies for Metabolic Engineering

    PubMed Central

    Heinsch, Stephen C.; Das, Siba R.; Smanski, Michael J.

    2018-01-01

    Increasing the final titer of a multi-gene metabolic pathway can be viewed as a multivariate optimization problem. While numerous multivariate optimization algorithms exist, few are specifically designed to accommodate the constraints posed by genetic engineering workflows. We present a strategy for optimizing expression levels across an arbitrary number of genes that requires few design-build-test iterations. We compare the performance of several optimization algorithms on a series of simulated expression landscapes. We show that optimal experimental design parameters depend on the degree of landscape ruggedness. This work provides a theoretical framework for designing and executing numerical optimization on multi-gene systems. PMID:29535690
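The idea of testing search strategies on simulated expression landscapes can be illustrated with a minimal sketch. The landscape below is a smooth, separable toy, so a one-gene-at-a-time search finds the optimum in a single pass per gene; the genes, levels, and optimum are invented, and real rugged landscapes would behave differently, which is precisely what the study quantifies:

```python
# Hedged sketch: a separable toy "expression landscape" and a
# one-gene-at-a-time search. All genes, levels, and the optimum are
# invented; real landscapes are rugged, which is what the study probes.
import numpy as np

levels = np.arange(5)                      # 5 expression levels per gene
n_genes = 3
opt = np.array([3, 1, 2])                  # assumed optimal combination

def titer(design):
    # smooth, separable stand-in for the simulated landscape
    return -np.sum((np.asarray(design) - opt) ** 2)

def coordinate_search(start=(0, 0, 0)):
    # one design-build-test round per gene: try all levels, keep the best
    d = np.array(start)
    for g in range(n_genes):
        scores = []
        for lv in levels:
            cand = d.copy()
            cand[g] = lv
            scores.append((titer(cand), lv))
        d[g] = max(scores)[1]
    return d

best = coordinate_search()
print(best)  # recovers the optimum in n_genes rounds on this separable toy
```

On a rugged (non-separable) landscape this greedy strategy can stall in a local optimum, which motivates comparing experimental design parameters against landscape ruggedness as the paper does.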

  4. Quantum-dot based nanothermometry in optical plasmonic recording media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maestro, Laura Martinez; Centre for Micro-Photonics, Faculty of Science, Engineering and Technology, Swinburne University of Technology, Hawthorn, Victoria 3122; Zhang, Qiming

    2014-11-03

    We report on the direct experimental determination of the temperature increment caused by laser irradiation in an optical recording medium consisting of a polymeric film in which gold nanorods have been incorporated. The incorporation of CdSe quantum dots in the recording medium allowed for single-beam thermal reading of the on-focus temperature from a simple analysis of the two-photon-excited fluorescence of the quantum dots. Experimental results have been compared with numerical simulations, revealing excellent agreement and opening a promising avenue for further understanding and optimization of optical writing processes and media.

  5. Cross-Layer Protocol Combining Tree Routing and TDMA Slotting in Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Bai, Ronggang; Ji, Yusheng; Lin, Zhiting; Wang, Qinghua; Zhou, Xiaofang; Qu, Yugui; Zhao, Baohua

    Unlike other networks, wireless sensor networks carry data traffic whose load and direction are rather predictable. The relationships between nodes are cooperative rather than competitive. These features allow the protocol stack to be designed in a cross-layer interactive way instead of as a hierarchical structure. The proposed cross-layer protocol CLWSN optimizes the channel allocation in the MAC layer using information from the routing tables, reduces the conflicting set, and improves the throughput. Simulations revealed that it outperforms SMAC and MINA in terms of delay and energy consumption.

  6. Structural and electronic properties of carbon nanotube-reinforced epoxy resins.

    PubMed

    Suggs, Kelvin; Wang, Xiao-Qian

    2010-03-01

    Nanocomposites of cured epoxy resin reinforced by single-walled carbon nanotubes exhibit a plethora of interesting behaviors at the molecular level. We have employed a combination of force-field-based molecular mechanics and first-principles calculations to study the corresponding binding and charge-transfer behavior. The simulation study of various nanotube species and curing agent configurations provides insight into the optimal structures with regard to interfacial stability. An analysis of the charge distributions of the epoxy-functionalized semiconducting and metallic tubes reveals distinct level hybridizations. The implications of these results for understanding the dispersion mechanism and for future nano-reinforced composite development are discussed.

  7. Theory of lasing action in plasmonic crystals

    NASA Astrophysics Data System (ADS)

    Cuerda, J.; Rüting, F.; García-Vidal, F. J.; Bravo-Abad, J.

    2015-01-01

    We theoretically investigate lasing action in plasmonic crystals incorporating optically pumped four-level gain media. By using detailed simulations based on a time-domain generalization of the finite-element method, we show that the excitation of dark plasmonic resonances (via the gain medium) enables accessing the optimal lasing characteristics of the considered class of systems. Moreover, our study reveals that, in general, arrays of nanowires feature lower lasing thresholds and larger slope efficiencies than those corresponding to periodic arrays of subwavelength apertures. These findings are of relevance for further engineering of active devices based on plasmonic crystals.

  8. Ultrafast electron diffraction optimized for studying structural dynamics in thin films and monolayers

    PubMed Central

    Badali, D. S.; Gengler, R. Y. N.; Miller, R. J. D.

    2016-01-01

    A compact electron source specifically designed for time-resolved diffraction studies of free-standing thin films and monolayers is presented here. The sensitivity to thin samples is achieved by extending the established technique of ultrafast electron diffraction to the “medium” energy regime (1–10 kV). An extremely compact design, in combination with low bunch charges, allows for high quality diffraction in a lensless geometry. The measured and simulated characteristics of the experimental system reveal sub-picosecond temporal resolution, while demonstrating the ability to produce high quality diffraction patterns from atomically thin samples. PMID:27226978

  9. Uncertainty analysis for effluent trading planning using a Bayesian estimation-based simulation-optimization modeling approach.

    PubMed

    Zhang, J L; Li, Y P; Huang, G H; Baetz, B W; Liu, J

    2017-06-01

    In this study, a Bayesian estimation-based simulation-optimization modeling approach (BESMA) is developed for identifying effluent trading strategies. BESMA incorporates nutrient fate modeling with the soil and water assessment tool (SWAT), Bayesian estimation, and probabilistic-possibilistic interval programming with fuzzy random coefficients (PPI-FRC) within a general framework. Based on the water quality protocols provided by SWAT, posterior distributions of parameters can be analyzed through Bayesian estimation, and the stochastic characteristics of nutrient loading can be investigated, which provides the inputs for decision making. PPI-FRC can address multiple uncertainties in the form of intervals with fuzzy random boundaries, and the associated system risk, by incorporating the concepts of possibility and necessity measures, which are suitable for optimistic and pessimistic decision making, respectively. BESMA is applied to a real case of effluent trading planning in the Xiangxihe watershed, China. A number of decision alternatives can be obtained under different trading ratios and treatment rates. The results not only facilitate identification of optimal effluent-trading schemes, but also give insight into the effects of trading ratio and treatment rate on decision making. The results also reveal that the decision maker's preference towards risk affects the decision alternatives on trading scheme as well as the system benefit. Compared with conventional optimization methods, BESMA proves advantageous in (i) dealing with multiple uncertainties associated with randomness and fuzziness in effluent-trading planning within a multi-source, multi-reach and multi-period context; (ii) reflecting uncertainties in nutrient transport behaviors to improve the accuracy of water quality prediction; and (iii) supporting pessimistic and optimistic decision making for effluent trading as well as promoting diversity of decision alternatives. Copyright © 2017 Elsevier Ltd. All rights reserved.
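One ingredient of this workflow, Bayesian estimation of a loading parameter, reduces to a closed-form update in the conjugate normal case. The numbers below are invented for illustration:

```python
# Hedged sketch of one BESMA ingredient: a conjugate (normal-normal)
# Bayesian update of a nutrient-loading parameter. All numbers invented.
import numpy as np

obs = np.array([12.1, 9.8, 11.4, 10.6, 10.9])   # hypothetical loads
prior_mean, prior_var = 10.0, 4.0               # assumed prior
noise_var = 1.0                                 # assumed observation noise

n = len(obs)
# Posterior precision is the sum of prior and data precisions; the
# posterior mean is their precision-weighted combination.
post_var = 1.0 / (1.0 / prior_var + n / noise_var)
post_mean = post_var * (prior_mean / prior_var + obs.sum() / noise_var)
print(post_mean, post_var)  # posterior tightens around the data
```

In BESMA itself the posteriors come from SWAT-driven Bayesian estimation rather than a conjugate formula, but the role is the same: a data-informed input distribution for the downstream optimization.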

  10. Optimal design of supply chain network under uncertainty environment using hybrid analytical and simulation modeling approach

    NASA Astrophysics Data System (ADS)

    Chiadamrong, N.; Piyathanavong, V.

    2017-12-01

    Models that aim to optimize the design of supply chain networks have gained increasing interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such optimization problems. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach is based on iterative procedures that repeat until the difference between subsequent solutions satisfies pre-determined termination criteria. The effectiveness of the proposed approach is illustrated by an example, which yields near-optimal results with much shorter solving times than the conventional simulation-based optimization model. The efficacy of this hybrid approach is promising, and it can be applied as a powerful tool in designing a real supply chain network. It also provides the possibility to model and solve more realistic problems, which incorporate dynamism and uncertainty.
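A minimal sketch of such an iterative hybrid loop follows, using invented stand-ins for both sides: a closed-form "analytical" optimizer and a noisy mock "simulation" that feeds a correction factor back until successive solutions agree within a tolerance:

```python
# Hedged sketch of the iterative hybrid loop. Both sides are invented
# stand-ins: a closed-form "analytical" optimum and a noisy mock
# "simulation" that estimates a cost parameter under uncertainty.
import numpy as np

rng = np.random.default_rng(2)

def analytical_optimum(setup_cost):
    # stand-in for the analytical (e.g. MILP) model: EOQ-style formula
    demand, holding = 100.0, 2.0
    return np.sqrt(2.0 * demand * setup_cost / holding)

def simulate(q):
    # stand-in for discrete-event simulation: estimate the effective
    # setup cost under uncertainty for order quantity q
    noise = rng.normal(0.0, 0.5, size=1000).mean()
    return 50.0 + 0.1 * q + noise

setup_cost, prev_q = 50.0, None
for _ in range(100):
    q = analytical_optimum(setup_cost)     # analytical step
    setup_cost = simulate(q)               # simulation feedback
    if prev_q is not None and abs(q - prev_q) < 0.05:
        break                              # termination criterion met
    prev_q = q
print(round(q, 1))
```

Because the simulation feedback is a mild contraction here, the loop settles in a handful of iterations; the paper's point is that this alternation is far cheaper than wrapping an optimizer directly around the simulation.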

  11. Electromagnetic Simulations for Aerospace Application Final Report CRADA No. TC-0376-92

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madsen, N.; Meredith, S.

    Electromagnetic (EM) simulation tools play an important role in the design cycle, allowing optimization of a design before it is fabricated for testing. The purpose of this cooperative project was to provide Lockheed with state-of-the-art electromagnetic (EM) simulation software that will enable the optimal design of the next generation of low-observable (LO) military aircraft through the VHF regime. More particularly, the project was principally code development and validation, its goal to produce a 3-D, conforming-grid, time-domain (TD) EM simulation tool, consisting of a mesh generator, a DS13D-based simulation kernel, and an RCS postprocessor, useful in the optimization of LO aircraft, both for full-aircraft simulations run on a massively parallel computer and for small-scale problems run on a UNIX workstation.

  12. Integration of Local Observations into the One Dimensional Fog Model PAFOG

    NASA Astrophysics Data System (ADS)

    Thoma, Christina; Schneider, Werner; Masbou, Matthieu; Bott, Andreas

    2012-05-01

    The numerical prediction of fog requires a very high vertical resolution of the atmosphere. Owing to the prohibitive computational effort of high-resolution three-dimensional models, operational fog forecasting is usually done by means of one-dimensional fog models. An important condition for a successful fog forecast with one-dimensional models is the proper integration of observational data into the numerical simulations. The goal of the present study is to introduce new methods for the consideration of these data in the one-dimensional radiation fog model PAFOG. First, it will be shown how PAFOG may be initialized with observed visibilities. Second, a nudging scheme will be presented for the inclusion of measured temperature and humidity profiles in the PAFOG simulations. The new features of PAFOG have been tested by comparing the model results with observations of the German Meteorological Service. A case study will be presented that reveals the importance of including local observations in the model calculations. Numerical results obtained with the modified PAFOG model show a distinct improvement of fog forecasts regarding the times of fog formation and dissipation as well as the vertical extent of the investigated fog events. However, the model results also reveal that a further improvement of PAFOG might be possible if several empirical model parameters are optimized. This tuning can only be realized by comprehensive comparisons of model simulations with corresponding fog observations.

  13. Simulated Stochastic Approximation Annealing for Global Optimization with a Square-Root Cooling Schedule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, Faming; Cheng, Yichen; Lin, Guang

    2014-06-13

    Simulated annealing has been widely used in the solution of optimization problems. As many researchers know, simulated annealing cannot be guaranteed to locate the global optima unless a logarithmic cooling schedule is used. However, the logarithmic cooling schedule is so slow that the required CPU time is prohibitive. This paper proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation Markov chain Monte Carlo, it is shown that the new algorithm can work with a cooling schedule in which the temperature decreases much faster than in the logarithmic cooling schedule, e.g., a square-root cooling schedule, while still guaranteeing that the global optima are reached as the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors.
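The square-root cooling schedule itself is easy to illustrate with plain simulated annealing (not the paper's stochastic approximation variant) on an invented multimodal test function:

```python
# Hedged sketch: plain simulated annealing (not the paper's stochastic
# approximation variant) with the square-root cooling schedule
# T_k = T0 / sqrt(k + 1), on an invented multimodal test function.
import numpy as np

rng = np.random.default_rng(3)

def f(x):
    return x**2 + 10.0 * np.sin(3.0 * x)   # global minimum near x ≈ -0.51

x = 4.0                                    # start far from the optimum
T0 = 20.0
best_x, best_f = x, f(x)
for k in range(20000):
    T = T0 / np.sqrt(k + 1)                # square-root cooling schedule
    cand = x + rng.normal(0.0, 1.5)        # random-walk proposal
    delta = f(cand) - f(x)
    if delta < 0 or rng.random() < np.exp(-delta / T):
        x = cand                           # Metropolis acceptance
    if f(x) < best_f:
        best_x, best_f = x, f(x)
print(round(best_x, 2))
```

Plain annealing with a schedule this fast carries no convergence guarantee; the paper's contribution is showing that combining it with stochastic approximation restores that guarantee.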

  14. CFD optimization of continuous stirred-tank (CSTR) reactor for biohydrogen production.

    PubMed

    Ding, Jie; Wang, Xu; Zhou, Xue-Fei; Ren, Nan-Qi; Guo, Wan-Qian

    2010-09-01

    There has been little work on the optimal configuration of biohydrogen production reactors. This paper describes three-dimensional computational fluid dynamics (CFD) simulations of gas-liquid flow in a laboratory-scale continuous stirred-tank reactor used for biohydrogen production. To evaluate the role of hydrodynamics in reactor design and optimize the reactor configuration, an optimized impeller design was constructed and validated with CFD simulations of the normal and optimized impellers over a range of speeds, and the numerical results were also validated by examination of the residence time distribution. By integrating the CFD simulation with an ethanol-type fermentation process experiment, it was shown that impellers of different type and speed generated different flow patterns, and hence offered different efficiencies for biohydrogen production. The hydrodynamic behavior of the optimized impeller at speeds between 50 and 70 rev/min is most suited for economical biohydrogen production. Copyright 2010 Elsevier Ltd. All rights reserved.

  15. Design of a finger base-type pulse oximeter

    NASA Astrophysics Data System (ADS)

    Lin, Bor-Shyh; Huang, Cheng-Yang; Chen, Chien-Yue; Lin, Jiun-Hung

    2016-01-01

    A pulse oximeter is a common medical instrument used for noninvasively monitoring arterial oxygen saturation (SpO2). Currently, the fingertip-type pulse oximeter is the prevalent type of pulse oximeter used. However, it is inconvenient for long-term monitoring, such as that under motion. In this study, a wearable and wireless finger base-type pulse oximeter was designed and implemented using the tissue optical simulation technique and the Monte Carlo method. The results revealed that a design involving placing the light source at 135°-165° and placing the detector at 75°-90° or 90°-105° yields the optimal conditions for measuring SpO2. Finally, the wearable and wireless finger base-type pulse oximeter was implemented and compared with the commercial fingertip-type pulse oximeter. The experimental results showed that the proposed optimal finger base-type pulse oximeter design can facilitate precise SpO2 measurement.

  17. Noise-induced escape in an excitable system

    NASA Astrophysics Data System (ADS)

    Khovanov, I. A.; Polovinkin, A. V.; Luchinsky, D. G.; McClintock, P. V. E.

    2013-03-01

    We consider the stochastic dynamics of escape in an excitable system, the FitzHugh-Nagumo (FHN) neuronal model, for different classes of excitability. We discuss, first, the threshold structure of the FHN model as an example of a system without a saddle state. We then develop a nonlinear (nonlocal) stability approach based on the theory of large fluctuations, including a finite-noise correction, to describe noise-induced escape in the excitable regime. We show that the threshold structure is revealed via patterns of most probable (optimal) fluctuational paths. The approach allows us to estimate the escape rate and the exit location distribution. We compare the responses of a monostable resonator and monostable integrator to stochastic input signals and to a mixture of periodic and stochastic stimuli. Unlike the commonly used local analysis of the stable state, our nonlocal approach based on optimal paths yields results that are in good agreement with direct numerical simulations of the Langevin equation.

  18. Structural optimization of Beach-Cleaner snatch mechanism

    NASA Astrophysics Data System (ADS)

    Ouyang, Lian-ge; Wei, Qin-rui; Zhou, Shui-ting; Peng, Qian; Zhao, Yuan-jiang; Wang, Fang

    2017-12-01

    In the working process of one Beach-Cleaner snatch mechanism, the angular speed of the second knuckle arm was too high, which caused the pick-up device to crash into the basic arm during the folding process. The rational joint position that reduces the second knuckle arm's angular speed, and the force along the axis direction at the most dangerous point, were obtained from a kinematics simulation of the snatch mechanism in Automatic Dynamic Analysis of Mechanical Systems (ADAMS). The feasibility of the scheme was validated by analyzing the optimized model in ANSYS. The analysis revealed that the open angle between the basic arm and the second knuckle arm improved from 125.0° to 135.24°, and the second knuckle arm's angular speed decreased from 990.74 rad/s to 58.53 rad/s. This not only improved the working efficiency of the snatch mechanism, but also improved its operational smoothness.

  19. Preparation and Characterization of Organic-Inorganic Hybrid Macrocyclic Compounds: Cyclic Ladder-like Polyphenylsilsesquioxanes.

    PubMed

    Zhang, Wenchao; Wang, Xiaoxia; Wu, Yiwei; Qi, Zhi; Yang, Rongjie

    2018-04-02

    Organic-inorganic hybrid macrocyclic compounds, cyclic polyphenylsilsesquioxanes (cyc-PSQs), have been synthesized through hydrolysis and condensation reactions of phenyltrichlorosilane. Structural characterization has revealed that cyc-PSQs consist of a closed-ring double-chain siloxane inorganic backbone bearing organic phenyl groups. The cyc-PSQ molecules have been simulated and structurally optimized using the Forcite tool as implemented in Materials Studio. Structurally optimized cyc-PSQs are highly symmetrical and regular with high stereoregularity, consistent with the dimensions of their experimentally derived structures. Thermogravimetric analysis showed that these macrocyclic compounds have excellent thermal stability. In addition to these perfectly structured compounds, macrocyclic compounds with the same ring ladder structure but bearing an additional Si-OH group, cyc-PSQs-OH, have also been synthesized. A possible mechanism for the formation of the closed-ring molecular structures of cyc-PSQs and cyc-PSQs-OH is proposed.

  20. Fractional Control of An Active Four-wheel-steering Vehicle

    NASA Astrophysics Data System (ADS)

    Wang, Tianting; Tong, Jun; Chen, Ning; Tian, Jie

    2018-03-01

    A four-wheel-steering (4WS) vehicle model and a reference model with a drop filter are constructed. The decoupling of the 4WS vehicle model is carried out, and a fractional PIλDμ controller is introduced into the decoupling strategy to reduce the effects of the uncertainty of the vehicle parameters, as well as the unmodelled dynamics, on the system performance. Based on optimization techniques, the parameters of the fractional controller are obtained to ensure the robustness of the 4WS vehicle over the specified frequency range through a proper choice of constraints. For comparison with the fractional robust controller, an optimal controller for the same vehicle is also designed. Simulations of the two control systems reveal that the decoupling and fractional robust controller makes the vehicle model trace the reference model very well, with better robustness.

  1. Optimization of the random multilayer structure to break the random-alloy limit of thermal conductivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yan; Gu, Chongjie; Ruan, Xiulin, E-mail: ruan@purdue.edu

    2015-02-16

    A low lattice thermal conductivity (κ) is desired for thermoelectrics, and a highly anisotropic κ is essential for applications such as magnetic layers for heat-assisted magnetic recording, where a high cross-plane (perpendicular to layer) κ is needed to ensure fast writing while a low in-plane κ is required to avoid interaction between adjacent bits of data. In this work, we conduct molecular dynamics simulations to investigate the κ of superlattices (SL), random multilayers (RML) and alloys, and reveal that RML can have 1–2 orders of magnitude higher anisotropy in κ than SL and alloy. We systematically explore how the κ of SL, RML, and alloy changes relative to each other for different bond strengths, interface roughnesses, atomic masses, and structure sizes, which provides guidance for choosing materials and structural parameters to build RMLs with optimal performance for specific applications.

  2. Number of discernible object colors is a conundrum.

    PubMed

    Masaoka, Kenichiro; Berns, Roy S; Fairchild, Mark D; Moghareh Abed, Farhad

    2013-02-01

    Widely varying estimates of the number of discernible object colors have been made by using various methods over the past 100 years. To clarify the source of the discrepancies in the previous, inconsistent estimates, the number of discernible object colors is estimated over a wide range of color temperatures and illuminance levels using several chromatic adaptation models, color spaces, and color difference limens. Efficient and accurate models are used to compute optimal-color solids and count the number of discernible colors. A comprehensive simulation reveals limitations in the ability of current color appearance models to estimate the number of discernible colors even if the color solid is smaller than the optimal-color solid. The estimates depend on the color appearance model, color space, and color difference limen used. The fundamental problem lies in the von Kries-type chromatic adaptation transforms, which have an unknown effect on the ranking of the number of discernible colors at different color temperatures.

  3. A Homogenization Approach for Design and Simulation of Blast Resistant Composites

    NASA Astrophysics Data System (ADS)

    Sheyka, Michael

    Structural composites have been used in aerospace and structural engineering due to their high strength to weight ratio. Composite laminates have been successfully and extensively used in blast mitigation. This dissertation examines the use of the homogenization approach to design and simulate blast resistant composites. Three case studies are performed to examine the usefulness of different methods that may be used in designing and optimizing composite plates for blast resistance. The first case study utilizes a single degree of freedom system to simulate the blast and a reliability based approach. The first case study examines homogeneous plates and the optimal stacking sequence and plate thicknesses are determined. The second and third case studies use the homogenization method to calculate the properties of composite unit cell made of two different materials. The methods are integrated with dynamic simulation environments and advanced optimization algorithms. The second case study is 2-D and uses an implicit blast simulation, while the third case study is 3-D and simulates blast using the explicit blast method. Both case studies 2 and 3 rely on multi-objective genetic algorithms for the optimization process. Pareto optimal solutions are determined in case studies 2 and 3. Case study 3 is an integrative method for determining optimal stacking sequence, microstructure and plate thicknesses. The validity of the different methods such as homogenization, reliability, explicit blast modeling and multi-objective genetic algorithms are discussed. Possible extension of the methods to include strain rate effects and parallel computation is also examined.

  4. Simulation-Optimization Model for Seawater Intrusion Management at Pingtung Coastal Area, Taiwan

    NASA Astrophysics Data System (ADS)

    Huang, P. S.; Chiu, Y.

    2015-12-01

    In the 1970s, agriculture and aquaculture developed rapidly in the Pingtung coastal area of southern Taiwan. The groundwater aquifers were over-pumped, causing seawater intrusion. In order to remediate the contaminated groundwater and find the best strategies for groundwater use, a management model that searches for optimal groundwater operating strategies is developed in this study. The objective function minimizes the total amount of injection water, and a set of constraints is applied to ensure that the groundwater levels and concentrations are satisfied. A three-dimensional density-dependent flow and transport simulation model, SEAWAT, developed by the U.S. Geological Survey, is selected to simulate the phenomenon of seawater intrusion. The simulation model is well calibrated against field measurements and then replaced by a surrogate model of trained artificial neural networks (ANNs) to reduce the computational time. The ANNs are embedded in the management model to link the simulation and optimization models, and the global optimizer of differential evolution (DE) is applied to solve the management model. The optimal results show that the fully trained ANNs can substitute for the original simulation model and greatly reduce the computational time. Under an appropriate setting of the objective function and constraints, DE can find the optimal injection rates at predefined barriers. The concentrations at the target locations decrease by more than 50 percent within the planning horizon of 20 years. Keywords: seawater intrusion, groundwater management, numerical model, artificial neural networks, differential evolution
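The surrogate-plus-global-optimizer pattern can be sketched as follows. Here an exactly fittable quadratic stands in for the trained ANN surrogate of the SEAWAT model, and a penalty turns the concentration target into an unconstrained objective for differential evolution; all coefficients, bounds, and targets are invented:

```python
# Hedged sketch of the surrogate-assisted search. A quadratic fit stands
# in for the trained ANN surrogate of the expensive simulation, and a
# penalty enforces the salinity target inside differential evolution.
# All coefficients, bounds, and targets are invented.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(4)

def expensive_model(q):
    # stand-in for one costly density-dependent flow/transport run:
    # salinity at a target well as a function of two injection rates
    return 60.0 - 1.5 * q[0] - 1.0 * q[1] + 0.005 * (q[0] - q[1]) ** 2

# "train" the surrogate on a handful of expensive runs
samples = rng.uniform(0.0, 40.0, size=(60, 2))
conc = np.array([expensive_model(q) for q in samples])
A = np.column_stack([np.ones(60), samples, samples**2,
                     samples[:, 0] * samples[:, 1]])
coef, *_ = np.linalg.lstsq(A, conc, rcond=None)

def surrogate(q):
    feats = np.array([1.0, q[0], q[1], q[0]**2, q[1]**2, q[0] * q[1]])
    return feats @ coef

def objective(q):
    # minimize total injection, penalizing salinity above the target
    return q[0] + q[1] + 100.0 * max(0.0, surrogate(q) - 20.0)

res = differential_evolution(objective, bounds=[(0, 40), (0, 40)], seed=4)
print(res.x, res.fun)
```

Every objective evaluation inside DE's population loop hits only the cheap surrogate, which is why replacing the calibrated simulator with trained ANNs cuts the computational time so sharply.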

  5. Optimal segmentation and packaging process

    DOEpatents

    Kostelnik, K.M.; Meservey, R.H.; Landon, M.D.

    1999-08-10

    A process for improving packaging efficiency uses three-dimensional, computer-simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating and decommissioning (D&D) and remediating a nuclear facility, involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer-simulated facility model of the contaminated items is created, and the contaminated items are differentiated. The optimal location, orientation and sequence of the segmentation and packaging of the contaminated items are determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model, and the contaminated items are then actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded. 3 figs.
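The packing side of this problem is a bin-packing task. A first-fit-decreasing heuristic (a standard baseline, not the patented algorithm) illustrates the kind of container-limited packing being optimized:

```python
# Hedged illustration, not the patented algorithm: a first-fit-decreasing
# heuristic for packing segmented item volumes (liters) into containers,
# showing the kind of container-limited packing the process optimizes.
def first_fit_decreasing(volumes, capacity):
    used = []                               # filled volume per open container
    for v in sorted(volumes, reverse=True):
        for i, u in enumerate(used):
            if u + v <= capacity:
                used[i] += v                # fits in an existing container
                break
        else:
            used.append(v)                  # otherwise open a new container
    return used

containers = first_fit_decreasing([60, 50, 50, 40, 30, 20, 20], capacity=100)
print(len(containers))  # → 3 containers for 270 L of items
```

The patented process goes further by co-optimizing where to cut each item, so that the resulting segment volumes pack densely while minimizing the number of cuts and worker exposure.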

  6. Optimization of the MINERVA Exoplanet Search Strategy via Simulations

    NASA Astrophysics Data System (ADS)

    Nava, Chantell; Johnson, Samson; McCrady, Nate; Minerva

    2015-01-01

    Detection of low-mass exoplanets requires high spectroscopic precision and high observational cadence. MINERVA is a dedicated observatory capable of sub-meter-per-second radial velocity precision. As a dedicated observatory, MINERVA can observe with the every-clear-night cadence that is essential for low-mass exoplanet detection; however, this cadence complicates the determination of an optimal observing strategy. We simulate MINERVA observations to optimize our observing strategy and maximize exoplanet detections. A dispatch scheduling algorithm provides observations of MINERVA targets every day over a three-year observing campaign. An exoplanet population with a distribution informed by Kepler statistics is assigned to the targets, and the radial velocity curves induced by the planets are constructed. We apply a correlated noise model that realistically simulates stellar astrophysical noise sources. The simulated radial velocity data are fed to the MINERVA planet detection code and the expected exoplanet yield is calculated. The full simulation provides a tool for testing different strategies for scheduling observations of our targets and optimizing the MINERVA exoplanet search strategy.
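A dispatch scheduler of the general kind described can be sketched as a nightly greedy choice. The target names, priorities, and urgency weighting below are illustrative assumptions, not MINERVA's actual algorithm or target list.

```python
def dispatch(targets, n_nights, is_observable):
    """Greedy dispatch: each night observe the visible target whose
    priority-weighted time since last observation is largest."""
    last = {t: -1 for t in targets}          # night of last observation
    schedule = []
    for night in range(n_nights):
        visible = [t for t in targets if is_observable(t, night)]
        if not visible:
            schedule.append(None)            # weathered / unobservable night
            continue
        pick = max(visible, key=lambda t: (night - last[t]) * targets[t])
        last[pick] = night
        schedule.append(pick)
    return schedule

# "b" has higher priority but is only visible on even nights.
sched = dispatch({"a": 1.0, "b": 2.0}, 6, lambda t, n: t == "a" or n % 2 == 0)
```

Feeding such a schedule to a noise-plus-planet radial-velocity generator is what allows alternative strategies to be compared by expected exoplanet yield.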

  7. Analysis of the Characteristics of a Rotary Stepper Micromotor

    NASA Astrophysics Data System (ADS)

    Sone, Junji; Mizuma, Toshinari; Masunaga, Masakazu; Mochizuki, Shunsuke; Sarajic, Edin; Yamahata, Christophe; Fujita, Hiroyuki

    A 3-phase electrostatic stepper micromotor was developed. To improve its performance for practical use, we have conducted numerical simulations to optimize the design; an efficient simulation method is needed to evaluate the many design cases involved. To enable circuit simulation of this micromotor, its structure is simplified and a function computing the force excited by the electrostatic field is added to the circuit simulator. We achieved reasonably accurate simulations and also identified an optimal drive waveform for low-voltage operation.

  8. Biologically driven neural platform invoking parallel electrophoretic separation and urinary metabolite screening.

    PubMed

    Page, Tessa; Nguyen, Huong Thi Huynh; Hilts, Lindsey; Ramos, Lorena; Hanrahan, Grady

    2012-06-01

    This work presents a computational framework for parallel electrophoretic separation of complex biological macromolecules and model urinary metabolites. More specifically, the implementation of a particle swarm optimization (PSO) algorithm on a neural network platform for multiparameter optimization of multiplexed 24-capillary electrophoresis technology with UV detection is highlighted. Two experimental systems were examined: (1) separation of purified rabbit metallothioneins and (2) separation of model toluene urinary metabolites and selected organic acids. Results proved superior to neural networks employing standard back-propagation when examining training error, fitting response, and predictive ability. Simulation runs were obtained through metaheuristic examination of the global search space, with experimental responses in good agreement with predicted values. Full separation of the selected analytes was realized under the optimal model conditions. This framework provides guidance for applying metaheuristic computational tools in future studies involving parallel chemical separation and screening. Adaptable pseudo-code is provided to enable users of varied software packages and modeling frameworks to implement the PSO algorithm for their desired use.
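The global-best PSO the record supplies adaptable pseudo-code for takes roughly the following generic form; the inertia and acceleration constants are common defaults, and the toy two-parameter response surface standing in for separation quality is an assumption for illustration.

```python
import random

def pso(f, bounds, n_particles=15, iters=80, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Minimal global-best particle swarm minimizer."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * len(bounds) for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # personal bests
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for j, (lo, hi) in enumerate(bounds):
                vel[i][j] = (w * vel[i][j]
                             + c1 * rng.random() * (pbest[i][j] - pos[i][j])
                             + c2 * rng.random() * (gbest[j] - pos[i][j]))
                pos[i][j] = min(max(pos[i][j] + vel[i][j], lo), hi)
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

# Toy response surface standing in for separation quality vs. voltage and pH.
best, val = pso(lambda x: (x[0] - 25.0) ** 2 + (x[1] - 7.5) ** 2,
                [(0.0, 50.0), (2.0, 12.0)])
```

In the paper's setting, `f` would be the neural-network model of separation response rather than a closed-form surface.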

  9. Representation of DNA sequences in genetic codon context with applications in exon and intron prediction.

    PubMed

    Yin, Changchuan

    2015-04-01

    To apply digital signal processing (DSP) methods to the analysis of DNA sequences, the sequences must first be mapped into numerical sequences. Effective numerical mappings of DNA sequences therefore play a key role in the effectiveness of DSP-based methods such as exon prediction. Despite numerous mappings of symbolic DNA sequences to numerical series, existing mapping methods do not include the genetic coding features of DNA sequences. We present a novel numerical representation of DNA sequences using genetic codon context (GCC), in which the numerical values are optimized by simulated annealing to maximize the 3-periodicity signal-to-noise ratio (SNR). The optimized GCC representation is then applied to exon and intron prediction using a short-time Fourier transform (STFT) approach. The results show that the GCC method enhances the SNR values of exon sequences and thus increases the accuracy of predicting protein-coding regions in genomes compared with the commonly used 4D binary representation. In addition, this study offers a novel way to reveal specific features of DNA sequences by optimizing numerical mappings of symbolic DNA sequences.
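The annealing step can be sketched as follows. For brevity the sketch anneals one numeric value per base rather than per codon as the GCC method does, and the strongly 3-periodic toy "exon" is illustrative.

```python
import cmath
import math
import random

def snr3(seq, table):
    """3-periodicity SNR: DFT power at frequency N/3 over the mean power."""
    x = [table[b] for b in seq]
    n = len(x)
    def power(k):
        return abs(sum(v * cmath.exp(-2j * math.pi * k * m / n)
                       for m, v in enumerate(x))) ** 2
    mean_p = sum(power(k) for k in range(1, n)) / (n - 1)
    return power(n // 3) / mean_p

def anneal_mapping(seq, steps=200, seed=7):
    """Simulated annealing over the numeric value assigned to each base,
    maximizing snr3 (per-base values keep this sketch small)."""
    rng = random.Random(seed)
    table = {b: rng.uniform(-1.0, 1.0) for b in "ACGT"}
    cur = snr3(seq, table)
    best, best_table = cur, dict(table)
    for step in range(steps):
        temp = max(1.0 - step / steps, 1e-3)      # cooling schedule
        base = rng.choice("ACGT")
        old = table[base]
        table[base] = old + rng.gauss(0.0, 0.3)   # perturb one value
        new = snr3(seq, table)
        if new >= cur or rng.random() < math.exp((new - cur) / temp):
            cur = new                              # accept (maybe uphill)
            if new > best:
                best, best_table = new, dict(table)
        else:
            table[base] = old                      # reject, restore
    return best_table, best

# A strongly 3-periodic toy "exon": SNR should end well above the noise floor.
table, best_snr = anneal_mapping("ATGGCC" * 12)
```

The STFT-based predictor then slides this SNR computation along the genome, flagging windows with elevated 3-periodicity as candidate coding regions.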

  10. Geometry optimization for micro-pressure sensor considering dynamic interference

    NASA Astrophysics Data System (ADS)

    Yu, Zhongliang; Zhao, Yulong; Li, Lili; Tian, Bian; Li, Cun

    2014-09-01

    Presented is a geometry optimization for a piezoresistive absolute micro-pressure sensor. A figure of merit called the performance factor (PF) is defined as a quantitative index describing the comprehensive performance of a sensor, including sensitivity, resonant frequency, and acceleration interference. Three geometries are proposed by introducing islands and sensitive beams into a typical flat diaphragm. The stress distributions of the sensitive elements are analyzed by the finite element method. Multivariate fittings based on ANSYS simulation results are performed to establish equations for surface stress, deflection, and resonant frequency. Optimization in MATLAB is carried out to determine the dimensions of the geometries, and convex corner undercutting is evaluated. The PF of each of the three geometries with the determined dimensions is calculated and compared. Silicon bulk micromachining is utilized to fabricate prototypes of the sensors, and the outputs of the sensors under both static and dynamic conditions are tested. Experimental results demonstrate the rationality of the defined performance factor and reveal that the geometry with quad islands presents the highest PF, 210.947 Hz^(1/4). The favorable overall performance makes the sensor well suited for altimetry.

  11. Metaheuristic simulation optimisation for the stochastic multi-retailer supply chain

    NASA Astrophysics Data System (ADS)

    Omar, Marina; Mustaffa, Noorfa Haszlinna H.; Othman, Siti Norsyahida

    2013-04-01

    Supply Chain Management (SCM) is an important activity in all producing facilities and in many organizations, enabling vendors, manufacturers and suppliers to interact gainfully and plan the flow of goods and services optimally. Simulation optimization approaches are now widely used to find the best solutions for decision-making processes in SCM, which generally face complexity, large sources of uncertainty, and various decision factors. Metaheuristic methods are the most popular simulation optimization approach; however, very few studies have applied them to optimizing simulation models of supply chains. This paper therefore evaluates the performance of a metaheuristic method for stochastic supply chains in determining the flexible inventory replenishment parameters that minimize the total operating cost. The simulation optimization model is based on the Bees Algorithm (BA), which has been widely applied in engineering applications such as training neural networks for pattern recognition. BA is a recent member of the metaheuristics family that models the natural food-foraging behavior of honey bees. Honey bees use several mechanisms, such as the waggle dance, to optimally locate food sources and to search for new ones, which makes them a good candidate for developing new optimization algorithms. The model considers an outbound centralised distribution system consisting of one supplier and three identical retailers; demand is assumed to be independent and identically distributed, with unlimited supply capacity at the supplier.
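A minimal Bees Algorithm can be sketched as below. The two inventory parameters (a reorder point and an order-up-to level) and the quadratic cost standing in for the simulated supply-chain cost are illustrative assumptions, not the paper's model.

```python
import random

def bees_algorithm(f, bounds, n_scouts=12, n_best=3, n_recruits=4,
                   iters=60, seed=5):
    """Minimal Bees Algorithm minimizer: random scouts plus local search by
    recruits in shrinking neighbourhoods around the best sites."""
    rng = random.Random(seed)
    rand_pt = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    sites = sorted((rand_pt() for _ in range(n_scouts)), key=f)
    for it in range(iters):
        shrink = 1.0 - it / iters
        elites = []
        for site in sites[:n_best]:          # "flower patch" local search
            patch = [site] + [
                [min(max(x + rng.uniform(-0.1, 0.1) * (hi - lo) * shrink, lo), hi)
                 for x, (lo, hi) in zip(site, bounds)]
                for _ in range(n_recruits)]
            elites.append(min(patch, key=f))
        scouts = [rand_pt() for _ in range(n_scouts - n_best)]  # global scouts
        sites = sorted(elites + scouts, key=f)
    return sites[0], f(sites[0])

# Toy inventory cost vs. reorder point s and order-up-to level S (illustrative).
best, cost = bees_algorithm(
    lambda x: (x[0] - 20.0) ** 2 + (x[1] - 60.0) ** 2 + 5.0,
    [(0.0, 50.0), (50.0, 100.0)])
```

In the paper's setting, `f` would be the stochastic supply-chain simulation's estimated total operating cost for a given replenishment policy.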

  12. Reliability based design optimization: Formulations and methodologies

    NASA Astrophysics Data System (ADS)

    Agarwal, Harish

    Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. 
A framework for performing reliability based design optimization under epistemic uncertainty is also developed. A trust region managed sequential approximate optimization methodology is employed for this purpose. Results from numerical test studies indicate that the methodology can be used for performing design optimization under severe uncertainty.
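The nested formulation referred to above is conventionally written as a bilevel program; this is a standard statement of RBDO with first-order reliability (FORM) inner problems, not reproduced from this work.

```latex
% Nested (bilevel) RBDO: outer design optimization over d, with each
% reliability index beta_i obtained from an inner FORM problem in u.
\min_{d}\; f(d)
\qquad \text{s.t.}\qquad \beta_i(d) \;\ge\; \beta_i^{t},
\quad i = 1,\dots,m,
\qquad \text{where}\qquad
\beta_i(d) \;=\; \min_{u}\, \|u\|
\quad \text{s.t.}\quad g_i(d,u) = 0 .
% The unilevel approach replaces each inner problem by its first-order (KKT)
% optimality conditions, yielding a single-level program equivalent to the
% nested formulation.
```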

  13. Structure of the myotonic dystrophy type 2 RNA and designed small molecules that reduce toxicity.

    PubMed

    Childs-Disney, Jessica L; Yildirim, Ilyas; Park, HaJeung; Lohman, Jeremy R; Guan, Lirui; Tran, Tuan; Sarkar, Partha; Schatz, George C; Disney, Matthew D

    2014-02-21

    Myotonic dystrophy type 2 (DM2) is an incurable neuromuscular disorder caused by a r(CCUG) expansion (r(CCUG)(exp)) that folds into an extended hairpin with periodically repeating 2×2 nucleotide internal loops (5'CCUG/3'GUCC). We designed multivalent compounds that improve DM2-associated defects using information about RNA-small molecule interactions. We also report the first crystal structure of r(CCUG) repeats refined to 2.35 Å. Structural analysis of the three 5'CCUG/3'GUCC repeat internal loops (L) reveals that the CU pairs in L1 are each stabilized by one hydrogen bond and a water-mediated hydrogen bond, while CU pairs in L2 and L3 are stabilized by two hydrogen bonds. Molecular dynamics (MD) simulations reveal that the CU pairs are dynamic and stabilized by Na(+) and water molecules. MD simulations of the binding of the small molecule to r(CCUG) repeats reveal that the lowest free energy binding mode occurs via the major groove, in which one C residue is unstacked and the cross-strand nucleotides are displaced. Moreover, we modeled the binding of our dimeric compound to two 5'CCUG/3'GUCC motifs, which shows that the scaffold on which the RNA-binding modules are displayed provides an optimal distance to span two adjacent loops.

  14. Structure of the Myotonic Dystrophy Type 2 RNA and Designed Small Molecules That Reduce Toxicity

    PubMed Central

    Park, HaJeung; Lohman, Jeremy R.; Guan, Lirui; Tran, Tuan; Sarkar, Partha; Schatz, George C.; Disney, Matthew D.

    2014-01-01

    Myotonic dystrophy type 2 (DM2) is an untreatable neuromuscular disorder caused by a r(CCUG) expansion (r(CCUG)exp) that folds into an extended hairpin with periodically repeating 2×2 nucleotide internal loops (5’CCUG/3’GUCC). We designed multivalent compounds that improve DM2-associated defects using information about RNA-small molecule interactions. We also report the first crystal structure of r(CCUG)exp refined to 2.35 Å. Structural analysis of the three 5’CCUG/3’GUCC repeat internal loops (L) reveals that the CU pairs in L1 are each stabilized by one hydrogen bond and a water-mediated hydrogen bond while CU pairs in L2 and L3 are stabilized by two hydrogen bonds. Molecular dynamics (MD) simulations reveal that the CU pairs are dynamic and stabilized by Na+ and water molecules. MD simulations of the binding of the small molecule to r(CCUG) repeats reveal that the lowest free energy binding mode occurs via the major groove, in which one C residue is unstacked and the cross-strand nucleotides are displaced. Moreover, we modeled the binding of our dimeric compound to two 5’CCUG/3’GUCC motifs, which shows that the scaffold on which the RNA-binding modules are displayed provides an optimal distance to span two adjacent loops. PMID:24341895

  15. Control Parameters Optimization Based on Co-Simulation of a Mechatronic System for an UA-Based Two-Axis Inertially Stabilized Platform.

    PubMed

    Zhou, Xiangyang; Zhao, Beilei; Gong, Guohao

    2015-08-14

    This paper presents a method based on co-simulation of a mechatronic system to optimize the control parameters of a two-axis inertially stabilized platform system (ISP) applied in an unmanned airship (UA), by which high control performance and reliability of the ISP system are achieved. First, a three-dimensional structural model of the ISP is built by using the three-dimensional parametric CAD software SOLIDWORKS(®); then, to analyze the system's kinematic and dynamic characteristics under operating conditions, dynamics modeling is conducted by using the multi-body dynamics software ADAMS™, thus the main dynamic parameters such as displacement, velocity, acceleration and reaction curve are obtained, respectively, through simulation analysis. Then, those dynamic parameters were input into the established MATLAB(®) SIMULINK(®) controller to simulate and test the performance of the control system. By these means, the ISP control parameters are optimized. To verify the methods, experiments were carried out by applying the optimized parameters to the control system of a two-axis ISP. The results show that the co-simulation by using virtual prototyping (VP) is effective to obtain optimized ISP control parameters, eventually leading to high ISP control performance.

  16. Control Parameters Optimization Based on Co-Simulation of a Mechatronic System for an UA-Based Two-Axis Inertially Stabilized Platform

    PubMed Central

    Zhou, Xiangyang; Zhao, Beilei; Gong, Guohao

    2015-01-01

    This paper presents a method based on co-simulation of a mechatronic system to optimize the control parameters of a two-axis inertially stabilized platform system (ISP) applied in an unmanned airship (UA), by which high control performance and reliability of the ISP system are achieved. First, a three-dimensional structural model of the ISP is built by using the three-dimensional parametric CAD software SOLIDWORKS®; then, to analyze the system’s kinematic and dynamic characteristics under operating conditions, dynamics modeling is conducted by using the multi-body dynamics software ADAMS™, thus the main dynamic parameters such as displacement, velocity, acceleration and reaction curve are obtained, respectively, through simulation analysis. Then, those dynamic parameters were input into the established MATLAB® SIMULINK® controller to simulate and test the performance of the control system. By these means, the ISP control parameters are optimized. To verify the methods, experiments were carried out by applying the optimized parameters to the control system of a two-axis ISP. The results show that the co-simulation by using virtual prototyping (VP) is effective to obtain optimized ISP control parameters, eventually leading to high ISP control performance. PMID:26287210

  17. Energy simulation and optimization for a small commercial building through Modelica

    NASA Astrophysics Data System (ADS)

    Rivas, Bryan

    Small commercial buildings make up the majority of buildings in the United States. Energy consumed by these buildings is expected to increase drastically in the next few decades, with a large percentage of the energy consumed attributed to cooling systems. This work presents the simulation and optimization of a thermostat schedule to minimize energy consumption in a small commercial building test bed during the cooling season. The simulation is carried out in the multi-engineering-domain Dymola environment, based on the open-source Modelica modeling language, and is optimized with the Java-based optimization program GenOpt, which is dynamically coupled to Dymola through various interface files. The simulation uses both physically based modeling of the building, drawing on heat transfer principles, and regression analysis for energy consumption. Very few studies have coupled GenOpt to a building simulation program, and even fewer have used Dymola for building simulation as extensively as the work presented here, which demonstrates that Dymola is a viable alternative to other building simulation tools such as EnergyPlus and MATLAB. The developed model is used to simulate the energy consumption of the test bed, a commissioned real-world small commercial building, while maintaining indoor thermal comfort. Potential applications include smart or intelligent building systems, predictive simulation of small commercial buildings, and building diagnostics.

  18. Facilitators on networks reveal optimal interplay between information exchange and reciprocity.

    PubMed

    Szolnoki, Attila; Perc, Matjaž; Mobilia, Mauro

    2014-04-01

    Reciprocity is firmly established as an important mechanism that promotes cooperation. An efficient information exchange is likewise important, especially on structured populations, where interactions between players are limited. Motivated by these two facts, we explore the role of facilitators in social dilemmas on networks. Facilitators are here mirrors to their neighbors-they cooperate with cooperators and defect with defectors-but they do not participate in the exchange of strategies. As such, in addition to introducing direct reciprocity, they also obstruct information exchange. In well-mixed populations, facilitators favor the replacement and invasion of defection by cooperation as long as their number exceeds a critical value. In structured populations, on the other hand, there exists a delicate balance between the benefits of reciprocity and the deterioration of information exchange. Extensive Monte Carlo simulations of social dilemmas on various interaction networks reveal that there exists an optimal interplay between reciprocity and information exchange, which sets in only when a small number of facilitators occupy the main hubs of the scale-free network. The drawbacks of missing cooperative hubs are more than compensated for by reciprocity and, at the same time, the compromised information exchange is routed via the auxiliary hubs with only marginal losses in effectivity. These results indicate that it is not always optimal for the main hubs to become leaders of the masses, but rather to exploit their highly connected state to promote tit-for-tat-like behavior.
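The well-mixed prediction of a critical facilitator fraction can be illustrated with a deterministic replicator sketch of the donation game (a stand-in for the paper's Monte Carlo simulations; the benefit b and cost c values are illustrative). In this setup the cooperators' payoff advantage works out to φb − c, so cooperation takes over exactly when the facilitator fraction φ exceeds c/b.

```python
def facilitator_replicator(phi, b=1.0, c=0.4, steps=2000, dt=0.05):
    """Donation game in a well-mixed population with a fraction phi of
    facilitators, who cooperate with cooperators and defect with defectors
    but never imitate. Returns the final cooperator share among players."""
    x_c = x_d = (1.0 - phi) / 2.0            # cooperator / defector fractions
    for _ in range(steps):
        p_c = x_c * (b - c) - x_d * c + phi * (b - c)  # cooperator payoff
        p_d = x_c * b                                   # defector payoff
        avg = (x_c * p_c + x_d * p_d) / (x_c + x_d)
        x_c += dt * x_c * (p_c - avg)        # replicator update (Euler step)
        x_d += dt * x_d * (p_d - avg)
    return x_c / (x_c + x_d)

# Critical fraction is c/b = 0.4: above it cooperation fixates, below it dies.
high = facilitator_replicator(0.5)
low = facilitator_replicator(0.2)
```

The structured-population results in the paper then show how network placement of the facilitators (main hubs versus auxiliary hubs) shifts this simple well-mixed picture.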

  19. Facilitators on networks reveal optimal interplay between information exchange and reciprocity

    NASA Astrophysics Data System (ADS)

    Szolnoki, Attila; Perc, Matjaž; Mobilia, Mauro

    2014-04-01

    Reciprocity is firmly established as an important mechanism that promotes cooperation. An efficient information exchange is likewise important, especially on structured populations, where interactions between players are limited. Motivated by these two facts, we explore the role of facilitators in social dilemmas on networks. Facilitators are here mirrors to their neighbors—they cooperate with cooperators and defect with defectors—but they do not participate in the exchange of strategies. As such, in addition to introducing direct reciprocity, they also obstruct information exchange. In well-mixed populations, facilitators favor the replacement and invasion of defection by cooperation as long as their number exceeds a critical value. In structured populations, on the other hand, there exists a delicate balance between the benefits of reciprocity and the deterioration of information exchange. Extensive Monte Carlo simulations of social dilemmas on various interaction networks reveal that there exists an optimal interplay between reciprocity and information exchange, which sets in only when a small number of facilitators occupy the main hubs of the scale-free network. The drawbacks of missing cooperative hubs are more than compensated for by reciprocity and, at the same time, the compromised information exchange is routed via the auxiliary hubs with only marginal losses in effectivity. These results indicate that it is not always optimal for the main hubs to become leaders of the masses, but rather to exploit their highly connected state to promote tit-for-tat-like behavior.

  20. Applications of New Surrogate Global Optimization Algorithms including Efficient Synchronous and Asynchronous Parallelism for Calibration of Expensive Nonlinear Geophysical Simulation Models.

    NASA Astrophysics Data System (ADS)

    Shoemaker, C. A.; Pang, M.; Akhtar, T.; Bindel, D.

    2016-12-01

    New parallel surrogate global optimization algorithms are developed and applied to objective functions that are expensive simulations (possibly with multiple local minima). The algorithms can be applied to most geophysical simulations, including those governed by nonlinear partial differential equations, and the optimization does not require that the simulations themselves be parallelized. Asynchronous (and synchronous) parallel execution is available in the optimization toolbox pySOT, a surrogate global optimization toolbox that allows the user to pick the type of surrogate (or ensembles), the search procedure on the surrogate, and the type of parallelism (synchronous or asynchronous). pySOT also allows the user to develop new algorithms by modifying parts of the code; the parallel algorithms here are modified from their serial versions to eliminate fine-grained parallelism. In the applications presented, one evaluation of the objective function takes up to 30 minutes, and serial optimization can take over 200 hours. Results from the Yellowstone (NSF) and NCSS (Singapore) supercomputers are given for groundwater contaminant hydrology simulations, with applications to model parameter estimation and decontamination management; all results are compared with alternatives. The first results are for optimizing pumping at many wells to reduce the cost of groundwater decontamination at a superfund site. The optimization runs with up to 128 processors; superlinear speedup is obtained for up to 16 processors, and efficiency with 64 processors is over 80%. Each evaluation of the objective function requires the solution of nonlinear partial differential equations describing the impact of spatially distributed pumping and model parameters on model predictions for the spatial and temporal distribution of groundwater contaminants. The second application uses asynchronous parallel global optimization for groundwater quality model calibration.
The time for a single objective function evaluation varies unpredictably, so efficiency is improved with asynchronous parallel calculations that improve load balancing. The third application (done at NCSS) incorporates new global surrogate multi-objective parallel search algorithms into pySOT and applies them to a large watershed calibration problem.
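The surrogate strategy behind toolboxes like pySOT can be caricatured in plain Python. Here an inverse-distance interpolant stands in for pySOT's RBF surrogates and a shrinking candidate search stands in for its search procedures, so this is a sketch of the idea rather than the toolbox's API; the cheap bowl function is an illustrative substitute for an expensive simulation.

```python
import math
import random

def surrogate_optimize(f, bounds, n_init=8, n_iter=30, n_cand=40, seed=11):
    """Surrogate-assisted global search: fit a cheap interpolant to all
    evaluated points, then evaluate the candidate that best trades off
    predicted value against distance from previous samples."""
    rng = random.Random(seed)
    pts = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_init)]
    vals = [f(p) for p in pts]
    for it in range(n_iter):
        w = 0.8 * (1.0 - it / n_iter)        # exploration weight, decays to 0
        shrink = 1.0 - it / n_iter
        incumbent = pts[min(range(len(vals)), key=lambda i: vals[i])]

        def predict(c):                      # inverse-distance interpolant
            num = den = 0.0
            for p, v in zip(pts, vals):
                wt = 1.0 / (sum((a - b) ** 2 for a, b in zip(c, p)) + 1e-12)
                num, den = num + wt * v, den + wt
            return num / den

        cands = [[min(max(rng.gauss(x, 0.2 * (hi - lo) * shrink + 0.01), lo), hi)
                  for x, (lo, hi) in zip(incumbent, bounds)]
                 for _ in range(n_cand)]
        nxt = min(cands, key=lambda c: (1 - w) * predict(c)
                                        - w * min(math.dist(c, p) for p in pts))
        pts.append(nxt)                      # one expensive evaluation per step
        vals.append(f(nxt))
    i = min(range(len(vals)), key=lambda i: vals[i])
    return pts[i], vals[i]

# A cheap bowl stands in for the expensive simulation-based objective.
best, val = surrogate_optimize(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2,
                               [(-5.0, 5.0), (-5.0, 5.0)])
```

Parallel variants dispatch several candidates at once; in the asynchronous case each worker requests a new candidate as soon as it finishes, which is what improves load balancing when evaluation times vary.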

  1. Improved Persistent Scatterer analysis using Amplitude Dispersion Index optimization of dual polarimetry data

    NASA Astrophysics Data System (ADS)

    Esmaeili, Mostafa; Motagh, Mahdi

    2016-07-01

    Time-series analysis of Synthetic Aperture Radar (SAR) data using the two techniques of Small BAseline Subset (SBAS) and Persistent Scatterer Interferometric SAR (PSInSAR) extends the capability of the conventional interferometry technique for deformation monitoring and mitigates many of its limitations. Using dual/quad-polarized data provides an additional source of information that can further improve InSAR time-series analysis. In this paper we use dual-polarized data and combine Amplitude Dispersion Index (ADI) optimization of pixels with a phase-stability criterion for PSInSAR analysis. ADI optimization is performed using a simulated annealing algorithm to increase the number of Persistent Scatterer Candidates (PSCs). The phase stability of the PSCs is then measured using their temporal coherence to select the final set of pixels for deformation analysis. We evaluate the method on a dataset of 17 dual-polarization (HH/VV) TerraSAR-X scenes acquired from July 2013 to January 2014 over a subsidence area in Iran, and compare its effectiveness for both agricultural and urban regions. The results reveal that using the optimum scattering mechanism decreases ADI values in urban and non-urban regions. Compared to single-pol data, the optimized polarization initially increases the number of PSCs by about three times and improves the final PS density by about 50%, in particular in regions with high rates of deformation, which suffer from loss of phase stability over time. Classification of the PS pixels based on their optimum scattering mechanism revealed that the dominant scattering mechanism of PS pixels in the urban area is double-bounce, while in the non-urban regions (ground surfaces and farmlands) it is mostly single-bounce.
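The ADI optimization step can be sketched for a single pixel. Projecting onto one real mixing angle is a simplification of the full complex scattering-mechanism search that the paper performs with simulated annealing, and the toy amplitude series are illustrative.

```python
import math

def adi(amps):
    """Amplitude Dispersion Index: std/mean of a pixel's amplitude series
    (low ADI marks a persistent-scatterer candidate)."""
    mu = sum(amps) / len(amps)
    var = sum((a - mu) ** 2 for a in amps) / len(amps)
    return math.sqrt(var) / mu

def optimize_adi(hh, vv, n_alpha=90):
    """Grid search over the projection a = cos(alpha)*HH + sin(alpha)*VV for
    the mixing angle that minimizes ADI."""
    best = (float("inf"), 0.0)
    for k in range(n_alpha + 1):
        alpha = 0.5 * math.pi * k / n_alpha
        series = [math.cos(alpha) * h + math.sin(alpha) * v
                  for h, v in zip(hh, vv)]
        best = min(best, (adi(series), alpha))
    return best  # (minimum ADI, optimal alpha)

# Toy pixel: noisy HH channel, stable VV channel, so the optimum should lean
# toward the VV end (alpha near pi/2).
hh = [1.0, 1.6, 0.7, 1.4, 0.9, 1.5]
vv = [2.0, 2.02, 1.98, 2.01, 1.99, 2.0]
best_adi, best_alpha = optimize_adi(hh, vv)
```

Run per pixel over the stack, this lowers ADI wherever one polarimetric combination is more temporally stable than either channel alone, which is how the candidate count grows before the temporal-coherence screening.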

  2. Social Emotional Optimization Algorithm for Nonlinear Constrained Optimization Problems

    NASA Astrophysics Data System (ADS)

    Xu, Yuechun; Cui, Zhihua; Zeng, Jianchao

    Nonlinear programming is an important branch of operations research and has been successfully applied to various real-life problems. In this paper, a new approach called the Social Emotional Optimization Algorithm (SEOA) is used to solve such problems; it is a new swarm intelligence technique that simulates human behavior guided by emotion. Simulation results show that the proposed algorithm is effective and efficient for nonlinear constrained programming problems.

  3. Steady state and transient simulation of anion exchange membrane fuel cells

    NASA Astrophysics Data System (ADS)

    Dekel, Dario R.; Rasin, Igal G.; Page, Miles; Brandon, Simon

    2018-01-01

    We present a new model for anion exchange membrane fuel cells. Validation against experimental polarization curve data is obtained for current densities ranging from zero to above 2 A cm-2. Experimental transient data is also successfully reproduced. The model is very flexible and can be used to explore the system's sensitivity to a wide range of material properties, cell design specifications, and operating parameters. We demonstrate the impact of gas inlet relative humidity (RH), operating current density, ionomer loading and ionomer ion exchange capacity (IEC) values on cell performance. In agreement with the literature, high air RH levels are shown to improve cell performance. At high current densities (>1 A cm-2) this effect is observed to be especially significant. Simulated hydration number distributions across the cell reveal the related critical dependence of cathode hydration on air RH and current density values. When exploring catalyst layer design, optimal intermediate ionomer loading values are demonstrated. The benefits of asymmetric (cathode versus anode) electrode design are revealed, showing enhanced performance using higher cathode IEC levels. Finally, electrochemical reaction profiles across the electrodes uncover inhomogeneous catalyst utilization. Specifically, at high current densities the cathodic reaction is confined to a narrow region near the membrane.

  4. Clinical simulation training improves the clinical performance of Chinese medical students

    PubMed Central

    Zhang, Ming-ya; Cheng, Xin; Xu, An-ding; Luo, Liang-ping; Yang, Xuesong

    2015-01-01

    Background Modern medical education promotes medical students’ clinical operating capacity rather than the mastery of theoretical knowledge. To accomplish this objective, clinical skill training using various simulations was introduced into medical education to cultivate creativity and develop the practical ability of students. However, quantitative analysis of the efficiency of clinical skill training with simulations is lacking. Methods In the present study, we compared the mean scores of medical students (Jinan University) who graduated in 2013 and 2014 on 16 stations between traditional training (control) and simulative training groups. In addition, in a clinical skill competition, the objective structured clinical examination (OSCE) scores of participating medical students trained using traditional and simulative training were compared. The data were statistically analyzed and qualitatively described. Results The results revealed that simulative training could significantly enhance the graduate score of medical students compared with the control. The OSCE scores of participating medical students in the clinical skill competition, trained using simulations, were dramatically higher than those of students trained through traditional methods, and we also observed that the OSCE marks were significantly increased for the same participant after simulative training for the clinical skill competition. Conclusions Taken together, these data indicate that clinical skill training with a variety of simulations could substantially promote the clinical performance of medical students and optimize the resources used for medical education, although a precise analysis of each specialization is needed in the future. PMID:26478142

  5. An improved simulation of the 2015 El Niño event by optimally correcting the initial conditions and model parameters in an intermediate coupled model

    NASA Astrophysics Data System (ADS)

    Zhang, Rong-Hua; Tao, Ling-Jiang; Gao, Chuan

    2017-09-01

    Large uncertainties exist in real-time predictions of the 2015 El Niño event, which have systematic intensity biases that are strongly model-dependent. It is critically important to characterize those model biases so they can be reduced appropriately. In this study, the conditional nonlinear optimal perturbation (CNOP)-based approach was applied to an intermediate coupled model (ICM) equipped with a four-dimensional variational data assimilation technique. The CNOP-based approach was used to quantify prediction errors that can be attributed to initial conditions (ICs) and model parameters (MPs). Two key MPs were considered in the ICM: one represents the intensity of the thermocline effect, and the other represents the relative coupling intensity between the ocean and atmosphere. Two experiments were performed to illustrate the effects of error corrections, one with a standard simulation and another with an optimized simulation in which errors in the ICs and MPs derived from the CNOP-based approach were optimally corrected. The results indicate that simulations of the 2015 El Niño event can be effectively improved by using CNOP-derived error correcting. In particular, the El Niño intensity in late 2015 was adequately captured when simulations were started from early 2015. Quantitatively, the Niño3.4 SST index simulated in Dec. 2015 increased to 2.8 °C in the optimized simulation, compared with only 1.5 °C in the standard simulation. The feasibility and effectiveness of using the CNOP-based technique to improve ENSO simulations are demonstrated in the context of the 2015 El Niño event. The limitations and further applications are also discussed.
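The CNOP used above is conventionally defined as a constrained maximization; this is a standard statement of the approach (not reproduced from this paper), with σ the constraint radius and M_T the nonlinear model propagator to lead time T.

```latex
% CNOP: the initial perturbation (and, analogously, the parameter
% perturbation) that maximizes prediction error growth under a norm bound:
\delta x_0^{*} \;=\; \operatorname*{arg\,max}_{\|\delta x_0\| \,\le\, \sigma}
\; \bigl\| \, M_T(x_0 + \delta x_0) \;-\; M_T(x_0) \, \bigr\|
% Optimally correcting the ICs and MPs amounts to removing the error
% components identified this way from the control simulation.
```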

  6. Swing-leg trajectory of running guinea fowl suggests task-level priority of force regulation rather than disturbance rejection.

    PubMed

    Blum, Yvonne; Vejdani, Hamid R; Birn-Jeffery, Aleksandra V; Hubicki, Christian M; Hurst, Jonathan W; Daley, Monica A

    2014-01-01

    To achieve robust and stable legged locomotion in uneven terrain, animals must effectively coordinate limb swing and stance phases, which involve distinct yet coupled dynamics. Recent theoretical studies have highlighted the critical influence of swing-leg trajectory on stability, disturbance rejection, leg loading and economy of walking and running. Yet, simulations suggest that not all these factors can be simultaneously optimized. A potential trade-off arises between the optimal swing-leg trajectory for disturbance rejection (to maintain steady gait) versus regulation of leg loading (for injury avoidance and economy). Here we investigate how running guinea fowl manage this potential trade-off by comparing experimental data to predictions of hypothesis-based simulations of running over a terrain drop perturbation. We use a simple model to predict swing-leg trajectory and running dynamics. In simulations, we generate optimized swing-leg trajectories based upon specific hypotheses for task-level control priorities. We optimized swing trajectories to achieve i) constant peak force, ii) constant axial impulse, or iii) perfect disturbance rejection (steady gait) in the stance following a terrain drop. We compare simulation predictions to experimental data on guinea fowl running over a visible step down. Swing and stance dynamics of running guinea fowl closely match simulations optimized to regulate leg loading (priorities i and ii), and do not match the simulations optimized for disturbance rejection (priority iii). The simulations reinforce previous findings that swing-leg trajectory targeting disturbance rejection demands large increases in stance leg force following a terrain drop. Guinea fowl negotiate a downward step using unsteady dynamics with forward acceleration, and recover to steady gait in subsequent steps. Our results suggest that guinea fowl use swing-leg trajectory consistent with priority for load regulation, and not for steadiness of gait. 
Swing-leg trajectory optimized for load regulation may facilitate economy and injury avoidance in uneven terrain.

  7. Mono and multi-objective optimization techniques applied to a large range of industrial test cases using Metamodel assisted Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Fourment, Lionel; Ducloux, Richard; Marie, Stéphane; Ejday, Mohsen; Monnereau, Dominique; Massé, Thomas; Montmitonnet, Pierre

    2010-06-01

    Numerical simulation of material processing allows a trial-and-error strategy for improving virtual processes without incurring material costs or interrupting production, and therefore saves considerable money; however, it requires user time to analyze the results, adjust the operating conditions and restart the simulation. Automatic optimization is the perfect complement to simulation. An Evolutionary Algorithm coupled with metamodelling makes it possible to obtain industrially relevant results over a very large range of applications within a few tens of simulations, without any specific knowledge of automatic optimization techniques. Ten industrial partners were selected to cover the different areas of the mechanical forging industry and to provide different examples of forming simulation tools. The large computational time is handled by a metamodel approach, which interpolates the objective function over the entire parameter space while requiring exact function values at only a reduced number of "master points". Two algorithms are used: an evolution strategy combined with a Kriging metamodel, and a genetic algorithm combined with a Meshless Finite Difference Method. The latter approach is extended to multi-objective optimization, in which the set of solutions corresponding to the best possible compromises between the different objectives is computed in the same way. The population-based approach exploits the parallel capabilities of the computer with high efficiency. An optimization module, fully embedded within the Forge2009 IHM, makes it possible to cover all the defined examples, and the use of new multi-core hardware to compute several simulations at the same time reduces the needed time dramatically. 
The presented examples, which include billet shape optimization of a common rail, the cogging of a bar and a wire drawing problem, demonstrate the method's versatility.
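The metamodel idea described above (exact simulations at only a few "master points", with the optimizer searching a cheap interpolated surface) can be sketched as follows. The objective function, the Gaussian RBF interpolant standing in for Kriging, and all parameter values are illustrative assumptions, not the Forge2009 implementation:

```python
import numpy as np

# Toy "expensive simulation": the true objective is evaluated only at a
# small set of master points; the optimizer searches a cheap interpolant.
def expensive_objective(x):
    return (x - 0.3) ** 2 + 0.1 * np.sin(15 * x)

def fit_rbf(xs, ys, eps=5.0):
    # Gaussian RBF interpolant through the master points (a simple
    # stand-in for the Kriging metamodel).
    K = np.exp(-eps * (xs[:, None] - xs[None, :]) ** 2)
    w = np.linalg.solve(K + 1e-8 * np.eye(len(xs)), ys)
    return lambda x: np.exp(-eps * (x - xs) ** 2) @ w

rng = np.random.default_rng(0)
xs = rng.uniform(0, 1, 6)                # initial master points
ys = expensive_objective(xs)

for _ in range(10):                      # a few tens of true evaluations total
    surrogate = fit_rbf(xs, ys)
    candidates = rng.uniform(0, 1, 200)  # cheap: scored on the metamodel only
    best = candidates[np.argmin([surrogate(c) for c in candidates])]
    xs = np.append(xs, best)             # one true simulation per iteration
    ys = np.append(ys, expensive_objective(best))

print(xs[np.argmin(ys)])
```

Only one true simulation runs per iteration; the hundreds of candidate evaluations hit the metamodel, which is what keeps the budget to "a few tens of simulations".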

  8. Swing-Leg Trajectory of Running Guinea Fowl Suggests Task-Level Priority of Force Regulation Rather than Disturbance Rejection

    PubMed Central

    Blum, Yvonne; Vejdani, Hamid R.; Birn-Jeffery, Aleksandra V.; Hubicki, Christian M.; Hurst, Jonathan W.; Daley, Monica A.

    2014-01-01

    To achieve robust and stable legged locomotion in uneven terrain, animals must effectively coordinate limb swing and stance phases, which involve distinct yet coupled dynamics. Recent theoretical studies have highlighted the critical influence of swing-leg trajectory on stability, disturbance rejection, leg loading and economy of walking and running. Yet, simulations suggest that not all these factors can be simultaneously optimized. A potential trade-off arises between the optimal swing-leg trajectory for disturbance rejection (to maintain steady gait) versus regulation of leg loading (for injury avoidance and economy). Here we investigate how running guinea fowl manage this potential trade-off by comparing experimental data to predictions of hypothesis-based simulations of running over a terrain drop perturbation. We use a simple model to predict swing-leg trajectory and running dynamics. In simulations, we generate optimized swing-leg trajectories based upon specific hypotheses for task-level control priorities. We optimized swing trajectories to achieve i) constant peak force, ii) constant axial impulse, or iii) perfect disturbance rejection (steady gait) in the stance following a terrain drop. We compare simulation predictions to experimental data on guinea fowl running over a visible step down. Swing and stance dynamics of running guinea fowl closely match simulations optimized to regulate leg loading (priorities i and ii), and do not match the simulations optimized for disturbance rejection (priority iii). The simulations reinforce previous findings that swing-leg trajectory targeting disturbance rejection demands large increases in stance leg force following a terrain drop. Guinea fowl negotiate a downward step using unsteady dynamics with forward acceleration, and recover to steady gait in subsequent steps. Our results suggest that guinea fowl use swing-leg trajectory consistent with priority for load regulation, and not for steadiness of gait. 
Swing-leg trajectory optimized for load regulation may facilitate economy and injury avoidance in uneven terrain. PMID:24979750

  9. Combining Simulation Tools for End-to-End Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Whitley, Ryan; Gutkowski, Jeffrey; Craig, Scott; Dawn, Tim; Williams, Jacobs; Stein, William B.; Litton, Daniel; Lugo, Rafael; Qu, Min

    2015-01-01

    Trajectory simulations with advanced optimization algorithms are invaluable tools in the process of designing spacecraft. Due to the need for complex models, simulations are often highly tailored to the needs of the particular program or mission; NASA's Orion and SLS programs are no exception. While independent analyses are valuable for assessing individual spacecraft capabilities, a complete end-to-end trajectory from launch to splashdown maximizes potential performance and ensures a continuous solution. In order to obtain this end-to-end capability, Orion's in-space tool (Copernicus) was made to interface directly with the SLS's ascent tool (POST2), and a new tool was created to optimize the full problem by operating both simulations simultaneously.

  10. To Eat or Not to Eat: An Easy Simulation of Optimal Diet Selection in the Classroom

    ERIC Educational Resources Information Center

    Ray, Darrell L.

    2010-01-01

    Optimal diet selection, a component of optimal foraging theory, suggests that animals should select a diet that either maximizes energy or nutrient consumption per unit time or minimizes the foraging time needed to attain required energy or nutrients. In this exercise, students simulate the behavior of foragers that either show no foraging…
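The zero-one prey-choice rule that such a classroom simulation exercises follows directly from Holling's disc equation: rank prey by profitability e/h and add types only while the long-term intake rate keeps rising. The prey energies, handling times and encounter rates below are illustrative, not values from the exercise:

```python
# Classic prey-choice model: rank prey by profitability e/h and add types
# while the long-term energy intake rate keeps increasing.
prey = [  # (name, energy e, handling time h, encounter rate lam)
    ("large", 10.0, 2.0, 0.2),
    ("medium", 6.0, 1.5, 0.5),
    ("small", 2.0, 1.0, 1.0),
]
prey.sort(key=lambda p: p[1] / p[2], reverse=True)  # by profitability e/h

def intake_rate(diet):
    lam_e = sum(l * e for _, e, _, l in diet)
    lam_h = sum(l * h for _, _, h, l in diet)
    return lam_e / (1.0 + lam_h)  # Holling's disc equation

best, best_rate = [], 0.0
for p in prey:
    rate = intake_rate(best + [p])
    if rate > best_rate:
        best, best_rate = best + [p], rate
    else:
        break  # zero-one rule: less profitable prey are never included

print([name for name, *_ in best], round(best_rate, 3))
```

With these numbers the forager includes the large and medium prey but drops the small one, because adding it would lower the overall intake rate.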

  11. Mixed integer simulation optimization for optimal hydraulic fracturing and production of shale gas fields

    NASA Astrophysics Data System (ADS)

    Li, J. C.; Gong, B.; Wang, H. G.

    2016-08-01

    Optimal development of shale gas fields involves designing the most productive fracturing network for hydraulic stimulation processes and operating wells appropriately throughout the production time. A hydraulic fracturing network design (determining well placement, number of fracturing stages, and fracture lengths) is defined by specifying a set of integer ordered blocks in which to drill wells and create fractures in a discrete shale gas reservoir model. The well control variables, such as bottom hole pressures or production rates, are real-valued. Shale gas development problems can therefore be mathematically formulated as mixed-integer optimization models. A shale gas reservoir simulator is used to evaluate the production performance of a given hydraulic fracturing and well control plan. Finding the optimal fracturing design and well operation is challenging because the problem is a mixed-integer optimization problem and entails computationally expensive reservoir simulation. A dynamic simplex interpolation-based alternate subspace (DSIAS) search method is applied to the mixed-integer optimization problems associated with shale gas development projects. The optimization performance is demonstrated with the example case of the development of the Barnett Shale field. The optimization results of DSIAS are compared with those of a pattern search algorithm.
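The pattern-search baseline mentioned above can be illustrated on a toy mixed-integer problem with one integer variable and one continuous variable. The NPV objective, the variable names (`stages`, `bhp`) and the step schedule are hypothetical stand-ins for the expensive reservoir-simulation objective:

```python
# Minimal mixed-integer pattern search: integer variable (fracturing stages)
# polled in unit steps, continuous variable (bottom-hole pressure) polled on
# a mesh that is refined whenever no poll direction improves the objective.
def npv(stages, bhp):
    # Hypothetical stand-in: diminishing returns on stages minus stage cost,
    # plus an optimal operating pressure at bhp = 50.
    return 10 * stages ** 0.5 - 1.5 * stages - 0.01 * (bhp - 50.0) ** 2

def pattern_search(stages, bhp, step=8.0, min_step=0.25):
    best = npv(stages, bhp)
    while step >= min_step:
        moved = False
        for ds, dp in [(1, 0.0), (-1, 0.0), (0, step), (0, -step)]:
            s, p = max(1, stages + ds), bhp + dp
            if npv(s, p) > best:
                stages, bhp, best, moved = s, p, npv(s, p), True
                break
        if not moved:
            step /= 2  # refine the continuous mesh when polling fails
    return stages, bhp, best

print(pattern_search(1, 20.0))
```

The integer variable is polled in whole steps throughout, while only the continuous mesh shrinks; this is the basic device that lets direct-search methods handle mixed-integer simulation optimization without gradients.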

  12. Structural Performance’s Optimally Analysing and Implementing Based on ANSYS Technology

    NASA Astrophysics Data System (ADS)

    Han, Na; Wang, Xuquan; Yue, Haifang; Sun, Jiandong; Wu, Yongchun

    2017-06-01

    Computer-aided Engineering (CAE) is a hotspot both in the academic field and in modern engineering practice. The Analysis System (ANSYS) simulation software has become an outstanding member of the CAE family for its excellent performance; it is committed to innovation in engineering simulation, helping users shorten the design process and improve product innovation and performance. Aiming to explore a structural performance optimization model for engineering enterprises, this paper introduces CAE and its development, analyzes the necessity of structural optimization analysis as well as the framework of such analysis based on ANSYS technology, and uses ANSYS to carry out an optimization analysis of the structural performance of a reinforced concrete slab, displaying the displacement vector chart and the stress intensity chart. Finally, this paper compares the ANSYS simulation results with the measured results, showing that ANSYS is an indispensable engineering calculation tool.

  13. Simulated annealing in orbital flight planning

    NASA Technical Reports Server (NTRS)

    Soller, Jeffrey

    1990-01-01

    Simulated annealing is used to solve a minimum-fuel trajectory problem in the space station environment. The environment is unique because the space station will define the first true multivehicle environment in space. The optimization yields surfaces which are potentially complex, with multiple local minima. Because of the likelihood of these local minima, descent techniques are unable to offer robust solutions, and other deterministic optimization techniques were explored without success. The simulated annealing optimization is capable of identifying a minimum-fuel, two-burn trajectory subject to four constraints. Furthermore, the computational effort involved in the optimization is such that missions could be planned on board the space station. Potential applications include the on-site planning of rendezvous with a target craft or the emergency rescue of an astronaut. Future research will include multiwaypoint maneuvers, using a knowledge base to guide the optimization.
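The acceptance rule that lets simulated annealing escape the local minima described above can be sketched on a one-dimensional multimodal stand-in for the fuel-cost surface; the surface and cooling schedule are illustrative, not the orbital problem:

```python
import math, random

def cost(x):
    # Toy multimodal "fuel" surface; descent from a random start would
    # usually stall in a non-global basin.
    return x * x + 10 * math.sin(5 * x)

random.seed(1)
x = best = random.uniform(-4, 4)
T = 5.0
while T > 1e-3:                       # geometric cooling schedule
    x_new = x + random.gauss(0, 0.5)  # random neighboring design point
    d = cost(x_new) - cost(x)
    # Accept downhill moves always, uphill moves with probability e^(-d/T).
    if d < 0 or random.random() < math.exp(-d / T):
        x = x_new
    if cost(x) < cost(best):
        best = x                      # track the best design visited
    T *= 0.995

print(round(best, 2), round(cost(best), 2))
```

Early on, the high temperature makes uphill moves likely, so the walk crosses barriers between basins; as T cools, the search settles into a deep minimum.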

  14. Design of simulated moving bed for separation of fumaric acid with a little fronting phenomenon.

    PubMed

    Choi, Jae-Hwan; Kang, Mun-Seok; Lee, Chung-Gi; Wang, Nien-Hwa Linda; Mun, Sungyong

    2017-03-31

    The production of fumaric acid through a biotechnological pathway has grown in importance because of its potential value in related industries. This has sparked an interest in developing an economically efficient process for separation of fumaric acid (product of interest) from acetic acid (by-product). This study aimed to develop a simulated moving bed (SMB) chromatographic process for such separation in a systematic way. As a first step, commercially available adsorbents were screened for their applicability to the considered separation, which revealed that an Amberchrom-CG71C resin had sufficient potential to serve as the adsorbent of the targeted SMB. Using this adsorbent, the intrinsic parameters of fumaric and acetic acids were determined and then applied to optimizing the SMB process under consideration. The optimized SMB process was tested experimentally, from which the yield of fumaric-acid product was found to be lower than expected in the design. An investigation into the reason for this problem revealed that it was attributable to a fronting phenomenon occurring in the solute band of fumaric acid. To resolve this issue, the extent of the fronting was evaluated quantitatively using an experimental axial dispersion coefficient for fumaric acid, which was then considered in the design of the SMB of interest. The SMB experimental results showed that the design based on consideration of the fumaric-acid fronting could guarantee the attainment of both high purity (>99%) and high yield (>99%) for the fumaric-acid product at a desorbent consumption of 2.6 and a throughput of 0.36 L/L/h. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Is Municipal Solid Waste Recycling Economically Efficient?

    NASA Astrophysics Data System (ADS)

    Lavee, Doron

    2007-12-01

    It has traditionally been argued that recycling municipal solid waste (MSW) is usually not economically viable, and that only when externalities, long-term dynamic considerations, and/or the entire product life cycle are taken into account does recycling become worthwhile from a social point of view. This article explores the results of a wide study conducted in Israel in the years 2000-2004. Our results reveal that recycling is optimal more often than usually claimed, even when externality considerations are ignored. The study is unique in the tools it uses to explore the efficiency of recycling: a computer-based simulation applied to an extensive database. We developed a simulation for assessing the costs of handling and treating MSW under different waste-management systems and used this simulation to explore possible cost reductions obtained by designating some of the waste (otherwise sent to landfill) to recycling. We ran the simulation on data from 79 municipalities in Israel that produce over 60% of MSW in Israel. For each municipality, we were able to arrive at an optimal method of waste management and compare the costs associated with 100% landfilling to the costs borne by the municipality when some of the waste is recycled. Our results indicate that for 51% of the municipalities, it would be efficient to adopt recycling, even without accounting for externality costs. We found that by adopting recycling, municipalities would be able to reduce direct costs by an average of 11%. Through interviews conducted with representatives of municipalities, we were also able to identify obstacles to the utilization of recycling, answering in part the question of why actual recycling levels in Israel are lower than our model predicts they should be.

  16. Determining β-lactam exposure threshold to suppress resistance development in Gram-negative bacteria.

    PubMed

    Tam, Vincent H; Chang, Kai-Tai; Zhou, Jian; Ledesma, Kimberly R; Phe, Kady; Gao, Song; Van Bambeke, Françoise; Sánchez-Díaz, Ana María; Zamorano, Laura; Oliver, Antonio; Cantón, Rafael

    2017-05-01

    β-Lactams are commonly used for nosocomial infections and resistance to these agents among Gram-negative bacteria is increasing rapidly. Optimized dosing is expected to reduce the likelihood of resistance development during antimicrobial therapy, but the target for clinical dose adjustment is not well established. We examined the likelihood that various dosing exposures would suppress resistance development in an in vitro hollow-fibre infection model. Two strains of Klebsiella pneumoniae and two strains of Pseudomonas aeruginosa (baseline inocula of ∼10^8 cfu/mL) were examined. Various dosing exposures of cefepime, ceftazidime and meropenem were simulated in the hollow-fibre infection model. Serial samples were obtained to ascertain the pharmacokinetic simulations and viable bacterial burden for up to 120 h. Drug concentrations were determined by a validated LC-MS/MS assay and the simulated exposures were expressed as Cmin/MIC ratios. Resistance development was detected by quantitative culture on drug-supplemented media plates (at 3× the corresponding baseline MIC). The Cmin/MIC breakpoint threshold to prevent bacterial regrowth was identified by classification and regression tree (CART) analysis. For all strains, the bacterial burden declined initially with the simulated exposures, but regrowth was observed in 9 out of 31 experiments. CART analysis revealed that a Cmin/MIC ratio ≥3.8 was significantly associated with regrowth prevention (100% versus 44%, P = 0.001). The development of β-lactam resistance during therapy could be suppressed by an optimized dosing exposure. Validation of the proposed target in a well-designed clinical study is warranted. © The Author 2017. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
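The CART step used to find the Cmin/MIC breakpoint can be sketched as a one-variable split search minimizing Gini impurity; the ratios and regrowth outcomes below are illustrative, not the study's 31 experiments:

```python
# Toy re-creation of the CART threshold search: find the Cmin/MIC cut that
# best separates "regrowth" (1) from "suppressed" (0) outcomes.
ratios =   [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 5.0, 6.0, 8.0]
regrowth = [1,   1,   1,   0,   1,   1,   0,   0,   0,   0,   0  ]

def gini(labels):
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)  # impurity of a binary node

def best_split(x, y):
    best_cut, best_impurity = None, float("inf")
    for cut in sorted(set(x))[1:]:  # candidate thresholds between data points
        left = [yi for xi, yi in zip(x, y) if xi < cut]
        right = [yi for xi, yi in zip(x, y) if xi >= cut]
        imp = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if imp < best_impurity:
            best_cut, best_impurity = cut, imp
    return best_cut

print(best_split(ratios, regrowth))
```

On these made-up data the best cut falls between 3.0 and 3.5, i.e. the exposures at or above the cut are almost all regrowth-free, mirroring the ≥3.8 threshold the study reports on its real data.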

  17. Optimization of a Radiative Transfer Forward Operator for Simulating SMOS Brightness Temperatures over the Upper Mississippi Basin, USA

    NASA Technical Reports Server (NTRS)

    Lievens, H.; Verhoest, N. E. C.; Martens, B.; VanDenBerg, M. J.; Bitar, A. Al; Tomer, S. Kumar; Merlin, O.; Cabot, F.; Kerr, Y.; DeLannoy, G. J. M.

    2014-01-01

    The Soil Moisture and Ocean Salinity (SMOS) satellite mission is routinely providing global multi-angular observations of brightness temperature (TB) at both horizontal and vertical polarization with a 3-day repeat period. The assimilation of such data into a land surface model (LSM) may improve the skill of operational flood forecasts through an improved estimation of soil moisture (SM). To accommodate the direct assimilation of the SMOS TB data, the LSM needs to be coupled with a radiative transfer model (RTM), serving as a forward operator for the simulation of multi-angular and multi-polarization top of atmosphere TBs. This study investigates the use of the Variable Infiltration Capacity (VIC) LSM coupled with the Community Microwave Emission Modelling platform (CMEM) for simulating SMOS TB observations over the Upper Mississippi basin, USA. For a period of 2 years (2010-2011), a comparison between SMOS TBs and simulations with literature-based RTM parameters reveals a basin-averaged bias of 30 K. Therefore, time series of SMOS TB observations are used to investigate ways of mitigating these large biases. Specifically, the study demonstrates the impact of the LSM soil moisture climatology on the magnitude of TB biases. After CDF matching the SM climatology of the LSM to SMOS retrievals, the average bias decreases from 30 K to less than 5 K. Further improvements can be made through calibration of RTM parameters related to the modeling of surface roughness and vegetation. Consequently, it can be concluded that SM rescaling and RTM optimization are efficient means of mitigating biases and form a necessary preparatory step for data assimilation.
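The CDF-matching step (rescaling the LSM soil moisture climatology to the SMOS retrieval climatology) amounts to quantile mapping; the two samples below are synthetic stand-ins for the VIC and SMOS climatologies:

```python
import numpy as np

# CDF (quantile) matching: rescale model soil moisture so its distribution
# matches the SMOS retrieval climatology; both samples here are synthetic.
rng = np.random.default_rng(42)
model_sm = rng.normal(0.35, 0.05, 1000)  # LSM climatology (biased wet)
smos_sm = rng.normal(0.25, 0.08, 1000)   # SMOS retrieval climatology

def cdf_match(x, ref):
    # Map each value of x to the ref quantile having the same rank.
    quantiles = (np.argsort(np.argsort(x)) + 0.5) / len(x)
    return np.quantile(ref, quantiles)

matched = cdf_match(model_sm, smos_sm)
print(model_sm.mean() - smos_sm.mean())  # original bias
print(matched.mean() - smos_sm.mean())   # residual bias (near zero)
```

Because the mapping preserves ranks, temporal dynamics of the model series survive while its systematic mean and variance biases are removed, which is what shrinks the TB bias in the forward operator.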

  18. Trait-based Modeling Reveals How Plankton Biodiversity-Ecosystem Function (BEF) Relationships Depend on Environmental Variability

    NASA Astrophysics Data System (ADS)

    Smith, S. L.; Chen, B.; Vallina, S. M.

    2017-12-01

    Biodiversity-Ecosystem Function (BEF) relationships, which are most commonly quantified in terms of productivity or total biomass yield, are known to depend on the timescale of the experiment or field study, both for terrestrial plants and phytoplankton, which have each been widely studied as model ecosystems. Although many BEF relationships are positive (i.e., increasing biodiversity enhances function), in some cases there is an optimal intermediate diversity level (i.e., a uni-modal relationship), and in other cases productivity decreases with certain measures of biodiversity. These differences in BEF relationships cannot be reconciled merely by differences in the timescale of experiments. We will present results from simulation experiments applying recently developed trait-based models of phytoplankton communities and ecosystems, using the `adaptive dynamics' framework to represent continuous distributions of size and other key functional traits. Controlled simulation experiments were conducted with different levels of phytoplankton size-diversity, which through trait-size correlations implicitly represents functional-diversity. One recent study applied a theoretical box model for idealized simulations at different frequencies of disturbance. This revealed how the shapes of BEF relationships depend systematically on the frequency of disturbance and associated nutrient supply. We will also present more recent results obtained using a trait-based plankton ecosystem model embedded in a three-dimensional ocean model applied to the North Pacific. This reveals essentially the same pattern in a spatially explicit model with more realistic environmental forcing. In the relatively more variable subarctic, productivity tends to increase with the size (and hence functional) diversity of phytoplankton, whereas productivity tends to decrease slightly with increasing size-diversity in the relatively calm subtropics. 
Continuous trait-based models can capture essential features of BEF relationships, while requiring far fewer calculations compared to typical plankton diversity models that explicitly simulate a great many idealized species.

  19. Accuracy of MHD simulations: Effects of simulation initialization in GUMICS-4

    NASA Astrophysics Data System (ADS)

    Lakka, Antti; Pulkkinen, Tuija; Dimmock, Andrew; Osmane, Adnane; Palmroth, Minna; Honkonen, Ilja

    2016-04-01

    We conducted a study aimed at revealing how different global magnetohydrodynamic (MHD) simulation initialization methods affect the dynamics in different parts of the Earth's magnetosphere-ionosphere system. While such magnetosphere-ionosphere coupling codes have been used for more than two decades, their testing still requires significant work to identify the optimal numerical representation of the physical processes. We used the Grand Unified Magnetosphere-Ionosphere Coupling Simulation (GUMICS-4), the only European global MHD simulation, developed by the Finnish Meteorological Institute. GUMICS-4 was put to a test that included two stages: 1) a 10 day OMNI data interval was simulated and the results were validated by comparing the bow shock and magnetopause spatial positions predicted by the simulation to actual measurements, and 2) the validated 10 day simulation run was used as a reference in a comparison of five 3 + 12 hour (3 hour synthetic initialisation + 12 hour actual simulation) runs. The 12 hour input was not only identical in each simulation case but also represented a subset of the 10 day input, enabling quantification of the effects of different synthetic initialisations on the magnetosphere-ionosphere system. The synthetic initialisation data sets were created using stepwise, linear and sinusoidal functions, and the switch from synthetic to real OMNI data was immediate. The results show that the magnetosphere forms in each case within an hour after the switch to real data. However, local dissimilarities are found in the magnetospheric dynamics after formation depending on the initialisation method used. This is evident especially in the inner parts of the lobe.

  20. Analysis of EnergyPlus for use in residential building energy optimization

    NASA Astrophysics Data System (ADS)

    Spencer, Justin S.

    This work explored the utility of EnergyPlus as a simulation engine for doing residential building energy optimization, with the objective of finding the modeling areas that require further development in EnergyPlus for residential optimization applications. This work was conducted primarily during 2006-2007, with publication occurring later in 2010. The assessments and recommendations apply to the simulation tool versions available in 2007. During this work, an EnergyPlus v2.0 (2007) input file generator was developed for use in BEopt 0.8.0.4 (2007). BEopt 0.8.0.4 is a residential Building Energy optimization program developed at the National Renewable Energy Laboratory in Golden, Colorado. Residential modeling capabilities of EnergyPlus v2.0 were scrutinized and tested. Modeling deficiencies were identified in a number of areas. These deficiencies were compared to deficiencies in the DOE2.2 V44E4(2007)/TRNSYS simulation engines. The highest priority gaps in EnergyPlus v2.0's residential modeling capability are in infiltration, duct leakage, and foundation modeling. Optimization results from DOE2.2 V44E4 and EnergyPlus v2.0 were analyzed to search for modeling differences that have a significant impact on optimization results. Optimal buildings at different energy savings levels were compared to look for biases. It was discovered that the EnergyPlus v2.0 optimizations consistently chose higher wall insulation levels than the DOE2.2 V44E4 optimizations. The points composing the optimal paths chosen by DOE2.2 V44E4 and EnergyPlus v2.0 were compared to look for points chosen by one optimization that were significantly different from the other optimal path. These outliers were compared to consensus optimal points to determine the simulation differences that cause disparities in the optimization results. The differences were primarily caused by modeling of window radiation exchange and HVAC autosizing.

  1. Efficient Characterization of Protein Cavities within Molecular Simulation Trajectories: trj_cavity.

    PubMed

    Paramo, Teresa; East, Alexandra; Garzón, Diana; Ulmschneider, Martin B; Bond, Peter J

    2014-05-13

    Protein cavities and tunnels are critical in determining phenomena such as ligand binding, molecular transport, and enzyme catalysis. Molecular dynamics (MD) simulations enable the exploration of the flexibility and conformational plasticity of protein cavities, extending the information available from static experimental structures relevant to, for example, drug design. Here, we present a new tool (trj_cavity) implemented within the GROMACS ( www.gromacs.org ) framework for the rapid identification and characterization of cavities detected within MD trajectories. trj_cavity is optimized for usability and computational efficiency and is applicable to the time-dependent analysis of any cavity topology, and optional specialized descriptors can be used to characterize, for example, protein channels. Its novel grid-based algorithm performs an efficient neighbor search whose calculation time is linear with system size, and a comparison of performance with other widely used cavity analysis programs reveals an orders-of-magnitude improvement in the computational cost. To demonstrate its potential for revealing novel mechanistic insights, trj_cavity has been used to analyze long-time scale simulation trajectories for three diverse protein cavity systems. This has helped to reveal, respectively, the lipid binding mechanism in the deep hydrophobic cavity of a soluble mite-allergen protein, Der p 2; a means for shuttling carbohydrates between the surface-exposed substrate-binding and catalytic pockets of a multidomain, membrane-proximal pullulanase, PulA; and the structural basis for selectivity in the transmembrane pore of a voltage-gated sodium channel (NavMs), embedded within a lipid bilayer environment. trj_cavity is available for download under an open-source license ( http://sourceforge.net/projects/trjcavity ). A simplified, GROMACS-independent version may also be compiled.
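A heavily simplified, 2-D illustration of the grid idea behind trj_cavity (which itself works on 3-D MD frames with a linear-time neighbor search): map atoms onto a grid, then flag empty cells that are buried, i.e. see occupied cells along every axis direction. The geometry here is a made-up box with a hole, not a protein:

```python
import numpy as np

# Minimal 2-D grid-based cavity sketch: occupied cells come from atom
# positions; an empty cell is a cavity candidate when it is enclosed
# (occupied neighbors exist in all four axis directions).
atoms = [(x, y) for x in range(6) for y in range(6)
         if not (2 <= x <= 3 and 2 <= y <= 3)]  # a box with a 2x2 hole

grid = np.zeros((6, 6), dtype=bool)
for x, y in atoms:
    grid[x, y] = True                           # mark occupied cells

def buried(g, i, j):
    if g[i, j]:
        return False                            # occupied, not a cavity
    return (g[:i, j].any() and g[i + 1:, j].any()
            and g[i, :j].any() and g[i, j + 1:].any())

cavity = [(i, j) for i in range(6) for j in range(6) if buried(grid, i, j)]
print(cavity)
```

Running the same scan over every frame of a trajectory is what turns a static cavity snapshot into a time-dependent cavity volume, the core measurement trj_cavity provides.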

  2. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    NASA Astrophysics Data System (ADS)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach allows reduction in computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be similarly handled using appropriate penalty functions. We illustrate the proposed approach to minimize the expected execution cost and Conditional Value-at-Risk (CVaR).
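A minimal Monte Carlo sketch of the ingredients above: simulate execution costs for one fixed parametric strategy and evaluate its expected cost and CVaR. The linear price-impact model and all parameter values are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

# Monte Carlo evaluation of a parametric execution strategy (fraction of the
# position sold per period) under a toy permanent price-impact model.
rng = np.random.default_rng(7)

def execution_cost(weights, n_sims=20000, impact=0.1, vol=0.02):
    shocks = rng.normal(0, vol, (n_sims, len(weights)))
    prices = 1.0 + np.cumsum(shocks, axis=1)       # random price paths
    prices = prices - impact * np.cumsum(weights)  # permanent impact of selling
    return 1.0 - (prices * weights).sum(axis=1)    # shortfall vs initial value

def cvar(costs, alpha=0.95):
    var = np.quantile(costs, alpha)
    return costs[costs >= var].mean()              # mean of the worst 5% tail

costs = execution_cost(np.array([0.25, 0.25, 0.25, 0.25]))
print(costs.mean(), cvar(costs))
```

A static optimizer would then search over the strategy parameters (here, the `weights`) to trade off the expected cost against the CVaR estimate, which is the parametric approach the abstract describes.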

  3. Parallelization of Program to Optimize Simulated Trajectories (POST3D)

    NASA Technical Reports Server (NTRS)

    Hammond, Dana P.; Korte, John J. (Technical Monitor)

    2001-01-01

    This paper describes the parallelization of the Program to Optimize Simulated Trajectories (POST3D). POST3D uses a gradient-based optimization algorithm that reaches an optimum design point by moving from one design point to the next. The gradient calculations required to complete the optimization process dominate the computational time and have been parallelized using a Single Program Multiple Data (SPMD) approach on a distributed-memory NUMA (non-uniform memory access) architecture. The Origin2000 was used for the tests presented.
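The parallelized step can be illustrated with a worker pool computing a forward-difference gradient, one independent simulation per perturbed design point. The quadratic `simulate` function is a hypothetical stand-in for a trajectory run, and a thread pool merely illustrates the SPMD decomposition that POST3D ran on distributed memory:

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(design):
    # Stand-in for one full trajectory simulation at a design point.
    x, y = design
    return (x - 1.0) ** 2 + (y + 2.0) ** 2

def gradient(design, h=1e-6):
    points = [tuple(design)]
    for i in range(len(design)):         # one perturbed point per variable
        p = list(design)
        p[i] += h
        points.append(tuple(p))
    with ThreadPoolExecutor() as pool:   # all simulations run concurrently
        f0, *fs = pool.map(simulate, points)
    return [(fi - f0) / h for fi in fs]  # forward differences

print(gradient((0.0, 0.0)))
```

Each finite-difference evaluation is independent of the others, which is why the gradient step, the dominant cost, parallelizes almost perfectly across workers.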

  4. An Energy-Efficient Spectrum-Aware Reinforcement Learning-Based Clustering Algorithm for Cognitive Radio Sensor Networks

    PubMed Central

    Mustapha, Ibrahim; Ali, Borhanuddin Mohd; Rasid, Mohd Fadlee A.; Sali, Aduwati; Mohamad, Hafizal

    2015-01-01

    It is well known that clustering partitions a network into logical groups of nodes in order to achieve energy efficiency and to enhance dynamic channel access in cognitive radio through cooperative sensing. While the topic of energy efficiency has been well investigated in conventional wireless sensor networks, the latter topic has not been extensively explored. In this paper, we propose a reinforcement learning-based spectrum-aware clustering algorithm that allows a member node to learn the energy and cooperative sensing costs of neighboring clusters to achieve an optimal solution. Each member node selects an optimal cluster that satisfies pairwise constraints, minimizes network energy consumption and enhances channel sensing performance through an exploration technique. We first model the network energy consumption and then determine the optimal number of clusters for the network. The problem of selecting an optimal cluster is formulated as a Markov Decision Process (MDP) in the algorithm, and the obtained simulation results show the convergence, learning and adaptability of the algorithm to a dynamic environment in achieving an optimal solution. Performance comparisons of our algorithm with the Groupwise Spectrum Aware (GWSA)-based algorithm in terms of Sum of Square Error (SSE), complexity, network energy consumption and probability of detection indicate improved performance from the proposed approach. The results further reveal that an energy savings of 9% and a significant Primary User (PU) detection improvement can be achieved with the proposed approach. PMID:26287191
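The learning idea can be sketched as a stateless (bandit) simplification of the paper's MDP formulation: a member node keeps running cost estimates per candidate cluster and selects ε-greedily. The cluster names and mean costs below are hypothetical:

```python
import random

# Stateless sketch of the cluster-selection idea: a member node learns which
# cluster minimizes its noisy energy + sensing cost via ε-greedy exploration.
random.seed(0)
true_cost = {"A": 5.0, "B": 3.0, "C": 8.0}  # unknown mean costs per cluster
q = {c: 0.0 for c in true_cost}             # running cost estimates
alpha, eps = 0.1, 0.1                       # learning rate, exploration rate

for _ in range(2000):
    if random.random() < eps:
        c = random.choice(list(q))          # explore a random cluster
    else:
        c = min(q, key=q.get)               # exploit the cheapest known cluster
    observed = true_cost[c] + random.gauss(0, 0.5)
    q[c] += alpha * (observed - q[c])       # incremental estimate update

print(min(q, key=q.get))
```

The constant learning rate keeps the estimates tracking a non-stationary environment, which is the adaptability property the abstract highlights; the full algorithm additionally conditions the choice on network state.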

  5. Optimization of a pressure control valve for high power automatic transmission considering stability

    NASA Astrophysics Data System (ADS)

    Jian, Hongchao; Wei, Wei; Li, Hongcai; Yan, Qingdong

    2018-02-01

    The pilot-operated electrohydraulic clutch-actuator system is widely used in high-power automatic transmissions because of the demand for a large flow rate and excellent pressure-regulating capability. However, inappropriate system parameters can induce a self-excited vibration, driven by the inherently non-linear valve-spool motion coupled with the fluid dynamics, during operation of the hydraulic system; this causes sustained instability and leads to unexpected performance deterioration and hardware damage. To ensure a stable and fast response of the clutch-actuator system, an optimal design method for the pressure control valve that accounts for stability is proposed in this paper. A non-linear dynamic model of the clutch-actuator system is established based on the motion of the valve spool and the coupled fluid dynamics in the system. The stability boundary in the parameter space is obtained by numerical stability analysis. The sensitivity of the stability boundary and of the output pressure response time to the valve parameters is identified using a design of experiments (DOE) approach. The pressure control valve is then optimized using a particle swarm optimization (PSO) algorithm with the stability boundary as a constraint. The simulation and experimental results reveal that the proposed optimization method improves the response characteristics while ensuring the stability of the clutch actuator system during the entire gear shift process.
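    The constrained-PSO step can be illustrated with a minimal sketch. The objective, the stability test, and the tuning constants below are invented placeholders, not the paper's valve model; infeasible (unstable) designs are handled with a simple penalty.

```python
import random

def pso_constrained(obj, stable, bounds, n=20, iters=200, seed=1):
    """Minimize obj over box bounds, treating instability as a heavy penalty."""
    rng = random.Random(seed)
    dim = len(bounds)
    def penalized(x):
        return obj(x) if stable(x) else obj(x) + 1e6   # reject unstable designs
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=penalized)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                         # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive pull
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social pull
                pos[i][d] += vel[i][d]
            if penalized(pos[i]) < penalized(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=penalized)[:]
    return gbest
```

    With, say, `obj = (x - 2)^2` and a "stability boundary" at `x <= 1`, the swarm settles just inside the feasible region near the boundary, which is the qualitative behavior one expects when the unconstrained optimum lies in the unstable region.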

  6. An Energy-Efficient Spectrum-Aware Reinforcement Learning-Based Clustering Algorithm for Cognitive Radio Sensor Networks.

    PubMed

    Mustapha, Ibrahim; Mohd Ali, Borhanuddin; Rasid, Mohd Fadlee A; Sali, Aduwati; Mohamad, Hafizal

    2015-08-13

    It is well-known that clustering partitions network into logical groups of nodes in order to achieve energy efficiency and to enhance dynamic channel access in cognitive radio through cooperative sensing. While the topic of energy efficiency has been well investigated in conventional wireless sensor networks, the latter has not been extensively explored. In this paper, we propose a reinforcement learning-based spectrum-aware clustering algorithm that allows a member node to learn the energy and cooperative sensing costs for neighboring clusters to achieve an optimal solution. Each member node selects an optimal cluster that satisfies pairwise constraints, minimizes network energy consumption and enhances channel sensing performance through an exploration technique. We first model the network energy consumption and then determine the optimal number of clusters for the network. The problem of selecting an optimal cluster is formulated as a Markov Decision Process (MDP) in the algorithm and the obtained simulation results show convergence, learning and adaptability of the algorithm to dynamic environment towards achieving an optimal solution. Performance comparisons of our algorithm with the Groupwise Spectrum Aware (GWSA)-based algorithm in terms of Sum of Square Error (SSE), complexity, network energy consumption and probability of detection indicate improved performance from the proposed approach. The results further reveal that an energy savings of 9% and a significant Primary User (PU) detection improvement can be achieved with the proposed approach.

  7. Validation of Multibody Program to Optimize Simulated Trajectories II Parachute Simulation with Interacting Forces

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Behzad; Queen, Eric M.; Hotchko, Nathaniel J.

    2009-01-01

    A capability to simulate trajectories of multiple interacting rigid bodies has been developed, tested and validated. This capability uses the Program to Optimize Simulated Trajectories II (POST 2). The standard version of POST 2 allows trajectory simulation of multiple bodies without force interaction. In the current implementation, the force interaction between the parachute and the suspended bodies has been modeled using flexible lines, allowing accurate trajectory simulation of the individual bodies in flight. The POST 2 multibody capability is intended to be general purpose and applicable to any parachute entry trajectory simulation. This research paper explains the motivation for multibody parachute simulation, discusses implementation methods, and presents validation of this capability.

  8. Simulating carbon and water fluxes at Arctic and boreal ecosystems in Alaska by optimizing the modified BIOME-BGC with eddy covariance data

    NASA Astrophysics Data System (ADS)

    Ueyama, M.; Kondo, M.; Ichii, K.; Iwata, H.; Euskirchen, E. S.; Zona, D.; Rocha, A. V.; Harazono, Y.; Nakai, T.; Oechel, W. C.

    2013-12-01

    To better predict carbon and water cycles in Arctic ecosystems, we modified a process-based ecosystem model, BIOME-BGC, by introducing new processes: changes in active layer depth on permafrost and the phenology of tundra vegetation. The modified BIOME-BGC was then calibrated by constraining it with gross primary productivity (GPP) and net ecosystem exchange (NEE) from 23 eddy covariance sites in Alaska, and with vegetation/soil carbon data from a literature survey. The model was used to simulate regional carbon and water fluxes of Alaska from 1900 to 2011. Simulated regional fluxes were validated against upscaled GPP, ecosystem respiration (RE), and NEE derived with two methods: (1) a machine learning technique and (2) a top-down model. Our initial simulation suggests that the original BIOME-BGC with default ecophysiological parameters substantially underestimated GPP and RE for tundra and overestimated those fluxes for boreal forests. We will discuss how optimization with the eddy covariance data affects the historical simulation by comparing the new version of the model with results from the original BIOME-BGC. These findings suggest that incorporating active layer depth and plant phenology is important when simulating carbon and water fluxes in Arctic ecosystems.

  9. Automatic optimization of well locations in a North Sea fractured chalk reservoir using a front tracking reservoir simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rian, D.T.; Hage, A.

    1994-12-31

    A numerical simulator is often used as a reservoir management tool. One of its main purposes is to aid in the evaluation of the number of wells, well locations and start times for wells. Traditionally, the optimization of a field development is done by a manual trial-and-error process. In this paper, an example of an automated technique is given. The core of the automated process is the reservoir simulator Frontline. Frontline is based on front-tracking techniques, which make it fast and accurate compared to traditional finite difference simulators. Due to its CPU efficiency, the simulator has been coupled with an optimization module, which enables automatic optimization of the location of wells, the number of wells and start-up times. The simulator was used as an alternative method in the evaluation of waterflooding in a North Sea fractured chalk reservoir. Since Frontline is, in principle, 2D, Buckley-Leverett pseudo functions were used to represent the third dimension. The areal full-field simulation model was run with up to 25 wells for 20 years in less than one minute of Vax 9000 CPU time. The automatic Frontline evaluation indicated that a peripheral waterflood could double incremental recovery compared to a central pattern drive.

  10. Design of a force reflecting hand controller for space telemanipulation studies

    NASA Technical Reports Server (NTRS)

    Paines, J. D. B.

    1987-01-01

    The potential importance of space telemanipulator systems is reviewed, along with past studies of master-slave manipulation using a generalized force reflecting master arm. The use of these systems has revealed problems concerning their dynamic interaction with the human operator, with marked differences between 1-g and simulated weightless conditions. A study is outlined to investigate the optimization of the man-machine dynamics of master-slave manipulation, and a set of specifications is determined for the apparatus necessary to perform this investigation. This apparatus is a one-degree-of-freedom force reflecting hand controller with closed-loop servo control, which enables it to simulate arbitrary dynamic properties at high bandwidth. The design of the complete system and its performance are discussed. Finally, the experimental adjustment of the hand controller dynamics for smooth manual control performance with good operator force perception is described, resulting in low-inertia, viscously damped hand controller dynamics.

  11. Interaction between control and design of a SHARON reactor: economic considerations in a plant-wide (BSM2) context.

    PubMed

    Volcke, E I P; van Loosdrecht, M C M; Vanrolleghem, P A

    2007-01-01

    The combined SHARON-Anammox process is a promising technique for nitrogen removal from wastewater streams with high ammonium concentrations. It is typically applied to sludge digestion reject water in order to relieve the activated sludge tanks to which this stream is normally recycled. This contribution assesses the impact of the control strategy applied in the SHARON reactor, both on the effluent quality of the subsequent Anammox reactor and at the plant-wide level by means of an operating cost index. Moreover, it is investigated to what extent the usefulness of a given control strategy depends on the reactor design (volume). A simulation study is carried out using the plant-wide Benchmark Simulation Model no. 2 (BSM2), extended with the SHARON and Anammox processes. The results reveal a discrepancy between optimizing the reject water treatment performance and minimizing plant-wide operating costs.

  12. Sonoporation at Small and Large Length Scales: Effect of Cavitation Bubble Collapse on Membranes.

    PubMed

    Fu, Haohao; Comer, Jeffrey; Cai, Wensheng; Chipot, Christophe

    2015-02-05

    Ultrasound has emerged as a promising means to effect controlled delivery of therapeutic agents through cell membranes. One possible mechanism that explains the enhanced permeability of lipid bilayers is the fast contraction of cavitation bubbles produced on the membrane surface, which generates large impulses that, in turn, enhance the permeability of the bilayer to small molecules. In the present contribution, we investigate the collapse of bubbles of different diameters, using atomistic and coarse-grained molecular dynamics simulations to calculate the force exerted on the membrane. The total impulse can be computed rigorously in numerical simulations, revealing a superlinear dependence of the impulse on the radius of the bubble. The collapse affects the structure of a nearby immobilized membrane, and leads to partial membrane invagination and increased water permeation. The results of the present study are envisioned to help optimize the use of ultrasound, notably for the delivery of drugs.

  13. Generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB.

    PubMed

    Lee, Leng-Feng; Umberger, Brian R

    2016-01-01

    Computer modeling, simulation and optimization are powerful tools that have seen increased use in biomechanics research. Dynamic optimizations can be categorized as either data-tracking or predictive problems. The data-tracking approach has been used extensively to address human movement problems of clinical relevance. The predictive approach also holds great promise, but has seen limited use in clinical applications. Enhanced software tools would facilitate the application of predictive musculoskeletal simulations to clinically-relevant research. The open-source software OpenSim provides tools for generating tracking simulations but not predictive simulations. However, OpenSim includes an extensive application programming interface that permits extending its capabilities with scripting languages such as MATLAB. In the work presented here, we combine the computational tools provided by MATLAB with the musculoskeletal modeling capabilities of OpenSim to create a framework for generating predictive simulations of musculoskeletal movement based on direct collocation optimal control techniques. In many cases, the direct collocation approach can be used to solve optimal control problems considerably faster than traditional shooting methods. Cyclical and discrete movement problems were solved using a simple 1 degree of freedom musculoskeletal model and a model of the human lower limb, respectively. The problems could be solved in reasonable amounts of time (several seconds to 1-2 hours) using the open-source IPOPT solver. The problems could also be solved using the fmincon solver that is included with MATLAB, but the computation times were excessively long for all but the smallest of problems. The performance advantage for IPOPT was derived primarily by exploiting sparsity in the constraints Jacobian. The framework presented here provides a powerful and flexible approach for generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB. This should allow researchers to more readily use predictive simulation as a tool to address clinical conditions that limit human mobility.
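    The direct-collocation idea can be illustrated on a toy problem: a double integrator rather than a musculoskeletal model, with SciPy's SLSQP solver standing in for IPOPT. States and controls at all time nodes are decision variables, and the dynamics enter as trapezoidal "defect" equality constraints.

```python
import numpy as np
from scipy.optimize import minimize

# Drive a double integrator from rest at x=0 to rest at x=1 in T seconds
# while minimizing control effort, via trapezoidal direct collocation.
N, T = 21, 1.0
h = T / (N - 1)

def unpack(z):
    return z[:N], z[N:2 * N], z[2 * N:]          # position, velocity, control

def objective(z):
    _, _, u = unpack(z)
    return h * np.sum(u ** 2)                    # discretized integral of u^2

def defects(z):
    x, v, u = unpack(z)
    # trapezoidal defects: state_{k+1} - state_k - (h/2)*(f_k + f_{k+1})
    dx = x[1:] - x[:-1] - 0.5 * h * (v[1:] + v[:-1])
    dv = v[1:] - v[:-1] - 0.5 * h * (u[1:] + u[:-1])
    bc = [x[0], v[0], x[-1] - 1.0, v[-1]]        # boundary conditions
    return np.concatenate([dx, dv, bc])

res = minimize(objective, np.zeros(3 * N), method="SLSQP",
               constraints={"type": "eq", "fun": defects})
x_opt, v_opt, u_opt = unpack(res.x)
```

    Because every defect couples only neighboring nodes, the constraints Jacobian is sparse, which is the structure IPOPT exploits to outpace fmincon on larger problems, as the abstract notes.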

  14. Generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB

    PubMed Central

    Lee, Leng-Feng

    2016-01-01

    Computer modeling, simulation and optimization are powerful tools that have seen increased use in biomechanics research. Dynamic optimizations can be categorized as either data-tracking or predictive problems. The data-tracking approach has been used extensively to address human movement problems of clinical relevance. The predictive approach also holds great promise, but has seen limited use in clinical applications. Enhanced software tools would facilitate the application of predictive musculoskeletal simulations to clinically-relevant research. The open-source software OpenSim provides tools for generating tracking simulations but not predictive simulations. However, OpenSim includes an extensive application programming interface that permits extending its capabilities with scripting languages such as MATLAB. In the work presented here, we combine the computational tools provided by MATLAB with the musculoskeletal modeling capabilities of OpenSim to create a framework for generating predictive simulations of musculoskeletal movement based on direct collocation optimal control techniques. In many cases, the direct collocation approach can be used to solve optimal control problems considerably faster than traditional shooting methods. Cyclical and discrete movement problems were solved using a simple 1 degree of freedom musculoskeletal model and a model of the human lower limb, respectively. The problems could be solved in reasonable amounts of time (several seconds to 1-2 hours) using the open-source IPOPT solver. The problems could also be solved using the fmincon solver that is included with MATLAB, but the computation times were excessively long for all but the smallest of problems. The performance advantage for IPOPT was derived primarily by exploiting sparsity in the constraints Jacobian. The framework presented here provides a powerful and flexible approach for generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB. This should allow researchers to more readily use predictive simulation as a tool to address clinical conditions that limit human mobility. PMID:26835184

  15. Optimal fabrication processes for unidirectional metal-matrix composites: A computational simulation

    NASA Technical Reports Server (NTRS)

    Saravanos, D. A.; Murthy, P. L. N.; Morel, M.

    1990-01-01

    A method is proposed for optimizing the fabrication process of unidirectional metal matrix composites. The temperature and pressure histories are optimized such that the residual microstresses of the composite at the end of the fabrication process are minimized and the material integrity throughout the process is ensured. The response of the composite during the fabrication is simulated based on a nonlinear micromechanics theory. The optimal fabrication problem is formulated and solved with non-linear programming. Application cases regarding the optimization of the fabrication cool-down phases of unidirectional ultra-high modulus graphite/copper and silicon carbide/titanium composites are presented.

  16. Optimal fabrication processes for unidirectional metal-matrix composites - A computational simulation

    NASA Technical Reports Server (NTRS)

    Saravanos, D. A.; Murthy, P. L. N.; Morel, M.

    1990-01-01

    A method is proposed for optimizing the fabrication process of unidirectional metal matrix composites. The temperature and pressure histories are optimized such that the residual microstresses of the composite at the end of the fabrication process are minimized and the material integrity throughout the process is ensured. The response of the composite during the fabrication is simulated based on a nonlinear micromechanics theory. The optimal fabrication problem is formulated and solved with nonlinear programming. Application cases regarding the optimization of the fabrication cool-down phases of unidirectional ultra-high modulus graphite/copper and silicon carbide/titanium composites are presented.

  17. Application of a neural network to simulate analysis in an optimization process

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; Lamarsh, William J., II

    1992-01-01

    A new experimental software package, NETS/PROSSS, aimed at reducing the computing time required to solve a complex design problem is described. The software combines a neural network for simulating the analysis program with an optimization program. The neural network approximates the results of a finite element analysis program to quickly obtain a near-optimal solution. The result of the NETS/PROSSS optimization process can also be used as the initial design in a conventional optimization process, making it possible to converge to an optimum solution with significantly fewer iterations.
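    The surrogate idea behind this approach can be sketched in a few lines. Here a polynomial fit stands in for the neural network, and the "expensive" analysis is an invented response surface: sample the costly function a handful of times, fit a cheap approximation, and take the approximation's minimizer as a near-optimal starting design.

```python
import numpy as np

def expensive_analysis(x):
    # stand-in for a costly finite element run (invented response surface)
    return (x - 1.7) ** 2 + 0.5

samples = np.linspace(0.0, 3.0, 7)            # a handful of costly evaluations
values = expensive_analysis(samples)

coeffs = np.polyfit(samples, values, 2)       # cheap surrogate of the analysis
surrogate_min = -coeffs[1] / (2 * coeffs[0])  # vertex of the fitted parabola
# surrogate_min is the near-optimal design handed to the full optimizer
```

    A neural network plays the same role as the polynomial here but can represent far more general responses; the payoff in both cases is that the optimizer iterates against the cheap model rather than the expensive analysis.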

  18. Solar Sail Spaceflight Simulation

    NASA Technical Reports Server (NTRS)

    Lisano, Michael; Evans, James; Ellis, Jordan; Schimmels, John; Roberts, Timothy; Rios-Reyes, Leonel; Scheeres, Daniel; Bladt, Jeff; Lawrence, Dale; Piggott, Scott

    2007-01-01

    The Solar Sail Spaceflight Simulation Software (S5) toolkit provides solar-sail designers with an integrated environment for designing optimal solar-sail trajectories, and then studying the attitude dynamics/control, navigation, and trajectory control/correction of sails during realistic mission simulations. Unique features include a high-fidelity solar radiation pressure model suitable for arbitrarily-shaped solar sails, a solar-sail trajectory optimizer, capability to develop solar-sail navigation filter simulations, solar-sail attitude control models, and solar-sail high-fidelity force models.

  19. Morphologic and Aerodynamic Considerations Regarding the Plumed Seeds of Tragopogon pratensis and Their Implications for Seed Dispersal.

    PubMed

    Casseau, Vincent; De Croon, Guido; Izzo, Dario; Pandolfi, Camilla

    2015-01-01

    Tragopogon pratensis is a small herbaceous plant that uses wind as the dispersal vector for its seeds. The seeds are attached to parachutes that increase the aerodynamic drag force and thus the total distance travelled. Our hypothesis is that evolution has carefully tuned the air permeability of the seeds to operate in the most convenient fluid dynamic regime. To achieve this permeability, the primary and secondary fibres of the pappus have evolved a complex weave; this maximises the drag force (i.e., the drag coefficient), and the pappus operates in an "optimal" state. We used computational fluid dynamics (CFD) simulations to compute the seed drag coefficient and compared it with data obtained from drop experiments. The permeability of the parachute was estimated from microscope images. Our simulations reveal three flow regimes in which the parachute can operate, depending on its permeability. These flow regimes affect the stability of the parachute and its drag coefficient. From the permeability measurements and drop experiments, we show that the seeds operate very close to the optimal case. The porosity of the textile appears to be an appropriate solution for achieving a lightweight structure that allows a low terminal velocity, a stable flight and a very efficient parachute at the velocity at which it operates.

  20. Dealing with Multiple Solutions in Structural Vector Autoregressive Models.

    PubMed

    Beltz, Adriene M; Molenaar, Peter C M

    2016-01-01

    Structural vector autoregressive models (VARs) hold great potential for psychological science, particularly for time series data analysis. They capture the magnitude, direction of influence, and temporal (lagged and contemporaneous) nature of relations among variables. Unified structural equation modeling (uSEM) is an optimal structural VAR instantiation, according to large-scale simulation studies, and it is implemented within an SEM framework. However, little is known about the uniqueness of uSEM results. Thus, the goal of this study was to investigate whether multiple solutions result from uSEM analysis and, if so, to demonstrate ways to select an optimal solution. This was accomplished with two simulated data sets, an empirical data set concerning children's dyadic play, and modifications to the group iterative multiple model estimation (GIMME) program, which implements uSEMs with group- and individual-level relations in a data-driven manner. Results revealed multiple solutions when there were large contemporaneous relations among variables. Results also verified several ways to select the correct solution when the complete solution set was generated, such as the use of cross-validation, maximum standardized residuals, and information criteria. This work has immediate and direct implications for the analysis of time series data and for the inferences drawn from those data concerning human behavior.
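    The lagged part of such a VAR can be illustrated with a small simulate-and-recover sketch. The coefficient matrix below is invented, and this is plain ordinary least squares on a VAR(1), not the GIMME/uSEM machinery; it shows the kind of lagged relations among variables that the models in this study estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.5, 0.2],
              [0.0, 0.4]])                    # true lag-1 relations (invented)
y = np.zeros((500, 2))
for t in range(1, 500):
    y[t] = A @ y[t - 1] + rng.normal(0.0, 0.1, 2)   # simulate a VAR(1) series

# least-squares regression of y_t on y_{t-1}: y[1:] ~ y[:-1] @ A_hat.T
B, *_ = np.linalg.lstsq(y[:-1], y[1:], rcond=None)
A_hat = B.T
```

    Structural VARs additionally estimate contemporaneous relations among the variables, and it is precisely those contemporaneous paths that the study finds can produce multiple equally well-fitting solutions.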

  1. Spectroscopic and molecular docking studies on N,N-di-tert-butoxycarbonyl (Boc)-2-amino pyridine: A potential bioactive agent for lung cancer treatment

    NASA Astrophysics Data System (ADS)

    Mohamed Asath, R.; Premkumar, R.; Mathavan, T.; Milton Franklin Benial, A.

    2017-09-01

    A potential energy surface scan was performed and the most stable molecular structure of the N,N-di-tert-butoxycarbonyl (Boc)-2-amino pyridine (DBAP) molecule was predicted. This structure was optimized using the B3LYP method with the cc-pVTZ basis set. The anticancer activity of the DBAP molecule was evaluated by molecular docking analysis. The structural parameters and vibrational wavenumbers were calculated for the optimized molecular structure, and the experimental and theoretical wavenumbers were assigned and compared. The ultraviolet-visible spectrum was simulated and validated experimentally. The molecular electrostatic potential surface was simulated, and Fukui function calculations were carried out to investigate the reactive nature of the DBAP molecule. Natural bond orbital analysis was also performed to probe the intramolecular interactions and confirm the bioactivity of the DBAP molecule. The molecular docking analysis reveals the inhibitory potential of the DBAP molecule against the epidermal growth factor receptor (EGFR) protein, which is implicated in lung cancer. Hence, the present study unveils the structural and bioactive nature of the title molecule, identifying DBAP as a potential inhibitor that may be useful in further drug design for the treatment of lung cancer.

  2. Optimizing purification process of MIM-I-BAR domain by introducing atomic force microscope and dynamics simulations.

    PubMed

    Zhang, Yue; Lou, Zhichao; Lin, Xubo; Wang, Qiwei; Cao, Meng; Gu, Ning

    2017-09-01

    MIM (missing in metastasis) is a member of the I-BAR (inverse BAR) domain protein family and functions as a putative metastasis suppressor. However, methods for obtaining high-purity MIM-I-BAR protein are rarely reported. Here, by optimizing the purification process, including the conditions of cell lysis and protein elution, we successfully purified the MIM protein to a purity of about 90%. High-resolution atomic force microscopy (AFM) provides direct visual images, allowing us to observe the microenvironment around the target protein as well as the conformations of the products after each purification step. MIM proteins of two different sizes were observed on a mica surface with AFM; combined with molecular dynamics simulations, these were identified as the MIM monomer and dimer. Furthermore, our study highlights the importance of using imidazole at suitable concentrations during the affinity chromatography step, as well as of removing excess imidazole afterwards. All these results indicate that the method described here successfully purifies the MIM protein while maintaining its natural properties, and it may be applicable to other proteins with low solubility. Copyright © 2017. Published by Elsevier B.V.

  3. Nitrogen vacancy, self-interstitial diffusion, and Frenkel-pair formation/dissociation in B1 TiN studied by ab initio and classical molecular dynamics with optimized potentials

    NASA Astrophysics Data System (ADS)

    Sangiovanni, D. G.; Alling, B.; Steneteg, P.; Hultman, L.; Abrikosov, I. A.

    2015-02-01

    We use ab initio and classical molecular dynamics (AIMD and CMD) based on the modified embedded-atom method (MEAM) potential to simulate the diffusion of N vacancy and N self-interstitial point defects in B1 TiN. The TiN MEAM parameters are optimized to obtain CMD nitrogen point-defect jump rates in agreement with AIMD predictions, as well as an excellent description of Ti Nx(˜0.7

  4. Signatures of a globally optimal searching strategy in the three-dimensional foraging flights of bumblebees

    NASA Astrophysics Data System (ADS)

    Lihoreau, Mathieu; Ings, Thomas C.; Chittka, Lars; Reynolds, Andy M.

    2016-07-01

    Simulated annealing is a powerful stochastic search algorithm for locating a global maximum that is hidden among many poorer local maxima in a search space. It is frequently implemented in computers working on complex optimization problems, but until now it has not been directly observed in nature as a searching strategy adopted by foraging animals. We analysed high-speed video recordings of the three-dimensional searching flights of bumblebees (Bombus terrestris) made in the presence of large or small artificial flowers within a 0.5 m3 enclosed arena. Analyses of the three-dimensional flight patterns in both conditions reveal signatures of simulated annealing searches. After leaving a flower, bees tend to scan back and forth past that flower before making prospecting flights (loops) whose length increases over time. The search pattern becomes gradually more expansive and culminates when another rewarding flower is found. Bees then scan back and forth in the vicinity of the newly discovered flower and the process repeats. This looping search pattern, in which flight step lengths are typically power-law distributed, provides a relatively simple yet highly efficient strategy for pollinators such as bees to find the best quality resources in complex environments made of multiple ephemeral feeding sites with nutritionally variable rewards.
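    A minimal simulated-annealing loop of the kind invoked here might look as follows. The "flower quality" landscape and the linear cooling schedule are invented for illustration: worse moves are accepted readily while the temperature is high, so the search can escape poor local peaks before settling on the global one.

```python
import math
import random

def quality(x):
    # invented rippled landscape: many local peaks, global best near x = 3.8
    return 2.0 * math.cos(5.0 * x) - (x - 4.0) ** 2

def anneal(steps=10000, t0=2.0, step=0.5, seed=0):
    rng = random.Random(seed)
    x = 0.0                                   # start far from the best region
    best = x
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9     # linear cooling schedule
        cand = x + rng.gauss(0.0, step)       # local prospecting move
        d = quality(cand) - quality(x)
        if d > 0 or rng.random() < math.exp(d / t):
            x = cand                          # always accept improvements
        if quality(x) > quality(best):
            best = x
    return best
```

    The bees' behavior maps loosely onto this loop: early, wide prospecting flights correspond to high-temperature acceptance of poorer sites, and the tightening scans around a good flower correspond to the cooled, nearly greedy phase.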

  5. An Approach for Practical Use of Common-Mode Noise Reduction Technique for In-Vehicle Electronic Equipment

    NASA Astrophysics Data System (ADS)

    Uno, Takanori; Ichikawa, Kouji; Mabuchi, Yuichi; Nakamura, Atsushi; Okazaki, Yuji; Asai, Hideki

    In this paper, we studied the use of a common-mode noise reduction technique for in-vehicle electronic equipment in an actual instrument design. We improved the circuit model of the common-mode noise that flows to the wire harness by adding the effect of a bypass capacitor located near the LSI. We analyzed the improved circuit model with a circuit simulator and verified the effectiveness of the noise reduction condition derived from the model. It was also confirmed that offsetting the impedance mismatch in the PCB section requires a larger circuit constant than offsetting the mismatch in the LSI section. An evaluation circuit board comprising an automotive microcomputer was prototyped, and experiments confirmed its common-mode noise reduction effect. The experiments also revealed that the degree of impedance mismatch in the LSI section can be estimated by using a PCB with a known impedance. We further investigated the optimization of the impedance parameters, which is difficult for actual products at present. To satisfy the noise reduction condition, which involves numerous parameters, we proposed a design method using an optimization algorithm and an electromagnetic field simulator, and confirmed its effectiveness.

  6. Design and optimization of a nanoprobe comprising amphiphilic chitosan colloids and Au-nanorods: Sensitive detection of human serum albumin in simulated urine

    NASA Astrophysics Data System (ADS)

    Jean, Ren-Der; Larsson, Mikael; Cheng, Wei-Da; Hsu, Yu-Yuan; Bow, Jong-Shing; Liu, Dean-Mo

    2016-12-01

    Metallic nanoparticles have been utilized as analytical tools to detect a wide range of organic analytes. In most reports, gold (Au)-based nanosensors have been modified with ligands to introduce selectivity towards a specific target molecule. However, in a recent study a new concept was presented where bare Au-nanorods on self-assembled carboxymethyl-hexanoyl chitosan (CHC) nanocarriers achieved sensitive and selective detection of human serum albumin (HSA) after manipulation of the solution pH. Here this concept was further advanced through optimization of the ratio between Au-nanorods and CHC nanocarriers to create a nanotechnology-based sensor (termed CHC-AuNR nanoprobe) with an outstanding lower detection limit (LDL) for HSA. The CHC-AuNR nanoprobe was evaluated in simulated urine solution and a LDL as low as 1.5 pM was achieved at an estimated AuNR/CHC ratio of 2. Elemental mapping and protein adsorption kinetics over three orders of magnitude in HSA concentration confirmed accumulation of HSA on the nanorods and revealed the adsorption to be completed within 15 min for all investigated concentrations. The results suggest that the CHC-AuNR nanoprobe has potential to be utilized for cost-effective detection of analytes in complex liquids.

  7. Solving complex maintenance planning optimization problems using stochastic simulation and multi-criteria fuzzy decision making

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tahvili, Sahar; Österberg, Jonas; Silvestrov, Sergei

    One of the most important goals in the operations of many corporations today is to maximize profit, and one important tool to that effect is the optimization of maintenance activities. Maintenance activities are, at the highest level, divided into two major areas: corrective maintenance (CM) and preventive maintenance (PM). When optimizing maintenance activities by means of a maintenance plan or policy, we seek to find the best activities to perform at each point in time, be they PM or CM. We explore the use of stochastic simulation, genetic algorithms and other tools for solving complex maintenance planning optimization problems in terms of a suggested framework model based on discrete event simulation.
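    A back-of-the-envelope version of the simulate-then-optimize idea follows. The Weibull wear-out lifetimes and the cost figures are invented, and a brute-force scan replaces the paper's genetic algorithms and fuzzy decision making; the point is only that each candidate PM interval is scored by stochastic simulation and the cheapest is kept.

```python
import random

def simulate_cost(pm_interval, horizon=1000.0, cm_cost=50.0, pm_cost=5.0,
                  scale=12.0, shape=3.0, runs=300, seed=0):
    """Average total cost of renewing a component either preventively every
    pm_interval time units or correctively when it fails first."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        t = cost = 0.0
        while t < horizon:
            life = rng.weibullvariate(scale, shape)  # wear-out failure time
            if life < pm_interval:
                cost += cm_cost          # component failed first: corrective
                t += life
            else:
                cost += pm_cost          # PM renewed it before failure
                t += pm_interval
        total += cost
    return total / runs

# scan candidate PM intervals and keep the cheapest policy
best_interval = min(range(1, 31), key=simulate_cost)
```

    With a 10:1 cost ratio between corrective and preventive actions and wear-out failures, a moderate PM interval beats both very frequent PM and run-to-failure, which is the trade-off a maintenance plan optimizer must resolve.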

  8. Simulative design and process optimization of the two-stage stretch-blow molding process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hopmann, Ch.; Rasche, S.; Windeck, C.

    2015-05-22

    The total production costs of PET bottles are significantly affected by the cost of raw material: approximately 70 % of the total costs are spent on the raw material. Therefore, the stretch-blow molding industry aims to reduce total production costs through optimized material efficiency. However, there is often a trade-off between optimized material efficiency and required product properties. Due to a multitude of complex boundary conditions, the design process for new stretch-blow molded products is still a challenging task and is often based on empirical knowledge. Application of current CAE tools supports the design process by reducing development time and costs. This paper describes an approach to determine optimized preform geometry and corresponding process parameters iteratively. The wall thickness distribution and the local stretch ratios of the blown bottle are calculated in a three-dimensional process simulation. Thereby, the wall thickness distribution is correlated with an objective function, and the preform geometry as well as the process parameters are varied by an optimization algorithm. Taking into account the correlation between material usage, process history and resulting product properties, integrative coupled simulation steps, e.g. structural analyses or barrier simulations, are performed. The approach is applied to a 0.5 liter PET bottle of Krones AG, Neutraubling, Germany. The investigations point out that the design process can be supported by applying this simulative optimization approach. In an optimization study the total bottle weight is reduced from 18.5 g to 15.5 g. The validation of the computed results is in progress.

  9. Simulative design and process optimization of the two-stage stretch-blow molding process

    NASA Astrophysics Data System (ADS)

    Hopmann, Ch.; Rasche, S.; Windeck, C.

    2015-05-01

    The total production costs of PET bottles are significantly affected by the cost of raw material: approximately 70 % of the total costs are spent on the raw material. Therefore, the stretch-blow molding industry aims to reduce total production costs through optimized material efficiency. However, there is often a trade-off between optimized material efficiency and required product properties. Due to a multitude of complex boundary conditions, the design process for new stretch-blow molded products is still a challenging task and is often based on empirical knowledge. Application of current CAE tools supports the design process by reducing development time and costs. This paper describes an approach to determine optimized preform geometry and corresponding process parameters iteratively. The wall thickness distribution and the local stretch ratios of the blown bottle are calculated in a three-dimensional process simulation. Thereby, the wall thickness distribution is correlated with an objective function, and the preform geometry as well as the process parameters are varied by an optimization algorithm. Taking into account the correlation between material usage, process history and resulting product properties, integrative coupled simulation steps, e.g. structural analyses or barrier simulations, are performed. The approach is applied to a 0.5 liter PET bottle of Krones AG, Neutraubling, Germany. The investigations point out that the design process can be supported by applying this simulative optimization approach. In an optimization study the total bottle weight is reduced from 18.5 g to 15.5 g. The validation of the computed results is in progress.

  10. Shape optimization of road tunnel cross-section by simulated annealing

    NASA Astrophysics Data System (ADS)

    Sobótka, Maciej; Pachnicz, Michał

    2016-06-01

    The paper concerns shape optimization of a tunnel excavation cross-section. The study incorporates the simulated annealing (SA) optimization procedure. The form of the cost function derives from the energetic optimality condition formulated in the authors' previous papers. The utilized algorithm takes advantage of the optimization procedure already published by the authors. Unlike other approaches presented in the literature, the one introduced in this paper takes into consideration the practical requirement of preserving a fixed clearance gauge. Itasca Flac software is utilized in the numerical examples. The optimal excavation shapes are determined for five different in situ stress ratios. This factor significantly affects the optimal topology of the excavation. The resulting shapes are elongated in the direction of the greater principal stress value. Moreover, the obtained optimal shapes have smooth contours circumscribing the gauge.
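The simulated annealing procedure this record relies on can be sketched generically: accept worse moves with probability exp(-delta/T) and cool the temperature geometrically. This is an illustrative minimal loop on a toy one-dimensional cost function, not the authors' Flac-coupled implementation; the cost function, neighbor move, temperature, and cooling rate are all assumptions:

```python
import math
import random

def simulated_annealing(cost, x0, neighbor, t0=1.0, cooling=0.95, steps=2000, seed=0):
    """Generic simulated-annealing loop: accept an uphill move with
    probability exp(-delta/T) and cool the temperature geometrically,
    tracking the best solution seen."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Toy stand-in for the excavation-shape cost: minimize a 1-D function
# with its optimum at x = 3.
best, fbest = simulated_annealing(
    cost=lambda x: (x - 3.0) ** 2 + 1.0,
    x0=10.0,
    neighbor=lambda x, rng: x + rng.uniform(-0.5, 0.5),
)
```

In the actual study the cost evaluation would be a full Flac run per candidate shape, which is why the cooling schedule and move set matter far more than in this toy.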

  11. Dimensional optimization of nanowire--complementary metal oxide--semiconductor inverter.

    PubMed

    Hashim, Yasir; Sidek, Othman

    2013-01-01

    This study is the first to demonstrate dimensional optimization of a nanowire complementary metal-oxide-semiconductor inverter. Noise margins and the inflection voltage of the transfer characteristics are used as limiting factors in this optimization. Results indicate that the optimization depends on both the dimension ratio and the digital voltage level (Vdd). Diameter optimization reveals that as Vdd increases, the optimized value of (Dp/Dn) decreases. Channel length optimization shows that as Vdd increases, the optimized value of Ln decreases and that of (Lp/Ln) increases. Dimension ratio optimization reveals that as Vdd increases, the optimized value of Kp/Kn decreases, and a silicon nanowire transistor with suitable dimensions (higher Dp and Ln with lower Lp and Dn) can be fabricated.
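The noise margins used here as limiting factors follow the standard textbook definitions taken from an inverter's voltage transfer characteristic: NM_L = VIL - VOL and NM_H = VOH - VIH. A minimal sketch with hypothetical transfer-characteristic points (not values from the paper):

```python
def noise_margins(vol, voh, vil, vih):
    """Static noise margins from the inverter transfer characteristic:
    NM_L = VIL - VOL (low-level margin), NM_H = VOH - VIH (high-level
    margin). Larger margins mean more robust logic levels."""
    nm_low = vil - vol
    nm_high = voh - vih
    return nm_low, nm_high

# Hypothetical transfer-characteristic voltages for Vdd = 1.0 V.
nm_l, nm_h = noise_margins(vol=0.05, voh=0.95, vil=0.40, vih=0.60)
# nm_l = 0.35 V, nm_h = 0.35 V for these made-up values
```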

  12. Coastal aquifer management under parameter uncertainty: Ensemble surrogate modeling based simulation-optimization

    NASA Astrophysics Data System (ADS)

    Janardhanan, S.; Datta, B.

    2011-12-01

    Surrogate models are widely used to develop computationally efficient simulation-optimization models to solve complex groundwater management problems. Artificial intelligence based models are most often used for this purpose where they are trained using predictor-predictand data obtained from a numerical simulation model. Most often this is implemented with the assumption that the parameters and boundary conditions used in the numerical simulation model are perfectly known. However, in most practical situations these values are uncertain. Under these circumstances the application of such approximation surrogates becomes limited. In our study we develop a surrogate model based coupled simulation optimization methodology for determining optimal pumping strategies for coastal aquifers considering parameter uncertainty. An ensemble surrogate modeling approach is used along with multiple realization optimization. The methodology is used to solve a multi-objective coastal aquifer management problem considering two conflicting objectives. Hydraulic conductivity and the aquifer recharge are considered as uncertain values. Three dimensional coupled flow and transport simulation model FEMWATER is used to simulate the aquifer responses for a number of scenarios corresponding to Latin hypercube samples of pumping and uncertain parameters to generate input-output patterns for training the surrogate models. Non-parametric bootstrap sampling of this original data set is used to generate multiple data sets which belong to different regions in the multi-dimensional decision and parameter space. These data sets are used to train and test multiple surrogate models based on genetic programming. The ensemble of surrogate models is then linked to a multi-objective genetic algorithm to solve the pumping optimization problem. 
    Two conflicting objectives, viz., maximizing total pumping from beneficial wells and minimizing total pumping from barrier wells for hydraulic control of saltwater intrusion, are considered. The salinity levels resulting at strategic locations due to this pumping are predicted using the ensemble surrogates and are constrained to be within pre-specified levels. Different realizations of the concentration values are obtained from the ensemble predictions corresponding to each candidate pumping solution. The reliability concept is incorporated as the percentage of the surrogate models that satisfy the imposed constraints. The methodology was applied to a realistic coastal aquifer system in the Burdekin delta area in Australia. It was found that all optimal solutions corresponding to a reliability level of 0.99 satisfy all the constraints, and that constraint violation increases as the reliability level decreases. Thus ensemble-surrogate-based simulation-optimization was found to be useful in deriving multi-objective optimal pumping strategies for coastal aquifers under parameter uncertainty.
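The reliability screen described above (the fraction of ensemble members whose predicted salinity satisfies the constraint) can be sketched in a few lines. The candidate strategies, salinity limit, and reliability level below are hypothetical, not values from the study:

```python
def reliability(ensemble_predictions, limit):
    """Fraction of ensemble surrogate models whose predicted salinity
    satisfies the constraint (<= limit) -- the reliability measure
    described in the abstract."""
    ok = sum(1 for p in ensemble_predictions if p <= limit)
    return ok / len(ensemble_predictions)

def feasible(candidates, limit, level):
    """Keep pumping candidates whose ensemble reliability meets `level`."""
    return [name for name, preds in candidates if reliability(preds, limit) >= level]

# Hypothetical salinity predictions (mg/L) from a 5-member ensemble
# for two candidate pumping strategies.
cands = [("A", [480, 495, 510, 470, 460]), ("B", [520, 610, 505, 590, 640])]
print(feasible(cands, limit=500, level=0.8))  # ['A']
```

In the study this filter sits inside the multi-objective genetic algorithm, so only candidates meeting the chosen reliability level survive to the Pareto front.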

  13. Interplanetary program to optimize simulated trajectories (IPOST). Volume 4: Sample cases

    NASA Technical Reports Server (NTRS)

    Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.

    1992-01-01

    The Interplanetary Program to Optimize Simulated Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensors/actuators, payload, and other dynamic and geometric environments. IPOST models three-degree-of-freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, multiconic, Onestep, or conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the standard NPSOL algorithm. The IPOST structure allows sub-problems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.

  14. Hybrid-optimization strategy for the communication of large-scale Kinetic Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Wu, Baodong; Li, Shigang; Zhang, Yunquan; Nie, Ningming

    2017-02-01

    The parallel Kinetic Monte Carlo (KMC) algorithm based on domain decomposition has been widely used in large-scale physical simulations. However, the communication overhead of the parallel KMC algorithm is critical and severely degrades the overall performance and scalability. In this paper, we present a hybrid optimization strategy to reduce the communication overhead of parallel KMC simulations. We first propose a communication aggregation algorithm to reduce the total number of messages and eliminate communication redundancy. Then, we utilize shared memory to reduce the memory copy overhead of intra-node communication. Finally, we optimize the communication scheduling using neighborhood collective operations. We demonstrate the scalability and high performance of our hybrid optimization strategy by both theoretical and experimental analysis. Results show that the optimized KMC algorithm exhibits better performance and scalability than the well-known open-source library SPPARKS. On a 32-node Xeon E5-2680 cluster (640 cores in total), the optimized algorithm reduces the communication time by 24.8% compared with SPPARKS.
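The communication-aggregation idea can be illustrated abstractly: instead of sending one message per boundary event, batch all events bound for the same neighbour rank into one buffer and drop exact duplicates (the redundancy elimination). The event tuples and rank ids below are hypothetical, and a real implementation would pack binary buffers for MPI rather than Python lists:

```python
from collections import defaultdict

def aggregate_messages(events):
    """Batch per-event messages into one buffer per destination rank,
    dropping exact duplicate payloads (communication redundancy)."""
    buffers = defaultdict(list)
    for dest_rank, payload in events:
        if payload not in buffers[dest_rank]:  # eliminate redundancy
            buffers[dest_rank].append(payload)
    return dict(buffers)

# Hypothetical boundary events: (destination rank, updated site id).
events = [(1, "site42"), (2, "site7"), (1, "site42"), (1, "site99")]
print(aggregate_messages(events))  # {1: ['site42', 'site99'], 2: ['site7']}
```

The point of the batching is that message count, not byte count, dominates latency at this scale; four sends collapse to two here.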

  15. Modeling Long-Term Corn Yield Response to Nitrogen Rate and Crop Rotation

    PubMed Central

    Puntel, Laila A.; Sawyer, John E.; Barker, Daniel W.; Dietzel, Ranae; Poffenbarger, Hanna; Castellano, Michael J.; Moore, Kenneth J.; Thorburn, Peter; Archontoulis, Sotirios V.

    2016-01-01

    Improved prediction of optimal N fertilizer rates for corn (Zea mays L.) can reduce N losses and increase profits. We tested the ability of the Agricultural Production Systems sIMulator (APSIM) to simulate corn and soybean (Glycine max L.) yields and the economic optimum N rate (EONR), using a 16-year field-experiment dataset from central Iowa, USA that included two crop sequences (continuous corn and soybean-corn) and five N fertilizer rates (0, 67, 134, 201, and 268 kg N ha-1) applied to corn. Our objectives were to: (a) quantify model prediction accuracy before and after calibration, and report calibration steps; (b) compare crop model-based techniques for estimating the optimal N rate for corn; and (c) use the calibrated model to explain the factors causing year-to-year variability in yield and optimal N. Results indicated that the model simulated long-term crop yield response to N well (relative root mean square error, RRMSE, of 19.6% before and 12.3% after calibration), which provided strong evidence that the important soil and crop processes were accounted for in the model. The prediction of EONR was more complex and had greater uncertainty than the prediction of crop yield (RRMSE of 44.5% before and 36.6% after calibration). For long-term site-mean EONR predictions, both the calibrated and uncalibrated versions can be used, as the 16-year mean differences in EONRs were within the historical N rate error range (40-50 kg N ha-1). However, for accurate year-by-year simulation of EONR the calibrated version should be used. Model analysis revealed that higher EONR values in years with above-normal spring precipitation were caused by an exponential increase in N loss (denitrification and leaching) with precipitation. We concluded that long-term experimental data were valuable in testing and refining APSIM predictions. 
    The model can be used as a tool to support N management guidelines in the US Midwest, and we identified five avenues by which the model can add value toward agronomic, economic, and environmental sustainability. PMID:27891133

  16. Modeling Long-Term Corn Yield Response to Nitrogen Rate and Crop Rotation.

    PubMed

    Puntel, Laila A; Sawyer, John E; Barker, Daniel W; Dietzel, Ranae; Poffenbarger, Hanna; Castellano, Michael J; Moore, Kenneth J; Thorburn, Peter; Archontoulis, Sotirios V

    2016-01-01

    Improved prediction of optimal N fertilizer rates for corn (Zea mays L.) can reduce N losses and increase profits. We tested the ability of the Agricultural Production Systems sIMulator (APSIM) to simulate corn and soybean (Glycine max L.) yields and the economic optimum N rate (EONR), using a 16-year field-experiment dataset from central Iowa, USA that included two crop sequences (continuous corn and soybean-corn) and five N fertilizer rates (0, 67, 134, 201, and 268 kg N ha-1) applied to corn. Our objectives were to: (a) quantify model prediction accuracy before and after calibration, and report calibration steps; (b) compare crop model-based techniques for estimating the optimal N rate for corn; and (c) use the calibrated model to explain the factors causing year-to-year variability in yield and optimal N. Results indicated that the model simulated long-term crop yield response to N well (relative root mean square error, RRMSE, of 19.6% before and 12.3% after calibration), which provided strong evidence that the important soil and crop processes were accounted for in the model. The prediction of EONR was more complex and had greater uncertainty than the prediction of crop yield (RRMSE of 44.5% before and 36.6% after calibration). For long-term site-mean EONR predictions, both the calibrated and uncalibrated versions can be used, as the 16-year mean differences in EONRs were within the historical N rate error range (40-50 kg N ha-1). However, for accurate year-by-year simulation of EONR the calibrated version should be used. Model analysis revealed that higher EONR values in years with above-normal spring precipitation were caused by an exponential increase in N loss (denitrification and leaching) with precipitation. We concluded that long-term experimental data were valuable in testing and refining APSIM predictions. 
    The model can be used as a tool to support N management guidelines in the US Midwest, and we identified five avenues by which the model can add value toward agronomic, economic, and environmental sustainability.
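For a quadratic yield-response curve, the EONR evaluated in this study can be computed in closed form: set the marginal yield gain dY/dN equal to the N-to-grain price ratio and solve for N. The coefficients and prices below are hypothetical, chosen only to land inside the 0-268 kg N ha-1 range tested in the experiment:

```python
def eonr_quadratic(a, b, c, price_n, price_grain, n_max=268.0):
    """Economic optimum N rate (EONR) for a quadratic yield response
    Y(N) = a + b*N + c*N**2 with c < 0: the optimum is where the
    marginal yield gain dY/dN = b + 2*c*N equals the N:grain price
    ratio, so we solve b + 2*c*N = price_n/price_grain for N."""
    ratio = price_n / price_grain
    n_opt = (ratio - b) / (2.0 * c)
    return min(max(n_opt, 0.0), n_max)  # clamp to the tested N range

# Hypothetical response coefficients (kg/ha basis) and prices ($/kg).
n_opt = eonr_quadratic(a=6000.0, b=30.0, c=-0.08, price_n=0.88, price_grain=0.158)
# about 152.7 kg N/ha with these made-up numbers
```

The study's point is that the coefficients themselves vary year to year (e.g. with spring precipitation driven N loss), which is why year-by-year EONR is much harder to predict than the long-term mean.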

  17. Optimization and Characterization of a Novel Self Powered Solid State Neutron Detector

    NASA Astrophysics Data System (ADS)

    Clinton, Justin

    There is a strong interest in detecting both the diversion of special nuclear material (SNM) from legitimate, peaceful purposes and the transport of illicit SNM across domestic and international borders and ports. A simple solid-state detector employs a planar solar-cell type p-n junction and a thin conversion layer that converts incident neutrons into detectable charged particles, such as protons, alpha particles, and heavier ions. Although simple planar devices can act as highly portable, low-cost detectors, they have historically been limited to relatively low detection efficiencies: ~10% and ~0.2% for thermal and fast detectors, respectively. To increase intrinsic detection efficiency, the incorporation of 3D microstructures into p-i-n silicon devices was proposed. In this research, a combination of existing and new types of detector microstructures was investigated; Monte Carlo models, based on analytical calculations, were constructed and characterized using the GEANT4 simulation toolkit. The simulation output revealed that an array of etched hexagonal holes arranged in a honeycomb pattern and filled with either enriched (99% 10B) boron or parylene resulted in the highest intrinsic detection efficiencies of 48% and 0.88% for thermal and fast neutrons, respectively. The optimal parameters corresponding to each model were used as the basis for the fabrication of several prototype detectors. A calibrated 252Cf spontaneous fission source was used to generate fast neutrons, while thermal neutrons were created by placing the 252Cf in an HDPE housing designed and optimized using the MCNP simulation software. Upon construction, thermal neutron calibration was performed via activation analysis of gold foils and measurements from a 6Li loaded glass scintillator. Experimental testing of the prototype detectors resulted in maximum intrinsic efficiencies of 4.5% and 0.12% for the thermal and fast devices, respectively. 
The prototype thermal device was filled with natural (19% 10B) boron; scaling the response to 99% 10B enriched boron resulted in an intrinsic efficiency of 22.5%, one of the highest results in the literature. A comparison of simulated and experimental detector responses demonstrated a high degree of correlation, validating the conceptual models.

  18. Molecular and electronic structure of the peptide subunit of Geobacter sulfurreducens conductive pili from first principles.

    PubMed

    Feliciano, Gustavo T; da Silva, Antonio J R; Reguera, Gemma; Artacho, Emilio

    2012-08-02

    The respiration of metal oxides by the bacterium Geobacter sulfurreducens requires the assembly of a small peptide (the GS pilin) into conductive filaments termed pili. We gained insights into the contribution of the GS pilin to the pilus conductivity by developing a homology model and performing molecular dynamics simulations of the pilin peptide in vacuo and in solution. The results were consistent with a predominantly helical peptide containing the conserved α-helix region required for pilin assembly but carrying a short carboxy-terminal random-coiled segment rather than the large globular head of other bacterial pilins. The electronic structure of the pilin was also explored from first principles and revealed a biphasic charge distribution along the pilin and a low electronic HOMO-LUMO gap, even in a wet environment. The low electronic band gap was the result of strong electrostatic fields generated by the alignment of the peptide bond dipoles in the pilin's α-helix and by charges from ions in solution and amino acids in the protein. The electronic structure also revealed some level of orbital delocalization in regions of the pilin containing aromatic amino acids and in spatial regions of high resonance where the HOMO and LUMO states are located, which could provide an optimal environment for the hopping of electrons under thermal fluctuations. Hence, the structural and electronic features of the pilin revealed in these studies support the notion of a pilin peptide environment optimized for electron conduction.

  19. Comparison of Low-Thrust Control Laws for Application in Planetocentric Space

    NASA Technical Reports Server (NTRS)

    Falck, Robert D.; Sjauw, Waldy K.; Smith, David A.

    2014-01-01

    Recent interest at NASA in the application of solar electric propulsion for the transfer of significant payloads in cislunar space has led to the development of high-fidelity simulations of such missions. With such transfers involving flight times on the order of months, simulation time can be significant. In the past, the examination of such missions typically began with the use of lower-fidelity trajectory optimization tools such as SEPSPOT to develop and tune guidance laws which delivered optimal or near-optimal trajectories, where optimal is generally defined as minimizing propellant expenditure or time of flight. The transfer of these solutions to a high-fidelity simulation is typically an iterative process whereby the initial solution may nearly, but not precisely, meet mission objectives. Further tuning of the guidance algorithm is typically necessary when accounting for high-fidelity perturbations such as those due to more detailed gravity models, secondary-body effects, solar radiation pressure, etc. While trajectory optimization is a useful method for determining optimal performance metrics, algorithms which deliver nearly optimal performance with minimal tuning are an attractive alternative.

  20. Heat of adsorption, adsorption stress, and optimal storage of methane in slit and cylindrical carbon pores predicted by classical density functional theory.

    PubMed

    Hlushak, Stepan

    2018-01-03

    The temperature, pressure and pore-size dependences of the heat of adsorption, adsorption stress, and adsorption capacity of methane in simple models of slit and cylindrical carbon pores are studied using classical density functional theory (CDFT) and grand-canonical Monte Carlo (MC) simulation. The studied properties depend nontrivially on the bulk pressure and the size of the pores. The heat of adsorption increases with loading, but only for sufficiently narrow pores. While this increase is advantageous for gas storage applications, it is less significant for cylindrical pores than for slits. The adsorption stress and the average adsorbed fluid density show oscillatory dependence on the pore size and increase with bulk pressure. Slit pores exhibit larger-amplitude oscillations of the normal adsorption stress with increasing pore size than cylindrical pores. However, the increase in the magnitude of the adsorption stress with increasing bulk pressure is more significant for cylindrical than for slit pores. The adsorption stress appears to be negative for a wide range of pore sizes and external conditions. The pore-size dependence of the average delivered density of the gas is analyzed and the optimal pore sizes for storage applications are estimated. The optimal width of slit pores appears to be almost independent of storage pressure at room temperature and pressures above 10 bar. Similarly, the optimal radius of cylindrical pores does not exhibit much dependence on the storage pressure above 15 bar. Both the optimal width of slit pores and the optimal radius of cylindrical pores increase as the temperature decreases. A comparison of the results of the CDFT theory and MC simulations reveals subtle but important differences in the underlying fluid models employed by the two approaches. 
The differences in the high-pressure behaviour between the hard-sphere 2-Yukawa and Lennard-Jones models of methane, employed by the CDFT and MC approaches, respectively, result in an overestimation of the heat of adsorption by the CDFT theory at higher loadings. However, both adsorption stress and adsorption capacity appear to be much less sensitive to the differences between the models and demonstrate excellent agreement between the theory and the computer experiment.

  1. A simulation-optimization-based decision support tool for mitigating traffic congestion.

    DOT National Transportation Integrated Search

    2009-12-01

    "Traffic congestion has grown considerably in the United States over the past twenty years. In this paper, we develop : a robust decision support tool based on simulation optimization to evaluate and recommend congestion-mitigation : strategies to tr...

  2. Integrated optics applied to astronomical aperture synthesis III: simulation of components optimized for astronomical interferometry

    NASA Astrophysics Data System (ADS)

    Nabias, Laurent; Schanen, Isabelle; Berger, Jean-Philippe; Kern, Pierre; Malbet, Fabien; Benech, Pierre

    2018-04-01

    This paper, "Integrated optics applied to astronomical aperture synthesis III: simulation of components optimized for astronomical interferometry," was presented as part of International Conference on Space Optics—ICSO 1997, held in Toulouse, France.

  3. Solving iTOUGH2 simulation and optimization problems using the PEST protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finsterle, S.A.; Zhang, Y.

    2011-02-01

    The PEST protocol has been implemented into the iTOUGH2 code, allowing the user to link any simulation program (with ASCII-based inputs and outputs) to iTOUGH2's sensitivity analysis, inverse modeling, and uncertainty quantification capabilities. These application models can be pre- or post-processors of the TOUGH2 non-isothermal multiphase flow and transport simulator, or programs that are unrelated to the TOUGH suite of codes. PEST-style template and instruction files are used, respectively, to pass input parameters updated by the iTOUGH2 optimization routines to the model, and to retrieve the model-calculated values that correspond to observable variables. We summarize the iTOUGH2 capabilities and demonstrate the flexibility added by the PEST protocol for the solution of a variety of simulation-optimization problems. In particular, the combination of loosely coupled and tightly integrated simulation and optimization routines provides both the flexibility and control needed to solve challenging inversion problems for the analysis of multiphase subsurface flow and transport systems.

  4. Towards inverse modeling of turbidity currents: The inverse lock-exchange problem

    NASA Astrophysics Data System (ADS)

    Lesshafft, Lutz; Meiburg, Eckart; Kneller, Ben; Marsden, Alison

    2011-04-01

    A new approach is introduced for turbidite modeling, leveraging the potential of computational fluid dynamics methods to simulate the flow processes that led to turbidite formation. The practical use of numerical flow simulation for turbidite modeling has so far been hindered by the need to specify parameters and initial flow conditions that are a priori unknown. The present study proposes a method to determine optimal simulation parameters via an automated optimization process. An iterative procedure matches deposit predictions from successive flow simulations against available localized reference data, as may in practice be obtained from well logs, and aims at convergence towards the best-fit scenario. The final result is a prediction of the entire deposit thickness and local grain-size distribution. The optimization strategy is based on a derivative-free, surrogate-based technique. Direct numerical simulations are performed to compute the flow dynamics. A proof of concept is successfully conducted for the simple test case of a two-dimensional lock-exchange turbidity current. The optimization approach is demonstrated to accurately retrieve the initial conditions used in a reference calculation.
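The derivative-free, surrogate-based loop can be sketched in one dimension: fit a cheap surrogate (here a parabola through the three best samples) to past evaluations, jump to the surrogate's minimiser, evaluate the true objective there, and repeat. This is a minimal stand-in for the actual technique used in the study, with a toy mismatch objective in place of a flow simulation:

```python
def parabola_vertex(p1, p2, p3):
    """Vertex of the parabola through three (x, y) samples -- the
    surrogate's predicted minimiser (Lagrange form)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d1 = (x1 - x2) * (x1 - x3)
    d2 = (x2 - x1) * (x2 - x3)
    d3 = (x3 - x1) * (x3 - x2)
    a = y1 / d1 + y2 / d2 + y3 / d3
    b = -y1 * (x2 + x3) / d1 - y2 * (x1 + x3) / d2 - y3 * (x1 + x2) / d3
    return -b / (2 * a)

def surrogate_minimize(f, x_samples, iters=10):
    """Derivative-free surrogate loop: fit a parabola through the three
    best samples, evaluate f at its vertex, and repeat."""
    pts = sorted(((x, f(x)) for x in x_samples), key=lambda p: p[1])
    for _ in range(iters):
        x_new = parabola_vertex(*pts[:3])
        if any(abs(x_new - x) < 1e-12 for x, _ in pts):
            break  # surrogate converged on an already-sampled point
        pts.append((x_new, f(x_new)))
        pts.sort(key=lambda p: p[1])
    return pts[0][0]

# Toy "deposit mismatch" objective with its minimum at the reference
# parameter value 2.5; each f(x) stands in for one expensive flow run.
best_x = surrogate_minimize(lambda x: (x - 2.5) ** 2 + 0.1, [0.0, 1.0, 4.0])
```

The appeal of the surrogate loop is exactly what the abstract notes: each true evaluation is a full direct numerical simulation, so the optimizer must be frugal with them.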

  5. Optimization of Simplex Atomizer Inlet Port Configuration through Computational Fluid Dynamics and Experimental Study for Aero-Gas Turbine Applications

    NASA Astrophysics Data System (ADS)

    Marudhappan, Raja; Chandrasekhar, Udayagiri; Hemachandra Reddy, Koni

    2017-10-01

    The design of a plain-orifice simplex atomizer for use in the annular combustion system of an 1100 kW turboshaft engine is optimized. The discrete flow field of jet fuel inside the swirl chamber of the atomizer and up to 1.0 mm downstream of the atomizer exit is simulated using commercial Computational Fluid Dynamics (CFD) software. The Euler-Euler multiphase model is used to solve two sets of momentum equations for the liquid and gaseous phases, and the volume fraction of each phase is tracked throughout the computational domain. The atomizer design is optimized after performing several 2D axisymmetric analyses with swirl, and the optimized inlet port design parameters are used for 3D simulation. The Volume Of Fluid (VOF) multiphase model is used in the simulation. The orifice exit diameter is 0.6 mm. The atomizer is fabricated with the optimized geometric parameters. The performance of the atomizer is tested in the laboratory. The experimental observations are compared with the results obtained from 2D and 3D CFD simulations. The simulated velocity components, pressure field, streamlines and air core dynamics along the atomizer axis are compared to previous research works and found satisfactory. The work has led to a novel approach in the design of pressure-swirl atomizers.

  6. A Bayesian model averaging method for the derivation of reservoir operating rules

    NASA Astrophysics Data System (ADS)

    Zhang, Jingwen; Liu, Pan; Wang, Hao; Lei, Xiaohui; Zhou, Yanlai

    2015-09-01

    Because the intrinsic dynamics among optimal decision making, inflow processes and reservoir characteristics are complex, the functional forms of reservoir operating rules are usually determined subjectively. As a result, the uncertainty involved in selecting the form and/or model of reservoir operating rules must be analyzed and evaluated. In this study, we analyze the uncertainty of reservoir operating rules using the Bayesian model averaging (BMA) model. Three popular operating rules, namely piecewise linear regression, surface fitting and a least-squares support vector machine, are established based on the optimal deterministic reservoir operation. These individual models provide three-member decisions for the BMA combination, enabling the 90% release interval to be estimated by Markov Chain Monte Carlo simulation. A case study of China's Baise reservoir shows that: (1) the optimal deterministic reservoir operation, superior to any reservoir operating rules, is used as the sample set from which to derive the rules; (2) the least-squares support vector machine model is more effective than both piecewise linear regression and surface fitting; (3) BMA outperforms any individual model of operating rules based on the optimal trajectories. It is revealed that the proposed model can reduce the uncertainty of operating rules, which is of great potential benefit in evaluating the confidence interval of decisions.
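The BMA point forecast itself is just a posterior-weighted average of the member models' decisions. The member releases and weights below are hypothetical (in the study the weights would come from the posterior model probabilities, and MCMC would supply the 90% interval):

```python
def bma_combine(predictions, weights):
    """Bayesian-model-averaging point forecast: the weighted mean of
    the member models' decisions, with weights normalised to sum to 1."""
    s = sum(weights)
    return sum(w / s * p for w, p in zip(weights, predictions))

# Hypothetical releases (m^3/s) from the three rule forms in the
# abstract (piecewise linear, surface fitting, LS-SVM) and assumed
# posterior model weights.
members = [120.0, 132.0, 127.0]
weights = [0.2, 0.3, 0.5]
release = bma_combine(members, weights)  # 0.2*120 + 0.3*132 + 0.5*127 = 127.1
```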

  7. Lateral Penumbra Modelling Based Leaf End Shape Optimization for Multileaf Collimator in Radiotherapy

    PubMed Central

    Zhou, Dong; Zhang, Hui; Ye, Peiqing

    2016-01-01

    The lateral penumbra of a multileaf collimator plays an important role in radiotherapy treatment planning. Growing evidence has revealed that, for a single-focused multileaf collimator, lateral penumbra width is leaf-position dependent and largely attributable to the leaf end shape. In our study, an analytical method for modelling leaf-end-induced lateral penumbra is formulated using Tangent Secant Theory. Compared with Monte Carlo simulation and a ray tracing algorithm, our model serves the purpose of cost-efficient penumbra evaluation well. Leaf ends represented in parametric forms of circular arc, elliptical arc, Bézier curve, and B-spline are implemented. With a biobjective function of penumbra mean and variance introduced, a genetic algorithm is used to approximate the Pareto frontier. Results show that for the circular-arc leaf end the objective function is convex and convergence to the optimal solution is guaranteed using a gradient-based iterative method. It is found that the optimal leaf end in the shape of a Bézier curve achieves the minimal standard deviation, while using a B-spline the minimum of the penumbra mean is obtained. For treatment modalities in clinical application, the optimized leaf ends are in close agreement with actual shapes. Taken together, the method that we propose can provide insight into the leaf end shape design of multileaf collimators. PMID:27110274
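The Pareto frontier that the genetic algorithm approximates consists of the non-dominated candidates under the two objectives (penumbra mean and variance, both minimised). A minimal dominance filter, with hypothetical candidate values standing in for evaluated leaf-end shapes:

```python
def pareto_front(points):
    """Non-dominated filter for a biobjective minimisation problem
    (here: penumbra mean and penumbra variance). A point dominates
    another if it is no worse in both objectives and strictly better
    in at least one."""
    front = []
    for p in points:
        dominated = any(
            q != p and q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (mean, variance) pairs for candidate leaf-end shapes.
cands = [(3.0, 0.40), (2.8, 0.55), (3.2, 0.35), (3.1, 0.60)]
print(pareto_front(cands))  # [(3.0, 0.4), (2.8, 0.55), (3.2, 0.35)]
```

A genetic algorithm such as the one in the study would apply this filter each generation and breed new candidates from the surviving front.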

  8. Design and optimization analysis of dual material gate on DG-IMOS

    NASA Astrophysics Data System (ADS)

    Singh, Sarabdeep; Raman, Ashish; Kumar, Naveen

    2017-12-01

    An impact ionization MOSFET (IMOS) was developed to overcome the 60 mV/decade subthreshold slope (SS) limit of the conventional MOSFET at room temperature. In this work, first, the device performance of the p-type double gate impact ionization MOSFET (DG-IMOS) is optimized by adjusting the device design parameters: the ratio of gate to intrinsic length, the gate dielectric thickness and the gate work function. Secondly, the DMG (dual material gate) DG-IMOS is proposed and investigated, and further optimized to obtain the best possible performance parameters. Simulation results reveal that the DMG DG-IMOS, compared with the DG-IMOS, shows better I_ON, I_ON/I_OFF ratio, and RF parameters. Results show that by properly tuning the lengths of the two gate materials at a ratio of 1.5, the DMG DG-IMOS achieves optimized performance, including an I_ON/I_OFF ratio of 2.87 × 10^9 with I_ON of 11.87 × 10^-4 A/μm and transconductance of 1.06 × 10^-3 S/μm. Analysis shows that the drain-side material length should be greater than the source-side material length to attain higher transconductance in the DMG DG-IMOS.

  9. Optimization of a Nucleic Acids united-RESidue 2-Point model (NARES-2P) with a maximum-likelihood approach

    NASA Astrophysics Data System (ADS)

    He, Yi; Liwo, Adam; Scheraga, Harold A.

    2015-12-01

    Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much as possible information of the original biomolecular system in all-atom representation but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, which is based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field.

  10. Optimizing Photosynthetic and Respiratory Parameters Based on the Seasonal Variation Pattern in Regional Net Ecosystem Productivity Obtained from Atmospheric Inversion

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Chen, J.; Zheng, X.; Jiang, F.; Zhang, S.; Ju, W.; Yuan, W.; Mo, G.

    2014-12-01

    In this study, we explore the feasibility of optimizing ecosystem photosynthetic and respiratory parameters from the seasonal variation pattern of the net carbon flux. An optimization scheme is proposed to estimate two key parameters (Vcmax and Q10) by exploiting the seasonal variation in the net ecosystem carbon flux retrieved by an atmospheric inversion system. This scheme is implemented to estimate Vcmax and Q10 of the Boreal Ecosystem Productivity Simulator (BEPS) to improve its NEP simulation in the Boreal North America (BNA) region. Simultaneously, in-situ NEE observations at six eddy covariance sites are used to evaluate the NEE simulations. The results show that the performance of the optimized BEPS is superior to that of the BEPS with the default parameter values. These results demonstrate the feasibility of using atmospheric CO2 data to optimize ecosystem parameters through atmospheric inversion or data assimilation techniques.

  11. Optimization design and analysis of the pavement planer scraper structure

    NASA Astrophysics Data System (ADS)

    Fang, Yuanbin; Sha, Hongwei; Yuan, Dajun; Xie, Xiaobing; Yang, Shibo

    2018-03-01

    A finite element model of the road milling machine scraper is established in LS-DYNA and a dynamic simulation is carried out. By optimizing the scraper structure and the scraper angle, the optimal scraper structure for the milling machine is obtained, and the simulation results are verified. The results show that the improved scraper structure places the cemented carbide at the front of the scraper substrate; compared with the working resistance before the improvement, the resistance curve is smoother and its peak value is smaller. The cutting front angle and cutting back angle are optimized to 6 degrees and 9 degrees, respectively, at which the resultant of the working resistance and the impact force is least. This confirms the accuracy of the simulation results and provides guidance for further optimization work.

  12. Optimization analysis of thermal management system for electric vehicle battery pack

    NASA Astrophysics Data System (ADS)

    Gong, Huiqi; Zheng, Minxin; Jin, Peng; Feng, Dong

    2018-04-01

    Temperature rise in an electric vehicle battery pack can affect the power battery system's cycle life, charge acceptance, power, energy, safety and reliability. Computational Fluid Dynamics (CFD) simulations and experiments on the charging and discharging process of the battery pack were carried out for its thermal management system under continuous charging. The simulation results and experimental data were used to verify the validity of the CFD model. In view of the large temperature difference across the battery module in a high-temperature environment, three optimizations of the existing thermal management system were put forward: adjusting the installation position of the fan, optimizing the arrangement of the battery pack, and reducing the fan's opening temperature threshold. The feasibility of these optimizations is demonstrated by simulation and experiment on the optimized battery pack thermal management system.

  13. A new optimal sliding mode controller design using scalar sign function.

    PubMed

    Singla, Mithun; Shieh, Leang-San; Song, Gangbing; Xie, Linbo; Zhang, Yongpeng

    2014-03-01

    This paper presents a new optimal sliding mode controller using the scalar sign function method. A smooth, continuous-time scalar sign function is used to replace the discontinuous switching function in the design of a sliding mode controller. The proposed sliding mode controller is designed using an optimal Linear Quadratic Regulator (LQR) approach. The sliding surface of the system is designed using stable eigenvectors and the scalar sign function. Controller simulations are compared with another existing optimal sliding mode controller. To test the effectiveness of the proposed controller, the controller is implemented on an aluminum beam with piezoceramic sensor and actuator for vibration control. This paper includes the control design and stability analysis of the new optimal sliding mode controller, followed by simulation and experimental results. The simulation and experimental results show that the proposed approach is very effective. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  14. Simulation Research on Vehicle Active Suspension Controller Based on G1 Method

    NASA Astrophysics Data System (ADS)

    Li, Gen; Li, Hang; Zhang, Shuaiyang; Luo, Qiuhui

    2017-09-01

    Based on the order relation analysis method (G1 method), an optimal linear controller for a vehicle active suspension is designed. The active and passive suspension system of a single-wheel (quarter-car) vehicle model is modeled and the system input signal model is determined. Next, the state-space equation of motion is established from dynamics, and the optimal linear controller is designed using optimal control theory. The weighting coefficients of the performance index for the suspension are determined by the G1 method. Finally, the model is simulated in Simulink. The simulation results show that, with the optimal weights determined by the G1 method under the given road conditions, the vehicle body acceleration, suspension stroke and tire displacement are all improved, enhancing the comprehensive performance of the vehicle while keeping the active control within requirements.

  15. WFIRST: Exoplanet Target Selection and Scheduling with Greedy Optimization

    NASA Astrophysics Data System (ADS)

    Keithly, Dean; Garrett, Daniel; Delacroix, Christian; Savransky, Dmitry

    2018-01-01

    We present target selection and scheduling algorithms for missions with direct imaging of exoplanets, and the Wide Field Infrared Survey Telescope (WFIRST) in particular, which will be equipped with a coronagraphic instrument (CGI). Optimal scheduling of CGI targets can maximize the expected value of directly imaged exoplanets (completeness). Using target completeness as a reward metric and integration time plus overhead time as a cost metric, we can maximize the sum completeness for a mission of fixed duration. We optimize over these metrics to create a list of target stars using a greedy optimization algorithm based on altruistic yield optimization (AYO) under ideal conditions. We simulate full missions using EXOSIMS by observing targets in this list for their predetermined integration times. In this poster, we report the theoretical maximum sum completeness, the mean number of detected exoplanets from Monte Carlo simulations, and the ideal expected value of the simulated missions.
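The reward/cost trade the abstract describes can be sketched as a simple greedy selection: rank targets by completeness per unit of integration-plus-overhead time and fill a fixed mission time budget. This is only an illustrative sketch, not EXOSIMS or the AYO algorithm; the target names and numbers are hypothetical:

```python
def greedy_schedule(targets, budget_days):
    """Greedily pick targets by completeness-per-day until the mission
    time budget (integration + overhead) is exhausted."""
    # rank by reward/cost ratio, best first
    ranked = sorted(targets,
                    key=lambda t: t["completeness"] / (t["t_int"] + t["overhead"]),
                    reverse=True)
    schedule, used = [], 0.0
    for t in ranked:
        cost = t["t_int"] + t["overhead"]
        if used + cost <= budget_days:   # skip targets that overrun the budget
            schedule.append(t["name"])
            used += cost
    return schedule, used

# hypothetical target list: single-visit completeness vs. time cost in days
targets = [
    {"name": "HIP 1", "completeness": 0.08, "t_int": 5.0, "overhead": 0.5},
    {"name": "HIP 2", "completeness": 0.05, "t_int": 1.0, "overhead": 0.5},
    {"name": "HIP 3", "completeness": 0.02, "t_int": 0.5, "overhead": 0.5},
]
picked, spent = greedy_schedule(targets, budget_days=3.0)
# two cheap, efficient targets are chosen over the single expensive one
```

The greedy ratio rule is the standard heuristic for this knapsack-style problem; the full scheduler also accounts for keepout zones and time-varying completeness, which this sketch omits.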

  16. Is optimal paddle force applied during paediatric external defibrillation?

    PubMed

    Bennetts, Sarah H; Deakin, Charles D; Petley, Graham W; Clewlow, Frank

    2004-01-01

    Optimal paddle force minimises transthoracic impedance, a factor associated with increased defibrillation success. The optimal force for defibrillation of children ≤10 kg using paediatric paddles has previously been shown to be 2.9 kgf, and for children >10 kg using adult paddles, 5.1 kgf. We compared the defibrillation paddle force applied during simulated paediatric defibrillation with these optimal values. 72 medical and nursing staff who would be expected to perform paediatric defibrillation were recruited from a university teaching hospital. Participants, blinded to the nature of the study, were asked to simulate defibrillation of an infant manikin (9 months of age) and a child manikin (6 years of age) using paediatric or adult paddles, respectively, according to guidelines. Paddle force (kgf) was measured at the time of simulated shock and compared with the known optimal values. The median paddle force applied to the infant manikin was 2.8 kgf (max 9.6, min 0.6), with only 47% of operators attaining optimal force. The median paddle force applied to the child manikin was 3.8 kgf (max 10.2, min 1.0), with only 24% of operators attaining optimal force. Defibrillation paddle force applied during paediatric defibrillation often falls below optimal values.

  17. Implicit methods for efficient musculoskeletal simulation and optimal control

    PubMed Central

    van den Bogert, Antonie J.; Blana, Dimitra; Heinrich, Dieter

    2011-01-01

    The ordinary differential equations for musculoskeletal dynamics are often numerically stiff and highly nonlinear. Consequently, simulations require small time steps, and optimal control problems are slow to solve and have poor convergence. In this paper, we present an implicit formulation of musculoskeletal dynamics, which leads to new numerical methods for simulation and optimal control, with the expectation that we can mitigate some of these problems. A first-order Rosenbrock method was developed for solving forward dynamic problems using the implicit formulation. It was used to perform real-time dynamic simulation of a complex shoulder-arm system with extreme dynamic stiffness. Simulations had an RMS error of only 0.11 degrees in joint angles when running at real-time speed. For optimal control of musculoskeletal systems, a direct collocation method was developed for implicitly formulated models. The method was applied to predict gait with a prosthetic foot and ankle. Solutions were obtained in well under one hour of computation time and demonstrated how patients may adapt their gait to compensate for limitations of a specific prosthetic limb design. The optimal control method was also applied to a state estimation problem in sports biomechanics, where forces during skiing were estimated from noisy and incomplete kinematic data. Using a full musculoskeletal dynamics model for state estimation had the additional advantage that forward dynamic simulations could be done with the same implicitly formulated model to simulate injuries and perturbation responses. While these methods are powerful and allow solution of previously intractable problems, there are still considerable numerical challenges, especially related to the convergence of gradient-based solvers. PMID:22102983

  18. Harmony search optimization for HDR prostate brachytherapy

    NASA Astrophysics Data System (ADS)

    Panchal, Aditya

    In high dose-rate (HDR) prostate brachytherapy, multiple catheters are inserted interstitially into the target volume. The process of treating the prostate involves calculating and determining the best dose distribution to the target and organs-at-risk by means of optimizing the time that the radioactive source dwells at specified positions within the catheters. It is the goal of this work to investigate the use of a new optimization algorithm, known as Harmony Search, in order to optimize dwell times for HDR prostate brachytherapy. The new algorithm was tested on 9 different patients and also compared with the genetic algorithm. Simulations were performed to determine the optimal value of the Harmony Search parameters. Finally, multithreading of the simulation was examined to determine potential benefits. First, a simulation environment was created using the Python programming language and the wxPython graphical interface toolkit, which was necessary to run repeated optimizations. DICOM RT data from Varian BrachyVision was parsed and used to obtain patient anatomy and HDR catheter information. Once the structures were indexed, the volume of each structure was determined and compared to the original volume calculated in BrachyVision for validation. Dose was calculated using the AAPM TG-43 point source model of the GammaMed 192Ir HDR source and was validated against Varian BrachyVision. A DVH-based objective function was created and used for the optimization simulation. Harmony Search and the genetic algorithm were implemented as optimization algorithms for the simulation and were compared against each other. The optimal values for Harmony Search parameters (Harmony Memory Size [HMS], Harmony Memory Considering Rate [HMCR], and Pitch Adjusting Rate [PAR]) were also determined. Lastly, the simulation was modified to use multiple threads of execution in order to achieve faster computational times. 
Experimental results show that the volume calculation implemented in this thesis was within 2% of the values computed by Varian BrachyVision for the prostate, within 3% for the rectum and bladder, and within 6% for the urethra. The calculated dose differed from BrachyVision by only 0.38%. Isodose curves were also generated and were found to be similar to BrachyVision. The comparison between Harmony Search and the genetic algorithm showed that Harmony Search was over 4 times faster when compared over multiple data sets. The optimal Harmony Memory Size was found to be 5 or lower, the Harmony Memory Considering Rate was determined to be 0.95, and the Pitch Adjusting Rate was found to be 0.9. Ultimately, the effect of multithreading showed that for computation-intensive tasks such as optimization and dose calculation, the threads of execution scale with the number of processors, achieving a speed increase proportional to the number of processor cores. In conclusion, this work showed that Harmony Search is a viable alternative to existing algorithms for use in HDR prostate brachytherapy optimization. Coupled with the optimal parameters for the algorithm and a multithreaded simulation, this combination can significantly decrease the time spent in the clinic on time-intensive optimization problems such as brachytherapy, IMRT and beam-angle optimization.
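The Harmony Search core loop can be sketched as follows, using the parameter values the study found optimal (HMS = 5, HMCR = 0.95, PAR = 0.9). The toy quadratic objective and the bandwidth `bw` merely stand in for the DVH-based objective function and dwell-time adjustment step of the actual work:

```python
import random

def harmony_search(objective, bounds, hms=5, hmcr=0.95, par=0.9,
                   bw=0.1, iters=2000, seed=1):
    """Minimal Harmony Search: keep the best `hms` solutions in memory;
    build each new harmony from memory (prob. HMCR), pitch-adjust it
    (prob. PAR), otherwise draw a fresh random value."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    memory.sort(key=objective)                      # best first
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                x = memory[rng.randrange(hms)][d]   # harmony memory considering
                if rng.random() < par:
                    x += rng.uniform(-bw, bw)       # pitch adjustment
            else:
                x = rng.uniform(lo, hi)             # random improvisation
            new.append(min(max(x, lo), hi))         # clamp to bounds
        if objective(new) < objective(memory[-1]):  # replace worst harmony
            memory[-1] = new
            memory.sort(key=objective)
    return memory[0]

# toy stand-in objective: minimise distance of 3 dwell weights from 0.3
best = harmony_search(lambda v: sum((x - 0.3) ** 2 for x in v),
                      [(0.0, 1.0)] * 3)
```

The high HMCR/PAR values found optimal in the thesis bias the search toward refining solutions already in memory, which suits the smooth dwell-time landscape of brachytherapy planning.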

  19. Exploring the optimal economic timing for crop tree release treatments in hardwoods: results from simulation

    Treesearch

    Chris B. LeDoux; Gary W. Miller

    2008-01-01

    In this study we used data from 16 Appalachian hardwood stands, a growth and yield computer simulation model, and stump-to-mill logging cost-estimating software to evaluate the optimal economic timing of crop tree release (CTR) treatments. The simulated CTR treatments consisted of one-time logging operations at stand age 11, 23, 31, or 36 years, with the residual...

  20. Design and Analysis of an Axisymmetric Phased Array Fed Gregorian Reflector System for Limited Scanning

    DTIC Science & Technology

    2016-01-22

    Numerical electromagnetic simulations based on the multilevel fast multipole method (MLFMM), conducted with FEKO software (www.feko.info), were used to analyze and optimize the antenna.

  1. Wheat forecast economics effect study. [value of improved information on crop inventories, production, imports and exports

    NASA Technical Reports Server (NTRS)

    Mehra, R. K.; Rouhani, R.; Jones, S.; Schick, I.

    1980-01-01

    A model to assess the value of improved information regarding the inventories, production, exports, and imports of crops on a worldwide basis is discussed. A previously proposed model is interpreted in a stochastic control setting and the underlying assumptions of the model are revealed. In solving the stochastic optimization problem, the Markov programming approach is much more powerful and exact compared with the dynamic programming-simulation approach of the original model. The convergence of a dual-variable Markov programming algorithm is shown to be fast and efficient. A computer program for the general multicountry, multiperiod model is developed. As an example, the case of one country and two periods is treated and the results are presented in detail. A comparison with the original model results reveals certain interesting aspects of the algorithms and the dependence of the value of information on the incremental cost function.

  2. Methods of comparing associative models and an application to retrospective revaluation.

    PubMed

    Witnauer, James E; Hutchings, Ryan; Miller, Ralph R

    2017-11-01

    Contemporary theories of associative learning are increasingly complex, which necessitates the use of computational methods to reveal predictions of these models. We argue that comparisons across multiple models in terms of goodness of fit to empirical data from experiments often reveal more about the actual mechanisms of learning and behavior than do simulations of only a single model. Such comparisons are best made when the values of free parameters are discovered through some optimization procedure based on the specific data being fit (e.g., hill climbing), so that the comparisons hinge on the psychological mechanisms assumed by each model rather than being biased by using parameters that differ in quality across models with respect to the data being fit. Statistics like the Bayesian information criterion facilitate comparisons among models that have different numbers of free parameters. These issues are examined using retrospective revaluation data. Copyright © 2017 Elsevier B.V. All rights reserved.
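The BIC comparison the abstract mentions is simple to state: BIC = k·ln(n) − 2·ln(L), where k is the number of free parameters, n the number of observations and ln(L) the maximized log-likelihood; the model with the lower BIC is preferred. A sketch with purely hypothetical fit values:

```python
import math

def bic(log_likelihood, k_params, n_obs):
    """Bayesian information criterion: lower is better; the k*ln(n)
    term penalises models with more free parameters."""
    return k_params * math.log(n_obs) - 2.0 * log_likelihood

# hypothetical fits: model B fits slightly better but uses 3 more parameters
bic_a = bic(log_likelihood=-120.0, k_params=4, n_obs=100)
bic_b = bic(log_likelihood=-118.0, k_params=7, n_obs=100)
# here model A wins: its small loss in fit is outweighed by B's extra parameters
```

This is the trade-off the authors rely on when comparing associative models whose parameters were separately optimized (e.g. by hill climbing) against the same data.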

  3. Variational optimization algorithms for uniform matrix product states

    NASA Astrophysics Data System (ADS)

    Zauner-Stauber, V.; Vanderstraeten, L.; Fishman, M. T.; Verstraete, F.; Haegeman, J.

    2018-01-01

    We combine the density matrix renormalization group (DMRG) with matrix product state tangent space concepts to construct a variational algorithm for finding ground states of one-dimensional quantum lattices in the thermodynamic limit. A careful comparison of this variational uniform matrix product state algorithm (VUMPS) with infinite density matrix renormalization group (IDMRG) and with infinite time evolving block decimation (ITEBD) reveals substantial gains in convergence speed and precision. We also demonstrate that VUMPS works very efficiently for Hamiltonians with long-range interactions and also for the simulation of two-dimensional models on infinite cylinders. The new algorithm can be conveniently implemented as an extension of an already existing DMRG implementation.

  4. Mapping the landscape of metabolic goals of a cell

    DOE PAGES

    Zhao, Qi; Stettner, Arion I.; Reznik, Ed; ...

    2016-05-23

    Here, genome-scale flux balance models of metabolism provide testable predictions of all metabolic rates in an organism, by assuming that the cell is optimizing a metabolic goal known as the objective function. We introduce an efficient inverse flux balance analysis (invFBA) approach, based on linear programming duality, to characterize the space of possible objective functions compatible with measured fluxes. After testing our algorithm on simulated E. coli data and time-dependent S. oneidensis fluxes inferred from gene expression, we apply our inverse approach to flux measurements in long-term evolved E. coli strains, revealing objective functions that provide insight into metabolic adaptation trajectories.

  5. One-to-one comparison of sunscreen efficacy, aesthetics and potential nanotoxicity

    NASA Astrophysics Data System (ADS)

    Barnard, Amanda S.

    2010-04-01

    Numerous reports have described the superior properties of nanoparticles and their diverse range of applications. Issues of toxicity, workplace safety and environmental impact have also been a concern. Here we show a theoretical comparison of how the size of titanium dioxide nanoparticles and their concentration in sunscreens can affect efficacy, aesthetics and potential toxicity from free radical production. The simulation results reveal that, unless very small nanoparticles can be shown to be safe, there is no combination of particle size and concentration that will deliver optimal performance in terms of sun protection and aesthetics. Such a theoretical method complements well the experimental approach for identifying these characteristics.

  6. Parameter estimation for chaotic systems using improved bird swarm algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Chuangbiao; Yang, Renhuan

    2017-12-01

    Parameter estimation of chaotic systems is an important problem in nonlinear science that has aroused increasing interest across many research fields, and it can essentially be reduced to a multidimensional optimization problem. In this paper, an improved boundary bird swarm algorithm (IBBSA) is used to estimate the parameters of chaotic systems. The algorithm combines the good global convergence and robustness of the bird swarm algorithm with the exploitation capability of an improved boundary learning strategy. Experiments are conducted on the Lorenz system and a coupled motor system. Numerical simulation results reveal the effectiveness and desirable performance of the IBBSA for parameter estimation of chaotic systems.

  7. The Molecular Structure of a Phosphatidylserine Bilayer Determined by Scattering and Molecular Dynamics Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Jianjun; Cheng, Xiaolin; Monticelli, Luca

    2014-01-01

    Phosphatidylserine (PS) lipids play essential roles in biological processes, including enzyme activation and apoptosis. We report on the molecular structure and atomic scale interactions of a fluid bilayer composed of 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidylserine (POPS). A scattering density profile model, aided by molecular dynamics (MD) simulations, was developed to jointly refine different contrast small-angle neutron and X-ray scattering data, which yielded a lipid area of 62.7 Å² at 25 °C. MD simulations with the POPS lipid area constrained at different values were also performed using all-atom and aliphatic united-atom models. The optimal simulated bilayer was obtained using a model-free comparison approach. Examination of the simulated bilayer, which agrees best with the experimental scattering data, reveals a preferential interaction between Na+ ions and the terminal serine and phosphate moieties. Long-range inter-lipid interactions were identified, primarily between the positively charged ammonium, and the negatively charged carboxylic and phosphate oxygens. The area compressibility modulus KA of the POPS bilayer was derived by quantifying lipid area as a function of surface tension from area-constrained MD simulations. It was found that POPS bilayers possess a much larger KA than that of neutral phosphatidylcholine lipid bilayers. We propose that the unique molecular features of POPS bilayers may play an important role in certain physiological functions.

  8. Optimal Control of Inspired Perfluorocarbon Temperature for Ultrafast Hypothermia Induction by Total Liquid Ventilation in an Adult Patient Model.

    PubMed

    Nadeau, Mathieu; Sage, Michael; Kohlhauer, Matthias; Mousseau, Julien; Vandamme, Jonathan; Fortin-Pellerin, Etienne; Praud, Jean-Paul; Tissier, Renaud; Walti, Herve; Micheau, Philippe

    2017-12-01

    Recent preclinical studies have shown that therapeutic hypothermia induced in less than 30 min by total liquid ventilation (TLV) strongly improves the survival rate after cardiac arrest. When the lung is ventilated with a breathable perfluorocarbon liquid, the inspired perfluorocarbon allows efficient control of the cooling of the organs. While TLV can rapidly cool animals, the cooling speed achievable in humans remains unknown. The objective is to predict the efficiency and safety of ultrafast cooling by TLV in adult humans, based on a previously published thermal model of ovines in TLV and the design of a direct optimal controller that computes the inspired perfluorocarbon temperature profile. Experimental results in an adult sheep are presented. The thermal model of the sheep is subsequently projected to a human model to simulate the optimal hypothermia induction and its sensitivity to physiological parameter uncertainties. The results in the sheep showed that the computed inspired perfluorocarbon temperature command can avoid arterial temperature undershoot. The projection to humans revealed that mild hypothermia could be ultrafast, reached in fewer than 3 min (-72 °C/h) for the brain and 20 min (-10 °C/h) for the entire body. This supports the conclusion that therapeutic hypothermia induction by TLV can be ultrafast and safe. This study is the first to simulate ultrafast cooling by TLV in a human model and is a strong motivation to translate TLV to humans to improve the quality of life of post-cardiac-arrest patients.

  9. Motion Cueing Algorithm Development: Piloted Performance Testing of the Cueing Algorithms

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

    The relative effectiveness in simulating aircraft maneuvers with both current and newly developed motion cueing algorithms was assessed with an eleven-subject piloted performance evaluation conducted on the NASA Langley Visual Motion Simulator (VMS). In addition to the current NASA adaptive algorithm, two new cueing algorithms were evaluated: the optimal algorithm and the nonlinear algorithm. The test maneuvers included a straight-in approach with a rotating wind vector, an offset approach with severe turbulence and an on/off lateral gust that occurs as the aircraft approaches the runway threshold, and a takeoff both with and without engine failure after liftoff. The maneuvers were executed with each cueing algorithm with added visual display delay conditions ranging from zero to 200 msec. Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. Piloted performance parameters for the approach maneuvers, the vertical velocity upon touchdown and the runway touchdown position, were also analyzed but did not show any noticeable difference among the cueing algorithms. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows pilot-induced oscillations on a straight-in approach were less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.

  10. Iterative Neighbour-Information Gathering for Ranking Nodes in Complex Networks

    NASA Astrophysics Data System (ADS)

    Xu, Shuang; Wang, Pei; Lü, Jinhu

    2017-01-01

    Designing node-influence ranking algorithms can provide insights into network dynamics, functions and structures. Increasing evidence reveals that a node's spreading ability largely depends on its neighbours. We introduce an iterative neighbour-information gathering (Ing) process with three parameters: a transformation matrix, a priori information and an iteration time. The Ing process iteratively combines prior information from neighbours via the transformation matrix, and iteratively assigns an Ing score to each node to evaluate its influence. The algorithm is applicable to any type of network, and includes some traditional centralities as special cases, such as degree, semi-local, and LeaderRank. The Ing process converges in strongly connected networks at a speed that depends on the first two largest eigenvalues of the transformation matrix. Interestingly, eigenvector centrality corresponds to a limit case of the algorithm. By comparison with eight renowned centralities, simulations of the susceptible-infected-removed (SIR) model on real-world networks reveal that the Ing can offer more exact rankings, even without a priori information. We also observe that an optimal iteration time always exists that best characterizes node influence. The proposed algorithms bridge the gaps among some existing measures, and may have potential applications in infectious disease control and the design of optimal information-spreading strategies.
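As an illustration of the iterative neighbour-information gathering idea (a simplified sketch, not the authors' exact Ing formulation), the code below repeatedly sums neighbours' scores, i.e. it uses the adjacency matrix as the transformation matrix and node degree as the a priori information. Iterating this way converges toward eigenvector centrality, the limit case noted in the abstract. The graph is hypothetical:

```python
def ing_scores(adj, prior, iters=10):
    """Iteratively gather neighbour information: each node's score is
    the sum of its neighbours' scores from the previous round,
    starting from the a priori information."""
    n = len(adj)
    s = prior[:]
    for _ in range(iters):
        s = [sum(s[j] for j in adj[i]) for i in range(n)]
        total = sum(s) or 1.0
        s = [x / total for x in s]   # normalise to prevent blow-up
    return s

# hypothetical undirected graph (adjacency lists): node 0 is a hub
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
deg_prior = [len(adj[i]) for i in range(4)]   # degree as a priori information
scores = ing_scores(adj, deg_prior)
most_influential = max(range(4), key=lambda i: scores[i])
```

With enough iterations this is just power iteration on the adjacency matrix; the paper's point is that a finite, well-chosen iteration time between the degree (0 iterations) and eigenvector (many iterations) extremes ranks spreaders best.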

  11. Non-linear auto-regressive models for cross-frequency coupling in neural time series

    PubMed Central

    Tallot, Lucille; Grabot, Laetitia; Doyère, Valérie; Grenier, Yves; Gramfort, Alexandre

    2017-01-01

    We address the issue of reliably detecting and quantifying cross-frequency coupling (CFC) in neural time series. Based on non-linear auto-regressive models, the proposed method provides a generative and parametric model of the time-varying spectral content of the signals. As this method models the entire spectrum simultaneously, it avoids the pitfalls related to incorrect filtering or the use of the Hilbert transform on wide-band signals. As the model is probabilistic, it also provides a score of the model “goodness of fit” via the likelihood, enabling easy and legitimate model selection and parameter comparison; this data-driven feature is unique to our model-based approach. Using three datasets obtained with invasive neurophysiological recordings in humans and rodents, we demonstrate that these models are able to replicate previous results obtained with other metrics, but also reveal new insights such as the influence of the amplitude of the slow oscillation. Using simulations, we demonstrate that our parametric method can reveal neural couplings with shorter signals than non-parametric methods. We also show how the likelihood can be used to find optimal filtering parameters, suggesting new properties on the spectrum of the driving signal, but also to estimate the optimal delay between the coupled signals, enabling a directionality estimation in the coupling. PMID:29227989

  12. A computer simulator for development of engineering system design methodologies

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Sobieszczanski-Sobieski, J.

    1987-01-01

    A computer program designed to simulate and improve engineering system design methodology is described. The simulator mimics the qualitative behavior and data couplings occurring among the subsystems of a complex engineering system. It eliminates the engineering analyses in the subsystems by replacing them with judiciously chosen analytical functions. With the cost of analysis eliminated, the simulator is used for experimentation with a large variety of candidate algorithms for multilevel design optimization to choose the best ones for the actual application. Thus, the simulator serves as a development tool for multilevel design optimization strategy. The simulator concept, implementation, and status are described and illustrated with examples.

  13. Partial Validation of Multibody Program to Optimize Simulated Trajectories II (POST II) Parachute Simulation With Interacting Forces

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Ben; Queen, Eric M.

    2002-01-01

    A capability to simulate trajectories of multiple interacting rigid bodies has been developed. This capability uses the Program to Optimize Simulated Trajectories II (POST II). Previously, POST II had the ability to simulate multiple bodies without interacting forces. The current implementation is used for the simulation of parachute trajectories, in which the parachute and suspended bodies can be treated as rigid bodies. An arbitrary set of connecting lines can be included in the model and are treated as massless spring-dampers. This paper discusses details of the connection line modeling and results of several test cases used to validate the capability.

  14. Parameter Optimization and Electrode Improvement of Rotary Stepper Micromotor

    NASA Astrophysics Data System (ADS)

    Sone, Junji; Mizuma, Toshinari; Mochizuki, Shunsuke; Sarajlic, Edin; Yamahata, Christophe; Fujita, Hiroyuki

    We developed a three-phase electrostatic stepper micromotor and performed a numerical simulation to improve its performance for practical use and to optimize its design. We conducted a circuit simulation of the motor by simplifying its structure, taking into account the springback force generated by the flexure-based support mechanism. We also devised a new improvement method for the electrodes. This improvement, together with other parameter optimizations, achieved low-voltage drive of the micromotor.

  15. Proposed correlation of structure network inherited from producing techniques and deformation behavior for Ni-Ti-Mo metallic glasses via atomistic simulations

    PubMed Central

    Yang, M. H.; Li, J. H.; Liu, B. X.

    2016-01-01

    Based on the newly constructed n-body potential of the Ni-Ti-Mo system, molecular dynamics and Monte Carlo simulations predict an energetically favored glass formation region and an optimal composition sub-region with the highest glass-forming ability. In order to compare the producing techniques of liquid melt quenching (LMQ) and solid-state amorphization (SSA), the inherent hierarchical structure and its effect on mechanical properties were clarified via atomistic simulations. It is revealed that the two producing techniques exhibit no pronounced differences in local atomic structure and mechanical behavior, although the LMQ method yields a relatively more ordered structure and a higher intrinsic strength. Meanwhile, it is found that the dominant short-range-order clusters of Ni-Ti-Mo metallic glasses obtained by LMQ and SSA are similar. By analyzing the structural evolution upon uniaxial tensile deformation, it is concluded that the gradual collapse of the spatial structure network is intimately correlated with the mechanical response of metallic glasses and acts as a structural signature of the initiation and propagation of shear bands. PMID:27418115

  16. Pharmacodynamically optimized erythropoietin treatment combined with phlebotomy reduction predicted to eliminate blood transfusions in selected preterm infants.

    PubMed

    Rosebraugh, Matthew R; Widness, John A; Nalbant, Demet; Cress, Gretchen; Veng-Pedersen, Peter

    2014-02-01

    Preterm very-low-birth-weight (VLBW) infants weighing <1.5 kg at birth develop anemia, often requiring multiple red blood cell transfusions (RBCTx). Because laboratory blood loss is a primary cause of anemia leading to RBCTx in VLBW infants, our purpose was to simulate the extent to which RBCTx can be reduced or eliminated by reducing laboratory blood loss in combination with pharmacodynamically optimized erythropoietin (Epo) treatment. Twenty-six VLBW ventilated infants receiving RBCTx were studied during the first month of life. RBCTx simulations were based on previously published RBCTx criteria and data-driven Epo pharmacodynamic optimization of literature-derived RBC life span and blood volume data corrected for phlebotomy loss. Simulated pharmacodynamic optimization of Epo administration and reduction in phlebotomy by ≥ 55% predicted a complete elimination of RBCTx in 1.0-1.5 kg infants. In infants <1.0 kg with 100% reduction in simulated phlebotomy and optimized Epo administration, a 45% reduction in RBCTx was predicted. The mean blood volume drawn from all infants was 63 ml/kg: 33% required for analysis and 67% discarded. When reduced laboratory blood loss and optimized Epo treatment are combined, marked reductions in RBCTx in ventilated VLBW infants were predicted, particularly among those with birth weights >1.0 kg.

  17. About Distributed Simulation-based Optimization of Forming Processes using a Grid Architecture

    NASA Astrophysics Data System (ADS)

    Grauer, Manfred; Barth, Thomas

    2004-06-01

    Permanently increasing complexity of products and their manufacturing processes combined with a shorter "time-to-market" leads to more and more use of simulation and optimization software systems for product design. Finding a "good" design of a product implies the solution of computationally expensive optimization problems based on the results of simulation. Due to the computational load caused by the solution of these problems, the requirements on the Information & Telecommunication (IT) infrastructure of an enterprise or research facility are shifting from stand-alone resources towards the integration of software and hardware resources in a distributed environment for high-performance computing. Resources can comprise software systems, hardware systems, or communication networks. An appropriate IT infrastructure must provide the means to integrate all these resources and enable their use even across a network to cope with requirements from geographically distributed scenarios, e.g. in computational engineering and/or collaborative engineering. Integrating experts' knowledge into the optimization process is inevitable in order to reduce the complexity caused by the number of design variables and the high dimensionality of the design space. Hence, utilization of knowledge-based systems must be supported by providing data management facilities as a basis for knowledge extraction from product data. In this paper, the focus is on a distributed problem solving environment (PSE) capable of providing access to a variety of necessary resources and services. A distributed approach integrating simulation and optimization on a network of workstations and cluster systems is presented. For geometry generation the CAD system CATIA is used, which is coupled with the FEM simulation system INDEED for simulation of sheet-metal forming processes and the problem solving environment OpTiX for distributed optimization.

  18. Forecasting of dissolved oxygen in the Guanting reservoir using an optimized NGBM (1,1) model.

    PubMed

    An, Yan; Zou, Zhihong; Zhao, Yanfei

    2015-03-01

    An optimized nonlinear grey Bernoulli model was proposed, using a particle swarm optimization algorithm to solve the parameter optimization problem. In addition, each item in the first-order accumulated generating sequence was set in turn as an initial condition to determine which alternative would yield the highest forecasting accuracy. To test the forecasting performance, the optimized models with different initial conditions were then used to simulate dissolved oxygen concentrations at the Guanting reservoir inlet and outlet (China). The empirical results show that the optimized model can remarkably improve forecasting accuracy, and that particle swarm optimization is a good tool for solving parameter optimization problems. Moreover, an optimized model with an initial condition that performs well in in-sample simulation may not do as well in out-of-sample forecasting. Copyright © 2015. Published by Elsevier B.V.
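    The particle swarm step referred to above can be sketched generically. The following is a minimal 1-D particle swarm minimizer of the kind used to fit a single model parameter such as the NGBM(1,1) power exponent; the objective function, bounds and swarm coefficients here are illustrative stand-ins, not values from the paper:

    ```python
    import random

    def pso_minimize(f, lo, hi, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
        """Minimal particle swarm optimiser for a scalar objective on [lo, hi]."""
        random.seed(0)
        xs = [random.uniform(lo, hi) for _ in range(n_particles)]  # positions
        vs = [0.0] * n_particles                                   # velocities
        pbest = xs[:]                 # each particle's personal best position
        gbest = min(xs, key=f)        # swarm's global best position
        for _ in range(iters):
            for i in range(n_particles):
                r1, r2 = random.random(), random.random()
                vs[i] = (w * vs[i]
                         + c1 * r1 * (pbest[i] - xs[i])   # pull towards personal best
                         + c2 * r2 * (gbest - xs[i]))     # pull towards global best
                xs[i] = min(hi, max(lo, xs[i] + vs[i]))   # clip to the search bounds
                if f(xs[i]) < f(pbest[i]):
                    pbest[i] = xs[i]
            gbest = min(pbest, key=f)
        return gbest

    # Hypothetical convex fitting objective with its minimum at x = 2.
    best = pso_minimize(lambda x: (x - 2.0) ** 2, -10.0, 10.0)
    ```

    In the paper's setting the objective would be a forecasting-error measure of the grey model rather than this toy quadratic.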

  19. Simulator for heterogeneous dataflow architectures

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    1993-01-01

    A new simulator is developed to simulate the execution of an algorithm graph in accordance with the Algorithm to Architecture Mapping Model (ATAMM) rules. ATAMM is a Petri net model which describes the periodic execution of large-grained, data-independent dataflow graphs and which provides predictable steady-state time-optimized performance. This simulator extends the ATAMM simulation capability from a homogeneous set of resources, or functional units, to a more general heterogeneous architecture. Simulation test cases show that the simulator accurately executes the ATAMM rules for both a heterogeneous architecture and a homogeneous architecture, which is the special case of only one processor type. The simulator forms one tool in an ATAMM Integrated Environment which contains other tools for graph entry, graph modification for performance optimization, and playback of simulations for analysis.

  20. Global Simulation of Aviation Operations

    NASA Technical Reports Server (NTRS)

    Sridhar, Banavar; Sheth, Kapil; Ng, Hok Kwan; Morando, Alex; Li, Jinhua

    2016-01-01

    The simulation and analysis of global air traffic is limited by a lack of simulation tools and the difficulty of accessing data sources. This paper provides a global simulation of aviation operations combining flight plans and real air traffic data with historical commercial city-pair aircraft type and schedule data and global atmospheric data. The resulting capability extends the simulation and optimization functions of NASA's Future Air Traffic Management Concept Evaluation Tool (FACET) to global scale. This new capability is used to present results on the evolution of global air traffic patterns from a concentration of traffic inside the US, Europe and across the Atlantic Ocean to a more diverse traffic pattern across the globe with accelerated growth in Asia, Australia, Africa and South America. The simulation analyzes seasonal variation in the long-haul wind-optimal traffic patterns in six major regions of the world and provides potential time savings of wind-optimal routes compared with great circle routes or, where available, current flight plans.

  1. Towards an optimal flow: Density-of-states-informed replica-exchange simulations

    DOE PAGES

    Vogel, Thomas; Perez, Danny

    2015-11-05

    Replica exchange (RE) is one of the most popular enhanced-sampling simulation techniques in use today. Despite widespread successes, RE simulations can sometimes fail to converge in practical amounts of time, e.g., when sampling around phase transitions, or when a few hard-to-find configurations dominate the statistical averages. We introduce a generalized RE scheme, density-of-states-informed RE, that addresses some of these challenges. The key feature of our approach is to inform the simulation with readily available, but commonly unused, information on the density of states of the system as the RE simulation proceeds. This enables two improvements, namely, the introduction of resampling moves that actively move the system towards equilibrium and the continual adaptation of the optimal temperature set. As a consequence of these two innovations, we show that the configuration flow in temperature space is optimized and that the overall convergence of RE simulations can be dramatically accelerated.
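    As context, the standard RE ingredient being generalized here is the Metropolis swap criterion between two replicas at neighbouring temperatures. A minimal sketch of that generic criterion follows (it does not reproduce the paper's density-of-states-informed resampling moves):

    ```python
    import math
    import random

    def swap_accepted(beta_i, beta_j, energy_i, energy_j, rng=random):
        """Metropolis criterion for exchanging configurations between two
        replicas at inverse temperatures beta_i and beta_j:

            P(accept) = min(1, exp[(beta_i - beta_j) * (E_i - E_j)])

        Swaps that move low-energy configurations to low temperatures are
        always accepted; others are accepted stochastically."""
        delta = (beta_i - beta_j) * (energy_i - energy_j)
        return delta >= 0 or rng.random() < math.exp(delta)
    ```

    In a full RE driver this test would be applied periodically to neighbouring temperature pairs, and the density-of-states information of the paper would additionally reshape the temperature set between swap attempts.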

  2. In-flight performance optimization for rotorcraft with redundant controls

    NASA Astrophysics Data System (ADS)

    Ozdemir, Gurbuz Taha

    A conventional helicopter has limits on performance at high speeds because of limitations of the main rotor, such as compressibility issues on the advancing side or stall issues on the retreating side. Auxiliary lift and thrust components have been suggested to improve performance of the helicopter substantially by reducing the loading on the main rotor. Such a configuration is called a compound rotorcraft. Rotor speed can also be varied to improve helicopter performance. In addition to improved performance, compound rotorcraft and variable RPM can provide a much larger degree of control redundancy. This additional redundancy gives the opportunity to further enhance performance and handling qualities. A flight control system is designed to perform in-flight optimization of redundant control effectors on a compound rotorcraft in order to minimize power required and extend range. This "Fly to Optimal" (FTO) control law is tested in simulation using the GENHEL model. Models of the UH-60, a compound version of the UH-60A with lifting wing and vectored-thrust ducted propeller (VTDP), and a generic compound version of the UH-60A with lifting wing and propeller were developed and tested in simulation. A model-following dynamic inversion controller is implemented for inner-loop control of roll, pitch, yaw, heave, and rotor RPM. An outer-loop controller regulates airspeed and flight path during optimization. A Golden Section search method was used to find the optimal rotor RPM on a conventional helicopter, where the single redundant control effector is rotor RPM. The FTO builds off of the Adaptive Performance Optimization (APO) method of Gilyard, which performs low-frequency sweeps on a redundant control for a fixed-wing aircraft. A method based on the APO method was used to optimize trim on a compound rotorcraft with several redundant control effectors.
The controller can be used to optimize rotor RPM and compound control effectors through flight test or simulations in order to establish a schedule. The method has been expanded to search a two-dimensional control space. Simulation results demonstrate the ability to maximize range by optimizing stabilator deflection and an airspeed set point. Another set of results minimizes power required in high-speed flight by optimizing collective pitch and stabilator deflection. Results show that the control laws effectively hold the flight condition while the FTO method is effective at improving performance. Optimizations show there can be issues when the control laws regulating altitude push the collective control towards its limits, so a modification was made to the control law to regulate airspeed and altitude using propeller pitch and angle of attack while the collective is held fixed or used as an optimization variable. A dynamic trim limit avoidance algorithm is applied to avoid control saturation in other axes during optimization maneuvers. Range and power optimization FTO simulations are compared with comprehensive sweeps of trim solutions, and FTO optimization is shown to be effective and reliable in reaching an optimum when optimizing up to two redundant controls. Use of redundant controls is shown to be beneficial for improving performance. The search method takes almost 25 minutes of simulated flight for optimization to be complete. The optimization maneuver itself can sometimes drive the power required to high values, so a power limit is imposed to restrict the search to avoid conditions where power is more than 5% higher than that of the initial trim state. With this modification, the time the optimization maneuver takes to complete is reduced to 21 minutes without any significant change in the optimal power value.
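The Golden Section search mentioned above is a standard 1-D line search for a unimodal objective; a minimal sketch follows. The power-versus-RPM objective and its minimum below are hypothetical stand-ins, not values from the thesis:

```python
import math

def golden_section_min(f, a, b, tol=1e-5):
    """Golden-section search for the minimum of a unimodal f on [a, b].

    Each iteration shrinks the bracket by the factor 1/phi ~ 0.618 while
    keeping the minimum inside the bracket."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0          # 1/phi
    c = b - invphi * (b - a)                       # lower interior probe
    d = a + invphi * (b - a)                       # upper interior probe
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c                            # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d                            # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2.0

# Notional power-required-vs-rotor-RPM curve with its minimum at 258 RPM.
rpm_opt = golden_section_min(lambda r: (r - 258.0) ** 2 + 1500.0, 200.0, 300.0)
```

In the FTO setting, each function evaluation corresponds to trimming the aircraft at a candidate RPM and measuring power required, which is why the search time is counted in minutes of simulated flight.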

  3. Program to Optimize Simulated Trajectories (POST). Volume 3: Programmer's manual

    NASA Technical Reports Server (NTRS)

    Brauer, G. L.; Cornick, D. E.; Habeger, A. R.; Petersen, F. M.; Stevenson, R.

    1975-01-01

    Information pertinent to the programmer and relating to the program to optimize simulated trajectories (POST) is presented. Topics discussed include: program structure and logic, subroutine listings and flow charts, and internal FORTRAN symbols. The POST core requirements are summarized along with program macrologic.

  4. Equation-based languages – A new paradigm for building energy modeling, simulation and optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wetter, Michael; Bonvini, Marco; Nouidui, Thierry S.

    Most of the state-of-the-art building simulation programs implement models in imperative programming languages. This complicates modeling and excludes the use of certain efficient methods for simulation and optimization. In contrast, equation-based modeling languages declare relations among variables, thereby allowing the use of computer algebra to enable much simpler schematic modeling and to generate efficient code for simulation and optimization. We contrast the two approaches in this paper. We explain how such manipulations support new use cases. In the first of two examples, we couple models of the electrical grid, multiple buildings, HVAC systems and controllers to test a controller that adjusts building room temperatures and PV inverter reactive power to maintain power quality. In the second example, we contrast the computing time for solving an optimal control problem for a room-level model predictive controller with and without symbolic manipulations. As a result, exploiting the equation-based language led to a 2,200 times faster solution.

  5. Modified Backtracking Search Optimization Algorithm Inspired by Simulated Annealing for Constrained Engineering Optimization Problems

    PubMed Central

    Wang, Hailong; Sun, Yuqiu; Su, Qinghua; Xia, Xuewen

    2018-01-01

    The backtracking search optimization algorithm (BSA) is a population-based evolutionary algorithm for numerical optimization problems. BSA has a powerful global exploration capacity while its local exploitation capability is relatively poor. This affects the convergence speed of the algorithm. In this paper, we propose a modified BSA inspired by simulated annealing (BSAISA) to overcome the deficiency of BSA. In the BSAISA, the amplitude control factor (F) is modified based on the Metropolis criterion in simulated annealing. The redesigned F could be adaptively decreased as the number of iterations increases and it does not introduce extra parameters. A self-adaptive ε-constrained method is used to handle the strict constraints. We compared the performance of the proposed BSAISA with BSA and other well-known algorithms when solving thirteen constrained benchmarks and five engineering design problems. The simulation results demonstrated that BSAISA is more effective than BSA and more competitive with other well-known algorithms in terms of convergence speed. PMID:29666635

  6. Subthreshold SPICE Model Optimization

    NASA Astrophysics Data System (ADS)

    Lum, Gregory; Au, Henry; Neff, Joseph; Bozeman, Eric; Kamin, Nick; Shimabukuro, Randy

    2011-04-01

    The first step in integrated circuit design is the simulation of said design in software to verify proper functionality and design requirements. Properties of the process are provided by fabrication foundries in the form of SPICE models. These SPICE models contain the electrical data and physical properties of the basic circuit elements. A limitation of these models is that the data collected by the foundry only accurately models the saturation region. This is fine for most users, but when operating devices in the subthreshold region they are inadequate for accurate simulation results. This is why optimizing the current SPICE models to characterize the subthreshold region is so important. In order to accurately simulate this region of operation, MOSFETs of varying widths and lengths are fabricated and the electrical test data are collected. From the data collected, the parameters of the model files are optimized through parameter extraction rather than curve fitting. With the completed optimized models, the circuit designer is able to simulate circuit designs for the subthreshold region accurately.
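    As background for why saturation-region data alone is inadequate here: in the subthreshold region the drain current depends exponentially on gate-source voltage, following the textbook first-order model sketched below. The parameter values are illustrative defaults, not the foundry-extracted SPICE parameters discussed in the record:

    ```python
    import math

    def subthreshold_id(vgs, i0=1e-12, n=1.5, vt=0.02585):
        """Textbook subthreshold MOSFET drain current:

            I_D = I_0 * exp(V_GS / (n * V_T))

        where I_0 is a process-dependent leakage scale, n the subthreshold
        slope factor, and V_T = kT/q the thermal voltage (~25.85 mV at 300 K).
        All three values here are generic placeholders."""
        return i0 * math.exp(vgs / (n * vt))
    ```

    The exponential dependence (roughly n*V_T*ln(10), here about 89 mV per decade of current) is what a saturation-region curve fit cannot capture, motivating the parameter-extraction approach described above.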

  7. Integrating Multibody Simulation and CFD: toward Complex Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Pieri, Stefano; Poloni, Carlo; Mühlmeier, Martin

    This paper describes the use of integrated multidisciplinary analysis and optimization of a race car model on a predefined circuit. The objective is the definition of the most efficient geometric configuration that can guarantee the lowest lap time. In order to carry out this study it has been necessary to interface the design optimization software modeFRONTIER with the following software packages: CATIA v5, a three-dimensional CAD package, used for the definition of the parametric geometry; A.D.A.M.S./Motorsport, a multi-body dynamic simulation package; IcemCFD, a mesh generator, for the automatic generation of the CFD grid; and CFX, a Navier-Stokes code, for fluid-dynamic force prediction. The process integration makes it possible to compute, for each geometric configuration, a set of aerodynamic coefficients that are then used in the multibody simulation for the computation of the lap time. Finally, an automatic optimization procedure is started and the lap time minimized. The whole process is executed on a Linux cluster running CFD simulations in parallel.

  8. Wet cooling towers: rule-of-thumb design and simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leeper, Stephen A.

    1981-07-01

    A survey of wet cooling tower literature was performed to develop a simplified method of cooling tower design and simulation for use in power plant cycle optimization. The theory of heat exchange in wet cooling towers is briefly summarized. The Merkel equation (the fundamental equation of heat transfer in wet cooling towers) is presented and discussed. The cooling tower fill constant (Ka) is defined and values derived. A rule-of-thumb method for the optimized design of cooling towers is presented. The rule-of-thumb design method provides information useful in power plant cycle optimization, including tower dimensions, water consumption rate, exit air temperature, power requirements and construction cost. In addition, a method for simulation of cooling tower performance at various operating conditions is presented. This information is also useful in power plant cycle evaluation. Using the information presented, it will be possible to incorporate wet cooling tower design and simulation into a procedure to evaluate and optimize power plant cycles.
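    For reference, the Merkel equation mentioned above is conventionally written as an enthalpy-potential integral over the water temperature range (standard textbook form; the symbol conventions are not taken from this report):

    ```latex
    \frac{K a V}{L} \;=\; \int_{T_2}^{T_1} \frac{c_w \, dT}{h' - h}
    ```

    where KaV/L is the dimensionless fill characteristic (Ka being the fill constant defined in the report), c_w the specific heat of water, T_1 and T_2 the hot- and cold-water temperatures, h' the enthalpy of saturated air at the local water temperature, and h the enthalpy of the bulk air. The integral is typically evaluated numerically, e.g. by four-point Chebyshev quadrature in standard cooling tower practice.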

  9. Equation-based languages – A new paradigm for building energy modeling, simulation and optimization

    DOE PAGES

    Wetter, Michael; Bonvini, Marco; Nouidui, Thierry S.

    2016-04-01

    Most of the state-of-the-art building simulation programs implement models in imperative programming languages. This complicates modeling and excludes the use of certain efficient methods for simulation and optimization. In contrast, equation-based modeling languages declare relations among variables, thereby allowing the use of computer algebra to enable much simpler schematic modeling and to generate efficient code for simulation and optimization. We contrast the two approaches in this paper. We explain how such manipulations support new use cases. In the first of two examples, we couple models of the electrical grid, multiple buildings, HVAC systems and controllers to test a controller that adjusts building room temperatures and PV inverter reactive power to maintain power quality. In the second example, we contrast the computing time for solving an optimal control problem for a room-level model predictive controller with and without symbolic manipulations. As a result, exploiting the equation-based language led to a 2,200 times faster solution.

  10. Three-dimensional data-tracking dynamic optimization simulations of human locomotion generated by direct collocation.

    PubMed

    Lin, Yi-Chung; Pandy, Marcus G

    2017-07-05

    The aim of this study was to perform full-body three-dimensional (3D) dynamic optimization simulations of human locomotion by driving a neuromusculoskeletal model toward in vivo measurements of body-segmental kinematics and ground reaction forces. Gait data were recorded from 5 healthy participants who walked at their preferred speeds and ran at 2 m/s. Participant-specific data-tracking dynamic optimization solutions were generated for one stride cycle using direct collocation in tandem with an OpenSim-MATLAB interface. The body was represented as a 12-segment, 21-degree-of-freedom skeleton actuated by 66 muscle-tendon units. Foot-ground interaction was simulated using six contact spheres under each foot. The dynamic optimization problem was to find the set of muscle excitations needed to reproduce 3D measurements of body-segmental motions and ground reaction forces while minimizing the time integral of muscle activations squared. Direct collocation took on average 2.7±1.0 h and 2.2±1.6 h of CPU time, respectively, to solve the optimization problems for walking and running. Model-computed kinematics and foot-ground forces were in good agreement with corresponding experimental data while the calculated muscle excitation patterns were consistent with measured EMG activity. The results demonstrate the feasibility of implementing direct collocation on a detailed neuromusculoskeletal model with foot-ground contact to accurately and efficiently generate 3D data-tracking dynamic optimization simulations of human locomotion. The proposed method offers a viable tool for creating feasible initial guesses needed to perform predictive simulations of movement using dynamic optimization theory. The source code for implementing the model and computational algorithm may be downloaded at http://simtk.org/home/datatracking. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Sensitivity Analysis and Optimization of Enclosure Radiation with Applications to Crystal Growth

    NASA Technical Reports Server (NTRS)

    Tiller, Michael M.

    1995-01-01

    In engineering, simulation software is often used as a convenient means for carrying out experiments to evaluate physical systems. The benefit of using simulations as 'numerical' experiments is that the experimental conditions can be easily modified and repeated at much lower cost than the comparable physical experiment. The goal of these experiments is to 'improve' the process or result of the experiment. In most cases, the computational experiments employ the same trial-and-error approach as their physical counterparts. When using this approach for complex systems, the cause-and-effect relationships of the system may never be fully understood and efficient strategies for improvement never utilized. However, it is possible when running simulations to accurately and efficiently determine the sensitivity of the system results with respect to simulation parameters (e.g., initial conditions, boundary conditions, and material properties) by manipulating the underlying computations. This results in a better understanding of the system dynamics and gives us efficient means to improve processing conditions. We begin by discussing the steps involved in performing simulations. Then we consider how sensitivity information about simulation results can be obtained and ways this information may be used to improve the process or result of the experiment. Next, we discuss optimization and the efficient algorithms which use sensitivity information. We draw on all this information to propose a generalized approach for integrating simulation and optimization, with an emphasis on software programming issues. After discussing our approach to simulation and optimization we consider an application involving crystal growth. This application is interesting because it includes radiative heat transfer.
We discuss the computation of radiative view factors and the impact this mode of heat transfer has on our approach. Finally, we demonstrate the results of our optimization.

  12. Light extraction efficiency analysis of GaN-based light-emitting diodes with nanopatterned sapphire substrates.

    PubMed

    Pan, Jui-Wen; Tsai, Pei-Jung; Chang, Kao-Der; Chang, Yung-Yuan

    2013-03-01

    In this paper, we propose a method to analyze the light extraction efficiency (LEE) enhancement of a nanopatterned sapphire substrate (NPSS) light-emitting diode (LED) by comparing wave optics software with ray optics software. Finite-difference time-domain (FDTD) simulations represent the wave optics software and LightTools (LTs) simulations represent the ray optics software. First, we find the trends of and an optimal solution for the LEE enhancement when 2D-FDTD simulations are used to save on simulation time and computational memory. The rigorous coupled-wave analysis method is utilized to explain the trend obtained from the 2D-FDTD algorithm. The optimal solution is then applied in 3D-FDTD and LTs simulations. The results are similar, and the difference in LEE enhancement between the two simulations does not exceed 8.5% for the small LED chip area. More than 10(4) times less computational memory is required by the LTs simulation in comparison to the 3D-FDTD simulation. Moreover, LEE enhancement from the side of the LED can be obtained in the LTs simulation. An actual-size NPSS LED is simulated using LTs. The results show a more than 307% improvement in the total LEE of the NPSS LED with the optimal solution compared to the conventional LED.

  13. Numerical simulation and optimization of casting process for complex pump

    NASA Astrophysics Data System (ADS)

    Liu, Xueqin; Dong, Anping; Wang, Donghong; Lu, Yanling; Zhu, Guoliang

    2017-09-01

    The complex pump body casting has a large, complicated structure and non-uniform wall thickness, which easily gives rise to casting defects. After analysis of the material and structural characteristics of the high-pressure pump, the numerical simulation software ProCAST was used to simulate the initial top-gating process. The filling process was smooth overall, with no misrun (water-shortage) phenomenon. However, circular shrinkage defects appeared at the bottom of the casting during the solidification process. The casting parameters were then optimized, and chills (cold iron) were added at the bottom. The shrinkage weight was reduced from 0.00167 g to 0.0005 g, and the porosity volume was reduced from 1.39 cm3 to 0.41 cm3. The optimized scheme was simulated and verified by actual experiment, and the defects were significantly reduced.

  14. Efficient three-dimensional resist profile-driven source mask optimization optical proximity correction based on Abbe-principal component analysis and Sylvester equation

    NASA Astrophysics Data System (ADS)

    Lin, Pei-Chun; Yu, Chun-Chang; Chen, Charlie Chung-Ping

    2015-01-01

    As one of the critical stages of a very large scale integration fabrication process, postexposure bake (PEB) plays a crucial role in determining the final three-dimensional (3-D) profiles and lessening the standing wave effects. However, the full 3-D chemically amplified resist simulation is not widely adopted during the postlayout optimization due to the long run-time and huge memory usage. An efficient simulation method is proposed to simulate the PEB while considering standing wave effects and resolution enhancement techniques, such as source mask optimization and subresolution assist features based on the Sylvester equation and Abbe-principal component analysis method. Simulation results show that our algorithm is 20× faster than the conventional Gaussian convolution method.

  15. Profile Optimization Method for Robust Airfoil Shape Optimization in Viscous Flow

    NASA Technical Reports Server (NTRS)

    Li, Wu

    2003-01-01

    Simulation results obtained by using FUN2D for robust airfoil shape optimization in transonic viscous flow are included to show the potential of the profile optimization method for generating fairly smooth optimal airfoils with no off-design performance degradation.

  16. Geometrical Effect on Thermal Conductivity of Unidirectional Fiber-Reinforced Polymer Composite along Different In-plane Orientations

    NASA Astrophysics Data System (ADS)

    Fang, Zenong; Li, Min; Wang, Shaokai; Li, Yanxia; Wang, Xiaolei; Gu, Yizhuo; Liu, Qianli; Tian, Jie; Zhang, Zuoguang

    2017-11-01

    This paper focuses on the anisotropic characteristics of the in-plane thermal conductivity of fiber-reinforced polymer composite, based on experiment and simulation. Thermal conductivity along different in-plane orientations was measured by laser flash analysis (LFA) and the steady-state heat flow method. The corresponding heat transfer processes were simulated to reveal the geometrical effect on thermal conduction. The results show that the in-plane thermal conduction of unidirectional carbon-fiber-reinforced polymer composite is strongly influenced by the sample geometry at in-plane orientation angles between 0° and 90°. By defining the radius-to-thickness ratio as a dimensionless shape factor for the LFA sample, the apparent thermal conductivity shows a dramatic change when the shape factor is close to the tangent of the orientation angle (tanθ). When the shape factor is larger than tanθ, the apparent thermal conductivity is consistent with the value estimated by the theoretical model. For a sample with a shape factor smaller than tanθ, the apparent thermal conductivity grows slowly around a low value, seriously deviating from the theoretical estimate. Finite element analysis revealed that this behavior correlates with a change of the heat transfer path from a continuous route to a zigzag route. These results will be helpful in optimizing the ply scheme of composite laminates for thermal management applications.

  17. Simulation-based planning for theater air warfare

    NASA Astrophysics Data System (ADS)

    Popken, Douglas A.; Cox, Louis A., Jr.

    2004-08-01

    Planning for Theatre Air Warfare can be represented as a hierarchy of decisions. At the top level, surviving airframes must be assigned to roles (e.g., Air Defense, Counter Air, Close Air Support, and AAF Suppression) in each time period in response to changing enemy air defense capabilities, remaining targets, and roles of opposing aircraft. At the middle level, aircraft are allocated to specific targets to support their assigned roles. At the lowest level, routing and engagement decisions are made for individual missions. The decisions at each level form a set of time-sequenced Courses of Action taken by opposing forces. This paper introduces a set of simulation-based optimization heuristics operating within this planning hierarchy to optimize allocations of aircraft. The algorithms estimate distributions for stochastic outcomes of the pairs of Red/Blue decisions. Rather than using traditional stochastic dynamic programming to determine optimal strategies, we use an innovative combination of heuristics, simulation-optimization, and mathematical programming. Blue decisions are guided by a stochastic hill-climbing search algorithm while Red decisions are found by optimizing over a continuous representation of the decision space. Stochastic outcomes are then provided by fast, Lanchester-type attrition simulations. This paper summarizes preliminary results from top and middle level models.
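
    The interplay between a fast Lanchester-type attrition simulation and a stochastic hill-climbing search can be sketched as follows. All force sizes, attrition coefficients, and the single decision variable (the fraction of Blue aircraft assigned to air-to-air engagement rather than strike sorties) are invented for illustration and are not the authors' model:

```python
import random

def attrition(blue, red, blue_frac, steps=20, a=0.02, b=0.02, noise=0.2, rng=None):
    """Toy stochastic Lanchester-style attrition: blue_frac of Blue aircraft
    engage Red; the remainder fly strike sorties that accumulate damage."""
    rng = rng or random.Random(0)          # fixed seed: deterministic objective
    damage = 0.0
    for _ in range(steps):
        engaged = blue * blue_frac
        blue -= min(blue, a * red * (1 + noise * (rng.random() - 0.5)))
        red -= min(red, b * engaged * (1 + noise * (rng.random() - 0.5)))
        damage += blue * (1 - blue_frac)   # strike sorties flown this period
    return damage

def hill_climb(obj, x=0.5, step=0.1, iters=200, rng=None):
    """Stochastic hill climbing over a single bounded decision variable."""
    rng = rng or random.Random(1)
    best = obj(x)
    for _ in range(iters):
        cand = min(1.0, max(0.0, x + rng.uniform(-step, step)))
        val = obj(cand)
        if val > best:
            x, best = cand, val
    return x, best

frac, score = hill_climb(lambda f: attrition(100.0, 80.0, f))
```

    In the paper this pattern runs at each level of the planning hierarchy, with the simulation evaluating each candidate Blue decision against optimized Red responses.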

  18. Simulation-optimization of large agro-hydrosystems using a decomposition approach

    NASA Astrophysics Data System (ADS)

    Schuetze, Niels; Grundmann, Jens

    2014-05-01

    In this contribution, a stochastic simulation-optimization framework for decision support in the optimal planning and operation of water supply for large agro-hydrosystems is presented. It is based on a decomposition solution strategy which allows for (i) the use of numerical process models together with efficient Monte Carlo simulations for a reliable estimation of higher quantiles of the minimum agricultural water demand for full and deficit irrigation strategies at small scale (farm level), and (ii) the use of the small-scale optimization results for solving water resources management problems at regional scale. As a secondary result of several simulation-optimization runs at the smaller scale, stochastic crop-water production functions (SCWPF) for different crops are derived, which can be used as a basic tool for assessing the impact of climate variability on yield risk. In addition, microeconomic impacts of climate change and the vulnerability of the agro-ecological systems are evaluated. The developed methodology is demonstrated through its application to a real-world case study for the South Al-Batinah region in the Sultanate of Oman, where a coastal aquifer is affected by saltwater intrusion due to excessive groundwater withdrawal for irrigated agriculture.
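
    The farm-scale step, estimating a higher quantile of seasonal irrigation demand by Monte Carlo simulation, can be sketched with a toy model. The demand equation, distributions, and all numbers below are invented for illustration and do not come from the paper:

```python
import random
import statistics

def seasonal_demand(rng):
    """Toy farm-scale water balance: irrigation demand equals crop water
    requirement minus effective rainfall (illustrative values, in mm)."""
    crop_et = rng.gauss(650, 40)               # seasonal crop evapotranspiration
    rainfall = max(0.0, rng.gauss(180, 90))    # stochastic seasonal rainfall
    return max(0.0, crop_et - 0.8 * rainfall)  # 0.8 = assumed effectiveness

rng = random.Random(42)
samples = sorted(seasonal_demand(rng) for _ in range(5000))

# A higher quantile (here the 90th percentile) gives a demand estimate that
# is met with high reliability, which is what the regional model consumes.
q90 = samples[int(0.9 * len(samples))]
mean = statistics.fmean(samples)
```

    Repeating this for a range of supplied water volumes yields points on a stochastic crop-water production function.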

  19. Microfiltration of thin stillage: Process simulation and economic analyses

    USDA-ARS?s Scientific Manuscript database

    In plant scale operations, multistage membrane systems have been adopted for cost minimization. We considered design optimization and operation of a continuous microfiltration (MF) system for the corn dry grind process. The objectives were to develop a model to simulate a multistage MF system, optim...

  20. Swarm intelligence-based approach for optimal design of CMOS differential amplifier and comparator circuit using a hybrid salp swarm algorithm

    NASA Astrophysics Data System (ADS)

    Asaithambi, Sasikumar; Rajappa, Muthaiah

    2018-05-01

    In this paper, an automatic design method based on a swarm intelligence approach to CMOS analog integrated circuit (IC) design is presented. A hybrid metaheuristic optimization technique, the salp swarm algorithm (SSA), is applied to the optimal sizing of a CMOS differential amplifier and a comparator circuit. SSA is a nature-inspired optimization algorithm which mimics the navigating and hunting behavior of salps. The hybrid SSA is applied to optimize the circuit design parameters and to minimize the MOS transistor sizes. The proposed swarm intelligence approach was successfully implemented for the automatic design and optimization of CMOS analog ICs using Generic Process Design Kit (GPDK) 180 nm technology. The circuit design parameters and design specifications are validated with a Simulation Program with Integrated Circuit Emphasis (SPICE) simulator. To investigate the efficiency of the proposed approach, comparisons were carried out with other simulation-based circuit design methods. The hybrid-SSA-based CMOS analog IC designs outperform those of previously reported studies.
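
    A minimal, non-hybrid version of the salp swarm algorithm's update rules (the leader salp moves around the best solution found, followers chain-average behind it) can be sketched as follows. The circuit-sizing objective is replaced here by a simple sphere function; the paper's hybrid variant and SPICE-in-the-loop evaluation are not reproduced:

```python
import math
import random

def ssa(obj, dim, lb, ub, n=30, iters=200, seed=0):
    """Minimal salp swarm algorithm (standard SSA update rules), sketched on
    a generic minimization objective; not the paper's hybrid variant."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n)]
    food = min(pop, key=obj)[:]                    # best solution so far
    for l in range(1, iters + 1):
        c1 = 2 * math.exp(-(4 * l / iters) ** 2)   # exploration decays with l
        for i in range(n):
            for d in range(dim):
                if i == 0:                         # leader moves around food
                    step = c1 * ((ub - lb) * rng.random() + lb)
                    pop[i][d] = food[d] + step if rng.random() < 0.5 else food[d] - step
                else:                              # followers chain behind
                    pop[i][d] = (pop[i][d] + pop[i - 1][d]) / 2
                pop[i][d] = min(ub, max(lb, pop[i][d]))
            if obj(pop[i]) < obj(food):
                food = pop[i][:]
    return food

best = ssa(lambda x: sum(v * v for v in x), dim=4, lb=-10, ub=10)
```

    In a sizing application, each coordinate would be a transistor width or length and the objective would come from circuit simulation.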

  1. Swarm intelligence-based approach for optimal design of CMOS differential amplifier and comparator circuit using a hybrid salp swarm algorithm.

    PubMed

    Asaithambi, Sasikumar; Rajappa, Muthaiah

    2018-05-01

    In this paper, an automatic design method based on a swarm intelligence approach to CMOS analog integrated circuit (IC) design is presented. A hybrid metaheuristic optimization technique, the salp swarm algorithm (SSA), is applied to the optimal sizing of a CMOS differential amplifier and a comparator circuit. SSA is a nature-inspired optimization algorithm which mimics the navigating and hunting behavior of salps. The hybrid SSA is applied to optimize the circuit design parameters and to minimize the MOS transistor sizes. The proposed swarm intelligence approach was successfully implemented for the automatic design and optimization of CMOS analog ICs using Generic Process Design Kit (GPDK) 180 nm technology. The circuit design parameters and design specifications are validated with a Simulation Program with Integrated Circuit Emphasis (SPICE) simulator. To investigate the efficiency of the proposed approach, comparisons were carried out with other simulation-based circuit design methods. The hybrid-SSA-based CMOS analog IC designs outperform those of previously reported studies.

  2. A trajectory planning scheme for spacecraft in the space station environment. M.S. Thesis - University of California

    NASA Technical Reports Server (NTRS)

    Soller, Jeffrey Alan; Grunwald, Arthur J.; Ellis, Stephen R.

    1991-01-01

    Simulated annealing is used to solve a minimum fuel trajectory problem in the space station environment. The environment is special because the space station will define a multivehicle environment in space. The optimization surface is a complex nonlinear function of the initial conditions of the chase and target craft. Small perturbations in the input conditions can result in abrupt changes to the optimization surface. Since no prior knowledge about the number or location of local minima on the surface is available, the optimization must be capable of functioning on a multimodal surface. It was reported in the literature that the simulated annealing algorithm is more effective on such surfaces than descent techniques using random starting points. The simulated annealing optimization was found to be capable of identifying a minimum fuel, two-burn trajectory subject to four constraints, which are integrated into the optimization using a barrier method. The computations required to solve the optimization are fast enough that missions could be planned on board the space station. Potential applications for on board planning of missions are numerous. Future research topics may include optimal planning of multi-waypoint maneuvers using a knowledge base to guide the optimization, and a study aimed at developing robust annealing schedules for potential on board missions.
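
    The core of the approach, simulated annealing over a multimodal cost surface with constraints folded in through a barrier term, can be sketched in one dimension. The cost function below is an invented multimodal stand-in for the fuel objective, not the thesis' trajectory dynamics:

```python
import math
import random

def anneal(obj, x0, lo, hi, t0=1.0, cooling=0.995, iters=4000, seed=0):
    """Plain simulated annealing with a geometric cooling schedule; any
    constraint is assumed to be folded into obj via a barrier term."""
    rng = random.Random(seed)
    x, fx, t = x0, obj(x0), t0
    best_x, best_f = x, fx
    for _ in range(iters):
        cand = min(hi, max(lo, x + rng.gauss(0, 0.5)))
        fc = obj(cand)
        # Metropolis rule: always accept improvements, sometimes accept worse
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling
    return best_x, best_f

# Toy multimodal "fuel cost" with a log-barrier keeping the constraint x > 1
def cost(x):
    barrier = -0.01 * math.log(x - 1.0) if x > 1.0 else float("inf")
    return math.sin(3 * x) + 0.1 * (x - 4) ** 2 + barrier

x, fx = anneal(cost, x0=2.0, lo=1.0001, hi=8.0)
```

    Because worse moves are occasionally accepted at high temperature, the search can escape the local minima that defeat plain descent from a random start.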

  3. Launch Vehicle Ascent Trajectory Simulation Using the Program to Optimize Simulated Trajectories II (POST2)

    NASA Technical Reports Server (NTRS)

    Lugo, Rafael A.; Shidner, Jeremy D.; Powell, Richard W.; Marsh, Steven M.; Hoffman, James A.; Litton, Daniel K.; Schmitt, Terri L.

    2017-01-01

    The Program to Optimize Simulated Trajectories II (POST2) has been continuously developed for over 40 years and has been used in many flight and research projects. Recently, there has been an effort to improve the POST2 architecture by promoting modularity, flexibility, and ability to support multiple simultaneous projects. The purpose of this paper is to provide insight into the development of trajectory simulation in POST2 by describing methods and examples of various improved models for a launch vehicle liftoff and ascent.

  4. Simulation in production of open rotor propellers: from optimal surface geometry to automated control of mechanical treatment

    NASA Astrophysics Data System (ADS)

    Grinyok, A.; Boychuk, I.; Perelygin, D.; Dantsevich, I.

    2018-03-01

    A complex method for the simulation and production design of open rotor propellers was studied. An end-to-end scheme was proposed for evaluating, designing, and experimentally testing the optimal geometry of the propeller surface; for generating the machine control path; and for simulating the force conditions in the cutting zone and their relationship with treatment accuracy, which is defined by the elastic deformation of the propeller. The simulation data enabled combined automated path control of the cutting tool.

  5. Optimization of HAART with genetic algorithms and agent-based models of HIV infection.

    PubMed

    Castiglione, F; Pappalardo, F; Bernaschi, M; Motta, S

    2007-12-15

    Highly Active AntiRetroviral Therapies (HAART) can significantly prolong the lives of people infected by HIV: although unable to eradicate the virus, they are quite effective in maintaining control of the infection. However, since HAART has several undesirable side effects, it is considered useful to suspend the therapy according to a suitable schedule of Structured Therapeutic Interruptions (STI). In the present article we describe an application of genetic algorithms (GA) aimed at finding the optimal schedule for a HAART simulated with an agent-based model (ABM) of the immune system that reproduces the most significant features of the response of an organism to the HIV-1 infection. The genetic algorithm helps in finding an optimal therapeutic schedule that maximizes immune restoration, minimizes the viral count and, through appropriate interruptions of the therapy, minimizes the dose of drug administered to the simulated patient. To validate the efficacy of the therapy that the genetic algorithm indicates as optimal, we ran simulations of opportunistic diseases and found that the selected therapy shows the best survival curve among the different simulated control groups. A version of the C-ImmSim simulator is available at http://www.iac.cnr.it/~filippo/c-ImmSim.html
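
    The GA layer can be sketched independently of the immune-system ABM. Below, the ABM is replaced by an invented surrogate fitness that rewards suppression while on therapy and penalizes both drug dose and long therapy gaps; the genome is a weekly on/off therapy schedule. None of the constants come from the paper:

```python
import random

def fitness(schedule, on_benefit=3.0, dose_cost=1.0, rebound=2.0):
    """Toy surrogate for the ABM: reward viral suppression while on therapy,
    penalize drug dose and viral rebound after long interruptions."""
    score, off_run = 0.0, 0
    for on in schedule:
        if on:
            score += on_benefit - dose_cost
            off_run = 0
        else:
            off_run += 1
            score -= rebound * (off_run > 2)   # long gaps let the virus rebound
    return score

def ga(n_weeks=24, pop_size=40, gens=60, seed=0):
    """Elitist GA over binary therapy schedules (1 = on therapy that week)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_weeks)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]       # keep the fitter half
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_weeks)
            child = a[:cut] + b[cut:]          # one-point crossover
            child[rng.randrange(n_weeks)] ^= 1 # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = ga()
```

    In the paper, each fitness evaluation is instead a full ABM run, which is why the GA's sample efficiency matters.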

  6. Comparison of optimization algorithms in intensity-modulated radiation therapy planning

    NASA Astrophysics Data System (ADS)

    Kendrick, Rachel

    Intensity-modulated radiation therapy is used to better conform the radiation dose to the target while avoiding healthy tissue. Planning programs employ optimization methods to search for the best fluence of each photon beam and therefore to create the best treatment plan. The Computational Environment for Radiotherapy Research (CERR), a program written in MATLAB, was used to examine some commonly used algorithms for one 5-beam plan. Algorithms included the genetic algorithm, quadratic programming, pattern search, constrained nonlinear optimization, simulated annealing, the optimization method used in Varian Eclipse™, and some hybrids of these. Quadratic programming, simulated annealing, and a quadratic/simulated-annealing hybrid were also compared separately using different prescription doses. The dose-volume histograms as well as the visual dose color wash were used to compare the plans. CERR's built-in quadratic programming provided the best overall plan, but its avoidance of the organ-at-risk was rivaled by other methods. Hybrids of quadratic programming with some of these algorithms suggest the possibility of better planning programs, as shown by the improved quadratic/simulated-annealing plan compared to simulated annealing alone. Further experimentation will be done to improve cost functions and computational time.
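
    The quadratic-programming formulation can be illustrated on a toy fluence problem: minimize the squared deviation of delivered dose from the prescription subject to nonnegative beamlet weights. With a random dose-influence matrix (invented here, not from a planning system), this reduces to nonnegative least squares:

```python
import numpy as np
from scipy.optimize import nnls

# Toy fluence-map problem: dose_matrix[i, j] = dose to voxel i per unit
# weight of beamlet j (values illustrative, not from a planning system).
rng = np.random.default_rng(0)
n_voxels, n_beamlets = 40, 8
dose_matrix = rng.random((n_voxels, n_beamlets))

prescription = np.full(n_voxels, 2.0)     # uniform target dose (Gy)

# The quadratic objective ||D w - p||^2 with w >= 0 is a nonnegative
# least-squares problem, a simple stand-in for a planning QP.
weights, residual = nnls(dose_matrix, prescription)
achieved = dose_matrix @ weights
```

    Real planners add organ-at-risk penalty terms and dose-volume constraints to the same quadratic skeleton.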

  7. Spatial Variability of Organic Carbon in a Fractured Mudstone and Its Effect on the Retention and Release of Trichloroethene (TCE)

    NASA Astrophysics Data System (ADS)

    Sole-Mari, G.; Fernandez-Garcia, D.

    2016-12-01

    Random Walk Particle Tracking (RWPT) coupled with Kernel Density Estimation (KDE) has been recently proposed to simulate reactive transport in porous media. KDE provides an optimal estimation of the area of influence of particles which is a key element to simulate nonlinear chemical reactions. However, several important drawbacks can be identified: (1) the optimal KDE method is computationally intensive and thereby cannot be used at each time step of the simulation; (2) it does not take advantage of the prior information about the physical system and the previous history of the solute plume; (3) even if the kernel is optimal, the relative error in RWPT simulations typically increases over time as the particle density diminishes by dilution. To overcome these problems, we propose an adaptive branching random walk methodology that incorporates the physics, the particle history and maintains accuracy with time. The method allows particles to efficiently split and merge when necessary as well as to optimally adapt their local kernel shape without having to recalculate the kernel size. We illustrate the advantage of the method by simulating complex reactive transport problems in randomly heterogeneous porous media.

  8. A novel method for energy harvesting simulation based on scenario generation

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Li, Taoshen; Xiao, Nan; Ye, Jin; Wu, Min

    2018-06-01

    An energy harvesting network (EHN) is a new form of computer network. It converts ambient energy into usable electric energy and supplies this energy as a primary or secondary power source to communication devices. However, most EHN studies use an analytical probability distribution function to describe the energy harvesting process, which cannot accurately capture the actual conditions. We propose an EHN simulation method based on scenario generation. First, instead of assuming a probability distribution in advance, it uses optimal scenario reduction to generate representative single-period scenarios from historical data of the harvested energy. Second, it uses a homogeneous simulated annealing algorithm to generate optimal daily energy harvesting scenario sequences, yielding a more accurate simulation of the random characteristics of the energy harvesting network. Taking actual wind power data as an example, the accuracy and stability of the method are verified by comparison with the real data. Finally, we present an instance that optimizes network throughput, whose optimal solution and data analysis indicate the feasibility and effectiveness of the proposed method for energy harvesting simulation.

  9. Locally adaptive methods for KDE-based random walk models of reactive transport in porous media

    NASA Astrophysics Data System (ADS)

    Sole-Mari, G.; Fernandez-Garcia, D.

    2017-12-01

    Random Walk Particle Tracking (RWPT) coupled with Kernel Density Estimation (KDE) has been recently proposed to simulate reactive transport in porous media. KDE provides an optimal estimation of the area of influence of particles which is a key element to simulate nonlinear chemical reactions. However, several important drawbacks can be identified: (1) the optimal KDE method is computationally intensive and thereby cannot be used at each time step of the simulation; (2) it does not take advantage of the prior information about the physical system and the previous history of the solute plume; (3) even if the kernel is optimal, the relative error in RWPT simulations typically increases over time as the particle density diminishes by dilution. To overcome these problems, we propose an adaptive branching random walk methodology that incorporates the physics, the particle history and maintains accuracy with time. The method allows particles to efficiently split and merge when necessary as well as to optimally adapt their local kernel shape without having to recalculate the kernel size. We illustrate the advantage of the method by simulating complex reactive transport problems in randomly heterogeneous porous media.
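
    The two ingredients, a random-walk particle step and a KDE reconstruction of concentration from particle positions, can be sketched in one dimension. The setup below (conservative transport, Silverman's rule-of-thumb bandwidth) is illustrative only and omits the paper's adaptive splitting/merging and local kernel adaptation:

```python
import numpy as np

def rwpt_step(x, dt, D=1e-3, v=0.1, rng=None):
    """One advective-diffusive random-walk step (1-D, illustrative units)."""
    rng = rng or np.random.default_rng(0)
    return x + v * dt + np.sqrt(2 * D * dt) * rng.standard_normal(x.size)

def kde_concentration(particles, grid):
    """Gaussian KDE with Silverman's rule-of-thumb bandwidth: each particle's
    'area of influence' is a kernel, smoothing the raw particle density."""
    n = particles.size
    h = 1.06 * particles.std() * n ** (-1 / 5)   # rule-of-thumb bandwidth
    diff = (grid[:, None] - particles[None, :]) / h
    return np.exp(-0.5 * diff**2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
x = np.zeros(2000)                       # unit-mass pulse injected at x = 0
for _ in range(50):
    x = rwpt_step(x, dt=0.1, rng=rng)    # plume drifts to ~0.5 and spreads
grid = np.linspace(-1, 2, 200)
conc = kde_concentration(x, grid)
```

    The smoothed concentration field is what nonlinear reaction rates are evaluated on; recomputing the bandwidth at every step is the expense the adaptive scheme avoids.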

  10. An NMR-Guided Screening Method for Selective Fragment Docking and Synthesis of a Warhead Inhibitor.

    PubMed

    Khattri, Ram B; Morris, Daniel L; Davis, Caroline M; Bilinovich, Stephanie M; Caras, Andrew J; Panzner, Matthew J; Debord, Michael A; Leeper, Thomas C

    2016-07-16

    Selective hits for the glutaredoxin ortholog of Brucella melitensis are determined using STD NMR and verified by trNOE and (15)N-HSQC titration. The most promising hit, RK207, was docked into the target molecule using a scoring function to compare simulated poses to experimental data. After elucidating possible poses, the hit was further optimized into the lead compound by extension with an electrophilic acrylamide warhead. We believe that focusing on selectivity at this early stage of drug discovery will limit the cross-reactivity with the human ortholog that might occur as the lead compound is optimized. Kinetics studies revealed that lead compound 5 modified with an ester group is more reactive than an acrylamide control; after modification, however, this compound shows little selectivity for the bacterial protein over the human ortholog. In contrast, hydrolysis of compound 5 to the acid form decreases the compound's activity. Together these results suggest that more optimization is warranted for this simple chemical scaffold, and they open the door to the discovery of drugs targeted against glutaredoxin proteins, a heretofore untapped reservoir for antibiotic agents.

  11. Optimizing Multiple QoS for Workflow Applications using PSO and Min-Max Strategy

    NASA Astrophysics Data System (ADS)

    Umar Ambursa, Faruku; Latip, Rohaya; Abdullah, Azizol; Subramaniam, Shamala

    2017-08-01

    Workflow scheduling under multiple QoS constraints is a complicated optimization problem. Metaheuristic techniques are excellent approaches for dealing with such problems, and many metaheuristic-based algorithms that consider various economic and trustworthiness QoS dimensions have been proposed. However, most of these approaches lead to high violation of user-defined QoS requirements in tight situations. Recently, a new Particle Swarm Optimization (PSO)-based QoS-aware workflow scheduling strategy (LAPSO) was proposed to improve performance in such situations. The LAPSO algorithm is designed around the synergy between a violation handling method and a hybrid of PSO and a min-max heuristic. Simulation results showed the great potential of the LAPSO algorithm for handling user requirements even in tight situations. In this paper, the performance of the algorithm is analysed further. Specifically, the impact of the min-max strategy on the performance of the algorithm is revealed, by removing the violation handling from the operation of the algorithm. The results show that LAPSO based only on the min-max method still outperforms the benchmark, although LAPSO with violation handling performs significantly better still.

  12. Localization of multilayer networks by optimized single-layer rewiring.

    PubMed

    Jalan, Sarika; Pradhan, Priodyuti

    2018-04-01

    We study localization properties of principal eigenvectors (PEVs) of multilayer networks (MNs). Starting with a multilayer network corresponding to a delocalized PEV, we rewire the network edges using an optimization technique such that the PEV of the rewired multilayer network becomes more localized. The framework allows us to scrutinize structural and spectral properties of the networks at various localization points during the rewiring process. We show that rewiring only one layer is enough to attain an MN with a highly localized PEV. Our investigation reveals that a single edge rewiring of the optimized MN can lead to the complete delocalization of a highly localized PEV. This sensitivity in the localization behavior of PEVs is accompanied by the second largest eigenvalue lying very close to the largest one. This observation opens an avenue to gain a deeper insight into the origin of PEV localization in networks. Furthermore, analysis of multilayer networks constructed using real-world social and biological data shows that the localization properties of these real-world multilayer networks are in good agreement with the simulation results for the model multilayer network. This paper is relevant to applications that require an understanding of the propagation of perturbations in multilayer networks.
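
    The localization measure commonly used in such studies, the inverse participation ratio (IPR) of the principal eigenvector, and a greedy single-layer rewiring loop can be sketched on one random layer. The edge-toggling move below is a simplification of the paper's optimization, which rewires edges more carefully; everything here is illustrative:

```python
import numpy as np

def ipr(v):
    """Inverse participation ratio: ~1/N for a delocalized eigenvector,
    approaching 1 for one localized on a single node."""
    v = v / np.linalg.norm(v)
    return float((v**4).sum())

def pev(adj):
    """Principal eigenpair of a symmetric adjacency matrix."""
    vals, vecs = np.linalg.eigh(adj)
    return vals[-1], np.abs(vecs[:, -1])

rng = np.random.default_rng(3)
n = 60
adj = (rng.random((n, n)) < 0.1).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T                            # simple undirected layer

lam, v0 = pev(adj)
base = ipr(v0)

# Greedy rewiring: accept a random edge toggle iff it raises the PEV's IPR.
best = adj.copy()
for _ in range(300):
    cand = best.copy()
    i, j = rng.integers(0, n, 2)
    if i != j:
        cand[i, j] = cand[j, i] = 1.0 - cand[i, j]
        if ipr(pev(cand)[1]) > ipr(pev(best)[1]):
            best = cand
final = ipr(pev(best)[1])
```

    Tracking the IPR and the gap between the two largest eigenvalues along such a trajectory reproduces, in miniature, the kind of analysis the paper performs on multilayer networks.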

  13. Optimization of Polyplex Formation between DNA Oligonucleotide and Poly(ʟ-Lysine): Experimental Study and Modeling Approach.

    PubMed

    Vasiliu, Tudor; Cojocaru, Corneliu; Rotaru, Alexandru; Pricope, Gabriela; Pinteala, Mariana; Clima, Lilia

    2017-06-17

    The polyplexes formed by nucleic acids and polycations have received great attention owing to their potential application in gene therapy. In our study, we report experimental results and modeling outcomes regarding the optimization of polyplex formation between double-stranded DNA (dsDNA) and poly(ʟ-Lysine) (PLL). The binding efficiency during polyplex formation was quantified by processing images captured from the gel electrophoresis assays. Design of experiments (DoE) and response surface methodology (RSM) were employed to investigate the coupled effect of the key factors (pH and N/P ratio) affecting the binding efficiency. According to the experimental observations and response surface analysis, the N/P ratio showed a major influence on binding efficiency compared to pH. Model-based optimization calculations, along with experimental confirmation runs, unveiled the maximal binding efficiency (99.4%) achieved at pH 5.4 and N/P ratio 125. To support the experimental data and reveal insights into the molecular mechanism responsible for the polyplex formation between dsDNA and PLL, molecular dynamics simulations were performed at pH 5.4 and 7.4.
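
    The DoE/RSM step can be sketched generically: fit a full quadratic response surface to a small factorial design over pH and N/P ratio, then locate its maximum. All measurements below are invented for illustration and are not the paper's data:

```python
import numpy as np

# Hypothetical 3x3 factorial design: binding efficiency (%) over pH and
# N/P ratio (values invented for illustration only).
pts = np.array([[5.0, 50], [5.0, 100], [5.0, 150],
                [6.5, 50], [6.5, 100], [6.5, 150],
                [8.0, 50], [8.0, 100], [8.0, 150]], float)
eff = np.array([85, 96, 95, 80, 90, 88, 60, 72, 70], float)

# Full quadratic response surface:
#   y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2,
# fitted by ordinary least squares.
x1, x2 = pts[:, 0], pts[:, 1]
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, eff, rcond=None)

def surface(p, n):
    return beta @ np.array([1.0, p, n, p**2, n**2, p * n])

# Locate the optimum over the experimental region by grid search.
grid = [(p, n) for p in np.linspace(5, 8, 31) for n in np.linspace(50, 150, 41)]
p_opt, n_opt = max(grid, key=lambda pn: surface(*pn))
```

    Confirmation runs at the predicted optimum then validate the fitted surface, which is the step that yielded the 99.4% efficiency in the paper.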

  14. Optimization of Polyplex Formation between DNA Oligonucleotide and Poly(l-Lysine): Experimental Study and Modeling Approach

    PubMed Central

    Vasiliu, Tudor; Cojocaru, Corneliu; Rotaru, Alexandru; Pricope, Gabriela; Pinteala, Mariana; Clima, Lilia

    2017-01-01

    The polyplexes formed by nucleic acids and polycations have received great attention owing to their potential application in gene therapy. In our study, we report experimental results and modeling outcomes regarding the optimization of polyplex formation between double-stranded DNA (dsDNA) and poly(l-Lysine) (PLL). The binding efficiency during polyplex formation was quantified by processing images captured from the gel electrophoresis assays. Design of experiments (DoE) and response surface methodology (RSM) were employed to investigate the coupled effect of the key factors (pH and N/P ratio) affecting the binding efficiency. According to the experimental observations and response surface analysis, the N/P ratio showed a major influence on binding efficiency compared to pH. Model-based optimization calculations, along with experimental confirmation runs, unveiled the maximal binding efficiency (99.4%) achieved at pH 5.4 and N/P ratio 125. To support the experimental data and reveal insights into the molecular mechanism responsible for the polyplex formation between dsDNA and PLL, molecular dynamics simulations were performed at pH 5.4 and 7.4. PMID:28629130

  15. Localization of multilayer networks by optimized single-layer rewiring

    NASA Astrophysics Data System (ADS)

    Jalan, Sarika; Pradhan, Priodyuti

    2018-04-01

    We study localization properties of principal eigenvectors (PEVs) of multilayer networks (MNs). Starting with a multilayer network corresponding to a delocalized PEV, we rewire the network edges using an optimization technique such that the PEV of the rewired multilayer network becomes more localized. The framework allows us to scrutinize structural and spectral properties of the networks at various localization points during the rewiring process. We show that rewiring only one layer is enough to attain an MN with a highly localized PEV. Our investigation reveals that a single edge rewiring of the optimized MN can lead to the complete delocalization of a highly localized PEV. This sensitivity in the localization behavior of PEVs is accompanied by the second largest eigenvalue lying very close to the largest one. This observation opens an avenue to gain a deeper insight into the origin of PEV localization in networks. Furthermore, analysis of multilayer networks constructed using real-world social and biological data shows that the localization properties of these real-world multilayer networks are in good agreement with the simulation results for the model multilayer network. This paper is relevant to applications that require an understanding of the propagation of perturbations in multilayer networks.

  16. Parameter Sweep and Optimization of Loosely Coupled Simulations Using the DAKOTA Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elwasif, Wael R; Bernholdt, David E; Pannala, Sreekanth

    2012-01-01

    The increasing availability of large scale computing capabilities has accelerated the development of high-fidelity coupled simulations. Such simulations typically involve the integration of models that implement various aspects of the complex phenomena under investigation. Coupled simulations are playing an integral role in fields such as climate modeling, earth systems modeling, rocket simulations, computational chemistry, fusion research, and many other computational fields. Model coupling provides scientists with systematic ways to virtually explore the physical, mathematical, and computational aspects of the problem. Such exploration is rarely done using a single execution of a simulation, but rather by aggregating the results from many simulation runs that, together, serve to bring to light novel knowledge about the system under investigation. Furthermore, it is often the case (particularly in engineering disciplines) that the study of the underlying system takes the form of an optimization regime, where the control parameter space is explored to optimize an objective function that captures system realizability, cost, performance, or a combination thereof. Novel and flexible frameworks that facilitate the integration of the disparate models into a holistic simulation are used to perform this research, while making efficient use of the available computational resources. In this paper, we describe the integration of the DAKOTA optimization and parameter sweep toolkit with the Integrated Plasma Simulator (IPS), a component-based framework for loosely coupled simulations. The integration allows DAKOTA to exploit the internal task and resource management of the IPS to dynamically instantiate simulation instances within a single IPS instance, allowing for greater control over the trade-off between efficiency of resource utilization and time to completion.
We present a case study showing the use of the combined DAKOTA-IPS system to aid in the design of a lithium ion battery (LIB) cell, by studying a coupled system involving the electrochemistry and ion transport at the lower length scales and thermal energy transport at the device scales. The DAKOTA-IPS system provides a flexible tool for use in optimization and parameter sweep studies involving loosely coupled simulations that is suitable for use in situations where changes to the constituent components in the coupled simulation are impractical due to intellectual property or code heritage issues.

  17. Optimizing STEM Education with Advanced ICTs and Simulations

    ERIC Educational Resources Information Center

    Levin, Ilya, Ed.; Tsybulsky, Dina, Ed.

    2017-01-01

    The role of technology in educational settings has become increasingly prominent in recent years. When utilized effectively, these tools provide a higher quality of learning for students. "Optimizing STEM Education With Advanced ICTs and Simulations" is an innovative reference source for the latest scholarly research on the integration…

  18. Software Partitioning Schemes for Advanced Simulation Computer Systems. Final Report.

    ERIC Educational Resources Information Center

    Clymer, S. J.

    Conducted to design software partitioning techniques for use by the Air Force to partition a large flight simulator program for optimal execution on alternative configurations, this study resulted in a mathematical model which defines characteristics for an optimal partition, and a manually demonstrated partitioning algorithm design which…

  19. Two-phase simulation-based location-allocation optimization of biomass storage distribution

    USDA-ARS?s Scientific Manuscript database

    This study presents a two-phase simulation-based framework for finding the optimal locations of biomass storage facilities, a critical link in the biomass supply chain, which can help to address biorefinery concerns (e.g. steady supply, uniform feedstock properties, stable feedstock costs,...

  20. Mechanistic and structural basis of bioengineered bovine Cathelicidin-5 with optimized therapeutic activity

    NASA Astrophysics Data System (ADS)

    Sahoo, Bikash R.; Maruyama, Kenta; Edula, Jyotheeswara R.; Tougan, Takahiro; Lin, Yuxi; Lee, Young-Ho; Horii, Toshihiro; Fujiwara, Toshimichi

    2017-03-01

    Peptide-drug discovery using host-defense peptides is becoming a promising strategy against antibiotic-resistant pathogens and cancer cells. Here, we customized the therapeutic activity of bovine cathelicidin-5 targeting bacteria, protozoa, and tumor cells. The membrane-dependent conformational adaptability and plasticity of cathelicidin-5 is revealed by biophysical analysis and atomistic simulations over 200 μs in thymocyte, leukemia, and E. coli cell membranes. Our understanding of energy-dependent cathelicidin-5 intrusion into heterogeneous membranes aided in designing novel loss/gain-of-function analogues. In vitro findings identified that leucine-zipper to phenylalanine substitution in cathelicidin-5 (1-18) significantly enhances the antimicrobial and anticancer activity with trivial hemolytic activity. Targeted mutants of cathelicidin-5 at the kink region and N-terminal truncation revealed loss of function. We confirmed the existence of a bimodal mechanism of peptide action (membranolytic and non-membranolytic) in vitro. An in vivo melanoma mouse model study further supports the in vitro findings. This is the first structural report on cathelicidin-5, and our findings reveal potent therapeutic applications of the designed cathelicidin-5 analogues.

  1. Hierarchical optimization for neutron scattering problems

    DOE PAGES

    Bao, Feng; Archibald, Rick; Bansal, Dipanshu; ...

    2016-03-14

    In this study, we present a scalable optimization method for neutron scattering problems that determines confidence regions of simulation parameters in lattice dynamics models used to fit neutron scattering data for crystalline solids. The method uses physics-based hierarchical dimension reduction in both the computational simulation domain and the parameter space. We demonstrate for silicon that after a few iterations the method converges to parameter values (interatomic force constants) computed with density functional theory simulations.

  2. Hierarchical optimization for neutron scattering problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Feng; Archibald, Rick; Bansal, Dipanshu

    In this study, we present a scalable optimization method for neutron scattering problems that determines confidence regions of simulation parameters in lattice dynamics models used to fit neutron scattering data for crystalline solids. The method uses physics-based hierarchical dimension reduction in both the computational simulation domain and the parameter space. We demonstrate for silicon that after a few iterations the method converges to parameter values (interatomic force constants) computed with density functional theory simulations.

  3. Large-scale expensive black-box function optimization

    NASA Astrophysics Data System (ADS)

    Rashid, Kashif; Bailey, William; Couët, Benoît

    2012-09-01

    This paper presents the application of an adaptive radial basis function method to a computationally expensive black-box reservoir simulation model of many variables. An iterative proxy-based scheme is used to tune the control variables, distributed for finer control over a varying number of intervals covering the total simulation period, to maximize asset NPV. The method shows that large-scale simulation-based function optimization of several hundred variables is practical and effective.
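
    The adaptive proxy idea described above can be sketched in a few lines: fit a cheap radial basis function interpolant to the points evaluated so far, search the proxy instead of the simulator, and evaluate the true model only at the proxy optimum. Everything here is illustrative; `expensive_model` is a toy quadratic standing in for the reservoir simulation, and the Gaussian RBF with a fixed shape parameter is one common choice, not necessarily the paper's.

```python
import numpy as np

def expensive_model(x):
    # Stand-in for a costly black-box simulation (illustrative only).
    return np.sum((x - 0.3) ** 2)

def rbf_fit(X, y, eps=1.0):
    # Solve Phi w = y for Gaussian RBF interpolation weights.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = np.exp(-(eps * d) ** 2)
    return np.linalg.solve(Phi + 1e-10 * np.eye(len(X)), y)

def rbf_eval(X, w, x, eps=1.0):
    # Evaluate the fitted proxy at a single point x.
    d = np.linalg.norm(X - x, axis=-1)
    return np.exp(-(eps * d) ** 2) @ w

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(20, 2))              # initial design
y = np.array([expensive_model(x) for x in X])

for _ in range(10):                              # iterative proxy refinement
    w = rbf_fit(X, y)
    cand = rng.uniform(0, 1, size=(500, 2))      # cheap search on the proxy
    best = cand[np.argmin([rbf_eval(X, w, c) for c in cand])]
    X = np.vstack([X, best])                     # truth evaluated only at proxy optimum
    y = np.append(y, expensive_model(best))

x_opt = X[np.argmin(y)]
```

Each loop iteration costs one true-model call, which is the point of the proxy scheme: the hundreds of candidate evaluations hit only the interpolant.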

  4. Data consistency-driven scatter kernel optimization for x-ray cone-beam CT

    NASA Astrophysics Data System (ADS)

    Kim, Changhwan; Park, Miran; Sung, Younghun; Lee, Jaehak; Choi, Jiyoung; Cho, Seungryong

    2015-08-01

    Accurate and efficient scatter correction is essential for acquisition of high-quality x-ray cone-beam CT (CBCT) images for various applications. This study was conducted to demonstrate the feasibility of using the data consistency condition (DCC) as a criterion for scatter kernel optimization in scatter deconvolution methods in CBCT. Since data consistency in the mid-plane of CBCT is primarily challenged by scatter, we utilized data consistency to confirm the degree of scatter correction and to steer the update in iterative kernel optimization. By means of the parallel-beam DCC via fan-parallel rebinning, we iteratively optimized the scatter kernel parameters, using a particle swarm optimization algorithm for its computational efficiency and excellent convergence. The proposed method was validated by a simulation study using the XCAT numerical phantom and also by experimental studies using the ACS head phantom and the pelvic part of the Rando phantom. The results showed that the proposed method can effectively improve the accuracy of deconvolution-based scatter correction. Quantitative assessments of image quality parameters such as contrast and structural similarity (SSIM) revealed that the optimally selected scatter kernel restores the contrast of the corrected images to up to 99.5%, 94.4%, and 84.4% of that of scatter-free images, and the SSIM to up to 96.7%, 90.5%, and 87.8%, in the XCAT study, the ACS head phantom study, and the pelvis phantom study, respectively. The proposed method can achieve accurate and efficient scatter correction from a single cone-beam scan without the need for auxiliary hardware or additional experimentation.
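
    The particle swarm update used for the kernel search is generic and easy to sketch. The `objective` below is a toy stand-in for the DCC-based inconsistency measure, and the inertia/acceleration coefficients are common textbook defaults, not values from the study.

```python
import numpy as np

def objective(p):
    # Toy stand-in for a data-inconsistency measure of a kernel
    # parameterized by p (illustrative; not the paper's actual metric).
    return np.sum((p - np.array([0.6, -0.2])) ** 2)

rng = np.random.default_rng(1)
n, dim = 30, 2
pos = rng.uniform(-1, 1, (n, dim))               # particle positions
vel = np.zeros((n, dim))                         # particle velocities
pbest = pos.copy()                               # personal bests
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()         # global best

w, c1, c2 = 0.7, 1.5, 1.5                        # inertia / acceleration coefficients
for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = np.array([objective(p) for p in pos])
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()].copy()
```

The same loop applies unchanged to any low-dimensional kernel parameterization: only `objective` changes.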

  5. Comparative study of surrogate models for groundwater contamination source identification at DNAPL-contaminated sites

    NASA Astrophysics Data System (ADS)

    Hou, Zeyu; Lu, Wenxi

    2018-05-01

    Knowledge of groundwater contamination sources is critical for effectively protecting groundwater resources, estimating risks, mitigating disaster, and designing remediation strategies. Many methods for groundwater contamination source identification (GCSI) have been developed in recent years, including the simulation-optimization technique. This study proposes utilizing a support vector regression (SVR) model and a kernel extreme learning machine (KELM) model to enrich the content of the surrogate model. The surrogate model was itself key in replacing the simulation model, reducing the huge computational burden of iterations in the simulation-optimization technique to solve GCSI problems, especially in GCSI problems of aquifers contaminated by dense nonaqueous phase liquids (DNAPLs). A comparative study between the Kriging, SVR, and KELM models is reported. Additionally, there is analysis of the influence of parameter optimization and the structure of the training sample dataset on the approximation accuracy of the surrogate model. It was found that the KELM model was the most accurate surrogate model, and its performance was significantly improved after parameter optimization. The approximation accuracy of the surrogate model to the simulation model did not always improve with increasing numbers of training samples. Using the appropriate number of training samples was critical for improving the performance of the surrogate model and avoiding unnecessary computational workload. It was concluded that the KELM model developed in this work could reasonably predict system responses in given operation conditions. Replacing the simulation model with a KELM model considerably reduced the computational burden of the simulation-optimization process and also maintained high computation accuracy.
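
    For regression, a kernel extreme learning machine surrogate reduces to a closed-form kernel ridge solve. A minimal sketch follows; the `C`/`gamma` values and the toy response surface standing in for the simulation model are illustrative, not taken from the study.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between row sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELM:
    # Kernel extreme learning machine regression: closed-form ridge
    # solution alpha = (I/C + K)^-1 y (standard KELM formulation).
    def __init__(self, C=100.0, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = X
        K = rbf_kernel(X, X, self.gamma)
        self.alpha = np.linalg.solve(np.eye(len(X)) / self.C + K, y)
        return self

    def predict(self, Xq):
        return rbf_kernel(Xq, self.X, self.gamma) @ self.alpha

# Train the surrogate on a toy "simulation model" response surface.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (60, 2))
y = np.sin(2 * X[:, 0]) + X[:, 1] ** 2          # stand-in simulator output
model = KELM(C=1000.0, gamma=2.0).fit(X, y)
Xt = rng.uniform(-1, 1, (20, 2))
err = np.abs(model.predict(Xt) - (np.sin(2 * Xt[:, 0]) + Xt[:, 1] ** 2)).max()
```

Once fitted, `predict` replaces each expensive simulation call inside the optimization loop, which is where the computational savings reported above come from.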

  6. Calibration of two complex ecosystem models with different likelihood functions

    NASA Astrophysics Data System (ADS)

    Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán

    2014-05-01

    The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate change induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive regarding the model output. At the same time, there are several input parameters for which accurate values are hard to obtain directly from experiments or for which no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of terrestrial ecosystems (in this research the developed version of Biome-BGC is used, which is referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via calculating a likelihood function (degree of goodness-of-fit between simulated and measured data). 
In our research, different likelihood function formulations were used in order to examine the effect of the different model goodness metrics on calibration. The different likelihoods are different functions of RMSE (root mean squared error) weighted by measurement uncertainty: exponential / linear / quadratic / linear normalized by correlation. As a first calibration step, sensitivity analysis was performed in order to select the influential parameters which have a strong effect on the output data. In the second calibration step, only the sensitive parameters were calibrated (optimal values and confidence intervals were calculated). In the case of PaSim, more parameters were found responsible for 95% of the output data variance than in the case of BBGC MuSo. Analysis of the results of the optimized models revealed that the exponential likelihood estimation proved to be the most robust (best model simulation with optimized parameters, highest confidence interval increase). The cross-validation of the model simulations can help in constraining the highly uncertain greenhouse gas budget of grasslands.
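
    The RMSE-based likelihood variants compared above can be written down schematically. The functional forms and uncertainty weighting below are paraphrased in the spirit of the description, not the authors' code; `sigma` stands for the measurement uncertainty.

```python
import numpy as np

def rmse(sim, obs):
    # Root mean squared error between simulated and measured series.
    return np.sqrt(np.mean((sim - obs) ** 2))

# Illustrative RMSE-based goodness metrics (exponential / linear /
# quadratic forms), each weighted by measurement uncertainty sigma.
def likelihood_exponential(sim, obs, sigma):
    return np.exp(-rmse(sim, obs) / sigma)

def likelihood_linear(sim, obs, sigma):
    return max(0.0, 1.0 - rmse(sim, obs) / sigma)

def likelihood_quadratic(sim, obs, sigma):
    return max(0.0, 1.0 - (rmse(sim, obs) / sigma) ** 2)

obs = np.array([1.0, 2.0, 3.0])
good = np.array([1.1, 2.0, 2.9])   # close to the measurements
bad = np.array([2.0, 3.0, 4.0])    # systematically biased
```

All three map a better fit (lower RMSE) to a higher likelihood; they differ only in how sharply the score decays, which is exactly the property the calibration comparison probes.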

  7. Deformation effect simulation and optimization for double front axle steering mechanism

    NASA Astrophysics Data System (ADS)

    Wu, Jungang; Zhang, Siqin; Yang, Qinglong

    2013-03-01

    This paper investigates the tire wear problem of heavy vehicles with a double front axle steering mechanism, focusing on the flexibility effects of the steering mechanism, and proposes a structural optimization method that combines traditional static structural theory with a dynamic structural approach, the Equivalent Static Load (ESL) method, to optimize key parts. Good simulation and test results show that this method has high practical engineering and reference value for addressing tire wear in double front axle steering mechanism design.

  8. EIT image regularization by a new Multi-Objective Simulated Annealing algorithm.

    PubMed

    Castro Martins, Thiago; Sales Guerra Tsuzuki, Marcos

    2015-01-01

    Multi-Objective Optimization can be used to produce regularized Electrical Impedance Tomography (EIT) images where the weight of the regularization term is not known a priori. This paper proposes a novel Multi-Objective Optimization algorithm based on Simulated Annealing tailored for EIT image reconstruction. Images are reconstructed from experimental data and compared with images from other Multi and Single Objective optimization methods. A significant performance enhancement over traditional techniques can be inferred from the results.
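
    A multi-objective simulated annealing loop of the kind described maintains an archive of mutually nondominated solutions while accepting occasional uphill moves at high temperature. The toy bi-objective and the scalarized acceptance rule below are illustrative, not the EIT reconstruction objectives.

```python
import math
import random

def dominates(a, b):
    # a dominates b: no worse in every objective, strictly better in one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def objectives(x):
    # Toy bi-objective with Pareto set x in [-1, 1] (in the spirit of a
    # fidelity-vs-regularization trade-off; not the EIT objectives).
    return ((x - 1.0) ** 2, (x + 1.0) ** 2)

random.seed(3)
x = 0.0
archive = [(x, objectives(x))]   # nondominated solutions found so far
T = 1.0                          # annealing temperature
for step in range(2000):
    cand = x + random.gauss(0.0, 0.3)
    fc, fx = objectives(cand), objectives(x)
    # Accept dominating moves outright; otherwise accept with a
    # temperature-dependent probability on the scalarized energy gap.
    delta = sum(fc) - sum(fx)
    if dominates(fc, fx) or random.random() < math.exp(-max(delta, 0.0) / T):
        x = cand
        if not any(dominates(f, fc) for _, f in archive):
            archive = [(s, f) for s, f in archive if not dominates(fc, f)]
            archive.append((cand, fc))
    T *= 0.999                   # geometric cooling schedule
```

The archive, rather than the final state, is the output: it approximates the Pareto front across regularization weights in a single annealing run.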

  9. Primary gamma ray selection in a hybrid timing/imaging Cherenkov array

    NASA Astrophysics Data System (ADS)

    Postnikov, E. B.; Grinyuk, A. A.; Kuzmichev, L. A.; Sveshnikova, L. G.

    2017-06-01

    This work is a methodical study of hybrid reconstruction techniques for hybrid imaging/timing Cherenkov observations. This type of hybrid array is to be realized at the gamma-observatory TAIGA, intended for very high energy gamma-ray astronomy (> 30 TeV). It aims at combining the cost-effective timing-array technique with imaging telescopes. Hybrid operation of these two techniques can provide a relatively cheap way to develop a large-area array. The joint approach to gamma-event selection was investigated on both types of simulated data: the image parameters from the telescopes, and the shower parameters reconstructed from the timing array. The optimal set of imaging parameters and shower parameters to combine is revealed. The cosmic-ray background suppression factor as a function of distance and energy is calculated. The optimal selection technique leads to cosmic-ray background suppression of about two orders of magnitude at distances up to 450 m for energies greater than 50 TeV.

  10. Molecular Level Design Principle behind Optimal Sizes of Photosynthetic LH2 Complex: Taming Disorder through Cooperation of Hydrogen Bonding and Quantum Delocalization.

    PubMed

    Jang, Seogjoo; Rivera, Eva; Montemayor, Daniel

    2015-03-19

    The light harvesting 2 (LH2) antenna complex from purple photosynthetic bacteria is an efficient natural excitation energy carrier with well-known symmetric structure, but the molecular level design principle governing its structure-function relationship is unknown. Our all-atomistic simulations of nonnatural analogues of LH2 as well as those of a natural LH2 suggest that nonnatural sizes of LH2-like complexes could be built. However, stable and consistent hydrogen bonding (HB) between bacteriochlorophyll and the protein is shown to be possible only near naturally occurring sizes, leading to significantly smaller disorder than for nonnatural ones. Extensive quantum calculations of intercomplex exciton transfer dynamics, sampled for a large set of disorder, reveal that taming the negative effect of disorder through a reliable HB as well as quantum delocalization of the exciton is a critical mechanism that makes LH2 highly functional, which also explains why the natural sizes of LH2 are indeed optimal.

  11. Human Locomotion under Reduced Gravity Conditions: Biomechanical and Neurophysiological Considerations

    PubMed Central

    Sylos-Labini, Francesca; Ivanenko, Yuri P.

    2014-01-01

    Reduced gravity offers unique opportunities to study motor behavior. This paper aims at providing a review on current issues of the known tools and techniques used for hypogravity simulation and their effects on human locomotion. Walking and running rely on the limb oscillatory mechanics, and one way to change its dynamic properties is to modify the level of gravity. Gravity has a strong effect on the optimal rate of limb oscillations, optimal walking speed, and muscle activity patterns, and gait transitions occur smoothly and at slower speeds at lower gravity levels. Altered center of mass movements and interplay between stance and swing leg dynamics may challenge new forms of locomotion in a heterogravity environment. Furthermore, observations in the lack of gravity effects help to reveal the intrinsic properties of locomotor pattern generators and make evident facilitation of nonvoluntary limb stepping. In view of that, space neurosciences research has participated in the development of new technologies that can be used as an effective tool for gait rehabilitation. PMID:25247179

  12. Optimal frequency-response sensitivity of compressible flow over roughness elements

    NASA Astrophysics Data System (ADS)

    Fosas de Pando, Miguel; Schmid, Peter J.

    2017-04-01

    Compressible flow over a flat plate with two localised and well-separated roughness elements is analysed by global frequency-response analysis. This analysis reveals a sustained feedback loop consisting of a convectively unstable shear-layer instability, triggered at the upstream roughness, and an upstream-propagating acoustic wave, originating at the downstream roughness and regenerating the shear-layer instability at the upstream protrusion. A typical multi-peaked frequency response is recovered from the numerical simulations. In addition, the optimal forcing and response clearly extract the components of this feedback loop and isolate flow regions of pronounced sensitivity and amplification. An efficient parametric-sensitivity framework is introduced and applied to the reference case which shows that first-order increases in Reynolds number and roughness height act destabilising on the flow, while changes in Mach number or roughness separation cause corresponding shifts in the peak frequencies. This information is gained with negligible effort beyond the reference case and can easily be applied to more complex flows.

  13. Tailoring nanoparticle designs to target cancer based on tumor pathophysiology

    PubMed Central

    Sykes, Edward A.; Dai, Qin; Sarsons, Christopher D.; Chen, Juan; Rocheleau, Jonathan V.; Hwang, David M.; Zheng, Gang; Cramb, David T.; Rinker, Kristina D.; Chan, Warren C. W.

    2016-01-01

    Nanoparticles can provide significant improvements in the diagnosis and treatment of cancer. How nanoparticle size, shape, and surface chemistry can affect their accumulation, retention, and penetration in tumors remains heavily investigated, because such findings provide guiding principles for engineering optimal nanosystems for tumor targeting. Currently, the experimental focus has been on particle design and not the biological system. Here, we varied tumor volume to determine whether cancer pathophysiology can influence tumor accumulation and penetration of different sized nanoparticles. Monte Carlo simulations were also used to model the process of nanoparticle accumulation. We discovered that changes in pathophysiology associated with tumor volume can selectively change tumor uptake of nanoparticles of varying size. We further determine that nanoparticle retention within tumors depends on the frequency of interaction of particles with the perivascular extracellular matrix for smaller nanoparticles, whereas transport of larger nanomaterials is dominated by Brownian motion. These results reveal that nanoparticles can potentially be personalized according to a patient’s disease state to achieve optimal diagnostic and therapeutic outcomes. PMID:26884153

  14. Optimizing the field distribution of a Halbach type permanent magnet cylinder using the soft iron and superhard magnet

    NASA Astrophysics Data System (ADS)

    Xu, Xiaonong; Lu, Dingwei; Xu, Xibin; Yu, Yang; Gu, Min

    2018-01-01

    When a conventional Halbach type Hollow Cylindrical Permanent Magnet Array (HCPMA) is used to generate magnetic induction over the magnitude of the coercivity μ0Hc, some detrimental parasitic magnetic phenomena, such as demagnetization, magnetization reversal, and magnetization vortexes, can appear in the interior of the magnets. We present a self-consistent quantitative analysis of the magnetization and magnetic induction distributions inside the magnetic array by considering the anisotropic and nonlinear magnetization functions of the materials comprising the array. These numerical simulations reveal novel magnetization structures resulting from the self-field of the array. We demonstrate that both the field uniformity and the magnetic flux in the pole gap can be modulated by partially substituting soft iron and superhard magnets for the high-energy-product magnets. We also show how optimized substitution parameters can be obtained for an HCPMA achieving the best field uniformity or the maximum magnetic flux.

  15. Analysis of electric field distribution in GaAs metal-semiconductor field effect transistor with a field-modulating plate

    NASA Astrophysics Data System (ADS)

    Hori, Yasuko; Kuzuhara, Masaaki; Ando, Yuji; Mizuta, Masashi

    2000-04-01

    Electric field distribution in the channel of a field effect transistor (FET) with a field-modulating plate (FP) has been theoretically investigated using a two-dimensional ensemble Monte Carlo simulation. This analysis revealed that the introduction of FP is effective in canceling the influence of surface traps under forward bias conditions and in reducing the electric field intensity at the drain side of the gate edge under pinch-off bias conditions. This study also found that a partial overlap of the high-field region under the gate and that at the FP electrode is important for reducing the electric field intensity. The optimized metal-semiconductor FET with FP (FPFET) (LGF ~ 0.2 μm) exhibited a much lower peak electric field intensity than a conventional metal-semiconductor FET. Based on these numerically calculated results, we have proposed a design procedure to optimize the power FPFET structure with extremely high breakdown voltages while maintaining reasonable gain performance.

  16. Adaptive frozen orbital treatment for the fragment molecular orbital method combined with density-functional tight-binding

    NASA Astrophysics Data System (ADS)

    Nishimoto, Yoshio; Fedorov, Dmitri G.

    2018-02-01

    The exact analytic gradient is derived and implemented for the fragment molecular orbital (FMO) method combined with density-functional tight-binding (DFTB) using adaptive frozen orbitals. The response contributions which arise from freezing detached molecular orbitals on the border between fragments are computed by solving Z-vector equations. The accuracy of the energy, its gradient, and optimized structures is verified on a set of representative inorganic materials and polypeptides. FMO-DFTB is applied to optimize the structure of a silicon nano-wire, and the results are compared to those of density functional theory and experiment. FMO accelerates the DFTB calculation of a boron nitride nano-ring with 7872 atoms by a factor of 406. Molecular dynamics simulations using FMO-DFTB applied to a 10.7 μm chain of boron nitride nano-rings, consisting of about 1.2 × 106 atoms, reveal the rippling and twisting of nano-rings at room temperature.

  17. Noise assisted pattern fabrication

    NASA Astrophysics Data System (ADS)

    Roy, Tanushree; Agarwal, V.; Singh, B. P.; Parmananda, P.

    2018-04-01

    Pre-selected patterns on an n-type Si surface are fabricated by electrochemical etching in the presence of a weak optical signal. The constructive role of noise, namely, stochastic resonance (SR), is exploited for these purposes. SR is a nonlinear phenomenon wherein at an optimal amplitude of noise, the information transfer from weak input sub-threshold signals to the system output is maximal. In the present work, the amplitude of internal noise was systematically regulated by varying the molar concentration of hydrofluoric acid (HF) in the electrolyte. Pattern formation on the substrate for two different amplitudes (25 ± 2 and 11 ± 1 mW) of the optical template (sub-threshold signal) was considered. To quantify the fidelity/quality of pattern formation, the spatial cross-correlation coefficient (CCC) between the constructed pattern and the template of the applied signal was calculated. The maximum CCC is obtained for the pattern formed at an optimal HF concentration, indicating SR. Simulations, albeit using external noise, on a spatial array of coupled FitzHugh-Nagumo oscillators revealed similar results.
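
    The stochastic resonance effect itself is easy to reproduce numerically: a sub-threshold signal plus noise drives a threshold detector, and the signal-output cross-correlation coefficient (analogous to the CCC used above) peaks at an intermediate noise amplitude. The signal amplitude, threshold, and noise levels below are arbitrary illustrative values, not quantities from the experiment.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 40 * np.pi, 20000)
signal = 0.4 * np.sin(t)       # sub-threshold input: never crosses alone
threshold = 1.0

def threshold_response(sigma):
    # Binary detector output: fires when signal + noise crosses threshold.
    noise = rng.normal(0.0, sigma, t.size)
    return (signal + noise > threshold).astype(float)

def fidelity(sigma):
    # Cross-correlation coefficient between input and detector output,
    # the quantity that exhibits a maximum at an optimal noise level.
    return np.corrcoef(signal, threshold_response(sigma))[0, 1]

c_low, c_mid, c_high = fidelity(0.3), fidelity(0.5), fidelity(2.0)
```

Too little noise and the detector almost never fires; too much and firing decouples from the signal; the intermediate level transfers the most information, which is the resonance.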

  18. Determining residual reduction algorithm kinematic tracking weights for a sidestep cut via numerical optimization.

    PubMed

    Samaan, Michael A; Weinhandl, Joshua T; Bawab, Sebastian Y; Ringleb, Stacie I

    2016-12-01

    Musculoskeletal modeling allows for the determination of various parameters during dynamic maneuvers by using in vivo kinematic and ground reaction force (GRF) data as inputs. Differences between experimental and model marker data and inconsistencies in the GRFs applied to these musculoskeletal models may not produce accurate simulations. Therefore, residual forces and moments are applied to these models in order to reduce these differences. Numerical optimization techniques can be used to determine optimal tracking weights of each degree of freedom of a musculoskeletal model in order to reduce differences between the experimental and model marker data as well as residual forces and moments. In this study, the particle swarm optimization (PSO) and simplex simulated annealing (SIMPSA) algorithms were used to determine optimal tracking weights for the simulation of a sidestep cut. The PSO and SIMPSA algorithms were able to produce model kinematics that were within 1.4° of experimental kinematics with residual forces and moments of less than 10 N and 18 Nm, respectively. The PSO algorithm was able to replicate the experimental kinematic data more closely and produce more dynamically consistent kinematic data for a sidestep cut compared to the SIMPSA algorithm. Future studies should use external optimization routines to determine dynamically consistent kinematic data and report the differences between experimental and model data for these musculoskeletal simulations.

  19. A Robust Method to Integrate End-to-End Mission Architecture Optimization Tools

    NASA Technical Reports Server (NTRS)

    Lugo, Rafael; Litton, Daniel; Qu, Min; Shidner, Jeremy; Powell, Richard

    2016-01-01

    End-to-end mission simulations include multiple phases of flight. For example, an end-to-end Mars mission simulation may include launch from Earth, interplanetary transit to Mars, and entry, descent, and landing. Each phase of flight is optimized to meet specified constraints and often depends on and impacts subsequent phases. The design and optimization tools and methodologies used to combine different aspects of the end-to-end framework and their impact on mission planning are presented. This work focuses on a robust implementation of a Multidisciplinary Design Analysis and Optimization (MDAO) method that offers the flexibility to quickly adapt to changing mission design requirements. Different simulations tailored to the liftoff, ascent, and atmospheric entry phases of a trajectory are integrated and optimized in the MDAO program Isight, which provides the user a graphical interface to link simulation inputs and outputs. This approach provides many advantages to mission planners, as it is easily adapted to different mission scenarios and can improve the understanding of the integrated system performance within a particular mission configuration. A Mars direct entry mission using the Space Launch System (SLS) is presented as a generic end-to-end case study. For the given launch period, the SLS launch performance is traded for improved orbit geometry alignment, resulting in an optimized net payload that is comparable to that in the SLS Mission Planner's Guide.

  20. Bidirectional optimization of the melting spinning process.

    PubMed

    Liang, Xiao; Ding, Yongsheng; Wang, Zidong; Hao, Kuangrong; Hone, Kate; Wang, Huaping

    2014-02-01

    A bidirectional optimization approach for the melting spinning process based on an immune-enhanced neural network is proposed. The proposed bidirectional model can not only reveal the internal nonlinear relationship between the process configuration and the quality indices of the fibers as final product, but also provide a tool for engineers to develop new fiber products with expected quality specifications. A neural network is taken as the basis for the bidirectional model, and an immune component is introduced to enlarge the searching scope of the solution field so that the neural network has a larger possibility of finding the appropriate and reasonable solution, and the error of prediction can therefore be eliminated. The proposed intelligent model can also help to determine what kind of process configuration should be made in order to produce satisfactory fiber products. To make the proposed model practical for manufacturing, a software platform is developed. Simulation results show that the proposed model can eliminate the approximation error raised by the neural network-based optimizing model, which is due to the extension of the focusing scope by the artificial immune mechanism. Meanwhile, the proposed model with the corresponding software can conduct optimization in two directions, namely, process optimization and category development, and the corresponding results outperform those with an ordinary neural network-based intelligent model. It is also proved that the proposed model has the potential to act as a valuable tool from which the engineers and decision makers of the spinning process could benefit.

  1. Optimal lightpath placement on a metropolitan-area network linked with optical CDMA local nets

    NASA Astrophysics Data System (ADS)

    Wang, Yih-Fuh; Huang, Jen-Fa

    2008-01-01

    A flexible optical metropolitan-area network (OMAN) [J.F. Huang, Y.F. Wang, C.Y. Yeh, Optimal configuration of OCDMA-based MAN with multimedia services, in: 23rd Biennial Symposium on Communications, Queen's University, Kingston, Canada, May 29-June 2, 2006, pp. 144-148] structured with OCDMA linkage is proposed to support multimedia services with multi-rate or various qualities of service. To prioritize transmissions in OCDMA, the orthogonal variable spreading factor (OVSF) codes widely used in wireless CDMA are adopted. In addition, for feasible multiplexing, unipolar OCDMA modulation [L. Nguyen, B. Aazhang, J.F. Young, All-optical CDMA with bipolar codes, IEEE Electron. Lett. 31 (6) (1995) 469-470] is used to generate the code selector of multi-rate OMAN, and a flexible fiber-grating-based system is used for the equipment on OCDMA-OVSF code. These enable an OMAN to assign suitable OVSF codes when creating different-rate lightpaths. How to optimally configure a multi-rate OMAN is a challenge because of displaced lightpaths. In this paper, a genetically modified genetic algorithm (GMGA) [L.R. Chen, Flexible fiber Bragg grating encoder/decoder for hybrid wavelength-time optical CDMA, IEEE Photon. Technol. Lett. 13 (11) (2001) 1233-1235] is used to preplan lightpaths in order to optimally configure an OMAN. To evaluate the performance of the GMGA, we compared it with different preplanning optimization algorithms. Simulation results revealed that the GMGA very efficiently solved the problem.
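
    The OVSF codes mentioned above are generated by a simple recursive tree: each code c at one layer spawns [c, c] and [c, -c] at the next, and all codes within a layer are mutually orthogonal (which is what lets different-rate lightpaths share the medium). A minimal sketch:

```python
import numpy as np

def ovsf_layer(depth):
    # Recursive OVSF tree: each code c spawns [c, c] and [c, -c];
    # layer `depth` contains 2**depth codes of length 2**depth.
    codes = [np.array([1])]
    for _ in range(depth):
        codes = [np.concatenate([c, s * c]) for c in codes for s in (1, -1)]
    return codes

codes = ovsf_layer(3)   # 8 mutually orthogonal spreading codes of length 8
```

Different spreading factors come from picking codes at different depths; within one layer, every pair of distinct codes has zero inner product.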

  2. On the Efficacy of Source Code Optimizations for Cache-Based Systems

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.

    1998-01-01

    Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates - as reported by a cache simulation tool, and confirmed by hardware counters - only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.
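
    The stride rule of thumb can be illustrated even at the numpy level (the kernels studied above are compiled CFD codes, so this is only an analogy): for a C-ordered array the inner loop should walk the last axis, which is contiguous in memory.

```python
import numpy as np

# C-ordered array: elements of each row are adjacent in memory.
a = np.arange(1_000_000, dtype=np.float64).reshape(1000, 1000)

def sum_row_major(m):
    # Unit-stride inner traversal: each row slice is contiguous,
    # the access pattern the locality rule of thumb favors.
    total = 0.0
    for i in range(m.shape[0]):
        total += m[i, :].sum()
    return total

def sum_col_major(m):
    # Inner traversal strides one full row per element: the same
    # arithmetic, but a cache-hostile pattern on C-ordered data.
    total = 0.0
    for j in range(m.shape[1]):
        total += m[:, j].sum()
    return total
```

Both routines compute the identical sum; only the memory access order differs, which is exactly the kind of variation whose performance impact the study found to be architecture-dependent.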

  3. On the Efficacy of Source Code Optimizations for Cache-Based Systems

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates-as reported by a cache simulation tool, and confirmed by hardware counters-only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.

  4. Simulation of groundwater flow and analysis of the effects of water-management options in the North Platte Natural Resources District, Nebraska

    USGS Publications Warehouse

    Peterson, Steven M.; Flynn, Amanda T.; Vrabel, Joseph; Ryter, Derek W.

    2015-08-12

    The calibrated groundwater-flow model was used with the Groundwater-Management Process for the 2005 version of the U.S. Geological Survey modular three-dimensional groundwater model, MODFLOW–2005, to provide a tool for the NPNRD to better understand how water-management decisions could affect stream base flows of the North Platte River at Bridgeport, Nebr., streamgage in a future period from 2008 to 2019 under varying climatic conditions. The simulation-optimization model was constructed to analyze the maximum increase in simulated stream base flow that could be obtained with the minimum amount of reductions in groundwater withdrawals for irrigation. A second analysis extended the first to analyze the simulated base-flow benefit of groundwater withdrawals along with application of intentional recharge, that is, water from canals being released into rangeland areas with sandy soils. With optimized groundwater withdrawals and intentional recharge, the maximum simulated stream base flow was 15–23 cubic feet per second (ft³/s) greater than with no management at all, or 10–15 ft³/s larger than with managed groundwater withdrawals only. These results indicate not only the amount that simulated stream base flow can be increased by these management options, but also the locations where the management options provide the most or least benefit to the simulated stream base flow. For the analyses in this report, simulated base flow was best optimized by reductions in groundwater withdrawals north of the North Platte River and in the western half of the area. Intentional recharge sites selected by the optimization had a complex distribution but were more likely to be closer to the North Platte River or its tributaries.
Future users of the simulation-optimization model will be able to modify the input files as to type, location, and timing of constraints, decision variables of groundwater withdrawals by zone, and other variables to explore other feasible management scenarios that may yield different increases in simulated future base flow of the North Platte River.

  5. Proposing "the burns suite" as a novel simulation tool for advancing the delivery of burns education.

    PubMed

    Sadideen, Hazim; Wilson, David; Moiemen, Naiem; Kneebone, Roger

    2014-01-01

    Educational theory highlights the importance of contextualized simulation for effective learning. We explored this concept in a burns scenario in a novel, low-cost, high-fidelity, portable, immersive simulation environment (referred to as distributed simulation). This contextualized simulation/distributed simulation combination was named "The Burns Suite" (TBS). A pediatric burn resuscitation scenario was selected after high trainee demand. It was based on Advanced Trauma Life Support and Emergency Management of Severe Burns principles and refined using expert opinion through cognitive task analysis. TBS contained "realism" props, briefed nurses, and a simulated patient. Novices and experts were recruited. Five-point Likert-type questionnaires were developed for face and content validity. Cronbach's α was calculated for scale reliability. Semistructured interviews captured responses for qualitative thematic analysis, allowing for data triangulation. Twelve participants completed the TBS scenario. Mean face and content validity ratings were high (4.6 and 4.5, respectively; range, 4-5). The internal consistency of questions was high. Qualitative data analysis revealed that participants felt 1) the experience was "real" and they were "able to behave as if in a real resuscitation environment," and 2) TBS "addressed what Advanced Trauma Life Support and Emergency Management of Severe Burns didn't" (including the efficacy of incorporating nontechnical skills). TBS provides a novel, effective simulation tool to significantly advance the delivery of burns education. Recreating clinical challenge is crucial to optimize simulation training. This low-cost approach also has major implications for surgical education, particularly during increasing financial austerity. Alternative scenarios and/or procedures can be recreated within TBS, providing a diverse educational immersive simulation experience.
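    Cronbach's α, used above for scale reliability, is a function of the item variances and the variance of the total score. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Perfectly consistent items give alpha = 1.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
alpha_perfect = cronbach_alpha(np.column_stack([x, x, x]))
```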

  6. Framework to evaluate the worth of hydraulic conductivity data for optimal groundwater resources management in ecologically sensitive areas

    NASA Astrophysics Data System (ADS)

    Feyen, Luc; Gorelick, Steven M.

    2005-03-01

    We propose a framework that combines simulation optimization with Bayesian decision analysis to evaluate the worth of hydraulic conductivity data for optimal groundwater resources management in ecologically sensitive areas. A stochastic simulation optimization management model is employed to plan regionally distributed groundwater pumping while preserving the hydroecological balance in wetland areas. Because predictions made by an aquifer model are uncertain, groundwater supply systems operate below maximum yield. Collecting data from the groundwater system can potentially reduce predictive uncertainty and increase safe water production. The price paid for improvement in water management is the cost of collecting the additional data. Efficient data collection using Bayesian decision analysis proceeds in three stages: (1) The prior analysis determines the optimal pumping scheme and profit from water sales on the basis of known information. (2) The preposterior analysis estimates the optimal measurement locations and evaluates whether each sequential measurement will be cost-effective before it is taken. (3) The posterior analysis then revises the prior optimal pumping scheme and consequent profit, given the new information. Stochastic simulation optimization employing a multiple-realization approach is used to determine the optimal pumping scheme in each of the three stages. The cost of new data must not exceed the expected increase in benefit obtained in optimal groundwater exploitation. An example based on groundwater management practices in Florida aimed at wetland protection showed that the cost of data collection more than paid for itself by enabling a safe and reliable increase in production.
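    The multiple-realization idea, that a pumping decision must honor the hydroecological constraint in every equally plausible aquifer realization, can be sketched with a hypothetical linear drawdown response (all numbers below are illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear response: wetland drawdown = a * pumping rate, with
# the coefficient a uncertain across K = 100 aquifer realizations.
a = rng.lognormal(mean=-2.0, sigma=0.3, size=100)
d_max = 0.5                                   # allowed drawdown (m)

# Multiple-realization constraint: drawdown must stay within d_max in
# EVERY realization, so the worst-case coefficient governs.
q_safe = d_max / a.max()
q_single = d_max / a.mean()                   # what an "average" model allows
```

    The gap between q_safe and q_single is exactly what new hydraulic conductivity data can buy back: reducing predictive uncertainty shrinks the spread of a and raises the safe yield.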

  7. Development of a Platform for Simulating and Optimizing Thermoelectric Energy Systems

    NASA Astrophysics Data System (ADS)

    Kreuder, John J.

    Thermoelectrics are solid-state devices that convert thermal energy directly into electrical energy. They have historically been used only in niche applications because of their relatively low efficiencies. With the advent of nanotechnology and improved manufacturing processes, thermoelectric materials have become less costly and more efficient. As next-generation thermoelectric materials become available, there is a need for industries to quickly and cost-effectively seek out feasible applications for thermoelectric heat recovery platforms. Determining the technical and economic feasibility of such systems requires a model that predicts performance at the system level. Current models focus on specific system applications or neglect the rest of the system altogether, focusing only on module design and not an entire energy system. To assist in screening and optimizing entire energy systems using thermoelectrics, a novel software tool, the Thermoelectric Power System Simulator (TEPSS), is developed for system-level simulation and optimization of heat recovery systems. The platform is designed for use with a generic energy system so that most types of thermoelectric heat recovery applications can be modeled. TEPSS is based on object-oriented programming in MATLAB®. A modular, shell-based architecture is developed to carry out concept generation, system simulation, and optimization. Systems are defined according to the components and interconnectivity specified by the user. An iterative solution process based on Newton's method is employed to determine the system's steady state so that an objective function representing the cost of the system can be evaluated at the operating point. An optimization algorithm from MATLAB's Optimization Toolbox uses sequential quadratic programming to minimize this objective function with respect to a set of user-specified design variables and constraints. 
During this iterative process many independent system simulations are executed and the optimal operating condition of the system is determined. A comprehensive guide to using the software platform is included. TEPSS is intended to be expandable so that users can add new types of components and implement component models with an adequate degree of complexity for a required application. Special steps are taken to ensure that the system of nonlinear algebraic equations presented in the system engineering model is square and that all equations are independent. In addition, the third party program FluidProp is leveraged to allow for simulations of systems with a range of fluids. Sequential unconstrained minimization techniques are used to prevent physical variables like pressure and temperature from trending to infinity during optimization. Two case studies are performed to verify and demonstrate the simulation and optimization routines employed by TEPSS. The first is of a simple combined cycle in which the size of the heat exchanger and fuel rate are optimized. The second case study is the optimization of geometric parameters of a thermoelectric heat recovery platform in a regenerative Brayton Cycle. A basic package of components and interconnections are verified and provided as well.
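    A steady-state solve of the kind described above amounts to Newton's method on a square residual system. A minimal sketch with a finite-difference Jacobian and a toy two-equation "component network" (the equations and names are illustrative, not the TEPSS model):

```python
import numpy as np

def newton_solve(residual, x0, tol=1e-10, max_iter=50, eps=1e-7):
    """Solve residual(x) = 0 for a square system by Newton's method,
    using a forward-difference Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((x.size, x.size))
        for j in range(x.size):
            dx = np.zeros(x.size)
            dx[j] = eps
            J[:, j] = (residual(x + dx) - r) / eps   # finite differences
        x = x - np.linalg.solve(J, r)                # Newton step
    return x

# Toy coupled balance equations with steady state (2, 1).
f = lambda x: np.array([x[0]**2 - 2.0*x[1] - 2.0, x[0]*x[1] - 2.0])
root = newton_solve(f, [1.0, 1.0])
```

    Keeping the system square and the equations independent, as the text emphasizes, is what guarantees the linear solve inside each Newton step is well posed.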

  8. Optimization of a secondary VOI protocol for lung imaging in a clinical CT scanner.

    PubMed

    Larsen, Thomas C; Gopalakrishnan, Vissagan; Yao, Jianhua; Nguyen, Catherine P; Chen, Marcus Y; Moss, Joel; Wen, Han

    2018-05-21

    We present a solution to meet an unmet clinical need of an in-situ "close look" at a pulmonary nodule or at the margins of a pulmonary cyst revealed by a primary (screening) chest CT while the patient is still in the scanner. We first evaluated options available on current whole-body CT scanners for high resolution screening scans, including ROI reconstruction of the primary scan data and HRCT, but found them to have insufficient SNR in lung tissue or discontinuous slice coverage. Within the capabilities of current clinical CT systems, we opted for the solution of a secondary, volume-of-interest (VOI) protocol where the radiation dose is focused into a short-beam axial scan at the z position of interest, combined with a small-FOV reconstruction at the xy position of interest. The objective of this work was to design a VOI protocol that is optimized for targeted lung imaging in a clinical whole-body CT system. Using a chest phantom containing a lung-mimicking foam insert with a simulated cyst, we identified the appropriate scan mode and optimized both the scan and recon parameters. The VOI protocol yielded 3.2 times the texture amplitude-to-noise ratio in the lung-mimicking foam when compared to the standard chest CT, and 8.4 times the texture difference between the lung mimicking and reference foams. It improved details of the wall of the simulated cyst and better resolution in a line-pair insert. The Effective Dose of the secondary VOI protocol was 42% on average and up to 100% in the worst-case scenario of VOI positioning relative to the standard chest CT. The optimized protocol will be used to obtain detailed CT textures of pulmonary lesions, which are biomarkers for the type and stage of lung diseases. Published 2018. This article is a U.S. Government work and is in the public domain in the USA.

  9. Hydroeconomic DSS for optimal hydrology-oriented forest management in semiarid areas

    NASA Astrophysics Data System (ADS)

    Garcia-Prats, A.; del Campo, A.; Pulido-Velazquez, M.

    2016-12-01

    In semiarid regions like the Mediterranean, managing the upper-catchment forests for water provision goals (hydrology-oriented silviculture) offers a strategy to increase the resilience of catchments to droughts and to the lower precipitation and higher evapotranspiration expected under climate change. Understanding the effects of forest management on vegetation water use and groundwater recharge is particularly important in those regions. Despite the essential role that forests play in the water cycle and the provision of water resources, this contribution is often neither quantified nor explicitly valued. The aim of this work is to develop a novel decision support system (DSS) based on hydro-economic modelling, for assessing and designing the optimal integrated forest and water management for forested catchments. Hydro-economic modelling may support the design of economically efficient strategies integrating the hydrologic, engineering, environmental and economic aspects of water resources systems within a coherent framework. The optimization model explicitly integrates changes in water yield (increase in groundwater recharge) induced by the management of forest density, and the value of the additional water provided to the system. This latter component could serve as an indicator for the design of a "payment for environmental services" scheme in which groundwater beneficiaries could contribute towards funding and promoting efficient forest management operations. Revenues from timber logging are also represented in the modelling. The case study was an Aleppo pine forest in south-western Valencia province (Spain), using a typical 100-year rotation horizon. The model determines the optimal schedule of thinning interventions in the stands in order to maximize the total net benefits in the system (timber and water). Canopy cover and biomass evolution over time were simulated using growth and yield allometric equations specific for the species in Mediterranean conditions. 
Silvicultural operation costs were modelled using local cost databases. Groundwater recharge was simulated using HYDRUS, calibrated and validated with data from the experimental plots. This research reveals the potential of integrated water and forest policies and encourages their application by governments and policy makers.

  10. A Model for Designing Adaptive Laboratory Evolution Experiments.

    PubMed

    LaCroix, Ryan A; Palsson, Bernhard O; Feist, Adam M

    2017-04-15

    The occurrence of mutations is a cornerstone of the evolutionary theory of adaptation, capitalizing on the rare chance that a mutation confers a fitness benefit. Natural selection is increasingly being leveraged in laboratory settings for industrial and basic science applications. Despite increasing deployment, there are no standardized procedures available for designing and performing adaptive laboratory evolution (ALE) experiments. Thus, there is a need to optimize the experimental design, specifically for determining when to consider an experiment complete and for balancing outcomes with available resources (i.e., laboratory supplies, personnel, and time). To design and to better understand ALE experiments, a simulator, ALEsim, was developed, validated, and applied to the optimization of ALE experiments. The effects of various passage sizes were experimentally determined and subsequently evaluated with ALEsim, to explain differences in experimental outcomes. Furthermore, a beneficial mutation rate of 10^-6.9 to 10^-8.4 mutations per cell division was derived. A retrospective analysis of ALE experiments revealed that passage sizes typically employed in serial passage batch culture ALE experiments led to inefficient production and fixation of beneficial mutations. ALEsim and the results described here will aid in the design of ALE experiments to fit the exact needs of a project while taking into account the resources required and will lower the barriers to entry for this experimental technique. IMPORTANCE ALE is a widely used scientific technique to increase scientific understanding, as well as to create industrially relevant organisms. The manner in which ALE experiments are conducted is highly manual and uniform, with little optimization for efficiency. Such inefficiencies result in suboptimal experiments that can take multiple months to complete. 
With the availability of automation and computer simulations, we can now perform these experiments in an optimized fashion and can design experiments to generate greater fitness in an accelerated time frame, thereby pushing the limits of what adaptive laboratory evolution can achieve. Copyright © 2017 American Society for Microbiology.
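    The core passage-size effect, that a small serial-passage bottleneck tends to discard rare beneficial mutants before they can fix, can be sketched with a simple Monte Carlo model (this is not ALEsim; all numbers are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

def passage_survival(n_mutants, pop_size, passage_size, n_trials=100_000):
    """Monte Carlo estimate of the probability that at least one mutant
    cell survives a serial-passage bottleneck of the given size."""
    f = n_mutants / pop_size                       # mutant frequency
    carried = rng.binomial(passage_size, f, size=n_trials)
    return (carried > 0).mean()

# 100 beneficial mutants in a 10^9-cell flask (hypothetical numbers):
p_small = passage_survival(100, 1e9, 10_000)       # small bottleneck
p_large = passage_survival(100, 1e9, 100_000_000)  # large bottleneck
```

    With a 10^4-cell passage the mutant lineage is almost always lost; with a 10^8-cell passage it is almost always carried forward, which is the trade-off between passage size and resources the abstract describes.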

  11. Efficiencies of joint non-local update moves in Monte Carlo simulations of coarse-grained polymers

    NASA Astrophysics Data System (ADS)

    Austin, Kieran S.; Marenz, Martin; Janke, Wolfhard

    2018-03-01

    In this study four update methods are compared in their performance in a Monte Carlo simulation of polymers in continuum space. The efficiencies of the update methods and combinations thereof are compared with the aid of the autocorrelation time with a fixed (optimal) acceptance ratio. Results are obtained for polymer lengths N = 14, 28 and 42 and temperatures below, at and above the collapse transition. In terms of autocorrelation, the optimal acceptance ratio is approximately 0.4. Furthermore, an overview of the step sizes of the update methods that correspond to this optimal acceptance ratio is given. This shall serve as a guide for future studies that rely on efficient computer simulations.
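    The integrated autocorrelation time used as the efficiency measure above can be estimated directly from a simulation time series. A minimal sketch with a fixed summation window, checked against an AR(1) process whose exact value is 0.5 + φ/(1 − φ) (the windowing choice is a simplification; production estimators use adaptive windows):

```python
import numpy as np

def integrated_autocorr_time(x, window):
    """tau_int = 1/2 + sum_{t=1}^{W} rho(t), with rho the normalized
    autocorrelation function and W a fixed summation window."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    acf = np.array([(x[:n - t] * x[t:]).mean() for t in range(window + 1)])
    rho = acf / acf[0]
    return 0.5 + rho[1:].sum()

# Check against an AR(1) chain: exact tau = 0.5 + phi/(1 - phi) = 9.5.
rng = np.random.default_rng(5)
phi, n = 0.9, 100_000
x = np.empty(n)
x[0] = rng.standard_normal()
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.standard_normal()
tau = integrated_autocorr_time(x, window=200)
```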

  12. A theoretical comparison of evolutionary algorithms and simulated annealing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, W.E.

    1995-08-28

    This paper theoretically compares the performance of simulated annealing and evolutionary algorithms. Our main result is that under mild conditions a wide variety of evolutionary algorithms can be shown to have greater performance than simulated annealing after a sufficiently large number of function evaluations. This class of EAs includes variants of evolution strategies and evolutionary programming, the canonical genetic algorithm, as well as a variety of genetic algorithms that have been applied to combinatorial optimization problems. The proof of this result is based on a performance analysis of a very general class of stochastic optimization algorithms, which has implications for the performance of a variety of other optimization algorithms.
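    For reference, the simulated-annealing baseline in such comparisons is the Metropolis acceptance rule under a cooling schedule. A minimal sketch with geometric cooling and an illustrative multimodal test function (not from the paper):

```python
import math
import random

def simulated_annealing(f, x0, neighbor, t0=1.0, cooling=0.995,
                        n_steps=5000, seed=0):
    """Minimal simulated annealing: Metropolis acceptance with a
    geometric cooling schedule t <- cooling * t."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(n_steps):
        y = neighbor(x, rng)
        fy = f(y)
        # Always accept downhill; accept uphill with Boltzmann probability.
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Multimodal 1-D test function with global minimum f(0) = 0.
f = lambda x: x * x + 10.0 * (1.0 - math.cos(x))
best, fbest = simulated_annealing(f, 8.0, lambda x, r: x + r.gauss(0, 0.5))
```

    The paper's comparison is about what happens as the number of function evaluations grows; population-based EAs replace the single state x with a population and the Boltzmann rule with selection.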

  13. OPTESIM, a Versatile Toolbox for Numerical Simulation of Electron Spin Echo Envelope Modulation (ESEEM) that Features Hybrid Optimization and Statistical Assessment of Parameters

    PubMed Central

    Sun, Li; Hernandez-Guzman, Jessica; Warncke, Kurt

    2009-01-01

    Electron spin echo envelope modulation (ESEEM) is a technique of pulsed electron paramagnetic resonance (EPR) spectroscopy. The analysis of ESEEM data to extract information about the nuclear and electronic structure of a disordered (powder) paramagnetic system requires accurate and efficient numerical simulations. A single coupled nucleus of known nuclear g value (gN) and spin I=1 can have up to eight adjustable parameters in the nuclear part of the spin Hamiltonian. We have developed OPTESIM, an ESEEM simulation toolbox, for automated numerical simulation of powder two- and three-pulse one-dimensional ESEEM for an arbitrary number (N) and type (I, gN) of coupled nuclei, and arbitrary mutual orientations of the hyperfine tensor principal axis systems for N>1. OPTESIM is based in the Matlab environment, and includes the following features: (1) a fast algorithm for translation of the spin Hamiltonian into simulated ESEEM, (2) different optimization methods that can be hybridized to achieve an efficient coarse-to-fine-grained search of the parameter space and convergence to a global minimum, (3) statistical analysis of the simulation parameters, which allows the identification of simultaneous confidence regions at specific confidence levels. OPTESIM also includes a geometry-preserving spherical averaging algorithm as default for N>1, and global optimization over multiple experimental conditions, such as the dephasing time (τ) for three-pulse ESEEM, and external magnetic field values. Application examples for simulation of 14N coupling (N=1, N=2) in biological and chemical model paramagnets are included. Automated, optimized simulations using OPTESIM converge on dramatically shorter time scales relative to manual simulations. PMID:19553148

  14. Workflow of the Grover algorithm simulation incorporating CUDA and GPGPU

    NASA Astrophysics Data System (ADS)

    Lu, Xiangwen; Yuan, Jiabin; Zhang, Weiwei

    2013-09-01

    The Grover quantum search algorithm, one of only a few representative quantum algorithms, can speed up many classical algorithms that use search heuristics. No true quantum computer has yet been developed. For the present, simulation is one effective means of verifying the search algorithm. In this work, we focus on the simulation workflow using a compute unified device architecture (CUDA). Two simulation workflow schemes are proposed. These schemes combine the characteristics of the Grover algorithm and the parallelism of general-purpose computing on graphics processing units (GPGPU). We also analyzed the optimization of memory space and memory access from this perspective. We implemented four programs on CUDA to evaluate the performance of schemes and optimization. Through experimentation, we analyzed the organization of threads suited to Grover algorithm simulations, compared the storage costs of the four programs, and validated the effectiveness of optimization. Experimental results also showed that the distinguished program on CUDA outperformed the serial program of libquantum on a CPU with a speedup of up to 23 times (12 times on average), depending on the scale of the simulation.
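    A dense state-vector simulation of Grover's algorithm, the kind these CUDA schemes parallelize, fits in a few lines: prepare the uniform superposition, then repeat the oracle phase flip and inversion about the mean the textbook ⌊π/4·√N⌋ times. A minimal NumPy sketch:

```python
import numpy as np

def grover_search(n_qubits, marked, n_iter=None):
    """Dense state-vector simulation of Grover's algorithm: start in the
    uniform superposition, then repeat (oracle phase flip, inversion
    about the mean) floor(pi/4 * sqrt(N)) times."""
    N = 2 ** n_qubits
    if n_iter is None:
        n_iter = int(np.pi / 4 * np.sqrt(N))
    psi = np.full(N, 1.0 / np.sqrt(N))
    for _ in range(n_iter):
        psi[marked] = -psi[marked]      # oracle: flip sign of marked item
        psi = 2.0 * psi.mean() - psi    # diffusion: inversion about the mean
    return psi ** 2                     # measurement probabilities

probs = grover_search(10, marked=123)   # N = 1024 amplitudes
```

    The state vector doubles with every qubit, which is why memory layout and access patterns dominate the GPU implementation effort described above.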

  15. Simulation and optimization of a coking wastewater biological treatment process by activated sludge models (ASM).

    PubMed

    Wu, Xiaohui; Yang, Yang; Wu, Gaoming; Mao, Juan; Zhou, Tao

    2016-01-01

    Applications of activated sludge models (ASM) in simulating industrial biological wastewater treatment plants (WWTPs) are still difficult due to refractory and complex components in influents as well as diversity in activated sludges. In this study, an ASM3 modeling study was conducted to simulate and optimize a practical coking wastewater treatment plant (CWTP). First, respirometric characterizations of the coking wastewater and CWTP biomasses were conducted to determine the specific kinetic and stoichiometric model parameters for the consecutive aeration-anoxic-aeration (O-A/O) biological process. All ASM3 parameters have been further estimated and calibrated, through cross validation by the model dynamic simulation procedure. Consequently, an ASM3 model was successfully established to accurately simulate the CWTP performances in removing COD and NH4-N. An optimized CWTP operation condition could be proposed, reducing the operation cost from 6.2 to 5.5 €/m³ wastewater. This study is expected to provide a useful reference for mathematic simulations of practical industrial WWTPs. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Parallel tempering simulation of the three-dimensional Edwards-Anderson model with compact asynchronous multispin coding on GPU

    NASA Astrophysics Data System (ADS)

    Fang, Ye; Feng, Sheng; Tam, Ka-Ming; Yun, Zhifeng; Moreno, Juana; Ramanujam, J.; Jarrell, Mark

    2014-10-01

    Monte Carlo simulations of the Ising model play an important role in the field of computational statistical physics, and they have revealed many properties of the model over the past few decades. However, the effect of frustration due to random disorder, in particular the possible spin glass phase, remains a crucial but poorly understood problem. One of the obstacles in the Monte Carlo simulation of random frustrated systems is their long relaxation time making an efficient parallel implementation on state-of-the-art computation platforms highly desirable. The Graphics Processing Unit (GPU) is such a platform that provides an opportunity to significantly enhance the computational performance and thus gain new insight into this problem. In this paper, we present optimization and tuning approaches for the CUDA implementation of the spin glass simulation on GPUs. We discuss the integration of various design alternatives, such as GPU kernel construction with minimal communication, memory tiling, and look-up tables. We present a binary data format, Compact Asynchronous Multispin Coding (CAMSC), which provides an additional 28.4% speedup compared with the traditionally used Asynchronous Multispin Coding (AMSC). Our overall design sustains a performance of 33.5 ps per spin flip attempt for simulating the three-dimensional Edwards-Anderson model with parallel tempering, which significantly improves the performance over existing GPU implementations.
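    The idea behind multispin coding is to pack one spin from each of many replicas into the bits of a machine word, so a single bitwise operation processes the same lattice site in all replicas at once. A minimal sketch for a 1-D periodic Ising chain, showing the energy bookkeeping only (no Metropolis update, and nothing GPU-specific):

```python
import numpy as np

# Bit k of word spins[i] holds spin i of replica k (0 = down, 1 = up),
# so one 64-bit XOR processes the same bond in 64 replicas at once.
rng = np.random.default_rng(3)
L = 16                                          # sites in a periodic chain
spins = rng.integers(0, 2**64, size=L, dtype=np.uint64)

def bond_disagreements(spins):
    """XOR with the left-rotated lattice: bit k of word i is 1 exactly
    where replica k has an antiparallel (unsatisfied) bond (i, i+1)."""
    return spins ^ np.roll(spins, -1)

unsat = bond_disagreements(spins)
# Per-replica energy of the +/-1 Ising chain: E = 2 * #unsatisfied - L.
energy_replica0 = 2 * sum(int(w) & 1 for w in unsat) - L
```

    Asynchronous variants like the AMSC/CAMSC formats above refine how these bit-planes are laid out and updated so that replicas need not move in lockstep.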

  17. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    DOE PAGES

    Xi, Maolong; Lu, Dan; Gui, Dongwei; ...

    2016-11-27

    Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and making reasonable agricultural management decisions. However, calibration of the agricultural-hydrological system models is challenging because of model complexity, the existence of strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid-based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and then we calibrated the surrogate model using the global optimization algorithm Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial with fast evaluation, it can be efficiently evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrated seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method can achieve a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.
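    The surrogate strategy, i.e. spend a few expensive model runs fitting a cheap polynomial and then run a population-based global search on the polynomial, can be sketched in one dimension. Plain PSO stands in for QPSO here, and the test function is a hypothetical stand-in, not RZWQM2:

```python
import numpy as np

rng = np.random.default_rng(4)

def expensive_model(x):          # hypothetical stand-in for an RZWQM2 run
    return (x - 0.3) ** 2 + 0.1 * np.sin(8.0 * x)

# 1) Spend only 9 "expensive" runs to fit a cheap polynomial surrogate.
xs = np.linspace(0.0, 1.0, 9)
surrogate = np.polynomial.Polynomial.fit(xs, expensive_model(xs), deg=6)

# 2) Globally search the surrogate with a minimal particle swarm (the
#    paper uses the quantum-behaved variant, QPSO; plain PSO shown).
n, w, c1, c2 = 30, 0.7, 1.5, 1.5
x = rng.uniform(0.0, 1.0, n)
v = np.zeros(n)
pbest, pval = x.copy(), surrogate(x)
for _ in range(200):
    gbest = pbest[pval.argmin()]
    v = (w * v + c1 * rng.uniform(size=n) * (pbest - x)
               + c2 * rng.uniform(size=n) * (gbest - x))
    x = np.clip(x + v, 0.0, 1.0)
    fx = surrogate(x)
    better = fx < pval
    pbest[better], pval[better] = x[better], fx[better]
best = pbest[pval.argmin()]      # surrogate minimizer
```

    The 6000 surrogate evaluations inside the swarm loop cost essentially nothing next to the 9 model runs, which is the economics the abstract describes.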

  18. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    NASA Astrophysics Data System (ADS)

    Xi, Maolong; Lu, Dan; Gui, Dongwei; Qi, Zhiming; Zhang, Guannan

    2017-01-01

    Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and making reasonable agricultural management decisions. However, calibration of the agricultural-hydrological system models is challenging because of model complexity, the existence of strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid-based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and then we calibrated the surrogate model using the global optimization algorithm Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial with fast evaluation, it can be efficiently evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrated seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method can achieve a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.

  19. On-orbit servicing system assessment and optimization methods based on lifecycle simulation under mixed aleatory and epistemic uncertainties

    NASA Astrophysics Data System (ADS)

    Yao, Wen; Chen, Xiaoqian; Huang, Yiyong; van Tooren, Michel

    2013-06-01

    To assess the on-orbit servicing (OOS) paradigm and optimize its utilities by taking advantage of its inherent flexibility and responsiveness, the OOS system assessment and optimization methods based on lifecycle simulation under uncertainties are studied. The uncertainty sources considered in this paper include both the aleatory (random launch/OOS operation failure and on-orbit component failure) and the epistemic (the unknown trend of the end-used market price) types. Firstly, the lifecycle simulation under uncertainties is discussed. The chronological flowchart is presented. The cost and benefit models are established, and the uncertainties thereof are modeled. The dynamic programming method to make optimal decision in face of the uncertain events is introduced. Secondly, the method to analyze the propagation effects of the uncertainties on the OOS utilities is studied. With combined probability and evidence theory, a Monte Carlo lifecycle Simulation based Unified Uncertainty Analysis (MCS-UUA) approach is proposed, based on which the OOS utility assessment tool under mixed uncertainties is developed. Thirdly, to further optimize the OOS system under mixed uncertainties, the reliability-based optimization (RBO) method is studied. To alleviate the computational burden of the traditional RBO method which involves nested optimum search and uncertainty analysis, the framework of Sequential Optimization and Mixed Uncertainty Analysis (SOMUA) is employed to integrate MCS-UUA, and the RBO algorithm SOMUA-MCS is developed. Fourthly, a case study on the OOS system for a hypothetical GEO commercial communication satellite is investigated with the proposed assessment tool. Furthermore, the OOS system is optimized with SOMUA-MCS. Lastly, some conclusions are given and future research prospects are highlighted.

  20. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xi, Maolong; Lu, Dan; Gui, Dongwei

    Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and for informing agricultural management. However, calibration of agricultural-hydrological system models is challenging because of model complexity, strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate of the actual RZWQM2, and then calibrated the surrogate model using a global optimization algorithm, Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial that is fast to evaluate, it can be evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrated seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method achieves a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.
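
    The surrogate workflow described above can be sketched on a hypothetical one-parameter "simulator" (not RZWQM2): a few expensive runs build a cheap polynomial surrogate, which is then searched exhaustively in place of QPSO.

```python
# Sketch of surrogate-based calibration on a hypothetical one-parameter
# model (not RZWQM2): run the "expensive" simulator at a handful of
# nodes, build a cheap polynomial surrogate by Lagrange interpolation,
# then search the surrogate densely. A sparse grid and QPSO play these
# two roles in the paper.
def expensive_model(p):
    # Stand-in for a costly simulator run; misfit is lowest at p = 0.3.
    return (p - 0.3) ** 2 + 0.05

xs = [0.0, 0.5, 1.0]                      # the few "expensive" runs
ys = [expensive_model(x) for x in xs]

def surrogate(p, xs, ys):
    # Polynomial through the sampled points (Lagrange form).
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (p - xj) / (xi - xj)
        total += yi * li
    return total

# Optimize the cheap surrogate exhaustively (a global optimizer such as
# QPSO would be used at scale).
grid = [i / 1000 for i in range(1001)]
best_p = min(grid, key=lambda p: surrogate(p, xs, ys))
print(f"surrogate optimum near p = {best_p:.3f} "
      f"(true optimum 0.3) from {len(xs)} expensive runs")
```

    Here three expensive runs suffice because the toy misfit is itself quadratic; a real calibration needs enough nodes for the surrogate to resolve the response surface.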

  1. Investigation of the Carbon Arc Source as an AM0 Solar Simulator for Use in Characterizing Multi-Junction Solar Cells

    NASA Technical Reports Server (NTRS)

    Xu, Jianzeng; Woodyward, James R.

    2005-01-01

    The operation of multi-junction solar cells used for production of space power is critically dependent on the spectral irradiance of the illuminating light source. Unlike single-junction cells, where the spectral irradiance of the simulator and computational techniques may be used to optimize cell designs, optimization of multi-junction solar cell designs requires a solar simulator with a spectral irradiance that closely matches AM0.

  2. Stochastic search in structural optimization - Genetic algorithms and simulated annealing

    NASA Technical Reports Server (NTRS)

    Hajela, Prabhat

    1993-01-01

    An account is given of illustrative applications of genetic algorithms and simulated annealing methods in structural optimization. The advantages of such stochastic search methods over traditional mathematical programming strategies are emphasized; it is noted that these methods offer a significantly higher probability of locating the global optimum in a multimodal design space. Both genetic search and simulated annealing can be effectively used in problems with a mix of continuous, discrete, and integer design variables.
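
    As a minimal illustration of the simulated-annealing idea (with an illustrative objective, step size, and cooling schedule not taken from the paper):

```python
import math, random

random.seed(0)

def multimodal(x):
    # 1-D multimodal test objective: deep global minimum at x = 0,
    # shallower local minima near every other integer.
    return x * x + 10 * (1 - math.cos(2 * math.pi * x))

# Minimal simulated-annealing sketch: uphill moves are accepted with
# Boltzmann probability, which lets the search escape local minima that
# would trap a pure descent method.
x = 4.0
fx = multimodal(x)
best_x, best_f = x, fx
temp = 10.0
for _ in range(20000):
    cand = x + random.uniform(-0.5, 0.5)
    fc = multimodal(cand)
    if fc < fx or random.random() < math.exp((fx - fc) / temp):
        x, fx = cand, fc
        if fx < best_f:
            best_x, best_f = x, fx
    temp *= 0.9995                      # geometric cooling

print(f"best solution found: x = {best_x:.3f}, f = {best_f:.3f}")
```

    The same skeleton extends to structural design variables by replacing the perturbation with one that flips discrete or integer choices, which is precisely the mixed-variable flexibility the abstract highlights.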

  3. Optimized deformation behavior of a dielectric elastomer generator

    NASA Astrophysics Data System (ADS)

    Foerster, Florentine; Schlaak, Helmut F.

    2014-03-01

    Dielectric elastomer generators (DEGs) produce electrical energy by converting mechanical into electrical energy. Efficient operation requires an optimal deformation of the DEG during the energy harvesting cycle. However, the deformation must be produced by an external load applied to the DEG. The deformation behavior of the DEG depends on the type of mechanical interconnection between the elastic DEG and a stiff support area. Maximizing the capacitance of the DEG in the deformed state leads to the maximum absolute energy gain. Therefore, several configurations of mechanical interconnections between a single DEG module, as well as multiple stacked DEG modules, and stiff supports are investigated in order to find the optimal mechanical interconnection. The investigation is done with numerical simulations using the FEM software ANSYS. A DEG module consists of 50 active dielectric layers with a single-layer thickness of 50 μm. The elastomer material is silicone (PDMS), while the compliant electrodes are made of graphite powder. The simulation includes the real material parameters of the PDMS and the graphite electrodes so that simulation results can be compared with future experimental investigations. The numerical simulations of the several configurations are carried out as coupled electro-mechanical simulations for the first step in an energy harvesting cycle with constant external load strain. The simulation results are discussed and an optimal mechanical interconnection between DEG modules and stiff supports is derived.

  4. Modeling hospital surgical delivery process design using system simulation: optimizing patient flow and bed capacity as an illustration.

    PubMed

    Kumar, Sameer

    2011-01-01

    It is increasingly recognized that hospital operation is an intricate system with limited resources and many interacting sources of both positive and negative feedback. The purpose of this study is to design a surgical delivery process for a county hospital in the U.S. in which patient flow through a surgical ward is optimized. System simulation modeling is used to address questions of capacity planning, throughput management, and interacting resources, which constitute the constantly changing complexity that characterizes designing a contemporary surgical delivery process in a hospital. The steps in building a system simulation model are demonstrated using the example of building a county hospital in a small U.S. city, which illustrates modular system simulation modeling of patient surgery process flows. The resulting model shows planners and designers how to build overall efficiencies into a healthcare facility through optimal bed capacity for peak patient flow of emergency and routine patients.
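
    The bed-capacity question can be sketched as a tiny discrete-event loss model. The arrival and length-of-stay rates below are hypothetical, not the study's data; patients are diverted when every bed is full.

```python
import heapq, random

random.seed(42)

# Minimal discrete-event sketch of patient flow through a surgical ward:
# patients arrive at random, occupy one of `n_beds` beds for a random
# stay, and are diverted if every bed is full.
def simulate_ward(n_beds, arrival_rate=2.0, mean_stay=1.0, horizon=10000.0):
    t = 0.0
    busy = []                   # min-heap of bed release times
    admitted = diverted = 0
    while t < horizon:
        t += random.expovariate(arrival_rate)       # next patient arrives
        while busy and busy[0] <= t:                # release finished beds
            heapq.heappop(busy)
        if len(busy) < n_beds:
            heapq.heappush(busy, t + random.expovariate(1.0 / mean_stay))
            admitted += 1
        else:
            diverted += 1
    return diverted / (admitted + diverted)

# Capacity-planning sweep: smallest ward keeping diversions under 5%.
for beds in range(1, 9):
    loss = simulate_ward(beds)
    if loss < 0.05:
        print(f"{beds} beds -> diversion rate {loss:.1%}")
        break
```

    With these rates the offered load is 2 erlangs, so the sweep typically settles on a five-bed ward; the analytic Erlang-B loss formula could serve as a cross-check on the simulation.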

  5. Numerical and experimental analysis of high frequency acoustic microscopy and infrared reflectance system for early detection of melanoma

    NASA Astrophysics Data System (ADS)

    Karagiannis, Georgios; Apostolidis, Georgios; Georgoulias, Panagiotis

    2016-03-01

    Melanoma is a highly malignant type of cancer: it metastasizes early, so late diagnosis leads to death. Consequently, early diagnosis of melanoma and its removal is considered the most effective treatment. We present the design of a high-frequency acoustic microscopy and infrared reflectance system for the early detection of melanoma; specifically, identification of the morphological changes related to carcinogenesis is required. In this work, we simulate the propagation of ultrasonic waves of the order of 100 MHz, as well as of electromagnetic waves of the order of 100 THz, in melanoma structures, aiming at the estimation and optimization of the basic characteristics of the systems. The simulation results for the acoustic microscopy subsystem provide information such as the geometry of the transducer, the center frequency of operation, the focal length where the power transmittance is optimum, and the spot size at the focal length. For the infrared subsystem, the optimal frequency range and the spot illumination size of the external probe are provided. This information is then used to assemble a properly designed system, which is applied to melanoma phantoms as well as real skin lesions. Finally, the measurement data are visualized to reveal information about the examined structures, demonstrating noteworthy accuracy.

  6. Computer Simulation of a Multiaxis Air-to-Air Tracking Task Using the Optimal Pilot Control Model.

    DTIC Science & Technology

    1982-12-01

    [Only report front matter survives in this record; the abstract text is not available. Recoverable contents headings: Chapter 1, Introduction (1.1 Motivation...); Chapter 2 (2.2 Optimal Pilot Control Model and Control Synthesis; 2.3 Pitch Tracking Task; 2.4 Multiaxis...); Chapter 3, Simulation System (3.1 Introduction; 3.2 System Hardware).]

  7. A Simulation Optimization Approach to Epidemic Forecasting

    PubMed Central

    Nsoesie, Elaine O.; Beckman, Richard J.; Shashaani, Sara; Nagaraj, Kalyani S.; Marathe, Madhav V.

    2013-01-01

    Reliable forecasts of influenza can aid in the control of both seasonal and pandemic outbreaks. We introduce a simulation optimization (SIMOP) approach for forecasting the influenza epidemic curve. This study represents the final step of a project aimed at using a combination of simulation, classification, statistical and optimization techniques to forecast the epidemic curve and infer underlying model parameters during an influenza outbreak. The SIMOP procedure combines an individual-based model and the Nelder-Mead simplex optimization method. The method is used to forecast epidemics simulated over synthetic social networks representing Montgomery County in Virginia, Miami, Seattle and surrounding metropolitan regions. The results are presented for the first four weeks. Depending on the synthetic network, the peak time could be predicted within a 95% CI as early as seven weeks before the actual peak. The peak infected and total infected were also accurately forecasted for Montgomery County in Virginia within the forecasting period. Forecasting of the epidemic curve for both seasonal and pandemic influenza outbreaks is a complex problem, however this is a preliminary step and the results suggest that more can be achieved in this area. PMID:23826222
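
    The Nelder-Mead step of the SIMOP procedure can be sketched on a toy version of the curve-fitting task: a textbook two-parameter simplex search (not the authors' code) fitting the peak week and height of a bell-shaped epidemic curve to synthetic weekly counts.

```python
import math

# Textbook Nelder-Mead simplex in two parameters (reflect, expand,
# contract, shrink), applied to a toy epidemic-curve fit.
def model(t, peak_t, height):
    # Hypothetical bell-shaped epidemic curve.
    return height * math.exp(-0.5 * (t - peak_t) ** 2)

weeks = list(range(10))
observed = [model(t, 5.0, 100.0) for t in weeks]      # synthetic "data"

def misfit(p):
    return sum((model(t, p[0], p[1]) - o) ** 2
               for t, o in zip(weeks, observed))

def nelder_mead(f, start, step=1.0, iters=200):
    pts = [list(start),
           [start[0] + step, start[1]],
           [start[0], start[1] + step]]
    for _ in range(iters):
        pts.sort(key=f)
        best, mid, worst = pts
        cen = [(best[0] + mid[0]) / 2, (best[1] + mid[1]) / 2]
        refl = [2 * cen[0] - worst[0], 2 * cen[1] - worst[1]]
        if f(refl) < f(best):                          # try expanding
            expd = [3 * cen[0] - 2 * worst[0], 3 * cen[1] - 2 * worst[1]]
            pts[2] = expd if f(expd) < f(refl) else refl
        elif f(refl) < f(mid):                         # accept reflection
            pts[2] = refl
        else:                                          # contract or shrink
            contr = [(cen[0] + worst[0]) / 2, (cen[1] + worst[1]) / 2]
            if f(contr) < f(worst):
                pts[2] = contr
            else:
                pts = [best,
                       [(best[0] + mid[0]) / 2, (best[1] + mid[1]) / 2],
                       [(best[0] + worst[0]) / 2, (best[1] + worst[1]) / 2]]
    pts.sort(key=f)
    return pts[0]

peak_t, height = nelder_mead(misfit, [2.0, 50.0])
print(f"fitted peak week = {peak_t:.2f}, fitted peak height = {height:.1f}")
```

    In the SIMOP setting, `misfit` would instead compare the individual-based model's simulated epidemic curve against the observed surveillance counts, making each function evaluation a full simulation run.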

  9. Development of cost-effective surfactant flooding technology, Quarterly report, October 1995--December 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, G.A.; Sepehrnoori, K.

    1995-12-31

    The objective of this research is to develop cost-effective surfactant flooding technology by using simulation studies to evaluate and optimize alternative design strategies, taking into account reservoir characteristics, process chemistry, and process design options such as horizontal wells. Task 1 is the development of an improved numerical method for our simulator that will enable us to solve a wider class of these difficult simulation problems accurately and affordably. Task 2 is the application of this simulator to the optimization of surfactant flooding to reduce its risk and cost. In this quarter, we continued working on Task 2 to optimize surfactant flooding design and added economic analysis to the optimization process. An economic model was developed using a spreadsheet and the discounted cash flow (DCF) method of economic analysis. The model was designed specifically for a domestic onshore surfactant flood and has been used to economically evaluate previous work that used a technical approach to optimization. The DCF model outputs common economic decision-making criteria, such as net present value (NPV), internal rate of return (IRR), and payback period.
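
    The three DCF criteria named above are straightforward to compute. The sketch below uses hypothetical cash flows, not the project's data, with IRR found by simple bisection.

```python
# Spreadsheet-style DCF criteria: NPV, IRR by bisection, and simple
# payback period, on an illustrative project cash-flow stream.
def npv(rate, cashflows):
    # cashflows[0] is the year-0 investment (negative), then yearly flows.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=0.0, hi=1.0, tol=1e-6):
    # Bisection on the discount rate; assumes NPV crosses zero in [lo, hi].
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback_period(cashflows):
    # First year in which cumulative cash flow turns non-negative.
    cumulative = 0.0
    for t, cf in enumerate(cashflows):
        cumulative += cf
        if cumulative >= 0:
            return t
    return None                      # never pays back within the horizon

flows = [-1000.0, 300.0, 400.0, 400.0, 300.0]   # hypothetical flood project
print(f"NPV @10% = {npv(0.10, flows):.1f}, "
      f"IRR = {irr(flows):.1%}, payback = {payback_period(flows)} yr")
```

    A positive NPV at the chosen discount rate, an IRR above the hurdle rate, and an acceptable payback period are the usual go/no-go screens such a model supports.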

  10. Comparative simulation study of chemical synthesis of functional DADNE material.

    PubMed

    Liu, Min Hsien; Liu, Chuan Wen

    2017-01-01

    Amorphous molecular simulation to model the reaction species in the synthesis of the chemically inert and energetic explosive material 1,1-diamino-2,2-dinitroethene (DADNE) was performed in this work. Nitromethane was selected as the starting reactant to undergo halogenation, nitration, deprotonation, intermolecular condensation, and dehydration to produce the target DADNE product. The Materials Studio (MS) Forcite program allowed fast energy calculations and reliable geometric optimization of all aqueous molecular reaction systems (0.1-0.5 M) at 283 K and 298 K. The MS Forcite-computed and Gaussian polarizable continuum model (PCM)-computed results were analyzed and compared in order to explore feasible reaction pathways under suitable conditions for the synthesis of DADNE. The theoretical simulation revealed that the synthesis is possible and that, according to the MS-calculated energy barriers at each stage at 283 K shown by the reaction profiles, a total energy barrier of 449.6 kJ mol-1 needs to be overcome to carry out the reaction. Local analysis of intermolecular interaction, together with calculation of the stabilization energy of each reaction system, provided information that can be used as a reference regarding molecular integrated stability. Graphical Abstract: Materials Studio software has been suggested for the computation and simulation of DADNE synthesis.

  11. Measurement and prediction of indoor air quality using a breathing thermal manikin.

    PubMed

    Melikov, A; Kaczmarczyk, J

    2007-02-01

    The analyses performed in this paper reveal that a breathing thermal manikin with realistic simulation of respiration (breathing cycle, pulmonary ventilation rate, frequency and breathing mode, gas concentration, humidity and temperature of exhaled air, and human body shape and surface temperature) is sensitive enough to perform reliable measurement of the characteristics of air as inhaled by occupants. The temperature, humidity, and pollution concentration in the inhaled air can be measured accurately with a thermal manikin without breathing simulation if they are measured at the upper lip at a distance of <0.01 m from the face. Body surface temperature, shape, and posture, as well as clothing insulation, have an impact on the measured inhaled air parameters. Proper simulation of breathing, especially of exhalation, is needed for studying the transport of exhaled air between occupants. A method for predicting air acceptability based on inhaled air parameters and known exposure-response relationships established in experiments with human subjects is suggested. Recommendations are outlined for optimal simulation of human breathing by means of a breathing thermal manikin when studying pollution concentration, temperature, and humidity of the inhaled air, as well as the transport of exhaled air (which may carry infectious agents) between occupants. In order to compare results obtained with breathing thermal manikins, their nose and mouth geometry should be standardized.

  12. Design and fabrication of vibration based energy harvester using microelectromechanical system piezoelectric cantilever for low power applications.

    PubMed

    Kim, Moonkeun; Lee, Sang-Kyun; Yang, Yil Suk; Jeong, Jaehwa; Min, Nam Ki; Kwon, Kwang-Ho

    2013-12-01

    We fabricated dual-beam cantilevers on the microelectromechanical system (MEMS) scale with an integrated Si proof mass. A Pb(Zr,Ti)O3 (PZT) cantilever was designed as a mechanical vibration energy-harvesting system for low-power applications. The resonant frequencies of the multilayer composite cantilevers were simulated using the finite element method (FEM), with parametric analysis carried out in the design process. According to the simulations, the resonant frequency, voltage, and average power of a dual-beam cantilever were 69.1 Hz, 113.9 mV, and 0.303 microW, respectively, at optimal resistance and 0.5 g (gravitational acceleration, m/s2). Based on these data, we subsequently fabricated devices using dual-beam cantilevers. The harvested power density of the dual-beam cantilever compared favorably with the simulation. Experiments revealed the resonant frequency, voltage, and average power density to be 78.7 Hz, 118.5 mV, and 0.34 microW, respectively. The error between the measured and simulated results was about 10%. The maximum average power and power density of the fabricated dual-beam cantilever at 1 g were 0.803 microW and 1322.80 microW cm-3, respectively. Furthermore, the possibility of a MEMS-scale power source for energy conversion experiments was also tested.

  13. Model-based evaluation of struvite recovery from an in-line stripper in a BNR process (BCFS).

    PubMed

    Hao, X D; van Loosdrecht, M C M

    2006-01-01

    Phosphate removal and recovery can be combined in BNR processes. This may be realised by struvite precipitation from the supernatant of the sludge in anaerobic compartments, which can be beneficial either for improving bio-P removal effluent quality or for lowering the influent COD/P ratio required for bio-P removal. For this reason, a patented BNR process, BCFS, was developed and applied in The Netherlands. Several questions relating to P-recovery and the behaviour of the system remain unclear and need to be ascertained; for this purpose, a modelling technique was employed in this study. With the help of a previously developed model describing carbon oxidation and nutrient removal, three cases were fully simulated. The simulations demonstrated that there is an optimal stripping flow rate: if it is exceeded, P-recovery costs increase and bio-P activity may be negatively affected due to decreased bio-P efficiency. The simulations indicated that the minimal COD(biod)/P ratio required to meet the effluent standard (1 g P/m3) could be lowered from 20 to 10 with 36% P-recovery. A simulation with dynamic inflow revealed that the dynamic influent loads slightly affected the anaerobic supernatant phosphate concentration, but the effluent phosphate concentration would not be affected with regular P-recovery.

  14. Simulation and Optimization Methods for Assessing the Impact of Aviation Operations on the Environment

    NASA Technical Reports Server (NTRS)

    Sridhar, Banavar; Chen, Neil; Ng, Hok K.

    2010-01-01

    There is increased awareness of the anthropogenic factors affecting climate change and urgency to slow their negative impact. Greenhouse gases, oxides of nitrogen, and contrails resulting from aviation affect the climate in different and uncertain ways. This paper develops a flexible simulation and optimization software architecture to study the trade-offs involved in reducing emissions. The software environment is used to analyze two approaches for avoiding contrails using the concepts of contrail frequency index and optimal avoidance trajectories.

  15. Stochastic Optimization for an Analytical Model of Saltwater Intrusion in Coastal Aquifers

    PubMed Central

    Stratis, Paris N.; Karatzas, George P.; Papadopoulou, Elena P.; Zakynthinaki, Maria S.; Saridakis, Yiannis G.

    2016-01-01

    The present study implements a stochastic optimization technique to optimally manage freshwater pumping from coastal aquifers. Our simulations utilize the well-known sharp interface model for saltwater intrusion in coastal aquifers together with its known analytical solution. The objective is to maximize the total volume of freshwater pumped by the wells from the aquifer while, at the same time, protecting the aquifer from saltwater intrusion. Toward handling this problem in real time, the ALOPEX stochastic optimization method is used to optimize the pumping rates of the wells, coupled with a penalty-based strategy that keeps the saltwater front at a safe distance from the wells. Several numerical optimization results that simulate a known real aquifer case are presented. The results explore the computational performance of the chosen stochastic optimization method as well as its ability to manage freshwater pumping in real aquifer environments. PMID:27689362
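
    The ALOPEX update is a correlation rule: each parameter is nudged in the direction that recently correlated with an improving objective, plus noise. The sketch below applies it to a toy two-well pumping problem with a penalty term standing in for the saltwater-front constraint; the objective and all numbers are hypothetical, not the aquifer model.

```python
import random

random.seed(7)

# Toy objective: maximize total pumping from two wells, with a mild
# preference for balanced rates and a penalty once total pumping exceeds
# a feasible capacity (standing in for the saltwater-front constraint).
def objective(q1, q2, capacity=10.0):
    penalty = 10.0 * max(0.0, q1 + q2 - capacity) ** 2
    return q1 + q2 - 0.05 * (q1 - q2) ** 2 - penalty

q, prev_q = [2.0, 3.0], [1.9, 2.9]
prev_b = objective(*prev_q)
best_q, best_b = q[:], objective(*q)
step = 0.05
for _ in range(3000):
    b = objective(*q)
    if b > best_b:
        best_q, best_b = q[:], b
    new_q = []
    for i in range(2):
        # ALOPEX rule: move parameter i the same way as last time if the
        # objective improved, the opposite way if it worsened.
        corr = (q[i] - prev_q[i]) * (b - prev_b)
        direction = 1.0 if corr > 0 else -1.0
        new_q.append(q[i] + step * direction + random.uniform(-step, step))
    prev_q, prev_b, q = q, b, new_q

print(f"best pumping rates ~ ({best_q[0]:.2f}, {best_q[1]:.2f}), "
      f"objective {best_b:.2f}")
```

    Because ALOPEX only needs objective values, not gradients, the same loop works when each evaluation is a full sharp-interface aquifer simulation.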

  16. Optimization of wastewater treatment plant operation for greenhouse gas mitigation.

    PubMed

    Kim, Dongwook; Bowen, James D; Ozelkan, Ertunga C

    2015-11-01

    This study deals with the determination of optimal operation of a wastewater treatment system for minimizing greenhouse gas emissions, operating costs, and pollution loads in the effluent. To do this, an integrated performance index that combines the three objectives was established to assess system performance. The ASMN_G model was used to perform system optimization aimed at determining a set of operational parameters that can satisfy the three different objectives. The complex nonlinear optimization problem was solved using the Nelder-Mead simplex optimization algorithm. A sensitivity analysis was performed to identify the operational parameters most influential on system performance. The results obtained from the optimization simulations for six scenarios demonstrated that there are apparent trade-offs among the three conflicting objectives. The best optimized system simultaneously reduced greenhouse gas emissions by 31%, reduced operating cost by 11%, and improved effluent quality by 2% compared to the base case operation. Copyright © 2015 Elsevier Ltd. All rights reserved.
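
    An integrated performance index of this kind is typically a weighted sum of objectives normalized against a base case. The weights, base-case values, and normalization below are illustrative assumptions, not the study's actual formulation:

```python
# Illustrative integrated performance index: three objectives (GHG
# emissions, operating cost, effluent quality) are normalized against a
# base case and combined into one score; lower is better for all three.
base = {"ghg": 100.0, "cost": 50.0, "effluent": 10.0}     # base-case values
weights = {"ghg": 0.4, "cost": 0.4, "effluent": 0.2}      # assumed weights

def performance_index(candidate):
    # A score of 1.0 reproduces the base case exactly.
    return sum(weights[k] * candidate[k] / base[k] for k in base)

base_score = performance_index(base)
# A candidate mimicking the abstract's 31% / 11% / 2% improvements:
optimized = {"ghg": 69.0, "cost": 44.5, "effluent": 9.8}
opt_score = performance_index(optimized)
print(f"base index {base_score:.3f} -> optimized index {opt_score:.3f}")
```

    Folding the objectives into one scalar is what lets a single-objective method like Nelder-Mead search the trade-off, at the cost of fixing the weights in advance.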

  17. Piezoresistive Cantilever Performance—Part II: Optimization

    PubMed Central

    Park, Sung-Jin; Doll, Joseph C.; Rastegar, Ali J.; Pruitt, Beth L.

    2010-01-01

    Piezoresistive silicon cantilevers fabricated by ion implantation are frequently used for force, displacement, and chemical sensors due to their low cost and electronic readout. However, the design of piezoresistive cantilevers is not a straightforward problem due to coupling between the design parameters, constraints, process conditions, and performance. We systematically analyzed the effect of design and process parameters on force resolution and then developed an optimization approach to improve force resolution while satisfying various design constraints, using simulation results. The combined simulation and optimization approach is in principle extensible to other doping methods beyond ion implantation. The optimization results were validated by fabricating cantilevers with the optimized conditions and characterizing their performance. The measurement results demonstrate that the analytical model accurately predicts force and displacement resolution as well as the sensitivity-noise tradeoff in optimal cantilever performance. We also compared our optimization technique against existing models and demonstrated an eightfold improvement in force resolution over simplified models. PMID:20333323

  18. A proposed simulation optimization model framework for emergency department problems in public hospital

    NASA Astrophysics Data System (ADS)

    Ibrahim, Ireen Munira; Liong, Choong-Yeun; Bakar, Sakhinah Abu; Ahmad, Norazura; Najmuddin, Ahmad Farid

    2015-12-01

    The Emergency Department (ED) is a very complex system with limited resources to support increasing demand. ED services are considered good quality if they meet patients' expectations. Long waiting times and lengths of stay are persistent problems faced by management. ED management should place greater emphasis on resource capacity in order to increase the quality of services and thus patient satisfaction. This paper reviews work in progress on a study being conducted in a government hospital in Selangor, Malaysia. It proposes a simulation optimization model framework for studying ED operations and problems as well as finding optimal solutions to those problems. The integration of simulation and optimization is expected to assist management in decision making regarding resource capacity planning in order to improve current and future ED operations.

  19. Automatic Clustering Using Multi-objective Particle Swarm and Simulated Annealing

    PubMed Central

    Abubaker, Ahmad; Baharum, Adam; Alrefaei, Mahmoud

    2015-01-01

    This paper puts forward a new automatic clustering algorithm, "MOPSOSA", based on Multi-Objective Particle Swarm Optimization and Simulated Annealing. The proposed algorithm is capable of automatic clustering, partitioning datasets into a suitable number of clusters. MOPSOSA combines the features of multi-objective particle swarm optimization (PSO) and Multi-Objective Simulated Annealing (MOSA). Three cluster validity indices were optimized simultaneously to establish the suitable number of clusters and the appropriate clustering for a dataset: the first is centred on Euclidean distance, the second on the point symmetry distance, and the third on short distance. A number of algorithms were compared with the MOPSOSA algorithm in resolving clustering problems by determining the actual number of clusters and optimal clustering. Computational experiments were carried out on fourteen artificial and five real-life datasets. PMID:26132309

  20. Efficient Simulation Budget Allocation for Selecting an Optimal Subset

    NASA Technical Reports Server (NTRS)

    Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay

    2008-01-01

    We consider a class of the subset selection problem in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocations are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization.
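
    The flavor of such a sequential allocation can be sketched with a simple boundary-focused rule (not the paper's asymptotically optimal OCBA allocation): after a warm-up, each batch of samples goes to the design whose estimated mean is hardest to classify as in or out of the top-m.

```python
import random

random.seed(3)

# Toy subset-selection setting: spend a fixed simulation budget to
# identify the top-2 of five designs whose true means are unknown to the
# procedure (they are used only to generate noisy samples).
true_means = [1.0, 2.0, 3.0, 3.3, 5.0]      # true top-2 subset: designs 3, 4
m, budget, batch = 2, 3000, 10

def sample(i):
    return random.gauss(true_means[i], 1.0)

counts = [0] * 5
sums = [0.0] * 5
for i in range(5):                           # equal warm-up samples
    for _ in range(20):
        sums[i] += sample(i); counts[i] += 1

spent = sum(counts)
while spent < budget:
    means = [s / c for s, c in zip(sums, counts)]
    order = sorted(range(5), key=lambda i: means[i], reverse=True)
    boundary = (means[order[m - 1]] + means[order[m]]) / 2
    # Give the next batch to the design hardest to classify: the one
    # whose estimated mean lies closest to the top-m boundary.
    i = min(range(5), key=lambda j: abs(means[j] - boundary))
    for _ in range(batch):
        sums[i] += sample(i); counts[i] += 1
    spent += batch

means = [s / c for s, c in zip(sums, counts)]
top_m = sorted(sorted(range(5), key=lambda i: means[i], reverse=True)[:m])
print(f"selected top-{m} designs: {top_m}, samples per design: {counts}")
```

    Most of the budget ends up on the two designs straddling the top-m boundary (true means 3.0 and 3.3), which is the intuition the OCBA-style allocations make precise.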
