Sample records for obtaining optimal values

  1. Obtaining Approximate Values of Exterior Orientation Elements of Multi-Intersection Images Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Li, X.; Li, S. W.

    2012-07-01

    In this paper, an efficient global optimization algorithm from the field of artificial intelligence, Particle Swarm Optimization (PSO), is introduced into close range photogrammetric data processing. PSO can be applied to obtain the approximate values of exterior orientation elements under the condition that multi-intersection photography and a small portable plane control frame are used. PSO, put forward by American social psychologist J. Kennedy and electrical engineer R. C. Eberhart, is a stochastic global optimization method based on swarm intelligence, inspired by the social behavior of bird flocking and fish schooling. The strategy for obtaining the approximate values of exterior orientation elements using PSO is as follows: from the image coordinate observations and the space coordinates of a few control points, equations for the image coordinate residual errors can be written. The image coordinate residual error is defined as the difference between the observed image coordinate and the image coordinate computed through the collinearity condition equations, and the sum of the absolute values of the image coordinate residuals is taken as the objective function to be minimized. First a gross area for the exterior orientation elements is given, and the other parameters are then adjusted so that the particles fly within this gross area. After a certain number of iterations, satisfactory approximate values of the exterior orientation elements are obtained. By doing so, procedures such as positioning and measuring space control points in close range photogrammetry can be avoided. This method can thus greatly improve surveying efficiency while decreasing surveying cost. During the process, only one small portable control frame with a couple of control points is employed, and there are no strict requirements on the spatial distribution of control points. In order to verify the effectiveness of this algorithm, two experiments are…
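    The search strategy described above maps directly onto a standard PSO loop: particles are confined to the "gross area" (a box around plausible exterior orientation values) and the fitness is the sum of absolute image-coordinate residuals. Below is a minimal, generic Python sketch under those assumptions; the stand-in objective and all numbers are hypothetical, since a real implementation would compute reprojection residuals from the collinearity equations using the camera geometry.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `objective` over a box (the 'gross area') with plain PSO."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_particles, len(lo)))   # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()              # global best
    for _ in range(n_iters):
        r1, r2 = rng.random((2,) + x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                    # keep particles inside the gross area
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Stand-in for the paper's objective: the sum of absolute residuals (an L1 criterion).
# `target` plays the role of the unknown exterior orientation elements (hypothetical).
target = np.array([10.0, -3.0, 5.0, 0.1, 0.2, 0.3])
bounds = np.column_stack([target - 50, target + 50])  # the "gross area"
best, best_f = pso(lambda p: np.sum(np.abs(p - target)), bounds)
print(best, best_f)
```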

  2. [VALUE OF SMART PHONE Scoliometer SOFTWARE IN OBTAINING OPTIMAL LUMBAR LORDOSIS DURING L4-S1 FUSION SURGERY].

    PubMed

    Yu, Weibo; Liang, De; Ye, Linqiang; Jiang, Xiaobing; Yao, Zhensong; Tang, Jingjing; Tang, Yongchao

    2015-10-01

    To investigate the value of smart phone Scoliometer software in obtaining optimal lumbar lordosis (LL) during L4-S1 fusion surgery. Between November 2014 and February 2015, 20 patients scheduled for L4-S1 fusion surgery were prospectively enrolled in the study. There were 8 males and 12 females, aged 41-65 years (mean, 52.3 years). The disease duration ranged from 6 months to 6 years (mean, 3.4 years). Before operation, the pelvic incidence (PI) and Cobb angle of L4-S1 (CobbL4-S1) were measured on a lateral X-ray film of the lumbosacral spine using the PACS system; the ideal CobbL4-S1 was then calculated according to previously published methods [(PI + 9 degrees) x 70%]. Subsequently, intraoperative CobbL4-S1 was monitored with the Scoliometer software and was defined as optimal when it differed from the ideal CobbL4-S1 by less than 5 degrees. Finally, the CobbL4-S1 was measured with the PACS system after operation, and the consistency between the Scoliometer software and the PACS system was assessed to evaluate the accuracy of the software. In addition, the value of this method in obtaining optimal LL was validated by comparing the deviation between the ideal and preoperative CobbL4-S1 with that between the ideal and postoperative CobbL4-S1. The CobbL4-S1 was (36.17 ± 1.53) degrees for the ideal one, (22.57 ± 5.50) degrees preoperatively, (32.25 ± 1.46) degrees intraoperatively as measured by the Scoliometer software, and (34.43 ± 1.72) degrees postoperatively. The observed intraclass correlation coefficient (ICC) was excellent [ICC = 0.96, 95% confidence interval (0.93, 0.97)] and the mean absolute difference (MAD) between the Scoliometer software and the PACS system was low (MAD = 1.23). The deviation between ideal and postoperative CobbL4-S1 was (2.31 ± 0.23) degrees, significantly lower than the deviation between ideal and preoperative CobbL4-S1 [(13.60 ± 1.85) degrees; t = 6.065, P = 0.001]. The Scoliometer software can help the surgeon obtain…
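    The ideal-lordosis formula and the 5-degree acceptance window are stated explicitly, so the intraoperative check reduces to a few lines. A sketch, where the pelvic incidence of 42.7 degrees is a hypothetical value back-computed from the reported ideal Cobb of 36.17 degrees:

```python
def ideal_cobb_l4s1(pelvic_incidence_deg):
    """Ideal L4-S1 Cobb angle per the cited formula: (PI + 9 degrees) x 70%."""
    return (pelvic_incidence_deg + 9.0) * 0.70

def is_optimal(intraop_cobb_deg, pelvic_incidence_deg, tol_deg=5.0):
    """Intraoperative lordosis counts as optimal when within 5 degrees of the ideal."""
    return abs(intraop_cobb_deg - ideal_cobb_l4s1(pelvic_incidence_deg)) < tol_deg

print(ideal_cobb_l4s1(42.7))   # ~36.2 degrees, matching the reported ideal Cobb
print(is_optimal(32.25, 42.7)) # True: the reported intraoperative mean is inside the window
```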

  3. Optimization of b-value distribution for biexponential diffusion-weighted MR imaging of normal prostate.

    PubMed

    Jambor, Ivan; Merisaari, Harri; Aronen, Hannu J; Järvinen, Jukka; Saunavaara, Jani; Kauko, Tommi; Borra, Ronald; Pesola, Marko

    2014-05-01

    To determine the optimal b-value distribution for biexponential diffusion-weighted imaging (DWI) of the normal prostate using both a computer modeling approach and in vivo measurements. Optimal b-value distributions for the fit of three parameters (fast diffusion Df, slow diffusion Ds, and fraction of fast diffusion f) were determined using Monte Carlo simulations. The optimal b-value distribution was calculated using four individual optimization methods. Eight healthy volunteers underwent four repeated 3 Tesla prostate DWI scans using both 16 equally distributed b-values and an optimized b-value distribution obtained from the simulations. The b-value distributions were compared in terms of measurement reliability and repeatability using Shrout-Fleiss analysis. At low noise levels, the optimal b-value distribution formed three separate clusters at low (0-400 s/mm²), mid-range (650-1200 s/mm²), and high b-values (1700-2000 s/mm²). Higher noise levels resulted in less pronounced clustering of b-values. The clustered optimized b-value distribution demonstrated better measurement reliability and repeatability in Shrout-Fleiss analysis compared with 16 equally distributed b-values. The optimal b-value distribution was found to be a clustered distribution with b-values concentrated in the low, mid, and high ranges and was shown to improve the estimation quality of biexponential DWI parameters in the in vivo experiments. Copyright © 2013 Wiley Periodicals, Inc.
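    For readers unfamiliar with the model being fitted: the biexponential (IVIM-type) signal is S(b)/S0 = f·exp(-b·Df) + (1-f)·exp(-b·Ds), and the Monte Carlo question is how well (f, Df, Ds) are recovered from noisy samples at a given b-value set. A small sketch of one such draw-and-fit, with hypothetical prostate-like parameter values and a clustered b-value set of the kind the simulations favored:

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(b, f, Df, Ds):
    """Normalized biexponential DWI signal S(b)/S0."""
    return f * np.exp(-b * Df) + (1 - f) * np.exp(-b * Ds)

rng = np.random.default_rng(1)
b = np.array([0, 200, 400, 650, 900, 1200, 1700, 2000], float)  # s/mm^2, clustered low/mid/high
true_f, true_Df, true_Ds = 0.30, 2.5e-3, 0.5e-3                 # hypothetical values (mm^2/s)
signal = biexp(b, true_f, true_Df, true_Ds) + rng.normal(0, 0.01, b.size)  # one noisy realization

popt, _ = curve_fit(biexp, b, signal, p0=(0.2, 2e-3, 0.3e-3),
                    bounds=([0, 1e-4, 1e-5], [1, 1e-2, 2e-3]))
print(popt)  # repeating this draw-and-fit many times gives the Monte Carlo parameter spread
```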

  4. Optimizing Methods of Obtaining Stellar Parameters for the H3 Survey

    NASA Astrophysics Data System (ADS)

    Ivory, KeShawn; Conroy, Charlie; Cargile, Phillip

    2018-01-01

    The Stellar Halo at High Resolution with Hectochelle Survey (H3) is in the process of observing and collecting stellar parameters for stars in the Milky Way's halo. With a goal of measuring radial velocities for fainter stars, it is crucial that we have optimal methods of obtaining this and other parameters from the data for these stars. The method currently developed is The Payne, named after Cecilia Payne-Gaposchkin: a code that uses neural networks and Markov Chain Monte Carlo methods to fit both spectra and photometry and obtain values for stellar parameters. This project investigated the benefit of fitting both spectra and spectral energy distributions (SEDs). Mock spectra using the parameters of the Sun were created and noise was inserted at various signal-to-noise ratios. The Payne then fit each mock spectrum with and without a mock SED also generated from solar parameters. The result was that at high signal-to-noise, the spectrum dominated and the effect of fitting the SED was minimal. But at low signal-to-noise, the addition of the SED greatly decreased the standard deviation of the data and resulted in more accurate values for temperature and metallicity.

  5. Weak-value amplification as an optimal metrological protocol

    NASA Astrophysics Data System (ADS)

    Alves, G. Bié; Escher, B. M.; de Matos Filho, R. L.; Zagury, N.; Davidovich, L.

    2015-06-01

    The implementation of weak-value amplification requires the pre- and postselection of states of a quantum system, followed by the observation of the response of the meter, which interacts weakly with the system. Data acquisition from the meter is conditioned on successful postselection events. Here we derive an optimal postselection procedure for estimating the coupling constant between system and meter and show that it leads both to weak-value amplification and to the saturation of the quantum Fisher information, under conditions fulfilled by all previously reported experiments on the amplification of weak signals. For most of the preselected states, full information on the coupling constant can be extracted from the meter data set alone, while for a small fraction of the space of preselected states, it must be obtained from the postselection statistics.

  6. Optimal policy for value-based decision-making.

    PubMed

    Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre

    2016-08-18

    For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down.
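    As a concrete reference point for the collapsing-boundary claim, here is a minimal simulation of a drift diffusion trial in which the decision boundary shrinks over time; the hyperbolic collapse schedule b(t) = b0/(1 + t/tau) and all parameter values are hypothetical illustrations, not the paper's derived optimal boundary:

```python
import numpy as np

def ddm_trial(drift, sigma=1.0, dt=1e-3, b0=1.0, tau=0.5, rng=None):
    """One drift-diffusion trial with a time-collapsing boundary b(t) = b0/(1 + t/tau)."""
    if rng is None:
        rng = np.random.default_rng()
    x, t = 0.0, 0.0
    while True:
        if abs(x) >= b0 / (1 + t / tau):          # boundary has collapsed onto the particle
            return np.sign(x), t                  # (choice, decision time)
        x += drift * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt

rng = np.random.default_rng(0)
choices, times = zip(*[ddm_trial(drift=0.5, rng=rng) for _ in range(1000)])
print(np.mean(np.array(choices) > 0), np.mean(times))  # accuracy and mean decision time
```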

  7. Optimal policy for value-based decision-making

    PubMed Central

    Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre

    2016-01-01

    For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down. PMID:27535638

  8. Analytical optimal pulse shapes obtained with the aid of genetic algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guerrero, Rubén D., E-mail: rdguerrerom@unal.edu.co; Arango, Carlos A.; Reyes, Andrés

    2015-09-28

    We propose a methodology to design optimal pulses for achieving quantum optimal control on molecular systems. Our approach constrains pulse shapes to linear combinations of a fixed number of experimentally relevant pulse functions. Quantum optimal control is obtained by maximizing a multi-target fitness function using genetic algorithms. As a first application of the methodology, we generated an optimal pulse that successfully maximized the yield on a selected dissociation channel of a diatomic molecule. Our pulse is obtained as a linear combination of linearly chirped pulse functions. Data recorded along the evolution of the genetic algorithm contained important information regarding the interplay between radiative and diabatic processes. We performed a principal component analysis on these data to retrieve the most relevant processes along the optimal path. Our proposed methodology could be useful for performing quantum optimal control on more complex systems by employing a wider variety of pulse shape functions.

  9. Optimization of the transmission of observable expectation values and observable statistics in continuous-variable teleportation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albano Farias, L.; Stephany, J.

    2010-12-15

    We analyze the statistics of observables in continuous-variable (CV) quantum teleportation in the formalism of the characteristic function. We derive expressions for average values of output-state observables, in particular, cumulants which are additive in terms of the input state and the resource of teleportation. Working with a general class of teleportation resources, the squeezed-bell-like states, which may be optimized in a free parameter for better teleportation performance, we discuss the relation between resources optimal for fidelity and those optimal for different observable averages. We obtain the values of the free parameter of the squeezed-bell-like states which optimize the central momenta and cumulants up to fourth order. For the cumulants, the distortion between input and output states due to teleportation depends only on the resource. We obtain optimal parameters Δ(2)^opt and Δ(4)^opt for the second- and fourth-order cumulants, which do not depend on the squeezing of the resource. The second-order central momenta, which are equal to the second-order cumulants, and the photon number average are also optimized by the resource with Δ(2)^opt. We show that the optimal fidelity resource, which has been found previously to depend on the characteristics of the input, approaches for high squeezing the resource that optimizes the second-order momenta. A similar behavior is obtained for the resource that optimizes the photon statistics, which is treated here using the sum of the squared differences in photon probabilities of input versus output states as the distortion measure. This is interpreted naturally to mean that the distortions associated with second-order momenta dominate the behavior of the output state for large squeezing of the resource. Optimal fidelity resources and optimal photon statistics resources are compared, and it is shown that for mixtures of Fock states both resources are equivalent.

  10. Assessing the system value of optimal load shifting

    DOE PAGES

    Merrick, James; Ye, Yinyu; Entriken, Bob

    2017-04-30

    We analyze a competitive electricity market, where consumers exhibit optimal load shifting behavior to maximize utility and producers/suppliers maximize their profit under supply capacity constraints. The associated computationally tractable formulation can be used to inform market design or policy analysis in the context of increasing availability of the smart grid technologies that enable optimal load shifting. Through analytic and numeric assessment of the model, we assess the equilibrium value of optimal electricity load shifting, including how the value changes as more electricity consumers adopt associated technologies. For our illustrative numerical case, derived from the Current Trends scenario of the ERCOT Long Term System Assessment, the average energy arbitrage value per ERCOT customer of optimal load shifting technologies is estimated to be $3 for the 2031 scenario year. We assess the sensitivity of this result to the flexibility of load, along with its relationship to the deployment of renewables. Finally, the model presented can also be a starting point for designing system operation infrastructure that communicates with the devices that schedule loads in response to price signals.

  11. Assessing the system value of optimal load shifting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merrick, James; Ye, Yinyu; Entriken, Bob

    We analyze a competitive electricity market, where consumers exhibit optimal load shifting behavior to maximize utility and producers/suppliers maximize their profit under supply capacity constraints. The associated computationally tractable formulation can be used to inform market design or policy analysis in the context of increasing availability of the smart grid technologies that enable optimal load shifting. Through analytic and numeric assessment of the model, we assess the equilibrium value of optimal electricity load shifting, including how the value changes as more electricity consumers adopt associated technologies. For our illustrative numerical case, derived from the Current Trends scenario of the ERCOT Long Term System Assessment, the average energy arbitrage value per ERCOT customer of optimal load shifting technologies is estimated to be $3 for the 2031 scenario year. We assess the sensitivity of this result to the flexibility of load, along with its relationship to the deployment of renewables. Finally, the model presented can also be a starting point for designing system operation infrastructure that communicates with the devices that schedule loads in response to price signals.

  12. Optimal waist circumference cutoff value for defining the metabolic syndrome in postmenopausal Latin American women.

    PubMed

    Blümel, Juan E; Legorreta, Deborah; Chedraui, Peter; Ayala, Felix; Bencosme, Ascanio; Danckers, Luis; Lange, Diego; Espinoza, Maria T; Gomez, Gustavo; Grandia, Elena; Izaguirre, Humberto; Manriquez, Valentin; Martino, Mabel; Navarro, Daysi; Ojeda, Eliana; Onatra, William; Pozzo, Estela; Prada, Mariela; Royer, Monique; Saavedra, Javier M; Sayegh, Fabiana; Tserotas, Konstantinos; Vallejo, Maria S; Zuñiga, Cristina

    2012-04-01

    The aim of this study was to determine an optimal waist circumference (WC) cutoff value for defining the metabolic syndrome (METS) in postmenopausal Latin American women. A total of 3,965 postmenopausal women (age, 45-64 y), with self-reported good health, attending routine consultation at 12 gynecological centers in major Latin American cities were included in this cross-sectional study. Modified guidelines of the US National Cholesterol Education Program, Adult Treatment Panel III were used to assess METS risk factors. Receiver operating characteristic (ROC) curve analysis was used to obtain an optimal WC cutoff value best predicting at least two other METS components. Optimal cutoff values were calculated by plotting the true-positive rate (sensitivity) against the false-positive rate (1 - specificity). In addition, total accuracy, distance to the ROC curve, and the Youden index were calculated. Of the participants, 51.6% (n = 2,047) were identified as having two or more nonadipose METS risk components (excluding a positive WC component). These women were older, had more years since menopause onset, used hormone therapy less frequently, and had higher body mass indices than women with fewer metabolic risk factors. The optimal WC cutoff value best predicting at least two other METS components was determined to be 88 cm, equal to that defined by the Adult Treatment Panel III. A WC cutoff value of 88 cm is optimal for defining METS in this postmenopausal Latin American series.
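    The Youden index mentioned above gives a compact way to pick such a cutoff: scan candidate cutoffs and keep the one maximizing J = sensitivity + specificity - 1. A sketch with synthetic, hypothetical data (the real study used measured WC and METS components):

```python
import numpy as np

def optimal_cutoff_youden(values, positive):
    """Return the cutoff maximizing Youden's J = sensitivity + specificity - 1."""
    values, positive = np.asarray(values, float), np.asarray(positive, bool)
    best_j, best_c = -1.0, None
    for c in np.unique(values):
        pred = values >= c                                   # test positive at or above cutoff
        sens = (pred & positive).sum() / positive.sum()
        spec = (~pred & ~positive).sum() / (~positive).sum()
        if sens + spec - 1 > best_j:
            best_j, best_c = sens + spec - 1, c
    return best_c, best_j

# Hypothetical WC values (cm) and flags for having >= 2 non-adipose METS components:
rng = np.random.default_rng(0)
wc = np.r_[rng.normal(82, 8, 500), rng.normal(95, 8, 500)]
flag = np.r_[np.zeros(500, bool), np.ones(500, bool)]
print(optimal_cutoff_youden(wc, flag))
```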

  13. WE-G-18C-02: Estimation of Optimal B-Value Set for Obtaining Apparent Diffusion Coefficient Free From Perfusion in Non-Small Cell Lung Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karki, K; Hugo, G; Ford, J

    2014-06-15

    Purpose: Diffusion-weighted MRI (DW-MRI) is increasingly being investigated for radiotherapy planning and response assessment. Selection of a limited number of b-values in DW-MRI is important to keep geometrical variations low and imaging time short. We investigated various b-value sets to determine an optimal set for obtaining a monoexponential apparent diffusion coefficient (ADC) close to the perfusion-insensitive intravoxel incoherent motion (IVIM) model ADC (ADC IVIM) in non-small cell lung cancer. Methods: Seven patients had 27 DW-MRI scans before and during radiotherapy in a 1.5 T scanner. Respiratory triggering was applied to the echo-planar DW-MRI with TR ≈ 4500 ms, TE = 74 ms, pixel size = 1.98 × 1.98 mm², slice thickness = 4-6 mm and 7 axial slices. Diffusion gradients were applied along all three axes, producing trace-weighted images with eight b-values of 0-1000 μs/μm². Monoexponential model ADC values using various b-value sets were compared to ADC IVIM using all b-values. To compare the relative noise in ADC maps, the intra-scan coefficient of variation (CV) of active tumor volumes was computed. Results: ADC IVIM, perfusion coefficient and perfusion fraction for tumor volumes were in the range of 880-1622 μm²/s, 8119-33834 μm²/s and 0.104-0.349, respectively. ADC values using the sets of 250, 800 and 1000; 250, 650 and 1000; and 250-1000 μs/μm² only were not significantly different from ADC IVIM (p > 0.05, paired t-test). Errors in ADC values for 0-1000, 50-1000, 100-1000, 250-1000, 500-1000, and the three b-value sets 250, 500 and 1000; 250, 650 and 1000; and 250, 800 and 1000 μs/μm² were 15.0, 9.4, 5.6, 1.4, 11.7, 3.7, 2.0 and 0.2% relative to the reference-standard ADC IVIM, respectively. Mean intra-scan CV was 20.2, 20.9, 21.9, 24.9, 32.6, 25.8, 25.4 and 24.8%, respectively, whereas that for ADC IVIM was 23.3%. Conclusion: ADC values of two 3 b-value sets (250, 650 and 1000; and 250, 800 and 1000 μs/μm²…
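    The comparison being made is easy to reproduce in miniature: fit a monoexponential ADC (a log-linear least-squares fit) on different b-value subsets of a signal that also contains a perfusion component, and see which subsets land closest to the perfusion-free ADC. The signal model and parameter values below are hypothetical but lie inside the ranges reported above:

```python
import numpy as np

def adc_monoexp(b, s):
    """Monoexponential ADC from the log-linear fit ln S = ln S0 - b * ADC."""
    slope, _ = np.polyfit(np.asarray(b, float), np.log(s), 1)
    return -slope

# IVIM-like signal: 15% perfusion fraction, D* = 20e-3 mm^2/s, tissue ADC = 1.2e-3 mm^2/s
# (equivalently 1200 um^2/s; note 1 us/um^2 = 1 s/mm^2, so the units are consistent).
b_all = np.array([0, 50, 100, 250, 500, 650, 800, 1000], float)
S = 0.15 * np.exp(-b_all * 20e-3) + 0.85 * np.exp(-b_all * 1.2e-3)

for subset in ([0, 50, 100, 250, 500, 650, 800, 1000],  # includes perfusion-dominated low b
               [250, 650, 1000],                         # clustered set close to ADC IVIM
               [250, 800, 1000]):
    m = np.isin(b_all, subset)
    print(subset, adc_monoexp(b_all[m], S[m]))
```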

  14. A Novel, Real-Valued Genetic Algorithm for Optimizing Radar Absorbing Materials

    NASA Technical Reports Server (NTRS)

    Hall, John Michael

    2004-01-01

    A novel, real-valued Genetic Algorithm (GA) was designed and implemented to minimize the reflectivity and/or transmissivity of an arbitrary number of homogeneous, lossy dielectric or magnetic layers of arbitrary thickness positioned at either the center of an infinitely long rectangular waveguide, or adjacent to the perfectly conducting backplate of a semi-infinite, shorted-out rectangular waveguide. Evolutionary processes extract the optimal physioelectric constants falling within specified constraints which minimize reflection and/or transmission over the frequency band of interest. This GA extracted the unphysical dielectric and magnetic constants of three layers of fictitious material placed adjacent to the conducting backplate of a shorted-out waveguide such that the reflectivity of the configuration was 55 dB or less over the entire X-band. Examples of the optimization of realistic multi-layer absorbers are also presented. Although typical Genetic Algorithms require populations of many thousands in order to function properly and obtain correct results, verified correct results were obtained for all test cases using this GA with a population of only four.
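    A real-valued GA with a population as small as the four used above can be sketched in a few lines; the fitness function below is a stand-in (the paper's fitness would evaluate reflectivity over X-band from the layers' material constants and thicknesses), and the blend-crossover/Gaussian-mutation operators are illustrative assumptions rather than the paper's exact design:

```python
import numpy as np

def tiny_real_ga(fitness, bounds, pop_size=4, generations=500, sigma=0.05, seed=0):
    """Minimal real-valued GA: keep the two best, refill by blend crossover + mutation."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(lo, hi, (pop_size, len(lo)))
    for _ in range(generations):
        f = np.array([fitness(p) for p in pop])
        elite = pop[np.argsort(f)[:2]]                        # two fittest survive
        alpha = rng.random((pop_size - 2, 1))
        kids = alpha * elite[0] + (1 - alpha) * elite[1]      # blend crossover
        kids += rng.normal(0, sigma, kids.shape) * (hi - lo)  # Gaussian mutation
        pop = np.vstack([elite, np.clip(kids, lo, hi)])
    f = np.array([fitness(p) for p in pop])
    return pop[np.argmin(f)], f.min()

bounds = np.array([[-5.0, 5.0]] * 6)            # six hypothetical material parameters
best, best_f = tiny_real_ga(lambda x: np.sum((x - 1.0) ** 2), bounds)
print(best, best_f)
```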

  15. Obtaining high g-values with low degree expansion of the phasefunction

    NASA Astrophysics Data System (ADS)

    Rinzema, Kees; ten Bosch, Jaap J.; Ferwerda, Hedzer A.; Hoenders, Bernhard J.

    1994-02-01

    The analytic theory of anisotropic random flight requires the expansion of phase functions in spherical harmonics. The number of terms should be limited while obtaining a g-value that is as high as possible. We describe how such a phase function can be constructed for a given number N of spherical harmonic components, while obtaining a maximum value of the asymmetry parameter g.

  16. Optimizing Controlling-Value-Based Power Gating with Gate Count and Switching Activity

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Kimura, Shinji

    In this paper, a new heuristic algorithm is proposed to optimize power domain clustering in controlling-value-based (CV-based) power gating technology. The algorithm considers both the switching activity of sleep signals (p) and the overall number of sleep gates (gate count, N), and optimizes the sum of the products of p and N. The algorithm effectively exploits the total power reduction obtainable from CV-based power gating. Even when the maximum depth is kept the same, the proposed algorithm can still achieve approximately 10% more power reduction than prior algorithms. Furthermore, a detailed comparison between the proposed heuristic algorithm and other possible heuristic algorithms is presented. HSPICE simulation results show that over 26% total power reduction can be obtained using the new heuristic algorithm. In addition, the effect of dynamic power reduction through the CV-based power gating method and the delay overhead caused by the switching of sleep transistors are also discussed in this paper.

  17. An optimized procedure for obtaining DNA from fired and unfired ammunition.

    PubMed

    Montpetit, Shawn; O'Donnell, Patrick

    2015-07-01

    Gun crimes are a significant problem facing law enforcement agencies. Traditional forensic examination of firearms involves comparisons of markings imparted to bullets and cartridge casings during the firing process. DNA testing of casings and cartridges may not be done routinely in crime laboratories due to a variety of factors, including the typically low amounts of DNA recovered. The San Diego Police Department (SDPD) Crime Laboratory conducted a study to optimize the collection and profiling of DNA from fired and unfired ammunition. The method was optimized to the point where interpretable DNA results were obtained for 26.1% of the forensic casework evidence samples, and the study provided some insight into the level of secondary transfer that might be expected from this type of evidence. Briefly detailed are the results from the experimental study and the forensic casework analysis using the optimized process. Mixtures (samples having more DNA types than the loader's known genotype detected or visible at any marker) were obtained in 39.8% of research samples, and the likely source of the DNA mixtures is discussed. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  18. Obtaining orthotropic elasticity tensor using entries zeroing method.

    NASA Astrophysics Data System (ADS)

    Gierlach, Bartosz; Danek, Tomasz

    2017-04-01

    A generally anisotropic elasticity tensor obtained from measurements can be represented by a tensor belonging to one of eight material symmetry classes. Knowledge of the symmetry class and orientation is helpful for describing the physical properties of a medium. For each non-trivial symmetry class except isotropy this problem is nonlinear. A common method of obtaining an effective tensor is to choose a non-trivial symmetry class and minimize the Frobenius norm between the measured and effective tensors in the same coordinate system. A global optimization algorithm has to be used to determine the best rotation of the tensor. In this contribution, we propose a new approach to obtaining the optimal tensor, under the assumption that it is orthotropic (or at least has a shape similar to the orthotropic one). In orthotropic form, 24 out of the 36 tensor entries are zero. The idea is to minimize the sum of the squared entries that are supposed to equal zero, over rotations found with an optimization algorithm, in this case the Particle Swarm Optimization (PSO) algorithm. Quaternions were used to parametrize rotations in 3D space to improve computational efficiency. In order to avoid settling on a local minimum, we apply PSO several times, and only if we obtain a similar result a third time do we accept it as the correct value and finish the computation. A Monte Carlo method was used to analyze the obtained results. After thousands of single runs of PSO optimization, we obtained values of the quaternion components and plotted them. The points concentrate at several locations on the graph, following a regular pattern, which suggests the existence of a more complex symmetry in the analyzed tensor. Then thousands of realizations of a generally anisotropic tensor were generated; each tensor entry was replaced with a random value drawn from a normal distribution with mean equal to the measured tensor entry and standard deviation equal to that of the measurement. Each of these tensors was the subject of PSO-based optimization delivering a quaternion for the optimal…
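    The entries-zeroing objective is concrete enough to sketch. For orthotropic symmetry in its natural frame, the components that must vanish are exactly those in which some axis index appears an odd number of times (the 24 zero Voigt entries). The sketch below builds a random tensor of orthotropic shape, rotates it by a known quaternion, and shows the objective is nonzero in the rotated frame and returns to ~0 under the inverse rotation; the PSO search itself (sketched under record 1 above) would minimize this same objective over quaternions. All numbers are hypothetical:

```python
import numpy as np

def quat_to_rot(q):
    """Unit quaternion (w, x, y, z) -> 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def rotate(C, R):
    """Rotate a rank-4 elasticity tensor."""
    return np.einsum('ia,jb,kc,ld,abcd->ijkl', R, R, R, R, C)

def off_orthotropic(q, C):
    """Sum of squared components that must vanish in the orthotropic frame."""
    Cr = rotate(C, quat_to_rot(q))
    idx = np.indices(Cr.shape)                 # the four index values of each component
    odd = np.zeros(Cr.shape, bool)
    for axis in range(3):
        odd |= ((idx == axis).sum(axis=0) % 2).astype(bool)
    return float(np.sum(Cr[odd] ** 2))

# Random orthotropic-shaped tensor in its natural frame (hypothetical moduli):
rng = np.random.default_rng(0)
A = rng.uniform(1, 3, (3, 3)); A = (A + A.T) / 2  # C_iijj couplings
M = rng.uniform(1, 3, (3, 3)); M = (M + M.T) / 2  # C_ijij shear moduli
C = np.zeros((3, 3, 3, 3))
for i in range(3):
    for j in range(3):
        C[i, i, j, j] += A[i, j]
        C[i, j, i, j] += M[i, j]
        C[i, j, j, i] += M[i, j]

q = np.array([0.9, 0.1, 0.3, 0.2])                     # known test rotation
Cr = rotate(C, quat_to_rot(q))
print(off_orthotropic(np.array([1.0, 0, 0, 0]), Cr))   # > 0 in the rotated frame
print(off_orthotropic(q * np.array([1, -1, -1, -1]), Cr))  # ~0: conjugate undoes it
```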

  19. Overtaking method based on sand-sifter mechanism: Why do optimistic value functions find optimal solutions in multi-armed bandit problems?

    PubMed

    Ochi, Kento; Kamiura, Moto

    2015-09-01

    A multi-armed bandit problem is a search problem in which a learning agent must select the optimal arm among multiple slot machines generating random rewards. The UCB algorithm is one of the most popular methods for solving multi-armed bandit problems. It achieves logarithmic regret by coordinating the balance between exploration and exploitation. Since the introduction of UCB algorithms, researchers have known empirically that optimistic value functions exhibit good performance in multi-armed bandit problems. The terms optimistic or optimism suggest that the value function is sufficiently larger than the sample mean of rewards. The original definition of the UCB algorithm is focused on the optimization of regret, and it is not directly based on the optimism of a value function. We therefore need to consider why optimism yields good performance in multi-armed bandit problems. In the present article, we propose a new method, called the Overtaking method, to solve multi-armed bandit problems. The value function of the proposed method is defined as the upper bound of a confidence interval for an estimator of the expected reward: the value function asymptotically approaches the expected reward from above. If the value function is larger than the expected reward under this asymptote, then the learning agent is almost sure to obtain the optimal arm. This structure is called the sand-sifter mechanism, in which the value functions of suboptimal arms have no regrowth. It means that the learning agent can play only the current best arm at each time step. Consequently the proposed method achieves a high accuracy rate and low regret, and some of its value functions can outperform UCB algorithms. This study suggests the advantage of optimism of agents in uncertain environments using one of the simplest frameworks. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
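    For reference alongside the Overtaking method, the classic UCB1 value function the abstract contrasts with looks like this (Bernoulli arms with hypothetical success probabilities); UCB1's exploration bonus keeps regrowing for rarely played arms, which is exactly the behavior the sand-sifter construction avoids:

```python
import numpy as np

def ucb1(pull, n_arms, horizon, seed=0):
    """UCB1: play each arm once, then the arm maximizing mean + sqrt(2 ln t / n)."""
    rng = np.random.default_rng(seed)
    counts, sums = np.zeros(n_arms), np.zeros(n_arms)
    for a in range(n_arms):                        # initialization: one pull per arm
        sums[a] += pull(a, rng); counts[a] += 1
    for t in range(n_arms + 1, horizon + 1):
        ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)  # optimistic value function
        a = int(np.argmax(ucb))
        sums[a] += pull(a, rng); counts[a] += 1
    return counts

p = [0.2, 0.5, 0.7]                                # hypothetical arm reward probabilities
pull = lambda a, rng: float(rng.random() < p[a])
print(ucb1(pull, n_arms=3, horizon=5000))          # most pulls concentrate on arm 2
```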

  20. A new look on anomalous thermal gradient values obtained in South Portugal

    NASA Astrophysics Data System (ADS)

    Duque, M. R.; Malico, I.

    2012-04-01

    It is well known that soil temperatures can be altered by water circulation. In this paper, we study this effect numerically by simulating some aquifers occurring in South Portugal. At this location, the thermal gradient values obtained in boreholes with depths less than 200 m range between 22 and 30 °C/km. However, it is easy to find places there where temperatures are around 30 °C at depths of 100 m. The obtained thermal gradient values show an increase one day after raining and a decrease during the dry season. Additionally, the curve of temperature as a function of depth showed no hot water inlet in the hole. The region studied shows a smooth topography due to intensive erosion, but it was affected by the Alpine and Hercynian orogenies. As a result, a high topography at depth, with folds and wrinkles, is present. The space between adjacent folds is now filled by small sedimentary basins. Aquifers existing in this region can reach considerable depths and return to depths near the surface, but hot springs in the area are scarce. Water temperature rises with depth, and when the flow speed is high enough, high temperatures can be found near the surface due to water circulation. The ability of the fluid to flow through the system depends on topographic relief, rock permeability and basal heat flow. In this study, the steady-state fluid flow and heat transfer by conduction and advection are modeled. Fractures in the medium are simulated by an equivalent porous medium saturated with liquid. Thermal conductivity values for the water and the rocks can vary in space. Porosities used have high values in the region of the aquifer, low values in the lower region of the model and intermediate values in the upper regions. The results obtained show that the temperature anomaly values…

  21. An efficient and practical approach to obtain a better optimum solution for structural optimization

    NASA Astrophysics Data System (ADS)

    Chen, Ting-Yu; Huang, Jyun-Hao

    2013-08-01

    For many structural optimization problems, it is hard or even impossible to find the global optimum solution owing to unaffordable computational cost. An alternative and practical way of thinking is thus proposed in this research to obtain an optimum design which may not be global but is better than most local optimum solutions that can be found by gradient-based search methods. The way to reach this goal is to find a smaller search space for gradient-based search methods. It is found in this research that data mining can accomplish this goal easily. The activities of classification, association and clustering in data mining are employed to reduce the original design space. For unconstrained optimization problems, the data mining activities are used to find a smaller search region which contains the global or better local solutions. For constrained optimization problems, it is used to find the feasible region or the feasible region with better objective values. Numerical examples show that the optimum solutions found in the reduced design space by sequential quadratic programming (SQP) are indeed much better than those found by SQP in the original design space. The optimum solutions found in a reduced space by SQP sometimes are even better than the solution found using a hybrid global search method with approximate structural analyses.
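    A toy version of the pipeline makes the idea concrete: sample the full design space, keep the best fraction, cluster it to define a reduced box, then run a gradient-based method (SLSQP standing in for SQP) inside that box. The multimodal stand-in objective and the sklearn/scipy choices are assumptions for illustration, not the paper's test problems:

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
f = lambda x: np.sum(x**2) + 10 * np.sum(1 - np.cos(2 * np.pi * x))  # many local minima

# "Data mining" stage: sample the original design space and cluster the best 5%.
X = rng.uniform(-5.12, 5.12, (2000, 2))
y = np.array([f(x) for x in X])
good = X[y < np.quantile(y, 0.05)]
center = KMeans(n_clusters=1, n_init=10, random_state=0).fit(good).cluster_centers_[0]
lo, hi = good.min(axis=0), good.max(axis=0)        # reduced search box

# Gradient-based stage: local search restricted to the reduced space.
res = minimize(f, x0=center, method="SLSQP", bounds=list(zip(lo, hi)))
print(res.x, res.fun)
```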

  22. [Temporal and spatial heterogeneity analysis of optimal value of sensitive parameters in ecological process model: The BIOME-BGC model as an example].

    PubMed

    Li, Yi Zhe; Zhang, Ting Long; Liu, Qiu Yu; Li, Ying

    2018-01-01

    Ecological process models are powerful tools for studying terrestrial ecosystem water and carbon cycles. However, these models have many parameters, and whether reasonable values are adopted for these parameters has an important impact on the simulation results. The sensitivity and optimization of model parameters have been analyzed and discussed in many studies, but the temporal and spatial heterogeneity of the optimal parameter values has received less attention. In this paper, the BIOME-BGC model was used as an example. For evergreen broad-leaved forest, deciduous broad-leaved forest and C3 grassland, the sensitive parameters of the model were selected by constructing a sensitivity judgment index, with two experimental sites selected under each vegetation type. An objective function was constructed using the simulated annealing algorithm combined with flux data to obtain the monthly optimal values of the sensitive parameters at each site. We then constructed a temporal heterogeneity judgment index, a spatial heterogeneity judgment index and a temporal-spatial heterogeneity judgment index to quantitatively analyze the temporal and spatial heterogeneity of the optimal values of the model's sensitive parameters. The results showed that the sensitivity of the BIOME-BGC model parameters differed among vegetation types, but the selected sensitive parameters were mostly consistent. The optimal values of the sensitive parameters mostly presented temporal and spatial heterogeneity to different degrees, varying with vegetation type. The sensitive parameters related to vegetation physiology and ecology had relatively little temporal and spatial heterogeneity, while those related to environment and phenology generally had larger temporal and spatial heterogeneity. In addition, the temporal heterogeneity of the optimal values of the model's sensitive parameters showed a significant linear correlation…

  23. Computational alternatives to obtain time optimal jet engine control. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Basso, R. J.; Leake, R. J.

    1976-01-01

    Two computational methods are described for determining an open-loop time-optimal control sequence for a simple single-spool turbojet engine modeled by a set of nonlinear differential equations. Both methods are modifications of widely accepted algorithms that solve fixed-time unconstrained optimal control problems with a free right end. The constrained problems considered here have fixed right ends and free final time. Dynamic programming is defined on a standard problem and yields a successive-approximation solution to the time-optimal problem of interest. A feedback control law is obtained, which is then used to determine the corresponding open-loop control sequence. The Fletcher-Reeves conjugate gradient method has been selected for adaptation to solve a nonlinear optimal control problem with state variable and control constraints.

  24. Rethinking key–value store for parallel I/O optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kougkas, Anthony; Eslami, Hassan; Sun, Xian-He

    2015-01-26

    Key-value stores are being widely used as the storage system for large-scale internet services and cloud storage systems. However, they are rarely used in HPC systems, where parallel file systems are the dominant storage solution. In this study, we examine the architecture differences and performance characteristics of parallel file systems and key-value stores. We propose using key-value stores to optimize overall Input/Output (I/O) performance, especially for workloads that parallel file systems cannot handle well, such as the cases with intense data synchronization or heavy metadata operations. We conducted experiments with several synthetic benchmarks, an I/O benchmark, and a real application. We modeled the performance of these two systems using collected data from our experiments, and we provide a predictive method to identify which system offers better I/O performance given a specific workload. The results show that we can optimize the I/O performance in HPC systems by utilizing key-value stores.

  25. Optimal spatio-temporal design of water quality monitoring networks for reservoirs: Application of the concept of value of information

    NASA Astrophysics Data System (ADS)

    Maymandi, Nahal; Kerachian, Reza; Nikoo, Mohammad Reza

    2018-03-01

    This paper presents a new methodology for optimizing Water Quality Monitoring (WQM) networks of reservoirs and lakes using the concept of the value of information (VOI) and the results of a calibrated numerical water quality simulation model. In the value of information framework, the water quality at each checkpoint, with a specific prior probability, varies in time. After analyzing water quality samples taken from potential monitoring points, the posterior probabilities are updated using Bayes' theorem, and the VOI of the samples is calculated. In the next step, the stations with maximum VOI are selected as optimal stations. This process is repeated for each sampling interval to obtain optimal monitoring network locations for each interval. The results of the proposed VOI-based methodology are compared with those obtained using an entropy-theoretic approach. As the results of the two methodologies are partially different, they are then combined using a weighting method. Finally, the optimal sampling interval and locations of WQM stations are chosen using the Evidential Reasoning (ER) decision-making method. The efficiency and applicability of the methodology are evaluated using available water quantity and quality data for the Karkheh Reservoir in southwestern Iran.
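    The VOI computation itself is a standard preposterior analysis: the value of sampling at a station is the prior Bayes risk minus the expected posterior Bayes risk over possible sample outcomes. A two-state sketch with hypothetical probabilities and losses (in the paper, the states, likelihoods and consequences come from the calibrated water quality model):

```python
import numpy as np

prior = np.array([0.30, 0.70])           # P(violation), P(no violation) at a checkpoint
likelihood = np.array([[0.9, 0.2],       # P(sample reads "violation" | state)
                       [0.1, 0.8]])      # P(sample reads "ok"        | state)
loss = np.array([[0.0, 1.0],             # loss of acting as if "violation", per state
                 [5.0, 0.0]])            # loss of acting as if "ok",        per state

def bayes_risk(p):
    """Expected loss of the best action under belief p."""
    return (loss @ p).min()

evidence = likelihood @ prior                          # P(each sample outcome)
posteriors = likelihood * prior / evidence[:, None]    # Bayes' theorem, one row per outcome
voi = bayes_risk(prior) - sum(evidence[k] * bayes_risk(posteriors[k]) for k in range(2))
print(voi)   # stations with the largest VOI are retained in the monitoring network
```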

  26. Optimal control theory for non-scalar-valued performance criteria. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Gerring, H. P.

    1971-01-01

    The theory of optimal control for non-scalar-valued performance criteria is discussed. In the space where the performance criterion attains its value, the relations better than, worse than, not better than, and not worse than are defined by a partial order relation. The notion of optimality splits into superiority and non-inferiority, because worse than is not, in general, the complement of better than. A superior solution is better than every other solution. A non-inferior solution is not worse than any other solution. Non-inferior solutions have been investigated particularly for vector-valued performance criteria. The emphasis here is on superior solutions for non-scalar-valued performance criteria attaining their values in abstract partially ordered spaces. The main result is the infimum principle, which constitutes necessary conditions for a control to be a superior solution to an optimal control problem.

  27. Total sperm per ejaculate of men: obtaining a meaningful value or a mean value with appropriate precision.

    PubMed

    Amann, Rupert P; Chapman, Phillip L

    2009-01-01

    We retrospectively mined and modeled data to answer 3 questions. 1) Relative to an estimate based on approximately 20 semen samples, how imprecise is an estimate of an individual's total sperm per ejaculate (TSperm) based on 1 sample? 2) What is the impact of abstinence interval on TSperm and TSperm/h? 3) How many samples are needed to provide a meaningful estimate of an individual's mean TSperm or TSperm/h? Data were for 18-20 consecutive masturbation samples from each of 48 semen donors. Modeling exploited the gamma distribution of values for TSperm and a unique approach to project to future samples. Answers: 1) Within-individual coefficients of variation were similar for TSperm and TSperm/h of abstinence and ranged from 17% to 51%, averaging approximately 34%. TSperm or TSperm/h in any individual sample from a given donor was between -20% and +20% of the mean value in 48% of the 18-20 samples per individual. 2) For a majority of individuals, TSperm increased in a nearly linear manner through approximately 72 hours of abstinence. TSperm and TSperm/h after 18-36 hours' abstinence are high. To obtain meaningful values for diagnostic purposes and maximize distinction of individuals with relatively low or high sperm production, the requested abstinence should be 42-54 hours, with an upper limit of 64 hours. For individuals producing few sperm, 7 days or more of abstinence might be appropriate to obtain sperm for insemination. 3) At least 3 samples from a hypothetical future subject are recommended for most applications. Assuming 60 hours' abstinence, 80% confidence limits for TSperm/h for 1, 3, or 6 samples would be 70%-163%, 80%-130%, or 85%-120% of the mean for observed values. In only approximately 50% of cases would TSperm/h for a single sample be within -16% and +30% of the true mean value for that subject. Pooling values for TSperm in samples obtained after 18-36 or 72-168 hours' abstinence with values for TSperm obtained after 42-64 hours is inappropriate. Reliance on…
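    A rough gamma-based sketch of the third answer: fixing the mean at 1 and setting the gamma shape from the ~34% within-individual CV, the central 80% interval of an n-sample mean narrows as n grows. The published limits come from the paper's more elaborate projection to future samples, so the numbers below are illustrative only:

```python
from scipy import stats

cv = 0.34                          # within-individual coefficient of variation
k = 1 / cv**2                      # gamma shape giving that CV (mean fixed at 1)
for n in (1, 3, 6):                # samples averaged per subject
    mean_dist = stats.gamma(a=n * k, scale=1 / (n * k))   # distribution of the n-sample mean
    lo, hi = mean_dist.ppf(0.10), mean_dist.ppf(0.90)     # central 80% limits
    print(n, f"{lo:.0%} - {hi:.0%} of the true mean")
```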

  28. Optimization and kinetic modeling of esterification of the oil obtained from waste plum stones as a pretreatment step in biodiesel production.

    PubMed

    Kostić, Milan D; Veličković, Ana V; Joković, Nataša M; Stamenković, Olivera S; Veljković, Vlada B

    2016-02-01

    This study reports on the use of oil obtained from waste plum stones as a low-cost feedstock for biodiesel production. Because of the high free fatty acid (FFA) level (15.8%), the oil was processed through a two-step process comprising esterification of the FFA and methanolysis of the esterified oil, catalyzed by H2SO4 and CaO, respectively. Esterification was optimized by response surface methodology combined with a central composite design. The second-order polynomial equation predicted the lowest acid value of 0.53 mg KOH/g under the following optimal reaction conditions: methanol:oil molar ratio of 8.5:1, catalyst amount of 2% and reaction temperature of 45 °C. The predicted acid value agreed with the experimental acid value (0.47 mg KOH/g). The kinetics of FFA esterification was described by an irreversible pseudo-first-order reaction rate law. The apparent kinetic constant was correlated with the initial methanol and catalyst concentrations and the reaction temperature. The activation energy of the esterification reaction decreased slightly from 13.23 to 11.55 kJ/mol as the catalyst concentration increased from 0.049 to 0.172 mol/dm³. In the second step, the esterified oil reacted with methanol (methanol:oil molar ratio of 9:1) in the presence of CaO (5% of the oil mass) at 60 °C. The properties of the obtained biodiesel were within the EN 14214 standard limits. Hence, waste plum stones might be a valuable raw material for obtaining fatty oil for use as an alternative feedstock in biodiesel production. Copyright © 2015 Elsevier Ltd. All rights reserved.
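    Two of the quantitative claims are easy to reproduce in miniature. For an irreversible pseudo-first-order reaction, ln(AV0/AV(t)) = k·t, so the apparent rate constant is the slope of a log-linear fit; and the Arrhenius relation ln(k2/k1) = -(Ea/R)(1/T2 - 1/T1) recovers an activation energy from rate constants at two temperatures. The acid values, times and rate constants below are hypothetical, chosen only so the resulting Ea lands near the reported 11.55 kJ/mol:

```python
import numpy as np

# Pseudo-first-order decay of the acid value AV (a proxy for FFA concentration):
t = np.array([0, 15, 30, 45, 60], float)        # min, hypothetical sampling times
av = np.array([31.6, 18.0, 10.2, 5.8, 3.3])     # mg KOH/g, hypothetical measurements
k_app, _ = np.polyfit(t, np.log(av[0] / av), 1) # slope = apparent rate constant
print(k_app)                                    # ~0.038 1/min

# Arrhenius estimate of the activation energy from k at two temperatures:
R = 8.314                                       # J/(mol K)
k1, T1 = 0.033, 308.15                          # hypothetical k at 35 C
k2, T2 = 0.038, 318.15                          # hypothetical k at 45 C
Ea = -R * np.log(k2 / k1) / (1 / T2 - 1 / T1)
print(Ea / 1000, "kJ/mol")                      # ~11.5 kJ/mol
```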

  29. Optimizing Value and Avoiding Problems in Building Schools.

    ERIC Educational Resources Information Center

    Brevard County School Board, Cocoa, FL.

    This report describes school design and construction delivery processes used by the School Board of Brevard County (Cocoa, Florida) that help optimize value, avoid problems, and eliminate the cost of maintaining a large facility staff. The project phases are examined from project definition through design to construction. Project delivery…

  30. Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization.

    PubMed

    Zhang, Si; Xu, Jie; Lee, Loo Hay; Chew, Ek Peng; Wong, Wai Peng; Chen, Chun-Hung

    2017-04-01

    Particle Swarm Optimization (PSO) is a popular metaheuristic for deterministic optimization. Originating in interpretations of the movement of individuals in a bird flock or fish school, PSO introduces the concepts of personal best and global best to simulate the pattern of searching for food by flocking, and successfully translates these natural phenomena to the optimization of complex functions. Many real-life applications of PSO cope with stochastic problems. To solve a stochastic problem using PSO, a straightforward approach is to allocate computational effort equally among all particles and obtain the same number of samples of fitness values. This is not an efficient use of the computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation (OCBA) into PSO to improve the computational efficiency of PSO for stochastic optimization problems. We derive an asymptotically optimal allocation rule to intelligently determine the number of samples for all particles such that the PSO algorithm can efficiently select the personal best and global best when there is stochastic estimation noise in fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that our new approach can obtain much better results using the same amount of computational effort.
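    The asymptotic OCBA allocation rule referred to above has a closed form (Chen et al.): each non-best design receives replications in proportion to (sigma_i / delta_i)^2, where delta_i is the gap to the best mean, and the best design receives sigma_b · sqrt(sum of N_i^2 / sigma_i^2) over the others. A sketch with hypothetical particle fitness statistics:

```python
import numpy as np

def ocba_allocation(means, stds, budget):
    """Split `budget` replications across designs per the asymptotic OCBA ratios."""
    means, stds = np.asarray(means, float), np.asarray(stds, float)
    b = int(np.argmin(means))                  # best design (minimization)
    rest = np.arange(len(means)) != b
    ratio = np.empty_like(means)
    ratio[rest] = (stds[rest] / (means[rest] - means[b])) ** 2
    ratio[b] = stds[b] * np.sqrt(np.sum(ratio[rest] ** 2 / stds[rest] ** 2))
    return np.round(budget * ratio / ratio.sum()).astype(int)

# Hypothetical (mean, std) of each particle's noisy fitness estimates:
print(ocba_allocation(means=[1.0, 1.2, 2.0, 3.0], stds=[0.5, 0.5, 0.8, 0.8], budget=1000))
```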

  31. Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization

    PubMed Central

    Zhang, Si; Xu, Jie; Lee, Loo Hay; Chew, Ek Peng; Chen, Chun-Hung

    2017-01-01

    Particle Swarm Optimization (PSO) is a popular metaheuristic for deterministic optimization. Originating in interpretations of the movement of individuals in a bird flock or fish school, PSO introduces the concepts of personal best and global best to simulate the pattern of searching for food by flocking, and successfully translates these natural phenomena to the optimization of complex functions. Many real-life applications of PSO cope with stochastic problems. To solve a stochastic problem using PSO, a straightforward approach is to allocate computational effort equally among all particles and obtain the same number of samples of fitness values. This is not an efficient use of the computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation (OCBA) into PSO to improve the computational efficiency of PSO for stochastic optimization problems. We derive an asymptotically optimal allocation rule to intelligently determine the number of samples for all particles such that the PSO algorithm can efficiently select the personal best and global best when there is stochastic estimation noise in fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that our new approach can obtain much better results using the same amount of computational effort. PMID:29170617

  32. Determination of the optimal cutoff value for a serological assay: an example using the Johne's Absorbed EIA.

    PubMed Central

    Ridge, S E; Vizard, A L

    1993-01-01

    Traditionally, in order to improve diagnostic accuracy, existing tests have been replaced with newly developed diagnostic tests with superior sensitivity and specificity. However, it is also possible to improve existing tests by altering the cutoff value chosen to distinguish infected individuals from uninfected individuals. This paper uses data obtained from an investigation of the operating characteristics of the Johne's Absorbed EIA to demonstrate a method of determining a preferred cutoff value from several potentially useful cutoff settings. Also demonstrated are a method of determining the financial gain from using the preferred rather than the current cutoff value, and a decision analysis method to assist in determining the optimal cutoff value when critical population parameters are not known with certainty. The results of this study indicate that the currently recommended cutoff value for the Johne's Absorbed EIA is close to optimal only when the disease prevalence is very low and false-positive test results are deemed to be very costly. In other situations, there were considerable financial advantages to using cutoff values calculated to maximize the benefit of testing. The current cutoff values of other diagnostic tests may likewise not be the most appropriate for every testing situation. This paper offers methods for identifying the cutoff value that maximizes the benefit of medical and veterinary diagnostic tests. PMID:8501227

  33. Parametric optimal control of uncertain systems under an optimistic value criterion

    NASA Astrophysics Data System (ADS)

    Li, Bo; Zhu, Yuanguo

    2018-01-01

    It is well known that the optimal control of a linear quadratic model is characterized by the solution of a Riccati differential equation. In many cases, the corresponding Riccati differential equation cannot be solved exactly, so the optimal feedback control may be a complicated time-dependent function. In this article, a parametric optimal control problem of an uncertain linear quadratic model under an optimistic value criterion is considered in order to simplify the expression of the optimal control. Based on the equation of optimality for the uncertain optimal control problem, an approximation method is presented to solve it. As an application, a two-spool turbofan engine optimal control problem is given to show the utility of the proposed model and the efficiency of the presented approximation method.

  34. On Revenue-Optimal Dynamic Auctions for Bidders with Interdependent Values

    NASA Astrophysics Data System (ADS)

    Constantin, Florin; Parkes, David C.

    In a dynamic market, being able to update one's value based on information available to other bidders currently in the market can be critical to having profitable transactions. This is nicely captured by the model of interdependent values (IDV): a bidder's value can explicitly depend on the private information of other bidders. In this paper we present preliminary results about the revenue properties of dynamic auctions for IDV bidders. We adopt a computational approach to design single-item revenue-optimal dynamic auctions with known arrivals and departures but (private) signals that arrive online. Leveraging a characterization of truthful auctions, we present a mixed-integer programming formulation of the design problem. Although a discretization is imposed on bidder signals, the solution is a mechanism applicable to continuous signals. The formulation size grows exponentially in the dependence of bidders' values on other bidders' signals. We highlight general properties of revenue-optimal dynamic auctions in a simple parametrized example and study the sensitivity of prices and revenue to model parameters.

  35. Optimizing value utilizing Toyota Kata methodology in a multidisciplinary clinic.

    PubMed

    Merguerian, Paul A; Grady, Richard; Waldhausen, John; Libby, Arlene; Murphy, Whitney; Melzer, Lilah; Avansino, Jeffrey

    2015-08-01

    Value in healthcare is measured in terms of patient outcomes achieved per dollar expended. Outcomes and cost must be measured at the patient level to optimize value. Multidisciplinary clinics have been shown to be effective in providing coordinated and comprehensive care with improved outcomes, yet they tend to have higher costs than typical clinics. We sought to lower individual patient cost and optimize value in a pediatric multidisciplinary reconstructive pelvic medicine (RPM) clinic. The RPM clinic is a multidisciplinary clinic that takes care of patients with anomalies of the pelvic organs. The specialties involved include Urology, General Surgery, Gynecology, and Gastroenterology/Motility. From May 2012 to November 2014 we performed time-driven activity-based costing (TDABC) analysis by measuring provider time for each step in the patient flow. Using the observed times and the estimated hourly cost of each of the providers, we calculated the final cost at the individual patient level, targeting clinic preparation. We utilized Toyota Kata methodology to enhance operational efficiency in an effort to optimize value. Variables measured included cost, time to perform a task, number of patients seen in clinic, percent value-added time (VAT) to patients (face-to-face time) and family experience scores (FES). At the beginning of the study period, clinic costs were $619 per patient. We reduced conference time from 6 min per patient to 1 min per patient and physician preparation time from 8 min to 6 min, and increased medical assistant (MA) preparation time from 9.5 min to 20 min, achieving a cost reduction of 41% to $366 per patient. Continued improvements further reduced the MA preparation time to 14 min and the MD preparation time to 5 min, with a further cost reduction to $194 (69%). During this study period, we increased the number of appointments per clinic. We demonstrated sustained improvement in FES with regard to the families' overall experience with their providers…
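    The TDABC arithmetic behind those cost figures is simply minutes-by-role times cost-per-minute, summed over the steps of a visit. The per-minute rates below are invented placeholders (the published totals also include steps not listed in the abstract), but the structure shows why shifting preparation from physician to medical assistant lowers the per-patient cost:

```python
# Hypothetical cost rates per minute of provider time ($/min):
cost_per_min = {"conference": 6.00, "MD": 4.00, "MA": 0.75}

def visit_cost(steps):
    """steps: (provider, minutes) pairs observed for one patient's visit."""
    return sum(cost_per_min[role] * minutes for role, minutes in steps)

before = [("conference", 6), ("MD", 8), ("MA", 9.5)]   # minutes from the abstract
after  = [("conference", 1), ("MD", 5), ("MA", 14)]
print(visit_cost(before), visit_cost(after))           # cheaper after shifting prep to the MA
```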

  36. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy.

    PubMed

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-05

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. Furthermore, the SVDLP…
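    The SVD compression step can be illustrated in isolation: when the influence matrix is (nearly) degenerate, one can solve in its dominant singular subspace and back-project to beam weights. The sketch below is a simplified stand-in that solves an unconstrained least-squares problem this way; the paper's actual model adds the hard/soft dose constraints and the l1 sparsity term inside an LP:

```python
import numpy as np

def svd_compress_solve(A, d, rank_tol=1e-8):
    """Least-squares beam weights via the dominant singular subspace of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    r = int(np.sum(s > rank_tol * s[0]))      # effective rank of the influence matrix
    y = (U[:, :r].T @ d) / s[:r]              # solve in the compressed space
    return Vt[:r].T @ y                       # back-project to beam weights

rng = np.random.default_rng(0)
A = rng.random((200, 50)) @ rng.random((50, 300))     # degenerate 200x300 influence matrix
d = A @ np.full(300, 0.5)                             # a realizable prescription dose
w = svd_compress_solve(A, d)
print(np.linalg.norm(A @ w - d) / np.linalg.norm(d))  # relative residual ~ 0
```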

  17. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy

    NASA Astrophysics Data System (ADS)

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-01

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1-norm of the beam weights. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into a lower dimension for optimization, and then back-projected to reconstruct the beam weights. After beam weight optimization, the number of beams is reduced by removing the beams with low weight and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. Furthermore, the SVDLP

  18. Using constraints and their value for optimization of large ODE systems

    PubMed Central

    Domijan, Mirela; Rand, David A.

    2015-01-01

    We provide analytical tools to facilitate a rigorous assessment of the quality and value of the fit of a complex model to data. We use this to provide approaches to model fitting, parameter estimation, the design of optimization functions and experimental optimization. This is in the context where multiple constraints are used to select or optimize a large model defined by differential equations. We illustrate the approach using models of circadian clocks and the NF-κB signalling system. PMID:25673300

  19. Diffusion-weighted MR imaging of the pancreas: optimizing b-value for visualization of pancreatic adenocarcinoma.

    PubMed

    Fukukura, Yoshihiko; Shindo, Toshikazu; Hakamada, Hiroto; Takumi, Koji; Umanodan, Tomokazu; Nakajo, Masanori; Kamimura, Kiyoshisa; Umanodan, Aya; Ideue, Junnichi; Yoshiura, Takashi

    2016-10-01

    To determine the optimal b-value of 3.0-T diffusion-weighted imaging (DWI) for visualizing pancreatic adenocarcinomas. Fifty-five patients with histologically confirmed pancreatic adenocarcinoma underwent DWI with different b-values (b = 500, 1000, 1500, and 2000 s/mm²) at 3.0 T. For each b-value, we retrospectively evaluated DWI findings of pancreatic adenocarcinomas (clear hyperintensity relative to the surrounding pancreas, hyperintensity with an unclear distal border, and isointensity) and image quality, and measured tumour-to-pancreas signal intensity (SI) ratios. DWI findings, image quality, and tumour-to-pancreas SI ratios were compared between the four b-values. There was a significantly higher incidence of tumours showing clear hyperintensity on DWI with a b-value of 1500 s/mm² than on that with a b-value of 1000 s/mm² (P < 0.001), and on DWI with a b-value of 1000 s/mm² than on that with a b-value of 500 s/mm² (P < 0.001). The tumour-to-distal pancreas SI ratio was higher with a b-value of 1500 s/mm² than with a b-value of 1000 s/mm² (P < 0.001), and with a b-value of 1000 s/mm² than with a b-value of 500 s/mm² (P < 0.001). A lower image quality was obtained at increasing b-values (P < 0.001); the lowest scores were observed with a b-value of 2000 s/mm². The use of b = 1500 s/mm² for 3.0-T DWI can improve the delineation of pancreatic adenocarcinomas. • Diffusion-weighted imaging (DWI) has been used for diagnosing pancreatic adenocarcinoma • The techniques for DWI, including the choice of b-values, vary considerably • DWI often fails to delineate pancreatic adenocarcinomas because of the hyperintense pancreas • DWI with a higher b-value can improve tumour delineation • The lowest image quality was obtained on DWI with b-value = 2000 s/mm².

  20. A framework for quantifying and optimizing the value of seismic monitoring of infrastructure

    NASA Astrophysics Data System (ADS)

    Omenzetter, Piotr

    2017-04-01

    This paper outlines a framework for quantifying and optimizing the value of information from structural health monitoring (SHM) technology deployed on large infrastructure, which may sustain damage in a series of earthquakes (the main shock and the aftershocks). The evolution of the damage state of the infrastructure, with or without SHM, is presented as a time-dependent, stochastic, discrete-state, observable and controllable nonlinear dynamical system. Pre-posterior Bayesian analysis and the decision tree are used for quantifying and optimizing the value of SHM information. An optimality problem is then formulated: how to decide on the adoption of SHM, and how to optimally manage the usage and operations of the possibly damaged infrastructure and its repair schedule using the information from SHM. The objective function to minimize is the expected total cost or risk.
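
    A minimal pre-posterior sketch of this value-of-information logic: compare the expected cost of acting on the prior with the expected cost of acting on the Bayesian posterior given an imperfect SHM observation. All probabilities and consequence costs below are invented for illustration.

        # States: "intact" or "damaged" after an earthquake; actions: "operate" or "restrict".
        p_damage = 0.2                            # assumed prior probability of damage
        cost = {("operate", "intact"): 0.0,       # consequence costs (arbitrary units)
                ("operate", "damaged"): 100.0,
                ("restrict", "intact"): 10.0,
                ("restrict", "damaged"): 10.0}

        def expected_cost(action, p_dmg):
            return (1 - p_dmg) * cost[(action, "intact")] + p_dmg * cost[(action, "damaged")]

        # Without SHM: choose the single best action under the prior.
        c_no_shm = min(expected_cost(a, p_damage) for a in ("operate", "restrict"))

        # With SHM: an imperfect system raises an "alarm" with assumed likelihoods;
        # act on the Bayesian posterior for each possible observation (pre-posterior step).
        p_alarm = {"damaged": 0.9, "intact": 0.1}  # assumed detection / false-alarm rates
        c_shm = 0.0
        for obs in ("alarm", "quiet"):
            like_dmg = p_alarm["damaged"] if obs == "alarm" else 1 - p_alarm["damaged"]
            like_int = p_alarm["intact"] if obs == "alarm" else 1 - p_alarm["intact"]
            p_obs = like_dmg * p_damage + like_int * (1 - p_damage)
            posterior = like_dmg * p_damage / p_obs
            c_shm += p_obs * min(expected_cost(a, posterior) for a in ("operate", "restrict"))

        print("expected value of SHM information:", c_no_shm - c_shm)  # compare to SHM cost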

  1. Marine protected areas and the value of spatially optimized fishery management

    PubMed Central

    Rassweiler, Andrew; Costello, Christopher; Siegel, David A.

    2012-01-01

    There is a growing focus around the world on marine spatial planning, including spatial fisheries management. Some spatial management approaches are quite blunt, as when marine protected areas (MPAs) are established to restrict fishing in specific locations. Other management tools, such as zoning or spatial user rights, will affect the distribution of fishing effort in a more nuanced manner. Considerable research has focused on the ability of MPAs to increase fishery returns, but the potential for the broader class of spatial management approaches to outperform MPAs has received far less attention. We use bioeconomic models of seven nearshore fisheries in Southern California to explore the value of optimized spatial management in which the distribution of fishing is chosen to maximize profits. We show that fully optimized spatial management can substantially increase fishery profits relative to optimal nonspatial management but that the magnitude of this increase depends on characteristics of the fishing fleet and target species. Strategically placed MPAs can also increase profits substantially compared with nonspatial management, particularly if fishing costs are low, although profit increases available through optimal MPA-based management are roughly half those from fully optimized spatial management. However, if the same total area is protected by randomly placing MPAs, starkly contrasting results emerge: most random MPA designs reduce expected profits. The high value of spatial management estimated here supports continued interest in spatially explicit fisheries regulations but emphasizes that predicted increases in profits can only be achieved if the fishery is well understood and the regulations are strategically designed. PMID:22753469

  2. Marine protected areas and the value of spatially optimized fishery management.

    PubMed

    Rassweiler, Andrew; Costello, Christopher; Siegel, David A

    2012-07-17

    There is a growing focus around the world on marine spatial planning, including spatial fisheries management. Some spatial management approaches are quite blunt, as when marine protected areas (MPAs) are established to restrict fishing in specific locations. Other management tools, such as zoning or spatial user rights, will affect the distribution of fishing effort in a more nuanced manner. Considerable research has focused on the ability of MPAs to increase fishery returns, but the potential for the broader class of spatial management approaches to outperform MPAs has received far less attention. We use bioeconomic models of seven nearshore fisheries in Southern California to explore the value of optimized spatial management in which the distribution of fishing is chosen to maximize profits. We show that fully optimized spatial management can substantially increase fishery profits relative to optimal nonspatial management but that the magnitude of this increase depends on characteristics of the fishing fleet and target species. Strategically placed MPAs can also increase profits substantially compared with nonspatial management, particularly if fishing costs are low, although profit increases available through optimal MPA-based management are roughly half those from fully optimized spatial management. However, if the same total area is protected by randomly placing MPAs, starkly contrasting results emerge: most random MPA designs reduce expected profits. The high value of spatial management estimated here supports continued interest in spatially explicit fisheries regulations but emphasizes that predicted increases in profits can only be achieved if the fishery is well understood and the regulations are strategically designed.

  3. Some optimal considerations in attitude control systems. [evaluation of value of relative weighting between time and fuel for relay control law

    NASA Technical Reports Server (NTRS)

    Boland, J. S., III

    1973-01-01

    The conventional six-engine reaction control jet relay attitude control law with deadband is shown to be a good linear approximation to a weighted time-fuel optimal control law. Techniques for evaluating the value of the relative weighting between time and fuel for a particular relay control law are studied, along with techniques to interrelate other parameters for the two control laws. Vehicle attitude control laws employing control moment gyros are then investigated. Steering laws obtained from the expression for the reaction torque of the gyro configuration are compared to a total optimal attitude control law that is derived from optimal linear regulator theory. This total optimal attitude control law has computational disadvantages in the solving of the matrix Riccati equation. Several computational algorithms for solving the matrix Riccati equation are investigated with respect to accuracy, computational storage requirements, and computational speed.
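
    For context, the matrix Riccati equation at the heart of the linear-regulator law above is solved routinely today. A minimal sketch with SciPy on a toy double-integrator attitude model (not the paper's vehicle model):

        import numpy as np
        from scipy.linalg import solve_continuous_are

        A = np.array([[0.0, 1.0],
                      [0.0, 0.0]])       # toy attitude model: angle and rate states
        B = np.array([[0.0],
                      [1.0]])            # control torque drives the rate equation
        Q = np.diag([10.0, 1.0])         # state weighting (assumed)
        R = np.array([[1.0]])            # control weighting (assumed)

        P = solve_continuous_are(A, B, Q, R)     # solves A'P + PA - PBR^{-1}B'P + Q = 0
        K = np.linalg.solve(R, B.T @ P)          # optimal regulator gain, u = -K x
        print("LQR gain K:", K)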

  4. Value Iteration Adaptive Dynamic Programming for Optimal Control of Discrete-Time Nonlinear Systems.

    PubMed

    Wei, Qinglai; Liu, Derong; Lin, Hanquan

    2016-03-01

    In this paper, a value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon undiscounted optimal control problems for discrete-time nonlinear systems. The present value iteration ADP algorithm permits an arbitrary positive semi-definite function to initialize the algorithm. A novel convergence analysis is developed to guarantee that the iterative value function converges to the optimal performance index function. Initialized by different initial functions, it is proven that the iterative value function will be monotonically nonincreasing, monotonically nondecreasing, or nonmonotonic and will converge to the optimum. In this paper, for the first time, the admissibility properties of the iterative control laws are developed for value iteration algorithms. It is emphasized that new termination criteria are established to guarantee the effectiveness of the iterative control laws. Neural networks are used to approximate the iterative value function and compute the iterative control law, respectively, for facilitating the implementation of the iterative ADP algorithm. Finally, two simulation examples are given to illustrate the performance of the present method.
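
    A toy tabular rendering of the iteration V_{i+1}(x) = min_u [U(x,u) + V_i(f(x,u))] on a discretized state/control grid; the dynamics, stage cost, and termination tolerance below are invented stand-ins for the paper's neural-network implementation.

        import numpy as np

        xs = np.linspace(-1.0, 1.0, 21)           # discretized states
        us = np.linspace(-0.5, 0.5, 11)           # discretized controls

        def f(x, u):                              # invented discrete-time nonlinear dynamics
            return np.clip(0.9 * x + u - 0.1 * x ** 3, -1.0, 1.0)

        def U(x, u):                              # utility (stage cost)
            return x ** 2 + u ** 2

        V = np.zeros(len(xs))                     # V0: any positive semi-definite initialization
        for _ in range(500):
            V_new = np.array([min(U(x, u) + V[np.abs(xs - f(x, u)).argmin()] for u in us)
                              for x in xs])
            if np.max(np.abs(V_new - V)) < 1e-9:  # termination criterion on value change
                break
            V = V_new
        print("converged V(0):", V[len(xs) // 2])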

  5. Determining the Optimal Values of Exponential Smoothing Constants--Does Solver Really Work?

    ERIC Educational Resources Information Center

    Ravinder, Handanhal V.

    2013-01-01

    A key issue in exponential smoothing is the choice of the values of the smoothing constants used. One approach that is becoming increasingly popular in introductory management science and operations management textbooks is the use of Solver, an Excel-based non-linear optimizer, to identify values of the smoothing constants that minimize a measure…
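
    The Solver step the abstract refers to amounts to a one-dimensional bounded minimization of the forecast error over the smoothing constant. A sketch with SciPy in place of Excel Solver, on an invented demand series:

        import numpy as np
        from scipy.optimize import minimize_scalar

        demand = np.array([120, 132, 128, 141, 150, 147, 155, 162, 158, 170], float)  # toy data

        def sse(alpha, y):
            # One-step-ahead squared forecast errors for simple exponential smoothing.
            f = y[0]                              # initialize the forecast at the first point
            total = 0.0
            for obs in y[1:]:
                total += (obs - f) ** 2
                f = alpha * obs + (1 - alpha) * f
            return total

        res = minimize_scalar(sse, bounds=(0.0, 1.0), args=(demand,), method="bounded")
        print(f"optimal alpha = {res.x:.3f}, SSE = {res.fun:.1f}")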

  6. Optimization of Penicillium aurantiogriseum protease immobilization on magnetic nanoparticles for antioxidant peptides' obtainment.

    PubMed

    Duarte Neto, José Manoel Wanderley; Maciel, Jackeline da Costa; Campos, Júlia Furtado; Carvalho Junior, Luiz Bezerra de; Marques, Daniela Araújo Viana; Lima, Carolina de Albuquerque; Porto, Ana Lúcia Figueiredo

    2017-08-09

    This work reports the optimization of the immobilization of a protease from Penicillium aurantiogriseum on polyaniline-coated magnetic nanoparticles for obtaining antioxidant peptides derived from bovine casein. The immobilization process was optimized using a full two-level factorial design (2⁴) followed by response surface methodology. Using the derivative, casein was hydrolyzed, uncovering peptides that were sequenced and whose antioxidant properties were tested through 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) diammonium salt (ABTS) radical scavenging and hydrogen peroxide scavenging assays. Optimal conditions for immobilization were 2 hr of immobilization, an offered protein amount of 200 µg/mL, an immobilization pH of 6.3, and 7.3 hr of activation. The derivative retains over 74% of its original activity after being reused five times. Free and immobilized enzyme casein hydrolysates presented similar peptide mass fingerprints, and prevalent peptides could be sequenced. Hydrolysates presented more than 2.5× higher ROS scavenging activity than nonhydrolyzed casein, which validates the capacity of the immobilized protease to produce casein-derived natural ingredients with potential for functional foods.
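
    As an illustration of the response-surface step, the sketch below fits a quadratic surface to invented yield data over two of the four factors (activation time and pH) and locates its maximizer; the data and ranges are not the study's.

        import numpy as np
        from scipy.optimize import minimize

        # Invented yields (%) over a 3x3 grid of activation time (h) and pH.
        X = np.array([[1.0, 5.5], [1.0, 6.25], [1.0, 7.0],
                      [2.0, 5.5], [2.0, 6.25], [2.0, 7.0],
                      [3.0, 5.5], [3.0, 6.25], [3.0, 7.0]])
        y = np.array([62.8, 69.2, 64.3, 68.4, 74.8, 69.9, 66.0, 72.4, 67.5])

        def design(M):
            t, p = M[:, 0], M[:, 1]
            return np.column_stack([np.ones_like(t), t, p, t * p, t ** 2, p ** 2])

        beta, *_ = np.linalg.lstsq(design(X), y, rcond=None)   # quadratic response surface

        res = minimize(lambda x: -(design(x[None, :]) @ beta).item(),  # maximize fitted yield
                       x0=[2.0, 6.25], bounds=[(1.0, 3.0), (5.5, 7.0)])
        print("predicted optimum (time h, pH):", np.round(res.x, 2))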

  7. Risk alignment in health care quality and financing: optimizing value.

    PubMed

    Granata, A V

    1998-01-01

    How should health care best consolidate rational cost control while preserving and enhancing quality? That is, how can a system best optimize value? A limitation of many current health management modalities may be that the power to control health spending has been expropriated from physician providers, while they are still fully responsible for quality. Assigning responsibility without authority is a significant predicament. There are growing indications that well-organized, well-managed groups of high quality physicians may be able to directly manage both types of risk-quality and financial. The best way to optimize responsibility and authority, and to control financial and quality risks, is to place such responsibility and authority within the same entity.

  8. Optimization of synthesis and peptization steps to obtain iron oxide nanoparticles with high energy dissipation rates

    NASA Astrophysics Data System (ADS)

    Mérida, Fernando; Chiu-Lam, Andreina; Bohórquez, Ana C.; Maldonado-Camargo, Lorena; Pérez, María-Eglée; Pericchi, Luis; Torres-Lugo, Madeline; Rinaldi, Carlos

    2015-11-01

    Magnetic Fluid Hyperthermia (MFH) uses heat generated by magnetic nanoparticles exposed to alternating magnetic fields to cause a temperature increase in tumors to the hyperthermia range (43-47 °C), inducing apoptotic cancer cell death. As with all cancer nanomedicines, one of the most significant challenges with MFH is achieving high nanoparticle accumulation at the tumor site. This motivates development of synthesis strategies that maximize the rate of energy dissipation of iron oxide magnetic nanoparticles, which are preferred due to their intrinsic biocompatibility. This has led to development of synthesis strategies that, although attractive from the point of view of chemical elegance, may not be suitable for scale-up to quantities necessary for clinical use. On the other hand, to date the aqueous co-precipitation synthesis, which readily yields gram quantities of nanoparticles, has only been reported to yield sufficiently high specific absorption rates after laborious size-selective fractionation. This work focuses on improvements to the aqueous co-precipitation of iron oxide nanoparticles to increase the specific absorption rate (SAR), by optimizing synthesis conditions and the subsequent peptization step. Heating efficiencies up to 1048 W/gFe (36.5 kA/m, 341 kHz; ILP = 2.3 nH m² kg⁻¹) were obtained, which represents one of the highest values reported for iron oxide particles synthesized by co-precipitation without size-selective fractionation. Furthermore, particles reached SAR values of up to 719 W/gFe (36.5 kA/m, 341 kHz; ILP = 1.6 nH m² kg⁻¹) when in a solid matrix, demonstrating they were capable of significant rates of energy dissipation even when restricted from physical rotation. Reduction in energy dissipation rate due to immobilization has been identified as an obstacle to clinical translation of MFH. Hence, particles obtained with the conditions reported here have great potential for application in nanoscale thermal cancer therapy.
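
    The ILP figures quoted above follow from the definition ILP = SAR/(H²f), which normalizes the specific absorption rate by field amplitude and frequency; the snippet reproduces the reported 2.3 nH m² kg⁻¹ from the abstract's own numbers.

        def ilp(sar_w_per_kg, h_a_per_m, f_hz):
            # ILP = SAR / (H^2 * f), in H m^2 / kg; multiply by 1e9 for nH m^2 / kg.
            return sar_w_per_kg / (h_a_per_m ** 2 * f_hz)

        sar = 1048.0 * 1e3     # 1048 W/g Fe -> W/kg Fe
        H = 36.5e3             # 36.5 kA/m -> A/m
        f = 341e3              # 341 kHz -> Hz
        print(f"ILP = {ilp(sar, H, f) * 1e9:.1f} nH m^2/kg")   # ~2.3, as reported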

  9. Design optimization of a prescribed vibration system using conjoint value analysis

    NASA Astrophysics Data System (ADS)

    Malinga, Bongani; Buckner, Gregory D.

    2016-12-01

    This article details a novel design optimization strategy for a prescribed vibration system (PVS) used to mechanically filter solids from fluids in oil and gas drilling operations. A dynamic model of the PVS is developed, and the effects of disturbance torques are detailed. This model is used to predict the effects of design parameters on system performance and efficiency, as quantified by system attributes. Conjoint value analysis, a statistical technique commonly used in marketing science, is utilized to incorporate designer preferences. This approach effectively quantifies and optimizes preference-based trade-offs in the design process. The effects of designer preferences on system performance and efficiency are simulated. This novel optimization strategy yields improvements in all system attributes across all simulated vibration profiles, and is applicable to other industrial electromechanical systems.

  10. New optimization scheme to obtain interaction potentials for oxide glasses

    NASA Astrophysics Data System (ADS)

    Sundararaman, Siddharth; Huang, Liping; Ispas, Simona; Kob, Walter

    2018-05-01

    We propose a new scheme to parameterize effective potentials that can be used to simulate atomic systems such as oxide glasses. As input data for the optimization, we use the radial distribution functions of the liquid and the vibrational density of state of the glass, both obtained from ab initio simulations, as well as experimental data on the pressure dependence of the density of the glass. For the case of silica, we find that this new scheme facilitates finding pair potentials that are significantly more accurate than the previous ones even if the functional form is the same, thus demonstrating that even simple two-body potentials can be superior to more complex three-body potentials. We have tested the new potential by calculating the pressure dependence of the elastic moduli and found a good agreement with the corresponding experimental data.

  11. An Optimal Estimation Method to Obtain Surface Layer Turbulent Fluxes from Profile Measurements

    NASA Astrophysics Data System (ADS)

    Kang, D.

    2015-12-01

    In the absence of direct turbulence measurements, the turbulence characteristics of the atmospheric surface layer are often derived from measurements of the surface layer mean properties based on Monin-Obukhov Similarity Theory (MOST). This approach requires two levels of the ensemble mean wind, temperature, and water vapor, from which the fluxes of momentum, sensible heat, and water vapor can be obtained. When only one measurement level is available, the roughness heights and the assumed properties of the corresponding variables at the respective roughness heights are used. In practice, the temporal mean with a large number of samples is used in place of the ensemble mean. However, in many situations the samples of data are taken from multiple levels. It is thus desirable to derive the boundary layer flux properties using all measurements. In this study, we used an optimal estimation approach to derive surface layer properties based on all available measurements. This approach assumes that the samples are taken from a population whose ensemble mean profile follows the MOST. An optimized estimate is obtained when the results yield a minimum cost function, defined as a weighted summation of the error variance at each sample altitude. The weights are based on the sample data variance and the altitude of the measurements. This method was applied to measurements in the marine atmospheric surface layer from a small boat using a radiosonde on a tethered balloon, where temperature and relative humidity profiles in the lowest 50 m were measured repeatedly in about 30 minutes. We will present the resultant fluxes and the derived MOST mean profiles using different sets of measurements. The advantage of this method over the 'traditional' methods will be illustrated. Some limitations of this optimization method will also be discussed. Its application to quantify the effects of the marine surface layer environment on radar and communication signal propagation will be shown as well.
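
    A minimal sketch of the weighted-cost-function idea: fit an assumed ensemble-mean profile to samples from several heights by weighted least squares. For brevity the profile is the neutral logarithmic wind law rather than the full MOST profile with stability corrections, and the data are invented.

        import numpy as np
        from scipy.optimize import least_squares

        KAPPA = 0.4                                     # von Karman constant
        z = np.array([2.0, 5.0, 10.0, 20.0, 50.0])      # sample heights (m)
        u = np.array([4.1, 4.9, 5.6, 6.1, 7.0])         # invented mean wind speeds (m/s)
        w = 1.0 / np.array([0.3, 0.2, 0.2, 0.25, 0.4])  # weights ~ 1 / sample std per level

        def residuals(params):
            u_star, z0 = params
            model = (u_star / KAPPA) * np.log(z / z0)   # neutral log-law mean profile
            return w * (u - model)                      # weighted errors -> weighted cost

        fit = least_squares(residuals, x0=[0.3, 0.1], bounds=([0.01, 1e-4], [2.0, 1.0]))
        u_star, z0 = fit.x
        print(f"u* = {u_star:.2f} m/s (momentum flux ~ -u*^2), z0 = {z0:.3f} m")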

  12. Optimizing occupational exposure measurement strategies when estimating the log-scale arithmetic mean value--an example from the reinforced plastics industry.

    PubMed

    Lampa, Erik G; Nilsson, Leif; Liljelind, Ingrid E; Bergdahl, Ingvar A

    2006-06-01

    When assessing occupational exposures, repeated measurements are in most cases required. Repeated measurements are more resource intensive than a single measurement, so careful planning of the measurement strategy is necessary to assure that resources are spent wisely. The optimal strategy depends on the objectives of the measurements. Here, two different models of random effects analysis of variance (ANOVA) are proposed for the optimization of measurement strategies by the minimization of the variance of the estimated log-transformed arithmetic mean value of a worker group, i.e. the strategies are optimized for precise estimation of that value. The first model is a one-way random effects ANOVA model. For that model it is shown that the best precision in the estimated mean value is always obtained by including as many workers as possible in the sample while restricting the number of replicates to two or at most three regardless of the size of the variance components. The second model introduces the 'shared temporal variation' which accounts for those random temporal fluctuations of the exposure that the workers have in common. It is shown for that model that the optimal sample allocation depends on the relative sizes of the between-worker component and the shared temporal component, so that if the between-worker component is larger than the shared temporal component more workers should be included in the sample and vice versa. The results are illustrated graphically with an example from the reinforced plastics industry. If there exists a shared temporal variation at a workplace, that variability needs to be accounted for in the sampling design and the more complex model is recommended.
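
    Under the one-way random-effects model described above, the variance of the estimated group mean for k workers with n repeats each is Var = σ_B²/k + σ_W²/(kn); the snippet below (with invented variance components) shows why, for a fixed budget N = kn, sampling more workers with only two or three repeats wins.

        def var_mean(sigma2_between, sigma2_within, k, n):
            # Variance of the estimated group mean: sigma_B^2/k + sigma_W^2/(k*n).
            return sigma2_between / k + sigma2_within / (k * n)

        N = 24                                      # total measurements affordable
        for k, n in [(4, 6), (8, 3), (12, 2)]:      # allocations with k * n = N
            v = var_mean(sigma2_between=1.0, sigma2_within=2.0, k=k, n=n)
            print(f"k={k:2d} workers x n={n} repeats -> Var(mean) = {v:.3f}")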

  13. Optimal control, optimization and asymptotic analysis of Purcell's microswimmer model

    NASA Astrophysics Data System (ADS)

    Wiezel, Oren; Or, Yizhar

    2016-11-01

    Purcell's swimmer (1977) is a classic model of a three-link microswimmer that moves by performing periodic shape changes. Becker et al. (2003) showed that the swimmer's direction of net motion is reversed upon increasing the stroke amplitude of joint angles. Tam and Hosoi (2007) used numerical optimization in order to find optimal gaits for maximizing either net displacement or Lighthill's energetic efficiency. In our work, we analytically derive leading-order expressions as well as next-order corrections for both net displacement and energetic efficiency of Purcell's microswimmer. Using these expressions enables us to explicitly show the reversal in direction of motion, as well as obtaining an estimate for the optimal stroke amplitude. We also find the optimal swimmer's geometry for maximizing either displacement or energetic efficiency. Additionally, the gait optimization problem is revisited and analytically formulated as an optimal control system with only two state variables, which can be solved using Pontryagin's maximum principle. It can be shown that the optimal solution must follow a "singular arc". Numerical solution of the boundary value problem is obtained, which exactly reproduces Tam and Hosoi's optimal gait.

  14. Drying step optimization to obtain large-size transparent magnesium-aluminate spinel samples

    NASA Astrophysics Data System (ADS)

    Petit, Johan; Lallemant, Lucile

    2017-05-01

    In transparent ceramics processing, the green body elaboration step is probably the most critical one. Among the known techniques, wet shaping processes are particularly interesting because they enable the particles to find an optimum position on their own. Nevertheless, the presence of water molecules leads to drying issues. During water removal, the water concentration gradient induces cracks that limit the sample size: laboratory samples are generally less damaged because of their small size, but upscaling the samples for industrial applications leads to an increasing cracking probability. Thanks to the optimization of the drying step, large-size spinel samples were obtained.

  15. Defining the optimal cut-off values for liver enzymes in diagnosing blunt liver injury.

    PubMed

    Koyama, Tomohide; Hamada, Hirohisa; Nishida, Masamichi; Naess, Paal A; Gaarder, Christine; Sakamoto, Tetsuya

    2016-01-25

    Patients with blunt trauma to the liver have elevated levels of liver enzymes within a short time post injury, potentially useful in screening patients for computed tomography (CT). This study was performed to define the optimal cut-off values for serum aspartate aminotransferase (AST) and alanine aminotransferase (ALT) in patients with blunt liver injury diagnosed with contrast-enhanced multidetector-row CT (CE-MDCT). All patients admitted from May 2006 to July 2013 to Teikyo University Hospital Trauma and Critical Care Center who underwent abdominal CE-MDCT within 3 h after blunt trauma were retrospectively enrolled. Using receiver operating characteristic (ROC) curve analysis, the optimal cut-off values for AST and ALT were defined, and sensitivity and specificity were calculated. Of a total of 676 blunt trauma patients, 64 were diagnosed with liver injury (Group LI+) and 612 were without liver injury (Group LI-). Groups LI+ and LI- were comparable for age, Revised Trauma Score, and Probability of survival. The groups differed in Injury Severity Score [median 21 (interquartile range 9-33) vs. 17 (9-26) (p < 0.01)]. Group LI+ had higher AST than LI- [276 (48-503) vs. 44 (16-73); p < 0.001] and higher ALT [240 (92-388) vs. 32 (16-49); p < 0.001]. Using ROC curve analysis, the optimal cut-off values for AST and ALT were set at 109 U/l and 97 U/l, respectively. Based on these values, AST ≥ 109 U/l had a sensitivity of 81%, a specificity of 82%, a positive predictive value of 32%, and a negative predictive value of 98%. The corresponding values for ALT ≥ 97 U/l were 78, 88, 41 and 98%, respectively, and for the combination of AST ≥ 109 U/l and/or ALT ≥ 97 U/l were 84, 81, 32 and 98%, respectively. We have identified AST ≥ 109 U/l and ALT ≥ 97 U/l as optimal cut-off values in predicting the presence of liver injury, potentially useful as a screening tool for CT scan in patients otherwise eligible for observation only or as a transfer
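
    A sketch of the ROC cut-off search on synthetic enzyme values, using the Youden index (sensitivity + specificity − 1) as one common optimality criterion; the study's actual cut-offs came from its own cohort, and the criterion it used may differ.

        import numpy as np
        from sklearn.metrics import roc_curve

        rng = np.random.default_rng(1)
        ast_no_injury = rng.lognormal(mean=3.7, sigma=0.5, size=300)  # synthetic AST, no injury
        ast_injury = rng.lognormal(mean=5.4, sigma=0.8, size=40)      # synthetic AST, injury
        labels = np.r_[np.zeros(300), np.ones(40)]
        values = np.r_[ast_no_injury, ast_injury]

        fpr, tpr, thresholds = roc_curve(labels, values)
        j = tpr - fpr                               # Youden index at each candidate cut-off
        best = int(np.argmax(j))
        print(f"cut-off = {thresholds[best]:.0f} U/l, sensitivity = {tpr[best]:.2f}, "
              f"specificity = {1 - fpr[best]:.2f}")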

  16. Evaluation of optimized b-value sampling schemas for diffusion kurtosis imaging with an application to stroke patient data

    PubMed Central

    Yan, Xu; Zhou, Minxiong; Ying, Lingfang; Yin, Dazhi; Fan, Mingxia; Yang, Guang; Zhou, Yongdi; Song, Fan; Xu, Dongrong

    2013-01-01

    Diffusion kurtosis imaging (DKI) is a new method of magnetic resonance imaging (MRI) that provides non-Gaussian information that is not available in conventional diffusion tensor imaging (DTI). DKI requires data acquisition at multiple b-values for parameter estimation; this process is usually time-consuming. Therefore, fewer b-values are preferable to expedite acquisition. In this study, we carefully evaluated various acquisition schemas using different numbers and combinations of b-values. Acquisition schemas that sampled b-values distributed toward the two ends of the range proved optimal. Compared to conventional schemas using equally spaced b-values (ESB), optimized schemas require fewer b-values to minimize fitting errors in parameter estimation and may thus significantly reduce scanning time. From the ranked list of optimized schemas resulting from the evaluation, we recommend the 3b schema based on its estimation accuracy and time efficiency, which needs data from only 3 b-values at 0, around 800, and around 2600 s/mm². Analyses using voxel-based analysis (VBA) and region-of-interest (ROI) analysis with human DKI datasets support the use of the optimized 3b (0, 1000, 2500 s/mm²) DKI schema in practical clinical applications. PMID:23735303
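
    For reference, DKI parameter estimation fits the signal model S(b) = S0·exp(−bD + (bD)²K/6) at each voxel; with the recommended 3b schema, the three unknowns are exactly determined by three b-values. A toy single-voxel fit on synthetic data with assumed parameter values:

        import numpy as np
        from scipy.optimize import curve_fit

        def dki_signal(b, s0, D, K):
            # DKI model: S(b) = S0 * exp(-b*D + (b*D)^2 * K / 6)
            return s0 * np.exp(-b * D + (b * D) ** 2 * K / 6.0)

        b = np.array([0.0, 1000.0, 2500.0])           # the optimized 3b schema (s/mm^2)
        truth = (1.0, 1.0e-3, 0.8)                    # assumed S0, D (mm^2/s), K
        rng = np.random.default_rng(2)
        signal = dki_signal(b, *truth) * (1 + 0.01 * rng.standard_normal(b.size))

        params, _ = curve_fit(dki_signal, b, signal, p0=(1.0, 0.7e-3, 0.5))
        print("fitted S0, D, K:", np.round(params, 4))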

  17. Grey-Markov prediction model based on background value optimization and central-point triangular whitenization weight function

    NASA Astrophysics Data System (ADS)

    Ye, Jing; Dang, Yaoguo; Li, Bingjun

    2018-01-01

    The Grey-Markov forecasting model is a combination of a grey prediction model and a Markov chain that shows clear benefits for data sequences that are non-stationary and volatile. However, the state division process in the traditional Grey-Markov forecasting model is mostly based on subjectively chosen real numbers, which directly affects the accuracy of the forecast values. To address this, this paper introduces the central-point triangular whitenization weight function into the state division to calculate the possibility of the observed values lying in each state, reflecting the preference degrees of the different states in an objective way. In addition, background value optimization is applied to the traditional grey model to generate better-fitting data. By these means, the improved Grey-Markov forecasting model is built. Finally, taking grain production in Henan Province as an example, the model's validity is verified by comparison with GM(1,1) based on background value optimization and with the traditional Grey-Markov forecasting model.
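
    A compact GM(1,1) sketch showing where the background value enters; the conventional weight 0.5 in z(k) is exactly what background value optimization replaces with an optimized weight (the Markov-chain correction and the whitenization weight function are not shown). Data are invented.

        import numpy as np

        x0 = np.array([102.0, 106.0, 111.0, 117.0, 122.0])   # invented data sequence

        def gm11(x0, w=0.5):
            x1 = np.cumsum(x0)                        # accumulated generating operation (AGO)
            z = w * x1[1:] + (1 - w) * x1[:-1]        # background values (w=0.5 is classic)
            B = np.column_stack([-z, np.ones(len(z))])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # developing coeff., grey input
            k = np.arange(len(x0) + 1)                # fit the data plus a one-step forecast
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
            return np.diff(x1_hat, prepend=0.0)       # inverse AGO back to the original series

        print("fit + 1-step forecast:", np.round(gm11(x0), 1))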

  18. Obtaining the Optimal Dose in Alcohol Dependence Studies

    PubMed Central

    Wages, Nolan A.; Liu, Lei; O’Quigley, John; Johnson, Bankole A.

    2012-01-01

    In alcohol dependence studies, the treatment effect at different dose levels remains to be ascertained. Establishing this effect would aid us in identifying the best dose that has satisfactory efficacy while minimizing the rate of adverse events. We advocate the use of dose-finding methodology that has been successfully implemented in the cancer and HIV settings to identify the optimal dose in a cost-effective way. Specifically, we describe the continual reassessment method (CRM), an adaptive design proposed for cancer trials to reconcile the needs of dose-finding experiments with the ethical demands of established medical practice. We are applying adaptive designs for identifying the optimal dose of medications for the first time in the context of pharmacotherapy research in alcoholism. We provide an example of a topiramate trial as an illustration of how adaptive designs can be used to locate the optimal dose in alcohol treatment trials. It is believed that the introduction of adaptive design methods will enable the development of medications for the treatment of alcohol dependence to be accelerated. PMID:23189064

  19. Analysis of a quantitative PCR assay for CMV infection in liver transplant recipients: an intent to find the optimal cut-off value.

    PubMed

    Martín-Dávila, P; Fortún, J; Gutiérrez, C; Martí-Belda, P; Candelas, A; Honrubia, A; Barcena, R; Martínez, A; Puente, A; de Vicente, E; Moreno, S

    2005-06-01

    Preemptive therapy requires highly predictive tests for CMV disease. The CMV antigenemia assay (pp65 Ag) has been commonly used for rapid diagnosis of CMV infection. Amplification methods for early detection of CMV DNA are under analysis. The aim was to compare two diagnostic methods for CMV infection and disease in this population: quantitative PCR (qPCR), performed on two different samples (plasma and leukocytes, PMNs) using a commercial diagnostic test (COBAS Amplicor Monitor Test), versus pp65 Ag. This was a prospective study conducted in liver transplant recipients from February 2000 to February 2001. Analyses were performed on 164 samples collected weekly during the early post-transplant period from 33 patients. Agreements higher than 78% were observed between the three assays. Optimal qPCR cut-off values were calculated using ROC curves for two specific antigenemia values. For antigenemia ≥10 positive cells, the optimal cut-off value for qPCR in plasma was 1330 copies/ml, with a sensitivity (S) of 58% and a specificity (E) of 98%, and the optimal cut-off value for qPCR-cells was 713 copies/5×10⁶ cells (S: 91.7%, E: 86%). Using a threshold of antigenemia ≥20 positive cells, the optimal cut-off values were 1330 copies/ml for qPCR-plasma (S: 87%; E: 98%) and 4755 copies/5×10⁶ cells for qPCR-cells (S: 87.5%; E: 98%). Prediction values for the three assays were calculated in patients with CMV disease (9 pts; 27%). Considering the assays qualitatively, the most sensitive was CMV PCR in cells (S: 100%, E: 54%, PPV: 40%, NPV: 100%). Using specific cut-off values for disease detection, the sensitivity, specificity, PPV and NPV for antigenemia ≥10 positive cells were 89%, 83%, 67% and 95%, respectively. For qPCR-cells ≥713 copies/5×10⁶ cells they were 100%, 54%, 33% and 100%, and for plasma-qPCR ≥1330 copies/ml 78%, 77%, 47% and 89%, respectively. Optimal cut-offs for viral load performed in plasma and cells can be obtained for the breakpoint antigenemia value recommended for initiating

  20. Optimizing isothiocyanate formation during enzymatic glucosinolate breakdown by adjusting pH value, temperature and dilution in Brassica vegetables and Arabidopsis thaliana

    NASA Astrophysics Data System (ADS)

    Hanschen, Franziska S.; Klopsch, Rebecca; Oliviero, Teresa; Schreiner, Monika; Verkerk, Ruud; Dekker, Matthijs

    2017-01-01

    Consumption of glucosinolate-rich Brassicales vegetables is associated with a decreased risk of cancer with enzymatic hydrolysis of glucosinolates playing a key role. However, formation of health-promoting isothiocyanates is inhibited by the epithiospecifier protein in favour of nitriles and epithionitriles. Domestic processing conditions, such as changes in pH value, temperature or dilution, might also affect isothiocyanate formation. Therefore, the influences of these three factors were evaluated in accessions of Brassica rapa, Brassica oleracea, and Arabidopsis thaliana. Mathematical modelling was performed to determine optimal isothiocyanate formation conditions and to obtain knowledge on the kinetics of the reactions. At 22 °C and endogenous plant pH, nearly all investigated plants formed nitriles and epithionitriles instead of health-promoting isothiocyanates. Response surface models, however, clearly demonstrated that upon change in pH to domestic acidic (pH 4) or basic pH values (pH 8), isothiocyanate formation considerably increases. While temperature also affects this process, the pH value has the greatest impact. Further, a kinetic model showed that isothiocyanate formation strongly increases due to dilution. Finally, the results show that isothiocyanate intake can be strongly increased by optimizing the conditions of preparation of Brassicales vegetables.

  1. Optimization of Pumpkin Oil Recovery by Using Aqueous Enzymatic Extraction and Comparison of the Quality of the Obtained Oil with the Quality of Cold-Pressed Oil

    PubMed Central

    Roszkowska, Beata; Czaplicki, Sylwester; Tańska, Małgorzata

    2016-01-01

    The study was carried out to optimize pumpkin oil recovery in a process of aqueous extraction preceded by enzymatic maceration of the seeds, as well as to compare the quality of the obtained oil to the quality of cold-pressed pumpkin seed oil. Hydrated pulp of hulless pumpkin seeds was macerated using a 2% (by mass) cocktail of commercial pectinolytic, cellulolytic and proteolytic preparations (Rohapect® UF, Rohament® CL and Colorase® 7089). The optimization procedure utilized response surface methodology based on a Box-Behnken experimental design. The optimized variables of the enzymatic pretreatment were pH, temperature and maceration time. The results showed that a pH value, temperature and maceration time of 4.7, 54 °C and 15.4 h, respectively, maximized the oil yield at 72.64%. Among these variables, the impact of pH was crucial (accounting for over 73% of the explained variation) for oil recovery. The oil obtained by aqueous enzymatic extraction was richer in sterols, squalene and tocopherols, and only slightly less abundant in carotenoids, than the cold-pressed one. However, it had a lower oxidative stability, with the induction period shortened by approx. 30% in relation to the cold-pressed oil. PMID:28115898

  2. 41 CFR 102-75.305 - What type of appraisal value must be obtained for real property disposal transactions?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false What type of appraisal value must be obtained for real property disposal transactions? 102-75.305 Section 102-75.305 Public...-75.305 What type of appraisal value must be obtained for real property disposal transactions? For all...

  3. 41 CFR 102-75.305 - What type of appraisal value must be obtained for real property disposal transactions?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 41 Public Contracts and Property Management 3 2012-01-01 2012-01-01 false What type of appraisal value must be obtained for real property disposal transactions? 102-75.305 Section 102-75.305 Public...-75.305 What type of appraisal value must be obtained for real property disposal transactions? For all...

  4. 41 CFR 102-75.305 - What type of appraisal value must be obtained for real property disposal transactions?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 41 Public Contracts and Property Management 3 2014-01-01 2014-01-01 false What type of appraisal value must be obtained for real property disposal transactions? 102-75.305 Section 102-75.305 Public...-75.305 What type of appraisal value must be obtained for real property disposal transactions? For all...

  5. 41 CFR 102-75.305 - What type of appraisal value must be obtained for real property disposal transactions?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 41 Public Contracts and Property Management 3 2013-07-01 2013-07-01 false What type of appraisal value must be obtained for real property disposal transactions? 102-75.305 Section 102-75.305 Public...-75.305 What type of appraisal value must be obtained for real property disposal transactions? For all...

  6. 41 CFR 102-75.305 - What type of appraisal value must be obtained for real property disposal transactions?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 41 Public Contracts and Property Management 3 2011-01-01 2011-01-01 false What type of appraisal value must be obtained for real property disposal transactions? 102-75.305 Section 102-75.305 Public...-75.305 What type of appraisal value must be obtained for real property disposal transactions? For all...

  7. Impacts of Valuing Resilience on Cost-Optimal PV and Storage Systems for Commercial Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laws, Nicholas D; Anderson, Katherine H; DiOrio, Nicholas A

    Decreasing electric grid reliability in the US, along with increasing severe weather events, has greatly increased interest in resilient energy systems. Few studies have included the value of resilience when sizing PV and Battery Energy Storage Systems (BESS), and none have included the cost to island a PV and BESS, grid-connected costs and benefits, and the value of resilience. This work presents a novel method for incorporating the value of resilience provided by a PV and BESS into a techno-economic optimization model. Including the value of resilience in the design of a cost-optimal PV and BESS generally increases the system capacities, and in some cases makes a system economical where it was not before. For example, for a large hotel in Anaheim, CA, no system is economical without resilience valued; however, with a $5317/hr value of resilience, a 363 kW PV and 60 kWh BESS provides a net present value of $50,000. Lastly, we discuss the effect of the 'islandable premium', which must be balanced against the benefits from serving critical loads during outages. Case studies show that the islandable premium can vary widely, which highlights the necessity for case-by-case solutions in a rapidly developing market.
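
    A toy net-present-value screen illustrating how adding a value-of-resilience (VoR) term can flip a marginal system to economical, in the spirit of the Anaheim example; every input except the $5317/hr VoR quoted above is invented, and this is not the study's optimization model itself.

        def npv(annual_benefit, capital_cost, years=25, discount=0.06):
            pv = sum(annual_benefit / (1 + discount) ** t for t in range(1, years + 1))
            return pv - capital_cost

        cap_cost = 1_200_000.0    # installed cost incl. an "islandable premium" (invented)
        bill_savings = 60_000.0   # grid-connected benefits: energy + demand savings (invented)
        vor_per_hr = 5317.0       # value of resilience from the abstract, $/hr
        outage_hrs = 10.0         # assumed expected outage hours served per year

        print("NPV without resilience:", round(npv(bill_savings, cap_cost)))
        print("NPV with resilience:   ",
              round(npv(bill_savings + vor_per_hr * outage_hrs, cap_cost)))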

  8. The use of singular value gradients and optimization techniques to design robust controllers for multiloop systems

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Mukhopadhyay, V.

    1983-01-01

    A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two-input/two-output drone flight control system.
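
    The robustness measure named above is easy to evaluate numerically: scan frequency and take the minimum singular value of the return difference matrix I + G(jω)K(jω). The 2x2 plant and static gain below are invented placeholders, not the drone model.

        import numpy as np

        def G(s):                                   # invented 2x2 plant transfer matrix
            return np.array([[1 / (s + 1), 0.5 / (s + 2)],
                             [0.2 / (s + 1), 1 / (s + 3)]])

        K = np.array([[2.0, 0.0],                   # invented static feedback gain
                      [0.0, 1.5]])

        freqs = np.logspace(-2, 2, 200)
        sigma_min = min(np.linalg.svd(np.eye(2) + G(1j * w) @ K, compute_uv=False)[-1]
                        for w in freqs)
        print(f"min singular value of I + GK over frequency: {sigma_min:.3f}")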

  9. The use of singular value gradients and optimization techniques to design robust controllers for multiloop systems

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Mukhopadhyay, V.

    1983-01-01

    A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two-input/two-output drone flight control system.

  10. Optimal transformations leading to normal distributions of positron emission tomography standardized uptake values.

    PubMed

    Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert

    2018-01-30

    The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits utilization of powerful parametric statistical models for analyzing SUV measurements. An ad hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and to assess the effects of therapy on the optimal transformations. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients that underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients that underwent 18F-fluorothymidine (18F-FLT) PET scans at our institution. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both 18F-FDG and 18F-FLT where a log transformation was not optimal for providing normal SUV distributions. Optimization of the Box-Cox transformation offers a solution for identifying normal SUV transformations for when the log
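
    The selection rule the abstract describes is directly implementable: scan λ and keep the value that maximizes the Shapiro-Wilk P-value of the Box-Cox-transformed SUVs. A sketch on synthetic skewed data (the real analysis used the patients' SUVmax values):

        import numpy as np
        from scipy import stats

        suv = np.random.default_rng(3).lognormal(mean=1.0, sigma=0.6, size=57)  # skewed, positive

        lambdas = np.linspace(-2.0, 2.0, 401)
        pvals = [stats.shapiro(stats.boxcox(suv, lmbda=lam))[1] for lam in lambdas]
        best = lambdas[int(np.argmax(pvals))]
        print(f"optimal lambda = {best:.2f}, Shapiro-Wilk P = {max(pvals):.3f}")
        # lambda near 0 reproduces the log transform; other data may demand a different lambda.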

  11. Optimal transformations leading to normal distributions of positron emission tomography standardized uptake values

    NASA Astrophysics Data System (ADS)

    Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert

    2018-02-01

    The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits utilization of powerful parametric statistical models for analyzing SUV measurements. An ad hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and to assess the effects of therapy on the optimal transformations. Methods. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients that underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients that underwent 18F-fluorothymidine (18F-FLT) PET scans at our institution. Results. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both 18F-FDG and 18F-FLT where a log transformation was not optimal for providing normal SUV distributions. Conclusion. Optimization of the Box-Cox transformation offers a solution for identifying normal SUV transformations for when

  12. Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation

    NASA Astrophysics Data System (ADS)

    Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah

    2018-04-01

    The CNC machine is controlled by manipulating cutting parameters that directly influence the process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function. Nonetheless, industry still uses traditional techniques to obtain those values; lack of knowledge of optimization techniques is the main reason this issue persists. Therefore, a simple yet easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers easily understand and determine the best optimal parameters for their turning operations. This new system consists of two stages: modelling and optimization. For modelling the input-output and in-process parameters, a hybrid of the Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is used again to find the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between academia and industry by introducing a simple yet easy-to-implement optimization technique that gives accurate results while remaining fast.
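
    A minimal particle swarm optimization loop of the kind used in the optimization stage; the two "cutting parameters" and the objective standing in for the fitted ELM surface are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(4)

        def objective(x):                           # stand-in for the fitted ELM surface
            v, f = x[..., 0], x[..., 1]             # normalized cutting speed and feed rate
            return (v - 0.6) ** 2 + (f - 0.3) ** 2 + 0.1 * np.sin(8 * v) * np.sin(8 * f)

        n, dims = 30, 2
        x = rng.random((n, dims))                   # particle positions
        vel = np.zeros((n, dims))
        pbest, pbest_f = x.copy(), objective(x)     # personal bests
        gbest = pbest[pbest_f.argmin()].copy()      # global best

        for _ in range(100):
            r1, r2 = rng.random((2, n, dims))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
            x = np.clip(x + vel, 0.0, 1.0)          # keep parameters inside machine limits
            fvals = objective(x)
            better = fvals < pbest_f
            pbest[better], pbest_f[better] = x[better], fvals[better]
            gbest = pbest[pbest_f.argmin()].copy()

        print("best parameters:", np.round(gbest, 3), "objective:", round(float(pbest_f.min()), 4))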

  13. The Medication Optimization Value Proposition: Aligning Teams and Education to Improve Care.

    PubMed

    Easter, Jon C; DeWalt, Darren A

    2017-01-01

    United States health care lags behind other countries in quality and cost. The present health care system is unsustainable, and there is now a quick movement toward value-based care. This article lays out essential care delivery elements, and makes the case for medication optimization to enable new value-based models. Success factors include enhancing team-based care and interdisciplinary education to achieve patient-centered care. ©2017 by the North Carolina Institute of Medicine and The Duke Endowment. All rights reserved.

  14. Ring rolling process simulation for geometry optimization

    NASA Astrophysics Data System (ADS)

    Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio

    2017-10-01

    Ring Rolling is a complex hot forming process where different rolls are involved in the production of seamless rings. Since each roll must be independently controlled, different speed laws must be set; usually, in the industrial environment, a milling curve is introduced to monitor the shape of the workpiece during the deformation in order to ensure correct ring production. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components to be used in aerospace applications. In particular, the influence of process input parameters (feed rate of the mandrel and angular speed of the main roll) on geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model for HRR (Hot Ring Rolling) has been implemented in SFTC DEFORM V11. The FEM model has been used to formulate a proper optimization problem. The optimization procedure has been implemented in the commercial software DS ISight in order to find the combination of process parameters that minimizes the percentage error of each obtained dimension with respect to its nominal value. The software finds the relationship between input and output parameters by applying Response Surface Methodology (RSM), using the exact values of the output parameters at the control points of the design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. After the calculation of the response surfaces for the selected output parameters, an optimization procedure based on Genetic Algorithms has been applied. At the end, the error between each obtained dimension and its nominal value has been minimized. The constraints imposed were the maximum values of the standard deviations of the dimensions obtained for the final ring.

  15. Optimal atomic structure of amorphous silicon obtained from density functional theory calculations

    NASA Astrophysics Data System (ADS)

    Pedersen, Andreas; Pizzagalli, Laurent; Jónsson, Hannes

    2017-06-01

    Atomic structure of amorphous silicon consistent with several reported experimental measurements has been obtained from annealing simulations using electron density functional theory calculations and a systematic removal of weakly bound atoms. The excess energy and density with respect to the crystal are well reproduced in addition to radial distribution function, angular distribution functions, and vibrational density of states. No atom in the optimal configuration is locally in a crystalline environment as deduced by ring analysis and common neighbor analysis, but coordination defects are present at a level of 1%-2%. The simulated samples provide structural models of this archetypal disordered covalent material without preconceived notion of the atomic ordering or fitting to experimental data.

  16. MCTP system model based on linear programming optimization of apertures obtained from sequencing patient image data maps.

    PubMed

    Ureba, A; Salguero, F J; Barbeiro, A R; Jimenez-Ortega, E; Baeza, J A; Miras, H; Linares, R; Perucha, M; Leal, A

    2014-08-01

    The authors present a hybrid direct multileaf collimator (MLC) aperture optimization model, exclusively based on sequencing of patient imaging data, to be implemented on a Monte Carlo treatment planning system (MC-TPS) to allow the explicit radiation transport simulation of advanced radiotherapy treatments with optimal results in clinically practical times. The planning system (called CARMEN) is a full MC-TPS, controlled through a MATLAB interface, which is based on the sequencing of a novel map, called the "biophysical" map, generated from enhanced image data of patients to achieve a set of segments that are actually deliverable. In order to reduce the required computation time, the conventional fluence map has been replaced by the biophysical map, which is sequenced to provide direct apertures that are later weighted by means of an optimization algorithm based on linear programming. A ray-casting algorithm traverses the patient CT and assembles information about the structures encountered, the mass thickness crossed, and the PET values. Data are recorded to generate a biophysical map for each gantry angle. These maps are the input files for a home-made sequencer developed to take into account the interactions of photons and electrons with the MLC. For each linac (Axesse of Elekta and Primus of Siemens) and energy beam studied (6, 9, 12, 15 MeV and 6 MV), phase space files were simulated with the EGSnrc/BEAMnrc code. The dose calculation in the patient was carried out with the BEAMDOSE code. This code is a modified version of EGSnrc/DOSXYZnrc able to calculate beamlet doses so that they can be combined with different weights during the optimization process. Three complex radiotherapy treatments were selected to check the reliability of CARMEN in situations where the MC calculation can offer an added value: a head-and-neck case (Case I) with three targets delineated on PET/CT images and a demanding dose escalation; a partial breast irradiation case (Case II) solved

  17. A method for obtaining reduced-order control laws for high-order systems using optimization techniques

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, V.; Newsom, J. R.; Abel, I.

    1981-01-01

    A method of synthesizing reduced-order optimal feedback control laws for a high-order system is developed. A nonlinear programming algorithm is employed to search for the control law design variables that minimize a performance index defined by a weighted sum of mean-square steady-state responses and control inputs. An analogy with the linear quadratic Gaussian solution is utilized to select a set of design variables and their initial values. To improve the stability margins of the system, an input-noise adjustment procedure is used in the design algorithm. The method is applied to the synthesis of an active flutter-suppression control law for a wind tunnel model of an aeroelastic wing. The reduced-order controller is compared with the corresponding full-order controller and found to provide nearly optimal performance. The performance of the present method appeared to be superior to that of two other control law order-reduction methods. It is concluded that by using the present algorithm, nearly optimal low-order control laws with good stability margins can be synthesized.

  18. Parameter optimization of electrolytic process of obtaining sodium hypochlorite for disinfection of water

    NASA Astrophysics Data System (ADS)

    Bogoslovskii, S. Yu; Kuznetsov, N. N.; Boldyrev, V. S.

    2017-11-01

    Electrochlorination parameters were optimized in flowing and non-flowing modes for a cell with a volume of 1 l. At a current density of 0.1 A/cm2, in the range of flow rates from 0.8 to 6.0 l/h and with an initial solution temperature below 20°C, the outlet temperature is maintained close to the optimal 40°C. The pH of the solution during electrolysis increases to 8.8-9.4. A process was also studied in which solutions with a temperature of 7-8°C and sodium chloride concentrations of 25 and 35 g/l were used in the non-flowing cell. The dependence of the active chlorine concentration on the electrolysis time varies with the concentration of the initial sodium chloride solution. At a chloride concentration of 25 g/l, the virtually linear relationship makes it easy to choose the electrolysis time needed to obtain the required concentration of the product.

  19. Optimization of the parameters for obtaining zirconia-alumina coatings, made by flame spraying from results of numerical simulation

    NASA Astrophysics Data System (ADS)

    Ferrer, M.; Vargas, F.; Peña, G.

    2017-12-01

    The K-Sommerfeld values (K) and the melting percentage (% F) obtained by numerical simulation with the Jets et Poudres software were used to find the spraying parameters for zirconia-alumina coatings deposited by flame spraying, in order to obtain coatings with morphological and structural properties good enough for use as thermal insulation. The experimental results show the relationship between the Sommerfeld parameter and the porosity of the zirconia-alumina coatings: the lowest porosity is obtained when the K-Sommerfeld value is close to 45 with an oxidant flame, whereas superoxidant flames give K values close to 52, which improves wear resistance.

  20. Effects of b-value and number of gradient directions on diffusion MRI measures obtained with Q-ball imaging

    NASA Astrophysics Data System (ADS)

    Schilling, Kurt G.; Nath, Vishwesh; Blaber, Justin; Harrigan, Robert L.; Ding, Zhaohua; Anderson, Adam W.; Landman, Bennett A.

    2017-02-01

    High-angular-resolution diffusion-weighted imaging (HARDI) MRI acquisitions have become common for use with higher order models of diffusion. Despite successes in resolving complex fiber configurations and probing microstructural properties of brain tissue, there is no common consensus on the optimal b-value and number of diffusion directions to use for these HARDI methods. While this question has been addressed by analysis of the diffusion-weighted signal directly, it is unclear how this translates to the information and metrics derived from the HARDI models themselves. Using a high angular resolution data set acquired at a range of b-values, and repeated 11 times on a single subject, we study how the b-value and number of diffusion directions impact the reproducibility and precision of metrics derived from Q-ball imaging, a popular HARDI technique. We find that Q-ball metrics associated with tissue microstructure and white matter fiber orientation are sensitive to both the number of diffusion directions and the spherical harmonic representation of the Q-ball, and are often biased when undersampled. These results can advise researchers on appropriate acquisition and processing schemes, particularly when it comes to optimizing the number of diffusion directions needed for metrics derived from Q-ball imaging.
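
    The sensitivity to the number of directions can be sensed with a small numerical experiment: the sketch below builds a real, even-order spherical-harmonic design matrix (the representation underlying Q-ball) at random gradient directions and reports how its conditioning degrades as the direction count drops. The basis convention and random direction sampling are assumptions made for illustration, not the authors' acquisition protocol.

      # Hypothetical sketch: conditioning of a spherical-harmonic (SH) fit versus
      # the number of gradient directions, as in Q-ball-style reconstructions.
      import numpy as np
      from scipy.special import sph_harm

      def sh_design_matrix(theta, phi, lmax=6):
          """Real, even-order SH basis at the given directions (assumed convention)."""
          cols = []
          for l in range(0, lmax + 1, 2):          # even degrees only, as in Q-ball
              for m in range(-l, l + 1):
                  Y = sph_harm(abs(m), l, theta, phi)
                  if m < 0:
                      cols.append(np.sqrt(2) * Y.imag)
                  elif m == 0:
                      cols.append(Y.real)
                  else:
                      cols.append(np.sqrt(2) * Y.real)
          return np.column_stack(cols)

      rng = np.random.default_rng(1)
      for n_dirs in (96, 64, 32, 16):
          theta = rng.uniform(0, 2 * np.pi, n_dirs)       # azimuth
          phi = np.arccos(rng.uniform(-1, 1, n_dirs))     # polar, uniform on the sphere
          B = sh_design_matrix(theta, phi, lmax=6)
          print(n_dirs, "directions -> cond(B) =", round(float(np.linalg.cond(B)), 1))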

  1. Assessing the Value of Information for Identifying Optimal Floodplain Management Portfolios

    NASA Astrophysics Data System (ADS)

    Read, L.; Bates, M.; Hui, R.; Lund, J. R.

    2014-12-01

    Floodplain management is a complex portfolio problem that can be analyzed from an integrated perspective incorporating traditional structural and nonstructural options. One method to identify effective strategies for preparing for, responding to, and recovering from floods is to optimize for a portfolio of temporary (emergency) and permanent floodplain management options. A risk-based optimization approach to this problem assigns probabilities to specific flood events and calculates the associated expected damages. This approach is currently limited by: (1) the assumption of perfect flood forecast information, i.e., implementing temporary management activities according to the actual flood event may differ from optimizing based on forecasted information, and (2) the inability to assess system resilience across a range of possible future events (a risk-centric approach). Resilience is defined here as the ability of a system to absorb and recover from a severe disturbance or extreme event. In our analysis, resilience is a system property that requires integration of physical, social, and information domains. This work employs a 3-stage linear program to identify the optimal mix of floodplain management options, using conditional probabilities to represent perfect and imperfect flood stages (forecast vs. actual events). We assess the value of information in terms of minimizing damage costs for two theoretical cases - urban and rural systems. We use portfolio analysis to explore how the set of optimal management options differs depending on whether the goal is for the system to be risk-averse to a specified event or resilient over a range of events.

  2. Comparing the rankings obtained from two biodiversity indices: the Fair Proportion Index and the Shapley Value.

    PubMed

    Wicke, Kristina; Fischer, Mareike

    2017-10-07

    The Shapley Value and the Fair Proportion Index of phylogenetic trees have been frequently discussed as prioritization tools in conservation biology. Both indices rank species according to their contribution to total phylogenetic diversity, allowing for a simple conservation criterion. While both indices have their specific advantages and drawbacks, it has recently been shown that both values are closely related. However, as different authors use different definitions of the Shapley Value, the specific degree of relatedness depends on the specific version of the Shapley Value - it ranges from a high correlation index to equality of the indices. In this note, we first give an overview of the different indices. Then we turn our attention to the mere ranking order provided by either of the indices. We compare the rankings obtained from different versions of the Shapley Value for a phylogenetic tree of European amphibians and illustrate their differences. We then undertake further analyses on simulated data and show that even though the chance of two rankings being exactly identical (when obtained from different versions of the Shapley Value) decreases with an increasing number of taxa, the distance between the two rankings converges to zero, i.e., the rankings are becoming more and more alike. Moreover, we introduce our freely available software package FairShapley, which was implemented in Perl and with which all calculations have been performed. Copyright © 2017 Elsevier Ltd. All rights reserved.
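
    For readers unfamiliar with the index, the Fair Proportion index splits each edge's length evenly among the leaves descending from it; the minimal sketch below computes it on a hypothetical four-edge tree. The dict-based tree encoding is an assumption for illustration (the authors' FairShapley package is written in Perl).

      # Hypothetical sketch of the Fair Proportion index on a toy rooted tree.
      # Tree encoding: node -> list of (child, edge_length); leaves have no entry.
      tree = {
          "root": [("A", 1.0), ("X", 2.0)],
          "X": [("B", 0.5), ("C", 1.5)],
      }

      def leaves_below(node):
          if node not in tree:
              return [node]
          out = []
          for child, _ in tree[node]:
              out += leaves_below(child)
          return out

      def fair_proportion(node="root", acc=0.0):
          """Return {leaf: FP index}; acc carries the edge shares accumulated so far."""
          if node not in tree:
              return {node: acc}
          fp = {}
          for child, length in tree[node]:
              share = length / len(leaves_below(child))   # edge split among its leaves
              fp.update(fair_proportion(child, acc + share))
          return fp

      print(fair_proportion())   # FP values sum to the total phylogenetic diversity (5.0)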

  3. Optimal four-impulse rendezvous between coplanar elliptical orbits

    NASA Astrophysics Data System (ADS)

    Wang, JianXia; Baoyin, HeXi; Li, JunFeng; Sun, FuChun

    2011-04-01

    Rendezvous in circular or near-circular orbits has been investigated in great detail, while rendezvous in elliptical orbits of arbitrary eccentricity is not sufficiently explored. Among the various optimization methods proposed for fuel-optimal orbital rendezvous, Lawden's primer vector theory is favored by many researchers for its clear physical concept and simplicity of solution. Prussing applied the primer vector optimization theory to minimum-fuel, multiple-impulse, time-fixed orbital rendezvous in a near-circular orbit and achieved great success. Extending Prussing's work, this paper employs the primer vector theory to study trajectory optimization problems of elliptical-orbit rendezvous with arbitrary eccentricity. Based on the linearized equations of relative motion on an elliptical reference orbit (referred to as the T-H equations), the primer vector theory is used to deal with time-fixed multiple-impulse optimal rendezvous between two coplanar, coaxial elliptical orbits with arbitrarily large eccentricity. A parameter adjustment method is developed for the primer vector to satisfy Lawden's necessary condition for the optimal solution. Finally, the optimal multiple-impulse rendezvous solution, including the times, directions and magnitudes of the impulses, is obtained by solving the two-point boundary value problem. The rendezvous error of the linearized equations is also analyzed. The simulation results confirm the analysis: the rendezvous error is small for small eccentricities and larger for higher eccentricities. For better rendezvous accuracy in high-eccentricity orbits, a combined method of a multiplier penalty function with the simplex search method is used for local optimization. The simplex search method is sensitive to the initial values of the optimization variables, but the simulation results show that initial values from the primer vector theory and the local optimization algorithm can improve the rendezvous accuracy effectively with fast

  4. Free-form Airfoil Shape Optimization Under Uncertainty Using Maximum Expected Value and Second-order Second-moment Strategies

    NASA Technical Reports Server (NTRS)

    Huyse, Luc; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    Free-form shape optimization of airfoils poses unexpected difficulties. Practical experience has indicated that a deterministic optimization for discrete operating conditions can result in dramatically inferior performance when the actual operating conditions are different from the - somewhat arbitrary - design values used for the optimization. Extensions to multi-point optimization have proven unable to adequately remedy this problem of "localized optimization" near the sampled operating conditions. This paper presents an intrinsically statistical approach and demonstrates how the shortcomings of multi-point optimization with respect to "localized optimization" can be overcome. The practical examples also reveal how the relative likelihood of each of the operating conditions is automatically taken into consideration during the optimization process. This is a key advantage over the use of multipoint methods.
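
    The contrast between single-point and statistical design can be sketched with a toy model: below, a stand-in "drag" function is minimized once at a nominal Mach number and once in expectation over a Mach distribution, so that each operating condition is weighted by its likelihood. The drag model, distribution and quadrature are illustrative assumptions, not the paper's aerodynamic analysis.

      # Hypothetical sketch: deterministic single-point design versus minimizing
      # expected cost over a distribution of operating conditions.
      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      def drag(x, mach):
          # Toy stand-in: design x is "tuned" to one Mach number; off-design costs grow.
          return 1.0 + (x - mach) ** 2 + 0.05 * np.sin(40 * (x - mach)) ** 2

      # Operating-condition uncertainty: Mach ~ N(0.75, 0.03), discretized for quadrature.
      machs = np.linspace(0.65, 0.85, 41)
      w = norm.pdf(machs, loc=0.75, scale=0.03)
      w /= w.sum()

      single = minimize(lambda x: drag(x[0], 0.75), x0=[0.7])              # single-point
      robust = minimize(lambda x: np.sum(w * drag(x[0], machs)), x0=[0.7]) # expected value
      print("single-point design:", single.x[0], "| expected-value design:", robust.x[0])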

  5. Advanced launch system trajectory optimization using suboptimal control

    NASA Technical Reports Server (NTRS)

    Shaver, Douglas A.; Hull, David G.

    1993-01-01

    The maximum-final mass trajectory of a proposed configuration of the Advanced Launch System is presented. A model for the two-stage rocket is given; the optimal control problem is formulated as a parameter optimization problem; and the optimal trajectory is computed using a nonlinear programming code called VF02AD. Numerical results are presented for the controls (angle of attack and velocity roll angle) and the states. After the initial rotation, the angle of attack goes to a positive value to keep the trajectory as high as possible, returns to near zero to pass through the transonic regime and satisfy the dynamic pressure constraint, returns to a positive value to keep the trajectory high and to take advantage of minimum drag at positive angle of attack due to aerodynamic shading of the booster, and then rolls off to negative values to satisfy the constraints. Because the engines cannot be throttled, the maximum dynamic pressure occurs at a single point; there is no maximum dynamic pressure subarc. To test approximations for obtaining analytical solutions for guidance, two additional optimal trajectories are computed: one using untrimmed aerodynamics and one using no atmospheric effects except for the dynamic pressure constraint. It is concluded that untrimmed aerodynamics has a negligible effect on the optimal trajectory and that approximate optimal controls should be able to be obtained by treating atmospheric effects as perturbations.

  6. Optimal weighted combinatorial forecasting model of QT dispersion of ECGs in Chinese adults.

    PubMed

    Wen, Zhang; Miao, Ge; Xinlei, Liu; Minyi, Cen

    2016-07-01

    This study aims to provide a scientific basis for unifying the reference value standard of QT dispersion of ECGs in Chinese adults. Three predictive models, including a regression model, a principal component model, and an artificial neural network model, are combined to establish the optimal weighted combination model. The optimal weighted combination model and the single models are verified and compared. The optimal weighted combination model reduces the prediction risk of any single model and improves prediction precision. The geographical distribution of the reference values of Chinese adults' QT dispersion was mapped precisely using kriging methods. Once the geographical factors of a particular area are obtained, the reference value of QT dispersion of Chinese adults in that area can be estimated with the optimal weighted combination model, and the reference value anywhere in China can be read from the geographical distribution map.
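
    The weighting idea can be made concrete with a small sketch: given predictions from three models and observed values, solve for non-negative combination weights summing to one that minimize squared error. The data below are synthetic stand-ins, and the sum-to-one formulation is one common convention, not necessarily the exact scheme used in the paper.

      # Hypothetical sketch: optimal weighted combination of three forecasting models.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(2)
      y = rng.normal(50, 5, size=30)            # observed values (stand-in for QTd)
      preds = np.column_stack([                 # three models' predictions
          y + rng.normal(0, 2, 30),             # "regression" model
          y + rng.normal(1, 3, 30),             # "principal component" model
          y + rng.normal(-1, 4, 30),            # "neural network" model
      ])

      def sse(w):
          return np.sum((preds @ w - y) ** 2)

      res = minimize(sse, x0=np.full(3, 1 / 3),
                     bounds=[(0, 1)] * 3,
                     constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
      print("optimal combination weights:", np.round(res.x, 3))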

  7. The optimal design of UAV wing structure

    NASA Astrophysics Data System (ADS)

    Długosz, Adam; Klimek, Wiktor

    2018-01-01

    The paper presents an optimal design of a UAV wing made of composite materials. The aim of the optimization is to improve strength and stiffness together with a reduction of the weight of the structure. Three different types of functionals, which depend on stress, stiffness and the total mass, are defined. The paper presents an application of an in-house implementation of an evolutionary multi-objective algorithm to the optimization of the UAV wing structure. Values of the functionals are calculated on the basis of results obtained from numerical simulations. A numerical FEM model consisting of different composite materials is created. The adequacy of the numerical model is verified against experimental results obtained on a tensile testing machine. Examples of multi-objective optimization by means of a Pareto-optimal set of solutions are presented.
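
    As a minimal illustration of the Pareto-optimal set mentioned above, the sketch below filters a cloud of evaluated candidate designs down to its non-dominated subset. The two objectives (for example mass and negated stiffness, both to be minimized) and the random candidates are assumptions for illustration.

      # Hypothetical sketch: extract the Pareto-optimal (non-dominated) designs.
      import numpy as np

      def pareto_front(F):
          """Rows of F are objective vectors (all minimized); return non-dominated mask."""
          n = F.shape[0]
          keep = np.ones(n, dtype=bool)
          for i in range(n):
              if not keep[i]:
                  continue
              # A row dominates row i if it is no worse everywhere and better somewhere.
              dominates = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
              if dominates.any():
                  keep[i] = False
          return keep

      rng = np.random.default_rng(3)
      F = rng.uniform(size=(50, 2))       # e.g. [mass, -stiffness] for 50 candidate wings
      front = F[pareto_front(F)]
      print(len(front), "non-dominated designs out of", len(F))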

  8. Optimization of seismic isolation systems via harmony search

    NASA Astrophysics Data System (ADS)

    Melih Nigdeli, Sinan; Bekdaş, Gebrail; Alhan, Cenk

    2014-11-01

    In this article, the optimization of isolation system parameters via the harmony search (HS) optimization method is proposed for seismically isolated buildings subjected to both near-fault and far-fault earthquakes. To obtain optimum values of isolation system parameters, an optimization program was developed in Matlab/Simulink employing the HS algorithm. The objective was to obtain a set of isolation system parameters within a defined range that minimizes the acceleration response of a seismically isolated structure subjected to various earthquakes without exceeding a peak isolation system displacement limit. Several cases were investigated for different isolation system damping ratios and peak displacement limitations of seismic isolation devices. Time history analyses were repeated for the neighbouring parameters of optimum values and the results proved that the parameters determined via HS were true optima. The performance of the optimum isolation system was tested under a second set of earthquakes that was different from the first set used in the optimization process. The proposed optimization approach is applicable to linear isolation systems. Isolation systems composed of isolation elements that are inherently nonlinear are the subject of a future study. Investigation of the optimum isolation system parameters has been considered in parametric studies. However, obtaining the best performance of a seismic isolation system requires a true optimization by taking the possibility of both near-fault and far-fault earthquakes into account. HS optimization is proposed here as a viable solution to this problem.
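
    The HS loop itself is compact; a minimal sketch follows, tuning two hypothetical isolation parameters against a stand-in cost function (a real application would evaluate the peak acceleration from time-history analyses, as in the article). The bounds, HS constants and cost function are illustrative assumptions.

      # Hypothetical sketch of a minimal harmony search (HS) parameter optimization.
      import numpy as np

      rng = np.random.default_rng(4)
      lo, hi = np.array([1.5, 0.05]), np.array([4.0, 0.35])    # assumed search bounds

      def cost(x):
          # Stand-in for the peak-acceleration response of the isolated structure.
          return (x[0] - 3.0) ** 2 + 10 * (x[1] - 0.15) ** 2

      hms, hmcr, par, iters = 10, 0.9, 0.3, 2000
      bw = 0.05 * (hi - lo)                                    # pitch-adjustment bandwidth
      memory = lo + rng.uniform(size=(hms, 2)) * (hi - lo)     # harmony memory
      costs = np.array([cost(x) for x in memory])

      for _ in range(iters):
          new = np.empty(2)
          for j in range(2):
              if rng.random() < hmcr:                          # draw from memory...
                  new[j] = memory[rng.integers(hms), j]
                  if rng.random() < par:                       # ...with pitch adjustment
                      new[j] += rng.uniform(-1, 1) * bw[j]
              else:                                            # or sample at random
                  new[j] = lo[j] + rng.random() * (hi[j] - lo[j])
          new = np.clip(new, lo, hi)
          worst = np.argmax(costs)
          if cost(new) < costs[worst]:                         # replace the worst harmony
              memory[worst], costs[worst] = new, cost(new)

      print("best parameters found:", memory[np.argmin(costs)])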

  9. An Exploratory Study of Value-Added and Academic Optimism of Urban Reading Teachers

    ERIC Educational Resources Information Center

    Huff-Franklin, Clairie L.

    2017-01-01

    The purpose of this study is to explore the correlation between state-recorded value- added (VA) scores and academic optimism (AO) scores, which measure teacher self-efficacy, trust, and academic emphasis. The sample for this study is 87 third through eighth grade Reading teachers, from fifty-five schools, in an urban school district in Ohio who…

  10. Weak-value amplification and optimal parameter estimation in the presence of correlated noise

    NASA Astrophysics Data System (ADS)

    Sinclair, Josiah; Hallaji, Matin; Steinberg, Aephraim M.; Tollaksen, Jeff; Jordan, Andrew N.

    2017-11-01

    We analytically and numerically investigate the performance of weak-value amplification (WVA) and related parameter estimation methods in the presence of temporally correlated noise. WVA is a special instance of a general measurement strategy that involves sorting data into separate subsets based on the outcome of a second "partitioning" measurement. Using a simplified correlated noise model that can be analyzed exactly together with optimal statistical estimators, we compare WVA to a conventional measurement method. We find that WVA indeed yields a much lower variance of the parameter of interest than the conventional technique does, optimized in the absence of any partitioning measurements. In contrast, a statistically optimal analysis that employs partitioning measurements, incorporating all partitioned results and their known correlations, is found to yield an improvement—typically slight—over the noise reduction achieved by WVA. This result occurs because the simple WVA technique is not tailored to any specific noise environment and therefore does not make use of correlations between the different partitions. We also compare WVA to traditional background subtraction, a familiar technique where measurement outcomes are partitioned to eliminate unknown offsets or errors in calibration. Surprisingly, for the cases we consider, background subtraction turns out to be a special case of the optimal partitioning approach, possessing a similar typically slight advantage over WVA. These results give deeper insight into the role of partitioning measurements (with or without postselection) in enhancing measurement precision, which some have found puzzling. They also resolve previously made conflicting claims about the usefulness of weak-value amplification to precision measurement in the presence of correlated noise. We finish by presenting numerical results to model a more realistic laboratory situation of time-decaying correlations, showing that our conclusions hold

  11. Expected p-values in light of an ROC curve analysis applied to optimal multiple testing procedures.

    PubMed

    Vexler, Albert; Yu, Jihnhee; Zhao, Yang; Hutson, Alan D; Gurevich, Gregory

    2017-01-01

    Many statistical studies report p-values for inferential purposes. In several scenarios, the stochastic aspect of p-values is neglected, which may contribute to drawing wrong conclusions in real data experiments. The stochastic nature of p-values makes it difficult to use them to examine the performance of given testing procedures or the associations between investigated factors. We turn our focus to the modern statistical literature to address the expected p-value (EPV) as a measure of the performance of decision-making rules. During the course of our study, we prove that the EPV can be considered in the context of receiver operating characteristic (ROC) curve analysis, a well-established biostatistical methodology. The ROC-based framework provides a new and efficient methodology for investigating and constructing statistical decision-making procedures, including: (1) evaluation and visualization of properties of the testing mechanisms, considering, e.g. partial EPVs; (2) developing optimal tests via the minimization of EPVs; (3) creation of novel methods for optimally combining multiple test statistics. We demonstrate that the proposed EPV-based approach allows us to maximize the integrated power of testing algorithms with respect to various significance levels. In an application, we use the proposed method to construct the optimal test and analyze a myocardial infarction disease dataset. We outline the usefulness of the "EPV/ROC" technique for evaluating different decision-making procedures, their constructions and properties with an eye towards practical applications.
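
    The EPV itself is straightforward to estimate by simulation: generate data under the alternative many times, compute the test's p-value each time, and average. The sketch below does this for a two-sample t-test; the test and effect sizes are illustrative assumptions, not the paper's myocardial infarction application.

      # Hypothetical sketch: Monte Carlo estimate of the expected p-value (EPV)
      # under an alternative; a smaller EPV indicates a better-performing test.
      import numpy as np
      from scipy.stats import ttest_ind

      rng = np.random.default_rng(5)

      def epv(effect, n=30, reps=5000):
          ps = []
          for _ in range(reps):
              x = rng.normal(0.0, 1.0, n)
              y = rng.normal(effect, 1.0, n)    # data generated under H1
              ps.append(ttest_ind(x, y).pvalue)
          return np.mean(ps)

      for effect in (0.0, 0.3, 0.6):
          print(f"effect {effect}: EPV = {epv(effect):.3f}")  # ~0.5 under H0, smaller under H1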

  12. Reexamination of optimal quantum state estimation of pure states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayashi, A.; Hashimoto, T.; Horibe, M.

    2005-09-15

    A direct derivation is given for the optimal mean fidelity of quantum state estimation of a d-dimensional unknown pure state with its N copies given as input, which was first obtained by Hayashi in terms of an infinite set of covariant positive operator valued measures (POVMs) and by Bruss and Macchiavello establishing a connection to optimal quantum cloning. An explicit condition on POVM measurement operators for optimal estimators is obtained, by which we construct optimal estimators with finite POVMs using exact quadratures on a hypersphere. These finite optimal estimators are not generally universal, where universality means the fidelity is independent of input states. However, any optimal estimator with finite POVM for M(>N) copies is universal if it is used for N copies as input.

  13. Design Optimization of a Hybrid Electric Vehicle Powertrain

    NASA Astrophysics Data System (ADS)

    Mangun, Firdause; Idres, Moumen; Abdullah, Kassim

    2017-03-01

    This paper presents an optimization work on hybrid electric vehicle (HEV) powertrain using Genetic Algorithm (GA) method. It focused on optimization of the parameters of powertrain components including supercapacitors to obtain maximum fuel economy. Vehicle modelling is based on Quasi-Static-Simulation (QSS) backward-facing approach. A combined city (FTP-75)-highway (HWFET) drive cycle is utilized for the design process. Seeking global optimum solution, GA was executed with different initial settings to obtain sets of optimal parameters. Starting from a benchmark HEV, optimization results in a smaller engine (2 l instead of 3 l) and a larger battery (15.66 kWh instead of 2.01 kWh). This leads to a reduction of 38.3% in fuel consumption and 30.5% in equivalent fuel consumption. Optimized parameters are also compared with actual values for HEV in the market.

  14. Optimizing Photosynthetic and Respiratory Parameters Based on the Seasonal Variation Pattern in Regional Net Ecosystem Productivity Obtained from Atmospheric Inversion

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Chen, J.; Zheng, X.; Jiang, F.; Zhang, S.; Ju, W.; Yuan, W.; Mo, G.

    2014-12-01

    In this study, we explore the feasibility of optimizing ecosystem photosynthetic and respiratory parameters from the seasonal variation pattern of the net carbon flux. An optimization scheme is proposed to estimate two key parameters (Vcmax and Q10) by exploiting the seasonal variation in the net ecosystem carbon flux retrieved by an atmospheric inversion system. This scheme is implemented to estimate Vcmax and Q10 of the Boreal Ecosystem Productivity Simulator (BEPS) to improve its NEP simulation in the Boreal North America (BNA) region. Simultaneously, in-situ NEE observations at six eddy covariance sites are used to evaluate the NEE simulations. The results show that the performance of the optimized BEPS is superior to that of BEPS with the default parameter values. These results have implications for using atmospheric CO2 data to optimize ecosystem parameters through atmospheric inversion or data assimilation techniques.
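
    For the respiration half of the parameter pair, the Q10 model is R(T) = R_ref * Q10^((T - T_ref)/10); the sketch below recovers (R_ref, Q10) from synthetic flux data by nonlinear least squares. The data, noise level and reference temperature are illustrative assumptions, not the BEPS/inversion setup.

      # Hypothetical sketch: estimating Q10 and a reference respiration rate by curve fitting.
      import numpy as np
      from scipy.optimize import curve_fit

      def respiration(T, r_ref, q10, t_ref=15.0):
          return r_ref * q10 ** ((T - t_ref) / 10.0)

      rng = np.random.default_rng(6)
      T = np.linspace(0, 30, 60)                                    # temperature (deg C)
      obs = respiration(T, 2.0, 2.3) + rng.normal(0, 0.1, T.size)   # synthetic flux data

      (r_ref, q10), _ = curve_fit(respiration, T, obs, p0=[1.0, 2.0])
      print(f"fitted r_ref = {r_ref:.2f}, Q10 = {q10:.2f}")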

  15. Optimization of parameter values for complex pulse sequences by simulated annealing: application to 3D MP-RAGE imaging of the brain.

    PubMed

    Epstein, F H; Mugler, J P; Brookeman, J R

    1994-02-01

    A number of pulse sequence techniques, including magnetization-prepared gradient echo (MP-GRE), segmented GRE, and hybrid RARE, employ a relatively large number of variable pulse sequence parameters and acquire the image data during a transient signal evolution. These sequences have recently been proposed and/or used for clinical applications in the brain, spine, liver, and coronary arteries. Thus, the need for a method of deriving optimal pulse sequence parameter values for this class of sequences now exists. Due to the complexity of these sequences, conventional optimization approaches, such as applying differential calculus to signal difference equations, are inadequate. We have developed a general framework for adapting the simulated annealing algorithm to pulse sequence parameter value optimization, and applied this framework to the specific case of optimizing the white matter-gray matter signal difference for a T1-weighted variable flip angle 3D MP-RAGE sequence. Using our algorithm, the values of 35 sequence parameters, including the magnetization-preparation RF pulse flip angle and delay time, 32 flip angles in the variable flip angle gradient-echo acquisition sequence, and the magnetization recovery time, were derived. Optimized 3D MP-RAGE achieved up to a 130% increase in white matter-gray matter signal difference compared with optimized 3D RF-spoiled FLASH with the same total acquisition time. The simulated annealing approach was effective at deriving optimal parameter values for a specific 3D MP-RAGE imaging objective, and may be useful for other imaging objectives and sequences in this general class.
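
    The simulated-annealing core that such a framework adapts fits in a few lines: perturb the parameter vector at random, always accept improvements, accept deteriorations with a temperature-dependent Metropolis probability, and cool gradually. In the sketch below, a generic 35-dimensional objective stands in for the simulated signal-difference calculation, and the cooling schedule is an assumption.

      # Hypothetical sketch of a simulated-annealing loop for sequence-parameter tuning.
      import numpy as np

      rng = np.random.default_rng(7)

      def objective(x):
          # Stand-in for the (negated) simulated WM-GM signal difference to minimize.
          return -np.exp(-np.sum((x - 0.6) ** 2)) * (1 + 0.1 * np.cos(8 * x).sum())

      x = rng.uniform(0, 1, size=35)      # e.g. 35 normalized sequence parameters
      f = objective(x)
      T = 1.0
      for step in range(20000):
          cand = np.clip(x + rng.normal(0, 0.05, x.size), 0, 1)   # random perturbation
          fc = objective(cand)
          if fc < f or rng.random() < np.exp(-(fc - f) / T):      # Metropolis acceptance
              x, f = cand, fc
          T *= 0.9997                                             # geometric cooling
      print("best objective found:", -f)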

  16. Value of information analysis optimizing future trial design from a pilot study on catheter securement devices.

    PubMed

    Tuffaha, Haitham W; Reynolds, Heather; Gordon, Louisa G; Rickard, Claire M; Scuffham, Paul A

    2014-12-01

    Value of information analysis has been proposed as an alternative to the standard hypothesis testing approach, which is based on type I and type II errors, in determining sample sizes for randomized clinical trials. However, in addition to sample size calculation, value of information analysis can optimize other aspects of research design such as possible comparator arms and alternative follow-up times, by considering trial designs that maximize the expected net benefit of research, which is the difference between the expected cost of the trial and the expected value of additional information. To apply value of information methods to the results of a pilot study on catheter securement devices to determine the optimal design of a future larger clinical trial. An economic evaluation was performed using data from a multi-arm randomized controlled pilot study comparing the efficacy of four types of catheter securement devices: standard polyurethane, tissue adhesive, bordered polyurethane and sutureless securement device. Probabilistic Monte Carlo simulation was used to characterize uncertainty surrounding the study results and to calculate the expected value of additional information. To guide the optimal future trial design, the expected costs and benefits of the alternative trial designs were estimated and compared. Analysis of the value of further information indicated that a randomized controlled trial on catheter securement devices is potentially worthwhile. Among the possible designs for the future trial, a four-arm study with 220 patients/arm would provide the highest expected net benefit corresponding to 130% return-on-investment. The initially considered design of 388 patients/arm, based on hypothesis testing calculations, would provide lower net benefit with return-on-investment of 79%. Cost-effectiveness and value of information analyses were based on the data from a single pilot trial which might affect the accuracy of our uncertainty estimation. Another

  17. Thermoelectric generator based on composites obtained by sintering of detonation nanodiamonds

    NASA Astrophysics Data System (ADS)

    Eidelman, E. D.; Meilakhs, A. P.; Semak, B. V.; Shakhov, F. M.

    2017-11-01

    A model of a thermoelectric generator is proposed in which composite materials obtained by sintering diamond nanoparticles are used as the main component. To increase the useful conversion of heat into electric current, it is proposed to use the effect of electron drag by ballistic phonons. To reduce ineffective heat spreading, it is proposed to use the thermal resistance of the boundaries between the graphite-like and diamond-like phases of the composite. An experimental confirmation of the existence of an optimal volume ratio between the graphite-like and diamond-like phases of the composite is predicted and obtained. The highest value of the thermoelectric coefficient achieved in the actual structure is 80 µV K-1 (a 20-fold increase compared with composites of non-optimal structure), with a thermal conductivity of 50 W m-1 K-1. These results were obtained at constant electrical conductivity. The combined influence of these two effects in the case of an ideal composite structure should result in an increase of the thermoelectric efficiency parameter by three orders of magnitude.

  18. Optimal power and efficiency of quantum Stirling heat engines

    NASA Astrophysics Data System (ADS)

    Yin, Yong; Chen, Lingen; Wu, Feng

    2017-01-01

    A quantum Stirling heat engine model in which imperfect regeneration and heat leakage are considered is established in this paper. A single particle confined in a one-dimensional infinite potential well is studied, and the system consists of countless replicas. Each particle is confined in its own potential well, whose occupation probabilities can be expressed by the thermal equilibrium Gibbs distributions. Based on the Schrödinger equation, the expressions of power output and efficiency for the engine are obtained. Effects of imperfect regeneration and heat leakage on the optimal performance are discussed. The optimal performance region and the optimal values of important parameters of the engine cycle are obtained. The results obtained can provide some guidelines for the design of a quantum Stirling heat engine.

  19. Singular values behaviour optimization in the diagnosis of feed misalignments in radioastronomical reflectors

    NASA Astrophysics Data System (ADS)

    Capozzoli, Amedeo; Curcio, Claudio; Liseno, Angelo; Savarese, Salvatore; Schipani, Pietro

    2016-07-01

    The communication presents an innovative method for the diagnosis of reflector antennas in radio astronomical applications. The approach is based on optimizing the number and the distribution of the far-field sampling points exploited to retrieve the antenna status in terms of feed misalignments, in order to drastically reduce the duration of the measurement process, minimize the effects of variable environmental conditions, and simplify the tracking of the source. The feed misplacement is modeled in terms of an aberration function of the aperture field. The relationship between the unknowns and the far-field pattern samples is linearized thanks to a Principal Component Analysis. The number and the position of the field samples are then determined by optimizing the singular values behaviour of the relevant operator.

  20. The Value Estimation of an HFGW Frequency Time Standard for Telecommunications Network Optimization

    NASA Astrophysics Data System (ADS)

    Harper, Colby; Stephenson, Gary

    2007-01-01

    The emerging technology of gravitational wave control is used to augment a communication system using a development roadmap suggested in Stephenson (2003) for applications emphasized in Baker (2005). In the present paper, consideration is given to the value of a High Frequency Gravitational Wave (HFGW) channel purely as a method of frequency and time reference distribution for use within conventional Radio Frequency (RF) telecommunications networks. Specifically, the native value of conventional telecommunications networks may be optimized by using an unperturbed frequency time standard (FTS) to (1) improve terminal navigation and Doppler estimation performance via improved time difference of arrival (TDOA) from a universal time reference, and (2) improve acquisition speed, coding efficiency, and dynamic bandwidth efficiency through the use of a universal frequency reference. A model utilizing a discounted cash flow technique provides an estimate of the additional value that HFGW FTS technology could bring to a mixed-technology HFGW/RF network. By applying a simple net present value analysis with supporting reference valuations to such a network, it is demonstrated that an HFGW FTS could create a sizable improvement within an otherwise conventional RF telecommunications network. Our conservative model establishes a low-side value estimate of approximately 50B USD net present value for an HFGW FTS service, with reasonable potential high-side values at significant multiples of this low-side floor.
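
    The discounted-cash-flow mechanics behind such an estimate are standard; a minimal sketch follows, with entirely hypothetical cash flows (the paper's actual valuation inputs are not reproduced here).

      # Hypothetical sketch of a net-present-value (NPV) calculation.
      def npv(rate, cash_flows):
          """cash_flows[t] is the net cash flow in year t (t = 0 is today)."""
          return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

      # Placeholder figures: an up-front investment followed by growing service revenues.
      flows = [-5e9] + [2e9 * 1.05 ** t for t in range(1, 21)]
      print(f"NPV at a 10% discount rate: {npv(0.10, flows) / 1e9:.1f} B USD")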

  1. Optimization of factors to obtain cassava starch films with improved mechanical properties

    NASA Astrophysics Data System (ADS)

    Monteiro, Mayra; Oliveira, Victor; Santos, Francisco; Barros Neto, Eduardo; Silva, Karyn; Silva, Rayane; Henrique, João; Chibério, Abimaelle

    2017-08-01

    In this study, the optimization of the factors that significantly influence the improvement of the mechanical properties of cassava starch films was investigated through a full 2³ factorial design. The factors analyzed were the cassava starch, glycerol and modified clay contents. A regression model was proposed from the factorial analysis, aiming to estimate the settings of the individual factors at the optimum state of the mechanical properties of the biofilm, using the following statistical tools: desirability function and response surface. The response variable that delimits the improvement of the mechanical property of the biofilm is the tensile strength, and such improvement is obtained by maximizing the response variable. The factorial analysis showed that the best combination of factor settings was 5 g of cassava starch, 10% glycerol and 5% modified clay, both percentages relative to the dry mass of starch used. In addition, the starch biofilm showing the lowest response contained 2 g of cassava starch, 0% modified clay and 30% glycerol, and was consequently considered the worst biofilm.
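
    A full 2³ factorial supports a main-effects-plus-interactions regression in coded (-1/+1) units; the sketch below fits such a model to stand-in tensile-strength responses. The coefficients and noise are synthetic assumptions, not the study's data.

      # Hypothetical sketch: regression on a full 2^3 factorial design in coded units.
      import numpy as np
      from itertools import product

      X = np.array(list(product([-1, 1], repeat=3)))   # starch, glycerol, clay levels
      rng = np.random.default_rng(8)
      # Stand-in tensile-strength responses for the 8 runs:
      y = 5 + 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.1, 8)

      A, B, C = X.T
      design = np.column_stack([np.ones(8), A, B, C, A * B, A * C, B * C])
      coef, *_ = np.linalg.lstsq(design, y, rcond=None)
      for name, b in zip(["b0", "A", "B", "C", "AB", "AC", "BC"], coef):
          print(f"{name}: {b:+.3f}")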

  2. Comparison of oxygen saturation values obtained from fingers on physically restrained or unrestrained sides of the body.

    PubMed

    Korhan, Esra Akin; Yönt, Gülendam Hakverdioğlu; Khorshid, Leyla

    2011-01-01

    The aim of this study was to compare semiexperimentally the pulse oximetry values obtained from a finger on restrained or unrestrained sides of the body. The pulse oximeter provides a noninvasive measurement of the oxygen saturation of hemoglobin in arterial blood. One of the procedures most frequently applied to patients in intensive care units is the application of physical restraint. Circulation problems are the most important complication in patients who are physically restrained. Evaluation of oxygen saturation from body parts in which circulation is impeded or has deteriorated can cause false results. The research sample consisted of 30 hospitalized patients who participated in the study voluntarily and who were concordant with the inclusion criteria of the study. Patient information and patient follow-up forms were used for data collection. Pulse oximetry values were measured simultaneously using OxiMax Nellcor finger sensors from fingers on the restrained and unrestrained sides of the body. Numeric and percentile distributions were used in evaluating the sociodemographic properties of patients. A significant difference was found between the oxygen saturation values obtained from a finger of an arm that had been physically restrained and a finger of an arm that had not been physically restrained. The mean oxygen saturation value measured from a finger of an arm that had been physically restrained was found to be 93.40 (SD, 2.97), and the mean oxygen saturation value measured from a finger of an arm that had not been physically restrained was found to be 95.53 (SD, 2.38). The results of this study indicate that nurses should use a finger of an arm that is not physically restrained when evaluating oxygen saturation values to evaluate them correctly.

  3. Optimization of ultrasound-assisted extraction to obtain mycosterols from Agaricus bisporus L. by response surface methodology and comparison with conventional Soxhlet extraction.

    PubMed

    Heleno, Sandrina A; Diz, Patrícia; Prieto, M A; Barros, Lillian; Rodrigues, Alírio; Barreiro, Maria Filomena; Ferreira, Isabel C F R

    2016-04-15

    Ergosterol, a molecule with high commercial value, is the most abundant mycosterol in Agaricus bisporus L. To replace common conventional extraction techniques (e.g. Soxhlet), the present study reports the optimal ultrasound-assisted extraction conditions for ergosterol. After preliminary tests, the results showed that solvent, time and ultrasound power altered the extraction efficiency. Using response surface methodology, models were developed to investigate the favourable experimental conditions that maximize the extraction efficiency. All statistical criteria demonstrated the validity of the proposed models. Overall, ultrasound-assisted extraction with ethanol at 375 W for 15 min proved to be as efficient as Soxhlet extraction, yielding 671.5 ± 0.5 mg ergosterol/100 g dw. However, with n-hexane, extracts of higher purity (mg ergosterol/g extract) were obtained. Finally, removal of the saponification step was proposed, which simplifies the extraction process and makes it more feasible for industrial transfer. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Comparison of internal dose estimates obtained using organ-level, voxel S value, and Monte Carlo techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grimes, Joshua, E-mail: grimes.joshua@mayo.edu; Celler, Anna

    2014-09-15

    Purpose: The authors’ objective was to compare internal dose estimates obtained using the Organ Level Dose Assessment with Exponential Modeling (OLINDA/EXM) software, the voxel S value technique, and Monte Carlo simulation. Monte Carlo dose estimates were used as the reference standard to assess the impact of patient-specific anatomy on the final dose estimate. Methods: Six patients injected with 99mTc-hydrazinonicotinamide-Tyr3-octreotide were included in this study. A hybrid planar/SPECT imaging protocol was used to estimate 99mTc time-integrated activity coefficients (TIACs) for kidneys, liver, spleen, and tumors. Additionally, TIACs were predicted for 131I, 177Lu, and 90Y assuming the same biological half-lives as the 99mTc labeled tracer. The TIACs were used as input for OLINDA/EXM for organ-level dose calculation, and voxel-level dosimetry was performed using the voxel S value method and Monte Carlo simulation. Dose estimates for 99mTc, 131I, 177Lu, and 90Y distributions were evaluated by comparing (i) organ-level S values corresponding to each method, (ii) total tumor and organ doses, (iii) differences in right and left kidney doses, and (iv) voxelized dose distributions calculated by Monte Carlo and the voxel S value technique. Results: The S values for all investigated radionuclides used by OLINDA/EXM and the corresponding patient-specific S values calculated by Monte Carlo agreed within 2.3% on average for self-irradiation, and differed by as much as 105% for cross-organ irradiation. Total organ doses calculated by OLINDA/EXM and the voxel S value technique agreed with Monte Carlo results within approximately ±7%. Differences between right and left kidney doses determined by Monte Carlo were as high as 73%. Comparison of the Monte Carlo and voxel S value dose distributions showed that each method produced similar dose volume histograms with a minimum dose covering 90% of the volume
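
    At its core, the voxel S value technique is a convolution: the absorbed-dose map is the time-integrated activity map convolved with a radionuclide- and voxel-size-specific kernel of S values. The sketch below shows this structure with a toy kernel; real dosimetry would use tabulated voxel S values.

      # Hypothetical sketch of voxel S value dosimetry as a 3D convolution.
      import numpy as np
      from scipy.signal import fftconvolve

      rng = np.random.default_rng(9)
      tia = np.zeros((32, 32, 32))                                  # time-integrated activity
      tia[12:20, 12:20, 12:20] = rng.uniform(0.5, 1.0, (8, 8, 8))   # a toy "tumor" region

      # Toy isotropic kernel standing in for tabulated voxel S values (dose per decay):
      g = np.indices((9, 9, 9)) - 4
      r2 = np.sum(g ** 2, axis=0).astype(float)
      kernel = 1.0 / (1.0 + r2)          # placeholder fall-off, not real S values
      kernel /= kernel.sum()

      dose = fftconvolve(tia, kernel, mode="same")
      print("max voxel dose (arbitrary units):", dose.max())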

  5. 7 CFR 356.4 - Property valued at $10,000 or less; notice of seizure administrative action to obtain forfeiture.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Property valued at $10,000 or less; notice of seizure... PROCEDURES § 356.4 Property valued at $10,000 or less; notice of seizure administrative action to obtain... notice of seizure and proposed forfeiture as provided in paragraph (c)(1) of this section, by posting for...

  6. [Optimize preparation of compound licorice microemulsion with D-optimal design].

    PubMed

    Ma, Shu-Wei; Wang, Yong-Jie; Chen, Cheng; Qiu, Yue; Wu, Qing

    2018-03-01

    In order to increase the solubility of the essential oil in the compound licorice microemulsion and improve the efficacy of the decoction for treating chronic eczema, this experiment prepared the decoction as a microemulsion. The essential oil was used as the oil phase of the microemulsion and the extract was used as the water phase. The microemulsion region and the maximum water capacity ratio were obtained by plotting a pseudo-ternary phase diagram, in order to determine the appropriate types of surfactant and cosurfactant and the Km value, the mass ratio between surfactant and cosurfactant. With particle size and skin retention of active ingredients as the indices, the microemulsion prescription was optimized by the D-optimal design method, and the in vitro release behavior of the optimized prescription was investigated. The results showed that the microemulsion was optimal with Tween-80 as the surfactant and anhydrous ethanol as the cosurfactant. When the Km value was 1, the area of the microemulsion region was largest, while an extract concentration of 0.5 g·mL⁻¹ had the lowest effect on the particle size distribution of the microemulsion. The final optimized formulation was as follows: 9.4% Tween-80, 9.4% anhydrous ethanol, 1.0% peppermint oil and 80.2% of 0.5 g·mL⁻¹ extract. The microemulsion prepared under these conditions had a low viscosity, good stability and high skin retention of the drug; the in vitro release experiment showed that the microemulsion had a sustained-release effect on glycyrrhizic acid and liquiritin, basically achieving the expected purpose of the project. Copyright© by the Chinese Pharmaceutical Association.

  7. Evolutionary optimization methods for accelerator design

    NASA Astrophysics Data System (ADS)

    Poklonskiy, Alexey A.

    Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite the growing efficiency of optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed family of optimization methods. They possess many attractive features such as: ease of implementation, modest requirements on the objective function, good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe in detail GATool, the evolutionary algorithm and software package used in this work. Then we use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package for the COSY Infinity scientific computing package. We design the model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and methods commonly used to overcome them. We describe REPA, a new constrained optimization method based on repairing, in detail, including the properties of its two repairing techniques: REFIND and REPROPT. We assess REPROPT's performance on the standard constrained

  8. Extraction Optimization for Obtaining Artemisia capillaris Extract with High Anti-Inflammatory Activity in RAW 264.7 Macrophage Cells

    PubMed Central

    Jang, Mi; Jeong, Seung-Weon; Kim, Bum-Keun; Kim, Jong-Chan

    2015-01-01

    Plant extracts have been used as herbal medicines to treat a wide variety of human diseases. We used response surface methodology (RSM) to optimize the Artemisia capillaris Thunb. extraction parameters (extraction temperature, extraction time, and ethanol concentration) for obtaining an extract with high anti-inflammatory activity at the cellular level. The optimum ranges for the extraction parameters were predicted by superimposing 4-dimensional response surface plots of the lipopolysaccharide- (LPS-) induced PGE2 and NO production and by cytotoxicity of A. capillaris Thunb. extracts. The ranges of extraction conditions used for determining the optimal conditions were extraction temperatures of 57–65°C, ethanol concentrations of 45–57%, and extraction times of 5.5–6.8 h. On the basis of the results, a model with a central composite design was considered to be accurate and reliable for predicting the anti-inflammation activity of extracts at the cellular level. These approaches can provide a logical starting point for developing novel anti-inflammatory substances from natural products and will be helpful for the full utilization of A. capillaris Thunb. The crude extract obtained can be used in some A. capillaris Thunb.-related health care products. PMID:26075271

  9. Weighted mining of massive collections of p-values by convex optimization.

    PubMed

    Dobriban, Edgar

    2018-06-01

    Researchers in data-rich disciplines (think of computational genomics and observational cosmology) often wish to mine large bodies of p-values looking for significant effects, while controlling the false discovery rate or family-wise error rate. Increasingly, researchers also wish to prioritize certain hypotheses, for example, those thought to have larger effect sizes, by upweighting, and to impose constraints on the underlying mining, such as monotonicity along a certain sequence. We introduce Princessp, a principled method for performing weighted multiple testing by constrained convex optimization. Our method elegantly allows one to prioritize certain hypotheses through upweighting and to discount others through downweighting, while constraining the underlying weights involved in the mining process. When the p-values derive from monotone likelihood ratio families such as the Gaussian means model, the new method allows exact solution of an important optimal weighting problem previously thought to be non-convex and computationally infeasible. Our method scales to massive data set sizes. We illustrate the applications of Princessp on a series of standard genomics data sets and offer comparisons with several previous 'standard' methods. Princessp offers both ease of operation and the ability to scale to extremely large problem sizes. The method is available as open-source software from github.com/dobriban/pvalue_weighting_matlab (accessed 11 October 2017).
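
    The basic mechanics of weighted multiple testing can be shown with a much simpler rule than the paper's convex program: a weighted Bonferroni procedure, in which weights averaging one raise the rejection threshold for prioritized hypotheses and lower it for the rest. The weights and simulated p-values below are illustrative assumptions, not the Princessp method itself.

      # Hypothetical sketch: weighted Bonferroni testing with prioritized hypotheses.
      import numpy as np

      rng = np.random.default_rng(10)
      m = 1000
      p = np.concatenate([rng.uniform(size=900),            # true nulls
                          rng.beta(0.1, 1.0, size=100)])    # non-nulls (small p-values)

      w = np.ones(m)
      w[900:] = 5.0                # prior belief: the last 100 hypotheses have effects
      w *= m / w.sum()             # normalize so the weights average to 1 (FWER control)

      alpha = 0.05
      rej_plain = np.sum(p <= alpha / m)
      rej_weighted = np.sum(p <= alpha * w / m)             # weighted Bonferroni rule
      print("unweighted rejections:", rej_plain, "| weighted:", rej_weighted)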

  10. Optimization of contoured hypersonic scramjet inlets with a least-squares parabolized Navier-Stokes procedure

    NASA Technical Reports Server (NTRS)

    Korte, J. J.; Auslender, A. H.

    1993-01-01

    A new optimization procedure, in which a parabolized Navier-Stokes solver is coupled with a non-linear least-squares optimization algorithm, is applied to the design of a Mach 14, laminar two-dimensional hypersonic subscale flight inlet with an internal contraction ratio of 15:1 and a length-to-throat half-height ratio of 150:1. An automated numerical search of multiple geometric wall contours, which are defined by polynomial splines, results in an optimal geometry that yields the maximum total-pressure recovery for the compression process. Optimal inlet geometry is obtained for both inviscid and viscous flows, with the assumption that the gas is either calorically or thermally perfect. The analysis with a calorically perfect gas results in an optimized inviscid inlet design that is defined by two cubic splines and yields a mass-weighted total-pressure recovery of 0.787, which is a 23% improvement compared with the optimized shock-canceled two-ramp inlet design. Similarly, the design procedure obtains the optimized contour for a viscous calorically perfect gas to yield a mass-weighted total-pressure recovery value of 0.749. Additionally, an optimized contour for a viscous thermally perfect gas is obtained to yield a mass-weighted total-pressure recovery value of 0.768. The design methodology incorporates both complex fluid dynamic physics and optimal search techniques without an excessive compromise of computational speed; hence, this methodology is a practical technique that is applicable to optimal inlet design procedures.

  11. Open space preservation, property value, and optimal spatial configuration

    Treesearch

    Yong Jiang; Stephen K. Swallow

    2007-01-01

    The public has increasingly demonstrated a strong support for open space preservation. How to finance the socially efficient level of open space with the optimal spatial structure is of high policy relevance to local governments. In this study, we developed a spatially explicit open space model to help identify the socially optimal amount and optimal spatial...

  12. Application of D-optimal experimental design method to optimize the formulation of O/W cosmetic emulsions.

    PubMed

    Djuris, J; Vasiljevic, D; Jokic, S; Ibric, S

    2014-02-01

    This study investigates the application of D-optimal mixture experimental design in the optimization of O/W cosmetic emulsions. Cetearyl glucoside was used as a natural, biodegradable non-ionic emulsifier at a relatively low concentration (1%), and a mixture of co-emulsifiers (stearic acid, cetyl alcohol, stearyl alcohol and glyceryl stearate) was used to stabilize the formulations. To determine the optimal composition of the co-emulsifier mixture, a D-optimal mixture experimental design was used. The prepared emulsions were characterized by rheological measurements, a centrifugation test, and specific conductivity and pH value measurements. All prepared samples appeared as white and homogeneous creams, except for one homogeneous and viscous lotion co-stabilized by stearic acid alone. Centrifugation testing revealed some phase separation only in the case of the sample co-stabilized by glyceryl stearate alone. The obtained pH values indicated that all samples exhibited mildly acidic values acceptable for cosmetic preparations. The specific conductivity values are attributed to multiple-phase O/W emulsions with high percentages of fixed water. The rheological measurements showed that the investigated samples exhibited non-Newtonian thixotropic behaviour. To determine the influence of each co-emulsifier on the emulsion properties, the obtained results were evaluated by means of statistical analysis (ANOVA test). On the basis of a comparison of statistical parameters for each of the studied responses, the mixture reduced quadratic model was selected over the linear model, implying that interactions between co-emulsifiers play a significant role in the overall influence of co-emulsifiers on emulsion properties. Glyceryl stearate was found to be the dominant co-emulsifier affecting emulsion properties. Interactions between glyceryl stearate and the other co-emulsifiers were also found to significantly influence emulsion properties. These findings are especially important

  13. Dynamic optimization and adaptive controller design

    NASA Astrophysics Data System (ADS)

    Inamdar, S. R.

    2010-10-01

    In this work I present a new type of controller, an adaptive tracking controller that employs dynamic optimization of the current value of the controller action, for the temperature control of a nonisothermal continuously stirred tank reactor (CSTR). We begin with a two-state model of the nonisothermal CSTR, comprising the mass and heat balance equations, and then add the cooling system dynamics to eliminate input multiplicity. The initial design value is obtained from the local stability of the steady states, where the approach temperature for the cooling action is specified both as a steady state and as a design specification. Later we make a correction in the dynamics: the material balance is manipulated to use the feed concentration as a system parameter, as an adaptive control measure to avoid actuator saturation in the main control loop. The analysis leading to the design of the dynamic-optimization-based parameter-adaptive controller is presented. An important component of this mathematical framework is reference trajectory generation to form an adaptive control measure.

  14. Construction Performance Optimization toward Green Building Premium Cost Based on Greenship Rating Tools Assessment with Value Engineering Method

    NASA Astrophysics Data System (ADS)

    Latief, Yusuf; Berawi, Mohammed Ali; Basten, Van; Riswanto; Budiman, Rachmat

    2017-07-01

    The green building concept has become important in the current building life cycle as a way to mitigate environmental issues. The purpose of this paper is to optimize building construction performance with respect to the green building premium cost, achieving the green building rating target while optimizing life cycle cost. This study therefore helps building stakeholders determine the building fixtures needed to achieve a green building certification target. Empirically, the paper collects data on green buildings in the Indonesian construction industry, such as green building fixtures, initial cost, operational and maintenance cost, and certification score achievement. The green building fixtures were then optimized with the value engineering method, based on building function and cost aspects. Findings indicate that construction performance optimization affected green building achievement by increasing energy and water efficiency factors and managing life cycle cost effectively, especially through the chosen green building fixtures.

  15. Assessing the applicability of WRF optimal parameters under the different precipitation simulations in the Greater Beijing Area

    NASA Astrophysics Data System (ADS)

    Di, Zhenhua; Duan, Qingyun; Wang, Chen; Ye, Aizhong; Miao, Chiyuan; Gong, Wei

    2018-03-01

    Forecasting skills of complex weather and climate models have been improved by tuning the sensitive parameters that exert the greatest impact on simulated results, using increasingly effective optimization methods. However, whether the optimal parameter values still work when the model simulation conditions vary is a scientific problem deserving of study. In this study, a highly effective optimization method, adaptive surrogate model-based optimization (ASMO), was first used to tune nine sensitive parameters from four physical parameterization schemes of the Weather Research and Forecasting (WRF) model to obtain better summer precipitation forecasting over the Greater Beijing Area in China. Then, to assess the applicability of the optimal parameter values, simulation results from the WRF model with default and optimal parameter values were compared across precipitation events, boundary conditions, spatial scales, and physical processes in the Greater Beijing Area. Summer precipitation events from 6 years were used to calibrate and evaluate the optimal parameter values of the WRF model. Three boundary datasets and two spatial resolutions were adopted to evaluate the superiority of the calibrated optimal parameters over the default parameters under WRF simulations with different boundary conditions and spatial resolutions, respectively. Physical interpretations of the optimal parameters, indicating how they improve precipitation simulation results, were also examined. All the results showed that the optimal parameters obtained by ASMO are superior to the default parameters for WRF simulations of summer precipitation in the Greater Beijing Area because the optimal parameters are not constrained by specific precipitation events, boundary conditions, and spatial resolutions. The optimal values of the nine parameters were determined from 127 parameter samples using the ASMO method, which shows that the ASMO method is highly efficient for optimizing WRF

  16. Optimal regionalization of extreme value distributions for flood estimation

    NASA Astrophysics Data System (ADS)

    Asadi, Peiman; Engelke, Sebastian; Davison, Anthony C.

    2018-01-01

    Regionalization methods have long been used to estimate high return levels of river discharges at ungauged locations on a river network. In these methods, discharge measurements from a homogeneous group of similar, gauged stations are used to estimate high quantiles at a target location that has no observations. The similarity of this group to the ungauged location is quantified by a hydrological distance that measures differences in physical and meteorological catchment attributes. We develop a statistical method for estimating high return levels based on regionalizing the parameters of a generalized extreme value distribution. The group of stations is chosen by optimizing over the attribute weights of the hydrological distance, ensuring similarity and in-group homogeneity. Our method is applied to discharge data from the Rhine basin in Switzerland, and its performance at ungauged locations is compared to that of other regionalization methods. For gauged locations we show how our approach reduces the estimation uncertainty for long return periods by combining local measurements with those from the chosen group.
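
    As a worked illustration of the underlying calculation, the sketch below fits a generalized extreme value distribution to annual maximum discharges and reads off a high return level; the synthetic data and the 100-year return period are assumptions of the example, not values from the paper.

        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(1)
        # Synthetic annual maximum discharges [m^3/s] standing in for gauged data.
        annual_maxima = genextreme.rvs(c=-0.1, loc=300.0, scale=80.0,
                                       size=60, random_state=rng)

        # Fit the GEV by maximum likelihood (scipy's c is minus the usual shape xi).
        c, loc, scale = genextreme.fit(annual_maxima)

        # T-year return level = quantile with annual exceedance probability 1/T.
        T = 100.0
        return_level = genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
        print(f"estimated {T:.0f}-year return level: {return_level:.1f} m^3/s")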

  17. On the Performance of Linear Decreasing Inertia Weight Particle Swarm Optimization for Global Optimization

    PubMed Central

    Arasomwan, Martins Akugbe; Adewumi, Aderemi Oluyinka

    2013-01-01

    The linear decreasing inertia weight (LDIW) strategy was introduced to improve the performance of the original particle swarm optimization (PSO). However, the LDIW-PSO algorithm is known to suffer from premature convergence in solving complex (multipeak) optimization problems, because particles lack enough momentum for exploitation as the algorithm approaches its terminal point. Researchers have tried to address this shortcoming by modifying LDIW-PSO or proposing new PSO variants, some of which have been claimed to outperform LDIW-PSO. The major goal of this paper is to establish experimentally that LDIW-PSO is very efficient if its parameters are properly set. First, an experiment was conducted to acquire a percentage value of the search space limits from which to compute the particle velocity limits in LDIW-PSO, based on commonly used benchmark global optimization problems. Second, using the experimentally obtained values, five well-known benchmark optimization problems were used to show the outstanding performance of LDIW-PSO over some of its competitors which had previously claimed superiority over it. Two other recent PSO variants with different inertia weight strategies were also compared with LDIW-PSO, with the latter outperforming both in the simulation experiments conducted. PMID:24324383
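
    A minimal sketch of PSO with the linear decreasing inertia weight is shown below; the weight schedule and velocity clamping as a percentage of the search range follow the general LDIW idea, while the specific constants (w from 0.9 to 0.4, clamp at 15% of the range, the sphere test function) are illustrative assumptions, not the paper's tuned settings.

        import numpy as np

        rng = np.random.default_rng(2)

        def sphere(x):                         # simple benchmark objective
            return np.sum(x * x, axis=-1)

        dim, n, iters = 10, 30, 500
        lo, hi = -5.12, 5.12
        v_max = 0.15 * (hi - lo)               # velocity limit as a % of search range
        w_max, w_min, c1, c2 = 0.9, 0.4, 2.0, 2.0

        x = rng.uniform(lo, hi, (n, dim))      # particle positions
        v = rng.uniform(-v_max, v_max, (n, dim))
        pbest, pbest_f = x.copy(), sphere(x)
        gbest = pbest[np.argmin(pbest_f)]

        for t in range(iters):
            w = w_max - (w_max - w_min) * t / (iters - 1)   # linear decreasing inertia
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            v = np.clip(v, -v_max, v_max)
            x = np.clip(x + v, lo, hi)
            f = sphere(x)
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            gbest = pbest[np.argmin(pbest_f)]

        print(f"best value found: {pbest_f.min():.3e}")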

  18. Optimization of composite flour biscuits by mixture response surface methodology.

    PubMed

    Okpala, Laura C; Okoli, Eric C

    2013-08-01

    Biscuits were produced from blends of pigeon pea, sorghum and cocoyam flours. The study was carried out using mixture response surface methodology as the optimization technique. Using the simplex centroid design, 10 formulations were obtained. The protein and sensory quality of the biscuits were analyzed. The sensory attributes studied were appearance, taste, texture, crispness and general acceptability, while the protein quality indices were biological value and net protein utilization. The results showed that while the addition of pigeon pea improved the protein quality, it reduced the sensory ratings for all the sensory attributes with the exception of appearance. Some of the biscuits had sensory ratings that were not significantly different (p > 0.05) from biscuits made with wheat. Rat feeding experiments indicated that the biological value and net protein utilization values obtained for most of the biscuits were above minimum recommended values. Optimization suggested biscuits containing 75.30% sorghum, 0% pigeon pea and 24.70% cocoyam flours as the best proportion of these components. This sample received good scores for the sensory attributes.
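
    To make the mixture-design step concrete, the sketch below fits a quadratic Scheffé mixture model to responses from a three-component simplex-centroid design and picks the best blend on a grid of the simplex; the response values are invented for illustration and are not the paper's data.

        import numpy as np
        from itertools import product

        # Simplex-centroid design for 3 components (sorghum, pigeon pea, cocoyam):
        # 3 pure blends, 3 binary midpoints, 1 centroid.
        D = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                      [.5, .5, 0], [.5, 0, .5], [0, .5, .5],
                      [1/3, 1/3, 1/3]])
        y = np.array([6.8, 4.9, 5.6, 6.1, 6.9, 5.2, 6.0])  # invented sensory scores

        def scheffe_quadratic(X):
            x1, x2, x3 = X.T
            # Quadratic Scheffe model has no intercept: b1*x1 + ... + b12*x1*x2 + ...
            return np.column_stack([x1, x2, x3, x1*x2, x1*x3, x2*x3])

        beta, *_ = np.linalg.lstsq(scheffe_quadratic(D), y, rcond=None)

        # Search the simplex on a fine grid for the blend maximizing the response.
        grid = np.array([(a, b, 1 - a - b)
                         for a, b in product(np.linspace(0, 1, 101), repeat=2)
                         if a + b <= 1])
        pred = scheffe_quadratic(grid) @ beta
        best = grid[np.argmax(pred)]
        print(f"predicted best blend: {best.round(3)} (score {pred.max():.2f})")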

  19. Time-optimal trajectory planning for underactuated spacecraft using a hybrid particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Zhuang, Yufei; Huang, Haibin

    2014-02-01

    A hybrid algorithm combining the particle swarm optimization (PSO) algorithm with the Legendre pseudospectral method (LPM) is proposed for solving the time-optimal trajectory planning problem of underactuated spacecraft. At the beginning of the search, an initialization generator is constructed with the PSO algorithm because of its strong global searching ability and robustness to random initial values; however, the PSO algorithm's convergence rate near the global optimum is slow. Therefore, when the change in the fitness function becomes smaller than a predefined value, the search is switched to the LPM to accelerate the process. With the solutions obtained by the PSO algorithm serving as a set of proper initial guesses, the hybrid algorithm can find a global optimum more quickly and accurately. Results of 200 Monte Carlo simulations demonstrate that the proposed hybrid PSO-LPM algorithm has greater advantages in terms of global searching capability and convergence rate than either the PSO algorithm or the LPM alone. Moreover, the PSO-LPM algorithm is also robust to random initial values.

  20. Optimal reproducibility of gated sestamibi and thallium myocardial perfusion study left ventricular ejection fractions obtained on a solid-state CZT cardiac camera requires operator input.

    PubMed

    Cherk, Martin H; Ky, Jason; Yap, Kenneth S K; Campbell, Patrina; McGrath, Catherine; Bailey, Michael; Kalff, Victor

    2012-08-01

    To evaluate the reproducibility of serial re-acquisitions of gated Tl-201 and Tc-99m sestamibi left ventricular ejection fraction (LVEF) measurements obtained on a new generation solid-state cardiac camera system during myocardial perfusion imaging, and the importance of manual operator optimization of left ventricular wall tracking. Resting blinded automated (auto) and manual operator optimized (opt) LVEF measurements were obtained using ECT toolbox (ECT) and Cedars-Sinai QGS software in two separate cohorts of 55 Tc-99m sestamibi (MIBI) and 50 thallium (Tl-201) myocardial perfusion studies (MPS) acquired in both supine and prone positions on a cadmium zinc telluride (CZT) solid-state camera system. Resting supine and prone automated LVEF measurements were similarly obtained in a further separate cohort of 52 gated cardiac blood pool scans (GCBPS) for validation of methodology and comparison. Bland-Altman, chi-squared and Levene's equality-of-variance tests were used, as appropriate, to analyse the resulting data comparisons. For all radiotracer and software combinations, manual checking and optimization of valve planes (+/- centre radius with ECT software) resulted in significant improvement in MPS LVEF reproducibility that approached that of planar GCBPS. No difference was demonstrated between optimized MIBI/Tl-201 QGS and planar GCBPS LVEF reproducibility (P = .17 and P = .48, respectively). ECT required significantly more manual optimization than QGS software in both supine and prone positions, independent of the radiotracer used (P < .02). Reproducibility of gated sestamibi and Tl-201 LVEF measurements obtained during myocardial perfusion imaging with ECT toolbox or QGS software packages using a new generation solid-state cardiac camera with improved image quality approaches that of planar GCBPS; however, it requires visual quality control and operator optimization of left ventricular wall tracking for best results. Using this superior cardiac technology, Tl-201
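
    For readers unfamiliar with the agreement statistic used in such reproducibility studies, the sketch below computes a Bland-Altman bias and 95% limits of agreement for paired LVEF measurements; the paired values are synthetic and purely illustrative.

        import numpy as np

        rng = np.random.default_rng(3)
        # Synthetic paired LVEF measurements (%) from two serial acquisitions.
        lvef_a = rng.normal(60.0, 8.0, 50)
        lvef_b = lvef_a + rng.normal(0.5, 2.0, 50)   # re-acquisition with small bias

        diff = lvef_b - lvef_a
        bias = diff.mean()
        sd = diff.std(ddof=1)
        loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement

        print(f"bias: {bias:.2f}%  limits of agreement: "
              f"({loa[0]:.2f}%, {loa[1]:.2f}%)")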

  1. Prospective Study of Optimal Obesity Index Cut-Off Values for Predicting Incidence of Hypertension in 18–65-Year-Old Chinese Adults

    PubMed Central

    Ren, Qian; Su, Chang; Wang, Huijun; Wang, Zhihong; Du, Wenwen; Zhang, Bing

    2016-01-01

    Background Overweight and obesity increase the risk of elevated blood pressure; most of the studies that serve as a background for the debates on the optimal obesity index cut-off values used cross-sectional samples. The aim of this study was to determine the cut-off values of anthropometric markers for detecting hypertension in Chinese adults with data from a prospective cohort. Methods This study determines the best cut-off values for the obesity indices that represent elevated incidence of hypertension in 18–65-year-old Chinese adults using data from the China Health and Nutrition Survey (CHNS) 2006–2011 prospective cohort. Individual body mass index (BMI), waist circumference (WC), waist:hip ratio (WHR) and waist:stature ratio (WSR) were assessed. ROC curves for these obesity indices were plotted to estimate and compare their usefulness, and the values corresponding to the maximum of the Youden index were considered the optimal cut-off values. Results Five-year cumulative incidences of hypertension were 21.5% (95% CI: 19.4–23.6) in men and 16.5% (95% CI: 14.7–18.2) in women, and there was a significant trend of increased incidence of hypertension with an increase in BMI, WC, WHR or WSR (P for trend < 0.001) in both men and women. The Youden index indicated that the optimal BMI, WC, WHR, and WSR cut-off values were 23.53 kg/m2, 83.7 cm, 0.90, and 0.51 among men. The optimal BMI, WC, WHR, and WSR cut-off values were 24.25 kg/m2, 79.9 cm, 0.85 and 0.52 among women. Conclusions Our study supported the hypothesis that the cut-off values for BMI and WC recently developed by the Working Group on Obesity in China (WGOC), the cut-off values for WHR developed by the World Health Organization (WHO), and a global WSR cut-off value of 0.50 may be the appropriate upper limits for Chinese adults. PMID:26934390
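
    The Youden-index cutoff selection used in studies like this one is easy to reproduce. The sketch below finds the BMI cutoff maximizing J = sensitivity + specificity - 1 on synthetic data; the data generation, not the method, is invented here.

        import numpy as np
        from sklearn.metrics import roc_curve

        rng = np.random.default_rng(4)
        # Synthetic BMI values and 5-year incident hypertension labels.
        n = 2000
        bmi = rng.normal(23.5, 3.0, n)
        p = 1.0 / (1.0 + np.exp(-(bmi - 24.0)))      # risk rises with BMI
        hypertension = rng.random(n) < 0.3 * p

        fpr, tpr, thresholds = roc_curve(hypertension, bmi)
        youden = tpr - fpr                            # J = sens + spec - 1
        best = np.argmax(youden)
        print(f"optimal BMI cutoff: {thresholds[best]:.2f} kg/m^2 "
              f"(J = {youden[best]:.2f})")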

  2. Gadolinium sulfate modified by formate to obtain optimized magneto-caloric effect.

    PubMed

    Xu, Long-Yang; Zhao, Jiong-Peng; Liu, Ting; Liu, Fu-Chen

    2015-06-01

    Three new Gd(III)-based coordination polymers, [Gd2(C2H6SO)(SO4)3(H2O)2]n (1), {[Gd4(HCOO)2(SO4)5(H2O)6]·H2O}n (2), and [Gd(HCOO)(SO4)(H2O)]n (3), were obtained by modifying gadolinium sulfate. With a gradual increase of the volume ratio of HCOOH to DMSO in the synthesis, the formate anions begin to coordinate with the metal centers; this increases the coordination numbers of the sulfate anion and decreases the contents of water and DMSO molecules in the target complexes. Accordingly, the spin densities both per mass and per volume were enhanced step by step, which is beneficial for the magneto-caloric effect (MCE). Magnetic studies reveal that the more formate anions are present, the larger the negative value of the magnetic entropy change (-ΔSm). Complex 3 exhibits the largest -ΔSm = 49.91 J kg(-1) K(-1) (189.51 mJ cm(-3) K(-1)) for T = 2 K and ΔH = 7 T among the three new complexes.

  3. What is the optimal cutoff value of the axis-line-angle technique for evaluating trunk imbalance in coronal plane?

    PubMed

    Zhang, Rui-Fang; Fu, Yu-Chuan; Lu, Yi; Zhang, Xiao-Xia; Hu, Yu-Min; Zhou, Yong-Jin; Tian, Nai-Feng; He, Jia-Wei; Yan, Zhi-Han

    2017-02-01

    Accurately evaluating the extent of trunk imbalance in the coronal plane is important for patients before and after treatment. We previously introduced a new method, the axis-line-angle technique (ALAT), for evaluating coronal trunk imbalance with excellent intra-observer and interobserver reliability, and radiologists and surgeons were encouraged to use it in clinical practice. However, the optimal cutoff value of the ALAT for determining the extent of coronal trunk imbalance has not yet been calculated. The purpose of this study was to identify the cutoff value of the ALAT that best predicts a positive measurement point for assessing coronal balance or imbalance. A retrospective study at a university-affiliated hospital was carried out. A total of 130 patients with C7-central sacral vertical line (CSVL) >0 mm and aged 10-18 years were recruited from September 2013 to December 2014. Data were analyzed to determine the optimal cutoff value of the ALAT measurement. The C7-CSVL and ALAT measurements were each conducted twice on plain film within a 2-week interval by two radiologists. The optimal cutoff value of the ALAT was analyzed via a receiver operating characteristic (ROC) curve. Comparisons between the C7-CSVL and ALAT measurements for evaluating trunk imbalance were performed with the chi-square test. The kappa agreement coefficient was used to test the intra-observer and interobserver agreement of C7-CSVL and ALAT. The ROC curve area for the ALAT was 0.82 (95% confidence interval: 0.753-0.894, p<.001). The maximum Youden index was 0.51, and the corresponding cutoff point was 2.59°. No statistical difference was found between the C7-CSVL and ALAT measurements for evaluating trunk imbalance (p>.05). Intra-observer agreement values for the C7-CSVL measurements by observers 1 and 2 were 0.79 and 0.91 (p<.001), respectively, whereas intra-observer agreement values for the ALAT measurements were both 0.89 by observers 1

  4. Optimization of cutting parameters for machining time in turning process

    NASA Astrophysics Data System (ADS)

    Mavliutov, A. R.; Zlotnikov, E. G.

    2018-03-01

    This paper describes the most effective methods for nonlinear constrained optimization of cutting parameters in the turning process. Among them are the Linearization Programming Method with the Dual-Simplex algorithm, the Interior Point method, and the Augmented Lagrangian Genetic Algorithm (ALGA). Each of them is tested on an actual example: the minimization of production rate in the turning process. The computation was conducted in the MATLAB environment. The comparative results obtained from applying these methods show that the optimal values of the linearized objective and the original function are the same. ALGA gives sufficiently accurate values; however, when the algorithm uses the hybrid function with the Interior Point algorithm, the resulting values have the maximal accuracy.
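
    As a generic illustration of nonlinear constrained optimization of cutting parameters, the sketch below minimizes a machining-time-like objective over cutting speed and feed subject to a power constraint, using an interior-point-style solver; the objective, constraint, and coefficients are invented for the example (the paper itself works in MATLAB).

        import numpy as np
        from scipy.optimize import minimize, NonlinearConstraint

        # x = (v, f): cutting speed [m/min] and feed [mm/rev].
        def machining_time(x):
            v, f = x
            return 1000.0 / (v * f)            # time ~ 1 / (speed * feed)

        def cutting_power(x):
            v, f = x
            return 0.8 * v * f ** 0.75         # empirical power model (assumed)

        power_limit = NonlinearConstraint(cutting_power, 0.0, 40.0)
        bounds = [(50.0, 300.0), (0.05, 0.5)]  # machine limits on v and f

        res = minimize(machining_time, x0=[100.0, 0.1], method="trust-constr",
                       bounds=bounds, constraints=[power_limit])
        print(f"optimal v = {res.x[0]:.1f} m/min, f = {res.x[1]:.3f} mm/rev, "
              f"time index = {res.fun:.3f}")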

  5. Optimal regulation in systems with stochastic time sampling

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.; Lee, P. S.

    1980-01-01

    An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics; the information distribution process is modeled as a variable time increment process in which, at the time that information is supplied to the control effectors, the effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process, where the control effectors know only the past information update intervals and the Markov transition mechanism, is almost identical to that obtained with a known and uniform information update interval.

  6. Optimization of thermal processing of canned mussels.

    PubMed

    Ansorena, M R; Salvadori, V O

    2011-10-01

    The design and optimization of thermal processing of solid-liquid food mixtures, such as canned mussels, requires knowledge of the thermal history at the slowest heating point. In general, this point does not coincide with the geometrical center of the can, and the results show that it is located along the axial axis at a height that depends on the brine content. In this study, a mathematical model for predicting the temperature at this point was developed using the discrete transfer function approach. Transfer function coefficients were obtained experimentally, and prediction equations were fitted to account for other can dimensions and sampling intervals. This model was coupled with an optimization routine to search among different retort temperature profiles to maximize a quality index. Both constant retort temperature (CRT) and variable retort temperature (VRT; discrete step-wise and exponential) profiles were considered. In the CRT process, the optimal retort temperature was always between 134 °C and 137 °C, and high values of thiamine retention were achieved. A significant improvement in the surface quality index was obtained for optimal VRT profiles compared to optimal CRT. The optimization procedure shown in this study produces results that justify its utilization in industry.

  7. Design optimization of axial flow hydraulic turbine runner: Part II - multi-objective constrained optimization method

    NASA Astrophysics Data System (ADS)

    Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji

    2002-06-01

    This paper is concerned with the design optimization of axial flow hydraulic turbine runner blade geometry. In order to obtain a better design with good performance, a new comprehensive performance optimization procedure is presented, combining a multi-variable multi-objective constrained optimization model with a Q3D inverse computation and a performance prediction procedure. From careful analysis of the inverse design of the axial hydraulic turbine runner, the total hydraulic loss and the cavitation coefficient are taken as optimization objectives, and a comprehensive objective function is defined using weight factors. Parameters of a newly proposed blade bound circulation distribution function, and parameters describing the positions of the blade leading and trailing edges in the meridional flow passage, are taken as optimization variables. The optimization procedure has been applied to the design optimization of a Kaplan runner with a specific speed of 440 kW. Numerical results show that the performance of the designed runner is successfully improved through optimization computation. The optimization model is validated and shows good convergence. With the multi-objective optimization model, it is possible to control the performance of the designed runner by adjusting the values of the weight factors defining the comprehensive objective function.

  8. The multi-criteria optimization for the formation of the multiple-valued logic model of a robotic agent

    NASA Astrophysics Data System (ADS)

    Bykovsky, A. Yu; Sherbakov, A. A.

    2016-08-01

    The C-valued Allen-Givone algebra is an attractive tool for modeling a robotic agent, but it requires the consensus method of minimization for the simplification of logic expressions. This procedure substitutes the maximal truth value for some undefined states of the function, thus extending the initially given truth table. This in turn creates the problem of different formal representations for the same initially given function. Multi-criteria optimization is proposed for the deliberate choice of undefined states and for model formation.

  9. Quantum dot ternary-valued full-adder: Logic synthesis by a multiobjective design optimization based on a genetic algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klymenko, M. V.; Remacle, F., E-mail: fremacle@ulg.ac.be

    2014-10-28

    A methodology is proposed for designing a low-energy consuming ternary-valued full adder based on a quantum dot (QD) electrostatically coupled with a single electron transistor operating as a charge sensor. The methodology is based on design optimization: the values of the physical parameters of the system required for implementing the logic operations are optimized using a multiobjective genetic algorithm. The searching space is determined by elements of the capacitance matrix describing the electrostatic couplings in the entire device. The objective functions are defined as the maximal absolute error over actual device logic outputs relative to the ideal truth tables for the sum and the carry-out in base 3. The logic units are implemented on the same device: a single dual-gate quantum dot and a charge sensor. Their physical parameters are optimized to compute either the sum or the carry-out outputs and are compatible with current experimental capabilities. The outputs are encoded in the value of the electric current passing through the charge sensor, while the logic inputs are supplied by the voltage levels on the two gate electrodes attached to the QD. The complex ternary logic operations are directly implemented on an extremely simple device, characterized by small size and low energy consumption compared to devices based on switching single-electron transistors. The design methodology is general and provides a rational approach for realizing non-switching logic operations on QD devices.

  10. A New Algorithm to Optimize Maximal Information Coefficient

    PubMed Central

    Luo, Feng; Yuan, Zheming

    2016-01-01

    The maximal information coefficient (MIC) captures dependences between paired variables, including both functional and non-functional relationships. In this paper, we develop a new method, ChiMIC, to calculate MIC values. The ChiMIC algorithm uses the chi-square test to terminate grid optimization, thereby removing the maximal grid size restriction of the original ApproxMaxMI algorithm. Computational experiments show that the ChiMIC algorithm maintains the same MIC values for noiseless functional relationships but gives much smaller MIC values for independent variables. For noisy functional relationships, the ChiMIC algorithm reaches the optimal partition much faster. Furthermore, the MCN values based on MIC calculated by ChiMIC capture the complexity of functional relationships better, and the statistical power of MIC calculated by ChiMIC is higher than that calculated by ApproxMaxMI. Moreover, the computational costs of ChiMIC are much lower than those of ApproxMaxMI. We apply the MIC values to feature selection and obtain better classification accuracy using features selected by the MIC values from ChiMIC. PMID:27333001

  11. Optimized theory for simple and molecular fluids.

    PubMed

    Marucho, M; Montgomery Pettitt, B

    2007-03-28

    An optimized closure approximation for both simple and molecular fluids is presented. A smooth interpolation between the Percus-Yevick and hypernetted chain closures is optimized by minimizing the free energy self-consistently with respect to the interpolation parameter(s). The molecular version is derived from a refinement of the method for simple fluids. In doing so, a method is proposed which appropriately couples an optimized closure with the variant of the diagrammatically proper integral equation recently introduced by this laboratory [K. M. Dyer et al., J. Chem. Phys. 123, 204512 (2005)]. The simplicity of the expressions involved in the proposed theory has allowed the authors to obtain an analytic expression for the approximate excess chemical potential. This is shown to be an efficient tool for estimating, from first principles, the numerical values of the interpolation parameters defining the aforementioned closure. As a preliminary test, representative models for simple fluids and homonuclear diatomic Lennard-Jones fluids were analyzed, obtaining site-site correlation functions in excellent agreement with simulation data.

  12. Vocational Rehabilitation Partnerships: Optimizing the Social Value Return on Investment of Employment Outcomes for People with Disabilities

    ERIC Educational Resources Information Center

    Ramos-Olszowy, Lorraine Florence

    2011-01-01

    This applied research project was developed to examine the social value return on investment (SV-ROI) of a community rehabilitation provider (CRP) in order to identify services that may optimize employment outcomes, better understand the associated factors affecting employment outcomes and retention, and explore how vocational rehabilitation…

  13. Evaluation of random errors in Williams’ series coefficients obtained with digital image correlation

    NASA Astrophysics Data System (ADS)

    Lychak, Oleh V.; Holyns'kiy, Ivan S.

    2016-03-01

    The use of the Williams’ series parameters for fracture analysis requires valid information about their error values. The aim of this investigation is to develop a method for estimating the standard deviation of random errors of the Williams’ series parameters obtained from measured components of the stress field. Criteria for choosing the optimal number of terms in the truncated Williams’ series, so that the parameters are derived with minimal errors, are also proposed. The method was used to evaluate the Williams’ parameters obtained from data measured by the digital image correlation technique in testing a three-point bending specimen.

  14. On optimizing the treatment of exchange perturbations.

    NASA Technical Reports Server (NTRS)

    Hirschfelder, J. O.; Chipman, D. M.

    1972-01-01

    Most theories of exchange perturbations would give the exact energy and wave function if carried out to infinite order. However, the different methods give different values for the second-order energy, and different values for E(1), the expectation value of the Hamiltonian corresponding to the zeroth- plus first-order wave function. In the present paper, it is shown that the zeroth- plus first-order wave function obtained by optimizing the basic equation used in most exchange perturbation treatments is the exact wave function for the perturbed system, and that E(1) is then the exact energy.

  15. Multi Objective Controller Design for Linear System via Optimal Interpolation

    NASA Technical Reports Server (NTRS)

    Ozbay, Hitay

    1996-01-01

    We propose a methodology for the design of a controller which satisfies a set of closed-loop objectives simultaneously. The set of objectives consists of: (1) pole placement, (2) decoupled command tracking of step inputs at steady state, and (3) minimization of step response transients with respect to envelope specifications. We first obtain a characterization of all controllers placing the closed-loop poles in a prescribed region of the complex plane. In this characterization, the free parameter matrix Q(s) is to be determined to attain objectives (2) and (3). Objective (2) is expressed as determining a Pareto optimal solution to a vector-valued optimization problem, whose solution is obtained by transforming it into a scalar convex optimization problem. This solution determines Q(0), and the remaining freedom in choosing Q(s) is used to satisfy objective (3). We write Q(s) = (1/v(s))Q̄(s) for a prescribed polynomial v(s). Q̄(s) is a polynomial matrix which is arbitrary except that Q̄(0) and the order of Q̄(s) are fixed. Obeying these constraints, Q̄(s) is then 'shaped' to minimize the step response characteristics of specific input/output pairs according to the maximum envelope violations. This problem is expressed as a vector-valued optimization problem using the concept of Pareto optimality. We then investigate a scalar optimization problem associated with this vector-valued problem and show that it is convex. The organization of the report is as follows. The next section includes some definitions and preliminary lemmas. We then give the problem statement, followed by a section with a detailed development of the design procedure. We then consider an aircraft control example. The last section gives some concluding remarks. The Appendix includes the proofs of technical lemmas, printouts of computer programs, and figures.

  16. Rambutan Seed (Nephelium Lappaceum L.) Optimization as Raw Material of High Nutrition Value Processed Food

    NASA Astrophysics Data System (ADS)

    Wahini, M.; Miranti, M. G.; Lukitasari, F.; Novela, L.

    2018-02-01

    Rambutan (Nephelium Lappaceum L.) is a plant closely associated with Southeast Asian countries, and some areas of Indonesia are no exception, yet rambutan seed is treated as a waste. It therefore needs to be optimized into a raw material for processed food with high nutritional and economic value. The purposes of this research were: 1) to find the best rambutan seed immersion formula; 2) to determine the nutritional value of the best immersed rambutan seed; 3) to produce a raw material and various processed rambutan seed products. The research method was a quasi-experiment with 6 treatments in a 2-factorial design; the immersion materials were NaCl and Ca(OH)2. The results showed that: 1) the best rambutan seed immersion formula used Ca(OH)2; 2) the best rambutan seed contained 1.6 ash, 31.2 protein and 26.9 fat; 3) the best rambutan seed produced flour and a processed seasoned-nut product. This research indicates that rambutan seed has strong potential as an alternative high-value raw material.

  17. Parametric optimization of optical signal detectors employing the direct photodetection scheme

    NASA Astrophysics Data System (ADS)

    Kirakosiants, V. E.; Loginov, V. A.

    1984-08-01

    The problem of optimizing the parameters of an optical signal detection scheme is addressed using the concept of a receiver with direct photodetection. An expression is derived which accurately approximates the field of view (FOV) values obtained by a direct computer minimization of the probability of missing a signal; optimum values of the receiver FOV were found for different atmospheric conditions characterized by the number of coherence spots and the intensity fluctuations of a plane wave. It is further pointed out that the criterion presented can possibly be used for parametric optimization of detectors operating in accordance with the Neyman-Pearson criterion.

  18. Optimization of Nanocomposite Modified Asphalt Mixtures Fatigue Life using Response Surface Methodology

    NASA Astrophysics Data System (ADS)

    Bala, N.; Napiah, M.; Kamaruddin, I.; Danlami, N.

    2018-04-01

    In this study, the materials polyethylene, polypropylene and nanosilica for nanocomposite modified asphalt mixtures were modelled and optimized to obtain the optimum quantities for higher fatigue life. Response Surface Methodology (RSM) was applied for the optimization, based on a Box-Behnken design (BBD). The interaction effects of the independent variables, polymers and nanosilica, on fatigue life were evaluated. The results indicate that the individual effects of polymer and nanosilica content are both important; however, the nanosilica content has a more significant effect on fatigue life resistance. Also, the mean error obtained from the optimization results is less than 5% for all the responses, indicating that the predicted values are in agreement with the experimental results. Furthermore, it was concluded that, for asphalt mixture designs with high performance properties, optimization using RSM is a very effective approach.

  19. Inverse Modelling to Obtain Head Movement Controller Signal

    NASA Technical Reports Server (NTRS)

    Kim, W. S.; Lee, S. H.; Hannaford, B.; Stark, L.

    1984-01-01

    Experimentally obtained dynamics of time-optimal, horizontal head rotations have previously been simulated by a sixth order, nonlinear model driven by rectangular control signals. Electromyography (EMG) recordings have aspects which differ in detail from the theoretical rectangular pulsed control signal. Control signals for time-optimal as well as sub-optimal horizontal head rotations were obtained by means of an inverse modelling procedure. With experimentally measured dynamical data serving as the input, this procedure inverts the model to produce the neurological control signals driving muscles and plant. The relationships between these controller signals and EMG records should contribute to the understanding of the neurological control of movements.

  20. Reference values assessment in a Mediterranean population for small dense low-density lipoprotein concentration isolated by an optimized precipitation method.

    PubMed

    Fernández-Cidón, Bárbara; Padró-Miquel, Ariadna; Alía-Ramos, Pedro; Castro-Castro, María José; Fanlo-Maresma, Marta; Dot-Bach, Dolors; Valero-Politi, José; Pintó-Sala, Xavier; Candás-Estébanez, Beatriz

    2017-01-01

    High serum concentrations of small dense low-density lipoprotein cholesterol (sd-LDL-c) particles are associated with risk of cardiovascular disease (CVD), but their clinical application has been hindered by the laborious current method used for their quantification. The aims were to optimize a simple and fast precipitation method to isolate sd-LDL particles and to establish a reference interval in a Mediterranean population. Forty-five serum samples were collected, and sd-LDL particles were isolated using a modified heparin-Mg2+ precipitation method. The sd-LDL-c concentration was calculated by subtracting high-density lipoprotein cholesterol (HDL-c) from the total cholesterol measured in the supernatant. This method was compared with the reference method (ultracentrifugation). Reference values were estimated according to the Clinical and Laboratory Standards Institute and International Federation of Clinical Chemistry and Laboratory Medicine recommendations. The sd-LDL-c concentration was measured in serum from 79 subjects with no lipid metabolism abnormalities. The Passing-Bablok regression equation was y = 1.52 (95% CI: 0.72 to 1.73) x + 0.07 (95% CI: -0.1 to 0.13), demonstrating no statistically significant differences between the modified precipitation method and the ultracentrifugation reference method. Similarly, no differences were detected when considering only sd-LDL-c from dyslipidemic patients, since the modifications added to the precipitation method facilitated the proper sedimentation of triglycerides and other lipoproteins. The reference interval for sd-LDL-c concentration estimated in a Mediterranean population was 0.04-0.47 mmol/L. An optimization of the heparin-Mg2+ precipitation method for sd-LDL particle isolation was performed, and reference intervals were established in a Spanish Mediterranean population. Measured values were equivalent to those obtained with the reference method, assuring its clinical application when tested in both normolipidemic and dyslipidemic

  1. 7 CFR 356.4 - Property valued at $10,000 or less; notice of seizure administrative action to obtain forfeiture.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 5 2012-01-01 2012-01-01 false Property valued at $10,000 or less; notice of seizure administrative action to obtain forfeiture. 356.4 Section 356.4 Agriculture Regulations of the Department of Agriculture (Continued) ANIMAL AND PLANT HEALTH INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE FORFEITURE...

  2. Shape optimization of road tunnel cross-section by simulated annealing

    NASA Astrophysics Data System (ADS)

    Sobótka, Maciej; Pachnicz, Michał

    2016-06-01

    The paper concerns shape optimization of a tunnel excavation cross-section. The study incorporates the simulated annealing (SA) optimization procedure. The form of the cost function derives from the energetic optimality condition formulated in the authors' previous papers, and the algorithm takes advantage of an optimization procedure already published by the authors. Unlike other approaches presented in the literature, the one introduced in this paper takes into consideration the practical requirement of preserving a fixed clearance gauge. Itasca FLAC software is utilized in the numerical examples. The optimal excavation shapes are determined for five different in situ stress ratios; this factor significantly affects the optimal topology of the excavation. The resulting shapes are elongated in the direction of the greater principal stress. Moreover, the obtained optimal shapes have smooth contours circumscribing the gauge.
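
    A minimal sketch of the simulated annealing loop behind such shape searches is given below. It anneals a generic shape-parameter vector under a geometric cooling schedule with a Metropolis acceptance rule; the objective, neighborhood move, bounds, and schedule constants are all assumed for illustration (the paper's cost function comes from its energetic optimality condition and a FLAC model).

        import numpy as np

        rng = np.random.default_rng(5)

        def cost(shape):
            # Stand-in for the energetic cost of an excavation contour described
            # by a few shape parameters (e.g. radii of a piecewise boundary).
            return np.sum((shape - np.array([1.0, 0.6, 0.8, 1.2])) ** 2)

        shape = rng.uniform(0.5, 1.5, 4)       # initial feasible shape parameters
        f = cost(shape)
        T, T_min, alpha = 1.0, 1e-4, 0.95      # geometric cooling schedule

        while T > T_min:
            for _ in range(50):                # moves per temperature level
                cand = shape + rng.normal(0.0, 0.05, shape.size)
                cand = np.clip(cand, 0.5, 1.5) # keep clearance-gauge-like bounds
                df = cost(cand) - f
                if df < 0 or rng.random() < np.exp(-df / T):   # Metropolis rule
                    shape, f = cand, f + df
            T *= alpha

        print("optimized shape parameters:", shape.round(3), f"cost {f:.4f}")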

  3. Optimization of the recovery of high-value compounds from pitaya fruit by-products using microwave-assisted extraction.

    PubMed

    Ferreres, Federico; Grosso, Clara; Gil-Izquierdo, Angel; Valentão, Patrícia; Mota, Ana T; Andrade, Paula B

    2017-09-01

    A green microwave-assisted extraction of high value-added compounds from exotic fruits' peels was optimized by a Box-Behnken design using 3 factors: solid/solvent ratio (X1), temperature (X2), and extraction time (X3). Using Derringer's desirability function, optimum extraction yields are obtained with X1 = 1/149.95 g/mL, X2 = 72.27 °C and X3 = 39.39 min (white-fleshed red pitaya) and X1 = 1/148.96 g/mL, X2 = 72.56 °C and X3 = 5.02 min (yellow pitaya), and a maximum betacyanin content is achieved with X1 = 1/150 g/mL, X2 = 49.33 °C and X3 = 5 min. None of the factors influenced the extraction of phenolic compounds. Eighteen cinnamoyl derivatives, 17 flavonoid derivatives and 4 betacyanins were identified by HPLC-DAD-ESI/MS(n), with 23 and 15 new compounds described in yellow and white-fleshed red pitayas, respectively. These results indicate that it is possible to reuse these by-products to recover compounds for the food and pharmaceutical industries.

  4. Analysis and optimization of population annealing

    NASA Astrophysics Data System (ADS)

    Amey, Christopher; Machta, Jonathan

    2018-03-01

    Population annealing is an easily parallelizable sequential Monte Carlo algorithm that is well suited for simulating the equilibrium properties of systems with rough free-energy landscapes. In this work we seek to understand and improve the performance of population annealing. We derive several useful relations between quantities that describe the performance of population annealing and use these relations to suggest methods to optimize the algorithm. These optimization methods were tested by performing large-scale simulations of the three-dimensional (3D) Edwards-Anderson (Ising) spin glass and measuring several observables. The optimization methods were found to substantially decrease the amount of computational work necessary as compared to previously used, unoptimized versions of population annealing. We also obtain more accurate values of several important observables for the 3D Edwards-Anderson model.

  5. Optimization of β-cyclodextrin-based flavonol extraction from apple pomace using response surface methodology.

    PubMed

    Parmar, Indu; Sharma, Sowmya; Rupasinghe, H P Vasantha

    2015-04-01

    The present study investigated five cyclodextrins (CDs) for the extraction of flavonols from apple pomace powder and optimized β-CD based extraction of total flavonols using response surface methodology. A 2^3 central composite design with β-CD concentration (0-5 g 100 mL(-1)), extraction temperature (20-72 °C) and extraction time (6-48 h) as factors, and a second-order quadratic model for the total flavonol yield (mg 100 g(-1) DM), was selected to generate the response surface curves. The optimal conditions obtained were: β-CD concentration, 2.8 g 100 mL(-1); extraction temperature, 45 °C; and extraction time, 25.6 h, which predicted the extraction of 166.6 mg total flavonols 100 g(-1) DM. The predicted amount was comparable to the experimental amount of 151.5 mg total flavonols 100 g(-1) DM obtained under the optimal β-CD based parameters, giving a low absolute error and demonstrating the adequacy of the fitted model. In addition, the results from the optimized extraction conditions showed values similar to those obtained through a previously established solvent-based, sonication-assisted flavonol extraction procedure. To the best of our knowledge, this is the first study to optimize aqueous β-CD based flavonol extraction, which presents an environmentally safe method for value addition to under-utilized bioresources.
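
    The response-surface step is easy to illustrate: fit a second-order polynomial to designed runs and locate its optimum within the factor ranges. The sketch below does this for the three factors named in the abstract, but the design points and yield responses are synthetic, invented only for the example.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(6)

        def quad_features(X):
            c, t, h = X.T                       # concentration, temperature, time
            return np.column_stack([np.ones(len(X)), c, t, h,
                                    c*t, c*h, t*h, c**2, t**2, h**2])

        # Synthetic "experimental" runs over the factor ranges in the abstract.
        X = rng.uniform([0, 20, 6], [5, 72, 48], size=(20, 3))
        true = lambda c, t, h: 160 - 8*(c - 2.8)**2 - 0.05*(t - 45)**2 - 0.2*(h - 26)**2
        y = true(*X.T) + rng.normal(0, 2, len(X))   # flavonol yield + noise

        beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

        # Maximize the fitted surface inside the experimental region.
        neg_pred = lambda x: -(quad_features(x.reshape(1, -1)) @ beta)[0]
        res = minimize(neg_pred, x0=[2.5, 45, 25],
                       bounds=[(0, 5), (20, 72), (6, 48)])
        print("predicted optimum (conc, temp, time):", res.x.round(2))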

  6. Lean processes for optimizing OR capacity utilization: prospective analysis before and after implementation of value stream mapping (VSM).

    PubMed

    Schwarz, Patric; Pannes, Klaus Dieter; Nathan, Michel; Reimer, Hans Jorg; Kleespies, Axel; Kuhn, Nicole; Rupp, Anne; Zügel, Nikolaus Peter

    2011-10-01

    The decision to optimize the processes in the operating tract was based on two factors: competition among clinics and a desire to optimize the use of available resources. The aim of the project was to improve operating room (OR) capacity utilization by reducing change and throughput times per patient. The study was conducted at the Centre Hospitalier Emil Mayrisch clinic for specialized care (n = 618 beds), Luxembourg (South). A prospective analysis was performed before and after the implementation of optimized processes, with value stream analysis and design (value stream mapping, VSM) used as tools. VSM depicts patient throughput and the corresponding information flows, and is furthermore used to identify process waste (e.g. time, human resources, materials, etc.). For this purpose, change times per patient (extubation of patient 1 until intubation of patient 2) and throughput times (inward transfer until outward transfer) were measured. VSM, change and throughput times for 48 patient flows (VSM-A(1), actual state = initial situation) served as the starting point. An optimized VSM (VSM-O) was developed in an interdisciplinary manner and evaluated. Prospective analyses of 42 patients without (VSM-A(2)) and 75 patients with (VSM-O) an optimized process in place were conducted. The prospective analysis resulted in a mean change time (mean ± SEM) of 1,507 ± 100 s for VSM-A(2) versus 933 ± 66 s for VSM-O (p < 0.001). The mean throughput time (mean ± SEM) was 151 min (±8) for VSM-A(2) versus 120 min (±10) for VSM-O (p < 0.05). This corresponds to a 23% decrease in waiting time per patient in total. Efficient OR capacity utilization and the optimized use of human resources allowed an additional 1820 interventions to be carried out per year without any increase in human resources. In addition, perioperative patient monitoring was increased up to 100%.

  7. Value Engineering. "A Working Tool for Cost Control in the Design of Educational Facilities."

    ERIC Educational Resources Information Center

    Lawrence, Jerry

    Value Engineering (VE) is a cost optimizing technique used to analyze design quality and cost-effectiveness. The application of VE procedures to the design and construction of school facilities has been adopted by the state of Washington. By using VE, the optimum value for every life cycle dollar spent on a facility is obtained by identifying not…

  8. Optimal control of nonlinear continuous-time systems in strict-feedback form.

    PubMed

    Zargarzadeh, Hassan; Dierks, Travis; Jagannathan, Sarangapani

    2015-10-01

    This paper proposes a novel optimal tracking control scheme for nonlinear continuous-time systems in strict-feedback form with uncertain dynamics. The optimal tracking problem is transformed into an equivalent optimal regulation problem through a feedforward adaptive control input that is generated by modifying the standard backstepping technique. Subsequently, a neural network-based optimal control scheme is introduced to estimate the cost, or value function, over an infinite horizon for the resulting nonlinear continuous-time systems in affine form when the internal dynamics are unknown. The estimated cost function is then used to obtain the optimal feedback control input; therefore, the overall optimal control input for the nonlinear continuous-time system in strict-feedback form includes the feedforward plus the optimal feedback terms. It is shown that the estimated cost function minimizes the Hamilton-Jacobi-Bellman estimation error in a forward-in-time manner without using any value or policy iterations. Finally, optimal output feedback control is introduced through the design of a suitable observer. Lyapunov theory is utilized to show the overall stability of the proposed schemes without requiring an initial admissible controller. Simulation examples are provided to validate the theoretical results.

  9. Determination of the Spatial Distribution in Hydraulic Conductivity Using Genetic Algorithm Optimization

    NASA Astrophysics Data System (ADS)

    Aksoy, A.; Lee, J. H.; Kitanidis, P. K.

    2016-12-01

    Heterogeneity in hydraulic conductivity (K) impacts the transport and fate of contaminants in the subsurface as well as the design and operation of managed aquifer recharge (MAR) systems. Recently, improvements in computational resources and the availability of big data through electrical resistivity tomography (ERT) and remote sensing have provided opportunities to better characterize the subsurface. Yet, there is a need to improve prediction and evaluation methods in order to extract information from field measurements for better field characterization. In this study, genetic algorithm optimization, which has been widely used in optimal aquifer remediation design, was used to determine the spatial distribution of K. A hypothetical 2 km by 2 km aquifer was considered. A genetic algorithm library, PGAPack, was linked with a fast Fourier transform based random field generator as well as a groundwater flow and contaminant transport simulation model (BIO2D-KE). The objective of the optimization model was to minimize the total squared error between measured and predicted field values. It was assumed that measured K values were available through ERT. The performance of the genetic algorithm in predicting the distribution of K was tested for different cases. In the first, observed K values were evaluated using the random field generator alone as the forward model. In the second case, in addition to K values obtained through ERT, measured head values were incorporated into the evaluation, with BIO2D-KE and the random field generator used as the forward models. Lastly, tracer concentrations were used as additional information in the optimization model. Initial results indicated enhanced performance when the random field generator and BIO2D-KE are used in combination in predicting the spatial distribution of K.
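
    A minimal sketch of the genetic-algorithm formulation is shown below: individuals encode a log-K field on a coarse grid, and fitness is the (negated) total squared error against "measured" values, here synthesized. This stands in for the PGAPack plus BIO2D-KE setup, which is far more elaborate; every constant below is an assumption of the example.

        import numpy as np

        rng = np.random.default_rng(7)

        n_cells = 25                            # coarse 5x5 log-K field, flattened
        true_logK = rng.normal(-4.0, 0.5, n_cells)
        measured = true_logK + rng.normal(0.0, 0.05, n_cells)   # e.g. ERT-derived

        def fitness(ind):
            return -np.sum((ind - measured) ** 2)   # minimize total squared error

        pop = rng.normal(-4.0, 1.0, (60, n_cells))  # initial population
        for gen in range(200):
            scores = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(scores)[::-1][:20]]   # truncation selection
            children = []
            for _ in range(len(pop) - len(parents)):
                a, b = parents[rng.integers(0, 20, 2)]
                cut = rng.integers(1, n_cells)      # one-point crossover
                child = np.concatenate([a[:cut], b[cut:]])
                # Gaussian mutation applied to ~10% of the genes.
                child += rng.normal(0.0, 0.02, n_cells) * (rng.random(n_cells) < 0.1)
                children.append(child)
            pop = np.vstack([parents, children])

        best = pop[np.argmax([fitness(i) for i in pop])]
        print(f"RMSE of best individual: {np.sqrt(np.mean((best - measured)**2)):.4f}")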

  10. Ring rolling process simulation for microstructure optimization

    NASA Astrophysics Data System (ADS)

    Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio

    2017-10-01

    Metal undergoes complicated microstructural evolution during Hot Ring Rolling (HRR), which determines the quality, mechanical properties and life of the ring formed. One of the principal microstructural properties that most influences the structural performance of forged components is the average grain size. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components for aerospace applications. In particular, the influence of the process input parameters (feed rate of the mandrel and angular velocity of the driver roll) on the microstructural and geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model for HRR has been developed in SFTC DEFORM V11, taking into account the microstructural development of the material used (the nickel superalloy Waspaloy). The Finite Element (FE) model has been used to formulate a proper optimization problem. The optimization procedure has been developed to find the combination of process parameters which minimizes the average grain size. The Response Surface Methodology (RSM) has been used to find the relationship between input and output parameters, using the exact values of the output parameters at the control points of a design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. Then, an optimization procedure based on Genetic Algorithms has been applied. Finally, the minimum value of average grain size with respect to the input parameters has been found.

  11. Simulation and optimization of faceted structure for illumination

    NASA Astrophysics Data System (ADS)

    Liu, Lihong; Engel, Thierry; Flury, Manuel

    2016-04-01

    The re-direction of incoherent light using a surface containing only facets with specific angular values is proposed. A new photometric approach is adopted since the size of each facet is large in comparison with the wavelength. A reflective configuration is employed to avoid the dispersion problems of materials. The irradiance distribution of the reflected beam is determined by the angular position of each facet. In order to obtain a specific irradiance distribution, the angular position of each facet is optimized using Zemax OpticStudio 15 software. A detector is placed in the direction perpendicular to the reflected beam. Based on the incoherent irradiance distribution on the detector, a merit function is defined to drive the optimization process. The two-dimensional angular position of each facet is defined as a variable which is optimized within a specified range. Because the merit function needs to be updated, a macro program updates this function within Zemax. In order to reduce the complexity of manual operation, an automatic optimization approach is established: Zemax performs the optimization task and sends the irradiance data back to Matlab for further analysis. Several simulation results are given to verify the optimization method, and they are compared to results obtained with the LightTools software.

  12. Optimally Stopped Optimization

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Lidar, Daniel

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost of obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define the cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known, and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time, optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark the performance of a D-Wave 2X quantum annealer and the HFS solver, a specialized classical heuristic algorithm designed for low tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is between one and two orders of magnitude faster than the HFS solver.

  13. Relationship Between Optimal Gain and Coherence Zone in Flight Simulation

    NASA Technical Reports Server (NTRS)

    Gracio, Bruno Jorge Correia; Pais, Ana Rita Valente; vanPaassen, M. M.; Mulder, Max; Kely, Lon C.; Houck, Jacob A.

    2011-01-01

    In motion simulation the inertial information generated by the motion platform is most of the time different from the visual information in the simulator displays. This occurs due to the physical limits of the motion platform. However, for small motions that are within the physical limits of the motion platform, one-to-one motion, i.e. visual information equal to inertial information, is possible. Previous studies have shown that one-to-one motion is often judged as too strong, causing researchers to lower the inertial amplitude. When trying to measure the optimal inertial gain for a visual amplitude, we found a zone of optimal gains instead of a single value. This result seems related to the coherence zones that have been measured in flight simulation studies, but the optimal gain results were never directly related to the coherence zones. In this study we investigated whether the optimal gain measurements are the same as the coherence zone measurements, and whether the results obtained from the two measurements can be used to differentiate between simulators with different configurations. An experiment was conducted at the NASA Langley Research Center which used both the Cockpit Motion Facility and the Visual Motion Simulator. The results show that the inertial gains obtained with the optimal gain measurements differ from those obtained with the coherence zone measurements. The optimal gain is within the coherence zone. The point of mean optimal gain was lower and further away from the one-to-one line than the point of mean coherence. The zone width obtained for the coherence zone measurements was dependent on the visual amplitude and frequency. For the optimal gain, the zone width remained constant when the visual amplitude and frequency were varied. We found no effect of the simulator configuration in either the coherence zone or the optimal gain measurements.

  14. Multi-Objective Optimization of Friction Stir Welding Process Parameters of AA6061-T6 and AA7075-T6 Using a Biogeography Based Optimization Algorithm

    PubMed Central

    Tamjidy, Mehran; Baharudin, B. T. Hang Tuah; Paslar, Shahla; Matori, Khamirul Amin; Sulaiman, Shamsuddin; Fadaeifard, Firouz

    2017-01-01

    The development of Friction Stir Welding (FSW) has provided an alternative approach for producing high-quality welds in a fast and reliable manner. This study focuses on the mechanical properties of the dissimilar friction stir welding of AA6061-T6 and AA7075-T6 aluminum alloys. FSW process parameters such as tool rotational speed, tool traverse speed, tilt angle, and tool offset significantly influence the mechanical properties of the friction stir welded joints. A mathematical regression model is developed to determine the empirical relationship between the FSW process parameters and the mechanical properties, and the results are validated. In order to obtain the values of the process parameters that simultaneously optimize the ultimate tensile strength, elongation, and minimum hardness in the heat affected zone (HAZ), a metaheuristic multi-objective algorithm based on biogeography-based optimization is proposed. The Pareto optimal frontiers for triple and dual objective functions are obtained, and the best optimal solution is selected using two different decision-making techniques: the technique for order of preference by similarity to ideal solution (TOPSIS) and Shannon's entropy. PMID:28772893
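
    Since the final selection step named here is TOPSIS, a compact sketch of that decision method is shown below, ranking a few Pareto-optimal candidates on three criteria; the candidate matrix and criterion weights are invented for the illustration and do not come from the paper.

        import numpy as np

        # Rows: candidate FSW parameter sets; columns: UTS [MPa], elongation [%],
        # HAZ minimum hardness [HV]. All values invented for the illustration.
        M = np.array([[210.0, 8.5, 62.0],
                      [225.0, 7.2, 58.0],
                      [198.0, 9.8, 65.0],
                      [218.0, 8.0, 60.0]])
        weights = np.array([0.4, 0.3, 0.3])     # all three criteria are benefits here

        R = M / np.linalg.norm(M, axis=0)       # vector normalization
        V = R * weights                         # weighted normalized matrix
        ideal, anti = V.max(axis=0), V.min(axis=0)  # ideal / anti-ideal solutions

        d_plus = np.linalg.norm(V - ideal, axis=1)
        d_minus = np.linalg.norm(V - anti, axis=1)
        closeness = d_minus / (d_plus + d_minus)    # relative closeness to ideal

        print("TOPSIS ranking (best first):", np.argsort(closeness)[::-1])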

  15. Multi-Objective Optimization of Friction Stir Welding Process Parameters of AA6061-T6 and AA7075-T6 Using a Biogeography Based Optimization Algorithm.

    PubMed

    Tamjidy, Mehran; Baharudin, B T Hang Tuah; Paslar, Shahla; Matori, Khamirul Amin; Sulaiman, Shamsuddin; Fadaeifard, Firouz

    2017-05-15

    The development of Friction Stir Welding (FSW) has provided an alternative approach for producing high-quality welds in a fast and reliable manner. This study focuses on the mechanical properties of the dissimilar friction stir welding of AA6061-T6 and AA7075-T6 aluminum alloys. FSW process parameters such as tool rotational speed, tool traverse speed, tilt angle, and tool offset significantly influence the mechanical properties of the friction stir welded joints. A mathematical regression model is developed to determine the empirical relationship between the FSW process parameters and the mechanical properties, and the results are validated. In order to obtain the values of the process parameters that simultaneously optimize the ultimate tensile strength, elongation, and minimum hardness in the heat affected zone (HAZ), a metaheuristic multi-objective algorithm based on biogeography-based optimization is proposed. The Pareto optimal frontiers for triple and dual objective functions are obtained, and the best optimal solution is selected using two different decision-making techniques: the technique for order of preference by similarity to ideal solution (TOPSIS) and Shannon's entropy.

  16. Optimal Cutoff Values of WHO-HPQ Presenteeism Scores by ROC Analysis for Preventing Mental Sickness Absence in Japanese Prospective Cohort

    PubMed Central

    Suzuki, Tomoko; Miyaki, Koichi; Sasaki, Yasuharu; Song, Yixuan; Tsutsumi, Akizumi; Kawakami, Norito; Shimazu, Akihito; Takahashi, Masaya; Inoue, Akiomi; Kurioka, Sumiko; Shimbo, Takuro

    2014-01-01

    Objectives Sickness absence due to mental disease in the workplace has become a global public health problem. Previous studies report that sickness presenteeism is associated with sickness absence. We aimed to determine optimal cutoff scores for presenteeism in screening for future absence due to mental disease. Methods A prospective study of 2195 Japanese employees from all areas of Japan was conducted. Presenteeism and depression were measured by the validated Japanese version of the World Health Organization Health and Work Performance Questionnaire (WHO-HPQ) and the K6 scale, respectively. Absence due to mental disease across a 2-year follow-up was surveyed using medical certificates obtained for work absence. Socioeconomic status was measured via a self-administered questionnaire. Receiver operating characteristic (ROC) analysis was used to determine optimal cutoff scores for absolute and relative presenteeism in relation to the area under the curve (AUC), sensitivity, and specificity. Results The AUC values for absolute and relative presenteeism were 0.708 (95% CI, 0.618–0.797) and 0.646 (95% CI, 0.546–0.746), respectively. Optimal cutoff scores of absolute and relative presenteeism were 40 and 0.8, respectively. With multivariate adjustment, cohort participants beyond the proposed cutoff scores for absolute and relative presenteeism were significantly more likely to be absent due to mental disease (OR = 4.85, 95% CI: 2.20–10.73 and OR = 5.37, 95% CI: 2.42–11.93, respectively). The inclusion or exclusion of depressive symptoms (K6 ≥ 13) at baseline in the multivariate adjustment did not influence the results. Conclusions Our proposed optimal cutoff scores for absolute and relative presenteeism are 40 and 0.8, respectively. Participants who scored worse than the cutoff scores for presenteeism were significantly more likely to be absent in the future because of mental disease. Our findings suggest that the utility of presenteeism in the screening of
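
    Optimal cutoffs of this kind are typically read off the ROC curve at the point maximizing the Youden index. A minimal sketch follows, assuming scikit-learn is available; the presenteeism scores and outcome labels are synthetic, not the study cohort.

      import numpy as np
      from sklearn.metrics import roc_curve, roc_auc_score

      rng = np.random.default_rng(0)
      # Hypothetical data: a presenteeism score and later absence due to mental disease.
      score = np.concatenate([rng.normal(35, 8, 950), rng.normal(45, 8, 50)])
      absent = np.concatenate([np.zeros(950, dtype=int), np.ones(50, dtype=int)])

      fpr, tpr, thresholds = roc_curve(absent, score)
      youden = tpr - fpr                      # Youden index = sensitivity + specificity - 1
      cutoff = thresholds[np.argmax(youden)]  # threshold maximizing the Youden index
      print(f"AUC = {roc_auc_score(absent, score):.3f}, optimal cutoff = {cutoff:.1f}")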

  17. Value management: optimizing quality, service, and cost.

    PubMed

    Makadon, Harvey J; Bharucha, Farzan; Gavin, Michael; Oliveira, Jason; Wietecha, Mark

    2010-01-01

    Hospitals have wrestled with balancing quality, service, and cost for years--and the visibility and urgency around measuring and communicating real metrics has grown exponentially in the last decade. However, even today, most hospital leaders cannot articulate or demonstrate the "value" they provide to patients and payers. Instead of developing a strategic direction that is based around a core value proposition, they focus their strategic efforts on tactical decisions like physician recruitment, facility expansion, and physician alignment. In the healthcare paradigm of the next decade, alignment of various tactical initiatives will require a more coherent understanding of the hospital's core value positioning. The authors draw on their experience in a variety of healthcare settings to suggest that for most hospitals, quality (i.e., clinical outcomes and patient safety) will become the most visible indicator of value, and introduce a framework to help healthcare providers influence their value positioning based on this variable.

  18. Development and validation of optimal cut-off value in inter-arm systolic blood pressure difference for prediction of cardiovascular events.

    PubMed

    Hirono, Akira; Kusunose, Kenya; Kageyama, Norihito; Sumitomo, Masayuki; Abe, Masahiro; Fujinaga, Hiroyuki; Sata, Masataka

    2018-01-01

    An inter-arm systolic blood pressure difference (IAD) is associated with cardiovascular disease. The aim of this study was to develop and validate the optimal cut-off value of IAD as a predictor of major adverse cardiac events in patients with arteriosclerosis risk factors. From 2009 to 2014, 1076 patients who had at least one cardiovascular risk factor were included in the analysis. We defined 700 randomly selected patients as a development cohort to confirm that IAD was a predictor of cardiovascular events and to determine the optimal cut-off value of IAD. Next, we validated outcomes in the remaining 376 patients as a validation cohort. Blood pressure (BP) was measured in both arms simultaneously using an automatic ankle-brachial index (ABI) device. The primary endpoint was cardiovascular events and the secondary endpoint was all-cause mortality. During a median period of 2.8 years, 143 patients reached the primary endpoint in the development cohort. In the multivariate Cox proportional hazards analysis, IAD was a strong predictor of cardiovascular events (hazard ratio: 1.03, 95% confidence interval: 1.01-1.05, p=0.005). The receiver operating characteristic curve revealed that 5 mmHg was the optimal cut-off point of IAD to predict cardiovascular events (p<0.001). In the validation cohort, the presence of a large IAD (IAD ≥5 mmHg) was significantly associated with the primary endpoint (p=0.021). IAD is significantly associated with future cardiovascular events in patients with arteriosclerosis risk factors. The optimal cut-off value of IAD is 5 mmHg. Copyright © 2017 Japanese College of Cardiology. Published by Elsevier Ltd. All rights reserved.

  19. Optimal control and optimal trajectories of regional macroeconomic dynamics based on the Pontryagin maximum principle

    NASA Astrophysics Data System (ADS)

    Bulgakov, V. K.; Strigunov, V. V.

    2009-05-01

    The Pontryagin maximum principle is used to prove a theorem concerning optimal control in regional macroeconomics. A boundary value problem for optimal trajectories of the state and adjoint variables is formulated, and optimal curves are analyzed. An algorithm is proposed for solving the boundary value problem of optimal control. The performance of the algorithm is demonstrated by computing an optimal control and the corresponding optimal trajectories.
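
    For readers unfamiliar with the method, the generic form of the boundary value problem that the maximum principle produces is sketched below in LaTeX; the paper's specific macroeconomic state equation and objective are not reproduced here.

      For a control problem $\dot{x} = f(x,u,t)$, $x(0) = x_0$, maximizing
      $J = \int_0^T g(x,u,t)\,dt$, define the Hamiltonian
      \[
        H(x, u, \psi, t) = g(x, u, t) + \psi^{\top} f(x, u, t).
      \]
      The optimal control maximizes $H$ pointwise,
      \[
        u^*(t) = \arg\max_{u \in U} H(x^*(t), u, \psi(t), t),
      \]
      and the optimal state and adjoint trajectories solve the two-point
      boundary value problem
      \[
        \dot{x}^* = \frac{\partial H}{\partial \psi}, \qquad
        \dot{\psi} = -\frac{\partial H}{\partial x}, \qquad
        x^*(0) = x_0, \qquad \psi(T) = 0,
      \]
      where $\psi(T) = 0$ is the transversality condition for a free terminal state.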

  20. Reinforcement interval type-2 fuzzy controller design by online rule generation and q-value-aided ant colony optimization.

    PubMed

    Juang, Chia-Feng; Hsu, Chia-Hung

    2009-12-01

    This paper proposes a new reinforcement-learning method using online rule generation and Q-value-aided ant colony optimization (ORGQACO) for fuzzy controller design. The fuzzy controller is based on an interval type-2 fuzzy system (IT2FS). The antecedent part in the designed IT2FS uses interval type-2 fuzzy sets to improve controller robustness to noise. There are initially no fuzzy rules in the IT2FS. The ORGQACO concurrently designs both the structure and parameters of an IT2FS. We propose an online interval type-2 rule generation method for the evolution of system structure and flexible partitioning of the input space. Consequent part parameters in an IT2FS are designed using Q-values and the reinforcement local-global ant colony optimization algorithm. This algorithm selects the consequent part from a set of candidate actions according to ant pheromone trails and Q-values, both of which are updated using reinforcement signals. The ORGQACO design method is applied to the following three control problems: 1) truck-backing control; 2) magnetic-levitation control; and 3) chaotic-system control. The ORGQACO is compared with other reinforcement-learning methods to verify its efficiency and effectiveness. Comparisons with type-1 fuzzy systems verify the noise robustness property of using an IT2FS.
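
    The consequent-selection idea, combining pheromone trails with Q-values, can be sketched schematically. The updates below are a generic pheromone-plus-Q construction for illustration, not the exact ORGQACO equations; all sizes and constants are assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      n_rules, n_actions = 5, 4            # hypothetical sizes
      tau = np.ones((n_rules, n_actions))  # pheromone trails
      q = np.zeros((n_rules, n_actions))   # Q-values per (rule, candidate action)
      alpha, gamma, rho = 0.1, 0.9, 0.05

      def select_action(rule):
          # Selection preference combines the pheromone trail and the Q-value.
          pref = tau[rule] * np.exp(q[rule])
          return rng.choice(n_actions, p=pref / pref.sum())

      def update(rule, action, reward, next_best_q):
          # The reinforcement signal updates both the Q-value and the pheromone.
          q[rule, action] += alpha * (reward + gamma * next_best_q - q[rule, action])
          tau[rule] *= (1 - rho)                     # evaporation
          tau[rule, action] += rho * max(reward, 0)  # deposit on the chosen action

      a = select_action(0)
      update(0, a, reward=1.0, next_best_q=float(q[0].max()))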

  1. Numerical experience with a class of algorithms for nonlinear optimization using inexact function and gradient information

    NASA Technical Reports Server (NTRS)

    Carter, Richard G.

    1989-01-01

    For optimization problems associated with engineering design, parameter estimation, image reconstruction, and other optimization/simulation applications, low-accuracy function and gradient values are frequently much less expensive to obtain than high-accuracy values. Here, researchers investigate the computational performance of trust region methods for nonlinear optimization when high-accuracy evaluations are unavailable or prohibitively expensive, and confirm earlier theoretical predictions that the algorithm remains convergent even with relative gradient errors of 0.5 or more. The proper choice of the amount of accuracy to use in function and gradient evaluations can result in orders-of-magnitude savings in computational cost.

  2. Issues and Strategies in Solving Multidisciplinary Optimization Problems

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya

    2013-01-01

    merit function with constraints imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and a solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions can be produced by all three methods. The variation in the weight calculated by the methods was found to be modest, and some variation was noticed in the designs themselves, which may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when the simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph, whose center corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure, while weight can be reduced to a small value for the most failure-prone design. Probabilistic modeling of load and material properties remained a challenge.

  3. Sensitivity Analysis of Genetic Algorithm Parameters for Optimal Groundwater Monitoring Network Design

    NASA Astrophysics Data System (ADS)

    Abdeh-Kolahchi, A.; Satish, M.; Datta, B.

    2004-05-01

    A state-of-the-art groundwater monitoring network design is introduced. The method combines groundwater flow and transport results with a Genetic Algorithm (GA) to identify optimal monitoring well locations. Optimization theory uses different techniques to find a set of parameter values that minimize or maximize objective functions. The suggested optimal groundwater monitoring network design is based on the objective of maximizing the probability of tracking a transient contamination plume by determining sequential monitoring locations. The MODFLOW and MT3DMS models, included as separate modules within the Groundwater Modeling System (GMS), are used to develop a three-dimensional groundwater flow and contamination transport simulation. The flow and contamination simulation results are introduced as input to the optimization model, which uses a GA to identify the optimal monitoring network design from several candidate monitoring locations. The design model uses a GA with binary variables representing potential monitoring locations. As the number of decision variables and constraints increases, the non-linearity of the objective function also increases, which makes it difficult to obtain optimal solutions. The genetic algorithm is an evolutionary global optimization technique capable of finding the optimal solution for many complex problems. In this study, the GA approach, capable of finding the global optimal solution to a groundwater monitoring network design problem involving 18.4 × 10^18 feasible solutions, will be discussed. However, to ensure the efficiency of the solution process and the global optimality of the solution obtained using the GA, appropriate GA parameter values must be specified. The sensitivity of genetic algorithm parameters such as the random number seed, crossover probability, mutation probability, and elitism is discussed for solution of
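
    A binary-chromosome GA of the kind described, with one bit per candidate well, can be sketched as follows; the fitness function here is a placeholder for the plume-tracking probability computed from the MODFLOW/MT3DMS simulations, and all GA parameter values are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      n_sites, pop_size, p_cross, p_mut, n_elite = 30, 40, 0.8, 0.02, 2

      def fitness(chromosome):
          # Placeholder objective: in the study this would be the plume-tracking
          # probability from the transport simulation; here, a toy score with a
          # penalty for exceeding a hypothetical budget of 8 wells.
          return chromosome @ np.linspace(1.0, 2.0, n_sites) - 3.0 * max(chromosome.sum() - 8, 0)

      pop = rng.integers(0, 2, (pop_size, n_sites))
      for gen in range(100):
          fit = np.array([fitness(c) for c in pop])
          elite = pop[np.argsort(fit)[-n_elite:]]          # elitism
          # Binary tournament selection.
          idx = rng.integers(0, pop_size, (pop_size, 2))
          parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
          # One-point crossover.
          children = parents.copy()
          for i in range(0, pop_size - 1, 2):
              if rng.random() < p_cross:
                  cut = rng.integers(1, n_sites)
                  children[i, cut:] = parents[i + 1, cut:]
                  children[i + 1, cut:] = parents[i, cut:]
          # Bit-flip mutation.
          children ^= (rng.random(children.shape) < p_mut).astype(children.dtype)
          pop = np.vstack([children[:pop_size - n_elite], elite])
      best = pop[np.argmax([fitness(c) for c in pop])]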

  4. Design optimization for permanent magnet machine with efficient slot per pole ratio

    NASA Astrophysics Data System (ADS)

    Potnuru, Upendra Kumar; Rao, P. Mallikarjuna

    2018-04-01

    This paper presents a methodology for the enhancement of a Brushless Direct Current (BLDC) motor with 6 poles and 8 slots. In particular, it is focused on a multi-objective optimization using a Genetic Algorithm (GA) and Grey Wolf Optimization developed in MATLAB. The optimization aims to maximize the maximum output power and minimize the total losses of the motor. This paper presents an application of the MATLAB optimization algorithms to BLDC motor design, with 7 design parameters chosen to be free. The optimal design parameters of the motor derived by the GA are compared with those obtained by the Grey Wolf Optimization technique. A comparative report on the specified enhancement approaches shows that the Grey Wolf Optimization technique has better convergence.
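
    A minimal Grey Wolf Optimization loop is sketched below; the objective is a toy stand-in for the motor evaluation, and the bounds and hyperparameters are assumptions.

      import numpy as np

      rng = np.random.default_rng(2)

      def gwo(obj, bounds, n_wolves=20, n_iter=200):
          """Minimize obj over box bounds (dim, 2) with Grey Wolf Optimization."""
          lo, hi = bounds[:, 0], bounds[:, 1]
          dim = len(lo)
          wolves = lo + rng.random((n_wolves, dim)) * (hi - lo)
          for t in range(n_iter):
              fit = np.array([obj(w) for w in wolves])
              alpha, beta, delta = wolves[np.argsort(fit)[:3]]  # three leaders
              a = 2 - 2 * t / n_iter                            # linearly decreasing coefficient
              for i in range(n_wolves):
                  x = np.zeros(dim)
                  for leader in (alpha, beta, delta):
                      r1, r2 = rng.random(dim), rng.random(dim)
                      A, C = 2 * a * r1 - a, 2 * r2
                      x += leader - A * np.abs(C * leader - wolves[i])
                  wolves[i] = np.clip(x / 3, lo, hi)
          fit = np.array([obj(w) for w in wolves])
          return wolves[np.argmin(fit)], fit.min()

      # Hypothetical surrogate: losses minus output power as a smooth function of
      # 7 normalized design parameters (the real objective would come from FEA).
      obj = lambda p: np.sum((p - 0.3) ** 2)
      best, val = gwo(obj, np.tile([0.0, 1.0], (7, 1)))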

  5. Utilization of coffee by-products obtained from semi-washed process for production of value-added compounds.

    PubMed

    Bonilla-Hermosa, Verónica Alejandra; Duarte, Whasley Ferreira; Schwan, Rosane Freitas

    2014-08-01

    The semi-dry processing of coffee generates significant amounts of coffee pulp and wastewater. This study evaluated the production of bioethanol and volatile compounds by eight yeast strains cultivated in a mixture of these residues. Hanseniaspora uvarum UFLA CAF76 showed the best fermentation performance; hence it was selected to evaluate different culture medium compositions and inoculum sizes. The best results were obtained with 12% w/v of coffee pulp, 1 g/L of yeast extract, and 0.3 g/L of inoculum. Under these conditions, fermentation in 1 L of medium achieved a high ethanol yield, productivity, and efficiency, with values of 0.48 g/g, 0.55 g/L h, and 94.11%, respectively. Twenty-one volatile compounds corresponding to higher alcohols, acetates, terpenes, aldehydes, and volatile acids were identified by GC-FID. These results indicate that coffee residues have excellent potential as substrates for the production of value-added compounds, and H. uvarum demonstrated high fermentative capacity on these residues. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Choosing the optimal Pareto composition of the charge material for the manufacture of composite blanks

    NASA Astrophysics Data System (ADS)

    Zalazinsky, A. G.; Kryuchkov, D. I.; Nesterenko, A. V.; Titov, V. G.

    2017-12-01

    The results of an experimental study of the mechanical properties of pressed and sintered briquettes are presented; the briquettes consist of powders obtained from a high-strength VT-22 titanium alloy by plasma spraying, with additions of PTM-1 titanium powder obtained by the hydride-calcium method and of PV-N70Yu30 nickel-aluminum alloy powder. The task is to choose an optimal charge composition for the composite material that provides the required mechanical characteristics and cost of semi-finished products and items. Pareto optimal values for the composition of the composite material charge have been obtained.
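
    Extracting the Pareto-optimal compositions from a set of evaluated candidates reduces to a non-dominance filter; a minimal sketch follows, with invented (cost, negated strength) pairs standing in for the measured briquette properties.

      import numpy as np

      def pareto_front(costs):
          """Return a mask of non-dominated points (all objectives minimized)."""
          costs = np.asarray(costs, dtype=float)
          mask = np.ones(len(costs), dtype=bool)
          for i in range(len(costs)):
              if mask[i]:
                  # j dominates i if j is <= in every objective and < in at least one.
                  dominated = (np.all(costs <= costs[i], axis=1)
                               & np.any(costs < costs[i], axis=1))
                  if dominated.any():
                      mask[i] = False
          return mask

      # Hypothetical (cost, -strength) pairs for candidate charge compositions:
      candidates = [(1.0, -410), (1.2, -455), (1.1, -430), (1.3, -450)]
      print(pareto_front(candidates))  # [ True  True  True False]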

  7. Topology optimization of two-dimensional elastic wave barriers

    NASA Astrophysics Data System (ADS)

    Van hoorickx, C.; Sigmund, O.; Schevenels, M.; Lazarov, B. S.; Lombaert, G.

    2016-08-01

    Topology optimization is a method that optimally distributes material in a given design domain. In this paper, topology optimization is used to design two-dimensional wave barriers embedded in an elastic halfspace. First, harmonic vibration sources are considered, and stiffened material is inserted into a design domain situated between the source and the receiver to minimize wave transmission. At low frequencies, the stiffened material reflects and guides waves away from the surface. At high frequencies, destructive interference is obtained that leads to high values of the insertion loss. To handle harmonic sources at a frequency in a given range, a uniform reduction of the response over a frequency range is pursued. The minimal insertion loss over the frequency range of interest is maximized. The resulting design contains features at depth leading to a reduction of the insertion loss at the lowest frequencies and features close to the surface leading to a reduction at the highest frequencies. For broadband sources, the average insertion loss in a frequency range is optimized. This leads to designs that especially reduce the response at high frequencies. The designs optimized for the frequency averaged insertion loss are found to be sensitive to geometric imperfections. In order to obtain a robust design, a worst case approach is followed.

  8. Optimization of exposure index values for the antero-posterior pelvis and antero-posterior knee examination

    NASA Astrophysics Data System (ADS)

    Butler, M. L.; Rainford, L.; Last, J.; Brennan, P. C.

    2009-02-01

    Introduction The American Association of Physicists in Medicine (AAPM) is currently standardizing the exposure index (EI) value. Recent studies have questioned whether the EI values offered by manufacturers are optimal. This work establishes optimum EIs for the antero-posterior (AP) projections of the pelvis and knee on a Carestream Health (Kodak) CR system and compares these with manufacturers' recommended EI values from a patient dose and image quality perspective. Methodology Human cadavers were used to produce images of clinically relevant standards. Several exposures were taken to achieve various EI values, and the corresponding entrance surface doses (ESD) were measured using thermoluminescent dosimeters. Image quality was assessed by 5 experienced clinicians using anatomical criteria judged against a reference image. Visualization of image-specific common abnormalities was also analyzed to establish diagnostic efficacy. Results A rise in ESD consistent with increasing EI was shown for both examinations. Anatomic image quality was deemed acceptable at an EI of 1560 for the AP pelvis and 1590 for the AP knee. Relative to manufacturers' recommended values, significant reductions in ESD (p=0.02) of 38% and 33% were noted for the pelvis and knee, respectively. Initial pathological analysis suggests that diagnostic efficacy at lower EI values may be projection-specific. Conclusion The data in this study emphasize the need for clinical centres to consider establishing their own EI guidelines rather than relying solely on manufacturers' recommendations. Normal and abnormal images must be used in this process.

  9. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, Addendum

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
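
    The iterative procedure studied here is what is now usually called the EM algorithm for Gaussian mixtures; a minimal two-component sketch follows, with synthetic data.

      import numpy as np

      rng = np.random.default_rng(3)
      x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1.5, 200)])

      # Initial guesses for the mixing weight, means, and standard deviations.
      pi_, mu, sigma = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
      norm_pdf = lambda x, m, s: np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

      for _ in range(200):
          # E-step: posterior responsibility of component 0 for each sample.
          p0 = pi_ * norm_pdf(x, mu[0], sigma[0])
          p1 = (1 - pi_) * norm_pdf(x, mu[1], sigma[1])
          r = p0 / (p0 + p1)
          # M-step: re-estimate the parameters from the responsibilities.
          pi_ = r.mean()
          mu = np.array([np.average(x, weights=r), np.average(x, weights=1 - r)])
          sigma = np.sqrt(np.array([np.average((x - mu[0]) ** 2, weights=r),
                                    np.average((x - mu[1]) ** 2, weights=1 - r)]))
      print(pi_, mu, sigma)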

  10. 7 CFR 356.3 - Property valued at greater than $10,000; notice of seizure and civil action to obtain forfeiture.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture: Property valued at greater than $10,000; notice of seizure and civil action to obtain forfeiture. Section 356.3, Agriculture Regulations of the Department of Agriculture (Continued), Animal and Plant Health Inspection Service, Department of Agriculture...

  11. Particle swarm optimizer for weighting factor selection in intensity-modulated radiation therapy optimization algorithms.

    PubMed

    Yang, Jie; Zhang, Pengcheng; Zhang, Liyuan; Shu, Huazhong; Li, Baosheng; Gui, Zhiguo

    2017-01-01

    In inverse treatment planning of intensity-modulated radiation therapy (IMRT), the objective function is typically the sum of the weighted sub-scores, where the weights indicate the importance of the sub-scores. To obtain a high-quality treatment plan, the planner manually adjusts the objective weights using a trial-and-error procedure until an acceptable plan is reached. In this work, a new particle swarm optimization (PSO) method which can adjust the weighting factors automatically was investigated to overcome the requirement of manual adjustment, thereby reducing the workload of the human planner and contributing to the development of a fully automated planning process. The proposed optimization method consists of three steps. (i) First, a swarm of weighting factors (i.e., particles) is initialized randomly in the search space, where each particle corresponds to a global objective function. (ii) Then, a plan optimization solver is employed to obtain the optimal solution for each particle, and the values of the evaluation functions used to determine the particle's location and the population global location for the PSO are calculated based on these results. (iii) Next, the weighting factors are updated based on the particle's location and the population global location. Step (ii) is performed alternately with step (iii) until the termination condition is reached. In this method, the evaluation function is a combination of several key points on the dose volume histograms. Furthermore, a perturbation strategy - the crossover and mutation operator hybrid approach - is employed to enhance the population diversity, and two arguments are applied to the evaluation function to improve the flexibility of the algorithm. In this study, the proposed method was used to develop IMRT treatment plans involving five unequally spaced 6 MV photon beams for 10 prostate cancer cases. The proposed optimization algorithm yielded high-quality plans for all of the cases, without human
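
    Steps (i)-(iii) map directly onto a standard PSO loop over weight vectors; a minimal sketch follows, in which plan_quality is a smooth toy stand-in for the expensive inner plan optimization and its dose-volume-histogram-based evaluation, and all hyperparameters are assumptions.

      import numpy as np

      rng = np.random.default_rng(4)
      n_particles, n_weights, n_iter = 10, 3, 30
      w_inertia, c1, c2 = 0.7, 1.5, 1.5

      def plan_quality(weights):
          # Stub for step (ii): run the plan optimization solver with these
          # objective weights and score the resulting plan. A smooth toy
          # function stands in for that expensive evaluation here.
          return -np.sum((weights - np.array([0.6, 0.3, 0.1])) ** 2)

      pos = rng.random((n_particles, n_weights))           # step (i)
      vel = np.zeros_like(pos)
      pbest = pos.copy()
      pbest_val = np.array([plan_quality(p) for p in pos])
      gbest = pbest[pbest_val.argmax()]

      for _ in range(n_iter):
          r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
          vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, 0, 1)                   # step (iii)
          val = np.array([plan_quality(p) for p in pos])   # step (ii)
          improved = val > pbest_val
          pbest[improved], pbest_val[improved] = pos[improved], val[improved]
          gbest = pbest[pbest_val.argmax()]
      print(gbest / gbest.sum())  # normalized objective weights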

  12. Rats behave optimally in a sunk cost task.

    PubMed

    Yáñez, Nataly; Bouzas, Arturo; Orduña, Vladimir

    2017-07-01

    The sunk cost effect has been defined as the tendency to persist in an alternative once an investment of effort, time or money has been made, even if better options are available. The goal of this study was to investigate in rats the relationship between sunk cost and the information about when it is optimal to leave the situation, which was studied by Navarro and Fantino (2005) with pigeons. They developed a procedure in which different fixed-ratio schedules were randomly presented, with the richest one being more likely; subjects could persist in the trial until they obtained the reinforcer, or start a new trial in which the most favorable option would be available with a high probability. The information about the expected number of responses needed to obtain the reinforcer was manipulated through the presence or absence of discriminative stimuli; also, they used different combinations of schedule values and their probabilities of presentation to generate escape-optimal and persistence-optimal conditions. They found optimal behavior in the conditions with presence of discriminative stimuli, but non-optimal behavior when they were absent. Unlike their results, we found optimal behavior in both conditions regardless of the absence of discriminative stimuli; rats seemed to use the number of responses already emitted in the trial as a criterion to escape. In contrast to pigeons, rats behaved optimally and the sunk cost effect was not observed. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Optimal path planning for video-guided smart munitions via multitarget tracking

    NASA Astrophysics Data System (ADS)

    Borkowski, Jeffrey M.; Vasquez, Juan R.

    2006-05-01

    A recent advance in the development of smart munitions entails autonomously modifying target selection during flight in order to maximize the value of the target being destroyed. A unique guidance law can be constructed that exploits both attribute and kinematic data obtained from an onboard video sensor. An optimal path planning algorithm has been developed with the goals of obstacle avoidance and maximizing the value of the target impacted by the munition. Target identification and classification provide a basis for target value, which is used in conjunction with multi-target tracks to determine an optimal waypoint for the munition. A dynamically feasible trajectory is computed to provide constraints on the waypoint selection. Results demonstrate the ability of the autonomous system to avoid moving obstacles and revise target selection in flight.

  14. Optimization of Pressurized Liquid Extraction of Three Major Acetophenones from Cynanchum bungei Using a Box-Behnken Design

    PubMed Central

    Li, Wei; Zhao, Li-Chun; Sun, Yin-Shi; Lei, Feng-Jie; Wang, Zi; Gui, Xiong-Bin; Wang, Hui

    2012-01-01

    In this work, pressurized liquid extraction (PLE) of three acetophenones (4-hydroxyacetophenone, baishouwubenzophenone, and 2,4-dihydroxyacetophenone) from Cynanchum bungei (ACB) was investigated. The optimal extraction conditions were obtained using a Box-Behnken design consisting of 17 experimental points, as follows: ethanol (100%) as the extraction solvent at a temperature of 120 °C and an extraction pressure of 1500 psi, using one extraction cycle with a static extraction time of 17 min. The extracted samples were analyzed by high-performance liquid chromatography using a UV detector. Under these optimal conditions, the experimental values agreed with the values predicted by analysis of variance. The ACB extraction yield with optimal PLE was higher than that obtained by Soxhlet extraction and heat-reflux extraction methods. The results suggest that the PLE method provides a good alternative for acetophenone extraction. PMID:23203079
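
    The analysis behind a Box-Behnken design is a second-order polynomial fit to the design points; a minimal sketch follows, using a standard three-factor Box-Behnken layout with invented responses.

      import numpy as np

      # Coded levels (-1, 0, +1) for three factors (e.g., temperature, pressure,
      # static time): the 12 edge midpoints of a 3-factor Box-Behnken design plus
      # a center point, with hypothetical yield responses.
      X = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
                    [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
                    [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1], [0, 0, 0]])
      y = np.array([2.1, 2.8, 2.4, 3.0, 1.9, 2.5, 2.6, 3.1, 2.0, 2.2, 2.9, 3.0, 3.2])

      def quadratic_terms(x):
          # Intercept, linear, squared, and two-factor interaction terms.
          x1, x2, x3 = x
          return [1, x1, x2, x3, x1*x1, x2*x2, x3*x3, x1*x2, x1*x3, x2*x3]

      A = np.array([quadratic_terms(x) for x in X])
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # second-order polynomial model
      predict = lambda x: np.array(quadratic_terms(x)) @ coef
      print(predict([1, 0, 1]))                     # predicted yield at a new setting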

  15. Identifying an optimal cutpoint value for the diagnosis of hypertriglyceridemia in the nonfasting state

    PubMed Central

    White, Khendi T.; Moorthy, M.V.; Akinkuolie, Akintunde O.; Demler, Olga; Ridker, Paul M; Cook, Nancy R.; Mora, Samia

    2015-01-01

    Background Nonfasting triglycerides are similar to or superior to fasting triglycerides at predicting cardiovascular events. However, diagnostic cutpoints are based on fasting triglycerides. We examined the optimal cutpoint for increased nonfasting triglycerides. Methods Baseline nonfasting (<8 hours since last meal) samples were obtained from 6,391 participants in the Women’s Health Study, followed prospectively for up to 17 years. The optimal diagnostic threshold for nonfasting triglycerides, determined by logistic regression models using c-statistics and the Youden index (sum of sensitivity and specificity minus one), was used to calculate hazard ratios for incident cardiovascular events. Performance was compared to thresholds recommended by the American Heart Association (AHA) and European guidelines. Results The optimal threshold was 175 mg/dL (1.98 mmol/L), corresponding to a c-statistic of 0.656 that was statistically better than the AHA cutpoint of 200 mg/dL (c-statistic of 0.628). For nonfasting triglycerides above and below 175 mg/dL, adjusting for age, hypertension, smoking, hormone use, and menopausal status, the hazard ratio for cardiovascular events was 1.88 (95% CI, 1.52–2.33, P<0.001), and for triglycerides measured at 0–4 and 4–8 hours since last meal, hazard ratios (95% CIs) were 2.05 (1.54–2.74) and 1.68 (1.21–2.32), respectively. Performance of this optimal cutpoint was validated using ten-fold cross-validation and bootstrapping of multivariable models that included standard risk factors plus total and HDL cholesterol, diabetes, body-mass index, and C-reactive protein. Conclusions In this study of middle-aged and older apparently healthy women, we identified a diagnostic threshold for nonfasting hypertriglyceridemia of 175 mg/dL (1.98 mmol/L), with the potential to more accurately identify cases than the currently recommended AHA cutpoint. PMID:26071491

  16. Combined control-structure optimization

    NASA Technical Reports Server (NTRS)

    Salama, M.; Milman, M.; Bruno, R.; Scheid, R.; Gibson, S.

    1989-01-01

    An approach for combined control-structure optimization keyed to enhancing early design trade-offs is outlined and illustrated by numerical examples. The approach employs a homotopic strategy and appears to be effective for generating families of designs that can be used in these early trade studies. Analytical results were obtained for classes of structure/control objectives with linear quadratic Gaussian (LQG) and linear quadratic regulator (LQR) costs. For these, researchers demonstrated that global optima can be computed for small values of the homotopy parameter. Conditions for local optima along the homotopy path were also given. Details of two numerical examples employing the LQR control cost were given showing variations of the optimal design variables along the homotopy path. The results of the second example suggest that introducing a second homotopy parameter relating the two parts of the control index in the LQG/LQR formulation might serve to enlarge the family of Pareto optima, but its effect on modifying the optimal structural shapes may be analogous to the original parameter lambda.

  17. OptShrink LR + S: accelerated fMRI reconstruction using non-convex optimal singular value shrinkage.

    PubMed

    Aggarwal, Priya; Shrivastava, Parth; Kabra, Tanay; Gupta, Anubha

    2017-03-01

    This paper presents a new accelerated fMRI reconstruction method, namely, the OptShrink LR + S method, that reconstructs undersampled fMRI data using a linear combination of low-rank and sparse components. The low-rank component has been estimated using a non-convex optimal singular value shrinkage algorithm, while the sparse component has been estimated using convex l1 minimization. The performance of the proposed method is compared with the existing state-of-the-art algorithms on a real fMRI dataset. The proposed OptShrink LR + S method yields good qualitative and quantitative results.
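
    The L + S split can be sketched with alternating shrinkage steps; note that plain singular value thresholding is substituted below for the paper's non-convex OptShrink estimator, and the data are synthetic.

      import numpy as np

      def soft(x, t):
          return np.sign(x) * np.maximum(np.abs(x) - t, 0)

      def lr_plus_s(Y, lam=0.05, tau=1.0, n_iter=50):
          """Split Y into a low-rank L and a sparse S by alternating shrinkage.
          Standard singular value thresholding stands in here for the paper's
          non-convex OptShrink estimator of the low-rank component."""
          L = np.zeros_like(Y)
          S = np.zeros_like(Y)
          for _ in range(n_iter):
              U, s, Vt = np.linalg.svd(Y - S, full_matrices=False)
              L = (U * soft(s, tau)) @ Vt   # low-rank update (singular value shrinkage)
              S = soft(Y - L, lam)          # sparse update (l1 soft-thresholding)
          return L, S

      rng = np.random.default_rng(5)
      background = np.outer(rng.random(64), rng.random(40))  # rank-1 "background"
      spikes = (rng.random((64, 40)) < 0.02) * 0.5            # sparse "activations"
      L, S = lr_plus_s(background + spikes)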

  18. The Defense Logistics Agency Properly Awarded Power Purchase Agreements and the Army Obtained Fair Market Value for Leases Supporting Power Purchase Agreements

    DTIC Science & Technology

    2016-09-28

    Fair Market Value for Leases Supporting Power Purchase Agreements. Visit us at www.dodig.mil. September 28, 2016. Objective: We determined whether the Department of the Army properly awarded and obtained fair market value for leases supporting energy production projects. We conducted this audit in

  19. Antioxidant Compound Extraction from Maqui (Aristotelia chilensis [Mol] Stuntz) Berries: Optimization by Response Surface Methodology

    PubMed Central

    Quispe-Fuentes, Issis; Vega-Gálvez, Antonio; Campos-Requena, Víctor H.

    2017-01-01

    The optimum conditions for antioxidant extraction from maqui berry were determined using a response surface methodology. A three-level D-optimal design was used to investigate the effects of three independent variables, namely solvent type (methanol, acetone and ethanol), solvent concentration, and extraction time, on total antioxidant capacity measured by the oxygen radical absorbance capacity (ORAC) method. The D-optimal design considered 42 experiments including 10 central point replicates. A second-order polynomial model showed that more than 89% of the variation is explained, with satisfactory prediction (78%). ORAC values are higher when acetone is used as the solvent at lower concentrations, and extraction time showed no significant influence on ORAC values over the range studied. The optimal conditions obtained for antioxidant extraction were 29% acetone for 159 min under agitation. From these results it can be concluded that the predictive model describes the antioxidant extraction process from maqui berry.

  20. Comparison and Optimization of 3.0 T Breast Images Quality of Diffusion-Weighted Imaging with Multiple B-Values.

    PubMed

    Han, Xiaowei; Li, Junfeng; Wang, Xiaoyi

    2017-04-01

    Breast 3.0 T magnetic resonance diffusion-weighted imaging (MR-DWI) of benign and malignant lesions were obtained to measure and calculate the signal-to-noise ratio (SNR), signal intensity ratio (SIR), and contrast-to-noise ratio (CNR) of lesions at different b-values. The variation patterns of SNR and SIR were analyzed with different b-values and the images of DWI were compared at four different b-values with higher image quality. The effect of SIR on the differential diagnostic efficiency of benign and malignant lesions was compared using receiver operating characteristic curves to provide a reference for selecting the optimal b-value. A total of 96 qualified patients with 112 lesions and 14 patients with their contralateral 14 normal breasts were included in this study. The single-shot echo planar imaging sequence was used to perform the DWI and a total of 13 b-values were used: 0, 50, 100, 200, 400, 600, 800, 1000, 1200, 1500, 1800, 2000, and 2500 s/mm2. On DWI, the suitable regions of interest were selected. The SNRs of normal breasts (SNRnormal), SNRlesions, SIR, and CNR of benign and malignant lesions were measured on DWI with different b-values and calculated. The variation patterns of SNR, SIR, and CNR values on DWI for normal breasts, benign lesions, and malignant lesions with different b-values were analyzed by using Pearson correlation analysis. The SNR and SIR of benign and malignant lesions with the same b-values were compared using t-tests. The diagnostic efficiencies of SIR with different b-values for benign and malignant lesions were evaluated using receiver operating characteristic curves. Breast DWI had higher CNR for b-values ranging from 600 to 1200 s/mm2. It had the best CNR at b = 1000 s/mm2 for the benign lesions and at b = 1200 s/mm2 for the malignant lesions. The signal intensity and SNR values of normal breasts decreased with increasing b-values, with a negative correlation (r = -0.945, P < 0.01). The

  1. Optimal solution of full fuzzy transportation problems using total integral ranking

    NASA Astrophysics Data System (ADS)

    Sam’an, M.; Farikhin; Hariyanto, S.; Surarso, B.

    2018-03-01

    The full fuzzy transportation problem (FFTP) is a transportation problem in which transport costs, demand, supply, and decision variables are all expressed as fuzzy numbers. To solve a fuzzy transportation problem, the fuzzy parameters must be converted to crisp numbers, a process called defuzzification. In this work, a new total integral ranking method, based on the conversion of trapezoidal fuzzy numbers to hexagonal fuzzy numbers, gives consistent defuzzification results for symmetric hexagonal and non-symmetric type-2 fuzzy numbers together with triangular fuzzy numbers. The optimal solution of the FTP is then calculated using a fuzzy transportation algorithm with the least-cost method. From this optimal solution, it is found that using total integral ranking with an index of optimism gives different optimal values. In addition, the total integral ranking value using hexagonal fuzzy numbers yields a better optimal value than that using trapezoidal fuzzy numbers.
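
    The least-cost method mentioned above builds an initial feasible shipment plan by greedily loading the cheapest remaining route; a minimal sketch on crisp (already defuzzified) costs follows, with invented supplies and demands.

      import numpy as np

      def least_cost_method(cost, supply, demand):
          """Initial feasible solution of a transportation problem by the
          least-cost rule: repeatedly ship as much as possible on the
          cheapest remaining route."""
          cost = np.asarray(cost, dtype=float).copy()
          supply, demand = list(supply), list(demand)
          alloc = np.zeros_like(cost)
          while sum(supply) > 0 and sum(demand) > 0:
              i, j = np.unravel_index(np.argmin(cost), cost.shape)
              q = min(supply[i], demand[j])
              alloc[i, j] = q
              supply[i] -= q
              demand[j] -= q
              if supply[i] == 0:
                  cost[i, :] = np.inf   # source exhausted
              if demand[j] == 0:
                  cost[:, j] = np.inf   # destination satisfied
          return alloc

      # Crisp costs as they might come out of a total-integral-ranking defuzzification:
      c = [[4, 6, 8], [5, 3, 7]]
      print(least_cost_method(c, supply=[30, 40], demand=[20, 25, 25]))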

  2. An Optimized DNA Analysis Workflow for the Sampling, Extraction, and Concentration of DNA obtained from Archived Latent Fingerprints.

    PubMed

    Solomon, April D; Hytinen, Madison E; McClain, Aryn M; Miller, Marilyn T; Dawson Cruz, Tracey

    2018-01-01

    DNA profiles have been obtained from fingerprints, but there is limited knowledge regarding DNA analysis from archived latent fingerprints-touch DNA "sandwiched" between adhesive and paper. Thus, this study sought to comparatively analyze a variety of collection and analytical methods in an effort to seek an optimized workflow for this specific sample type. Untreated and treated archived latent fingerprints were utilized to compare different biological sampling techniques, swab diluents, DNA extraction systems, DNA concentration practices, and post-amplification purification methods. Archived latent fingerprints disassembled and sampled via direct cutting, followed by DNA extracted using the QIAamp® DNA Investigator Kit, and concentration with Centri-Sep™ columns increased the odds of obtaining an STR profile. Using the recommended DNA workflow, 9 of the 10 samples provided STR profiles, which included 7-100% of the expected STR alleles and two full profiles. Thus, with carefully selected procedures, archived latent fingerprints can be a viable DNA source for criminal investigations including cold/postconviction cases. © 2017 American Academy of Forensic Sciences.

  3. The analytical approach to optimization of active region structure of quantum dot laser

    NASA Astrophysics Data System (ADS)

    Korenev, V. V.; Savelyev, A. V.; Zhukov, A. E.; Omelchenko, A. V.; Maximov, M. V.

    2014-10-01

    Using the analytical approach introduced in our previous papers, we analyse the possibilities of optimizing the size and structure of the active region of semiconductor quantum dot lasers emitting via ground-state optical transitions. It is shown that there are optimal values of the cavity length, dispersion, and number of QD layers in the laser active region that allow one to obtain a lasing spectrum of a given width at minimum injection current. The laser efficiency corresponding to the injection current optimized over the cavity length is practically equal to its maximum value.

  4. Cat Swarm Optimization algorithm for optimal linear phase FIR filter design.

    PubMed

    Saha, Suman Kumar; Ghoshal, Sakti Prasad; Kar, Rajib; Mandal, Durbadal

    2013-11-01

    In this paper a new meta-heuristic search method, called the Cat Swarm Optimization (CSO) algorithm, is applied to determine the best optimal impulse response coefficients of FIR low pass, high pass, band pass, and band stop filters, trying to meet the respective ideal frequency response characteristics. CSO was developed by observing the behaviour of cats and is composed of two sub-models. In CSO, one can decide how many cats are used in the iteration. Every cat has its own position composed of M dimensions, velocities for each dimension, a fitness value representing the accommodation of the cat to the fitness function, and a flag identifying whether the cat is in seeking mode or tracing mode. The final solution is the best position of one of the cats; CSO keeps the best solution until the end of the iterations. The results of the proposed CSO-based approach have been compared with those of other well-known optimization methods such as the Real Coded Genetic Algorithm (RGA), standard Particle Swarm Optimization (PSO), and Differential Evolution (DE). The CSO-based results confirm the superiority of the proposed CSO for solving FIR filter design problems. The performance of the CSO-designed FIR filters has proven superior to that obtained by RGA, conventional PSO, and DE. The simulation results also demonstrate that CSO is the best optimizer among the compared techniques, not only in convergence speed but also in the optimal performance of the designed filters. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
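
    The seeking/tracing structure can be sketched compactly; the following is a minimal CSO variant, not the paper's exact formulation, and the FIR fitness function is an illustrative stand-in with an assumed tap count and band edge.

      import numpy as np

      rng = np.random.default_rng(6)

      def cso(obj, dim, n_cats=20, n_iter=100, mixture_ratio=0.2,
              smp=5, srd=0.2, c1=2.0):
          """Minimal Cat Swarm Optimization (minimization).
          mixture_ratio: fraction of cats placed in tracing mode each iteration;
          smp: seeking memory pool size; srd: seeking range of the dimension."""
          pos = rng.uniform(-1, 1, (n_cats, dim))
          vel = np.zeros_like(pos)
          best = min(pos, key=obj).copy()
          for _ in range(n_iter):
              tracing = rng.random(n_cats) < mixture_ratio
              for k in range(n_cats):
                  if tracing[k]:  # tracing mode: chase the best position so far
                      vel[k] += c1 * rng.random(dim) * (best - pos[k])
                      pos[k] += vel[k]
                  else:           # seeking mode: make local copies, keep the best one
                      copies = pos[k] + srd * rng.uniform(-1, 1, (smp, dim)) * pos[k]
                      pos[k] = min(copies, key=obj)
              cand = min(pos, key=obj)
              if obj(cand) < obj(best):
                  best = cand.copy()
          return best

      # Toy fitness: squared error of a 16-tap filter's magnitude response
      # against an ideal low-pass response (illustrative stand-in).
      ideal = np.concatenate([np.ones(32), np.zeros(96)])
      def fitness(h):
          H = np.abs(np.fft.rfft(h, 256))[:128]
          return np.sum((H / max(H.max(), 1e-12) - ideal) ** 2)

      best_taps = cso(fitness, dim=16)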

  5. Optimal weight based on energy imbalance and utility maximization

    NASA Astrophysics Data System (ADS)

    Sun, Ruoyan

    2016-01-01

    This paper investigates the optimal weight for both males and females using energy imbalance and utility maximization. Based on the difference between energy intake and expenditure, we develop a state equation that reveals the weight gain arising from this energy gap. We construct an objective function considering food consumption, eating habits, and survival rate to measure utility. By applying mathematical tools from optimal control and the qualitative theory of differential equations, we obtain the following results. For both males and females, the optimal weight is larger than the physiologically optimal weight calculated from the Body Mass Index (BMI). We also study the corresponding trajectories to the steady-state weight. Depending on the values of a few parameters, the steady state can be either a saddle point with a monotonic trajectory or a focus with dampened oscillations.

  6. Low-cost production of 6G-fructofuranosidase with high value-added astaxanthin by Xanthophyllomyces dendrorhous.

    PubMed

    Ning, Yawei; Li, Qiang; Chen, Feng; Yang, Na; Jin, Zhengyu; Xu, Xueming

    2012-01-01

    The effects of medium composition and culture conditions on the production of (6)G-fructofuranosidase with value-added astaxanthin were investigated to reduce the capital cost of neo-fructooligosaccharide (neo-FOS) production by Xanthophyllomyces dendrorhous. Sucrose and corn steep liquor (CSL) were found to be the optimal carbon and nitrogen sources, respectively. CSL and initial pH were selected as the critical factors using a Plackett-Burman design. A maximum (6)G-fructofuranosidase activity of 242.57 U/mL with 5.23 mg/L of value-added astaxanthin was obtained at CSL 52.5 mL/L and pH 7.89 by central composite design. The neo-FOS yield could reach 238.12 g/L under the optimized medium conditions. Cost analysis suggested that 66.3% of the substrate cost was saved compared with that before optimization. These results demonstrate that the optimized medium and culture conditions can significantly enhance the production of (6)G-fructofuranosidase with value-added astaxanthin and remarkably decrease the substrate cost, which opens up the possibility of producing neo-FOS industrially. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Ultimate open pit stochastic optimization

    NASA Astrophysics Data System (ADS)

    Marcotte, Denis; Caron, Josiane

    2013-02-01

    Classical open pit optimization (maximum closure problem) is made on block estimates, without directly considering the block grades uncertainty. We propose an alternative approach of stochastic optimization. The stochastic optimization is taken as the optimal pit computed on the block expected profits, rather than expected grades, computed from a series of conditional simulations. The stochastic optimization generates, by construction, larger ore and waste tonnages than the classical optimization. Contrary to the classical approach, the stochastic optimization is conditionally unbiased for the realized profit given the predicted profit. A series of simulated deposits with different variograms are used to compare the stochastic approach, the classical approach and the simulated approach that maximizes expected profit among simulated designs. Profits obtained with the stochastic optimization are generally larger than the classical or simulated pit. The main factor controlling the relative gain of stochastic optimization compared to classical approach and simulated pit is shown to be the information level as measured by the boreholes spacing/range ratio. The relative gains of the stochastic approach over the classical approach increase with the treatment costs but decrease with mining costs. The relative gains of the stochastic approach over the simulated pit approach increase both with the treatment and mining costs. At early stages of an open pit project, when uncertainty is large, the stochastic optimization approach appears preferable to the classical approach or the simulated pit approach for fair comparison of the values of alternative projects and for the initial design and planning of the open pit.

  8. Cryogenic Tank Structure Sizing With Structural Optimization Method

    NASA Technical Reports Server (NTRS)

    Wang, J. T.; Johnson, T. F.; Sleight, D. W.; Saether, E.

    2001-01-01

    Structural optimization methods in MSC/NASTRAN are used to size substructures and to reduce the weight of a composite sandwich cryogenic tank for future launch vehicles. Because the feasible design space of this problem is non-convex, many local minima are found. This non-convex problem is investigated in detail by conducting a series of analyses along a design line connecting two feasible designs; strain constraint violations occur for some design points along this line. Since MSC/NASTRAN uses gradient-based optimization procedures, it does not guarantee that the lowest-weight design can be found. In this study, a simple procedure is introduced to create a new starting point based on design variable values from previous optimization analyses. Optimization analysis using this new starting point can produce a lower-weight design. Detailed inputs for setting up the MSC/NASTRAN optimization analysis and the final tank design results are presented in this paper. Approaches for obtaining further weight reductions are also discussed.

  9. Optimally Stopped Optimization

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Lidar, Daniel A.

    2016-11-01

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark simulated annealing on a class of maximum-2-satisfiability (MAX2SAT) problems. We also compare the performance of a D-Wave 2X quantum annealer to the Hamze-Freitas-Selby (HFS) solver, a specialized classical heuristic algorithm designed for low-tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N =1098 variables, the D-Wave device is 2 orders of magnitude faster than the HFS solver, and, modulo known caveats related to suboptimal annealing times, exhibits identical scaling with problem size.
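
    The core trade-off, solution quality versus cost per call, can be illustrated with a simplified fixed-repetition variant of the strategy: given an empirical distribution of single-run outcomes, choose the number of calls minimizing the expected best-of-n value plus the accumulated call cost. The distribution and cost below are invented, and the paper's sequential optimal stopping rule is more refined than this fixed-n version.

      import numpy as np

      rng = np.random.default_rng(7)
      # Hypothetical empirical distribution of solution energies from a randomized
      # solver: most runs are mediocre, with occasional near-optimal runs.
      samples = np.where(rng.random(100000) < 0.1,
                         rng.normal(0.0, 0.05, 100000),   # good runs
                         rng.normal(1.0, 0.20, 100000))   # typical runs

      cost_per_call = 0.02
      def expected_total_cost(n, m=20000):
          # Expected (best-of-n energy + n * cost-per-call), estimated by resampling.
          runs = rng.choice(samples, size=(m, n))
          return runs.min(axis=1).mean() + n * cost_per_call

      ns = np.arange(1, 60)
      costs = [expected_total_cost(n) for n in ns]
      print("optimal number of calls:", ns[int(np.argmin(costs))])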

  10. The anaerobic threshold: over-valued or under-utilized? A novel concept to enhance lipid optimization!

    PubMed

    Connolly, Declan A J

    2012-09-01

    The purpose of this article is to assess the value of the anaerobic threshold for use in clinical populations with the intent to improve exercise adaptations and outcomes. The anaerobic threshold is generally poorly understood, improperly used, and poorly measured. It is rarely used in clinical settings and often reserved for athletic performance testing. Increased exercise participation within both clinical and other less healthy populations has increased our attention to optimizing exercise outcomes. Of particular interest is the optimization of lipid metabolism during exercise in order to improve numerous conditions such as blood lipid profile, insulin sensitivity and secretion, and weight loss. Numerous authors report on the benefits of appropriate exercise intensity in optimizing outcomes even though regulation of intensity has proved difficult for many. Despite limited use, selected exercise physiology markers have considerable merit in exercise-intensity regulation. The anaerobic threshold, and other markers such as heart rate, may well provide a simple and valuable mechanism for regulating exercising intensity. The use of the anaerobic threshold and accurate target heart rate to regulate exercise intensity is a valuable approach that is under-utilized across populations. The measurement of the anaerobic threshold can be simplified to allow clients to use nonlaboratory measures, for example heart rate, in order to self-regulate exercise intensity and improve outcomes.

  11. Tractable Pareto Optimization of Temporal Preferences

    NASA Technical Reports Server (NTRS)

    Morris, Robert; Morris, Paul; Khatib, Lina; Venable, Brent

    2003-01-01

    This paper focuses on temporal constraint problems where the objective is to optimize a set of local preferences for when events occur. In previous work, a subclass of these problems has been formalized as a generalization of Temporal CSPs, and a tractable strategy for optimization has been proposed, where global optimality is defined as maximizing the minimum of the component preference values. This criterion for optimality, which we call 'Weakest Link Optimization' (WLO), is known to have limited practical usefulness because solutions are compared only on the basis of their worst value; thus, there is no requirement to improve the other values. To address this limitation, we introduce a new algorithm that re-applies WLO iteratively in a way that leads to improvement of all the values. We show the value of this strategy by proving that, with suitable preference functions, the resulting solutions are Pareto Optimal.

  12. Analysis of Calorific Value of Tibarau Cane Briquette

    NASA Astrophysics Data System (ADS)

    Nurdin, H.; Hasanuddin, H.; Darmawi, D.; Prasetya, F.

    2018-04-01

    The development of product diversification through tibarau cane briquettes is an effort to obtain alternative fuels. Tibarau cane is a potential renewable energy source that can be processed into briquettes, reducing dependence on fuel oil, which for the middle and lower classes is the main requirement. The quality and performance of tibarau cane briquettes as a fuel can be measured by their calorific value. Before this potential can be developed, test and evaluation stages are required, following the flow of new material product development. The briquettes were produced by compaction, with the tapioca adhesive content and particle mesh size optimized to obtain the optimum calorific value. The result of this research is a tibarau cane briquette model recommended as a replacement fuel, with a calorific value of 11,221.72 kJ/kg at a composition percentage of 80:20 and a density of 0.565 g/cm3. The ratio of tibarau mass to tapioca, the particle size, and the compaction pressure can affect the calorific value and density of the tibarau cane briquette.

  13. A multi-SNP association test for complex diseases incorporating an optimal P-value threshold algorithm in nuclear families.

    PubMed

    Wang, Yi-Ting; Sung, Pei-Yuan; Lin, Peng-Lin; Yu, Ya-Wen; Chung, Ren-Hua

    2015-05-15

    Genome-wide association studies (GWAS) have become a common approach to identifying single nucleotide polymorphisms (SNPs) associated with complex diseases. As complex diseases are caused by the joint effects of multiple genes, while the effect of an individual gene or SNP is modest, a method considering the joint effects of multiple SNPs can be more powerful than testing individual SNPs. Multi-SNP analysis aims to test association based on a SNP set, usually defined on the basis of biological knowledge such as a gene or pathway, which may contain only a portion of SNPs with effects on the disease. Therefore, a challenge for multi-SNP analysis is how to effectively select a subset of SNPs with promising association signals from the SNP set. We developed the Optimal P-value Threshold Pedigree Disequilibrium Test (OPTPDT). The OPTPDT uses general nuclear families. A variable p-value threshold algorithm is used to determine an optimal p-value threshold for selecting a subset of SNPs, and a permutation procedure is used to assess the significance of the test. We used simulations to verify that the OPTPDT has correct type I error rates. Our power studies showed that the OPTPDT can be more powerful than the set-based test in PLINK, the multi-SNP FBAT test, and the p-value-based test GATES. We applied the OPTPDT to a family-based autism GWAS dataset for gene-based association analysis and identified MACROD2-AS1 with genome-wide significance (p-value = 2.5 × 10^-6). Our simulation results suggest that the OPTPDT is a valid and powerful test. The OPTPDT will be helpful for gene-based or pathway association analysis, and the method is ideal for the secondary analysis of existing GWAS datasets, which may identify sets of SNPs with joint effects on the disease.
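
    The variable-threshold-plus-permutation structure can be sketched as follows. This is a simplified analogue only: the subset score is an invented statistic and the null is drawn as uniform p-values, whereas the OPTPDT permutes transmissions within families.

      import numpy as np

      rng = np.random.default_rng(8)

      def optimal_threshold_stat(pvals, thresholds):
          """For each candidate p-value threshold, score the subset of SNPs that
          pass it; return the best score over thresholds (simplified analogue of
          the variable-threshold step)."""
          best = -np.inf
          for t in thresholds:
              sel = pvals <= t
              if sel.any():
                  # Mean -log10(p) over selected SNPs, scaled by subset size,
                  # rewards a subset with concentrated signal.
                  best = max(best, -np.log10(pvals[sel]).mean() * np.sqrt(sel.sum()))
          return best

      def permutation_test(pvals_obs, n_snps, n_perm=2000):
          thresholds = np.array([0.001, 0.01, 0.05, 0.1, 0.5])
          obs = optimal_threshold_stat(pvals_obs, thresholds)
          # Under this toy null, per-SNP p-values are uniform (LD ignored).
          null = np.array([optimal_threshold_stat(rng.random(n_snps), thresholds)
                           for _ in range(n_perm)])
          return (np.sum(null >= obs) + 1) / (n_perm + 1)

      pvals = np.concatenate([rng.uniform(0, 0.01, 3), rng.random(27)])  # 3 signal SNPs
      print(permutation_test(pvals, n_snps=30))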

  14. Prepositioning emergency supplies under uncertainty: a parametric optimization method

    NASA Astrophysics Data System (ADS)

    Bai, Xuejie; Gao, Jinwu; Liu, Yankui

    2018-07-01

    Prepositioning of emergency supplies is an effective method for increasing preparedness for disasters and has received much attention in recent years. In this article, the prepositioning problem is studied by a robust parametric optimization method. The transportation cost, supply, demand and capacity are unknown prior to the extraordinary event, which are represented as fuzzy parameters with variable possibility distributions. The variable possibility distributions are obtained through the credibility critical value reduction method for type-2 fuzzy variables. The prepositioning problem is formulated as a fuzzy value-at-risk model to achieve a minimum total cost incurred in the whole process. The key difficulty in solving the proposed optimization model is to evaluate the quantile of the fuzzy function in the objective and the credibility in the constraints. The objective function and constraints can be turned into their equivalent parametric forms through chance constrained programming under the different confidence levels. Taking advantage of the structural characteristics of the equivalent optimization model, a parameter-based domain decomposition method is developed to divide the original optimization problem into six mixed-integer parametric submodels, which can be solved by standard optimization solvers. Finally, to explore the viability of the developed model and the solution approach, some computational experiments are performed on realistic scale case problems. The computational results reported in the numerical example show the credibility and superiority of the proposed parametric optimization method.

  15. Low b-value diffusion-weighted cardiac magnetic resonance imaging: initial results in humans using an optimal time-window imaging approach.

    PubMed

    Rapacchi, Stanislas; Wen, Han; Viallon, Magalie; Grenier, Denis; Kellman, Peter; Croisille, Pierre; Pai, Vinay M

    2011-12-01

    Diffusion-weighted imaging (DWI) using low b-values permits imaging of intravoxel incoherent motion in tissues. However, low b-value DWI of the human heart has been considered too challenging because of additional signal loss due to physiological motion, which reduces both signal intensity and the signal-to-noise ratio (SNR). We address these signal loss concerns by analyzing cardiac motion during a heartbeat to determine the time-window during which cardiac bulk motion is minimal. Using this information to optimize the acquisition of DWI data and combining it with a dedicated image processing approach has enabled us to develop a novel low b-value diffusion-weighted cardiac magnetic resonance imaging approach, which significantly reduces intravoxel incoherent motion measurement bias introduced by motion. Simulations from displacement encoded motion data sets permitted the delineation of an optimal time-window with minimal cardiac motion. A number of single-shot repetitions of low b-value DWI cardiac magnetic resonance imaging data were acquired during this time-window under free-breathing conditions with bulk physiological motion corrected for by using nonrigid registration. Principal component analysis (PCA) was performed on the registered images to improve the SNR, and temporal maximum intensity projection (TMIP) was applied to recover signal intensity from time-fluctuant motion-induced signal loss. This PCATMIP method was validated with experimental data, and its benefits were evaluated in volunteers before being applied to patients. Optimal time-window cardiac DWI in combination with PCATMIP postprocessing yielded significant benefits for signal recovery, contrast-to-noise ratio, and SNR in the presence of bulk motion for both numerical simulations and human volunteer studies. Analysis of mean apparent diffusion coefficient (ADC) maps showed homogeneous values among volunteers and good reproducibility between free-breathing and breath-hold acquisitions. The

  16. Optimization Testbed Cometboards Extended into Stochastic Domain

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2010-01-01

    COMparative Evaluation Testbed of Optimization and Analysis Routines for the Design of Structures (CometBoards) is a multidisciplinary design optimization software. It was originally developed for deterministic calculations and has now been extended into the stochastic domain for structural design problems. For deterministic problems, CometBoards is introduced through its subproblem solution strategy as well as the approximation concept in optimization. In the stochastic domain, a design is formulated as a function of the risk, or reliability. The optimum solution, including the weight of a structure, is also obtained as a function of reliability. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to 50 percent probability of success, or one failure in two samples. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure, which corresponds to unity for reliability. Weight can be reduced to a small value for the most failure-prone design with a compromised reliability approaching zero. The stochastic design optimization (SDO) capability for an industrial problem was obtained by combining three codes: the MSC/Nastran code was the deterministic analysis tool; the fast probabilistic integrator, or FPI module, of the NESSUS software was the probabilistic calculator; and CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated considering an academic example and a real-life airframe component made of metallic and composite materials.

  17. Maximize, minimize or target - optimization for a fitted response from a designed experiment

    DOE PAGES

    Anderson-Cook, Christine Michaela; Cao, Yongtao; Lu, Lu

    2016-04-01

    One of the common goals of running and analyzing a designed experiment is to find a location in the design space that optimizes the response of interest. Depending on the goal of the experiment, we may seek to maximize or minimize the response, or set the process to hit a particular target value. After the designed experiment, a response model is fitted and the optimal settings of the input factors are obtained from the estimated response model. The suggested optimal settings of the input factors are then used in the production environment.
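
    As a schematic of this workflow (not the authors' code; the data, factor ranges and model order below are hypothetical), one can fit a second-order response surface to the DOE data and then optimize over the fitted model:

        import numpy as np
        from scipy.optimize import minimize

        # hypothetical DOE data: two coded factors in [-1, 1] and a measured response
        X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0], [0.5, -0.5]])
        y = np.array([8.2, 9.1, 7.9, 10.4, 9.6, 9.3])

        def design_matrix(X):
            x1, x2 = X[:, 0], X[:, 1]
            return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

        beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)  # fitted response model

        # maximize the fitted response over the design space
        res = minimize(lambda x: -(design_matrix(x[None, :]) @ beta)[0],
                       x0=np.zeros(2), bounds=[(-1, 1), (-1, 1)])
        print("suggested optimal settings:", res.x)

    The same pattern covers minimization (drop the sign change) or hitting a target (minimize the squared deviation of the fitted response from the target).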

  18. Optimization of land use of agricultural farms in Sumedang regency by using linear programming models

    NASA Astrophysics Data System (ADS)

    Zenis, F. M.; Supian, S.; Lesmana, E.

    2018-03-01

    Land is one of the most important assets for farmers in Sumedang Regency; therefore, agricultural land should be used optimally. This study aims to obtain the land use composition that maximizes income. The optimization method used in this research is a linear programming model. Based on the results of the analysis, the optimal composition of land use is 135.314 hectares of rice, 11.798 hectares of corn, 2.290 hectares of soy, and 2.818 hectares of peanuts, giving a farmer income of IDR 2.682.020.000.000,- per year. The results of this analysis can be used as a consideration in decision making about cropping patterns by farmers.
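
    A minimal sketch of this kind of land-allocation LP (the incomes per hectare and the land constraint below are hypothetical; the abstract does not give the model's coefficients):

        from scipy.optimize import linprog

        # hypothetical net income per hectare for rice, corn, soy, peanuts
        income = [18.0, 9.5, 7.0, 11.0]
        total_land = 152.22  # hypothetical total available land, hectares

        # linprog minimizes, so negate incomes to maximize; a real model would add
        # labor, capital and crop-rotation constraints as further A_ub rows
        res = linprog(c=[-v for v in income],
                      A_ub=[[1, 1, 1, 1]], b_ub=[total_land],
                      bounds=[(0, None)] * 4, method="highs")
        print("hectares per crop:", res.x, "maximum income:", -res.fun)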

  19. Optimal pricing and replenishment policies for instantaneous deteriorating items with backlogging and trade credit under inflation

    NASA Astrophysics Data System (ADS)

    Sundara Rajan, R.; Uthayakumar, R.

    2017-12-01

    In this paper we develop an economic order quantity model to investigate the optimal replenishment policies for instantaneous deteriorating items under inflation and trade credit. The demand rate is a linear function of the selling price and decreases exponentially with time over a finite planning horizon. Shortages are allowed and partially backlogged. Under these conditions, we model the retailer's inventory system as a profit maximization problem to determine the optimal selling price, order quantity and replenishment time. An easy-to-use algorithm is developed to determine the optimal replenishment policies for the retailer. We also provide the optimal present value of profit when shortages are completely backlogged as a special case. Numerical examples are presented to illustrate the algorithm, and managerial implications are drawn from them to substantiate our model. The results show that there is an improvement in total profit with complete backlogging rather than partial backlogging.
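
    The abstract describes the demand rate only qualitatively; one common functional form consistent with it (linear in price, exponentially decaying in time, with hypothetical constants a, b, λ > 0) would be

        D(p, t) = (a - b\,p)\, e^{-\lambda t}, \qquad 0 \le t \le H,

    where p is the selling price and H the planning horizon; the retailer then chooses p, the order quantity and the replenishment times to maximize the present value of profit under inflation and trade credit.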

  20. Pattern formations and optimal packing.

    PubMed

    Mityushev, Vladimir

    2016-04-01

    Patterns of different symmetries may arise from solutions to reaction-diffusion equations. Hexagonal arrays, layers and their perturbations are observed in different models after numerical solution of the corresponding initial-boundary value problems. We demonstrate an intimate connection between pattern formation and optimal random packing on the plane. The main study is based on the following two points. First, the diffusive flux in reaction-diffusion systems is approximated by piecewise linear functions in the framework of structural approximations. This leads to a discrete network approximation of the considered continuous problem. Second, the discrete energy minimization yields optimal random packing of the domains (disks) in the representative cell. Therefore, the general problem of pattern formation based on reaction-diffusion equations is reduced to the geometric problem of random packing. It is demonstrated that all random packings can be divided into classes associated with classes of isomorphic graphs obtained from the Delaunay triangulation. The unique optimal solution is constructed in each class of random packings. If the number of disks per representative cell is finite, the number of classes of isomorphic graphs, and hence the number of optimal packings, is also finite. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Optimization of turning process through the analytic flank wear modelling

    NASA Astrophysics Data System (ADS)

    Del Prete, A.; Franchi, R.; De Lorenzis, D.

    2018-05-01

    In the present work, the approach used for the optimization of the process capabilities for Oil&Gas components machining is described. These components are machined by turning of stainless steel castings. For this purpose, a proper Design Of Experiments (DOE) plan has been designed and executed; as output of the experimentation, data about tool wear have been collected. The DOE has been designed starting from the cutting speed and feed values recommended by the tool manufacturer; the depth of cut has been kept constant. Wear data have been obtained by observing the tool flank wear under an optical microscope, with data acquisition carried out at regular intervals of working time. Through statistical data and regression analysis, analytical models of the flank wear and the tool life have been obtained. The optimization approach used is a multi-objective optimization, which minimizes the production time and the number of cutting tools used, under a constraint on a defined flank wear level. The technique used to solve the optimization problem is Multi Objective Particle Swarm Optimization (MOPS). The optimization results, validated by a further experimental campaign, highlighted the reliability of the work and confirmed the usability of the optimized process parameters and the potential benefit for the company.
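
    The abstract does not give the fitted models' functional form; purely as context, analytic tool-life models obtained from such regressions are often of extended-Taylor type,

        T = \frac{C}{V^{\alpha} f^{\beta}},

    where T is the tool life, V the cutting speed, f the feed, and C, α, β empirical constants, with the flank wear regressed against cutting time at each (V, f) combination and T defined as the time at which the wear reaches the prescribed limit.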

  2. Strain sensors optimal placement for vibration-based structural health monitoring. The effect of damage on the initially optimal configuration

    NASA Astrophysics Data System (ADS)

    Loutas, T. H.; Bourikas, A.

    2017-12-01

    We revisit the optimal sensor placement problem for engineering structures, with an emphasis on in-plane dynamic strain measurements, aiming at modal identification as well as vibration-based damage detection for structural health monitoring purposes. The approach utilized is based on the maximization of a norm of the Fisher Information Matrix (FIM) built with numerically obtained mode shapes of the structure, while prohibiting the sensorization of neighboring degrees of freedom as well as those carrying similar information, in order to obtain a satisfactory coverage. A new convergence criterion of the FIM norm is proposed in order to deal with the issue of choosing an appropriate sensor redundancy threshold, a concept recently introduced but whose choice has not been further investigated. The sensor configurations obtained via a forward sequential placement algorithm are sub-optimal in terms of FIM norm values, but the selected sensors are not allowed to be placed in neighboring degrees of freedom, thus providing a better coverage of the structure and a subsequently better identification of the experimental mode shapes. The issue of how service-induced damage affects the initially nominated optimal sensor configuration is also investigated and reported. The numerical model of a composite sandwich panel serves as a representative aerospace structure upon which our investigations are based.
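
    A minimal sketch of a forward sequential placement of this flavor (variable names hypothetical; the paper's exact FIM norm, redundancy threshold and convergence criterion are not reproduced): given a mode-shape row per candidate degree of freedom, greedily add the sensor that maximizes the FIM determinant while banning neighbors of already-selected DOFs.

        import numpy as np

        def place_sensors(Phi, neighbors, n_sensors):
            """Greedy placement maximizing det of the Fisher Information Matrix.

            Phi       : (n_dof, n_modes) numerically obtained mode shapes
            neighbors : dict mapping each DOF to the set of its neighboring DOFs
            """
            selected, banned = [], set()
            ridge = 1e-9 * np.eye(Phi.shape[1])  # keeps early, rank-deficient FIMs comparable
            for _ in range(n_sensors):
                best, best_det = None, -np.inf
                for i in range(Phi.shape[0]):
                    if i in selected or i in banned:
                        continue
                    rows = Phi[selected + [i]]
                    det = np.linalg.det(rows.T @ rows + ridge)  # FIM = Phi_s^T Phi_s
                    if det > best_det:
                        best, best_det = i, det
                selected.append(best)
                banned |= neighbors.get(best, set())  # forbid neighboring DOFs
            return selected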

  3. Optimization of glibenclamide tablet composition through the combined use of differential scanning calorimetry and D-optimal mixture experimental design.

    PubMed

    Mura, P; Furlanetto, S; Cirri, M; Maestrelli, F; Marras, A M; Pinzauti, S

    2005-02-07

    A systematic analysis of the influence of different proportions of excipients on the stability of a solid dosage form was carried out. In particular, a D-optimal mixture experimental design was applied for the evaluation of glibenclamide compatibility in tablet formulations consisting of four classic excipients (natrosol as binding agent, stearic acid as lubricant, sorbitol as diluent and cross-linked polyvinylpyrrolidone as disintegrant). The goal was to find the mixture component proportions corresponding to the optimal drug melting parameters, i.e. its maximum stability, using differential scanning calorimetry (DSC) to quickly obtain information about possible interactions among the formulation components. Two indexes of the degree of drug-excipient interaction were chosen: the absolute difference between the melting peak temperature of the pure drug endotherm and that in each analysed mixture, and the absolute difference between the enthalpy of the pure glibenclamide melting peak and that of its melting peak in the different analysed mixtures.

  4. Value of Defect Information in Automated Hardwood Edger and Trimmer Systems

    Treesearch

    Carmen Regalado; D. Earl Kline; Philip A. Araman

    1992-01-01

    Due to the limited capability of board defect scanners, not all defect information required to make the best edging and trimming decision can be scanned for use in an automated system. The objective of the study presented in this paper was to evaluate the lumber value obtainable from edging and trimming optimization using varying levels of defect information as input....

  5. [Coupling AFM fluid imaging with micro-flocculation filtration process for the technological optimization].

    PubMed

    Zheng, Bei; Ge, Xiao-peng; Yu, Zhi-yong; Yuan, Sheng-guang; Zhang, Wen-jing; Sun, Jing-fang

    2012-08-01

    Atomic force microscope (AFM) fluid imaging was applied to the study of the micro-flocculation filtration process and to the optimization of the micro-flocculation time and the agitation intensity (G value). AFM fluid imaging proves to be a promising tool for the observation and characterization of floc morphology and dynamic coagulation processes under aqueous environmental conditions. Through the use of the AFM fluid imaging technique, optimized conditions of a micro-flocculation time of 2 min and an agitation intensity (G value) of 100 s(-1) were obtained in the treatment of dye-printing industrial tailing wastewater by the micro-flocculation filtration process, with good performance.

  6. A model for the value of a business, some optimization problems in its operating procedures and the valuation of its debt

    NASA Astrophysics Data System (ADS)

    1997-12-01

    In this paper we present a model for the value of a firm based on observable variables and parameters: the annual turnover, the expenses, interest rates. This value is the solution of a parabolic partial differential equation. We show how the value of the company depends on its legal status such as its liability (that is, whether it is a Limited Company or a sole trader/partnership). We give examples of how the operating procedures can be optimized (for example, whether the firm should close down, relocate etc.). Finally, we show how the model can be used to value the debt issued by the firm.
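
    The abstract does not reproduce the equation; purely as a hedged sketch, valuation models of this type take a form such as

        \frac{\partial V}{\partial t} + \tfrac{1}{2}\sigma^{2} S^{2} \frac{\partial^{2} V}{\partial S^{2}} + \mu S \frac{\partial V}{\partial S} - rV + (S - E) = 0,

    where V(S, t) is the firm value, S the annual turnover (assumed here, for illustration, to follow a lognormal diffusion with drift μ and volatility σ), E the expense rate, r the interest rate, and (S - E) the net cash flow accruing to the owners. Limited liability would then enter through a free-boundary condition V ≥ 0, reflecting the option to close the firm down.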

  7. Chopped random-basis quantum optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caneva, Tommaso; Calarco, Tommaso; Montangero, Simone

    2011-08-15

    In this work, we describe in detail the chopped random basis (CRAB) optimal control technique recently introduced to optimize time-dependent density matrix renormalization group simulations [P. Doria, T. Calarco, and S. Montangero, Phys. Rev. Lett. 106, 190501 (2011)]. Here, we study the efficiency of this control technique in optimizing different quantum processes and we show that in the considered cases we obtain results equivalent to those obtained via different optimal control methods while using fewer resources. We propose CRAB optimization as a general and versatile optimal control technique.
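
    For reference, and with notation adapted rather than quoted from the paper, the CRAB correction to an initial pulse f_0(t) is expanded in a randomized, truncated Fourier basis:

        f(t) = f_0(t)\left[ 1 + \frac{1}{\lambda(t)} \sum_{k=1}^{n_f} \big( a_k \sin(\omega_k t) + b_k \cos(\omega_k t) \big) \right],
        \qquad \omega_k = \frac{2\pi k (1 + r_k)}{T},

    where the r_k are small random shifts, λ(t) enforces the boundary conditions on the pulse, and the optimization is carried out over the few coefficients a_k, b_k with a derivative-free direct-search method.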

  8. Reliability-Based Design Optimization of a Composite Airframe Component

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2009-01-01

    A stochastic design optimization methodology (SDO) has been developed to design components of an airframe structure that can be made of metallic and composite materials. The design is obtained as a function of the risk level, or reliability, p. The design method treats uncertainties in load, strength, and material properties as distribution functions, which are defined with mean values and standard deviations. A design constraint or a failure mode is specified as a function of reliability p. Solution to stochastic optimization yields the weight of a structure as a function of reliability p. Optimum weight versus reliability p traced out an inverted-S-shaped graph. The center of the inverted-S graph corresponded to 50 percent (p = 0.5) probability of success. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure that corresponds to unity for reliability p (or p = 1). Weight can be reduced to a small value for the most failure-prone design with a reliability that approaches zero (p = 0). Reliability can be changed for different components of an airframe structure. For example, the landing gear can be designed for a very high reliability, whereas it can be reduced to a small extent for a raked wingtip. The SDO capability is obtained by combining three codes: (1) the MSC/Nastran code was the deterministic analysis tool, (2) the fast probabilistic integrator, or the FPI module of the NESSUS software, was the probabilistic calculator, and (3) NASA Glenn Research Center's optimization testbed CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated considering an academic example and a real-life raked wingtip structure of the Boeing 767-400 extended range airliner made of metallic and composite materials.

  9. Parameter optimization of fusion splicing of photonic crystal fibers and conventional fibers to increase strength

    NASA Astrophysics Data System (ADS)

    Zhang, Chunxi; Zhang, Zuchen; Song, Jingming; Wu, Chunxiao; Song, Ningfang

    2015-03-01

    A splicing parameter optimization method to increase the tensile strength of splicing joint between photonic crystal fiber (PCF) and conventional fiber is demonstrated. Based on the splicing recipes provided by splicer or fiber manufacturers, the optimal values of some major splicing parameters are obtained in sequence, and a conspicuous improvement in the mechanical strength of splicing joints between PCFs and conventional fibers is validated through experiments.

  10. Optimization of torrefaction conditions of coffee industry residues using desirability function approach.

    PubMed

    Buratti, C; Barbanera, M; Lascaro, E; Cotana, F

    2018-03-01

    The aim of the present study is to analyze the influence of independent process variables such as temperature, residence time, and heating rate on the torrefaction of coffee chaff (CC) and spent coffee grounds (SCGs). Response surface methodology and a three-factor, three-level Box-Behnken design were used to evaluate the effects of the process variables on the weight loss (WL) and the Higher Heating Value (HHV) of the torrefied materials. Results showed that the effects of the three factors on both responses were ranked as follows: temperature > residence time > heating rate. Data obtained from the experiments were analyzed by analysis of variance (ANOVA) and fitted to second-order polynomial models using multiple regression analysis. The resulting predictive models fitted the experimental data satisfactorily, with coefficient of determination (R2) values higher than 0.95. An optimization study using Derringer's desirability function methodology was also carried out, and the optimal torrefaction conditions were found to be: temperature 271.7°C, residence time 20 min, and heating rate 5°C/min for CC; and 256.0°C, 20 min, and 25°C/min for SCGs. The experimental values closely agree with the corresponding predicted values. Copyright © 2017 Elsevier Ltd. All rights reserved.
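
    Derringer's approach maps each response y_i to an individual desirability d_i in [0, 1] and maximizes the overall desirability D, the geometric mean of the d_i. For a response to be maximized, with analyst-chosen bounds L, U and weight w (a schematic statement, not the paper's exact settings):

        d_i(y) =
        \begin{cases}
        0, & y \le L, \\
        \left( \frac{y - L}{U - L} \right)^{w}, & L < y < U, \\
        1, & y \ge U,
        \end{cases}
        \qquad
        D = \Big( \prod_{i=1}^{n} d_i \Big)^{1/n},

    with mirrored definitions for responses to be minimized or held at a target.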

  11. Modelling and Optimising the Value of a Hybrid Solar-Wind System

    NASA Astrophysics Data System (ADS)

    Nair, Arjun; Murali, Kartik; Anbuudayasankar, S. P.; Arjunan, C. V.

    2017-05-01

    In this paper, a net present value (NPV) approach for a solar-wind hybrid system is presented. The system in question aims at supporting an investor in assessing an investment in a solar-wind hybrid system in a given area. The approach follows a combined process of modelling the system and optimizing the major investment-related variables to maximize the financial yield of the investment. The consideration of hybrid solar-wind supply presents significant potential for cost reduction. The investment variables concern the location of the solar-wind plant and its sizing. The system is demand driven, meaning that its primary aim is to fully satisfy the energy demand of the customers. Therefore, the model is a practical tool in the hands of an investor to assess and optimize, in financial terms, an investment aimed at covering real energy demand. Optimization is performed subject to various technical and logical constraints. The relation between the maximum power obtained from the individual systems and from the hybrid system as a whole, together with the net present value of the system, is highlighted.
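
    For reference, the NPV criterion underlying such an assessment is the standard discounted cash-flow sum (C_0 the initial investment, CF_t the net cash flow in year t, r the discount rate, T the plant lifetime):

        \mathrm{NPV} = -C_0 + \sum_{t=1}^{T} \frac{CF_t}{(1+r)^{t}},

    and the optimization chooses the siting and sizing variables that maximize this quantity subject to the technical and logical constraints.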

  12. Profile Optimization Method for Robust Airfoil Shape Optimization in Viscous Flow

    NASA Technical Reports Server (NTRS)

    Li, Wu

    2003-01-01

    Simulation results obtained by using FUN2D for robust airfoil shape optimization in transonic viscous flow are included to show the potential of the profile optimization method for generating fairly smooth optimal airfoils with no off-design performance degradation.

  13. Optimization-Based Inverse Identification of the Parameters of a Concrete Cap Material Model

    NASA Astrophysics Data System (ADS)

    Král, Petr; Hokeš, Filip; Hušek, Martin; Kala, Jiří; Hradil, Petr

    2017-10-01

    Issues concerning the advanced numerical analysis of concrete building structures in sophisticated computing systems currently require the involvement of nonlinear mechanics tools. Efforts to design safer, more durable and more economically efficient concrete structures are supported by the use of advanced nonlinear concrete material models and the geometrically nonlinear approach. The application of nonlinear mechanics tools undoubtedly presents another step towards approximating the real behaviour of concrete building structures within computer numerical simulations. However, the success of this application depends on a thorough understanding of the behaviour of the concrete material models used and of the meaning of their parameters. The effective application of nonlinear concrete material models within computer simulations often becomes problematic because these models frequently contain parameters (material constants) whose values are difficult to obtain, yet correct parameter values are essential for the material model to function properly. One possibility that permits a successful solution of this problem is the use of optimization algorithms for optimization-based inverse material parameter identification. Parameter identification goes hand in hand with experimental investigation: it seeks the parameter values of the material model for which the results of the computer simulation best approximate the experimental data. This paper is focused on the optimization-based inverse identification of the parameters of a concrete cap material model known as the Continuous Surface Cap Model. Within this paper, material parameters of the model are identified on the basis of interaction between nonlinear computer simulations
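
    The identification loop itself is compact; a minimal sketch under stated assumptions (run_simulation is a hypothetical stand-in for the nonlinear FE analysis, and the "experimental" curve is made-up data):

        import numpy as np
        from scipy.optimize import minimize

        t_exp = np.linspace(0.0, 1.0, 50)
        y_exp = 2.0 * (1.0 - np.exp(-3.0 * t_exp))   # made-up "measured" response curve

        def run_simulation(params, t):
            # hypothetical stand-in for the nonlinear FE simulation of the test
            a, b = params
            return a * (1.0 - np.exp(-b * t))

        def misfit(params):
            # sum of squared deviations between simulated and experimental responses
            return float(np.sum((run_simulation(params, t_exp) - y_exp) ** 2))

        res = minimize(misfit, x0=[1.0, 1.0], method="Nelder-Mead")
        print("identified parameters:", res.x)   # approaches (2, 3) for this toy data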

  14. Risk management for optimal land use planning integrating ecosystem services values: A case study in Changsha, Middle China.

    PubMed

    Liang, Jie; Zhong, Minzhou; Zeng, Guangming; Chen, Gaojie; Hua, Shanshan; Li, Xiaodong; Yuan, Yujie; Wu, Haipeng; Gao, Xiang

    2017-02-01

    Land-use change has a direct impact on ecosystem services and alters ecosystem services values (ESVs). Ecosystem services analysis is beneficial for land management and decisions; however, the application of ESVs for decision making in land use planning is scarce. In this paper, a method integrating ESVs to balance future ecosystem-service benefit and risk is developed to optimize investment in land for ecological conservation in land use planning. Using ecological conservation in land use planning in Changsha as an example, ESVs are regarded as the expected ecosystem-service benefit, and the uncertainty of land use change is regarded as risk. This method can optimize the allocation of investment in land to improve the ecological benefit. The results show that investment should favor Liuyang City to obtain a higher benefit, but should also be shifted from Liuyang City to other regions to reduce risk. In practice, lower and upper limits for the weight distribution, which affect the optimal outcome and the selection of the investment allocation, should be set. This method can reveal the optimal spatial allocation of investment that maximizes the expected ecosystem-service benefit at a given level of risk, or minimizes risk at a given level of expected ecosystem-service benefit. Our optimal analyses highlight the trade-offs between future ecosystem-service benefit and the uncertainty of land use change in land use decisions. Copyright © 2016 Elsevier B.V. All rights reserved.
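
    The trade-off described has the familiar portfolio form; schematically, and with notation adapted rather than taken from the paper (w the investment weights across regions, μ the expected ESV benefits, Σ the covariance describing land-use-change uncertainty, σ² the accepted risk level):

        \max_{w}\ w^{\top}\mu
        \quad \text{s.t.} \quad
        w^{\top}\Sigma\, w \le \sigma^{2},
        \qquad \sum_i w_i = 1,
        \qquad l_i \le w_i \le u_i,

    where l_i and u_i are the lower and upper limits on the weight distribution mentioned above; sweeping σ² traces out the benefit-risk frontier.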

  15. Comparison of Traditional Design Nonlinear Programming Optimization and Stochastic Methods for Structural Design

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2010-01-01

    Structural designs generated by the traditional method, the optimization method, and the stochastic design concept are compared. In the traditional method, the constraints are manipulated to obtain the design, and the weight is back-calculated. In design optimization, the weight of a structure becomes the merit function, with constraints imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and the solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions were produced by all three methods. The variation in the weight calculated by the methods was modest, and some variation was noticed in the designs, which may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure. Weight can be reduced to a small value for the most failure-prone design. Probabilistic modeling of load and material properties remained a challenge.

  16. An optimal generic model for multi-parameters and big data optimizing: a laboratory experimental study

    NASA Astrophysics Data System (ADS)

    Utama, D. N.; Ani, N.; Iqbal, M. M.

    2018-03-01

    Optimization is the process of finding the parameter or parameters that deliver an optimal value of an objective function. Seeking an optimal generic model for optimization is a computer science problem that has been pursued by numerous researchers. A generic model is a model that can be operated to solve a variety of optimization problems. Using an object-oriented method, a generic model for optimization was constructed. Two optimization methods, simulated annealing and hill climbing, were employed in constructing the model and then compared to determine the more effective one. The results showed that both methods produced the same objective function value, with the hill-climbing based model consuming the shorter running time.
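
    As a self-contained illustration of the two methods being compared (not the authors' model; the one-dimensional objective below is an arbitrary toy function):

        import math, random

        def objective(x):
            return -(x - 2.0) ** 2 + 3.0          # toy function with maximum at x = 2

        def hill_climb(x=0.0, step=0.1, iters=5000):
            for _ in range(iters):
                cand = x + random.uniform(-step, step)
                if objective(cand) > objective(x):  # accept only improvements
                    x = cand
            return x

        def simulated_annealing(x=0.0, step=0.1, t0=1.0, cooling=0.999, iters=5000):
            t = t0
            for _ in range(iters):
                cand = x + random.uniform(-step, step)
                delta = objective(cand) - objective(x)
                # accept worse moves with probability exp(delta / t) to escape local optima
                if delta > 0 or random.random() < math.exp(delta / t):
                    x = cand
                t *= cooling                        # geometric cooling schedule
            return x

        print(hill_climb(), simulated_annealing())

    On a unimodal objective like this one both methods reach the same optimum, with hill climbing doing less work per step, which is consistent with the reported result; simulated annealing pays off only when local optima must be escaped.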

  17. Remote measurement of water color in coastal waters. [spectral radiance data used to obtain quantitative values for chlorophyll and turbidity

    NASA Technical Reports Server (NTRS)

    Weldon, J. W.

    1973-01-01

    An investigation was conducted to develop a procedure for obtaining quantitative values of chlorophyll and turbidity in coastal waters by observing changes in the spectral radiance of the backscattered spectrum. The technique under consideration consists of examining Exotech Model 20-D spectral radiometer data and determining which radiance ratios best correlate with chlorophyll and turbidity measurements obtained from analyses of water samples and Secchi visibility readings. Preliminary results indicate that there is a correlation between backscattered light and both chlorophyll concentration and Secchi visibility. The tests were conducted with the spectrometer mounted in a light aircraft over the Mississippi Sound at altitudes of 2.5K, 2.8K and 10K feet.

  18. Optimization of Bioactive Ingredient Extraction from Chinese Herbal Medicine Glycyrrhiza glabra: A Comparative Study of Three Optimization Models

    PubMed Central

    Li, Xiaohong; Zhang, Yuyan

    2018-01-01

    The ultraviolet spectrophotometric method is often used for determining the content of glycyrrhizic acid from the Chinese herbal medicine Glycyrrhiza glabra. Based on the traditional single-variable approach, four extraction parameters, ammonia concentration, ethanol concentration, circumfluence time, and liquid-solid ratio, are adopted as the independent extraction variables. In the present work, a central composite design with four factors and five levels is applied to design the extraction experiments. Subsequently, prediction models based on response surface methodology, artificial neural networks, and genetic algorithm-artificial neural networks are developed to analyze the experimental data, while the genetic algorithm is utilized to find the optimal extraction parameters for the above well-established models. The optimal extraction conditions are found to be an ammonia concentration of 0.595%, an ethanol concentration of 58.45%, a circumfluence time of 2.5 h, and a liquid-solid ratio of 11.065 : 1. Under these conditions, the model predictive value is 381.24 mg, the experimental average value is 376.46 mg, and the discrepancy is 4.78 mg. For the first time, a comparative study of these three approaches is conducted for the evaluation and optimization of the effects of the independent extraction variables. Furthermore, it is demonstrated that the combination of genetic algorithm and artificial neural networks provides a more reliable and more accurate strategy for the design and optimization of glycyrrhizic acid extraction from Glycyrrhiza glabra. PMID:29887907

  19. Optimization of Bioactive Ingredient Extraction from Chinese Herbal Medicine Glycyrrhiza glabra: A Comparative Study of Three Optimization Models.

    PubMed

    Yu, Li; Jin, Weifeng; Li, Xiaohong; Zhang, Yuyan

    2018-01-01

    The ultraviolet spectrophotometric method is often used for determining the content of glycyrrhizic acid from the Chinese herbal medicine Glycyrrhiza glabra. Based on the traditional single-variable approach, four extraction parameters, ammonia concentration, ethanol concentration, circumfluence time, and liquid-solid ratio, are adopted as the independent extraction variables. In the present work, a central composite design with four factors and five levels is applied to design the extraction experiments. Subsequently, prediction models based on response surface methodology, artificial neural networks, and genetic algorithm-artificial neural networks are developed to analyze the experimental data, while the genetic algorithm is utilized to find the optimal extraction parameters for the above well-established models. The optimal extraction conditions are found to be an ammonia concentration of 0.595%, an ethanol concentration of 58.45%, a circumfluence time of 2.5 h, and a liquid-solid ratio of 11.065 : 1. Under these conditions, the model predictive value is 381.24 mg, the experimental average value is 376.46 mg, and the discrepancy is 4.78 mg. For the first time, a comparative study of these three approaches is conducted for the evaluation and optimization of the effects of the independent extraction variables. Furthermore, it is demonstrated that the combination of genetic algorithm and artificial neural networks provides a more reliable and more accurate strategy for the design and optimization of glycyrrhizic acid extraction from Glycyrrhiza glabra.

  20. Wood-Polymer composites obtained by gamma irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gago, J.; Lopez, A.; Rodriguez, J.

    2007-10-26

    In this work we impregnated three Peruvian woods (Calycophyllum spruceanum, Aniba amazonica and Hura crepitans L.) with styrene-polyester resin and methyl methacrylate. The polymerization of the system was promoted by gamma radiation, and the optimal experimental condition was obtained with styrene-polyester 1:1 and 15 kGy. The obtained composites show reduced water absorption and better mechanical properties compared to the original wood. The structure of the wood-polymer composites was studied by light microscopy. Water absorption and hardness measurements were also obtained.

  1. Optimization of brain PET imaging for a multicentre trial: the French CATI experience.

    PubMed

    Habert, Marie-Odile; Marie, Sullivan; Bertin, Hugo; Reynal, Moana; Martini, Jean-Baptiste; Diallo, Mamadou; Kas, Aurélie; Trébossen, Régine

    2016-12-01

    CATI is a French initiative launched in 2010 to handle the neuroimaging of a large cohort of subjects recruited for an Alzheimer's research program called MEMENTO. This paper presents our test protocol and the results obtained for the 22 PET centres (13 different scanners overall) involved in the MEMENTO cohort. We determined acquisition parameters using phantom experiments prior to patient studies, with the aim of optimizing PET quantitative values to the highest level achievable per site while reducing, where possible, variability across centres. Jaszczak and 3D Hoffman phantom measurements were used to assess image spatial resolution (ISR), recovery coefficients (RC) in hot and cold spheres, and signal-to-noise ratio (SNR). For each centre, the optimal reconstruction parameters were chosen as those maximizing ISR and RC without a noticeable decrease in SNR. Point-spread-function (PSF) modelling reconstructions were discarded. The three figures of merit extracted from the images reconstructed with optimized parameters and with routine schemes were compared, as were volume-of-interest ratios extracted from Hoffman acquisitions. The net effect of the 3D-OSEM reconstruction parameter optimization was investigated on a subset of 18 scanners without PSF modelling reconstruction. Compared to the routine parameters of the 22 PET centres, the average RC in the two smallest hot and cold spheres and the average ISR remained stable or were improved with the optimized reconstruction, at the expense of a slight SNR degradation, while the dispersion of values was reduced. For the subset of scanners without PSF modelling, the mean RC of the smallest hot sphere obtained with the optimized reconstruction was significantly higher than with the routine reconstruction. The putamen and caudate-to-white-matter ratios measured on 3D Hoffman acquisitions of all centres were also significantly improved by the optimization, while the variance was reduced. This study provides guidelines for optimizing quantitative

  2. The Optimal Cut-Off Value of Neutrophil-to-Lymphocyte Ratio for Predicting Prognosis in Adult Patients with Henoch–Schönlein Purpura

    PubMed Central

    Park, Chan Hyuk; Han, Dong Soo; Jeong, Jae Yoon; Eun, Chang Soo; Yoo, Kyo-Sang; Jeon, Yong Cheol; Sohn, Joo Hyun

    2016-01-01

    Background The development of gastrointestinal (GI) bleeding and end-stage renal disease (ESRD) can be a concern in the management of Henoch–Schönlein purpura (HSP). We aimed to evaluate whether the neutrophil-to-lymphocyte ratio (NLR) is associated with the prognosis of adult patients with HSP. Methods Clinical data including the NLR of adult patients with HSP were retrospectively analyzed. Patients were classified into three groups as follows: (a) simple recovery, (b) wax & wane without GI bleeding, and (c) development of GI bleeding. The optimal cut-off value was determined using a receiver operating characteristic curve and the Youden index. Results A total of 66 adult patients were enrolled. The NLR was higher in the GI bleeding group than in the simple recovery or wax & wane group (simple recovery vs. wax & wane vs. GI bleeding; median [IQR], 2.32 [1.61–3.11] vs. 3.18 [2.16–3.71] vs. 7.52 [4.91–10.23], P<0.001). For the purpose of predicting simple recovery, the optimal cut-off value of NLR was 3.18, and the sensitivity and specificity were 74.1% and 75.0%, respectively. For predicting the development of GI bleeding, the optimal cut-off value was 3.90, and the sensitivity and specificity were 87.5% and 88.6%, respectively. Conclusions The NLR is useful for predicting the development of GI bleeding as well as simple recovery without symptom relapse. Two different cut-off values of NLR, 3.18 for predicting an easy recovery without symptom relapse and 3.90 for predicting GI bleeding, can be used in adult patients with HSP. PMID:27073884
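
    The cut-off selection step is easily reproduced; a minimal sketch with scikit-learn (the labels and scores below are synthetic, not the study's data):

        import numpy as np
        from sklearn.metrics import roc_curve

        rng = np.random.default_rng(0)
        # synthetic example: NLR-like scores, higher in the positive (GI bleeding) group
        y_true = np.r_[np.zeros(50, dtype=int), np.ones(16, dtype=int)]
        scores = np.r_[rng.normal(2.5, 1.0, 50), rng.normal(7.5, 2.5, 16)]

        fpr, tpr, thresholds = roc_curve(y_true, scores)
        youden = tpr - fpr                # Youden index J = sensitivity + specificity - 1
        best = np.argmax(youden)
        print("optimal cut-off:", thresholds[best],
              "sensitivity:", tpr[best], "specificity:", 1 - fpr[best])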

  3. Field-design optimization with triangular heliostat pods

    NASA Astrophysics Data System (ADS)

    Domínguez-Bravo, Carmen-Ana; Bode, Sebastian-James; Heiming, Gregor; Richter, Pascal; Carrizosa, Emilio; Fernández-Cara, Enrique; Frank, Martin; Gauché, Paul

    2016-05-01

    In this paper the optimization of a heliostat field with triangular heliostat pods is addressed. The use of structures which allow the combination of several heliostats into a common pod system aims to reduce the high costs associated with the heliostat field and therefore reduces the Levelized Cost of Electricity value. A pattern-based algorithm and two pattern-free algorithms are adapted to handle the field layout problem with triangular heliostat pods. Under the Helio100 project in South Africa, a new small-scale Solar Power Tower plant has been recently constructed. The Helio100 plant has 20 triangular pods (each with 6 heliostats) whose positions follow a linear pattern. The obtained field layouts after optimization are compared against the reference field Helio100.

  4. Optimal remediation of unconfined aquifers: Numerical applications and derivative calculations

    NASA Astrophysics Data System (ADS)

    Mansfield, Christopher M.; Shoemaker, Christine A.

    1999-05-01

    This paper extends earlier work on derivative-based optimization for cost-effective remediation to unconfined aquifers, which have more complex, nonlinear flow dynamics than confined aquifers. Most previous derivative-based optimization of contaminant removal has been limited to consideration of confined aquifers; however, contamination is more common in unconfined aquifers. Exact derivative equations are presented, and two computationally efficient approximations, the quasi-confined (QC) and head independent from previous (HIP) unconfined-aquifer finite element equation derivative approximations, are presented and demonstrated to be highly accurate. The derivative approximations can be used with any nonlinear optimization method requiring derivatives for computation of either time-invariant or time-varying pumping rates. The QC and HIP approximations are combined with the nonlinear optimal control algorithm SALQR into the unconfined-aquifer algorithm, which is shown to compute solutions for unconfined aquifers in CPU times that were not significantly longer than those required by the confined-aquifer optimization model. Two of the three example unconfined-aquifer cases considered obtained pumping policies with substantially lower objective function values with the unconfined model than were obtained with the confined-aquifer optimization, even though the mean differences in hydraulic heads predicted by the unconfined- and confined-aquifer models were small (less than 0.1%). We suggest a possible geophysical index based on differences in drawdown predictions between unconfined- and confined-aquifer models to estimate which aquifers require unconfined-aquifer optimization and which can be adequately approximated by the simpler confined-aquifer analysis.

  5. Partial Adaptation of Obtained and Observed Value Signals Preserves Information about Gains and Losses

    PubMed Central

    Baddeley, Michelle; Tobler, Philippe N.; Schultz, Wolfram

    2016-01-01

    Given that the range of rewarding and punishing outcomes of actions is large but neural coding capacity is limited, efficient processing of outcomes by the brain is necessary. One mechanism to increase efficiency is to rescale neural output to the range of outcomes expected in the current context, and process only experienced deviations from this expectation. However, this mechanism comes at the cost of not being able to discriminate between unexpectedly low losses when times are bad versus unexpectedly high gains when times are good. Thus, too much adaptation would result in disregarding information about the nature and absolute magnitude of outcomes, preventing learning about the longer-term value structure of the environment. Here we investigate the degree of adaptation in outcome coding brain regions in humans, for directly experienced outcomes and observed outcomes. We scanned participants while they performed a social learning task in gain and loss blocks. Multivariate pattern analysis showed two distinct networks of brain regions adapt to the most likely outcomes within a block. Frontostriatal areas adapted to directly experienced outcomes, whereas lateral frontal and temporoparietal regions adapted to observed social outcomes. Critically, in both cases, adaptation was incomplete and information about whether the outcomes arose in a gain block or a loss block was retained. Univariate analysis confirmed incomplete adaptive coding in these regions but also detected nonadapting outcome signals. Thus, although neural areas rescale their responses to outcomes for efficient coding, they adapt incompletely and keep track of the longer-term incentives available in the environment. SIGNIFICANCE STATEMENT Optimal value-based choice requires that the brain precisely and efficiently represents positive and negative outcomes. One way to increase efficiency is to adapt responding to the most likely outcomes in a given context. However, too strong adaptation would result

  6. Voronoi Diagram Based Optimization of Dynamic Reactive Power Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Weihong; Sun, Kai; Qi, Junjian

    2015-01-01

    Dynamic var sources can effectively mitigate fault-induced delayed voltage recovery (FIDVR) issues or even voltage collapse. This paper proposes a new approach to optimizing the sizes of dynamic var sources at candidate locations using a Voronoi diagram based algorithm. It first disperses sample points of potential solutions in a searching space, evaluates a cost function at each point by barycentric interpolation over the subspaces around the point, and then constructs a Voronoi diagram of cost function values over the entire space. Accordingly, the final optimal solution can be obtained. Case studies on the WSCC 9-bus system and the NPCC 140-bus system have validated that the new approach can quickly identify the boundary of feasible solutions in the searching space and converge to the global optimal solution.
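
    A stripped-down sketch of the sampling idea (the cost function is a hypothetical stand-in; the barycentric interpolation and power-system details are omitted): sample points, evaluate the cost, then draw new candidates from the Voronoi cell of the incumbent best point, using the fact that a point lies in generator i's cell exactly when i is its nearest generator.

        import numpy as np

        def cost(x):                     # hypothetical stand-in for the var-planning cost
            return float(np.sum((x - 0.3) ** 2))

        rng = np.random.default_rng(1)
        pts = rng.uniform(0, 1, size=(40, 2))        # initial samples in the search space
        for _ in range(20):
            best = min(pts, key=cost)
            cand = rng.uniform(0, 1, size=(200, 2))
            d_best = np.linalg.norm(cand - best, axis=1)
            d_all = np.linalg.norm(cand[:, None, :] - pts[None, :, :], axis=2).min(axis=1)
            in_cell = cand[d_best <= d_all + 1e-12]  # candidates in the best point's cell
            pts = np.vstack([pts, in_cell[:5]])      # refine around the incumbent
        print("best point found:", min(pts, key=cost))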

  7. Optimal plane change during constant altitude hypersonic flight

    NASA Technical Reports Server (NTRS)

    Mease, K. D.; Vinh, N. X.; Kuo, S. H.

    1988-01-01

    Future spacecraft operating in the vicinity of the earth may resort to the atmosphere as an aid in effecting orbital change. While a previous treatment of this technique chose constant altitude, speed, and angle-of-attack values in order to maximize the plane change for a fixed amount of propellant consumed during hypersonic flight, the latter two parameters are presently released from the constraint of constancy. The general characteristics of the optimal controls are described on the basis of the domain of maneuverability, and numerical solutions are obtained for several specific cases. Under the condition of constant-altitude flight, it is generally not optimal to fly at constant angle of attack.

  8. Optimization design and analysis of the pavement planer scraper structure

    NASA Astrophysics Data System (ADS)

    Fang, Yuanbin; Sha, Hongwei; Yuan, Dajun; Xie, Xiaobing; Yang, Shibo

    2018-03-01

    Using LS-DYNA, a finite element model of the road milling machine scraper is established and a dynamic simulation analysis is performed. Through optimization of the scraper structure and the scraper angles, the optimal structure of the milling machine scraper is obtained, and the simulation results are verified. The results show that the improved scraper structure places the cemented carbide in the front part of the scraper substrate; compared with the working resistance before the improvement, the resistance becomes smoother and its peak value smaller. The cutting front angle and the cutting back angle are optimized to 6 degrees and 9 degrees, respectively, at which the resultant of the working resistance and the impact force is smallest. This confirms the accuracy of the simulation results and provides guidance for further optimization work.

  9. Groundwater Pollution Source Identification using Linked ANN-Optimization Model

    NASA Astrophysics Data System (ADS)

    Ayaz, Md; Srivastava, Rajesh; Jain, Ashu

    2014-05-01

    values. The main advantage of the proposed model is that it requires only upper half of the breakthrough curve and is capable of predicting source parameters when the lag time is not known. Linking of ANN model with proposed optimization model reduces the dimensionality of the decision variables of the optimization model by one and hence complexity of optimization model is reduced. The results show that our proposed linked ANN-Optimization model is able to predict the source parameters for the error-free data accurately. The proposed model was run several times to obtain the mean, standard deviation and interval estimate of the predicted parameters for observations with random measurement errors. It was observed that mean values as predicted by the model were quite close to the exact values. An increasing trend was observed in the standard deviation of the predicted values with increasing level of measurement error. The model appears to be robust and may be efficiently utilized to solve the inverse pollution source identification problem.

  10. Determination of thermodynamic values of acidic dissociation constants and complexation constants of profens and their utilization for optimization of separation conditions by Simul 5 Complex.

    PubMed

    Riesová, Martina; Svobodová, Jana; Ušelová, Kateřina; Tošner, Zdeněk; Zusková, Iva; Gaš, Bohuslav

    2014-10-17

    In this paper we determine the acid dissociation constants, limiting ionic mobilities, complexation constants with β-cyclodextrin or heptakis(2,3,6-tri-O-methyl)-β-cyclodextrin, and mobilities of the resulting complexes of profens, using capillary zone electrophoresis and affinity capillary electrophoresis. Complexation parameters are determined for both the neutral and the fully charged forms of the profens and are further corrected for actual ionic strength and variable viscosity in order to obtain thermodynamic values of the complexation constants. The accuracy of the obtained complexation parameters is verified by multidimensional nonlinear regression of the affinity capillary electrophoresis data, which provides the acid dissociation and complexation parameters within one set of measurements, and by NMR. Good agreement among all discussed methods was obtained. The determined complexation parameters were used as input parameters for simulations of the electrophoretic separation of profens by Simul 5 Complex. Excellent agreement between experimental and simulated results was achieved in terms of the positions, shapes, and amplitudes of the analyte peaks, confirming the applicability of Simul 5 Complex to complex systems and the accuracy of the obtained physical-chemical constants. Simultaneously, we were able to demonstrate the influence of electromigration dispersion on the separation efficiency, which is not possible with common theoretical approaches, and to predict electromigration order reversals of the profen peaks. We have shown that the determined acid dissociation and complexation parameters, in combination with the Simul 5 Complex software, can be used for the optimization of separation conditions in capillary electrophoresis. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Sitting biomechanics, part II: optimal car driver's seat and optimal driver's spinal model.

    PubMed

    Harrison, D D; Harrison, S O; Croft, A C; Harrison, D E; Troyanovich, S J

    2000-01-01

    Driving has been associated with signs and symptoms caused by vibrations. Sitting causes the pelvis to rotate backwards and the lumbar lordosis to reduce. Lumbar support and armrests reduce disc pressure and electromyographically recorded values. However, the ideal driver's seat and an optimal seated spinal model have not been described. The objective was to determine an optimal automobile seat and an ideal spinal model of a driver. Information was obtained from peer-reviewed scientific journals and texts, automotive engineering reports, and the National Library of Medicine. Driving predisposes vehicle operators to low-back pain and degeneration. The optimal seat would have an adjustable seat back incline of 100 degrees from horizontal, a changeable depth of seat back to front edge of seat bottom, adjustable height, an adjustable seat bottom incline, firm (dense) foam in the seat bottom cushion, horizontally and vertically adjustable lumbar support, adjustable bilateral armrests, an adjustable head restraint with lordosis pad, seat shock absorbers to dampen frequencies in the 1 to 20 Hz range, and linear front-back travel of the seat enabling drivers of all sizes to reach the pedals. The lumbar support should be pulsating in depth to reduce static load. The seat back should be damped to reduce rebounding of the torso in rear-end impacts. The optimal driver's spinal model would be the average Harrison model with the seat back inclined 10 degrees posteriorly.

  12. Optimization of intermittent microwave–convective drying using response surface methodology

    PubMed Central

    Aghilinategh, Nahid; Rafiee, Shahin; Hosseinpur, Soleiman; Omid, Mahmoud; Mohtasebi, Seyed Saeid

    2015-01-01

    In this study, response surface methodology was used for the optimization of intermittent microwave–convective air drying (IMWC) parameters, employing a desirability function. The optimization factors were air temperature (40–80°C), air velocity (1–2 m/sec), pulse ratio (PR) (2–6), and microwave power (200–600 W), while the responses were rehydration ratio, bulk density, total phenol content (TPC), color change, and energy consumption. Minimum color change, bulk density, and energy consumption, together with maximum rehydration ratio and TPC, were taken as the criteria for optimizing the drying conditions of apple slices in IMWC. The optimum values of the process variables were an air velocity of 1.78 m/sec, an air temperature of 40°C, a PR of 4.48, and a microwave power of 600 W, characterized by the maximum desirability value (0.792) obtained using Design-Expert 8.0. The air temperature and microwave power had a significant effect on the total responses, whereas the role of air velocity was negligible. Generally, the results indicated that a higher desirability value could be obtained by increasing the microwave power and decreasing the temperature. PMID:26286706

  13. Methodology of Numerical Optimization for Orbital Parameters of Binary Systems

    NASA Astrophysics Data System (ADS)

    Araya, I.; Curé, M.

    2010-02-01

    The use of a numerical maximization (or minimization) method in optimization processes allows us to obtain a great number of solutions. We can therefore find a global maximum or minimum of the problem, but only if a suitable methodology is used. To obtain the globally optimal values, we use the genetic algorithm PIKAIA (P. Charbonneau) and four other algorithms implemented in Mathematica. We demonstrate that orbital parameters of binary systems published in some papers, based on radial velocity measurements, are local minima instead of global ones.

  14. Optimization and application of influence function in abrasive jet polishing.

    PubMed

    Li, Zhaoze; Li, Shengyi; Dai, Yifan; Peng, Xiaoqiang

    2010-05-20

    We analyze the material removal mechanism of abrasive jet polishing (AJP) technology based on fluid impact dynamics theory. Combining computational fluid dynamics simulation with process experiments, influence functions at different impingement angles are obtained; these are not of a regular Gaussian shape and are unfit for the corrective figuring of optics. The influence function is then optimized to an ideal Gaussian shape by rotating the oblique nozzle, and its stability is validated through a line scanning experiment: the fluctuation of the influence function can be controlled within ±5%. On this basis, we build a computer numerically controlled experimental system for AJP, and a flat BK7 optical glass with a diameter of 20 mm is polished. After two iterations of polishing, the peak-to-valley value decreases from 1.43λ (λ = 632.8 nm in this paper) to 0.294λ, and the rms value decreases from 0.195λ to 0.029λ. The roughness of the polished surface is within 2 nm. The experimental results indicate that the optimized influence function is suitable for precision optics figuring and polishing.

  15. Ship Trim Optimization: Assessment of Influence of Trim on Resistance of MOERI Container Ship

    PubMed Central

    Duan, Wenyang

    2014-01-01

    Environmental issues and rising fuel prices necessitate better energy efficiency in all sectors. The shipping industry is a stakeholder here, being responsible for approximately 3% of global CO2 emissions, 14-15% of global NOX emissions, and 16% of global SOX emissions. Ship trim optimization has gained enormous momentum in recent years as an effective operational measure for better energy efficiency and reduced emissions. Trim optimization analysis has traditionally been done through tow-tank testing for a specific hullform, but computational techniques are increasingly popular in ship hydrodynamics applications. The purpose of this study is to present a trim optimization of the MOERI container ship (KCS) hull by employing computational methods. Computed total resistance, trim, and sinkage values for the KCS hull in the even-keel condition are compared with experimental values and found to be in reasonable agreement, validating the mesh, boundary conditions, and solution techniques. The same mesh, boundary conditions, and solution techniques are then used to obtain resistance values in different trim conditions at Fn = 0.2274. Based on the attained results, an optimum trim is suggested. This research serves as a foundation for employing computational techniques in ship trim optimization. PMID:24578649

  16. An optimal open/closed-loop control method with application to a pre-stressed thin duralumin plate

    NASA Astrophysics Data System (ADS)

    Nadimpalli, Sruthi Raju

    The suppression of excessive vibrations of a pre-stressed duralumin plate by a combination of open-loop and closed-loop controls, also known as open/closed-loop control, is studied in this thesis. The two primary steps involved are: Step (I), assuming that the closed-loop control law is proportional, obtain the optimal open-loop control by direct minimization, via calculus of variations, of a performance measure consisting of the energy at terminal time and a penalty on the open-loop control force; if the performance measure also involves a penalty on the closed-loop control effort, a Fourier-based method is utilized. Step (II), minimize the energy at terminal time numerically to obtain the optimal values of the feedback gains. The optimal closed-loop control gains obtained are used to describe the displacement and velocity of the open-loop, closed-loop and open/closed-loop controlled duralumin plate.

  17. Sequentially Integrated Optimization of the Conditions to Obtain a High-Protein and Low-Antinutritional Factors Protein Isolate from Edible Jatropha curcas Seed Cake.

    PubMed

    León-López, Liliana; Dávila-Ortiz, Gloria; Jiménez-Martínez, Cristian; Hernández-Sánchez, Humberto

    2013-01-01

    Jatropha curcas seed cake is a protein-rich byproduct of oil extraction which could be used to produce protein isolates. The purpose of this study was the optimization of the protein isolation process from the seed cake of an edible provenance of J. curcas, by alkaline extraction followed by isoelectric precipitation, via a sequentially integrated optimization approach. The influence of four different factors (solubilization pH, extraction temperature, NaCl addition, and precipitation pH) on the protein and antinutritional compound content of the isolate was evaluated. The estimated optimal conditions were an extraction temperature of 20°C, a precipitation pH of 4, and an amount of NaCl in the extraction solution of 0.6 M, for a predicted protein content of 93.3%. Under these conditions, it was possible to obtain experimentally a protein isolate with 93.21% protein, 316.5 mg 100 g(-1) of total phenolics, 2891.84 mg 100 g(-1) of phytates and 168 mg 100 g(-1) of saponins. The protein content of this isolate was higher than that reported by other authors.

  18. Sequentially Integrated Optimization of the Conditions to Obtain a High-Protein and Low-Antinutritional Factors Protein Isolate from Edible Jatropha curcas Seed Cake

    PubMed Central

    León-López, Liliana; Dávila-Ortiz, Gloria; Jiménez-Martínez, Cristian; Hernández-Sánchez, Humberto

    2013-01-01

    Jatropha curcas seed cake is a protein-rich byproduct of oil extraction which could be used to produce protein isolates. The purpose of this study was the optimization of the protein isolation process from the seed cake of an edible provenance of J. curcas by alkaline extraction followed by isoelectric precipitation, via a sequentially integrated optimization approach. The influence of four different factors (solubilization pH, extraction temperature, NaCl addition, and precipitation pH) on the protein and antinutritional compound content of the isolate was evaluated. The estimated optimal conditions were an extraction temperature of 20°C, a precipitation pH of 4, and a NaCl concentration in the extraction solution of 0.6 M, for a predicted protein content of 93.3%. Under these conditions, it was possible to obtain experimentally a protein isolate with 93.21% protein, 316.5 mg 100 g−1 of total phenolics, 2891.84 mg 100 g−1 of phytates and 168 mg 100 g−1 of saponins. The protein content of this isolate was higher than the content reported by other authors. PMID:25937971

  19. A tight upper bound for quadratic knapsack problems in grid-based wind farm layout optimization

    NASA Astrophysics Data System (ADS)

    Quan, Ning; Kim, Harrison M.

    2018-03-01

    The 0-1 quadratic knapsack problem (QKP) in wind farm layout optimization models possible turbine locations as nodes, and power loss due to wake effects between pairs of turbines as edges in a complete graph. The goal is to select up to a certain number of turbine locations such that the sum of selected node and edge coefficients is maximized. Finding the optimal solution to the QKP is difficult in general, but it is possible to obtain a tight upper bound on the QKP's optimal value, which facilitates the use of heuristics to solve QKPs by giving a good estimate of the optimality gap of any feasible solution. This article applies an upper bound method that is especially well suited to QKPs in wind farm layout optimization due to certain features of the formulation that reduce the computational complexity of calculating the upper bound. The usefulness of the upper bound was demonstrated by assessing the performance of the greedy algorithm for solving QKPs in wind farm layout optimization. The results show that the greedy algorithm produces good solutions within 4% of the optimal value for the small- to medium-sized problems considered in this article.
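
    As an illustration of the heuristic the article evaluates (not the authors' upper-bound computation), the following minimal Python sketch greedily selects up to k turbine locations by marginal gain; the node/edge data layout is an assumption made for illustration.

        import numpy as np

        def greedy_qkp(node, edge, k):
            """Greedily pick up to k turbine locations to maximize the sum of
            selected node coefficients plus edge coefficients between pairs.
            node: (n,) array; edge: (n, n) symmetric array with zero diagonal."""
            n = len(node)
            selected, remaining = [], set(range(n))
            for _ in range(min(k, n)):
                best_j, best_gain = None, -np.inf
                for j in remaining:
                    # marginal gain of adding j: its node value plus edges to picks
                    gain = node[j] + sum(edge[j, i] for i in selected)
                    if gain > best_gain:
                        best_j, best_gain = j, gain
                if best_gain <= 0 and selected:
                    break  # no improving addition left
                selected.append(best_j)
                remaining.remove(best_j)
            value = sum(node[i] for i in selected) + sum(
                edge[i, j] for a, i in enumerate(selected) for j in selected[a + 1:])
            return selected, value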

  20. Chemometric optimization of the robustness of the near infrared spectroscopic method in wheat quality control.

    PubMed

    Pojić, Milica; Rakić, Dušan; Lazić, Zivorad

    2015-01-01

    A chemometric approach was applied to the optimization of the robustness of the NIRS method for wheat quality control. Due to the high number of experimental variables (n=6) and response variables (n=7), the optimization experiment was divided into two stages: a screening stage to evaluate which of the considered variables were significant, and an optimization stage to optimize the identified factors within the previously selected experimental domain. The significant variables were identified using a fractional factorial experimental design, whilst a Box-Wilson rotatable central composite design (CCRD) was run to obtain the optimal values for the significant variables. The measured responses included moisture, protein and wet gluten content, Zeleny sedimentation value and deformation energy. In order to achieve minimal variation in the responses, the optimal factor settings were found by minimizing the propagation of error (POE). The simultaneous optimization of factors was conducted by a desirability function. The highest desirability of 87.63% was accomplished by setting up the experimental conditions as follows: 19.9°C for sample temperature, 19.3°C for ambient temperature and 240 V for instrument voltage. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Globally optimal superconducting magnets part I: minimum stored energy (MSE) current density map.

    PubMed

    Tieng, Quang M; Vegh, Viktor; Brereton, Ian M

    2009-01-01

    An optimal current density map is crucial in magnet design to provide the initial values within search spaces in an optimization process for determining the final coil arrangement of the magnet. A strategy for obtaining globally optimal current density maps for the purpose of designing magnets with coaxial cylindrical coils, in which the stored energy is minimized within a constrained domain, is outlined. The current density maps obtained using the proposed method suggest that peak current densities occur around the perimeter of the magnet domain, where the adjacent peaks have alternating current directions for the most compact designs. As the dimensions of the domain are increased, the current density maps yield traditional magnet designs of positive current alone. These unique current density maps are obtained by minimizing the stored magnetic energy cost function and therefore suggest magnet coil designs of minimal system energy. Current density maps are provided for a number of different domain arrangements to illustrate the flexibility of the method and the quality of the achievable designs.

  2. Subsurface water parameters: optimization approach to their determination from remotely sensed water color data.

    PubMed

    Jain, S C; Miller, J R

    1976-04-01

    A method, using an optimization scheme, has been developed for the interpretation of spectral albedo (or spectral reflectance) curves obtained from remotely sensed water color data. The method uses a two-flow model of the radiation flow and solves for the albedo. Optimization fitting of predicted to observed reflectance data is performed by a quadratic interpolation method for the variables chlorophyll concentration and scattering coefficient. The technique is applied to airborne water color data obtained from the Kawartha Lakes, the Sargasso Sea, and the Nova Scotia coast. The modeled spectral albedo curves are compared to those obtained experimentally, and the computed optimum water parameters are compared to ground truth values. It is shown that the backscattered spectral signal contains information that can be interpreted to give quantitative estimates of the chlorophyll concentration and turbidity in the waters studied.

  3. Design and Optimization Method of a Two-Disk Rotor System

    NASA Astrophysics Data System (ADS)

    Huang, Jingjing; Zheng, Longxi; Mei, Qing

    2016-04-01

    An integrated analytical method based on the multidisciplinary optimization software Isight and the general finite element software ANSYS is proposed in this paper. Firstly, a two-disk rotor system was established and its modes, harmonic response and transient response under acceleration conditions were analyzed with ANSYS, yielding the dynamic characteristics of the system. On this basis, the two-disk rotor model was integrated into Isight. According to the design of experiments (DOE) and the dynamic characteristics, the optimization variables, optimization objectives and constraints were confirmed. After that, the multi-objective design optimization of the transient process was carried out with three different global optimization algorithms: Evolutionary Optimization Algorithm, Multi-Island Genetic Algorithm and Pointer Automatic Optimizer. The optimum position of the two-disk rotor system was obtained under the specified constraints. Meanwhile, the accuracy and the number of evaluations of the different optimization algorithms were compared. The optimization results indicated that the rotor vibration reached its minimum value while meeting the design requirements, and that multidisciplinary design optimization improved design efficiency and quality, providing a reference for improving the design efficiency and reliability of aero-engine rotors.

  4. A Taguchi approach on optimal process control parameters for HDPE pipe extrusion process

    NASA Astrophysics Data System (ADS)

    Sharma, G. V. S. S.; Rao, R. Umamaheswara; Rao, P. Srinivasa

    2017-06-01

    High-density polyethylene (HDPE) pipes find versatile applicability for the transportation of water, sewage and slurry from one place to another, and hence undergo tremendous pressure from the fluid carried. The present work entails the optimization of the withstanding pressure of HDPE pipes using the Taguchi technique. The traditional heuristic methodology relies on a trial-and-error approach and on the accumulated experience of process engineers for determining the optimal process control parameters, which results in less-than-optimal settings. Hence, there arises a need to determine optimal process control parameters for the pipe extrusion process that can ensure robust pipe quality and process reliability. In the proposed optimization strategy, designed experiments (DoE) are conducted in which different control parameter combinations are analyzed by considering multiple setting levels of each control parameter. The concept of the signal-to-noise ratio (S/N ratio) is applied, and the optimum values of the process control parameters are obtained as: a pushing zone temperature of 166 °C, a dimmer speed of 8 rpm, and a die head temperature of 192 °C. A confirmation experimental run was also conducted to verify the analysis, and its results proved to be in agreement with the main experimental findings: the withstanding pressure showed a significant improvement, from 0.60 to 1.004 MPa.
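
    The larger-the-better S/N ratio used in Taguchi analysis of a maximized response such as withstanding pressure has the standard form S/N = -10 log10((1/n) Σ 1/y_i²); a minimal sketch follows, with hypothetical replicate values.

        import numpy as np

        def sn_larger_is_better(y):
            """Taguchi larger-the-better signal-to-noise ratio, in dB."""
            y = np.asarray(y, dtype=float)
            return -10.0 * np.log10(np.mean(1.0 / y**2))

        # Hypothetical withstanding-pressure replicates (MPa) for one DoE run:
        print(sn_larger_is_better([0.98, 1.01, 1.00]))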

  5. Optimal waist circumference cut-off values for predicting cardiovascular risk factors in a multi-ethnic Malaysian population.

    PubMed

    Cheong, Kee C; Ghazali, Sumarni M; Hock, Lim K; Yusoff, Ahmad F; Selvarajah, Sharmini; Haniff, Jamaiyah; Zainuddin, Ahmad Ali; Ying, Chan Y; Lin, Khor G; Rahman, Jamalludin A; Shahar, Suzana; Mustafa, Amal N

    2014-01-01

    Previous studies have proposed that lower waist circumference (WC) cutoffs be used for defining abdominal obesity in Asian populations. The aim of this study was to determine the optimal WC cut-offs for predicting cardiovascular (CV) risk factors in the multi-ethnic Malaysian population. We analysed data from 32,703 respondents (14,980 men and 17,723 women) aged 18 years and above who participated in the Third National Health and Morbidity Survey in 2006. Gender-specific logistic regression analyses were used to examine associations between WC and three CV risk factors (diabetes mellitus, hypertension, and hypercholesterolemia). Receiver Operating Characteristic (ROC) curves were used to determine the cut-off values of WC with optimum sensitivity and specificity for detecting these CV risk factors. The odds ratio for having diabetes mellitus, hypertension, hypercholesterolemia, or at least one of these risks increased significantly as the WC cut-off point increased. Optimal WC cut-off values for predicting the presence of diabetes mellitus, hypertension, hypercholesterolemia and at least one of the three CV risk factors varied from 81.4 to 85.5 cm for men and 79.8 to 80.7 cm for women. Our findings indicate that WC cut-offs of 81 cm for men and 80 cm for women are appropriate for defining abdominal obesity and for recommending cardiovascular risk screening and weight management in the Malaysian adult population. © 2014 Asian Oceanian Association for the Study of Obesity. Published by Elsevier Ltd. All rights reserved.
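
    The abstract does not state the exact cut-off criterion; Youden's J (sensitivity + specificity - 1) is one common way to pick the ROC point with optimum sensitivity and specificity. A minimal sketch, assuming binary risk-factor labels:

        import numpy as np

        def optimal_cutoff_youden(values, labels):
            """Scan candidate cutoffs and return the one maximizing
            Youden's J = sensitivity + specificity - 1.
            values: continuous measurements (e.g. WC in cm);
            labels: 1 = risk factor present, 0 = absent."""
            values = np.asarray(values, dtype=float)
            labels = np.asarray(labels, dtype=int)
            best_cut, best_j = None, -1.0
            for cut in np.unique(values):
                pred = values >= cut
                sens = np.mean(pred[labels == 1])
                spec = np.mean(~pred[labels == 0])
                j = sens + spec - 1.0
                if j > best_j:
                    best_cut, best_j = cut, j
            return best_cut, best_j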

  6. An artificial system for selecting the optimal surgical team.

    PubMed

    Saberi, Nahid; Mahvash, Mohsen; Zenati, Marco

    2015-01-01

    We introduce an intelligent system that optimizes team composition based on a team's historical outcomes, and apply this system to compose a surgical team. The system relies on a record of procedures performed in the past. The optimal team composition is the one with the lowest probability of an unfavorable outcome. We use probability theory and the inclusion-exclusion principle to model the probability of a team's outcome for a given composition. A probability value is assigned to each person in the database, and the probability of a team composition is calculated from these values. The model makes it possible to determine the probability of all possible team compositions, even when no recorded procedure exists for some of them. From an analytical perspective, assembling an optimal team is equivalent to minimizing the overlap of team members who have a recurring tendency to be involved in procedures with unfavorable results. A conceptual example demonstrates the accuracy of the proposed system in obtaining the optimal team.
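
    Under the simplifying assumption (made here for illustration, not taken from the article) that each member contributes an independent probability of an unfavorable outcome, the inclusion-exclusion principle gives the probability that a composition experiences at least one unfavorable event. A sketch of that calculation and of the exhaustive search over compositions:

        from itertools import combinations
        from math import prod

        def team_unfavorable_probability(p):
            """Probability that at least one member contributes an unfavorable
            outcome, via inclusion-exclusion over independent probabilities p."""
            n = len(p)
            total = 0.0
            for k in range(1, n + 1):
                sign = (-1) ** (k + 1)
                total += sign * sum(prod(p[i] for i in c)
                                    for c in combinations(range(n), k))
            return total  # equals 1 - prod(1 - p_i) under independence

        def best_team(person_probs, team_size):
            """Composition minimizing the modeled unfavorable-outcome probability."""
            teams = combinations(range(len(person_probs)), team_size)
            return min(teams, key=lambda t: team_unfavorable_probability(
                [person_probs[i] for i in t]))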

  7. Optimal experimental design for placement of boreholes

    NASA Astrophysics Data System (ADS)

    Padalkina, Kateryna; Bücker, H. Martin; Seidler, Ralf; Rath, Volker; Marquart, Gabriele; Niederau, Jan; Herty, Michael

    2014-05-01

    Drilling for deep resources is an expensive endeavor, and finding the optimal drilling location for boreholes is one of the challenging questions. We contribute to this discussion by using a simulation-based assessment of possible future borehole locations, studying the problem of finding a new borehole location in a given geothermal reservoir as a numerical optimization problem. In a geothermal reservoir, the temporal and spatial distribution of temperature and hydraulic pressure may be simulated using the coupled differential equations for heat transport and for mass and momentum conservation of Darcy flow. Within this model, the permeability and thermal conductivity depend on the geological layers present in the subsurface model of the reservoir. In general, those values involve some uncertainty, making it difficult to predict the actual heat source in the ground. Within optimal experimental design, the question is at which location, and to which depth, to drill the borehole in order to estimate conductivity and permeability with minimal uncertainty. We introduce a measure of this uncertainty based on the Fisher information matrix of temperature data obtained through simulations of the coupled differential equations, assuming that temperature data are available along the full borehole. A minimization of the measure representing the uncertainty in the unknown permeability and conductivity parameters is performed to determine the optimal borehole location. We present the theoretical framework as well as numerical results for several 2D subsurface models including up to six geological layers. Also, the effect of unknown layers on the introduced measure is studied. Finally, to obtain a more realistic estimate of optimal borehole locations, we couple the optimization to a cost model for deep drilling problems.
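
    A minimal sketch of the kind of Fisher-information measure described, assuming unit measurement noise and using the D-optimality criterion (log-determinant) as the scalar uncertainty measure; the paper's exact measure may differ.

        import numpy as np

        def d_optimality(jacobian):
            """log-determinant of the Fisher information J^T J (unit noise),
            where jacobian[i, k] = d T_i / d theta_k: sensitivity of the i-th
            borehole temperature sample to the k-th unknown parameter
            (e.g. a layer permeability or conductivity)."""
            fim = jacobian.T @ jacobian
            sign, logdet = np.linalg.slogdet(fim)
            return logdet if sign > 0 else -np.inf

        def best_location(candidate_jacobians):
            """Pick the candidate borehole whose simulated sensitivities
            maximize D-optimality, i.e. minimize parameter uncertainty."""
            scores = [d_optimality(J) for J in candidate_jacobians]
            return int(np.argmax(scores))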

  8. Combined mixture-process variable approach: a suitable statistical tool for nanovesicular systems optimization.

    PubMed

    Habib, Basant A; AbouGhaly, Mohamed H H

    2016-06-01

    This study aims to illustrate the applicability of a combined mixture-process variable (MPV) design and modeling approach for the optimization of nanovesicular systems. The D-optimal experimental plan studied the influence of three mixture components (MCs) and two process variables (PVs) on lercanidipine transfersomes. The MCs were phosphatidylcholine (A), sodium glycocholate (B) and lercanidipine hydrochloride (C), while the PVs were the glycerol amount in the hydration mixture (D) and the sonication time (E). The studied responses were Y1: particle size, Y2: zeta potential and Y3: entrapment efficiency percent (EE%). Polynomial equations were used to study the influence of the MCs and PVs on each response. Response surface methodology and multiple response optimization were applied to optimize the formulation, with the goals of minimizing Y1 and maximizing Y2 and Y3. The obtained polynomial models had prediction R(2) values of 0.645, 0.947 and 0.795 for Y1, Y2 and Y3, respectively. Contour, Piepel's response trace, perturbation, and interaction plots were drawn to represent the responses. The optimized formulation (A: 265 mg, B: 10 mg, C: 40 mg, D: zero g and E: 120 s) had a desirability of 0.9526. The actual response values for the optimized formulation were within the two-sided 95% prediction intervals and were close to the predicted values, with a maximum percent deviation of 6.2%. This indicates the validity of combined MPV design and modeling for the optimization of transfersomal formulations as an example of nanovesicular systems.

  9. Artificial neural network modeling and optimization of ultrahigh pressure extraction of green tea polyphenols.

    PubMed

    Xi, Jun; Xue, Yujing; Xu, Yinxiang; Shen, Yuhong

    2013-11-01

    In this study, the ultrahigh pressure extraction of green tea polyphenols was modeled and optimized by a three-layer artificial neural network. A feed-forward neural network trained with an error back-propagation algorithm was used to evaluate the effects of pressure, liquid/solid ratio and ethanol concentration on the total phenolic content of green tea extracts. The neural network coupled with genetic algorithms was also used to optimize the conditions needed to obtain the highest yield of tea polyphenols. The optimal architecture of the artificial neural network model was a feed-forward network with three input neurons, one hidden layer with eight neurons and an output layer with a single neuron. The trained network gave a minimum MSE of 0.03 and a maximum R(2) of 0.9571, which implied good agreement between the predicted and actual values and confirmed good generalization of the network. Based on the combination of the neural network and genetic algorithms, the optimum extraction conditions for the highest yield of green tea polyphenols were determined as follows: 498.8 MPa for pressure, 20.8 mL/g for liquid/solid ratio and 53.6% for ethanol concentration. The total phenolic content measured under the optimum predicted extraction conditions was 582.4 ± 0.63 mg/g DW, which matched the predicted value (597.2 mg/g DW) well. This suggests that the artificial neural network model described in this work is an efficient quantitative tool for predicting the extraction efficiency of green tea polyphenols. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
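
    A minimal sketch of the 3-8-1 feed-forward architecture using scikit-learn; the training data below are hypothetical placeholders, not the study's measurements.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Hypothetical design matrix: [pressure (MPa), liquid/solid (mL/g), ethanol (%)]
        X = np.array([[300, 15, 40], [400, 20, 50],
                      [500, 25, 60], [450, 18, 55]], dtype=float)
        y = np.array([450.0, 520.0, 570.0, 560.0])  # phenolic content, made up

        # 3 inputs -> one hidden layer of 8 neurons -> 1 output;
        # gradients are computed by back-propagation during fitting.
        model = MLPRegressor(hidden_layer_sizes=(8,), activation='logistic',
                             solver='lbfgs', max_iter=5000, random_state=0)
        model.fit(X, y)
        print(model.predict([[498.8, 20.8, 53.6]]))  # query the reported optimum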

  10. Contract portfolio optimization for a gasoline supply chain

    NASA Astrophysics Data System (ADS)

    Wang, Shanshan

    In this model, we characterize a simple and easily implementable dynamic contract portfolio policy that would enable the company to dynamically rebalance its supply contract portfolio over time in anticipation of future market conditions in each individual channel while satisfying the contractual obligations. The optimal policy is a state-dependent base-share contract portfolio policy characterized by a branded base-share level and an unbranded contract commitment combination, given as a function of the initial information state. Using real-world market data, we estimate the model parameters. We also apply an efficient modified policy iteration method to compute the optimal contract portfolio strategies and the corresponding profit value. We present computational results in order to obtain insights into the structure of optimal policies, capture the value of the dynamic contract portfolio policy by comparing it with static policies, and illustrate the sensitivity of the optimal contract portfolio and corresponding profit value to the different parameters. Considering the geographic dispersion of different market areas and the pipeline network together with the dynamic contract portfolio optimization problem, we formulate a forward-looking operational model that could be used by gasoline suppliers for lower-level planning. Finally, we discuss the generalization of the framework to other problems and applications, as well as further research.

  11. Value function in economic growth model

    NASA Astrophysics Data System (ADS)

    Bagno, Alexander; Tarasyev, Alexandr A.; Tarasyev, Alexander M.

    2017-11-01

    Properties of the value function are examined in an infinite-horizon optimal control problem with an unbounded integrand appearing in the quality functional with a discount factor. Optimal control problems of this type describe solutions in models of economic growth. Necessary and sufficient conditions are derived to ensure that the value function satisfies the infinitesimal stability properties. It is proved that the value function coincides with the minimax solution of the Hamilton-Jacobi equation. A description of the asymptotic growth behavior of the value function is provided for logarithmic, power and exponential quality functionals, and an example is given to illustrate the construction of the value function in economic growth models.

  12. Modeling and optimization of proton-conducting solid oxide electrolysis cell: Conversion of CO2 into value-added products

    NASA Astrophysics Data System (ADS)

    Namwong, Lawit; Authayanun, Suthida; Saebea, Dang; Patcharavorachot, Yaneeporn; Arpornwichanop, Amornchai

    2016-11-01

    Proton-conducting solid oxide electrolysis cells (SOEC-H+) are a promising technology that can utilize carbon dioxide to produce syngas. In this work, a detailed electrochemical model was developed to predict the behavior of SOEC-H+ and to prove the assumption that the syngas is produced through a reversible water gas-shift (RWGS) reaction. The simulation results obtained from the model, which took into account all of the cell voltage losses (i.e., ohmic, activation, and concentration losses), were validated using experimental data to evaluate the unknown parameters. The developed model was employed to examine the structural and operational parameters. It is found that the cathode-supported SOEC-H+ is the best configuration because it requires the lowest cell potential. SOEC-H+ operated favorably at high temperatures and low pressures. Furthermore, the simulation results revealed that the optimal S/C molar ratio for syngas production, which can be used for methanol synthesis, is approximately 3.9 (at a constant temperature and pressure). The SOEC-H+ was optimized using a response surface methodology, which was used to determine the optimal operating conditions to minimize the cell potential and maximize the carbon dioxide flow rate.

  13. The value of value of information: best informing research design and prioritization using current methods.

    PubMed

    Eckermann, Simon; Karnon, Jon; Willan, Andrew R

    2010-01-01

    Value of information (VOI) methods have been proposed as a systematic approach to inform optimal research design and prioritization. Four related questions arise that VOI methods could address. (i) Is further research for a health technology assessment (HTA) potentially worthwhile? (ii) Is the cost of a given research design less than its expected value? (iii) What is the optimal research design for an HTA? (iv) How can research funding be best prioritized across alternative HTAs? Following Occam's razor, we consider the usefulness of VOI methods in informing questions 1-4 relative to their simplicity of use. Expected value of perfect information (EVPI) with current information, while simple to calculate, is shown to provide neither a necessary nor a sufficient condition to address question 1, given that the threshold EVPI needs to exceed varies with the cost of the research design, which can range from very large down to negligible. Hence, for any given HTA, EVPI does not discriminate: it can be large while further research is not worthwhile, or small while further research is worthwhile. In contrast, each of questions 1-4 is shown to be fully addressed (necessary and sufficient) where VOI methods are applied to maximize the expected value of sample information (EVSI) minus expected costs across designs. In comparing the complexity of VOI methods, applying the central limit theorem (CLT) simplifies analysis to enable easy estimation of EVSI and the optimal overall research design, and has been shown to outperform bootstrapping, particularly with small samples. Consequently, VOI methods applying the CLT to inform optimal overall research design satisfy Occam's razor in both improving decision making and reducing complexity. Furthermore, they enable consideration of relevant decision contexts, including option value and opportunity cost of delay, time, imperfect implementation and optimal design across jurisdictions. More complex VOI methods such as bootstrapping of the expected value of
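
    For concreteness, EVPI with current information is the expected value of deciding with perfect knowledge of the state minus the value of the best decision under current uncertainty; a minimal sketch with hypothetical net-benefit values:

        import numpy as np

        # Rows: states of the world (with prior probabilities);
        # columns: decision options. Net-benefit values are hypothetical.
        prob = np.array([0.3, 0.5, 0.2])
        net_benefit = np.array([[10.0, 12.0],
                                [ 8.0,  7.0],
                                [ 4.0,  9.0]])

        expected_per_option = prob @ net_benefit              # E[NB] per option
        value_current_info = expected_per_option.max()        # decide now
        value_perfect_info = prob @ net_benefit.max(axis=1)   # per-state best
        evpi = value_perfect_info - value_current_info
        print(evpi)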

  14. Optimization of scaffold design for bone tissue engineering: A computational and experimental study.

    PubMed

    Dias, Marta R; Guedes, José M; Flanagan, Colleen L; Hollister, Scott J; Fernandes, Paulo R

    2014-04-01

    In bone tissue engineering, the scaffold not only has to allow the diffusion of cells, nutrients and oxygen but also to provide adequate mechanical support. One way to ensure the scaffold has the right properties is to use computational tools to design it, coupled with additive manufacturing to build the scaffold to the resulting optimized design specifications. In this study a topology optimization algorithm is proposed as a technique to design scaffolds that meet specific requirements for mass transport and mechanical load bearing. Several micro-structures obtained computationally are presented. Designed scaffolds were then built using selective laser sintering, and the actual features of the fabricated scaffolds were measured and compared to the designed values. It was possible to obtain scaffolds with an internal geometry that reasonably matched the computational design (within 14% of the porosity target, 40% for strut size and 55% for throat size in the building direction, and 15% for strut size and 17% for throat size perpendicular to the building direction). These results support the use of this kind of computational algorithm to design optimized scaffolds with specific target properties and confirm the value of these techniques for bone tissue engineering. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.

  15. Sensitivity of NTCP parameter values against a change of dose calculation algorithm.

    PubMed

    Brink, Carsten; Berg, Martin; Nielsen, Morten

    2007-09-01

    Optimization of radiation treatment planning requires estimates of the normal tissue complication probability (NTCP). A number of models exist that estimate NTCP from a calculated dose distribution. Since different dose calculation algorithms use different approximations, the dose distributions predicted for a given treatment will in general depend on the algorithm. The purpose of this work is to test whether the optimal NTCP parameter values change significantly when the dose calculation algorithm is changed. The treatment plans of 17 breast cancer patients were retrospectively recalculated with a collapsed cone algorithm (CC) to compare the NTCP estimates for radiation pneumonitis with those obtained from the clinically used pencil beam algorithm (PB). For the PB calculations, the NTCP parameters were taken from previously published values for three different models. For the CC calculations, the parameters were fitted to give the same NTCP as the PB calculations. This paper demonstrates that significant shifts of the NTCP parameter values are observed for the three models, comparable in magnitude to the uncertainties of the published parameter values. Thus, it is important to quote the applied dose calculation algorithm when reporting estimates of NTCP parameters in order to ensure correct use of the models.

  16. Oral bioavailability enhancement of raloxifene by developing microemulsion using D-optimal mixture design: optimization and in-vivo pharmacokinetic study.

    PubMed

    Shah, Nirmal; Seth, Avinashkumar; Balaraman, R; Sailor, Girish; Javia, Ankur; Gohil, Dipti

    2018-04-01

    The objective of this work was to exploit the potential of microemulsions to improve the oral bioavailability of raloxifene hydrochloride, a BCS class II drug with 2% bioavailability. A drug-loaded microemulsion was prepared by the water titration method using Capmul MCM C8, Tween 20, and polyethylene glycol 400 as oil, surfactant, and co-surfactant, respectively. A pseudo-ternary phase diagram was constructed between the oil and the surfactant mixture to obtain appropriate components and concentration ranges giving a large microemulsion existence area. A D-optimal mixture design was utilized as a statistical tool for the optimization of the microemulsion, considering oil, Smix, and water as independent variables, with percentage transmittance and globule size as dependent variables. The optimized formulation showed 100 ± 0.1% transmittance and a globule size of 17.85 ± 2.78 nm, which closely matched the values of the dependent variables predicted by the design software. The optimized microemulsion showed a pronounced enhancement in release rate compared to a plain drug suspension, following a diffusion-controlled release mechanism according to the Higuchi model. The formulation showed a zeta potential of -5.88 ± 1.14 mV, which imparts good stability to the drug-loaded microemulsion dispersion. Surface morphology studies with a transmission electron microscope showed discrete, spherical, nano-sized globules with smooth surfaces. An in-vivo pharmacokinetic study of the optimized microemulsion formulation in Wistar rats showed a 4.29-fold enhancement in bioavailability. Stability studies showed adequate results for the various parameters checked over six months. These results reveal the potential of microemulsions for significantly improving the oral bioavailability of the poorly soluble raloxifene hydrochloride.

  17. Value versus Accuracy: application of seasonal forecasts to a hydro-economic optimization model for the Sudanese Blue Nile

    NASA Astrophysics Data System (ADS)

    Satti, S.; Zaitchik, B. F.; Siddiqui, S.; Badr, H. S.; Shukla, S.; Peters-Lidard, C. D.

    2015-12-01

    The unpredictable nature of precipitation within the East African (EA) region makes it one of the most vulnerable, food-insecure regions in the world. There is a vital need for forecasts to inform decision makers, both local and regional, and to help formulate the region's climate change adaptation strategies. Here, we present a suite of seasonal forecast models, both statistical and dynamical, for the EA region. Objective regionalization is performed for EA on the basis of interannual variability in precipitation in both observations and models. This regionalization is used as the basis for calculating a number of standard skill scores to evaluate each model's forecast accuracy. A dynamically linked Land Surface Model (LSM) is then applied to determine forecasted flows, which drive the Sudanese Hydroeconomic Optimization Model (SHOM). SHOM combines hydrologic, agronomic and economic inputs to determine the optimal decisions that maximize economic benefits along the Sudanese Blue Nile. This modeling sequence is designed to derive the potential added value of information of each forecasting model to agriculture and hydropower management. Each model's forecasting skill score is ranked alongside its added value of information in order to compare the performance of the forecasts. This research aims to improve understanding of how the accuracy, lead time, and uncertainty of seasonal forecasts influence their utility to the water resources decision makers who use them.

  18. Apparent diffusion coefficient in the analysis of prostate cancer: determination of optimal b-value pair to differentiate normal from malignant tissue.

    PubMed

    Adubeiro, Nuno; Nogueira, Maria Luísa; Nunes, Rita G; Ferreira, Hugo Alexandre; Ribeiro, Eduardo; La Fuente, José Maria Ferreira

    To determine the optimal b-value pair for differentiation between normal and prostate cancer (PCa) tissues. Forty-three patients with a diagnosis of PCa or PCa symptoms were included. The apparent diffusion coefficient (ADC) was estimated using minimum b-values of 0, 50, 100, 150, 200 and 500 s/mm2 and maximum b-values of 500, 800, 1100, 1400, 1700 and 2000 s/mm2, respectively. Diagnostic performance was evaluated for pairs with an area under the curve (AUC) above 95%; 15 of the 35 b-value pairs surpassed this threshold. The pair (50, 2000 s/mm2) provided the highest AUC (96%), with an ADC cutoff of 0.89×10-3 mm2/s, sensitivity of 95.5%, specificity of 93.2% and accuracy of 94.4%. The best b-value pair was b = 50, 2000 s/mm2. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Design Tool Using a New Optimization Method Based on a Stochastic Process

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio

    Conventional optimization methods are based on a deterministic approach, since their purpose is to find an exact solution. However, such methods depend on initial conditions and risk falling into local solutions. In this paper, we propose a new optimization method based on the concept of path integrals used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require experience-based tuning techniques. We applied the new optimization method to a hang glider design, in which both the hang glider design and its flight trajectory were optimized. The numerical results show that the performance of the method is sufficient for practical use.

  20. The value of a statistical life: a meta-analysis with a mixed effects regression model.

    PubMed

    Bellavance, François; Dionne, Georges; Lebeau, Martin

    2009-03-01

    The value of a statistical life (VSL) is a very controversial topic, but one which is essential to the optimization of governmental decisions. We see a great variability in the values obtained from different studies. The source of this variability needs to be understood, in order to offer public decision-makers better guidance in choosing a value and to set clearer guidelines for future research on the topic. This article presents a meta-analysis based on 39 observations obtained from 37 studies (from nine different countries) which all use a hedonic wage method to calculate the VSL. Our meta-analysis is innovative in that it is the first to use the mixed effects regression model [Raudenbush, S.W., 1994. Random effects models. In: Cooper, H., Hedges, L.V. (Eds.), The Handbook of Research Synthesis. Russel Sage Foundation, New York] to analyze studies on the value of a statistical life. We conclude that the variability found in the values studied stems in large part from differences in methodologies.

  1. Parameter Optimization for Turbulent Reacting Flows Using Adjoints

    NASA Astrophysics Data System (ADS)

    Lapointe, Caelan; Hamlington, Peter E.

    2017-11-01

    The formulation of a new adjoint solver for topology optimization of turbulent reacting flows is presented. This solver provides novel configurations (e.g., geometries and operating conditions) based on desired system outcomes (i.e., objective functions) for complex reacting flow problems of practical interest. For many such problems, it would be desirable to know optimal values of design parameters (e.g., physical dimensions, fuel-oxidizer ratios, and inflow-outflow conditions) prior to real-world manufacture and testing, which can be expensive, time-consuming, and dangerous. However, computational optimization of these problems is made difficult by the complexity of most reacting flows, necessitating the use of gradient-based optimization techniques in order to explore a wide design space at manageable computational cost. The adjoint method is an attractive way to obtain the required gradients, because the cost of the method is determined by the dimension of the objective function rather than the size of the design space. Here, the formulation of a novel solver is outlined that enables gradient-based parameter optimization of turbulent reacting flows using the discrete adjoint method. Initial results and an outlook for future research directions are provided.
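
    A minimal sketch of the discrete adjoint idea on a toy linear problem (not the solver described above): one adjoint solve yields the gradient of the objective with respect to any number of parameters. The parameter-dependent source term is an assumed form chosen for illustration.

        import numpy as np

        # Toy steady problem: A u = b(p), objective J = g^T u.
        # The adjoint solve A^T lam = g gives dJ/dp at the cost of one extra
        # linear solve, independent of the number of parameters.
        n = 4
        rng = np.random.default_rng(0)
        A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
        g = rng.standard_normal(n)

        def b_of_p(p):  # assumed parameter-dependent source term
            return np.sin(p * np.arange(1, n + 1))

        p = 0.7
        u = np.linalg.solve(A, b_of_p(p))
        lam = np.linalg.solve(A.T, g)   # adjoint variable

        db_dp = np.arange(1, n + 1) * np.cos(p * np.arange(1, n + 1))
        dJ_dp_adjoint = lam @ db_dp     # dJ/dp = lam^T db/dp (A is p-independent)

        # finite-difference check
        eps = 1e-6
        dJ_dp_fd = (g @ np.linalg.solve(A, b_of_p(p + eps)) - g @ u) / eps
        print(dJ_dp_adjoint, dJ_dp_fd)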

  2. A lexicographic weighted Tchebycheff approach for multi-constrained multi-objective optimization of the surface grinding process

    NASA Astrophysics Data System (ADS)

    Khalilpourazari, Soheyl; Khalilpourazary, Saman

    2017-05-01

    In this article a multi-objective mathematical model is developed to minimize total time and cost while maximizing the production rate and surface finish quality in the grinding process. The model aims to determine optimal values of the decision variables considering process constraints. A lexicographic weighted Tchebycheff approach is developed to obtain efficient Pareto-optimal solutions of the problem in both rough and finished conditions. Utilizing a polyhedral branch-and-cut algorithm, the lexicographic weighted Tchebycheff model of the proposed multi-objective model is solved using GAMS software. The Pareto-optimal solutions provide a proper trade-off between conflicting objective functions which helps the decision maker to select the best values for the decision variables. Sensitivity analyses are performed to determine the effect of change in the grain size, grinding ratio, feed rate, labour cost per hour, length of workpiece, wheel diameter and downfeed of grinding parameters on each value of the objective function.
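
    A toy sketch of the weighted Tchebycheff scalarization itself (the article solves its full model in GAMS with a branch-and-cut algorithm): minimize t subject to w_i (f_i(x) - z_i*) ≤ t, shown here with two illustrative objectives and SciPy's SLSQP.

        import numpy as np
        from scipy.optimize import minimize

        # Two conflicting objectives of one decision variable (illustrative only):
        f = [lambda x: (x[0] - 1.0) ** 2,   # e.g. total cost
             lambda x: (x[0] + 1.0) ** 2]   # e.g. total time
        z_star = np.array([0.0, 0.0])       # ideal (utopia) point
        w = np.array([0.6, 0.4])            # decision maker's weights

        def objective(xt):                  # last component of xt is t
            return xt[-1]

        cons = [{'type': 'ineq',
                 'fun': (lambda xt, i=i:
                         xt[-1] - w[i] * (f[i](xt[:-1]) - z_star[i]))}
                for i in range(2)]
        res = minimize(objective, x0=[0.0, 1.0], constraints=cons, method='SLSQP')
        print(res.x[:-1], [fi(res.x[:-1]) for fi in f])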

  3. Optimizing the Determination of Roughness Parameters for Model Urban Canopies

    NASA Astrophysics Data System (ADS)

    Huq, Pablo; Rahman, Auvi

    2018-05-01

    We present an objective optimization procedure to determine the roughness parameters for very rough boundary-layer flow over model urban canopies. For neutral stratification the mean velocity profile above a model urban canopy is described by the logarithmic law together with the set of roughness parameters of displacement height d, roughness length z_0 , and friction velocity u_* . Traditionally, values of these roughness parameters are obtained by fitting the logarithmic law through (all) the data points comprising the velocity profile. The new procedure generates unique velocity profiles from subsets or combinations of the data points of the original velocity profile, after which all possible profiles are examined. Each of the generated profiles is fitted to the logarithmic law for a sequence of values of d, with the representative value of d obtained from the minima of the summed least-squares errors for all the generated profiles. The representative values of z_0 and u_* are identified by the peak in the bivariate histogram of z_0 and u_* . The methodology has been verified against laboratory datasets of flow above model urban canopies.
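
    The core step, fitting the logarithmic law u = (u*/κ) ln((z - d)/z0) for a trial displacement height d, reduces to a linear least-squares fit in ln(z - d); a minimal sketch follows (the paper's full procedure additionally fits all subsets of profile points and histograms the results).

        import numpy as np

        KAPPA = 0.4  # von Karman constant

        def fit_log_law(z, u, d):
            """Least-squares fit of u = (u*/kappa) ln((z - d)/z0) for fixed d.
            Returns (u_star, z0, sum of squared residuals)."""
            x = np.log(z - d)
            slope, intercept = np.polyfit(x, u, 1)  # u = slope*ln(z-d) + intercept
            u_star = KAPPA * slope
            z0 = np.exp(-intercept / slope)
            resid = u - (slope * x + intercept)
            return u_star, z0, float(resid @ resid)

        def best_displacement(z, u, d_grid):
            """Scan candidate displacement heights; keep the error-minimizing one."""
            fits = [(d, *fit_log_law(z, u, d)) for d in d_grid if d < z.min()]
            return min(fits, key=lambda t: t[-1])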

  4. Optimal correction and design parameter search by modern methods of rigorous global optimization

    NASA Astrophysics Data System (ADS)

    Makino, K.; Berz, M.

    2011-07-01

    Frequently, the design of schemes for the correction of aberrations, or the determination of possible operating ranges for beamlines and cells in synchrotrons, exhibits multitudes of possibilities for correction, usually appearing in disconnected regions of parameter space that cannot be directly qualified by analytical means. In such cases, an abundance of optimization runs is often carried out, each of which determines a local minimum depending on the specific chosen initial conditions. Practical solutions are then obtained through an often extended interplay of experienced manual adjustment of certain suitable parameters and local searches varying other parameters. However, in a formal sense this problem can be viewed as a global optimization problem, i.e. the determination of all solutions within a certain range of parameters that lead to a specific optimum. For example, it may be of interest to find all possible settings of multiple quadrupoles that can achieve imaging; to find ahead of time all possible settings that achieve a particular tune; or to find all possible ways to adjust nonlinear parameters to achieve correction of high-order aberrations. These tasks are easily phrased as such optimization problems; but while the formulation is mathematically often straightforward, it has commonly been believed to be of limited practical value, since the resulting optimization problem cannot usually be solved. However, recent significant advances in modern methods of rigorous global optimization make these methods feasible for optics design for the first time. The key ideas of the method lie in an interplay of rigorous local underestimators of the objective functions, and in using the underestimators to rigorously and iteratively eliminate regions that lie above already-known upper bounds of the minima, in what is commonly known as a branch-and-bound approach. Recent enhancements of the Differential Algebraic methods used in particle
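
    A minimal one-dimensional sketch of the branch-and-bound idea, using a Lipschitz-constant lower bound as the rigorous underestimator (the Differential Algebraic underestimators used in the paper are far sharper; this only illustrates the prune-and-bisect mechanism).

        import numpy as np

        def branch_and_bound(f, lo, hi, lipschitz, tol=1e-4):
            """Locate all global minimizers of f on [lo, hi].
            Boxes whose rigorous lower bound f(mid) - L*width/2 exceeds the
            best known upper bound are eliminated; survivors are bisected."""
            boxes = [(lo, hi)]
            upper = min(f(lo), f(hi))       # best known objective value
            candidates = []                 # unpruned boxes at tolerance width
            while boxes:
                a, b = boxes.pop()
                mid, half = 0.5 * (a + b), 0.5 * (b - a)
                upper = min(upper, f(mid))  # improve the upper bound
                lower = f(mid) - lipschitz * half  # valid bound on the box
                if lower > upper:
                    continue                # box cannot contain a minimizer
                if half < tol:
                    candidates.append(mid)
                else:
                    boxes += [(a, mid), (mid, b)]
            return upper, candidates

        # Two symmetric minima survive; the rest of the domain is pruned.
        f = lambda x: x**4 - 3 * x**2 + 1   # |f'| <= 26 on [-2, 2]
        print(branch_and_bound(f, -2.0, 2.0, lipschitz=26.0))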

  5. Design optimization of RF lines in vacuum environment for the MITICA experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Muri, Michela, E-mail: michela.demuri@igi.cnr.it; Consorzio RFX, Corso Stati Uniti, 4, I-35127 Padova; Pavei, Mauro

    This contribution concerns the radio frequency (RF) transmission line of the Megavolt ITER Injector and Concept Advancement (MITICA) experiment. The original design considered 1-5/8″ copper coaxial lines, but thermal simulations under operating conditions showed steady-state maximum line temperatures that were not compatible with the prescriptions of the component manufacturer. Hence, an optimization of the design was necessary. Enhancing thermal radiation and increasing the conductor size were considered for design optimization: thermal analyses were carried out to calculate the temperature of the MITICA RF lines during operation, as a function of the emissivity value and of other geometrical parameters. Five coating products to increase the conductor surface emissivity were tested, measuring the outgassing behavior of the selected products and the obtained emissivity values.

  6. Optimal perturbations for nonlinear systems using graph-based optimal transport

    NASA Astrophysics Data System (ADS)

    Grover, Piyush; Elamvazhuthi, Karthik

    2018-06-01

    We formulate and solve a class of finite-time transport and mixing problems in the set-oriented framework. The aim is to obtain optimal discrete-time perturbations in nonlinear dynamical systems to transport a specified initial measure on the phase space to a final measure in finite time. The measure is propagated under system dynamics in between the perturbations via the associated transfer operator. Each perturbation is described by a deterministic map in the measure space that implements a version of Monge-Kantorovich optimal transport with quadratic cost. Hence, the optimal solution minimizes a sum of quadratic costs on phase space transport due to the perturbations applied at specified times. The action of the transport map is approximated by a continuous pseudo-time flow on a graph, resulting in a tractable convex optimization problem. This problem is solved via state-of-the-art solvers to global optimality. We apply this algorithm to a problem of transport between measures supported on two disjoint almost-invariant sets in a chaotic fluid system, and to a finite-time optimal mixing problem by choosing the final measure to be uniform. In both cases, the optimal perturbations are found to exploit the phase space structures, such as lobe dynamics, leading to efficient global transport. As the time-horizon of the problem is increased, the optimal perturbations become increasingly localized. Hence, by combining the transfer operator approach with ideas from the theory of optimal mass transportation, we obtain a discrete-time graph-based algorithm for optimal transport and mixing in nonlinear systems.
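
    For intuition, the underlying Monge-Kantorovich problem between two discrete measures with quadratic ground cost is a linear program in the transport plan; a minimal sketch follows (the paper's graph-based pseudo-time flow approximation is not reproduced here).

        import numpy as np
        from scipy.optimize import linprog

        def discrete_ot(mu, nu, X, Y):
            """Monge-Kantorovich transport between discrete measures mu (on
            points X) and nu (on points Y) with quadratic ground cost."""
            m, n = len(mu), len(nu)
            C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # squared dists
            # Equality constraints: row sums = mu, column sums = nu.
            A_eq = np.zeros((m + n, m * n))
            for i in range(m):
                A_eq[i, i * n:(i + 1) * n] = 1.0
            for j in range(n):
                A_eq[m + j, j::n] = 1.0
            b_eq = np.concatenate([mu, nu])
            res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
            return res.x.reshape(m, n), res.fun  # plan and optimal cost

        mu = np.array([0.5, 0.5]); nu = np.array([0.25, 0.75])
        X = np.array([[0.0, 0.0], [1.0, 0.0]])
        Y = np.array([[0.0, 1.0], [1.0, 1.0]])
        plan, cost = discrete_ot(mu, nu, X, Y)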

  7. Turnover, account value and diversification of real traders: evidence of collective portfolio optimizing behavior

    NASA Astrophysics Data System (ADS)

    Morton de Lachapelle, David; Challet, Damien

    2010-07-01

    Despite the availability of very detailed data on financial markets, agent-based modeling is hindered by the lack of information about real trader behavior. This makes it impossible to validate agent-based models, which are thus reverse-engineering attempts. This work is a contribution towards building a set of stylized facts about the traders themselves. Using the client database of Swissquote Bank SA, the largest online Swiss broker, we find empirical relationships between turnover, account values and the number of assets in which a trader is invested. A theory based on simple mean-variance portfolio optimization that crucially includes variable transaction costs is able to reproduce faithfully the observed behaviors. We finally argue that our results bring to light the collective ability of a population to construct a mean-variance portfolio that takes into account the structure of transaction costs.
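
    A minimal sketch of the kind of objective involved: one-period mean-variance utility with a proportional transaction-cost penalty, evaluated by brute force over two assets. The parameter values are illustrative, and the paper's actual (variable) cost structure is richer.

        import numpy as np

        def mv_with_costs(mu, Sigma, w0, gamma, c):
            """Maximize w.mu - gamma/2 * w.Sigma.w - c * sum|w - w0| on a
            coarse grid over two fully invested assets (sketch, not a solver)."""
            best, best_w = -np.inf, None
            for w1 in np.linspace(0, 1, 101):
                w = np.array([w1, 1.0 - w1])
                val = (w @ mu - 0.5 * gamma * w @ Sigma @ w
                       - c * np.abs(w - w0).sum())
                if val > best:
                    best, best_w = val, w
            return best_w

        mu = np.array([0.08, 0.05])
        Sigma = np.array([[0.04, 0.01], [0.01, 0.02]])
        print(mv_with_costs(mu, Sigma, w0=np.array([0.5, 0.5]), gamma=3.0, c=0.002))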

  8. Dimensional optimization of nanowire--complementary metal oxide--semiconductor inverter.

    PubMed

    Hashim, Yasir; Sidek, Othman

    2013-01-01

    This study is the first to demonstrate dimensional optimization of a nanowire complementary metal-oxide-semiconductor inverter. Noise margins and the inflection voltage of the transfer characteristics are used as limiting factors in this optimization. Results indicate that the optimization depends on both the dimension ratios and the digital voltage level (Vdd). Diameter optimization reveals that when Vdd increases, the optimized value of Dp/Dn decreases. Channel length optimization shows that when Vdd increases, the optimized value of Ln decreases and that of Lp/Ln increases. Dimension ratio optimization reveals that when Vdd increases, the optimized value of Kp/Kn decreases, and a silicon nanowire transistor with suitable dimensions (higher Dp and Ln with lower Lp and Dn) can be fabricated.

  9. OPTIMIZATION OF COUNTERCURRENT STAGED PROCESSES.

    DTIC Science & Technology

    (*CHEMICAL ENGINEERING, OPTIMIZATION), (*DISTILLATION, OPTIMIZATION), INDUSTRIAL PRODUCTION, INDUSTRIAL EQUIPMENT, MATHEMATICAL MODELS, DIFFERENCE EQUATIONS, NONLINEAR PROGRAMMING, BOUNDARY VALUE PROBLEMS, NUMERICAL INTEGRATION

  10. Value centric approaches to the design, operations and maintenance of wind turbines

    NASA Astrophysics Data System (ADS)

    Khadabadi, Madhur Aravind

    Wind turbine maintenance is emerging as an unexpectedly high component of turbine operating cost, and there is increasing interest in managing this cost. This thesis presents an alternative view of maintenance as a value driver and develops an optimization algorithm to evaluate the value delivered by different maintenance techniques. I view maintenance as an operation that moves the turbine to an improved state in which it can generate more power and, thus, earn more revenue. To implement this approach, I model the stochastic deterioration of the turbine in two dimensions, the deterioration rate and the extent of deterioration, and then use maintenance to improve the state of the turbine. The value of the turbine is the difference between the revenue from power generation and the costs incurred in operation and maintenance. With a focus on blade deterioration, I evaluate the value delivered by two different maintenance schemes: predictive maintenance and scheduled maintenance. An example of a predictive maintenance technique is the use of Condition Monitoring Systems (CMS) to precisely detect deterioration. I model CMS of different degrees of fidelity, where a higher-fidelity CMS allows the blade state to be determined with higher precision. The same model is then applied to the scheduled maintenance technique. The improved state information obtained from these techniques is then used to derive an optimal maintenance strategy. The difference between the value of the turbine with and without the inspection type can be interpreted as the value of the inspection. The results indicate that a higher-fidelity (and more expensive) inspection method does not necessarily yield the highest value, and that there is an optimal level of fidelity that results in maximum value. The results also aim to inform the operator of the impact of regional parameters such as wind speed, variance and maintenance costs on the optimal

  11. Optimal redesign study of the harm wing

    NASA Technical Reports Server (NTRS)

    Mcintosh, S. C., Jr.; Weynand, M. E.

    1984-01-01

    The purpose of this project was to investigate the use of optimization techniques to improve the flutter margins of the HARM AGM-88A wing. The missile has four cruciform wings, located near mid-fuselage, that are actuated in pairs symmetrically and antisymmetrically to provide pitch, yaw, and roll control. The wings have a solid stainless steel forward section and a stainless steel crushed-honeycomb aft section. The wing restraint stiffness is dependent upon wing pitch amplitude and varies from a low value near neutral pitch attitude to a much higher value at off-neutral pitch attitudes, where aerodynamic loads lock out any free play in the control system. The most critical condition for flutter is the low-stiffness condition in which the wings are moved symmetrically. Although a tendency toward limit-cycle flutter is controlled in the current design by controller logic, wing redesign to improve this situation is attractive because it can be accomplished as a retrofit. In view of the exploratory nature of the study, it was decided to apply the optimization to a wing-only model, validated by comparison with results obtained by Texas Instruments (TI). Any wing designs that looked promising were to be evaluated at TI with more complicated models, including body modes. The optimization work was performed by McIntosh Structural Dynamics, Inc. (MSD) under a contract from TI.

  12. Optimization for energy efficiency of underground building envelope thermal performance in different climate zones of China

    NASA Astrophysics Data System (ADS)

    Shi, Luyang; Liu, Jing; Zhang, Huibo

    2017-11-01

    The objective of this article is to investigate the influence of the thermal performance of envelopes in shallow-buried buildings on energy consumption in different climate zones of China. For the purpose of this study, an effective building energy simulation tool (DeST), developed by Tsinghua University, was chosen to model the heat transfer in underground buildings. Based on the simulation results, the energy consumption for heating and cooling over the whole year was obtained. The results showed that the relationship between energy consumption and the U-value of the envelopes differs for underground buildings compared with above-ground buildings: improving the thermal performance of exterior walls does not reduce energy consumption and may, on the contrary, result in higher energy costs. It can also be derived that the optimized U-values of underground building envelopes vary with the climate zones of China: for the severe cold climate zone, the optimized U-value is 0.8 W/(m2·K); for the cold climate zone, 1.5 W/(m2·K); and for the warm climate zone, 2.0 W/(m2·K).

  13. Optimized evaporation technique for leachate treatment: Small scale implementation.

    PubMed

    Benyoucef, Fatima; Makan, Abdelhadi; El Ghmari, Abderrahman; Ouatmane, Aziz

    2016-04-01

    This paper introduces an optimized evaporation technique for leachate treatment. For this purpose, and in order to study the feasibility and measure the effectiveness of forced evaporation, three cuboidal steel tubs were designed and implemented. The first, a control tub, was installed at ground level to monitor natural evaporation. The second and third tubs, the models under investigation, were installed at ground level (equipped tub 1) and above ground level (equipped tub 2), respectively, and provided with special equipment to accelerate the evaporation process. The obtained results showed that the evaporation rate at the equipped tubs was much higher than at the control tub: it was accelerated fivefold in the winter period, increasing from 0.37 mm/day to 1.50 mm/day, and more than threefold in the summer period, increasing from 3.06 mm/day to 10.25 mm/day. Overall, the optimized evaporation technique can be applied effectively under either electric or solar energy supply, and accelerates the evaporation rate three- to fivefold regardless of the seasonal temperature. Copyright © 2016. Published by Elsevier Ltd.

  14. The modification of hybrid method of ant colony optimization, particle swarm optimization and 3-OPT algorithm in traveling salesman problem

    NASA Astrophysics Data System (ADS)

    Hertono, G. F.; Ubadah; Handari, B. D.

    2018-03-01

    The traveling salesman problem (TSP) is the well-known problem of finding the shortest tour that visits every vertex in a given set exactly once, except the first vertex. This paper discusses three modifications for solving the TSP by combining Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO) and the 3-Opt algorithm. ACO is used to find solutions to the TSP, with PSO employed to find the best values of the parameters α and β used in ACO. The 3-Opt algorithm is then used to reduce the total tour length of the feasible solutions obtained by ACO. In the first modification, 3-Opt reduces the total tour length of the feasible solutions obtained at each iteration; in the second, it is applied to the entire set of solutions obtained at every iteration; and in the third, it is applied to the distinct solutions obtained at each iteration. Results are tested using six benchmark problems taken from TSPLIB by calculating the relative error with respect to the best known solution as well as the running time. Among these modifications, only the second and third give satisfactory results, though the second needs more execution time compared to the third.
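
    For illustration, the simpler 2-opt neighborhood (3-Opt generalizes it by reconnecting three removed edges instead of two) repeatedly reverses a tour segment whenever doing so shortens the tour:

        def two_opt(tour, dist):
            """Repeatedly reverse tour segments while doing so shortens the tour.
            tour: list of city indices; dist: 2D matrix of pairwise distances."""
            improved = True
            while improved:
                improved = False
                n = len(tour)
                for i in range(n - 1):
                    for j in range(i + 2, n - (i == 0)):  # skip adjacent edges
                        a, b = tour[i], tour[i + 1]
                        c, d = tour[j], tour[(j + 1) % n]
                        # replace edges (a,b) and (c,d) with (a,c) and (b,d)?
                        if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d] - 1e-12:
                            tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                            improved = True
            return tour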

  15. Optimal Estimation of Clock Values and Trends from Finite Data

    NASA Technical Reports Server (NTRS)

    Greenhall, Charles

    2005-01-01

    We show how to solve two problems of optimal linear estimation from a finite set of phase data. Clock noise is modeled as a stochastic process with stationary dth increments. The covariance properties of such a process are contained in the generalized autocovariance function (GACV). We set up two principles for optimal estimation: with the help of the GACV, these principles lead to a set of linear equations for the regression coefficients and some auxiliary parameters. The mean square errors of the estimators are easily calculated. The method can be used to check the results of other methods and to find good suboptimal estimators based on a small subset of the available data.

  16. Computational Thermochemistry: Scale Factor Databases and Scale Factors for Vibrational Frequencies Obtained from Electronic Model Chemistries.

    PubMed

    Alecu, I M; Zheng, Jingjing; Zhao, Yan; Truhlar, Donald G

    2010-09-14

    Optimized scale factors for calculating vibrational harmonic and fundamental frequencies and zero-point energies have been determined for 145 electronic model chemistries, including 119 based on approximate functionals depending on occupied orbitals, 19 based on single-level wave function theory, three based on the neglect-of-diatomic-differential-overlap, two based on doubly hybrid density functional theory, and two based on multicoefficient correlation methods. Forty of the scale factors are obtained from large databases, which are also used to derive two universal scale factor ratios that can be used to interconvert between scale factors optimized for various properties, enabling the derivation of three key scale factors at the effort of optimizing only one of them. A reduced scale factor optimization model is formulated in order to further reduce the cost of optimizing scale factors, and the reduced model is illustrated by using it to obtain 105 additional scale factors. Using root-mean-square errors from the values in the large databases, we find that scaling reduces errors in zero-point energies by a factor of 2.3 and errors in fundamental vibrational frequencies by a factor of 3.0, but it reduces errors in harmonic vibrational frequencies by only a factor of 1.3. It is shown that, upon scaling, the balanced multicoefficient correlation method based on coupled cluster theory with single and double excitations (BMC-CCSD) can lead to very accurate predictions of vibrational frequencies. With a polarized, minimally augmented basis set, the density functionals with zero-point energy scale factors closest to unity are MPWLYP1M (1.009), τHCTHhyb (0.989), BB95 (1.012), BLYP (1.013), BP86 (1.014), B3LYP (0.986), MPW3LYP (0.986), and VSXC (0.986).

  17. Multidisciplinary design optimization using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1994-01-01

    Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient-based optimizers is their need for gradient information; design problems which include discrete variables therefore cannot be studied. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GA) uses a search procedure which is fundamentally different from gradient-based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than finding the best solution. GAs are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GAs are attractive since they use only objective function values in the search process, so gradient calculations are avoided; hence, GAs are able to deal with discrete variables. Studies report success in the use of GA for aircraft design optimization studies, trajectory analysis, space structure design and control systems design. In these studies reliable convergence was achieved, but the number of function evaluations was large compared
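
    A minimal GA sketch over discrete design variables of the kind mentioned above (engine count, material choice); the fitness function, variable ranges and GA settings are illustrative assumptions, not the study's launch-vehicle model.

    ```python
    import random

    # hypothetical discrete design space: engine count and material index
    ENGINES = list(range(1, 9))
    MATERIALS = list(range(4))   # e.g. 0=Al, 1=Ti, 2=steel, 3=composite

    def fitness(design):
        """Toy objective standing in for a performance model (higher is better)."""
        engines, material = design
        return -((engines - 5) ** 2) - ((material - 2) ** 2)

    def select(pop, k=3):
        """Tournament selection: fitter individuals reproduce more often."""
        return max(random.sample(pop, k), key=fitness)

    def crossover(a, b):
        """Exchange genes between two parents."""
        return (a[0], b[1]) if random.random() < 0.5 else (b[0], a[1])

    def mutate(d, rate=0.1):
        """Randomly reset a gene to another allowed discrete value."""
        engines, material = d
        if random.random() < rate: engines = random.choice(ENGINES)
        if random.random() < rate: material = random.choice(MATERIALS)
        return (engines, material)

    def ga(generations=50, pop_size=30):
        pop = [(random.choice(ENGINES), random.choice(MATERIALS)) for _ in range(pop_size)]
        for _ in range(generations):
            pop = [mutate(crossover(select(pop), select(pop))) for _ in range(pop_size)]
        return max(pop, key=fitness)

    print(ga())   # expected to approach the toy optimum (5, 2)
    ```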

  18. Evaluation of the optimal neutrophil gelatinase-associated lipocalin value as a screening biomarker for urinary tract infections in children.

    PubMed

    Kim, Bo Hyun; Yu, Nae; Kim, Hye Ryoun; Yun, Ki Wook; Lim, In Seok; Kim, Tae Hyoung; Lee, Mi-Kyung

    2014-09-01

    Neutrophil gelatinase-associated lipocalin (NGAL) is a promising biomarker in the detection of kidney injury. Early diagnosis of urinary tract infection (UTI), one of the most common infections in children, is important in order to avert long-term consequences. We assessed whether serum NGAL (sNGAL) or urine NGAL (uNGAL) would be reliable markers of UTI and evaluated the appropriate diagnostic cutoff value for the screening of UTI in children. A total of 812 urine specimens and 323 serum samples, collected from pediatric patients, were analyzed. UTI was diagnosed on the basis of culture results and symptoms reported by the patients. NGAL values were measured by using ELISA. NGAL values were more elevated in the UTI cases than in the non-UTI cases, but the differences were not statistically significant (P=0.190 for sNGAL and P=0.064 for uNGAL). The optimal diagnostic cutoff values of sNGAL and uNGAL for UTI screening were 65.25 ng/mL and 5.75 ng/mL, respectively. We suggest that it is not appropriate to use NGAL as a marker for early diagnosis of UTI in children.

  19. Optimal exploration systems

    NASA Astrophysics Data System (ADS)

    Klesh, Andrew T.

    This dissertation studies optimal exploration, defined as the collection of information about given objects of interest by a mobile agent (the explorer) using imperfect sensors. The key aspects of exploration are kinematics (which determine how the explorer moves in response to steering commands), energetics (which determine how much energy is consumed by motion and maneuvers), informatics (which determine the rate at which information is collected) and estimation (which determines the states of the objects). These aspects are coupled by the steering decisions of the explorer. We seek to improve exploration by finding trade-offs amongst these couplings and the components of exploration: the Mission, the Path and the Agent. A comprehensive model of exploration is presented that, on one hand, accounts for these couplings and, on the other hand, is simple enough to allow analysis. This model is utilized to pose and solve several exploration problems where an objective function is to be minimized. Specific functions to be considered are the mission duration and the total energy. These exploration problems are formulated as optimal control problems and necessary conditions for optimality are obtained in the form of two-point boundary value problems. An analysis of these problems reveals characteristics of optimal exploration paths. Several regimes are identified for the optimal paths, including the Watchtower, Solar and Drag regimes, and several non-dimensional parameters are derived that determine the appropriate regime of travel. The so-called Power Ratio is shown to predict the qualitative features of the optimal paths, provide a metric to evaluate an aircraft's design and determine an aircraft's capability for flying perpetually. Optimal exploration system drivers are identified that provide perspective as to the importance of these various regimes of flight. A bank-to-turn solar-powered aircraft flying at constant altitude on Mars is used as a specific platform for

  20. Optimization and characterization of gelatin and chitosan extracted from fish and shrimp waste

    NASA Astrophysics Data System (ADS)

    Ait Boulahsen, M.; Chairi, H.; Laglaoui, A.; Arakrak, A.; Zantar, S.; Bakkali, M.; Hassani, M.

    2018-05-01

    Fish and seafood processing industries generate large quantities of waste, which give rise to several environmental, economic and social problems. However, fish waste can contain high value-added substances such as biopolymers. This work focuses on optimizing the extraction of gelatin and chitosan from tilapia fish skins and shrimp shells, respectively. The gelatin extraction process was optimized using an alkali-acid treatment prior to thermal hydrolysis. Three different acids were tested at different concentrations. Chitosan was obtained after acid demineralization followed by simultaneous hydrothermal deproteinization and deacetylation by an alkali treatment with different concentrations of HCl and NaOH. The extracted gelatin and chitosan with the highest yield were characterized by determining their main physicochemical properties (degree of deacetylation, viscosity, pH, moisture and ash content). Results show a significant influence of the acid type and concentration on the extraction yield of gelatin and chitosan, with average yields of 12.24% and 3.85%, respectively. Furthermore, the physicochemical properties of both the extracted gelatin and chitosan were within the recommended standard values of the commercial products used in industry.

  1. Sampling with poling-based flux balance analysis: optimal versus sub-optimal flux space analysis of Actinobacillus succinogenes.

    PubMed

    Binns, Michael; de Atauri, Pedro; Vlysidis, Anestis; Cascante, Marta; Theodoropoulos, Constantinos

    2015-02-18

    Flux balance analysis is traditionally implemented to identify the maximum theoretical flux for some specified reaction, together with a single distribution of flux values for all the reactions present which achieves this maximum value. However, it is well known that the uncertainty in reaction networks due to branches, cycles and experimental errors results in a large number of combinations of internal reaction fluxes which can achieve the same optimal flux value. In this work, we have modified the applied linear objective of flux balance analysis to include a poling penalty function, which pushes each new set of reaction fluxes away from previous solutions generated. Repeated poling-based flux balance analysis generates a sample of different solutions (a characteristic set), which represents all the possible functionality of the reaction network. Compared to existing sampling methods, for the purpose of generating a relatively "small" characteristic set, our new method is shown to obtain a higher coverage than competing methods under most conditions. The influence of the linear objective function on the sampling (the linear bias) constrains optimisation results to a subspace of optimal solutions all producing the same maximal fluxes. Visualisation of reaction fluxes plotted against each other in two dimensions with and without the linear bias indicates the existence of correlations between fluxes. This method of sampling is applied to the organism Actinobacillus succinogenes for the production of succinic acid from glycerol. A new method of sampling for the generation of different flux distributions (sets of individual fluxes satisfying constraints on the steady-state mass balances of intermediates) has been developed using a relatively simple modification of flux balance analysis to include a poling penalty function inside the resulting optimisation objective function. This new methodology can achieve a high coverage of the possible flux space and can be used with and without
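
    A toy sketch of the poling idea on a 4-reaction network that has alternative optima: a plain FBA linear program is solved first, then re-solved with a quadratic penalty pushing each new flux distribution away from previous ones while holding the target flux at its maximum. The network, the quadratic penalty form and the solver choices are illustrative assumptions, not the paper's formulation.

    ```python
    import numpy as np
    from scipy.optimize import linprog, minimize

    # toy network with a parallel pathway: S v = 0, flux bounds, maximize v[target]
    S = np.array([[1.0, -1.0, -1.0,  0.0],    # metabolite A balance
                  [0.0,  1.0,  1.0, -1.0]])   # metabolite B balance
    bounds = [(0, 10)] * 4
    target = 3

    # 1) plain FBA: linear program for the maximum target flux
    res = linprog(c=-np.eye(4)[target], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    vmax = -res.fun

    # 2) poling: re-solve, pushing each new solution away from previous ones
    def poled_objective(v, previous):
        # maximize summed squared distance to previous solutions (minimize its negative)
        return -sum(np.sum((v - p) ** 2) for p in previous)

    rng = np.random.default_rng(0)
    samples = [res.x]
    for _ in range(5):
        r = minimize(poled_objective, x0=rng.uniform(0, 10, 4), args=(samples,),
                     constraints=[{"type": "eq", "fun": lambda v: S @ v},
                                  {"type": "eq", "fun": lambda v: v[target] - vmax}],
                     bounds=bounds, method="SLSQP")
        samples.append(r.x)
    print(np.round(samples, 3))   # a small characteristic set of alternative optima
    ```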

  2. Stochastic multi-objective model for optimal energy exchange optimization of networked microgrids with presence of renewable generation under risk-based strategies.

    PubMed

    Gazijahani, Farhad Samadi; Ravadanegh, Sajad Najafi; Salehi, Javad

    2018-02-01

    The inherent volatility and unpredictable nature of renewable generation and load demand pose considerable challenges for energy exchange optimization of microgrids (MG). To address these challenges, this paper proposes a new risk-based multi-objective energy exchange optimization for networked MGs from economic and reliability standpoints under load consumption and renewable power generation uncertainties. In doing so, three different risk-based strategies are distinguished by using the conditional value at risk (CVaR) approach. The proposed model is formulated with two distinct objective functions. The first function minimizes the operation and maintenance costs, the cost of power transactions between the upstream network and MGs, as well as the power loss cost, whereas the second function minimizes the energy not supplied (ENS) value. Furthermore, a stochastic scenario-based approach is incorporated in order to handle the uncertainty. Also, the Kantorovich distance scenario reduction method has been implemented to reduce the computational burden. Finally, the non-dominated sorting genetic algorithm (NSGA-II) is applied to minimize the objective functions simultaneously and the best solution is extracted by a fuzzy satisfying method with respect to the risk-based strategies. To demonstrate the performance of the proposed model, it is applied to the modified IEEE 33-bus distribution system, and the obtained results show that the presented approach can be considered an efficient tool for optimal energy exchange optimization of MGs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
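
    A minimal sketch of the CVaR computation used in such risk-based strategies, evaluated over a discrete set of cost scenarios; the scenario values and probabilities below are made up.

    ```python
    import numpy as np

    def cvar(costs, probs, alpha=0.95):
        """Conditional value at risk: expected cost in the worst (1 - alpha)
        tail of the scenario cost distribution (Rockafellar-Uryasev form)."""
        costs, probs = np.asarray(costs, float), np.asarray(probs, float)
        order = np.argsort(costs)                 # sort scenarios by cost
        c, p = costs[order], probs[order]
        cum = np.cumsum(p)
        var = c[np.searchsorted(cum, alpha)]      # value at risk (alpha-quantile)
        tail = np.maximum(c - var, 0.0)           # excess cost beyond VaR
        return var + np.dot(p, tail) / (1.0 - alpha)

    # toy usage: five operating-cost scenarios (k$) with probabilities
    print(cvar([100, 120, 150, 200, 400], [0.4, 0.3, 0.2, 0.08, 0.02], alpha=0.95))
    ```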

  3. Forecasting outpatient visits using empirical mode decomposition coupled with back-propagation artificial neural networks optimized by particle swarm optimization

    PubMed Central

    Huang, Daizheng; Wu, Zhihui

    2017-01-01

    Accurately predicting the trend of outpatient visits by mathematical modeling can help policy makers manage hospitals effectively, reasonably organize schedules for human resources and finances, and appropriately distribute hospital material resources. In this study, a hybrid method based on empirical mode decomposition and back-propagation artificial neural networks optimized by particle swarm optimization is developed to forecast outpatient visits on the basis of monthly numbers. The outpatient-visit data from January 2005 to December 2013 are first retrieved as the original time series. Second, the original time series is decomposed into a finite and often small number of intrinsic mode functions by the empirical mode decomposition technique. Third, a three-layer back-propagation artificial neural network is constructed to forecast each intrinsic mode function. To improve network performance and avoid falling into a local minimum, particle swarm optimization is employed to optimize the weights and thresholds of the back-propagation artificial neural networks. Finally, the superposition of the forecasting results of the intrinsic mode functions is taken as the ultimate forecasting value. Simulation indicates that the proposed method attains a better performance index than the other four methods. PMID:28222194

  4. Forecasting outpatient visits using empirical mode decomposition coupled with back-propagation artificial neural networks optimized by particle swarm optimization.

    PubMed

    Huang, Daizheng; Wu, Zhihui

    2017-01-01

    Accurately predicting the trend of outpatient visits by mathematical modeling can help policy makers manage hospitals effectively, reasonably organize schedules for human resources and finances, and appropriately distribute hospital material resources. In this study, a hybrid method based on empirical mode decomposition and back-propagation artificial neural networks optimized by particle swarm optimization is developed to forecast outpatient visits on the basis of monthly numbers. The outpatient-visit data from January 2005 to December 2013 are first retrieved as the original time series. Second, the original time series is decomposed into a finite and often small number of intrinsic mode functions by the empirical mode decomposition technique. Third, a three-layer back-propagation artificial neural network is constructed to forecast each intrinsic mode function. To improve network performance and avoid falling into a local minimum, particle swarm optimization is employed to optimize the weights and thresholds of the back-propagation artificial neural networks. Finally, the superposition of the forecasting results of the intrinsic mode functions is taken as the ultimate forecasting value. Simulation indicates that the proposed method attains a better performance index than the other four methods.
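
    A minimal sketch of the PSO-trained network component described above: plain PSO searches the weights and biases of a small feed-forward network against training MSE. The EMD decomposition step is omitted, and the toy series, network size and PSO settings are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # toy monthly series and lagged training pairs (forecast next value from 3 lags)
    series = np.sin(np.arange(60) * 0.3) + 0.05 * rng.normal(size=60)
    X = np.array([series[i:i + 3] for i in range(57)])
    y = series[3:]

    N_IN, N_HID = 3, 5
    DIM = N_IN * N_HID + N_HID + N_HID + 1   # weights and biases of a 3-5-1 net

    def forward(w, X):
        """Unpack a flat parameter vector into a one-hidden-layer network."""
        W1 = w[:15].reshape(N_IN, N_HID); b1 = w[15:20]
        W2 = w[20:25].reshape(N_HID, 1);  b2 = w[25]
        return (np.tanh(X @ W1 + b1) @ W2).ravel() + b2

    def mse(w):
        return np.mean((forward(w, X) - y) ** 2)

    # standard PSO over the flattened network parameters
    n, iters = 30, 200
    pos = rng.uniform(-1, 1, (n, DIM)); vel = np.zeros((n, DIM))
    pbest = pos.copy(); pbest_f = np.array([mse(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, DIM)), rng.random((n, DIM))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos += vel
        f = np.array([mse(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    print("training MSE:", pbest_f.min())
    ```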

  5. Dependence of optimal separative power of the “high-speed” Iguasu centrifuge on pressure of working gas

    NASA Astrophysics Data System (ADS)

    Bogovalov, S. V.; Borman, V. D.; Borisevich, V. D.; Davidenko, O. V.; Tronin, I. V.; Tronin, V. N.

    2016-09-01

    The results of optimization calculations of the separative power of the “high-speed” Iguasu gas centrifuge are presented. The Iguasu gas centrifuge has a rotor speed of 1000 m/s and a rotor length of 1 m. The dependence of the optimal separative power on the pressure of the working gas at the rotor wall was obtained using numerical simulations. It is shown that the maximum of the optimal separative power corresponds to a pressure of 1100 mmHg, where the separative power reaches 31.9 SWU.

  6. Optimal solution and optimality condition of the Hunter-Saxton equation

    NASA Astrophysics Data System (ADS)

    Shen, Chunyu

    2018-02-01

    This paper is devoted to the optimal distributed control problem governed by the Hunter-Saxton equation with constraints on the control. We first investigate the existence and uniqueness of a weak solution for the controlled system with appropriate initial value and boundary conditions. In contrast with our previous research, we prove that the solution mapping is locally Lipschitz continuous, which is one significant improvement. Second, based on the well-posedness result, we find a unique optimal control and optimal solution for the controlled system with the quadratic cost functional. Moreover, we establish the sufficient and necessary optimality condition of an optimal control by means of optimal control theory, going beyond the necessary condition alone, which is another major novelty of this paper. We also discuss the optimality conditions corresponding to two physically meaningful distributed observation cases.

  7. Obtaining value prior to pulping with diethyl oxalate and oxalic acid

    Treesearch

    W.R. Kenealy; E. Horn; C.J. Houtman; J. Laplaza; T.W. Jeffries

    2007-01-01

    In pulp and paper production, wood is converted to paper products with yields dependent on the wood and the process used. Even with high-yield pulps there are conversion losses, and with chemical pulps the yields approach 50%. The portions of the wood that do not provide product are either combusted to generate power and steam or incur a cost in waste water treatment. Value prior...

  8. The Value of Methodical Management: Optimizing Science Results

    NASA Astrophysics Data System (ADS)

    Saby, Linnea

    2016-01-01

    As science progresses, making new discoveries in radio astronomy becomes increasingly complex. Instrumentation must be incredibly fine-tuned and well-understood, scientists must consider the skills and schedules of large research teams, and inter-organizational projects sometimes require coordination between observatories around the globe. Structured and methodical management allows scientists to work more effectively in this environment and leads to optimal science output. This report outlines the principles of methodical project management in general, and describes how those principles are applied at the National Radio Astronomy Observatory (NRAO) in Charlottesville, Virginia.

  9. How to Obtain Forty Percent Less Environmental Impact by Healthy, Protein-Optimized Snacks for Older Adults.

    PubMed

    Saxe, Henrik; Loftager Okkels, Signe; Jensen, Jørgen Dejgård

    2017-12-06

    It is well known that meals containing less meat are more sustainable, but little is known about snack-meals, which typically do not contain meat. This study investigates the diversity in environmental impacts associated with snack production based on 20 common recipes optimized for protein content, energy content and sensory aspects for older adults. The purpose is to improve sustainability of public procurement by serving more sustainable snack-meals. Public procurement serves Danish older adults over millions of snack-meals every year, and millions more are served in countries with a similar social service. The environmental impact of snack production was estimated by consequential life cycle assessment. The average impact of producing the 10 least environmentally harmful snacks was 40% less than the average impact of producing the 10 most harmful snacks. This is true whether the functional unit was mass, energy, or protein content, and whether the environmental impact was measured as global warming potential or the monetized value of 16 impact categories. We conclude that large-scale public procurement of snack-meals by private and municipal kitchens can be reduced by up to 40% if the kitchens evaluate the environmental impact of all their snacks and serve the better half more frequently.

  10. How to Obtain Forty Percent Less Environmental Impact by Healthy, Protein-Optimized Snacks for Older Adults

    PubMed Central

    Loftager Okkels, Signe; Jensen, Jørgen Dejgård

    2017-01-01

    It is well known that meals containing less meat are more sustainable, but little is known about snack-meals, which typically do not contain meat. This study investigates the diversity in environmental impacts associated with snack production based on 20 common recipes optimized for protein content, energy content and sensory aspects for older adults. The purpose is to improve sustainability of public procurement by serving more sustainable snack-meals. Public procurement serves Danish older adults over millions of snack-meals every year, and millions more are served in countries with a similar social service. The environmental impact of snack production was estimated by consequential life cycle assessment. The average impact of producing the 10 least environmentally harmful snacks was 40% less than the average impact of producing the 10 most harmful snacks. This is true whether the functional unit was mass, energy, or protein content, and whether the environmental impact was measured as global warming potential or the monetized value of 16 impact categories. We conclude that large-scale public procurement of snack-meals by private and municipal kitchens can be reduced by up to 40% if the kitchens evaluate the environmental impact of all their snacks and serve the better half more frequently. PMID:29211041

  11. Optimal motion planning for collision avoidance of mobile robots in non-stationary environments

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1992-01-01

    An optimal control formulation of the problem of collision avoidance of mobile robots moving in general terrains containing moving obstacles is presented. A dynamic model of the mobile robot and the dynamic constraints are derived. Collision avoidance is guaranteed if the minimum distance between the robot and the object is nonzero. A nominal trajectory is assumed to be known from off-line planning. The main idea is to change the velocity along the nominal trajectory so that collisions are avoided. Time consistency with the nominal plan is desirable. A numerical solution of the optimization problem is obtained. A perturbation control type of approach is used to update the optimal plan. Simulation results verify the value of the proposed strategy.

  12. Optimization of EB plant by constraint control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hummel, H.K.; de Wit, G.B.C.; Maarleveld, A.

    1991-03-01

    Optimum plant operation can often be achieved by means of constraint control instead of model-based on-line optimization. This is because optimum operation is seldom at the top of the hill but usually at the intersection of constraints. This article describes the development of a constraint control system for a plant producing ethylbenzene (EB) by the Mobil/Badger Ethylbenzene Process. Plant optimization can be defined as the maximization of a profit function describing the economics of the plant. This function contains terms with product values, feedstock prices and operational costs. Maximization of the profit function can be obtained by varying relevant degrees of freedom in the plant, such as a column operating pressure or a reactor temperature. These degrees of freedom can be varied within the available operating margins of the plant.

  13. Optimization of MR fluid Yield stress using Taguchi Method and Response Surface Methodology Techniques

    NASA Astrophysics Data System (ADS)

    Mangal, S. K.; Sharma, Vivek

    2018-02-01

    Magnetorheological (MR) fluids belong to a class of smart materials whose rheological characteristics, such as yield stress and viscosity, change in the presence of an applied magnetic field. In this paper, optimization of the MR fluid constituents is carried out with the on-state yield stress as the response parameter. For this, 18 samples of MR fluid are prepared using an L-18 orthogonal array. These samples are experimentally tested on a developed and fabricated electromagnet setup. It has been found that the yield stress of an MR fluid mainly depends on the volume fraction of the iron particles and the type of carrier fluid used in it. The optimal combination of the input parameters for the fluid is found to be mineral oil with a volume percentage of 67%, iron powder of 300 mesh size with a volume percentage of 32%, oleic acid with a volume percentage of 0.5% and tetra-methyl-ammonium-hydroxide with a volume percentage of 0.7%. This optimal combination of input parameters gives a predicted on-state yield stress of 48.197 kPa. An experimental confirmation test on the optimized MR fluid sample was then carried out, and the measured response matched the numerically obtained value quite well (less than 1% error).

  14. Optimization of minoxidil microemulsions using fractional factorial design approach.

    PubMed

    Jaipakdee, Napaphak; Limpongsa, Ekapol; Pongjanyakul, Thaned

    2016-01-01

    The objective of this study was to apply fractional factorial and multi-response optimization designs using the desirability function approach for developing topical microemulsions. Minoxidil (MX) was used as a model drug. Limonene was used as the oil phase. Based on solubility, Tween 20 and caprylocaproyl polyoxyl-8 glycerides were selected as surfactants, and propylene glycol and ethanol were selected as co-solvents in the aqueous phase. Experiments were performed according to a two-level fractional factorial design to evaluate the effects of the independent variables Tween 20 concentration in the surfactant system (X1), surfactant concentration (X2), ethanol concentration in the co-solvent system (X3) and limonene concentration (X4) on the MX solubility (Y1), permeation flux (Y2), lag time (Y3) and deposition (Y4) of MX microemulsions. It was found that Y1 increased with increasing X3 and decreasing X2 and X4, whereas Y2 increased with decreasing X1, X2 and increasing X3. While Y3 was not affected by these variables, Y4 increased with decreasing X1 and X2. Three regression equations were obtained and used to calculate predicted values of the responses Y1, Y2 and Y4. The predicted values matched the experimental values reasonably well, with high determination coefficients. Using the overall desirability function, an optimized microemulsion demonstrating the highest MX solubility, permeation flux and skin deposition was obtained at low levels of X1, X2 and X4 and a high level of X3.
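
    A sketch of the desirability-function step assumed above: individual Derringer-type desirabilities for responses to be maximized are combined by geometric mean. The response values and bounds below are made up, not the study's fitted values.

    ```python
    import numpy as np

    def d_max(y, low, target, s=1.0):
        """Derringer-type desirability for a response to be maximized:
        0 below `low`, 1 at or above `target`, power-scaled in between."""
        return np.clip((y - low) / (target - low), 0.0, 1.0) ** s

    def overall_desirability(ds):
        """Geometric mean of the individual desirabilities."""
        ds = np.asarray(ds, float)
        return ds.prod() ** (1.0 / ds.size)

    # toy usage: hypothetical predicted responses from the regression equations
    solubility, flux, deposition = 28.0, 5.2, 110.0     # Y1, Y2, Y4 (made-up values)
    D = overall_desirability([
        d_max(solubility, low=10.0, target=30.0),
        d_max(flux, low=1.0, target=6.0),
        d_max(deposition, low=40.0, target=120.0),
    ])
    print(f"overall desirability D = {D:.3f}")
    ```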

  15. Value recovery with harvesters in southeastern USA pine stands

    Treesearch

    Ian P. Conradie; W. Dale Greene; Glen E. Murphy

    2003-01-01

    Cut-to-length is not the harvesting system of choice in the southeastern USA although it is perceived to be more environmentally friendly and to have the ability to recover more value from cut stems. In this paper we address the value recovery aspect of harvesters by comparing the optimal recoverable value, as calculated by optimization software, to the actual value...

  16. (Too) optimistic about optimism: the belief that optimism improves performance.

    PubMed

    Tenney, Elizabeth R; Logg, Jennifer M; Moore, Don A

    2015-03-01

    A series of experiments investigated why people value optimism and whether they are right to do so. In Experiments 1A and 1B, participants prescribed more optimism for someone implementing decisions than for someone deliberating, indicating that people prescribe optimism selectively, when it can affect performance. Furthermore, participants believed optimism improved outcomes when a person's actions had considerable, rather than little, influence over the outcome (Experiment 2). Experiments 3 and 4 tested the accuracy of this belief; optimism improved persistence, but it did not improve performance as much as participants expected. Experiments 5A and 5B found that participants overestimated the relationship between optimism and performance even when their focus was not on optimism exclusively. In summary, people prescribe optimism when they believe it has the opportunity to improve the chance of success-unfortunately, people may be overly optimistic about just how much optimism can do. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  17. A systematic review of the angular values obtained by computerized photogrammetry in sagittal plane: a proposal for reference values.

    PubMed

    Krawczky, Bruna; Pacheco, Antonio G; Mainenti, Míriam R M

    2014-05-01

    Reference values for postural alignment in the coronal plane, as measured by computerized photogrammetry, have been established, but not for the sagittal plane. The objective of this study is to propose reference values for angular measurements used for postural analysis in the sagittal plane for healthy adults. Electronic databases (PubMed, BVS, Cochrane, Scielo, and Science Direct) were searched using the following key words: evaluation, posture, photogrammetry, and software. Articles published between 2006 and 2012 that used the PAS/SAPO (postural assessment software) were selected. Another inclusion criterion was the presentation of at least one of the following measurements: head horizontal alignment, pelvic horizontal alignment, hip angle, vertical alignment of the body, thoracic kyphosis, and lumbar lordosis. Angle samples of the selected articles were grouped 2 by 2 in relation to an overall average, which made possible total average, variance, and SD calculations. Six articles were included, and the following average angular values were found: 51.42° ± 4.87° (head horizontal alignment), -12.26° ± 5.81° (pelvic horizontal alignment), -6.40° ± 3.86° (hip angle), and 1.73° ± 0.94° (vertical alignment of the body). None of the articles contained measurements for thoracic kyphosis or lumbar lordosis. These values can be adopted as references for postural assessment in future research if the same anatomical points are considered. Copyright © 2014 National University of Health Sciences. Published by Mosby, Inc. All rights reserved.

  18. Optimal solutions for a bio mathematical model for the evolution of smoking habit

    NASA Astrophysics Data System (ADS)

    Sikander, Waseem; Khan, Umar; Ahmed, Naveed; Mohyud-Din, Syed Tauseef

    In this study, we apply the Variation of Parameters Method (VPM) coupled with an auxiliary parameter to obtain approximate solutions for the epidemic model for the evolution of the smoking habit in a constant population. Convergence of the developed algorithm, namely VPM with an auxiliary parameter, is studied. Furthermore, a simple way of obtaining an optimal value of the auxiliary parameter is considered, namely minimizing the total residual error over the domain of the problem. Comparison of the obtained results with standard VPM shows that the auxiliary parameter is very effective and reliable in controlling the convergence of approximate solutions.
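
    A minimal sketch of the residual-minimization idea on a toy ODE, with a one-parameter trial solution standing in for the VPM series; minimizing the total squared residual over the domain recovers the parameter value that best satisfies the equation. The problem and trial form are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    # toy problem: u' + u = 0, u(0) = 1, with a one-parameter trial solution
    x = np.linspace(0.0, 1.0, 201)
    dx = x[1] - x[0]

    def residual_norm(c):
        u = np.exp(c * x)                    # trial approximation (satisfies u(0) = 1)
        du = c * np.exp(c * x)
        R = du + u                           # residual of the ODE
        return np.sum(R ** 2) * dx           # total squared residual over the domain

    best = minimize_scalar(residual_norm, bounds=(-5.0, 5.0), method="bounded")
    print("optimal auxiliary parameter:", best.x)   # approaches -1, the exact decay rate
    ```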

  19. Action of multi-enzyme complex on protein extraction to obtain a protein concentrate from okara.

    PubMed

    de Figueiredo, Vitória Ribeiro Garcia; Yamashita, Fábio; Vanzela, André Luis Laforga; Ida, Elza Iouko; Kurozawa, Louise Emy

    2018-04-01

    The objective of this study was to optimize the extraction of protein by applying a multi-enzymatic pretreatment to okara, a byproduct from soymilk processing. The multi-enzyme complex Viscozyme, containing a variety of carbohydrases, was used to hydrolyze the okara cell walls and facilitate extraction of proteins. Enzyme-assisted extraction was carried out under different temperatures (37-53 °C), enzyme concentrations (1.5-4%) and pH values (5.5-6.5) according to a central composite rotatable design. After extraction, the protein was concentrated by isoelectric precipitation. The optimal conditions for maximum protein content and recovery in protein concentrate were 53 °C, pH 6.2 and 4% of enzyme concentration. Under these conditions, protein content of 56% (dry weight basis) and a recovery of 28% were obtained, representing an increase of 17 and 86%, respectively, compared to the sample with no enzymatic pretreatment. The multi-enzyme complex Viscozyme hydrolyzed the structural cell wall polysaccharides, improving extraction and obtaining protein concentrate from the okara. An electrophoretic profile of the protein concentrate showed two distinct bands, corresponding to the acidic and basic subunits of the protein glycinin. There were no limiting amino acids in the protein concentrate, which had a greater content of arginine.

  20. Optimal Waist-to-Height Ratio Values for Cardiometabolic Risk Screening in an Ethnically Diverse Sample of South African Urban and Rural School Boys and Girls

    PubMed Central

    Matsha, Tandi E.; Kengne, Andre-Pascal; Yako, Yandiswa Y.; Hon, Gloudina M.; Hassan, Mogamat S.; Erasmus, Rajiv T.

    2013-01-01

    Background The proposed waist-to-height ratio (WHtR) cut-off of 0.5 is less optimal for cardiometabolic risk screening in children in many settings. The purpose of this study was to determine the optimal WHtR for children from South Africa, and to investigate variations in the achieved value by gender, ethnicity and residence. Methods Metabolic syndrome (MetS) components were measured in 1272 randomly selected learners, aged 10–16 years, comprising 446 black Africans, 696 of mixed ancestry and 130 Caucasians. The Youden’s index and the closest-top-left (CTL) point approaches were used to derive WHtR cut-offs for diagnosing any two MetS components, excluding the waist circumference. Results The two approaches yielded a similar cut-off in girls, 0.465 (sensitivity 50.0, specificity 69.5), but two different values in boys, 0.455 (42.9, 88.4) and 0.425 (60.3, 67.7) based on the Youden’s index and the CTL point, respectively. Furthermore, the WHtR cut-off values derived differed substantially amongst the regions and ethnic groups investigated, with the highest cut-offs observed in semi-rural and white children, respectively: Youden’s index 0.505 (31.6, 87.1) and CTL point 0.475 (44.4, 75.9). Conclusion The WHtR cut-off of 0.5 is less accurate for screening cardiovascular risk in South African children. The optimal value in this setting is likely gender- and ethnicity-specific and sensitive to urbanization. PMID:23967160
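
    A sketch of the two cut-off criteria named above, computed from a sweep over candidate thresholds: Youden's index maximizes sensitivity + specificity - 1, and the closest-top-left point minimizes the distance to (0, 1) in ROC space. The WHtR values and labels below are made up.

    ```python
    import numpy as np

    def roc_cutoffs(values, labels):
        """Return optimal cut-offs by Youden's index and by the
        closest-top-left (CTL) point, from a sweep over observed values."""
        values, labels = np.asarray(values, float), np.asarray(labels, bool)
        cuts = np.unique(values)
        sens = np.array([np.mean(values[labels] >= c) for c in cuts])
        spec = np.array([np.mean(values[~labels] < c) for c in cuts])
        youden = cuts[np.argmax(sens + spec - 1.0)]                  # max J
        ctl = cuts[np.argmin((1.0 - sens) ** 2 + (1.0 - spec) ** 2)] # min distance to (0, 1)
        return youden, ctl

    # toy usage: WHtR values with 1 = has two or more MetS components
    whtr = [0.42, 0.44, 0.46, 0.47, 0.50, 0.52, 0.43, 0.48, 0.55, 0.41]
    mets = [0,    0,    1,    0,    1,    1,    0,    1,    1,    0]
    print(roc_cutoffs(whtr, mets))
    ```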

  1. Enhanced genetic algorithm optimization model for a single reservoir operation based on hydropower generation: case study of Mosul reservoir, northern Iraq.

    PubMed

    Al-Aqeeli, Yousif H; Lee, T S; Abd Aziz, S

    2016-01-01

    Achieving optimal hydropower generation from the operation of water reservoirs is a complex problem. The purpose of this study was to formulate and improve an approach based on a genetic algorithm optimization model (GAOM) in order to maximize annual hydropower generation for a single reservoir. For this purpose, two simulation algorithms were drafted and applied independently in the GAOM over 20 scenarios (years) of operation of the Mosul reservoir, northern Iraq. The first algorithm was based on the traditional simulation of reservoir operation, whilst the second algorithm (Salg) enhanced the GAOM by changing the population values of the GA through a new simulation process of reservoir operation. The performances of these two algorithms were evaluated by comparing their optimal values of annual hydropower generation over the 20 operating scenarios. The GAOM achieved an increase in hydropower generation in 17 scenarios using these two algorithms, with the Salg being superior in all scenarios. All of this was done prior to adding evaporation (Ev) and precipitation (Pr) to the water balance equation. Next, the GAOM using the Salg was applied taking the volumes of these two parameters into consideration. In this case, the optimal values obtained from the GAOM were compared, firstly, with their counterparts found using the same algorithm without taking Ev and Pr into consideration and, secondly, with the observed values. The first comparison showed that the optimal values obtained in this case decreased in all scenarios, whilst remaining in good agreement with the observed values in the second comparison. The results proved the effectiveness of the Salg in increasing hydropower generation through the enhanced GAOM approach. In addition, the results indicated the importance of taking Ev and Pr into account in the modelling of reservoir operation.

  2. Parameter identification and optimization of slide guide joint of CNC machine tools

    NASA Astrophysics Data System (ADS)

    Zhou, S.; Sun, B. B.

    2017-11-01

    The joint surface has an important influence on the performance of CNC machine tools. In order to identify the dynamic parameters of the slide guide joint, a parametric finite element model of the joint is established and an optimum design method is used based on finite element simulation and a modal test. The mode that has the most influence on the dynamics of the slip joint is then found through harmonic response analysis. Taking the frequency of this mode as the objective, a sensitivity analysis of the stiffness of each joint surface is carried out using Latin Hypercube Sampling and Monte Carlo Simulation. The result shows that the vertical stiffness of the slip joint surface constituted by the bed and the slide plate has the most obvious influence on the structure. Therefore, this stiffness is taken as the optimization variable and its optimal value is obtained by studying the relationship between structural dynamic performance and stiffness. Substituting the stiffness values from before and after optimization into the FEM of the machine tool shows that the dynamic performance of the machine tool is improved.

  3. CNV detection method optimized for high-resolution arrayCGH by normality test.

    PubMed

    Ahn, Jaegyoon; Yoon, Youngmi; Park, Chihyun; Park, Sanghyun

    2012-04-01

    High-resolution arrayCGH platforms make it possible to detect small gains and losses which previously could not be measured. However, current CNV detection tools fitted to early low-resolution data are not applicable to larger high-resolution data. When CNV detection tools are applied to high-resolution data, they suffer from high false-positive rates, which increase validation cost. Existing CNV detection tools also require optimal parameter values. In most cases, obtaining these values is a difficult task. This study developed a CNV detection algorithm that is optimized for high-resolution arrayCGH data. This tool operates up to 1500 times faster than existing tools on a high-resolution arrayCGH of whole human chromosomes, which has 42 million probes whose average length is 50 bases, while preserving false positive/negative rates. The algorithm also uses a normality test, thereby removing the need for optimal parameters. To our knowledge, this is the first formulation of the CNV detection problem that results in a near-linear empirical overall complexity for real high-resolution data. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. Driving personalized medicine: capturing maximum net present value and optimal return on investment.

    PubMed

    Roth, Mollie; Keeling, Peter; Smart, Dave

    2010-01-01

    In order for personalized medicine to meet its potential future promise, a closer focus on the work being carried out today and the foundation it will provide for that future is imperative. While big picture perspectives of this still nascent shift in the drug-development process are important, it is more important that today's work on the first wave of targeted therapies is used to build specific benchmarking and financial models against which further such therapies may be more effectively developed. Today's drug-development teams need a robust tool to identify the exact drivers that will ensure the successful launch and rapid adoption of targeted therapies, and financial metrics to determine the appropriate resource levels to power those drivers. This special report will describe one such benchmarking and financial model that is specifically designed for the personalized medicine field and will explain how the use of this or similar models can help to capture the maximum net present value of targeted therapies and help to realize optimal return on investment.
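
    As a simplified illustration of the financial-metric side of such benchmarking models, a minimal net-present-value and return-on-investment calculation; the cash flows and discount rate below are hypothetical, not figures from the report.

    ```python
    def npv(cash_flows, rate):
        """Net present value of yearly cash flows (year 0 first) at a discount rate."""
        return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

    # toy usage: hypothetical targeted-therapy program (values in $M)
    flows = [-120, -40, 10, 60, 90, 110]     # development costs, then revenues
    print(f"NPV = {npv(flows, 0.10):.1f} $M")
    print(f"ROI = {npv(flows, 0.10) / 160:.1%}")   # relative to the 160 $M invested
    ```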

  5. High power tapered lasers with optimized photonic crystal structure for low divergence and high efficiency

    NASA Astrophysics Data System (ADS)

    Ma, Xiaolong; Qu, Hongwei; Qi, Aiyi; Zhou, Xuyan; Ma, Pijie; Liu, Anjin; Zheng, Wanhua

    2018-04-01

    High power tapered lasers are designed and fabricated. A one-dimensional photonic crystal structure in the vertical direction is adopted to narrow the far field divergence. The thickness of the defect layer and the photonic crystal layers are optimized by analyzing the optical field theoretically. For tapered lasers, the continuous-wave power is 7.3 W and the pulsed power is 17 W. A maximum wall-plug efficiency of 46% under continuous-wave operation and 49.3% in pulsed mode are obtained. The beam divergences are around 11° and 6° for the vertical and lateral directions, respectively. High beam qualities are also obtained with a vertical M2 value of 1.78 and a lateral M2 value of 1.62. As the current increases, the lateral M2 value increases gradually while the vertical M2 value remains around 2.

  6. Optimization of diesel engine performance by the Bees Algorithm

    NASA Astrophysics Data System (ADS)

    Azfanizam Ahmad, Siti; Sunthiram, Devaraj

    2018-03-01

    Biodiesel has recently been receiving great attention in the world market due to the depletion of existing fossil fuels. Biodiesel is also an alternative to diesel No. 2 fuel and possesses characteristics such as being biodegradable and oxygenated. However, there is evidence that biodiesel does not have features equivalent to diesel No. 2 fuel, as it has been reported that the use of biodiesel increases the brake specific fuel consumption (BSFC). The objective of this study is to find the maximum brake power and brake torque as well as the minimum BSFC in order to optimize the operating condition of a diesel engine using biodiesel fuel. This optimization was conducted using the Bees Algorithm (BA) under specified values of biodiesel percentage in the fuel mixture, engine speed and engine load. The result showed that a brake power of 58.33 kW, a brake torque of 310.33 N.m and a BSFC of 200.29/(kW.h) were the optimum values. Compared with those obtained by another algorithm, the BA produced a fine brake power and a better brake torque and BSFC. This finding proves that the BA can be used to optimize the performance of a diesel engine based on the optimum values of the brake power, brake torque and BSFC.

  7. Towards a globally optimized crop distribution: Integrating water use, nutrition, and economic value

    NASA Astrophysics Data System (ADS)

    Davis, K. F.; Seveso, A.; Rulli, M. C.; D'Odorico, P.

    2016-12-01

    Human demand for crop production is expected to increase substantially in the coming decades as a result of population growth, richer diets and biofuel use. In order for food production to keep pace, unprecedented amounts of resources - water, fertilizers, energy - will be required. This has led to calls for 'sustainable intensification' in which yields are increased on existing croplands while seeking to minimize impacts on water and other agricultural resources. Recent studies have quantified aspects of this, showing that there is a large potential to improve crop yields and increase harvest frequencies to better meet human demand. Though promising, both solutions would necessitate large additional inputs of water and fertilizer in order to be achieved under current technologies. However, the question of whether the current distribution of crops is, in fact, the best for realizing sustainable production has not been considered to date. To this end, we ask: Is it possible to increase crop production and economic value while minimizing water demand by simply growing crops where soil and climate conditions are best suited? Here we use maps of yields and evapotranspiration for 14 major food crops to identify differences between current crop distributions and where they can most suitably be planted. By redistributing crops across currently cultivated lands, we determine the potential improvements in calorie (+12%) and protein (+51%) production, economic output (+41%) and water demand (-5%). This approach can also incorporate the impact of future climate on cropland suitability, and as such, be used to provide optimized cropping patterns under climate change. Thus, our study provides a novel tool towards achieving sustainable intensification that can be used to recommend optimal crop distributions in the face of a changing climate while simultaneously accounting for food security, freshwater resources, and livelihoods.

  8. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    PubMed

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.

  9. Design Optimization of Irregular Cellular Structure for Additive Manufacturing

    NASA Astrophysics Data System (ADS)

    Song, Guo-Hua; Jing, Shi-Kai; Zhao, Fang-Lei; Wang, Ye-Dong; Xing, Hao; Zhou, Jing-Tao

    2017-09-01

    Irregular cellular structures have great potential in the lightweight design field. However, research on optimizing irregular cellular structures has not yet been reported, due to the difficulties in their modeling technology. Based on the variable density topology optimization theory, an efficient method for optimizing the topology of irregular cellular structures fabricated through additive manufacturing processes is proposed. The proposed method utilizes tangent circles to automatically generate the main outline of the irregular cellular structure. The topological layout of each cell structure is optimized using the relative density information obtained from the proposed modified SIMP method. A mapping relationship between cell structure and relative density element is built to determine the diameter of each cell structure. The results show that the irregular cellular structure can be optimized with the proposed method. The results of simulation and experimental testing are similar for the irregular cellular structure, and indicate that the maximum deformation value obtained using the modified Solid Isotropic Microstructures with Penalization (SIMP) approach is 5.4×10-5 mm lower than that obtained using the standard SIMP approach under the same external load. The proposed research provides instruction for the design of other irregular cellular structures.

  10. Tuning of PID controller using optimization techniques for a MIMO process

    NASA Astrophysics Data System (ADS)

    Thulasi dharan, S.; Kavyarasan, K.; Bagyaveereswaran, V.

    2017-11-01

    In this paper, two processes are considered: the quadruple-tank process and the continuous stirred tank reactor (CSTR) process. These are widely used in industrial applications across various domains, the CSTR especially in chemical plants. First, mathematical models of both processes are derived, followed by linearization of the systems, since these are MIMO processes. The controller is the major component that drives the whole process to the desired operating point for a given application, so tuning of the controller plays a major role in the overall process. For tuning the parameters, we use two optimization techniques: Particle Swarm Optimization and the Genetic Algorithm. These techniques are widely used in different applications to obtain the best-tuned values among many candidates. Finally, we compare the performance of each process under both techniques.
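
    A minimal sketch of PSO-based PID tuning on a simple first-order plant with an ITAE criterion; the plant model, gain ranges and PSO settings are illustrative assumptions and do not reproduce the paper's quadruple-tank or CSTR models.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    dt, steps = 0.01, 1000   # 10 s simulation horizon

    def itae(gains):
        """Integral of time-weighted absolute error for a PID-controlled
        first-order plant (tau * dy/dt = -y + u, tau = 2 s), unit step setpoint."""
        kp, ki, kd = gains
        y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
        for k in range(steps):
            err = 1.0 - y
            integ += err * dt
            deriv = (err - prev_err) / dt
            u = kp * err + ki * integ + kd * deriv
            prev_err = err
            y += dt * (-y + u) / 2.0
            if abs(y) > 1e6:                 # diverged: penalize and stop early
                return 1e9
            cost += (k * dt) * abs(err) * dt
        return cost

    # plain PSO over the three gains
    n, iters = 20, 60
    pos = rng.uniform(0, 10, (n, 3)); vel = np.zeros((n, 3))
    pbest = pos.copy(); pbest_f = np.array([itae(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, 3)), rng.random((n, 3))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, 10)
        f = np.array([itae(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    print("PID gains (Kp, Ki, Kd):", np.round(gbest, 2), "ITAE:", round(pbest_f.min(), 4))
    ```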

  11. Optimization of Darrieus turbines with an upwind and downwind momentum model

    NASA Astrophysics Data System (ADS)

    Loth, J. L.; McCoy, H.

    1983-08-01

    This paper presents a theoretical aerodynamic performance optimization for two dimensional vertical axis wind turbines. A momentum type wake model is introduced with separate cosine type interference coefficients for the up and downwind half of the rotor. The cosine type loading permits the rotor blades to become unloaded near the junction of the upwind and downwind rotor halves. Both the optimum and the off design magnitude of the interference coefficients are obtained by equating the drag on each of the rotor halves to that on each of two cosine loaded actuator discs in series. The values for the optimum rotor efficiency, solidity and corresponding interference coefficients have been obtained in a closed form analytic solution by maximizing the power extracted from the downwind rotor half as well as from the entire rotor. A numerical solution was required when viscous effects were incorporated in the rotor optimization.

  12. Value recovery from two mechanized bucking operations in the southeastern United States

    Treesearch

    Kevin Boston; Glen. Murphy

    2003-01-01

    The value recovered from two mechanized bucking operations in the southeastern United States was compared with the optimal value computed using an individual-stem log optimization program, AVIS. The first operation recovered 94% of the optimal value. The main cause for the value loss was a failure to capture potential sawlog volume; logs were bucked to a larger average...

  13. An optimal control strategy for collision avoidance of mobile robots in non-stationary environments

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1991-01-01

    An optimal control formulation of the problem of collision avoidance of mobile robots in environments containing moving obstacles is presented. Collision avoidance is guaranteed if the minimum distance between the robot and the objects is nonzero. A nominal trajectory is assumed to be known from off-line planning. The main idea is to change the velocity along the nominal trajectory so that collisions are avoided. Furthermore, time consistency with the nominal plan is desirable. A numerical solution of the optimization problem is obtained. Simulation results verify the value of the proposed strategy.

  14. Optimal Growth in Hypersonic Boundary Layers

    NASA Technical Reports Server (NTRS)

    Paredes, Pedro; Choudhari, Meelan M.; Li, Fei; Chang, Chau-Lyan

    2016-01-01

    The linear form of the parabolized linear stability equations is used in a variational approach to extend the previous body of results for the optimal, nonmodal disturbance growth in boundary-layer flows. This paper investigates the optimal growth characteristics in the hypersonic Mach number regime without any high-enthalpy effects. The influence of wall cooling is studied, with particular emphasis on the role of the initial disturbance location and the value of the spanwise wave number that leads to the maximum energy growth up to a specified location. Unlike previous predictions that used a basic state obtained from a self-similar solution to the boundary-layer equations, mean flow solutions based on the full Navier-Stokes equations are used in select cases to help account for the viscous-inviscid interaction near the leading edge of the plate and for the weak shock wave emanating from that region. Using the full Navier-Stokes mean flow is shown to result in further reduction with Mach number in the magnitude of optimal growth relative to the predictions based on the self-similar approximation to the base flow.

  15. Globally optimal trial design for local decision making.

    PubMed

    Eckermann, Simon; Willan, Andrew R

    2009-02-01

    Value of information methods allow decision makers to identify efficient trial design following a principle of maximizing the expected value to decision makers of information from potential trial designs relative to their expected cost. However, in health technology assessment (HTA) the restrictive assumption has been made that, prospectively, there is only expected value of sample information from research commissioned within the jurisdiction. This paper extends the framework for optimal trial design and decision making within a jurisdiction to allow for optimal trial design across jurisdictions. This is illustrated in identifying an optimal trial design for decision making across the US, the UK and Australia for early versus late external cephalic version for pregnant women presenting in the breech position. The expected net gain from locally optimal trial designs of US$0.72M is shown to increase to US$1.14M with a globally optimal trial design. In general, the proposed method of globally optimal trial design improves on optimal trial design within jurisdictions by: (i) reflecting the global value of non-rival information; (ii) allowing optimal allocation of trial sample across jurisdictions; (iii) avoiding market failure associated with free-rider effects, sub-optimal spreading of fixed costs and heterogeneity of trial information with multiple trials. Copyright (c) 2008 John Wiley & Sons, Ltd.

  16. The value of pathogen information in treating clinical mastitis.

    PubMed

    Cha, Elva; Smith, Rebecca L; Kristensen, Anders R; Hertl, Julia A; Schukken, Ynte H; Tauer, Loren W; Welcome, Frank L; Gröhn, Yrjö T

    2016-11-01

    The objective of this study was to determine the economic value of obtaining timely and more accurate clinical mastitis (CM) test results for optimal treatment of cows. Typically CM is first identified when the farmer observes recognisable outward signs. Further information of whether the pathogen causing CM is Gram-positive, Gram-negative or other (including no growth) can be determined by using on-farm culture methods. The most detailed level of information for mastitis diagnostics is obtainable by sending milk samples for culture to an external laboratory. Knowing the exact pathogen permits the treatment method to be specifically targeted to the causation pathogen, resulting in less discarded milk. The disadvantages are the additional waiting time to receive test results, which delays treating cows, and the cost of the culture test. Net returns per year (NR) for various levels of information were estimated using a dynamic programming model. The Value of Information (VOI) was then calculated as the difference in NR using a specific level of information as compared to more detailed information on the CM causative agent. The highest VOI was observed where the farmer assumed the pathogen causing CM was the one with the highest incidence in the herd and no pathogen specific CM information was obtained. The VOI of pathogen specific information, compared with non-optimal treatment of Staphylococcus aureus where recurrence and spread occurred due to lack of treatment efficacy, was $20.43 when the same incorrect treatment was applied to recurrent cases, and $30.52 when recurrent cases were assumed to be the next highest incidence pathogen and treated accordingly. This indicates that negative consequences associated with choosing the wrong CM treatment can make additional information cost-effective if pathogen identification is assessed at the generic information level and if the pathogen can spread to other cows if not treated appropriately.

  17. Sensitivity of NTCP parameter values against a change of dose calculation algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brink, Carsten; Berg, Martin; Nielsen, Morten

    2007-09-15

    Optimization of radiation treatment planning requires estimations of the normal tissue complication probability (NTCP). A number of models exist that estimate NTCP from a calculated dose distribution. Since different dose calculation algorithms use different approximations the dose distributions predicted for a given treatment will in general depend on the algorithm. The purpose of this work is to test whether the optimal NTCP parameter values change significantly when the dose calculation algorithm is changed. The treatment plans for 17 breast cancer patients have retrospectively been recalculated with a collapsed cone algorithm (CC) to compare the NTCP estimates for radiation pneumonitis with those obtained from the clinically used pencil beam algorithm (PB). For the PB calculations the NTCP parameters were taken from previously published values for three different models. For the CC calculations the parameters were fitted to give the same NTCP as for the PB calculations. This paper demonstrates that significant shifts of the NTCP parameter values are observed for three models, comparable in magnitude to the uncertainties of the published parameter values. Thus, it is important to quote the applied dose calculation algorithm when reporting estimates of NTCP parameters in order to ensure correct use of the models.

  18. Harmony search optimization for HDR prostate brachytherapy

    NASA Astrophysics Data System (ADS)

    Panchal, Aditya

    In high dose-rate (HDR) prostate brachytherapy, multiple catheters are inserted interstitially into the target volume. The process of treating the prostate involves calculating and determining the best dose distribution to the target and organs-at-risk by means of optimizing the time that the radioactive source dwells at specified positions within the catheters. It is the goal of this work to investigate the use of a new optimization algorithm, known as Harmony Search, in order to optimize dwell times for HDR prostate brachytherapy. The new algorithm was tested on 9 different patients and also compared with the genetic algorithm. Simulations were performed to determine the optimal value of the Harmony Search parameters. Finally, multithreading of the simulation was examined to determine potential benefits. First, a simulation environment was created using the Python programming language and the wxPython graphical interface toolkit, which was necessary to run repeated optimizations. DICOM RT data from Varian BrachyVision was parsed and used to obtain patient anatomy and HDR catheter information. Once the structures were indexed, the volume of each structure was determined and compared to the original volume calculated in BrachyVision for validation. Dose was calculated using the AAPM TG-43 point source model of the GammaMed 192Ir HDR source and was validated against Varian BrachyVision. A DVH-based objective function was created and used for the optimization simulation. Harmony Search and the genetic algorithm were implemented as optimization algorithms for the simulation and were compared against each other. The optimal values for Harmony Search parameters (Harmony Memory Size [HMS], Harmony Memory Considering Rate [HMCR], and Pitch Adjusting Rate [PAR]) were also determined. Lastly, the simulation was modified to use multiple threads of execution in order to achieve faster computational times. Experimental results show that the volume calculation that was
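
    A compact, generic sketch of the Harmony Search loop investigated here is given below. This is a minimal illustration of the algorithm (with a toy sphere objective standing in for the DVH-based objective function), not the author's implementation; HMS, HMCR and PAR are the parameters named above.

    ```python
    import random

    def harmony_search(objective, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                       bw=0.05, iters=5000):
        """Generic Harmony Search minimizer (sketch, not the thesis code)."""
        lo, hi = bounds
        memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
        scores = [objective(h) for h in memory]
        for _ in range(iters):
            new = []
            for d in range(dim):
                if random.random() < hmcr:              # draw from harmony memory
                    x = random.choice(memory)[d]
                    if random.random() < par:           # pitch adjustment
                        x += random.uniform(-bw, bw) * (hi - lo)
                else:                                   # random consideration
                    x = random.uniform(lo, hi)
                new.append(min(hi, max(lo, x)))
            worst = max(range(hms), key=scores.__getitem__)
            s = objective(new)
            if s < scores[worst]:                       # replace worst harmony
                memory[worst], scores[worst] = new, s
        best = min(range(hms), key=scores.__getitem__)
        return memory[best], scores[best]

    # Toy usage: a sphere function stands in for the dwell-time objective.
    sol, val = harmony_search(lambda v: sum(x * x for x in v), dim=5, bounds=(0, 1))
    ```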

  19. Biodegradability and toxicity assessment of a real textile wastewater effluent treated by an optimized electrocoagulation process.

    PubMed

    Manenti, Diego R; Módenes, Aparecido N; Soares, Petrick A; Boaventura, Rui A R; Palácio, Soraya M; Borba, Fernando H; Espinoza-Quiñones, Fernando R; Bergamasco, Rosângela; Vilar, Vítor J P

    2015-01-01

    In this work, the application of an iron electrode-based electrocoagulation (EC) process on the treatment of a real textile wastewater (RTW) was investigated. In order to perform an efficient integration of the EC process with a biological oxidation one, an enhancement in the biodegradability and low toxicity of final compounds was sought. Optimal values of EC reactor operation parameters (pH, current density and electrolysis time) were achieved by applying a full factorial 3³ experimental design. Biodegradability and toxicity assays were performed on treated RTW samples obtained at the optimal values of: pH of the solution (7.0), current density (142.9 A m⁻²) and different electrolysis times. As response variables for the biodegradability and toxicity assessment, the Zahn-Wellens test (Dt), the ratio values of dissolved organic carbon (DOC) relative to low-molecular-weight carboxylate anions (LMCA) and lethal concentration 50 (LC50) were used. According to the Dt, the DOC/LMCA ratio and LC50, an electrolysis time of 15 min along with the optimal values of pH and current density were suggested as suitable for a next stage of treatment based on a biological oxidation process.

  20. The optimal code searching method with an improved criterion of coded exposure for remote sensing image restoration

    NASA Astrophysics Data System (ADS)

    He, Lirong; Cui, Guangmang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2015-03-01

    Coded exposure photography makes motion de-blurring a well-posed problem. The integration pattern of light is modulated by opening and closing the shutter within the exposure time, changing the traditional shutter frequency spectrum into a wider frequency band in order to preserve more image information in the frequency domain. The method used to search for the optimal code is significant for coded exposure. In this paper, an improved criterion for the optimal code search is proposed by analyzing the relationship between the code length and the number of ones in the code, considering the noise effect on code selection with an affine noise model. The optimal code is then obtained utilizing a genetic search algorithm based on the proposed selection criterion. Experimental results show that the time consumed searching for the optimal code decreases with the presented method. The restored image shows better subjective quality and superior objective evaluation values.
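
    Below is a hedged sketch of such a search: a simple genetic algorithm over 32-chip binary codes with a commonly used invertibility criterion (maximize the minimum DFT magnitude while penalizing spectral variance). The paper's improved criterion additionally weights the number of ones and the affine noise model, which is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(code):
        """Favor flat, zero-free code spectra (a common invertibility proxy)."""
        h = np.abs(np.fft.fft(code))
        return h.min() - 0.5 * np.var(h)

    def ga_search(n=32, pop=60, gens=200, pm=0.02):
        popn = rng.integers(0, 2, size=(pop, n))
        for _ in range(gens):
            f = np.array([fitness(c) for c in popn])
            elite = popn[np.argsort(f)[::-1][: pop // 2]]       # selection
            cut = rng.integers(1, n, size=pop // 2)
            kids = np.array([np.concatenate((elite[i % len(elite)][:c],
                                             elite[(i + 1) % len(elite)][c:]))
                             for i, c in enumerate(cut)])       # 1-point crossover
            kids ^= (rng.random(kids.shape) < pm).astype(int)   # bit-flip mutation
            popn = np.vstack((elite, kids))
        f = np.array([fitness(c) for c in popn])
        return popn[f.argmax()]

    best_code = ga_search()
    ```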

  1. Valuing inter-sectoral costs and benefits of interventions in the healthcare sector: methods for obtaining unit prices.

    PubMed

    Drost, Ruben M W A; Paulus, Aggie T G; Ruwaard, Dirk; Evers, Silvia M A A

    2017-02-01

    There is a lack of knowledge about methods for valuing health intervention-related costs and monetary benefits in the education and criminal justice sectors, also known as 'inter-sectoral costs and benefits' (ICBs). The objective of this study was to develop methods for obtaining unit prices for the valuation of ICBs. By conducting an exploratory literature study and expert interviews, several generic methods were developed. The methods' feasibility was assessed through application in the Netherlands. Results were validated in an expert meeting, which was attended by policy makers, public health experts, health economists and HTA-experts, and discussed at several international conferences and symposia. The study resulted in four methods, including the opportunity cost method (A) and valuation using available unit prices (B), self-constructed unit prices (C) or hourly labor costs (D). The methods developed can be used internationally and are valuable for the broad international field of HTA.

  2. Optimal inverse functions created via population-based optimization.

    PubMed

    Jennings, Alan L; Ordóñez, Raúl

    2014-06-01

    Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining their current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs, to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem.

  3. Can linear superiorization be useful for linear optimization problems?

    NASA Astrophysics Data System (ADS)

    Censor, Yair

    2017-04-01

    Linear superiorization (LinSup) considers linear programming problems but instead of attempting to solve them with linear optimization methods it employs perturbation resilient feasibility-seeking algorithms and steers them toward reduced (not necessarily minimal) target function values. The two questions that we set out to explore experimentally are: (i) does LinSup provide a feasible point whose linear target function value is lower than that obtained by running the same feasibility-seeking algorithm without superiorization under identical conditions? (ii) How does LinSup fare in comparison with the Simplex method for solving linear programming problems? Based on our computational experiments presented here, the answers to these two questions are: ‘yes’ and ‘very well’, respectively.
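
    As an illustration of the scheme (a minimal sketch, not Censor's code), the fragment below uses cyclic orthogonal projections onto the half-spaces Ax ≤ b as the feasibility-seeking algorithm and interleaves small, geometrically shrinking perturbation steps along -c to steer the iterates toward lower target values.

    ```python
    import numpy as np

    def project_halfspace(x, a, b):
        """Orthogonal projection of x onto the half-space a·x <= b."""
        viol = a @ x - b
        return x if viol <= 0 else x - (viol / (a @ a)) * a

    def linsup(A, b, c, x0, sweeps=500, alpha=0.99):
        x, t = x0.astype(float), 1.0
        for _ in range(sweeps):
            x = x - t * c / np.linalg.norm(c)     # superiorization step along -c
            t *= alpha                            # shrinking (summable) step sizes
            for a_i, b_i in zip(A, b):            # feasibility-seeking sweep
                x = project_halfspace(x, a_i, b_i)
        return x

    # Toy LP: minimize x + y subject to x >= 1, y >= 1 (written as -x <= -1, ...).
    A = np.array([[-1.0, 0.0], [0.0, -1.0]])
    b = np.array([-1.0, -1.0])
    x = linsup(A, b, c=np.array([1.0, 1.0]), x0=np.array([3.0, 3.0]))
    ```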

  4. Can Linear Superiorization Be Useful for Linear Optimization Problems?

    PubMed Central

    Censor, Yair

    2017-01-01

    Linear superiorization considers linear programming problems but instead of attempting to solve them with linear optimization methods it employs perturbation resilient feasibility-seeking algorithms and steers them toward reduced (not necessarily minimal) target function values. The two questions that we set out to explore experimentally are (i) Does linear superiorization provide a feasible point whose linear target function value is lower than that obtained by running the same feasibility-seeking algorithm without superiorization under identical conditions? and (ii) How does linear superiorization fare in comparison with the Simplex method for solving linear programming problems? Based on our computational experiments presented here, the answers to these two questions are: “yes” and “very well”, respectively. PMID:29335660

  5. Exchange inlet optimization by genetic algorithm for improved RBCC performance

    NASA Astrophysics Data System (ADS)

    Chorkawy, G.; Etele, J.

    2017-09-01

    A genetic algorithm based on real parameter representation using a variable selection pressure and variable probability of mutation is used to optimize an annular air-breathing rocket inlet called the Exchange Inlet. A rapid and accurate design method which provides estimates for air-breathing, mixing, and isentropic flow performance is used as the engine of the optimization routine. Comparison to detailed numerical simulations shows that the design method yields desired exit Mach numbers to within approximately 1% over 75% of the annular exit area and predicts entrained air massflows to within 1% to 9% of numerically simulated values, depending on the flight condition. Optimum designs are shown to be obtained within approximately 8000 fitness function evaluations in a search space on the order of 10⁶. The method is also shown to be able to identify beneficial values for particular alleles when they exist, while handling cases where physical and aphysical designs co-exist at particular values of a subset of alleles within a gene. For an air-breathing engine based on a hydrogen-fuelled rocket, an exchange inlet is designed which yields a predicted air entrainment ratio within 95% of the theoretical maximum.

  6. Optimization of Fish Protection System to Increase Technosphere Safety

    NASA Astrophysics Data System (ADS)

    Khetsuriani, E. D.; Fesenko, L. N.; Larin, D. S.

    2017-11-01

    The article is concerned with field study data. Drawing upon prior information and considering the structural features of fish protection devices, we decided to conduct experimental research while varying three parameters: process pressure P_ct, stream velocity V_p and washer nozzle inclination angle α_c. The variability intervals of the examined factors are shown in Table 1. The conicity angle was assumed constant. The Box design B3 was chosen as a baseline, being close to D-optimal designs in its statistical characteristics. The number of device rotations and its fish fry protection efficiency were taken as the output functions of the optimization. The numerical values of the regression coefficients of the quadratic equations describing the behavior of the optimization functions Y₁ and Y₂, together with their formulaic errors, were calculated from the test results in accordance with the planning matrix. The adequacy or inadequacy of the obtained quadratic regression model is judged by checking whether F_exp ≤ F_theor.
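
    The adequacy check quoted above can be made concrete in a few lines of Python. The lack-of-fit and pure-error sums of squares below are invented numbers, and the theoretical F value comes from the scipy F distribution.

    ```python
    from scipy.stats import f

    def is_adequate(ss_lack_of_fit, df_lof, ss_pure_error, df_pe, alpha=0.05):
        """Model is adequate when F_exp <= F_theor at significance level alpha."""
        f_exp = (ss_lack_of_fit / df_lof) / (ss_pure_error / df_pe)
        f_theor = f.ppf(1.0 - alpha, df_lof, df_pe)
        return f_exp <= f_theor, f_exp, f_theor

    ok, f_exp, f_theor = is_adequate(2.4, 5, 1.1, 3)  # hypothetical sums of squares
    ```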

  7. Coastal aquifer management under parameter uncertainty: Ensemble surrogate modeling based simulation-optimization

    NASA Astrophysics Data System (ADS)

    Janardhanan, S.; Datta, B.

    2011-12-01

    Surrogate models are widely used to develop computationally efficient simulation-optimization models to solve complex groundwater management problems. Artificial intelligence based models are most often used for this purpose, trained using predictor-predictand data obtained from a numerical simulation model. Most often this is implemented with the assumption that the parameters and boundary conditions used in the numerical simulation model are perfectly known. However, in most practical situations these values are uncertain, which limits the applicability of such approximation surrogates. In our study we develop a surrogate model based coupled simulation-optimization methodology for determining optimal pumping strategies for coastal aquifers considering parameter uncertainty. An ensemble surrogate modeling approach is used along with multiple-realization optimization. The methodology is used to solve a multi-objective coastal aquifer management problem considering two conflicting objectives. Hydraulic conductivity and aquifer recharge are treated as uncertain. The three-dimensional coupled flow and transport simulation model FEMWATER is used to simulate the aquifer responses for a number of scenarios corresponding to Latin hypercube samples of pumping and uncertain parameters, generating input-output patterns for training the surrogate models. Non-parametric bootstrap sampling of this original data set is used to generate multiple data sets which belong to different regions in the multi-dimensional decision and parameter space. These data sets are used to train and test multiple surrogate models based on genetic programming. The ensemble of surrogate models is then linked to a multi-objective genetic algorithm to solve the pumping optimization problem with two conflicting objectives, viz., maximizing total pumping from beneficial wells and minimizing the total pumping from barrier wells for hydraulic control of
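
    A sketch of the ensemble-surrogate idea follows, with gradient boosting standing in for the paper's genetic programming surrogates and synthetic data standing in for FEMWATER runs: bootstrap resamples of the simulator's input-output data train multiple surrogates whose mean prediction replaces the simulator inside the optimizer.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.utils import resample

    rng = np.random.default_rng(1)
    X = rng.uniform(0, 1, size=(200, 4))          # pumping rates + uncertain params
    y = X @ np.array([2.0, -1.0, 0.5, 1.5]) + rng.normal(0, 0.05, 200)  # mock sims

    ensemble = []
    for seed in range(10):
        Xb, yb = resample(X, y, random_state=seed)   # non-parametric bootstrap
        ensemble.append(GradientBoostingRegressor(random_state=seed).fit(Xb, yb))

    def surrogate(x):
        """Ensemble-mean prediction used as the simulator stand-in."""
        x = np.atleast_2d(x)
        return np.mean([m.predict(x) for m in ensemble], axis=0)
    ```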

  8. Optimization of multi-objective micro-grid based on improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Gan, Yang

    2018-04-01

    The paper presents a multi-objective optimal configuration model for an independent micro-grid with the aims of economy and environmental protection. The Pareto solution set can be obtained by solving the multi-objective optimization configuration model of the micro-grid with the improved particle swarm algorithm. The feasibility of the improved particle swarm optimization algorithm for the multi-objective model is verified, which provides an important reference for multi-objective optimization of independent micro-grids.

  9. Optimization of fuels from waste composition with application of genetic algorithm.

    PubMed

    Małgorzata, Wzorek

    2014-05-01

    The objective of this article is to elaborate a method to optimize the composition of the fuels from sewage sludge (PBS fuel - fuel based on sewage sludge and coal slime, PBM fuel - fuel based on sewage sludge and meat and bone meal, PBT fuel - fuel based on sewage sludge and sawdust). As a tool for an optimization procedure, the use of a genetic algorithm is proposed. The optimization task involves the maximization of mass fraction of sewage sludge in a fuel developed on the basis of quality-based criteria for the use as an alternative fuel used by the cement industry. The selection criteria of fuels composition concerned such parameters as: calorific value, content of chlorine, sulphur and heavy metals. Mathematical descriptions of fuel compositions and general forms of the genetic algorithm, as well as the obtained optimization results are presented. The results of this study indicate that the proposed genetic algorithm offers an optimization tool, which could be useful in the determination of the composition of fuels that are produced from waste.
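
    A toy, penalty-based version of such a search is sketched below. All property values and quality limits are invented, and only two of the paper's criteria (calorific value, chlorine) are modeled; the real criteria also cover sulphur and heavy metals.

    ```python
    import numpy as np

    # Two-component blend: maximize the sewage-sludge mass fraction s while
    # the blend meets a minimum calorific value and a maximum chlorine content.
    PROPS = {"sludge": {"cv": 11.0, "cl": 0.30}, "slime": {"cv": 18.0, "cl": 0.05}}
    CV_MIN, CL_MAX = 14.0, 0.20   # hypothetical cement-industry quality limits

    def fitness(s):
        cv = s * PROPS["sludge"]["cv"] + (1 - s) * PROPS["slime"]["cv"]
        cl = s * PROPS["sludge"]["cl"] + (1 - s) * PROPS["slime"]["cl"]
        penalty = 10 * (max(0, CV_MIN - cv) + max(0, cl - CL_MAX))
        return s - penalty            # reward sludge share, punish limit violations

    rng = np.random.default_rng(2)
    pop = rng.uniform(0, 1, 40)
    for _ in range(100):
        fit = np.array([fitness(s) for s in pop])
        parents = pop[np.argsort(fit)[-20:]]                    # selection
        kids = np.clip(rng.choice(parents, 20)
                       + rng.normal(0, 0.05, 20), 0, 1)         # mutation
        pop = np.concatenate((parents, kids))
    best = pop[np.argmax([fitness(s) for s in pop])]
    ```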

  10. Cost Optimization of Water Resources in Pernambuco, Brazil: Valuing Future Infrastructure and Climate Forecasts

    NASA Astrophysics Data System (ADS)

    Kumar, Ipsita; Josset, Laureline; Lall, Upmanu; Cavalcanti e Silva, Erik; Cordeiro Possas, José Marcelo; Cauás Asfora, Marcelo

    2017-04-01

    Optimal management of water resources is paramount in semi-arid regions to limit strains on the society and economy due to limited water availability. This problem is likely to become even more recurrent as droughts are projected to intensify in the coming years, causing increasing stresses to the water supply in the affected areas. The state of Pernambuco in Northeast Brazil is one such case, where one of the largest reservoirs, Jucazinho, was at approximately 1% capacity throughout 2016, making infrastructural challenges in the region very real. To ease some of the infrastructural stresses and reduce vulnerabilities of the water system, a new source of water from the Rio São Francisco is currently under development. Until its completion, water trucks have been regularly mandated to cover water deficits, but at a much higher cost, thus endangering the financial sustainability of the region. In this paper, we propose to evaluate the sustainability of the considered water system by formulating an optimization problem and determining the optimal operations to be conducted. We start with a comparative study of the capabilities of the current and future infrastructures to face various climate conditions. We show that while the Rio São Francisco project mitigates the problems, neither implementation prevents failure, and both require reliance on water trucks during prolonged droughts. We also study the cost associated with the provision of water to the municipalities for several streamflow forecasts. In particular, we investigate the value of climate predictions to adapt operational decisions by comparing the results with a fixed policy derived from historical data. We show that the use of climate information permits the reduction of the water deficit and reduces overall operational costs. We conclude with a discussion on the potential of the approach to evaluate future infrastructure developments. This study is funded by the Inter-American Development Bank (IADB), and in

  11. Optimizing the parameters of the Lyman-Kutcher-Burman, Källman, and Logit+EUD models for the rectum - a comparison between normal tissue complication probability and clinical data

    NASA Astrophysics Data System (ADS)

    Trojková, Darina; Judas, Libor; Trojek, Tomáš

    2014-11-01

    Minimizing the late rectal toxicity of prostate cancer patients is a very important and widely discussed topic. Normal tissue complication probability (NTCP) models can be used to evaluate competing treatment plans. In our work, the parameters of the Lyman-Kutcher-Burman (LKB), Källman, and Logit+EUD models are optimized by minimizing the Brier score for a group of 302 prostate cancer patients. The NTCP values are calculated and compared with those obtained using previously published parameter values. χ² statistics were calculated as a check of the goodness of the optimization.
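
    A sketch of parameter fitting by Brier-score minimization for the LKB probit response is shown below. The data are synthetic stand-ins: in the study, the dose summaries would be the generalized EUDs of the 302 rectal DVHs and the binary outcomes the observed late toxicities.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def brier(params, geud, events):
        """Mean squared difference between LKB-predicted NTCP and outcomes."""
        m, td50 = params
        ntcp = norm.cdf((geud - td50) / (m * td50))   # LKB probit response
        return np.mean((ntcp - events) ** 2)

    rng = np.random.default_rng(3)
    geud_values = rng.uniform(40, 75, 302)            # mock per-patient gEUDs (Gy)
    events = (rng.random(302) < 0.15).astype(float)   # mock toxicity outcomes

    fit = minimize(brier, x0=[0.15, 80.0], args=(geud_values, events),
                   method="Nelder-Mead")
    m_opt, td50_opt = fit.x
    ```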

  12. A discrete choice experiment to obtain a tariff for valuing informal care situations measured with the CarerQol instrument.

    PubMed

    Hoefman, Renske J; van Exel, Job; Rose, John M; van de Wetering, E J; Brouwer, Werner B F

    2014-01-01

    Economic evaluations adopting a societal perspective need to include informal care whenever relevant. However, in practice, informal care is often neglected, because there are few validated instruments to measure and value informal care for inclusion in economic evaluations. The CarerQol, which is such an instrument, measures the impact of informal care on 7 important burden dimensions (CarerQol-7D) and values this in terms of general quality of life (CarerQol-VAS). The objective of the study was to calculate utility scores based on relative utility weights for the CarerQol-7D. These tariffs will facilitate inclusion of informal care in economic evaluations. The CarerQol-7D tariff was derived with a discrete choice experiment conducted as an Internet survey among the general adult population in the Netherlands (N = 992). The choice set contained 2 unlabeled alternatives described in terms of the 7 CarerQol-7D dimensions (level range: "no," "some," and "a lot"). An efficient experimental design with priors obtained from a pilot study (N = 104) was used. Data were analyzed with a panel mixed multinomial parameter model including main and interaction effects of the attributes. The utility attached to informal care situations was significantly higher when this situation was more attractive in terms of fewer problems and more fulfillment or support. The interaction term between the CarerQol-7D dimensions physical health and mental health problems also significantly explained this utility. The tariff was constructed by adding up the relative utility weights per category of all CarerQol-7D dimensions and the interaction term. We obtained a tariff providing standard utility scores for caring situations described with the CarerQol-7D. This facilitates the inclusion of informal care in economic evaluations.

  13. Multi objective genetic algorithm to optimize the local heat treatment of a hardenable aluminum alloy

    NASA Astrophysics Data System (ADS)

    Piccininni, A.; Palumbo, G.; Franco, A. Lo; Sorgente, D.; Tricarico, L.; Russello, G.

    2018-05-01

    The continuous search for lightweight components for transport applications to reduce harmful emissions draws attention to light alloys such as aluminium (Al) alloys, which are capable of combining low density with high values of the strength-to-weight ratio. Such advantages are partially counterbalanced by poor formability at room temperature. A viable solution is to apply a localized laser heat treatment to the blank before the forming process to obtain a tailored distribution of material properties, so that the blank can be formed at room temperature by means of conventional press machines. Such an approach has been extensively investigated for age-hardenable alloys, but in the present work the attention is focused on the 5000 series; in particular, the optimization of the deep drawing process of the alloy AA5754 H32 is proposed through a numerical/experimental approach. A preliminary investigation was necessary to correctly tune the laser parameters (focal length, spot dimension) to effectively obtain the annealed state. Optimal process parameters were then obtained by coupling a 2D FE model with an optimization platform managed by a multi-objective genetic algorithm. The optimal solution (i.e. the one able to maximize the LDR) in terms of blankholder force and extent of the annealed region was thus evaluated and validated through experimental trials. Good agreement between experimental and numerical results was found. The optimal solution allowed an LDR of the locally heat-treated blank larger than that of the material either in the wrought condition (H32) or in the annealed condition (H111).

  14. Topology optimization under stochastic stiffness

    NASA Astrophysics Data System (ADS)

    Asadpoure, Alireza

    Topology optimization is a systematic computational tool for optimizing the layout of materials within a domain for engineering design problems. It allows variation of structural boundaries and connectivities. This freedom in the design space often enables discovery of new, high performance designs. However, solutions obtained by performing the optimization in a deterministic setting may be impractical or suboptimal when considering real-world engineering conditions with inherent variabilities including (for example) variabilities in fabrication processes and operating conditions. The aim of this work is to provide a computational methodology for topology optimization in the presence of uncertainties associated with structural stiffness, such as uncertain material properties and/or structural geometry. Existing methods for topology optimization under deterministic conditions are first reviewed. Modifications are then proposed to improve the numerical performance of the so-called Heaviside Projection Method (HPM) in continuum domains. Next, two approaches, perturbation and Polynomial Chaos Expansion (PCE), are proposed to account for uncertainties in the optimization procedure. These approaches are intrusive, allowing tight and efficient coupling of the uncertainty quantification with the optimization sensitivity analysis. The work herein develops a robust topology optimization framework aimed at reducing the sensitivity of optimized solutions to uncertainties. The perturbation-based approach combines deterministic topology optimization with a perturbation method for the quantification of uncertainties. The use of perturbation transforms the problem of topology optimization under uncertainty to an augmented deterministic topology optimization problem. The PCE approach combines the spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique. The resulting compact representations

  15. Computer-aided diagnosis of lung nodule using gradient tree boosting and Bayesian optimization.

    PubMed

    Nishio, Mizuho; Nishizawa, Mitsuo; Sugiyama, Osamu; Kojima, Ryosuke; Yakami, Masahiro; Kuroda, Tomohiro; Togashi, Kaori

    2018-01-01

    We aimed to evaluate a computer-aided diagnosis (CADx) system for lung nodule classification focusing on (i) usefulness of the conventional CADx system (hand-crafted imaging feature + machine learning algorithm), (ii) comparison between support vector machine (SVM) and gradient tree boosting (XGBoost) as machine learning algorithms, and (iii) effectiveness of parameter optimization using Bayesian optimization and random search. Data on 99 lung nodules (62 lung cancers and 37 benign lung nodules) were included from public databases of CT images. A variant of the local binary pattern was used for calculating a feature vector. SVM or XGBoost was trained using the feature vector and its corresponding label. Tree Parzen Estimator (TPE) was used as Bayesian optimization for the parameters of SVM and XGBoost. Random search was done for comparison with TPE. Leave-one-out cross-validation was used for optimizing and evaluating the performance of our CADx system. Performance was evaluated using the area under the curve (AUC) of receiver operating characteristic analysis. The AUC was calculated 10 times, and its average was obtained. The best averaged AUCs of SVM and XGBoost were 0.850 and 0.896, respectively; both were obtained using TPE. XGBoost was generally superior to SVM. Optimal parameters achieving high AUC were obtained with fewer trials when using TPE, compared with random search. Bayesian optimization of SVM and XGBoost parameters was more efficient than random search. Based on an observer study, the AUC values of two board-certified radiologists were 0.898 and 0.822. The results show that the diagnostic accuracy of our CADx system was comparable to that of radiologists with respect to classifying lung nodules.
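
    A sketch of TPE-based hyperparameter search in the spirit of the paper is shown below, using hyperopt (an assumed library choice; it implements the Tree Parzen Estimator), an SVM, and synthetic data in place of the 99 nodules. The search space bounds are illustrative assumptions.

    ```python
    import numpy as np
    from hyperopt import fmin, tpe, hp, Trials
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=99, n_features=20, random_state=0)

    def objective(params):
        clf = SVC(C=params["C"], gamma=params["gamma"])
        auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
        return -auc                    # hyperopt minimizes, so negate the AUC

    space = {"C": hp.loguniform("C", np.log(1e-2), np.log(1e3)),
             "gamma": hp.loguniform("gamma", np.log(1e-4), np.log(1e1))}
    best = fmin(objective, space, algo=tpe.suggest, max_evals=50, trials=Trials())
    ```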

  16. Optimization of the Liquid Culture Medium Composition to Obtain the Mycelium of Agaricus bisporus Rich in Essential Minerals.

    PubMed

    Krakowska, Agata; Reczyński, Witold; Muszyńska, Bożena

    2016-09-01

    Agaricus bisporus (J.E. Lange) Imbach, one of the most popular Basidiomycota species, was chosen for the research because of its dietary and medicinal value. The studies presented herein included determination of the essential mineral accumulation level in the mycelium of A. bisporus cultivated in liquid cultures in media supplemented with the chosen metals' salts. Quantitative analyses of Zn, Cu, Mg, and Fe in the liquid cultures made it possible to determine the relationship between accumulation of the selected mineral in A. bisporus mycelium and the culture conditions. Monitoring of the liquid cultures and determination of the elements' concentrations in the mycelium of A. bisporus were performed using flame atomic absorption spectrometry (AAS). The concentration of Zn in the mycelium maintained in the medium with the addition of its salt spanned a very wide range, from 95.9 to 4462.0 µg/g DW. In the analyzed A. bisporus mycelium cultured in the medium enriched with copper salt, this metal's concentration ranged from 89.79 to 7491.50 µg/g DW; for Mg in liquid-cultured mycelium (medium with Mg addition), the concentration ranged from 0.32 to 10.55 mg/g DW. The medium enriched with iron salts led to bioaccumulation of Fe in the mycelia of A. bisporus; the determined Fe concentration was in the range from 0.62 to 161.28 µg/g DW. The proposed method of liquid A. bisporus culturing in media enriched with the selected macro- and microelements in the proper concentration ratios led to maximal growth of biomass characterized by a high efficiency of mineral accumulation. As a result, a dietary component of increased nutritive value was obtained.

  17. Optimization of Enzymatic Saccharification of Alkali Pretreated Parthenium sp. Using Response Surface Methodology

    PubMed Central

    Pandiyan, K.; Tiwari, Rameshwar; Singh, Surender; Nain, Pawan K. S.; Rana, Sarika; Arora, Anju; Singh, Shashi B.; Nain, Lata

    2014-01-01

    Parthenium sp. is a noxious weed which threatens the environment and biodiversity due to its rapid invasion. This lignocellulosic weed was investigated for its potential in biofuel production by subjecting it to mild alkali pretreatment followed by enzymatic saccharification, which resulted in a significant fermentable sugar yield (76.6%). Optimization of enzymatic hydrolysis variables such as temperature, pH, enzyme loading, and substrate loading was carried out using a central composite design (CCD) under response surface methodology (RSM) to achieve the maximum saccharification yield. Data obtained from the RSM were validated using ANOVA. After the optimization process, a model was proposed with a predicted value of 80.08% saccharification yield under optimum conditions, which was confirmed by the experimental value of 85.80%. This illustrated a good agreement between the predicted and experimental response (saccharification yield). The saccharification yield was enhanced by enzyme loading and reduced by temperature and substrate loading. This study reveals that under the optimized conditions, the sugar yield was significantly increased, being higher than in earlier reports, and supports the use of Parthenium sp. biomass as a feedstock for bioethanol production. PMID:24900917
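
    The RSM step can be sketched as fitting a second-order model to coded CCD runs and reading off its stationary point. The data below are invented and use only two factors for brevity; the real design had four (temperature, pH, enzyme and substrate loading).

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    x1, x2 = rng.uniform(-1, 1, 30), rng.uniform(-1, 1, 30)   # coded factors
    y = 80 - 5 * (x1 - 0.3) ** 2 - 7 * (x2 + 0.2) ** 2 + rng.normal(0, 0.5, 30)

    # Model: y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    b = np.linalg.lstsq(X, y, rcond=None)[0]

    # Stationary point: set the model gradient to zero and solve the 2x2 system.
    H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
    x_opt = np.linalg.solve(H, -b[1:3])
    y_opt = b @ np.array([1, *x_opt, x_opt[0]**2, x_opt[1]**2, x_opt[0] * x_opt[1]])
    print("optimum (coded units):", x_opt, "predicted yield:", round(y_opt, 2))
    ```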

  18. Conditional nonlinear optimal perturbations based on the particle swarm optimization and their applications to the predictability problems

    NASA Astrophysics Data System (ADS)

    Zheng, Qin; Yang, Zubin; Sha, Jianxin; Yan, Jun

    2017-02-01

    In predictability problem research, the conditional nonlinear optimal perturbation (CNOP) describes the initial perturbation that satisfies a certain constraint condition and causes the largest prediction error at the prediction time. The CNOP has been successfully applied in estimation of the lower bound of maximum predictable time (LBMPT). Generally, CNOPs are calculated by a gradient descent algorithm based on the adjoint model, which is called ADJ-CNOP. This study, through the two-dimensional Ikeda model, investigates the impacts of the nonlinearity on ADJ-CNOP and the corresponding precision problems when using ADJ-CNOP to estimate the LBMPT. Our conclusions are that (1) when the initial perturbation is large or the prediction time is long, the strong nonlinearity of the dynamical model in the prediction variable will lead to failure of the ADJ-CNOP method, and (2) when the objective function has multiple extreme values, ADJ-CNOP has a large probability of producing local CNOPs, hence making a false estimation of the LBMPT. Furthermore, the particle swarm optimization (PSO) algorithm, one kind of intelligent algorithm, is introduced to solve this problem. The method using PSO to compute CNOP is called PSO-CNOP. The results of numerical experiments show that even with a large initial perturbation and long prediction time, or when the objective function has multiple extreme values, PSO-CNOP can always obtain the global CNOP. Since the PSO algorithm is a heuristic search algorithm based on the population, it can overcome the impact of nonlinearity and the disturbance from multiple extremes of the objective function. In addition, to check the estimation accuracy of the LBMPT presented by PSO-CNOP and ADJ-CNOP, we partition the constraint domain of initial perturbations into sufficiently fine grid meshes and take the LBMPT obtained by the filtering method as a benchmark. The result shows that the estimation presented by PSO-CNOP is closer to the true value than the
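
    A minimal sketch of the PSO-CNOP idea on the Ikeda map follows (a toy implementation, not the authors' code): particles are initial perturbations kept inside a norm ball by projection, and the objective is the prediction error at the final time. The map constant u = 0.9 and all PSO settings are illustrative assumptions.

    ```python
    import numpy as np

    U = 0.9  # assumed Ikeda map parameter

    def ikeda(p, steps):
        x, y = p
        for _ in range(steps):
            t = 0.4 - 6.0 / (1.0 + x * x + y * y)
            x, y = (1 + U * (x * np.cos(t) - y * np.sin(t)),
                    U * (x * np.sin(t) + y * np.cos(t)))
        return np.array([x, y])

    def cnop_pso(x0, delta=0.1, T=10, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
        rng = np.random.default_rng(5)
        base = ikeda(x0, T)
        err = lambda d: np.linalg.norm(ikeda(x0 + d, T) - base)  # prediction error
        pos = rng.uniform(-delta, delta, (n, 2))
        vel = np.zeros((n, 2))
        pbest, pval = pos.copy(), np.array([err(d) for d in pos])
        g = pbest[pval.argmax()]
        for _ in range(iters):
            r1, r2 = rng.random((n, 1)), rng.random((n, 1))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
            pos = pos + vel
            norms = np.linalg.norm(pos, axis=1, keepdims=True)
            pos = np.where(norms > delta, pos * delta / norms, pos)  # constraint
            vals = np.array([err(d) for d in pos])
            better = vals > pval
            pbest[better], pval[better] = pos[better], vals[better]
            g = pbest[pval.argmax()]
        return g, pval.max()

    cnop, max_err = cnop_pso(np.array([0.5, 0.5]))
    ```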

  19. Unipolar Endocardial Voltage Mapping in the Right Ventricle: Optimal Cutoff Values Correcting for Computed Tomography-Derived Epicardial Fat Thickness and Their Clinical Value for Substrate Delineation.

    PubMed

    Venlet, Jeroen; Piers, Sebastiaan R D; Kapel, Gijsbert F L; de Riva, Marta; Pauli, Philippe F G; van der Geest, Rob J; Zeppenfeld, Katja

    2017-08-01

    Low endocardial unipolar voltage (UV) at sites with normal bipolar voltage (BV) may indicate epicardial scar. Currently applied UV cutoff values are based on studies that lacked epicardial fat information. This study aimed to define endocardial UV cutoff values using computed tomography-derived fat information and to analyze their clinical value for right ventricular substrate delineation. Thirty-three patients (50±14 years; 79% men) underwent combined endocardial-epicardial right ventricular electroanatomical mapping and ablation of right ventricular scar-related ventricular tachycardia with computed tomographic image integration, including computed tomography-derived fat thickness. Of 6889 endocardial-epicardial mapping point pairs, 547 (8%) pairs with distance <10 mm and fat thickness <1.0 mm were analyzed for voltage and abnormal (fragmented/late potential) electrogram characteristics. At sites with endocardial BV >1.50 mV, the optimal endocardial UV cutoff for identification of epicardial BV <1.50 mV was 3.9 mV (area under the curve, 0.75; sensitivity, 60%; specificity, 79%) and cutoff for identification of abnormal epicardial electrogram was 3.7 mV (area under the curve, 0.88; sensitivity, 100%; specificity, 67%). The majority of abnormal electrograms (130 of 151) were associated with transmural scar. Eighty-six percent of abnormal epicardial electrograms had corresponding endocardial sites with BV <1.50 mV, and the remaining could be identified by corresponding low endocardial UV <3.7 mV. For identification of epicardial right ventricular scar, an endocardial UV cutoff value of 3.9 mV is more accurate than previously reported cutoff values. Although the majority of epicardial abnormal electrograms are associated with transmural scar with low endocardial BV, the additional use of endocardial UV at normal BV sites improves the diagnostic accuracy resulting in identification of all epicardial abnormal electrograms at sites with <1.0 mm fat. © 2017 American

  20. Application of multi-objective controller to optimal tuning of PID gains for a hydraulic turbine regulating system using adaptive grid particle swarm optimization.

    PubMed

    Chen, Zhihuan; Yuan, Yanbin; Yuan, Xiaohui; Huang, Yuehua; Li, Xianshan; Li, Wenwu

    2015-05-01

    A hydraulic turbine regulating system (HTRS) is one of the most important components of a hydropower plant, playing a key role in maintaining the safety, stability and economical operation of hydro-electrical installations. At present, the conventional PID controller is widely applied in the HTRS system for its practicability and robustness, and the primary problem with this control law is how to optimally tune the parameters, i.e. the determination of PID controller gains for satisfactory performance. In this paper, a multi-objective evolutionary algorithm named adaptive grid particle swarm optimization (AGPSO) is applied to solve the PID gain tuning problem of the HTRS system. This AGPSO-based method, which differs from traditional single-objective optimization methods, is designed to handle settling time and overshoot level simultaneously, generating a set of non-inferior alternative solutions (i.e. a Pareto set). Furthermore, a fuzzy-based membership value assignment method is employed to choose the best compromise solution from the obtained Pareto set. An illustrative example of parameter tuning for the nonlinear HTRS system is introduced to verify the feasibility and effectiveness of the proposed AGPSO-based optimization approach, as compared with two other prominent multi-objective algorithms, i.e. Non-dominated Sorting Genetic Algorithm II (NSGAII) and Strength Pareto Evolutionary Algorithm II (SPEAII), in terms of the quality and diversity of the obtained Pareto solution sets. Simulation results show that the AGPSO approach outperforms the compared methods, with higher efficiency and better quality, whether the HTRS system works under unload or load conditions. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
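
    The fuzzy membership step mentioned above is commonly implemented as sketched below, under the assumption that both objectives (settling time, overshoot) are minimized; the Pareto points are invented.

    ```python
    import numpy as np

    def best_compromise(front):
        """Pick the Pareto point with the highest normalized fuzzy membership."""
        f = np.asarray(front, dtype=float)
        mu = (f.max(axis=0) - f) / (f.max(axis=0) - f.min(axis=0))  # per-objective
        score = mu.sum(axis=1) / mu.sum()      # normalized membership per point
        return int(score.argmax())

    pareto = [[1.2, 0.30], [1.5, 0.18], [2.0, 0.10], [2.8, 0.06]]
    print("best compromise index:", best_compromise(pareto))
    ```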

  1. A combined geostatistical-optimization model for the optimal design of a groundwater quality monitoring network

    NASA Astrophysics Data System (ADS)

    Kolosionis, Konstantinos; Papadopoulou, Maria P.

    2017-04-01

    Monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation due to extensive agricultural activities. In this work, a simulation-optimization framework is developed based on heuristic optimization methodologies and geostatistical modeling approaches to obtain an optimal design for a groundwater quality monitoring network. Groundwater quantity and quality data obtained from 43 existing observation locations at 3 different hydrological periods in the Mires basin in Crete, Greece, are used in the proposed framework, in terms of regression kriging, to develop the spatial distribution of nitrate concentration in the aquifer of interest. Based on the existing groundwater quality mapping, the proposed optimization tool determines a cost-effective observation well network that contributes significant information to water managers and authorities. The elimination of observation wells that add little or no beneficial information to the groundwater level and quality mapping of the area can be achieved using estimation uncertainty and statistical error metrics without affecting the assessment of the groundwater quality. Given the high maintenance cost of groundwater monitoring networks, the proposed tool could be used by water regulators in the decision-making process to obtain an efficient network design.

  2. A Particle Swarm Optimization Algorithm for Optimal Operating Parameters of VMI Systems in a Two-Echelon Supply Chain

    NASA Astrophysics Data System (ADS)

    Sue-Ann, Goh; Ponnambalam, S. G.

    This paper focuses on the operational issues of a Two-echelon Single-Vendor-Multiple-Buyers Supply chain (TSVMBSC) under the vendor managed inventory (VMI) mode of operation. To determine the optimal sales quantity for each buyer in the TSVMBSC, a mathematical model is formulated. From the optimal sales quantity, the optimal sales price can be obtained, which in turn determines the optimal channel profit and the contract price between the vendor and buyer. All these parameters depend upon the revenue sharing between the vendor and buyers. A Particle Swarm Optimization (PSO) algorithm is proposed for this problem. Solutions obtained from PSO are compared with the best known results reported in the literature.

  3. Optimization algorithms for large-scale multireservoir hydropower systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiew, K.L.

    Five optimization algorithms were rigorously evaluated based on applications to a hypothetical five-reservoir hydropower system. These algorithms are incremental dynamic programming (IDP), successive linear programming (SLP), the feasible direction method (FDM), optimal control theory (OCT) and objective-space dynamic programming (OSDP). The performance of these algorithms was comparatively evaluated using unbiased, objective criteria which include accuracy of results, rate of convergence, smoothness of resulting storage and release trajectories, computer time and memory requirements, robustness and other pertinent secondary considerations. Results have shown that all the algorithms, with the exception of OSDP, converge to optimum objective values within 1.0% of one another. The highest objective value is obtained by IDP, followed closely by OCT. Computer times required by these algorithms, however, differ by more than two orders of magnitude, ranging from 10 seconds in the case of OCT to a maximum of about 2000 seconds for IDP. With a well-designed penalty scheme to deal with state-space constraints, OCT proves to be the most efficient algorithm based on its overall performance. SLP, FDM, and OCT were applied to the case study of the Mahaweli project, a ten-powerplant system in Sri Lanka.

  4. Optimal quantum observables

    NASA Astrophysics Data System (ADS)

    Haapasalo, Erkka; Pellonpää, Juha-Pekka

    2017-12-01

    Various forms of optimality for quantum observables described as normalized positive-operator-valued measures (POVMs) are studied in this paper. We give characterizations for observables that determine the values of the measured quantity with probabilistic certainty or a state of the system before or after the measurement. We investigate observables that are free from noise caused by classical post-processing, mixing, or pre-processing of quantum nature. Especially, a complete characterization of pre-processing and post-processing clean observables is given, and necessary and sufficient conditions are imposed on informationally complete POVMs within the set of pure states. We also discuss joint and sequential measurements of optimal quantum observables.

  5. An algorithm for the optimal collection of wet waste.

    PubMed

    Laureri, Federica; Minciardi, Riccardo; Robba, Michela

    2016-02-01

    This work concerns the development of an approach for planning wet waste (food waste and similar) collection at a metropolitan scale. Some specific modeling features distinguish this waste collection problem from others. For instance, there may be significant differences in the values of the parameters (such as weight and volume) characterizing the various collection points. As with classical waste collection planning, wet waste collection involves difficult combinatorial problems, where determining an optimal solution may require a very large computational effort for problem instances of appreciable dimensionality. For this reason, in this work a heuristic procedure for the optimal planning of wet waste collection is developed and applied to problem instances drawn from a real case study. The performances obtained by applying this procedure are evaluated by comparison with those obtainable via a general-purpose mathematical programming software package, as well as with those obtained by applying very simple decision rules commonly used in practice. The considered case study is an area corresponding to the historical center of the Municipality of Genoa. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Optimal Variational Asymptotic Method for Nonlinear Fractional Partial Differential Equations.

    PubMed

    Baranwal, Vipul K; Pandey, Ram K; Singh, Om P

    2014-01-01

    We propose an optimal variational asymptotic method to solve time fractional nonlinear partial differential equations. In the proposed method, an arbitrary number of auxiliary parameters γ₀, γ₁, γ₂, … and auxiliary functions H₀(x), H₁(x), H₂(x), … are introduced in the correction functional of the standard variational iteration method. The optimal values of these parameters are obtained by minimizing the square residual error. To test the method, we apply it to solve two important classes of nonlinear partial differential equations: (1) the fractional advection-diffusion equation with nonlinear source term and (2) the fractional Swift-Hohenberg equation. Only a few iterations are required to achieve fairly accurate solutions of both the first and second problems.

  7. Singular-Arc Time-Optimal Trajectory of Aircraft in Two-Dimensional Wind Field

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan

    2006-01-01

    This paper presents a study of a minimum time-to-climb trajectory analysis for aircraft flying in a two-dimensional, altitude-dependent wind field. The time optimal control problem possesses a singular control structure when the lift coefficient is taken as a control variable. A singular arc analysis is performed to obtain an optimal control solution on the singular arc. Using a time-scale separation with the flight path angle treated as a fast state, the dimensionality of the optimal control solution is reduced by eliminating the lift coefficient control. A further singular arc analysis is used to decompose the original optimal control solution into the flight path angle solution and a trajectory solution as a function of the airspeed and altitude. The optimal control solutions for the initial and final climb segments are computed using a shooting method with known starting values on the singular arc. The numerical results of the shooting method show that the optimal flight path angles on the initial and final climb segments are constant. The analytical approach provides a rapid means for analyzing a time optimal trajectory for aircraft performance.

  8. Genetic Algorithm Optimizes Q-LAW Control Parameters

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; von Allmen, Paul; Petropoulos, Anastassios; Terrile, Richard

    2008-01-01

    A document discusses a multi-objective genetic algorithm designed to optimize Lyapunov feedback control law (Q-law) parameters in order to efficiently find Pareto-optimal solutions for low-thrust trajectories for electric propulsion systems. These would be propellant-optimal solutions for a given flight time, or flight-time-optimal solutions for a given propellant requirement. The approximate solutions are used as good initial solutions for high-fidelity optimization tools. When good initial solutions are used, the high-fidelity optimization tools converge quickly to a locally optimal solution near the initial solution. Q-law control parameters are represented as real-valued genes in the genetic algorithm. The performance of the Q-law control parameters is evaluated in the multi-objective space (flight time vs. propellant mass) and sorted by the non-dominated sorting method, which assigns a better fitness value to solutions that are dominated by a smaller number of other solutions. With the ranking result, the genetic algorithm encourages the solutions with higher fitness values to participate in the reproduction process, improving the solutions in the evolution process. The population of solutions converges to the Pareto front that is permitted within the Q-law control parameter space.
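
    The ranking rule described above can be sketched in a few lines, assuming both objectives are minimized; the candidate solutions are invented (flight time in days, propellant mass in kg).

    ```python
    def domination_counts(points):
        """For each point, count how many other points dominate it (0 = Pareto)."""
        def dominates(a, b):
            return (all(x <= y for x, y in zip(a, b))
                    and any(x < y for x, y in zip(a, b)))
        return [sum(dominates(q, p) for q in points) for p in points]

    solutions = [(300.0, 55.0), (280.0, 60.0), (320.0, 50.0), (310.0, 58.0)]
    ranks = domination_counts(solutions)   # lower count = better fitness
    ```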

  9. Obtaining the Iodine Value of Various Oils via Bromination with Pyridinium Tribromide

    ERIC Educational Resources Information Center

    Simurdiak, Michael; Olukoga, Olushola; Hedberg, Kirk

    2016-01-01

    A laboratory exercise was devised that allows students to rapidly and fairly accurately determine the iodine value of oleic acid. This method utilizes the addition of elemental bromine to the unsaturated bonds in oleic acid, due to bromine's relatively fast reaction rate compared to that of the traditional Wijs solution method. This method also…

  10. Optimization of dilute acid pretreatment of water hyacinth biomass for enzymatic hydrolysis and ethanol production

    PubMed Central

    Idrees, Muhammad; Adnan, Ahmad; Sheikh, Shahzad; Qureshi, Fahim Ashraf

    2013-01-01

    The present study was conducted to optimize the pretreatment process used for enzymatic hydrolysis of a lignocellulosic biomass (water hyacinth, WH), which is a renewable resource with decentralized availability for the production of bioethanol. Response surface methodology was employed for the optimization of temperature (°C), time (h) and different concentrations of maleic acid (MA), sulfuric acid (SA) and phosphoric acid (PA), which appeared to be significant variables with P < 0.05. High F and R² values and a low P-value for hydrolysis yield indicated the model's predictability. The pretreated biomass produced 39.96 g/l, 39.86 g/l and 37.9 g/l of reducing sugars during enzymatic hydrolysis, with yields of 79.93%, 78.71% and 75.9% from PA-, MA- and SA-treated biomass, respectively. The order of catalytic effectiveness for hydrolysis yield was found to be phosphoric acid > maleic acid > sulfuric acid. A mixture of sugars was obtained during dilute acid pretreatment, with glucose being the most prominent sugar, while pure glucose was obtained during enzymatic hydrolysis. The sugars obtained during enzymatic hydrolysis were finally fermented to ethanol using commercial baker's yeast (Saccharomyces cerevisiae), with a yield of 0.484 g/g of reducing sugars, which is 95% of the theoretical yield (0.51 g/g glucose). PMID:26417215

  11. Process optimization of rolling for zincked sheet technology using response surface methodology and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Ji, Liang-Bo; Chen, Fang

    2017-07-01

    Numerical simulation and intelligent optimization technology were adopted for the rolling and extrusion of zincked sheet. Through response surface methodology (RSM), a genetic algorithm (GA) and data processing technology, an efficient optimization of the process parameters for rolling of zincked sheet was investigated. The influence trends of roller gap, rolling speed and friction factor on the reduction rate and plate shortening rate were analyzed first. Then a predictive response surface model for the comprehensive quality index of the part was created using RSM, and simulated and predicted values were compared. The optimal process parameters for rolling were then solved for using the genetic algorithm and verified, yielding the optimum rolling process parameters. The approach is feasible and effective.

  12. Numerical optimization of conical flow waveriders including detailed viscous effects

    NASA Technical Reports Server (NTRS)

    Bowcutt, Kevin G.; Anderson, John D., Jr.; Capriotti, Diego

    1987-01-01

    A family of optimized hypersonic waveriders is generated and studied wherein detailed viscous effects are included within the optimization process itself. This is in contrast to previous optimized waverider work, wherein purely inviscid flow is used to obtain the waverider shapes. For the present waveriders, the undersurface is a streamsurface of an inviscid conical flowfield, the upper surface is a streamsurface of the inviscid flow over a tapered cylinder (calculated by the axisymmetric method of characteristics), and the viscous effects are treated by integral solutions of the boundary layer equations. Transition from laminar to turbulent flow is included within the viscous calculations. The optimization is carried out using a nonlinear simplex method. The resulting family of viscous hypersonic waveriders yields predicted high values of lift/drag, high enough to break the L/D barrier based on experience with other hypersonic configurations. Moreover, the numerical optimization process for the viscous waveriders results in distinctly different shapes compared to previous work with inviscid-designed waveriders. Also, the fine details of the viscous solution, such as how the shear stress is distributed over the surface, and the location of transition, are crucial to the details of the resulting waverider geometry. Finally, the moment coefficient variations and heat transfer distributions associated with the viscous optimized waveriders are studied.

  13. Optimal flight initiation distance.

    PubMed

    Cooper, William E; Frederick, William G

    2007-01-07

    Decisions regarding flight initiation distance have received scant theoretical attention. A graphical model by Ydenberg and Dill (1986. The economics of fleeing from predators. Adv. Stud. Behav. 16, 229-249) that has guided research for the past 20 years specifies when escape begins. In the model, a prey detects a predator, monitors its approach until the costs of escape and of remaining are equal, and then flees. The distance between predator and prey when escape is initiated (approach distance = flight initiation distance) occurs where the decreasing cost of remaining and the increasing cost of fleeing intersect. We argue that prey fleeing as predicted cannot maximize fitness because the best the prey can do is break even during an encounter. We develop two optimality models, one applying when all expected future contribution to fitness (residual reproductive value, RRV) is lost if the prey dies, the other when any fitness gained (increase in expected RRV) during the encounter is retained after death. Both models predict optimal flight initiation distance from initial expected fitness, benefits obtainable during encounters, costs of escaping, and probability of being killed. Predictions match the extensively verified predictions of Ydenberg and Dill's (1986) model. Our main conclusion is that optimality models are preferable to break-even models because they permit fitness maximization, offer many new testable predictions, and allow assessment of prey decisions in many naturally occurring situations through modification of benefit, escape cost, and risk functions.

  14. Oxidative degradation of biorefinery lignin obtained after pretreatment of forest residues of Douglas Fir.

    PubMed

    Srinivas, Keerthi; de Carvalho Oliveira, Fernanda; Teller, Philip Johan; Gonçalves, Adilson Roberto; Helms, Gregory L; Ahring, Birgitte Kaer

    2016-12-01

    Harvested forest residues are usually considered a fire hazard and used as "hog fuel," which results in air pollution. In this study, the biorefinery lignin stream obtained after wet explosion pretreatment and enzymatic hydrolysis of forestry residues of Douglas fir (FS-10) was characterized and further wet oxidized under alkaline conditions. The studies indicated that at 10% solids, 11.7 wt% alkali and 15 min residence time, maximum yields were obtained for glucose (12.9 wt%) and vanillin (0.4 wt%) at 230°C; formic acid (11.6 wt%) at 250°C; acetic acid (10.7 wt%), hydroxybenzaldehyde (0.2 wt%) and syringaldehyde (0.13 wt%) at 280°C; and lactic acid (12.4 wt%) at 300°C. FTIR analysis of the solid residue after wet oxidation showed that the aromatic skeletal vibrations relating to lignin compounds increased with temperature, indicating that higher severity could result in increased lignin oxidation products. The results obtained as part of this study are significant for understanding and optimizing processes for producing high-value bioproducts from forestry residues. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Optimization of a new flow design for solid oxide cells using computational fluid dynamics modelling

    NASA Astrophysics Data System (ADS)

    Duhn, Jakob Dragsbæk; Jensen, Anker Degn; Wedel, Stig; Wix, Christian

    2016-12-01

    The design of a gas distributor to distribute gas flow into parallel channels for Solid Oxide Cells (SOC) is optimized, with respect to flow distribution, using Computational Fluid Dynamics (CFD) modelling. The CFD model is based on a 3D geometric model, and the optimized structural parameters include the width of the channels in the gas distributor and the area in front of the parallel channels. The flow of the optimized design is found to have a flow uniformity index value of 0.978. The effects of deviations from the assumptions used in the modelling (isothermal and non-reacting flow) are evaluated, and it is found that a temperature gradient along the parallel channels does not affect the flow uniformity, whereas a temperature difference between the channels does. The impact of the flow distribution on the maximum obtainable conversion during operation is also investigated, and the obtainable overall conversion is found to be directly proportional to the flow uniformity. Finally, the effect of manufacturing errors is investigated. The design is shown to be robust towards deviations from design dimensions of at least ±0.1 mm, which is well within obtainable tolerances.
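
    One common area-weighted definition of a flow uniformity index is sketched below (an assumed form; the abstract does not spell out its exact definition). It equals 1 when all channels see the same velocity and decreases as the spread grows.

    ```python
    import numpy as np

    def uniformity_index(u, area):
        """Area-weighted flow uniformity index (1 = perfectly uniform)."""
        u, area = np.asarray(u, float), np.asarray(area, float)
        u_mean = np.sum(u * area) / np.sum(area)
        return 1.0 - np.sum(np.abs(u - u_mean) * area) / (2.0 * u_mean * np.sum(area))

    channel_velocities = [1.02, 0.99, 1.01, 0.98, 1.00]   # hypothetical m/s
    areas = np.ones(5)                                    # equal channel areas
    print(uniformity_index(channel_velocities, areas))
    ```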

  16. Enhancing Polyhedral Relaxations for Global Optimization

    ERIC Educational Resources Information Center

    Bao, Xiaowei

    2009-01-01

    During the last decade, global optimization has attracted a lot of attention due to the increased practical need for obtaining global solutions and the success in solving many global optimization problems that were previously considered intractable. In general, the central question of global optimization is to find an optimal solution to a given…

  17. Optimal savings and the value of population.

    PubMed

    Arrow, Kenneth J; Bensoussan, Alain; Feng, Qi; Sethi, Suresh P

    2007-11-20

    We study a model of economic growth in which an exogenously changing population enters in the objective function under total utilitarianism and into the state dynamics as the labor input to the production function. We consider an arbitrary population growth until it reaches a critical level (resp. saturation level) at which point it starts growing exponentially (resp. it stops growing altogether). This requires population as well as capital as state variables. By letting the population variable serve as the surrogate of time, we are still able to depict the optimal path and its convergence to the long-run equilibrium on a two-dimensional phase diagram. The phase diagram consists of a transient curve that reaches the classical curve associated with a positive exponential growth at the time the population reaches the critical level. In the case of an asymptotic population saturation, we expect the transient curve to approach the equilibrium as the population approaches its saturation level. Finally, we characterize the approaches to the classical curve and to the equilibrium.

  18. Optimal savings and the value of population

    PubMed Central

    Arrow, Kenneth J.; Bensoussan, Alain; Feng, Qi; Sethi, Suresh P.

    2007-01-01

    We study a model of economic growth in which an exogenously changing population enters in the objective function under total utilitarianism and into the state dynamics as the labor input to the production function. We consider an arbitrary population growth until it reaches a critical level (resp. saturation level) at which point it starts growing exponentially (resp. it stops growing altogether). This requires population as well as capital as state variables. By letting the population variable serve as the surrogate of time, we are still able to depict the optimal path and its convergence to the long-run equilibrium on a two-dimensional phase diagram. The phase diagram consists of a transient curve that reaches the classical curve associated with a positive exponential growth at the time the population reaches the critical level. In the case of an asymptotic population saturation, we expect the transient curve to approach the equilibrium as the population approaches its saturation level. Finally, we characterize the approaches to the classical curve and to the equilibrium. PMID:17984059

  19. Rotorcraft Optimization Tools: Incorporating Rotorcraft Design Codes into Multi-Disciplinary Design, Analysis, and Optimization

    NASA Technical Reports Server (NTRS)

    Meyn, Larry A.

    2018-01-01

    One of the goals of NASA's Revolutionary Vertical Lift Technology Project (RVLT) is to provide validated tools for multidisciplinary design, analysis and optimization (MDAO) of vertical lift vehicles. As part of this effort, the software package, RotorCraft Optimization Tools (RCOTOOLS), is being developed to facilitate incorporating key rotorcraft conceptual design codes into optimizations using the OpenMDAO multi-disciplinary optimization framework written in Python. RCOTOOLS, also written in Python, currently supports the incorporation of the NASA Design and Analysis of RotorCraft (NDARC) vehicle sizing tool and the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics II (CAMRAD II) analysis tool into OpenMDAO-driven optimizations. Both of these tools use detailed, file-based inputs and outputs, so RCOTOOLS provides software wrappers to update input files with new design variable values, execute these codes and then extract specific response variable values from the file outputs. These wrappers are designed to be flexible and easy to use. RCOTOOLS also provides several utilities to aid in optimization model development, including Graphical User Interface (GUI) tools for browsing input and output files in order to identify text strings that are used to identify specific variables as optimization input and response variables. This paper provides an overview of RCOTOOLS and its use

  20. Optimization of complex slater-type functions with analytic derivative methods for describing photoionization differential cross sections.

    PubMed

    Matsuzaki, Rei; Yabushita, Satoshi

    2017-05-05

    The complex basis function (CBF) method applied to various atomic and molecular photoionization problems can be interpreted as an L2 method to solve the driven-type (inhomogeneous) Schrödinger equation, whose driven term is the dipole operator times the initial-state wave function. However, efficient basis functions for representing the solution have not been fully studied. Moreover, the relation between their solution and that of the ordinary Schrödinger equation has been unclear. For these reasons, most previous applications have been limited to total cross sections. To examine the applicability of the CBF method to differential cross sections and asymmetry parameters, we show that the complex-valued solution to the driven-type Schrödinger equation can be variationally obtained by optimizing the complex trial functions for the frequency-dependent polarizability. In the test calculations made for the hydrogen photoionization problem with five or six complex Slater-type orbitals (cSTOs), their complex-valued expansion coefficients and orbital exponents have been optimized with the analytic derivative method. Both the real and imaginary parts of the solution have been obtained accurately in a wide region covering typical molecular regions. The phase shifts and asymmetry parameters are successfully obtained by extrapolating the CBF solution from the inner matching region to the asymptotic region using the WKB method. The distribution of the optimized orbital exponents in the complex plane is explained based on the close connection between the CBF method and the driven-type equation method. The obtained information is essential to constructing appropriate basis sets in future molecular applications. © 2017 Wiley Periodicals, Inc.

  1. Optimization and evaluation of metal injection molding by using X-ray tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Shidi; Zhang, Ruijie; Qu, Xuanhui, E-mail: quxh@ustb.edu.cn

    2015-06-15

    6061 aluminum alloy and 316L stainless steel green bodies were obtained by using different injection parameters (injection pressure, speed and temperature). After the injection process, the green bodies were scanned by X-ray tomography. The projection and reconstruction images show the different kinds of defects caused by improper injection parameters. Then, 3D rendering of the Al alloy green bodies was used to demonstrate the spatial morphology characteristics of the serious defects. Based on the scanned and calculated results, it is convenient to obtain the proper injection parameters for the Al alloy. The reasons for defect formation were then discussed. During mold filling, the serious defects mainly formed in the case of low injection temperature and high injection speed. According to the gray value distribution of the projection image, a threshold gray value was obtained to evaluate whether the quality of a green body meets the desired standard. The proper injection parameters of 316L stainless steel can be obtained efficiently by using the method of analyzing the Al alloy injection. - Highlights: • Different types of defects in green bodies were scanned by using X-ray tomography. • Reasons for the defect formation were discussed. • Optimization of the injection parameters can be simplified greatly by the way of X-ray tomography. • Evaluation standard of the injection process can be obtained by using the gray value distribution of the projection image.

  2. Optimizing Reservoir Operation to Adapt to the Climate Change

    NASA Astrophysics Data System (ADS)

    Madadgar, S.; Jung, I.; Moradkhani, H.

    2010-12-01

    Climate change and upcoming variation in flood timing necessitate the adaptation of the current rule curves developed for the operation of water reservoirs, so as to reduce the potential damage from either flood or drought events. This study attempts to optimize the current rule curves of Cougar Dam on the McKenzie River in Oregon, addressing some possible climate conditions in the 21st century. The objective is to minimize the failure of operation to meet either designated demands or the flood limit at a downstream checkpoint. A simulation/optimization model, including the standard operation policy and a global optimization method, tunes the current rule curve for 8 GCMs and 2 greenhouse gas emission scenarios. The Precipitation Runoff Modeling System (PRMS) is used as the hydrology model to project the streamflow for the period 2000-2100 using downscaled precipitation and temperature forcing from the 8 GCMs and two emission scenarios. An ensemble of rule curves, each associated with an individual scenario, is obtained by optimizing the reservoir operation. The simulation of reservoir operation, for all the scenarios and the expected value of the ensemble, is conducted, and performance assessment is made using statistical indices including reliability, resilience, vulnerability and sustainability.
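
    The reliability, resilience and vulnerability indices cited here have standard definitions going back to Hashimoto et al. (1982). A minimal Python sketch of the three, assuming simulated supply and demand series; this illustrates the indices only, not the study's simulation/optimization model:

      import numpy as np

      def performance_indices(supply, demand):
          supply, demand = np.asarray(supply, float), np.asarray(demand, float)
          failure = supply < demand                    # time steps with a deficit
          n_fail = max(int(failure.sum()), 1)
          reliability = 1.0 - failure.mean()           # fraction of satisfactory steps
          # Resilience: chance that a failure step is followed by a recovery
          resilience = np.sum(failure[:-1] & ~failure[1:]) / n_fail
          # Vulnerability: average deficit magnitude over the failure steps
          vulnerability = np.where(failure, demand - supply, 0.0).sum() / n_fail
          return reliability, resilience, vulnerability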

  3. Optimization of light source parameters in the photodynamic therapy of heterogeneous prostate

    NASA Astrophysics Data System (ADS)

    Li, Jun; Altschuler, Martin D.; Hahn, Stephen M.; Zhu, Timothy C.

    2008-08-01

    results from heterogeneous optical data with those obtained from average homogeneous optical properties. The optimized treatment plans are also compared with the reference clinical plan, defined as the plan with sources of equal strength, distributed regularly in space, which delivers a mean value of prescribed fluence at detector locations within the treatment region. The study suggests that comprehensive optimization of source parameters (i.e. strengths, lengths and locations) is feasible, thus allowing acceptable dose coverage in a heterogeneous prostate PDT within the time constraints of the PDT procedure.

  4. Optimization of IBF parameters based on adaptive tool-path algorithm

    NASA Astrophysics Data System (ADS)

    Deng, Wen Hui; Chen, Xian Hua; Jin, Hui Liang; Zhong, Bo; Hou, Jin; Li, An Qi

    2018-03-01

    As a kind of Computer Controlled Optical Surfacing (CCOS) technology, Ion Beam Figuring (IBF) has obvious advantages in the control of surface accuracy, surface roughness and subsurface damage. The superiority and characteristics of IBF in optical component processing are analyzed from the point of view of the removal mechanism. To obtain a more effective and automatic tool path carrying dwell-time information, a novel algorithm is proposed in this work. Based on the removal functions measured on our IBF equipment and the adaptive tool path, optimized parameters are obtained through analysis of the residual error that would be created in the polishing process. A Φ600 mm plane reflector element was used as a simulation instance. The simulation result shows that after four combinations of processing, the surface accuracy in PV (peak-valley) value and RMS (root mean square) value was reduced to 4.81 nm and 0.495 nm from 110.22 nm and 13.998 nm, respectively, over the 98% aperture. The result shows that the algorithm and the optimized parameters provide a good theoretical basis for high-precision IBF processing.

  5. Optimization of an auto-thermal ammonia synthesis reactor using cyclic coordinate method

    NASA Astrophysics Data System (ADS)

    A-N Nguyen, T.; Nguyen, T.-A.; Vu, T.-D.; Nguyen, K.-T.; K-T Dao, T.; P-H Huynh, K.

    2017-06-01

    The ammonia synthesis system is an important chemical process used in the manufacture of fertilizers, chemicals, explosives, fibers, plastics and refrigeration. In the literature, many works approaching the modeling, simulation and optimization of an auto-thermal ammonia synthesis reactor can be found. However, they focus only on the optimization of the reactor length while keeping the other parameters constant. In this study, the other parameters are also considered in the optimization problem, such as the temperature of the feed gas entering the catalyst zone and the initial nitrogen proportion. The optimization problem requires the maximization of an objective function that is a multivariable function, subject to a number of equality constraints involving the solution of coupled differential equations as well as an inequality constraint. The cyclic coordinate search was applied to solve the multivariable optimization problem. In each coordinate, the golden section method was applied to find the maximum value. The inequality constraints were treated using a penalty method. The coupled differential equation system was solved using the 4th-order Runge-Kutta method. The results obtained from this study are also compared to the results from the literature.
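
    A minimal Python sketch of the cyclic coordinate search with a golden-section line search, as described here. The objective, bounds and sweep count are hypothetical stand-ins; a penalty term for the inequality constraints would be added to the objective, as the abstract describes:

      import numpy as np

      PHI = (np.sqrt(5) - 1) / 2  # golden ratio conjugate, ~0.618

      def golden_section_max(f, a, b, tol=1e-6):
          # Maximize a unimodal 1-D function on [a, b].
          c, d = b - PHI * (b - a), a + PHI * (b - a)
          while abs(b - a) > tol:
              if f(c) > f(d):
                  b, d = d, c
                  c = b - PHI * (b - a)
              else:
                  a, c = c, d
                  d = a + PHI * (b - a)
          return 0.5 * (a + b)

      def cyclic_coordinate_max(f, x0, bounds, sweeps=20):
          # Optimize one coordinate at a time, holding the others fixed.
          x = np.array(x0, dtype=float)
          for _ in range(sweeps):
              for i, (lo, hi) in enumerate(bounds):
                  f_1d = lambda t: f(np.concatenate([x[:i], [t], x[i + 1:]]))
                  x[i] = golden_section_max(f_1d, lo, hi)
          return x

      # Hypothetical smooth objective with its maximum at (1, 2)
      f = lambda x: -(x[0] - 1.0) ** 2 - (x[1] - 2.0) ** 2
      print(cyclic_coordinate_max(f, [0.0, 0.0], [(-5, 5), (-5, 5)]))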

  6. Methodology for the optimal design of an integrated first and second generation ethanol production plant combined with power cogeneration.

    PubMed

    Bechara, Rami; Gomez, Adrien; Saint-Antonin, Valérie; Schweitzer, Jean-Marc; Maréchal, François

    2016-08-01

    The application of methodologies for the optimal design of integrated processes has seen increased interest in the literature. This article builds on previous works and applies a systematic methodology to an integrated first and second generation ethanol production plant with power cogeneration. The methodology breaks into process simulation, heat integration, thermo-economic evaluation, multi-variable evolutionary optimization of exergy efficiency vs. capital costs, and process selection via profitability maximization. Optimization generated Pareto solutions with exergy efficiency ranging between 39.2% and 44.4% and capital costs from 210M$ to 390M$. The Net Present Value was positive for only two scenarios and for low efficiency, low hydrolysis points. The minimum cellulosic ethanol selling price was sought to obtain a maximum NPV of zero for high efficiency, high hydrolysis alternatives. The obtained optimal configuration presented maximum exergy efficiency, hydrolyzed bagasse fraction, capital costs and ethanol production rate, and minimum cooling water consumption and power production rate. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Determination of the Optimal Fourier Number on the Dynamic Thermal Transmission

    NASA Astrophysics Data System (ADS)

    Bruzgevičius, P.; Burlingis, A.; Norvaišienė, R.

    2016-12-01

    This article presents the results of experimental research on transient heat transfer in a multilayered (heterogeneous) wall. Our non-steady thermal transmission simulation is based on a finite-difference calculation method. The value of the Fourier number reflects the similarity of thermal variation in conditional layers of an enclosure. Most researchers recommend using a Fourier number of no more than 0.5 when performing calculations of dynamic (transient) heat transfer. The value of the Fourier number is determined in order to acquire reliable calculation results with optimal accuracy. To compare the simulation results with the experimental research, a transient heat transfer calculation spreadsheet was created. Our research has shown that a Fourier number of around 0.5, or even 0.32, is not sufficient (≈17% of oscillation amplitude) for calculations of transient heat transfer in a multilayered wall. The least distorted calculation results were obtained when the multilayered enclosure was divided into conditional layers with almost equal Fourier number values and when the value of the Fourier number was around 1/6, i.e., approximately 0.17. Statistical deviation analysis using the Statistical Analysis System was applied to assess the accuracy of the spreadsheet calculation and was developed on the basis of our established methodology. The mean and median absolute errors, as well as their confidence intervals, have been estimated by the two methods with optimal accuracy (Fo_MDF = 0.177 and Fo_EPS = 0.1633).
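
    For context, the grid Fourier number of an explicit finite-difference conduction scheme is Fo = αΔt/Δx², and the classical stability bound for the explicit scheme is Fo ≤ 0.5. A minimal sketch of such an update in Python (a generic 1-D explicit scheme, not the authors' spreadsheet implementation):

      import numpy as np

      def transient_conduction_1d(T0, alpha, dx, dt, steps, t_left, t_right):
          # Explicit update: T_i <- T_i + Fo * (T_{i-1} - 2*T_i + T_{i+1}),
          # with Fo = alpha*dt/dx**2; stability requires Fo <= 0.5, and the
          # article argues that Fo near 1/6 gives the least distorted results.
          fo = alpha * dt / dx ** 2
          assert fo <= 0.5, f"explicit scheme unstable: Fo = {fo:.3f}"
          T = np.array(T0, dtype=float)
          for _ in range(steps):
              T[1:-1] += fo * (T[:-2] - 2.0 * T[1:-1] + T[2:])
              T[0], T[-1] = t_left, t_right  # fixed boundary temperatures
          return T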

  8. Genetic programming assisted stochastic optimization strategies for optimization of glucose to gluconic acid fermentation.

    PubMed

    Cheema, Jitender Jit Singh; Sankpal, Narendra V; Tambe, Sanjeev S; Kulkarni, Bhaskar D

    2002-01-01

    This article presents two hybrid strategies for the modeling and optimization of the glucose to gluconic acid batch bioprocess. In the hybrid approaches, first a novel artificial intelligence formalism, namely, genetic programming (GP), is used to develop a process model solely from the historic process input-output data. In the next step, the input space of the GP-based model, representing process operating conditions, is optimized using two stochastic optimization (SO) formalisms, viz., genetic algorithms (GAs) and simultaneous perturbation stochastic approximation (SPSA). These SO formalisms possess certain unique advantages over the commonly used gradient-based optimization techniques. The principal advantage of the GP-GA and GP-SPSA hybrid techniques is that process modeling and optimization can be performed exclusively from the process input-output data without invoking the detailed knowledge of the process phenomenology. The GP-GA and GP-SPSA techniques have been employed for modeling and optimization of the glucose to gluconic acid bioprocess, and the optimized process operating conditions obtained thereby have been compared with those obtained using two other hybrid modeling-optimization paradigms integrating artificial neural networks (ANNs) and GA/SPSA formalisms. Finally, the overall optimized operating conditions given by the GP-GA method, when verified experimentally resulted in a significant improvement in the gluconic acid yield. The hybrid strategies presented here are generic in nature and can be employed for modeling and optimization of a wide variety of batch and continuous bioprocesses.
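
    SPSA is attractive in this setting because it estimates the gradient from only two objective evaluations per iteration, regardless of the number of decision variables. A minimal sketch in Spall's standard form; the gain coefficients are hypothetical defaults, not the values tuned in the study:

      import numpy as np

      def spsa_minimize(f, theta0, iters=200, a=0.1, c=0.1,
                        alpha=0.602, gamma=0.101, seed=0):
          rng = np.random.default_rng(seed)
          theta = np.array(theta0, dtype=float)
          for k in range(1, iters + 1):
              ak, ck = a / k ** alpha, c / k ** gamma
              delta = rng.choice([-1.0, 1.0], size=theta.size)  # Bernoulli +/-1
              # Two-sided simultaneous-perturbation gradient estimate
              ghat = (f(theta + ck * delta) - f(theta - ck * delta)) / (2 * ck * delta)
              theta -= ak * ghat
          return theta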

  9. Dependency of Optimal Parameters of the IRIS Template on Image Quality and Border Detection Error

    NASA Astrophysics Data System (ADS)

    Matveev, I. A.; Novik, V. P.

    2017-05-01

    Generation of a template containing the spatial-frequency features of the iris is an important stage of identification. The template is obtained by a wavelet transform in an image region specified by the iris borders. One of the main characteristics of an identification system is its recognition error; the equal error rate (EER) is used as the criterion here. The optimal values (in the sense of minimizing the EER) of the wavelet transform parameters depend on many factors: image quality, sharpness, size of characteristic objects, etc. It is hard to isolate these factors and their influences. This work studies the influence of the following factors on the EER: iris segmentation precision, defocus level and noise level. Several public-domain iris image databases were used in the experiments. The images were subjected to modelled distortions of the said types, and the dependencies of the wavelet parameter and EER values on the distortion levels were built. It is observed that an increase in segmentation error and image noise leads to an increase in the optimal wavelength of the wavelets, whereas an increase in defocus level leads to a decrease in this value.
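
    The EER is the operating point at which the false accept rate (FAR) equals the false reject rate (FRR). A minimal Python sketch, assuming dissimilarity scores (smaller = better match), the usual convention for iris codes:

      import numpy as np

      def equal_error_rate(genuine, impostor):
          # genuine: scores for same-iris comparisons; impostor: different irises.
          genuine, impostor = np.asarray(genuine), np.asarray(impostor)
          thresholds = np.sort(np.concatenate([genuine, impostor]))
          far = np.array([(impostor <= t).mean() for t in thresholds])
          frr = np.array([(genuine > t).mean() for t in thresholds])
          i = np.argmin(np.abs(far - frr))      # point where FAR and FRR cross
          return 0.5 * (far[i] + frr[i]), thresholds[i]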

  10. Generalized Convexity and Concavity Properties of the Optimal Value Function in Parametric Nonlinear Programming.

    DTIC Science & Technology

    1983-04-11

  11. Optimal trajectory generation for mechanical arms. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Iemenschot, J. A.

    1972-01-01

    A general method of generating optimal trajectories between an initial and a final position of an n degree of freedom manipulator arm with nonlinear equations of motion is proposed. The method is based on the assumption that the time history of each of the coordinates can be expanded in a series of simple time functions. By searching over the coefficients of the terms in the expansion, trajectories which minimize the value of a given cost function can be obtained. The method has been applied to a planar three degree of freedom arm.
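
    A minimal Python sketch of the idea: each coordinate's time history is expanded in a series that satisfies the boundary positions by construction, and the coefficients are searched to minimize a cost. The sine basis and the integrated squared-velocity cost are illustrative stand-ins; the thesis's actual cost involves the arm's nonlinear equations of motion:

      import numpy as np
      from scipy.optimize import minimize

      T, N_TERMS = 2.0, 3                       # motion time, basis size (assumed)
      t = np.linspace(0.0, T, 201)

      def joint_path(c, q0, qf):
          # Straight line plus sine series; the sines vanish at both endpoints,
          # so the boundary positions hold for any coefficient vector c.
          q = q0 + (qf - q0) * t / T
          for k, ck in enumerate(c, start=1):
              q = q + ck * np.sin(k * np.pi * t / T)
          return q

      def cost(c, q0=0.0, qf=1.0):
          qdot = np.gradient(joint_path(c, q0, qf), t)
          return np.sum(qdot ** 2) * (t[1] - t[0])   # integrated squared velocity

      res = minimize(cost, x0=np.zeros(N_TERMS), method="Nelder-Mead")
      print("optimal coefficients:", np.round(res.x, 4))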

  12. Phonon optimized interatomic potential for aluminum

    NASA Astrophysics Data System (ADS)

    Muraleedharan, Murali Gopal; Rohskopf, Andrew; Yang, Vigor; Henry, Asegun

    2017-12-01

    We address the problem of generating a phonon optimized interatomic potential (POP) for aluminum. The POP methodology, which has already been shown to work for semiconductors such as silicon and germanium, uses an evolutionary strategy based on a genetic algorithm (GA) to optimize the free parameters in an empirical interatomic potential (EIP). For aluminum, we used the Vashishta functional form. The training data set was generated ab initio, consisting of forces, energy vs. volume, stresses, and harmonic and cubic force constants obtained from density functional theory (DFT) calculations. Existing potentials for aluminum, such as the embedded atom method (EAM) and charge-optimized many-body (COMB3) potential, show larger errors when the EIP forces are compared with those predicted by DFT, and thus they are not particularly well suited for reproducing phonon properties. Using a comprehensive Vashishta functional form, which involves short and long-ranged interactions, as well as three-body terms, we were able to better capture interactions that reproduce phonon properties accurately. Furthermore, the Vashishta potential is flexible enough to be extended to Al2O3 and the interface between Al-Al2O3, which is technologically important for combustion of solid Al nano powders. The POP developed here is tested for accuracy by comparing phonon thermal conductivity accumulation plots, density of states, and dispersion relations with DFT results. It is shown to perform well in molecular dynamics (MD) simulations as well, where the phonon thermal conductivity is calculated via the Green-Kubo relation. The results are within 10% of the values obtained by solving the Boltzmann transport equation (BTE), employing Fermi's Golden Rule to predict the phonon-phonon relaxation times.

  13. Comparison of the T2-star Values of Placentas Obtained from Pre-eclamptic Patients with Those of a Control Group: an Ex-vivo Magnetic Resonance Imaging Study.

    PubMed

    Yurttutan, Nursel; Bakacak, Murat; Kızıldağ, Betül

    2017-09-29

    Endothelial dysfunction, vasoconstriction, and oxidative stress are described in the pathophysiology of pre-eclampsia, but its aetiology has not been fully elucidated. To examine whether there is a difference between the placentas of pre-eclamptic pregnant women and those of a control group in terms of their T2 star values. Case-control study. Twenty patients diagnosed with pre-eclampsia and 22 healthy controls were included in this study. The placentas obtained after births performed via Caesarean section were taken into the magnetic resonance imaging area in plastic bags within the first postnatal hour, and imaging was performed via a modified DIXON-Quant sequence. Average values were obtained by performing T2 star measurements at four localisations on the placentas. T2 star values measured in the placentas of the control group were found to be significantly lower than those in the pre-eclampsia group (p<0.01). While the mean T2 star value in the pre-eclamptic group was 37.48 ms (standard deviation ± 11.3), this value was 28.74 ms (standard deviation ± 8.08) in the control group. The cut-off T2 star value maximising diagnostic accuracy was 28.59 ms (area under curve: 0.741; 95% confidence interval: 0.592-0.890); sensitivity and specificity were 70% and 63.6%, respectively. In this study, the T2 star value, which is an indicator of iron amount, was found to be significantly lower in the control group than in the pre-eclampsia group. This may be related to the reduction in blood flow to the placenta due to endothelial dysfunction and vasoconstriction, which are important in pre-eclampsia pathophysiology.

  14. Correlation of Lactic Acid and Base Deficit Values Obtained From Arterial and Peripheral Venous Samples in a Pediatric Population During Intraoperative Care.

    PubMed

    Bordes, Brianne M; Walia, Hina; Sebastian, Roby; Martin, David; Tumin, Dmitry; Tobias, Joseph D

    2017-12-01

    Lactic acid and base deficit (BD) values are frequently monitored in the intensive care unit and operating room setting to evaluate oxygenation, ventilation, cardiac output, and peripheral perfusion. Although generally obtained from an arterial cannula, such access may not always be available. The current study prospectively investigates the correlation of arterial and peripheral venous values of BD and lactic acid. The study cohort included 48 patients. Arterial BD values ranged from -8 to 4 mEq/L and peripheral venous BD values ranged from -8 to 4 mEq/L. Arterial lactic acid values ranged from 0.36 to 2.45 μmol/L and peripheral venous lactic acid values ranged from 0.38 to 4 μmol/L. The arterial BD (-0.4 ± 2.2 mEq/L) was not significantly different from the peripheral venous BD (-0.6 ± 2.2 mEq/L). The arterial lactic acid (1.0 ± 0.5 μmol/L) was not significantly different from the peripheral venous lactic acid (1.1 ± 0.6 μmol/L). Pearson correlation coefficients demonstrated a very high correlation between arterial and peripheral venous BD (r = .88, P < .001) and between arterial and peripheral venous lactic acid (r = .67, P < .001). Bland-Altman plots of both pairs of measures showed that the majority of observations fell within the 95% limits of agreement. Least-squares regression indicated that a 1-unit increase in arterial BD corresponded to a 0.9-unit increase in peripheral venous BD (95% confidence interval [CI]: 0.7-1.0; P < .001) and a 1-unit increase in arterial lactic acid corresponded to a 0.9-unit increase in peripheral venous lactic acid (95% CI: 0.6-1.2; P < .001). These data demonstrate that there is a clinically useful correlation between arterial and peripheral venous lactic acid and BD values.
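
    The agreement statistics reported here (Pearson correlation, Bland-Altman limits of agreement) are standard and simple to reproduce. A minimal sketch with illustrative paired values, not the study's data:

      import numpy as np

      def agreement(arterial, venous):
          a, v = np.asarray(arterial, float), np.asarray(venous, float)
          diff = a - v
          bias, sd = diff.mean(), diff.std(ddof=1)
          loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
          r = np.corrcoef(a, v)[0, 1]                  # Pearson correlation
          return bias, loa, r

      # Illustrative paired lactate values only (hypothetical, not study data)
      art = np.array([0.6, 0.9, 1.1, 1.4, 1.8, 2.2])
      ven = np.array([0.7, 1.0, 1.2, 1.4, 1.9, 2.5])
      bias, loa, r = agreement(art, ven)
      print(f"bias = {bias:.2f}, LoA = [{loa[0]:.2f}, {loa[1]:.2f}], r = {r:.2f}")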

  15. Weak value amplification considered harmful

    NASA Astrophysics Data System (ADS)

    Ferrie, Christopher; Combes, Joshua

    2014-03-01

    We show using statistically rigorous arguments that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of parameter estimation and signal detection. We show that using all data and considering the joint distribution of all measurement outcomes yields the optimal estimator. Moreover, we show that estimation using the maximum likelihood technique with weak values as small as possible produces better performance for quantum metrology. In doing so, we identify the optimal experimental arrangement to be the one which reveals the maximal eigenvalue of the square of system observables. We also show these conclusions do not change in the presence of technical noise.

  16. Combinatorial optimization in foundry practice

    NASA Astrophysics Data System (ADS)

    Antamoshkin, A. N.; Masich, I. S.

    2016-04-01

    The multicriteria mathematical model of foundry production capacity planning is suggested in the paper. The model is formulated in terms of pseudo-Boolean optimization theory. Different search optimization methods were used to solve the obtained problem.

  17. The Quantum Approximation Optimization Algorithm for MaxCut: A Fermionic View

    NASA Technical Reports Server (NTRS)

    Wang, Zhihui; Hadfield, Stuart; Jiang, Zhang; Rieffel, Eleanor G.

    2017-01-01

    Farhi et al. recently proposed a class of quantum algorithms, the Quantum Approximate Optimization Algorithm (QAOA), for approximately solving combinatorial optimization problems. A level-p QAOA circuit consists of p steps, in each of which a classical Hamiltonian, derived from the cost function, is applied, followed by a mixing Hamiltonian. The 2p times for which these two Hamiltonians are applied are the parameters of the algorithm. As p increases, however, the parameter search space grows quickly. The success of the QAOA approach will depend, in part, on finding effective parameter-setting strategies. Here, we analytically and numerically study parameter setting for QAOA applied to MAXCUT. For level-1 QAOA, we derive an analytical expression for a general graph. In principle, expressions for higher p could be derived, but the number of terms quickly becomes prohibitive. For a special case of MAXCUT, the Ring of Disagrees, or the 1D antiferromagnetic ring, we provide an analysis for arbitrarily high level. Using a fermionic representation, the evolution of the system under QAOA translates into quantum optimal control of an ensemble of independent spins. This treatment enables us to obtain analytical expressions for the performance of QAOA for any p. It also greatly simplifies the numerical search for the optimal values of the parameters. By exploring symmetries, we identify a lower-dimensional sub-manifold of interest; the search effort can be accordingly reduced. This analysis also explains an observed symmetry in the optimal parameter values. Further, we numerically investigate the parameter landscape and show that it is a simple one in the sense of having no local optima.
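
    Level-1 QAOA for MaxCut on a small ring can be explored with a plain statevector simulation. A minimal sketch; the ring size and grid resolution are arbitrary choices, and this brute-force parameter search is far less efficient than the paper's analytical treatment:

      import numpy as np
      from itertools import product

      n = 6                                           # ring of 6 vertices (assumed)
      edges = [(i, (i + 1) % n) for i in range(n)]

      # Diagonal of the MaxCut cost operator: C(z) = number of cut edges
      bits = np.array(list(product([0, 1], repeat=n)))
      cost = np.array([sum(b[i] != b[j] for i, j in edges) for b in bits])

      def apply_mixer(state, beta):
          # Apply exp(-i*beta*X) to every qubit of the statevector.
          rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                         [-1j * np.sin(beta), np.cos(beta)]])
          psi = state.reshape([2] * n)
          for q in range(n):
              psi = np.moveaxis(np.tensordot(rx, np.moveaxis(psi, q, 0), axes=1), 0, q)
          return psi.reshape(-1)

      def qaoa_expectation(gamma, beta):
          psi = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)  # |+>^n
          psi = np.exp(-1j * gamma * cost) * psi                     # cost phase
          psi = apply_mixer(psi, beta)                               # mixer
          return float(np.real(np.vdot(psi, cost * psi)))

      # Coarse grid search for the level-1 optimum
      grid = [(qaoa_expectation(g, b), g, b)
              for g in np.linspace(0, np.pi, 61)
              for b in np.linspace(0, np.pi / 2, 31)]
      best = max(grid)
      print(f"best <C> = {best[0]:.3f} at gamma = {best[1]:.3f}, beta = {best[2]:.3f}")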

  18. Using the Climbing Drum Peel (CDP) Test to Obtain a G(sub IC) value for Core/Facesheet Bonds

    NASA Technical Reports Server (NTRS)

    Nettles, A. T.; Gregory, Elizabeth D.; Jackson, Justin R.

    2006-01-01

    A method of measuring the Mode I fracture toughness of core/facesheet bonds in sandwich structures is desired, particularly with the widespread use of models that need this data as input. This study examined whether a critical strain energy release rate, G(sub IC), can be obtained from the climbing drum peel (CDP) test. The CDP test is relatively simple to perform and does not rely on measuring small crack lengths as required by the double cantilever beam (DCB) test. Simple energy methods were used to calculate G(sub IC) from CDP test data on composite facesheets bonded to a honeycomb core. Facesheet thicknesses from 2 to 5 plies were tested to examine the upper and lower bounds on facesheet thickness requirements. Results from the study suggest that the CDP test, with certain provisions, can be used to find the G(sub IC) value of a core/facesheet bond.

  19. Optimal preconditioning of lattice Boltzmann methods

    NASA Astrophysics Data System (ADS)

    Izquierdo, Salvador; Fueyo, Norberto

    2009-09-01

    A preconditioning technique to accelerate the simulation of steady-state problems using the single-relaxation-time (SRT) lattice Boltzmann (LB) method was first proposed by Guo et al. [Z. Guo, T. Zhao, Y. Shi, Preconditioned lattice-Boltzmann method for steady flows, Phys. Rev. E 70 (2004) 066706-1]. The key idea in this preconditioner is to modify the equilibrium distribution function in such a way that, by means of a Chapman-Enskog expansion, a time-derivative preconditioner of the Navier-Stokes (NS) equations is obtained. In the present contribution, the optimal values for the free parameter γ of this preconditioner are searched both numerically and theoretically; the latter with the aid of linear-stability analysis and with the condition number of the system of NS equations. The influence of the collision operator, single- versus multiple-relaxation-times (MRT), is also studied. Three steady-state laminar test cases are used for validation, namely: the two-dimensional lid-driven cavity, a two-dimensional microchannel and the three-dimensional backward-facing step. Finally, guidelines are suggested for an a priori definition of optimal preconditioning parameters as a function of the Reynolds and Mach numbers. The new optimally preconditioned MRT method derived is shown to improve, simultaneously, the rate of convergence, the stability and the accuracy of the lattice Boltzmann simulations, when compared to the non-preconditioned methods and to the optimally preconditioned SRT one. Additionally, direct time-derivative preconditioning of the LB equation is also studied.

  20. Towards Robust Designs Via Multiple-Objective Optimization Methods

    NASA Technical Reports Server (NTRS)

    Man Mohan, Rai

    2006-01-01

    Fabricating and operating complex systems involves dealing with uncertainty in the relevant variables. In the case of aircraft, flow conditions are subject to change during operation. Efficiency and engine noise may be different from the expected values because of manufacturing tolerances and normal wear and tear. Engine components may have a shorter life than expected because of manufacturing tolerances. In spite of the important effect of operating- and manufacturing-uncertainty on the performance and expected life of the component or system, traditional aerodynamic shape optimization has focused on obtaining the best design given a set of deterministic flow conditions. Clearly it is important both to maintain near-optimal performance levels at off-design operating conditions and to ensure that performance does not degrade appreciably when the component shape differs from the optimal shape due to manufacturing tolerances and normal wear and tear. These requirements naturally lead to the idea of robust optimal design wherein the concept of robustness to various perturbations is built into the design optimization procedure. The basic ideas involved in robust optimal design will be included in this lecture. The imposition of the additional requirement of robustness results in a multiple-objective optimization problem requiring appropriate solution procedures. Typically the costs associated with multiple-objective optimization are substantial. Therefore efficient multiple-objective optimization procedures are crucial to the rapid deployment of the principles of robust design in industry. Hence the companion set of lecture notes (Single- and Multiple-Objective Optimization with Differential Evolution and Neural Networks) deals with methodology for solving multiple-objective optimization problems efficiently, reliably and with little user intervention. Applications of the methodologies presented in the companion lecture to robust design will be included here. The

  1. Adaptive Constrained Optimal Control Design for Data-Based Nonlinear Discrete-Time Systems With Critic-Only Structure.

    PubMed

    Luo, Biao; Liu, Derong; Wu, Huai-Ning

    2018-06-01

    Reinforcement learning has proved to be a powerful tool for solving optimal control problems over the past few years. However, the data-based constrained optimal control problem of nonaffine nonlinear discrete-time systems has rarely been studied. To solve this problem, an adaptive optimal control approach is developed by using value iteration-based Q-learning (VIQL) with a critic-only structure. Most of the existing constrained control methods require the use of a certain performance index and are suitable only for linear or affine nonlinear systems, which limits their use in practice. To overcome this problem, the system transformation is first introduced with the general performance index. Then, the constrained optimal control problem is converted to an unconstrained optimal control problem. By introducing the action-state value function, i.e., the Q-function, the VIQL algorithm is proposed to learn the optimal Q-function of the data-based unconstrained optimal control problem. The convergence results of the VIQL algorithm are established with an easy-to-realize initial condition. To implement the VIQL algorithm, the critic-only structure is developed, where only one neural network is required to approximate the Q-function. The converged Q-function obtained from the critic-only VIQL method is employed to design the adaptive constrained optimal controller based on the gradient descent scheme. Finally, the effectiveness of the developed adaptive control method is tested on three examples with computer simulation.
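
    The value-iteration flavor of Q-learning repeatedly applies the Bellman optimality backup to a fixed batch of transitions. A tabular Python sketch of that idea; the paper's method uses a neural-network critic and a system transformation for the constraints, neither of which is shown here:

      import numpy as np

      def viql(data, n_states, n_actions, gamma=0.95, sweeps=500):
          # data: list of (s, a, r, s_next) transitions collected from the system.
          Q = np.zeros((n_states, n_actions))   # easy-to-realize initial condition
          for _ in range(sweeps):
              Q_new = Q.copy()
              for s, a, r, s_next in data:
                  Q_new[s, a] = r + gamma * Q[s_next].max()   # Bellman backup
              Q = Q_new
          return Q                              # greedy policy: argmax over actions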

  2. Demonstration of Automatically-Generated Adjoint Code for Use in Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Green, Lawrence; Carle, Alan; Fagan, Mike

    1999-01-01

    Gradient-based optimization requires accurate derivatives of the objective function and constraints. These gradients may have previously been obtained by manual differentiation of analysis codes, symbolic manipulators, finite-difference approximations, or existing automatic differentiation (AD) tools such as ADIFOR (Automatic Differentiation in FORTRAN). Each of these methods has certain deficiencies, particularly when applied to complex, coupled analyses with many design variables. Recently, a new AD tool called ADJIFOR (Automatic Adjoint Generation in FORTRAN), based upon ADIFOR, was developed and demonstrated. Whereas ADIFOR implements forward-mode (direct) differentiation throughout an analysis program to obtain exact derivatives via the chain rule of calculus, ADJIFOR implements the reverse-mode counterpart of the chain rule to obtain exact adjoint form derivatives from FORTRAN code. Automatically-generated adjoint versions of the widely-used CFL3D computational fluid dynamics (CFD) code and an algebraic wing grid generation code were obtained with just a few hours processing time using the ADJIFOR tool. The codes were verified for accuracy and were shown to compute the exact gradient of the wing lift-to-drag ratio, with respect to any number of shape parameters, in about the time required for 7 to 20 function evaluations. The codes have now been executed on various computers with typical memory and disk space for problems with up to 129 x 65 x 33 grid points, and for hundreds to thousands of independent variables. These adjoint codes are now used in a gradient-based aerodynamic shape optimization problem for a swept, tapered wing. For each design iteration, the optimization package constructs an approximate, linear optimization problem, based upon the current objective function, constraints, and gradient values. The optimizer subroutines are called within a design loop employing the approximate linear problem until an optimum shape is found, the design loop

  3. Optimal trajectories of aircraft and spacecraft

    NASA Technical Reports Server (NTRS)

    Miele, A.

    1990-01-01

    Work done on algorithms for the numerical solutions of optimal control problems and their application to the computation of optimal flight trajectories of aircraft and spacecraft is summarized. General considerations on calculus of variations, optimal control, numerical algorithms, and applications of these algorithms to real-world problems are presented. The sequential gradient-restoration algorithm (SGRA) is examined for the numerical solution of optimal control problems of the Bolza type. Both the primal formulation and the dual formulation are discussed. Aircraft trajectories, in particular, the application of the dual sequential gradient-restoration algorithm (DSGRA) to the determination of optimal flight trajectories in the presence of windshear are described. Both take-off trajectories and abort landing trajectories are discussed. Take-off trajectories are optimized by minimizing the peak deviation of the absolute path inclination from a reference value. Abort landing trajectories are optimized by minimizing the peak drop of altitude from a reference value. The survival capability of an aircraft in a severe windshear is discussed, and the optimal trajectories are found to be superior to both constant pitch trajectories and maximum angle of attack trajectories. Spacecraft trajectories, in particular, the application of the primal sequential gradient-restoration algorithm (PSGRA) to the determination of optimal flight trajectories for aeroassisted orbital transfer are examined. Both the coplanar case and the noncoplanar case are discussed within the frame of three problems: minimization of the total characteristic velocity; minimization of the time integral of the square of the path inclination; and minimization of the peak heating rate. The solution of the second problem is called nearly-grazing solution, and its merits are pointed out as a useful

  4. Actinobacillus succinogenes ATCC 55618 Fermentation Medium Optimization for the Production of Succinic Acid by Response Surface Methodology

    PubMed Central

    Zhu, Li-Wen; Wang, Cheng-Cheng; Liu, Rui-Sang; Li, Hong-Mei; Wan, Duan-Ji; Tang, Ya-Jie

    2012-01-01

    As a potential intermediary feedstock, succinic acid takes an important place in bulk chemical production. For the first time, a method combining Plackett-Burman design (PBD), the steepest ascent method (SA), and Box-Behnken design (BBD) was developed to optimize the Actinobacillus succinogenes ATCC 55618 fermentation medium. First, glucose, yeast extract, and MgCO3 were identified as key medium components by PBD. Second, preliminary optimization was run by the SA method to access the optimal region of the key medium components. Finally, the responses, that is, the production of succinic acid, were optimized simultaneously by using BBD, and the optimal concentrations were located at 84.6 g L−1 of glucose, 14.5 g L−1 of yeast extract, and 64.7 g L−1 of MgCO3. A verification experiment indicated that a maximal succinic acid production of 52.7 ± 0.8 g L−1 was obtained under the identified optimal conditions. The result agreed well with the predicted value. Compared with the basic medium, the production of succinic acid and the yield of succinic acid against glucose were enhanced by 67.3% and 111.1%, respectively. The results obtained in this study may be useful for the industrial commercial production of succinic acid. PMID:23093852

  5. The Tool for Designing Engineering Systems Using a New Optimization Method Based on a Stochastic Process

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio

    The conventional optimization methods were based on a deterministic approach, since their purpose is to find an exact solution. However, these methods carry initial-condition dependence and the risk of falling into local solutions. In this paper, we propose a new optimization method based on the concept of the path integral method used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) over a stochastic process. The advantages of this method are that it is not affected by initial conditions and that it does not require experience-based tuning techniques. We applied the new optimization method to the design of a hang glider. In this problem, not only the hang glider design but also its flight trajectory were optimized. The numerical calculation results showed that the method has sufficient performance.

  6. Time-oriented experimental design method to optimize hydrophilic matrix formulations with gelation kinetics and drug release profiles.

    PubMed

    Shin, Sangmun; Choi, Du Hyung; Truong, Nguyen Khoa Viet; Kim, Nam Ah; Chu, Kyung Rok; Jeong, Seong Hoon

    2011-04-04

    A new experimental design methodology was developed by integrating the response surface methodology and the time series modeling. The major purposes were to identify significant factors in determining swelling and release rate from matrix tablets and their relative factor levels for optimizing the experimental responses. Properties of tablet swelling and drug release were assessed with ten factors and two default factors, a hydrophilic model drug (terazosin) and magnesium stearate, and compared with target values. The selected input control factors were arranged in a mixture simplex lattice design with 21 experimental runs. The obtained optimal settings for gelation were PEO, LH-11, Syloid, and Pharmacoat with weight ratios of 215.33 (88.50%), 5.68 (2.33%), 19.27 (7.92%), and 3.04 (1.25%), respectively. The optimal settings for drug release were PEO and citric acid with weight ratios of 191.99 (78.91%) and 51.32 (21.09%), respectively. Based on the results of matrix swelling and drug release, the optimal solutions, target values, and validation experiment results over time were similar and showed consistent patterns with very small biases. The experimental design methodology could be a very promising experimental design method to obtain maximum information with limited time and resources. It could also be very useful in formulation studies by providing a systematic and reliable screening method to characterize significant factors in the sustained release matrix tablet. Copyright © 2011 Elsevier B.V. All rights reserved.

  7. Finite horizon optimum control with and without a scrap value

    NASA Astrophysics Data System (ADS)

    Neck, R.; Blueschke-Nikolaeva, V.; Blueschke, D.

    2017-06-01

    In this paper, we study the effects of scrap values on the solutions of optimal control problems with finite time horizon. We show how to include a scrap value, either for the state variables or for the state and the control variables, in the OPTCON2 algorithm for the optimal control of dynamic economic systems. We ask whether the introduction of a scrap value can serve as a substitute for an infinite horizon in economic policy optimization problems where the latter option is not available. Using a simple numerical macroeconomic model, we demonstrate that the introduction of a scrap value cannot induce control policies which can be expected for problems with an infinite time horizon.

  8. Technique of optimization of minimum temperature driving forces in the heaters of regeneration system of a steam turbine unit

    NASA Astrophysics Data System (ADS)

    Shamarokov, A. S.; Zorin, V. M.; Dai, Fam Kuang

    2016-03-01

    At the current stage of development of nuclear power engineering, high demands are made on nuclear power plants (NPPs), including on their economic performance. Under these conditions, improving the quality of an NPP means, in particular, reasonably choosing the values of the numerous controlled parameters of its technological (heat) scheme. Furthermore, the chosen values should correspond to the economic conditions of NPP operation, which usually lie a considerable time interval beyond the moment when the parameters are chosen. The article presents a technique for optimizing the controlled parameters of the heat circuit of a steam turbine plant for the future. Its particularity is that the results are obtained as a function of a complex parameter combining the external economic and operating parameters, which remains relatively stable under a changing economic environment. The article presents the results of optimizing, according to this technique, the minimum temperature driving forces in the surface heaters of the heat regeneration system of the steam turbine plant of a K-1200-6.8/50 type. For the optimization, the collector-screen heaters of high and low pressure developed at the OAO All-Russia Research and Design Institute of Nuclear Power Machine Building, which, in the authors' opinion, have certain advantages over other types of heaters, were chosen. The optimality criterion in the task was the change in annual reduced costs for the NPP compared to the version accepted as the baseline one. The influence of independent variables not included in the complex parameter on the solution was analyzed. The optimization task was solved using the alternating-variable descent method. The obtained values of minimum temperature driving forces can guide the design of new nuclear plants with a heat circuit similar to that considered in the task.

  9. Optimization of Thermal Preprocessing for Efficient Combustion of Woody Biomass

    NASA Astrophysics Data System (ADS)

    Kumagai, Seiji; Aranai, Masahiko; Takeda, Koichi; Enda, Yukio

    We attempted to optimize both drying time and temperature for stem chips and bark of Japanese cedar in order to obtain the largest release of combustion heat. Moisture release rates of the stem and bark during air-drying in an oven were evaluated. Higher and lower heating values of stem and bark, dried at different temperatures for different lengths of time, were also evaluated. The drying conditions of 180°C and 30 min resulted in the largest heat release of the stem (≈4% increase compared to conditions of 105°C and 30 min). The optimal drying conditions were not obvious for bark. However, for the drying process in actual plants, the conditions of 180°C and 30 min were suggested to be acceptable for both stem and bark.

  10. A risk-based multi-objective model for optimal placement of sensors in water distribution system

    NASA Astrophysics Data System (ADS)

    Naserizade, Sareh S.; Nikoo, Mohammad Reza; Montaseri, Hossein

    2018-02-01

    In this study, a new stochastic model based on Conditional Value at Risk (CVaR) and multi-objective optimization methods is developed for optimal placement of sensors in a water distribution system (WDS). This model minimizes the risk caused by simultaneous multi-point contamination injection in the WDS using the CVaR approach. The CVaR considers the uncertainties of contamination injection in the form of a probability distribution function and captures low-probability extreme events. In this approach, extreme losses occur at the tail of the loss distribution function. A four-objective optimization model based on the NSGA-II algorithm is developed to minimize the losses of contamination injection (through the CVaR of affected population and detection time) and also to minimize the two other main criteria of optimal sensor placement: the probability of undetected events and cost. Finally, to determine the best solution, the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), as a subgroup of the Multi-Criteria Decision Making (MCDM) approach, is utilized to rank the alternatives on the trade-off curve among the objective functions. Also, sensitivity analysis is done to investigate the importance of each criterion in the PROMETHEE results considering three relative weighting scenarios. The effectiveness of the proposed methodology is examined through applying it to the Lamerd WDS in the southwestern part of Iran. PROMETHEE suggests 6 sensors with a suitable distribution that approximately covers all regions of the WDS. Optimal values related to the CVaR of affected population and detection time as well as the probability of undetected events for the best optimal solution are equal to 17,055 persons, 31 min and 0.045%, respectively. The obtained results of the proposed methodology in the Lamerd WDS show the applicability of the CVaR-based multi-objective simulation-optimization model for incorporating the main uncertainties of contamination injection in order to evaluate extreme value
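
    CVaR at level α is the expected loss within the worst (1 − α) tail of the loss distribution. A minimal empirical sketch with synthetic, illustrative samples (not values from the Lamerd case study):

      import numpy as np

      def cvar(losses, alpha=0.95):
          # Mean of the losses at or beyond the Value at Risk (VaR) at level alpha.
          losses = np.asarray(losses, dtype=float)
          var = np.quantile(losses, alpha)
          return losses[losses >= var].mean()

      # Synthetic affected-population samples from hypothetical injection events
      rng = np.random.default_rng(1)
      affected = rng.lognormal(mean=8.0, sigma=1.0, size=10_000)
      print(f"VaR95 = {np.quantile(affected, 0.95):,.0f}, CVaR95 = {cvar(affected):,.0f}")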

  11. 47 CFR 54.615 - Obtaining services.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... provided under § 54.621, that the requester cannot obtain toll-free access to an Internet service provider... thing of value; (6) If the service or services are being purchased as part of an aggregated purchase...

  12. A new threshold of apparent diffusion coefficient values in white matter after successful tissue plasminogen activator treatment for acute brain ischemia.

    PubMed

    Sato, Atsushi; Shimizu, Yusaku; Koyama, Junichi; Hongo, Kazuhiro

    2017-06-01

    Tissue plasminogen activator (tPA) is effective for the treatment of acute brain ischemia, but may trigger fatal brain edema or hemorrhage if the brain ischemia results in a large infarct. Herein, we attempted to predict the extent of infarcts by determining the optimal threshold of ADC values on DWI that predictively distinguishes between infarct and reversible areas, and by reconstructing color-coded images based on this threshold. The study subjects consisted of 36 patients with acute brain ischemia in whom MRA had confirmed reopening of the occluded arteries in a short time (mean: 99 min) after tPA treatment. We measured the apparent diffusion coefficient (ADC) values in several small regions of interest over the white matter within high-intensity areas on the initial diffusion-weighted image (DWI); then, by comparing the findings to the follow-up images, we obtained the optimal threshold of ADC values using receiver-operating characteristic analysis. The threshold obtained (583×10⁻⁶ mm²/s) was lower than those previously reported; this threshold could distinguish between infarct and reversible areas with considerable accuracy (sensitivity: 0.87, specificity: 0.94). The threshold obtained and the reconstructed images were predictive of the final radiological result of tPA treatment, and this threshold may be helpful in determining the appropriate management of patients with acute brain ischemia. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
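
    The ROC-derived cutoff can be reproduced with a simple threshold search. A sketch assuming the Youden index (sensitivity + specificity − 1) as the accuracy criterion, which the abstract implies but does not state, and hypothetical voxel ADC samples:

      import numpy as np

      def best_adc_threshold(adc_infarct, adc_reversible):
          # Lower ADC is taken to indicate infarction, so a region is called
          # 'infarct' when its ADC falls below the candidate threshold.
          adc_infarct = np.asarray(adc_infarct, dtype=float)
          adc_reversible = np.asarray(adc_reversible, dtype=float)
          thresholds = np.sort(np.concatenate([adc_infarct, adc_reversible]))
          best_j, best_t = -1.0, None
          for t in thresholds:
              sens = (adc_infarct < t).mean()       # infarcts correctly called
              spec = (adc_reversible >= t).mean()   # reversible correctly called
              j = sens + spec - 1.0                 # Youden index
              if j > best_j:
                  best_j, best_t = j, t
          return best_t, best_j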

  13. Optimal aeroassisted coplanar orbital transfer using an energy model

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Taylor, Deborah B.

    1989-01-01

    The atmospheric portion of the trajectories for aeroassisted coplanar orbit transfer was investigated. The equations of motion for the problem are expressed using a reduced-order model with the total vehicle energy, kinetic plus potential, as the independent variable rather than time. The order reduction is achieved analytically without an approximation of the vehicle dynamics. In this model, the problem of coplanar orbit transfer is seen as one in which a given amount of energy must be transferred from the vehicle to the atmosphere during the trajectory without overheating the vehicle. An optimal control problem is posed where a linear combination of the integrated square of the heating rate and the vehicle drag is the cost function to be minimized. The necessary conditions for optimality are obtained. These result in a 4th-order two-point boundary-value problem. A parametric study of the optimal guidance trajectory is made in which the proportion of the heating-rate term versus the drag term is varied. Simulations of the guidance trajectories are presented.

  14. Robust optimization based upon statistical theory.

    PubMed

    Sobotta, B; Söhn, M; Alber, M

    2010-08-01

    Organ movement is still the biggest challenge in cancer treatment despite advances in online imaging. Due to the resulting geometric uncertainties, the delivered dose cannot be predicted precisely at treatment planning time. Consequently, all associated dose metrics (e.g., EUD and maxDose) are random variables with a patient-specific probability distribution. The method that the authors propose makes these distributions the basis of the optimization and evaluation process. The authors start from a model of motion derived from patient-specific imaging. On a multitude of geometry instances sampled from this model, a dose metric is evaluated. The resulting pdf of this dose metric is termed outcome distribution. The approach optimizes the shape of the outcome distribution based on its mean and variance. This is in contrast to the conventional optimization of a nominal value (e.g., PTV EUD) computed on a single geometry instance. The mean and variance allow for an estimate of the expected treatment outcome along with the residual uncertainty. Besides being applicable to the target, the proposed method also seamlessly includes the organs at risk (OARs). The likelihood that a given value of a metric is reached in the treatment is predicted quantitatively. This information reveals potential hazards that may occur during the course of the treatment, thus helping the expert to find the right balance between the risk of insufficient normal tissue sparing and the risk of insufficient tumor control. By feeding this information to the optimizer, outcome distributions can be obtained where the probability of exceeding a given OAR maximum and that of falling short of a given target goal can be minimized simultaneously. The method is applicable to any source of residual motion uncertainty in treatment delivery. Any model that quantifies organ movement and deformation in terms of probability distributions can be used as basis for the algorithm. Thus, it can generate dose

  15. Utility of coupling nonlinear optimization methods with numerical modeling software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, M.J.

    1996-08-05

    Results of using GLO (Global Local Optimizer), a general-purpose nonlinear optimization software package for investigating multi-parameter problems in science and engineering, are discussed. The package consists of the modular optimization control system (GLO), a graphical user interface (GLO-GUI), a pre-processor (GLO-PUT), a post-processor (GLO-GET), and the nonlinear optimization software modules GLOBAL & LOCAL. GLO is designed for controlling, and easy coupling to, any scientific software application. GLO runs the optimization module and the scientific software application in an iterative loop. At each iteration, the optimization module defines new values for the set of parameters being optimized. GLO-PUT inserts the new parameter values into the input file of the scientific application. GLO runs the application with the new parameter values. GLO-GET determines the value of the objective function by extracting the results of the analysis and comparing them to the desired result. GLO continues to run the scientific application until it finds the "best" set of parameters by minimizing (or maximizing) the objective function. An example problem showing the optimization of a material model is presented (Taylor cylinder impact test).
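
    The iterate loop described (the optimizer proposes parameters, GLO-PUT writes the input file, the application runs, GLO-GET extracts and scores the result) can be re-created generically. In the sketch below the application step is faked so the loop runs end-to-end; the file names and the pretend physics are placeholders, not actual GLO conventions.

```python
# Generic re-creation of a GLO-style optimize/run/extract loop around a
# black-box application. The "application" is stubbed with fake physics.
import json
import numpy as np
from scipy.optimize import minimize

TARGET = 42.0  # desired analysis result

def run_application(params):
    # GLO-PUT step: insert the new parameter values into the input file.
    with open("app_input.json", "w") as f:
        json.dump({"p1": float(params[0]), "p2": float(params[1])}, f)
    # Stand-in for launching the real simulation executable.
    with open("app_input.json") as f:
        p = json.load(f)
    result = p["p1"] ** 2 + 10.0 * np.sin(p["p2"])  # pretend physics
    with open("app_output.json", "w") as f:
        json.dump({"result": result}, f)
    # GLO-GET step: extract the analysis result from the output file.
    with open("app_output.json") as f:
        return json.load(f)["result"]

def objective(params):
    # Compare the extracted result to the desired result.
    return (run_application(params) - TARGET) ** 2

best = minimize(objective, x0=np.array([1.0, 1.0]), method="Nelder-Mead")
print("best parameters:", best.x)
```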

  16. Optimization of the Conical Angle Design in Conical Implant-Abutment Connections: A Pilot Study Based on the Finite Element Method.

    PubMed

    Yao, Kuang-Ta; Chen, Chen-Sheng; Cheng, Cheng-Kung; Fang, Hsu-Wei; Huang, Chang-Hung; Kao, Hung-Chan; Hsu, Ming-Lun

    2018-02-01

    Conical implant-abutment connections are popular for their excellent connection stability, which is attributable to frictional resistance in the connection. However, conical angles, the inherent design parameter of conical connections, exert opposing effects on two factors influencing the connection stability: frictional resistance and abutment rigidity. This pilot study employed an optimization approach through the finite element method to obtain the optimal conical angle for the highest connection stability in an Ankylos-based conical connection system. A nonlinear 3-dimensional finite element parametric model was developed according to the geometry of the Ankylos system (conical half angle = 5.7°) using the ANSYS 11.0 software. Optimization algorithms were applied to obtain the optimal conical half angle and achieve the minimal value of the maximum von Mises stress in the abutment, which represents the highest connection stability. The optimal conical half angle obtained was 10.1°. Compared with the original design (5.7°), the optimal design demonstrated an increased rigidity of the abutment (36.4%) and implant (25.5%), a decreased microgap at the implant-abutment interface (62.3%), a decreased contact pressure (37.9%) with a more uniform stress distribution in the connection, and a decreased stress in the cortical bone (4.5%). In conclusion, the methodology of design optimization to determine the optimal conical angle of the Ankylos-based system is feasible. Because of the heterogeneity of different systems, more studies should be conducted to define the optimal conical angles of various conical connection designs.

  17. Evaluation of assigned-value uncertainty for complex calibrator value assignment processes: a prealbumin example.

    PubMed

    Middleton, John; Vaks, Jeffrey E

    2007-04-01

    Errors of calibrator-assigned values lead to errors in the testing of patient samples. The ability to estimate the uncertainties of calibrator-assigned values and other variables minimizes errors in testing processes. International Organization for Standardization guidelines provide simple equations for the estimation of calibrator uncertainty with simple value-assignment processes, but other methods are needed to estimate uncertainty in complex processes. We estimated the assigned-value uncertainty with a Monte Carlo computer simulation of a complex value-assignment process, based on a formalized description of the process, with measurement parameters estimated experimentally. This method was applied to study the uncertainty of a multilevel calibrator value assignment for a prealbumin immunoassay. The simulation results showed that the component of the uncertainty added by the process of value transfer from the reference material CRM470 to the calibrator is smaller than that of the reference material itself (<0.8% vs 3.7%). Varying the process parameters in the simulation model allowed for optimizing the process while keeping the added uncertainty small. The patient result uncertainty caused by the calibrator uncertainty was also found to be small. This method of estimating uncertainty is a powerful tool that allows for the estimation of calibrator uncertainty for the optimization of various value-assignment processes, with a reduced number of measurements and lower reagent costs, while satisfying the requirements on uncertainty. The new method expands and augments existing methods to allow estimation of uncertainty in complex processes.
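
    A Monte Carlo sketch of the idea: propagate uncertainty through a chain of value-transfer measurements and separate the transfer-added component from the reference material's own uncertainty. The two-step chain and the relative uncertainties below are illustrative assumptions that only loosely echo the abstract's figures.

```python
# Monte Carlo propagation through a hypothetical two-step value-transfer
# chain (reference material -> master calibrator -> product calibrator).
import numpy as np

rng = np.random.default_rng(2)
N = 100_000
ref_value = 1.0                                   # assigned CRM value (arbitrary units)

ref = rng.normal(ref_value, 0.037 * ref_value, N)  # ~3.7% reference uncertainty
transfer1 = ref * rng.normal(1.0, 0.005, N)        # transfer measurement step 1
transfer2 = transfer1 * rng.normal(1.0, 0.005, N)  # transfer measurement step 2

# Uncertainty added by the transfer process, beyond the reference itself.
added = np.sqrt(np.var(transfer2) - np.var(ref))
print(f"total CV {transfer2.std() / transfer2.mean():.3%}, "
      f"added by transfer {added / ref_value:.3%}")
```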

  18. Optimization of Medium Composition for the Production of Neomycin by Streptomyces fradiae NCIM 2418 in Solid State Fermentation

    PubMed Central

    Vastrad, B. M.; Neelagund, S. E.

    2014-01-01

    Neomycin production by Streptomyces fradiae NCIM 2418 was optimized using response surface methodology (RSM), a powerful mathematical approach comprehensively applied in the optimization of solid-state fermentation processes. In the first step of optimization, with the Plackett-Burman design, ammonium chloride, sodium nitrate, L-histidine, and ammonium nitrate were established to be the crucial nutritional factors significantly affecting neomycin production. In the second step, a 2⁴ full factorial central composite design and RSM were applied to determine the optimal concentrations of the significant variables. A second-order polynomial was determined by multiple regression analysis of the experimental data. The optimum values of the important nutrients for maximum neomycin production were obtained as follows: ammonium chloride 2.00%, sodium nitrate 1.50%, L-histidine 0.250%, and ammonium nitrate 0.250%, with a predicted maximum neomycin production of 20,000 g kg−1 dry coconut oil cake. Under the optimal conditions, the actual neomycin production was 19,642 g kg−1 dry coconut oil cake. The determination coefficient (R²) was 0.9232, which ensures an acceptable admissibility of the model. PMID:25009746
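
    The core RSM computation (fitting a second-order polynomial to designed-experiment responses and solving for the stationary point) can be sketched as below. Two coded factors and synthetic yields are used for brevity, whereas the study used four nutrient factors; the same step applies to the other RSM records in this list.

```python
# Sketch of the RSM step: fit a second-order polynomial and locate its
# stationary point. Factors are in coded units; responses are synthetic.
import numpy as np

X1, X2 = np.meshgrid([-1, 0, 1], [-1, 0, 1])
x1, x2 = X1.ravel(), X2.ravel()
y = (20.0 - 3.0 * (x1 - 0.4) ** 2 - 2.0 * (x2 + 0.2) ** 2
     + np.random.default_rng(3).normal(0, 0.1, x1.size))

# Design matrix for y = b0 + b1 x1 + b2 x2 + b11 x1^2 + b22 x2^2 + b12 x1 x2
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
b, *_ = np.linalg.lstsq(A, y, rcond=None)

# Stationary point: solve grad = 0, i.e. [[2*b11, b12], [b12, 2*b22]] x = -[b1, b2]
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
x_opt = np.linalg.solve(H, -b[1:3])
print("stationary point (coded units):", np.round(x_opt, 2))
```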

  19. Optimization of diesel oil biodegradation in seawater using statistical experimental methodology.

    PubMed

    Xia, Wenxiang; Li, Jincheng; Xia, Yan; Song, Zhiwen; Zhou, Jihong

    2012-01-01

    Petroleum hydrocarbons released into the environment can be harmful to higher organisms, but they can be utilized by microorganisms as the sole source of energy for metabolism. To investigate the optimal conditions for diesel oil biodegradation, the Plackett-Burman (PB) design was used for the optimization in the first step, and N source (NaNO₃), P source (KH₂PO₄) and pH were found to be significant factors affecting oil degradation. Then response surface methodology (RSM) using a central composite design (CCD) was adopted to augment diesel oil biodegradation, and a fitted quadratic model was obtained. The model F-value of 27.25 and the low probability value (<0.0001) indicate that the model is significant and that the concentrations of NaNO₃ and KH₂PO₄ and the pH had significant effects on oil removal during the study. Three-dimensional response surface plots were constructed by plotting the response (oil degradation efficiency) on the z-axis against any two independent variables, and the optimal biodegradation conditions for diesel oil (original total petroleum hydrocarbons 125 mg/L) were determined as follows: NaNO₃ 0.143 g, KH₂PO₄ 0.022 g and pH 7.4. These results fit quite well with the C:N:P ratio in biological cells. Results from the present study might provide a new method to estimate the optimal nitrogen and phosphorus concentrations in advance for oil biodegradation according to the composition of the petroleum.

  20. Optimization of extraction conditions for osthol, a melanogenesis inhibitor from Cnidium monnieri fruits.

    PubMed

    Beom Kim, Seon; Kim, CheongTaek; Liu, Qing; Hee Jo, Yang; Joo Choi, Hak; Hwang, Bang Yeon; Kyum Kim, Sang; Kyeong Lee, Mi

    2016-08-01

    Coumarin derivatives have been reported to inhibit melanin biosynthesis. The melanogenesis inhibitory activity of osthol, a major coumarin from the fruits of Cnidium monnieri Cusson (Umbelliferae), was investigated, and the extraction conditions giving the maximum yield of osthol from C. monnieri fruits were optimized. B16F10 melanoma cells were treated with osthol at concentrations of 1, 3, and 10 μM for 72 h. The expression of melanogenesis genes, such as tyrosinase, TRP-1, and TRP-2, was also assessed. For optimization, extraction factors such as extraction solvent, extraction time, and sample/solvent ratio were tested and optimized for maximum osthol yield using response surface methodology with a Box-Behnken design (BBD). Osthol inhibited melanin production in B16F10 melanoma cells with an IC50 value of 4.9 μM. The melanogenesis inhibitory activity of osthol was achieved not by direct inhibition of tyrosinase activity but by inhibiting the expression of melanogenic enzymes such as tyrosinase, TRP-1, and TRP-2. The optimal conditions were obtained as a sample/solvent ratio of 1500 mg/10 ml, an extraction time of 30.3 min, and a methanol concentration of 97.7%. The osthol yield under optimal conditions was found to be 15.0 mg/g dried sample, which matched well with the predicted value of 14.9 mg/g dried sample. These results provide useful information about optimized extraction conditions for the development of osthol as a cosmetic therapeutic to reduce skin hyperpigmentation.

  1. Active inference and epistemic value.

    PubMed

    Friston, Karl; Rigoli, Francesco; Ognibene, Dimitri; Mathys, Christoph; Fitzgerald, Thomas; Pezzulo, Giovanni

    2015-01-01

    We offer a formal treatment of choice behavior based on the premise that agents minimize the expected free energy of future outcomes. Crucially, the negative free energy or quality of a policy can be decomposed into extrinsic and epistemic (or intrinsic) value. Minimizing expected free energy is therefore equivalent to maximizing extrinsic value or expected utility (defined in terms of prior preferences or goals), while maximizing information gain or intrinsic value (or reducing uncertainty about the causes of valuable outcomes). The resulting scheme resolves the exploration-exploitation dilemma: Epistemic value is maximized until there is no further information gain, after which exploitation is assured through maximization of extrinsic value. This is formally consistent with the Infomax principle, generalizing formulations of active vision based upon salience (Bayesian surprise) and optimal decisions based on expected utility and risk-sensitive (Kullback-Leibler) control. Furthermore, as with previous active inference formulations of discrete (Markovian) problems, ad hoc softmax parameters become the expected (Bayes-optimal) precision of beliefs about, or confidence in, policies. This article focuses on the basic theory, illustrating the ideas with simulations. A key aspect of these simulations is the similarity between precision updates and dopaminergic discharges observed in conditioning paradigms.
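
    In the active-inference literature, the decomposition referred to here is commonly written as follows. Notation varies by paper; this is a generic textbook form, not necessarily the exact expression used by the authors.

```latex
% Negative expected free energy of a policy \pi splits into extrinsic and
% epistemic value (generic form):
-G(\pi) \;=\;
\underbrace{\mathbb{E}_{Q(o \mid \pi)}\big[\ln P(o)\big]}_{\text{extrinsic value (expected utility)}}
\;+\;
\underbrace{\mathbb{E}_{Q(o \mid \pi)}\Big[ D_{\mathrm{KL}}\big( Q(s \mid o, \pi) \,\big\|\, Q(s \mid \pi) \big) \Big]}_{\text{epistemic value (expected information gain)}}
```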

  2. [Optimal beam quality for chest digital radiography].

    PubMed

    Oda, Nobuhiro; Tabata, Yoshito; Nakano, Tsutomu

    2014-11-01

    To investigate the optimal beam quality for chest computed radiography (CR), we measured the radiographic contrast and evaluated the image quality of chest CR at various X-ray tube voltages. The contrast between lung and rib or heart increased on CR images as the tube voltage was lowered from 140 to 60 kV, although the degree of increase was small. Scattered radiation was reduced on CR images at lower tube voltages. The Wiener spectrum of CR images at a low tube voltage showed a low value under identical conditions of photostimulated luminescence. The quality of chest CR images obtained using lower tube voltages (80 kV and 100 kV) was evaluated as superior to that obtained with higher tube voltages (120 kV and 140 kV). Considering the problems of tube loading and exposure in clinical applications, a tube voltage of 90 to 100 kV (0.1 mm copper filter backed by 0.5 mm aluminum) is recommended for chest CR.

  3. Characterization and Optimization Design of the Polymer-Based Capacitive Micro-Arrayed Ultrasonic Transducer

    NASA Astrophysics Data System (ADS)

    Chiou, De-Yi; Chen, Mu-Yueh; Chang, Ming-Wei; Deng, Hsu-Cheng

    2007-11-01

    This study constructs an electromechanical finite element model of the polymer-based capacitive micro-arrayed ultrasonic transducer (P-CMUT). Electrostatic-structural coupled-field simulations are performed to investigate operational characteristics such as collapse voltage and resonant frequency. The numerical results are found to be in good agreement with experimental observations. The influence of each design parameter on the collapse voltage and resonant frequency is also presented. To resolve conflicts between diverse physical requirements, an integrated design method is developed to optimize the geometric parameters of the P-CMUT. The optimization search routine, conducted using a genetic algorithm (GA), is coupled with the commercial FEM software ANSYS to obtain the best design variables under multiple objective functions. The results show that the optimal parameter values satisfy the conflicting objectives, namely minimizing the collapse voltage while simultaneously maintaining a customized frequency. Overall, the present results indicate that the combined FEM/GA optimization scheme provides an efficient and versatile approach to the design optimization of the P-CMUT.

  4. A VVWBO-BVO-based GM (1,1) and its parameter optimization by GRA-IGSA integration algorithm for annual power load forecasting

    PubMed Central

    Wang, Hongguang

    2018-01-01

    Annual power load forecasting is not only the premise of formulating reasonable macro power planning, but also an important guarantee for the safe and economical operation of a power system. Given the characteristics of annual power load forecasting, the grey model GM (1,1) is widely applied. Introducing a buffer operator into GM (1,1) to pre-process the historical annual power load data is one approach to improving forecasting accuracy. To solve the problem of the non-adjustable action intensity of the traditional weakening buffer operator, a variable-weight weakening buffer operator (VWWBO) and background value optimization (BVO) are used to dynamically pre-process the historical annual power load data, and a VWWBO-BVO-based GM (1,1) is proposed. To find the optimal values of the variable-weight buffer coefficient and the background value weight generating coefficient of the proposed model, grey relational analysis (GRA) and an improved gravitational search algorithm (IGSA) are integrated into a GRA-IGSA integration algorithm, which aims to maximize the grey relational grade between the simulated and actual value sequences. Through the adjustable action intensity of the buffer operator, the proposed model optimized by the GRA-IGSA integration algorithm obtains better forecasting accuracy, as demonstrated by the case studies, and can provide an optimized solution for annual power load forecasting. PMID:29768450
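
    A minimal GM (1,1) sketch may help fix ideas: cumulate the series, fit the development and grey-input coefficients by least squares against the background values, and forecast by inverse accumulation. This is the classic model only; the paper's buffer operator and optimized background-value weight are not reproduced.

```python
# Classic GM(1,1) grey forecasting on a short made-up annual load series.
import numpy as np

x0 = np.array([2.87, 3.28, 3.34, 3.77, 3.81, 4.11])   # made-up annual loads
x1 = np.cumsum(x0)                                     # 1-AGO (accumulated) sequence

# Background values z1[k] = 0.5*(x1[k] + x1[k-1]); the paper optimizes this 0.5 weight.
z1 = 0.5 * (x1[1:] + x1[:-1])
B = np.column_stack([-z1, np.ones_like(z1)])
a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]       # development / grey-input coefficients

def forecast(k):                                       # k = 0, 1, 2, ...
    if k == 0:
        return x0[0]
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
    return x1_hat - x1_prev                            # inverse AGO

print([round(forecast(k), 3) for k in range(len(x0) + 2)])  # fit + 2-step forecast
```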

  5. Balanced MR cholangiopancreatography with motion-sensitized driven-equilibrium (MSDE) preparation: Feasibility and optimization of imaging parameters.

    PubMed

    Nakayama, Tomohiro; Nishie, Akihiro; Yoshiura, Takashi; Asayama, Yoshiki; Ishigami, Kousei; Kakihara, Daisuke; Obara, Makoto; Honda, Hiroshi

    2015-12-01

    To show the feasibility of motion-sensitized driven-equilibrium-balanced magnetic resonance cholangiopancreatography and to determine the optimal velocity encoding (VENC) value. Sixteen healthy volunteers underwent MRI using a 1.5-T clinical unit and a 32-channel body array coil. For each volunteer, images were obtained using the following seven respiratory-triggered sequences: (1) balanced magnetic resonance cholangiopancreatography without motion-sensitized driven-equilibrium, and (2)-(7) balanced magnetic resonance cholangiopancreatography with motion-sensitized driven-equilibrium, with VENC = 1, 3, 5, 7, 9 and ∞ cm/s applied in the x-, y-, and z-directions. Quantitative evaluation was performed by measuring the maximum signal intensity of the common hepatic duct, portal vein, liver tissue including visible peripheral vessels, and liver tissue excluding visible peripheral vessels. We compared the contrast ratios of portal vein/common hepatic duct, liver tissue including visible peripheral vessels/common hepatic duct, and liver tissue excluding visible peripheral vessels/common hepatic duct among the five finite sequences (VENC = 1, 3, 5, 7, and 9 cm/s). Statistical comparisons were performed using the t-test for paired data with the Bonferroni correction. Suppression of blood vessel signals was achieved with the motion-sensitized driven-equilibrium sequences. We found the optimal VENC value to be either 3 or 5 cm/s, giving the best suppression of vessel signals relative to bile ducts. At a lower VENC value (1 cm/s), the bile duct signal was reduced, presumably due to minimal biliary flow. The feasibility of motion-sensitized driven-equilibrium-balanced magnetic resonance cholangiopancreatography was suggested. The optimal VENC value was considered to be either 3 or 5 cm/s. The clinical usefulness of this new magnetic resonance cholangiopancreatography sequence needs to be verified by further studies. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Molecular identification of potential denitrifying bacteria and use of D-optimal mixture experimental design for the optimization of denitrification process.

    PubMed

    Ben Taheur, Fadia; Fdhila, Kais; Elabed, Hamouda; Bouguerra, Amel; Kouidhi, Bochra; Bakhrouf, Amina; Chaieb, Kamel

    2016-04-01

    Three bacterial strains (TE1, TD3 and FB2) were isolated from date palm (degla), pistachio and barley. The presence of the nitrate reductase (narG) and nitrite reductase (nirS and nirK) genes in the selected strains was detected by PCR. Molecular identification based on 16S rDNA sequencing was applied to identify the positive strains. In addition, a D-optimal mixture experimental design was used to determine the optimal formulation of probiotic bacteria for the denitrification process. Strains harboring denitrification genes were identified as TE1, Agrococcus sp. LN828197; TD3, Cronobacter sakazakii LN828198; and FB2, Pediococcus pentosaceus LN828199. PCR results revealed that all strains carried the nirS gene, whereas only C. sakazakii LN828198 and Agrococcus sp. LN828197 harbored the nirK and narG genes, respectively. Moreover, the studied bacteria were able to form biofilms on abiotic surfaces to different degrees. Process optimization showed that the most significant reduction of nitrate was 100%, with 14.98% COD consumption and 5.57 mg/l nitrite accumulation. The response values were optimized, and the optimal combination was 78.79% C. sakazakii LN828198 (curve value), 21.21% P. pentosaceus LN828199 (curve value) and 0% Agrococcus sp. LN828197 (curve value). Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Determining optimal operation parameters for reducing PCDD/F emissions (I-TEQ values) from the iron ore sintering process by using the Taguchi experimental design.

    PubMed

    Chen, Yu-Cheng; Tsai, Perng-Jy; Mou, Jin-Luh

    2008-07-15

    This study is the first to use the Taguchi experimental design to identify the optimal operating conditions for reducing polychlorinated dibenzo-p-dioxin and dibenzofuran (PCDD/F) formation during the iron ore sintering process. Four operating parameters, including the water content (Wc; range = 6.0-7.0 wt %), suction pressure (Ps; range = 1000-1400 mmH2O), bed height (Hb; range = 500-600 mm), and type of hearth layer (sinter, hematite, or limonite), were selected for conducting experiments in a pilot-scale sinter pot to simulate various sintering operating conditions of a real-scale sinter plant. We found that the resultant optimal combination (Wc = 6.5 wt %, Hb = 500 mm, Ps = 1000 mmH2O, and hearth layer = hematite) could decrease the emission factor of total PCDD/Fs (total EF(PCDD/Fs)) by up to 62.8% relative to the current operating condition of the real-scale sinter plant (Wc = 6.5 wt %, Hb = 550 mm, Ps = 1200 mmH2O, and hearth layer = sinter). Through ANOVA, we found that Wc was the most significant parameter in determining total EF(PCDD/Fs) (accounting for 74.7% of the total contribution of the four selected parameters). The optimal combination also slightly enhanced both sinter productivity and sinter strength (30.3 t/m²/day and 72.4%, respectively) relative to those obtained under the reference operating condition (29.9 t/m²/day and 72.2%, respectively). These results further support the applicability of the obtained optimal combination to real-scale sinter production without interfering with sinter productivity or sinter strength.

  8. Orthogonal optimization of a water hydraulic pilot-operated pressure-reducing valve

    NASA Astrophysics Data System (ADS)

    Mao, Xuyao; Wu, Chao; Li, Bin; Wu, Di

    2017-12-01

    In order to optimize the comprehensive characteristics of a water hydraulic pilot-operated pressure-reducing valve, a numerical orthogonal experimental design was adopted. Six parameters of the valve (the diameters of the damping plugs, the volume of the spring chamber, the half cone angle of the main spool, the half cone angle of the pilot spool, the mass of the main spool, and the diameter of the main spool) were selected as the orthogonal factors, each with five different levels. An index combining flowrate stability, pressure stability and pressure overshoot stability (iFPOS) was used to judge the merit of each orthogonal attempt. A nested orthogonal process was adopted, and a final optimal combination of these parameters was obtained after a total of 50 numerical orthogonal experiments; iFPOS reached a fairly low value, meaning the valve achieved much better stability. During the optimization, it was also found that the diameters of the damping plugs and the main spool played important roles in the stability characteristics of the valve.

  9. Stochastic optimal operation of reservoirs based on copula functions

    NASA Astrophysics Data System (ADS)

    Lei, Xiao-hui; Tan, Qiao-feng; Wang, Xu; Wang, Hao; Wen, Xin; Wang, Chao; Zhang, Jing-wen

    2018-02-01

    Stochastic dynamic programming (SDP) has been widely used to derive operating policies for reservoirs considering streamflow uncertainties. In SDP, the transition probability matrix must be calculated accurately and efficiently in order to improve the economic benefit of reservoir operation. In this study, we propose a stochastic optimization model for hydropower generation reservoirs in which (1) the transition probability matrix is calculated based on copula functions, and (2) the value function of the last period is calculated by stepwise iteration. First, the marginal distribution of stochastic inflow in each period is built, and the joint distributions of adjacent periods are obtained using three members of the Archimedean copula family, from which the conditional probability formula is derived. Then, the value in the last period is calculated by a simple recursive equation with the proposed stepwise iteration method, and the value function is fitted with a linear regression model. These improvements are incorporated into classic SDP and applied to a case study of the Ertan reservoir, China. The results show that the transition probability matrix can be obtained more easily and accurately by the proposed copula-based method than by conventional methods based on observed or synthetic streamflow series, and that the reservoir operation benefit can also be increased.
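
    A sketch of the copula step, under the assumption of a Clayton copula with a fixed parameter and uniform-transformed marginals: the conditional distribution of next period's flow given this period's follows from the copula's partial derivative, and discretizing it gives one row of the transition probability matrix. Real use would fit the copula parameter and both marginals from inflow data.

```python
# Build a flow transition-probability matrix from an assumed Clayton copula.
import numpy as np

theta = 2.0                       # assumed Clayton dependence parameter

def h(v, u, theta):
    # Conditional CDF P(V <= v | U = u) = dC(u, v)/du for the Clayton copula
    # C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta).
    return u ** (-theta - 1.0) * (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta - 1.0)

# Discretize this period's and next period's flows into 5 quantile bins.
edges = np.linspace(0.0, 1.0, 6)[1:-1]         # interior bin edges: 0.2 ... 0.8
u_mid = np.linspace(0.1, 0.9, 5)               # representative value per bin

P = np.zeros((5, 5))
for i, u in enumerate(u_mid):
    cdf = np.concatenate([[0.0], h(edges, u, theta), [1.0]])
    P[i] = np.diff(cdf)                        # conditional bin probabilities
print(np.round(P, 3))                          # each row sums to 1
```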

  10. Optimizing model: insemination, replacement, seasonal production, and cash flow.

    PubMed

    DeLorenzo, M A; Spreen, T H; Bryan, G R; Beede, D K; Van Arendonk, J A

    1992-03-01

    Dynamic programming to solve the Markov decision process problem of optimal insemination and replacement decisions was adapted to address large dairy herd management decision problems in the US. Expected net present values of 151,200 cow states were used to determine the optimal policy. States were specified by parity class (n = 12), production level (n = 15), month of calving (n = 12), month of lactation (n = 16), and days open (n = 7). The methodology optimized decisions based on the net present value of an individual cow and all replacements over a 20-yr decision horizon. The length of the decision horizon was chosen to ensure that optimal policies were determined for an infinite planning horizon. Optimization took 286 s of central processing unit time. The final probability transition matrix was determined, in part, by the optimal policy. It was estimated iteratively to determine the post-optimization steady-state herd structure, milk production, replacement, feed inputs and costs, and the resulting cash flow on a calendar-month and annual basis if optimal policies were implemented. Implementation of the model included seasonal effects on lactation curve shapes, estrus detection rates, pregnancy rates, milk prices, replacement costs, cull prices, and genetic progress. Other inputs included calf values, the values of dietary TDN and CP per kilogram, and the discount rate. Stochastic elements included conception (and thus subsequent freshening), cow milk production level within herd, and survival. Optimized solutions were validated by a separate simulation model, which implemented the policies on a simulated herd and also described herd dynamics during the transition to the optimized structure.
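
    A much-reduced sketch of the keep-versus-replace dynamic program: value iteration on a handful of age states with made-up revenues and costs, in place of the paper's 151,200 states and seasonal inputs.

```python
# Toy value iteration for a keep-vs-replace decision over age classes.
import numpy as np

ages = np.arange(1, 6)                            # state: "age" class 1..5
revenue = np.array([9.0, 10.0, 8.0, 6.0, 4.0])    # net revenue per period by age
replace_cost = 15.0
salvage = 5.0
beta = 0.95                                       # per-period discount factor

V = np.zeros(ages.size)
for _ in range(500):                              # value iteration to convergence
    # Keep: earn this age's revenue, move to the next age class (capped).
    keep = revenue + beta * V[np.minimum(np.arange(ages.size) + 1, ages.size - 1)]
    # Replace: sell, pay for a new animal, earn first-period revenue, restart.
    replace = salvage - replace_cost + revenue[0] + beta * V[1]
    V_new = np.maximum(keep, replace)
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new

policy = np.where(keep >= replace, "keep", "replace")
print(dict(zip(ages.tolist(), policy)))
```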

  11. Algorithms for bilevel optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    General multilevel nonlinear optimization problems arise in design of complex systems and can be used as a means of regularization for multi-criteria optimization problems. Here, for clarity in displaying our ideas, we restrict ourselves to general bi-level optimization problems, and we present two solution approaches. Both approaches use a trust-region globalization strategy, and they can be easily extended to handle the general multilevel problem. We make no convexity assumptions, but we do assume that the problem has a nondegenerate feasible set. We consider necessary optimality conditions for the bi-level problem formulations and discuss results that can be extended to obtain multilevel optimization formulations with constraints at each level.

  12. Fuzzy logic controller optimization

    DOEpatents

    Sepe, Jr., Raymond B; Miller, John Michael

    2004-03-23

    A method is provided for optimizing a rotating induction machine system fuzzy logic controller. The fuzzy logic controller has at least one input and at least one output. Each input accepts a machine system operating parameter. Each output produces at least one machine system control parameter. The fuzzy logic controller generates each output based on at least one input and on fuzzy logic decision parameters. Optimization begins by obtaining a set of data relating each control parameter to at least one operating parameter for each machine operating region. A model is constructed for each machine operating region based on the machine operating region data obtained. The fuzzy logic controller is simulated with at least one created model in a feedback loop from a fuzzy logic output to a fuzzy logic input. Fuzzy logic decision parameters are optimized based on the simulation.

  13. Using a Pareto-optimal solution set to characterize trade-offs between a broad range of values and preferences in climate risk management

    NASA Astrophysics Data System (ADS)

    Garner, Gregory; Reed, Patrick; Keller, Klaus

    2015-04-01

    Integrated assessment models (IAMs) are often used to inform the design of climate risk management strategies. Previous IAM studies have broken important new ground on analyzing the effects of parametric uncertainties, but they are often silent on the implications of uncertainties regarding the problem formulation. Here we use the Dynamic Integrated model of Climate and the Economy (DICE) to analyze the effects of uncertainty surrounding the definition of the objective(s). The standard DICE model adopts a single objective to maximize a weighted sum of utilities of per-capita consumption. Decision makers, however, are often concerned with a broader range of values and preferences that may be poorly captured by this a priori definition of utility. We reformulate the problem by introducing three additional objectives that represent values such as (i) reliably limiting global average warming to two degrees Celsius and minimizing (ii) the costs of abatement and (iii) the climate change damages. We use advanced multi-objective optimization methods to derive a set of Pareto-optimal solutions over which decision makers can trade-off and assess performance criteria a posteriori. We illustrate the potential for myopia in the traditional problem formulation and discuss the capability of this multiobjective formulation to provide decision support.
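
    A posteriori trade-off analysis of this kind starts from a non-dominated (Pareto-optimal) subset of candidate policies. The sketch below extracts that subset from randomly generated objective vectors; real use would take the objective values from DICE runs and a multi-objective search rather than random sampling.

```python
# Extract the Pareto-optimal subset from sampled policies scored on four
# objectives, all to be minimized. Objective values are random stand-ins.
import numpy as np

rng = np.random.default_rng(7)
F = rng.random((500, 4))          # 500 candidate policies x 4 objective values

def pareto_mask(F):
    # A point is Pareto-optimal if no other point is <= in every objective
    # and strictly < in at least one.
    n = len(F)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        others = np.delete(F, i, axis=0)
        dominated = np.any(np.all(others <= F[i], axis=1) &
                           np.any(others < F[i], axis=1))
        mask[i] = not dominated
    return mask

front = F[pareto_mask(F)]
print(f"{len(front)} of {len(F)} sampled policies are Pareto-optimal")
```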

  14. Optimal design of solidification processes

    NASA Technical Reports Server (NTRS)

    Dantzig, Jonathan A.; Tortorelli, Daniel A.

    1991-01-01

    An optimal design algorithm is presented for the analysis of general solidification processes, and is demonstrated for the growth of GaAs crystals in a Bridgman furnace. The system is optimal in the sense that a temperature distribution prespecified to maximize product quality is obtained in the solidifying material. The optimization uses traditional numerical programming techniques, which require the evaluation of cost and constraint functions and their sensitivities. The finite element method is incorporated to analyze the crystal solidification problem, evaluate the cost and constraint functions, and compute the sensitivities. These techniques are demonstrated in the crystal growth application by determining an optimal furnace wall temperature distribution to obtain the desired temperature profile in the crystal, and hence to maximize the crystal's quality. Several numerical optimization algorithms are studied to determine the proper convergence criteria, effective 1-D search strategies, appropriate forms of the cost and constraint functions, etc. In particular, we incorporate the conjugate gradient and quasi-Newton methods for unconstrained problems. The efficiency and effectiveness of each algorithm are presented in the example problem.

  15. Optimal control, investment and utilization schemes for energy storage under uncertainty

    NASA Astrophysics Data System (ADS)

    Mirhosseini, Niloufar Sadat

    Energy storage has the potential to offer new means for added flexibility on electricity systems. This flexibility can be used in a number of ways, including adding value towards asset management, power quality and reliability, integration of renewable resources, and energy bill savings for end users. However, uncertainty about system states and volatility in system dynamics can complicate the questions of when to invest in energy storage and how best to manage and utilize it. This work proposes models to address different problems associated with energy storage within a microgrid, including optimal control, investment, and utilization. Electric load, renewable resource output, storage technology cost, and electricity day-ahead and spot prices are the factors that bring uncertainty to the problem. A number of analytical methodologies have been adopted to develop the aforementioned models. Model predictive control and discretized dynamic programming, along with a new decomposition algorithm, are used to develop optimal control schemes for energy storage for two different levels of renewable penetration. Real option theory and Monte Carlo simulation, coupled with an optimal control approach, are used to obtain optimal incremental investment decisions considering multiple sources of uncertainty. Two-stage stochastic programming is used to develop a novel and holistic methodology, including utilization of energy storage within a microgrid, in order to interact optimally with the energy market. Energy storage can contribute in terms of value generation and risk reduction for the microgrid. The integration of the models developed here is the basis for a framework which extends from long-term investments in storage capacity to short-term operational control (charge/discharge) of storage within a microgrid. In particular, the following practical goals are achieved: (i) optimal investment in storage capacity over time to maximize savings during normal and emergency

  16. A mesh gradient technique for numerical optimization

    NASA Technical Reports Server (NTRS)

    Willis, E. A., Jr.

    1973-01-01

    A class of successive-improvement optimization methods in which directions of descent are defined in the state space along each trial trajectory is considered. The given problem is first decomposed into two discrete levels by imposing mesh points. Level 1 consists of running optimal subarcs between each successive pair of mesh points. For normal systems, these optimal two-point boundary-value problems can be solved by following a routine prescription if the mesh spacing is sufficiently close; a spacing criterion is given. Under appropriate conditions, the criterion value depends only on the coordinates of the mesh points, and its gradient with respect to those coordinates may be defined by interpreting the adjoint variables as partial derivatives of the criterion value function. In level 2, the gradient data are used to generate improvement steps or search directions in the state space which satisfy the boundary values and constraints of the given problem.

  17. Parameter estimation techniques based on optimizing goodness-of-fit statistics for structural reliability

    NASA Technical Reports Server (NTRS)

    Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.

    1993-01-01

    New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, and represents an approximation of the cumulative distribution function for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the cumulative distribution function (CDF). These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum value of the EDF statistic. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
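
    A sketch of the estimation idea on synthetic failure data: treat the Kolmogorov-Smirnov statistic as the objective and minimize it over the Weibull parameters with Powell's derivative-free method. A two-parameter fit is shown for brevity; the paper's three-parameter case would add a location parameter and its bound.

```python
# Estimate Weibull parameters by minimizing the K-S statistic (Powell's method).
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(4)
data = stats.weibull_min.rvs(c=2.5, scale=100.0, size=60, random_state=rng)

def ks_stat(params):
    shape, scale = params
    if shape <= 0 or scale <= 0:          # keep the search in the valid region
        return np.inf
    return stats.kstest(data, "weibull_min", args=(shape, 0.0, scale)).statistic

res = minimize(ks_stat, x0=[1.0, np.median(data)], method="Powell")
print("shape, scale:", np.round(res.x, 2), " KS:", round(res.fun, 4))
```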

  18. Intraocular pressure values obtained by ocular response analyzer, dynamic contour tonometry, and Goldmann tonometry in keratoconic corneas.

    PubMed

    Bayer, Atilla; Sahin, Afsun; Hürmeriç, Volkan; Ozge, Gökhan

    2010-01-01

    To determine the agreement between the dynamic contour tonometer (DCT), Goldmann applanation tonometer (GAT), and Ocular Response Analyzer (ORA) in keratoconic corneas, and to determine the effect of corneal biomechanics on intraocular pressure (IOP) measurements obtained by these devices. IOP was measured with the ORA, DCT, and GAT in random order in 120 eyes of 61 keratoconus patients. Central corneal thickness (CCT) and keratometry were measured after all IOP determinations had been made. The mean IOP measurements by the ORA and DCT were compared with the measurement by the GAT using Student's t test. Bland-Altman analysis was performed to assess the clinical agreement between these methods. The effects of corneal hysteresis (CH), corneal resistance factor (CRF), and CCT on measured IOP were explored by multiple backward stepwise linear regression analysis. The mean±SD patient age was 30.6±11.2 years. The mean±SD IOP measurements obtained with GAT, ORA Goldmann-correlated IOP (IOPg), ORA corneal-compensated IOP (IOPcc), and DCT were 10.96±2.8, 10.23±3.5, 14.65±2.8, and 15.42±2.7 mm Hg, respectively. The mean±SD CCT was 464.08±58.4 microns. The mean differences between IOPcc and GAT (P<0.0001), IOPcc and DCT (P<0.001), GAT and DCT (P<0.0001), IOPg and GAT (P<0.002), and IOPg and DCT (P<0.0001) were highly statistically significant. In multivariable regression analysis, DCT IOP and GAT IOP measurements were significantly associated with CH and CRF (P<0.0001 for both). DCT appeared to be affected by CH and CRF, and its IOP values tended to be higher when compared with GAT. ORA-measured IOPcc was found to be independent of CCT and suitable in comparison to the DCT in keratoconic eyes.
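
    The Bland-Altman computation used for such agreement studies is simple to sketch. The paired readings below are synthetic, with a bias chosen to resemble the reported GAT-DCT gap.

```python
# Bland-Altman agreement sketch for two tonometers on synthetic paired readings.
import numpy as np

rng = np.random.default_rng(5)
gat = rng.normal(11.0, 2.8, 120)                   # paired IOP readings (mm Hg)
dct = gat + rng.normal(4.4, 1.5, 120)              # second device, with bias

diff = dct - gat
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                      # 95% limits of agreement
print(f"bias {bias:.2f} mm Hg, limits of agreement "
      f"[{bias - loa:.2f}, {bias + loa:.2f}] mm Hg")
```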

  19. Prolonged release matrix tablet of pyridostigmine bromide: formulation and optimization using statistical methods.

    PubMed

    Bolourchian, Noushin; Rangchian, Maryam; Foroutan, Seyed Mohsen

    2012-07-01

    The aim of this study was to design and optimize a prolonged-release matrix formulation of pyridostigmine bromide, an effective drug in myasthenia gravis and nerve gas poisoning, using hydrophilic and hydrophobic polymers via a D-optimal experimental design. HPMC and carnauba wax as retarding agents, as well as tricalcium phosphate, were used in the matrix formulation and considered as independent variables. Tablets were prepared by the wet granulation technique, and the percentages of drug released at 1 (Y1), 4 (Y2) and 8 (Y3) hours were considered as dependent variables (responses) in this investigation. These experimental responses were best fitted by cubic, cubic and linear models, respectively. The optimal formulation obtained in this study, consisting of 12.8% HPMC, 24.4% carnauba wax and 26.7% tricalcium phosphate, had suitable prolonged-release behavior following the Higuchi model, in which observed and predicted values were very close. The study revealed that a D-optimal design could facilitate the optimization of a prolonged-release matrix tablet containing pyridostigmine bromide. Accelerated stability studies confirmed that the optimized formulation remains unchanged after exposure to stability conditions for six months.

  20. Optimization of a Future RLV Business Case using Multiple Strategic Market Prices

    NASA Astrophysics Data System (ADS)

    Charania, A.; Olds, J. R.

    2002-01-01

    There is a lack of depth in the current paradigm of conceptual-level economic models used to evaluate the value and viability of future capital projects such as a commercial reusable launch vehicle (RLV). Current modeling methods assume a single price is charged to all customers, public or private, in order to optimize the economic metrics of interest. This assumption may not be valid given the different utility functions for space services of public and private entities. The government's requirements are generally more inflexible than those of its commercial counterparts. A government's launch schedules are much more rigid, its choices of international launch services restricted, and its launch specifications generally more stringent as well as numerous. These requirements generally make the government's demand curve more inelastic. Subsequently, a launch vehicle provider will charge a higher price (launch price per kg) to the government and may obtain a higher financial profit compared to an equivalent commercial payload. This profit is not by itself a sufficient condition to enable RLV development, but it can make the financial situation slightly better. An RLV can potentially address multiple payload markets; each market has a different price elasticity of demand for both the commercial and government customer. Thus, a more resilient examination of the economic landscape requires optimization of multiple prices in which each price affects a different demand curve. Such an examination is performed here using the Cost and Business Analysis Module (CABAM), an MS-Excel spreadsheet-based model that attempts to couple both the demand and supply for space transportation services in the future. The demand takes the form of market assumptions (both near-term and far-term) and the supply comes from user-defined vehicles that are placed into the model. CABAM represents RLV projects as commercial endeavors with the possibility to model the effects of government
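
    The multi-price idea can be caricatured in a few lines: maximize total profit over two prices facing two demand curves of different elasticity. The linear demand curves, cost, and numbers below are invented placeholders, not CABAM's market model.

```python
# Toy two-market pricing: one inelastic (government-like) and one elastic
# (commercial-like) demand curve, each with its own optimal price.
import numpy as np
from scipy.optimize import minimize

def demand_gov(p):
    return np.maximum(30.0 - 1.0 * p, 0.0)    # inelastic demand
def demand_com(p):
    return np.maximum(60.0 - 4.0 * p, 0.0)    # elastic demand

COST = 3.0                                     # cost per launch-unit

def neg_profit(prices):
    pg, pc = prices
    return -((pg - COST) * demand_gov(pg) + (pc - COST) * demand_com(pc))

res = minimize(neg_profit, x0=[10.0, 10.0], method="Nelder-Mead")
print("gov price %.2f, commercial price %.2f, profit %.1f"
      % (res.x[0], res.x[1], -res.fun))
```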

  1. Optimization of microwave assisted extraction of essential oils from Iranian Rosmarinus officinalis L. using RSM.

    PubMed

    Akhbari, Maryam; Masoum, Saeed; Aghababaei, Fahimeh; Hamedi, Sepideh

    2018-06-01

    In this study, the efficiencies of the conventional hydro-distillation and novel microwave hydro-distillation methods for extraction of essential oil from Rosmarinus officinalis leaves were compared. In order to attain the best yield and the highest quality of essential oil in the microwave-assisted method, the optimal values of operating parameters such as extraction time, microwave irradiation power, and water volume to plant mass ratio were investigated using a central composite design under response surface methodology. The optimal conditions for obtaining the maximum extraction yield in the microwave-assisted method were predicted as follows: extraction time of 85 min, microwave power of 888 W, and water volume to plant mass ratio of 0.5 ml/g. The extraction yield at these predicted conditions was computed as 0.7756%. The quality of the essential oils obtained in the designed experiments was optimized based on the total content of four major compounds (α-pinene, 1,8-cineole, camphor and verbenone), determined by gas chromatography coupled with mass spectrometry (GC-MS). The highest essential oil quality (55.87%) was obtained at an extraction time of 68 min, a microwave irradiation power of 700 W, and a water volume to plant mass ratio of zero.

  2. Quantitative cultures of bronchoscopically obtained specimens should be performed for optimal management of ventilator-associated pneumonia.

    PubMed

    Baselski, Vickie; Klutts, J Stacey

    2013-03-01

    Ventilator-associated pneumonia (VAP) is a leading cause of health care-associated infection. It has a high rate of attributed mortality, and this mortality is increased in patients who do not receive appropriate empirical antimicrobial therapy. As a result of the overuse of broad-spectrum antimicrobials such as the carbapenems, strains of Acinetobacter, Enterobacteriaceae, and Pseudomonas aeruginosa susceptible only to polymyxins and tigecycline have emerged as important causes of VAP. The need to accurately diagnose VAP so that appropriate discontinuation or de-escalation of antimicrobial therapy can be initiated to reduce this antimicrobial pressure is essential. Practice guidelines for the diagnosis of VAP advocate the use of bronchoalveolar lavage (BAL) fluid obtained either bronchoscopically or by the use of a catheter passed through the endotracheal tube. The CDC recommends that quantitative cultures be performed on these specimens, using ≥10⁴ CFU/ml to designate a positive culture (http://www.cdc.gov/nhsn/TOC_PSCManual.html, accessed 30 October 2012). However, there is no consensus in the clinical microbiology community as to whether these specimens should be cultured quantitatively, using the aforementioned bacterial cell count to designate infection, or by a semiquantitative approach. We have asked Vickie Baselski, University of Tennessee Health Science Center, who was the lead author on one of the seminal papers on quantitative BAL fluid culture, to explain why she believes that quantitative BAL fluid cultures are the optimal strategy for VAP diagnosis, and Stacey Klutts, University of Iowa, to advocate the semiquantitative approach.

  3. A generalization of Fatou's lemma for extended real-valued functions on σ-finite measure spaces: with an application to infinite-horizon optimization in discrete time.

    PubMed

    Kamihigashi, Takashi

    2017-01-01

    Given a sequence [Formula: see text] of measurable functions on a σ -finite measure space such that the integral of each [Formula: see text] as well as that of [Formula: see text] exists in [Formula: see text], we provide a sufficient condition for the following inequality to hold: [Formula: see text] Our condition is considerably weaker than sufficient conditions known in the literature such as uniform integrability (in the case of a finite measure) and equi-integrability. As an application, we obtain a new result on the existence of an optimal path for deterministic infinite-horizon optimization problems in discrete time.
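
    The record's formulas were lost in extraction ("[Formula: see text]"). For orientation only, the classical Fatou's lemma that results of this kind generalize reads as follows (for measurable f_n ≥ 0; the paper weakens the hypotheses and allows extended real values):

```latex
% Classical Fatou's lemma on a measure space (\Omega, \mathcal{F}, \mu):
\int_\Omega \liminf_{n \to \infty} f_n \, d\mu
\;\le\;
\liminf_{n \to \infty} \int_\Omega f_n \, d\mu
```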

  4. Optimization of processing parameters for the preparation of phytosterol microemulsions by the solvent displacement method.

    PubMed

    Leong, Wai Fun; Che Man, Yaakob B; Lai, Oi Ming; Long, Kamariah; Misran, Misni; Tan, Chin Ping

    2009-09-23

    The purpose of this study was to optimize the parameters involved in the production of water-soluble phytosterol microemulsions for use in the food industry. In this study, response surface methodology (RSM) was employed to model and optimize four of the processing parameters, namely, the number of cycles of high-pressure homogenization (1-9 cycles), the pressure used for high-pressure homogenization (100-500 bar), the evaporation temperature (30-70 degrees C), and the concentration ratio of the microemulsions (1-5). All responses (particle size (PS), polydispersity index (PDI), and percent ethanol residual (%ER)) were well fit by a reduced cubic model obtained by multiple regression after manual elimination. The coefficient of determination (R²) and absolute average deviation (AAD) values for PS, PDI, and %ER were 0.9628 and 0.5398%, 0.9953 and 0.7077%, and 0.9989 and 1.0457%, respectively. The optimized processing parameters were 4.88 (approximately 5) cycles of high-pressure homogenization, a homogenization pressure of 400 bar, an evaporation temperature of 44.5 degrees C, and a microemulsion concentration ratio of 2.34 (approximately 2). The corresponding responses for the optimized preparation conditions were a minimal particle size of 328 nm, a minimal polydispersity index of 0.159, and <0.1% ethanol residual. The chi-square test verified the model, whereby the experimental values of PS, PDI, and %ER agreed with the predicted values at the 0.05 level of significance.

  5. Simulation of value stream mapping and discrete optimization of energy consumption in modular construction

    NASA Astrophysics Data System (ADS)

    Chowdhury, Md Mukul

    With the increased practice of modularization and prefabrication, the construction industry gained the benefits of quality management, improved completion time, reduced site disruption and vehicular traffic, and improved overall safety and security. Whereas industrialized construction methods, such as modular and manufactured buildings, have evolved over decades, the core techniques used in prefabrication plants vary only slightly from those employed in traditional site-built construction. With a focus on energy- and cost-efficient modular construction, this research presents the development of a simulation, measurement and optimization system for energy consumption in the manufacturing process of modular construction. The system is based on Lean Six Sigma principles and loosely coupled system operation to identify non-value-adding tasks and possible causes of low energy efficiency. The proposed system also includes visualization functions for demonstrating energy consumption in modular construction. The benefits of implementing this system include reduced energy consumption and production cost, a decrease in the energy cost of lean-modular construction, and increased profit. In addition, the visualization functions provide detailed information about energy efficiency and operational flexibility in modular construction. A case study is presented to validate the reliability of the system.

  6. Superlattice design for optimal thermoelectric generator performance

    NASA Astrophysics Data System (ADS)

    Priyadarshi, Pankaj; Sharma, Abhishek; Mukherjee, Swarnadip; Muralidharan, Bhaskaran

    2018-05-01

    We consider the design of an optimal superlattice thermoelectric generator via the energy bandpass filter approach. Various configurations of superlattice structures are explored to obtain a bandpass transmission spectrum that approaches the ideal ‘boxcar’ form, which is now well known to manifest the largest efficiency at a given output power in the ballistic limit. Using the coherent non-equilibrium Green’s function formalism coupled self-consistently with Poisson’s equation, we identify such an ideal structure and also demonstrate that it is almost immune to the deleterious effects of self-consistent charging and device variability. Analyzing various superlattice designs, we conclude that a superlattice with a Gaussian distribution of barrier thickness offers the best thermoelectric efficiency at maximum power. It is observed that the best operating regime of this device design provides a maximum power in the range of 0.32–0.46 MW/m² at efficiencies between 54% and 43% of the Carnot efficiency. We also analyze our device designs with the conventional figure-of-merit approach to support the results so obtained. We note a high zT_el = 6 value in the case of the Gaussian distribution of barrier thickness. With existing advanced thin-film growth technology, the suggested superlattice structures can be achieved, and such optimized thermoelectric performance can be realized.

  7. Policy Iteration for $H_\infty$ Optimal Control of Polynomial Nonlinear Systems via Sum of Squares Programming.

    PubMed

    Zhu, Yuanheng; Zhao, Dongbin; Yang, Xiong; Zhang, Qichao

    2018-02-01

    Sum of squares (SOS) polynomials have provided a computationally tractable way to deal with inequality constraints appearing in many control problems. They can also act as approximators in the framework of adaptive dynamic programming. In this paper, an approximate solution to the H∞ optimal control of polynomial nonlinear systems is proposed. Under a given attenuation coefficient, the Hamilton-Jacobi-Isaacs equation is relaxed to an optimization problem with a set of inequalities. After applying the policy iteration technique and constraining the inequalities to SOS, the optimization problem is divided into a sequence of feasible semidefinite programming problems. With the converged solution, the attenuation coefficient is further minimized to a lower value. After iterations, approximate solutions to the smallest L2-gain and the associated H∞ optimal controller are obtained. Four examples are employed to verify the effectiveness of the proposed algorithm.

  8. Optimal global value of information trials: better aligning manufacturer and decision maker interests and enabling feasible risk sharing.

    PubMed

    Eckermann, Simon; Willan, Andrew R

    2013-05-01

    Risk sharing arrangements relate to adjusting payments for new health technologies given evidence of their performance over time. Such arrangements rely on prospective information regarding the incremental net benefit of the new technology and its use in practice. However, once the new technology has been adopted in a particular jurisdiction, randomized clinical trials within that jurisdiction are likely to be infeasible and unethical in the cases where they would be most helpful, i.e., where current evidence points to positive but uncertain incremental health and net monetary benefit. Informed patients in these cases would likely be reluctant to participate in a trial, preferring instead to receive the new technology with certainty. Consequently, informing risk sharing arrangements within a jurisdiction is problematic given the infeasibility of collecting prospective trial data. To overcome such problems, we demonstrate that global trials facilitate trialling post adoption, leading to more complete and robust risk sharing arrangements that mitigate the impact of costs of reversal on the expected value of information in jurisdictions that adopt while a global trial is undertaken. More generally, optimally designed global trials offer distinct advantages over locally optimal solutions for decision makers and manufacturers alike: avoiding opportunity costs of delay in jurisdictions that adopt; overcoming barriers to evidence collection; and improving levels of expected implementation. Further, the greater strength and translatability of evidence across jurisdictions inherent in optimal global trial design reduces the barriers to translation across jurisdictions characteristic of local trials. Consequently, efficiently designed global trials better align the interests of decision makers and manufacturers, increasing the feasibility of risk sharing and the expected strength of evidence over local trials, up until the point that current evidence is globally sufficient.

  9. Research on particle swarm optimization algorithm based on optimal movement probability

    NASA Astrophysics Data System (ADS)

    Ma, Jianhong; Zhang, Han; He, Baofeng

    2017-01-01

    The particle swarm optimization (PSO) algorithm can improve control precision and has great application value in fields such as neural network training and fuzzy system control. When the traditional particle swarm algorithm is used for training feed-forward neural networks, the search efficiency is low and the algorithm easily falls into local convergence. An improved particle swarm optimization algorithm based on error back-propagation gradient descent is therefore proposed. The particles are ranked by fitness so that the optimization problem is considered as a whole, while error back-propagation gradient descent trains the BP neural network. Each particle updates its velocity and position according to its individual optimum and the global optimum; by making the particles learn more from the social optimum and less from their own optima, the algorithm avoids trapping particles in local optima, and the gradient information accelerates the local search ability of PSO and improves search efficiency. Simulation results show that in the initial stage the algorithm converges rapidly toward the global optimal solution and then remains close to it, and that for the same running time the algorithm has a faster convergence speed and better search performance, especially improving the efficiency of the later search stage.
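
    For reference, a minimal global-best PSO loop on a test function is sketched below. The paper's specific additions (back-propagation gradient moves and the optimal movement probability) are not reproduced, and the inertia and acceleration constants are common defaults, not the paper's settings.

```python
# Minimal global-best PSO on the sphere test function.
import numpy as np

rng = np.random.default_rng(6)

def f(x):                                    # sphere function, minimum at 0
    return np.sum(x ** 2, axis=1)

n, dim, iters = 30, 5, 200
w, c1, c2 = 0.72, 1.49, 1.49                 # inertia and acceleration weights
x = rng.uniform(-5, 5, (n, dim))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), f(x)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    val = f(x)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best value:", f(gbest[None, :])[0])
```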

  10. BBD Optimization of K-ZnO Catalyst Modification Process for Heterogeneous Transesterification of Rice Bran Oil to Biodiesel

    NASA Astrophysics Data System (ADS)

    Kabo, K. S.; Yacob, A. R.; Bakar, W. A. W. A.; Buang, N. A.; Bello, A. M.; Ruskam, A.

    2016-07-01

    Environmentally benign zinc oxide (ZnO) was modified with 0-15 wt.% potassium through wet impregnation and used in the transesterification of rice bran oil (RBO) to biodiesel. The catalyst was characterized by X-ray powder diffraction (XRD), its basic sites were determined by back titration, and a response surface methodology (RSM) Box-Behnken design (BBD) was used to optimize the effect of the modification process variables on the basic sites of the catalyst. The transesterification product, biodiesel, was analyzed by nuclear magnetic resonance (NMR) spectroscopy. The results reveal that K-modified ZnO has greatly increased basic sites. A quadratic model with a high regression coefficient (R² = 0.9995) was obtained from the ANOVA of the modification process; optimization under the maximum-basic-sites criterion gave optimum modification conditions of K-loading = 8.5 wt.%, calcination temperature = 480 °C and time = 4 hours, with a predicted response of 8.14 mmol/g basic sites, in close agreement with the experimental value of 7.64 mmol/g. The catalyst was used and a biodiesel conversion of 95.53% was obtained; the effect of potassium leaching was not significant in the process.

  11. Estimating SPT-N Value Based on Soil Resistivity using Hybrid ANN-PSO Algorithm

    NASA Astrophysics Data System (ADS)

    Nur Asmawisham Alel, Mohd; Ruben Anak Upom, Mark; Asnida Abdullah, Rini; Hazreek Zainal Abidin, Mohd

    2018-04-01

    Standard Penetration Resistance (N value) is used in many empirical geotechnical engineering formulas. Meanwhile, soil resistivity is a measure of soil’s resistance to electrical flow. For a particular site, usually, only a limited N value data are available. In contrast, resistivity data can be obtained extensively. Moreover, previous studies showed evidence of a correlation between N value and resistivity value. Yet, no existing method is able to interpret resistivity data for estimation of N value. Thus, the aim is to develop a method for estimating N-value using resistivity data. This study proposes a hybrid Artificial Neural Network-Particle Swarm Optimization (ANN-PSO) method to estimate N value using resistivity data. Five different ANN-PSO models based on five boreholes were developed and analyzed. The performance metrics used were the coefficient of determination, R2 and mean absolute error, MAE. Analysis of result found that this method can estimate N value (R2 best=0.85 and MAEbest=0.54) given that the constraint, Δ {\\bar{l}}ref, is satisfied. The results suggest that ANN-PSO method can be used to estimate N value with good accuracy.

  12. Harmonic Optimization in Voltage Source Inverter for PV Application using Heuristic Algorithms

    NASA Astrophysics Data System (ADS)

    Kandil, Shaimaa A.; Ali, A. A.; El Samahy, Adel; Wasfi, Sherif M.; Malik, O. P.

    2016-12-01

    Selective Harmonic Elimination (SHE) is a fundamental-switching-frequency scheme used to eliminate specific harmonic orders. Its application to minimizing low order harmonics in a three-level inverter is proposed in this paper. The modulation strategy used here is SHEPWM, and the nonlinear equations that characterize the low order harmonics are solved using the Harmony Search Algorithm (HSA) to obtain the optimal switching angles that minimize the required harmonics while maintaining the fundamental at the desired value. The Total Harmonic Distortion (THD) of the output voltage is minimized while keeping selected harmonics within allowable limits. A comparison has been drawn between HSA, a Genetic Algorithm (GA) and the Newton-Raphson (NR) technique using MATLAB software to compare their effectiveness in obtaining the optimized switching angles.
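
    For concreteness, the SHE equation set has the following shape for three switching angles in the first quarter-wave: drive the fundamental to a target modulation index while zeroing the 5th and 7th harmonics. The alternating-sign waveform convention and the Newton-type solver below (standing in for HSA) are illustrative assumptions, not necessarily the paper's exact formulation.

    ```python
    import numpy as np
    from scipy.optimize import fsolve

    m = 0.8  # target per-unit fundamental (modulation index), assumed value

    def she(theta):
        t1, t2, t3 = theta
        # common textbook form for a three-level quarter-wave-symmetric waveform
        f = lambda n: np.cos(n * t1) - np.cos(n * t2) + np.cos(n * t3)
        return [f(1) - m, f(5), f(7)]   # hit fundamental, kill 5th and 7th

    theta0 = np.deg2rad([10.0, 30.0, 50.0])  # ordered guess in (0, 90) degrees
    sol = fsolve(she, theta0)
    print(np.rad2deg(sol), she(sol))         # angles (deg) and residuals
    ```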

  13. Optimization of ultrasound-assisted extraction of biomass from olive trees using response surface methodology.

    PubMed

    Martínez-Patiño, José Carlos; Gullón, Beatriz; Romero, Inmaculada; Ruiz, Encarnación; Brnčić, Mladen; Žlabur, Jana Šic; Castro, Eulogio

    2018-05-26

    Olive tree pruning biomass (OTP) and olive mill leaves (OML) are the main residual lignocellulosic biomasses that are generated from olive trees. They have been proposed as a source of value-added compounds and biofuels within the biorefinery concept. In this work, the optimization of an ultrasound-assisted extraction (UAE) process was performed to extract antioxidant compounds present in OTP and OML. The effect of the three parameters, ethanol/water ratio (20, 50, 80% of ethanol concentration), amplitude percentage (30, 50, 70%) and ultrasonication time (5, 10, 15 min), on the responses of total phenolic content (TPC), total flavonoid content (TFC) and antioxidant activities (DPPH, ABTS and FRAP) were evaluated following a Box-Behnken experimental design. The optimal conditions obtained from the model, taking into account simultaneously the five responses, were quite similar for OTP and OML, with 70% amplitude and 15 min for both biomasses and a slight difference in the optimum ethanol concentration (54.5% for OTP versus 51.3% for OML). When comparing the antioxidant activities obtained with OTP and OML, higher values were obtained for OML (around 40% more than for OTP). The antioxidant activities reached experimentally under the optimized conditions were 31.6 mg of TE/g of OTP and 42.5 mg of TE/g of OML with the DPPH method, 66.5 mg of TE/g of OTP and 95.9 mg of TE/g of OML with the ABTS method, and 36.4 mg of TE/g of OTP and 49.7 mg of TE/g of OML with the FRAP method. Both OTP and OML could be a potential source of natural antioxidants. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Optimal harvesting of a stochastic delay tri-trophic food-chain model with Lévy jumps

    NASA Astrophysics Data System (ADS)

    Qiu, Hong; Deng, Wenmin

    2018-02-01

    In this paper, the optimal harvesting of a stochastic delay tri-trophic food-chain model with Lévy jumps is considered. We introduce two kinds of environmental perturbations into this model. One is white noise, which is continuous and is described by a stochastic integral with respect to standard Brownian motion. The other is jumping noise, which is modeled by a Lévy process. Under some mild assumptions, the critical values between extinction and persistence in the mean of each species are established. Sufficient and necessary criteria for the existence of an optimal harvesting policy are established, and the optimal harvesting effort and the maximum sustainable yield are also obtained. We utilize the ergodic method to discuss the optimal harvesting problem. The results show that white noises and Lévy noises significantly affect the optimal harvesting policy, while time delays are harmless to the optimal harvesting strategy in some cases. Finally, some numerical examples are introduced to show the validity of our results.

  15. Approaches for optimizing the first electronic hyperpolarizability of conjugated organic molecules

    NASA Technical Reports Server (NTRS)

    Marder, S. R.; Beratan, D. N.; Cheng, L.-T.

    1991-01-01

    Conjugated organic molecules with electron-donating and -accepting moieties can exhibit large electronic second-order nonlinearities, or first hyperpolarizabilities, beta. The present two-state, four-orbital independent-electron analysis of beta leads to the prediction that its absolute value will be maximized at a combination of donor and acceptor strengths for a given conjugated bridge. Molecular design strategies for beta optimization are proposed which give attention to the energetic manipulations of the bridge states. Experimental results have been obtained which support the validity of this approach.

  16. Pumping strategies for management of a shallow water table: The value of the simulation-optimization approach

    USGS Publications Warehouse

    Barlow, P.M.; Wagner, B.J.; Belitz, K.

    1996-01-01

    The simulation-optimization approach is used to identify ground-water pumping strategies for control of the shallow water table in the western San Joaquin Valley, California, where shallow ground water threatens continued agricultural productivity. The approach combines the use of ground-water flow simulation with optimization techniques to build on and refine pumping strategies identified in previous research that used flow simulation alone. Use of the combined simulation-optimization model resulted in a 20 percent reduction in the area subject to a shallow water table over that identified by use of the simulation model alone. The simulation-optimization model identifies increasingly more effective pumping strategies for control of the water table as the complexity of the problem increases; that is, as the number of subareas in which pumping is to be managed increases, the simulation-optimization model is better able to discriminate areally among subareas to determine optimal pumping locations. The simulation-optimization approach provides an improved understanding of controls on the ground-water flow system and management alternatives that can be implemented in the valley. In particular, results of the simulation-optimization model indicate that optimal pumping strategies are constrained by the existing distribution of wells between the semiconfined and confined zones of the aquifer, by the distribution of sediment types (and associated hydraulic conductivities) in the western valley, and by the historical distribution of pumping throughout the western valley.

  17. Optimal policy for profit maximising in an EOQ model under non-linear holding cost and stock-dependent demand rate

    NASA Astrophysics Data System (ADS)

    Pando, V.; García-Laguna, J.; San-José, L. A.

    2012-11-01

    In this article, we integrate a non-linear holding cost with a stock-dependent demand rate in a model maximising profit per unit time, extending several inventory models studied by other authors. After giving the mathematical formulation of the inventory system, we prove the existence and uniqueness of the optimal policy. Relying on this result, we can obtain the optimal solution using different numerical algorithms. Moreover, we provide a necessary and sufficient condition to determine whether a system is profitable, and we establish a rule to check when a given order quantity is the optimal lot size of the inventory model. The results are illustrated through numerical examples and the sensitivity of the optimal solution with respect to changes in some values of the parameters is assessed.
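
    To make the "profit per unit time" objective concrete, the sketch below maximizes a one-variable illustrative instance with a non-linear holding cost term. The constant demand rate and all numbers are assumptions made for demonstration; the paper's objective is instead built from a stock-dependent demand rate.

    ```python
    from scipy.optimize import minimize_scalar

    p, c, K, d = 10.0, 6.0, 50.0, 100.0   # price, unit cost, setup cost, demand rate
    h, delta = 0.4, 1.3                   # holding-cost scale and nonlinearity

    def profit_per_time(q):
        # revenue net of purchasing, ordering, and a nonlinear holding cost in q
        return (p - c) * d - K * d / q - h * q**delta / 2.0

    res = minimize_scalar(lambda q: -profit_per_time(q),
                          bounds=(1.0, 500.0), method="bounded")
    print(res.x, profit_per_time(res.x))  # optimal lot size and profit rate
    ```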

  18. Optimization of the reconstruction parameters in [123I]FP-CIT SPECT

    NASA Astrophysics Data System (ADS)

    Niñerola-Baizán, Aida; Gallego, Judith; Cot, Albert; Aguiar, Pablo; Lomeña, Francisco; Pavía, Javier; Ros, Domènec

    2018-04-01

    The aim of this work was to obtain a set of parameters to be applied in [123I]FP-CIT SPECT reconstruction in order to minimize the error between standardized and true values of the specific uptake ratio (SUR) in dopaminergic neurotransmission SPECT studies. To this end, Monte Carlo simulation was used to generate a database of 1380 projection data-sets from 23 subjects, including normal cases and a variety of pathologies. Studies were reconstructed using filtered back projection (FBP) with attenuation correction and ordered subset expectation maximization (OSEM) with correction for different degradations (attenuation, scatter and PSF). The reconstruction parameters to be optimized were the cut-off frequency of a 2D Butterworth pre-filter in FBP, and the number of iterations and the full width at half maximum of a 3D Gaussian post-filter in OSEM. Reconstructed images were quantified using regions of interest (ROIs) derived from Magnetic Resonance scans and from the Automated Anatomical Labeling map. Results were standardized by applying a simple linear regression line obtained from the entire patient dataset. Our findings show that we can obtain a set of optimal parameters for each reconstruction strategy. The accuracy of the standardized SUR increases when the reconstruction method includes more corrections. The use of generic ROIs instead of subject-specific ROIs adds significant inaccuracies. Thus, after reconstruction with OSEM and correction for all degradations, subject-specific ROIs led to errors between standardized and true SUR values in the range [-0.5, +0.5] in 87% and 92% of the cases for caudate and putamen, respectively. These percentages dropped to 75% and 88% when the generic ROIs were used.

  19. Damage identification in beams using speckle shearography and an optimal spatial sampling

    NASA Astrophysics Data System (ADS)

    Mininni, M.; Gabriele, S.; Lopes, H.; Araújo dos Santos, J. V.

    2016-10-01

    Over the years, the derivatives of modal displacement and rotation fields have been used to localize damage in beams. Usually, the derivatives are computed by applying finite differences. The finite differences propagate and amplify the errors that exist in real measurements, and thus, it is necessary to minimize this problem in order to get reliable damage localizations. A way to decrease the propagation and amplification of the errors is to select an optimal spatial sampling. This paper presents a technique where an optimal spatial sampling of modal rotation fields is computed and used to obtain the modal curvatures. Experimental measurements of modal rotation fields of a beam with single and multiple damages are obtained with shearography, which is an optical technique allowing the measurement of full-fields. These measurements are used to test the validity of the optimal sampling technique for the improvement of damage localization in real structures. An investigation on the ability of a model updating technique to quantify the damage is also reported. The model updating technique is defined by the variations of measured natural frequencies and measured modal rotations and aims at calibrating the values of the second moment of area in the damaged areas, which were previously localized.

  20. Valence electronic structure of cobalt phthalocyanine from an optimally tuned range-separated hybrid functional.

    PubMed

    Brumboiu, Iulia Emilia; Prokopiou, Georgia; Kronik, Leeor; Brena, Barbara

    2017-07-28

    We analyse the valence electronic structure of cobalt phthalocyanine (CoPc) by means of optimally tuning a range-separated hybrid functional. The tuning is performed by modifying both the amount of short-range exact exchange (α) included in the hybrid functional and the range-separation parameter (γ), with two strategies employed for finding the optimal γ for each α. The influence of these two parameters on the structural, electronic, and magnetic properties of CoPc is thoroughly investigated. The electronic structure is found to be very sensitive to the amount and range in which the exact exchange is included. The electronic structure obtained using the optimal parameters is compared to gas-phase photo-electron data and GW calculations, with the unoccupied states additionally compared with inverse photo-electron spectroscopy measurements. The calculated spectrum with tuned γ, determined for the optimal value of α = 0.1, yields a very good agreement with both experimental results and with GW calculations that well-reproduce the experimental data.

  1. Differences in liver stiffness values obtained with new ultrasound elastography machines and Fibroscan: A comparative study.

    PubMed

    Piscaglia, Fabio; Salvatore, Veronica; Mulazzani, Lorenzo; Cantisani, Vito; Colecchia, Antonio; Di Donato, Roberto; Felicani, Cristina; Ferrarini, Alessia; Gamal, Nesrine; Grasso, Valentina; Marasco, Giovanni; Mazzotta, Elena; Ravaioli, Federico; Ruggieri, Giacomo; Serio, Ilaria; Sitouok Nkamgho, Joules Fabrice; Serra, Carla; Festi, Davide; Schiavone, Cosima; Bolondi, Luigi

    2017-07-01

    Whether Fibroscan thresholds can be immediately adopted for none, some or all other shear wave elastography techniques has not been tested. The aim of the present study was to test the concordance of the findings obtained from 7 of the most recent ultrasound elastography machines with respect to Fibroscan. Sixteen patients with hepatitis C virus-related fibrosis (stage ≥2) and reliable results at Fibroscan were investigated in two intercostal spaces using 7 different elastography machines. Coefficients of both precision (an index of data dispersion) and accuracy (an index of bias correction factors expressing different magnitudes of changes in comparison to the reference) were calculated. Median stiffness values differed among the different machines, as did coefficients of both precision (range 0.54-0.72) and accuracy (range 0.28-0.87). When the average of the measurements of two intercostal spaces was considered, coefficients of precision increased significantly with all machines (range 0.72-0.90), whereas coefficients of accuracy improved less consistently and to a smaller degree (range 0.40-0.99). The present results showed only moderate concordance of the majority of elastography machines with the Fibroscan results, preventing the immediate universal adoption of Fibroscan thresholds for defining liver fibrosis staging for all new machines. Copyright © 2017 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.

  2. Statistical optimization of bioprocess parameters for enhanced gallic acid production from coffee pulp tannins by Penicillium verrucosum.

    PubMed

    Bhoite, Roopali N; Navya, P N; Murthy, Pushpa S

    2013-01-01

    Gallic acid (3,4,5-trihydroxybenzoic acid) was produced by microbial biotransformation of coffee pulp tannins by Penicillium verrucosum. Gallic acid production was optimized using response surface methodology (RSM) based on a central composite rotatable design. Process parameters such as pH, moisture, and fermentation period were considered for optimization. Among the various fungi isolated from coffee by-products, Penicillium verrucosum produced 35.23 µg/g of gallic acid on coffee pulp as sole carbon source in solid-state fermentation. The optimum values of the parameters obtained from the RSM were pH 3.32, moisture 58.40%, and a fermentation period of 96 hr. A 4.6-fold increase in gallic acid production was achieved upon optimization of the process parameters. The optimized process could be translated to 1-kg tray fermentation. High-performance liquid chromatography (HPLC) analysis and spectral studies such as mass spectroscopy (MS) and ¹H-nuclear magnetic resonance (NMR) confirmed that the bioactive compound isolated was gallic acid. Thus, coffee pulp, which is available in enormous quantity, could be used for the production of value-added products that can find avenues in food, pharmaceutical, and chemical industries.

  3. Automatic Sleep Stage Determination by Multi-Valued Decision Making Based on Conditional Probability with Optimal Parameters

    NASA Astrophysics Data System (ADS)

    Wang, Bei; Sugi, Takenao; Wang, Xingyu; Nakamura, Masatoshi

    Data for human sleep studies may be affected by internal and external influences. Recorded sleep data contain complex and stochastic factors, which make it difficult to apply computerized sleep stage determination techniques in clinical practice. The aim of this study is to develop an automatic sleep stage determination system which is optimized for variable sleep data. The main methodology includes two modules: expert knowledge database construction and automatic sleep stage determination. Visual inspection by a qualified clinician is utilized to obtain the probability density function of parameters during the learning process of expert knowledge database construction. Parameter selection is introduced in order to make the algorithm flexible. Automatic sleep stage determination is performed based on conditional probability. The results showed close agreement with visual inspection by the clinician. The developed system can meet customized requirements in hospitals and institutions.

  4. Multi-Scale Low-Entropy Method for Optimizing the Processing Parameters during Automated Fiber Placement

    PubMed Central

    Han, Zhenyu; Sun, Shouzheng; Fu, Hongya; Fu, Yunzhong

    2017-01-01

    The automated fiber placement (AFP) process involves a variety of energy forms and multi-scale effects. This contribution proposes a novel multi-scale low-entropy method aiming at optimizing processing parameters in an AFP process, where multi-scale effects, energy consumption, energy utilization efficiency and the mechanical properties of the micro-system can be taken into account synthetically. Taking a carbon fiber/epoxy prepreg as an example, mechanical properties at the macro-meso scale are obtained by the Finite Element Method (FEM). A multi-scale energy transfer model is then established to input the macroscopic results into the microscopic system as its boundary condition, which allows communication between different scales. Furthermore, microscopic characteristics, mainly micro-scale adsorption energy, diffusion coefficient and entropy-enthalpy values, are calculated under different processing parameters based on the molecular dynamics method. A low-entropy region is then obtained in terms of the interrelation among entropy-enthalpy values, microscopic mechanical properties (interface adsorbability and matrix fluidity) and processing parameters to guarantee better fluidity, stronger adsorption, lower energy consumption and higher energy quality collaboratively. Finally, nine groups of experiments are carried out to verify the validity of the simulation results. The results show that the low-entropy optimization method can reduce void content effectively, and further improve the mechanical properties of laminates. PMID:28869520

  5. Multi-Scale Low-Entropy Method for Optimizing the Processing Parameters during Automated Fiber Placement.

    PubMed

    Han, Zhenyu; Sun, Shouzheng; Fu, Hongya; Fu, Yunzhong

    2017-09-03

    The automated fiber placement (AFP) process involves a variety of energy forms and multi-scale effects. This contribution proposes a novel multi-scale low-entropy method aiming at optimizing processing parameters in an AFP process, where multi-scale effects, energy consumption, energy utilization efficiency and the mechanical properties of the micro-system can be taken into account synthetically. Taking a carbon fiber/epoxy prepreg as an example, mechanical properties at the macro-meso scale are obtained by the Finite Element Method (FEM). A multi-scale energy transfer model is then established to input the macroscopic results into the microscopic system as its boundary condition, which allows communication between different scales. Furthermore, microscopic characteristics, mainly micro-scale adsorption energy, diffusion coefficient and entropy-enthalpy values, are calculated under different processing parameters based on the molecular dynamics method. A low-entropy region is then obtained in terms of the interrelation among entropy-enthalpy values, microscopic mechanical properties (interface adsorbability and matrix fluidity) and processing parameters to guarantee better fluidity, stronger adsorption, lower energy consumption and higher energy quality collaboratively. Finally, nine groups of experiments are carried out to verify the validity of the simulation results. The results show that the low-entropy optimization method can reduce void content effectively, and further improve the mechanical properties of laminates.

  6. Disturbance by optimal discrimination

    NASA Astrophysics Data System (ADS)

    Kawakubo, Ryûitirô; Koike, Tatsuhiko

    2018-03-01

    We discuss the disturbance by measurements which unambiguously discriminate between given candidate states. We prove that such an optimal measurement necessarily renders distinguishable states indistinguishable when the inconclusive outcome is obtained. The result was previously shown by Chefles [Phys. Lett. A 239, 339 (1998), 10.1016/S0375-9601(98)00064-4] under restrictions on the class of quantum measurements and on the definition of optimality. Our theorems remove these restrictions and are also applicable to infinitely many candidate states. Combining with our previous results, one can obtain concrete mathematical conditions for the resulting states. The method may have a wide variety of applications in contexts other than state discrimination.

  7. Simulation of uranium and plutonium oxides compounds obtained in plasma

    NASA Astrophysics Data System (ADS)

    Novoselov, Ivan Yu.; Karengin, Alexander G.; Babaev, Renat G.

    2018-03-01

    The aim of this paper is to carry out thermodynamic simulation of the mixed plutonium and uranium oxide compounds obtained after plasma treatment of plutonium and uranium nitrates, and to determine the optimal water-salt-organic mixture composition as well as the conditions for their plasma treatment (temperature, air mass fraction). The authors conclude that the treatment of nitric solutions should be carried out in the form of water-salt-organic mixtures to guarantee energy-saving production of oxide compounds for mixed-oxide fuel, and explain the choice of chemical composition of the water-salt-organic mixture. It has been confirmed that a temperature of 1200 °C is optimal for the process. The authors demonstrate that the condensed products of plasma treatment of the water-salt-organic mixture contain the targeted products (uranium and plutonium oxides) and that the gaseous products are environmentally friendly. In conclusion, basic operational modes for the process are shown.

  8. An artificial neural network controller based on MPSO-BFGS hybrid optimization for spherical flying robot

    NASA Astrophysics Data System (ADS)

    Liu, Xiaolin; Li, Lanfei; Sun, Hanxu

    2017-12-01

    A spherical flying robot can perform various tasks in complex and varied environments, reducing labor costs. However, it is difficult to guarantee the stability of the spherical flying robot in the presence of strong coupling and time-varying disturbance. In this paper, an artificial neural network controller (ANNC) based on an MPSO-BFGS hybrid optimization algorithm is proposed. The MPSO algorithm is used to optimize the initial weights of the controller to avoid local optimal solutions. The BFGS algorithm is introduced to improve the convergence ability of the network. We use the Lyapunov method to analyze the stability of the ANNC. The controller is simulated under conditions of nonlinear coupling disturbance. The experimental results show that the proposed controller can reach the expected value in a shorter time than the other considered methods.
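
    The two-stage idea, a coarse PSO pass to escape poor basins followed by quasi-Newton (BFGS) refinement of the best particle, can be sketched as follows. The Rastrigin-style stand-in loss and plain global-best PSO are assumptions; the paper's modified PSO and controller-training loss are not reproduced.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def loss(w):                          # multimodal stand-in for training loss
        return 10 * w.size + np.sum(w**2 - 10 * np.cos(2 * np.pi * w))

    rng = np.random.default_rng(0)
    dim, n, iters = 8, 25, 100
    x = rng.uniform(-2, 2, (n, dim)); v = np.zeros_like(x)
    pb, pbf = x.copy(), np.apply_along_axis(loss, 1, x)
    gb = pb[pbf.argmin()].copy()
    for _ in range(iters):                # plain global-best PSO
        r1, r2 = rng.random((2, n, dim))
        v = 0.7 * v + 1.5 * r1 * (pb - x) + 1.5 * r2 * (gb - x)
        x += v
        f = np.apply_along_axis(loss, 1, x)
        upd = f < pbf
        pb[upd], pbf[upd] = x[upd], f[upd]
        gb = pb[pbf.argmin()].copy()

    res = minimize(loss, gb, method="BFGS")   # quasi-Newton polish of PSO best
    print(loss(gb), "->", res.fun)
    ```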

  9. Optimization of structures undergoing harmonic or stochastic excitation. Ph.D. Thesis; [atmospheric turbulence and white noise

    NASA Technical Reports Server (NTRS)

    Johnson, E. H.

    1975-01-01

    The optimal design of simple structures subjected to dynamic loads was investigated, with constraints on the structures' responses. Optimal designs were examined for one-dimensional structures excited by harmonically oscillating loads, similar structures excited by white noise, and a wing in the presence of continuous atmospheric turbulence. The first has constraints on the maximum allowable stress while the last two place bounds on the probability of failure of the structure. Approximations were made to replace the time parameter with a frequency parameter: for the first problem, this involved the steady state response, and in the remaining cases, power spectral techniques were employed to find the root mean square values of the responses. Optimal solutions were found by using computer algorithms which combined finite element methods with optimization techniques based on mathematical programming. It was found that the inertial loads for these dynamic problems result in optimal structures that are radically different from those obtained for structures loaded statically by forces of comparable magnitude.

  10. Uncertainties in obtaining high reliability from stress-strength models

    NASA Technical Reports Server (NTRS)

    Neal, Donald M.; Matthews, William T.; Vangel, Mark G.

    1992-01-01

    There has been a recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences are identified of incorrectly assuming a particular statistical distribution for stress or strength data used in obtaining the high reliability values. The computation of the reliability is defined as the probability of the strength being greater than the stress over the range of stress values. This method is often referred to as the stress-strength model. A sensitivity analysis was performed involving a comparison of reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and those that differed slightly from the known, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.

  11. Energy Optimal Path Planning: Integrating Coastal Ocean Modelling with Optimal Control

    NASA Astrophysics Data System (ADS)

    Subramani, D. N.; Haley, P. J., Jr.; Lermusiaux, P. F. J.

    2016-02-01

    A stochastic optimization methodology is formulated for computing energy-optimal paths from among time-optimal paths of autonomous vehicles navigating in a dynamic flow field. To set up the energy optimization, the relative vehicle speed and headings are considered to be stochastic, and new stochastic Dynamically Orthogonal (DO) level-set equations that govern their stochastic time-optimal reachability fronts are derived. Their solution provides the distribution of time-optimal reachability fronts and corresponding distribution of time-optimal paths. An optimization is then performed on the vehicle's energy-time joint distribution to select the energy-optimal paths for each arrival time, among all stochastic time-optimal paths for that arrival time. The accuracy and efficiency of the DO level-set equations for solving the governing stochastic level-set reachability fronts are quantitatively assessed, including comparisons with independent semi-analytical solutions. Energy-optimal missions are studied in wind-driven barotropic quasi-geostrophic double-gyre circulations, and in realistic data-assimilative re-analyses of multiscale coastal ocean flows. The latter re-analyses are obtained from multi-resolution 2-way nested primitive-equation simulations of tidal-to-mesoscale dynamics in the Middle Atlantic Bight and Shelfbreak Front region. The effects of tidal currents, strong wind events, coastal jets, and shelfbreak fronts on the energy-optimal paths are illustrated and quantified. Results showcase the opportunities for longer-duration missions that intelligently utilize the ocean environment to save energy, rigorously integrating ocean forecasting with optimal control of autonomous vehicles.

  12. Extraction optimization of mucilage from Basil (Ocimum basilicum L.) seeds using response surface methodology.

    PubMed

    Nazir, Sadaf; Wani, Idrees Ahmed; Masoodi, Farooq Ahmad

    2017-05-01

    Aqueous extraction of basil seed mucilage was optimized using response surface methodology. A Central Composite Rotatable Design (CCRD) for modeling of three independent variables: temperature (40-91 °C); extraction time (1.6-3.3 h) and water/seed ratio (18:1-77:1) was used to study the response for yield. Experimental values for extraction yield ranged from 7.86 to 20.5 g/100 g. Extraction yield was significantly (P < 0.05) affected by all the variables. Temperature and water/seed ratio were found to have a pronounced effect while the extraction time was found to have minor possible effects. Graphical optimization determined the optimal conditions for the extraction of mucilage. The optimal condition predicted an extraction yield of 20.49 g/100 g at 56.7 °C, 1.6 h, and a water/seed ratio of 66.84:1. Optimal conditions were determined to obtain the highest extraction yield. Results indicated that water/seed ratio was the most significant parameter, followed by temperature and time.

  13. Energy-optimal path planning by stochastic dynamically orthogonal level-set optimization

    NASA Astrophysics Data System (ADS)

    Subramani, Deepak N.; Lermusiaux, Pierre F. J.

    2016-04-01

    A stochastic optimization methodology is formulated for computing energy-optimal paths from among time-optimal paths of autonomous vehicles navigating in a dynamic flow field. Based on partial differential equations, the methodology rigorously leverages the level-set equation that governs time-optimal reachability fronts for a given relative vehicle-speed function. To set up the energy optimization, the relative vehicle-speed and headings are considered to be stochastic and new stochastic Dynamically Orthogonal (DO) level-set equations are derived. Their solution provides the distribution of time-optimal reachability fronts and corresponding distribution of time-optimal paths. An optimization is then performed on the vehicle's energy-time joint distribution to select the energy-optimal paths for each arrival time, among all stochastic time-optimal paths for that arrival time. Numerical schemes to solve the reduced stochastic DO level-set equations are obtained, and accuracy and efficiency considerations are discussed. These reduced equations are first shown to be efficient at solving the governing stochastic level-sets, in part by comparisons with direct Monte Carlo simulations. To validate the methodology and illustrate its accuracy, comparisons with semi-analytical energy-optimal path solutions are then completed. In particular, we consider the energy-optimal crossing of a canonical steady front and set up its semi-analytical solution using an energy-time nested nonlinear double-optimization scheme. We then showcase the inner workings and nuances of the energy-optimal path planning, considering different mission scenarios. Finally, we study and discuss results of energy-optimal missions in a wind-driven barotropic quasi-geostrophic double-gyre ocean circulation.

  14. Muffins Elaborated with Optimized Monoglycerides Oleogels: From Solid Fat Replacer Obtention to Product Quality Evaluation.

    PubMed

    Giacomozzi, Anabella S; Carrín, María E; Palla, Camila A

    2018-06-01

    This study demonstrates the effectiveness of using oleogels from high oleic sunflower oil (HOSO) and monoglycerides as solid fat replacers in a sweet bakery product. Firstly, a methodology to obtain oleogels with desired properties, based on mathematical models able to describe relationships between process variables and product characteristics followed by multi-objective optimization, was applied. Later, muffins were prepared with the optimized oleogels and their physicochemical and textural properties were compared with those of muffins formulated using a commercial margarine (Control) or only HOSO. Furthermore, the amount of oil released from muffins over time (1, 7, and 10 days) was measured to evaluate their stability. The replacement of commercial margarine with the optimized oleogels in the muffin formulation led to products with greater spreadability, higher specific volume, similar hardness values, and a more connected and homogeneous crumb structure. Moreover, these products showed a reduction of oil migration of around 50% in contrast to the Control muffins after 10 days of storage, which indicated that the optimized oleogels can be used satisfactorily to decrease oil loss in this sweet baked product. Fat replacement with the optimized monoglycerides oleogels not only had a positive impact on the quality of the muffins, but also allowed their nutritional profile to be improved (without trans fat and low in saturated fat). The food industry demands new ways to reduce the use of saturated and trans fats in food formulations. To contribute to this search, oleogels from high oleic sunflower oil and saturated monoglycerides were prepared under optimized conditions in order to obtain a product with similar functionality to margarine, and its potential application as a semisolid fat ingredient in muffins was evaluated. Muffins formulated with oleogels showed an improved quality compared with those obtained using a commercial margarine with the added

  15. Optimization Research on Ampacity of Underground High Voltage Cable Based on Interior Point Method

    NASA Astrophysics Data System (ADS)

    Huang, Feng; Li, Jing

    2017-12-01

    The conservative operation method, which takes a unified current-carrying capacity as the maximum load current, can't make full use of the overall power transmission capacity of the cable and is not the optimal operating state for the cable cluster. In order to improve the transmission capacity of underground cables in a cluster, this paper takes the maximum overall load current as the objective function and the requirement that the temperature of every cable remain below the maximum permissible temperature as the constraint condition. The interior point method, which is very effective for nonlinear problems, is put forward to solve this constrained extremum problem and determine the optimal operating current of each loop. The results show that the optimal solution obtained with the proposed method increases the total load current by about 5%, greatly improving the economic performance of the cable cluster.
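
    A toy version of this formulation: maximize the total current of three cable loops subject to each conductor temperature staying below its permissible limit, with a simple quadratic mutual-heating model. scipy's trust-constr solver stands in for the interior point method, and the thermal coupling matrix is an illustrative assumption.

    ```python
    import numpy as np
    from scipy.optimize import minimize, NonlinearConstraint

    T_amb, T_max = 25.0, 90.0
    R = np.array([[0.9, 0.3, 0.1],       # self/mutual heating, °C per kA^2
                  [0.3, 0.9, 0.3],       # (hypothetical coupling matrix)
                  [0.1, 0.3, 0.9]]) * 100.0

    temps = lambda I: T_amb + R @ I**2   # conductor temperatures

    con = NonlinearConstraint(temps, -np.inf, T_max)
    res = minimize(lambda I: -I.sum(),   # maximize total load current
                   x0=np.full(3, 0.3), method="trust-constr",
                   constraints=[con], bounds=[(0, 2)] * 3)
    print(res.x, temps(res.x))           # optimal loop currents (kA), temps
    ```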

  16. Accuracy Evaluation of the Unified P-Value from Combining Correlated P-Values

    PubMed Central

    Alves, Gelio; Yu, Yi-Kuo

    2014-01-01

    Meta-analysis methods that combine p-values into a single unified p-value are frequently employed to improve confidence in hypothesis testing. An assumption made by most meta-analysis methods is that the p-values to be combined are independent, which may not always be true. To investigate the accuracy of the unified p-value from combining correlated p-values, we have evaluated a family of statistical methods that combine: independent, weighted independent, correlated, and weighted correlated p-values. Statistical accuracy evaluation by combining simulated correlated p-values showed that correlation among p-values can have a significant effect on the accuracy of the combined p-value obtained. Among the statistical methods evaluated, those that weight p-values compute more accurate combined p-values than those that do not. Also, statistical methods that utilize the correlation information have the best performance, producing significantly more accurate combined p-values. In our study we have demonstrated that statistical methods that combine p-values based on the assumption of independence can produce inaccurate p-values when combining correlated p-values, even when the p-values are only weakly correlated. Therefore, to prevent drawing false conclusions during hypothesis testing, our study advises caution when interpreting the p-value obtained from combining p-values of unknown correlation. However, when the correlation information is available, the weighting-capable statistical method, first introduced by Brown and recently modified by Hou, seems to perform the best amongst the methods investigated. PMID:24663491
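
    The contrast the paper studies can be illustrated in a few lines: Fisher's method assumes independent p-values, while Brown's method rescales Fisher's statistic using the covariance of the -2 ln p terms. The covariance polynomial below is the Kost-McDermott approximation for correlated test statistics, and the inputs are illustrative.

    ```python
    import numpy as np
    from scipy import stats

    def fisher(ps):                      # assumes independence
        x = -2 * np.log(ps).sum()
        return stats.chi2.sf(x, df=2 * len(ps))

    def brown(ps, corr):                 # rescaled chi-square for correlation
        k = len(ps)
        # Kost-McDermott polynomial approximating cov(-2 ln p_i, -2 ln p_j)
        cov = lambda r: r * (3.263 + 0.710 * r + 0.027 * r**2)
        var = 4.0 * k + 2.0 * sum(cov(corr[i, j])
                                  for i in range(k) for j in range(i + 1, k))
        mean = 2.0 * k
        f, c = 2.0 * mean**2 / var, var / (2.0 * mean)
        x = -2 * np.log(ps).sum()
        return stats.chi2.sf(x / c, df=f)

    ps = np.array([0.01, 0.03, 0.04])
    corr = np.full((3, 3), 0.5); np.fill_diagonal(corr, 1.0)
    print(fisher(ps), brown(ps, corr))   # Brown is more conservative here
    ```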

  17. Value Driven Information Processing and Fusion

    DTIC Science & Technology

    2016-03-01

    The objective of the project is to develop a general framework for value driven decentralized information processing, including: optimal data reduction in a network setting for decentralized inference with quantization constraint; and interactive fusion that allows queries and ... The consensus approach allows a decentralized approach to achieve the optimal error exponent of the centralized counterpart, a conclusion that is signifi...

  18. Optimal ballistically captured Earth-Moon transfers

    NASA Astrophysics Data System (ADS)

    Ricord Griesemer, Paul; Ocampo, Cesar; Cooley, D. S.

    2012-07-01

    The optimality of a low-energy Earth-Moon transfer terminating in ballistic capture is examined for the first time using primer vector theory. An optimal control problem is formed with the following free variables: the location, time, and magnitude of the transfer insertion burn, and the transfer time. A constraint is placed on the initial state of the spacecraft to bind it to a given initial orbit around a first body, and on the final state of the spacecraft to limit its Keplerian energy with respect to a second body. Optimal transfers in the system are shown to meet certain conditions placed on the primer vector and its time derivative. A two point boundary value problem containing these necessary conditions is created for use in targeting optimal transfers. The two point boundary value problem is then applied to the ballistic lunar capture problem, and an optimal trajectory is shown. Additionally, the problem is then modified to fix the time of transfer, allowing for optimal multi-impulse transfers. The tradeoff between transfer time and fuel cost is shown for Earth-Moon ballistic lunar capture transfers.

  19. [Optimization of succinic acid fermentation with Actinobacillus succinogenes by response surface methodology].

    PubMed

    Shen, Naikun; Qin, Yan; Wang, Qingyan; Xie, Nengzhong; Mi, Huizhi; Zhu, Qixia; Liao, Siming; Huang, Ribo

    2013-10-01

    Succinic acid is an important C4 platform chemical in the synthesis of many commodity and specialty chemicals. In the present work, different compounds were evaluated for succinic acid production by Actinobacillus succinogenes GXAS 137. Important parameters were screened by single-factor experiments and a Plackett-Burman design. Subsequently, the region of highest succinic acid production was approached by the path of steepest ascent. The optimum values of the parameters were then obtained by a Box-Behnken design. The results show that the important parameters were glucose, yeast extract and MgCO3 concentrations. The optimum condition was as follows (g/L): glucose 70.00, yeast extract 9.20 and MgCO3 58.10. Succinic acid yield reached 47.64 g/L at the optimal condition, an increase of 29.14% over that before optimization (36.89 g/L). Response surface methodology was proven to be a powerful tool to optimize succinic acid production.

  20. Is optimal paddle force applied during paediatric external defibrillation?

    PubMed

    Bennetts, Sarah H; Deakin, Charles D; Petley, Graham W; Clewlow, Frank

    2004-01-01

    Optimal paddle force minimises transthoracic impedance, a factor associated with increased defibrillation success. Optimal force for the defibrillation of children ≤10 kg using paediatric paddles has previously been shown to be 2.9 kgf, and for children >10 kg using adult paddles to be 5.1 kgf. We compared defibrillation paddle force applied during simulated paediatric defibrillation with these optimal values. 72 medical and nursing staff who would be expected to perform paediatric defibrillation were recruited from a University teaching hospital. Participants, blinded to the nature of the study, were asked to simulate defibrillation of an infant manikin (9 months of age) and a child manikin (6 years of age) using paediatric or adult paddles, respectively, according to guidelines. Paddle force (kgf) was measured at the time of simulated shock and compared with known optimal values. Median paddle force applied to the infant manikin was 2.8 kgf (max 9.6, min 0.6), with only 47% of operators attaining optimal force. Median paddle force applied to the child manikin was 3.8 kgf (max 10.2, min 1.0), with only 24% of operators attaining optimal force. Defibrillation paddle force applied during paediatric defibrillation often falls below optimal values.

  1. Optimization design of LED heat dissipation structure based on strip fins

    NASA Astrophysics Data System (ADS)

    Xue, Lingyun; Wan, Wenbin; Chen, Qingguang; Rao, Huanle; Xu, Ping

    2018-03-01

    To solve the heat dissipation problem of LEDs, a radiator structure based on strip fins is designed and a method to optimize the structure parameters of the strip fins is proposed in this paper. The combination of RBF neural networks and the particle swarm optimization (PSO) algorithm is used for modeling and optimization, respectively. During the experiment, 150 datasets of LED junction temperature for different values of the structure parameters (number of strip fins and length, width and height of the fins) were obtained with ANSYS software. An RBF neural network is then applied to build the non-linear regression model, and the structure parameters are optimized on this model with the particle swarm optimization algorithm. The experimental results show that the lowest LED junction temperature reaches 43.88 °C when the number of hidden layer nodes in the RBF neural network is 10, the two learning factors in the particle swarm optimization algorithm are both 0.5, the inertia factor is 1 and the maximum number of iterations is 100; the corresponding design has 64 fins in an 8 × 8 layout, with fin length, width and height of 4.3 mm, 4.48 mm and 55.3 mm, respectively. To check the modeling and optimization results, the LED junction temperature at the optimized structure parameters was simulated, giving 43.592 °C, which approximately equals the optimized result. Compared with an ordinary plate-fin radiator structure, whose temperature is 56.38 °C, the proposed structure greatly enhances heat dissipation performance.
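
    A minimal sketch of the surrogate-plus-search loop: fit an RBF model to junction temperatures sampled over normalized fin-geometry parameters, then search the surrogate for the coolest design. scipy's RBFInterpolator stands in for the RBF network and random search stands in for PSO; the training data are synthetic placeholders, not ANSYS results.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(0)
    # design variables: fin length, width, height (normalized to [0, 1])
    X = rng.random((150, 3))
    temp = 44 + 8*(X[:, 0]-0.6)**2 + 6*(X[:, 1]-0.5)**2 + 10*(X[:, 2]-0.9)**2 \
             + 0.2 * rng.standard_normal(150)   # synthetic "simulated" temps

    model = RBFInterpolator(X, temp, smoothing=1e-3)   # RBF surrogate

    cand = rng.random((20000, 3))               # random search over the box
    pred = model(cand)
    print(cand[pred.argmin()], pred.min())      # best geometry, predicted temp
    ```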

  2. Linear antenna array optimization using flower pollination algorithm.

    PubMed

    Saxena, Prerna; Kothari, Ashwin

    2016-01-01

    Flower pollination algorithm (FPA) is a new nature-inspired evolutionary algorithm used to solve multi-objective optimization problems. The aim of this paper is to introduce FPA to the electromagnetics and antenna community for the optimization of linear antenna arrays. FPA is applied for the first time to linear arrays to obtain optimized antenna positions that achieve an array pattern with minimum side lobe level along with placement of deep nulls in desired directions. Various design examples are presented that illustrate the use of FPA for linear antenna array optimization, and subsequently the results are validated by benchmarking against results obtained using other state-of-the-art, nature-inspired evolutionary algorithms such as particle swarm optimization, ant colony optimization and cat swarm optimization. The results suggest that in most cases FPA outperforms the other evolutionary algorithms, and at times it yields a similar performance.

  3. Optimization of processing parameters of amaranth grits before grinding into flour

    NASA Astrophysics Data System (ADS)

    Zharkova, I. M.; Safonova, Yu A.; Slepokurova, Yu I.

    2018-05-01

    Results are presented of experimental studies on the influence of infrared treatment (IR processing) parameters of amaranth grits, before their grinding into flour, on the composition and properties of the resulting product. Using regression-factor analysis, the optimal conditions for thermal processing of the amaranth grits were obtained: belt speed of the conveyor, 0.049 m/s; temperature of amaranth grits in the tempering silo, 65.4 °C; thickness of the layer of amaranth grits on the belt, 3-5 mm; and lamp power, 69.2 kW/m2. The research confirmed that thermal treatment of the amaranth grains in the IR setting yields flour with a smaller starch grain size, increased water-holding ability, and a changed glycemic index. Mathematical processing of the experimental data established the dependence of the structural and technological characteristics of the amaranth flour on the IR processing parameters of the grits. The calculated results agree well with the experimental ones, which proves the effectiveness of optimization based on mathematical planning of the experiment for determining the influence of the optimal heat treatment parameters of the amaranth grits on the functional and technological properties of the resulting flour.

  4. A Novel Consensus-Based Particle Swarm Optimization-Assisted Trust-Tech Methodology for Large-Scale Global Optimization.

    PubMed

    Zhang, Yong-Feng; Chiang, Hsiao-Dong

    2017-09-01

    A novel three-stage methodology, termed the "consensus-based particle swarm optimization (PSO)-assisted Trust-Tech methodology," to find global optimal solutions for nonlinear optimization problems is presented. It is composed of Trust-Tech methods, consensus-based PSO, and local optimization methods that are integrated to compute a set of high-quality local optimal solutions that can contain the global optimal solution. The proposed methodology compares very favorably with several recently developed PSO algorithms based on a set of small-dimension benchmark optimization problems and 20 large-dimension test functions from the CEC 2010 competition. The analytical basis for the proposed methodology is also provided. Experimental results demonstrate that the proposed methodology can rapidly obtain high-quality optimal solutions that can contain the global optimal solution. The scalability of the proposed methodology is promising.

  5. Energy accounting and optimization for mobile systems

    NASA Astrophysics Data System (ADS)

    Dong, Mian

    Energy accounting determines how much a software process contributes to the total system energy consumption. It is the foundation for evaluating software and has been widely used by operating system based energy management. While various energy accounting policies have been tried, there is no known way to evaluate them directly, simply because it is hard to track every hardware use by software in a heterogeneous multi-core system like modern smartphones and tablets. In this thesis, we provide the ground truth for energy accounting based on multi-player game theory and offer the first evaluation of existing energy accounting policies, revealing their important flaws. The proposed ground truth is based on the Shapley value, a single-value solution to multi-player games whose four axiomatic properties are natural and self-evident for energy accounting. To obtain the Shapley value-based ground truth, one only needs to know whether a process is active during the time under question and the system energy consumption during the same time. We further provide a utility optimization formulation of energy management and show, surprisingly, that energy accounting does not matter for existing energy management solutions that control the energy use of a process by giving it an energy budget, or budget based energy management (BEM). We show an optimal energy management (OEM) framework can always outperform BEM. While OEM does not require any form of energy accounting, it is related to the Shapley value in that both require the system energy consumption for all possible combinations of the processes under question. We provide a novel system solution that meets this requirement by acquiring system energy consumption in situ for an OS scheduler period, i.e., 10 ms. We report a prototype implementation of both Shapley value-based energy accounting and OEM based scheduling. Using this prototype and smartphone workloads, we experimentally demonstrate how erroneous existing energy accounting policies can
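
    The Shapley value ground truth is easy to state in code: given a measured system energy E(S) for every subset S of concurrently active processes, each process is charged its average marginal contribution over all join orders. The subset energies below are hypothetical numbers used only to illustrate the computation.

    ```python
    from itertools import permutations

    procs = ["gps", "radio", "app"]
    E = {frozenset(): 0.0,                      # measured energy (J) per subset
         frozenset({"gps"}): 3.0, frozenset({"radio"}): 2.0,
         frozenset({"app"}): 1.0,
         frozenset({"gps", "radio"}): 4.5,
         frozenset({"gps", "app"}): 3.8, frozenset({"radio", "app"}): 2.7,
         frozenset({"gps", "radio", "app"}): 5.2}

    shapley = dict.fromkeys(procs, 0.0)
    orders = list(permutations(procs))
    for order in orders:                        # average marginal contribution
        seen = set()
        for p in order:
            marginal = E[frozenset(seen | {p})] - E[frozenset(seen)]
            shapley[p] += marginal / len(orders)
            seen.add(p)

    print(shapley)                              # charges sum to E(all processes)
    assert abs(sum(shapley.values()) - 5.2) < 1e-9
    ```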

  6. Optimal Low Energy Earth-Moon Transfers

    NASA Technical Reports Server (NTRS)

    Griesemer, Paul Ricord; Ocampo, Cesar; Cooley, D. S.

    2010-01-01

    The optimality of a low-energy Earth-Moon transfer is examined for the first time using primer vector theory. An optimal control problem is formed with the following free variables: the location, time, and magnitude of the transfer insertion burn, and the transfer time. A constraint is placed on the initial state of the spacecraft to bind it to a given initial orbit around a first body, and on the final state of the spacecraft to limit its Keplerian energy with respect to a second body. Optimal transfers in the system are shown to meet certain conditions placed on the primer vector and its time derivative. A two point boundary value problem containing these necessary conditions is created for use in targeting optimal transfers. The two point boundary value problem is then applied to the ballistic lunar capture problem, and an optimal trajectory is shown. Additionally, the ballistic lunar capture trajectory is examined to determine whether one or more additional impulses may improve on the cost of the transfer.

  7. Quantum approximate optimization algorithm for MaxCut: A fermionic view

    NASA Astrophysics Data System (ADS)

    Wang, Zhihui; Hadfield, Stuart; Jiang, Zhang; Rieffel, Eleanor G.

    2018-02-01

    Farhi et al. recently proposed a class of quantum algorithms, the quantum approximate optimization algorithm (QAOA), for approximately solving combinatorial optimization problems (E. Farhi et al., arXiv:1411.4028; arXiv:1412.6062; arXiv:1602.07674). A level-p QAOA circuit consists of p steps; in each step a classical Hamiltonian, derived from the cost function, is applied followed by a mixing Hamiltonian. The 2p times for which these two Hamiltonians are applied are the parameters of the algorithm, which are to be optimized classically for the best performance. As p increases, parameter optimization becomes inefficient due to the curse of dimensionality. The success of the QAOA approach will depend, in part, on finding effective parameter-setting strategies. Here we analytically and numerically study parameter setting for the QAOA applied to MaxCut. For the level-1 QAOA, we derive an analytical expression for a general graph. In principle, expressions for higher p could be derived, but the number of terms quickly becomes prohibitive. For a special case of MaxCut, the "ring of disagrees," or the one-dimensional antiferromagnetic ring, we provide an analysis for an arbitrarily high level. Using a fermionic representation, the evolution of the system under the QAOA translates into quantum control of an ensemble of independent spins. This treatment enables us to obtain analytical expressions for the performance of the QAOA for any p. It also greatly simplifies the numerical search for the optimal values of the parameters. By exploring symmetries, we identify a lower-dimensional submanifold of interest; the search effort can be accordingly reduced. This analysis also explains an observed symmetry in the optimal parameter values. Further, we numerically investigate the parameter landscape and show that it is a simple one in the sense of having no local optima.
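
    The parameter-setting problem for level-1 QAOA on MaxCut can be reproduced numerically by brute force on a small instance: build the diagonal cost operator, apply the two QAOA unitaries to the uniform superposition, and grid-search the two angles. The 4-node ring and grid resolution are illustrative choices; no claim is made of matching the paper's analytical expressions.

    ```python
    import numpy as np

    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # 4-node "ring of disagrees"
    n = 4
    # Z eigenvalues (+1/-1) of each qubit for every computational basis state
    z = 1 - 2 * ((np.arange(2**n)[:, None] >> np.arange(n)) & 1)
    cost = sum((1 - z[:, i] * z[:, j]) / 2 for i, j in edges)  # diagonal C

    X = np.array([[0, 1], [1, 0]])

    def qaoa_expectation(gamma, beta):
        psi = np.full(2**n, 2**(-n / 2), dtype=complex)   # |+...+>
        psi = np.exp(-1j * gamma * cost) * psi            # e^{-i gamma C}
        U = np.cos(beta) * np.eye(2) - 1j * np.sin(beta) * X  # e^{-i beta X}
        full = np.array([1.0])
        for _ in range(n):                                # same U on each qubit
            full = np.kron(full, U)
        psi = full @ psi
        return float(np.real(psi.conj() @ (cost * psi)))

    gs, bs = np.linspace(0, np.pi, 60), np.linspace(0, np.pi / 2, 60)
    vals = [[qaoa_expectation(g, b) for b in bs] for g in gs]
    i, j = np.unravel_index(np.argmax(vals), (60, 60))
    print(gs[i], bs[j], vals[i][j])   # best (gamma, beta) and expected cut
    ```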

  8. Optimization of a pH-shift control strategy for producing monoclonal antibodies in Chinese hamster ovary cell cultures using a pH-dependent dynamic model.

    PubMed

    Hogiri, Tomoharu; Tamashima, Hiroshi; Nishizawa, Akitoshi; Okamoto, Masahiro

    2018-02-01

    To optimize monoclonal antibody (mAb) production in Chinese hamster ovary cell cultures, culture pH should be temporally controlled with high resolution. In this study, we propose a new pH-dependent dynamic model represented by simultaneous differential equations with a minimum of six system components, whose kinetics depend on pH value. All kinetic parameters in the dynamic model were estimated using an evolutionary numerical optimization (real-coded genetic algorithm) method based on experimental time-course data obtained at different pH values ranging from 6.6 to 7.2. We determined an optimal pH-shift schedule theoretically and validated it experimentally: mAb production increased by approximately 40% with this schedule. These results suggest that the culture pH-shift optimization strategy using a pH-dependent dynamic model is suitable for optimizing pH-shift schedules for CHO cell lines used in mAb production projects. Copyright © 2017 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
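
    A toy version of such a pH-dependent dynamic model, reduced to two state variables: growth and specific production rates are smooth functions of pH, a step schedule shifts the set point at time t_shift, and the shift time is chosen to maximize the final titer. All rate laws and constants are illustrative assumptions; the paper's model tracks six components fitted to time-course data.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def mu(pH):  return 0.04 * np.exp(-((pH - 7.1) / 0.25) ** 2)   # growth, 1/h
    def qp(pH):  return 1e-12 * np.exp(-((pH - 6.8) / 0.30) ** 2)  # g/cell/h

    def rhs(t, y, schedule):
        X, P = y                       # viable cells (cells/mL), product (g/mL)
        pH = schedule(t)
        return [mu(pH) * X * (1 - X / 2e7), qp(pH) * X]

    def titer(t_shift):                # final titer for a given pH-shift time
        sched = lambda t: 7.1 if t < t_shift else 6.8
        sol = solve_ivp(rhs, (0, 240), [2e5, 0.0], args=(sched,), max_step=1.0)
        return sol.y[1, -1]

    shifts = np.linspace(12, 200, 20)
    best = shifts[np.argmax([titer(s) for s in shifts])]
    print(best)                        # shift time (h) maximizing final titer
    ```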

  9. Chickpea seeds germination rational parameters optimization

    NASA Astrophysics Data System (ADS)

    Safonova, Yu A.; Ivliev, M. N.; Lemeshkin, A. V.

    2018-05-01

    The paper presents experimental results on the influence of chickpea seed bioactivation parameters on enzymatic activity. Optimal bioactivation process modes were obtained by regression-factor analysis: process temperature, 13.6 °C; process duration, 71.5 h. It was found that during germination the proteolytic, amylolytic and lipolytic enzyme activities increased, while the urease activity decreased. The dependence of enzyme activity on chickpea seed germination conditions was obtained by mathematical processing of the experimental data. The calculated data are in good agreement with the experimental ones, which confirms the efficiency of optimization based on mathematical planning of experiments for determining the optimal germination parameters of bioactivated chickpea seeds.

  10. Optimizing Nutrient Uptake in Biological Transport Networks

    NASA Astrophysics Data System (ADS)

    Ronellenfitsch, Henrik; Katifori, Eleni

    2013-03-01

    Many biological systems employ complex networks of vascular tubes to facilitate the transport of solute nutrients; examples include the vascular system of plants (phloem), some fungi, and the slime mold Physarum. It is believed that such networks are optimized through evolution for carrying out their designated task. We propose a set of hydrodynamic governing equations for solute transport in a complex network and obtain the optimal network architecture for various classes of optimizing functionals. We finally discuss the topological properties and statistical mechanics of the resulting complex networks, and examine the correspondence of the obtained networks to those found in actual biological systems.
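
    The optimization loop behind results of this kind can be made concrete. The sketch below assumes a Bohn-Magnasco-style dissipation functional, not the paper's own functionals: on a small grid network it alternates Kirchhoff solves with the optimality rescaling C_e proportional to |Q_e|^(2/(1+gamma)) at fixed material cost, and for gamma < 1 the surviving edges form a tree-like backbone.

        import numpy as np

        m = 5                                    # m x m grid of nodes
        nodes = [(i, j) for i in range(m) for j in range(m)]
        idx = {v: k for k, v in enumerate(nodes)}
        edges = [(idx[(i, j)], idx[(i + di, j + dj)])
                 for (i, j) in nodes for di, dj in ((1, 0), (0, 1))
                 if i + di < m and j + dj < m]

        n, E = len(nodes), len(edges)
        q = np.full(n, -1.0 / (n - 1)); q[idx[(0, 0)]] = 1.0   # source, sinks
        gamma, K = 0.5, 1.0
        C = np.ones(E)

        for _ in range(200):
            # Kirchhoff: Laplacian * potentials = net injected currents.
            L = np.zeros((n, n))
            for e, (u, v) in enumerate(edges):
                L[u, u] += C[e]; L[v, v] += C[e]
                L[u, v] -= C[e]; L[v, u] -= C[e]
            p = np.linalg.lstsq(L, q, rcond=None)[0]
            Q = np.array([C[e] * (p[u] - p[v]) for e, (u, v) in enumerate(edges)])
            # Lagrange optimality: C_e ~ |Q_e|^(2/(1+gamma)), renormalized so
            # the material cost sum(C^gamma) stays fixed at K.
            C = np.abs(Q) ** (2.0 / (1.0 + gamma)) + 1e-12
            C *= (K / np.sum(C ** gamma)) ** (1.0 / gamma)

        print("edges surviving (C > 1e-3):", int(np.sum(C > 1e-3)))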

  11. Hard decoding algorithm for optimizing thresholds under general Markovian noise

    NASA Astrophysics Data System (ADS)

    Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond

    2017-04-01

    Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.
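
    One ingredient of the abstract, Pauli twirling, is compact enough to demonstrate numerically. Twirling a single-qubit channel with Kraus operators {K_k} over the Pauli group yields a Pauli channel with p_P = sum_k |tr(P K_k)/2|^2; the example below (illustrative, not the authors' code) twirls amplitude damping, a distinctly non-Pauli noise process.

        import numpy as np

        g = 0.1                                           # damping parameter
        K = [np.array([[1, 0], [0, np.sqrt(1 - g)]]),     # Kraus operators
             np.array([[0, np.sqrt(g)], [0, 0]])]

        paulis = {"I": np.eye(2),
                  "X": np.array([[0, 1], [1, 0]]),
                  "Y": np.array([[0, -1j], [1j, 0]]),
                  "Z": np.array([[1, 0], [0, -1]])}

        probs = {name: sum(abs(np.trace(P.conj().T @ Kk) / 2) ** 2 for Kk in K)
                 for name, P in paulis.items()}
        print(probs)                 # p_I ~ 0.9493, p_X = p_Y = g/4 = 0.025,
                                     # p_Z ~ 0.0007
        print(sum(probs.values()))   # = 1, a valid Pauli channel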

  12. Optimization of microwave-assisted extraction (MAE) of coriander phenolic antioxidants - response surface methodology approach.

    PubMed

    Zeković, Zoran; Vladić, Jelena; Vidović, Senka; Adamović, Dušan; Pavlić, Branimir

    2016-10-01

    Microwave-assisted extraction (MAE) of polyphenols from coriander seeds was optimized by simultaneous maximization of total phenolic (TP) and total flavonoid (TF) yields, as well as maximized antioxidant activity determined by 1,1-diphenyl-2-picrylhydrazyl and reducing power assays. A Box-Behnken experimental design with response surface methodology (RSM) was used for optimization of MAE. Extraction time (X1, 15-35 min), ethanol concentration (X2, 50-90% w/w) and irradiation power (X3, 400-800 W) were investigated as independent variables. Experimentally obtained values of the investigated responses were fitted to a second-order polynomial model, and multiple regression analysis and analysis of variance were used to determine the fitness of the model and the optimal conditions. The optimal MAE conditions for simultaneous maximization of polyphenol yield and increased antioxidant activity were an extraction time of 19 min, an ethanol concentration of 63% and an irradiation power of 570 W, while predicted values of TP, TF, IC50 and EC50 at the optimal MAE conditions were 311.23 mg gallic acid equivalent per 100 g dry weight (DW), 213.66 mg catechin equivalent per 100 g DW, 0.0315 mg mL⁻¹ and 0.1311 mg mL⁻¹, respectively. RSM was successfully used for multi-response optimization of coriander seed polyphenols. Comparison of optimized MAE with conventional extraction techniques confirmed that MAE provides significantly higher polyphenol yields and extracts with increased antioxidant activity. © 2016 Society of Chemical Industry.
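
    The fitting-and-optimization core of this RSM workflow is compact enough to sketch. The toy below uses synthetic yields and a hypothetical optimum location, not the paper's data: it fits a full second-order polynomial in three coded factors by least squares on a Box-Behnken-style design, then locates the predicted optimum on a grid.

        import numpy as np
        from itertools import product

        def design_matrix(X):            # columns: 1, x_i, x_i^2, x_i * x_j
            x1, x2, x3 = X.T
            return np.column_stack([np.ones(len(X)), x1, x2, x3,
                                    x1**2, x2**2, x3**2,
                                    x1*x2, x1*x3, x2*x3])

        # Box-Behnken runs in coded units [-1, 1]^3 plus three center points.
        runs = np.array([(a, b, 0) for a in (-1, 1) for b in (-1, 1)] +
                        [(a, 0, b) for a in (-1, 1) for b in (-1, 1)] +
                        [(0, a, b) for a in (-1, 1) for b in (-1, 1)] +
                        [(0, 0, 0)] * 3, dtype=float)

        true_optimum = np.array([-0.6, -0.35, -0.15])    # hypothetical
        y = 300 - 40 * np.sum((runs - true_optimum) ** 2, axis=1) \
            + np.random.default_rng(1).normal(0, 2, len(runs))  # synthetic TP

        beta, *_ = np.linalg.lstsq(design_matrix(runs), y, rcond=None)

        grid = np.array(list(product(*[np.linspace(-1, 1, 41)] * 3)))
        pred = design_matrix(grid) @ beta
        print("coded optimum:", grid[np.argmax(pred)], "predicted:", pred.max())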

  13. Hybrid optimization and Bayesian inference techniques for a non-smooth radiation detection problem

    DOE PAGES

    Stefanescu, Razvan; Schmidt, Kathleen; Hite, Jason; ...

    2016-12-12

    In this paper, we propose several algorithms to recover the location and intensity of a radiation source located in a simulated 250 × 180 m block of an urban center based on synthetic measurements. Radioactive decay and detection are Poisson random processes, so we employ likelihood functions based on this distribution. Owing to the domain geometry and the proposed response model, the negative logarithm of the likelihood is only piecewise continuously differentiable, and it has multiple local minima. To address these difficulties, we investigate three hybrid algorithms composed of mixed optimization techniques. For global optimization, we consider simulated annealing, particle swarm, and genetic algorithms, which rely solely on objective function evaluations; that is, they do not evaluate the gradient of the objective function. By employing early stopping criteria for the global optimization methods, a pseudo-optimum point is obtained. This is subsequently utilized as the initial value by the deterministic implicit filtering method, which is able to find local extrema in non-smooth functions, to finish the search in a narrow domain. These new hybrid techniques, combining global optimization and implicit filtering, address difficulties associated with the non-smooth response, and their performance is shown to significantly decrease the computational time compared with the global optimization methods alone. To quantify uncertainties associated with the source location and intensity, we employ the delayed rejection adaptive Metropolis and DiffeRential Evolution Adaptive Metropolis algorithms. Finally, marginal densities of the source properties are obtained, and the means of the chains compare accurately with the estimates produced by the hybrid algorithms.
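
    The two-stage structure is easy to prototype. In the sketch below (an illustrative surrogate objective, not the paper's likelihood), SciPy's dual_annealing with a small iteration budget plays the role of the early-stopped global search, and Nelder-Mead stands in for implicit filtering, which SciPy does not provide.

        import numpy as np
        from scipy.optimize import dual_annealing, minimize

        def neg_log_likelihood(theta):
            # Stand-in for the Poisson source-localization objective: a
            # non-smooth, multimodal surrogate in (x, y, intensity).
            x, y, s = theta
            return (np.abs(x - 120) + np.abs(y - 80)        # piecewise-linear
                    + 5 * np.sin(0.3 * x) ** 2              # local minima
                    + (s - 3.0) ** 2)

        bounds = [(0, 250), (0, 180), (0.1, 10)]
        coarse = dual_annealing(neg_log_likelihood, bounds,
                                maxiter=50, seed=0)         # early stopping
        refined = minimize(neg_log_likelihood, coarse.x, method="Nelder-Mead",
                           options={"xatol": 1e-6, "fatol": 1e-8})
        print(coarse.x, "->", refined.x)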

  14. General shape optimization capability

    NASA Technical Reports Server (NTRS)

    Chargin, Mladen K.; Raasch, Ingo; Bruns, Rudolf; Deuermeyer, Dawson

    1991-01-01

    A method is described for calculating shape sensitivities, within MSC/NASTRAN, in a simple manner without resort to external programs. The method uses natural design variables to define the shape changes in a given structure. Once the shape sensitivities are obtained, the shape optimization process is carried out in a manner similar to property optimization processes. The capability of this method is illustrated by two examples: the shape optimization of a cantilever beam with holes, loaded by a point load at the free end (with the shape of the holes and the thickness of the beam selected as the design variables), and the shape optimization of a connecting rod subjected to several different loading and boundary conditions.

  15. Large-scale structural optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1983-01-01

    Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single, rather than trade-off, design methodology and the incompatibility of large-scale optimization with single program, single computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop. Full analysis is then performed only periodically. Problem-dependent software can be removed from the generic code using a systems programming technique; the problem-dependent code then embodies the definitions of design variables, objective function and design constraints. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.

  16. Optimal Experimental Design for Model Discrimination

    PubMed Central

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it possible to determine these values, and thereby identify an optimal experimental design. After describing the method, it is demonstrated in two content areas in cognitive psychology in which models are highly competitive: retention (i.e., forgetting) and categorization. The optimal design is compared with the quality of designs used in the literature. The findings demonstrate that design optimization has the potential to increase the informativeness of the experimental method. PMID:19618983
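
    A toy version of the idea (not the authors' sampling-based algorithm): pick the retention-test delays that maximize the expected disagreement between two forgetting models, averaging over parameters drawn from simple assumed priors.

        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(0)
        POW = lambda t, a, b: a * (t + 1.0) ** (-b)      # power-law retention
        EXP = lambda t, a, b: a * np.exp(-b * t)         # exponential retention

        a_s = rng.uniform(0.7, 1.0, 300)                 # prior samples
        b_s = rng.uniform(0.05, 0.5, 300)

        candidates = np.arange(1, 41)                    # possible test delays
        def utility(design):
            t = np.array(design)[None, :]
            gap = (POW(t, a_s[:, None], b_s[:, None])
                   - EXP(t, a_s[:, None], b_s[:, None]))
            return np.mean(np.sum(gap ** 2, axis=1))     # expected discrimination

        best = max(combinations(candidates, 3), key=utility)
        print("most informative delays:", best)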

  17. An optimized time varying filtering based empirical mode decomposition method with grey wolf optimizer for machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei

    2018-03-01

    A time varying filtering based empirical mode decomposition (TVF-EMD) method was proposed recently to solve the mode mixing problem of the EMD method. Compared with classical EMD, TVF-EMD was proven to improve the frequency separation performance and to be robust to noise interference. However, the decomposition parameters (i.e., bandwidth threshold and B-spline order) significantly affect the decomposition results of this method. In the original TVF-EMD method, the parameter values are assigned in advance, which makes it difficult to achieve satisfactory analysis results. To solve this problem, this paper develops an optimized TVF-EMD method based on the grey wolf optimizer (GWO) algorithm for fault diagnosis of rotating machinery. Firstly, a measurement index termed the weighted kurtosis index is constructed from the kurtosis index and the correlation coefficient. Subsequently, the optimal TVF-EMD parameters that match the input signal are obtained by the GWO algorithm using the maximum weighted kurtosis index as the objective function. Finally, fault features are extracted by analyzing the sensitive intrinsic mode function (IMF) with the maximum weighted kurtosis index. Simulations and comparisons highlight the performance of the TVF-EMD method for signal decomposition and verify that bandwidth threshold and B-spline order are critical to the decomposition results. Two case studies on rotating machinery fault diagnosis demonstrate the effectiveness and advantages of the proposed method.
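
    GWO itself is simple enough to sketch in full. Below, a minimal implementation optimizes a two-parameter toy objective standing in for the (negated) weighted kurtosis index; the TVF-EMD decomposition step is omitted, and the objective is an assumption for illustration only.

        import numpy as np

        def gwo(f, bounds, n_wolves=20, n_iter=100, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            X = rng.uniform(lo, hi, (n_wolves, len(bounds)))
            for t in range(n_iter):
                fitness = np.array([f(x) for x in X])
                alpha, beta, delta = X[np.argsort(fitness)[:3]]  # leaders
                a = 2 - 2 * t / n_iter                 # a decreases 2 -> 0
                for i in range(n_wolves):
                    new = np.zeros(len(bounds))
                    for leader in (alpha, beta, delta):
                        r1 = rng.random(len(bounds))
                        r2 = rng.random(len(bounds))
                        A, C = 2 * a * r1 - a, 2 * r2
                        D = np.abs(C * leader - X[i])
                        new += (leader - A * D) / 3.0  # mean of three pulls
                    X[i] = np.clip(new, lo, hi)
            fitness = np.array([f(x) for x in X])
            return X[np.argmin(fitness)]

        # Toy stand-in: minimize the negative of a peaked "index" over
        # (bandwidth threshold, B-spline order); optimum near (0.25, 26).
        obj = lambda p: -np.exp(-((p[0] - 0.25) ** 2 / 0.01
                                  + (p[1] - 26) ** 2 / 100))
        print(gwo(obj, [(0.05, 0.5), (10, 40)]))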

  18. Media milling process optimization for manufacture of drug nanoparticles using design of experiments (DOE).

    PubMed

    Nekkanti, Vijaykumar; Marwah, Ashwani; Pillai, Raviraj

    2015-01-01

    Design of experiments (DOE), a component of Quality by Design (QbD), is the systematic and simultaneous evaluation of process variables to develop a product with predetermined quality attributes. This article presents a case study to understand the effects of process variables in a bead milling process used for the manufacture of drug nanoparticles. Experiments were designed and results were computed according to a 3-factor, 3-level face-centered central composite design (CCD). The factors investigated were motor speed, pump speed and bead volume. Responses analyzed for evaluating these effects and interactions were milling time, particle size and process yield. Process validation batches were executed using the optimum process conditions obtained from the Design-Expert® software to evaluate both the repeatability and reproducibility of the bead milling technique. Milling time was optimized to <5 h to obtain the desired particle size (d90 < 400 nm). A desirability function was used to optimize the response variables, and the predicted responses were in agreement with the experimental values. These results demonstrated the reliability of the selected model for the manufacture of drug nanoparticles with predictable quality attributes. The optimization of bead milling process variables by applying DOE resulted in a considerable decrease in milling time to achieve the desired particle size. The study indicates the applicability of the DOE approach to optimize critical process parameters in the manufacture of drug nanoparticles.
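
    The desirability step mentioned in the abstract can be sketched as a Derringer-Suich-style combination; the targets and acceptable ranges below are illustrative assumptions, not the study's values. Each response is mapped to [0, 1] and the mappings are combined geometrically.

        import numpy as np

        def d_smaller_is_better(y, y_best, y_worst):    # e.g., milling time
            return np.clip((y_worst - y) / (y_worst - y_best), 0, 1)

        def d_larger_is_better(y, y_worst, y_best):     # e.g., process yield
            return np.clip((y - y_worst) / (y_best - y_worst), 0, 1)

        def overall(milling_h, d90_nm, yield_pct):
            d = [d_smaller_is_better(milling_h, 2, 10),
                 d_smaller_is_better(d90_nm, 200, 400),
                 d_larger_is_better(yield_pct, 70, 100)]
            return np.prod(d) ** (1 / len(d))           # geometric mean

        # Candidate operating points (hypothetical model predictions):
        for point in [(4.5, 380, 92), (6.0, 350, 95), (3.0, 420, 88)]:
            print(point, "D =", round(overall(*point), 3))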

  19. Sensor Location Problem Optimization for Traffic Network with Different Spatial Distributions of Traffic Information.

    PubMed

    Bao, Xu; Li, Haijian; Qin, Lingqiao; Xu, Dongwei; Ran, Bin; Rong, Jian

    2016-10-27

    To obtain adequate traffic information, the density of traffic sensors should be sufficiently high to cover the entire transportation network. However, deploying sensors densely over the entire network may not be realistic for practical applications due to the budgetary constraints of traffic management agencies. This paper describes several possible spatial distributions of traffic information credibility and proposes corresponding sensor information credibility functions to describe these spatial distribution properties. A maximum benefit model and its simplified model are proposed to solve the traffic sensor location problem. The relationships between the benefit and the number of sensors are formulated with the different sensor information credibility functions. Next, the models and algorithms are extended to obtain analytic results. For each case, the maximum benefit and the optimal number and spacing of sensors are obtained, and the analytic formulations of the optimal sensor locations are derived as well. Finally, a numerical example is presented to verify the validity and applicability of the proposed models for solving a network sensor location problem. The results show that the optimal number of sensors for segments with different model parameters in an entire freeway network can be calculated. It can also be concluded that the optimal sensor spacing is independent of end restrictions but dependent on the values of the model parameters that represent the physical conditions of sensors and roads.
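
    To make the benefit-versus-sensor-count trade-off concrete, here is a loudly hypothetical toy model (the paper's credibility functions and benefit model are not reproduced): credibility decays exponentially with distance to the nearest sensor, sensors are evenly spaced along a segment, and the sensor count maximizing benefit minus deployment cost is selected.

        import numpy as np

        L, lam, unit_cost = 10_000.0, 1 / 500.0, 180.0   # m, 1/m, benefit units

        def benefit(n):
            # n evenly spaced sensors; integrate the best credibility
            # available at each point along the segment.
            xs = (np.arange(n) + 0.5) * L / n
            grid = np.linspace(0, L, 4001)
            cred = np.exp(-lam * np.abs(grid[:, None] - xs[None, :])).max(axis=1)
            return np.trapz(cred, grid)

        net = [(benefit(n) - unit_cost * n, n) for n in range(1, 40)]
        best_net, best_n = max(net)
        print("optimal sensors:", best_n, " spacing:", L / best_n, "m")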

  20. Sensor Location Problem Optimization for Traffic Network with Different Spatial Distributions of Traffic Information

    PubMed Central

    Bao, Xu; Li, Haijian; Qin, Lingqiao; Xu, Dongwei; Ran, Bin; Rong, Jian

    2016-01-01

    To obtain adequate traffic information, the density of traffic sensors should be sufficiently high to cover the entire transportation network. However, deploying sensors densely over the entire network may not be realistic for practical applications due to the budgetary constraints of traffic management agencies. This paper describes several possible spatial distributions of traffic information credibility and proposes corresponding sensor information credibility functions to describe these spatial distribution properties. A maximum benefit model and its simplified model are proposed to solve the traffic sensor location problem. The relationships between the benefit and the number of sensors are formulated with the different sensor information credibility functions. Next, the models and algorithms are extended to obtain analytic results. For each case, the maximum benefit and the optimal number and spacing of sensors are obtained, and the analytic formulations of the optimal sensor locations are derived as well. Finally, a numerical example is presented to verify the validity and applicability of the proposed models for solving a network sensor location problem. The results show that the optimal number of sensors for segments with different model parameters in an entire freeway network can be calculated. It can also be concluded that the optimal sensor spacing is independent of end restrictions but dependent on the values of the model parameters that represent the physical conditions of sensors and roads. PMID:27801794

  1. Optimization of the preparation conditions of ceramic products using drinking water treatment sludges.

    PubMed

    Zamora, R M Ramirez; Ayala, F Espesel; Garcia, L Chavez; Moreno, A Duran; Schouwenaars, R

    2008-11-01

    The aim of this work is to optimize, via Response Surface Methodology, the values of the main process parameters for the production of ceramic products using sludges obtained from drinking water treatment, in order to valorise them. In the first experimental stage, sludges were collected from a drinking water treatment plant for characterization. In the second stage, trials were carried out to produce thin cross-section specimens and fired bricks following an orthogonal central composite design of experiments with three factors (sludge composition, grain size and firing temperature) and five levels. The optimization parameters (Y1 = shrinkage by firing (%), Y2 = water absorption (%), Y3 = density (g/cm³) and Y4 = compressive strength (kg/cm²)) were determined according to standardized analytical methods. Two distinct physicochemical processes were active during firing at different conditions in the experimental design, preventing the determination of a full response surface that would allow direct optimization of the production parameters. Nevertheless, the temperature range for the production of classical red brick was closely delimited by the results; above this range, a lightweight ceramic with surprisingly high strength was produced, opening possibilities for the valorisation of a product with considerably higher added value than originally envisioned.

  2. Optimization of the fiber laser parameters for local high-temperature impact on metal

    NASA Astrophysics Data System (ADS)

    Yatsko, Dmitrii S.; Polonik, Marina V.; Dudko, Olga V.

    2016-11-01

    This paper presents a process for local laser heating of the surface layer of a metal sample. The aim is to create a molten pool with the required depth by laser thermal treatment. During heating, the metal temperature at any point of the molten zone should not reach the boiling point of the base material. The laser power, exposure time and spot size of the laser beam are selected as the variable parameters. A mathematical model for heat transfer in a semi-infinite body, applicable to a finite slab, is used for a preliminary theoretical estimate of acceptable parameter values for the laser thermal treatment. The optimization problem is solved using an algorithm based on scanning the search space (a zero-order method of conditional optimization). The calculated parameter values (the optimal set "laser radiation power - exposure time - spot radius") are used to conduct a series of physical experiments to obtain a molten pool with the required depth. The two-stage experiment consists of local laser treatment of a steel plate followed by examination of a microsection of the laser-irradiated region. The experimental results allow the adequacy of the calculations within the selected models to be judged.
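
    The preliminary-estimation step can be sketched under strong assumptions: a 1D semi-infinite body, constant absorbed flux P/(pi r^2) over the spot, and the textbook constant-flux solution dT(z, t) = (2 q/k) sqrt(alpha t) ierfc(z / (2 sqrt(alpha t))); material constants below are rough steel values. A zero-order scan keeps the (power, time, radius) triples that reach the target melt depth without boiling the surface.

        import numpy as np
        from scipy.special import erfc
        from itertools import product

        k, alpha = 45.0, 1.2e-5          # W/(m K), m^2/s (assumed steel values)
        T0, T_melt, T_boil = 300.0, 1800.0, 3100.0       # K
        depth = 0.5e-3                   # required molten-pool depth, m
        absorb = 0.35                    # assumed absorptivity

        def ierfc(x):
            return np.exp(-x ** 2) / np.sqrt(np.pi) - x * erfc(x)

        def T(z, t, P, r):
            q = absorb * P / (np.pi * r ** 2)            # absorbed flux
            s = np.sqrt(alpha * t)
            return T0 + (2 * q / k) * s * ierfc(z / (2 * s))

        feasible = [(P, t, r)
                    for P, t, r in product(np.linspace(200, 2000, 19),   # W
                                           np.linspace(0.01, 0.5, 50),   # s
                                           (0.5e-3, 1e-3, 2e-3))         # m
                    if T(depth, t, P, r) >= T_melt and T(0, t, P, r) < T_boil]
        print(len(feasible), "candidate sets; first:",
              feasible[0] if feasible else None)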

  3. Adding value to laboratory medicine: a professional responsibility.

    PubMed

    Beastall, Graham H

    2013-01-01

    Laboratory medicine is a medical specialty at the centre of healthcare. When used optimally laboratory medicine generates knowledge that can facilitate patient safety, improve patient outcomes, shorten patient journeys and lead to more cost-effective healthcare. Optimal use of laboratory medicine relies on dynamic and authoritative leadership outside as well as inside the laboratory. The first responsibility of the head of a clinical laboratory is to ensure the provision of a high quality service across a wide range of parameters culminating in laboratory accreditation against an international standard, such as ISO 15189. From that essential baseline the leadership of laboratory medicine at local, national and international level needs to 'add value' to ensure the optimal delivery, use, development and evaluation of the services provided for individuals and for groups of patients. A convenient tool to illustrate added value is use of the mnemonic 'SCIENCE'. This tool allows added value to be considered in seven domains: standardisation and harmonisation; clinical effectiveness; innovation; evidence-based practice; novel applications; cost-effectiveness; and education of others. The assessment of added value in laboratory medicine may be considered against a framework that comprises three dimensions: operational efficiency; patient management; and patient behaviours. The profession and the patient will benefit from sharing examples of adding value to laboratory medicine.

  4. Optimality of profit-including prices under ideal planning.

    PubMed

    Samuelson, P A

    1973-07-01

    Although prices calculated by a constant percentage markup on all costs (nonlabor as well as direct-labor) are usually admitted to be more realistic for a competitive capitalistic model, the view is often expressed that, for optimal planning purposes, the "values" model of Marx's Capital, Volume I, is to be preferred. It is shown here that an optimal-control model that maximizes discounted social utility of consumption per capita and that ultimately approaches a steady state must ultimately have optimal pricing that involves equal rates of steady-state profit in all industries; and such optimal pricing will necessarily deviate from Marx's model of equal rates of surplus value (markups on direct-labor only) in all industries.

  5. Thermal and energy battery management optimization in electric vehicles using Pontryagin's maximum principle

    NASA Astrophysics Data System (ADS)

    Bauer, Sebastian; Suchaneck, Andre; Puente León, Fernando

    2014-01-01

    Depending on the actual battery temperature, electrical power demands in general have a varying impact on the life span of a battery. Since electrical energy provided by the battery is needed to temper it, the question arises of how much energy should optimally be used for tempering at which temperature. Therefore, the objective function to be optimized contains both the goal of maximizing life expectancy and that of minimizing the amount of energy used to achieve the first goal. In this paper, Pontryagin's maximum principle is used to derive a causal control strategy from such an objective function. The derivation of the causal strategy includes the determination of the major factors that govern the optimal solution calculated with the maximum principle. The optimization is calculated offline on a desktop computer for all possible vehicle parameters and major factors. For the practical implementation in the vehicle, it is sufficient to have the values of the major factors determined only roughly in advance and the offline calculation results available. This feature sidesteps the drawback of several optimization strategies that require exact knowledge of the future power demand. The resulting strategy's application is not limited to batteries in electric vehicles.
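
    For reference, the textbook form of the maximum-principle conditions that such a derivation starts from (a generic statement, not the paper's specific battery model):

        % minimize J = \int_0^T L(x, u, t) \, dt  subject to  \dot{x} = f(x, u, t)
        H(x, u, \lambda, t) = L(x, u, t) + \lambda^{\top} f(x, u, t)
        \dot{\lambda} = -\frac{\partial H}{\partial x}
        u^{*}(t) = \arg\min_{u \in U} H\bigl(x^{*}(t), u, \lambda(t), t\bigr)

    The paper's contribution is then identifying the major factors that shape the minimizing control so that the law can be tabulated offline and applied causally in the vehicle.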

  6. Obtaining and characterization of La0.8Sr0.2CrO3 perovskite by the combustion method

    NASA Astrophysics Data System (ADS)

    Morales Rivera, A. M.; Gómez Cuaspud, J. A.; López, E. Vera

    2017-01-01

    This research focuses on the synthesis and characterization of a perovskite oxide based on the La0.8Sr0.2CrO3 system prepared by the combustion method. The material was obtained in order to analyse the effect of the synthesis route on the production of advanced anodic materials for solid oxide fuel cells (SOFC). The solid was obtained from the corresponding nitrate solutions, which were polymerized by heating in the presence of citric acid. The solid precursor, a citrate foam, was characterized by infrared (FTIR) and ultraviolet (UV) spectroscopy, confirming the effectiveness of the synthesis process. The solid was calcined in an oxygen atmosphere at 800°C and characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), energy dispersive X-ray spectroscopy (EDX) and solid-state impedance spectroscopy (IS). The results confirm that an orthorhombic solid was obtained, with space group Pnma (62) and cell parameters a=5.4590Å, b=7.7310Å and c=5.5050Å. At the morphological level, the solid showed a heterogeneous distribution, with good agreement between the proposed and obtained stoichiometry. The electrical characterization confirms semiconductor behaviour, with a band gap of 2.14 eV, in agreement with previous works.

  7. Formulation Development, Optimization, and In vitro - In vivo Characterization of Natamycin Loaded PEGylated Nano-lipid Carriers for Ocular Applications.

    PubMed

    Patil, Akash; Lakhani, Prit; Taskar, Pranjal; Wu, Kai-Wei; Sweeney, Corinne; Avula, Bharathi; Wang, Yan-Hong; Khan, Ikhlas A; Majumdar, Soumyajit

    2018-04-23

    The current study aimed at formulating and optimizing natamycin (NT) loaded PEGylated NLCs (NT-PEG-NLCs) using a Box-Behnken design and investigating their potential in ocular applications. Response surface methodology (RSM) computations and plots for optimization were performed using Design Expert® software to obtain optimum values for the response variables based on the criteria of desirability. Optimized NT-PEG-NLCs had predicted values for the dependent variables not significantly different from the experimental values. NT-PEG-NLCs were characterized for their physicochemical parameters; NT's rate of permeation and flux across rabbit cornea was evaluated in vitro; ocular tissue distribution was assessed in rabbits in vivo. NT-PEG-NLCs were found to have optimum particle size (< 300 nm), narrow PDI, and high NT entrapment and content. In vitro transcorneal permeability and flux of NT from NT-PEG-NLCs were significantly higher than from Natacyn®. NT-PEG-NLC (0.3%) showed improved delivery of NT across the intact cornea and provided concentrations statistically similar to the marketed suspension (5%) in inner ocular tissues in vivo, indicating that it could be a potential alternative to the conventional suspension during the course of fungal keratitis therapy. Copyright © 2018. Published by Elsevier Inc.

  8. Optimizer convergence and local minima errors and their clinical importance

    NASA Astrophysics Data System (ADS)

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R.

    2003-09-01

    Two of the errors common in inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.

  9. Optimizer convergence and local minima errors and their clinical importance.

    PubMed

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R

    2003-09-07

    Two of the errors common in inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.

  10. Optimizing the G/T ratio of the DSS-13 34-meter beam-waveguide antenna

    NASA Technical Reports Server (NTRS)

    Esquivel, M. S.

    1992-01-01

    Calculations using Physical Optics computer software were done to optimize the gain-to-noise temperature (G/T) ratio of DSS-13, the DSN's 34-m beam-waveguide antenna, at X-band for operation with the ultra-low-noise amplifier maser system. A better G/T value was obtained by using a 24.2-dB far-field-gain smooth-wall dual-mode horn than by using the standard X-band 22.5-dB-gain corrugated horn.

  11. The amount effect and marginal value.

    PubMed

    Rachlin, Howard; Arfer, Kodi B; Safin, Vasiliy; Yen, Ming

    2015-07-01

    The amount effect of delay discounting (by which the value of larger reward amounts is discounted by delay at a lower rate than that of smaller amounts) strictly implies that value functions (value as a function of amount) are steeper at greater delays than they are at lesser delays. That is, the amount effect and the difference in value functions at different delays are actually a single empirical finding. Amount effects of delay discounting are typically found with choice experiments. Value functions for immediate rewards have been empirically obtained by direct judgment. (Value functions for delayed rewards have not been previously obtained.) The present experiment obtained value functions for both immediate and delayed rewards by direct judgment and found them to be steeper when the rewards were delayed--hence, finding an amount effect with delay discounting. © Society for the Experimental Analysis of Behavior.
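
    A small numerical illustration of the equivalence the abstract asserts, under assumed functional forms (hyperbolic discounting with an amount-dependent rate; not the authors' data): when the discount rate k falls with amount, the elasticity of value with respect to amount exceeds 1 and grows with delay, i.e., value functions are steeper for delayed rewards.

        import numpy as np

        c, s = 2.0, 0.4                     # hypothetical rate parameters
        V = lambda A, D: A / (1 + c * A ** (-s) * D)   # k(A) = c * A**(-s)

        A = np.logspace(1, 3, 200)          # amounts from 10 to 1000
        for D in (0, 30, 365):              # delays (days)
            slope = np.gradient(np.log(V(A, D)), np.log(A))
            print(f"delay {D:>3}: mean elasticity = {slope.mean():.3f}")
        # The printed elasticity rises above 1.0 as D grows: steeper value
        # functions at greater delays, equivalently the amount effect.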

  12. Symbol interval optimization for molecular communication with drift.

    PubMed

    Kim, Na-Rae; Eckford, Andrew W; Chae, Chan-Byoung

    2014-09-01

    In this paper, we propose a symbol interval optimization algorithm for molecular communication with drift. Proper symbol intervals are important in practical communication systems since information needs to be sent as fast as possible with low error rates. There is a trade-off, however, between symbol intervals and inter-symbol interference (ISI) from Brownian motion. Thus, we find proper symbol interval values considering the ISI inside two kinds of blood vessels, and also suggest an ISI-free system for strong drift models. Finally, an isomer-based molecule shift keying (IMoSK) is applied to calculate achievable data transmission rates (achievable rates, hereafter). Normalized achievable rates are also obtained and compared in one-symbol-ISI and ISI-free systems.
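
    A hedged sketch of the interval-selection logic: for diffusion with drift, the first arrival time over distance d is inverse Gaussian with mean d/v and shape d^2/(2 Dc); molecules not arriving within one symbol interval T leak into later slots as ISI, so one can pick the smallest T that keeps that tail below a budget. All parameter values below are illustrative, not the paper's.

        import numpy as np
        from scipy.stats import invgauss

        d, v, Dc = 1e-4, 1e-5, 1e-9        # m, m/s, m^2/s  (assumed)
        mean, shape = d / v, d ** 2 / (2 * Dc)
        ft = invgauss(mu=mean / shape, scale=shape)   # SciPy's parametrization

        isi_budget = 0.05
        for T in np.linspace(5, 60, 112):             # candidate intervals, s
            if ft.sf(T) <= isi_budget:                # tail = late arrivals
                print(f"smallest interval with tail <= {isi_budget}: "
                      f"T = {T:.1f} s")
                break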

  13. Optimization of composite box-beam structures including effects of subcomponent interactions

    NASA Technical Reports Server (NTRS)

    Ragon, Scott A.; Guerdal, Zafer; Starnes, James H., Jr.

    1995-01-01

    Minimum mass designs are obtained for a simple box beam structure subject to bending, torque and combined bending/torque load cases. These designs are obtained subject to point strain and linear buckling constraints. The present work differs from previous efforts in that special attention is paid to including the effects of subcomponent panel interaction in the optimal design process. Two different approaches are used to impose the buckling constraints. When the global approach is used, buckling constraints are imposed on the global structure via a linear eigenvalue analysis. This approach allows the subcomponent panels to interact in a realistic manner. The results obtained using this approach are compared to results obtained using a traditional, less expensive approach, called the local approach. When the local approach is used, in-plane loads are extracted from the global model and used to impose buckling constraints on each subcomponent panel individually. In the global cases, it is found that there can be significant interaction between skin, spar, and rib design variables. This coupling is weak or nonexistent in the local designs. It is determined that weight savings of up to 7% may be obtained by using the global approach instead of the local approach to design these structures. Several of the designs obtained using the linear buckling analysis are subjected to a geometrically nonlinear analysis. For the designs which were subjected to bending loads, the innermost rib panel begins to collapse at less than half the intended design load and in a mode different from that predicted by linear analysis. The discrepancy between the predicted linear and nonlinear responses is attributed to the effects of the nonlinear rib crushing load, and the parameter which controls this rib collapse failure mode is shown to be the rib thickness. The rib collapse failure mode may be avoided by increasing the rib thickness above the value obtained from the (linear analysis based

  14. Tree value system: description and assumptions.

    Treesearch

    D.G. Briggs

    1989-01-01

    TREEVAL is a microcomputer model that calculates tree or stand values and volumes based on product prices, manufacturing costs, and predicted product recovery. It was designed as an aid in evaluating management regimes. TREEVAL calculates values in either of two ways, one based on optimized tree bucking using dynamic programming and one simulating the results of user-...
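
    The record is truncated here, but the optimized-bucking component it names is a classic dynamic program: choose cut points so the total value of the logs is maximal. A minimal sketch with hypothetical log prices (not TREEVAL's actual price tables or taper logic):

        def buck(stem_len, price):            # price: dict log_length -> value
            best = [0.0] * (stem_len + 1)     # best[i] = max value of first i feet
            cut = [0] * (stem_len + 1)
            for i in range(1, stem_len + 1):
                for L, p in price.items():
                    if L <= i and best[i - L] + p > best[i]:
                        best[i], cut[i] = best[i - L] + p, L
            logs, i = [], stem_len            # recover the cutting pattern
            while i > 0 and cut[i]:
                logs.append(cut[i]); i -= cut[i]
            return best[stem_len], logs

        # Hypothetical price list: value by log length (feet).
        print(buck(34, {8: 10.0, 12: 18.0, 16: 26.0}))   # -> (52.0, [16, 16])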

  15. Optimality based repetitive controller design for track-following servo system of optical disk drives.

    PubMed

    Chen, Wentao; Zhang, Weidong

    2009-10-01

    In an optical disk drive servo system, to attenuate the external periodic disturbances induced by inevitable disk eccentricity, repetitive control has been used successfully. The performance of a repetitive controller greatly depends on the bandwidth of the low-pass filter included in the repetitive controller. However, owing to the plant uncertainty and system stability, it is difficult to maximize the bandwidth of the low-pass filter. In this paper, we propose an optimality based repetitive controller design method for the track-following servo system with norm-bounded uncertainties. By embedding a lead compensator in the repetitive controller, both the system gain at periodic signal's harmonics and the bandwidth of the low-pass filter are greatly increased. The optimal values of the repetitive controller's parameters are obtained by solving two optimization problems. Simulation and experimental results are provided to illustrate the effectiveness of the proposed method.

  16. On l(1): Optimal decentralized performance

    NASA Technical Reports Server (NTRS)

    Sourlas, Dennis; Manousiouthakis, Vasilios

    1993-01-01

    In this paper, the Manousiouthakis parametrization of all decentralized stabilizing controllers is employed in mathematically formulating the l(1) optimal decentralized controller synthesis problem. The resulting optimization problem is infinite dimensional and therefore not directly amenable to computations. It is shown that finite dimensional optimization problems that have value arbitrarily close to the infinite dimensional one can be constructed. Based on this result, an algorithm that solves the l(1) decentralized performance problems is presented. A global optimization approach to the solution of the infinite dimensional approximating problems is also discussed.

  17. Fractional Flow Reserve: Does a Cut-off Value add Value?

    PubMed Central

    Mohdnazri, Shah R; Keeble, Thomas R

    2016-01-01

    Fractional flow reserve (FFR) has been shown to improve outcomes when used to guide percutaneous coronary intervention (PCI). There have been two proposed cut-off points for FFR. The first was derived by comparing FFR against a series of non-invasive tests, with a value of ≤0.75 shown to predict a positive ischaemia test. It was then shown in the DEFER study that a vessel FFR value of ≥0.75 was associated with safe deferral of PCI. During the validation phase, a ‘grey zone’ for FFR values of between 0.76 and 0.80 was demonstrated, where a positive non-invasive test may still occur, but sensitivity and specificity were sub-optimal. Clinical judgement was therefore advised for values in this range. The FAME studies then moved the FFR cut-off point to ≤0.80, with a view to predicting outcomes. The ≤0.80 cut-off point has been adopted into clinical practice guidelines, whereas the lower value of ≤0.75 is no longer widely used. Here, the authors discuss the data underpinning these cut-off values and the practical implications for their use when using FFR guidance in PCI. PMID:29588700

  18. Optimization by response surface methodology of lutein recovery from paprika leaves using accelerated solvent extraction.

    PubMed

    Kang, Jae-Hyun; Kim, Suna; Moon, BoKyung

    2016-08-15

    In this study, we used response surface methodology (RSM) to optimize the extraction conditions for recovering lutein from paprika leaves using accelerated solvent extraction (ASE). The lutein content was quantitatively analyzed using a UPLC equipped with a BEH C18 column. A central composite design (CCD) was employed for experimental design to obtain the optimized combination of extraction temperature (°C), static time (min), and solvent (EtOH, %). The experimental data obtained from a twenty sample set were fitted to a second-order polynomial equation using multiple regression analysis. The adjusted coefficient of determination (R²) for the lutein extraction model was 0.9518, and the probability value (p=0.0000) demonstrated a high significance for the regression model. The optimum extraction conditions for lutein were temperature: 93.26°C, static time: 5 min, and solvent: 79.63% EtOH. Under these conditions, the predicted extraction yield of lutein was 232.60 μg/g. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Multi-objective optimization of riparian buffer networks; valuing present and future benefits

    EPA Science Inventory

    Multi-objective optimization has emerged as a popular approach to support water resources planning and management. This approach provides decision-makers with a suite of management options which are generated based on metrics that represent different social, economic, and environ...

  20. Role of the parameters involved in the plan optimization based on the generalized equivalent uniform dose and radiobiological implications

    NASA Astrophysics Data System (ADS)

    Widesott, L.; Strigari, L.; Pressello, M. C.; Benassi, M.; Landoni, V.

    2008-03-01

    We investigated the role and the weight of the parameters involved in the intensity modulated radiation therapy (IMRT) optimization based on the generalized equivalent uniform dose (gEUD) method, for prostate and head-and-neck plans. We systematically varied the parameters (gEUDmax and weight) involved in the gEUD-based optimization of rectal wall and parotid glands. We found that the proper value of weight factor, still guaranteeing planning treatment volumes coverage, produced similar organs at risks dose-volume (DV) histograms for different gEUDmax with fixed a = 1. Most of all, we formulated a simple relation that links the reference gEUDmax and the associated weight factor. As secondary objective, we evaluated plans obtained with the gEUD-based optimization and ones based on DV criteria, using the normal tissue complication probability (NTCP) models. gEUD criteria seemed to improve sparing of rectum and parotid glands with respect to DV-based optimization: the mean dose, the V40 and V50 values to the rectal wall were decreased of about 10%, the mean dose to parotids decreased of about 20-30%. But more than the OARs sparing, we underlined the halving of the OARs optimization time with the implementation of the gEUD-based cost function. Using NTCP models we enhanced differences between the two optimization criteria for parotid glands, but no for rectum wall.