Sample records for optimal sizing method

  1. Fractal analysis of mandibular trabecular bone: optimal tile sizes for the tile counting method

    PubMed Central

    Huh, Kyung-Hoe; Baik, Jee-Seon; Heo, Min-Suk; Lee, Sam-Sun; Choi, Soon-Chul; Lee, Sun-Bok; Lee, Seung-Pyo

    2011-01-01

    Purpose This study was performed to determine the optimal tile size for the fractal dimension of the mandibular trabecular bone using a tile counting method. Materials and Methods Digital intraoral radiographic images were obtained at the mandibular angle, molar, premolar, and incisor regions of 29 human dry mandibles. After preprocessing, the parameters representing morphometric characteristics of the trabecular bone were calculated. The fractal dimensions of the processed images were analyzed in various tile sizes by the tile counting method. Results The optimal range of tile size was 0.132 mm to 0.396 mm for the fractal dimension using the tile counting method. The sizes were closely related to the morphometric parameters. Conclusion The fractal dimension of mandibular trabecular bone, as calculated with the tile counting method, can be best characterized with a range of tile sizes from 0.132 to 0.396 mm. PMID:21977478
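
    The tile counting procedure described here is essentially a box-counting estimate of fractal dimension. The sketch below is a minimal, generic illustration of that idea on a toy binary image; the tile sizes are given in pixels and the image is an assumption, not the preprocessed radiographs from the study, and the detector's pixel pitch would be needed to map pixel tiles onto the 0.132-0.396 mm range reported above.

```python
# A minimal, generic box-counting sketch of the tile counting idea; illustrative only.
import numpy as np

def tile_counting_dimension(binary_img, tile_sizes):
    """Estimate fractal dimension from counts of tiles that contain any structure."""
    counts = []
    for s in tile_sizes:
        h, w = binary_img.shape
        n = sum(binary_img[i:i + s, j:j + s].any()
                for i in range(0, h, s) for j in range(0, w, s))
        counts.append(n)
    # slope of log(count) versus log(1/tile size) estimates the fractal dimension
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(tile_sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
img = rng.random((256, 256)) > 0.5        # stand-in for a binarized trabecular pattern
print(tile_counting_dimension(img, [2, 4, 8, 16, 32]))
```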

  2. Fractal analysis of mandibular trabecular bone: optimal tile sizes for the tile counting method.

    PubMed

    Huh, Kyung-Hoe; Baik, Jee-Seon; Yi, Won-Jin; Heo, Min-Suk; Lee, Sam-Sun; Choi, Soon-Chul; Lee, Sun-Bok; Lee, Seung-Pyo

    2011-06-01

    This study was performed to determine the optimal tile size for the fractal dimension of the mandibular trabecular bone using a tile counting method. Digital intraoral radiographic images were obtained at the mandibular angle, molar, premolar, and incisor regions of 29 human dry mandibles. After preprocessing, the parameters representing morphometric characteristics of the trabecular bone were calculated. The fractal dimensions of the processed images were analyzed in various tile sizes by the tile counting method. The optimal range of tile size was 0.132 mm to 0.396 mm for the fractal dimension using the tile counting method. The sizes were closely related to the morphometric parameters. The fractal dimension of mandibular trabecular bone, as calculated with the tile counting method, can be best characterized with a range of tile sizes from 0.132 to 0.396 mm.

  3. Comparing kinetic curves in liquid chromatography

    NASA Astrophysics Data System (ADS)

    Kurganov, A. A.; Kanat'eva, A. Yu.; Yakubenko, E. E.; Popova, T. P.; Shiryaeva, V. E.

    2017-01-01

    Five equations for kinetic curves, which connect the number of theoretical plates N and the time of analysis t0 for five different versions of optimization depending on the parameters being varied (e.g., mobile phase flow rate, pressure drop, sorbent grain size), are obtained by means of mathematical modeling. It is found that a method based on the optimization of the sorbent grain size at fixed pressure is most suitable for the optimization of rapid separations. It is noted that the advantages of this method are limited to the area of relatively low efficiency, and the advantage shifts to a method based on the optimization of both the sorbent grain size and the pressure drop across the column in the area of high efficiency.

  4. Simultaneous Aerodynamic and Structural Design Optimization (SASDO) for a 3-D Wing

    NASA Technical Reports Server (NTRS)

    Gumbert, Clyde R.; Hou, Gene J.-W.; Newman, Perry A.

    2001-01-01

    The formulation and implementation of an optimization method called Simultaneous Aerodynamic and Structural Design Optimization (SASDO) is shown as an extension of the Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) method. It is extended by the inclusion of structure element sizing parameters as design variables and Finite Element Method (FEM) analysis responses as constraints. The method aims to reduce the computational expense incurred in performing shape and sizing optimization using state-of-the-art Computational Fluid Dynamics (CFD) flow analysis, FEM structural analysis and sensitivity analysis tools. SASDO is applied to a simple, isolated, 3-D wing in inviscid flow. Results show that the method finds the same local optimum as a conventional optimization method with some reduction in the computational cost and without significant modifications to the analysis tools.

  5. Performance Analysis and Design Synthesis (PADS) computer program. Volume 1: Formulation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The program formulation for PADS computer program is presented. It can size launch vehicles in conjunction with calculus-of-variations optimal trajectories and can also be used as a general-purpose branched trajectory optimization program. In the former use, it has the Space Shuttle Synthesis Program as well as a simplified stage weight module for optimally sizing manned recoverable launch vehicles. For trajectory optimization alone or with sizing, PADS has two trajectory modules. The first trajectory module uses the method of steepest descent; the second employs the method of quasilinearization, which requires a starting solution from the first trajectory module.

  6. Optimizing Probability of Detection Point Estimate Demonstration

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2017-01-01

    Probability of detection (POD) analysis is used in assessing the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. Real flaws such as cracks and crack-like flaws are to be detected using these NDE methods. A reliably detectable crack size is required for safe life analysis of fracture critical parts. The paper provides a discussion on optimizing probability of detection (POD) demonstration experiments using the point estimate method. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for probability density. Normally, a set of 29 flaws of the same size within some tolerance is used in the demonstration. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and to achieve an acceptable value for the probability of false (POF) calls while keeping the flaw sizes in the set as small as possible.

  7. Optimizing probability of detection point estimate demonstration

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2017-04-01

    The paper provides a discussion on optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and to achieve an acceptable value for the probability of false (POF) calls while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for probability density. Normally, a set of 29 flaws of the same size within some tolerance is used in the demonstration. Traditionally, the largest flaw size in the set is considered to be a conservative estimate of the flaw size with minimum 90% probability and 95% confidence. This flaw size is denoted as α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e., the 90% probability flaw size, needed to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution. The difference between the median or average of the 29 flaws and α90 is also expressed as a proportion of the standard deviation of the probability density distribution. In general, it is concluded that, if the probability of detection increases with flaw size, the average of the 29 flaw sizes will always be larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet requirements on the minimum required PPD, the maximum allowable POF, the flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
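
    The binomial arithmetic behind the 29-flaw point estimate can be made concrete with a few lines of code. The sketch below assumes a 29-hits-out-of-29 pass criterion as an illustration, not NASA's full qualification procedure; it shows why passing such a demonstration supports a 90% POD claim at roughly 95% confidence and how the probability of passing grows with the true POD.

```python
# A sketch of the binomial reasoning behind a point estimate demonstration;
# the pass criterion and POD values are illustrative assumptions.
from math import comb

def prob_pass(pod, n=29, min_hits=29):
    """Probability of passing the demonstration if every flaw has detection probability pod."""
    return sum(comb(n, k) * pod**k * (1 - pod)**(n - k) for k in range(min_hits, n + 1))

# Basis of the 90/95 claim: if the true POD were only 0.90, passing 29/29 is rare (<5%),
# so a passed demonstration supports POD >= 0.90 at about 95% confidence.
print(prob_pass(0.90))   # ~0.047
print(prob_pass(0.98))   # PPD when the NDE technique comfortably exceeds the requirement
```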

  8. Economic Analysis and Optimal Sizing for behind-the-meter Battery Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Di; Kintner-Meyer, Michael CW; Yang, Tao

    This paper proposes methods to estimate the potential benefits and determine the optimal energy and power capacity for behind-the-meter battery storage systems (BSS). In the proposed method, a linear program is first formulated using only typical load profiles, energy/demand charge rates, and a set of battery parameters to determine the maximum saving in electric energy cost. The optimization formulation is then adapted to include battery cost as a function of its power and energy capacity in order to capture the trade-off between benefits and cost, and therefore to determine the most economic battery size. Using the proposed methods, economic analysis and optimal sizing have been performed for a few commercial buildings and utility rate structures that are representative of those found in the various regions of the continental United States. The key factors that affect the economic benefits and optimal size have been identified. The proposed methods and case study results can not only help commercial and industrial customers or battery vendors to evaluate and size the storage system for behind-the-meter application, but can also assist utilities and policy makers in designing electricity rates or subsidies to promote the development of energy storage.
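
    A toy version of the kind of linear program described above can be written compactly with a modeling library. The sketch below assumes cvxpy is available and uses made-up hourly loads, a flat energy rate, a demand charge, and amortized battery costs; it illustrates the formulation style only, not the paper's model or data.

```python
# Hedged sketch of a behind-the-meter battery sizing LP; all numbers are illustrative.
import numpy as np
import cvxpy as cp

T = 24
load = np.array([30]*7 + [60]*4 + [80]*3 + [60]*5 + [40]*5)   # kW, toy daily profile
energy_rate, demand_rate = 0.12, 15.0       # $/kWh and $/kW (single day shown)
cost_E, cost_P = 0.5, 1.0                   # amortized $/kWh and $/kW per day

E = cp.Variable(nonneg=True)                # energy capacity (kWh)
P = cp.Variable(nonneg=True)                # power capacity (kW)
dis = cp.Variable(T, nonneg=True)           # discharge (kW)
ch = cp.Variable(T, nonneg=True)            # charge (kW)
soc = cp.Variable(T + 1, nonneg=True)       # state of charge (kWh)
grid = load - dis + ch                      # net import from the grid (kW)

cons = [soc[0] == 0, soc[1:] == soc[:-1] + ch - dis,
        soc <= E, dis <= P, ch <= P, grid >= 0]
cost = energy_rate * cp.sum(grid) + demand_rate * cp.max(grid) + cost_E * E + cost_P * P
cp.Problem(cp.Minimize(cost), cons).solve()
print(E.value, P.value)                     # most economic energy and power capacity
```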

  9. Performance Analysis and Design Synthesis (PADS) computer program. Volume 2: Program description, part 1

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The Performance Analysis and Design Synthesis (PADS) computer program has a two-fold purpose. It can size launch vehicles in conjunction with calculus-of-variations optimal trajectories and can also be used as a general-purpose branched trajectory optimization program. In the former use, it has the Space Shuttle Synthesis Program as well as a simplified stage weight module for optimally sizing manned recoverable launch vehicles. For trajectory optimization alone or with sizing, PADS has two trajectory modules. The first trajectory module uses the method of steepest descent; the second employs the method of quasilinearization, which requires a starting solution from the first trajectory module. For Volume 1 see N73-13199.

  10. Effect of various binning methods and ROI sizes on the accuracy of the automatic classification system for differentiation between diffuse infiltrative lung diseases on the basis of texture features at HRCT

    NASA Astrophysics Data System (ADS)

    Kim, Namkug; Seo, Joon Beom; Sung, Yu Sub; Park, Bum-Woo; Lee, Youngjoo; Park, Seong Hoon; Lee, Young Kyung; Kang, Suk-Ho

    2008-03-01

    To find the optimal binning method and ROI size of an automatic classification system for differentiation between diffuse infiltrative lung diseases on the basis of textural analysis at HRCT, six hundred circular regions of interest (ROIs) with 10, 20, and 30 pixel diameters, comprising 100 ROIs for each of six regional disease patterns (normal, NL; ground-glass opacity, GGO; reticular opacity, RO; honeycombing, HC; emphysema, EMPH; and consolidation, CONS), were marked by an experienced radiologist on HRCT images. Histogram (mean) and co-occurrence matrix (mean and SD of angular second moment, contrast, correlation, entropy, and inverse difference momentum) features were employed to test binning and ROI effects. To find the optimal binning, variable-size linear binning (LB; bin size Q: 4~30, 32, 64, 128, 144, 196, 256, 384) and non-linear binning (NLB; Q: 4~30) methods (K-means and Fuzzy C-means clustering) were tested. For automated classification, an SVM classifier was implemented. To assess cross-validation of the system, a five-fold method was used, and each test was repeated twenty times. Overall accuracies for every combination of ROI and binning sizes were statistically compared. For small binning sizes (Q <= 10), NLB showed significantly better accuracy than LB, and K-means NLB (Q = 26) was statistically significantly better than every LB. For the 30x30 ROI size and most binning sizes, the K-means method performed better than the other NLB and LB methods. With the optimal binning and other parameters set, the overall sensitivity of the classifier was 92.85%. The sensitivity and specificity of the system for each class were as follows: NL, 95%, 97.9%; GGO, 80%, 98.9%; RO, 85%, 96.9%; HC, 94.7%, 97%; EMPH, 100%, 100%; and CONS, 100%, 100%, respectively. We determined the optimal binning method and ROI size of the automatic classification system for differentiation between diffuse infiltrative lung diseases on the basis of texture features at HRCT.
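
    The contrast between linear binning (LB) and K-means non-linear binning (NLB) of gray levels can be sketched as follows. The ROI values, bin count Q, and use of scikit-learn's KMeans are illustrative assumptions, not the study's HRCT data or implementation.

```python
# Linear vs. K-means binning of gray levels before texture-feature extraction; toy data.
import numpy as np
from sklearn.cluster import KMeans

def linear_binning(img, Q):
    edges = np.linspace(img.min(), img.max(), Q + 1)
    return np.clip(np.digitize(img, edges) - 1, 0, Q - 1)

def kmeans_binning(img, Q, seed=0):
    km = KMeans(n_clusters=Q, n_init=10, random_state=seed).fit(img.reshape(-1, 1))
    return km.labels_.reshape(img.shape)

rng = np.random.default_rng(1)
roi = rng.normal(-700, 150, size=(30, 30))       # toy HU values for a 30x30-pixel ROI
print(np.unique(linear_binning(roi, 10)).size, np.unique(kmeans_binning(roi, 10)).size)
```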

  11. Inversion method based on stochastic optimization for particle sizing.

    PubMed

    Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix

    2016-08-01

    A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.

  12. Optimal placement and sizing of wind / solar based DG sources in distribution system

    NASA Astrophysics Data System (ADS)

    Guan, Wanlin; Guo, Niao; Yu, Chunlai; Chen, Xiaoguang; Yu, Haiyang; Liu, Zhipeng; Cui, Jiapeng

    2017-06-01

    Proper placement and sizing of Distributed Generation (DG) in a distribution system can obtain the maximum potential benefits. This paper proposes a quantum particle swarm optimization (QPSO) based wind turbine generation unit (WTGU) and photovoltaic (PV) array placement and sizing approach for real power loss reduction and voltage stability improvement of distribution systems. Performance modeling of the wind and solar generation systems is described, and the models are classified into PQ/PQ(V)/PI types in the power flow. Considering that WTGU and PV based DGs in a distribution system are geographically restricted, the optimal area and the DG capacity limits of each bus in the setting area need to be set before optimization, so an area optimization method is proposed. The method has been tested on the IEEE 33-bus radial distribution system to demonstrate the performance and effectiveness of the proposed approach.

  13. Design optimization of large-size format edge-lit light guide units

    NASA Astrophysics Data System (ADS)

    Hastanin, J.; Lenaerts, C.; Fleury-Frenette, K.

    2016-04-01

    In this paper, we present an original method of dot pattern generation dedicated to design optimization of large-size format light guide plates (LGPs), such as photo-bioreactors, in which the number of dots greatly exceeds the maximum allowable number of optical objects supported by most common ray-tracing software. In the proposed method, in order to simplify the computational problem, the original optical system is replaced by an equivalent one. Accordingly, the original dot pattern is split into multiple small sections, inside which the dot size variation is less than the typical resolution of ink dot printing. These sections are then replaced by equivalent cells with a continuous diffusing film. After that, we adjust the two-dimensional TIS (Total Integrated Scatter) distribution over the grid of equivalent cells using an iterative optimization procedure. Finally, the obtained optimal TIS distribution is converted into a dot size distribution by applying an appropriate conversion rule. An original semi-empirical equation dedicated to rectangular large-size LGPs is proposed for the initial guess of the TIS distribution. It significantly reduces the total time needed for dot pattern optimization.

  14. Thermal-Structural Optimization of Integrated Cryogenic Propellant Tank Concepts for a Reusable Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Waters, W. Allen; Singer, Thomas N.; Haftka, Raphael T.

    2004-01-01

    A next generation reusable launch vehicle (RLV) will require thermally efficient and light-weight cryogenic propellant tank structures. Since these tanks will be weight-critical, analytical tools must be developed to aid in sizing the thickness of insulation layers and structural geometry for optimal performance. Finite element method (FEM) models of the tank and insulation layers were created to analyze the thermal performance of the cryogenic insulation layer and thermal protection system (TPS) of the tanks. The thermal conditions of ground-hold and re-entry/soak-through for a typical RLV mission were used in the thermal sizing study. A general-purpose nonlinear FEM analysis code, capable of using temperature and pressure dependent material properties, was used as the thermal analysis code. Mechanical loads from ground handling and proof-pressure testing were used to size the structural geometry of an aluminum cryogenic tank wall. Nonlinear deterministic optimization and reliability optimization techniques were the analytical tools used to size the geometry of the isogrid stiffeners and thickness of the skin. The results from the sizing study indicate that a commercial FEM code can be used for thermal analyses to size the insulation thicknesses where the temperature and pressure were varied. The results from the structural sizing study show that using combined deterministic and reliability optimization techniques can obtain alternate and lighter designs than the designs obtained from deterministic optimization methods alone.

  15. Steepest descent method implementation on unconstrained optimization problem using C++ program

    NASA Astrophysics Data System (ADS)

    Napitupulu, H.; Sukono; Mohd, I. Bin; Hidayat, Y.; Supian, S.

    2018-03-01

    Steepest descent is known as the simplest gradient method. Recently, many studies have been done to obtain an appropriate step size that reduces the objective function value progressively. In this paper, the properties of the steepest descent method from the literature are reviewed together with the advantages and disadvantages of each step size procedure. The development of the steepest descent method due to its step size procedure is discussed. In order to test the performance of each step size, we run a steepest descent procedure in a C++ program. We implemented it on an unconstrained optimization test problem with two variables and compared the numerical results of each step size procedure. Based on the numerical experiments, we summarize the general computational features and weaknesses of each procedure for each case of the problem.
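
    A minimal steepest descent loop with one common step-size rule (Armijo backtracking) is sketched below on a two-variable test function. The test function and constants are assumptions for illustration; the paper's C++ implementation and its specific step-size procedures are not reproduced here.

```python
# Steepest descent with Armijo backtracking on a two-variable test problem; illustrative only.
import numpy as np

def f(x):            # Rosenbrock-like two-variable test function
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def grad(x):
    return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                     200 * (x[1] - x[0]**2)])

def steepest_descent(x0, tol=1e-6, max_iter=20000):
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        t = 1.0
        while f(x - t * g) > f(x) - 1e-4 * t * (g @ g):   # Armijo backtracking line search
            t *= 0.5
        x = x - t * g
    return x

print(steepest_descent([-1.2, 1.0]))   # should approach the minimizer near (1, 1)
```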

  16. A rapid method for optimization of the rocket propulsion system for single-stage-to-orbit vehicles

    NASA Technical Reports Server (NTRS)

    Eldred, C. H.; Gordon, S. V.

    1976-01-01

    A rapid analytical method for the optimization of rocket propulsion systems is presented for a vertical take-off, horizontal landing, single-stage-to-orbit launch vehicle. This method utilizes trade-offs between propulsion characteristics affecting flight performance and engine system mass. The performance results from a point-mass trajectory optimization program are combined with a linearized sizing program to establish vehicle sizing trends caused by propulsion system variations. The linearized sizing technique was developed for the class of vehicle systems studied herein. The specific examples treated are the optimization of nozzle expansion ratio and lift-off thrust-to-weight ratio to achieve either minimum gross mass or minimum dry mass. Assumed propulsion system characteristics are high chamber pressure, liquid oxygen and liquid hydrogen propellants, conventional bell nozzles, and the same fixed nozzle expansion ratio for all engines on a vehicle.

  17. Size-guided multi-seed heuristic method for geometry optimization of clusters: Application to benzene clusters.

    PubMed

    Takeuchi, Hiroshi

    2018-05-08

    Since searching for the global minimum on the potential energy surface of a cluster is very difficult, many geometry optimization methods have been proposed, in which initial geometries are randomly generated and subsequently improved with different algorithms. In this study, a size-guided multi-seed heuristic method is developed and applied to benzene clusters. It produces initial configurations of the cluster with n molecules from the lowest-energy configurations of the cluster with n - 1 molecules (seeds). The initial geometries are further optimized with the geometrical perturbations previously used for molecular clusters. These steps are repeated until the size n reaches a predefined value. The method locates putative global minima of benzene clusters with up to 65 molecules. The performance of the method is discussed in terms of the computational cost, the rates of locating the global minima, and the energies of the initial geometries. © 2018 Wiley Periodicals, Inc.
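
    The size-guided, seed-based strategy can be sketched in a stripped-down form: grow clusters one particle at a time from the best (n-1)-particle configurations and locally relax each candidate. In the sketch below a Lennard-Jones potential and scipy's local minimizer stand in for the benzene force field and the paper's geometrical perturbations, so it illustrates the strategy only.

```python
# Toy size-guided, multi-seed cluster growth with a Lennard-Jones stand-in potential.
import numpy as np
from scipy.optimize import minimize

def lj_energy(flat):
    x = flat.reshape(-1, 3)
    d = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
    r = d[np.triu_indices(len(x), 1)]
    return float(np.sum(4 * (r**-12 - r**-6)))

def grow(n_max, seeds_per_size=3, trials=10, seed=0):
    rng = np.random.default_rng(seed)
    seeds = [np.zeros((1, 3))]                        # size-1 "cluster"
    for n in range(2, n_max + 1):
        cands = []
        for s in seeds:                               # build size-n candidates from size-(n-1) seeds
            for _ in range(trials):
                new = np.vstack([s, s[rng.integers(len(s))] + rng.normal(0, 1.2, 3)])
                res = minimize(lj_energy, new.ravel(), method="L-BFGS-B")
                cands.append((res.fun, res.x.reshape(-1, 3)))
        cands.sort(key=lambda c: c[0])
        seeds = [c[1] for c in cands[:seeds_per_size]]
        print(n, round(cands[0][0], 3))               # best energy found at each size
    return seeds[0]

grow(5)
```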

  18. Optimal input sizes for neural network de-interlacing

    NASA Astrophysics Data System (ADS)

    Choi, Hyunsoo; Seo, Guiwon; Lee, Chulhee

    2009-02-01

    Neural network de-interlacing has shown promising results among various de-interlacing methods. In this paper, we investigate the effects of input size for neural networks for various video formats when the neural networks are used for de-interlacing. In particular, we investigate optimal input sizes for CIF, VGA and HD video formats.

  19. Analytical sizing methods for behind-the-meter battery storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Di; Kintner-Meyer, Michael; Yang, Tao

    In behind-the-meter applications, a battery storage system (BSS) is utilized to reduce a commercial or industrial customer's payment for electricity use, including the energy charge and demand charge. The potential value of a BSS in payment reduction and the most economic size can be determined by formulating and solving standard mathematical programming problems. In this method, users input system information such as load profiles, energy/demand charge rates, and battery characteristics to construct a standard programming problem that typically involves a large number of constraints and decision variables. Such a large-scale programming problem is then solved by optimization solvers to obtain numerical solutions. Such a method cannot directly link the obtained optimal battery sizes to the input parameters and requires case-by-case analysis. In this paper, we present an objective quantitative analysis of the costs and benefits of customer-side energy storage, and thereby identify key factors that affect battery sizing. Based on the analysis, we then develop simple but effective guidelines that can be used to determine the most cost-effective battery size or guide utility rate design for stimulating energy storage development. The proposed analytical sizing methods are innovative and offer engineering insights on how the optimal battery size varies with system characteristics. We illustrate the proposed methods using a practical building load profile and utility rate. The obtained results are compared with the ones from mathematical programming based methods for validation.

  20. Optimal design of tilt carrier frequency computer-generated holograms to measure aspherics.

    PubMed

    Peng, Jiantao; Chen, Zhe; Zhang, Xingxiang; Fu, Tianjiao; Ren, Jianyue

    2015-08-20

    Computer-generated holograms (CGHs) provide an approach to high-precision metrology of aspherics. A CGH is designed under the trade-off among size, mapping distortion, and line spacing. This paper describes an optimal design method based on the parametric model for tilt carrier frequency CGHs placed outside the interferometer focus points. Under the condition of retaining an admissible size and a tolerable mapping distortion, the optimal design method has two advantages: (1) separating the parasitic diffraction orders to improve the contrast of the interferograms and (2) achieving the largest line spacing to minimize sensitivity to fabrication errors. This optimal design method is applicable to common concave aspherical surfaces and illustrated with CGH design examples.

  1. Utilization of group theory in studies of molecular clusters

    NASA Astrophysics Data System (ADS)

    Ocak, Mahir E.

    The structure of the molecular symmetry group of molecular clusters was analyzed, and it is shown that the molecular symmetry group of a molecular cluster can be written as direct products and semidirect products of its subgroups. Symmetry adaptation of basis functions in direct product groups and semidirect product groups was considered in general, and the sequential symmetry adaptation procedure already known for direct product groups was extended to the case of semidirect product groups. By using the sequential symmetry adaptation procedure, a new method for calculating the VRT spectra of molecular clusters, named the Monomer Basis Representation (MBR) method, is developed. In the MBR method, calculations start with a single monomer with the purpose of obtaining an optimized basis for that monomer as a linear combination of primitive basis functions. Then, an optimized basis for each identical monomer is generated from the optimized basis of this monomer. By using the optimized bases of the monomers, a basis is generated for the solution of the full problem, and the VRT spectra of the cluster are obtained by using this basis. Since each monomer uses an optimized basis that is much smaller than the primitive basis from which it was generated, the MBR method leads to an exponential reduction in the size of the basis required for the calculations. Application of the MBR method is illustrated by calculating the VRT spectra of the water dimer using the SAPT-5st potential surface of Groenenboom et al. The results of the calculations are in good agreement with both the original calculations of Groenenboom et al. and the experimental results. Comparing the size of the optimized basis with the size of the primitive basis, it can be said that the method works efficiently. Because of its efficiency, the MBR method can be used for studies of clusters bigger than dimers. Thus, the MBR method can be used for studying many-body terms and for deriving accurate potential surfaces.

  2. The research on the mean shift algorithm for target tracking

    NASA Astrophysics Data System (ADS)

    CAO, Honghong

    2017-06-01

    The traditional mean shift algorithm for target tracking is effective and highly real-time, but it still has some shortcomings. It easily falls into a local optimum during tracking, its effectiveness is weak when the object moves fast, and the size of the tracking window never changes, so the method fails when the size of the moving object changes. As a result, we propose a new method: a particle swarm optimization algorithm is used to optimize the mean shift algorithm for target tracking, while SIFT (scale-invariant feature transform) and an affine transformation make the size of the tracking window adaptive. Finally, we evaluate the method with comparison experiments. Experimental results indicate that the proposed method can effectively track the object and that the size of the tracking window adapts as the object changes size.
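
    A single mean shift tracking step moves the window center to the weighted centroid of a target-likelihood map and repeats until the shift is small. The sketch below uses a toy weight image and a fixed square window; the PSO, SIFT, and affine-transformation extensions proposed in the paper are not included.

```python
# Basic mean shift iteration on a 2-D weight map; the data and window are toy assumptions.
import numpy as np

def mean_shift(weights, center, half_win, max_iter=20, eps=0.5):
    """Shift a window center toward the local weighted centroid until it converges."""
    cy, cx = center
    for _ in range(max_iter):
        y0 = max(int(cy) - half_win, 0); y1 = min(int(cy) + half_win + 1, weights.shape[0])
        x0 = max(int(cx) - half_win, 0); x1 = min(int(cx) + half_win + 1, weights.shape[1])
        patch = weights[y0:y1, x0:x1]
        ys, xs = np.mgrid[y0:y1, x0:x1]
        m = patch.sum()
        if m == 0:
            break
        ny, nx = (ys * patch).sum() / m, (xs * patch).sum() / m
        if np.hypot(ny - cy, nx - cx) < eps:
            break
        cy, cx = ny, nx
    return cy, cx

w = np.zeros((100, 100))
w[60:70, 55:65] = 1.0          # bright blob standing in for the target's backprojection
print(mean_shift(w, (50, 50), half_win=15))
```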

  3. Automating Structural Analysis of Spacecraft Vehicles

    NASA Technical Reports Server (NTRS)

    Hrinda, Glenn A.

    2004-01-01

    A major effort within NASA's vehicle analysis discipline has been to automate structural analysis and sizing optimization during conceptual design studies of advanced spacecraft. Traditional spacecraft structural sizing has involved detailed finite element analysis (FEA) requiring large degree-of-freedom (DOF) finite element models (FEM). Creation and analysis of these models can be time consuming and limit model size during conceptual designs. The goal is to find an optimal design that meets the mission requirements but produces the lightest structure. A structural sizing tool called HyperSizer has been successfully used in the conceptual design phase of a reusable launch vehicle and planetary exploration spacecraft. The program couples with FEA to enable system level performance assessments and weight predictions including design optimization of material selections and sizing of spacecraft members. The software's analysis capabilities are based on established aerospace structural methods for strength, stability and stiffness that produce adequately sized members and reliable structural weight estimates. The software also helps to identify potential structural deficiencies early in the conceptual design so changes can be made without wasted time. HyperSizer's automated analysis and sizing optimization increases productivity and brings standardization to a systems study. These benefits will be illustrated in examining two different types of conceptual spacecraft designed using the software. A hypersonic air breathing, single stage to orbit (SSTO), reusable launch vehicle (RLV) will be highlighted as well as an aeroshell for a planetary exploration vehicle used for aerocapture at Mars. By showing the two different types of vehicles, the software's flexibility will be demonstrated with an emphasis on reducing aeroshell structural weight. Member sizes, concepts and material selections will be discussed as well as analysis methods used in optimizing the structure. Analysis based on the HyperSizer structural sizing software will be discussed. Design trades required to optimize structural weight will be presented.

  4. Performance Analysis and Design Synthesis (PADS) computer program. Volume 3: User manual

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The two-fold purpose of the Performance Analysis and Design Synthesis (PADS) computer program is discussed. The program can size launch vehicles in conjunction with calculus-of-variations optimal trajectories and can also be used as a general purpose branched trajectory optimization program. For trajectory optimization alone or with sizing, PADS has two trajectory modules. The first trajectory module uses the method of steepest descent. The second module uses the method of quasi-linearization, which requires a starting solution from the first trajectory module.

  5. The role of size of input box, location of input box, input method and display size in Chinese handwriting performance and preference on mobile devices.

    PubMed

    Chen, Zhe; Rau, Pei-Luen Patrick

    2017-03-01

    This study presented two experiments on Chinese handwriting performance (time, accuracy, the number of protruding strokes and number of rewritings) and subjective ratings (mental workload, satisfaction, and preference) on mobile devices. Experiment 1 evaluated the effects of size of the input box, input method and display size on Chinese handwriting performance and preference. It was indicated that the optimal input sizes were 30.8 × 30.8 mm, 46.6 × 46.6 mm, 58.9 × 58.9 mm and 84.6 × 84.6 mm for devices with 3.5-inch, 5.5-inch, 7.0-inch and 9.7-inch display sizes, respectively. Experiment 2 proved the significant effects of location of the input box, input method and display size on Chinese handwriting performance and subjective ratings. It was suggested that the optimal location was central regardless of display size and input method. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Continuous Adaptive Population Reduction (CAPR) for Differential Evolution Optimization.

    PubMed

    Wong, Ieong; Liu, Wenjia; Ho, Chih-Ming; Ding, Xianting

    2017-06-01

    Differential evolution (DE) has been applied extensively in drug combination optimization studies in the past decade. It allows for identification of desired drug combinations with minimal experimental effort. This article proposes an adaptive population-sizing method for the DE algorithm. Our new method presents improvements in terms of efficiency and convergence over the original DE algorithm and constant stepwise population reduction-based DE algorithm, which would lead to a reduced number of cells and animals required to identify an optimal drug combination. The method continuously adjusts the reduction of the population size in accordance with the stage of the optimization process. Our adaptive scheme limits the population reduction to occur only at the exploitation stage. We believe that continuously adjusting for a more effective population size during the evolutionary process is the major reason for the significant improvement in the convergence speed of the DE algorithm. The performance of the method is evaluated through a set of unimodal and multimodal benchmark functions. In combining with self-adaptive schemes for mutation and crossover constants, this adaptive population reduction method can help shed light on the future direction of a completely parameter tune-free self-adaptive DE algorithm.
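
    The flavor of differential evolution with a shrinking population can be sketched compactly. The linear reduction schedule below is an illustrative stand-in for the paper's continuous adaptive scheme (CAPR), not a reimplementation of it, and the sphere function is just a toy objective.

```python
# Differential evolution with a population that shrinks over the run; illustrative schedule.
import numpy as np

def de_shrinking(obj, bounds, np_init=40, np_final=8, gens=100, F=0.7, CR=0.9, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(np_init, dim))
    fit = np.array([obj(x) for x in pop])
    for g in range(gens):
        target = int(round(np_init - (np_init - np_final) * g / (gens - 1)))
        for i in range(len(pop)):
            a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)          # DE/rand/1 mutation
            trial = np.where(rng.random(dim) < CR, mutant, pop[i])
            ft = obj(trial)
            if ft <= fit[i]:                                   # greedy selection
                pop[i], fit[i] = trial, ft
        keep = np.argsort(fit)[:target]                        # drop the worst members to shrink
        pop, fit = pop[keep], fit[keep]
    return pop[np.argmin(fit)], fit.min()

sphere = lambda x: float(np.sum(x**2))
print(de_shrinking(sphere, [(-5, 5)] * 5))
```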

  7. Taguchi optimization: Case study of gold recovery from amalgamation tailing by using froth flotation method

    NASA Astrophysics Data System (ADS)

    Sudibyo, Aji, B. B.; Sumardi, S.; Mufakir, F. R.; Junaidi, A.; Nurjaman, F.; Karna, Aziza, Aulia

    2017-01-01

    The gold amalgamation process was widely used to treat gold ore. This process produces tailings, or amalgamation solid waste, which still contain gold at 8-9 ppm. Froth flotation is one of the promising methods to beneficiate gold from this tailing. However, this process requires optimal conditions, which depend on the type of raw material. In this study, the Taguchi method was used to determine the optimum conditions for the froth flotation process. The Taguchi optimization shows that gold recovery was most strongly influenced by the particle size, with the best particle size at 150 mesh, followed by the potassium amyl xanthate concentration, pH, and pine oil concentration at 1133.98, 4535.92 and 68.04 g/ton of amalgamation tailing, respectively.

  8. [Analysis of visible extinction spectrum of particle system and selection of optimal wavelength].

    PubMed

    Sun, Xiao-gang; Tang, Hong; Yuan, Gui-bin

    2008-09-01

    In the total light scattering particle sizing technique, the extinction spectrum of a particle system contains information about the particle size and refractive index. The visible extinction spectra of common monomodal and bimodal R-R particle size distributions were computed, and the variation in the visible extinction spectrum with particle size and refractive index was analyzed. The wavelengths at which the second order differential extinction spectrum was discontinuous were selected as measurement wavelengths. Furthermore, the minimum and maximum wavelengths in the visible region were also selected as measurement wavelengths. The genetic algorithm was used as the inversion method under the dependent model. The computer simulation and experiments illustrate that it is feasible to analyze the extinction spectrum and use this selection method of the optimal wavelength in total light scattering particle sizing. The rough contour of the particle size distribution can be determined after the analysis of the visible extinction spectrum, so the search range of the particle size parameter is reduced in the optimal algorithm, and a more accurate inversion result can then be obtained using the selection method. The inversion results for monomodal and bimodal distributions remain satisfactory when 1% stochastic noise is added to the transmission extinction measurement values.

  9. Study of flutter related computational procedures for minimum weight structural sizing of advanced aircraft

    NASA Technical Reports Server (NTRS)

    Oconnell, R. F.; Hassig, H. J.; Radovcich, N. A.

    1976-01-01

    Results of a study of the development of flutter modules applicable to automated structural design of advanced aircraft configurations, such as a supersonic transport, are presented. Automated structural design is restricted to automated sizing of the elements of a given structural model. It includes a flutter optimization procedure; i.e., a procedure for arriving at a structure with minimum mass for satisfying flutter constraints. Methods of solving the flutter equation and computing the generalized aerodynamic force coefficients in the repetitive analysis environment of a flutter optimization procedure are studied, and recommended approaches are presented. Five approaches to flutter optimization are explained in detail and compared. An approach to flutter optimization incorporating some of the methods discussed is presented. Problems related to flutter optimization in a realistic design environment are discussed and an integrated approach to the entire flutter task is presented. Recommendations for further investigations are made. Results of numerical evaluations, applying the five methods of flutter optimization to the same design task, are presented.

  10. A logical approach to optimize the nanostructured lipid carrier system of irinotecan: efficient hybrid design methodology

    NASA Astrophysics Data System (ADS)

    Mohan Negi, Lalit; Jaggi, Manu; Talegaonkar, Sushama

    2013-01-01

    Development of an effective formulation involves careful optimization of a number of excipient and process variables. Sometimes the number of variables is so large that even the most efficient optimization designs require a very large number of trials which put stress on costs as well as time. A creative combination of a number of design methods leads to a smaller number of trials. This study was aimed at the development of nanostructured lipid carriers (NLCs) by using a combination of different optimization methods. A total of 11 variables were first screened using the Plackett-Burman design for their effects on formulation characteristics like size and entrapment efficiency. Four out of 11 variables were found to have insignificant effects on the formulation parameters and hence were screened out. Out of the remaining seven variables, four (concentration of tween-80, lecithin, sodium taurocholate, and total lipid) were found to have significant effects on the size of the particles while the other three (phase ratio, drug to lipid ratio, and sonication time) had a higher influence on the entrapment efficiency. The first four variables were optimized for their effect on size using the Taguchi L9 orthogonal array. The optimized values of the surfactants and lipids were kept constant for the next stage, where the sonication time, phase ratio, and drug:lipid ratio were varied using the Box-Behnken design response surface method to optimize the entrapment efficiency. Finally, by performing only 38 trials, we have optimized 11 variables for the development of NLCs with a size of 143.52 ± 1.2 nm, zeta potential of -32.6 ± 0.54 mV, and 98.22 ± 2.06% entrapment efficiency.

  11. Limited-memory fast gradient descent method for graph regularized nonnegative matrix factorization.

    PubMed

    Guan, Naiyang; Wei, Lei; Luo, Zhigang; Tao, Dacheng

    2013-01-01

    Graph regularized nonnegative matrix factorization (GNMF) decomposes a nonnegative data matrix X ∈ R^(m×n) into the product of two lower-rank nonnegative factor matrices, i.e., W ∈ R^(m×r) and H ∈ R^(r×n) (r < min{m,n}), and aims to preserve the local geometric structure of the dataset by minimizing the squared Euclidean distance or Kullback-Leibler (KL) divergence between X and WH. The multiplicative update rule (MUR) is usually applied to optimize GNMF, but it suffers from slow convergence because it intrinsically advances one step along the rescaled negative gradient direction with a non-optimal step size. Recently, a multiple step-sizes fast gradient descent (MFGD) method has been proposed for optimizing NMF, which accelerates MUR by searching for the optimal step size along the rescaled negative gradient direction with Newton's method. However, the computational cost of MFGD is high because 1) the high-dimensional Hessian matrix is dense and costs too much memory, and 2) the Hessian inverse operator and its multiplication with the gradient cost too much time. To overcome these deficiencies of MFGD, we propose an efficient limited-memory FGD (L-FGD) method for optimizing GNMF. In particular, we apply the limited-memory BFGS (L-BFGS) method to directly approximate the multiplication of the inverse Hessian and the gradient for searching the optimal step size in MFGD. The preliminary results on real-world datasets show that L-FGD is more efficient than both MFGD and MUR. To evaluate the effectiveness of L-FGD, we validate its clustering performance for optimizing KL-divergence based GNMF on two popular face image datasets, ORL and PIE, and two text corpora, Reuters and TDT2. The experimental results confirm the effectiveness of L-FGD by comparing it with representative GNMF solvers.
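
    For background, the multiplicative update rule (MUR) that the abstract takes as its baseline can be written in a few lines for plain Frobenius-norm NMF. The graph-regularization term and the L-FGD/MFGD step-size search are deliberately omitted, so the sketch below is generic context rather than the paper's method.

```python
# Plain NMF via the multiplicative update rule (fixed, rescaled-gradient step); illustrative only.
import numpy as np

def nmf_mur(X, r, iters=200, seed=0, eps=1e-9):
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W, H = rng.random((m, r)), rng.random((r, n))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)     # update H along the rescaled negative gradient
        W *= (X @ H.T) / (W @ H @ H.T + eps)     # update W the same way
    return W, H

X = np.abs(np.random.default_rng(1).random((20, 30)))
W, H = nmf_mur(X, r=5)
print(np.linalg.norm(X - W @ H))                 # reconstruction error after the updates
```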

  12. Review of dynamic optimization methods in renewable natural resource management

    USGS Publications Warehouse

    Williams, B.K.

    1989-01-01

    In recent years, the applications of dynamic optimization procedures in natural resource management have proliferated. A systematic review of these applications is given in terms of a number of optimization methodologies and natural resource systems. The applicability of the methods to renewable natural resource systems is compared in terms of system complexity, system size, and precision of the optimal solutions. Recommendations are made concerning the appropriate methods for certain kinds of biological resource problems.

  13. Design of piezoelectric transformer for DC/DC converter with stochastic optimization method

    NASA Astrophysics Data System (ADS)

    Vasic, Dejan; Vido, Lionel

    2016-04-01

    Piezoelectric transformers have been adopted in recent years due to their many inherent advantages such as safety, no EMI problem, low housing profile, and high power density. The characteristics of piezoelectric transformers are well known when the load impedance is a pure resistor. However, when piezoelectric transformers are used in AC/DC or DC/DC converters, non-linear electronic circuits are connected before and after the transformer. Consequently, the output load is variable and, due to the output capacitance of the transformer, the optimal working point changes. This paper starts from modeling a piezoelectric transformer connected to a full wave rectifier in order to discuss the design constraints and configuration of the transformer. The optimization method adopted here uses the MOPSO algorithm (Multiple Objective Particle Swarm Optimization). We start with the formulation of the objective function and constraints; the results then give different sizes of the transformer and their characteristics. In other words, the method looks for the best size of the transformer that satisfies the optimal efficiency condition and is suitable for a variable load. Furthermore, size and efficiency are found to be a trade-off. This paper proposes a complete design procedure to find the minimum size of the PT needed. The design procedure is illustrated with a given specification. The PT derived from the proposed design procedure can guarantee both good efficiency and enough range for load variation.

  14. Synthesis of aircraft structures using integrated design and analysis methods

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Goetz, R. C.

    1978-01-01

    Systematic research to develop and validate methods for structural sizing of an airframe designed with the use of composite materials and active controls is reported. This research program includes procedures for computing aeroelastic loads, static and dynamic aeroelasticity, analysis and synthesis of active controls, and optimization techniques. Development of the methods is concerned with the most effective ways of integrating and sequencing the procedures in order to generate structural sizing and the associated active control system that is optimal with respect to a given merit function constrained by strength and aeroelasticity requirements.

  15. Optimization of LDL targeted nanostructured lipid carriers of 5-FU by a full factorial design.

    PubMed

    Andalib, Sare; Varshosaz, Jaleh; Hassanzadeh, Farshid; Sadeghi, Hojjat

    2012-01-01

    Nanostructured lipid carriers (NLC) are a mixture of solid and liquid lipids or oils as colloidal carrier systems that lead to an imperfect matrix structure with high ability for loading water soluble drugs. The aim of this study was to find the best proportion of liquid and solid lipids of different types for optimization of the production of LDL targeted NLCs used in carrying 5-Fu by the emulsification-solvent evaporation method. The influence of the lipid type, cholesterol or cholesteryl stearate for targeting LDL receptors, oil type (oleic acid or octanol), lipid and oil% on particle size, surface charge, drug loading efficiency, and drug released percent from the NLCs were studied by a full factorial design. The NLCs prepared by 54.5% cholesterol and 25% of oleic acid, showed optimum results with particle size of 105.8 nm, relatively high zeta potential of -25 mV, drug loading efficiency of 38% and release efficiency of about 40%. Scanning electron microscopy of nanoparticles confirmed the results of dynamic light scattering method used in measuring the particle size of NLCs. The optimization method by a full factorial statistical design is a useful optimization method for production of nanostructured lipid carriers.

  16. Research on optimal DEM cell size for 3D visualization of loess terraces

    NASA Astrophysics Data System (ADS)

    Zhao, Weidong; Tang, Guo'an; Ji, Bin; Ma, Lei

    2009-10-01

    In order to represent complex artificial terrains like the loess terraces of Shanxi Province in northwest China, a new 3D visualization method, the Terraces Elevation Incremental Visual Method (TEIVM), is put forth by the authors. 406 elevation points and 14 enclosed constrained lines are sampled according to the TIN-based Sampling Method (TSM) and the DEM Elevation Points and Lines Classification (DEPLC). The elevation points and constrained lines are used to construct Constrained Delaunay Triangulated Irregular Networks (CD-TINs) of the loess terraces. In order to visualize the loess terraces well with an optimal combination of cell size and Elevation Increment Value (EIV), the CD-TINs are converted to a Grid-based DEM (G-DEM) using different combinations of cell size and EIV with a linear interpolation scheme, the Bilinear Interpolation Method (BIM). Our case study shows that the new method can visualize the loess terrace steps very well when the combination of cell size and EIV is reasonable; the optimal combination is a cell size of 1 m with an EIV of 6 m. Results of the case study also show that the cell size should be at most half of both the terraces' average width and the average vertical offset of the terrace steps in order to represent the planar shapes of the terrace surfaces and steps well, while the EIV should be larger than 4.6 times the terraces' average height. The TEIVM and the results above are of great significance for the highly refined visualization of artificial terrains like loess terraces.
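
    The Bilinear Interpolation Method (BIM) used to resample heights onto grid cells can be illustrated directly. The sketch below interpolates a height at fractional grid coordinates on a toy DEM; the grid values and query point are assumptions, not the loess-terrace data.

```python
# Bilinear interpolation of a DEM height at fractional row/column coordinates; toy grid.
import numpy as np

def bilinear(dem, y, x):
    """Interpolate the DEM height at fractional (row, col) coordinates (y, x)."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, dem.shape[0] - 1), min(x0 + 1, dem.shape[1] - 1)
    fy, fx = y - y0, x - x0
    top = dem[y0, x0] * (1 - fx) + dem[y0, x1] * fx
    bot = dem[y1, x0] * (1 - fx) + dem[y1, x1] * fx
    return top * (1 - fy) + bot * fy

dem = np.arange(25, dtype=float).reshape(5, 5)       # toy 5x5 elevation grid
print(bilinear(dem, 2.5, 1.25))                      # height between grid nodes
```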

  17. Evolution of Query Optimization Methods

    NASA Astrophysics Data System (ADS)

    Hameurlain, Abdelkader; Morvan, Franck

    Query optimization is the most critical phase in query processing. In this paper, we try to describe synthetically the evolution of query optimization methods from uniprocessor relational database systems to data Grid systems through parallel, distributed and data integration systems. We point out a set of parameters to characterize and compare query optimization methods, mainly: (i) size of the search space, (ii) type of method (static or dynamic), (iii) modification types of execution plans (re-optimization or re-scheduling), (iv) level of modification (intra-operator and/or inter-operator), (v) type of event (estimation errors, delay, user preferences), and (vi) nature of decision-making (centralized or decentralized control).

  18. Multi-Objective Programming for Lot-Sizing with Quantity Discount

    NASA Astrophysics Data System (ADS)

    Kang, He-Yau; Lee, Amy H. I.; Lai, Chun-Mei; Kang, Mei-Sung

    2011-11-01

    Multi-objective programming (MOP) is one of the popular methods for decision making in a complex environment. In a MOP, decision makers try to optimize two or more objectives simultaneously under various constraints. A complete optimal solution seldom exists, and a Pareto-optimal solution is usually used. Some methods, such as the weighting method, which assigns priorities to the objectives and sets aspiration levels for them, are used to derive a compromise solution. The ɛ-constraint method is a modified weighting method: one objective function is optimized while the other objective functions are treated as constraints and incorporated in the constraint part of the model. This research considers a stochastic lot-sizing problem with multiple suppliers and quantity discounts. The model is then transformed into a mixed integer programming (MIP) model based on the ɛ-constraint method. An illustrative example demonstrates the practicality of the proposed model. The results show that the model is an effective and accurate tool for determining a manufacturer's replenishment from multiple suppliers over multiple periods.
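
    The ε-constraint idea itself is easy to demonstrate on a toy two-objective purchase decision with a quantity discount. The costs, discount breaks, demand, and second objective below are made-up numbers, and brute-force enumeration replaces the MIP solver, so this only illustrates how sweeping ε traces different compromise solutions.

```python
# Toy ε-constraint sweep: minimize purchase cost subject to a bound on a second objective.
from itertools import product

demand = 100

def cost(q1, q2):                      # objective 1: purchase cost with quantity discounts
    p1 = 9.0 if q1 >= 60 else 10.0     # supplier 1 discounts at 60 units
    p2 = 8.5 if q2 >= 80 else 9.5      # supplier 2 discounts at 80 units
    return p1 * q1 + p2 * q2

def lead_time(q1, q2):                 # objective 2, bounded by ε: weighted lead time
    return 0.02 * q1 + 0.05 * q2

for eps in (3.0, 4.0, 5.0):            # sweep ε to trace part of the Pareto front
    feasible = [(q1, q2) for q1, q2 in product(range(0, 101, 10), repeat=2)
                if q1 + q2 >= demand and lead_time(q1, q2) <= eps]
    best = min(feasible, key=lambda q: cost(*q))
    print(eps, best, cost(*best))
```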

  19. Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data.

    PubMed

    Kim, Sehwi; Jung, Inkyung

    2017-01-01

    The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes. However, little research has been done on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed to use the Gini coefficient to optimize the maximum reported cluster size. However, the method has been developed and evaluated only for the Poisson model. We adopt the Gini coefficient to be applicable to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model. The Gini coefficient most often picked the optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes with very high accuracy. It seems that we can obtain a more refined collection of clusters by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns.
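
    A Gini coefficient over the shares carried by the reported clusters is straightforward to compute. The sketch below uses toy shares for a few candidate maximum reported cluster sizes; the actual statistic in the cited work is computed from the scan statistic's cluster contributions, which are not reproduced here.

```python
# Gini coefficient over cluster "shares" for a few candidate maximum cluster sizes; toy numbers.
import numpy as np

def gini(values):
    x = np.sort(np.asarray(values, dtype=float))   # ascending order
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# candidate maximum reported cluster sizes and the shares their detected clusters carry
for max_size, shares in {10: [5, 4, 3, 3], 25: [9, 3, 2, 1], 50: [14, 1]}.items():
    print(max_size, round(gini(shares), 3))        # a higher Gini marks a more uneven collection
```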

  20. Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data

    PubMed Central

    Kim, Sehwi

    2017-01-01

    The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes. However, little research has been done on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed to use the Gini coefficient to optimize the maximum reported cluster size. However, the method has been developed and evaluated only for the Poisson model. We adopt the Gini coefficient to be applicable to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model. The Gini coefficient most often picked the optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes with very high accuracy. It seems that we can obtain a more refined collection of clusters by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns. PMID:28753674

  1. Optimization of the fabrication of novel stealth PLA-based nanoparticles by dispersion polymerization using D-optimal mixture design

    PubMed Central

    Adesina, Simeon K.; Wight, Scott A.; Akala, Emmanuel O.

    2015-01-01

    Purpose Nanoparticle size is important in drug delivery. Clearance of nanoparticles by cells of the reticuloendothelial system has been reported to increase with increase in particle size. Further, nanoparticles should be small enough to avoid lung or spleen filtering effects. Endocytosis and accumulation in tumor tissue by the enhanced permeability and retention effect are also processes that are influenced by particle size. We present the results of studies designed to optimize crosslinked biodegradable stealth polymeric nanoparticles fabricated by dispersion polymerization. Methods Nanoparticles were fabricated using different amounts of macromonomer, initiators, crosslinking agent and stabilizer in a dioxane/DMSO/water solvent system. Confirmation of nanoparticle formation was by scanning electron microscopy (SEM). Particle size was measured by dynamic light scattering (DLS). D-optimal mixture statistical experimental design was used for the experimental runs, followed by model generation (Scheffe polynomial) and optimization with the aid of a computer software. Model verification was done by comparing particle size data of some suggested solutions to the predicted particle sizes. Results and Conclusion Data showed that average particle sizes follow the same trend as predicted by the model. Negative terms in the model corresponding to the crosslinking agent and stabilizer indicate the important factors for minimizing particle size. PMID:24059281

  2. Synthesis of MSnO₃ (M = Ba, Sr) nanoparticles by reverse micelle method and particle size distribution analysis by whole powder pattern modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed, Jahangeer; Blakely, Colin K.; Bruno, Shaun R.

    2012-09-15

    Highlights: BaSnO₃ and SrSnO₃ nanoparticles were synthesized using the reverse micelle method. Particle size and size distribution were studied by whole powder pattern modeling. The nanoparticles are of optimal size for investigation in dye-sensitized solar cells. -- Abstract: Light-to-electricity conversion efficiency in dye-sensitized solar cells critically depends not only on the dye molecule, semiconducting material and redox shuttle selection, but also on the particle size and particle size distribution of the semiconducting photoanode. In this study, nanocrystalline BaSnO₃ and SrSnO₃ particles have been synthesized using the microemulsion method. The particle size distribution was studied by whole powder pattern modeling, which confirmed a narrow particle size distribution with an average size of 18.4 ± 8.3 nm for SrSnO₃ and 15.8 ± 4.2 nm for BaSnO₃. These values are in close agreement with the results of transmission electron microscopy. The prepared materials have an optimal microstructure for subsequent investigation in dye-sensitized solar cells.

  3. Ortho Image and DTM Generation with Intelligent Methods

    NASA Astrophysics Data System (ADS)

    Bagheri, H.; Sadeghian, S.

    2013-10-01

    Artificial intelligence algorithms are now being considered in GIS and remote sensing. Genetic algorithms and artificial neural networks are two intelligent methods used to optimize image processing tasks such as edge extraction; these algorithms are very useful for solving complex problems. In this paper, the ability and application of genetic algorithms and artificial neural networks in geospatial production processes, such as geometric modelling of satellite images for orthophoto generation and height interpolation in raster Digital Terrain Model production, are discussed. First, the geometric potential of Ikonos-2 and Worldview-2 imagery was tested with rational functions and 2D & 3D polynomials. Comprehensive experiments were also carried out to evaluate the viability of the genetic algorithm for optimizing rational functions and 2D & 3D polynomials. Considering the quality of the Ground Control Points, the accuracy (RMSE) obtained with the genetic algorithm and the 3D polynomial method for the Ikonos-2 Geo image was 0.508 pixel sizes, and the accuracy (RMSE) obtained with the genetic algorithm and the rational function method for the Worldview-2 image was 0.930 pixel sizes. As a further artificial intelligence optimization method, neural networks were used. Using a perceptron network on the Worldview-2 image, a result of 0.84 pixel sizes was obtained with 4 neurons in the middle layer. The conclusion was that artificial intelligence algorithms make it possible to optimize the existing models and obtain better results than the usual ones. Finally, the artificial intelligence methods, genetic algorithms as well as neural networks, were examined on sample data for optimizing interpolation and generating Digital Terrain Models. The results were compared with existing conventional methods, and it appeared that these methods have a high capacity for height interpolation and that using these networks to interpolate and optimize inverse-distance weighting methods leads to highly accurate estimation of heights.

  4. Development of a fast and feasible spectrum modeling technique for flattening filter free beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Woong; Bush, Karl; Mok, Ed

    Purpose: To develop a fast and robust technique for the determination of optimized photon spectra for flattening filter free (FFF) beams to be applied in convolution/superposition dose calculations. Methods: A two-step optimization method was developed to derive optimal photon spectra for FFF beams. In the first step, a simple functional form of the photon spectra proposed by Ali ['Functional forms for photon spectra of clinical linacs,' Phys. Med. Biol. 57, 31-50 (2011)] is used to determine generalized shapes of the photon spectra. In this method, the photon spectra were defined for the ranges of field sizes to consider the variations of the contributions of scattered photons with field size. Percent depth doses (PDDs) for each field size were measured and calculated to define a cost function, and a collapsed cone convolution (CCC) algorithm was used to calculate the PDDs. In the second step, the generalized functional form of the photon spectra was fine-tuned in a process whereby the weights of photon fluence became the optimizing free parameters. A line search method was used for the optimization and first order derivatives with respect to the optimizing parameters were derived from the CCC algorithm to enhance the speed of the optimization. The derived photon spectra were evaluated, and the dose distributions using the optimized spectra were validated. Results: The optimal spectra demonstrate small variations with field size for the 6 MV FFF beam and relatively large variations for the 10 MV FFF beam. The mean energies of the optimized 6 MV FFF spectra were decreased from 1.31 MeV for a 3 × 3 cm² field to 1.21 MeV for a 40 × 40 cm² field, and from 2.33 MeV at 3 × 3 cm² to 2.18 MeV at 40 × 40 cm² for the 10 MV FFF beam. The developed method could significantly improve the agreement between the calculated and measured PDDs. Root mean square differences on the optimized PDDs were observed to be 0.41% (3 × 3 cm²) down to 0.21% (40 × 40 cm²) for the 6 MV FFF beam, and 0.35% (3 × 3 cm²) down to 0.29% (40 × 40 cm²) for the 10 MV FFF beam. The first order derivatives from the functional form were found to speed up the computation by up to 20 times compared to the other techniques. Conclusions: The derived photon spectra resulted in good agreements with measured PDDs over the range of field sizes investigated. The suggested method is easily applicable to commercial radiation treatment planning systems since it only requires measured PDDs as input.
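    A compact way to see the second optimization step is to treat the calculated PDD as a weighted sum of precomputed monoenergetic depth-dose curves and to fit the fluence weights to the measured PDD. The sketch below does this with non-negative least squares instead of the authors' gradient-based line search inside a collapsed cone convolution engine; `pdd_mono` and `pdd_meas` are assumed inputs.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def fit_spectrum_weights(pdd_mono, pdd_meas):
        """Fit non-negative fluence weights w so that w @ pdd_mono ~= pdd_meas.
        pdd_mono: (n_energy_bins, n_depths) precomputed monoenergetic PDDs,
        pdd_meas: (n_depths,) measured PDD for one field size."""
        w, residual = nnls(pdd_mono.T, pdd_meas)
        spectrum = w / w.sum() if w.sum() > 0 else w   # unit-area spectral shape
        return spectrum, residual
    ```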

  5. Method to optimize patch size based on spatial frequency response in image rendering of the light field

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Wang, Yanan; Zhu, Zhenhao; Su, Jinhui

    2018-05-01

    A focused plenoptic camera can effectively transform angular and spatial information to yield a refocused rendered image with high resolution. However, choosing a proper patch size poses a significant problem for the image-rendering algorithm. By using a spatial frequency response measurement, a method to obtain a suitable patch size is presented. By evaluating the spatial frequency response curves, the optimized patch size can be obtained quickly and easily. Moreover, the range of depth over which images can be rendered without artifacts can be estimated. Experiments show that the results of the image rendered based on frequency response measurement are in accordance with the theoretical calculation, which indicates that this is an effective way to determine the patch size. This study may provide support to light-field image rendering.

  6. A thermally driven differential mutation approach for the structural optimization of large atomic systems

    NASA Astrophysics Data System (ADS)

    Biswas, Katja

    2017-09-01

    A computational method is presented which is capable of obtaining low lying energy structures of topological amorphous systems. The method merges a differential mutation genetic algorithm with simulated annealing. This is done by incorporating a thermal selection criterion, which makes it possible to reliably obtain low lying minima with just a small population size and is suitable for multimodal structural optimization. The method is tested on the structural optimization of amorphous graphene from unbiased atomic starting configurations. With just a population size of six systems, energetically very low structures are obtained. While each of the structures represents a distinctly different arrangement of the atoms, their properties, such as energy, distribution of rings, radial distribution function, coordination number, and distribution of bond angles, are very similar.
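    A minimal, hedged sketch of the core idea on a generic continuous energy function (not an amorphous-carbon potential): differential mutation proposes trial configurations from population differences, and a Metropolis-style thermal criterion decides acceptance, so even a population of a handful of members can escape local minima.

    ```python
    import numpy as np

    def thermal_differential_mutation(energy, pop, n_gen=2000, F=0.7,
                                      T0=1.0, cooling=0.999, rng=None):
        """Differential-mutation search with a thermal (Metropolis) selection
        criterion; pop is an (n_members, n_dof) array of configurations."""
        rng = np.random.default_rng(rng)
        E = np.array([energy(x) for x in pop])
        T = T0
        for _ in range(n_gen):
            for i in range(len(pop)):
                a, b, c = rng.choice(len(pop), size=3, replace=False)
                trial = pop[a] + F * (pop[b] - pop[c])   # differential mutation
                E_trial = energy(trial)
                dE = E_trial - E[i]
                # Accept downhill moves always, uphill moves with Boltzmann probability.
                if dE < 0 or rng.random() < np.exp(-dE / T):
                    pop[i], E[i] = trial, E_trial
            T *= cooling
        best = np.argmin(E)
        return pop[best], E[best]

    def energy(x):                               # Rastrigin-like test surface
        return float(np.sum(x**2 - 10*np.cos(2*np.pi*x) + 10))

    pop0 = np.random.default_rng(0).uniform(-5, 5, size=(6, 10))
    best_x, best_E = thermal_differential_mutation(energy, pop0.copy())
    ```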

  7. Design Methods and Optimization for Morphing Aircraft

    NASA Technical Reports Server (NTRS)

    Crossley, William A.

    2005-01-01

    This report provides a summary of accomplishments made during this research effort. The major accomplishments are in three areas. The first is the use of a multiobjective optimization strategy to help identify potential morphing features; it uses an existing aircraft sizing code to predict the weight, size and performance of several fixed-geometry aircraft that are Pareto-optimal based upon two competing aircraft performance objectives. The second area has been titled morphing as an independent variable and formulates the sizing of a morphing aircraft as an optimization problem in which the amounts of geometric morphing for various aircraft parameters are included as design variables. This second effort consumed most of the overall effort on the project. The third area involved a more detailed sizing study of a commercial transport aircraft that would incorporate a morphing wing to possibly enable transatlantic point-to-point passenger service.

  8. Accounting for between-study variation in incremental net benefit in value of information methodology.

    PubMed

    Willan, Andrew R; Eckermann, Simon

    2012-10-01

    Previous applications of value of information methods for determining optimal sample size in randomized clinical trials have assumed no between-study variation in mean incremental net benefit. By adopting a hierarchical model, we provide a solution for determining optimal sample size with this assumption relaxed. The solution is illustrated with two examples from the literature. Expected net gain increases with increasing between-study variation, reflecting the increased uncertainty in incremental net benefit and the reduced extent to which data are borrowed from previous evidence. Hence, a trial can become optimal even where current evidence would be sufficient under the assumption of no between-study variation. However, despite the expected net gain increasing, the optimal sample size in the illustrated examples is relatively insensitive to the amount of between-study variation. Further, percentage losses in expected net gain were small even when choosing sample sizes that reflected widely different between-study variation. Copyright © 2011 John Wiley & Sons, Ltd.

  9. Optimal word sizes for dissimilarity measures and estimation of the degree of dissimilarity between DNA sequences.

    PubMed

    Wu, Tiee-Jian; Huang, Ying-Hsueh; Li, Lung-An

    2005-11-15

    Several measures of DNA sequence dissimilarity have been developed. The purpose of this paper is 3-fold. Firstly, we compare the performance of several word-based or alignment-based methods. Secondly, we give a general guideline for choosing the window size and determining the optimal word sizes for several word-based measures at different window sizes. Thirdly, we use a large-scale simulation method to simulate data from the distribution of SK-LD (symmetric Kullback-Leibler discrepancy). These simulated data can be used to estimate the degree of dissimilarity beta between any pair of DNA sequences. Our study shows (1) for whole sequence similarity/dissimilarity identification the window size taken should be as large as possible, but probably not >3000, as restricted by CPU time in practice, (2) for each measure the optimal word size increases with window size, (3) when the optimal word size is used, SK-LD performance is superior in both simulation and real data analysis, (4) the estimate of beta based on SK-LD can be used to quickly filter out a large number of dissimilar sequences and speed up alignment-based database searches for similar sequences, and (5) this estimate is also applicable in local similarity comparison situations. For example, it can help in selecting oligo probes with high specificity and, therefore, has potential in probe design for microarrays. The SK-LD algorithm, the estimator of beta, and the simulation software are implemented in MATLAB code, and are available at http://www.stat.ncku.edu.tw/tjwu
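    The sketch below shows a hedged, simplified version of the word-based computation: k-word (k-mer) frequency vectors with pseudocounts and the symmetric Kullback-Leibler discrepancy between them; the sliding-window treatment and the paper's guidelines for choosing the word size are omitted.

    ```python
    import numpy as np
    from itertools import product

    def word_freqs(seq, k, alphabet="ACGT", pseudo=0.5):
        """k-word relative frequencies with pseudocounts (avoids log(0))."""
        words = ["".join(p) for p in product(alphabet, repeat=k)]
        counts = dict.fromkeys(words, pseudo)
        for i in range(len(seq) - k + 1):
            w = seq[i:i + k]
            if w in counts:
                counts[w] += 1
        freq = np.array([counts[w] for w in words], dtype=float)
        return freq / freq.sum()

    def sk_ld(seq1, seq2, k):
        """Symmetric Kullback-Leibler discrepancy between k-word distributions."""
        p, q = word_freqs(seq1, k), word_freqs(seq2, k)
        return float(np.sum((p - q) * np.log(p / q)))

    # Example: compare two short sequences with word size k = 2.
    d = sk_ld("ACGTACGTGGCC", "ACGTTTGTGGCA", k=2)
    ```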

  10. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    PubMed Central

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. At last, vector quantization coding was applied to the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality. PMID:23049544
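    As a hedged illustration of the codebook-training phase only, the sketch below runs plain K-means on flattened sub-blocks of a subband (the paper's modification uses an energy-based criterion, and the quadtree partitioning and LFD analysis are not reproduced); `blocks` is assumed to be an array of equally sized sub-blocks taken from one wavelet subband.

    ```python
    import numpy as np

    def train_codebook(blocks, n_codewords=64, n_iter=20, rng=None):
        """Plain K-means codebook training on flattened sub-blocks.
        blocks: (n_blocks, h, w) with n_blocks >= n_codewords."""
        rng = np.random.default_rng(rng)
        X = blocks.reshape(len(blocks), -1).astype(float)
        codebook = X[rng.choice(len(X), n_codewords, replace=False)]
        for _ in range(n_iter):
            # Assign each block to its nearest codeword (squared distance).
            d = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            labels = d.argmin(axis=1)
            for k in range(n_codewords):
                members = X[labels == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
        return codebook, labels
    ```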

  11. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    PubMed

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. At last, vector quantization coding was applied to the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.

  12. Evaluation of ultrasonics and optimized radiography for 2219-T87 aluminum weldments

    NASA Technical Reports Server (NTRS)

    Clotfelter, W. N.; Hoop, J. M.; Duren, P. C.

    1975-01-01

    Ultrasonic studies are described which are specifically directed toward the quantitative measurement of randomly located defects previously found in aluminum welds with radiography or with dye penetrants. Experimental radiographic studies were also made to optimize techniques for welds of the thickness range to be used in fabricating the External Tank of the Space Shuttle. Conventional and innovative ultrasonic techniques were applied to the flaw size measurement problem. Advantages and disadvantages of each method are discussed. Flaw size data obtained ultrasonically were compared to radiographic data and to real flaw sizes determined by destructive measurements. Considerable success was achieved with pulse echo techniques and with 'pitch and catch' techniques. The radiographic work described demonstrates that careful selection of film exposure parameters for a particular application must be made to obtain optimized flaw detectability. Thus, film exposure techniques can be improved even though radiography is an old weld inspection method.

  13. Cryogenic Tank Structure Sizing With Structural Optimization Method

    NASA Technical Reports Server (NTRS)

    Wang, J. T.; Johnson, T. F.; Sleight, D. W.; Saether, E.

    2001-01-01

    Structural optimization methods in MSC/NASTRAN are used to size substructures and to reduce the weight of a composite sandwich cryogenic tank for future launch vehicles. Because the feasible design space of this problem is non-convex, many local minima are found. This non-convex problem is investigated in detail by conducting a series of analyses along a design line connecting two feasible designs. Strain constraint violations occur for some design points along the design line. Since MSC/NASTRAN uses gradient-based optimization procedures, it does not guarantee that the lowest weight design can be found. In this study, a simple procedure is introduced to create a new starting point based on design variable values from previous optimization analyses. Optimization analysis using this new starting point can produce a lower weight design. Detailed inputs for setting up the MSC/NASTRAN optimization analysis and final tank design results are presented in this paper. Approaches for obtaining further weight reductions are also discussed.

  14. A sequential linear optimization approach for controller design

    NASA Technical Reports Server (NTRS)

    Horta, L. G.; Juang, J.-N.; Junkins, J. L.

    1985-01-01

    A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive-structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.

  15. Packing Optimization of Sorbent Bed Containing Dissimilar and Irregular Shaped Media

    NASA Technical Reports Server (NTRS)

    Holland, Nathan; Guttromson, Jayleen; Piowaty, Hailey

    2011-01-01

    The Fire Cartridge is a packed bed air filter with two different and separate layers of media designed to provide respiratory protection from combustion products after a fire event on the International Space Station (ISS). The first layer of media is a carbon monoxide catalyst and the second layer of media is universal carbon. During development of Fire Cartridge prototypes, the two media beds were noticed to have shifted inside the cartridge. The movement of media within the cartridge can cause mixing of the bed layers, air voids, and channeling, which could cause preferential air flow and allow contaminants to pass through without removal. An optimally packed bed mitigates these risks and ensures effective removal of contaminants from the air. In order to optimally pack each layer, vertical, horizontal, and orbital agitations were investigated and a packed bulk density was calculated for each method. Packed bulk density must be calculated for each media type to accommodate variations in particle size, shape, and density. Additionally, the optimal vibration parameters must be re-evaluated for each batch of media due to variations in particle size distribution between batches. For this application it was determined that orbital vibrations achieve an optimal pack density and the two media layers can be packed by the same method. Another finding was that media with a larger particle size distribution achieve an optimal bed pack more easily than media with a smaller particle size distribution.

  16. Analysis and optimization of cross-immunity epidemic model on complex networks

    NASA Astrophysics Data System (ADS)

    Chen, Chao; Zhang, Hao; Wu, Yin-Hua; Feng, Wei-Qiang; Zhang, Jian

    2015-09-01

    There are various infectious diseases in the real world, and these diseases often spread on a population network and compete for the limited pool of hosts. Cross-immunity is an important disease-competition pattern, which has attracted the attention of many researchers. In this paper, we establish an important conclusion for two cross-immunity epidemics on a network. When the infectious ability of the second epidemic takes a fixed value, the infectious ability of the first epidemic has an optimal value which minimizes the sum of the infection sizes of the two epidemics. We also propose a simple mathematical analysis method for the infection size of the second epidemic using the cavity method. The proposed method and conclusion are verified by simulation results. Minor inaccuracies in the existing mathematical methods for the infection size of the second epidemic, which have not been noticed in previous research, are also found and discussed in the experiments.

  17. Construction of pore network models for Berea and Fontainebleau sandstones using non-linear programing and optimization techniques

    NASA Astrophysics Data System (ADS)

    Sharqawy, Mostafa H.

    2016-12-01

    Pore network models (PNM) of Berea and Fontainebleau sandstones were constructed using nonlinear programming (NLP) and optimization methods. The constructed PNMs are considered a digital representation of the rock samples, built by matching the macroscopic properties of the porous media, and were used to conduct fluid transport simulations including single and two-phase flow. The PNMs consisted of cubic networks with randomly distributed pore and throat sizes and various connectivity levels. The networks were optimized such that the upper and lower bounds of the pore sizes are determined using the capillary tube bundle model and the Nelder-Mead method instead of guessing them, which reduces the optimization computational time significantly. An open-source PNM framework was employed to conduct transport and percolation simulations such as invasion percolation and Darcian flow. The PNM model was subsequently used to compute the macroscopic properties: porosity, absolute permeability, specific surface area, breakthrough capillary pressure, and primary drainage curve. The pore networks were optimized to allow for the simulation results of the macroscopic properties to be in excellent agreement with the experimental measurements. This study demonstrates that non-linear programming and optimization methods provide a promising method for pore network modeling when computed tomography imaging may not be readily available.
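    A hedged sketch of the optimization loop with a stand-in forward model: scipy's Nelder-Mead adjusts a few network parameters until predicted porosity and permeability match measured targets. The `predict_macroscopic` function and the target values are placeholders for the open-source pore-network simulator and laboratory data used in the study.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    targets = {"porosity": 0.20, "permeability_mD": 300.0}    # assumed measurements

    def predict_macroscopic(params):
        """Stand-in forward model mapping (mean pore radius um, spread,
        coordination number) to porosity and permeability; replace with a
        real pore-network simulation."""
        r_mean, spread, z = params
        porosity = 0.05 + 0.02 * r_mean * z / (1.0 + spread)
        permeability = 5.0 * (r_mean ** 2) * z
        return porosity, permeability

    def misfit(params):
        phi, k = predict_macroscopic(params)
        return ((phi - targets["porosity"]) / targets["porosity"]) ** 2 + \
               ((k - targets["permeability_mD"]) / targets["permeability_mD"]) ** 2

    result = minimize(misfit, x0=[3.0, 1.0, 4.0], method="Nelder-Mead")
    best_params = result.x
    ```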

  18. Strategies for global optimization in photonics design.

    PubMed

    Vukovic, Ana; Sewell, Phillip; Benson, Trevor M

    2010-10-01

    This paper reports on two important issues that arise in the context of the global optimization of photonic components where large problem spaces must be investigated. The first is the implementation of a fast simulation method and associated matrix solver for assessing particular designs and the second, the strategies that a designer can adopt to control the size of the problem design space to reduce runtimes without compromising the convergence of the global optimization tool. For this study an analytical simulation method based on Mie scattering and a fast matrix solver exploiting the fast multipole method are combined with genetic algorithms (GAs). The impact of the approximations of the simulation method on the accuracy and runtime of individual design assessments and the consequent effects on the GA are also examined. An investigation of optimization strategies for controlling the design space size is conducted on two illustrative examples, namely, 60° and 90° waveguide bends based on photonic microstructures, and their effectiveness is analyzed in terms of a GA's ability to converge to the best solution within an acceptable timeframe. Finally, the paper describes some particular optimized solutions found in the course of this work.

  19. Optimization and evaluation of asymmetric flow field-flow fractionation of silver nanoparticles.

    PubMed

    Loeschner, Katrin; Navratilova, Jana; Legros, Samuel; Wagner, Stephan; Grombe, Ringo; Snell, James; von der Kammer, Frank; Larsen, Erik H

    2013-01-11

    Asymmetric flow field-flow fractionation (AF(4)) in combination with on-line optical detection and mass spectrometry is one of the most promising methods for separation and quantification of nanoparticles (NPs) in complex matrices including food. However, to obtain meaningful results regarding especially the NP size distribution a number of parameters influencing the separation need to be optimized. This paper describes the development of a separation method for polyvinylpyrrolidone-stabilized silver nanoparticles (AgNPs) in aqueous suspension. Carrier liquid composition, membrane material, cross flow rate and spacer height were shown to have a significant influence on the recoveries and retention times of the nanoparticles. Focus time and focus flow rate were optimized with regard to minimum elution of AgNPs in the void volume. The developed method was successfully tested for injected masses of AgNPs from 0.2 to 5.0 μg. The on-line combination of AF(4) with detection methods including ICP-MS, light absorbance and light scattering was helpful because each detector provided different types of information about the eluting NP fraction. Differences in the time-resolved appearance of the signals obtained by the three detection methods were explained based on the physical origin of the signal. Two different approaches for conversion of retention times of AgNPs to their corresponding sizes and size distributions were tested and compared, namely size calibration with polystyrene nanoparticles (PSNPs) and calculations of size based on AF(4) theory. Fraction collection followed by transmission electron microscopy was performed to confirm the obtained size distributions and to obtain further information regarding the AgNP shape. Characteristics of the absorbance spectra were used to confirm the presence of non-spherical AgNP. Copyright © 2012 Elsevier B.V. All rights reserved.
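    One of the two size-conversion routes, calibration against polystyrene standards, can be sketched as fitting a smooth curve through the standards' retention times and diameters and evaluating it at the sample's retention times; the standards and times below are invented for illustration.

    ```python
    import numpy as np

    # Hypothetical polystyrene standards: retention time (min) vs diameter (nm).
    t_std = np.array([6.0, 9.5, 14.0, 19.0])
    d_std = np.array([20.0, 40.0, 80.0, 150.0])

    # In normal-mode FFF retention grows smoothly with size, so a low-order
    # polynomial in retention time is a common calibration choice.
    coef = np.polyfit(t_std, np.log(d_std), deg=2)

    def size_from_retention(t):
        """Convert AgNP retention times (min) to hydrodynamic diameters (nm)."""
        return np.exp(np.polyval(coef, t))

    diameters = size_from_retention(np.array([8.0, 12.0, 16.0]))
    ```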

  20. Optimal sensor placement for leak location in water distribution networks using genetic algorithms.

    PubMed

    Casillas, Myrna V; Puig, Vicenç; Garza-Castañón, Luis E; Rosich, Albert

    2013-11-04

    This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists in minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method have been proposed using a time horizon analysis, a distance-based scoring and considering different leaks sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach.
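    A hedged sketch of the combinatorial core (not the authors' full formulation): a small genetic algorithm evolves subsets of sensor nodes, with fitness counting leak pairs whose sensitivity signatures on the selected sensors are indistinguishable. The sensitivity matrix `S` (rows = candidate leaks, columns = candidate sensor nodes) is an assumed input from a hydraulic model.

    ```python
    import numpy as np

    def non_isolable_pairs(sensors, S, tol=1e-3):
        """Leak pairs whose sensitivity signatures on the chosen sensors are
        indistinguishable (a simplified isolability criterion)."""
        sig = S[:, sensors]
        n = len(sig)
        return sum(np.linalg.norm(sig[i] - sig[j]) < tol
                   for i in range(n) for j in range(i + 1, n))

    def random_subset(n_nodes, k, rng):
        return np.sort(rng.choice(n_nodes, size=k, replace=False))

    def ga_place_sensors(S, k, pop_size=40, n_gen=200, rng=None):
        """Tiny GA over k-subsets of candidate sensor nodes."""
        rng = np.random.default_rng(rng)
        n_nodes = S.shape[1]
        pop = [random_subset(n_nodes, k, rng) for _ in range(pop_size)]
        for _ in range(n_gen):
            pop.sort(key=lambda ind: non_isolable_pairs(ind, S))
            survivors = pop[: pop_size // 2]
            children = []
            for _ in range(pop_size - len(survivors)):
                a, b = rng.choice(len(survivors), size=2, replace=False)
                pool = np.union1d(survivors[a], survivors[b])     # crossover
                child = rng.choice(pool, size=k, replace=False)
                if rng.random() < 0.2:                            # mutation
                    child[rng.integers(k)] = rng.integers(n_nodes)
                child = np.unique(child)
                # Repair children that lost elements through duplicate removal.
                children.append(child if len(child) == k
                                else random_subset(n_nodes, k, rng))
            pop = survivors + children
        return min(pop, key=lambda ind: non_isolable_pairs(ind, S))
    ```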

  1. Optimal Sensor Placement for Leak Location in Water Distribution Networks Using Genetic Algorithms

    PubMed Central

    Casillas, Myrna V.; Puig, Vicenç; Garza-Castañón, Luis E.; Rosich, Albert

    2013-01-01

    This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists in minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method have been proposed using a time horizon analysis, a distance-based scoring and considering different leaks sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach. PMID:24193099

  2. Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem

    NASA Astrophysics Data System (ADS)

    Rahmalia, Dinita

    2017-08-01

    The Linear Transportation Problem (LTP) is a constrained optimization problem in which we want to minimize cost subject to the balance between supply and demand. Exact methods such as the northwest corner, Vogel, Russell, and minimal cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), to solve the linear transportation problem for any number of decision variables. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve the solution obtained by PSO.
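    A hedged sketch of one possible PSOGA encoding (the paper's exact representation may differ): particles carry the flattened shipment matrix, PSO velocity updates move them, a GA-style random mutation perturbs a few entries, and violations of the supply/demand balance are penalized.

    ```python
    import numpy as np

    def psoga_transport(cost, supply, demand, n_particles=30, n_iter=500,
                        w=0.7, c1=1.5, c2=1.5, p_mut=0.1, rng=None):
        """PSO with a GA mutation operator for the linear transportation problem.
        Decision variables: shipment quantities x[i, j] >= 0 (flattened)."""
        rng = np.random.default_rng(rng)
        m, n = cost.shape
        dim = m * n
        x = rng.uniform(0, supply.max(), size=(n_particles, dim))
        v = np.zeros_like(x)

        def fitness(flat):
            q = flat.reshape(m, n)
            penalty = (np.abs(q.sum(axis=1) - supply).sum()
                       + np.abs(q.sum(axis=0) - demand).sum())
            return (cost * q).sum() + 1e4 * penalty

        pbest = x.copy()
        pbest_f = np.array([fitness(p) for p in x])
        gbest = pbest[pbest_f.argmin()].copy()

        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, 0, None)
            # GA-style mutation: randomly perturb a few coordinates.
            mask = rng.random(x.shape) < p_mut
            x[mask] += rng.normal(0, supply.max() * 0.05, size=mask.sum())
            x = np.clip(x, 0, None)
            f = np.array([fitness(p) for p in x])
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest.reshape(m, n), pbest_f.min()

    # Small illustrative instance: 2 sources, 3 destinations (balanced).
    cost = np.array([[4.0, 6.0, 9.0], [5.0, 3.0, 7.0]])
    best_plan, best_cost = psoga_transport(cost, supply=np.array([50.0, 70.0]),
                                           demand=np.array([30.0, 40.0, 50.0]))
    ```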

  3. Weighted mining of massive collections of [Formula: see text]-values by convex optimization.

    PubMed

    Dobriban, Edgar

    2018-06-01

    Researchers in data-rich disciplines (think of computational genomics and observational cosmology) often wish to mine large bodies of [Formula: see text]-values looking for significant effects, while controlling the false discovery rate or family-wise error rate. Increasingly, researchers also wish to prioritize certain hypotheses, for example, those thought to have larger effect sizes, by upweighting, and to impose constraints on the underlying mining, such as monotonicity along a certain sequence. We introduce Princessp, a principled method for performing weighted multiple testing by constrained convex optimization. Our method elegantly allows one to prioritize certain hypotheses through upweighting and to discount others through downweighting, while constraining the underlying weights involved in the mining process. When the [Formula: see text]-values derive from monotone likelihood ratio families such as the Gaussian means model, the new method allows exact solution of an important optimal weighting problem previously thought to be non-convex and computationally infeasible. Our method scales to massive data set sizes. We illustrate the applications of Princessp on a series of standard genomics data sets and offer comparisons with several previous 'standard' methods. Princessp offers both ease of operation and the ability to scale to extremely large problem sizes. The method is available as open-source software from github.com/dobriban/pvalue_weighting_matlab (accessed 11 October 2017).

  4. Optimal Battery Sizing in Photovoltaic Based Distributed Generation Using Enhanced Opposition-Based Firefly Algorithm for Voltage Rise Mitigation

    PubMed Central

    Wong, Ling Ai; Shareef, Hussain; Mohamed, Azah; Ibrahim, Ahmad Asrul

    2014-01-01

    This paper presents the application of an enhanced opposition-based firefly algorithm to obtaining the optimal battery energy storage system (BESS) sizing in a photovoltaic-generation-integrated radial distribution network in order to mitigate the voltage rise problem. Initially, the performance of the original firefly algorithm is enhanced by utilizing opposition-based learning and introducing an inertia weight. After evaluating the performance of the enhanced opposition-based firefly algorithm (EOFA) with fifteen benchmark functions, it is then adopted to determine the optimal size for the BESS. Two optimization processes are conducted, where the first optimization aims to obtain the optimal battery output power on an hourly basis and the second optimization aims to obtain the optimal BESS capacity by considering the state of charge constraint of the BESS. The effectiveness of the proposed method is validated by applying the algorithm to the 69-bus distribution system and by comparing the performance of EOFA with the conventional firefly algorithm and the gravitational search algorithm. Results show that EOFA has the best comparative performance in terms of mitigating the voltage rise problem. PMID:25054184

  5. Optimal battery sizing in photovoltaic based distributed generation using enhanced opposition-based firefly algorithm for voltage rise mitigation.

    PubMed

    Wong, Ling Ai; Shareef, Hussain; Mohamed, Azah; Ibrahim, Ahmad Asrul

    2014-01-01

    This paper presents the application of an enhanced opposition-based firefly algorithm to obtaining the optimal battery energy storage system (BESS) sizing in a photovoltaic-generation-integrated radial distribution network in order to mitigate the voltage rise problem. Initially, the performance of the original firefly algorithm is enhanced by utilizing opposition-based learning and introducing an inertia weight. After evaluating the performance of the enhanced opposition-based firefly algorithm (EOFA) with fifteen benchmark functions, it is then adopted to determine the optimal size for the BESS. Two optimization processes are conducted, where the first optimization aims to obtain the optimal battery output power on an hourly basis and the second optimization aims to obtain the optimal BESS capacity by considering the state of charge constraint of the BESS. The effectiveness of the proposed method is validated by applying the algorithm to the 69-bus distribution system and by comparing the performance of EOFA with the conventional firefly algorithm and the gravitational search algorithm. Results show that EOFA has the best comparative performance in terms of mitigating the voltage rise problem.

  6. Heuristic Implementation of Dynamic Programming for Matrix Permutation Problems in Combinatorial Data Analysis

    ERIC Educational Resources Information Center

    Brusco, Michael J.; Kohn, Hans-Friedrich; Stahl, Stephanie

    2008-01-01

    Dynamic programming methods for matrix permutation problems in combinatorial data analysis can produce globally-optimal solutions for matrices up to size 30x30, but are computationally infeasible for larger matrices because of enormous computer memory requirements. Branch-and-bound methods also guarantee globally-optimal solutions, but computation…

  7. ONLINE MINIMIZATION OF VERTICAL BEAM SIZES AT APS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yipeng

    In this paper, online minimization of vertical beam sizes along the APS (Advanced Photon Source) storage ring is presented. A genetic algorithm (GA) was developed and employed for the online optimization in the APS storage ring. A total of 59 families of skew quadrupole magnets were employed as knobs to adjust the coupling and the vertical dispersion in the APS storage ring. Starting from initially zero current skew quadrupoles, small vertical beam sizes along the APS storage ring were achieved in a short optimization time of one hour. The optimization results from this method are briefly compared with the one from LOCO (Linear Optics from Closed Orbits) response matrix correction.

  8. Optimal deployment of thermal energy storage under diverse economic and climate conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeForest, Nicholas; Mendes, Gonçalo; Stadler, Michael

    2014-04-01

    This paper presents an investigation of the economic benefit of thermal energy storage (TES) for cooling, across a range of economic and climate conditions. Chilled water TES systems are simulated for a large office building in four distinct locations: Miami in the U.S.; Lisbon, Portugal; Shanghai, China; and Mumbai, India. Optimal system size and operating schedules are determined using the optimization model DER-CAM, such that total cost, including electricity and amortized capital costs, is minimized. The economic impacts of each optimized TES system are then compared to systems sized using a simple heuristic method, which sets system size as a fraction (50% and 100%) of the total on-peak summer cooling load. Results indicate that TES systems of all sizes can be effective in reducing annual electricity costs (5%-15%) and peak electricity consumption (13%-33%). The investigation also identifies a number of criteria which drive TES investment, including low capital costs, electricity tariffs with high power demand charges and prolonged cooling seasons. In locations where these drivers clearly exist, the heuristically sized systems capture much of the value of optimally sized systems; between 60% and 100% in terms of net present value. However, in instances where these drivers are less pronounced, the heuristic tends to oversize systems, and optimization becomes crucial to ensure economically beneficial deployment of TES, increasing the net present value of heuristically sized systems by as much as 10 times in some instances.
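    The heuristic sizing rule lends itself to a one-line calculation; the sketch below sizes the chilled-water store as 50% or 100% of the on-peak cooling load of a design summer day. The load profile and peak window are invented for illustration.

    ```python
    import numpy as np

    def heuristic_tes_size(cooling_load_kwh, on_peak_mask, fraction=0.5):
        """Heuristic TES capacity: a fraction (e.g. 50% or 100%) of the
        on-peak cooling load of a design summer day, in kWh-thermal."""
        return fraction * cooling_load_kwh[on_peak_mask].sum()

    # Hypothetical hourly cooling load (kWh) for a design day, 12:00-18:00 peak.
    load = np.array([80, 75, 70, 70, 75, 90, 120, 160, 200, 240, 270, 290,
                     310, 320, 325, 320, 300, 270, 230, 190, 150, 120, 100, 90],
                    dtype=float)
    on_peak = np.zeros(24, dtype=bool)
    on_peak[12:18] = True

    size_50 = heuristic_tes_size(load, on_peak, fraction=0.5)
    size_100 = heuristic_tes_size(load, on_peak, fraction=1.0)
    ```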

  9. Significance of the model considering mixed grain-size for inverse analysis of turbidites

    NASA Astrophysics Data System (ADS)

    Nakao, K.; Naruse, H.; Tokuhashi, S., Sr.

    2016-12-01

    A method for inverse analysis of turbidity currents is proposed for application to field observations. Estimating the initial conditions of such catastrophic events from field observations has been important for sedimentological research. For instance, there are various inverse analyses to estimate hydraulic conditions from topography observations of pyroclastic flows (Rossano et al., 1996), real-time monitored debris-flow events (Fraccarollo and Papa, 2000), tsunami deposits (Jaffe and Gelfenbaum, 2007) and ancient turbidites (Falcini et al., 2009). These inverse analyses need forward models, and most turbidity current models employ uniform grain-size particles. Turbidity currents, however, are best characterized by variation in grain-size distribution. Although numerical models with mixed grain-size particles exist, their computational cost makes application to natural examples difficult (Lesshaft et al., 2011). Here we extend a turbidity current model based on the non-steady 1D shallow-water equations to mixed grain-size particles at low computational cost and apply the model to inverse analysis. In this study, we compared two forward models considering uniform and mixed grain-size particles, respectively. We adopted an inverse analysis based on the Simplex method that optimizes the initial conditions (thickness, depth-averaged velocity and depth-averaged volumetric concentration of a turbidity current) with a multi-point start, and employed the result of the forward model [h: 2.0 m, U: 5.0 m/s, C: 0.01%] as reference data. The results show that inverse analysis using the mixed grain-size model recovers the known initial condition of the reference data even when the optimization starts far from the true solution, whereas inverse analysis using the uniform grain-size model requires starting parameters within quite a narrow range near the solution. The uniform grain-size model often converges to a local optimum that differs significantly from the true solution. In conclusion, we propose a method of optimization based on the model considering mixed grain-size particles, and show its application to examples of turbidites in the Kiyosumi Formation, Boso Peninsula, Japan.

  10. Estimating Most Productive Scale Size in Data Envelopment Analysis with Integer Value Data

    NASA Astrophysics Data System (ADS)

    Dwi Sari, Yunita; Angria S, Layla; Efendi, Syahril; Zarlis, Muhammad

    2018-01-01

    The most productive scale size (MPSS) is a measurement that states how resources should be organized and utilized to achieve optimal results. MPSS can be used as a benchmark for the success of an industry or company in producing goods or services. To estimate MPSS, each decision making unit (DMU) should pay attention to its level of input-output efficiency; with the data envelopment analysis (DEA) method, a DMU can identify reference units that help to find the causes of and remedies for inefficiency and thereby optimize productivity, which is the main advantage in managerial applications. Therefore, DEA is chosen for estimating MPSS, focusing on integer-valued input data with the CCR model and the BCC model. The purpose of this research is to find the best solution for estimating MPSS with integer-valued input data in the DEA method.
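    A hedged sketch of the DEA building block: the input-oriented CCR envelopment LP for one DMU, solved with scipy's linprog (the BCC model adds the convexity constraint sum(lambda) = 1, and the paper's integer-data treatment and MPSS identification are not reproduced). The toy data are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def ccr_efficiency(X, Y, j0):
        """Input-oriented CCR efficiency of DMU j0.
        X: (n_inputs, n_dmu) input matrix, Y: (n_outputs, n_dmu) output matrix."""
        n_in, n_dmu = X.shape
        n_out = Y.shape[0]
        # Decision vector: [theta, lambda_1 ... lambda_n]
        c = np.zeros(1 + n_dmu)
        c[0] = 1.0                                  # minimize theta
        A_ub, b_ub = [], []
        for i in range(n_in):                       # sum_j lam_j x_ij <= theta * x_i,j0
            A_ub.append(np.concatenate(([-X[i, j0]], X[i, :])))
            b_ub.append(0.0)
        for r in range(n_out):                      # sum_j lam_j y_rj >= y_r,j0
            A_ub.append(np.concatenate(([0.0], -Y[r, :])))
            b_ub.append(-Y[r, j0])
        bounds = [(0, None)] * (1 + n_dmu)
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=bounds, method="highs")
        return res.x[0]

    # Toy example: 2 inputs, 1 output, 4 DMUs.
    X = np.array([[4.0, 7.0, 8.0, 4.0],
                  [3.0, 3.0, 1.0, 2.0]])
    Y = np.array([[1.0, 1.0, 1.0, 1.0]])
    scores = [ccr_efficiency(X, Y, j) for j in range(X.shape[1])]
    ```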

  11. Extensive Diminution of Particle Size and Amorphization of a Crystalline Drug Attained by Eminent Technology of Solid Dispersion: A Comparative Study.

    PubMed

    Singh, Gurjeet; Sharma, Shailesh; Gupta, Ghanshyam Das

    2017-07-01

    The present study emphasized the use of solid dispersion technology to overcome the drawbacks associated with the highly effective antihypertensive drug telmisartan using different polymers (poloxamer 188 and locust bean gum) and methods (modified solvent evaporation and lyophilization). It is based on the comparison between selected polymers and methods for enhancing solubility through particle size reduction. The results showed different profiles for particle size, solubility, and dissolution of formulated amorphous systems, depicting the great influence of the polymer/method used. The resulting amorphous solid dispersions were characterized using x-ray diffraction (XRD), differential scanning calorimetry, scanning electron microscopy (SEM), transmission electron microscopy (TEM), and particle size analysis. The optimized solid dispersion (TEL 19) prepared with modified locust bean gum using the lyophilization technique showed a reduced particle size of 184.5 ± 3.7 nm and a maximum solubility of 702 ± 5.47 μg/mL in water, which is quite high as compared to the pure drug (≤1 μg/mL). This study showed that the appropriate selection of carrier may lead to the development of a solid dispersion formulation with desired solubility and dissolution profiles. The optimized dispersion was later formulated into fast-dissolving tablets, and further optimization was done to obtain tablets with the desired properties.

  12. A comparative review of methods for comparing means using partially paired data.

    PubMed

    Guo, Beibei; Yuan, Ying

    2017-06-01

    In medical experiments with the objective of testing the equality of two means, data are often partially paired by design or because of missing data. The partially paired data represent a combination of paired and unpaired observations. In this article, we review and compare nine methods for analyzing partially paired data, including the two-sample t-test, paired t-test, corrected z-test, weighted t-test, pooled t-test, optimal pooled t-test, multiple imputation method, mixed model approach, and the test based on a modified maximum likelihood estimate. We compare the performance of these methods through extensive simulation studies that cover a wide range of scenarios with different effect sizes, sample sizes, and correlations between the paired variables, as well as true underlying distributions. The simulation results suggest that when the sample size is moderate, the test based on the modified maximum likelihood estimator is generally superior to the other approaches when the data is normally distributed and the optimal pooled t-test performs the best when the data is not normally distributed, with well-controlled type I error rates and high statistical power; when the sample size is small, the optimal pooled t-test is to be recommended when both variables have missing data and the paired t-test is to be recommended when only one variable has missing data.

  13. Optimal synthesis and characterization of Ag nanofluids by electrical explosion of wires in liquids

    PubMed Central

    2011-01-01

    Silver nanoparticles were produced by electrical explosion of wires in liquids with no additive. In this study, we optimized the fabrication method and examined the effects of manufacturing process parameters. Morphology and size of the Ag nanoparticles were determined using transmission electron microscopy and field-emission scanning electron microscopy. Size and zeta potential were analyzed using dynamic light scattering. A response optimization technique showed that optimal conditions were achieved when capacitance was 30 μF, wire length was 38 mm, liquid volume was 500 mL, and the liquid type was deionized water. The average Ag nanoparticle size in water was 118.9 nm and the zeta potential was -42.5 mV. The critical heat flux of the 0.001-vol.% Ag nanofluid was higher than pure water. PMID:21711757

  14. Heterogeneous quantum computing for satellite constellation optimization: solving the weighted k-clique problem

    NASA Astrophysics Data System (ADS)

    Bass, Gideon; Tomlin, Casey; Kumar, Vaibhaw; Rihaczek, Pete; Dulny, Joseph, III

    2018-04-01

    NP-hard optimization problems scale very rapidly with problem size, becoming unsolvable with brute force methods, even with supercomputing resources. Typically, such problems have been approximated with heuristics. However, these methods still take a long time and are not guaranteed to find an optimal solution. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. Current quantum annealing (QA) devices are designed to solve difficult optimization problems, but they are limited by hardware size and qubit connectivity restrictions. We present a novel heterogeneous computing stack that combines QA and classical machine learning, allowing the use of QA on problems larger than the hardware limits of the quantum device. The results presented come from experiments on a real-world problem formulated as the weighted k-clique problem. Through this experiment, we provide insight into the state of quantum machine learning.

  15. Configuration-shape-size optimization of space structures by material redistribution

    NASA Technical Reports Server (NTRS)

    Vandenbelt, D. N.; Crivelli, L. A.; Felippa, C. A.

    1993-01-01

    This project investigates the configuration-shape-size optimization (CSSO) of orbiting and planetary space structures. The project embodies three phases. In the first one the material-removal CSSO method introduced by Kikuchi and Bendsoe (KB) is further developed to gain understanding of finite element homogenization techniques as well as associated constrained optimization algorithms that must carry along a very large number (thousands) of design variables. In the CSSO-KB method an optimal structure is 'carved out' of a design domain initially filled with finite elements, by allowing perforations (microholes) to develop, grow and merge. The second phase involves 'materialization' of space structures from the void, thus reversing the carving process. The third phase involves analysis of these structures for construction and operational constraints, with emphasis in packaging and deployment. The present paper describes progress in selected areas of the first project phase and the start of the second one.

  16. Optimization design combined with coupled structural-electrostatic analysis for the electrostatically controlled deployable membrane reflector

    NASA Astrophysics Data System (ADS)

    Liu, Chao; Yang, Guigeng; Zhang, Yiqun

    2015-01-01

    The electrostatically controlled deployable membrane reflector (ECDMR) is a promising scheme to construct large size and high precision space deployable reflector antennas. This paper presents a novel design method for the large size and small F/D ECDMR considering the coupled structure-electrostatic problem. First, the fully coupled structural-electrostatic system is described by a three field formulation, in which the structure and the passive electric field are modeled by the finite element method, and the deformation of the electrostatic domain is predicted by a finite element formulation of a fictitious elastic structure. A residual formulation of the structural-electrostatic field finite element model is established and solved by the Newton-Raphson method. The coupled structural-electrostatic analysis procedure is summarized. Then, with the aid of this coupled analysis procedure, an integrated optimization method of membrane shape accuracy and stress uniformity is proposed, which is divided into inner and outer iterative loops. The initial state of relatively high shape accuracy and uniform stress distribution is achieved by applying the uniform prestress on the membrane design shape and optimizing the voltages, in which the optimal voltage is computed by a sensitivity analysis. The shape accuracy is further improved by the iterative prestress modification using the reposition balance method. Finally, the results of the uncoupled and coupled methods are compared and the proposed optimization method is applied to design an ECDMR. The results validate the effectiveness of the proposed method.

  17. Combining kernel matrix optimization and regularization to improve particle size distribution retrieval

    NASA Astrophysics Data System (ADS)

    Ma, Qian; Xia, Houping; Xu, Qiang; Zhao, Lei

    2018-05-01

    A new method combining Tikhonov regularization and kernel matrix optimization by multi-wavelength incidence is proposed for retrieving particle size distribution (PSD) in an independent model with improved accuracy and stability. In comparison to individual regularization or multi-wavelength least squares, the proposed method exhibited better anti-noise capability, higher accuracy and stability. While standard regularization typically makes use of the unit matrix, it is not universal for different PSDs, particularly for Junge distributions. Thus, a suitable regularization matrix was chosen by numerical simulation, with the second-order differential matrix found to be appropriate for most PSD types.
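    A hedged sketch of the retrieval step once the kernel matrix has been assembled from the multi-wavelength measurements: Tikhonov-regularized least squares with a second-order difference matrix as the regularization operator (the choice the simulations favored), followed by clipping to non-negative values as a simple physical constraint.

    ```python
    import numpy as np

    def second_order_diff(n):
        """Second-order difference (discrete curvature) matrix, shape (n-2, n)."""
        L = np.zeros((n - 2, n))
        for i in range(n - 2):
            L[i, i:i + 3] = [1.0, -2.0, 1.0]
        return L

    def tikhonov_psd(A, b, lam):
        """argmin ||A f - b||^2 + lam ||L f||^2, then clip to f >= 0.
        A: kernel matrix (measurements x size bins), b: measured signals."""
        n = A.shape[1]
        L = second_order_diff(n)
        lhs = A.T @ A + lam * (L.T @ L)
        rhs = A.T @ b
        f = np.linalg.solve(lhs, rhs)
        return np.clip(f, 0.0, None)
    ```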

  18. Sequential ensemble-based optimal design for parameter estimation: SEQUENTIAL ENSEMBLE-BASED OPTIMAL DESIGN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Jun; Zhang, Jiangjiang; Li, Weixuan

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.

  19. Determination of a temperature sensor location for monitoring weld pool size in GMAW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boo, K.S.; Cho, H.S.

    1994-11-01

    This paper describes a method of determining the optimal sensor location to measure weldment surface temperature, which has a close correlation with weld pool size in the gas metal arc (GMA) welding process. Due to the inherent complexity and nonlinearity in the GMA welding process, the relationship between the weldment surface temperature and the weld pool size varies with the point of measurement. This necessitates an optimal selection of the measurement point to minimize the process nonlinearity effect in estimating the weld pool size from the measured temperature. To determine the optimal sensor location on the top surface of the weldment, the correlation between the measured temperature and the weld pool size is analyzed. The analysis is done by calculating the correlation function, which is based upon an analytical temperature distribution model. To validate the optimal sensor location, a series of GMA bead-on-plate welds are performed on a medium-carbon steel under various welding conditions. A comparison study is given in detail based upon the simulation and experimental results.
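    The selection rule can be sketched as a simple screening of candidate measurement points: for each candidate, correlate the surface temperature across a set of welds with the resulting weld pool size and pick the point with the strongest correlation. The arrays below are assumed inputs from experiments or an analytical temperature model.

    ```python
    import numpy as np

    def best_sensor_location(temps, pool_sizes):
        """temps: (n_welds, n_candidate_points) surface temperatures,
        pool_sizes: (n_welds,) corresponding weld pool sizes.
        Returns the candidate index with the strongest correlation."""
        corrs = np.array([np.corrcoef(temps[:, j], pool_sizes)[0, 1]
                          for j in range(temps.shape[1])])
        return int(np.nanargmax(np.abs(corrs))), corrs
    ```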

  20. Optimal design of a plot cluster for monitoring

    Treesearch

    Charles T. Scott

    1993-01-01

    Traveling costs incurred during extensive forest surveys make cluster sampling cost-effective. Clusters are specified by the type of plots, plot size, number of plots, and the distance between plots within the cluster. A method to determine the optimal cluster design when different plot types are used for different forest resource attributes is described. The method...

  1. Estimation method for serial dilution experiments.

    PubMed

    Ben-David, Avishai; Davidson, Charles E

    2014-12-01

    Titration of microorganisms in infectious or environmental samples is a cornerstone of quantitative microbiology. A simple method is presented to estimate the microbial counts obtained with the serial dilution technique for microorganisms that can grow on bacteriological media and develop into a colony. The number (concentration) of viable microbial organisms is estimated from a single dilution plate (assay) without a need for replicate plates. Our method selects the best agar plate with which to estimate the microbial counts, and takes into account the colony size and plate area, which both contribute to the likelihood of miscounting the number of colonies on a plate. The estimate of the optimal count given by our method can be used to narrow the search for the best (optimal) dilution plate and saves time. The required inputs are the plate size, the microbial colony size, and the serial dilution factors. The proposed approach shows relative accuracy well within ±0.1 log10 from data produced by computer simulations. The method maintains this accuracy even in the presence of dilution errors of up to 10% (for both the aliquot and diluent volumes), microbial counts between 10⁴ and 10¹² colony-forming units, dilution ratios from 2 to 100, and plate size to colony size ratios between 6.25 and 200. Published by Elsevier B.V.
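    A hedged sketch of the plate-selection idea only (not the authors' likelihood model): given a rough concentration guess, the plate-to-colony area ratio sets a crowding limit, and the dilution expected to yield the most colonies below that limit is the one to count. All parameter values are illustrative.

    ```python
    def best_dilution_plate(concentration_guess, aliquot_ml, dilution_factors,
                            plate_area_mm2, colony_area_mm2, fill_fraction=0.2):
        """Pick the serial-dilution plate expected to give the most colonies
        without exceeding a crowding limit tied to the plate/colony area ratio."""
        max_countable = fill_fraction * plate_area_mm2 / colony_area_mm2
        best = None
        cumulative = 1.0
        for step, factor in enumerate(dilution_factors):
            cumulative *= factor
            expected = concentration_guess * aliquot_ml / cumulative
            if expected <= max_countable and (best is None or expected > best[1]):
                best = (step, expected)
        return best   # (index of plate in the series, expected colony count)

    # Example: 10-fold serial dilutions of a sample near 1e8 CFU/mL on 90 mm plates.
    plate = best_dilution_plate(concentration_guess=1e8, aliquot_ml=0.1,
                                dilution_factors=[10] * 8,
                                plate_area_mm2=6362.0, colony_area_mm2=3.0)
    ```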

  2. Multi-parameter optimization of piezoelectric actuators for multi-mode active vibration control of cylindrical shells

    NASA Astrophysics Data System (ADS)

    Hu, K. M.; Li, Hua

    2018-07-01

    A novel technique for the multi-parameter optimization of distributed piezoelectric actuators is presented in this paper. The proposed method is designed to improve the performance of multi-mode vibration control in cylindrical shells. The optimization parameters of actuator patch configuration include position, size, and tilt angle. The modal control force of tilted orthotropic piezoelectric actuators is derived and the multi-parameter cylindrical shell optimization model is established. The linear quadratic energy index is employed as the optimization criterion. A geometric constraint is proposed to prevent overlap between tilted actuators, which is plugged into a genetic algorithm to search for the optimal configuration parameters. A simply-supported closed cylindrical shell with two actuators serves as a case study. The vibration control efficiencies of various parameter sets are evaluated via frequency response and transient response simulations. The results show that the linear quadratic energy indexes of position and size optimization decreased by 14.0% compared to position optimization; those of position and tilt angle optimization decreased by 16.8%; and those of position, size, and tilt angle optimization decreased by 25.9%. This indicates that adding configuration optimization parameters is an efficient approach to improving the vibration control performance of piezoelectric actuators on shells.

  3. Determining the optimal number of Kanban in multi-products supply chain system

    NASA Astrophysics Data System (ADS)

    Widyadana, G. A.; Wee, H. M.; Chang, Jer-Yuan

    2010-02-01

    Kanban, a key element of the just-in-time system, is a re-order card or signboard giving instruction or triggering the pull system to manufacture or supply a component based on actual usage of material. There are two types of Kanban: production Kanban and withdrawal Kanban. This study uses optimal and meta-heuristic methods to determine the Kanban quantity and withdrawal lot sizes in a supply chain system. Although the mixed integer programming (MIP) method gives an optimal solution, it is not time efficient. For this reason, meta-heuristic methods are suggested. In this study, a genetic algorithm (GA) and a hybrid of genetic algorithm and simulated annealing (GASA) are used. The study compares the performance of GA and GASA with that of the optimal method using MIP. The given problems show that both GA and GASA result in a near-optimal solution, and they outdo the optimal method in terms of run time. In addition, the GASA heuristic method gives a better performance than the GA heuristic method.

  4. Swarm intelligence-based approach for optimal design of CMOS differential amplifier and comparator circuit using a hybrid salp swarm algorithm

    NASA Astrophysics Data System (ADS)

    Asaithambi, Sasikumar; Rajappa, Muthaiah

    2018-05-01

    In this paper, an automatic design method based on a swarm intelligence approach for CMOS analog integrated circuit (IC) design is presented. The hybrid meta-heuristics optimization technique, namely, the salp swarm algorithm (SSA), is applied to the optimal sizing of a CMOS differential amplifier and the comparator circuit. SSA is a nature-inspired optimization algorithm which mimics the navigating and hunting behavior of salp. The hybrid SSA is applied to optimize the circuit design parameters and to minimize the MOS transistor sizes. The proposed swarm intelligence approach was successfully implemented for an automatic design and optimization of CMOS analog ICs using Generic Process Design Kit (GPDK) 180 nm technology. The circuit design parameters and design specifications are validated through a simulation program for integrated circuit emphasis simulator. To investigate the efficiency of the proposed approach, comparisons have been carried out with other simulation-based circuit design methods. The performances of hybrid SSA based CMOS analog IC designs are better than the previously reported studies.
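
    The record applies a hybrid salp swarm algorithm to transistor sizing, where each fitness evaluation comes from a circuit simulator. The sketch below shows only the plain SSA mechanics on a toy objective standing in for the circuit cost; the update rules follow the commonly cited SSA formulation, and the bounds and objective are invented for illustration.

```python
# Sketch of the (plain, not hybrid) salp swarm algorithm on a toy objective.
# In the paper the objective would come from a circuit simulator; here a
# sphere function stands in, and the constants follow the standard SSA paper.
import numpy as np

def ssa_minimize(objective, lb, ub, n_salps=30, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    pop = rng.uniform(lb, ub, size=(n_salps, dim))
    fitness = np.array([objective(x) for x in pop])
    food = pop[fitness.argmin()].copy()          # best solution found so far
    food_fit = fitness.min()
    for l in range(1, n_iter + 1):
        c1 = 2 * np.exp(-(4 * l / n_iter) ** 2)  # exploration/exploitation balance
        for i in range(n_salps):
            if i == 0:                            # leader moves around the food source
                c2, c3 = rng.random(dim), rng.random(dim)
                step = c1 * ((ub - lb) * c2 + lb)
                pop[i] = np.where(c3 >= 0.5, food + step, food - step)
            else:                                 # followers track the salp ahead
                pop[i] = 0.5 * (pop[i] + pop[i - 1])
            pop[i] = np.clip(pop[i], lb, ub)
        fitness = np.array([objective(x) for x in pop])
        if fitness.min() < food_fit:
            food_fit = fitness.min()
            food = pop[fitness.argmin()].copy()
    return food, food_fit

# Toy stand-in for "transistor widths minimizing area under specs".
best, val = ssa_minimize(lambda w: np.sum(w ** 2), lb=[0.18] * 4, ub=[10.0] * 4)
print(best, val)
```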

  5. Swarm intelligence-based approach for optimal design of CMOS differential amplifier and comparator circuit using a hybrid salp swarm algorithm.

    PubMed

    Asaithambi, Sasikumar; Rajappa, Muthaiah

    2018-05-01

    In this paper, an automatic design method based on a swarm intelligence approach for CMOS analog integrated circuit (IC) design is presented. The hybrid meta-heuristics optimization technique, namely, the salp swarm algorithm (SSA), is applied to the optimal sizing of a CMOS differential amplifier and the comparator circuit. SSA is a nature-inspired optimization algorithm which mimics the navigating and hunting behavior of salp. The hybrid SSA is applied to optimize the circuit design parameters and to minimize the MOS transistor sizes. The proposed swarm intelligence approach was successfully implemented for an automatic design and optimization of CMOS analog ICs using Generic Process Design Kit (GPDK) 180 nm technology. The circuit design parameters and design specifications are validated through a simulation program for integrated circuit emphasis simulator. To investigate the efficiency of the proposed approach, comparisons have been carried out with other simulation-based circuit design methods. The performances of hybrid SSA based CMOS analog IC designs are better than the previously reported studies.

  6. Preparation, characterization and optimization of sildenafil citrate loaded PLGA nanoparticles by statistical factorial design

    PubMed Central

    2013-01-01

    Background and aim of the study: The objective of the present study was to formulate and optimize nanoparticles (NPs) of sildenafil-loaded poly (lactic-co-glycolic acid) (PLGA) by the double emulsion solvent evaporation (DESE) method. The relationship between design factors and experimental data was evaluated using response surface methodology. Method: A Box-Behnken design was made considering the mass ratio of drug to polymer (D/P), the volumetric proportion of the water to oil phase (W/O) and the concentration of polyvinyl alcohol (PVA) as the independent factors. PLGA-NPs were successfully prepared, and the size (nm), entrapment efficiency (EE), drug loading (DL) and cumulative release of drug from NPs after 1 and 8 hrs were assessed as the responses. Results: The NPs were prepared in a spherical shape with sizes ranging from 240 to 316 nm. The polydispersity index of size was lower than 0.5, and the EE (%) and DL (%) varied between 14-62% and 2-6%, respectively. The optimized formulation with a desirability factor of 0.9 was selected and characterized. This formulation demonstrated a particle size of 270 nm, EE of 55%, DL of 3.9% and cumulative drug release of 79% after 12 hrs. In vitro release studies showed a burst release at the initial stage followed by a sustained release of sildenafil from NPs up to 12 hrs. The release kinetics of the optimized formulation fitted the Higuchi model. Conclusions: Sildenafil citrate NPs with small particle size, lipophilic character, high entrapment efficiency and good loading capacity were produced by this method. Characterization of the optimum formulation, provided by an evaluation of experimental data, showed no significant difference between calculated and measured data. PMID:24355133

  7. Communication: Time-dependent optimized coupled-cluster method for multielectron dynamics

    NASA Astrophysics Data System (ADS)

    Sato, Takeshi; Pathak, Himadri; Orimo, Yuki; Ishikawa, Kenichi L.

    2018-02-01

    The time-dependent coupled-cluster method with time-varying orbital functions, called the time-dependent optimized coupled-cluster (TD-OCC) method, is formulated for multielectron dynamics in an intense laser field. We have successfully derived the equations of motion for CC amplitudes and orthonormal orbital functions based on the real action functional, and implemented the method including double excitations (TD-OCCD) and double and triple excitations (TD-OCCDT) within the optimized active orbitals. The present method is size extensive and gauge invariant, a polynomial cost-scaling alternative to the time-dependent multiconfiguration self-consistent-field method. The first application of the TD-OCC method to intense-laser-driven correlated electron dynamics in the Ar atom is reported.

  8. Communication: Time-dependent optimized coupled-cluster method for multielectron dynamics.

    PubMed

    Sato, Takeshi; Pathak, Himadri; Orimo, Yuki; Ishikawa, Kenichi L

    2018-02-07

    The time-dependent coupled-cluster method with time-varying orbital functions, called the time-dependent optimized coupled-cluster (TD-OCC) method, is formulated for multielectron dynamics in an intense laser field. We have successfully derived the equations of motion for CC amplitudes and orthonormal orbital functions based on the real action functional, and implemented the method including double excitations (TD-OCCD) and double and triple excitations (TD-OCCDT) within the optimized active orbitals. The present method is size extensive and gauge invariant, a polynomial cost-scaling alternative to the time-dependent multiconfiguration self-consistent-field method. The first application of the TD-OCC method to intense-laser-driven correlated electron dynamics in the Ar atom is reported.

  9. An optimization-based framework for anisotropic simplex mesh adaptation

    NASA Astrophysics Data System (ADS)

    Yano, Masayuki; Darmofal, David L.

    2012-09-01

    We present a general framework for anisotropic h-adaptation of simplex meshes. Given a discretization and any element-wise, localizable error estimate, our adaptive method iterates toward a mesh that minimizes error for a given number of degrees of freedom. Utilizing mesh-metric duality, we consider a continuous optimization problem of the Riemannian metric tensor field that provides an anisotropic description of element sizes. First, our method performs a series of local solves to survey the behavior of the local error function. This information is then synthesized using an affine-invariant tensor manipulation framework to reconstruct an approximate gradient of the error function with respect to the metric tensor field. Finally, we perform gradient descent in the metric space to drive the mesh toward optimality. The method is first demonstrated to produce optimal anisotropic meshes minimizing the L2 projection error for a pair of canonical problems containing a singularity and a singular perturbation. The effectiveness of the framework is then demonstrated in the context of output-based adaptation for the advection-diffusion equation using a high-order discontinuous Galerkin discretization and the dual-weighted residual (DWR) error estimate. The method presented provides a unified framework for optimizing both the element size and anisotropy distribution using an a posteriori error estimate and enables efficient adaptation of anisotropic simplex meshes for high-order discretizations.

  10. A general approach for sample size calculation for the three-arm 'gold standard' non-inferiority design.

    PubMed

    Stucke, Kathrin; Kieser, Meinhard

    2012-12-10

    In the three-arm 'gold standard' non-inferiority design, an experimental treatment, an active reference, and a placebo are compared. This design is becoming increasingly popular, and it is, whenever feasible, recommended for use by regulatory guidelines. We provide a general method to calculate the required sample size for clinical trials performed in this design. As special cases, the situations of continuous, binary, and Poisson distributed outcomes are explored. Taking into account the correlation structure of the involved test statistics, the proposed approach leads to considerable savings in sample size as compared with application of ad hoc methods for all three scale levels. Furthermore, optimal sample size allocation ratios are determined that result in markedly smaller total sample sizes as compared with equal assignment. As optimal allocation makes the active treatment groups larger than the placebo group, implementation of the proposed approach is also desirable from an ethical viewpoint. Copyright © 2012 John Wiley & Sons, Ltd.
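
    As a rough illustration of why allocation matters in this design, the sketch below grid-searches a few allocation ratios for continuous outcomes. It deliberately simplifies: each comparison is powered with a one-sided normal approximation and the overall power is taken as the product of the two marginal powers, ignoring the correlation that the paper's exact method exploits; all effect sizes and the margin are invented.

```python
# Sketch: searching allocation ratios for a three-arm "gold standard" trial with
# continuous outcomes. Simplification (not the paper's method): overall power is
# approximated as the product of the two marginal powers, ignoring the correlation
# between the test statistics that the exact approach accounts for.
import numpy as np
from scipy.stats import norm

def total_n(alloc, mu_e, mu_r, mu_p, sigma, margin, alpha=0.025, power=0.8):
    """alloc = (w_e, w_r, w_p): sample-size fractions summing to 1.
    Returns the smallest total N whose approximate overall power reaches target."""
    z_a = norm.ppf(1 - alpha)
    for n in range(30, 5001):
        n_e, n_r, n_p = (max(1, round(w * n)) for w in alloc)
        se_ep = sigma * np.sqrt(1 / n_e + 1 / n_p)
        se_er = sigma * np.sqrt(1 / n_e + 1 / n_r)
        pow_ep = norm.cdf((mu_e - mu_p) / se_ep - z_a)            # superiority vs placebo
        pow_er = norm.cdf((mu_e - mu_r + margin) / se_er - z_a)   # non-inferiority vs reference
        if pow_ep * pow_er >= power:
            return n, (n_e, n_r, n_p)
    return None

# Compare equal allocation with a placebo-lean allocation (values are illustrative).
for alloc in [(1 / 3, 1 / 3, 1 / 3), (0.4, 0.4, 0.2)]:
    print(alloc, total_n(alloc, mu_e=0.0, mu_r=0.0, mu_p=-0.5, sigma=1.0, margin=0.3))
```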

  11. Estimating Scale Economies and the Optimal Size of School Districts: A Flexible Form Approach

    ERIC Educational Resources Information Center

    Schiltz, Fritz; De Witte, Kristof

    2017-01-01

    This paper investigates estimation methods to model the relationship between school district size, costs per student and the organisation of school districts. We show that the assumptions on the functional form strongly affect the estimated scale economies and offer two possible solutions to allow for more flexibility in the estimation method.…

  12. Petermann I and II spot size: Accurate semi analytical description involving Nelder-Mead method of nonlinear unconstrained optimization and three parameter fundamental modal field

    NASA Astrophysics Data System (ADS)

    Roy Choudhury, Raja; Roy Choudhury, Arundhati; Kanti Ghose, Mrinal

    2013-01-01

    A semi-analytical model with three optimizing parameters and a novel non-Gaussian function as the fundamental modal field solution has been proposed to arrive at an accurate solution to predict various propagation parameters of graded-index fibers with less computational burden than numerical methods. In our semi-analytical formulation, the optimization of the core parameter U, which is usually uncertain, noisy or even discontinuous, is carried out by the Nelder-Mead method of nonlinear unconstrained minimization, as it is an efficient and compact direct search method and does not need any derivative information. Three optimizing parameters are included in the formulation of the fundamental modal field of an optical fiber to make it more flexible and accurate than other available approximations. Employing the variational technique, Petermann I and II spot sizes have been evaluated for triangular and trapezoidal-index fibers with the proposed fundamental modal field. It has been demonstrated that the results of the proposed solution identically match the numerical results over a wide range of normalized frequencies. This approximation can also be used in the study of doped and nonlinear fiber amplifiers.
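
    The core-parameter optimization described above is a natural fit for SciPy's derivative-free Nelder-Mead implementation. The sketch below tunes a single parameter of a hypothetical, slightly noisy stand-in objective; the paper's actual objective is the variational expression for the modal field, which is not reproduced here.

```python
# Sketch: derivative-free tuning of a modal-field parameter with Nelder-Mead.
# The real objective in the record is a variational/overlap expression for a
# graded-index fiber; here a noisy 1-D stand-in objective illustrates why a
# simplex method (no derivatives needed) is convenient.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def stand_in_objective(u):
    """Hypothetical smooth objective in the core parameter U, plus small noise."""
    u = float(u[0])
    return (u - 2.2) ** 2 + 0.05 * np.sin(15 * u) + 1e-3 * rng.standard_normal()

res = minimize(stand_in_objective, x0=[1.0], method="Nelder-Mead",
               options={"xatol": 1e-4, "fatol": 1e-6, "maxiter": 500})
print("optimized U ≈", res.x[0])
```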

  13. Iterative optimizing quantization method for reconstructing three-dimensional images from a limited number of views

    DOEpatents

    Lee, Heung-Rae

    1997-01-01

    A three-dimensional image reconstruction method comprises treating the object of interest as a group of elements with a size that is determined by the resolution of the projection data, e.g., as determined by the size of each pixel. One of the projections is used as a reference projection. A fictitious object is arbitrarily defined that is constrained by such reference projection. The method modifies the known structure of the fictitious object by comparing and optimizing its four projections to those of the unknown structure of the real object and continues to iterate until the optimization is limited by the residual sum of background noise. The method is composed of several sub-processes that acquire four projections from the real data and the fictitious object: generate an arbitrary distribution to define the fictitious object, optimize the four projections, generate a new distribution for the fictitious object, and enhance the reconstructed image. The sub-process for the acquisition of the four projections from the input real data is simply the function of acquiring the four projections from the data of the transmitted intensity. The transmitted intensity represents the density distribution, that is, the distribution of absorption coefficients through the object.

  14. Evaluating Suit Fit Using Performance Degradation

    NASA Technical Reports Server (NTRS)

    Margerum, Sarah E.; Cowley, Matthew; Harvill, Lauren; Benson, Elizabeth; Rajulu, Sudhakar

    2012-01-01

    The Mark III planetary technology demonstrator space suit can be tailored to an individual by swapping the modular components of the suit, such as the arms, legs, and gloves, as well as adding or removing sizing inserts in key areas. A method was sought to identify the transition from an ideal suit fit to a bad fit and how to quantify this breakdown using a metric of mobility-based human performance data. To this end, the degradation of the range of motion of the elbow and wrist of the suit as a function of suit sizing modifications was investigated to attempt to improve suit fit. The sizing range tested spanned optimal and poor fit and was adjusted incrementally in order to compare each joint angle across five different sizing configurations. Suited range of motion data were collected using a motion capture system for nine isolated and functional tasks utilizing the elbow and wrist joints. A total of four subjects were tested with motions involving both arms simultaneously as well as the right arm by itself. Findings indicated that no single joint drives the performance of the arm as a function of suit size; instead it is based on the interaction of multiple joints along a limb. To determine a size adjustment range where an individual can operate the suit at an acceptable level, a performance detriment limit was set. This user-selected limit reveals the task-dependent tolerance of the suit fit around optimal size. For example, the isolated joint motion indicated that the suit can deviate from optimal by as little as -0.6 in to -2.6 in before experiencing a 10% performance drop in the wrist or elbow joint. The study identified a preliminary method to quantify the impact of size on performance and developed a new way to gauge tolerances around optimal size.

  15. Cross Layer Design for Optimizing Transmission Reliability, Energy Efficiency, and Lifetime in Body Sensor Networks.

    PubMed

    Chen, Xi; Xu, Yixuan; Liu, Anfeng

    2017-04-19

    High transmission reliability, energy efficiency, and long lifetime are pivotal issues for wireless body area networks (WBANs). However, these performance metrics are not independent of each other, making it hard to obtain overall improvements through optimizing one single aspect. Therefore, a Cross Layer Design Optimal (CLDO) scheme is proposed to simultaneously optimize transmission reliability, energy efficiency, and lifetime of WBANs from several layers. Firstly, due to the fact that the transmission power of nodes directly influences the reliability of links, the optimized transmission power of different nodes is deduced, which is able to maximize energy efficiency in theory under the premise that requirements on delay and jitter are fulfilled. Secondly, a relay decision algorithm is proposed to choose optimized relay nodes. Using this algorithm, nodes will choose relay nodes that ensure a balance of network energy consumption, provided that all nodes transmit with optimized transmission power and the same packet size. Thirdly, the energy consumption of nodes is still unbalanced even with optimized transmission power because of their different locations in the topology of the network. In addition, packet size also has an impact on final performance metrics. Therefore, a synthesized cross layer method for optimization is proposed. With this method, the transmission power of nodes with more residual energy will be enhanced while suitable packet size is determined for different links in the network, leading to further improvements in the WBAN system. Both our comprehensive theoretical analysis and experimental results indicate that the performance of our proposed scheme is better than reported in previous studies. Relative to the relay selection and power control game (RSPCG) scheme, the CLDO scheme can enhance transmission reliability by more than 44.6% and prolong the lifetime by as much as 33.2%.

  16. Cross Layer Design for Optimizing Transmission Reliability, Energy Efficiency, and Lifetime in Body Sensor Networks

    PubMed Central

    Chen, Xi; Xu, Yixuan; Liu, Anfeng

    2017-01-01

    High transmission reliability, energy efficiency, and long lifetime are pivotal issues for wireless body area networks (WBANs). However, these performance metrics are not independent of each other, making it hard to obtain overall improvements through optimizing one single aspect. Therefore, a Cross Layer Design Optimal (CLDO) scheme is proposed to simultaneously optimize transmission reliability, energy efficiency, and lifetime of WBANs from several layers. Firstly, due to the fact that the transmission power of nodes directly influences the reliability of links, the optimized transmission power of different nodes is deduced, which is able to maximize energy efficiency in theory under the premise that requirements on delay and jitter are fulfilled. Secondly, a relay decision algorithm is proposed to choose optimized relay nodes. Using this algorithm, nodes will choose relay nodes that ensure a balance of network energy consumption, provided that all nodes transmit with optimized transmission power and the same packet size. Thirdly, the energy consumption of nodes is still unbalanced even with optimized transmission power because of their different locations in the topology of the network. In addition, packet size also has an impact on final performance metrics. Therefore, a synthesized cross layer method for optimization is proposed. With this method, the transmission power of nodes with more residual energy will be enhanced while suitable packet size is determined for different links in the network, leading to further improvements in the WBAN system. Both our comprehensive theoretical analysis and experimental results indicate that the performance of our proposed scheme is better than reported in previous studies. Relative to the relay selection and power control game (RSPCG) scheme, the CLDO scheme can enhance transmission reliability by more than 44.6% and prolong the lifetime by as much as 33.2%. PMID:28422062

  17. A hybrid binary particle swarm optimization for large capacitated multi item multi level lot sizing (CMIMLLS) problem

    NASA Astrophysics Data System (ADS)

    Mishra, S. K.; Sahithi, V. V. D.; Rao, C. S. P.

    2016-09-01

    The lot sizing problem deals with finding optimal order quantities which minimize the ordering and holding cost of a product mix. When multiple items at multiple levels with capacity restrictions are considered, the lot sizing problem becomes NP-hard. Many heuristics developed in the past have failed due to problem size, computational complexity and time. However, the authors were successful in developing a PSO-based technique, namely an iterative improvement binary particle swarm technique, to address the very large capacitated multi-item multi-level lot sizing (CMIMLLS) problem. First, a binary particle swarm optimization (BPSO) algorithm is used to find a solution in a reasonable time, and an iterative improvement local search mechanism is then employed to improve the solution obtained by the BPSO algorithm. This hybrid mechanism of applying local search to the global solution is found to improve the quality of solutions with respect to time; the IIBPSO method thus shows excellent results.
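
    A plain binary PSO with a sigmoid transfer function is the building block the record hybridizes with local search. The sketch below shows that building block on a toy bit-matching objective; in a lot-sizing setting the bits would encode setup decisions per item and period, and the objective would be the setup-plus-holding cost, neither of which is modeled here.

```python
# Sketch of a plain binary PSO (the record hybridizes BPSO with a local search).
# Bits could encode setup decisions per item/period in a lot-sizing model; here a
# toy objective rewards a target bit pattern, purely to show the mechanics.
import numpy as np

def bpso(objective, n_bits, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=(n_particles, n_bits))
    v = rng.uniform(-1, 1, size=(n_particles, n_bits))
    pbest, pbest_val = x.copy(), np.array([objective(b) for b in x])
    gbest = pbest[pbest_val.argmin()].copy()
    gbest_val = pbest_val.min()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        prob = 1.0 / (1.0 + np.exp(-v))          # sigmoid transfer function
        x = (rng.random(x.shape) < prob).astype(int)
        vals = np.array([objective(b) for b in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        if vals.min() < gbest_val:
            gbest_val, gbest = vals.min(), x[vals.argmin()].copy()
    return gbest, gbest_val

target = np.array([1, 0, 1, 1, 0, 0, 1, 0])       # hypothetical "good" setup pattern
best, val = bpso(lambda b: int(np.sum(b != target)), n_bits=8)
print(best, val)
```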

  18. Ranked set sampling: cost and optimal set size.

    PubMed

    Nahhas, Ramzi W; Wolfe, Douglas A; Chen, Haiying

    2002-12-01

    McIntyre (1952, Australian Journal of Agricultural Research 3, 385-390) introduced ranked set sampling (RSS) as a method for improving estimation of a population mean in settings where sampling and ranking of units from the population are inexpensive when compared with actual measurement of the units. Two of the major factors in the usefulness of RSS are the set size and the relative costs of the various operations of sampling, ranking, and measurement. In this article, we consider ranking error models and cost models that enable us to assess the effect of different cost structures on the optimal set size for RSS. For reasonable cost structures, we find that the optimal RSS set sizes are generally larger than had been anticipated previously. These results will provide a useful tool for determining whether RSS is likely to lead to an improvement over simple random sampling in a given setting and, if so, what RSS set size is best to use in this case.
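
    A small Monte Carlo experiment makes the set-size/cost trade-off concrete. The sketch below assumes perfect ranking, a standard normal population, and a toy cost model (a ranking cost per drawn unit plus a measurement cost per measured unit); none of these choices come from the paper's ranking-error or cost models.

```python
# Sketch: Monte Carlo look at how the RSS set size trades off efficiency against cost.
# Perfect ranking and the cost model are illustrative assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

def rss_mean(set_size, n_cycles, draw):
    """One ranked-set sample mean: in each cycle, for every rank r, draw a set,
    rank it (perfect ranking assumed), and measure only the r-th order statistic."""
    measured = []
    for _ in range(n_cycles):
        for r in range(set_size):
            s = np.sort(draw(set_size))
            measured.append(s[r])
    return np.mean(measured)

def compare(set_size, n_measured=60, reps=2000, c_rank=0.1, c_measure=1.0):
    n_cycles = n_measured // set_size
    draw = lambda k: rng.normal(size=k)
    rss_var = np.var([rss_mean(set_size, n_cycles, draw) for _ in range(reps)])
    srs_var = np.var([np.mean(draw(n_measured)) for _ in range(reps)])
    # every drawn unit is ranked; only one unit per set is actually measured
    n_sets = n_cycles * set_size
    cost = c_rank * n_sets * set_size + c_measure * n_sets
    return srs_var / rss_var, cost   # relative efficiency vs. SRS, rough cost proxy

for m in (2, 3, 4, 5, 6):
    eff, cost = compare(m)
    print(f"set size {m}: efficiency ≈ {eff:.2f}, cost proxy ≈ {cost:.1f}")
```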

  19. Adaptive step-size algorithm for Fourier beam-propagation method with absorbing boundary layer of auto-determined width.

    PubMed

    Learn, R; Feigenbaum, E

    2016-06-01

    Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. The second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.
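
    The sketch below illustrates two ingredients of the idea in a simplified 1-D angular-spectrum propagator: an absorbing boundary whose width is derived from the initial beam's extent, and a super-Gaussian damping mask applied each step. The width rule (absorber starts where the field drops below 1e-3 of its peak) is an illustrative heuristic, not the selector derived in the record, and the step-size adaptation is omitted.

```python
# Sketch: Fourier (angular-spectrum) propagation with an absorbing boundary whose
# width is picked from the initial beam shape. The width rule and mask shape are
# illustrative heuristics, not the algorithms of the record.
import numpy as np

n, dx, wavelength = 2048, 0.5e-6, 1.0e-6
x = (np.arange(n) - n // 2) * dx
k0 = 2 * np.pi / wavelength
field = np.exp(-(x / 20e-6) ** 2)                        # initial Gaussian beam

# Auto-select boundary width: everything outside the 1e-3 envelope becomes absorber.
beam_edge = np.max(np.abs(x[np.abs(field) > 1e-3 * np.abs(field).max()]))
bwidth = max(x.max() - beam_edge, 10 * dx)
absorber = np.exp(-(np.clip(np.abs(x) - (x.max() - bwidth), 0, None) / bwidth) ** 4)

kx = 2 * np.pi * np.fft.fftfreq(n, dx)
kz = np.sqrt(np.maximum(k0 ** 2 - kx ** 2, 0.0))         # propagating components only

def propagate(u, dz, steps):
    for _ in range(steps):
        u = np.fft.ifft(np.fft.fft(u) * np.exp(1j * kz * dz))
        u *= absorber                                    # damp energy reaching the edges
    return u

out = propagate(field.astype(complex), dz=50e-6, steps=200)
print("power kept inside window:", np.sum(np.abs(out) ** 2) / np.sum(field ** 2))
```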

  20. Adaptive step-size algorithm for Fourier beam-propagation method with absorbing boundary layer of auto-determined width

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Learn, R.; Feigenbaum, E.

    Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. Furthermore, the second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.

  1. Adaptive step-size algorithm for Fourier beam-propagation method with absorbing boundary layer of auto-determined width

    DOE PAGES

    Learn, R.; Feigenbaum, E.

    2016-05-27

    Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. Furthermore, the second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.

  2. Integrated topology and shape optimization in structural design

    NASA Technical Reports Server (NTRS)

    Bremicker, M.; Chirehdast, M.; Kikuchi, N.; Papalambros, P. Y.

    1990-01-01

    Structural optimization procedures usually start from a given design topology and vary its proportions or boundary shapes to achieve optimality under various constraints. Two different categories of structural optimization are distinguished in the literature, namely sizing and shape optimization. A major restriction in both cases is that the design topology is considered fixed and given. Questions concerning the general layout of a design (such as whether a truss or a solid structure should be used) as well as more detailed topology features (e.g., the number and connectivities of bars in a truss or the number of holes in a solid) have to be resolved by design experience before formulating the structural optimization model. Design quality of an optimized structure still depends strongly on engineering intuition. This article presents a novel approach for initiating formal structural optimization at an earlier stage, where the design topology is rigorously generated in addition to selecting shape and size dimensions. A three-phase design process is discussed: an optimal initial topology is created by a homogenization method as a gray level image, which is then transformed to a realizable design using computer vision techniques; this design is then parameterized and treated in detail by sizing and shape optimization. A fully automated process is described for trusses. Optimization of two dimensional solid structures is also discussed. Several application-oriented examples illustrate the usefulness of the proposed methodology.

  3. Optimizing Aspect-Oriented Mechanisms for Embedded Applications

    NASA Astrophysics Data System (ADS)

    Hundt, Christine; Stöhr, Daniel; Glesner, Sabine

    As applications for small embedded mobile devices are getting larger and more complex, it becomes inevitable to adopt more advanced software engineering methods from the field of desktop application development. Aspect-oriented programming (AOP) is a promising approach due to its advanced modularization capabilities. However, existing AOP languages tend to add a substantial overhead in both execution time and code size which restricts their practicality for small devices with limited resources. In this paper, we present optimizations for aspect-oriented mechanisms at the level of the virtual machine. Our experiments show that these optimizations yield a considerable performance gain along with a reduction of the code size. Thus, our optimizations establish the base for using advanced aspect-oriented modularization techniques for developing Java applications on small embedded devices.

  4. Calculating an optimal box size for ligand docking and virtual screening against experimental and predicted binding pockets.

    PubMed

    Feinstein, Wei P; Brylinski, Michal

    2015-01-01

    Computational approaches have emerged as an instrumental methodology in modern research. For example, virtual screening by molecular docking is routinely used in computer-aided drug discovery. One of the critical parameters for ligand docking is the size of a search space used to identify low-energy binding poses of drug candidates. Currently available docking packages often come with a default protocol for calculating the box size; however, many of these procedures have not been systematically evaluated. In this study, we investigate how the docking accuracy of AutoDock Vina is affected by the selection of a search space. We propose a new procedure for calculating the optimal docking box size that maximizes the accuracy of binding pose prediction against a non-redundant and representative dataset of 3,659 protein-ligand complexes selected from the Protein Data Bank. Subsequently, we use the Directory of Useful Decoys, Enhanced to demonstrate that the optimized docking box size also yields an improved ranking in virtual screening. Binding pockets in both datasets are derived from the experimental complex structures and, additionally, predicted by eFindSite. A systematic analysis of ligand binding poses generated by AutoDock Vina shows that the highest accuracy is achieved when the dimensions of the search space are 2.9 times larger than the radius of gyration of a docking compound. Subsequent virtual screening benchmarks demonstrate that this optimized docking box size also improves compound ranking. For instance, using predicted ligand binding sites, the average enrichment factor calculated for the top 1 % (10 %) of the screening library is 8.20 (3.28) for the optimized protocol, compared to 7.67 (3.19) for the default procedure. Depending on the evaluation metric, the optimal docking box size gives better ranking in virtual screening for about two-thirds of target proteins. This fully automated procedure can be used to optimize docking protocols in order to improve the ranking accuracy in production virtual screening simulations. Importantly, the optimized search space systematically yields better results than the default method not only for experimental pockets, but also for those predicted from protein structures. A script for calculating the optimal docking box size is freely available at www.brylinski.org/content/docking-box-size. Graphical Abstract: We developed a procedure to optimize the box size in molecular docking calculations. Left panel shows the predicted binding pose of NADP (green sticks) compared to the experimental complex structure of human aldose reductase (blue sticks) using a default protocol. Right panel shows the docking accuracy using an optimized box size.
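
    The headline rule of thumb from the record, a cubic search box with edge about 2.9 times the ligand's radius of gyration, is easy to apply directly to a ligand's coordinates. The sketch below does exactly that; the unweighted heavy-atom radius of gyration, the example coordinates, and the pocket center are assumptions for illustration.

```python
# Sketch: the record's rule of thumb, box edge ≈ 2.9 × radius of gyration of the
# docking compound, applied to a ligand's heavy-atom coordinates.
import numpy as np

def radius_of_gyration(coords):
    """coords: (N, 3) array of heavy-atom positions in Angstroms (unweighted)."""
    coords = np.asarray(coords, float)
    center = coords.mean(axis=0)
    return np.sqrt(np.mean(np.sum((coords - center) ** 2, axis=1)))

def vina_box(coords, pocket_center, scale=2.9):
    """Cubic AutoDock Vina search box centered on a (given) pocket center."""
    edge = scale * radius_of_gyration(coords)
    cx, cy, cz = pocket_center
    return {"center_x": cx, "center_y": cy, "center_z": cz,
            "size_x": edge, "size_y": edge, "size_z": edge}

# Hypothetical small ligand and pocket center, just to exercise the function.
ligand = [[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [2.3, 1.2, 0.0],
          [3.8, 1.2, 0.4], [4.6, 2.4, 0.2]]
print(vina_box(ligand, pocket_center=(10.0, 12.5, -3.0)))
```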

  5. Simulation and optimum design of hybrid solar-wind and solar-wind-diesel power generation systems

    NASA Astrophysics Data System (ADS)

    Zhou, Wei

    Solar and wind energy systems are considered as promising power generating sources due to their availability and topological advantages in local power generation. However, a drawback common to the solar and wind options is their unpredictable nature and dependence on weather changes; both of these energy systems would have to be oversized to make them completely reliable. Fortunately, the problems caused by the variable nature of these resources can be partially overcome by integrating the two resources in a proper combination to form a hybrid system. However, with the increased complexity in comparison with single energy systems, optimum design of a hybrid system becomes more complicated. In order to efficiently and economically utilize the renewable energy resources, an optimal sizing method is necessary. This thesis developed an optimal sizing method to find the global optimum configuration of stand-alone hybrid (both solar-wind and solar-wind-diesel) power generation systems. By using a Genetic Algorithm (GA), the optimal sizing method was developed to calculate the system optimum configuration which guarantees the lowest investment with full use of the PV array, wind turbine and battery bank. For the hybrid solar-wind system, the optimal sizing method is developed based on the Loss of Power Supply Probability (LPSP) and the Annualized Cost of System (ACS) concepts. The optimization procedure aims to find the configuration that yields the best compromise between the two considered objectives: LPSP and ACS. The decision variables, which need to be optimized in the optimization process, are the PV module capacity, wind turbine capacity, battery capacity, PV module slope angle and wind turbine installation height. For the hybrid solar-wind-diesel system, minimization of the system cost is achieved not only by selecting an appropriate system configuration, but also by finding a suitable control strategy (starting and stopping point) of the diesel generator. The optimal sizing method was developed to find the system optimum configuration and settings that can achieve the custom-required Renewable Energy Fraction (fRE) of the system with minimum Annualized Cost of System (ACS). Due to the need for optimum design of the hybrid systems, an analysis of local weather conditions (solar radiation and wind speed) was carried out for the potential installation site, and mathematical simulation of the hybrid systems' components was also carried out, including the PV array, wind turbine and battery bank. By statistically analyzing the long-term hourly solar and wind speed data, the Hong Kong area is found to have favorable solar and wind power resources compared with other areas, which supports practical applications in the Hong Kong and Guangdong area. Simulation of PV array performance includes three main parts: modeling of the maximum power output of the PV array, calculation of the total solar radiation on any tilted surface with any orientation, and PV module temperature predictions. Five parameters are introduced to account for the complex dependence of PV array performance upon solar radiation intensities and PV module temperatures. The developed simulation model was validated by using the field-measured data from one existing building-integrated photovoltaic system (BIPV) in Hong Kong, and good simulation performance of the model was achieved.
    Lead-acid batteries used in hybrid systems operate under very specific conditions, which often make it difficult to predict when energy will be extracted from or supplied to the battery. In this thesis, the lead-acid battery performance is simulated by three different characteristics: battery state of charge (SOC), battery floating charge voltage and the expected battery lifetime. Good agreement was found between the predicted values and the field-measured data of a hybrid solar-wind project. Finally, a 19.8 kW hybrid solar-wind power generation project, designed by the optimal sizing method and set up to supply power for a telecommunication relay station on a remote island of Guangdong province, was studied. Simulation and experimental results about the operating performances and characteristics of the hybrid solar-wind project have demonstrated the feasibility and accuracy of the recommended optimal sizing method developed in this thesis.
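
    The LPSP concept at the core of the sizing loop can be illustrated with a simple hourly energy-balance simulation of one candidate configuration, as sketched below. The battery model, efficiencies, and the synthetic capacity-factor and load series are invented stand-ins; in the thesis the GA would wrap a calculation like this together with the ACS cost model.

```python
# Sketch: Loss of Power Supply Probability (LPSP) for one candidate hybrid
# configuration. Hourly PV/wind capacity factors, the load series, and the
# battery parameters are made-up stand-ins; a GA would wrap this evaluation.
import numpy as np

def lpsp(pv_kw, wt_kw, batt_kwh, pv_cf, wt_cf, load_kw,
         soc_min=0.2, eff_charge=0.9, eff_discharge=0.9):
    """pv_cf/wt_cf: hourly capacity factors (0..1); load_kw: hourly demand.
    Returns the fraction of demand energy that could not be served."""
    soc = batt_kwh                      # start with a full battery
    unmet = 0.0
    for cf_pv, cf_wt, demand in zip(pv_cf, wt_cf, load_kw):
        gen = pv_kw * cf_pv + wt_kw * cf_wt
        surplus = gen - demand
        if surplus >= 0:                # charge (limited by capacity)
            soc = min(batt_kwh, soc + surplus * eff_charge)
        else:                           # discharge down to the SOC floor
            available = (soc - soc_min * batt_kwh) * eff_discharge
            served = min(-surplus, max(available, 0.0))
            soc -= served / eff_discharge
            unmet += (-surplus - served)
    return unmet / np.sum(load_kw)

rng = np.random.default_rng(0)
hours = 24 * 365
pv_cf = np.clip(np.sin(np.linspace(0, 2 * np.pi * 365, hours)), 0, None) * 0.8
wt_cf = rng.uniform(0.0, 0.6, hours)
load = 5.0 + 2.0 * rng.random(hours)
print("LPSP:", lpsp(pv_kw=12, wt_kw=8, batt_kwh=40, pv_cf=pv_cf, wt_cf=wt_cf, load_kw=load))
```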

  6. Study on light weight design of truss structures of spacecrafts

    NASA Astrophysics Data System (ADS)

    Zeng, Fuming; Yang, Jianzhong; Wang, Jian

    2015-08-01

    Truss structure is usually adopted as the main structure form for spacecrafts due to its high efficiency in supporting concentrated loads. Light-weight design is now becoming the primary concern during conceptual design of spacecrafts. Implementation of light-weight design on truss structure always goes through three processes: topology optimization, size optimization and composites optimization. During each optimization process, appropriate algorithm such as the traditional optimality criterion method, mathematical programming method and the intelligent algorithms which simulate the growth and evolution processes in nature will be selected. According to the practical processes and algorithms, combined with engineering practice and commercial software, summary is made for the implementation of light-weight design on truss structure for spacecrafts.

  7. Evaluation of various parameters of calcium-alginate immobilization method for enhanced alkaline protease production by Bacillus licheniformis NCIM-2042 using statistical methods.

    PubMed

    Potumarthi, Ravichandra; Subhakar, Ch; Pavani, A; Jetty, Annapurna

    2008-04-01

    Calcium-alginate immobilization for the production of alkaline protease by Bacillus licheniformis NCIM-2042 was optimized statistically. Four variables, namely sodium-alginate concentration, calcium chloride concentration, inoculum size and agitation speed, were optimized by a 2⁴ full factorial central composite design with subsequent analysis and model validation using a second-order regression equation. Eleven carbon, eleven organic nitrogen and seven inorganic nitrogen sources were screened by a two-level Plackett-Burman design for maximum alkaline protease production under the optimized immobilization conditions. The levels of the four variables, Na-alginate 2.78%; CaCl₂ 2.15%; inoculum size 8.10%; and agitation 139 rpm, were found to be optimal for maximal protease production. Glucose, soybean meal and ammonium sulfate resulted in maximum protease production of 644 U/ml, 720 U/ml, and 806 U/ml when screened as carbon, organic nitrogen and inorganic nitrogen sources, respectively, using the optimized immobilization conditions. Repeated fed-batch operation under the optimized immobilization conditions allowed continuous operation for 12 cycles without disintegration of the beads. Cross-sectional scanning electron microscope images showed the growth pattern of B. licheniformis in Ca-alginate immobilized beads.

  8. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: I. Model implementation

    NASA Astrophysics Data System (ADS)

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-01

    We explore optimization methods for planning the placement, sizing and operations of flexible alternating current transmission system (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to series compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l1 norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of linear programs (LP) that are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop—a speed that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically sized networks that suffer congestion from a range of causes, including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically sized network.
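
    The l1-norm objective in the master problem has a standard linear-programming reformulation, which the sketch below applies to a random stand-in for the line-flow constraints (the real constraints come from power-flow sensitivities to line inductance, which are not modeled here). The sparsity of the resulting modification vector mirrors the sparse placements reported above.

```python
# Sketch: the l1-norm reformulation at the heart of the SC placement problem,
# posed as a plain linear program. The "network" data are random stand-ins.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n_lines = 12
A = rng.normal(size=(5, n_lines))        # sensitivity of 5 constrained flows to inductance changes
b = -rng.uniform(0.5, 1.5, size=5)       # required flow reductions (A @ x <= b)

# minimize ||x||_1 s.t. A x <= b  ->  introduce t >= |x|:
#   minimize sum(t)  s.t.  A x <= b,  x - t <= 0,  -x - t <= 0
c = np.concatenate([np.zeros(n_lines), np.ones(n_lines)])
I = np.eye(n_lines)
A_ub = np.block([[A, np.zeros((5, n_lines))],
                 [I, -I],
                 [-I, -I]])
b_ub = np.concatenate([b, np.zeros(2 * n_lines)])
bounds = [(None, None)] * n_lines + [(0, None)] * n_lines

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
x = res.x[:n_lines]
print("lines modified:", np.flatnonzero(np.abs(x) > 1e-6), "objective:", res.fun)
```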

  9. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: I. Model implementation

    DOE PAGES

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-24

    We explore optimization methods for planning the placement, sizing and operations of Flexible Alternating Current Transmission System (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to Series Compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l1 norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of Linear Programs (LP) which are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop—a speed-up that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically-sized networks that suffer congestion from a range of causes including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically-sized network.

  10. Sensitive and molecular size-selective detection of proteins using a chip-based and heteroliganded gold nanoisland by localized surface plasmon resonance spectroscopy

    NASA Astrophysics Data System (ADS)

    Hong, Surin; Lee, Suseung; Yi, Jongheop

    2011-04-01

    A highly sensitive and molecular size-selective method for the detection of proteins using heteroliganded gold nanoislands and localized surface plasmon resonance (LSPR) is described. Two different heteroligands with different chain lengths (3-mercaptopropionic acid and decanethiol) were used in fabricating nanoholes for the size-dependent separation of a protein in comparison with its aggregate. Their ratios on the gold nanoislands were optimized for the sensitive detection of superoxide dismutase (SOD1). This protein has been implicated in the pathology of amyotrophic lateral sclerosis (ALS). Upon exposure of the optimized gold nanoisland to a solution of SOD1 and aggregates thereof, changes in the LSPR spectra were observed which are attributed to the size-selective and covalent chemical binding of SOD1 to the nanoholes. With a lower detection limit of 1.0 ng/ml, the method can be used to selectively detect SOD1 in the presence of aggregates at the molecular level.

  11. Achieving optimal growth: lessons from simple metabolic modules

    NASA Astrophysics Data System (ADS)

    Goyal, Sidhartha; Chen, Thomas; Wingreen, Ned

    2009-03-01

    Metabolism is a universal property of living organisms. While the metabolic network itself has been well characterized, the logic of its regulation remains largely mysterious. Recent work has shown that growth rates of microorganisms, including the bacterium Escherichia coli, correlate well with optimal growth rates predicted by flux-balance analysis (FBA), a constraint-based computational method. How difficult is it for cells to achieve optimal growth? Our analysis of representative metabolic modules drawn from real metabolism shows that, in all cases, simple feedback inhibition allows nearly optimal growth. Indeed, product-feedback inhibition is found in every biosynthetic pathway and constitutes about 80% of metabolic regulation. However, we find that product-feedback systems designed to approach optimal growth necessarily produce large pool sizes of metabolites, with potentially detrimental effects on cells via toxicity and osmotic imbalance. Interestingly, the sizes of metabolite pools can be strongly restricted if the feedback inhibition is ultrasensitive (i.e. with high Hill coefficient). The need for ultrasensitive mechanisms to limit pool sizes may therefore explain some of the ubiquitous, puzzling complexity found in metabolic feedback regulation at both the transcriptional and post-transcriptional levels.

  12. Exploiting Size-Dependent Drag and Magnetic Forces for Size-Specific Separation of Magnetic Nanoparticles

    PubMed Central

    Rogers, Hunter B.; Anani, Tareq; Choi, Young Suk; Beyers, Ronald J.; David, Allan E.

    2015-01-01

    Realizing the full potential of magnetic nanoparticles (MNPs) in nanomedicine requires the optimization of their physical and chemical properties. Elucidation of the effects of these properties on clinical diagnostic or therapeutic properties, however, requires the synthesis or purification of homogenous samples, which has proved to be difficult. While initial simulations indicated that size-selective separation could be achieved by flowing magnetic nanoparticles through a magnetic field, subsequent in vitro experiments were unable to reproduce the predicted results. Magnetic field-flow fractionation, however, was found to be an effective method for the separation of polydisperse suspensions of iron oxide nanoparticles with diameters greater than 20 nm. While similar methods have been used to separate magnetic nanoparticles before, no previous work has been done with magnetic nanoparticles between 20 and 200 nm. Both transmission electron microscopy (TEM) and dynamic light scattering (DLS) analysis were used to confirm the size of the MNPs. Further development of this work could lead to MNPs with the narrow size distributions necessary for their in vitro and in vivo optimization. PMID:26307980

  13. Design optimization and tolerance analysis of a spot-size converter for the taper-assisted vertical integration platform in InP.

    PubMed

    Tolstikhin, Valery; Saeidi, Shayan; Dolgaleva, Ksenia

    2018-05-01

    We report on the design optimization and tolerance analysis of a multistep lateral-taper spot-size converter based on indium phosphide (InP), performed using the Monte Carlo method. Being a natural fit to (and a key building block of) the regrowth-free taper-assisted vertical integration platform, such a spot-size converter enables efficient and displacement-tolerant fiber coupling to InP-based photonic integrated circuits at a wavelength of 1.31 μm. An exemplary four-step lateral-taper design is demonstrated, featuring 0.35 dB coupling loss at optimal alignment of a standard single-mode fiber, a 1 dB displacement tolerance of ≥7 μm in any direction in the facet plane, and great stability against manufacturing variances.

  14. Iterative optimizing quantization method for reconstructing three-dimensional images from a limited number of views

    DOEpatents

    Lee, H.R.

    1997-11-18

    A three-dimensional image reconstruction method comprises treating the object of interest as a group of elements with a size that is determined by the resolution of the projection data, e.g., as determined by the size of each pixel. One of the projections is used as a reference projection. A fictitious object is arbitrarily defined that is constrained by such reference projection. The method modifies the known structure of the fictitious object by comparing and optimizing its four projections to those of the unknown structure of the real object and continues to iterate until the optimization is limited by the residual sum of background noise. The method is composed of several sub-processes that acquire four projections from the real data and the fictitious object: generate an arbitrary distribution to define the fictitious object, optimize the four projections, generate a new distribution for the fictitious object, and enhance the reconstructed image. The sub-process for the acquisition of the four projections from the input real data is simply the function of acquiring the four projections from the data of the transmitted intensity. The transmitted intensity represents the density distribution, that is, the distribution of absorption coefficients through the object. 5 figs.

  15. Hierarchical multistage MCMC follow-up of continuous gravitational wave candidates

    NASA Astrophysics Data System (ADS)

    Ashton, G.; Prix, R.

    2018-05-01

    Leveraging Markov chain Monte Carlo optimization of the F statistic, we introduce a method for the hierarchical follow-up of continuous gravitational wave candidates identified by wide-parameter space semicoherent searches. We demonstrate parameter estimation for continuous wave sources and develop a framework and tools to understand and control the effective size of the parameter space, critical to the success of the method. Monte Carlo tests of simulated signals in noise demonstrate that this method is close to the theoretical optimal performance.

  16. Optimization of composite wood structural components : processing and design choices

    Treesearch

    Theodore L. Laufenberg

    1985-01-01

    Decreasing size and quality of the world's forest resources are responsible for interest in producing composite wood structural components. Process and design optimization methods are offered in this paper. Processing concepts for wood composite structural products are reviewed to illustrate manufacturing boundaries and areas of high potential. Structural...

  17. Storage Optimization of Educational System Data

    ERIC Educational Resources Information Center

    Boja, Catalin

    2006-01-01

    Methods used to minimize the size of data files are described. Indicators for measuring the size of files and databases are defined. The storage optimization process is based on selecting, from a multitude of data storage models, the one that satisfies the proposed problem objective, maximization or minimization of the optimum criterion that is…

  18. Codestream-Based Identification of JPEG 2000 Images with Different Coding Parameters

    NASA Astrophysics Data System (ADS)

    Watanabe, Osamu; Fukuhara, Takahiro; Kiya, Hitoshi

    A method of identifying JPEG 2000 images with different coding parameters, such as code-block sizes, quantization-step sizes, and resolution levels, is presented. It does not produce false-negative matches regardless of different coding parameters (compression rate, code-block size, and discrete wavelet transform (DWT) resolution levels) or quantization step sizes. This feature is not provided by conventional methods. Moreover, the proposed approach is fast because it uses the number of zero-bit-planes that can be extracted from the JPEG 2000 codestream by only parsing the header information without embedded block coding with optimized truncation (EBCOT) decoding. The experimental results revealed the effectiveness of image identification based on the new method.

  19. Computational methods for aerodynamic design using numerical optimization

    NASA Technical Reports Server (NTRS)

    Peeters, M. F.

    1983-01-01

    Five methods to increase the computational efficiency of aerodynamic design using numerical optimization, by reducing the computer time required to perform gradient calculations, are examined. The most promising method consists of drastically reducing the size of the computational domain on which aerodynamic calculations are made during gradient calculations. Since a gradient calculation requires the solution of the flow about an airfoil whose geometry was slightly perturbed from a base airfoil, the flow about the base airfoil is used to determine boundary conditions on the reduced computational domain. This method worked well in subcritical flow.

  20. Discrete size optimization of steel trusses using a refined big bang-big crunch algorithm

    NASA Astrophysics Data System (ADS)

    Hasançebi, O.; Kazemzadeh Azad, S.

    2014-01-01

    This article presents a methodology for design optimization of steel truss structures based on a refined big bang-big crunch (BB-BC) algorithm. It is shown that a standard formulation of the BB-BC algorithm occasionally falls short of producing acceptable solutions to problems from discrete size optimum design of steel trusses. A reformulation of the algorithm is proposed and implemented for design optimization of various discrete truss structures according to American Institute of Steel Construction Allowable Stress Design (AISC-ASD) specifications. Furthermore, the performance of the proposed BB-BC algorithm is compared to its standard version as well as other well-known metaheuristic techniques. The numerical results confirm the efficiency of the proposed algorithm in practical design optimization of truss structures.
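
    For readers unfamiliar with the baseline algorithm, the sketch below shows a plain big bang-big crunch loop on a continuous toy objective: scatter candidates, contract to a fitness-weighted centre, and re-scatter with a shrinking radius. The refinement proposed in the record and the mapping to discrete steel sections are not included; all constants are illustrative.

```python
# Sketch of a plain big bang-big crunch loop (continuous variables, toy objective).
# The record refines BB-BC for discrete steel-section selection; mapping the
# continuous centre to a discrete section table is omitted here.
import numpy as np

def bbbc_minimize(objective, lb, ub, n_pop=40, n_iter=150, seed=0):
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    pop = rng.uniform(lb, ub, size=(n_pop, dim))            # initial "big bang"
    best, best_val = None, np.inf
    for k in range(1, n_iter + 1):
        vals = np.array([objective(x) for x in pop])
        if vals.min() < best_val:
            best_val, best = vals.min(), pop[vals.argmin()].copy()
        # "big crunch": fitness-weighted centre of mass (best candidates dominate)
        weights = 1.0 / (vals - vals.min() + 1e-12)
        centre = (weights[:, None] * pop).sum(axis=0) / weights.sum()
        # next "big bang": scatter around the centre, shrinking with iteration k
        spread = (ub - lb) * rng.standard_normal((n_pop, dim)) / k
        pop = np.clip(centre + spread, lb, ub)
    return best, best_val

best, val = bbbc_minimize(lambda x: np.sum((x - 3.0) ** 2), lb=[0] * 5, ub=[10] * 5)
print(best, val)
```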

  1. The choice of sample size: a mixed Bayesian / frequentist approach.

    PubMed

    Pezeshk, Hamid; Nematollahi, Nader; Maroufy, Vahed; Gittins, John

    2009-04-01

    Sample size computations are largely based on frequentist or classical methods. In the Bayesian approach the prior information on the unknown parameters is taken into account. In this work we consider a fully Bayesian approach to the sample size determination problem which was introduced by Grundy et al. and developed by Lindley. This approach treats the problem as a decision problem and employs a utility function to find the optimal sample size of a trial. Furthermore, we assume that a regulatory authority, which is deciding on whether or not to grant a licence to a new treatment, uses a frequentist approach. We then find the optimal sample size for the trial by maximising the expected net benefit, which is the expected benefit of subsequent use of the new treatment minus the cost of the trial.
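
    The decision-theoretic idea, choosing the sample size that maximizes the expected benefit of subsequent use minus the trial cost, can be sketched with a simple simulation: draw the true effect from a prior, compute the probability that a frequentist licensing test succeeds at each sample size, and combine the two. The prior, monetary values, and benefit model below are invented for illustration and are not the utility function of Grundy et al. or Lindley.

```python
# Sketch: picking a trial sample size by maximizing expected net benefit, with a
# Bayesian prior on the effect and a frequentist licensing test. All monetary
# figures, the prior, and the benefit model are invented for illustration.
import numpy as np
from scipy.stats import norm

def expected_net_benefit(n_per_arm, prior_mean=0.3, prior_sd=0.2, sigma=1.0,
                         alpha=0.025, benefit_if_licensed=1e6, cost_per_patient=500.0,
                         n_draws=20000, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.normal(prior_mean, prior_sd, n_draws)        # prior draws of the true effect
    se = sigma * np.sqrt(2.0 / n_per_arm)
    z_crit = norm.ppf(1 - alpha)
    # probability the frequentist test succeeds, given each possible true effect
    p_success = 1 - norm.cdf(z_crit - theta / se)
    # benefit only accrues for a truly effective treatment that also gets licensed
    expected_benefit = np.mean(np.where(theta > 0, theta, 0.0) * p_success) * benefit_if_licensed
    return expected_benefit - cost_per_patient * 2 * n_per_arm

sizes = np.arange(20, 1001, 20)
enb = [expected_net_benefit(n) for n in sizes]
print("optimal n per arm ≈", sizes[int(np.argmax(enb))])
```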

  2. Fuzzy control based engine sizing optimization for a fuel cell/battery hybrid mini-bus

    NASA Astrophysics Data System (ADS)

    Kim, Minjin; Sohn, Young-Jun; Lee, Won-Yong; Kim, Chang-Soo

    The fuel cell/battery hybrid vehicle has attracted attention as an alternative to the conventional internal-combustion engine vehicle due to the following advantages of the fuel cell and the battery. Firstly, the fuel cell is highly efficient and eco-friendly. Secondly, the battery has a fast response to changing power demand. However, competitive efficiency is necessary for the fuel cell hybrid vehicle to successfully replace conventional vehicles. The most relevant factor affecting the overall efficiency of the hybrid fuel cell vehicle is the relative engine sizing between the fuel cell and the battery. Therefore, a design method to optimize the engine sizing of the fuel cell hybrid vehicle is proposed. The target system is a fuel cell/battery hybrid mini-bus whose power distribution is controlled based on fuzzy logic. The optimal engine sizes are determined based on the simulator developed in this paper. The simulator includes several models for the fuel cell, the battery, and the major balance of plant. After the engine sizing, the system efficiency and the stability of the power distribution are verified based on a well-known driving schedule. Consequently, the optimally designed mini-bus shows good performance.

  3. MULTI-SCALE MODELING AND APPROXIMATION ASSISTED OPTIMIZATION OF BARE TUBE HEAT EXCHANGERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bacellar, Daniel; Ling, Jiazhen; Aute, Vikrant

    2014-01-01

    Air-to-refrigerant heat exchangers are very common in air-conditioning, heat pump and refrigeration applications. In these heat exchangers, there is a great benefit in terms of size, weight, refrigerant charge and heat transfer coefficient, by moving from conventional channel sizes (~ 9mm) to smaller channel sizes (< 5mm). This work investigates new designs for air-to-refrigerant heat exchangers with tube outer diameter ranging from 0.5 to 2.0mm. The goal of this research is to develop and optimize the design of these heat exchangers and compare their performance with existing state of the art designs. The air-side performance of various tube bundle configurations is analyzed using a Parallel Parameterized CFD (PPCFD) technique. PPCFD allows for fast-parametric CFD analyses of various geometries with topology change. Approximation techniques drastically reduce the number of CFD evaluations required during optimization. The Maximum Entropy Design method is used for sampling and the Kriging method is used for metamodeling. Metamodels are developed for the air-side heat transfer coefficients and pressure drop as a function of tube-bundle dimensions and air velocity. The metamodels are then integrated with an air-to-refrigerant heat exchanger design code. This integration allows a multi-scale analysis of air-side performance of heat exchangers including air-to-refrigerant heat transfer and phase change. Overall optimization is carried out using a multi-objective genetic algorithm. The optimal designs found can exhibit 50 percent size reduction, 75 percent decrease in air side pressure drop and doubled air heat transfer coefficients compared to a high performance compact micro channel heat exchanger with the same capacity and flow rates.
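
    The Kriging (Gaussian-process) metamodeling step can be sketched with a generic GP regressor standing in for the surrogate trained on CFD samples; the geometry ranges, the fake pressure-drop function, and the kernel choice below are illustrative assumptions, not the study's models.

```python
# Sketch: a Kriging-style surrogate for an expensive CFD response, using a
# Gaussian-process regressor. Training data are random stand-ins for
# (tube-bundle geometry, air velocity) -> air-side pressure drop.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(2)

def fake_cfd_pressure_drop(X):
    """Stand-in for a CFD evaluation: dp grows with velocity, shrinks with pitch."""
    tube_d, pitch, velocity = X[:, 0], X[:, 1], X[:, 2]
    return 120 * velocity ** 1.8 * tube_d / pitch + rng.normal(0, 0.5, len(X))

# Sampled designs: tube diameter 0.5-2 mm, transverse pitch 1-4 mm, velocity 1-4 m/s
X_train = rng.uniform([0.5, 1.0, 1.0], [2.0, 4.0, 4.0], size=(60, 3))
y_train = fake_cfd_pressure_drop(X_train)

kernel = ConstantKernel(1.0) * RBF(length_scale=[1.0, 1.0, 1.0])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

X_new = np.array([[1.0, 2.5, 2.0]])
mean, std = gp.predict(X_new, return_std=True)
print(f"predicted dp ≈ {mean[0]:.1f} ± {std[0]:.1f}")   # surrogate queried inside the GA
```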

  4. Optimal Halbach Permanent Magnet Designs for Maximally Pulling and Pushing Nanoparticles

    PubMed Central

    Sarwar, A.; Nemirovski, A.; Shapiro, B.

    2011-01-01

    Optimization methods are presented to design Halbach arrays to maximize the forces applied on magnetic nanoparticles at deep tissue locations. In magnetic drug targeting, where magnets are used to focus therapeutic nanoparticles to disease locations, the sharp fall off of magnetic fields and forces with distance from the magnets has limited the depth of targeting. Creating stronger forces at depth by optimally designed Halbach arrays would allow treatment of a wider class of patients, e.g. patients with deeper tumors. The presented optimization methods are based on semi-definite quadratic programming and yield provably globally optimal Halbach designs in 2 and 3 dimensions for maximal pull or push magnetic forces (stronger pull forces can collect nanoparticles against blood forces in deeper vessels; push forces can be used to inject particles into precise locations, e.g. into the inner ear). These Halbach designs, here tested in simulations of Maxwell's equations, significantly outperform benchmark magnets of the same size and strength. For example, a 3-dimensional 36 element 2000 cm3 volume optimal Halbach design yields a ×5 greater force at a 10 cm depth compared to a uniformly magnetized magnet of the same size and strength. The designed arrays should be feasible to construct, as they have a similar strength (≤ 1 Tesla), size (≤ 2000 cm3), and number of elements (≤ 36) as previously demonstrated arrays, and retain good performance for reasonable manufacturing errors (element magnetization direction errors ≤ 5°), thus yielding practical designs to improve magnetic drug targeting treatment depths. PMID:23335834

  5. Optimal Halbach Permanent Magnet Designs for Maximally Pulling and Pushing Nanoparticles.

    PubMed

    Sarwar, A; Nemirovski, A; Shapiro, B

    2012-03-01

    Optimization methods are presented to design Halbach arrays to maximize the forces applied on magnetic nanoparticles at deep tissue locations. In magnetic drug targeting, where magnets are used to focus therapeutic nanoparticles to disease locations, the sharp fall off of magnetic fields and forces with distance from the magnets has limited the depth of targeting. Creating stronger forces at depth by optimally designed Halbach arrays would allow treatment of a wider class of patients, e.g. patients with deeper tumors. The presented optimization methods are based on semi-definite quadratic programming and yield provably globally optimal Halbach designs in 2 and 3 dimensions for maximal pull or push magnetic forces (stronger pull forces can collect nanoparticles against blood forces in deeper vessels; push forces can be used to inject particles into precise locations, e.g. into the inner ear). These Halbach designs, here tested in simulations of Maxwell's equations, significantly outperform benchmark magnets of the same size and strength. For example, a 3-dimensional 36 element 2000 cm3 volume optimal Halbach design yields a ×5 greater force at a 10 cm depth compared to a uniformly magnetized magnet of the same size and strength. The designed arrays should be feasible to construct, as they have a similar strength (≤ 1 Tesla), size (≤ 2000 cm3), and number of elements (≤ 36) as previously demonstrated arrays, and retain good performance for reasonable manufacturing errors (element magnetization direction errors ≤ 5°), thus yielding practical designs to improve magnetic drug targeting treatment depths.
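
    The paper's semi-definite quadratic programs are not reproduced here, but a much-simplified sketch conveys why element-wise magnetization directions can be optimized: if the pull force at the target is linear in each element's magnetization vector (with coupling vectors supplied by a magnetostatics model, replaced below by random placeholders) and each element is only limited in magnitude, the force is maximized by aligning every element with its own coupling vector.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_elements = 36
    m_max = 1.0  # magnetization magnitude cap per element (arbitrary units)

    # Placeholder coupling vectors: g[i] maps element i's magnetization vector to its
    # contribution to the pull force along the chosen axis at the target point.
    # In a real design these come from a magnetostatics solver, not random numbers.
    g = rng.normal(size=(n_elements, 3))

    # Force = sum_i g[i] . m[i]; with only ||m[i]|| <= m_max per element, each term is
    # maximized independently by aligning m[i] with g[i].
    m_opt = m_max * g / np.linalg.norm(g, axis=1, keepdims=True)
    f_max = float(np.sum(np.einsum("ij,ij->i", g, m_opt)))

    print("optimal element directions (first 3):\n", m_opt[:3])
    print("maximal pull force (arbitrary units):", round(f_max, 3))
    ```

    The actual designs additionally handle push forces, 3-D element layouts and other constraints, which is why the authors resort to semi-definite quadratic programming rather than this per-element alignment argument.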

  6. Optomechanical study and optimization of cantilever plate dynamics

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Pryputniewicz, Ryszard J.

    1995-06-01

    Optimum dynamic characteristics of an aluminum cantilever plate containing holes of different sizes located at arbitrary positions on the plate are studied computationally and experimentally. The objective of this optimization is the minimization/maximization of the natural frequencies of the plate in terms of design variables such as the sizes and locations of the holes. The optimization process uses the finite element method and mathematical programming techniques to obtain the natural frequencies and the optimum conditions of the plate, respectively. The modal behavior of the resultant optimal plate layout is studied experimentally through the use of holographic interferometry techniques. Comparisons of the computational and experimental results show good agreement between theory and test. The comparisons also show that the combined, or hybrid, use of experimental and computational techniques is complementary and proves to be a very efficient tool for performing optimization studies of mechanical components.

  7. Modified dwell time optimization model and its applications in subaperture polishing.

    PubMed

    Dong, Zhichao; Cheng, Haobo; Tam, Hon-Yuen

    2014-05-20

    The optimization of dwell time is an important procedure in deterministic subaperture polishing. We present a modified dwell time optimization model based on an iterative numerical method, assisted by extended surface forms and tool paths for suppressing the edge effect. Compared with discrete convolution and linear equation models, the proposed model is inherently compatible with arbitrary tool paths, multiple tool influence functions (TIFs) in one optimization, and asymmetric TIFs. Simulated fabrication of a Φ200 mm workpiece with the proposed model yields a smooth, continuous, and non-negative dwell time map with a root-mean-square (RMS) convergence rate of 99.6%, and the optimization takes much less time. Using the proposed model, the influences of TIF size and path interval on convergence rate and polishing time are optimized, respectively, for typical low and middle spatial-frequency errors. Results show that (1) the TIF size is nonlinearly and inversely related to the convergence rate and polishing time, and a TIF size of ~1/7 of the workpiece size is preferred; (2) the polishing time is less sensitive to the path interval, but increasing the interval markedly reduces the convergence rate, and a path interval of ~1/8-1/10 of the TIF size is appropriate. The proposed model is deployed on JR-1800 and MRF-180 machines. Figuring a Φ920 mm Zerodur paraboloid and a Φ100 mm Zerodur plane with these machines yields RMS errors of 0.016λ and 0.013λ (λ=632.8 nm), respectively, thereby validating the feasibility of the proposed dwell time model for subaperture polishing.
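
    The model above is iterative and path-aware; as a generic illustration of the underlying deconvolution problem, the sketch below computes a non-negative 1-D dwell time map by non-negative least squares, using a Gaussian tool influence function and a made-up removal target rather than the paper's TIFs or surfaces.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # 1-D illustration: desired material removal profile along a scan line (waves)
    x = np.linspace(-50, 50, 201)                    # surface coordinate (mm)
    desired_removal = 0.5 + 0.3 * np.cos(2 * np.pi * x / 40)

    # Gaussian tool influence function (removal rate per unit dwell time)
    sigma_tif = 5.0                                  # mm, placeholder TIF size
    tif = np.exp(-0.5 * ((x[:, None] - x[None, :]) / sigma_tif) ** 2)
    tif *= 0.01                                      # peak removal rate (waves / s)

    # Dwell time map: removal = tif @ t, with t >= 0 enforced by NNLS
    t, residual = nnls(tif, desired_removal)

    achieved = tif @ t
    rms_before = np.std(desired_removal)
    rms_after = np.std(desired_removal - achieved)
    print(f"total dwell time: {t.sum():.1f} s, "
          f"RMS convergence: {100 * (1 - rms_after / rms_before):.1f}%")
    ```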

  8. Development of Quadratic Programming Algorithm Based on Interior Point Method with Estimation Mechanism of Active Constraints

    NASA Astrophysics Data System (ADS)

    Hashimoto, Hiroyuki; Takaguchi, Yusuke; Nakamura, Shizuka

    Instability of the calculation process and growth in calculation time with increasing problem size remain the major issues to be solved before continuous optimization techniques can be applied to practical industrial systems. This paper proposes an enhanced quadratic programming algorithm based on the interior point method, aimed mainly at improving calculation stability. The proposed method has a dynamic estimation mechanism for active constraints on variables, which fixes variables that approach their upper/lower limits and later releases them as needed during the optimization process. It can be viewed as an algorithm-level integration of the active-set solution strategy into the interior point method framework. We describe numerical results on the commonly used “CUTEr” benchmark problems to show the effectiveness of the proposed method. Furthermore, test results on a large ELD problem (Economic Load Dispatch in electric power supply scheduling) are also described as a practical industrial application.

  9. Multiple-hopping trajectories near a rotating asteroid

    NASA Astrophysics Data System (ADS)

    Shen, Hong-Xin; Zhang, Tian-Jiao; Li, Zhao; Li, Heng-Nian

    2017-03-01

    We present a study of the transfer orbits connecting landing points of irregular-shaped asteroids. The landing points do not touch the surface of the asteroids and are chosen several meters above the surface. The ant colony optimization technique is used to calculate the multiple-hopping trajectories near an arbitrary irregular asteroid. This new method has three steps which are as follows: (1) the search of the maximal clique of candidate target landing points; (2) leg optimization connecting all landing point pairs; and (3) the hopping sequence optimization. In particular this method is applied to asteroids 433 Eros and 216 Kleopatra. We impose a critical constraint on the target landing points to allow for extensive exploration of the asteroid: the relative distance between all the arrived target positions should be larger than a minimum allowed value. Ant colony optimization is applied to find the set and sequence of targets, and the differential evolution algorithm is used to solve for the hopping orbits. The minimum-velocity increment tours of hopping trajectories connecting all the landing positions are obtained by ant colony optimization. The results from different size asteroids indicate that the cost of the minimum velocity-increment tour depends on the size of the asteroids.

  10. A systematic approach to designing statistically powerful heteroscedastic 2 × 2 factorial studies while minimizing financial costs.

    PubMed

    Jan, Show-Li; Shieh, Gwowen

    2016-08-31

    The 2 × 2 factorial design is widely used for assessing the existence of interaction and the extent of generalizability of two factors, each with only two levels. Accordingly, research problems associated with the main effects and interaction effects can be analyzed with the selected linear contrasts. To correct for potential heterogeneity of the variance structure, the Welch-Satterthwaite test is commonly used as an alternative to the t test for detecting the substantive significance of a linear combination of mean effects. This study concerns the optimal allocation of group sizes for the Welch-Satterthwaite test in order to minimize the total cost while maintaining adequate power. The existing method suggests that the optimal ratio of sample sizes is proportional to the ratio of the population standard deviations divided by the square root of the ratio of the unit sampling costs. Instead, a systematic approach using an optimization technique and a screening search is presented to find the optimal solution. Numerical assessments revealed that the current allocation scheme generally does not give the optimal solution. The suggested approaches to power and sample size calculations give accurate and superior results under various treatment and cost configurations. The proposed approach improves upon the current method in both its methodological soundness and overall performance. Supplementary algorithms are also developed to aid the usefulness and implementation of the recommended technique in planning 2 × 2 factorial designs.
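
    For orientation, the snippet below evaluates the existing cost-based allocation rule quoted above, n1/n2 = (sigma1/sigma2) * sqrt(c2/c1), together with the resulting Welch-Satterthwaite degrees of freedom. The standard deviations, unit costs, and total sample size are made-up numbers, and the study's own optimization and screening search are not reproduced.

    ```python
    import math

    def heuristic_allocation(sigma1, sigma2, c1, c2, n_total):
        """Existing cost-based allocation: n1/n2 = (sigma1/sigma2) * sqrt(c2/c1)."""
        ratio = (sigma1 / sigma2) * math.sqrt(c2 / c1)
        n2 = n_total / (1 + ratio)
        n1 = n_total - n2
        return n1, n2

    # Illustrative values: group 1 is noisier but cheaper to sample
    sigma1, sigma2 = 12.0, 6.0
    c1, c2 = 10.0, 40.0          # unit sampling costs
    n1, n2 = heuristic_allocation(sigma1, sigma2, c1, c2, n_total=120)
    print(f"n1 = {n1:.1f}, n2 = {n2:.1f}, total cost = {c1 * n1 + c2 * n2:.0f}")

    # Welch-Satterthwaite degrees of freedom for the resulting design
    v1, v2 = sigma1 ** 2 / n1, sigma2 ** 2 / n2
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    print(f"Welch-Satterthwaite df = {df:.1f}")
    ```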

  11. Optimized spray drying process for preparation of one-step calcium-alginate gel microspheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popeski-Dimovski, Riste

    Calcium-alginate microparticles have been used extensively in drug delivery systems. We therefore establish a one-step method for the preparation of internally gelated microparticles with spherical shape and narrow size distribution. Four types of alginate with different G/M ratios and molar weights are used. The size of the particles is measured using light diffraction and scanning electron microscopy. Measurements showed that, with this method, microparticles with a size distribution around 4 micrometers can be prepared, and SEM imaging showed that the particles are spherical in shape.

  12. A Real Options Approach to Quantity and Cost Optimization for Lifetime and Bridge Buys of Parts

    DTIC Science & Technology

    2015-04-30

    fixed EOS of 40 years and a fixed WACC of 3%, decreases to a minimum and then increases. The minimum of this curve gives the optimum buy size for... considered in both analyses. For a 3% WACC, as illustrated in Figure 9(a), the DES method gives an optimum buy size range of 2,923–3,191 with an average... Hence, both methods are consistent in determining the optimum lifetime/bridge buy size. To further verify this consistency, other WACC values

  13. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.

    PubMed

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost of collecting flow information in Software Defined Networks (SDN). Since the flow-based information collecting method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose to jointly optimize flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable for large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme.
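
    A toy version of the selection part of such an ILP can be written with the PuLP package: choose a minimum-cost set of polling switches so that every flow is observed by at least one polled switch on its route. The routes and costs below are made up, and the routing variables of the actual model are omitted.

    ```python
    import pulp

    # Toy instance: flows are given as the sets of switches on their (fixed) routes
    flows = {
        "f1": {"s1", "s2", "s3"},
        "f2": {"s2", "s4"},
        "f3": {"s3", "s4", "s5"},
        "f4": {"s1", "s5"},
    }
    poll_cost = {"s1": 4, "s2": 3, "s3": 5, "s4": 2, "s5": 3}  # per-switch polling cost

    prob = pulp.LpProblem("polling_switch_selection", pulp.LpMinimize)
    x = {s: pulp.LpVariable(f"poll_{s}", cat="Binary") for s in poll_cost}

    # Objective: total polling (communication) cost
    prob += pulp.lpSum(poll_cost[s] * x[s] for s in poll_cost)

    # Every flow must be observed by at least one polled switch on its route
    for f, route in flows.items():
        prob += pulp.lpSum(x[s] for s in route) >= 1, f"cover_{f}"

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    chosen = [s for s in poll_cost if x[s].value() > 0.5]
    print("polled switches:", chosen, "cost:", pulp.value(prob.objective))
    ```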

  14. Validation of the Gatortail method for accurate sizing of pulmonary vessels from 3D medical images.

    PubMed

    O'Dell, Walter G; Gormaley, Anne K; Prida, David A

    2017-12-01

    Detailed characterization of changes in vessel size is crucial for the diagnosis and management of a variety of vascular diseases. Because clinical measurement of vessel size is typically dependent on the radiologist's subjective interpretation of the vessel borders, it is often prone to high inter- and intra-user variability. Automatic methods of vessel sizing have been developed for two-dimensional images but a fully three-dimensional (3D) method suitable for vessel sizing from volumetric X-ray computed tomography (CT) or magnetic resonance imaging has heretofore not been demonstrated and validated robustly. In this paper, we refined and objectively validated Gatortail, a method that creates a mathematical geometric 3D model of each branch in a vascular tree, simulates the appearance of the virtual vascular tree in a 3D CT image, and uses the similarity of the simulated image to a patient's CT scan to drive the optimization of the model parameters, including vessel size, to match that of the patient. The method was validated with a 2-dimensional virtual tree structure under deformation, and with a realistic 3D-printed vascular phantom in which the diameters of 64 branches were manually measured 3 times each. The phantom was then scanned on a conventional clinical CT imaging system and the images processed with the in-house software to automatically segment and mathematically model the vascular tree, label each branch, and perform the Gatortail optimization of branch size and trajectory. Previously proposed methods of vessel sizing using matched Gaussian filters and tubularity metrics were also tested. The Gatortail method was then demonstrated on the pulmonary arterial tree segmented from a human volunteer's CT scan. The standard deviation of the difference between the manually measured and Gatortail-based radii in the 3D physical phantom was 0.074 mm (0.087 in-plane pixel units for image voxels of dimension 0.85 × 0.85 × 1.0 mm) over the 64 branches, representing vessel diameters ranging from 1.2 to 7 mm. The linear regression fit gave a slope of 1.056 and an R² value of 0.989. These three metrics reflect superior agreement of the radii estimates relative to previously published results over all sizes tested. Sizing via matched Gaussian filters resulted in size underestimates of >33% over all three test vessels, while the tubularity-metric matching exhibited a sizing uncertainty of >50%. In the human chest CT data set, the vessel voxel intensity profiles with and without branch model optimization showed excellent agreement and improvement in the objective measure of image similarity. Gatortail has been demonstrated to be an automated, objective, accurate and robust method for sizing of vessels in 3D non-invasively from chest CT scans. We anticipate that Gatortail, an image-based approach to automatically compute estimates of blood vessel radii and trajectories from 3D medical images, will facilitate future quantitative evaluation of vascular response to disease and environmental insult and improve understanding of the biological mechanisms underlying vascular disease processes. © 2017 American Association of Physicists in Medicine.

  15. Implementation of Particle Swarm Optimization Method for Voltage Stability Analysis in 150 kV Sub System Grati – Paiton East Java

    NASA Astrophysics Data System (ADS)

    Kusumaningtyas, A. B.; Hidayat, M. N.; Ronilaya, F.

    2018-04-01

    According to data from the State Electric Company on 15 January 2013, the undistributed power in the 150 kV Grati-Paiton sub-system of Region IV, which consists of 26 buses at 150 kV and 2 generation buses at 500 kV, was recorded as 3.286,00 MW. At the same time, the frequency of the system dropped to 49 Hz. This led to a generation deficit and unstable voltage conditions in the system. The Fast Voltage Stability Index (FVSI) method is used in this research to analyze the voltage stability of the buses. For buses with unstable voltage conditions, reactive power is injected through capacitor installation. The site where the capacitor will be installed is determined using the FVSI method, while the size of the capacitor is determined using the Particle Swarm Optimization (PSO) method. The PSO method has been applied in earlier studies, for example to determine optimal placement and sizing in radial distribution networks as well as in transmission networks. In this research, the PSO method is used to find the Qloss of an interconnected transmission system, and the value of Qloss is then used to determine the capacitance of the capacitor needed by the system.
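
    A compact particle swarm optimization sketch of the kind used for capacitor sizing is given below. The objective is a stand-in for the network's reactive power loss Qloss, which in the study would come from a power-flow calculation, and the bounds and PSO constants are illustrative.

    ```python
    import numpy as np

    def pso(objective, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal particle swarm optimizer over box bounds [(lo, hi), ...]."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds).T
        dim = len(bounds)
        x = rng.uniform(lo, hi, size=(n_particles, dim))     # positions
        v = np.zeros_like(x)                                  # velocities
        pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
        g = pbest[np.argmin(pbest_val)].copy()                # global best
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            vals = np.array([objective(p) for p in x])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            g = pbest[np.argmin(pbest_val)].copy()
        return g, pbest_val.min()

    # Placeholder objective standing in for the network's reactive power loss Qloss(Qc)
    qloss = lambda qc: (qc[0] - 12.5) ** 2 + 0.2 * qc[0]      # Mvar -> loss proxy
    best_qc, best_loss = pso(qloss, bounds=[(0.0, 50.0)])
    print(f"optimal capacitor size ~ {best_qc[0]:.2f} Mvar, proxy loss {best_loss:.3f}")
    ```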

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Batchelor, D.B.; Carreras, B.A.; Hirshman, S.P.

    Significant progress has been made in the development of new modest-size compact stellarator devices that could test optimization principles for the design of a more attractive reactor. These are 3 and 4 field period low-aspect-ratio quasi-omnigenous (QO) stellarators based on an optimization method that targets improved confinement, stability, ease of coil design, low-aspect-ratio, and low bootstrap current.

  17. Robustness-Based Design Optimization Under Data Uncertainty

    NASA Technical Reports Server (NTRS)

    Zaman, Kais; McDonald, Mark; Mahadevan, Sankaran; Green, Lawrence

    2010-01-01

    This paper proposes formulations and algorithms for design optimization under both aleatory (i.e., natural or physical variability) and epistemic uncertainty (i.e., imprecise probabilistic information), from the perspective of system robustness. The proposed formulations deal with epistemic uncertainty arising from both sparse and interval data without any assumption about the probability distributions of the random variables. A decoupled approach is proposed in this paper to un-nest the robustness-based design from the analysis of non-design epistemic variables to achieve computational efficiency. The proposed methods are illustrated for the upper stage design problem of a two-stage-to-orbit (TSTO) vehicle, where the information on the random design inputs is only available as sparse point and/or interval data. As collecting more data reduces uncertainty but increases cost, the effect of sample size on the optimality and robustness of the solution is also studied. A method is developed to determine the optimal sample size for sparse point data that leads to solutions of the design problem that are least sensitive to variations in the input random variables.

  18. Stand-alone hybrid wind-photovoltaic power generation systems optimal sizing

    NASA Astrophysics Data System (ADS)

    Crǎciunescu, Aurelian; Popescu, Claudia; Popescu, Mihai; Florea, Leonard Marin

    2013-10-01

    Wind and photovoltaic energy resources have attracted the energy sector to large-scale power generation. A drawback common to these options is their unpredictable nature and dependence on time of day and meteorological conditions. Fortunately, the problems caused by the variable nature of these resources can be partially overcome by integrating the two resources in a proper combination, using the strengths of one source to overcome the weaknesses of the other. Hybrid systems that combine wind and solar generating units with battery backup can attenuate their individual fluctuations and match the power requirements of the beneficiaries. In order to utilize the hybrid energy system efficiently and economically, an optimal sizing method is necessary. The literature offers a variety of methods for multi-objective optimal design of hybrid wind/photovoltaic (WG/PV) generating systems, among the most recent being genetic algorithms (GA) and particle swarm optimization (PSO). In this paper, mathematical models of the hybrid WG/PV components and a short description of the most recently proposed multi-objective optimization algorithms are given.

  19. Ensemble Learning Method for Hidden Markov Models

    DTIC Science & Technology

    2014-12-01

    Ensemble HMM landmine detector: Mine signatures vary according to the mine type, mine size, and burial depth. Similarly, clutter signatures vary with soil ... approaches for the different K groups depending on their size and homogeneity. In particular, we investigate the maximum likelihood (ML), the minimum ... propose using and optimizing various training approaches for the different K groups depending on their size and homogeneity. In particular, we

  20. SEEK: A FORTRAN optimization program using a feasible directions gradient search

    NASA Technical Reports Server (NTRS)

    Savage, M.

    1995-01-01

    This report describes the use of computer program 'SEEK' which works in conjunction with two user-written subroutines and an input data file to perform an optimization procedure on a user's problem. The optimization method uses a modified feasible directions gradient technique. SEEK is written in ANSI standard Fortran 77, has an object size of about 46K bytes, and can be used on a personal computer running DOS. This report describes the use of the program and discusses the optimizing method. The program use is illustrated with four example problems: a bushing design, a helical coil spring design, a gear mesh design, and a two-parameter Weibull life-reliability curve fit.

  1. Multidisciplinary optimization of a controlled space structure using 150 design variables

    NASA Technical Reports Server (NTRS)

    James, Benjamin B.

    1993-01-01

    A controls-structures interaction design method is presented. The method coordinates standard finite-element structural analysis, multivariable controls, and nonlinear programming codes and allows simultaneous optimization of the structure and control system of a spacecraft. Global sensitivity equations are used to account for coupling between the disciplines. Use of global sensitivity equations helps solve optimization problems that have a large number of design variables and a high degree of coupling between disciplines. The preliminary design of a generic geostationary platform is used to demonstrate the multidisciplinary optimization method. Design problems using 15, 63, and 150 design variables to optimize truss member sizes and feedback gain values are solved and the results are presented. The goal is to reduce the total mass of the structure and the vibration control system while satisfying constraints on vibration decay rate. Incorporation of the nonnegligible mass of actuators causes an essential coupling between structural design variables and control design variables.
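
    For reference, the global sensitivity equations mentioned above can be written, for two coupled disciplines with outputs Y1 = f1(X, Y2) and Y2 = f2(X, Y1) and design variables X, as the following linear system for the total derivatives. This is the standard generic two-discipline form usually attributed to Sobieszczanski-Sobieski, not the paper's specific structural and control variables.

    ```latex
    % Generic two-discipline form of the global sensitivity equations (GSE);
    % the partial derivatives are computed within each discipline separately.
    \[
    \begin{bmatrix}
     I & -\dfrac{\partial f_1}{\partial Y_2} \\[8pt]
     -\dfrac{\partial f_2}{\partial Y_1} & I
    \end{bmatrix}
    \begin{bmatrix}
     \dfrac{\mathrm{d}Y_1}{\mathrm{d}X} \\[8pt]
     \dfrac{\mathrm{d}Y_2}{\mathrm{d}X}
    \end{bmatrix}
    =
    \begin{bmatrix}
     \dfrac{\partial f_1}{\partial X} \\[8pt]
     \dfrac{\partial f_2}{\partial X}
    \end{bmatrix}
    \]
    ```

    Solving this system yields the coupled total derivatives needed by the nonlinear programming code while only requiring discipline-level partial sensitivities.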

  2. Efficiencies of joint non-local update moves in Monte Carlo simulations of coarse-grained polymers

    NASA Astrophysics Data System (ADS)

    Austin, Kieran S.; Marenz, Martin; Janke, Wolfhard

    2018-03-01

    In this study four update methods are compared in their performance in a Monte Carlo simulation of polymers in continuum space. The efficiencies of the update methods and combinations thereof are compared with the aid of the autocorrelation time with a fixed (optimal) acceptance ratio. Results are obtained for polymer lengths N = 14, 28 and 42 and temperatures below, at and above the collapse transition. In terms of autocorrelation, the optimal acceptance ratio is approximately 0.4. Furthermore, an overview of the step sizes of the update methods that correspond to this optimal acceptance ratio is given. This shall serve as a guide for future studies that rely on efficient computer simulations.
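
    An integrated autocorrelation time of the kind used to compare the update moves can be estimated from any measured time series (for example, the polymer's end-to-end distance). The sketch below uses an FFT-based autocorrelation estimate and a self-consistent truncation window, which is a common heuristic and not necessarily the estimator used in the study.

    ```python
    import numpy as np

    def integrated_autocorr_time(series, c=6.0):
        """Estimate tau_int = 1/2 + sum_t rho(t), truncating the sum with the
        self-consistent window W >= c * tau_int (a common heuristic)."""
        x = np.asarray(series, dtype=float)
        x = x - x.mean()
        n = len(x)
        # Autocorrelation function via FFT with zero padding
        f = np.fft.rfft(x, n=2 * n)
        acf = np.fft.irfft(f * np.conjugate(f))[:n].real
        acf /= acf[0]
        tau = 0.5
        for t in range(1, n):
            tau += acf[t]
            if t >= c * tau:          # self-consistent truncation window
                break
        return tau

    # Example: an AR(1) chain with known tau_int = (1 + a) / (2 * (1 - a))
    rng = np.random.default_rng(0)
    a, n = 0.9, 200_000
    noise = rng.normal(size=n)
    chain = np.empty(n)
    chain[0] = noise[0]
    for i in range(1, n):
        chain[i] = a * chain[i - 1] + noise[i]
    print("estimated tau_int:", round(integrated_autocorr_time(chain), 2),
          "exact:", round((1 + a) / (2 * (1 - a)), 2))
    ```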

  3. Application of the LSQR algorithm in non-parametric estimation of aerosol size distribution

    NASA Astrophysics Data System (ADS)

    He, Zhenzong; Qi, Hong; Lew, Zhongyuan; Ruan, Liming; Tan, Heping; Luo, Kun

    2016-05-01

    Based on the Least Squares QR decomposition (LSQR) algorithm, the aerosol size distribution (ASD) is retrieved with a non-parametric approach. The direct problem is solved by the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer Law. An optimal wavelength selection method is developed to improve the retrieval accuracy of the ASD. The optimal wavelength set is selected so as to make the measurement signals sensitive to wavelength and to reduce the ill-conditioning of the coefficient matrix of the linear system, thereby enhancing the anti-interference ability of the retrieval results. Two common kinds of monomodal and bimodal ASDs, the log-normal (L-N) and Gamma distributions, are estimated, respectively. Numerical tests show that the LSQR algorithm can be successfully applied to retrieve the ASD with high stability in the presence of random noise and low susceptibility to the shape of the distributions. Finally, the experimentally measured ASD over Harbin, China, is recovered reasonably well. All the results confirm that the LSQR algorithm combined with the optimal wavelength selection method is an effective and reliable technique for non-parametric estimation of the ASD.
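
    The retrieval step can be illustrated generically: discretize the size distribution, build the kernel matrix of the forward model, and solve the ill-posed linear system with a damped LSQR call. The kernel below is a smooth placeholder rather than the ADA/Lambert-Beer kernel of the paper, and the damping value is illustrative.

    ```python
    import numpy as np
    from scipy.sparse.linalg import lsqr

    rng = np.random.default_rng(0)

    # Discretized particle radii (abscissae of the size distribution) and wavelengths
    r = np.linspace(0.1, 2.0, 60)          # micrometres
    lam = np.linspace(0.4, 1.2, 20)        # micrometres

    # Placeholder smooth kernel standing in for the ADA extinction efficiency
    K = np.exp(-((lam[:, None] - 1.5 * r[None, :]) ** 2) / 0.5) * r[None, :] ** 2

    # True (log-normal) size distribution and noisy synthetic measurements
    true_asd = np.exp(-0.5 * ((np.log(r) - np.log(0.6)) / 0.35) ** 2)
    b = K @ true_asd
    b_noisy = b * (1 + 0.01 * rng.standard_normal(b.size))   # 1% random noise

    # Damped LSQR solution of the ill-posed system K x ~ b
    x = lsqr(K, b_noisy, damp=1e-3)[0]

    err = np.linalg.norm(x - true_asd) / np.linalg.norm(true_asd)
    print(f"relative retrieval error: {100 * err:.1f}%")
    ```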

  4. Development of the hard and soft constraints based optimisation model for unit sizing of the hybrid renewable energy system designed for microgrid applications

    NASA Astrophysics Data System (ADS)

    Sundaramoorthy, Kumaravel

    2017-02-01

    Electricity generation based on hybrid energy systems (HESs) has become an attractive solution for rural electrification. Economically feasible and technically reliable HESs are solidly based on an optimisation stage. This article discusses an optimal unit sizing model whose objective function minimises the total cost of the HES. Three typical rural sites from the southern part of India have been selected for the application of the developed optimisation methodology. Feasibility studies and sensitivity analysis of the optimal HES are discussed elaborately in this article. A comparison has been carried out with the Hybrid Optimization Model for Electric Renewable for the three sites. The optimal HES is found to have a lower total net present rate and rate of energy compared with the existing method.

  5. Multidisciplinary optimization of controlled space structures with global sensitivity equations

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; James, Benjamin B.; Graves, Philip C.; Woodard, Stanley E.

    1991-01-01

    A new method for the preliminary design of controlled space structures is presented. The method coordinates standard finite element structural analysis, multivariable controls, and nonlinear programming codes and allows simultaneous optimization of the structures and control systems of a spacecraft. Global sensitivity equations are a key feature of this method. The preliminary design of a generic geostationary platform is used to demonstrate the multidisciplinary optimization method. Fifteen design variables are used to optimize truss member sizes and feedback gain values. The goal is to reduce the total mass of the structure and the vibration control system while satisfying constraints on vibration decay rate. Incorporating the nonnegligible mass of actuators causes an essential coupling between structural design variables and control design variables. The solution of the demonstration problem is an important step toward a comprehensive preliminary design capability for structures and control systems. Use of global sensitivity equations helps solve optimization problems that have a large number of design variables and a high degree of coupling between disciplines.

  6. Fast optimization of binary clusters using a novel dynamic lattice searching method.

    PubMed

    Wu, Xia; Cheng, Wen

    2014-09-28

    Global optimization of binary clusters has been a difficult task despite much effort and many efficient methods. To address the two types of elements in binary clusters (i.e., the homotop problem), two classes of virtual dynamic lattices are constructed and a modified dynamic lattice searching (DLS) method, i.e., the binary DLS (BDLS) method, is developed. However, it was found that the BDLS can only be utilized for the optimization of binary clusters of small sizes because the homotop problem is hard to solve without an atomic exchange operation. Therefore, the iterated local search (ILS) method is adopted to solve the homotop problem and an efficient method based on the BDLS method and ILS, named BDLS-ILS, is presented for global optimization of binary clusters. In order to assess the efficiency of the proposed method, binary Lennard-Jones clusters with up to 100 atoms are investigated. Results show that the method is efficient. Furthermore, the BDLS-ILS method is also adopted to study the geometrical structures of (AuPd)79 clusters with DFT-fitted parameters of the Gupta potential.

  7. OpenMDAO: Framework for Flexible Multidisciplinary Design, Analysis and Optimization Methods

    NASA Technical Reports Server (NTRS)

    Heath, Christopher M.; Gray, Justin S.

    2012-01-01

    The OpenMDAO project is underway at NASA to develop a framework which simplifies the implementation of state-of-the-art tools and methods for multidisciplinary design, analysis and optimization. Foremost, OpenMDAO has been designed to handle variable problem formulations, encourage reconfigurability, and promote model reuse. This work demonstrates the concept of iteration hierarchies in OpenMDAO to achieve a flexible environment for supporting advanced optimization methods which include adaptive sampling and surrogate modeling techniques. In this effort, two efficient global optimization methods were applied to solve a constrained, single-objective and constrained, multiobjective version of a joint aircraft/engine sizing problem. The aircraft model, NASA's next-generation advanced single-aisle civil transport, is being studied as part of the Subsonic Fixed Wing project to help meet simultaneous program goals for reduced fuel burn, emissions, and noise. This analysis serves as a realistic test problem to demonstrate the flexibility and reconfigurability offered by OpenMDAO.

  8. [Optimization of Formulation and Process of Paclitaxel PEGylated Liposomes by Box-Behnken Response Surface Methodology].

    PubMed

    Shi, Ya-jun; Zhang, Xiao-feil; Guo, Qiu-ting

    2015-12-01

    To develop a procedure for preparing paclitaxel-encapsulated PEGylated liposomes. The membrane hydration followed by extraction method was used to prepare the PEGylated liposomes. The process and formulation variables were optimized by a Box-Behnken Design (BBD) of response surface methodology (RSM): for the formulation variables, the amounts of soya phosphatidylcholine (SPC) and PEG2000-DSPE as well as the ratio of SPC to drug were the independent variables and the entrapment efficiency was the dependent variable, while for the process variables, temperature, pressure and cycle times were the independent variables and particle size and polydispersity index were the dependent variables. The optimized liposomal formulation was characterized for particle size, zeta potential, morphology and in vitro drug release. The entrapment efficiency, particle size, polydispersity index, zeta potential, and in vitro drug release of the PEGylated liposomes were found to be 80.3%, (97.15 ± 14.9) nm, 0.117 ± 0.019, (-30.3 ± 3.7) mV, and 37.4% in 24 h, respectively. The liposomes were found to be small, unilamellar and spherical with a smooth surface as seen by transmission electron microscopy. The Box-Behnken response surface methodology facilitates the formulation and optimization of paclitaxel PEGylated liposomes.

  9. Layout optimization using the homogenization method

    NASA Technical Reports Server (NTRS)

    Suzuki, Katsuyuki; Kikuchi, Noboru

    1993-01-01

    A generalized layout problem involving sizing, shape, and topology optimization is solved by using the homogenization method for three-dimensional linearly elastic shell structures in order to seek a possibility of establishment of an integrated design system of automotive car bodies, as an extension of the previous work by Bendsoe and Kikuchi. A formulation of a three-dimensional homogenized shell, a solution algorithm, and several examples of computing the optimum layout are presented in this first part of the two articles.

  10. A new experimental design method to optimize formulations focusing on a lubricant for hydrophilic matrix tablets.

    PubMed

    Choi, Du Hyung; Shin, Sangmun; Khoa Viet Truong, Nguyen; Jeong, Seong Hoon

    2012-09-01

    A robust experimental design method was developed with the well-established response surface methodology and time series modeling to facilitate the formulation development process with magnesium stearate incorporated into hydrophilic matrix tablets. Two directional analyses and a time-oriented model were utilized to optimize the experimental responses. Evaluations of tablet gelation and drug release were conducted with two factors x₁ and x₂: a formulation factor (the amount of magnesium stearate) and a processing factor (mixing time), respectively. Moreover, different batch sizes (100 and 500 tablet batches) were also evaluated to investigate the effect of batch size. The selected input control factors were arranged in a mixture simplex lattice design with 13 experimental runs. The obtained optimal settings of magnesium stearate for gelation were 0.46 g with 2.76 min mixing time for a 100 tablet batch and 1.54 g with 6.51 min for a 500 tablet batch. The optimal settings for drug release were 0.33 g with 7.99 min for a 100 tablet batch and 1.54 g with 6.51 min for a 500 tablet batch. The exact ratio and mixing time of magnesium stearate could be formulated according to the resulting hydrophilic matrix tablet properties. The newly designed experimental method provided very useful information for characterizing the significant factors and hence for obtaining optimum formulations, allowing for a systematic and reliable experimental design method.

  11. Application of Box-Behnken design to prepare gentamicin-loaded calcium carbonate nanoparticles.

    PubMed

    Maleki Dizaj, Solmaz; Lotfipour, Farzaneh; Barzegar-Jalali, Mohammad; Zarrintan, Mohammad-Hossein; Adibkia, Khosro

    2016-09-01

    The aim of this research was to prepare and optimize calcium carbonate (CaCO3) nanoparticles as carriers for gentamicin sulfate. A chemical precipitation method was used to prepare the gentamicin sulfate-loaded CaCO3 nanoparticles. A 3-factor, 3-level Box-Behnken design was used for the optimization procedure, with the molar ratio of CaCl2:Na2CO3 (X1), the concentration of drug (X2), and the speed of homogenization (X3) as the independent variables. The particle size and entrapment efficiency were considered as response variables. Mathematical equations and response surface plots were used, along with the contour plots, to relate the dependent and independent variables. The results indicated that the speed of homogenization was the main variable contributing to particle size and entrapment efficiency. The combined effect of all three independent variables was also evaluated. Using the response optimization design, the optimized X1-X3 levels were predicted. An optimized formulation was then prepared according to these levels, resulting in a particle size of 80.23 nm and an entrapment efficiency of 30.80%. It was concluded that the chemical precipitation technique, together with the Box-Behnken experimental design methodology, could be successfully used to optimize the formulation of drug-incorporated calcium carbonate nanoparticles.

  12. Using pilot data to size a two-arm randomized trial to find a nearly optimal personalized treatment strategy.

    PubMed

    Laber, Eric B; Zhao, Ying-Qi; Regh, Todd; Davidian, Marie; Tsiatis, Anastasios; Stanford, Joseph B; Zeng, Donglin; Song, Rui; Kosorok, Michael R

    2016-04-15

    A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is a growing interest in conducting randomized clinical trials when one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single stage personalized treatment strategy. The proposed method is based on inverting a plugin projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. The proposed method requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness. Copyright © 2015 John Wiley & Sons, Ltd.

  13. Point Cloud Oriented Shoulder Line Extraction in Loess Hilly Area

    NASA Astrophysics Data System (ADS)

    Min, Li; Xin, Yang; Liyang, Xiong

    2016-06-01

    The shoulder line is a significant terrain line in the hilly area of the Loess Plateau in China, dividing the surface into positive and negative terrain (P-N terrains). Because the point cloud vegetation removal methods for P-N terrains differ, shoulder line extraction is an imperative first step. In this paper, we propose an automatic shoulder line extraction method based on point clouds. The workflow is as follows: (i) ground points are selected using a grid filter in order to remove most noisy points; (ii) based on a DEM interpolated from those ground points, slope is mapped and classified into two classes (P-N terrains) using the Natural Breaks classification method; (iii) the common boundary between the two slope classes is extracted as a shoulder line candidate; (iv) the filter grid size is adjusted and steps i-iii are repeated until the shoulder line candidate matches its real location; (v) the shoulder line of the whole area is generated. The test area is located in Madigou, Jingbian County of Shaanxi Province, China. A total of 600 million points were acquired in the 0.23 km2 test area using a Riegl VZ400 3D laser scanner in August 2014. Due to limited computing performance, the test area was divided into 60 blocks, and 13 of them around the shoulder line were selected for optimizing the filter grid size. The experimental results show that the optimal filter grid size varies across sample areas, and that a power-function relation exists between filter grid size and point density. The optimal grid size was determined from this relation and the shoulder lines of the 60 blocks were then extracted. Compared with manual interpretation, the overall accuracy reaches 85%. This method can be applied to shoulder line extraction in hilly areas, which is crucial for point cloud denoising and high-accuracy DEM generation.
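
    Step (i) of the workflow, the grid filter, can be sketched as keeping the lowest point in each cell of a square grid, with the cell size playing the role of the filter grid size that the workflow tunes. The point cloud below is synthetic, and the lowest-point rule is a simplification of whatever filter the authors actually used.

    ```python
    import numpy as np

    def lowest_point_grid_filter(points, cell_size):
        """Keep the lowest point (minimum z) in each cell of a square grid.
        points: (N, 3) array of x, y, z; cell_size: grid size in metres."""
        ij = np.floor(points[:, :2] / cell_size).astype(np.int64)
        keys = ij[:, 0] * 10**9 + ij[:, 1]          # pack cell indices into one key
        order = np.lexsort((points[:, 2], keys))    # sort by cell, then by z
        sorted_keys = keys[order]
        first_in_cell = np.ones(len(points), dtype=bool)
        first_in_cell[1:] = sorted_keys[1:] != sorted_keys[:-1]
        return points[order[first_in_cell]]

    # Synthetic cloud: gently sloping ground plus random "vegetation" points above it
    rng = np.random.default_rng(0)
    xy = rng.uniform(0, 100, size=(50_000, 2))
    ground_z = 0.05 * xy[:, 0] + rng.normal(0, 0.02, len(xy))
    veg_z = ground_z + rng.exponential(1.5, len(xy)) * (rng.random(len(xy)) < 0.3)
    cloud = np.column_stack([xy, veg_z])

    ground = lowest_point_grid_filter(cloud, cell_size=2.0)
    print(f"{len(ground)} ground candidates kept from {len(cloud)} points")
    ```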

  14. Near Hartree-Fock quality GTO basis sets for the second-row atoms

    NASA Technical Reports Server (NTRS)

    Partridge, Harry

    1987-01-01

    Energy optimized, near Hartree-Fock quality Gaussian basis sets ranging in size from (17s12p) to (20s15p) are presented for the ground states of the second-row atoms for Na(2P), Na(+), Na(-), Mg(3P), P(-), S(-), and Cl(-). In addition, optimized supplementary functions are given for the ground state basis sets to describe the negative ions, and the excited Na(2P) and Mg(3P) atomic states. The ratios of successive orbital exponents describing the inner part of the 1s and 2p orbitals are found to be nearly independent of both nuclear charge and basis set size. This provides a method of obtaining good starting estimates for other basis set optimizations.

  15. Study of vesicle size distribution dependence on pH value based on nanopore resistive pulse method

    NASA Astrophysics Data System (ADS)

    Lin, Yuqing; Rudzevich, Yauheni; Wearne, Adam; Lumpkin, Daniel; Morales, Joselyn; Nemec, Kathleen; Tatulian, Suren; Lupan, Oleg; Chow, Lee

    2013-03-01

    Vesicles are low-micron to sub-micron spheres formed by a lipid bilayer shell and serve as potential vehicles for drug delivery. Vesicle size is proposed to be one of the instrumental variables affecting delivery efficiency, since it is correlated with factors such as circulation and residence time in blood, the rate of cell endocytosis, and the efficiency of cell targeting. In this work, we demonstrate accessible and reliable detection and size distribution measurement using a glass nanopore device based on the resistive pulse method. This method enables us to investigate the dependence of the vesicle size distribution on the pH difference across the membrane, with very small sample volumes and at high speed. This provides useful information for optimizing the efficiency of drug delivery in a pH-sensitive environment.

  16. Shape Optimization of Supersonic Turbines Using Response Surface and Neural Network Methods

    NASA Technical Reports Server (NTRS)

    Papila, Nilay; Shyy, Wei; Griffin, Lisa W.; Dorney, Daniel J.

    2001-01-01

    Turbine performance directly affects engine specific impulse, thrust-to-weight ratio, and cost in a rocket propulsion system. A global optimization framework combining the radial basis neural network (RBNN) and the polynomial-based response surface method (RSM) is constructed for shape optimization of a supersonic turbine. Based on the optimized preliminary design, shape optimization is performed for the first vane and blade of a 2-stage supersonic turbine, involving O(10) design variables. The design of experiment approach is adopted to reduce the data size needed by the optimization task. It is demonstrated that a major merit of the global optimization approach is that it enables one to adaptively revise the design space to perform multiple optimization cycles. This benefit is realized when an optimal design approaches the boundary of a pre-defined design space. Furthermore, by inspecting the influence of each design variable, one can also gain insight into the existence of multiple design choices and select the optimum design based on other factors such as stress and materials considerations.

  17. A structural design decomposition method utilizing substructuring

    NASA Technical Reports Server (NTRS)

    Scotti, Stephen J.

    1994-01-01

    A new method of design decomposition for structural analysis and optimization is described. For this method, the structure is divided into substructures where each substructure has its structural response described by a structural-response subproblem, and its structural sizing determined from a structural-sizing subproblem. The structural responses of substructures that have rigid body modes when separated from the remainder of the structure are further decomposed into displacements that have no rigid body components, and a set of rigid body modes. The structural-response subproblems are linked together through forces determined within a structural-sizing coordination subproblem which also determines the magnitude of any rigid body displacements. Structural-sizing subproblems having constraints local to the substructures are linked together through penalty terms that are determined by a structural-sizing coordination subproblem. All the substructure structural-response subproblems are totally decoupled from each other, as are all the substructure structural-sizing subproblems, thus there is significant potential for use of parallel solution methods for these subproblems.

  18. Research on droplet size measurement of impulse antiriots water cannon based on sheet laser

    NASA Astrophysics Data System (ADS)

    Fa-dong, Zhao; Hong-wei, Zhuang; Ren-jun, Zhan

    2014-04-01

    The impulse anti-riot water cannon is a new type of counter-personnel non-lethal weapon, and its unsteady behavior and large water mist field make it difficult to measure the droplet size distribution, which is the most important index for examining its tactical and technical performance. A method based on particle scattering, sheet laser imaging and high-speed image processing was proposed, and a universal droplet size measuring algorithm was designed and verified. Using this method, the droplet size distribution was measured. The measured size distributions at the same position with different timescales, at the same axial distance with different radial distances, and at the same radial distance with different axial distances were analyzed qualitatively, and plausible explanations were presented. The droplet size measuring method proposed in this article provides a scientific and effective experimental means to ascertain the technical and tactical performance and to optimize the relevant system performance.

  19. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.

  20. Optimal Siting and Sizing of Multiple DG Units for the Enhancement of Voltage Profile and Loss Minimization in Transmission Systems Using Nature Inspired Algorithms

    PubMed Central

    Ramamoorthy, Ambika; Ramachandran, Rajeswari

    2016-01-01

    The power grid is becoming smarter with technological development. The benefits of the smart grid can be enhanced through the integration of renewable energy sources. In this paper, several studies have been made to reconfigure a conventional network into a smart grid. Among all the renewable sources, solar power takes the prominent position due to its availability in abundance. The methodology presented in this paper aims at minimizing network power losses and at improving voltage stability within the framework of system operation and security constraints in a transmission system. The locations and capacities of DGs have a significant impact on the system losses in a transmission system. In this paper, combined nature inspired algorithms are presented for optimal location and sizing of DGs. This paper proposes a two-step optimization technique in order to integrate DG. In the first step, the best size of DG is determined through PSO metaheuristics and the results obtained through PSO are tested for reverse power flow by a negative load approach to find possible bus locations. Then, the optimal location is found by the Loss Sensitivity Factor (LSF) and weak (WK) bus methods and the results are compared. In the second step, the optimal sizing of DGs is determined by PSO, GSA, and hybrid PSOGSA algorithms. Apart from optimal sizing and siting of DGs, different scenarios with numbers of DGs (3, 4, and 5) and PQ capacities of DGs (P alone, Q alone, and P and Q both) are also analyzed and the results are discussed in this paper. A detailed performance analysis is carried out on the IEEE 30-bus system to demonstrate the effectiveness of the proposed methodology. PMID:27057557

  1. Optimal Siting and Sizing of Multiple DG Units for the Enhancement of Voltage Profile and Loss Minimization in Transmission Systems Using Nature Inspired Algorithms.

    PubMed

    Ramamoorthy, Ambika; Ramachandran, Rajeswari

    2016-01-01

    The power grid is becoming smarter with technological development. The benefits of the smart grid can be enhanced through the integration of renewable energy sources. In this paper, several studies have been made to reconfigure a conventional network into a smart grid. Among all the renewable sources, solar power takes the prominent position due to its availability in abundance. The methodology presented in this paper aims at minimizing network power losses and at improving voltage stability within the framework of system operation and security constraints in a transmission system. The locations and capacities of DGs have a significant impact on the system losses in a transmission system. In this paper, combined nature inspired algorithms are presented for optimal location and sizing of DGs. This paper proposes a two-step optimization technique in order to integrate DG. In the first step, the best size of DG is determined through PSO metaheuristics and the results obtained through PSO are tested for reverse power flow by a negative load approach to find possible bus locations. Then, the optimal location is found by the Loss Sensitivity Factor (LSF) and weak (WK) bus methods and the results are compared. In the second step, the optimal sizing of DGs is determined by PSO, GSA, and hybrid PSOGSA algorithms. Apart from optimal sizing and siting of DGs, different scenarios with numbers of DGs (3, 4, and 5) and PQ capacities of DGs (P alone, Q alone, and P and Q both) are also analyzed and the results are discussed in this paper. A detailed performance analysis is carried out on the IEEE 30-bus system to demonstrate the effectiveness of the proposed methodology.

  2. Elimination of Bimodal Size in InAs/GaAs Quantum Dots for Preparation of 1.3-μm Quantum Dot Lasers

    NASA Astrophysics Data System (ADS)

    Su, Xiang-Bin; Ding, Ying; Ma, Ben; Zhang, Ke-Lu; Chen, Ze-Sheng; Li, Jing-Lun; Cui, Xiao-Ran; Xu, Ying-Qiang; Ni, Hai-Qiao; Niu, Zhi-Chuan

    2018-02-01

    The device characteristics of semiconductor quantum dot lasers have been improved with progress in active layer structures. Self-assembled InAs quantum dots grown on GaAs have been intensively pursued in order to achieve quantum dot lasers with superior device performance. In the process of growing high-density InAs/GaAs quantum dots, a bimodal size distribution occurs due to the large lattice mismatch and other factors. The bimodal size distribution in the InAs/GaAs quantum dot system is eliminated by high-temperature annealing, and the in situ annealing temperature is optimized. The annealing temperature is taken as the key optimization parameter, and an optimal annealing temperature of 680 °C was obtained. In this process, the quantum dot growth temperature, InAs deposition, and arsenic (As) pressure are optimized to improve quantum dot quality and emission wavelength. A 1.3-μm high-performance F-P quantum dot laser with a threshold current density of 110 A/cm2 was demonstrated.

  3. Elimination of Bimodal Size in InAs/GaAs Quantum Dots for Preparation of 1.3-μm Quantum Dot Lasers.

    PubMed

    Su, Xiang-Bin; Ding, Ying; Ma, Ben; Zhang, Ke-Lu; Chen, Ze-Sheng; Li, Jing-Lun; Cui, Xiao-Ran; Xu, Ying-Qiang; Ni, Hai-Qiao; Niu, Zhi-Chuan

    2018-02-21

    The device characteristics of semiconductor quantum dot lasers have been improved with progress in active layer structures. Self-assembled InAs quantum dots grown on GaAs have been intensively pursued in order to achieve quantum dot lasers with superior device performance. In the process of growing high-density InAs/GaAs quantum dots, a bimodal size distribution occurs due to the large lattice mismatch and other factors. The bimodal size distribution in the InAs/GaAs quantum dot system is eliminated by high-temperature annealing, and the in situ annealing temperature is optimized. The annealing temperature is taken as the key optimization parameter, and an optimal annealing temperature of 680 °C was obtained. In this process, the quantum dot growth temperature, InAs deposition, and arsenic (As) pressure are optimized to improve quantum dot quality and emission wavelength. A 1.3-μm high-performance F-P quantum dot laser with a threshold current density of 110 A/cm2 was demonstrated.

  4. Adaptive Spot Detection With Optimal Scale Selection in Fluorescence Microscopy Images.

    PubMed

    Basset, Antoine; Boulanger, Jérôme; Salamero, Jean; Bouthemy, Patrick; Kervrann, Charles

    2015-11-01

    Accurately detecting subcellular particles in fluorescence microscopy is of primary interest for further quantitative analysis such as counting, tracking, or classification. Our primary goal is to segment vesicles likely to share nearly the same size in fluorescence microscopy images. Our method termed adaptive thresholding of Laplacian of Gaussian (LoG) images with autoselected scale (ATLAS) automatically selects the optimal scale corresponding to the most frequent spot size in the image. Four criteria are proposed and compared to determine the optimal scale in a scale-space framework. Then, the segmentation stage amounts to thresholding the LoG of the intensity image. In contrast to other methods, the threshold is locally adapted given a probability of false alarm (PFA) specified by the user for the whole set of images to be processed. The local threshold is automatically derived from the PFA value and local image statistics estimated in a window whose size is not a critical parameter. We also propose a new data set for benchmarking, consisting of six collections of one hundred images each, which exploits backgrounds extracted from real microscopy images. We have carried out an extensive comparative evaluation on several data sets with ground-truth, which demonstrates that ATLAS outperforms existing methods. ATLAS does not need any fine parameter tuning and requires very low computation time. Convincing results are also reported on real total internal reflection fluorescence microscopy images.
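
    The core LoG-and-threshold idea can be sketched as below: filter a synthetic spot image with a Laplacian of Gaussian at a scale matched to the expected spot size and threshold the response. The automatic scale selection and the locally adapted, PFA-driven threshold of ATLAS are replaced here by a fixed scale and a simple global threshold.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_laplace, label

    rng = np.random.default_rng(0)

    # Synthetic fluorescence-like image: Gaussian spots of radius ~2 px on a noisy background
    img = rng.normal(100.0, 3.0, size=(256, 256))
    yy, xx = np.mgrid[0:256, 0:256]
    for cy, cx in rng.integers(10, 246, size=(40, 2)):
        img += 30.0 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 2.0 ** 2))

    sigma = 2.0                                    # scale matched to the expected spot size
    log_img = -gaussian_laplace(img, sigma=sigma)  # bright blobs become positive peaks

    # Global threshold as a simple stand-in for ATLAS's locally adapted, PFA-driven threshold
    thr = log_img.mean() + 3.0 * log_img.std()
    mask = log_img > thr
    labels, n_spots = label(mask)
    print(f"detected {n_spots} spots (40 were simulated)")
    ```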

  5. Influence of item distribution pattern and abundance on efficiency of benthic core sampling

    USGS Publications Warehouse

    Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.

    2014-01-01

    Core sampling is a commonly used method to estimate benthic item density, but little information exists about the factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm2), distribution of benthic items, and item density affected the bias and precision of density estimates, the detection probability of items, and the time-costs. When items were distributed randomly rather than clumped, bias decreased and precision increased with increasing sample size, and precision increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m2). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small-diameter core samples was always more time-efficient than taking fewer large-diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
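
    A hedged sketch of this kind of simulation, assuming NumPy, is shown below: benthic items are scattered randomly or in clumps over a square plot, circular core samplers are placed at random, and the bias and spread of the resulting density estimates are computed. The plot dimensions, densities, clump settings, and replicate counts are illustrative, not the values used in the study.

        import numpy as np

        rng = np.random.default_rng(1)

        def simulate(true_density=1000, plot_side=2.0, core_area_cm2=50.0,
                     n_cores=30, clumped=False, n_reps=200):
            core_r = np.sqrt(core_area_cm2 / 1e4 / np.pi)   # core radius in metres
            n_items = int(true_density * plot_side ** 2)
            estimates = []
            for _ in range(n_reps):
                if clumped:
                    centers = rng.uniform(0, plot_side, size=(20, 2))
                    pts = centers[rng.integers(0, 20, n_items)] + rng.normal(0, 0.2, (n_items, 2))
                    pts %= plot_side
                else:
                    pts = rng.uniform(0, plot_side, size=(n_items, 2))
                cores = rng.uniform(core_r, plot_side - core_r, size=(n_cores, 2))
                counts = [(np.hypot(*(pts - c).T) <= core_r).sum() for c in cores]
                estimates.append(np.mean(counts) / (np.pi * core_r ** 2))
            estimates = np.array(estimates)
            return estimates.mean() - true_density, estimates.std()   # bias, spread

        print(simulate(clumped=False), simulate(clumped=True))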

  6. Optimal methods for measuring eligibility for liver transplant in hepatocellular carcinoma patients undergoing transarterial chemoembolization.

    PubMed

    Kim, Hyung-Don; Shim, Ju Hyun; Kim, Gi-Ae; Shin, Yong Moon; Yu, Eunsil; Lee, Sung-Gyu; Lee, Danbi; Kim, Kang Mo; Lim, Young-Suk; Lee, Han Chu; Chung, Young-Hwa; Lee, Yung Sang

    2015-05-01

    We investigated the optimal radiologic method for measuring hepatocellular carcinoma (HCC) treated by transarterial chemoembolization (TACE) in order to assess suitability for liver transplantation (LT). A total of 271 HCC patients undergoing TACE prior to LT were classified according to both the Milan and up-to-seven criteria after TACE by using the enhancement or size method on computed tomography images. Cumulative incidence function curves with competing risks regression were used in the post-LT time-to-recurrence analysis. The predictive accuracy for recurrence was compared using area under the time-dependent receiver operating characteristic curve (AUC) estimation. Of the 271 patients, 246 (90.8%) and 164 (60.5%) fell within the Milan criteria, and 252 (93.0%) and 210 (77.5%) fell within the up-to-seven criteria, when assessed by the enhancement and size methods, respectively. Competing risks regression analyses adjusting for covariates indicated that meeting the criteria by the enhancement and by the size methods was independently related to post-LT time-to-recurrence in both the Milan and up-to-seven models. Higher AUC values were observed with the size method only in the up-to-seven model (p<0.05). Mean differences in the sum of tumor diameters and the number of tumors between pathologic and radiologic findings were significantly smaller with the enhancement method (p<0.05). Cumulative incidence curves showed similar recurrence results between patients with and without prior TACE within the criteria based on either method, except for within up-to-seven by the enhancement method (p=0.017). The enhancement method is a reliable tool for assessing the control or downstaging of HCC within the Milan criteria after TACE, although the size method may be preferable when applying the up-to-seven criterion. Copyright © 2014 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.

  7. AUTOMATIC GENERATION OF FFT FOR TRANSLATIONS OF MULTIPOLE EXPANSIONS IN SPHERICAL HARMONICS

    PubMed Central

    Mirkovic, Dragan; Pettitt, B. Montgomery; Johnsson, S. Lennart

    2009-01-01

    The fast multipole method (FMM) is an efficient algorithm for calculating electrostatic interactions in molecular simulations and a promising alternative to Ewald summation methods. Translation of multipole expansions in spherical harmonics is the most important operation of the fast multipole method, and fast Fourier transform (FFT) acceleration of this operation is among the fastest methods of improving its performance. The technique relies on highly optimized implementations of fast Fourier transform routines for the desired expansion sizes, which need to incorporate knowledge of the symmetries and zero elements in the input arrays. Here a method is presented for the automatic generation of such highly optimized routines. PMID:19763233

  8. Optimal Sizing and Placement of Battery Energy Storage in Distribution System Based on Solar Size for Voltage Regulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nazaripouya, Hamidreza; Wang, Yubo; Chu, Peter

    2016-07-26

    This paper proposes a new strategy to achieve voltage regulation in distributed power systems in the presence of solar energy sources and battery storage systems. The goal is to find the minimum size of battery storage and its corresponding location in the network based on the size and place of the integrated solar generation. The proposed method formulates the problem by employing the network impedance matrix to obtain an analytical solution instead of using a recursive algorithm such as power flow. The required modifications for modeling the slack and PV buses (generator buses) are utilized to increase the accuracy of the approach. Using reactive power control alone to regulate voltage is not always an optimal solution, because the R/X ratio is large in distribution systems. In this paper, the minimum size and the best place of battery storage are achieved by optimizing the amount of both active and reactive power exchanged by the battery storage and its grid-tie inverter (GTI) based on the network topology and R/X ratios in the distribution system. Simulation results for the IEEE 14-bus system verify the effectiveness of the proposed approach.
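
    The core idea of using the bus impedance matrix instead of repeated power-flow solutions can be sketched as below, assuming NumPy; the 3-bus Zbus, the solar-depressed voltage profile, and the candidate injection sizes are invented for illustration, and only a single active-power-like injection is considered rather than the joint active/reactive optimization of the paper.

        import numpy as np

        Zbus = np.array([[0.10, 0.04, 0.03],
                         [0.04, 0.12, 0.05],
                         [0.03, 0.05, 0.15]])          # p.u., assumed network
        v_base = np.array([1.00, 0.93, 0.92])          # p.u. voltages with solar, assumed

        def smallest_battery(v_min=0.95):
            """Brute-force search for the bus and smallest injection meeting v_min."""
            best = None
            for bus in range(3):
                for size in np.arange(0.0, 1.01, 0.01):    # injected current, p.u.
                    dI = np.zeros(3)
                    dI[bus] = size
                    # Linearized voltage update: dV ≈ Zbus @ dI
                    if np.all(v_base + Zbus @ dI >= v_min):
                        if best is None or size < best[1]:
                            best = (bus, size)
                        break
            return best   # (bus index, minimum injection) satisfying the voltage limit

        print(smallest_battery())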

  9. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks

    PubMed Central

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost of collecting flow information in Software Defined Networks (SDN). Since the flow-based information collecting method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, jointly optimizing flow routing and polling switch selection is proposed to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable for large networks, we also design an optimal algorithm for multi-rooted tree topologies and an efficient heuristic algorithm for general topologies. According to extensive simulations, our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme. PMID:26690571

  10. Application of rotatable central composite design in the preparation and optimization of poly(lactic-co-glycolic acid) nanoparticles for controlled delivery of paclitaxel.

    PubMed

    Kollipara, Sivacharan; Bende, Girish; Movva, Snehalatha; Saha, Ranendra

    2010-11-01

    Polymeric carrier systems of paclitaxel (PCT) offer advantages over the only available formulation, Taxol®, in terms of enhancing therapeutic efficacy and eliminating adverse effects. The objective of the present study was to prepare poly(lactic-co-glycolic acid) nanoparticles containing PCT using an emulsion solvent evaporation technique. Critical factors involved in the processing method were identified and optimized by a scientific, efficient rotatable central composite design aiming at low mean particle size and high entrapment efficiency. Twenty different experiments were designed and each formulation was evaluated for mean particle size and entrapment efficiency. The optimized formulation was evaluated for in vitro drug release, and absorption characteristics were studied using an in situ rat intestinal permeability study. The amount of polymer and the duration of ultrasonication were found to have a significant effect on mean particle size and entrapment efficiency. First-order interactions of the amount of miglyol with the amount of polymer were significant in the case of mean particle size, whereas second-order interactions of polymer were significant for both mean particle size and entrapment efficiency. The developed quadratic model showed high correlation (R2 > 0.85) between the predicted response and the studied factors. The optimized formulation had a low mean particle size (231.68 nm) and high entrapment efficiency (95.18%) with 4.88% drug content. The optimized formulation showed controlled release of PCT for more than 72 hours. The in situ absorption study showed faster and enhanced absorption of PCT from nanoparticles compared to the pure drug. The poly(lactic-co-glycolic acid) nanoparticles containing PCT may be of clinical importance in enhancing its oral bioavailability.
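
    A short sketch of constructing a rotatable central composite design and fitting a second-order model, assuming NumPy, is given below; it uses only two coded factors (the study varied more) and a simulated response, so the run count, responses, and coefficients are purely illustrative.

        import numpy as np
        from itertools import product

        k = 2
        alpha = (2 ** k) ** 0.25                        # rotatability condition for axial points
        factorial = np.array(list(product([-1, 1], repeat=k)), float)
        axial = np.vstack([alpha * np.eye(k), -alpha * np.eye(k)])
        center = np.zeros((5, k))                       # replicated centre points
        X = np.vstack([factorial, axial, center])       # coded design matrix (13 runs)

        # Hypothetical responses; in practice these come from the experiments.
        rng = np.random.default_rng(0)
        y = (90 - 3 * X[:, 0] + 2 * X[:, 1]
             - 4 * X[:, 0] ** 2 - 2 * X[:, 1] ** 2
             + rng.normal(0, 0.5, len(X)))

        # Fit the full second-order model: b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
        A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                             X[:, 0] * X[:, 1], X[:, 0] ** 2, X[:, 1] ** 2])
        coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
        print(coeffs)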

  11. Simulations of Scatterometry Down to 22 nm Structure Sizes and Beyond with Special Emphasis on LER

    NASA Astrophysics Data System (ADS)

    Osten, W.; Ferreras Paz, V.; Frenner, K.; Schuster, T.; Bloess, H.

    2009-09-01

    In recent years, scatterometry has become one of the most commonly used methods for CD metrology. With decreasing structure size for future technology nodes, the search for optimized scatterometry measurement configurations becomes more important in order to exploit maximum sensitivity. As widespread industrial scatterometry tools mainly still use a pre-set measurement configuration, there are still free parameters available to improve sensitivity. Our current work uses a simulation-based approach to predict and optimize the sensitivity for future technology nodes. Since line edge roughness becomes important for such small structures, these imperfections of the periodic continuation cannot be neglected. Using Fourier methods, e.g., the rigorous coupled wave approach (RCWA), for the diffraction calculus, nonperiodic features are hard to reach. We show that in this field certain types of field-stitching methods show favourable numerical behaviour and lead to useful results.

  12. Monte Carlo Study on Carbon-Gradient-Doped Silica Aerogel Insulation.

    PubMed

    Zhao, Y; Tang, G H

    2015-04-01

    Silica aerogel is almost transparent for wavelengths below 8 µm where significant energy is transferred by thermal radiation. The radiative heat transfer can be restricted at high temperature if doped with carbon powder in silica aerogel. However, different particle sizes of carbon powder doping have different spectral extinction coefficients and the doped carbon powder will increase the solid conduction of silica aerogel. This paper presents a theoretical method for determining the optimal carbon doping in silica aerogel to minimize the energy transfer. Firstly we determine the optimal particle size by combining the spectral extinction coefficient with blackbody radiation and then evaluate the optimal doping amount between heat conduction and radiation. Secondly we develop the Monte Carlo numerical method to study radiative properties of carbon-gradient-doped silica aerogel to decrease the radiative heat transfer further. The results indicate that the carbon powder is able to block infrared radiation and thus improve the thermal insulating performance of silica aerogel effectively.
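
    A minimal Monte Carlo sketch of photon transport through an absorbing, isotropically scattering slab illustrates the flavor of such a calculation; the extinction coefficient, scattering albedo, slab thickness, and photon count below are placeholders, and the spectral and gradient-doping aspects of the paper are not modeled.

        import numpy as np

        rng = np.random.default_rng(2)

        def slab_transmittance(beta=1000.0, albedo=0.4, thickness=0.01, n_photons=50_000):
            """Fraction of photons crossing a 1-D absorbing/scattering slab."""
            transmitted = 0
            for _ in range(n_photons):
                x, mu = 0.0, 1.0                              # position (m), direction cosine
                while True:
                    # Sample a free path from the extinction coefficient.
                    x += -np.log(1.0 - rng.random()) / beta * mu
                    if x >= thickness:
                        transmitted += 1
                        break
                    if x < 0.0:
                        break                                 # back-scattered out of the slab
                    if rng.random() > albedo:
                        break                                 # absorbed
                    mu = 2.0 * rng.random() - 1.0             # isotropic scattering
            return transmitted / n_photons

        print(slab_transmittance())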

  13. Structural optimization for joined-wing synthesis

    NASA Technical Reports Server (NTRS)

    Gallman, John W.; Kroo, Ilan M.

    1992-01-01

    The differences between fully stressed and minimum-weight joined-wing structures are identified, and these differences are quantified in terms of weight, stress, and direct operating cost. A numerical optimization method and a fully stressed design method are used to design joined-wing structures. Both methods determine the sizes of 204 structural members, satisfying 1020 stress constraints and five buckling constraints. Monotonic splines are shown to be a very effective way of linking spanwise distributions of material to a few design variables. Both linear and nonlinear analyses are employed to formulate the buckling constraints. With a constraint on buckling, the fully stressed design is shown to be very similar to the minimum-weight structure. It is suggested that a fully stressed design method based on nonlinear analysis is adequate for an aircraft optimization study.
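
    A sketch of the fully stressed design resizing rule referred to above is shown here, assuming NumPy; analyse() is a hypothetical stand-in for a finite-element analysis that returns member stresses for given member areas, and the buckling constraints discussed in the paper are not included.

        import numpy as np

        def fully_stressed_design(areas, analyse, sigma_allow, a_min=1e-6, n_iter=20, tol=1e-3):
            """Iteratively scale each member area by its stress ratio until convergence."""
            areas = np.asarray(areas, float)
            for _ in range(n_iter):
                stresses = analyse(areas)                      # member stresses from FEA (hypothetical)
                new_areas = np.maximum(a_min, areas * np.abs(stresses) / sigma_allow)
                if np.max(np.abs(new_areas - areas) / areas) < tol:
                    return new_areas
                areas = new_areas
            return areas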

  14. Is patient size important in dose determination and optimization in cardiology?

    NASA Astrophysics Data System (ADS)

    Reay, J.; Chapple, C. L.; Kotre, C. J.

    2003-12-01

    Patient dose determination and optimization have become more topical in recent years with the implementation of the Medical Exposures Directive into national legislation, the Ionising Radiation (Medical Exposure) Regulations. This legislation incorporates a requirement for new equipment to provide a means of displaying a measure of patient exposure and introduces the concept of diagnostic reference levels. It is normally assumed that patient dose is governed largely by patient size; however, in cardiology, where procedures are often very complex, the significance of patient size is less well understood. This study considers over 9000 cardiology procedures, undertaken throughout the north of England, and investigates the relationship between patient size and dose. It uses simple linear regression to calculate both correlation coefficients and significance levels for data sorted by both room and individual clinician for the four most common examinations: left ventricle and/or coronary angiography, single vessel stent insertion and single vessel angioplasty. This paper concludes that the correlation between patient size and dose is weak for the procedures considered. It also illustrates the use of an existing method for removing the effect of patient size from dose survey data. This allows typical doses and, therefore, reference levels to be defined for the purposes of dose optimization.
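
    The kind of analysis described (simple linear regression of dose against patient size, followed by removal of the size effect) can be sketched as follows, assuming SciPy; the weight and dose-area-product arrays are placeholder data, not survey values.

        import numpy as np
        from scipy import stats

        weight = np.array([62, 70, 85, 91, 77, 68, 102, 59, 88, 74], float)   # kg, assumed
        dap = np.array([21, 25, 34, 40, 28, 22, 47, 18, 36, 27], float)       # Gy.cm^2, assumed

        res = stats.linregress(weight, dap)
        print(f"r = {res.rvalue:.2f}, p = {res.pvalue:.3g}, slope = {res.slope:.2f}")

        # "Size-correcting" the survey data: residuals about the fitted line can be
        # compared across rooms or clinicians independently of patient size.
        size_corrected = dap - (res.intercept + res.slope * weight)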

  15. Sizing of complex structure by the integration of several different optimal design algorithms

    NASA Technical Reports Server (NTRS)

    Sobieszczanski, J.

    1974-01-01

    Practical design of large-scale structures can be accomplished with the aid of the digital computer by bringing together in one computer program algorithms of nonlinear mathematical programming and optimality criteria with weight-strength and other so-called engineering methods. Applications of this approach to aviation structures are discussed with a detailed description of how the total problem of structural sizing can be broken down into subproblems for best utilization of each algorithm and for efficient organization of the program into iterative loops. Typical results are examined for a number of examples.

  16. Enhancement of anti-inflammatory activity of bromelain by its encapsulation in katira gum nanoparticles.

    PubMed

    Bernela, Manju; Ahuja, Munish; Thakur, Rajesh

    2016-06-05

    Bromelain-loaded katira gum nanoparticles were synthesized using 3 level optimization process and desirability approach. Nanoparticles of the optimized batch were characterized using particle size analysis, zeta potential, transmission electron microscopy and Fourier-transform infrared spectroscopy. Investigation of their in vivo anti-inflammatory activity by employing carrageenan induced rat-paw oedema method showed that encapsulation of bromelain in katira gum nanoparticles substantially enhanced its anti-inflammatory potential. This may be attributed to enhanced absorption owing to reduced particle size or to protection of bromelain from acid proteases. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Method of automatic measurement and focus of an electron beam and apparatus therefore

    DOEpatents

    Giedt, W.H.; Campiotti, R.

    1996-01-09

    An electron beam focusing system, including a plural slit-type Faraday beam trap, for measuring the diameter of an electron beam and automatically focusing the beam for welding is disclosed. Beam size is determined from profiles of the current measured as the beam is swept over at least two narrow slits of the beam trap. An automated procedure changes the focus coil current until the focal point location is just below a workpiece surface. A parabolic equation is fitted to the calculated beam sizes from which optimal focus coil current and optimal beam diameter are determined. 12 figs.

  18. Method of automatic measurement and focus of an electron beam and apparatus therefor

    DOEpatents

    Giedt, Warren H.; Campiotti, Richard

    1996-01-01

    An electron beam focusing system, including a plural slit-type Faraday beam trap, for measuring the diameter of an electron beam and automatically focusing the beam for welding. Beam size is determined from profiles of the current measured as the beam is swept over at least two narrow slits of the beam trap. An automated procedure changes the focus coil current until the focal point location is just below a workpiece surface. A parabolic equation is fitted to the calculated beam sizes from which optimal focus coil current and optimal beam diameter are determined.
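
    The parabolic-fit step can be sketched as below, assuming NumPy: beam sizes measured at several focus-coil currents are fitted with a quadratic whose vertex gives the optimal coil current and corresponding minimum beam diameter; the current and beam-size values are illustrative.

        import numpy as np

        coil_current = np.array([440.0, 450.0, 460.0, 470.0, 480.0])   # mA, assumed
        beam_size = np.array([0.92, 0.71, 0.60, 0.68, 0.90])           # mm, assumed

        a, b, c = np.polyfit(coil_current, beam_size, 2)   # fit beam size = a*I^2 + b*I + c
        i_opt = -b / (2.0 * a)                             # vertex of the parabola
        d_opt = np.polyval([a, b, c], i_opt)
        print(f"optimal focus current ~ {i_opt:.1f} mA, minimum beam size ~ {d_opt:.2f} mm")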

  19. A finite difference Davidson procedure to sidestep full ab initio hessian calculation: Application to characterization of stationary points and transition state searches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharada, Shaama Mallikarjun; Bell, Alexis T., E-mail: mhg@bastille.cchem.berkeley.edu, E-mail: bell@cchem.berkeley.edu; Head-Gordon, Martin, E-mail: mhg@bastille.cchem.berkeley.edu, E-mail: bell@cchem.berkeley.edu

    2014-04-28

    The cost of calculating nuclear hessians, either analytically or by finite difference methods, during the course of quantum chemical analyses can be prohibitive for systems containing hundreds of atoms. In many applications, though, only a few eigenvalues and eigenvectors, and not the full hessian, are required. For instance, the lowest one or two eigenvalues of the full hessian are sufficient to characterize a stationary point as a minimum or a transition state (TS), respectively. We describe here a method that can eliminate the need for hessian calculations for both the characterization of stationary points as well as searches for saddle points. A finite differences implementation of the Davidson method that uses only first derivatives of the energy to calculate the lowest eigenvalues and eigenvectors of the hessian is discussed. This method can be implemented in conjunction with geometry optimization methods such as partitioned-rational function optimization (P-RFO) to characterize stationary points on the potential energy surface. With equal ease, it can be combined with interpolation methods that determine TS guess structures, such as the freezing string method, to generate approximate hessian matrices in lieu of full hessians as input to P-RFO for TS optimization. This approach is shown to achieve significant cost savings relative to exact hessian calculation when applied to both stationary point characterization as well as TS optimization. The basic reason is that the present approach scales one power of system size lower since the rate of convergence is approximately independent of the size of the system. Therefore, the finite-difference Davidson method is a viable alternative to full hessian calculation for stationary point characterization and TS search particularly when analytical hessians are not available or require substantial computational effort.
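
    The core idea can be sketched as follows, assuming SciPy: the lowest hessian eigenpair is obtained from gradient-only information by forming finite-difference hessian-vector products and feeding them to an iterative eigensolver (SciPy's eigsh is used here as a stand-in for a hand-written Davidson solver). The grad() argument is any routine returning the analytic gradient, and the quadratic test surface with one negative curvature is invented for illustration.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, eigsh

        def lowest_mode(x0, grad, eps=1e-4):
            n = x0.size
            def hessvec(v):
                v = np.asarray(v, dtype=float).ravel()
                nv = np.linalg.norm(v)
                if nv == 0.0:
                    return np.zeros(n)
                u = v / nv
                # Central finite difference of the gradient along u approximates H @ u.
                hv = (grad(x0 + eps * u) - grad(x0 - eps * u)) / (2.0 * eps)
                return hv * nv
            H = LinearOperator((n, n), matvec=hessvec, dtype=float)
            w, v = eigsh(H, k=1, which='SA')      # smallest algebraic eigenvalue
            return w[0], v[:, 0]

        # Quadratic surface with one negative curvature (a "transition state"):
        A = np.diag([-0.5, 1.0, 2.0])
        val, vec = lowest_mode(np.zeros(3), grad=lambda x: A @ x)
        print(val)   # ~ -0.5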

  20. Automated sizing of large structures by mixed optimization methods

    NASA Technical Reports Server (NTRS)

    Sobieszczanski, J.; Loendorf, D.

    1973-01-01

    A procedure for automating the sizing of wing-fuselage airframes was developed and implemented in the form of an operational program. The program combines fully stressed design to determine an overall material distribution with mass-strength and mathematical programming methods to design structural details accounting for realistic design constraints. The practicality and efficiency of the procedure is demonstrated for transport aircraft configurations. The methodology is sufficiently general to be applicable to other large and complex structures.

  1. Interplanetary Program to Optimize Simulated Trajectories (IPOST). Volume 2: Analytic manual

    NASA Technical Reports Server (NTRS)

    Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.

    1992-01-01

    The Interplanetary Program to Optimize Space Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three degree of freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization is performed using the Stanford NPSOL algorithm. IPOST structure allows subproblems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.

  2. Optimized Periocular Template Selection for Human Recognition

    PubMed Central

    Sa, Pankaj K.; Majhi, Banshidhar

    2013-01-01

    A novel approach is proposed for optimally selecting a rectangular template around the periocular region for human recognition. A template of the periocular image larger than the optimal one can be slightly more potent for recognition, but the larger template heavily slows down the biometric system by making feature extraction computationally intensive and increasing the database size. A smaller template, on the contrary, cannot yield the desired recognition, though it performs faster owing to the lower computation for feature extraction. The proposed research aims to optimize these two contradictory objectives, namely, (a) minimizing the size of the periocular template and (b) maximizing recognition through the template. This paper proposes four different approaches for dynamic optimal template selection from the periocular region. The proposed methods are tested on the publicly available unconstrained UBIRISv2 and FERET databases and satisfactory results have been achieved. The template thus obtained can be used for recognition of individuals in an organization and can be generalized to recognize every citizen of a nation. PMID:23984370

  3. Preparation and Optimization of Vanadium Titanomagnetite Carbon Composite Hot Briquette: A New Type of Blast Furnace Burden

    NASA Astrophysics Data System (ADS)

    Zhao, W.; Wang, H. T.; Liu, Z. G.; Chu, M. S.; Ying, Z. W.; Tang, J.

    2017-10-01

    A new type of blast furnace burden, named VTM-CCB (vanadium titanomagnetite carbon composite hot briquette), is proposed and optimized in this paper. The preparation process of VTM-CCB includes two components, hot briquetting and heat treatment. The hot-briquetting and heat-treatment parameters are systematically optimized based on the Taguchi method and single-factor experiment. The optimized preparation parameters of VTM-CCB include a hot-briquetting temperature of 300°C, a coal particle size of <0.075 mm, a vanadium titanomagnetite particle size of <0.075 mm, a coal-added ratio of 28.52%, a heat-treatment temperature of 500°C and a heat-treatment time of 3 h. The compressive strength of VTM-CCB, based on the optimized parameters, reaches 2450 N, which meets the requirement of blast furnace ironmaking. These integrated parameters provide a theoretical basis for the production and application of a blast furnace smelting VTM-CCB.

  4. Six-Degree-of-Freedom Trajectory Optimization Utilizing a Two-Timescale Collocation Architecture

    NASA Technical Reports Server (NTRS)

    Desai, Prasun N.; Conway, Bruce A.

    2005-01-01

    Six-degree-of-freedom (6DOF) trajectory optimization of a reentry vehicle is solved using a two-timescale collocation methodology. This class of 6DOF trajectory problems is characterized by two distinct timescales in the governing equations, where a subset of the states has high-frequency dynamics (the rotational equations of motion) while the remaining states (the translational equations of motion) vary comparatively slowly. With conventional collocation methods, the 6DOF problem size becomes extraordinarily large and difficult to solve. Utilizing the two-timescale collocation architecture, the problem size is reduced significantly. The converged solution shows a realistic landing profile and captures the appropriate high-frequency rotational dynamics. A large reduction in the overall problem size (by 55%) is attained with the two-timescale architecture as compared to the conventional single-timescale collocation method. Consequently, optimum 6DOF trajectory problems can now be solved efficiently using collocation, which was not previously possible for a system with two distinct timescales in the governing states.

  5. Multi-Parameter Scattering Sensor and Methods

    NASA Technical Reports Server (NTRS)

    Greenberg, Paul S. (Inventor); Fischer, David G. (Inventor)

    2016-01-01

    Methods, detectors and systems detect particles and/or measure particle properties. According to one embodiment, a detector for detecting particles comprises: a sensor for receiving radiation scattered by an ensemble of particles; and a processor for determining a physical parameter for the detector, or an optimal detection angle or a bound for an optimal detection angle, for measuring at least one moment or integrated moment of the ensemble of particles, the physical parameter, or detection angle, or detection angle bound being determined based on one or more of properties (a) and/or (b) and/or (c) and/or (d) or ranges for one or more of properties (a) and/or (b) and/or (c) and/or (d), wherein (a)-(d) are the following: (a) is a wavelength of light incident on the particles, (b) is a count median diameter or other characteristic size parameter of the particle size distribution, (c) is a standard deviation or other characteristic width parameter of the particle size distribution, and (d) is a refractive index of particles.

  6. Influence of the weighing bar size to determine optimal time of biodiesel-glycerol separation by using the buoyancy weighing-bar method

    NASA Astrophysics Data System (ADS)

    Tambun, R.; Sibagariang, Y.; Manurung, J.

    2018-02-01

    The buoyancy weighing-bar method is a novel method for particle size distribution measurement. It can measure the particle size distributions of both settling and floating particles. In this study, the buoyancy weighing-bar method is applied to determine the optimal time of biodiesel-glycerol separation. The method can be applied to determine the separation time because biodiesel and glycerol have different densities. The influence of the weighing-bar diameter on the buoyancy weighing-bar measurement was experimentally investigated. The weighing-bar diameters used in this experiment were 8 mm, 10 mm, 15 mm and 20 mm, with a graduated cylinder (diameter: 65 mm) used as the vessel. The samples used in this experiment were mixtures of 95% biodiesel and 5% glycerol. The data obtained by the buoyancy weighing-bar method were analyzed by gas chromatography to determine the purity of the biodiesel. Based on the data obtained, the buoyancy weighing-bar method can detect the biodiesel-glycerol separation time with weighing-bar diameters of 8 mm, 10 mm, 15 mm and 20 mm, but the most accurate determination of the biodiesel-glycerol separation time is obtained with the 20 mm weighing bar. A biodiesel purity of 97.97% could be detected at 64 minutes by the buoyancy weighing-bar method when the 20 mm weighing-bar diameter is used.

  7. Performance and evaluation of real-time multicomputer control systems

    NASA Technical Reports Server (NTRS)

    Shin, K. G.

    1983-01-01

    New performance measures, detailed examples, modeling of error detection process, performance evaluation of rollback recovery methods, experiments on FTMP, and optimal size of an NMR cluster are discussed.

  8. Local Feature Selection for Data Classification.

    PubMed

    Armanfard, Narges; Reilly, James P; Komeili, Majid

    2016-06-01

    Typical feature selection methods choose an optimal global feature subset that is applied over all regions of the sample space. In contrast, in this paper we propose a novel localized feature selection (LFS) approach whereby each region of the sample space is associated with its own distinct optimized feature set, which may vary both in membership and size across the sample space. This allows the feature set to optimally adapt to local variations in the sample space. An associated method for measuring the similarities of a query datum to each of the respective classes is also proposed. The proposed method makes no assumptions about the underlying structure of the samples; hence the method is insensitive to the distribution of the data over the sample space. The method is efficiently formulated as a linear programming optimization problem. Furthermore, we demonstrate the method is robust against the over-fitting problem. Experimental results on eleven synthetic and real-world data sets demonstrate the viability of the formulation and the effectiveness of the proposed algorithm. In addition we show several examples where localized feature selection produces better results than a global feature selection method.

  9. The Sizing and Optimization Language (SOL): A computer language to improve the user/optimizer interface

    NASA Technical Reports Server (NTRS)

    Lucas, S. H.; Scotti, S. J.

    1989-01-01

    The nonlinear mathematical programming method (formal optimization) has had many applications in engineering design. A figure illustrates the use of optimization techniques in the design process. The design process begins with the design problem, such as the classic example of the two-bar truss designed for minimum weight as seen in the leftmost part of the figure. If formal optimization is to be applied, the design problem must be recast in the form of an optimization problem consisting of an objective function, design variables, and constraint function relations. The middle part of the figure shows the two-bar truss design posed as an optimization problem. The total truss weight is the objective function, the tube diameter and truss height are design variables, with stress and Euler buckling considered as constraint function relations. Lastly, the designer develops or obtains analysis software containing a mathematical model of the object being optimized, and then interfaces the analysis routine with existing optimization software such as CONMIN, ADS, or NPSOL. This final stage of software development can be both tedious and error-prone. The Sizing and Optimization Language (SOL), a special-purpose computer language whose goal is to make the software implementation phase of optimum design easier and less error-prone, is presented.
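
    As a sketch of that final interfacing step, the two-bar truss example can be posed directly for a general-purpose optimizer; SciPy's SLSQP stands in here for CONMIN, ADS, or NPSOL, and the load, material, and geometric values are illustrative rather than taken from the report.

        import numpy as np
        from scipy.optimize import minimize

        P, B, t = 33e3, 30.0, 0.1           # member load (lb), half-span (in), wall thickness (in)
        E, rho, s_allow = 30e6, 0.3, 1.0e5  # modulus (psi), density (lb/in^3), allowable stress (psi)

        def weight(x):
            d, h = x                         # tube diameter and truss height (design variables)
            L = np.hypot(B, h)               # member length
            return 2.0 * rho * np.pi * d * t * L

        def constraints(x):
            d, h = x
            L = np.hypot(B, h)
            stress = P * L / (h * np.pi * d * t)                     # compressive member stress
            buckling = np.pi ** 2 * E * (d ** 2 + t ** 2) / (8.0 * L ** 2)  # Euler buckling stress
            return [s_allow - stress, buckling - stress]             # both must be >= 0

        res = minimize(weight, x0=[2.0, 30.0], method='SLSQP',
                       bounds=[(0.5, 10.0), (5.0, 100.0)],
                       constraints={'type': 'ineq', 'fun': constraints})
        print(res.x, res.fun)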

  10. An efficiency study of the simultaneous analysis and design of structures

    NASA Technical Reports Server (NTRS)

    Striz, Alfred G.; Wu, Zhiqi; Sobieski, Jaroslaw

    1995-01-01

    The efficiency of the Simultaneous Analysis and Design (SAND) approach in the minimum-weight optimization of structural systems subject to strength and displacement constraints, as well as size side constraints, is investigated. SAND allows the optimization to take place in one single operation, as opposed to the more traditional and sequential Nested Analysis and Design (NAND) method, where analyses and optimizations alternate. Thus, SAND has the advantage that the stiffness matrix is never factored during the optimization, retaining its original sparsity. One of SAND's disadvantages is the increase in the number of design variables and in the associated number of constraint gradient evaluations. If SAND is to be an acceptable player in the optimization field, it is essential to investigate the efficiency of the method and to present a possible cure for any inherent deficiencies.

  11. Optimization of pencil beam f-theta lens for high-accuracy metrology

    NASA Astrophysics Data System (ADS)

    Peng, Chuanqian; He, Yumei; Wang, Jie

    2018-01-01

    Pencil beam deflectometric profilers are common instruments for high-accuracy surface slope metrology of x-ray mirrors in synchrotron facilities. An f-theta optical system is a key optical component of the deflectometric profilers and is used to perform the linear angle-to-position conversion. Traditional optimization procedures for f-theta systems are not directly related to the angle-to-position conversion relation and are performed with stops of large size and a fixed working distance, which means they may not be suitable for the design of f-theta systems working with a small-sized pencil beam within a working distance range for ultra-high-accuracy metrology. If an f-theta system is not well designed, aberrations of the f-theta system will introduce many systematic errors into the measurement. A least-squares fitting procedure was used to optimize the configuration parameters of an f-theta system. Simulations using ZEMAX software showed that the optimized f-theta system significantly suppressed the angle-to-position conversion errors caused by aberrations. Any pencil-beam f-theta optical system can be optimized with the help of this optimization method.

  12. Optimization and characterization of liposome formulation by mixture design.

    PubMed

    Maherani, Behnoush; Arab-tehrany, Elmira; Kheirolomoom, Azadeh; Reshetov, Vadzim; Stebe, Marie José; Linder, Michel

    2012-02-07

    This study presents the application of the mixture design technique to develop an optimal liposome formulation by varying the type and percentage of lipids (DOPC, POPC and DPPC) in the liposome composition. Ten lipid mixtures were generated by the simplex-centroid design technique and liposomes were prepared by the extrusion method. Liposomes were characterized with respect to size, phase transition temperature, ζ-potential, lamellarity, fluidity and efficiency in loading calcein. The results were then applied to estimate the coefficients of the mixture design model and to find the optimal lipid composition with improved entrapment efficiency, size, transition temperature, fluidity and ζ-potential of the liposomes. The optimized formulation from the experiments was DOPC: 46%, POPC: 12% and DPPC: 42%. The optimal liposome formulation had an average diameter of 127.5 nm, a phase-transition temperature of 11.43 °C, a ζ-potential of -7.24 mV, a fluidity (1/P)TMA-DPH value of 2.87 and an encapsulation efficiency of 20.24%. The experimental characterization results for the optimal liposome formulation were in good agreement with those predicted by the mixture design technique.

  13. A method to incorporate leakage and head scatter corrections into a tomotherapy inverse treatment planning algorithm

    NASA Astrophysics Data System (ADS)

    Holmes, Timothy W.

    2001-01-01

    A detailed tomotherapy inverse treatment planning method is described which incorporates leakage and head scatter corrections during each iteration of the optimization process, allowing these effects to be directly accounted for in the optimized dose distribution. It is shown that the conventional inverse planning method for optimizing incident intensity can be extended to include a `concurrent' leaf sequencing operation from which the leakage and head scatter corrections are determined. The method is demonstrated using the steepest-descent optimization technique with constant step size and a least-squared error objective. The method was implemented using the MATLAB scientific programming environment and its feasibility demonstrated for 2D test cases simulating treatment delivery using a single coplanar rotation. The results indicate that this modification does not significantly affect convergence of the intensity optimization method when exposure times of individual leaves are stratified to a large number of levels (>100) during leaf sequencing. In general, the addition of aperture dependent corrections, especially `head scatter', reduces incident fluence in local regions of the modulated fan beam, resulting in increased exposure times for individual collimator leaves. These local variations can result in 5% or greater local variation in the optimized dose distribution compared to the uncorrected case. The overall efficiency of the modified intensity optimization algorithm is comparable to that of the original unmodified case.
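
    The intensity-optimization core described here (steepest descent with a constant step on a least-squared-error dose objective, with non-negative beamlet intensities) can be sketched as follows, assuming NumPy; the dose-deposition matrix and prescription are random stand-ins, and the leakage and head-scatter corrections that are the subject of the paper are not modeled.

        import numpy as np

        rng = np.random.default_rng(3)
        D = rng.random((200, 50))             # dose-deposition matrix (voxels x beamlets), assumed
        d_rx = rng.random(200) * 60.0         # prescribed voxel doses (Gy), assumed

        x = np.zeros(D.shape[1])              # beamlet intensities
        step = 1.0 / np.linalg.norm(D, 2) ** 2   # constant step small enough for convergence
        for _ in range(500):
            grad = D.T @ (D @ x - d_rx)       # gradient of 0.5 * ||D x - d_rx||^2
            x = np.maximum(0.0, x - step * grad)  # project onto non-negative intensities

        print(0.5 * np.linalg.norm(D @ x - d_rx) ** 2)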

  14. Optimization and evaluation of gastroretentive ranitidine HCl microspheres by using design expert software.

    PubMed

    Hooda, Aashima; Nanda, Arun; Jain, Manish; Kumar, Vikash; Rathee, Permender

    2012-12-01

    The current study involves the development of a multiunit chitosan-based floating system containing ranitidine HCl by the ionotropic gelation method for gastroretentive delivery, and the optimization of its drug entrapment and ex vivo bioadhesion. Chitosan, being cationic, non-toxic, biocompatible, biodegradable and bioadhesive, is frequently used as a material for drug delivery systems and is used to transport a drug to an acidic environment, where it enhances the transport of polar drugs across epithelial surfaces. The effect of various process variables, such as drug-polymer ratio, concentration of sodium tripolyphosphate and stirring speed, on physicochemical properties such as drug entrapment efficiency, particle size and bioadhesion was optimized using a central composite design and analyzed using response surface methodology. The observed responses coincided well with the predicted values given by the optimization technique. The optimized microspheres showed a drug entrapment efficiency of 74.73%, a particle size of 707.26 μm and bioadhesion of 71.68% in simulated gastric fluid (pH 1.2) after 8 h, with a floating lag time of 40 s. The average size of the dried microspheres ranged from 608.24 to 720.80 μm. The drug entrapment efficiency of the microspheres ranged from 41.67% to 87.58% and bioadhesion ranged from 62% to 86%. An accelerated stability study was performed on the optimized formulation as per ICH guidelines, and no significant change was found in drug content on storage. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. SU-E-T-395: Multi-GPU-Based VMAT Treatment Plan Optimization Using a Column-Generation Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Z; Shi, F; Jia, X

    Purpose: GPU has been employed to speed up VMAT optimizations from hours to minutes. However, its limited memory capacity makes it difficult to handle cases with a huge dose-deposition-coefficient (DDC) matrix, e.g. those with a large target size, multiple arcs, small beam angle intervals and/or small beamlet size. We propose multi-GPU-based VMAT optimization to solve this memory issue and make GPU-based VMAT more practical for clinical use. Methods: Our column-generation-based method generates apertures sequentially by iteratively searching for an optimal feasible aperture (referred to as the pricing problem, PP) and optimizing aperture intensities (referred to as the master problem, MP). The PP requires access to the large DDC matrix, which is implemented on a multi-GPU system. Each GPU stores a DDC sub-matrix corresponding to one fraction of the beam angles and is only responsible for calculations related to those angles. Broadcast and parallel reduction schemes are adopted for inter-GPU data transfer. MP is a relatively small-scale problem and is implemented on one GPU. One head-and-neck cancer case was used for testing. Three different strategies for VMAT optimization on a single GPU were also implemented for comparison: (S1) truncating the DDC matrix to ignore its small-value entries during optimization; (S2) transferring the DDC matrix part by part to the GPU during optimization whenever needed; (S3) moving DDC-matrix-related calculations onto the CPU. Results: Our multi-GPU-based implementation reaches a good plan within 1 minute. Although S1 was 10 seconds faster than our method, the obtained plan quality is worse. Both S2 and S3 handle the full DDC matrix and hence yield the same plan as our method. However, the computation time is longer, namely 4 minutes and 30 minutes, respectively. Conclusion: Our multi-GPU-based VMAT optimization can effectively solve the limited memory issue with good plan quality and high efficiency, making GPU-based ultra-fast VMAT planning practical for real clinical use.

  16. Enhanced oral bioavailability of silymarin using liposomes containing a bile salt: preparation by supercritical fluid technology and evaluation in vitro and in vivo

    PubMed Central

    Yang, Gang; Zhao, Yaping; Zhang, Yongtai; Dang, Beilei; Liu, Ying; Feng, Nianping

    2015-01-01

    The aim of this investigation was to develop a procedure to improve the dissolution and bioavailability of silymarin (SM) by using bile salt-containing liposomes that were prepared by supercritical fluid technology (ie, solution-enhanced dispersion by supercritical fluids [SEDS]). The process for the preparation of SM-loaded liposomes containing a bile salt (SM-Lip-SEDS) was optimized using a central composite design of response surface methodology with the ratio of SM to phospholipids (w/w), flow rate of solution (mL/min), and pressure (MPa) as independent variables. Particle size, entrapment efficiency (EE), and drug loading (DL) were dependent variables for optimization of the process and formulation variables. The particle size, zeta potential, EE, and DL of the optimized SM-Lip-SEDS were 160.5 nm, −62.3 mV, 91.4%, and 4.73%, respectively. Two other methods to produce SM liposomes were compared to the SEDS method. The liposomes obtained by the SEDS method exhibited the highest EE and DL, smallest particle size, and best stability compared to liposomes produced by the thin-film dispersion and reversed-phase evaporation methods. Compared to the SM powder, SM-Lip-SEDS showed increased in vitro drug release. The in vivo AUC0−t of SM-Lip-SEDS was 4.8-fold higher than that of the SM powder. These results illustrate that liposomes containing a bile salt can be used to enhance the oral bioavailability of SM and that supercritical fluid technology is suitable for the preparation of liposomes. PMID:26543366

  17. An Optimal Bahadur-Efficient Method in Detection of Sparse Signals with Applications to Pathway Analysis in Sequencing Association Studies.

    PubMed

    Dai, Hongying; Wu, Guodong; Wu, Michael; Zhi, Degui

    2016-01-01

    Next-generation sequencing data pose a severe curse of dimensionality, complicating traditional "single marker-single trait" analysis. We propose a two-stage combined p-value method for pathway analysis. The first stage is at the gene level, where we integrate effects within a gene using the Sequence Kernel Association Test (SKAT). The second stage is at the pathway level, where we perform a correlated Lancaster procedure to detect joint effects from multiple genes within a pathway. We show that the Lancaster procedure is optimal in Bahadur efficiency among all combined p-value methods. The Bahadur efficiency, [Formula: see text], compares sample sizes among different statistical tests when signals become sparse in sequencing data, i.e., ε → 0. The optimal Bahadur efficiency ensures that the Lancaster procedure asymptotically requires a minimal sample size to detect sparse signals ([Formula: see text]). The Lancaster procedure can also be applied to meta-analysis. Extensive empirical assessments of exome sequencing data show that the proposed method outperforms Gene Set Enrichment Analysis (GSEA). We applied the competitive Lancaster procedure to meta-analysis data generated by the Global Lipids Genetics Consortium to identify pathways significantly associated with high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, triglycerides, and total cholesterol.
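
    The independence-case Lancaster combination can be sketched as below, assuming SciPy: gene-level p-values are mapped to chi-square deviates with gene-specific degrees of freedom (weights), and the sum is referred to a chi-square with the total degrees of freedom. The correlation adjustment used in the paper for dependent genes is omitted here, and the example p-values and weights are arbitrary.

        import numpy as np
        from scipy.stats import chi2

        def lancaster(pvals, weights):
            """Combine p-values with the Lancaster procedure (independent case)."""
            pvals = np.asarray(pvals, float)
            weights = np.asarray(weights, float)
            t = np.sum(chi2.ppf(1.0 - pvals, df=weights))   # transformed, summed statistic
            return chi2.sf(t, df=weights.sum())             # pathway-level p-value

        # Fisher's method is the special case with all weights equal to 2:
        print(lancaster([0.01, 0.20, 0.03], weights=[2, 2, 2]))
        print(lancaster([0.01, 0.20, 0.03], weights=[4, 1, 3]))  # e.g., weights reflecting gene size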

  18. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
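
    A sketch of a classic exterior penalty method of the kind described, assuming SciPy for the inner unconstrained solves, is given below; the small quadratic test problem and the penalty schedule are illustrative, and the sketch omits the memory-saving and discrete-variable features of the software discussed above.

        import numpy as np
        from scipy.optimize import minimize

        def exterior_penalty(f, gs, x0, r0=1.0, growth=10.0, n_outer=8):
            """Solve min f(x) s.t. g(x) <= 0 via a sequence of penalized unconstrained problems."""
            x = np.asarray(x0, float)
            r = r0
            for _ in range(n_outer):
                def phi(z):   # penalized objective: f + r * sum of squared violations
                    viol = np.array([max(0.0, g(z)) for g in gs])
                    return f(z) + r * np.dot(viol, viol)
                x = minimize(phi, x, method='CG').x
                r *= growth   # tighten the penalty and re-solve from the previous point
            return x

        # minimize (x1-2)^2 + (x2-1)^2  subject to  x1 + x2 <= 2  (g = x1 + x2 - 2 <= 0)
        f = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2
        g = lambda x: x[0] + x[1] - 2
        print(exterior_penalty(f, [g], x0=[0.0, 0.0]))   # tends toward (1.5, 0.5)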

  19. A new hybrid meta-heuristic algorithm for optimal design of large-scale dome structures

    NASA Astrophysics Data System (ADS)

    Kaveh, A.; Ilchi Ghazaan, M.

    2018-02-01

    In this article a hybrid algorithm based on a vibrating particles system (VPS) algorithm, multi-design variable configuration (Multi-DVC) cascade optimization, and an upper bound strategy (UBS) is presented for global optimization of large-scale dome truss structures. The new algorithm is called MDVC-UVPS in which the VPS algorithm acts as the main engine of the algorithm. The VPS algorithm is one of the most recent multi-agent meta-heuristic algorithms mimicking the mechanisms of damped free vibration of single degree of freedom systems. In order to handle a large number of variables, cascade sizing optimization utilizing a series of DVCs is used. Moreover, the UBS is utilized to reduce the computational time. Various dome truss examples are studied to demonstrate the effectiveness and robustness of the proposed method, as compared to some existing structural optimization techniques. The results indicate that the MDVC-UVPS technique is a powerful search and optimization method for optimizing structural engineering problems.

  20. A feasibility study: Selection of a personalized radiotherapy fractionation schedule using spatiotemporal optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Minsun, E-mail: mk688@uw.edu; Stewart, Robert D.; Phillips, Mark H.

    2015-11-15

    Purpose: To investigate the impact of using spatiotemporal optimization, i.e., intensity-modulated spatial optimization followed by fractionation schedule optimization, to select the patient-specific fractionation schedule that maximizes the tumor biologically equivalent dose (BED) under dose constraints for multiple organs-at-risk (OARs). Methods: Spatiotemporal optimization was applied to a variety of lung tumors in a phantom geometry using a range of tumor sizes and locations. The optimal fractionation schedule for a patient using the linear-quadratic cell survival model depends on the tumor and OAR sensitivity to fraction size (α/β), the effective tumor doubling time (T_d), and the size and location of the tumor target relative to one or more OARs (dose distribution). The authors used a spatiotemporal optimization method to identify the optimal number of fractions N that maximizes the 3D tumor BED distribution for 16 lung phantom cases. The selection of the optimal fractionation schedule used equivalent (30-fraction) OAR constraints for the heart (D_mean ≤ 45 Gy), lungs (D_mean ≤ 20 Gy), cord (D_max ≤ 45 Gy), esophagus (D_max ≤ 63 Gy), and unspecified tissues (D_05 ≤ 60 Gy). To assess plan quality, the authors compared the minimum, mean, maximum, and D_95 of tumor BED, as well as the equivalent uniform dose (EUD), for optimized plans to conventional intensity-modulated radiation therapy plans prescribing 60 Gy in 30 fractions. A sensitivity analysis was performed to assess the effects of T_d (3–100 days), tumor lag-time (T_k = 0–10 days), and the size of tumors on the optimal fractionation schedule. Results: Using an α/β ratio of 10 Gy, the average values of tumor max, min, mean BED, and D_95 were up to 19%, 21%, 20%, and 19% larger than those from the conventional prescription, depending on the T_d and T_k used. Tumor EUD was up to 17% larger than the conventional prescription. For fast proliferating tumors with T_d less than 10 days, there was no significant increase in tumor BED but the treatment course could be shortened without a loss in tumor BED. The improvement in the tumor mean BED was more pronounced with smaller tumors (p-value = 0.08). Conclusions: Spatiotemporal optimization of patient plans has the potential to significantly improve local tumor control (larger BED/EUD) of patients with a favorable geometry, such as smaller tumors with larger distances between the tumor target and nearby OAR. In patients with a less favorable geometry and for fast growing tumors, plans optimized using spatiotemporal optimization and conventional (spatial-only) optimization are equivalent (negligible differences in tumor BED/EUD). However, spatiotemporal optimization yields shorter treatment courses than conventional spatial-only optimization. Personalized, spatiotemporal optimization of treatment schedules can increase patient convenience and help with the efficient allocation of clinical resources. Spatiotemporal optimization can also help identify a subset of patients that might benefit from nonconventional (large dose per fraction) treatments that are ineligible for the current practice of stereotactic body radiation therapy.
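
    The fractionation-selection idea can be sketched with the linear-quadratic BED model as below, assuming NumPy; a single OAR is assumed to receive a fixed fraction of the tumor dose, its BED tolerance fixes the dose per fraction for each candidate number of fractions, and the tumor BED with a repopulation correction is then maximized over N. All parameter values are illustrative, not those of the study.

        import numpy as np

        ab_t, ab_oar = 10.0, 3.0        # alpha/beta (Gy) for tumor and OAR
        alpha = 0.3                     # tumor radiosensitivity (1/Gy)
        Td, Tk = 30.0, 7.0              # effective doubling time and lag time (days)
        s = 0.6                         # OAR dose per Gy of tumor dose (sparing factor)
        oar_bed_max = 100.0             # OAR BED tolerance (Gy)

        def tumor_bed(N):
            # Dose per fraction d solving N*(s*d)*(1 + s*d/ab_oar) = oar_bed_max.
            a, b, c = N * s ** 2 / ab_oar, N * s, -oar_bed_max
            d = (-b + np.sqrt(b ** 2 - 4 * a * c)) / (2 * a)
            T = 7.0 * (N - 1) / 5.0                       # elapsed days, 5 fractions/week
            repop = np.log(2) * max(0.0, T - Tk) / (alpha * Td)
            return N * d * (1 + d / ab_t) - repop

        N_best = max(range(1, 41), key=tumor_bed)
        print(N_best, tumor_bed(N_best))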

  1. Improved approach for electric vehicle rapid charging station placement and sizing using Google maps and binary lightning search algorithm

    PubMed Central

    Shareef, Hussain; Mohamed, Azah

    2017-01-01

    The electric vehicle (EV) is considered a premium solution to global warming and various types of pollution. Nonetheless, a key concern is the recharging of EV batteries. Therefore, this study proposes a novel approach that considers the costs of transportation loss, buildup, and substation energy loss and that incorporates harmonic power loss into optimal rapid charging station (RCS) planning. A novel optimization technique, called binary lightning search algorithm (BLSA), is proposed to solve the optimization problem. BLSA is also applied to a conventional RCS planning method. A comprehensive analysis is conducted to assess the performance of the two RCS planning methods by using the IEEE 34-bus test system as the power grid. The comparative studies show that the proposed BLSA is better than other optimization techniques. The daily total cost in RCS planning of the proposed method, including harmonic power loss, decreases by 10% compared with that of the conventional method. PMID:29220396

  2. Improved approach for electric vehicle rapid charging station placement and sizing using Google maps and binary lightning search algorithm.

    PubMed

    Islam, Md Mainul; Shareef, Hussain; Mohamed, Azah

    2017-01-01

    The electric vehicle (EV) is considered a premium solution to global warming and various types of pollution. Nonetheless, a key concern is the recharging of EV batteries. Therefore, this study proposes a novel approach that considers the costs of transportation loss, buildup, and substation energy loss and that incorporates harmonic power loss into optimal rapid charging station (RCS) planning. A novel optimization technique, called binary lightning search algorithm (BLSA), is proposed to solve the optimization problem. BLSA is also applied to a conventional RCS planning method. A comprehensive analysis is conducted to assess the performance of the two RCS planning methods by using the IEEE 34-bus test system as the power grid. The comparative studies show that the proposed BLSA is better than other optimization techniques. The daily total cost in RCS planning of the proposed method, including harmonic power loss, decreases by 10% compared with that of the conventional method.

  3. Parameter Optimization for Turbulent Reacting Flows Using Adjoints

    NASA Astrophysics Data System (ADS)

    Lapointe, Caelan; Hamlington, Peter E.

    2017-11-01

    The formulation of a new adjoint solver for topology optimization of turbulent reacting flows is presented. This solver provides novel configurations (e.g., geometries and operating conditions) based on desired system outcomes (i.e., objective functions) for complex reacting flow problems of practical interest. For many such problems, it would be desirable to know optimal values of design parameters (e.g., physical dimensions, fuel-oxidizer ratios, and inflow-outflow conditions) prior to real-world manufacture and testing, which can be expensive, time-consuming, and dangerous. However, computational optimization of these problems is made difficult by the complexity of most reacting flows, necessitating the use of gradient-based optimization techniques in order to explore a wide design space at manageable computational cost. The adjoint method is an attractive way to obtain the required gradients, because the cost of the method is determined by the dimension of the objective function rather than the size of the design space. Here, the formulation of a novel solver is outlined that enables gradient-based parameter optimization of turbulent reacting flows using the discrete adjoint method. Initial results and an outlook for future research directions are provided.

  4. SnagPRO: snag and tree sampling and analysis methods for wildlife

    Treesearch

    Lisa J. Bate; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough

    2008-01-01

    We describe sampling methods and provide software to accurately and efficiently estimate snag and tree densities at desired scales to meet a variety of research and management objectives. The methods optimize sampling effort by choosing a plot size appropriate for the specified forest conditions and sampling goals. Plot selection and data analyses are supported by...

  5. Enhancement of 2,3-Butanediol Production by Klebsiella oxytoca PTCC 1402

    PubMed Central

    Anvari, Maesomeh; Safari Motlagh, Mohammad Reza

    2011-01-01

    Optimal operating parameters for 2,3-butanediol production using Klebsiella oxytoca under submerged culture conditions were determined by using the Taguchi method. The effect of different factors, including medium composition, pH, temperature, mixing intensity, and inoculum size, on 2,3-butanediol production was analyzed using the Taguchi method at three levels. Based on these analyses, the optimum concentrations of glucose, acetic acid, and succinic acid were found to be 6, 0.5, and 1.0 (% w/v), respectively. Furthermore, optimum values for temperature, inoculum size, pH, and shaking speed were determined as 37°C, 8 (g/L), 6.1, and 150 rpm, respectively. The optimal combination of factors obtained from the proposed DOE methodology was further validated by conducting fermentation experiments, and the obtained results revealed an enhanced 2,3-butanediol yield of 44%. PMID:21318172

  6. Optimization of extraction parameters of pentacyclic triterpenoids from Swertia chirata stem using response surface methodology.

    PubMed

    Pandey, Devendra Kumar; Kaur, Prabhjot

    2018-03-01

    In the present investigation, pentacyclic triterpenoids were extracted from different parts of Swertia chirata by solid-liquid reflux extraction. The total pentacyclic triterpenoids (UA, OA, and BA) in the extracted samples were determined by an HPTLC method. Preliminary studies showed that the stem contains the maximum pentacyclic triterpenoid content and was therefore chosen for further studies. Response surface methodology (RSM) was employed to optimize the extraction variables, viz., temperature (X1: 35-70 °C), extraction time (X2: 30-60 min), solvent composition (X3: 20-80%), solvent-to-solid ratio (X4: 30-60 ml g⁻¹), and particle size (X5: 3-6 mm), for maximum recovery of triterpenoid from the stem of Swertia chirata. A Plackett-Burman design was used initially to screen the three most influential extraction factors, viz., particle size, temperature, and solvent composition, for their effect on triterpenoid yield. A central composite design (CCD) was then implemented to optimize these significant extraction parameters for maximum triterpenoid yield. Mean particle size (3 mm), temperature (65 °C), and methanol-ethyl acetate solvent composition (45%) were identified as the settings giving the best triterpenoid yield. A second-order polynomial model satisfactorily fitted the experimental data, with an R² value of 0.98 for the triterpenoid yield (p < 0.001), implying good agreement between the experimental triterpenoid yield (3.71%) and the predicted value (3.79%).
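
    A central composite design is normally analyzed by least-squares fitting of a second-order polynomial in the coded factors. The sketch below shows that step for two hypothetical factors with fabricated yields; it is not the study's actual five-factor model:

    ```python
    import numpy as np

    # Fabricated CCD-style data for two coded factors (e.g., temperature x1, solvent fraction x2).
    X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                  [-1.414, 0], [1.414, 0], [0, -1.414], [0, 1.414],
                  [0, 0], [0, 0], [0, 0]])
    y = np.array([2.1, 2.9, 2.4, 3.4, 2.0, 3.2, 2.3, 2.8, 3.6, 3.7, 3.5])  # yield (%), invented

    # Second-order design matrix: 1, x1, x2, x1^2, x2^2, x1*x2.
    x1, x2 = X[:, 0], X[:, 1]
    D = np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])

    beta, *_ = np.linalg.lstsq(D, y, rcond=None)          # least-squares coefficients
    pred = D @ beta
    r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
    print("coefficients:", np.round(beta, 3), " R^2:", round(r2, 3))
    ```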

  7. Asymmetric flow field flow fractionation for the characterization of globule size distribution in complex formulations: A cyclosporine ophthalmic emulsion case.

    PubMed

    Qu, Haiou; Wang, Jiang; Wu, Yong; Zheng, Jiwen; Krishnaiah, Yellela S R; Absar, Mohammad; Choi, Stephanie; Ashraf, Muhammad; Cruz, Celia N; Xu, Xiaoming

    2018-03-01

    Commonly used characterization techniques such as cryogenic transmission electron microscopy (cryo-TEM) and batch-mode dynamic light scattering (DLS) are either time consuming or unable to offer sufficient resolution to discern the poly-dispersity of complex drug products like cyclosporine ophthalmic emulsions. Here, a size-based separation and characterization method for globule size distribution using asymmetric flow field-flow fractionation (AF4) is reported for comparative assessment of cyclosporine ophthalmic emulsion drug products (model formulation) with a wide size span and poly-dispersity. Cyclosporine emulsion formulations that are qualitatively (Q1) and quantitatively (Q2) the same as Restasis® were prepared in house with varying manufacturing processes and analyzed using the optimized AF4 method. Based on our results, the commercially available cyclosporine ophthalmic emulsion has a globule size span from 30 nm to a few hundred nanometers, with the majority smaller than 100 nm. The results with in-house formulations demonstrated the sensitivity of AF4 in determining the differences in the globule size distribution caused by changes to the manufacturing process. It is concluded that the optimized AF4 method is a potential analytical technique for comprehensive understanding of the microstructure and assessment of complex emulsion drug products with high poly-dispersity. Published by Elsevier B.V.

  8. The effects of relative food item size on optimal tooth cusp sharpness during brittle food item processing

    PubMed Central

    Berthaume, Michael A.; Dumont, Elizabeth R.; Godfrey, Laurie R.; Grosse, Ian R.

    2014-01-01

    Teeth are often assumed to be optimal for their function, which allows researchers to derive dietary signatures from tooth shape. Most tooth shape analyses normalize for tooth size, potentially masking the relationship between relative food item size and tooth shape. Here, we model how relative food item size may affect optimal tooth cusp radius of curvature (RoC) during the fracture of brittle food items using a parametric finite-element (FE) model of a four-cusped molar. Morphospaces were created for four different food item sizes by altering cusp RoCs to determine whether optimal tooth shape changed as food item size changed. The morphospaces were also used to investigate whether variation in efficiency metrics (i.e. stresses, energy and optimality) changed as food item size changed. We found that optimal tooth shape changed as food item size changed, but that all optimal morphologies were similar, with one dull cusp that promoted high stresses in the food item and three cusps that acted to stabilize the food item. There were also positive relationships between food item size and the coefficients of variation for stresses in food item and optimality, and negative relationships between food item size and the coefficients of variation for stresses in the enamel and strain energy absorbed by the food item. These results suggest that relative food item size may play a role in selecting for optimal tooth shape, and the magnitude of these selective forces may change depending on food item size and which efficiency metric is being selected. PMID:25320068

  9. Sampling bee communities using pan traps: alternative methods increase sample size

    USDA-ARS?s Scientific Manuscript database

    Monitoring of the status of bee populations and inventories of bee faunas require systematic sampling. Efficiency and ease of implementation has encouraged the use of pan traps to sample bees. Efforts to find an optimal standardized sampling method for pan traps have focused on pan trap color. Th...

  10. Weight optimization of an aerobrake structural concept for a lunar transfer vehicle

    NASA Technical Reports Server (NTRS)

    Bush, Lance B.; Unal, Resit; Rowell, Lawrence F.; Rehder, John J.

    1992-01-01

    An aerobrake structural concept for a lunar transfer vehicle was weight optimized through the use of the Taguchi design method, finite element analyses, and element sizing routines. Six design parameters were chosen to represent the aerobrake structural configuration. The design parameters included honeycomb core thickness, diameter-depth ratio, shape, material, number of concentric ring frames, and number of radial frames. Each parameter was assigned three levels. The aerobrake structural configuration with the minimum weight was 44 percent less than the average weight of all the remaining satisfactory experimental configurations. In addition, the results of this study have served to bolster the advocacy of the Taguchi method for aerospace vehicle design. Both reduced analysis time and an optimized design demonstrated the applicability of the Taguchi method to aerospace vehicle design.

  11. A two-stage approach to the depot shunting driver assignment problem with workload balance considerations.

    PubMed

    Wang, Jiaxi; Gronalt, Manfred; Sun, Yan

    2017-01-01

    Due to its environmentally sustainable and energy-saving characteristics, railway transportation nowadays plays a fundamental role in delivering passengers and goods. Having emerged in the area of transportation planning, the crew (workforce) sizing problem and the crew scheduling problem have attracted great attention from the railway industry and the scientific community. In this paper, we aim to solve the two problems by proposing a novel two-stage optimization approach in the context of the electric multiple units (EMU) depot shunting driver assignment problem. Given a predefined depot shunting schedule, the first stage of the approach focuses on determining an optimal size of shunting drivers. The second stage is formulated as a bi-objective optimization model, in which we comprehensively consider the objectives of minimizing the total walking distance and maximizing the workload balance. We then combine the normalized normal constraint method with a modified Pareto filter algorithm to obtain Pareto solutions for the bi-objective optimization problem. Furthermore, we conduct a series of numerical experiments to demonstrate the proposed approach. Based on the computational results, the regression analysis yields a driver size predictor and the sensitivity analysis gives some interesting insights that are useful for decision makers.

  12. A two-stage approach to the depot shunting driver assignment problem with workload balance considerations

    PubMed Central

    Gronalt, Manfred; Sun, Yan

    2017-01-01

    Due to its environmentally sustainable and energy-saving characteristics, railway transportation nowadays plays a fundamental role in delivering passengers and goods. Having emerged in the area of transportation planning, the crew (workforce) sizing problem and the crew scheduling problem have attracted great attention from the railway industry and the scientific community. In this paper, we aim to solve the two problems by proposing a novel two-stage optimization approach in the context of the electric multiple units (EMU) depot shunting driver assignment problem. Given a predefined depot shunting schedule, the first stage of the approach focuses on determining an optimal size of shunting drivers. The second stage is formulated as a bi-objective optimization model, in which we comprehensively consider the objectives of minimizing the total walking distance and maximizing the workload balance. We then combine the normalized normal constraint method with a modified Pareto filter algorithm to obtain Pareto solutions for the bi-objective optimization problem. Furthermore, we conduct a series of numerical experiments to demonstrate the proposed approach. Based on the computational results, the regression analysis yields a driver size predictor and the sensitivity analysis gives some interesting insights that are useful for decision makers. PMID:28704489

  13. Automatic design of synthetic gene circuits through mixed integer non-linear programming.

    PubMed

    Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias

    2012-01-01

    Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guarantees on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfies the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits.
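
    To make the part-selection idea concrete, the toy sketch below enumerates a tiny hypothetical promoter/RBS library and keeps the feasible combination whose predicted output best approximates a target; a real MINLP formulation would hand the same objective and constraints to a deterministic solver rather than enumerate, but the structure of the problem is the same. All parts, strengths, and constraints are invented:

    ```python
    from itertools import product

    # Hypothetical library: candidate promoters and RBSs with characterized strengths (arbitrary units).
    promoters = {"pLow": 1.0, "pMed": 5.0, "pHigh": 20.0}
    rbss      = {"rWeak": 0.5, "rStrong": 2.0}

    target_expression = 12.0   # desired steady-state output (objective to approximate)
    max_burden = 25.0          # user-defined constraint, e.g., total expression load

    best = None
    for (p_name, p), (r_name, r) in product(promoters.items(), rbss.items()):
        expression = p * r                 # crude proxy model: promoter strength x RBS strength
        burden = p + 5.0 * r
        if burden > max_burden:            # discard infeasible part combinations
            continue
        err = abs(expression - target_expression)
        if best is None or err < best[0]:
            best = (err, p_name, r_name, expression)

    print(best)                            # (error, chosen promoter, chosen RBS, predicted output)
    ```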

  14. Multimodal Optimization by Covariance Matrix Self-Adaptation Evolution Strategy with Repelling Subpopulations.

    PubMed

    Ahrari, Ali; Deb, Kalyanmoy; Preuss, Mike

    2017-01-01

    During recent decades, many niching methods have been proposed and empirically verified on some available test problems. They often rely on particular assumptions about the distribution, shape, and size of the basins, which can seldom be made in practical optimization problems. This study utilizes several existing concepts and techniques, such as taboo points, normalized Mahalanobis distance, and Ursem's hill-valley function, in order to develop a new tool for multimodal optimization that does not make any of these assumptions. In the proposed method, several subpopulations explore the search space in parallel. Offspring of a subpopulation are forced to maintain a sufficient distance from the centers of fitter subpopulations and the previously identified basins, which are marked as taboo points. The taboo points repel the subpopulation to prevent convergence to the same basin. A strategy to update the repelling power of the taboo points is proposed to address the challenge of basins of dissimilar size. The local shape of a basin is also approximated by the distribution of the subpopulation members converging to that basin. The proposed niching strategy is incorporated into the covariance matrix self-adaptation evolution strategy (CMSA-ES), a potent global optimization method. The resultant method, called covariance matrix self-adaptation with repelling subpopulations (RS-CMSA), is assessed and compared to several state-of-the-art niching methods on a standard test suite for multimodal optimization. An organized procedure for parameter setting is followed, which assumes a rough estimate of the desired/expected number of minima is available. Performance sensitivity to the accuracy of this estimate is also studied by introducing the concept of robust mean peak ratio. Based on the numerical results using the available and the introduced performance measures, RS-CMSA emerges as the most successful method when robustness and efficiency are considered at the same time.
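
    The repelling mechanism can be pictured as rejecting offspring that fall within a normalized Mahalanobis radius of any taboo point. The sketch below illustrates only that rejection test, with an assumed covariance and radius; it is not the published RS-CMSA update rules:

    ```python
    import numpy as np

    def mahalanobis(x, center, cov):
        d = x - center
        return np.sqrt(d @ np.linalg.solve(cov, d))

    def is_rejected(offspring, taboo_points, cov, radius=2.0):
        """Reject an offspring lying within the repelling radius of any taboo point."""
        return any(mahalanobis(offspring, t, cov) < radius for t in taboo_points)

    rng = np.random.default_rng(0)
    cov = np.diag([0.5, 2.0])                 # subpopulation covariance (assumed)
    taboo = [np.array([0.0, 0.0])]            # a previously identified basin / fitter subpopulation center
    offspring = rng.normal(size=(5, 2)) * 2.0
    for o in offspring:
        print(np.round(o, 2), "rejected" if is_rejected(o, taboo, cov) else "kept")
    ```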

  15. Interplanetary Program to Optimize Simulated Trajectories (IPOST). Volume 1: User's guide

    NASA Technical Reports Server (NTRS)

    Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.

    1992-01-01

    IPOST is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three degree of freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the Stanford NPSOL algorithm. The IPOST structure allows sub-problems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.

  16. A general optimality criteria algorithm for a class of engineering optimization problems

    NASA Astrophysics Data System (ADS)

    Belegundu, Ashok D.

    2015-05-01

    An optimality criteria (OC)-based algorithm for optimization of a general class of nonlinear programming (NLP) problems is presented. The algorithm is only applicable to problems where the objective and constraint functions satisfy certain monotonicity properties. For multiply constrained problems which satisfy these assumptions, the algorithm is attractive compared with existing NLP methods as well as prevalent OC methods, as the latter involve computationally expensive active set and step-size control strategies. The fixed point algorithm presented here is applicable not only to structural optimization problems but also to certain problems that occur in resource allocation and inventory models. Convergence aspects are discussed. The fixed point update, or resizing formula, is given physical significance, which brings out a strength-and-trim feature. The number of function evaluations remains independent of the number of variables, allowing the efficient solution of problems with a large number of variables.
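
    A familiar special case of such a fixed-point resizing rule is the fully-stressed-design update, where each member area is scaled by the ratio of its stress to the allowable value and trimmed at a minimum gauge. The sketch below uses that simple variant with made-up numbers; it is a stand-in illustration, not the paper's general OC formula:

    ```python
    import numpy as np

    def resize(areas, loads, sigma_allow, a_min=1e-4, eta=1.0):
        """Fully-stressed-design style resizing: scale each area by (stress / allowable)^eta."""
        stress = loads / areas                        # member stress for given axial forces
        ratio = (np.abs(stress) / sigma_allow) ** eta
        return np.clip(areas * ratio, a_min, None)    # "trim": never resize below the minimum gauge

    areas = np.array([2.0, 2.0, 2.0])                 # initial cross-sectional areas
    loads = np.array([100.0, 40.0, 10.0])             # fixed member forces (statically determinate toy case)
    for it in range(5):
        areas = resize(areas, loads, sigma_allow=25.0)
        print(it, np.round(areas, 4))                 # converges to the fixed point loads / sigma_allow
    ```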

  17. Pair 2-electron reduced density matrix theory using localized orbitals

    NASA Astrophysics Data System (ADS)

    Head-Marsden, Kade; Mazziotti, David A.

    2017-08-01

    Full configuration interaction (FCI) restricted to a pairing space yields size-extensive correlation energies but its cost scales exponentially with molecular size. Restricting the variational two-electron reduced-density-matrix (2-RDM) method to represent the same pairing space yields an accurate lower bound to the pair FCI energy at a mean-field-like computational scaling of O(r³), where r is the number of orbitals. In this paper, we show that localized molecular orbitals can be employed to generate an efficient, approximately size-extensive pair 2-RDM method. The use of localized orbitals eliminates the substantial cost of optimizing iteratively the orbitals defining the pairing space without compromising accuracy. In contrast to the localized orbitals, the use of canonical Hartree-Fock molecular orbitals is shown to be both inaccurate and non-size-extensive. The pair 2-RDM has the flexibility to describe the spectra of one-electron RDM occupation numbers from all quantum states that are invariant to time-reversal symmetry. Applications are made to hydrogen chains and their dissociation, n-acene from naphthalene through octacene, and cadmium telluride 2-, 3-, and 4-unit polymers. For the hydrogen chains, the pair 2-RDM method recovers the majority of the energy obtained from similar calculations that iteratively optimize the orbitals. The localized-orbital pair 2-RDM method with its mean-field-like computational scaling and its ability to describe multi-reference correlation has important applications to a range of strongly correlated phenomena in chemistry and physics.

  18. Optimizing the passenger air bag of an adaptive restraint system for multiple size occupants.

    PubMed

    Bai, Zhonghao; Jiang, Binhui; Zhu, Feng; Cao, Libo

    2014-01-01

    The development of the adaptive occupant restraint system (AORS) has led to an innovative way to optimize such systems for multiple size occupants. An AORS consists of multiple units such as adaptive air bags, seat belts, etc. During a collision, as a supplemental protective device, air bags provide constraint force and play a role in dissipating the crash energy of the occupants' head and thorax. This article presents an investigation into an adaptive passenger air bag (PAB). The purpose of this study is to develop a base shape of a PAB for different size occupants using an optimization method. Four typical base shapes of a PAB were designed based on geometric data on the passenger side. Then 4 PAB finite element (FE) models and a validated sled with different size dummy models were developed in MADYMO (TNO, Rijswijk, The Netherlands) to conduct the optimization and obtain the best baseline PAB to be used in the AORS. The objective functions, that is, the minimum total probability of injuries (∑Pcomb) of the 5th percentile female and 50th and 95th percentile male dummies, were adopted to evaluate the optimal configurations. The injury probability (Pcomb) for each dummy was adopted from the U.S. New Car Assessment Program (US-NCAP). The parameters of the AORS were first optimized for different types of PAB base shapes in a frontal impact. Then, contact time duration and force between the PAB and dummy head/chest were optimized by adjusting the parameters of the PAB, such as the number and position of tethers, to lower the Pcomb of the 95th percentile male dummy. According to the optimization results, the 4 typical PABs could provide effective protection to 5th and 50th percentile dummies. However, due to the heavy and large torsos of the 95th percentile occupants, the current occupant restraint system does not demonstrate satisfactory protective function, particularly for the thorax.
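
    Assuming the usual NCAP-style combination, the per-dummy injury probability is the chance that at least one monitored body region is injured, and the optimization objective sums it over the dummy sizes. The sketch below encodes that assumption with two regions and fabricated risks; the exact regions and risk curves used in the study are not reproduced here:

    ```python
    def p_comb(p_head, p_chest):
        # Probability that at least one of the two regions is injured, assuming independence
        # (a common US-NCAP-style combination; the exact form used in the study is assumed here).
        return 1.0 - (1.0 - p_head) * (1.0 - p_chest)

    # Hypothetical per-dummy risks for one PAB candidate; objective = sum over dummy sizes.
    dummies = {"F05": (0.08, 0.12), "M50": (0.10, 0.15), "M95": (0.18, 0.22)}
    total = sum(p_comb(*risks) for risks in dummies.values())
    print(round(total, 4))
    ```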

  19. Optimization of extraction efficiency by shear emulsifying assisted enzymatic hydrolysis and functional properties of dietary fiber from deoiled cumin (Cuminum cyminum L.).

    PubMed

    Ma, Mengmei; Mu, Taihua; Sun, Hongnan; Zhang, Miao; Chen, Jingwang; Yan, Zhibin

    2015-07-15

    This study evaluated the optimal conditions for extracting dietary fiber (DF) from deoiled cumin by shear emulsifying assisted enzymatic hydrolysis (SEAEH) using the response surface methodology. Fat adsorption capacity (FAC), glucose adsorption capacity (GAC), and bile acid retardation index (BRI) were measured to evaluate the functional properties of the extracted DF. The results revealed that the optimal extraction conditions included an enzyme to substrate ratio of 4.5%, a reaction temperature of 57 °C, a pH value of 7.7, and a reaction time of 155 min. Under these conditions, DF extraction efficiency and total dietary fiber content were 95.12% and 84.18%, respectively. The major components of deoiled cumin DF were hemicellulose (37.25%) and cellulose (33.40%). FAC and GAC increased with decreasing DF particle size (51-100 μm), but decreased with DF particle sizes <26 μm; BRI increased with decreasing DF particle size. The results revealed that SEAEH is an effective method for extracting DF. DF with particle size 26-51 μm had improved functional properties. Copyright © 2015. Published by Elsevier Ltd.

  20. Comparing, optimizing, and benchmarking quantum-control algorithms in a unifying programming framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Machnes, S.; Institute for Theoretical Physics, University of Ulm, D-89069 Ulm; Sander, U.

    2011-08-15

    For paving the way to novel applications in quantum simulation, computation, and technology, increasingly large quantum systems have to be steered with high precision. It is a typical task amenable to numerical optimal control to turn the time course of pulses, i.e., piecewise constant control amplitudes, iteratively into an optimized shape. Here, we present a comparative study of optimal-control algorithms for a wide range of finite-dimensional applications. We focus on the most commonly used algorithms: GRAPE methods, which update all controls concurrently, and Krotov-type methods, which do so sequentially. Guidelines for their use are given and open research questions are pointed out. Moreover, we introduce a unifying algorithmic framework, DYNAMO (dynamic optimization platform), designed to provide the quantum-technology community with a convenient MATLAB-based tool set for optimal control. In addition, it gives researchers in optimal-control techniques a framework for benchmarking and comparing newly proposed algorithms with the state of the art. It allows a mix-and-match approach with various types of gradients, update and step-size methods as well as subspace choices. Open-source code including examples is made available at http://qlib.info.

  1. Optimization of the emulsification and solvent displacement method for the preparation of solid lipid nanoparticles.

    PubMed

    Noriega-Peláez, Eddy Kei; Mendoza-Muñoz, Néstor; Ganem-Quintanar, Adriana; Quintanar-Guerrero, David

    2011-02-01

    The essential aim of this article is to prepare solid lipid nanoparticles (SLNs) by the emulsification and solvent displacement method and to determine the best process conditions to obtain submicron particles. The emulsification and solvent displacement method is a modification of the well-known emulsification-diffusion method, but without dilution of the system. The extraction of the partially water-miscible solvent from the emulsion globules is carried out under reduced pressure, which causes the diffusion of the solvent toward the external phase, with subsequent lipid aggregation in particles whose size will depend on the process conditions. The critical variables affecting the process, such as stirring rate, the proportion of phases in the emulsion, and the amount of stabilizer and lipid, were evaluated and optimized. By this method, it was possible to obtain a high yield of solids in the dispersion for the lipids evaluated (Compritol(®) ATO 888, Geleol(®), Gelucire(®) 44/14, and stearic acid). SLNs of up to ∼20 mg/mL were obtained for all lipids evaluated. A marked reduction in particle size was seen as the stirring rate increased from 500 to 2500 rpm, and a transition from micro- to nanometric size was observed. The smallest particle sizes obtained were 113 nm for Compritol(®) ATO 888, 70 nm for Gelucire(®) 44/14, 210 nm for Geleol(®), and 527 nm for stearic acid, using a rotor-stator homogenizer (Ultra-Turrax(®)) at 16,000 rpm. The best phase ratio (organic/aqueous) was 1 : 2. The process proposed in this study is a new alternative to prepare SLNs with technological potential.

  2. Improving Efficiency of Passive RFID Tag Anti-Collision Protocol Using Dynamic Frame Adjustment and Optimal Splitting.

    PubMed

    Memon, Muhammad Qasim; He, Jingsha; Yasir, Mirza Ammar; Memon, Aasma

    2018-04-12

    Radio frequency identification is a wireless communication technology that enables data gathering from and identification of any tagged object. The collisions produced during wireless communication lead to a variety of problems, including an unwanted number of iterations, reader-induced idle slots, and computational complexity in estimating and recognizing the number of tags. In this work, dynamic frame adjustment and optimal splitting are employed together in the proposed algorithm. In the dynamic frame adjustment method, the length of frames is based on the quantity of tags to yield optimal efficiency. The optimal splitting method is conceived with a smaller duration of idle slots, using an optimal value for the splitting level M_opt, where M > 2, to vary slot sizes and obtain the minimal identification time for the idle slots. The application of the proposed algorithm offers the advantage of not requiring the cumbersome estimation of the quantity of tags incurred, and the size (number) of tags has no effect on its performance efficiency. Our experimental results show that, using the proposed algorithm, the efficiency curve remains constant as the number of tags varies from 50 to 450, resulting in an overall theoretical gain in efficiency of 0.032 compared with a system efficiency of 0.441, thus outperforming both dynamic binary tree slotted ALOHA (DBTSA) and binary splitting protocols.
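
    In frame-slotted ALOHA, read efficiency peaks when the frame length is close to the number of unread tags, which is why the frame is adjusted dynamically. A short sketch of that calculation, with an assumed tag count, is given below:

    ```python
    import numpy as np

    def efficiency(n_tags, frame_len):
        """System efficiency of frame-slotted ALOHA: expected successful slots / frame length."""
        return n_tags * (1.0 - 1.0 / frame_len) ** (n_tags - 1) / frame_len

    n = 100                                  # hypothetical number of unread tags
    frames = np.arange(10, 301)
    best = frames[np.argmax([efficiency(n, L) for L in frames])]
    print("best frame length ~", best, "efficiency:", round(efficiency(n, best), 3))
    # Efficiency peaks near frame_len = n at roughly 1/e ~ 0.368, the classic slotted-ALOHA bound.
    ```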

  3. Optimal design of the first stage of the plate-fin heat exchanger for the EAST cryogenic system

    NASA Astrophysics Data System (ADS)

    Qingfeng, JIANG; Zhigang, ZHU; Qiyong, ZHANG; Ming, ZHUANG; Xiaofei, LU

    2018-03-01

    The size of the heat exchanger is an important factor determining the dimensions of the cold box in helium cryogenic systems. In this paper, a counter-flow multi-stream plate-fin heat exchanger is optimized by means of a spatial interpolation method coupled with a hybrid genetic algorithm. Compared with empirical correlations, this spatial interpolation algorithm based on a kriging model can more precisely predict the Colburn heat transfer factors and Fanning friction factors of offset-strip fins. Moreover, strict computational fluid dynamics simulations can be carried out to predict the heat transfer and friction performance in the absence of reliable experimental data. Within the constraints of heat exchange requirements, maximum allowable pressure drop, existing manufacturing techniques, and structural strength, a mathematical model of an optimized design with discrete and continuous variables based on a hybrid genetic algorithm is established in order to minimize the volume. The results show that for the first-stage heat exchanger in the EAST refrigerator, the structural size could be decreased from the original 2.200 × 0.600 × 0.627 m³ to the optimized 1.854 × 0.420 × 0.340 m³, a large reduction in volume. The current work demonstrates that the proposed method could be a useful tool for optimization in an actual engineering project during the practical design process.
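
    As a rough picture of the genetic-algorithm half of the hybrid scheme, the sketch below minimizes box volume under a penalized pressure-drop limit; the stand-in pressure-drop function, bounds, and limit values are invented, and in the actual work a kriging surrogate and fin correlations would supply the constraint values:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    lb = np.array([1.0, 0.3, 0.3])        # lower bounds on length, width, height (m), assumed
    ub = np.array([2.5, 0.7, 0.7])        # upper bounds, assumed

    def pressure_drop(x):                 # stand-in surrogate; a kriging model would go here
        L, W, H = x
        return 80.0 * L / (W * H)         # arbitrary monotone proxy (Pa)

    def fitness(x):
        penalty = max(0.0, pressure_drop(x) - 600.0) * 10.0   # max allowable drop 600 Pa (assumed)
        return np.prod(x) + penalty                           # volume + constraint penalty

    pop = rng.uniform(lb, ub, size=(40, 3))
    for gen in range(100):
        f = np.array([fitness(x) for x in pop])
        parents = pop[np.argsort(f)[:20]]                     # truncation selection
        idx1, idx2 = rng.integers(0, 20, (2, 40))
        alpha = rng.random((40, 1))
        children = alpha * parents[idx1] + (1 - alpha) * parents[idx2]   # arithmetic crossover
        children += rng.normal(0.0, 0.02, (40, 3))                       # Gaussian mutation
        pop = np.clip(children, lb, ub)

    best = pop[np.argmin([fitness(x) for x in pop])]
    print("best dims:", np.round(best, 3), "volume:", round(float(np.prod(best)), 4))
    ```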

  4. Quantum money with nearly optimal error tolerance

    NASA Astrophysics Data System (ADS)

    Amiri, Ryan; Arrazola, Juan Miguel

    2017-06-01

    We present a family of quantum money schemes with classical verification which display a number of benefits over previous proposals. Our schemes are based on hidden matching quantum retrieval games, and they tolerate noise up to 23%, which we conjecture reaches 25% asymptotically as the dimension of the underlying hidden matching states is increased. Furthermore, we prove that 25% is the maximum tolerable noise for a wide class of quantum money schemes with classical verification, meaning our schemes are almost optimally noise tolerant. We use methods in semidefinite programming to prove security in a substantially different manner to previous proposals, leading to two main advantages: first, coin verification involves only a constant number of states (with respect to coin size), thereby allowing for smaller coins; second, the reusability of coins within our scheme grows linearly with the size of the coin, which is known to be optimal. Last, we suggest methods by which the coins in our protocol could be implemented using weak coherent states and verified using existing experimental techniques, even in the presence of detector inefficiencies.

  5. Method and Process Development of Advanced Atmospheric Plasma Spraying for Thermal Barrier Coatings

    NASA Astrophysics Data System (ADS)

    Mihm, Sebastian; Duda, Thomas; Gruner, Heiko; Thomas, Georg; Dzur, Birger

    2012-06-01

    Over the last few years, global economic growth has triggered a dramatic increase in the demand for resources, resulting in a steady rise in prices for energy and raw materials. In the gas turbine manufacturing sector, process optimization of cost-intensive production steps offers significant potential for savings and forms the basis for securing future competitive advantages in the market. In this context, the atmospheric plasma spraying (APS) process for thermal barrier coatings (TBC) has been optimized. A constraint for the optimization of the APS coating process is the use of the existing coating equipment. Furthermore, the current coating quality and characteristics must not change, so as to avoid new qualification and testing. Using experience in APS and empirically gained data, the process optimization plan included the variation of, e.g., the plasma gas composition and flow rate, the electrical power, the arrangement and angle of the powder injectors in relation to the plasma jet, the grain size distribution of the spray powder, and the plasma torch movement procedures such as spray distance, offset, and iteration. In particular, plasma properties (enthalpy, velocity, and temperature), powder injection conditions (injection point, injection speed, grain size, and distribution) and the coating lamination (coating pattern and spraying distance) are examined. The optimized process and resulting coating were compared to the current situation using several diagnostic methods. The improved process significantly reduces costs and achieves the requirement of comparable coating quality. Furthermore, a contribution was made towards a better comprehension of the APS of ceramics and the definition of a better method for future process developments.

  6. Development of a magnetic solid-phase extraction coupled with high-performance liquid chromatography method for the analysis of polyaromatic hydrocarbons.

    PubMed

    Ma, Yan; Xie, Jiawen; Jin, Jing; Wang, Wei; Yao, Zhijian; Zhou, Qing; Li, Aimin; Liang, Ying

    2015-07-01

    A novel magnetic solid-phase extraction method coupled with high-performance liquid chromatography was established to analyze polyaromatic hydrocarbons in environmental water samples. The extraction conditions, including the amount of extraction agent, extraction time, pH, and the surface structure of the magnetic extraction agent, were optimized. The results showed that the amount of extraction agent and the extraction time significantly influenced the extraction performance. An increase in the specific surface area, enlargement of the pore size, and reduction of the particle size could enhance the extraction performance of the magnetic microsphere. The optimized magnetic extraction agent possessed a high surface area of 1311 m²/g, a large pore size of 6-9 nm, and a small particle size of 6-9 μm. The limits of detection for phenanthrene and benzo[g,h,i]perylene in the developed analysis method were 3.2 and 10.5 ng/L, respectively. When applied to river water samples, the spiked recoveries of phenanthrene and benzo[g,h,i]perylene ranged from 89.5-98.6% and 82.9-89.1%, respectively. Phenanthrene was detected over a concentration range of 89-117 ng/L in three water samples withdrawn from the midstream of the Huai River, and benzo[g,h,i]perylene was below the detection limit. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Improved Dot Diffusion For Image Halftoning

    DTIC Science & Technology

    1999-01-01

    The dot diffusion method for digital halftoning has the advantage of parallelism, unlike the error diffusion method. The method was recently improved by optimization of the so-called class matrix so that the resulting halftones are comparable to error-diffused halftones. In this paper we first review the dot diffusion method. Previously, 82 class matrices were used for the dot diffusion method. A problem with this size of class matrix is...

  8. Effects of hierarchical structures and insulating liquid media on adhesion

    NASA Astrophysics Data System (ADS)

    Yang, Weixu; Wang, Xiaoli; Li, Hanqing; Song, Xintao

    2017-11-01

    Effects of hierarchical structures and insulating liquid media on adhesion are investigated through a numerical adhesive contact model established in this paper, in which hierarchical structures are considered by introducing the height distribution into the surface gap equation, and media are taken into account through the Hamaker constant in the Lifshitz-Hamaker approach. Computational methods such as the inexact Newton method, the biconjugate gradient stabilized (Bi-CGSTAB) method, and the fast Fourier transform (FFT) technique are employed to obtain the adhesive force. It is shown that a hierarchically structured surface exhibits excellent anti-adhesive properties compared with flat, micro-, or nano-structured surfaces. The adhesion force is more dependent on the sizes of nanostructures than on those of microstructures, and the optimal ranges of nanostructure pitch and maximum height for small adhesion force are presented. Insulating liquid media effectively decrease the adhesive interaction, and 1-bromonaphthalene exhibits the smallest adhesion force among the five selected media. In addition, the effects of hierarchical structures with optimal sizes on reducing adhesion are more pronounced than those of the selected insulating liquid media.

  9. Multipinhole SPECT helical scan parameters and imaging volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Rutao, E-mail: rutaoyao@buffalo.edu; Deng, Xiao; Wei, Qingyang

    Purpose: The authors developed SPECT imaging capability on an animal PET scanner using a multiple-pinhole collimator and step-and-shoot helical data acquisition protocols. The objective of this work was to determine the preferred helical scan parameters, i.e., the angular and axial step sizes, and the imaging volume, that provide optimal imaging performance. Methods: The authors studied nine helical scan protocols formed by permuting three rotational and three axial step sizes. These step sizes were chosen around the reference values analytically calculated from the estimated spatial resolution of the SPECT system and the Nyquist sampling theorem. The nine helical protocols were evaluated by two figures-of-merit: the sampling completeness percentage (SCP) and the root-mean-square (RMS) resolution. SCP was an analytically calculated numerical index based on projection sampling. RMS resolution was derived from the reconstructed images of a sphere-grid phantom. Results: The RMS resolution results show that (1) the start and end pinhole planes of the helical scheme determine the axial extent of the effective field of view (EFOV), and (2) the diameter of the transverse EFOV is adequately calculated from the geometry of the pinhole opening, since the peripheral region beyond the EFOV would introduce projection multiplexing and consequent effects. The RMS resolution results of the nine helical scan schemes show that optimal resolution is achieved when the axial step size is half, and the angular step size is about twice, the corresponding values derived from the Nyquist theorem. The SCP results agree in general with those of RMS resolution but are less critical in assessing the effects of helical parameters and EFOV. Conclusions: The authors quantitatively validated the effective FOV of multiple-pinhole helical scan protocols and proposed a simple method to calculate optimal helical scan parameters.
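
    The reference step sizes come from the estimated spatial resolution and the Nyquist criterion (sample at least twice per resolvable distance, axially and along the rotation arc). A sketch of that arithmetic with assumed numbers follows; the values are illustrative, not those of the scanner studied:

    ```python
    import math

    fwhm_mm = 1.6                    # estimated reconstructed spatial resolution (assumed)
    radius_of_rotation_mm = 30.0     # object-space radius swept during rotation (assumed)

    axial_step = fwhm_mm / 2.0                                                # Nyquist: <= half the resolution
    angular_step_deg = math.degrees((fwhm_mm / 2.0) / radius_of_rotation_mm)  # arc length <= fwhm / 2

    print(f"axial step ~ {axial_step:.2f} mm, angular step ~ {angular_step_deg:.2f} deg")
    # The paper's empirical finding: best resolution with ~half this axial step and ~twice this angular step.
    ```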

  10. Monte Carlo Optimization of Crystal Configuration for Pixelated Molecular SPECT Scanners

    NASA Astrophysics Data System (ADS)

    Mahani, Hojjat; Raisali, Gholamreza; Kamali-Asl, Alireza; Ay, Mohammad Reza

    2017-02-01

    The resolution-sensitivity-PDA tradeoff is the most challenging problem in the design and optimization of pixelated preclinical SPECT scanners. In this work, we addressed this challenge from a crystal point of view by searching for an optimal pixelated scintillator using GATE Monte Carlo simulation. Various crystal configurations were investigated, and the influence of different pixel sizes, pixel gaps, and three scintillators on the tomographic resolution, sensitivity, and PDA of the camera was evaluated. The crystal configuration was then optimized using two objective functions: the weighted-sum and the figure-of-merit methods. The CsI(Na) reveals the highest sensitivity, of the order of 43.47 cps/MBq, in comparison to the NaI(Tl) and the YAP(Ce), for a 1.5×1.5 mm² pixel size and 0.1 mm gap. The results show that the spatial resolution, in terms of FWHM, improves from 3.38 to 2.21 mm while the sensitivity simultaneously deteriorates from 42.39 cps/MBq to 27.81 cps/MBq when the pixel size varies from 2×2 mm² to 0.5×0.5 mm² for a 0.2 mm gap. The PDA worsens from 0.91 to 0.42 when the pixel size changes from 0.5×0.5 mm² to 1×1 mm² for a 0.2 mm gap at a 15° incident angle. The two objective functions agree that the 1.5×1.5 mm² pixel size and 0.1 mm Epoxy gap CsI(Na) configuration provides the best compromise for small-animal imaging using the HiReSPECT scanner. Our study highlights that the crystal configuration can significantly affect the performance of the camera, and thereby Monte Carlo optimization of pixelated detectors is mandatory in order to achieve an optimal quality tomogram.

  11. Evaluating Suit Fit Using Performance Degradation

    NASA Technical Reports Server (NTRS)

    Margerum, Sarah E.; Cowley, Matthew; Harvill, Lauren; Benson, Elizabeth; Rajulu, Sudhakar

    2011-01-01

    The Mark III suit has multiple sizes of suit components (arm, leg, and gloves) as well as sizing inserts to tailor the fit of the suit to an individual. This study sought to determine a way to identify the point at which an ideal suit fit becomes a poor fit, and how to quantify this breakdown using mobility-based physical performance data. The study examined the changes in human physical performance via degradation of the elbow and wrist range of motion of the planetary suit prototype (Mark III) with respect to changes in sizing, as well as how to apply that knowledge to suit sizing options and improvements in suit fit. The methods implemented in this study focused on changes in elbow and wrist mobility due to incremental suit sizing modifications. This incremental sizing was within a range that included both optimum and poor fit. Suited range of motion data were collected using a motion analysis system for nine isolated and functional tasks encompassing the elbow and wrist joints. A total of four subjects were tested, with motions involving both arms simultaneously as well as the right arm only. The results were then compared across sizing configurations. The results of this study indicate that range of motion may be used as a viable parameter to quantify at what stage suit sizing causes a detriment in performance; however, the human performance decrement appeared to be based on the interaction of multiple joints along a limb, not a single joint angle. The study was able to identify a preliminary method to quantify the impact of size on performance and to develop a means to gauge tolerances around optimal size. More work is needed to improve the assessment of optimal fit and to compensate for multiple joint interactions.

  12. SU-F-18C-01: Minimum Detectability Analysis for Comprehensive Sized Based Optimization of Image Quality and Radiation Dose Across CT Protocols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smitherman, C; Chen, B; Samei, E

    2014-06-15

    Purpose: This work involved a comprehensive modeling of task-based performance of CT across a wide range of protocols. The approach was used for optimization and consistency of dose and image quality within a large multi-vendor clinical facility. Methods: 150 adult protocols from the Duke University Medical Center were grouped into sub-protocols with similar acquisition characteristics. A size-based image quality phantom (Duke Mercury Phantom) was imaged using these sub-protocols for a range of clinically relevant doses on two CT manufacturer platforms (Siemens, GE). The images were analyzed to extract task-based image quality metrics such as the Task Transfer Function (TTF), Noise Power Spectrum, and Az based on designer nodule task functions. The data were analyzed in terms of the detectability of a lesion size/contrast as a function of dose, patient size, and protocol. A graphical user interface (GUI) was developed to predict image quality and dose to achieve a minimum level of detectability. Results: Image quality trends with variations in dose, patient size, and lesion contrast/size were evaluated, and the calculated data behaved as predicted. The GUI proved effective in predicting the Az values representing radiologist confidence for a targeted lesion, patient size, and dose. As an example, an abdomen-pelvis exam for the GE scanner, with a task size/contrast of 5 mm/50 HU and an Az of 0.9, requires a dose of 4.0, 8.9, and 16.9 mGy for patient diameters of 25, 30, and 35 cm, respectively. For a constant patient diameter of 30 cm, the minimum detected lesion size at those dose levels would be 8.4, 5, and 3.9 mm, respectively. Conclusion: The designed CT protocol optimization platform can be used to evaluate minimum detectability across dose levels and patient diameters. The method can be used to improve individual protocols as well as to improve protocol consistency across CT scanners.

  13. Quantum Heterogeneous Computing for Satellite Positioning Optimization

    NASA Astrophysics Data System (ADS)

    Bass, G.; Kumar, V.; Dulny, J., III

    2016-12-01

    Hard optimization problems occur in many fields of academic study and practical situations. We present results in which quantum heterogeneous computing is used to solve a real-world optimization problem: satellite positioning. Optimization problems like this can scale very rapidly with problem size, and become unsolvable with traditional brute-force methods. Typically, such problems have been approximately solved with heuristic approaches; however, these methods can take a long time to calculate and are not guaranteed to find optimal solutions. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. There are now commercially available quantum annealing (QA) devices that are designed to solve difficult optimization problems. These devices have 1000+ quantum bits, but they have significant hardware size and connectivity limitations. We present a novel heterogeneous computing stack that combines QA and classical machine learning and allows the use of QA on problems larger than the quantum hardware could solve in isolation. We begin by analyzing the satellite positioning problem with a heuristic solver, the genetic algorithm. The classical computer's comparatively large available memory can explore the full problem space and converge to a solution relatively close to the true optimum. The QA device can then evolve directly to the optimal solution within this more limited space. Preliminary experiments, using the Quantum Monte Carlo (QMC) algorithm to simulate QA hardware, have produced promising results. Working with problem instances with known global minima, we find a solution within 8% in a matter of seconds, and within 5% in a few minutes. Future studies include replacing QMC with commercially available quantum hardware and exploring more problem sets and model parameters. Our results have important implications for how heterogeneous quantum computing can be used to solve difficult optimization problems in any field.

  14. Optimal and Miniaturized Strongly Coupled Magnetic Resonant Systems

    NASA Astrophysics Data System (ADS)

    Hu, Hao

    Wireless power transfer (WPT) technologies for communication and recharging devices have recently attracted significant research attention. Conventional WPT systems based either on far-field or near-field coupling cannot provide simultaneously high efficiency and long transfer range. The Strongly Coupled Magnetic Resonance (SCMR) method was introduced recently, and it offers the possibility of transferring power with high efficiency over longer distances. Previous SCMR research has only focused on how to improve its efficiency and range through different methods. However, the study of optimal and miniaturized designs has been limited. In addition, no multiband and broadband SCMR WPT systems have been developed and traditional SCMR systems exhibit narrowband efficiency thereby imposing strict limitations on simultaneous wireless transmission of information and power, which is important for battery-less sensors. Therefore, new SCMR systems that are optimally designed and miniaturized in size will significantly enhance various technologies in many applications. The optimal and miniaturized SCMR systems are studied here. First, analytical models of the Conformal SCMR (CSCMR) system and thorough analysis and design methodology have been presented. This analysis specifically leads to the identification of the optimal design parameters, and predicts the performance of the designed CSCMR system. Second, optimal multiband and broadband CSCMR systems are designed. Two-band, three-band, and four-band CSCMR systems are designed and validated using simulations and measurements. Novel broadband CSCMR systems are also analyzed, designed, simulated and measured. The proposed broadband CSCMR system achieved more than 7 times larger bandwidth compared to the traditional SCMR system at the same frequency. Miniaturization methods of SCMR systems are also explored. Specifically, methods that use printable CSCMR with large capacitors, novel topologies including meandered, SRRs, and spiral topologies or 3-D structures, lower the operating frequency of SCMR systems, thereby reducing their size. Finally, SCMR systems are discussed and designed for various applications, such as biomedical devices and simultaneous powering of multiple devices.

  15. Interplanetary program to optimize simulated trajectories (IPOST). Volume 4: Sample cases

    NASA Technical Reports Server (NTRS)

    Hong, P. E.; Kent, P. D; Olson, D. W.; Vallado, C. A.

    1992-01-01

    The Interplanetary Program to Optimize Simulated Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three degree of freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the Stanford NPSOL algorithm. The IPOST structure allows sub-problems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.

  16. Cylindrical geometry hall thruster

    DOEpatents

    Raitses, Yevgeny; Fisch, Nathaniel J.

    2002-01-01

    An apparatus and method for thrusting plasma, utilizing a Hall thruster with a cylindrical geometry, wherein ions are accelerated in substantially the axial direction. The apparatus is suitable for operation at low power. It employs small size thruster components, including a ceramic channel, with the center pole piece of the conventional annular design thruster eliminated or greatly reduced. Efficient operation is accomplished through magnetic fields with a substantial radial component. The propellant gas is ionized at an optimal location in the thruster. A further improvement is accomplished by segmented electrodes, which produce localized voltage drops within the thruster at optimally prescribed locations. The apparatus differs from a conventional Hall thruster, which has an annular geometry, not well suited to scaling to small size, because the small size for an annular design has a great deal of surface area relative to the volume.

  17. Development and optimization of locust bean gum and sodium alginate interpenetrating polymeric network of capecitabine.

    PubMed

    Upadhyay, Mansi; Adena, Sandeep Kumar Reddy; Vardhan, Harsh; Pandey, Sureshwar; Mishra, Brahmeshwar

    2018-03-01

    The objective of the study was to develop an interpenetrating polymeric network (IPN) of capecitabine (CAP) using the natural polymers locust bean gum (LBG) and sodium alginate (NaAlg). The IPN microbeads were optimized by a Box-Behnken design (BBD) to provide the anticipated particle size with good drug entrapment efficiency. The dissolution profile of the IPN microbeads of CAP, compared with that of the marketed preparation, showed them to be an excellent sustained drug delivery vehicle. An ionotropic gelation method utilizing the metal ion calcium (Ca²⁺) as a cross-linker was used to prepare the IPN microbeads. The optimization study was done by response surface methodology based on the Box-Behnken design. The effect of the factors on the responses of the optimized batch was exhibited through response surface and contour plots. The optimized batch was analyzed for particle size, % drug entrapment, pharmacokinetics, and in vitro drug release, and further characterized by FTIR, XRD, and SEM. To study the water uptake capacity and hydrodynamic activity of the polymers, swelling studies and viscosity measurements were performed, respectively. The particle size and % drug entrapment of the optimized batch were 494.37 ± 1.4 µm and 81.39 ± 2.9%, respectively, close to the values predicted by the Minitab 17 software. The in vitro drug release study showed sustained release of 92% over 12 h and followed an anomalous drug release pattern. The derived pharmacokinetic parameters of the optimized batch showed improved results compared with pure CAP. Thus, the IPN microbeads of CAP proved to be an effective extended drug delivery vehicle for this water-soluble antineoplastic drug.

  18. Optimizing techniques to capture and extract environmental DNA for detection and quantification of fish.

    PubMed

    Eichmiller, Jessica J; Miller, Loren M; Sorensen, Peter W

    2016-01-01

    Few studies have examined capture and extraction methods for environmental DNA (eDNA) to identify techniques optimal for detection and quantification. In this study, precipitation, centrifugation and filtration eDNA capture methods and six commercially available DNA extraction kits were evaluated for their ability to detect and quantify common carp (Cyprinus carpio) mitochondrial DNA using quantitative PCR in a series of laboratory experiments. Filtration methods yielded the most carp eDNA, and a glass fibre (GF) filter performed better than a similar pore size polycarbonate (PC) filter. Smaller pore sized filters had higher regression slopes of biomass to eDNA, indicating that they were potentially more sensitive to changes in biomass. Comparison of DNA extraction kits showed that the MP Biomedicals FastDNA SPIN Kit yielded the most carp eDNA and was the most sensitive for detection purposes, despite minor inhibition. The MoBio PowerSoil DNA Isolation Kit had the lowest coefficient of variation in extraction efficiency between lake and well water and had no detectable inhibition, making it most suitable for comparisons across aquatic environments. Of the methods tested, we recommend using a 1.5 μm GF filter, followed by extraction with the MP Biomedicals FastDNA SPIN Kit for detection. For quantification of eDNA, filtration through a 0.2-0.6 μm pore size PC filter, followed by extraction with MoBio PowerSoil DNA Isolation Kit was optimal. These results are broadly applicable for laboratory studies on carps and potentially other cyprinids. The recommendations can also be used to inform choice of methodology for field studies. © 2015 John Wiley & Sons Ltd.

  19. A gold nanoparticle-based immunochromatographic assay: the influence of nanoparticulate size.

    PubMed

    Lou, Sha; Ye, Jia-ying; Li, Ke-qiang; Wu, Aiguo

    2012-03-07

    Four different sizes of gold nanoparticles (14 nm, 16 nm, 35 nm, and 38 nm) were prepared and conjugated to an antibody for a gold nanoparticle-based immunochromatographic assay, which has many applications in both basic research and clinical diagnosis. This study focuses on the conjugation efficiency of the antibody with the different sized gold nanoparticles. The effect of factors such as pH value and antibody concentration was quantitatively examined using spectroscopic methods after adding 1 wt% NaCl, which induced gold nanoparticle aggregation. It was found that different sized gold nanoparticles had different conjugation efficiencies under different pH values and antibody concentrations. Among the four sizes, the 16 nm gold nanoparticles require the lowest antibody concentration to avoid aggregation but are less sensitive for detecting the real sample than the 38 nm gold nanoparticles. Consequently, each size of gold nanoparticle should be labeled with antibody at its optimal pH value and optimal antibody concentration. This will be helpful for the application of antibody-labeled gold nanoparticles in fields such as clinical diagnosis and environmental analysis in the future.

  20. Structural design of high-performance capacitive accelerometers using parametric optimization with uncertainties

    NASA Astrophysics Data System (ADS)

    Teves, André da Costa; Lima, Cícero Ribeiro de; Passaro, Angelo; Silva, Emílio Carlos Nelli

    2017-03-01

    Electrostatic or capacitive accelerometers are among the highest volume microelectromechanical systems (MEMS) products nowadays. The design of such devices is a complex task, since they depend on many performance requirements, which are often conflicting. Therefore, optimization techniques are often used in the design stage of these MEMS devices. Because of problems with reliability, the technology of MEMS is not yet well established. Thus, in this work, size optimization is combined with the reliability-based design optimization (RBDO) method to improve the performance of accelerometers. To account for uncertainties in the dimensions and material properties of these devices, the first order reliability method is applied to calculate the probabilities involved in the RBDO formulation. Practical examples of bulk-type capacitive accelerometer designs are presented and discussed to evaluate the potential of the implemented RBDO solver.
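
    For a linear limit state with independent normal variables, the first order reliability method reduces to a closed-form reliability index, which is the quantity the RBDO constraint bounds. A minimal sketch with assumed capacity and demand statistics (not values from the paper) follows:

    ```python
    from math import sqrt
    from statistics import NormalDist

    # Linear limit state g = R - S (capacity minus demand), R and S independent normal (assumed values).
    mu_R, sigma_R = 120.0, 10.0     # e.g., allowable proof-mass deflection (hypothetical units)
    mu_S, sigma_S = 90.0, 12.0      # e.g., deflection under the design load (hypothetical units)

    beta = (mu_R - mu_S) / sqrt(sigma_R ** 2 + sigma_S ** 2)   # Hasofer-Lind reliability index
    p_fail = NormalDist().cdf(-beta)                           # failure probability used in the RBDO constraint
    print(f"beta = {beta:.3f}, Pf = {p_fail:.2e}")
    ```

    For nonlinear limit states the index is found iteratively, but the probability constraint enters the size optimization in the same way.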

  1. Optimal decision making modeling for copper-matte Peirce-Smith converting process by means of data mining

    NASA Astrophysics Data System (ADS)

    Song, Yanpo; Peng, Xiaoqi; Tang, Ying; Hu, Zhikun

    2013-07-01

    To improve the operation level of the copper converter, an approach to optimal decision-making modeling for the copper-matte converting process based on data mining is studied. In view of the characteristics of the process data, such as noise and small sample size, a new robust improved ANN (artificial neural network) modeling method is proposed. Taking into account the application purpose of the decision-making model, three new evaluation indexes, named support, confidence and relative confidence, are proposed. Using real production data and the methods mentioned above, an optimal decision-making model for the blowing time of the S1 period (the first slag-producing period) is developed. Simulation results show that this model can significantly improve the converting quality of the S1 period, increasing the optimal probability from about 70% to about 85%.

  2. A learning approach to the bandwidth multicolouring problem

    NASA Astrophysics Data System (ADS)

    Akbari Torkestani, Javad

    2016-05-01

    In this article, a generalisation of the vertex colouring problem known as the bandwidth multicolouring problem (BMCP) is considered, in which a set of colours is assigned to each vertex such that the difference between the colours assigned to a vertex and to its neighbours is never less than a predefined threshold. It is shown that the proposed method can be applied to solve the bandwidth colouring problem (BCP) as well. BMCP is known to be NP-hard in graph theory, and so a large number of approximation solutions, as well as exact algorithms, have been proposed to solve it. In this article, two learning automata-based approximation algorithms are proposed for estimating a near-optimal solution to the BMCP. We show, for the first proposed algorithm, that by choosing a proper learning rate, the algorithm finds the optimal solution with a probability close enough to unity. Moreover, we compute the worst-case time complexity of the first algorithm for finding a 1/(1-ɛ) optimal solution to the given problem. The main advantage of this method is that a trade-off between the running time of the algorithm and the colour set size (colouring optimality) can be made by a proper choice of the learning rate. Finally, it is shown that the running time of the proposed algorithm is independent of the graph size, and so it is a scalable algorithm for large graphs. The second proposed algorithm is compared with some well-known colouring algorithms, and the results show the efficiency of the proposed algorithm in terms of colour set size and running time. A simple greedy baseline illustrating the constraint structure is sketched below.
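
    To make the BMCP constraint structure concrete, the snippet below assigns colours with a plain greedy heuristic: each vertex receives its demanded number of integer colours, and a colour is accepted only if it is far enough from the colours already placed on the same vertex and on its neighbours. This is only an illustrative baseline on an invented toy instance, not the learning automata algorithm of the paper.

```python
import itertools

def greedy_bmcp(demand, self_gap, edge_gap):
    """demand[v]: number of colours for vertex v; self_gap[v]: minimum separation
    between colours of v; edge_gap[(u, v)]: minimum separation across edge {u, v}."""
    nbr_gap = {v: {} for v in demand}
    for (u, v), gap in edge_gap.items():
        nbr_gap[u][v] = gap
        nbr_gap[v][u] = gap
    colours = {v: [] for v in demand}
    for v in demand:                                   # fixed vertex order (heuristic)
        for _ in range(demand[v]):
            for c in itertools.count(0):               # smallest feasible integer colour
                if all(abs(c - c2) >= self_gap[v] for c2 in colours[v]) and \
                   all(abs(c - c2) >= gap
                       for u, gap in nbr_gap[v].items() for c2 in colours[u]):
                    colours[v].append(c)
                    break
    span = 1 + max(c for cs in colours.values() for c in cs)
    return colours, span

# toy instance: three mutually adjacent cells, each needing two channels
demand   = {"a": 2, "b": 2, "c": 2}
self_gap = {"a": 3, "b": 3, "c": 3}
edge_gap = {("a", "b"): 2, ("b", "c"): 2, ("a", "c"): 1}
print(greedy_bmcp(demand, self_gap, edge_gap))
```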

  3. Formulation and evaluation of optimized oxybenzone microsponge gel for topical delivery.

    PubMed

    Pawar, Atmaram P; Gholap, Aditya P; Kuchekar, Ashwin B; Bothiraja, C; Mali, Ashwin J

    2015-01-01

    Background. Oxybenzone, a broad-spectrum sunscreen agent widely used in the form of lotions and creams, has been reported to cause skin irritation, dermatitis, and systemic absorption. Aim. The objective of the present study was to formulate an oxybenzone-loaded microsponge gel with an enhanced sun protection factor and reduced toxicity. Materials and Methods. Microsponges for topical delivery of oxybenzone were successfully prepared by the quasi-emulsion solvent diffusion method. The effects of ethyl cellulose and dichloromethane were optimized by a 3² factorial design. The optimized microsponges were dispersed into a hydrogel and further evaluated. Results. The microsponges were spherical with pore sizes in the range of 0.10-0.22 µm. The optimized formulation had a particle size of 72 ± 0.77 µm and an entrapment efficiency of 96.9 ± 0.52%. The microsponge gel showed controlled release and was nonirritant to rat skin. In the creep recovery test it showed the highest recovery, indicating elasticity. The controlled release of oxybenzone from the microsponge and the barrier effect of the gel resulted in prolonged retention of oxybenzone with reduced permeation. Conclusion. The evaluation study revealed remarkable and enhanced topical retention of oxybenzone for a prolonged period of time. The formulation also showed an enhanced sun protection factor compared with the marketed preparation, with reduced irritation and toxicity.

  4. Centroid estimation for a Shack-Hartmann wavefront sensor based on stream processing.

    PubMed

    Kong, Fanpeng; Polo, Manuel Cegarra; Lambert, Andrew

    2017-08-10

    When the center of gravity is used to estimate the centroid of the spot in a Shack-Hartmann wavefront sensor, the measurement is corrupted by photon and detector noise. Parameters such as the window size often require careful optimization to balance the noise error, dynamic range, and linearity of the response coefficient under different photon fluxes, and the method needs to be replaced by the correlation method for extended sources. We propose a centroid estimator based on stream processing, where the center-of-gravity calculation window floats with the incoming pixel stream from the detector. In comparison with conventional methods, we show that the proposed estimator simplifies the choice of optimized parameters, provides a unit linear response coefficient, and reduces the influence of background and noise. It is shown that the stream-based centroid estimator also works well for extended sources of limited size. A hardware implementation of the proposed estimator is discussed.
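
    A minimal numpy sketch of the conventional windowed center-of-gravity estimate that the streaming estimator builds on; the spot model, window size, and background level are illustrative assumptions rather than the authors' hardware implementation.

```python
import numpy as np

def cog_centroid(frame, window=8, bias=0.0):
    """Center of gravity of the brightest window in a subaperture image."""
    frame = frame.astype(float) - bias           # remove an assumed background level
    iy, ix = np.unravel_index(np.argmax(frame), frame.shape)
    half = window // 2
    y0, x0 = max(iy - half, 0), max(ix - half, 0)
    roi = frame[y0:y0 + window, x0:x0 + window]
    roi = np.clip(roi, 0.0, None)                # negative noise pixels do not contribute
    yy, xx = np.mgrid[0:roi.shape[0], 0:roi.shape[1]]
    total = roi.sum()
    return (yy * roi).sum() / total + y0, (xx * roi).sum() / total + x0

# synthetic noisy spot for a quick check
rng = np.random.default_rng(0)
y, x = np.mgrid[0:32, 0:32]
spot = 200 * np.exp(-((y - 17.3)**2 + (x - 14.8)**2) / (2 * 2.0**2))
frame = rng.poisson(spot + 5)                    # photon noise plus background
print(cog_centroid(frame, window=10, bias=5.0))  # should be near (17.3, 14.8)
```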

  5. Support Vector Data Description Model to Map Specific Land Cover with Optimal Parameters Determined from a Window-Based Validation Set.

    PubMed

    Zhang, Jinshui; Yuan, Zhoumiqi; Shuai, Guanyuan; Pan, Yaozhong; Zhu, Xiufang

    2017-04-26

    This paper developed an approach, the window-based validation set for support vector data description (WVS-SVDD), to determine optimal parameters for the support vector data description (SVDD) model to map specific land cover by integrating training and window-based validation sets. Compared with the conventional approach, in which the validation set includes target and outlier pixels selected visually and randomly, the validation set derived from WVS-SVDD constructed a tightened hypersphere because of the compact constraint imposed by outlier pixels located adjacent to the target class in the spectral feature space. The overall accuracies achieved for wheat and bare land were as high as 89.25% and 83.65%, respectively. However, the target class was underestimated because the validation set covered only a small fraction of the heterogeneous spectra of the target class. Different window sizes were then tested to acquire more wheat pixels for the validation set. The results showed that classification accuracy increased with increasing window size, and the overall accuracies were higher than 88% at all window sizes. Moreover, WVS-SVDD showed much less sensitivity to untrained classes than the multi-class support vector machine (SVM) method. Therefore, the developed method showed its merits in using the optimal parameters, the trade-off coefficient (C) and kernel width (s), for mapping homogeneous specific land cover.
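
    SVDD with a Gaussian kernel is closely related to the one-class SVM, so the parameter-selection idea can be sketched with scikit-learn's OneClassSVM as a stand-in: train on target pixels only, then score candidate (trade-off, kernel width) pairs on a validation set that mixes target pixels with neighbouring outlier pixels. The synthetic spectra and parameter grids below are invented for illustration and are not the WVS-SVDD implementation.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
# toy 4-band pixel spectra: a "wheat" cluster plus nearby outlier pixels
train_target = rng.normal([0.3, 0.5, 0.2, 0.6], 0.03, size=(200, 4))
val_target   = rng.normal([0.3, 0.5, 0.2, 0.6], 0.03, size=(100, 4))
val_outlier  = rng.normal([0.34, 0.46, 0.24, 0.56], 0.03, size=(100, 4))  # neighbouring class

X_val = np.vstack([val_target, val_outlier])
y_val = np.hstack([np.ones(100), -np.ones(100)])

best = None
for nu in [0.01, 0.05, 0.1, 0.2]:            # plays the role of the trade-off coefficient
    for gamma in [1, 10, 50, 100, 500]:      # plays the role of the kernel width
        model = OneClassSVM(nu=nu, gamma=gamma).fit(train_target)
        acc = (model.predict(X_val) == y_val).mean()
        if best is None or acc > best[0]:
            best = (acc, nu, gamma)

print("best validation accuracy %.3f at nu=%s, gamma=%s" % best)
```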

  6. Computerized optimization of multiple isocentres in stereotactic convergent beam irradiation

    NASA Astrophysics Data System (ADS)

    Treuer, U.; Treuer, H.; Hoevels, M.; Müller, R. P.; Sturm, V.

    1998-01-01

    A method for the fully computerized determination and optimization of target point positions and collimator sizes in convergent beam irradiation is presented. In conventional interactive trial-and-error methods, which are very time consuming, the treatment parameters are chosen according to the operator's experience and improved successively. This time is reduced significantly by the use of a computerized procedure. After the definition of the target volume and organs at risk in the CT or MR scans, an initial configuration is created automatically. In the next step the target point positions and collimator diameters are optimized by the program. The aim of the optimization is to find a configuration for which a prescribed dose at the target surface is approximated as closely as possible. At the same time, dose peaks inside the target volume are minimized, and organs at risk and tissue surrounding the target are spared. To enhance the speed of the optimization, a fast method for approximate dose calculation in convergent beam irradiation is used. A possible application of the method for calculating the leaf positions when irradiating with a micro-multileaf collimator is briefly discussed. The success of the procedure has been demonstrated for several clinical cases with up to six target points.

  7. Optimization of hole generation in Ti/CFRP stacks

    NASA Astrophysics Data System (ADS)

    Ivanov, Y. N.; Pashkov, A. E.; Chashhin, N. S.

    2018-03-01

    The article describes methods for improving the surface quality and hole accuracy in Ti/CFRP stacks by optimizing cutting conditions and drill geometry. The research is based on the fundamentals of machine building, probability theory, mathematical statistics, experiment planning, and manufacturing process optimization. Statistical processing of experimental data was carried out with Statistica 6 and Microsoft Excel 2010. Surface geometry in the Ti stacks was analyzed using a Taylor Hobson Form Talysurf i200 Series profilometer, and in the CFRP stacks using a Bruker ContourGT-Kl optical microscope. Hole shapes and sizes were analyzed using a Carl Zeiss CONTURA G2 measuring machine, and temperatures in the cutting zones were recorded with a FLIR SC7000 Series infrared camera. Models of multivariate analysis of variance were developed; they show the effects of drilling modes on the surface quality and accuracy of holes in Ti/CFRP stacks. The task of multicriteria drilling process optimization was solved, optimal cutting technologies that improve performance were developed, and methods for assessing the effects of thermal expansion of the tool and material on the accuracy of holes in Ti/CFRP/Ti stacks were developed.

  8. Determining size-specific emission factors for environmental tobacco smoke particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klepeis, Neil E.; Apte, Michael G.; Gundel, Lara A.

    Because size is a major controlling factor for indoor airborne particle behavior, human particle exposure assessments will benefit from improved knowledge of size-specific particle emissions. We report a method of inferring size-specific mass emission factors for indoor sources that makes use of an indoor aerosol dynamics model, measured particle concentration time series data, and an optimization routine. In addition to estimates of the emission size distribution and integrated emission factors, this approach provides estimates of deposition rates, an enhanced understanding of particle dynamics, and information about model performance. We applied the method to size-specific environmental tobacco smoke (ETS) particle concentrations measured every minute with an 8-channel optical particle counter (PMS-LASAIR; 0.1-2+ micrometer diameters) and every 10 or 30 min with a 34-channel differential mobility particle sizer (TSI-DMPS; 0.01-1+ micrometer diameters) after a single cigarette or cigar was machine-smoked inside a low air-exchange-rate 20 m{sup 3} chamber. The aerosol dynamics model provided good fits to observed concentrations when using optimized values of the mass emission rate and deposition rate for each particle size range as input. Small discrepancies observed in the first 1-2 hours after smoking are likely due to the effect of particle evaporation, a process neglected by the model. Size-specific ETS particle emission factors were fit with log-normal distributions, yielding an average mass median diameter of 0.2 micrometers and an average geometric standard deviation of 2.3, with no systematic differences between cigars and cigarettes. The equivalent total particle emission rate, obtained by integrating each size distribution, was 0.2-0.7 mg/min for cigars and 0.7-0.9 mg/min for cigarettes.
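
    The inference step can be sketched for a single size channel with a well-mixed, single-zone mass balance, dC/dt = E/V - (a + k)·C during emission and pure decay afterwards. The chamber volume is taken from the abstract, but the air-exchange rate, smoking duration, and synthetic data below are assumptions, and the full aerosol dynamics model of the study is not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

V = 20.0          # chamber volume, m^3 (from the abstract)
a = 0.1           # assumed air-exchange rate, 1/h
t_emit = 0.15     # assumed smoking duration, h

def concentration(t, E, k):
    """Well-mixed model: emission rate E (mg/h) for t < t_emit, then decay.
    Total loss rate is air exchange a plus size-specific deposition k (1/h)."""
    loss = a + k
    c_peak = E / (V * loss) * (1.0 - np.exp(-loss * t_emit))
    rising = E / (V * loss) * (1.0 - np.exp(-loss * t))
    decaying = c_peak * np.exp(-loss * (t - t_emit))
    return np.where(t < t_emit, rising, decaying)

# synthetic 1-min concentration data (mg/m^3) for one size channel
t = np.arange(0, 4, 1 / 60)                       # hours
rng = np.random.default_rng(2)
obs = concentration(t, E=40.0, k=1.2) * rng.normal(1.0, 0.05, t.size)

(E_hat, k_hat), _ = curve_fit(concentration, t, obs, p0=[10.0, 0.5],
                              bounds=([0, 0], [np.inf, np.inf]))
print(f"emission rate ~ {E_hat:.1f} mg/h, deposition rate ~ {k_hat:.2f} 1/h")
```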

  9. Influence of fundamental mode fill factor on disk laser output power and laser beam quality

    NASA Astrophysics Data System (ADS)

    Cheng, Zhiyong; Yang, Zhuo; Shao, Xichun; Li, Wei; Zhu, Mengzhen

    2017-11-01

    A three-dimensional numerical model based on the finite element method and the Fox-Li method with angular spectrum diffraction theory is developed to calculate the output power and power density distribution of a Yb:YAG disk laser. We investigate the influence of the fundamental mode fill factor (the ratio of the fundamental mode size to the pump spot size) on the output power and laser beam quality. Due to aspherical aberration and the soft aperture effect in the laser disk, high beam quality can be achieved, but at relatively lower efficiency. The highest output power of the fundamental laser mode is influenced by the fundamental mode fill factor. We also find that the optimal mode fill factor increases with the pump spot size.

  10. Extinction spectra of suspensions of microspheres: determination of the spectral refractive index and particle size distribution with nanometer accuracy.

    PubMed

    Gienger, Jonas; Bär, Markus; Neukammer, Jörg

    2018-01-10

    A method is presented to simultaneously infer the wavelength-dependent real refractive index (RI) of the material of microspheres and their size distribution from extinction measurements of particle suspensions. To derive the averaged spectral optical extinction cross section of the microspheres from such ensemble measurements, we determined the particle concentration by flow cytometry to an accuracy of typically 2% and adjusted the particle concentration to ensure that perturbations due to multiple scattering are negligible. For the analysis of the extinction spectra, we employ Mie theory, a series-expansion representation of the refractive index, and nonlinear numerical optimization. In contrast to other approaches, our method offers the advantage of simultaneously determining the size, size distribution, and spectral refractive index of ensembles of microparticles, including uncertainty estimation.
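
    The fitting idea can be sketched with a strongly simplified forward model: the van de Hulst anomalous-diffraction approximation to the extinction efficiency stands in for full Mie theory, and a single diameter with a constant relative refractive index stands in for the size distribution and RI series expansion used in the paper. The synthetic spectrum and starting values are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

wavelengths = np.linspace(400e-9, 800e-9, 120)            # m

def q_ext_ada(diameter, n_rel):
    """Anomalous-diffraction approximation to the extinction efficiency
    (van de Hulst), used here as a cheap stand-in for full Mie theory."""
    x = np.pi * diameter / wavelengths                     # size parameter
    rho = 2.0 * x * (n_rel - 1.0)
    return 2.0 - (4.0 / rho) * np.sin(rho) + (4.0 / rho**2) * (1.0 - np.cos(rho))

# synthetic "measured" spectrum: 2.0 um spheres, relative RI 1.19 (e.g. polystyrene in water)
rng = np.random.default_rng(6)
q_meas = q_ext_ada(2.0e-6, 1.19) + rng.normal(0, 0.01, wavelengths.size)

fit = least_squares(lambda p: q_ext_ada(p[0], p[1]) - q_meas,
                    x0=[1.8e-6, 1.16], x_scale=[1e-6, 1e-2])
print("diameter = %.2f um, relative RI = %.3f" % (fit.x[0] * 1e6, fit.x[1]))
```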

  11. Efficiency and optimal size of hospitals: Results of a systematic search

    PubMed Central

    Guglielmo, Annamaria

    2017-01-01

    Background National Health Systems managers have been subject in recent years to considerable pressure to increase concentration and allow mergers. This pressure has been justified by a belief that larger hospitals lead to lower average costs and better clinical outcomes through the exploitation of economies of scale. In this context, the ability to measure scale efficiency is crucial to address the question of optimal productive size and to manage a fair allocation of resources. Methods and findings This paper analyses the state of existing research on scale efficiency and the optimal size of the hospital sector. We performed a systematic search of the past 45 years (1969–2014) of research published in peer-reviewed scientific journals recorded by the Social Sciences Citation Index concerning this topic. We classified articles by the journal’s category, research topic, hospital setting, method and primary data analysis technique. Results showed that most of the studies focussed on the analysis of technical and scale efficiency or on input/output ratios using Data Envelopment Analysis. We also found increasing interest in the effect of possible changes in hospital size on quality of care. Conclusions Studies analysed in this review showed that economies of scale are present for merging hospitals. Results supported the current policy of expanding larger hospitals and restructuring/closing smaller hospitals. In terms of beds, studies reported consistent evidence of economies of scale for hospitals with 200–300 beds. Diseconomies of scale can be expected to occur below 200 beds and above 600 beds. PMID:28355255
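
    Most of the reviewed studies use Data Envelopment Analysis; the input-oriented CCR envelopment program for one hospital can be sketched with a generic LP solver. The hospitals, inputs, and outputs below are fabricated toy numbers used only to show the formulation, not data from the review.

```python
import numpy as np
from scipy.optimize import linprog

# toy data: 5 hospitals, inputs = [beds, staff], outputs = [admissions, day cases]
X = np.array([[250, 900], [400, 1500], [600, 2100], [300, 1000], [800, 3200]], float).T
Y = np.array([[11000, 4000], [17000, 6500], [24000, 9000], [14000, 5200], [30000, 11000]], float).T

def ccr_efficiency(o):
    """Input-oriented CCR efficiency of unit o: minimise theta subject to
    X @ lam <= theta * x_o  and  Y @ lam >= y_o, with lam >= 0."""
    m, n = X.shape            # numbers of inputs and of units
    s = Y.shape[0]            # number of outputs
    c = np.r_[1.0, np.zeros(n)]                 # variables: [theta, lam_1 .. lam_n]
    A_in = np.c_[-X[:, [o]], X]                 # X @ lam - theta * x_o <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y]         # -Y @ lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]

for o in range(X.shape[1]):
    print(f"hospital {o}: CCR efficiency = {ccr_efficiency(o):.3f}")
```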

  12. Regularization Methods for High-Dimensional Instrumental Variables Regression With an Application to Genetical Genomics

    PubMed Central

    Lin, Wei; Feng, Rui; Li, Hongzhe

    2014-01-01

    In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity using sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting, where the dimensionalities of covariates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies, and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642
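
    The two-stage idea can be sketched on simulated data with off-the-shelf Lasso fits: regress each covariate on the instruments, then regress the outcome on the first-stage fitted values. scikit-learn's LassoCV is used as a convenient stand-in, and the dimensions, sparsity pattern, and confounding below are invented; the concave penalties and theory of the paper are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(3)
n, p_z, p_x = 200, 100, 50                  # sample size, no. of instruments, no. of covariates

Z = rng.normal(size=(n, p_z))               # instruments (e.g. genetic variants)
Gamma = np.zeros((p_z, p_x)); Gamma[:5, :3] = 1.0
U = rng.normal(size=(n, 1))                 # unobserved confounder (source of endogeneity)
X = Z @ Gamma + U + rng.normal(size=(n, p_x))        # covariates (e.g. gene expressions)
beta = np.zeros(p_x); beta[:3] = [1.5, -2.0, 1.0]    # sparse true covariate effects
y = X @ beta + 2.0 * U[:, 0] + rng.normal(size=n)

# stage 1: sparse regression of each covariate on the instruments
X_hat = np.column_stack([LassoCV(cv=5).fit(Z, X[:, j]).predict(Z) for j in range(p_x)])

# stage 2: sparse regression of the outcome on the first-stage fitted values
stage2 = LassoCV(cv=5).fit(X_hat, y)
top = np.argsort(-np.abs(stage2.coef_))[:5]
print("largest estimated effects:", list(zip(top.tolist(), np.round(stage2.coef_[top], 2))))
```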

  13. Angular dependence of multiangle dynamic light scattering for particle size distribution inversion using a self-adapting regularization algorithm

    NASA Astrophysics Data System (ADS)

    Li, Lei; Yu, Long; Yang, Kecheng; Li, Wei; Li, Kai; Xia, Min

    2018-04-01

    The multiangle dynamic light scattering (MDLS) technique can better estimate particle size distributions (PSDs) than single-angle dynamic light scattering. However, determining the inversion range, angular weighting coefficients, and scattering angle combination is difficult but fundamental to the reconstruction of both unimodal and multimodal distributions. In this paper, we propose a self-adapting regularization method called the wavelet iterative recursion nonnegative Tikhonov-Phillips-Twomey (WIRNNT-PT) algorithm. This algorithm combines a wavelet multiscale strategy with an appropriate inversion method and can self-adaptively optimize several noteworthy issues, including the choice of the weighting coefficients, the inversion range, and the optimal inversion method from two regularization algorithms, for estimating the PSD from MDLS measurements. In addition, the angular dependence of MDLS for estimating the PSDs of polymeric latexes is thoroughly analyzed. The dependence of the results on the number and range of measurement angles was analyzed in depth to identify the optimal scattering angle combination. Numerical simulations and experimental results for unimodal and multimodal distributions are presented to demonstrate both the validity of the WIRNNT-PT algorithm and the angular dependence of MDLS, and show that the proposed algorithm with a six-angle analysis in the 30-130° range can be satisfactorily applied to retrieve PSDs from MDLS measurements.
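
    Stripped of the wavelet multiscale strategy and the angular weighting, the core regularized inversion is a nonnegative Tikhonov-type problem, min_f ||K f - g||² + λ||L f||² with f ≥ 0, which can be solved by stacking the scaled regularization operator under the kernel and calling a nonnegative least-squares routine. The kernel, grid, and noise level below are synthetic placeholders, not a real DLS kernel.

```python
import numpy as np
from scipy.optimize import nnls

# discretised sizes (nm) and a synthetic bimodal "true" PSD
d = np.linspace(50, 800, 60)
f_true = np.exp(-0.5 * ((d - 200) / 30) ** 2) + 0.6 * np.exp(-0.5 * ((d - 500) / 50) ** 2)

# toy smooth kernel mapping PSD -> measurement vector (placeholder for the DLS kernel)
tau = np.linspace(0.01, 2.0, 80)
K = np.exp(-np.outer(tau, 1.0 / d) * 5e3)

rng = np.random.default_rng(4)
g = K @ f_true + rng.normal(0, 1e-3, tau.size)       # noisy measurements

# second-difference operator enforcing smoothness of the retrieved PSD
L = np.diff(np.eye(d.size), n=2, axis=0)

def tikhonov_nnls(lam):
    A = np.vstack([K, np.sqrt(lam) * L])              # augmented system for the penalty
    b = np.concatenate([g, np.zeros(L.shape[0])])
    f, _ = nnls(A, b)
    return f

f_hat = tikhonov_nnls(lam=1e-4)                       # ad hoc regularization weight
print("relative error: %.3f" % (np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true)))
```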

  14. Optimization of cooling strategy and seeding by FBRM analysis of batch crystallization

    NASA Astrophysics Data System (ADS)

    Zhang, Dejiang; Liu, Lande; Xu, Shijie; Du, Shichao; Dong, Weibing; Gong, Junbo

    2018-03-01

    A method is presented for optimizing the cooling strategy and seed loading simultaneously. Focused beam reflectance measurement (FBRM) was used to determine an approximately optimal cooling profile. Using these results in conjunction with a constant growth rate assumption, a modified Mullin-Nyvlt trajectory could be calculated. This trajectory can suppress secondary nucleation and has the potential to control the product's polymorph distribution. Compared with linear and two-step cooling, the modified Mullin-Nyvlt trajectory gives a larger size distribution and better morphology. Based on the calculated results, an optimized seed loading policy was also developed. This policy could be useful for guiding the batch crystallization process.

  15. Energetic constraints, size gradients, and size limits in benthic marine invertebrates.

    PubMed

    Sebens, Kenneth P

    2002-08-01

    Populations of marine benthic organisms occupy habitats with a range of physical and biological characteristics. In the intertidal zone, energetic costs increase with temperature and aerial exposure, and prey intake increases with immersion time, generating size gradients with small individuals often found at upper limits of distribution. Wave action can have similar effects, limiting feeding time or success, although certain species benefit from wave dislodgment of their prey; this also results in gradients of size and morphology. The difference between energy intake and metabolic (and/or behavioral) costs can be used to determine an energetic optimal size for individuals in such populations. Comparisons of the energetic optimal size to the maximum predicted size based on mechanical constraints, and the ensuing mortality schedule, provides a mechanism to study and explain organism size gradients in intertidal and subtidal habitats. For species where the energetic optimal size is well below the maximum size that could persist under a certain set of wave/flow conditions, it is probable that energetic constraints dominate. When the opposite is true, populations of small individuals can dominate habitats with strong dislodgment or damage probability. When the maximum size of individuals is far below either energetic optima or mechanical limits, other sources of mortality (e.g., predation) may favor energy allocation to early reproduction rather than to continued growth. Predictions based on optimal size models have been tested for a variety of intertidal and subtidal invertebrates including sea anemones, corals, and octocorals. This paper provides a review of the optimal size concept, and employs a combination of the optimal energetic size model and life history modeling approach to explore energy allocation to growth or reproduction as the optimal size is approached.

  16. Retrieval of ice crystals' mass from ice water content and particle distribution measurements: a numerical optimization approach

    NASA Astrophysics Data System (ADS)

    Coutris, Pierre; Leroy, Delphine; Fontaine, Emmanuel; Schwarzenboeck, Alfons; Strapp, J. Walter

    2016-04-01

    A new method to retrieve cloud water content from in-situ measured 2D particle images from optical array probes (OAPs) is presented. With the overall objective of building a statistical model of crystal mass as a function of size, environmental temperature and crystal microphysical history, this study presents a methodology to retrieve the mass of crystals sorted by size from 2D images using a numerical optimization approach. The methodology is validated using two datasets of in-situ measurements gathered during two airborne field campaigns held in Darwin, Australia (2014), and Cayenne, French Guiana (2015), in the frame of the High Altitude Ice Crystals (HAIC) / High Ice Water Content (HIWC) projects. During these campaigns, a Falcon F-20 research aircraft equipped with state-of-the-art microphysical instrumentation sampled numerous mesoscale convective systems (MCSs) in order to study dynamical and microphysical properties and processes of high ice water content areas. Experimentally, an isokinetic evaporator probe, referred to as IKP-2, provides a reference measurement of the total water content (TWC), which equals the ice water content (IWC) when (supercooled) liquid water is absent. Two optical array probes, namely the 2D-S and PIP, produce 2D images of individual crystals ranging from 50 μm to 12840 μm, from which particle size distributions (PSDs) are derived. Mathematically, the problem is formulated as an inverse problem in which the crystal mass is assumed constant over a size class and is computed for each size class from the IWC and PSD data: PSD · m = IWC. This problem is solved using a numerical optimization technique in which an objective function is minimized. The objective function is defined as J(m) = ||PSD · m - IWC||² + λ · R(m), where the regularization parameter λ and the regularization function R(m) are tuned based on data characteristics. The method is implemented in two steps. First, the method is developed on synthetic crystal populations in order to evaluate the behavior of the iterative algorithm and the influence of data noise on the quality of the results, and to set up a regularization strategy. To this end, 3D synthetic crystals have been generated and numerically processed to recreate the noise caused by 2D projections of randomly oriented 3D crystals and by the discretization of the PSD into size classes of predefined width. Subsequently, the method is applied to the experimental datasets, and the comparison between the retrieved TWC (this methodology) and the measured one (IKP-2 data) enables evaluation of the consistency and accuracy of the mass solution retrieved by the numerical optimization approach, as well as a preliminary assessment of the influence of temperature and dynamical parameters on crystal masses.
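
    A schematic of the inversion alone: each measured sample contributes one row of the linear system PSD·m = IWC, and the per-size-class mass vector m is retrieved by nonnegative least squares with a small smoothness penalty playing the role of λ·R(m). The size classes, the assumed mass-size power law, and the synthetic PSDs below are placeholders used only to exercise the formulation.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)
edges = np.logspace(np.log10(50), np.log10(12840), 25)        # size-class edges, micrometres
centers = np.sqrt(edges[:-1] * edges[1:])
m_true = 2e-10 * centers ** 2.1                                # assumed mass-size power law, g

# synthetic particle size distributions (counts per litre per class) for many samples
n_samples = 400
PSD = rng.lognormal(mean=0.0, sigma=1.0, size=(n_samples, centers.size))
PSD *= np.exp(-centers / 3000)                                 # fewer large crystals
IWC = PSD @ m_true * rng.normal(1.0, 0.05, n_samples)          # noisy IWC "measurements"

# regularized nonnegative inversion: smoothness of m across neighbouring size classes
lam = 1e-4                                                     # small, ad hoc penalty weight
L = np.diff(np.eye(centers.size), n=2, axis=0)
A = np.vstack([PSD, np.sqrt(lam) * L])
b = np.concatenate([IWC, np.zeros(L.shape[0])])
m_hat, _ = nnls(A, b)

print("IWC reconstruction error: %.3f"
      % (np.linalg.norm(PSD @ m_hat - IWC) / np.linalg.norm(IWC)))
```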

  17. Direct position determination for digital modulation signals based on improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Wan-Ting; Yu, Hong-yi; Du, Jian-Ping; Wang, Ding

    2018-04-01

    The Direct Position Determination (DPD) algorithm has been demonstrated to achieve better accuracy when the signal waveforms are known. However, the signal waveform is difficult to know completely in an actual positioning process. To solve this problem, we propose a DPD method for digital modulation signals based on an improved particle swarm optimization algorithm. First, a DPD model is established for known modulation signals and a cost function is obtained based on symbol estimation. Second, as the optimization of the cost function is a nonlinear integer optimization problem, an improved particle swarm optimization (PSO) algorithm is used for the optimal symbol search. Simulations are carried out to show the higher position accuracy of the proposed DPD method and the convergence of the fitness function under different inertia weights and population sizes. On the one hand, the proposed algorithm can take full advantage of the signal features to improve the positioning accuracy. On the other hand, the improved PSO algorithm can improve the efficiency of the symbol search by nearly one hundred times while achieving a globally optimal solution.
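
    A compact sketch of a standard (non-improved) particle swarm optimizer on a generic continuous test function, just to make the inertia-weight and population-size parameters concrete; the DPD cost function and the integer symbol search of the paper are not reproduced.

```python
import numpy as np

def pso(cost, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise `cost` over a box; w is the inertia weight, c1/c2 the
    cognitive and social acceleration coefficients."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, lo.size))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# toy multimodal cost standing in for the DPD fitness surface over an (x, y) position
rastrigin = lambda p: 20 + np.sum(p**2 - 10 * np.cos(2 * np.pi * p))
best_pos, best_val = pso(rastrigin, bounds=[(-5, 5), (-5, 5)])
print(best_pos, best_val)
```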

  18. Automatic Design of Synthetic Gene Circuits through Mixed Integer Non-linear Programming

    PubMed Central

    Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias

    2012-01-01

    Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guarantees on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfies the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits. PMID:22536398

  19. Pixel-based OPC optimization based on conjugate gradients.

    PubMed

    Ma, Xu; Arce, Gonzalo R

    2011-01-31

    Optical proximity correction (OPC) methods are resolution enhancement techniques (RET) used extensively in the semiconductor industry to improve the resolution and pattern fidelity of optical lithography. In pixel-based OPC (PBOPC), the mask is divided into small pixels, each of which is modified during the optimization process. Two critical issues in PBOPC are the computational complexity of the optimization process and the manufacturability of the optimized mask. Most current OPC optimization methods apply the steepest descent (SD) algorithm to improve image fidelity, augmented by regularization penalties to reduce the complexity of the mask. Although simple to implement, the SD algorithm converges slowly. The existing regularization penalties, however, fall short of meeting the mask rule check (MRC) requirements often used in semiconductor manufacturing. This paper focuses on developing OPC optimization algorithms based on the conjugate gradient (CG) method, which exhibits much faster convergence than the SD algorithm. The image formation process is represented by a Fourier series expansion model, which approximates the partially coherent system as a sum of coherent systems. In order to obtain more desirable manufacturability properties of the mask pattern, an MRC penalty is proposed to enlarge the linear size of the sub-resolution assist features (SRAFs), as well as the distances between the SRAFs and the main body of the mask. Finally, a projection method is developed to further reduce the complexity of the optimized mask pattern.
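
    A toy version of the gradient-based PBOPC loop: mask pixels are optimized with a conjugate gradient routine so that a blurred "aerial image", passed through a sigmoid resist model, matches a small target pattern. The Gaussian blur stands in for the partially coherent imaging model and no MRC penalty is included; this only illustrates the CG optimization idea, not the paper's lithography model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import minimize

N = 16
target = np.zeros((N, N)); target[5:11, 4:12] = 1.0     # desired printed feature

def aerial(mask):
    return gaussian_filter(mask, sigma=1.5)             # stand-in for the optical model

def resist(image, steepness=25.0, threshold=0.5):
    return 1.0 / (1.0 + np.exp(-steepness * (image - threshold)))

def fidelity(theta):
    mask = 1.0 / (1.0 + np.exp(-theta.reshape(N, N)))   # parameterization keeps pixels in (0, 1)
    return np.sum((resist(aerial(mask)) - target) ** 2)

res = minimize(fidelity, np.zeros(N * N), method="CG",  # conjugate gradients (vs. steepest descent)
               options={"maxiter": 100})
optimized_mask = (1.0 / (1.0 + np.exp(-res.x.reshape(N, N))) > 0.5).astype(int)
print("pattern error after optimization:", round(res.fun, 3))
```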

  20. Optimization of perfluoro nano-scale emulsions: the importance of particle size for enhanced oxygen transfer in biomedical applications.

    PubMed

    Fraker, Christopher A; Mendez, Armando J; Inverardi, Luca; Ricordi, Camillo; Stabler, Cherie L

    2012-10-01

    Nano-scale emulsification has long been utilized by the food and cosmetics industry to maximize material delivery through increased surface area to volume ratios. More recently, these methods have been employed in the area of biomedical research to enhance and control the delivery of desired agents, as in perfluorocarbon emulsions for oxygen delivery. In this work, we evaluate critical factors for the optimization of PFC emulsions for use in cell-based applications. Cytotoxicity screening revealed minimal cytotoxicity of components, with the exception of one perfluorocarbon utilized for emulsion manufacture, perfluorooctylbromide (PFOB), and specific w% limitations of PEG-based surfactants utilized. We optimized the manufacture of stable nano-scale emulsions via evaluation of: component materials, emulsification time and pressure, and resulting particle size and temporal stability. The initial emulsion size was greatly dependent upon the emulsion surfactant tested, with pluronics providing the smallest size. Temporal stability of the nano-scale emulsions was directly related to the perfluorocarbon utilized, with perfluorotributylamine, FC-43, providing a highly stable emulsion, while perfluorodecalin, PFD, coalesced over time. The oxygen mass transfer, or diffusive permeability, of the resulting emulsions was also characterized. Our studies found particle size to be the critical factor affecting oxygen mass transfer, as increased micelle size resulted in reduced oxygen diffusion. Overall, this work demonstrates the importance of accurate characterization of emulsification parameters in order to generate stable, reproducible emulsions with the desired bio-delivery properties. Copyright © 2012 Elsevier B.V. All rights reserved.

  1. TH-C-12A-04: Dosimetric Evaluation of a Modulated Arc Technique for Total Body Irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsiamas, P; Czerminska, M; Makrigiorgos, G

    2014-06-15

    Purpose: A simplified Total Body Irradiation (TBI) technique was developed to work with minimal requirements in a compact linac room without a custom motorized TBI couch. Results were compared to our existing fixed-gantry double 4 MV linac TBI system with the patient prone and simultaneous AP/PA irradiation. Methods: A modulated arc irradiates the patient positioned prone/supine along the craniocaudal axis. A simplified inverse planning method was developed to optimize the dose rate as a function of gantry angle for various patient sizes without the need for a graphical 3D treatment planning system. This method can be easily adapted and used with minimal resources. A fixed maximum field size (40×40 cm²) is used to decrease the radiation delivery time. The dose rate as a function of gantry angle is optimized to produce a uniform dose inside rectangular phantoms of various sizes, and custom VMAT DICOM plans were generated using a DICOM editor tool. Monte Carlo simulations, film and ionization chamber dosimetry for various setups were used to derive and test an extended-SSD beam model based on PDD/OAR profiles for the Varian 6EX/TX. Measurements were obtained using solid water phantoms. The dose rate modulation function was determined for patients of various sizes (100 cm - 200 cm). Depending on the size of the patient, the arc range varied from 100° to 120°. Results: A PDD/OAR-based beam model for modulated arc TBI therapy was developed. The lateral dose profiles produced were similar to the profiles of our existing TBI facility. The calculated delivery time for a full arc depended on the size of the patient (∼8 min/100° - 10 min/120°, 100 cGy). Dose heterogeneity varied by about ±5% - ±10% depending on the patient size and distance to the surface (buildup region). Conclusion: TBI using a simplified modulated arc along the craniocaudal axis of patients of different sizes positioned on the floor can be achieved without graphical / inverse 3D planning.

  2. The Number of Patients and Events Required to Limit the Risk of Overestimation of Intervention Effects in Meta-Analysis—A Simulation Study

    PubMed Central

    Thorlund, Kristian; Imberger, Georgina; Walsh, Michael; Chu, Rong; Gluud, Christian; Wetterslev, Jørn; Guyatt, Gordon; Devereaux, Philip J.; Thabane, Lehana

    2011-01-01

    Background Meta-analyses including a limited number of patients and events are prone to yield overestimated intervention effect estimates. While many assume bias is the cause of overestimation, theoretical considerations suggest that random error may be an equal or more frequent cause. The independent impact of random error on meta-analyzed intervention effects has not previously been explored. It has been suggested that surpassing the optimal information size (i.e., the required meta-analysis sample size) provides sufficient protection against overestimation due to random error, but this claim has not yet been validated. Methods We simulated a comprehensive array of meta-analysis scenarios where no intervention effect existed (i.e., relative risk reduction (RRR) = 0%) or where a small but possibly unimportant effect existed (RRR = 10%). We constructed different scenarios by varying the control group risk, the degree of heterogeneity, and the distribution of trial sample sizes. For each scenario, we calculated the probability of observing overestimates of RRR>20% and RRR>30% for each cumulative 500 patients and 50 events. We calculated the cumulative number of patients and events required to reduce the probability of overestimation of intervention effect to 10%, 5%, and 1%. We calculated the optimal information size for each of the simulated scenarios and explored whether meta-analyses that surpassed their optimal information size had sufficient protection against overestimation of intervention effects due to random error. Results The risk of overestimation of intervention effects was usually high when the number of patients and events was small and this risk decreased exponentially over time as the number of patients and events increased. The number of patients and events required to limit the risk of overestimation depended considerably on the underlying simulation settings. Surpassing the optimal information size generally provided sufficient protection against overestimation. Conclusions Random errors are a frequent cause of overestimation of intervention effects in meta-analyses. Surpassing the optimal information size will provide sufficient protection against overestimation. PMID:22028777
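
    For a binary outcome, the optimal information size discussed above is essentially a conventional two-group sample-size calculation applied to the meta-analysed comparison (heterogeneity adjustments are ignored in this sketch). The control-group risk, relative risk reduction, and error rates below are example inputs, not values from the simulations.

```python
from scipy.stats import norm

def optimal_information_size(control_risk, rrr, alpha=0.05, power=0.90):
    """Approximate total number of patients (both arms) needed to detect the
    given relative risk reduction for a binary outcome (pooled-variance formula)."""
    p1 = control_risk
    p2 = control_risk * (1.0 - rrr)            # intervention-group risk
    p_bar = (p1 + p2) / 2.0
    z_a = norm.ppf(1.0 - alpha / 2.0)
    z_b = norm.ppf(power)
    n_per_group = 2.0 * (z_a + z_b) ** 2 * p_bar * (1.0 - p_bar) / (p1 - p2) ** 2
    return 2 * int(round(n_per_group))

# e.g. 10% control-group risk and a plausible RRR of 20%
print(optimal_information_size(control_risk=0.10, rrr=0.20))
```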

  3. A new method for automated discontinuity trace mapping on rock mass 3D surface model

    NASA Astrophysics Data System (ADS)

    Li, Xiaojun; Chen, Jianqin; Zhu, Hehua

    2016-04-01

    This paper presents an automated discontinuity trace mapping method on a 3D surface model of rock mass. Feature points of discontinuity traces are first detected using the Normal Tensor Voting Theory, which is robust to noisy point cloud data. Discontinuity traces are then extracted from feature points in four steps: (1) trace feature point grouping, (2) trace segment growth, (3) trace segment connection, and (4) redundant trace segment removal. A sensitivity analysis is conducted to identify optimal values for the parameters used in the proposed method. The optimal triangular mesh element size is between 5 cm and 6 cm; the angle threshold in the trace segment growth step is between 70° and 90°; the angle threshold in the trace segment connection step is between 50° and 70°, and the distance threshold should be at least 15 times the mean triangular mesh element size. The method is applied to the excavation face trace mapping of a drill-and-blast tunnel. The results show that the proposed discontinuity trace mapping method is fast and effective and could be used as a supplement to traditional direct measurement of discontinuity traces.

  4. Adaptive Modeling Procedure Selection by Data Perturbation.

    PubMed

    Zhang, Yongli; Shen, Xiaotong

    2015-10-01

    Many procedures have been developed to deal with the high-dimensional problems that are emerging in various business and economics areas. To evaluate and compare these procedures, the modeling uncertainty caused by model selection and parameter estimation has to be assessed and integrated into the modeling process. To do this, a data perturbation method estimates the modeling uncertainty inherent in a selection process by perturbing the data. Critical to data perturbation is the size of the perturbation, as the perturbed data should resemble the original dataset. To account for the modeling uncertainty, we derive the optimal size of perturbation, which adapts to the data, the model space, and other relevant factors in the context of linear regression. On this basis, we develop an adaptive data-perturbation method that, unlike its nonadaptive counterpart, performs well in different situations. This leads to a data-adaptive model selection method. Both theoretical and numerical analyses suggest that the data-adaptive model selection method adapts to distinct situations in that it yields consistent model selection and optimal prediction, without knowing which situation exists a priori. The proposed method is applied to real data from the commodity market and outperforms its competitors in terms of price forecasting accuracy.

  5. Airbreathing hypersonic vehicle design and analysis methods

    NASA Technical Reports Server (NTRS)

    Lockwood, Mary Kae; Petley, Dennis H.; Hunt, James L.; Martin, John G.

    1996-01-01

    The design, analysis, and optimization of airbreathing hypersonic vehicles requires analyses involving many highly coupled disciplines at levels of accuracy exceeding those traditionally considered in a conceptual or preliminary-level design. Discipline analysis methods including propulsion, structures, thermal management, geometry, aerodynamics, performance, synthesis, sizing, closure, and cost are discussed. Also, the on-going integration of these methods into a working environment, known as HOLIST, is described.

  6. A novel algorithm for fast and efficient multifocus wavefront shaping

    NASA Astrophysics Data System (ADS)

    Fayyaz, Zahra; Nasiriavanaki, Mohammadreza

    2018-02-01

    Wavefront shaping using a spatial light modulator (SLM) is a popular method for focusing light through turbid media, such as biological tissues. Usually, in iterative optimization methods, because of the very large number of pixels in the SLM, pixels are grouped into larger bins, and the phase values of the bins are changed to obtain an optimum phase map and hence a focus. In this study an efficient optimization algorithm is proposed to obtain an arbitrary map of foci utilizing all the SLM pixels or small bin sizes. The application of this methodology in dermatology, hair removal in particular, is explored and discussed.

  7. The material from Lampung as coarse aggregate to substitute andesite for concrete-making

    NASA Astrophysics Data System (ADS)

    Amin, M.; Supriyatna, Y. I.; Sumardi, S.

    2018-01-01

    Andesite stone is usually used as split stone material in concrete making. However, its availability is decreasing. Lampung province has natural resources that can be used as coarse aggregate materials to substitute andesite stone. These natural materials include limestone, feldspar, basalt, granite, and slag from iron processing waste. Therefore, research on optimizing natural materials in Lampung to substitute andesite stone for concrete making is required. This research used a laboratory experiment method. The research activities included making cubical samples of 150 x 150 x 150 mm with a material composition referring to the K.200 standard and w/c 0.61. Concrete making using various types of aggregates (basalt, limestone, slag) and aggregate sizes (A = 5-15 mm, B = 15-25 mm, and C = 25-50 mm) was followed by compressive strength testing. The results showed that the optimal compressive strengths obtained for basalt were 24.47 MPa for 5-15 mm aggregate sizes, 21.2 MPa for 15-25 mm aggregate sizes, and 20.7 MPa for 25-50 mm aggregate sizes. These basalt compressive strength values were higher than the corresponding results for andesite (19.69 MPa for 5-15 mm aggregate sizes), slag (22.72 MPa for 5-15 mm aggregate sizes), and limestone (19.69 MPa for 5-15 mm aggregate sizes). These results indicate that basalt, limestone, and slag aggregates are good enough to substitute andesite as materials for concrete making. Therefore, natural resources in Lampung can be optimized as construction materials in concrete making.

  8. Cost effective campaigning in social networks

    NASA Astrophysics Data System (ADS)

    Kotnis, Bhushan; Kuri, Joy

    2016-05-01

    Campaigners are increasingly using online social networking platforms for promoting products, ideas and information. A popular method of promoting a product or even an idea is incentivizing individuals to evangelize it vigorously by providing them with referral rewards in the form of discounts, cash backs, or social recognition. Due to budget constraints on scarce resources such as money and manpower, it may not be possible to provide incentives for the entire population, and hence incentives need to be allocated judiciously to appropriate individuals to ensure the highest possible outreach size. We aim to do this by formulating and solving an optimization problem using percolation theory. In particular, we compute the set of individuals to be provided incentives for minimizing the expected cost while ensuring a given outreach size. We also solve the problem of computing the set of individuals to be incentivized for maximizing the outreach size for a given cost budget. The optimization problem turns out to be nontrivial; it involves quantities that need to be computed by numerically solving a fixed point equation. Our primary contribution is to show that, for a fairly general cost structure, the optimization problems can be solved by solving a simple linear program. We believe that our approach of using percolation theory to formulate an optimization problem is the first of its kind.

  9. Design and operation of a bio-inspired micropump based on blood-sucking mechanism of mosquitoes

    NASA Astrophysics Data System (ADS)

    Leu, Tzong-Shyng; Kao, Ruei-Hung

    2018-05-01

    The aim of this study is to develop a novel bionic micropump that mimics the blood-sucking mechanism of mosquitoes, which operates with an efficiency of about 36%. The micropump is produced using micro-electro-mechanical system (MEMS) technology, with PDMS (polydimethylsiloxane) used to fabricate the microchannel and an actuator membrane made of Fe-PDMS. It employs an NdFeB permanent magnet and a PZT element to actuate the Fe-PDMS membrane and generate the flow. A lumped model theory and the Taguchi method are used for numerical simulation of the pulsating flow in the micropump. The size of the mosquito mouthpart is also varied to identify the best waveform for the transient flow process. Based on the computational results for the channel size and the Taguchi method, combined with asymmetric actuation, an optimized actuation waveform is identified. Experimental results show that the maximum pumping flow rate is 23.5 μL/min and the efficiency is 86%; moreover, the power density of the micropump is about 8 times that produced by mosquito suction.

  10. Determining particle size and water content by near-infrared spectroscopy in the granulation of naproxen sodium.

    PubMed

    Bär, David; Debus, Heiko; Brzenczek, Sina; Fischer, Wolfgang; Imming, Peter

    2018-03-20

    Near-infrared spectroscopy is frequently used by the pharmaceutical industry to monitor and optimize several production processes. In combination with chemometrics, a mathematical-statistical technique, near-infrared spectroscopy offers the following advantages: it is a fast, non-destructive, non-invasive, and economical analytical method. One of the most advanced and popular chemometric techniques is the partial least squares algorithm, given its applicability in routine use and the quality of its results. The required reference analytics enable the analysis of various parameters of interest, for example, moisture content, particle size, and many others. Parameters such as the correlation coefficient, root mean square error of prediction, root mean square error of calibration, and root mean square error of validation have been used to evaluate the applicability and robustness of the analytical methods developed. This study deals with investigating a Naproxen Sodium granulation process using near-infrared spectroscopy and the development of water content and particle-size methods. For the water content method, one should consider a maximum water content of about 21% in the granulation process, which must be confirmed by loss on drying. Further influences to be considered are the constantly changing product temperature, rising to about 54 °C, the creation of hydrated states of Naproxen Sodium when using a maximum of about 21% water content, and the large quantity of about 87% Naproxen Sodium in the formulation. A combination of these influences was considered in developing the near-infrared spectroscopy method for the water content of Naproxen Sodium granules. The root mean square error was 0.25% for the calibration dataset and 0.30% for the validation dataset, obtained after different stages of optimization by multiplicative scatter correction and the first derivative. Using laser diffraction, the granules were analyzed for particle sizes, obtaining the summary sieve sizes of >63 μm and >100 μm. The following influences should be considered for application in routine production: constant changes in water content up to 21% and a product temperature up to 54 °C. The different stages of optimization result in a root mean square error of 2.54% for the calibration dataset and 3.53% for the validation set, using the Kubelka-Munk conversion and first derivative, for the near-infrared spectroscopy method for particle size >63 μm. For the near-infrared spectroscopy method for particle size >100 μm, the root mean square error was 3.47% for the calibration dataset and 4.51% for the validation set, using the same pre-treatments. The robustness and suitability of this methodology have already been demonstrated by its recent successful implementation in a routine granulate production process. Copyright © 2018 Elsevier B.V. All rights reserved.
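
    The chemometric core described here, a partial least squares regression of a reference value on NIR spectra, can be sketched with scikit-learn on synthetic spectra. The spectral model, noise, number of latent variables, and resulting error are arbitrary illustrations, not the validated water-content method.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(7)
wavelengths = np.linspace(1100, 2500, 350)               # nm
n = 120
water = rng.uniform(2, 21, n)                            # % water content (reference values)

# synthetic spectra: a water absorption band near 1940 nm scaling with content,
# plus baseline drift and noise standing in for scatter and temperature effects
band = np.exp(-0.5 * ((wavelengths - 1940) / 60) ** 2)
spectra = (np.outer(water, band) * 0.02
           + np.outer(rng.normal(1.0, 0.1, n), np.linspace(0.2, 0.4, wavelengths.size))
           + rng.normal(0, 0.003, (n, wavelengths.size)))

X_cal, X_val, y_cal, y_val = train_test_split(spectra, water, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=4).fit(X_cal, y_cal)
rmsep = mean_squared_error(y_val, pls.predict(X_val).ravel()) ** 0.5
print(f"RMSEP ~ {rmsep:.2f} % water")
```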

  11. TU-H-BRC-05: Stereotactic Radiosurgery Optimized with Orthovoltage Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fagerstrom, J; Culberson, W; Bender, E

    2016-06-15

    Purpose: To achieve improved stereotactic radiosurgery (SRS) dose distributions using orthovoltage energy fluence modulation with inverse planning optimization techniques. Methods: A pencil beam model was used to calculate dose distributions from the institution’s orthovoltage unit at 250 kVp. Kernels for the model were derived using Monte Carlo methods as well as measurements with radiochromic film. The orthovoltage photon spectra, modulated by varying thicknesses of attenuating material, were approximated using open-source software. A genetic algorithm search heuristic routine was used to optimize added tungsten filtration thicknesses to approach rectangular function dose distributions at depth. Optimizations were performed for depths of 2.5, 5.0, and 7.5 cm, with cone sizes of 8, 10, and 12 mm. Results: Circularly-symmetric tungsten filters were designed based on the results of the optimization, to modulate the orthovoltage beam across the aperture of an SRS cone collimator. For each depth and cone size combination examined, the beam flatness and 80–20% and 90–10% penumbrae were calculated for both standard, open cone-collimated beams as well as for the optimized, filtered beams. For all configurations tested, the modulated beams were able to achieve improved penumbra widths and flatness statistics at depth, with flatness improving between 33 and 52%, and penumbrae improving between 18 and 25% for the modulated beams compared to the unmodulated beams. Conclusion: A methodology has been described that may be used to optimize the spatial distribution of added filtration material in an orthovoltage SRS beam to result in dose distributions at depth with improved flatness and penumbrae compared to standard open cones. This work provides the mathematical foundation for a novel, orthovoltage energy fluence-modulated SRS system.

  12. Automatic vision-based grain optimization and analysis of multi-crystalline solar wafers using hierarchical region growing

    NASA Astrophysics Data System (ADS)

    Fan, Shu-Kai S.; Tsai, Du-Ming; Chuang, Wei-Che

    2017-04-01

    Solar power has become an attractive alternative source of energy. The multi-crystalline solar cell has been widely accepted in the market because it has a relatively low manufacturing cost. Multi-crystalline solar wafers with larger grain sizes and fewer grain boundaries are higher quality and convert energy more efficiently than mono-crystalline solar cells. In this article, a new image processing method is proposed for assessing the wafer quality. An adaptive segmentation algorithm based on region growing is developed to separate the closed regions of individual grains. Using the proposed method, the shape and size of each grain in the wafer image can be precisely evaluated. Two measures of average grain size are taken from the literature and modified to estimate the average grain size. The resulting average grain size estimate dictates the quality of the crystalline solar wafers and can be considered a viable quantitative indicator of conversion efficiency.

  13. Treatment planning, optimization, and beam delivery techniques for intensity modulated proton therapy

    NASA Astrophysics Data System (ADS)

    Sengbusch, Evan R.

    Physical properties of proton interactions in matter give them a theoretical advantage over photons in radiation therapy for cancer treatment, but they are seldom used relative to photons. The primary barriers to wider acceptance of proton therapy are the technical feasibility, size, and price of proton therapy systems. Several aspects of the proton therapy landscape are investigated, and new techniques for treatment planning, optimization, and beam delivery are presented. The results of these investigations suggest a means by which proton therapy can be delivered more efficiently, effectively, and to a much larger proportion of eligible patients. An analysis of the existing proton therapy market was performed. Personal interviews with over 30 radiation oncology leaders were conducted with regard to the current and future use of proton therapy. In addition, global proton therapy market projections are presented. The results of these investigations serve as motivation and guidance for the subsequent development of treatment system designs and treatment planning, optimization, and beam delivery methods. A major factor impacting the size and cost of proton treatment systems is the maximum energy of the accelerator. Historically, 250 MeV has been the accepted value, but there is minimal quantitative evidence in the literature that supports this standard. A retrospective study of 100 patients is presented that quantifies the maximum proton kinetic energy requirements for cancer treatment, and the impact of those results with regard to treatment system size, cost, and neutron production is discussed. This study is subsequently expanded to include 100 cranial stereotactic radiosurgery (SRS) patients, and the results are discussed in the context of a proposed dedicated proton SRS treatment system. Finally, novel proton therapy optimization and delivery techniques are presented. Algorithms are developed that optimize treatment plans over beam angle, spot size, spot spacing, beamlet weight, the number of delivered beamlets, and the number of delivery angles. These methods are evaluated via treatment planning studies including left-sided whole breast irradiation, lung stereotactic body radiotherapy, nasopharyngeal carcinoma, and whole brain radiotherapy with hippocampal avoidance. Improvements in efficiency and efficacy relative to traditional proton therapy and intensity modulated photon radiation therapy are discussed.

  14. Value of information methods to design a clinical trial in a small population to optimise a health economic utility function.

    PubMed

    Pearce, Michael; Hee, Siew Wan; Madan, Jason; Posch, Martin; Day, Simon; Miller, Frank; Zohar, Sarah; Stallard, Nigel

    2018-02-08

    Most confirmatory randomised controlled clinical trials (RCTs) are designed with specified power, usually 80% or 90%, for a hypothesis test conducted at a given significance level, usually 2.5% for a one-sided test. Approval of the experimental treatment by regulatory agencies is then based on the result of such a significance test with other information to balance the risk of adverse events against the benefit of the treatment to future patients. In the setting of a rare disease, recruiting sufficient patients to achieve conventional error rates for clinically reasonable effect sizes may be infeasible, suggesting that the decision-making process should reflect the size of the target population. We considered the use of a decision-theoretic value of information (VOI) method to obtain the optimal sample size and significance level for confirmatory RCTs in a range of settings. We assume the decision maker represents society. For simplicity we assume the primary endpoint to be normally distributed with unknown mean following some normal prior distribution representing information on the anticipated effectiveness of the therapy available before the trial. The method is illustrated by an application in an RCT in haemophilia A. We explicitly specify the utility in terms of improvement in primary outcome and compare this with the costs of treating patients, both financial and in terms of potential harm, during the trial and in the future. The optimal sample size for the clinical trial decreases as the size of the population decreases. For non-zero cost of treating future patients, either monetary or in terms of potential harmful effects, stronger evidence is required for approval as the population size increases, though this is not the case if the costs of treating future patients are ignored. Decision-theoretic VOI methods offer a flexible approach with both type I error rate and power (or equivalently trial sample size) depending on the size of the future population for whom the treatment under investigation is intended. This might be particularly suitable for small populations when there is considerable information about the patient population.

  15. Nano-sized Contrast Agents to Non-Invasively Detect Renal Inflammation by Magnetic Resonance Imaging

    PubMed Central

    Thurman, Joshua M.; Serkova, Natalie J.

    2013-01-01

    Several molecular imaging methods have been developed that employ nano-sized contrast agents to detect markers of inflammation within tissues. Renal inflammation contributes to disease progression in a wide range of autoimmune and inflammatory diseases, and a biopsy is currently the only method of definitively diagnosing active renal inflammation. However, the development of new molecular imaging methods that employ contrast agents capable of detecting particular immune cells or protein biomarkers will allow clinicians to evaluate inflammation throughout the kidneys, and to assess a patient's response to immunomodulatory drugs. These imaging tools will improve our ability to validate new therapies and to optimize the treatment of individual patients with existing therapies. This review describes the clinical need for new methods of monitoring renal inflammation, and recent advances in the development of nano-sized contrast agents for detection of inflammatory markers of renal disease. PMID:24206601

  16. A family of variable step-size affine projection adaptive filter algorithms using statistics of channel impulse response

    NASA Astrophysics Data System (ADS)

    Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar

    2011-12-01

    This paper extends the recently introduced variable step-size (VSS) approach to a family of adaptive filter algorithms. The method uses prior knowledge of the channel impulse response statistics; accordingly, the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In the VSS-SPU adaptive algorithms the filter coefficients are partially updated, which reduces the computational complexity. In the VSS-SR-APA, the optimal selection of input regressors is performed during adaptation. The presented algorithms feature good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.
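
    The sketch below illustrates the general idea of a variable step-size adaptive filter in a system identification scenario. The step-size rule used here is a simple generic heuristic, not the MSD-optimal vector derived in the paper, and the channel and signal statistics are invented for illustration.

```python
# Illustrative system identification with an NLMS filter whose step size is
# adapted over time. The adaptation rule is a generic heuristic, not the
# paper's MSD-optimal rule; channel and noise parameters are assumptions.
import numpy as np

rng = np.random.default_rng(1)
M = 16                                                   # unknown channel length
h = rng.normal(size=M) * np.exp(-0.3 * np.arange(M))     # exponentially decaying channel
n_iter = 5000
x = rng.normal(size=n_iter)                              # input signal
noise_var = 1e-3

w = np.zeros(M)
mu = 1.0                                                 # initial (maximum) step size
eps = 1e-6
msd = []

for k in range(M, n_iter):
    xk = x[k - M + 1:k + 1][::-1]                        # regressor, newest sample first
    d = h @ xk + rng.normal(scale=np.sqrt(noise_var))    # desired signal
    e = d - w @ xk                                       # a priori error
    # shrink the step size as the excess error approaches the noise floor
    mu = 0.97 * mu + 0.03 * max(0.0, 1.0 - noise_var / (e * e + eps))
    w += mu * e * xk / (xk @ xk + eps)                   # NLMS update
    msd.append(np.sum((w - h) ** 2))                     # mean-square deviation

print("final MSD:", msd[-1])
```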

  17. Sizing a rainwater harvesting cistern by minimizing costs

    NASA Astrophysics Data System (ADS)

    Pelak, Norman; Porporato, Amilcare

    2016-10-01

    Rainwater harvesting (RWH) has the potential to reduce water-related costs by providing an alternate source of water, in addition to relieving pressure on public water sources and reducing stormwater runoff. Existing methods for determining the optimal size of the cistern component of a RWH system have various drawbacks, such as specificity to a particular region, dependence on numerical optimization, and/or failure to consider the costs of the system. In this paper a formulation is developed for the optimal cistern volume which incorporates the fixed and distributed costs of a RWH system while also taking into account the random nature of the depth and timing of rainfall, with a focus on RWH to supply domestic, nonpotable uses. With rainfall inputs modeled as a marked Poisson process, and by comparing the costs associated with building a cistern with the costs of externally supplied water, an expression for the optimal cistern volume is found which minimizes the water-related costs. The volume is a function of the roof area, water use rate, climate parameters, and costs of the cistern and of the external water source. This analytically tractable expression makes clear the dependence of the optimal volume on the input parameters. An analysis of the rainfall partitioning also characterizes the efficiency of a particular RWH system configuration and its potential for runoff reduction. The results are compared to the RWH system at the Duke Smart Home in Durham, NC, USA to show how the method could be used in practice.
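
    A minimal numerical illustration of the same idea is sketched below: rainfall arrives as a marked Poisson process, a daily water balance is simulated for candidate cistern volumes, and the volume minimizing the sum of cistern and external-water costs is selected. The parameter values and linear cost model are assumptions for illustration, not the paper's closed-form solution.

```python
# Sketch of sizing a cistern by minimizing total water-related cost, with
# rainfall modeled as a marked Poisson process (Monte Carlo, not the paper's
# analytical expression). All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

lam = 0.25          # storm arrival rate [1/day]
alpha = 10.0        # mean storm depth [mm]
roof = 100.0        # roof area [m^2]
eff = 0.85          # collection efficiency
demand = 0.15       # daily non-potable demand [m^3/day]
days = 365 * 10     # simulation horizon

c_fixed = 500.0     # fixed cistern cost [$]
c_per_m3 = 300.0    # cistern cost per m^3 of capacity [$]
p_water = 3.0       # price of externally supplied water [$/m^3]

def total_cost(V, n_sim=10):
    ext = 0.0
    for _ in range(n_sim):
        s = 0.0
        for _ in range(days):
            if rng.random() < lam:                      # a storm occurs today
                depth_m = rng.exponential(alpha) / 1000.0
                s = min(V, s + eff * roof * depth_m)    # harvest, spill the excess
            use = min(s, demand)
            ext += demand - use                         # shortfall bought externally
            s -= use
    ext /= n_sim
    return c_fixed + c_per_m3 * V + p_water * ext

volumes = np.arange(0.5, 10.5, 0.5)
costs = [total_cost(V) for V in volumes]
print("cost-minimizing volume [m^3]:", volumes[int(np.argmin(costs))])
```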

  18. Effect of heliostat size on the levelized cost of electricity for power towers

    NASA Astrophysics Data System (ADS)

    Pidaparthi, Arvind; Hoffmann, Jaap

    2017-06-01

    The objective of this study is to investigate the effects of heliostat size on the levelized cost of electricity (LCOE) for power tower plants. These effects are analyzed for a power tower with a net capacity of 100 MWe, 8 hours of thermal energy storage, and a solar multiple of 1.8 in Upington, South Africa. Large, medium, and small heliostats with areas of 115.56 m², 43.3 m², and 15.67 m², respectively, are considered for comparison. A radial-staggered pattern and an external cylindrical receiver are considered for the heliostat field layouts. The optical performance of the optimized heliostat field layouts is evaluated by the Hermite (analytical) method using SolarPILOT, a tool for the generation and optimization of heliostat field layouts. The heliostat cost per unit is calculated separately for the three heliostat sizes, and the effects of size scaling, learning-curve benefits, and the price index are included. The annual operation and maintenance (O&M) costs are estimated separately for the three heliostat fields, where the number of personnel required in the field is determined by the number of heliostats. The LCOE values are used as a figure of merit to compare the different heliostat sizes. The results, which include the economic and optical performance along with the annual O&M costs, indicate that the lowest LCOE values are achieved by the medium-size heliostat with an area of 43.3 m² for this configuration. This study will help power tower developers determine the optimal heliostat size for power tower plants currently in the development stage.
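
    A back-of-the-envelope version of the comparison is sketched below: for each heliostat size, an LCOE is computed from assumed capital cost, optical efficiency, and O&M figures. All numbers are placeholders; the study derives them from SolarPILOT field layouts and detailed cost models.

```python
# Back-of-the-envelope LCOE comparison for three heliostat sizes. Costs,
# efficiencies, and O&M staffing figures are illustrative assumptions only.
def crf(rate, years):
    """Capital recovery factor."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

heliostats = {
    # name: (unit area [m^2], field cost [$/m^2], optical efficiency, O&M [$/yr])
    "large (115.56 m^2)": (115.56, 120.0, 0.55, 4.0e6),
    "medium (43.3 m^2)":  (43.30, 135.0, 0.57, 4.5e6),
    "small (15.67 m^2)":  (15.67, 160.0, 0.58, 6.0e6),
}

field_area = 1.0e6      # total reflective area [m^2]
dni_annual = 2900.0     # annual direct normal irradiance [kWh/m^2/yr]
plant_eff = 0.16        # receiver + power-block conversion efficiency
other_capex = 4.0e8     # tower, receiver, storage, power block [$]

for name, (area, cost_m2, opt_eff, om) in heliostats.items():
    capex = other_capex + field_area * cost_m2
    energy = field_area * dni_annual * opt_eff * plant_eff   # kWh/yr
    lcoe = (crf(0.08, 25) * capex + om) / energy             # $/kWh
    print(f"{name}: LCOE = {lcoe:.3f} $/kWh")
```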

  19. Myocardial infarct sizing by late gadolinium-enhanced MRI: Comparison of manual, full-width at half-maximum, and n-standard deviation methods.

    PubMed

    Zhang, Lin; Huttin, Olivier; Marie, Pierre-Yves; Felblinger, Jacques; Beaumont, Marine; Chillou, Christian DE; Girerd, Nicolas; Mandry, Damien

    2016-11-01

    To compare three widely used methods for myocardial infarct (MI) sizing on late gadolinium-enhanced (LGE) magnetic resonance (MR) images: manual delineation and two semiautomated techniques (full-width at half-maximum [FWHM] and n-standard deviation [SD]). 3T phase-sensitive inversion-recovery (PSIR) LGE images of 114 patients after an acute MI (2-4 days and 6 months) were analyzed by two independent observers to determine both total and core infarct sizes (TIS/CIS). Manual delineation served as the reference for determination of optimal thresholds for semiautomated methods after thresholding at multiple values. Reproducibility and accuracy were expressed as overall bias ± 95% limits of agreement. Mean infarct sizes by manual methods were 39.0%/24.4% for the acute MI group (TIS/CIS) and 29.7%/17.3% for the chronic MI group. The optimal thresholds (ie, providing the closest mean value to the manual method) were FWHM30% and 3SD for the TIS measurement and FWHM45% and 6SD for the CIS measurement (paired t-test; all P > 0.05). The best reproducibility was obtained using FWHM. For TIS measurement in the acute MI group, intra-/interobserver agreements, from Bland-Altman analysis, with FWHM30%, 3SD, and manual were -0.02 ± 7.74%/-0.74 ± 5.52%, 0.31 ± 9.78%/2.96 ± 16.62%, and -2.12 ± 8.86%/0.18 ± 16.12%, respectively; in the chronic MI group, the corresponding values were 0.23 ± 3.5%/-2.28 ± 15.06%, -0.29 ± 10.46%/3.12 ± 13.06%, and 1.68 ± 6.52%/-2.88 ± 9.62%, respectively. A similar trend for reproducibility was obtained for CIS measurement. However, semiautomated methods produced inconsistent results (variabilities of 24-46%) compared to manual delineation. The FWHM technique was the most reproducible method for infarct sizing both in acute and chronic MI. However, both FWHM and n-SD methods showed limited accuracy compared to manual delineation. J. Magn. Reson. Imaging 2016;44:1206-1217. © 2016 International Society for Magnetic Resonance in Medicine.
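
    The two semiautomated thresholding rules are simple to state in code. The sketch below applies an FWHM-style threshold (a percentage of the maximum signal in the myocardium) and an n-SD threshold (mean plus n standard deviations of remote myocardium) to a synthetic image; the image and masks are invented for illustration.

```python
# Sketch of the two semiautomated infarct-sizing rules on a synthetic 2D LGE
# image, reported as fractions of the myocardial mask. Threshold choices such
# as FWHM30% or 3SD follow the paper; the image itself is purely illustrative.
import numpy as np

def infarct_fraction_fwhm(img, myo_mask, percent=30.0):
    """Voxels above percent% of the maximum signal inside the myocardium."""
    thr = (percent / 100.0) * img[myo_mask].max()
    return np.count_nonzero(myo_mask & (img > thr)) / np.count_nonzero(myo_mask)

def infarct_fraction_nsd(img, myo_mask, remote_mask, n_sd=3.0):
    """Voxels more than n_sd standard deviations above remote (normal) myocardium."""
    thr = img[remote_mask].mean() + n_sd * img[remote_mask].std()
    return np.count_nonzero(myo_mask & (img > thr)) / np.count_nonzero(myo_mask)

# tiny synthetic example
rng = np.random.default_rng(3)
img = rng.normal(50, 10, (64, 64))
myo = np.zeros((64, 64), bool); myo[20:44, 20:44] = True
img[30:40, 30:40] += 250                   # "enhanced" (infarcted) region
remote = myo & (img < 150)                 # remote = non-enhanced myocardium

print("FWHM30% fraction:", infarct_fraction_fwhm(img, myo, 30))
print("3SD fraction:    ", infarct_fraction_nsd(img, myo, remote, 3))
```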

  20. [Comparison study on sampling methods of Oncomelania hupensis snail survey in marshland schistosomiasis epidemic areas in China].

    PubMed

    An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang

    2016-06-29

    To optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, and to increase the precision, efficiency and economy of the snail survey. A 50 m × 50 m quadrat experimental field was selected in the Chayegang marshland near Henghu farm in the Poyang Lake region, and a whole-coverage method was adopted to survey the snails. The simple random sampling, systematic sampling and stratified random sampling methods were applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes of the simple random sampling, systematic sampling and stratified random sampling methods were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.2217, 0.3024 and 0.0478, respectively. Spatial stratified sampling with altitude as the stratum variable is an efficient approach for the snail survey, offering lower cost and higher precision.
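
    The sketch below mimics the comparison on a synthetic 50 × 50 grid of survey frames: it contrasts the absolute sampling error of simple random sampling with that of stratified random sampling along an altitude-like gradient. The snail-density field is fabricated; the study used a fully surveyed plot as the ground truth.

```python
# Toy comparison of simple random vs. stratified random sampling error for a
# gridded survey field. The "true" snail counts and strata are synthetic.
import numpy as np

rng = np.random.default_rng(4)

# synthetic true counts on a 50x50 grid of 1-m^2 frames, with density varying
# along one axis (a stand-in for altitude)
grid = rng.poisson(lam=np.linspace(0.2, 3.0, 50)[None, :] * np.ones((50, 1)))
true_mean = grid.mean()
cells = grid.ravel()

def srs_error(n, reps=2000):
    est = [rng.choice(cells, n, replace=False).mean() for _ in range(reps)]
    return np.mean(np.abs(np.array(est) - true_mean))

def stratified_error(n, n_strata=5, reps=2000):
    strata = np.array_split(np.arange(grid.shape[1]), n_strata)  # equal "altitude" bands
    per = n // n_strata
    est = []
    for _ in range(reps):
        means = [rng.choice(grid[:, cols].ravel(), per, replace=False).mean()
                 for cols in strata]
        est.append(np.mean(means))                               # equal-size strata
    return np.mean(np.abs(np.array(est) - true_mean))

print("SRS abs. error (n=300):       ", srs_error(300))
print("Stratified abs. error (n=225):", stratified_error(225))
```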

  1. A new algorithm for real-time optimal dispatch of active and reactive power generation retaining nonlinearity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roy, L.; Rao, N.D.

    1983-04-01

    This paper presents a new method for optimal dispatch of real and reactive power generation which is based on a Cartesian coordinate formulation of the economic dispatch problem and a reclassification of the state and control variables associated with generator buses. The voltage and power at these buses are classified as parametric and functional inequality constraints, and are handled by the reduced gradient technique and the penalty factor approach, respectively. The advantage of this classification is a reduction in the size of the equality constraint model, leading to lower storage requirements. The rectangular coordinate formulation results in an exact equality constraint model in which the coefficient matrix is real, sparse, diagonally dominant, smaller in size, and needs to be computed and factorized only once in each gradient step. In addition, Lagrangian multipliers are calculated using a new efficient procedure. A natural outcome of these features is a solution of the economic dispatch problem that is faster than other methods available to date in the literature. Rapid and reliable convergence is an additional desirable characteristic of the method. Digital simulation results are presented on several IEEE test systems to illustrate the range of application of the method vis-à-vis the popular Dommel-Tinney (DT) procedure. It is found that the proposed method is more reliable, 3-4 times faster, and requires 20-30 percent less storage compared to the DT algorithm, while being just as general. Thus, owing to its exactness, robust mathematical model, and lower computational requirements, the method developed in the paper is shown to be a practically feasible algorithm for on-line optimal power dispatch.

  2. Removing Barriers for Effective Deployment of Intermittent Renewable Generation

    NASA Astrophysics Data System (ADS)

    Arabali, Amirsaman

    The stochastic nature of intermittent renewable resources is the main barrier to effective integration of renewable generation. This problem can be studied from feeder-scale and grid-scale perspectives. Two new stochastic methods are proposed to meet the feeder-scale controllable load with a hybrid renewable generation (including wind and PV) and energy storage system. For the first method, an optimization problem is developed whose objective function is the cost of the hybrid system including the cost of renewable generation and storage subject to constraints on energy storage and shifted load. A smart-grid strategy is developed to shift the load and match the renewable energy generation and controllable load. Minimizing the cost function guarantees minimum PV and wind generation installation, as well as storage capacity selection for supplying the controllable load. A confidence coefficient is allocated to each stochastic constraint which shows to what degree the constraint is satisfied. In the second method, a stochastic framework is developed for optimal sizing and reliability analysis of a hybrid power system including renewable resources (PV and wind) and energy storage system. The hybrid power system is optimally sized to satisfy the controllable load with a specified reliability level. A load-shifting strategy is added to provide more flexibility for the system and decrease the installation cost. Load shifting strategies and their potential impacts on the hybrid system reliability/cost analysis are evaluated through different scenarios. Using a compromise-solution method, the best compromise between the reliability and cost will be realized for the hybrid system. For the second problem, a grid-scale stochastic framework is developed to examine the storage application and its optimal placement for the social cost and transmission congestion relief of wind integration. Storage systems are optimally placed and adequately sized to minimize the sum of operation and congestion costs over a scheduling period. A technical assessment framework is developed to enhance the efficiency of wind integration and evaluate the economics of storage technologies and conventional gas-fired alternatives. The proposed method is used to carry out a cost-benefit analysis for the IEEE 24-bus system and determine the most economical technology. In order to mitigate the financial and technical concerns of renewable energy integration into the power system, a stochastic framework is proposed for transmission grid reinforcement studies in a power system with wind generation. A multi-stage multi-objective transmission network expansion planning (TNEP) methodology is developed which considers the investment cost, absorption of private investment and reliability of the system as the objective functions. A Non-dominated Sorting Genetic Algorithm (NSGA II) optimization approach is used in combination with a probabilistic optimal power flow (POPF) to determine the Pareto optimal solutions considering the power system uncertainties. Using a compromise-solution method, the best final plan is then realized based on the decision maker preferences. The proposed methodology is applied to the IEEE 24-bus Reliability Test System (RTS) to evaluate the feasibility and practicality of the developed planning strategy.

  3. Mixedness determination of rare earth-doped ceramics

    NASA Astrophysics Data System (ADS)

    Czerepinski, Jennifer H.

    The lack of chemical uniformity in a powder mixture, such as clustering of a minor component, can lead to deterioration of materials properties. A method to determine powder mixture quality is to correlate the chemical homogeneity of a multi-component mixture with its particle size distribution and mixing method. This is applicable to rare earth-doped ceramics, which require at least 1-2 nm dopant ion spacing to optimize optical properties. Mixedness simulations were conducted for random heterogeneous mixtures of Nd-doped LaF3 mixtures using the Concentric Shell Model of Mixedness (CSMM). Results indicate that when the host to dopant particle size ratio is 100, multi-scale concentration variance is optimized. In order to verify results from the model, experimental methods that probe a mixture at the micro, meso, and macro scales are needed. To directly compare CSMM results experimentally, an image processing method was developed to calculate variance profiles from electron images. An in-lens (IL) secondary electron image is subtracted from the corresponding Everhart-Thornley (ET) secondary electron image in a Field-Emission Scanning Electron Microscope (FESEM) to produce two phases and pores that can be quantified with 50 nm spatial resolution. A macro was developed to quickly analyze multi-scale compositional variance from these images. Results for a 50:50 mixture of NdF3 and LaF3 agree with the computational model. The method has proven to be applicable only for mixtures with major components and specific particle morphologies, but the macro is useful for any type of imaging that produces excellent phase contrast, such as confocal microscopy. Fluorescence spectroscopy was used as an indirect method to confirm computational results for Nd-doped LaF3 mixtures. Fluorescence lifetime can be used as a quantitative method to indirectly measure chemical homogeneity when the limits of electron microscopy have been reached. Fluorescence lifetime represents the compositional fluctuations of a dopant on the nanoscale while accounting for billions of particles in a fast, non-destructive manner. The significance of this study will show how small-scale fluctuations in homogeneity limit the optimization of optical properties, which can be improved by the proper selection of particle size and mixing method.

  4. A Figure-of-Merit for Design and Optimization of Inductive Power Transmission Links for Millimeter-Sized Biomedical Implants.

    PubMed

    Ibrahim, Ahmed; Kiani, Mehdi

    2016-12-01

    Power transmission efficiency (PTE) has been the key parameter for wireless power transmission (WPT) to biomedical implants with millimeter (mm) dimensions. It has been suggested that for mm-sized implants, increasing the power carrier frequency (f_p) of the WPT link to hundreds of MHz improves PTE. However, increasing f_p significantly reduces the maximum allowable power that can be transmitted under specific absorption rate (SAR) constraints. This paper presents a new figure-of-merit (FoM) and a design methodology for optimal WPT to mm-sized implants via inductive coupling by striking a balance between PTE and the maximum power deliverable under SAR constraints (P_L,SAR). First, the optimal mm-sized receiver (Rx) coil geometry is identified over a wide range of f_p to maximize the Rx coil quality factor (Q). Secondly, the optimal transmitter (Tx) coil geometry and f_p are found to maximize the proposed FoM under a low-loss Rx matched-load condition. Finally, the proper Tx coil and tissue spacing is identified based on the FoM at the optimal f_p. We demonstrate that f_p on the order of tens of MHz still offers higher P_L,SAR and FoM, which is key in applications that demand high power, such as optogenetics. An inductive link to power a 1 mm³ implant was designed based on our FoM and verified through full-wave electromagnetic field simulations and measurements using a de-embedding method. In our measurements, an Rx coil with 1 mm diameter, located 10 mm inside the tissue, achieved a PTE of 1.4% and a P_L,SAR of 2.2 mW at an f_p of 20 MHz.

  5. Layout optimization with algebraic multigrid methods

    NASA Technical Reports Server (NTRS)

    Regler, Hans; Ruede, Ulrich

    1993-01-01

    Finding the optimal position for the individual cells (also called functional modules) on the chip surface is an important and difficult step in the design of integrated circuits. This paper deals with the problem of relative placement, that is the minimization of a quadratic functional with a large, sparse, positive definite system matrix. The basic optimization problem must be augmented by constraints to inhibit solutions where cells overlap. Besides classical iterative methods, based on conjugate gradients (CG), we show that algebraic multigrid methods (AMG) provide an interesting alternative. For moderately sized examples with about 10000 cells, AMG is already competitive with CG and is expected to be superior for larger problems. Besides the classical 'multiplicative' AMG algorithm where the levels are visited sequentially, we propose an 'additive' variant of AMG where levels may be treated in parallel and that is suitable as a preconditioner in the CG algorithm.
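
    As a small illustration of relative placement, the sketch below builds the sparse, positive definite system for a random two-pin netlist with a few fixed pad positions and solves it with conjugate gradients; for large instances, AMG would replace or precondition the CG solve. The netlist, anchor weights, and sizes are arbitrary assumptions.

```python
# Tiny sketch of "relative placement": minimize a quadratic wire-length
# objective by solving a sparse SPD system with conjugate gradients.
# The random netlist and anchor weights are illustrative only.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

rng = np.random.default_rng(5)
n_cells, n_nets = 2000, 6000

# random two-pin nets between cells
i = rng.integers(0, n_cells, n_nets)
j = rng.integers(0, n_cells, n_nets)
keep = i != j
i, j = i[keep], j[keep]

# graph Laplacian of the netlist
W = sp.coo_matrix((np.ones(i.size), (i, j)), shape=(n_cells, n_cells))
W = W + W.T
L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W

# a few cells are anchored to pad positions, making the system positive definite
fixed = rng.choice(n_cells, 20, replace=False)
pad_x = rng.uniform(0, 100, fixed.size)
anchors = sp.coo_matrix((np.full(fixed.size, 10.0), (fixed, fixed)),
                        shape=(n_cells, n_cells))
A = (L + anchors).tocsr()
b = np.zeros(n_cells)
b[fixed] = 10.0 * pad_x

x, info = cg(A, b)                 # x-coordinates of all cells
print("CG converged:", info == 0, " mean x:", x.mean())
```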

  6. Mechanistic analysis of Zein nanoparticles/PLGA triblock in situ forming implants for glimepiride.

    PubMed

    Ahmed, Osama Abdelhakim Aly; Zidan, Ahmed Samir; Khayat, Maan

    2016-01-01

    The study aims at applying pharmaceutical nanotechnology and D-optimal fractional factorial design to screen and optimize the high-risk variables affecting the performance of a complex drug delivery system consisting of glimepiride-Zein nanoparticles and inclusion of the optimized formula with thermoresponsive triblock copolymers in in situ gel. Sixteen nanoparticle formulations were prepared by liquid-liquid phase separation method according to the D-optimal fractional factorial design encompassing five variables at two levels. The responses investigated were glimepiride entrapment capacity (EC), particle size and size distribution, zeta potential, and in vitro drug release from the prepared nanoparticles. Furthermore, the feasibility of embedding the optimized Zein-based glimepiride nanoparticles within thermoresponsive triblock copolymers poly(lactide-co-glycolide)-block-poly(ethylene glycol)-block-poly(lactide-co-glycolide) in in situ gel was evaluated for controlling glimepiride release rate. Through the systematic optimization phase, improvement of glimepiride EC of 33.6%, nanoparticle size of 120.9 nm with a skewness value of 0.2, zeta potential of 11.1 mV, and sustained release features of 3.3% and 17.3% drug released after 2 and 24 hours, respectively, were obtained. These desirability functions were obtained at Zein and glimepiride loadings of 50 and 75 mg, respectively, utilizing didodecyldimethylammonium bromide as a stabilizer at 0.1% and 90% ethanol as a common solvent. Moreover, incorporating this optimized formulation in triblock copolymers-based in situ gel demonstrated pseudoplastic behavior with reduction of drug release rate as the concentration of polymer increased. This approach to control the release of glimepiride using Zein nanoparticles/triblock copolymers-based in situ gel forming intramuscular implants could be useful for improving diabetes treatment effectiveness.

  7. Oral bioavailability enhancement of raloxifene by developing microemulsion using D-optimal mixture design: optimization and in-vivo pharmacokinetic study.

    PubMed

    Shah, Nirmal; Seth, Avinashkumar; Balaraman, R; Sailor, Girish; Javia, Ankur; Gohil, Dipti

    2018-04-01

    The objective of this work was to utilize the potential of microemulsions to improve the oral bioavailability of raloxifene hydrochloride, a BCS class-II drug with 2% bioavailability. The drug-loaded microemulsion was prepared by the water titration method using Capmul MCM C8, Tween 20, and polyethylene glycol 400 as oil, surfactant, and co-surfactant, respectively. The pseudo-ternary phase diagram was constructed between the oil and the surfactant mixture to obtain appropriate components and concentration ranges that result in a large microemulsion existence area. A D-optimal mixture design was utilized as a statistical tool for optimization of the microemulsion, considering oil, Smix, and water as independent variables, with percentage transmittance and globule size as dependent variables. The optimized formulation showed 100 ± 0.1% transmittance and a 17.85 ± 2.78 nm globule size, in close agreement with the predicted values of the dependent variables given by the design expert software. The optimized microemulsion showed a pronounced enhancement in release rate compared to a plain drug suspension, following a diffusion-controlled release mechanism described by the Higuchi model. The formulation showed a zeta potential of -5.88 ± 1.14 mV, which imparts good stability to the drug-loaded microemulsion dispersion. Surface morphology studies with a transmission electron microscope showed discrete, spherical, nano-sized globules with smooth surfaces. An in-vivo pharmacokinetic study of the optimized microemulsion formulation in Wistar rats showed a 4.29-fold enhancement in bioavailability. The stability study showed adequate results for the various parameters monitored over six months. These results reveal the potential of microemulsions for significantly improving the oral bioavailability of poorly soluble raloxifene hydrochloride.

  8. Choosing non-redundant representative subsets of protein sequence data sets using submodular optimization.

    PubMed

    Libbrecht, Maxwell W; Bilmes, Jeffrey A; Noble, William Stafford

    2018-04-01

    Selecting a non-redundant representative subset of sequences is a common step in many bioinformatics workflows, such as the creation of non-redundant training sets for sequence and structural models or selection of "operational taxonomic units" from metagenomics data. Previous methods for this task, such as CD-HIT, PISCES, and UCLUST, apply a heuristic threshold-based algorithm that has no theoretical guarantees. We propose a new approach based on submodular optimization. Submodular optimization, a discrete analogue to continuous convex optimization, has been used with great success for other representative set selection problems. We demonstrate that the submodular optimization approach results in representative protein sequence subsets with greater structural diversity than sets chosen by existing methods, using as a gold standard the SCOPe library of protein domain structures. In this setting, submodular optimization consistently yields protein sequence subsets that include more SCOPe domain families than sets of the same size selected by competing approaches. We also show how the optimization framework allows us to design a mixture objective function that performs well for both large and small representative sets. The framework we describe is the best possible in polynomial time (under some assumptions), and it is flexible and intuitive because it applies a suite of generic methods to optimize one of a variety of objective functions. © 2018 Wiley Periodicals, Inc.
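
    A standard way to realize this is greedy maximization of a facility-location objective, sketched below: each sequence is credited with its similarity to the closest selected representative, and the greedy rule carries the usual (1 - 1/e) approximation guarantee for monotone submodular functions. The similarity matrix here is random; in practice it would come from pairwise sequence identities. This is a generic illustration, not the authors' exact mixture objective.

```python
# Minimal greedy maximization of a facility-location objective for picking a
# representative subset. The similarity matrix is random and illustrative.
import numpy as np

def greedy_facility_location(sim, k):
    """Greedily select k indices maximizing sum_i max_{j in S} sim[i, j]."""
    n = sim.shape[0]
    selected, best_cover = [], np.zeros(n)
    for _ in range(k):
        # marginal coverage if each candidate j were added
        gains = np.maximum(sim, best_cover[:, None]).sum(axis=0) - best_cover.sum()
        gains[selected] = -np.inf                 # never re-pick a selected item
        j = int(np.argmax(gains))
        selected.append(j)
        best_cover = np.maximum(best_cover, sim[:, j])
    return selected

rng = np.random.default_rng(6)
S = rng.random((500, 500))
S = (S + S.T) / 2                                 # symmetric "similarity"
np.fill_diagonal(S, 1.0)
print("representatives:", greedy_facility_location(S, 10))
```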

  9. Design of optimized piezoelectric HDD-sliders

    NASA Astrophysics Data System (ADS)

    Nakasone, Paulo H.; Yoo, Jeonghoon; Silva, Emilio C. N.

    2010-04-01

    As storage data density in hard-disk drives (HDDs) increases for constant or shrinking sizes, precise positioning of HDD heads becomes a more relevant issue to ensure that enormous amounts of data are properly written and read. Since the traditional single-stage voice coil motor (VCM) cannot satisfy the positioning requirement of high-density tracks-per-inch (TPI) HDDs, dual-stage servo systems have been proposed to overcome this limitation, using VCMs to coarsely move the HDD head while piezoelectric actuators provide fine and fast positioning. Thus, the aim of this work is to apply the topology optimization method (TOM) to design novel piezoelectric HDD heads by finding the optimal placement of base-plate and piezoelectric material for high-precision positioning of HDD heads. The topology optimization method is a structural optimization technique that combines the finite element method (FEM) with optimization algorithms. The laminated finite element employs the MITC (mixed interpolation of tensorial components) formulation to provide accurate and reliable results. The topology optimization uses a rational approximation of material properties to vary the material properties between 'void' and 'filled' portions. The design problem consists in generating optimal structures that provide maximal displacements, appropriate structural stiffness, and avoidance of resonance phenomena. These requirements are achieved by applying formulations to maximize displacements, minimize structural compliance, and maximize resonance frequencies. This paper presents the implementation of the algorithms and shows results confirming the feasibility of this approach.

  10. Structural Optimization of Triboelectric Nanogenerator for Harvesting Water Wave Energy.

    PubMed

    Jiang, Tao; Zhang, Li Min; Chen, Xiangyu; Han, Chang Bao; Tang, Wei; Zhang, Chi; Xu, Liang; Wang, Zhong Lin

    2015-12-22

    Ocean waves are one of the most abundant energy sources on earth, but harvesting such energy is rather challenging due to various limitations of current technologies. Recently, networks formed by triboelectric nanogenerators (TENGs) have been proposed as a promising technology for harvesting water wave energy. In this work, a basic unit for the TENG network was studied and optimized; it has a box structure whose walls are TENGs, each composed of a wavy-structured Cu-Kapton-Cu film and two FEP thin films, with a metal ball enclosed inside. By combining theoretical calculations and experimental studies, the output performance of the TENG unit was investigated for various structural parameters, such as the size, mass, or number of the metal balls. From a theoretical viewpoint, the output characteristics of the TENG during its collision with the ball were numerically calculated by the finite element method and an interpolation method, and there exists an optimum ball size or mass that maximizes the output power and electric energy. Moreover, the theoretical results were well verified by the experimental tests. The present work provides guidance for the structural optimization of wavy-structured TENGs for effectively harvesting water wave energy toward the dream of large-scale blue energy.

  11. A generalized sizing method for revolutionary concepts under probabilistic design constraints

    NASA Astrophysics Data System (ADS)

    Nam, Taewoo

    Internal combustion (IC) engines that consume hydrocarbon fuels have dominated the propulsion systems of air-vehicles for the first century of aviation. In recent years, however, growing concern over rapid climate changes and national energy security has galvanized the aerospace community into delving into new alternatives that could challenge the dominance of the IC engine. Nevertheless, traditional aircraft sizing methods have significant shortcomings for the design of such unconventionally powered aircraft. First, the methods are specialized for aircraft powered by IC engines, and thus are not flexible enough to assess revolutionary propulsion concepts that produce propulsive thrust through a completely different energy conversion process. Another deficiency associated with the traditional methods is that a user of these methods must rely heavily on experts' experience and advice for determining appropriate design margins. However, the introduction of revolutionary propulsion systems and energy sources is very likely to entail an unconventional aircraft configuration, which inexorably disqualifies the conjecture of such "connoisseurs" as a means of risk management. Motivated by such deficiencies, this dissertation aims at advancing two aspects of aircraft sizing: (1) to develop a generalized aircraft sizing formulation applicable to a wide range of unconventionally powered aircraft concepts and (2) to formulate a probabilistic optimization technique that is able to quantify appropriate design margins that are tailored towards the level of risk deemed acceptable to a decision maker. A more generalized aircraft sizing formulation, named the Architecture Independent Aircraft Sizing Method (AIASM), was developed for sizing revolutionary aircraft powered by alternative energy sources by modifying several assumptions of the traditional aircraft sizing method. Along with advances in deterministic aircraft sizing, a non-deterministic sizing technique, named the Probabilistic Aircraft Sizing Method (PASM), was developed. The method allows one to quantify adequate design margins to account for the various sources of uncertainty via the application of the chance-constrained programming (CCP) strategy to AIASM. In this way, PASM can also provide insights into a good compromise between cost and safety.

  12. Preparation of Salicylic Acid Loaded Nanostructured Lipid Carriers Using Box-Behnken Design: Optimization, Characterization and Physicochemical Stability.

    PubMed

    Pantub, Ketrawee; Wongtrakul, Paveena; Janwitayanuchit, Wicharn

    2017-01-01

    Salicylic acid-loaded nanostructured lipid carriers (NLCs-SA) were developed and optimized using design of experiments (DOE). A 3-factor, 3-level Box-Behnken experimental design was applied to optimize nanostructured lipid carriers prepared by an emulsification method. The independent variables were total lipid concentration (X1), stearic acid to Lexol® GT-865 ratio (X2), and Tween® 80 concentration (X3), while particle size was the dependent variable (Y). The Box-Behnken design generated 15 runs, with the response optimizer set to minimize particle size. The optimized formulation consisted of 10% total lipid, a 4:1 mixture of stearic acid and capric/caprylic triglyceride, and 25% Tween® 80; this formulation was used to prepare both salicylic acid-loaded and unloaded carriers. Twenty-four hours after preparation, the particle sizes of the loaded and unloaded carriers were 189.62±1.82 nm and 369.00±3.37 nm, respectively. Response surface analysis revealed that the amount of total lipid is the main factor affecting the particle size of the lipid carriers. In addition, the stability studies showed a significant change in particle size over time. Compared to unloaded nanoparticles, the addition of salicylic acid into the particles resulted in a physically stable dispersion. After 30 days, sedimentation of unloaded lipid carriers was clearly observed. Absolute zeta potential values of both systems were in the range of 3 to 18 mV, since the non-ionic surfactant Tween® 80, which provides a steric barrier, was used. Differential thermograms indicated a shift of the endothermic peak from 55°C (α-crystal form) in freshly prepared samples to 60°C (β′-crystal form) in stored samples. The presence of capric/caprylic triglyceride oil was found to enhance encapsulation efficiency up to 80% and to improve the stability of the particles.
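
    The coded design matrix for a 3-factor Box-Behnken design and a quadratic response-surface fit can be written in a few lines, as sketched below. The simulated particle-size response is invented; only the 15-run design structure follows the description.

```python
# Sketch of a 3-factor, 3-level Box-Behnken design (12 edge runs + 3 center
# points = 15 runs) and a quadratic response-surface fit. The simulated
# response values are illustrative, not the study's measurements.
import numpy as np
from itertools import product

# 12 edge runs: two factors at +/-1, the third held at 0
design = []
for pair in [(0, 1), (0, 2), (1, 2)]:
    for a, b in product([-1, 1], repeat=2):
        run = [0, 0, 0]
        run[pair[0]], run[pair[1]] = a, b
        design.append(run)
design += [[0, 0, 0]] * 3                      # 3 center points -> 15 runs
X = np.array(design, dtype=float)

# simulated response: particle size (nm), smaller is better
rng = np.random.default_rng(9)
y = (250 + 40 * X[:, 0] - 25 * X[:, 2] + 15 * X[:, 0] * X[:, 1]
     + 20 * X[:, 0] ** 2 + rng.normal(0, 5, len(X)))

# full quadratic model: intercept, linear, two-factor interaction, squared terms
cols = [np.ones(len(X))] + [X[:, i] for i in range(3)]
cols += [X[:, i] * X[:, j] for i in range(3) for j in range(i + 1, 3)]
cols += [X[:, i] ** 2 for i in range(3)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("fitted quadratic coefficients:", np.round(coef, 1))
```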

  13. Optimization of scat detection methods for a social ungulate, the wild pig, and experimental evaluation of factors affecting detection of scat

    USGS Publications Warehouse

    Keiter, David A.; Cunningham, Fred L.; Rhodes, Olin E.; Irwin, Brian J.; Beasley, James

    2016-01-01

    Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. Knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.

  14. Optimization of scat detection methods for a social ungulate, the wild pig, and experimental evaluation of factors affecting detection of scat

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, David A.; Cunningham, Fred L.; Rhodes, Jr., Olin E.

    Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. In conclusion, knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.

  15. Optimization of Scat Detection Methods for a Social Ungulate, the Wild Pig, and Experimental Evaluation of Factors Affecting Detection of Scat.

    PubMed

    Keiter, David A; Cunningham, Fred L; Rhodes, Olin E; Irwin, Brian J; Beasley, James C

    2016-01-01

    Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. Knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.

  16. Optimization of scat detection methods for a social ungulate, the wild pig, and experimental evaluation of factors affecting detection of scat

    DOE PAGES

    Keiter, David A.; Cunningham, Fred L.; Rhodes, Jr., Olin E.; ...

    2016-05-25

    Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. In conclusion, knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.

  17. Modeling and Optimization for Morphing Wing Concept Generation

    NASA Technical Reports Server (NTRS)

    Skillen, Michael D.; Crossley, William A.

    2007-01-01

    This report consists of two major parts: 1) the approach to develop morphing wing weight equations, and 2) the approach to size morphing aircraft. Combined, these techniques allow the morphing aircraft to be sized with estimates of the morphing wing weight that are more credible than estimates currently available; aircraft sizing results prior to this study incorporated morphing wing weight estimates based on general heuristics for fixed-wing flaps (a comparable "morphing" component) but, in general, these results were unsubstantiated. This report will show that the method of morphing wing weight prediction does, in fact, drive the aircraft sizing code to different results and that accurate morphing wing weight estimates are essential to credible aircraft sizing results.

  18. Bioresorbable scaffolds for bone tissue engineering: optimal design, fabrication, mechanical testing and scale-size effects analysis.

    PubMed

    Coelho, Pedro G; Hollister, Scott J; Flanagan, Colleen L; Fernandes, Paulo R

    2015-03-01

    Bone scaffolds for tissue regeneration require an optimal trade-off between biological and mechanical criteria. Optimal designs may be obtained using topology optimization (homogenization approach) and prototypes produced using additive manufacturing techniques. However, the process from design to manufacture remains a research challenge and will be a requirement of FDA design controls for engineered scaffolds. This work investigates how the design-to-manufacture chain affects the reproducibility of complex optimized design characteristics in the manufactured product. The design and prototypes are analyzed taking into account the computational assumptions and the final mechanical properties determined through mechanical tests. The scaffold is an assembly of unit cells, and thus scale-size effects on the mechanical response considering finite periodicity are investigated and compared with the predictions from the homogenization method, which assumes in the limit infinitely repeated unit cells. Results show that a limited number of unit cells (3-5 repeated on a side) introduces some scale effects, but the discrepancies are below 10%. Higher discrepancies are found when comparing the experimental data to numerical simulations, due to differences between the manufactured and designed scaffold feature shapes and sizes as well as micro-porosities introduced by the manufacturing process. However, good regression correlations (R² > 0.85) were found between numerical and experimental values, with slopes close to 1 for 2 out of 3 designs. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  19. Optimal Trajectories for the Helicopter in One-Engine-Inoperative Terminal-Area Operations

    NASA Technical Reports Server (NTRS)

    Zhao, Yiyuan; Chen, Robert T. N.

    1996-01-01

    This paper presents a summary of a series of recent analytical studies conducted to investigate One-Engine-Inoperative (OEI) optimal control strategies and the associated optimal trajectories for a twin-engine helicopter in Category-A terminal-area operations. These studies also examine the associated heliport size requirements and the maximum gross weight capability of the helicopter. Using an eight-state, two-control augmented point-mass model representative of the study helicopter, Continued TakeOff (CTO), Rejected TakeOff (RTO), Balked Landing (BL), and Continued Landing (CL) are investigated for both Vertical-TakeOff-and-Landing (VTOL) and Short-TakeOff-and-Landing (STOL) terminal-area operations. The formulation of the nonlinear optimal control problems with considerations for realistic constraints, solution methods for the two-point boundary-value problem, a new real-time generation method for the optimal OEI trajectories, and the main results of this series of trajectory optimization studies are presented. In particular, a new balanced-weight concept for determining the takeoff decision point for VTOL Category-A operations is proposed, extending the balanced-field length concept used for STOL operations.

  20. An optimized treatment for algorithmic differentiation of an important glaciological fixed-point problem

    DOE PAGES

    Goldberg, Daniel N.; Narayanan, Sri Hari Krishna; Hascoet, Laurent; ...

    2016-05-20

    We apply an optimized method to the adjoint generation of a time-evolving land ice model through algorithmic differentiation (AD). The optimization involves a special treatment of the fixed-point iteration required to solve the nonlinear stress balance, which differs from a straightforward application of AD software, and leads to smaller memory requirements and in some cases shorter computation times of the adjoint. The optimization is done via implementation of the algorithm of Christianson (1994) for reverse accumulation of fixed-point problems, with the AD tool OpenAD. For test problems, the optimized adjoint is shown to have far lower memory requirements, potentially enabling larger problem sizes on memory-limited machines. In the case of the land ice model, implementation of the algorithm allows further optimization by having the adjoint model solve a sequence of linear systems with identical (as opposed to varying) matrices, greatly improving performance. Finally, the methods introduced here will be of value to other efforts applying AD tools to ice models, particularly ones which solve a hybrid shallow ice/shallow shelf approximation to the Stokes equations.
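
    The core of reverse accumulation for a fixed-point problem can be shown on a toy scalar example: converge the forward fixed point, then iterate the adjoint equation to its own fixed point instead of taping every forward iteration. The sketch below is a minimal illustration in that spirit, with a made-up map f; it is not the land ice model or OpenAD output.

```python
# Minimal scalar illustration of reverse accumulation for a fixed-point
# problem (in the spirit of Christianson, 1994). The map f and objective are
# toy stand-ins for the nonlinear stress balance and the model's cost function.
import numpy as np

def f(x, p):                    # fixed-point map: x* = f(x*, p)
    return 0.5 * np.cos(x) + p

def dfdx(x, p):
    return -0.5 * np.sin(x)

def dfdp(x, p):
    return 1.0

def objective_grad(p, tol=1e-12):
    # forward fixed-point solve
    x = 0.0
    while abs(f(x, p) - x) > tol:
        x = f(x, p)
    J = x ** 2                  # objective of interest
    dJdx = 2.0 * x
    # reverse accumulation: iterate the adjoint to its own fixed point
    w = 0.0
    while abs(dfdx(x, p) * w + dJdx - w) > tol:
        w = dfdx(x, p) * w + dJdx
    return J, dfdp(x, p) * w    # dJ/dp

p = 0.3
J, g = objective_grad(p)
eps = 1e-6
Jp, _ = objective_grad(p + eps)
print("adjoint gradient:", g, " finite difference:", (Jp - J) / eps)
```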

  1. Can physical activity improve peak bone mass?

    PubMed

    Specker, Bonny; Minett, Maggie

    2013-09-01

    The pediatric origin of osteoporosis has led many investigators to focus on determining factors that influence bone gain during growth and methods for optimizing this gain. Bone responds to bone loading activities by increasing mass or size. Overall, pediatric studies have found a positive effect of bone loading on bone size and accrual, but the types of loads necessary for a bone response have only recently been investigated in human studies. Findings indicate that responses vary by sex, maturational status, and are site-specific. Estrogen status, body composition, and nutritional status also may influence the bone response to loading. Despite the complex interrelationships among these various factors, it is prudent to conclude that increased physical activity throughout life is likely to optimize bone health.

  2. Power Distribution System Planning with GIS Consideration

    NASA Astrophysics Data System (ADS)

    Wattanasophon, Sirichai; Eua-Arporn, Bundhit

    This paper proposes a method for solving radial distribution system planning problems taking into account geographical information. The proposed method can automatically determine the appropriate location and size of a substation, the routing of feeders, and the sizes of conductors while satisfying all constraints, i.e., technical constraints (voltage drop and thermal limit) and geographical constraints (obstacles, existing infrastructure, and high-cost passages). Sequential quadratic programming (SQP) and a minimum path algorithm (MPA) are applied to solve the planning problem based on net present value (NPV) considerations. In addition, this method integrates the planner's experience with the optimization process to achieve an appropriate practical solution. The proposed method has been tested on an actual distribution system, and the results indicate that it can provide satisfactory plans.

  3. Value for money? A contingent valuation study of the optimal size of the Swedish health care budget.

    PubMed

    Eckerlund, I; Johannesson, M; Johansson, P O; Tambour, M; Zethraeus, N

    1995-11-01

    The contingent valuation method has been developed in the environmental field to measure the willingness to pay for environmental changes using survey methods. In this exploratory study the contingent valuation method was used to analyse how much individuals are willing to spend in total, in the form of taxes, on health care in Sweden, i.e. to analyse the optimal size of the 'health care budget' in Sweden. A binary contingent valuation question was included in a telephone survey of a random sample of 1260 households in Sweden. With a conservative interpretation of the data, the results show that 50% of the respondents would accept an increased tax payment for health care of about SEK 60 per month ($1 = SEK 8). It is concluded that the results indicate that the population overall thinks that the current spending on health care in Sweden is at a reasonable level. There seems to be a willingness to increase tax payments somewhat, but major increases do not seem acceptable to a majority of the population.
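
    A binary contingent valuation question is typically analysed by fitting an acceptance curve to the yes/no answers collected at different bid levels; the median willingness to pay is the bid at which the acceptance probability is 50%. The sketch below does this with a simple logistic regression on synthetic data; the survey responses themselves are not reproduced here.

```python
# Illustrative analysis of a binary ("take it or leave it") contingent
# valuation question: fit a logistic acceptance curve to yes/no answers at
# different monthly tax bids and read off the median willingness to pay (WTP).
# The respondent data below are synthetic, not the Swedish survey results.
import numpy as np

rng = np.random.default_rng(7)
bids = rng.choice([20, 40, 60, 80, 120, 160], size=1260)   # offered bids [SEK/month]
true_wtp = rng.normal(60, 40, size=1260)                    # latent WTP (assumed)
yes = (true_wtp > bids).astype(float)                       # accept if WTP exceeds bid

# logistic regression of P(yes) on bid via Newton-Raphson
X = np.column_stack([np.ones_like(bids, dtype=float), bids.astype(float)])
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    W = p * (1 - p)
    grad = X.T @ (yes - p)
    H = X.T @ (X * W[:, None])
    beta += np.linalg.solve(H, grad)

median_wtp = -beta[0] / beta[1]        # bid at which P(yes) = 0.5
print("estimated median WTP: about SEK %.0f per month" % median_wtp)
```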

  4. Experimental evaluation of optimization method for developing ultraviolet barrier coatings

    NASA Astrophysics Data System (ADS)

    Gonome, Hiroki; Okajima, Junnosuke; Komiya, Atsuki; Maruyama, Shigenao

    2014-01-01

    Ultraviolet (UV) barrier coatings can be used to protect many industrial products from UV attack. This study introduces a method of optimizing UV barrier coatings using pigment particles. The radiative properties of the pigment particles were evaluated theoretically, and the optimum particle size was determined from the absorption efficiency and the back-scattering efficiency. UV barrier coatings were prepared with zinc oxide (ZnO) and titanium dioxide (TiO2). The transmittance of the UV barrier coating was calculated theoretically. The radiative transfer in the UV barrier coating was modeled using the radiation element method by ray emission model (REM2). In order to validate the calculated results, the transmittances of these coatings were measured with a spectrophotometer. A UV barrier coating with low UV transmittance and high visible transmittance could be achieved, and the calculated transmittance showed a spectral tendency similar to the measured one. Effective UV barrier coatings can thus be achieved through optical engineering, by using appropriate particles with optimum size, coating thickness, and volume fraction.

  5. Extending rule-based methods to model molecular geometry and 3D model resolution.

    PubMed

    Hoard, Brittany; Jacobson, Bruna; Manavi, Kasra; Tapia, Lydia

    2016-08-01

    Computational modeling is an important tool for the study of complex biochemical processes associated with cell signaling networks. However, it is challenging to simulate processes that involve hundreds of large molecules due to the high computational cost of such simulations. Rule-based modeling is a method that can be used to simulate these processes with reasonably low computational cost, but traditional rule-based modeling approaches do not include details of molecular geometry. The incorporation of geometry into biochemical models can more accurately capture details of these processes, and may lead to insights into how geometry affects the products that form. Furthermore, geometric rule-based modeling can be used to complement other computational methods that explicitly represent molecular geometry in order to quantify binding site accessibility and steric effects. We propose a novel implementation of rule-based modeling that encodes details of molecular geometry into the rules and binding rates. We demonstrate how rules are constructed according to the molecular curvature. We then perform a study of antigen-antibody aggregation using our proposed method. We simulate the binding of antibody complexes to binding regions of the shrimp allergen Pen a 1 using a previously developed 3D rigid-body Monte Carlo simulation, and we analyze the aggregate sizes. Then, using our novel approach, we optimize a rule-based model according to the geometry of the Pen a 1 molecule and the data from the Monte Carlo simulation. We use the distances between the binding regions of Pen a 1 to optimize the rules and binding rates. We perform this procedure for multiple conformations of Pen a 1 and analyze the impact of conformation and resolution on the optimal rule-based model. We find that the optimized rule-based models provide information about the average steric hindrance between binding regions and the probability that antibodies will bind to these regions. These optimized models quantify the variation in aggregate size that results from differences in molecular geometry and from model resolution.

  6. Bicriteria Network Optimization Problem using Priority-based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Gen, Mitsuo; Lin, Lin; Cheng, Runwei

    Network optimization is an increasingly important and fundamental issue in fields such as engineering, computer science, operations research, transportation, telecommunication, decision support systems, manufacturing, and airline scheduling. In many applications, however, there are several criteria associated with traversing each edge of a network. For example, both cost and flow measures are important in networks. As a result, there has been recent interest in solving the Bicriteria Network Optimization Problem, which is known to be NP-hard. The efficient set of paths may be very large, possibly exponential in size, so the computational effort required to solve the problem can increase exponentially with the problem size in the worst case. In this paper, we propose a genetic algorithm (GA) approach that uses a priority-based chromosome for solving the bicriteria network optimization problem, including the maximum flow (MXF) model and the minimum cost flow (MCF) model. The objective is to find the set of Pareto-optimal solutions that give the maximum possible flow at minimum cost. This paper also incorporates the Adaptive Weight Approach (AWA), which uses information from the current population to readjust weights and obtain search pressure toward a positive ideal point. Computer simulations on several difficult-to-solve network design problems demonstrate the effectiveness of the proposed method.
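
    The priority-based encoding can be illustrated with a small decoder: each node carries a priority gene, and a path is grown from the source by repeatedly moving to the unvisited neighbour with the highest priority. The sketch below shows such a decoder on a toy network; the GA itself (selection, crossover, and the AWA weighting of the two criteria) is omitted, and the network is an assumption for illustration.

```python
# Sketch of decoding a priority-based chromosome into a path. In the GA, the
# fitness of the decoded path would then be evaluated on both criteria
# (flow and cost); here only the decoding step is shown on a toy network.
import numpy as np

def decode_path(priorities, adj, source, sink):
    path, node, visited = [source], source, {source}
    while node != sink:
        candidates = [v for v in adj[node] if v not in visited]
        if not candidates:
            return None                          # dead end: infeasible chromosome
        node = max(candidates, key=lambda v: priorities[v])
        visited.add(node)
        path.append(node)
    return path

# toy directed network as adjacency lists
adj = {0: [1, 2], 1: [3, 2], 2: [3, 4], 3: [5], 4: [5], 5: []}
rng = np.random.default_rng(8)
chromosome = rng.permutation(6)                  # one priority gene per node
print("priorities:", chromosome, "-> path:", decode_path(chromosome, adj, 0, 5))
```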

  7. Microfluidic sorting of protein nanocrystals by size for X-ray free-electron laser diffraction

    PubMed Central

    Abdallah, Bahige G.; Zatsepin, Nadia A.; Roy-Chowdhury, Shatabdi; Coe, Jesse; Conrad, Chelsie E.; Dörner, Katerina; Sierra, Raymond G.; Stevenson, Hilary P.; Camacho-Alanis, Fernanda; Grant, Thomas D.; Nelson, Garrett; James, Daniel; Calero, Guillermo; Wachter, Rebekka M.; Spence, John C. H.; Weierstall, Uwe; Fromme, Petra; Ros, Alexandra

    2015-01-01

    The advent and application of the X-ray free-electron laser (XFEL) has uncovered the structures of proteins that could not previously be solved using traditional crystallography. While this new technology is powerful, optimization of the process is still needed to improve data quality and analysis efficiency. One area is sample heterogeneity, where variations in crystal size (among other factors) lead to the requirement of large data sets (and thus 10–100 mg of protein) for determining accurate structure factors. To decrease sample dispersity, we developed a high-throughput microfluidic sorter operating on the principle of dielectrophoresis, whereby polydisperse particles can be transported into various fluid streams for size fractionation. Using this microsorter, we isolated several milliliters of photosystem I nanocrystal fractions ranging from 200 to 600 nm in size as characterized by dynamic light scattering, nanoparticle tracking, and electron microscopy. Sorted nanocrystals were delivered in a liquid jet via the gas dynamic virtual nozzle into the path of the XFEL at the Linac Coherent Light Source. We obtained diffraction to ∼4 Å resolution, indicating that the small crystals were not damaged by the sorting process. We also observed the shape transforms of photosystem I nanocrystals, demonstrating that our device can optimize data collection for the shape transform-based phasing method. Using simulations, we show that narrow crystal size distributions can significantly improve merged data quality in serial crystallography. From this proof-of-concept work, we expect that the automated size-sorting of protein crystals will become an important step for sample production by reducing the amount of protein needed for a high quality final structure and the development of novel phasing methods that exploit inter-Bragg reflection intensities or use variations in beam intensity for radiation damage-induced phasing. This method will also permit an analysis of the dependence of crystal quality on crystal size. PMID:26798818

  8. Microfluidic sorting of protein nanocrystals by size for X-ray free-electron laser diffraction

    DOE PAGES

    Abdallah, Bahige G.; Zatsepin, Nadia A.; Roy-Chowdhury, Shatabdi; ...

    2015-08-19

    We report that the advent and application of the X-ray free-electron laser (XFEL) has uncovered the structures of proteins that could not previously be solved using traditional crystallography. While this new technology is powerful, optimization of the process is still needed to improve data quality and analysis efficiency. One area is sample heterogeneity, where variations in crystal size (among other factors) lead to the requirement of large data sets (and thus 10–100 mg of protein) for determining accurate structure factors. To decrease sample dispersity, we developed a high-throughput microfluidic sorter operating on the principle of dielectrophoresis, whereby polydisperse particles can be transported into various fluid streams for size fractionation. Using this microsorter, we isolated several milliliters of photosystem I nanocrystal fractions ranging from 200 to 600 nm in size as characterized by dynamic light scattering, nanoparticle tracking, and electron microscopy. Sorted nanocrystals were delivered in a liquid jet via the gas dynamic virtual nozzle into the path of the XFEL at the Linac Coherent Light Source. We obtained diffraction to ~4 Å resolution, indicating that the small crystals were not damaged by the sorting process. We also observed the shape transforms of photosystem I nanocrystals, demonstrating that our device can optimize data collection for the shape transform-based phasing method. Using simulations, we show that narrow crystal size distributions can significantly improve merged data quality in serial crystallography. From this proof-of-concept work, we expect that the automated size-sorting of protein crystals will become an important step for sample production by reducing the amount of protein needed for a high quality final structure and the development of novel phasing methods that exploit inter-Bragg reflection intensities or use variations in beam intensity for radiation damage-induced phasing. Ultimately, this method will also permit an analysis of the dependence of crystal quality on crystal size.

  9. Deposition of Size-Selected Cu Nanoparticles by Inert Gas Condensation

    PubMed Central

    2010-01-01

    Nanometer size-selected Cu clusters in the size range of 1–5 nm have been produced by a plasma-gas-condensation-type cluster deposition apparatus, which combines glow-discharge sputtering with an inert gas condensation technique. With this method, by controlling the experimental conditions, it was possible to produce nanoparticles with strict control over size. The structure and size of the Cu nanoparticles were determined by mass spectroscopy and confirmed by atomic force microscopy (AFM) and scanning transmission electron microscopy (STEM) measurements. In order to preserve the structural and morphological properties, the cluster impact energy was controlled; the acceleration energy of the nanoparticles was kept near 0.1 eV/atom to remain in the soft-landing regime. From measurements in STEM-HAADF mode, we found that the nanoparticle sizes are close to the experimentally set values, as also confirmed by AFM observations. The results are relevant, since they demonstrate that proper optimization of the operating conditions can lead to desired cluster sizes as well as desired cluster size distributions. The efficiency of the method in producing size-selected Cu cluster films, as a random stacking of nanometer-size crystallite assemblies, was also demonstrated. The deposition of size-selected metal clusters represents a novel method of preparing Cu nanostructures, with high potential in optical and catalytic applications. PMID:20652132

  10. Optimal economic order quantity for buyer-distributor-vendor supply chain with backlogging derived without derivatives

    NASA Astrophysics Data System (ADS)

    Teng, Jinn-Tsair; Cárdenas-Barrón, Leopoldo Eduardo; Lou, Kuo-Ren; Wee, Hui Ming

    2013-05-01

    In this article, we first correct an error in the total cost expression of the previously published paper by Chung and Wee [2007, 'Optimizing the Economic Lot Size of a Three-stage Supply Chain with Backlogging Derived Without Derivatives', European Journal of Operational Research, 183, 933-943] on a buyer-distributor-vendor three-stage supply chain with backlogging, derived without derivatives. Then, an arithmetic-geometric inequality method is proposed, not only to simplify the algebraic method of completing perfect squares, but also to address its shortcomings. In addition, we provide a closed-form solution for the integer number of deliveries for the distributor and the vendor without using complex derivatives. Furthermore, our method can solve many cases that their method cannot, because they did not consider that the square root of a negative number does not exist. Finally, we use numerical examples to show that our proposed optimal solution is cheaper to operate than theirs.
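    As a simple illustration of the arithmetic-geometric inequality idea (applied here to the classical single-stage lot-size cost, not the authors' three-stage model; D, K, and h denote demand rate, ordering cost, and unit holding cost in the standard notation), the optimal lot size can be obtained without derivatives:

      % AM-GM: a + b >= 2*sqrt(ab), with equality when a = b
      \[
        TC(Q) \;=\; \frac{DK}{Q} + \frac{hQ}{2}
        \;\ge\; 2\sqrt{\frac{DK}{Q}\cdot\frac{hQ}{2}}
        \;=\; \sqrt{2DKh},
        \qquad
        \text{with equality when } \frac{DK}{Q} = \frac{hQ}{2},
        \text{ i.e. } Q^{*} = \sqrt{\frac{2DK}{h}}.
      \]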

  11. OPTIMIZATION AND VALIDATION OF HPLC METHOD FOR TETRAMETHRIN DETERMINATION IN HUMAN SHAMPOO FORMULATION.

    PubMed

    Zeric Stosic, Marina Z; Jaksic, Sandra M; Stojanov, Igor M; Apic, Jelena B; Ratajac, Radomir D

    2016-11-01

    A high-performance liquid chromatography (HPLC) method with diode array detection (DAD) was optimized and validated for the separation and determination of tetramethrin in an antiparasitic human shampoo. In order to optimize the separation conditions, two different columns, different column oven temperatures, and the mobile phase composition and ratio were tested. The best separation was achieved on the Supelcosil TM LC-18-DB column (4.6 x 250 mm), particle size 5 µm, with a mobile phase of methanol : water (78 : 22, v/v) at a flow rate of 0.8 mL/min and a temperature of 30 °C. The detection wavelength was set at 220 nm. Under the optimum chromatographic conditions, the standard calibration curve showed good linearity (r² = 0.9997). The accuracy of the method, defined as the mean recovery of tetramethrin from the shampoo matrix, was 100.09%. The advantage of this method is that it can easily be used for the routine analysis of tetramethrin in pharmaceutical formulations and in pharmaceutical research involving tetramethrin.

  12. Formulation and Evaluation of Optimized Oxybenzone Microsponge Gel for Topical Delivery

    PubMed Central

    Pawar, Atmaram P.; Gholap, Aditya P.; Kuchekar, Ashwin B.; Bothiraja, C.; Mali, Ashwin J.

    2015-01-01

    Background. Oxybenzone, a broad-spectrum sunscreen agent widely used in the form of lotions and creams, has been reported to cause skin irritation, dermatitis, and systemic absorption. Aim. The objective of the present study was to formulate an oxybenzone-loaded microsponge gel for an enhanced sun protection factor with reduced toxicity. Material and Method. Microsponges for topical delivery of oxybenzone were successfully prepared by the quasi-emulsion solvent diffusion method. The effects of ethyl cellulose and dichloromethane were optimized by a 3² factorial design. The optimized microsponges were dispersed into the hydrogel and further evaluated. Results. The microsponges were spherical with pore sizes in the range of 0.10–0.22 µm. The optimized formulation possessed a particle size and entrapment efficiency of 72 ± 0.77 µm and 96.9 ± 0.52%, respectively. The microsponge gel showed controlled release and was nonirritant to rat skin. In the creep recovery test, it showed the highest recovery, indicating elasticity. The controlled release of oxybenzone from the microsponge and the barrier effect of the gel result in prolonged retention of oxybenzone with reduced permeation. Conclusion. The evaluation study revealed remarkable, enhanced topical retention of oxybenzone for a prolonged period of time. It also showed an enhanced sun protection factor compared to the marketed preparation, with reduced irritation and toxicity. PMID:25789176

  13. Optimization of Process Parameters of Pulsed Electro Deposition Technique for Nanocrystalline Nickel Coating Using Gray Relational Analysis (GRA)

    NASA Astrophysics Data System (ADS)

    Venkatesh, C.; Sundara Moorthy, N.; Venkatesan, R.; Aswinprasad, V.

    The moving parts of machines are always subject to significant wear due to friction, and addressing wear problems is of utmost importance; the difficulty of replacing worn-out parts grows when those parts are very precise. Advances in surface engineering minimize surface wear through the introduction of polycrystalline nano nickel coatings. The enhanced tribological properties of the nano nickel coating were achieved by controlling the grain size and hardness of the surface. In this study, we focus on optimizing the parameters of pulsed electrodeposition to develop such a coating. Taguchi's method coupled with gray relational analysis was employed, with pulse frequency, average current density, and duty cycle as the chief process parameters. The grain size and hardness were considered as responses. In total, nine experiments were conducted as per the L9 design of experiments. Additionally, the response graph method was applied to determine the parameter that most significantly influences both responses. To improve the degree of validation, a confirmation test and predicted gray grade were carried out with the optimized parameters. A significant improvement in gray grade was observed for the optimal parameters.

  14. Numerical study of ultra-low field nuclear magnetic resonance relaxometry utilizing a single axis magnetometer for signal detection.

    PubMed

    Vogel, Michael W; Vegh, Viktor; Reutens, David C

    2013-05-01

    This paper investigates optimal placement of a localized single-axis magnetometer for ultralow field (ULF) relaxometry in view of various sample shapes and sizes. The authors used the finite element method for the numerical analysis to determine the sample magnetic field environment and evaluate the optimal location of the single-axis magnetometer. Given the different samples, the authors analysed the magnetic field distribution around the sample and determined the optimal orientation and possible positions of the sensor to maximize signal strength, that is, the power of the free induction decay. The authors demonstrate that a glass vial with a flat bottom and 10 ml volume is the best structure to achieve the highest signal of the samples studied. This paper demonstrates the importance of taking into account the combined effects of sensor configuration and sample parameters for signal generation prior to designing and constructing ULF systems with a single-axis magnetometer. Through numerical simulations the authors were able to optimize structural parameters, such as sample shape and size and sensor orientation and location, to maximize the measured signal in ultralow field relaxometry.

  15. Novel Dynamic Framed-Slotted ALOHA Using Litmus Slots in RFID Systems

    NASA Astrophysics Data System (ADS)

    Yim, Soon-Bin; Park, Jongho; Lee, Tae-Jin

    Dynamic Framed Slotted ALOHA (DFSA) is one of the most popular protocols for resolving tag collisions in RFID systems. In DFSA, it is widely known that optimal performance is achieved when the frame size is equal to the number of tags, so a reader dynamically adjusts the next frame size according to the current number of tags. It is therefore important to estimate the number of tags accurately. In this paper, we propose a novel tag estimation and identification method using litmus (test) slots for DFSA. We compare the performance of the proposed method with those of existing methods by analysis. We conduct simulations and show that our scheme improves the speed of tag identification.
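    For context, a minimal sketch of the conventional DFSA frame-size update that such schemes build on (not the proposed litmus-slot estimator): the number of unread tags is estimated from the collided slots of the previous frame, here using Schoute's classical factor of roughly 2.39 expected tags per collided slot, and the next frame size is set to that estimate. The size bounds and example counts are illustrative.

      def next_frame_size(n_collision_slots, min_size=16, max_size=256):
          """Set the next DFSA frame size to the estimated number of unread tags.

          Schoute's classical estimate assumes ~2.39 tags per collided slot on average.
          """
          estimated_backlog = round(2.39 * n_collision_slots)
          # Throughput peaks when the frame size matches the number of competing tags
          return int(max(min_size, min(max_size, estimated_backlog)))

      # Example: 38 collided slots observed in the previous frame
      print(next_frame_size(38))  # -> 91 slots for the next frame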

  16. Optimization of gold ore Sumbawa separation using gravity method: Shaking table

    NASA Astrophysics Data System (ADS)

    Ferdana, Achmad Dhaefi; Petrus, Himawan Tri Bayu Murti; Bendiyasa, I. Made; Prijambada, Irfan Dwidya; Hamada, Fumio; Sachiko, Takahi

    2018-04-01

    Most artisanal small-scale gold mining in Indonesia uses the amalgamation method, which has a negative environmental impact around the ore processing area due to the use of mercury. One of the more environmentally friendly methods for gold processing is the gravity method. The shaking table is a piece of gravity-separation equipment used to upgrade the concentrate based on differences in specific gravity. The optimum concentration result is influenced by several variables, such as shaking speed, particle size, and deck slope. In this research, the shaking speed ranged from 100 rpm to 200 rpm, the particle size from -100 + 200 mesh to -200 + 300 mesh, and the deck slope from 3° to 7°. The gold concentration in the concentrate was measured by EDX. The results show that the optimum condition is obtained at a shaking speed of 200 rpm, a deck slope of 7°, and a particle size of -100 + 200 mesh.

  17. Edge roughness evaluation method for quantifying at-size beam blur in electron-beam lithography

    NASA Astrophysics Data System (ADS)

    Yoshizawa, Masaki; Moriya, Shigeru

    2000-07-01

    At-size beam blur at any given pattern size of an electron-beam (EB) direct writer, HL800D, was quantified using the new edge roughness evaluation (ERE) method to optimize the electron-optical system. We characterized the two-dimensional beam-blur dependence on the electron deflection length of the EB direct writer. The results indicate that the beam blur ranged from 45 nm to 56 nm in a deflection field 2520 micrometers square. The new ERE method is based on the experimental finding that the line edge roughness of a resist pattern is inversely proportional to the slope of the Gaussian-distributed quasi-beam-profile (QBP) proposed in this paper. The QBP includes the effects of beam blur, electron forward scattering, acid diffusion in chemically amplified resist (CAR), the development process, and aperture mask quality. The application of the ERE method to investigating beam-blur fluctuation demonstrates its validity in characterizing the electron-optical column conditions of EB projection systems such as SCALPEL and PREVAIL.

  18. Automation and Optimization of Multipulse Laser Zona Drilling of Mouse Embryos During Embryo Biopsy.

    PubMed

    Wong, Christopher Yee; Mills, James K

    2017-03-01

    Laser zona drilling (LZD) is a required step in many embryonic surgical procedures, for example, assisted hatching and preimplantation genetic diagnosis. LZD involves the ablation of the zona pellucida (ZP) using a laser while minimizing potentially harmful thermal effects on critical internal cell structures. The aim was to develop a method for the automation and optimization of multipulse LZD, applied to cleavage-stage embryos. A two-stage optimization is used. The first stage uses computer vision algorithms to identify embryonic structures and determines the optimal ablation zone farthest away from critical structures such as blastomeres. The second stage combines a genetic algorithm with a previously reported thermal analysis of LZD to optimize the combination of laser pulse locations and pulse durations. The goal is to minimize the peak temperature experienced by the blastomeres while creating the desired opening in the ZP. A proof of concept of the proposed LZD automation and optimization method is demonstrated through experiments on mouse embryos with positive results, as adequately sized openings are created. Automation of LZD is feasible and is a viable step toward the automation of embryo biopsy procedures. LZD is a common but delicate procedure performed by human operators using subjective methods to gauge proper technique. Automation of LZD removes human error and increases the success rate of LZD. Although the proposed methods were developed for cleavage-stage embryos, the same methods may be applied to most types of LZD procedures, embryos at different developmental stages, or nonembryonic cells.

  19. Non-linear Multidimensional Optimization for use in Wire Scanner Fitting

    NASA Astrophysics Data System (ADS)

    Henderson, Alyssa; Terzic, Balsa; Hofler, Alicia; Center Advanced Studies of Accelerators Collaboration

    2014-03-01

    To ensure experiment efficiency and quality from the Continuous Electron Beam Accelerator at Jefferson Lab, beam energy, size, and position must be measured. Wire scanners are devices inserted into the beamline to produce measurements from which beam properties are obtained. Extracting physical information from the wire scanner measurements begins by fitting Gaussian curves to the data. This study focuses on optimizing and automating this curve-fitting procedure. We use a hybrid approach combining the efficiency of the Newton Conjugate Gradient (NCG) method with the global convergence of three nature-inspired (NI) optimization approaches: genetic algorithm, differential evolution, and particle swarm. In this Python-implemented approach, augmenting the locally convergent NCG with one of the globally convergent methods ensures the quality, robustness, and automation of curve fitting. After comparing the methods, we establish that, given an initial data-derived guess, each finds a solution with the same chi-square, a measure of the agreement of the fit to the data. NCG is the fastest method, so it is the first to attempt the data fitting. The curve-fitting procedure escalates to one of the globally convergent NI methods only if NCG fails, thereby ensuring a successful fit. This approach yields an optimal signal fit and can be easily applied to similar problems.
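    A minimal sketch of this escalation strategy under stated assumptions: SciPy's conjugate-gradient minimizer stands in for NCG, and differential evolution stands in for the nature-inspired fallback; the Gaussian model, initial-guess heuristics, and bounds are illustrative rather than the authors' exact implementation.

      import numpy as np
      from scipy.optimize import minimize, differential_evolution

      def gaussian(x, amp, mu, sigma, offset):
          return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

      def chi_square(params, x, y):
          return np.sum((y - gaussian(x, *params)) ** 2)

      def fit_wire_scan(x, y):
          # Data-derived initial guess: amplitude, center, width, baseline
          x0 = [y.max() - y.min(), x[np.argmax(y)], (x[-1] - x[0]) / 10.0, y.min()]
          # Fast local fit first (stand-in for NCG)
          res = minimize(chi_square, x0, args=(x, y), method='CG')
          if res.success:
              return res.x
          # Escalate to a globally convergent method only if the local fit fails
          bounds = [(0.0, 2.0 * (y.max() - y.min())),
                    (x.min(), x.max()),
                    (1e-6, x.max() - x.min()),
                    (y.min(), y.max())]
          return differential_evolution(chi_square, bounds, args=(x, y)).x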

  20. Methods for determining the preatmospheric dimensions of meteorites

    NASA Astrophysics Data System (ADS)

    Ustinova, G. K.; Alekseev, V. A.; Lavrukhina, A. K.

    1988-10-01

    Methods are proposed for the determination of the preatmospheric size of a meteorite on the basis of data on its cosmogenic radionuclides. Optimal conditions for the application of each of these methods are presented together with the demonstration of their effectiveness. Estimates of relative dimensions determined by these methods are presented for the Harleton, St. Severin, Lost City, Peace River, Pribram, Dhajala, Innisfree, Bruderheim, Ehole, and Gorlovka chondrites and for the Iardymly, Boguslavka, Treysa, and Sikhote-Alin' iron meteorites.

  1. Dynamic modeling of photothermal interactions for laser-induced interstitial thermotherapy: parameter sensitivity analysis.

    PubMed

    Jiang, S C; Zhang, X X

    2005-12-01

    A two-dimensional model was developed to capture the effects of dynamic changes in the physical properties on tissue temperature and damage, in order to simulate laser-induced interstitial thermotherapy (LITT) treatment procedures with temperature monitoring. A modified Monte Carlo method was used to simulate photon transport in the tissue in the non-uniform optical property field; the finite volume method was used to solve the Pennes bioheat equation for the temperature distribution, and the Arrhenius equation was used to predict the extent of thermal damage. The laser light transport, the heat transfer, and the damage accumulation were calculated iteratively at each time step. The influences of different laser sources, applicator sizes, and irradiation modes on the final damage volume were analyzed to optimize the LITT treatment. The numerical results showed that the damage volume was the smallest for the 1,064-nm laser, with much larger, similar damage volumes for the 980- and 850-nm lasers at normal blood perfusion rates, whereas the damage volume was the largest for the 1,064-nm laser, with significantly smaller, similar damage volumes for the 980- and 850-nm lasers, when blood perfusion was temporally interrupted. The numerical results also showed that variations in applicator size, laser power, heating duration, and temperature monitoring range significantly affected the shapes and sizes of the thermal damage zones, which can therefore be optimized by selecting these parameters appropriately.
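    As a sketch of the Arrhenius damage calculation mentioned above, the damage index Ω is the time integral of a temperature-dependent reaction rate, and Ω ≈ 1 is commonly taken as the coagulation threshold. The frequency factor and activation energy below are illustrative placeholder values, not those used in the paper.

      import numpy as np

      def arrhenius_damage(times, temperatures, A=3.1e98, Ea=6.28e5, R=8.314):
          """Accumulate the Arrhenius thermal damage index Omega.

          times: time points [s]; temperatures: tissue temperature at each point [K]
          A: frequency factor [1/s]; Ea: activation energy [J/mol] (illustrative values)
          """
          rates = A * np.exp(-Ea / (R * np.asarray(temperatures, dtype=float)))
          # Trapezoidal integration of the damage rate over time
          return float(np.sum(0.5 * (rates[1:] + rates[:-1]) * np.diff(times)))

      t = np.linspace(0.0, 300.0, 301)        # 5-minute exposure
      T = 330.0 * np.ones_like(t)             # constant ~57 degC, for illustration
      print(arrhenius_damage(t, T))           # Omega > 1 indicates coagulation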

  2. A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.

    PubMed

    Yu, Qingzhao; Zhu, Lin; Zhu, Han

    2017-11-01

    Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to assign newly recruited patients to the treatment arms more efficiently. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence of changes in the prior distributions on the design. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when the total sample size is fixed, the proposed design can obtain greater power and/or require a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes; the proposed method can further reduce the required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
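    For intuition about variance-minimizing allocation (a frequentist simplification, not the paper's Bayesian machinery): when comparing two arm means, the variance of the estimated difference is minimized by allocating patients in proportion to each arm's standard deviation (Neyman allocation). The sketch below updates the randomization rate from current estimates of those standard deviations; the example numbers are hypothetical.

      def randomization_rate(sd_arm1, sd_arm2):
          """Probability of assigning the next patient to arm 1 under Neyman allocation."""
          return sd_arm1 / (sd_arm1 + sd_arm2)

      # Example: arm 1 responses are twice as variable as arm 2, so roughly
      # two-thirds of newly recruited patients are directed to arm 1
      print(randomization_rate(2.0, 1.0))  # -> 0.666...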

  3. Identification of optimal mask size parameter for noise filtering in 99mTc-methylene diphosphonate bone scintigraphy images.

    PubMed

    Pandey, Anil K; Bisht, Chandan S; Sharma, Param D; ArunRaj, Sreedharan Thankarajan; Taywade, Sameer; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh

    2017-11-01

    99mTc-methylene diphosphonate (99mTc-MDP) bone scintigraphy images have a limited number of counts per pixel. A noise filtering method based on the local statistics of the image produces better results than a linear filter; however, the mask size has a significant effect on image quality. In this study, we have identified the optimal mask size that yields a good smooth bone scan image. Forty-four bone scan images were processed using mask sizes of 3, 5, 7, 9, 11, 13, and 15 pixels. The input and processed images were reviewed in two steps. In the first step, the images were inspected, and the mask sizes that produced images with significant loss of clinical detail in comparison with the input image were excluded. In the second step, the image quality of the 40 sets of images (each set contained the input image and its corresponding three processed images with 3, 5, and 7-pixel masks) was assessed by two nuclear medicine physicians, who selected one good smooth image from each set. The image quality was also assessed quantitatively with a line profile. Fisher's exact test was used to detect statistically significant differences between the image quality obtained with the 5- and 7-pixel masks at a 5% cut-off. A statistically significant difference was found between the image quality obtained with the 5- and 7-pixel masks (P=0.00528). The identified optimal mask size to produce a good smooth image was 7 pixels. The best mask size for the Jong-Sen Lee filter was thus found to be 7×7 pixels, which yielded 99mTc-MDP bone scan images with the highest acceptable smoothness.
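    A minimal sketch of a local-statistics (Lee-type) filter of the kind described above, using SciPy to compute the local mean and variance over a square mask; the 7-pixel mask follows the study's finding, while the noise-variance estimate here is a simple placeholder rather than the authors' exact formulation.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def lee_filter(image, mask_size=7):
          """Adaptive smoothing from local statistics in a mask_size x mask_size window."""
          img = image.astype(float)
          local_mean = uniform_filter(img, size=mask_size)
          local_sq_mean = uniform_filter(img ** 2, size=mask_size)
          local_var = np.maximum(local_sq_mean - local_mean ** 2, 0.0)
          # Placeholder noise-variance estimate for low-count scintigraphy images
          noise_var = np.mean(local_var)
          gain = local_var / (local_var + noise_var + 1e-12)
          # Smooth flat regions strongly, preserve high-variance (edge/lesion) regions
          return local_mean + gain * (img - local_mean)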

  4. Optimal size of pterygium excision for limbal conjunctival autograft using fibrin glue in primary pterygia.

    PubMed

    Hwang, Ho Sik; Cho, Kyong Jin; Rand, Gabriel; Chuck, Roy S; Kwon, Ji Won

    2018-06-07

    In our study we describe a method that optimizes the size of excision and autografting for primary pterygia, along with the use of intraoperative MMC and fibrin glue. Our objective is to propose a simple, optimized pterygium surgical technique with excellent aesthetic outcomes and low rates of recurrence and other adverse events. We performed a retrospective chart review of 78 consecutive patients with stage III primary pterygia who underwent an optimal excision technique by three experienced surgeons. The technique consisted of removal of the pterygium head; excision of the pterygium body and Tenon's layer, limited in proportion to the length of the head; application of intraoperative mitomycin C to the defect; harvest of a superior bulbar limbal conjunctival graft; and adherence of the graft with fibrin glue. Outcomes included operative time, follow-up period, pterygium recurrence, occurrences of incorrectly sized grafts, and other complications. All patients were followed up for more than a year. Of the 78 patients, there were 2 cases of pterygium recurrence (2.6%). There was one case of wound dehiscence secondary to a small-sized donor conjunctiva and one case of an over-sized donor conjunctiva, neither of which required surgical correction. There were no toxic complications associated with the use of mitomycin C. Correlating the excision of the pterygium body and underlying Tenon's layer to the length of the pterygium head, along with the use of intraoperative mitomycin C, limbal conjunctival autografting, and fibrin adhesion, resulted in excellent outcomes with a low rate of recurrence for primary pterygia.

  5. Path Planning Method in Multi-obstacle Marine Environment

    NASA Astrophysics Data System (ADS)

    Zhang, Jinpeng; Sun, Hanxv

    2017-12-01

    In this paper, an improved particle swarm optimization algorithm is proposed for the application of an underwater robot in a complex marine environment. The path planning not only considers obstacle avoidance, but also accounts for the direction and magnitude of the current and their effect on the robot's dynamic performance. The algorithm uses a trunk binary tree structure to construct the path search space, and an A* heuristic search is used in this space to find a reference path. The particle swarm algorithm then optimizes the path by adjusting the evaluation function, which makes the underwater robot easier to control when navigating in the current and reduces energy consumption.

  6. Optimal production lot size and reorder point of a two-stage supply chain while random demand is sensitive with sales teams' initiatives

    NASA Astrophysics Data System (ADS)

    Sankar Sana, Shib

    2016-01-01

    The paper develops a production-inventory model of a two-stage supply chain consisting of one manufacturer and one retailer to study the production lot size/order quantity, the reorder point, and the sales teams' initiatives, where the demand of the end customers depends simultaneously on a random variable and on the sales teams' initiatives. The manufacturer produces the retailer's order quantity in one lot, in which the procurement cost per unit quantity follows a realistic convex function of the production lot size. In the chain, the cost of the sales team's initiatives/promotion efforts and the wholesale price of the manufacturer are negotiated at points such that the optimum profits of both parties come close to their target profits. This study helps the management of firms determine the optimal order quantity/production quantity, reorder point, and sales teams' initiatives/promotional effort in order to achieve maximum profits. An analytical method is applied to determine the optimal values of the decision variables. Finally, numerical examples with graphical presentation and a sensitivity analysis of the key parameters are presented to provide further insights into the model.

  7. Optimization of PCR Condition: The First Study of High Resolution Melting Technique for Screening of APOA1 Variance

    PubMed Central

    Wahyuningsih, Hesty; K Cayami, Ferdy; Bahrudin, Udin; A Sobirin, Mochamad; EP Mundhofir, Farmaditya; MH Faradz, Sultana; Hisatome, Ichiro

    2017-01-01

    Background High resolution melting (HRM) is a post-PCR technique for variant screening and genotyping based on the different melting points of DNA fragments. The advantages of this technique are that it is fast, simple, and efficient and has a high output, particularly for screening of a large number of samples. APOA1 encodes apolipoprotein A1 (apoA1), which is a major component of high density lipoprotein cholesterol (HDL-C). This study aimed to obtain an optimal quantitative polymerase chain reaction (qPCR)-HRM condition for screening of APOA1 variance. Methods Genomic DNA was isolated from a peripheral blood sample using the salting out method. APOA1 was amplified using the RotorGeneQ 5Plex HRM. The PCR product was visualized with the HRM amplification curve and confirmed using gel electrophoresis. The melting profile was confirmed by examining the melting curve. Results Five sets of primers covering the translated region of APOA1 exons were designed, with expected PCR product sizes of 100–400 bp. The amplified segments of DNA were amplicons 2, 3, 4A, 4B, and 4C. Amplicons 2, 3 and 4B were optimized at an annealing temperature of 60 °C with 40 PCR cycles. Amplicon 4A was optimized at an annealing temperature of 62 °C with 45 PCR cycles. Amplicon 4C was optimized at an annealing temperature of 63 °C with 50 PCR cycles. Conclusion In addition to suitable procedures for DNA isolation and quantification, primer design, and an estimated PCR product size, the data of this study showed that appropriate annealing temperature and PCR cycle number were important factors in optimization of the HRM technique for variant screening in APOA1. PMID:28331418

  8. Optimized zein nanospheres for improved oral bioavailability of atorvastatin

    PubMed Central

    Hashem, Fahima M; Al-Sawahli, Majid M; Nasr, Mohamed; Ahmed, Osama AA

    2015-01-01

    Background This work focuses on the development of atorvastatin utilizing zein, a natural, safe, and biocompatible polymer, as a nanosized formulation in order to overcome the poor oral bioavailability (12%) of the drug. Methods Twelve experimental runs of atorvastatin–zein nanosphere formula were formulated by a liquid–liquid phase separation method according to custom fractional factorial design to optimize the formulation variables. The factors studied were: weight % of zein to atorvastatin (X1), pH (X2), and stirring time (X3). Levels for each formulation variable were designed. The selected dependent variables were: mean particle size (Y1), zeta potential (Y2), drug loading efficiency (Y3), drug encapsulation efficiency (Y4), and yield (Y5). The optimized formulation was assayed for compatibility using an X-ray diffraction assay. In vitro diffusion of the optimized formulation was carried out. A pharmacokinetic study was also done to compare the plasma profile of the atorvastatin–zein nanosphere formulation versus atorvastatin oral suspension and the commercially available tablet. Results The optimized atorvastatin–zein formulation had a mean particle size of 183 nm, a loading efficiency of 14.86%, and an encapsulation efficiency of 29.71%. The in vitro dissolution assay displayed an initial burst effect, with a cumulative amount of atorvastatin released of 41.76% and 82.3% after 12 and 48 hours, respectively. In Wistar albino rats, the bioavailability of atorvastatin from the optimized atorvastatin–zein formulation was 3-fold greater than that from the atorvastatin suspension and the commercially available tablet. Conclusion The atorvastatin–zein nanosphere formulation improved the oral delivery and pharmacokinetic profile of atorvastatin by enhancing its oral bioavailability. PMID:26150716

  9. Optimal estimation and scheduling in aquifer management using the rapid feedback control method

    NASA Astrophysics Data System (ADS)

    Ghorbanidehno, Hojat; Kokkinaki, Amalia; Kitanidis, Peter K.; Darve, Eric

    2017-12-01

    Management of water resources systems often involves a large number of parameters, as in the case of large, spatially heterogeneous aquifers, and a large number of "noisy" observations, as in the case of pressure observation in wells. Optimizing the operation of such systems requires both searching among many possible solutions and utilizing new information as it becomes available. However, the computational cost of this task increases rapidly with the size of the problem to the extent that textbook optimization methods are practically impossible to apply. In this paper, we present a new computationally efficient technique as a practical alternative for optimally operating large-scale dynamical systems. The proposed method, which we term Rapid Feedback Controller (RFC), provides a practical approach for combined monitoring, parameter estimation, uncertainty quantification, and optimal control for linear and nonlinear systems with a quadratic cost function. For illustration, we consider the case of a weakly nonlinear uncertain dynamical system with a quadratic objective function, specifically a two-dimensional heterogeneous aquifer management problem. To validate our method, we compare our results with the linear quadratic Gaussian (LQG) method, which is the basic approach for feedback control. We show that the computational cost of the RFC scales only linearly with the number of unknowns, a great improvement compared to the basic LQG control with a computational cost that scales quadratically. We demonstrate that the RFC method can obtain the optimal control values at a greatly reduced computational cost compared to the conventional LQG algorithm with small and controllable losses in the accuracy of the state and parameter estimation.

  10. Comparative study of soft thermal printing and lamination of dry thick photoresist films for the uniform fabrication of polymer MOEMS on small-sized samples

    NASA Astrophysics Data System (ADS)

    Abada, S.; Salvi, L.; Courson, R.; Daran, E.; Reig, B.; Doucet, J. B.; Camps, T.; Bardinal, V.

    2017-05-01

    A method called ‘soft thermal printing’ (STP) was developed to ensure the optimal transfer of 50 µm-thick dry epoxy resist films (DF-1050) on small-sized samples. The aim was the uniform fabrication of high aspect ratio polymer-based MOEMS (micro-optical-electrical-mechanical system) on small and/or fragile samples, such as GaAs. The printing conditions were optimized, and the resulting thickness uniformity profiles were compared to those obtained via lamination and SU-8 standard spin-coating. Under the best conditions tested, STP and lamination produced similar results, with a maximum deviation to the central thickness of 3% along the sample surface, compared to greater than 40% for SU-8 spin-coating. Both methods were successfully applied to the collective fabrication of DF1050-based MOEMS designed for the dynamic focusing of VCSELs (vertical-cavity surface-emitting lasers). Similar, efficient electro-thermo-mechanical behaviour was obtained in both cases.

  11. "Optimal" Size and Schooling: A Relative Concept.

    ERIC Educational Resources Information Center

    Swanson, Austin D.

    Issues in economies of scale and optimal school size are discussed in this paper, which seeks to explain the curvilinear nature of the educational cost curve as a function of "transaction costs" and to establish "optimal size" as a relative concept. Based on the argument that educational consolidation has facilitated diseconomies of scale, the…

  12. The Optimized Fabrication of Nanobubbles as Ultrasound Contrast Agents for Tumor Imaging.

    PubMed

    Cai, Wen Bin; Yang, Heng Li; Zhang, Jian; Yin, Ji Kai; Yang, Yi Lin; Yuan, Li Jun; Zhang, Li; Duan, Yun You

    2015-09-03

    Nanobubbles, which have the potential for ultrasonic targeted imaging and treatment in tumors, have been a research focus in recent years. With the current methods, however, the prepared uniformly sized nanobubbles either undergo post-formulation manipulation, such as centrifugation, after the mixture of microbubbles and nanobubbles, or require the addition of amphiphilic surfactants. These processes influence the nanobubble stability, possibly create material waste, and complicate the preparation process. In the present work, we directly prepared uniformly sized nanobubbles by modulating the thickness of a phospholipid film without the purification processes or the addition of amphiphilic surfactants. The fabricated nanobubbles from the optimal phospholipid film thickness exhibited optimal physical characteristics, such as uniform bubble size, good stability, and low toxicity. We also evaluated the enhanced imaging ability of the nanobubbles both in vitro and in vivo. The in vivo enhancement intensity in the tumor was stronger than that of SonoVue after injection (UCA; 2 min: 162.47 ± 8.94 dB vs. 132.11 ± 5.16 dB, P < 0.01; 5 min: 128.38.47 ± 5.06 dB vs. 68.24 ± 2.07 dB, P < 0.01). Thus, the optimal phospholipid film thickness can lead to nanobubbles that are effective for tumor imaging.

  13. The Optimized Fabrication of Nanobubbles as Ultrasound Contrast Agents for Tumor Imaging

    PubMed Central

    Cai, Wen Bin; Yang, Heng Li; Zhang, Jian; Yin, Ji Kai; Yang, Yi Lin; Yuan, Li Jun; Zhang, Li; Duan, Yun You

    2015-01-01

    Nanobubbles, which have the potential for ultrasonic targeted imaging and treatment in tumors, have been a research focus in recent years. With the current methods, however, the prepared uniformly sized nanobubbles either undergo post-formulation manipulation, such as centrifugation, after the mixture of microbubbles and nanobubbles, or require the addition of amphiphilic surfactants. These processes influence the nanobubble stability, possibly create material waste, and complicate the preparation process. In the present work, we directly prepared uniformly sized nanobubbles by modulating the thickness of a phospholipid film without the purification processes or the addition of amphiphilic surfactants. The fabricated nanobubbles from the optimal phospholipid film thickness exhibited optimal physical characteristics, such as uniform bubble size, good stability, and low toxicity. We also evaluated the enhanced imaging ability of the nanobubbles both in vitro and in vivo. The in vivo enhancement intensity in the tumor was stronger than that of SonoVue after injection (UCA; 2 min: 162.47 ± 8.94 dB vs. 132.11 ± 5.16 dB, P < 0.01; 5 min: 128.38.47 ± 5.06 dB vs. 68.24 ± 2.07 dB, P < 0.01). Thus, the optimal phospholipid film thickness can lead to nanobubbles that are effective for tumor imaging. PMID:26333917

  14. Formulation, optimization and characterization of cationic polymeric nanoparticles of mast cell stabilizing agent using the Box-Behnken experimental design.

    PubMed

    Gajra, Balaram; Patel, Ravi R; Dalwadi, Chintan

    2016-01-01

    The present research work was intended to develop and optimize sustained release of biodegradable chitosan nanoparticles (CSNPs) as delivery vehicle for sodium cromoglicate (SCG) using the circumscribed Box-Behnken experimental design (BBD) and evaluate its potential for oral permeability enhancement. The 3-factor, 3-level BBD was employed to investigate the combined influence of formulation variables on particle size and entrapment efficiency (%EE) of SCG-CSNPs prepared by ionic gelation method. The generated polynomial equation was validated and desirability function was utilized for optimization. Optimized SCG-CSNPs were evaluated for physicochemical, morphological, in-vitro characterizations and permeability enhancement potential by ex-vivo and uptake study using CLSM. SCG-CSNPs exhibited particle size of 200.4 ± 4.06 nm and %EE of 62.68 ± 2.4% with unimodal size distribution having cationic, spherical, smooth surface. Physicochemical and in-vitro characterization revealed existence of SCG in amorphous form inside CSNPs without interaction and showed sustained release profile. Ex-vivo and uptake study showed the permeability enhancement potential of CSNPs. The developed SCG-CSNPs can be considered as promising delivery strategy with respect to improved permeability and sustained drug release, proving importance of CSNPs as potential oral delivery system for treatment of allergic rhinitis. Hence, further studies should be performed for establishing the pharmacokinetic potential of the CSNPs.

  15. Optimized universal color palette design for error diffusion

    NASA Astrophysics Data System (ADS)

    Kolpatzik, Bernd W.; Bouman, Charles A.

    1995-04-01

    Currently, many low-cost computers can only display a palette of 256 colors simultaneously. However, this palette is usually selectable from a very large gamut of available colors. For many applications, this limited palette size imposes a significant constraint on the achievable image quality. We propose a method for designing an optimized universal color palette for use with halftoning methods such as error diffusion. The advantage of a universal color palette is that it is fixed and therefore allows multiple images to be displayed simultaneously. To design the palette, we employ a new vector quantization method known as sequential scalar quantization (SSQ) to allocate the colors in a visually uniform color space. The SSQ method achieves near-optimal allocation, but may be efficiently implemented using a series of lookup tables. When used with error diffusion, SSQ adds little computational overhead and may be used to minimize the visual error in an opponent color coordinate system. We compare the performance of the optimized algorithm to standard error diffusion by evaluating a visually weighted mean-squared-error measure. Our metric is based on the color difference in CIE L*a*b*, but also accounts for the lowpass characteristic of human contrast sensitivity.

  16. Locally adaptive methods for KDE-based random walk models of reactive transport in porous media

    NASA Astrophysics Data System (ADS)

    Sole-Mari, G.; Fernandez-Garcia, D.

    2017-12-01

    Random Walk Particle Tracking (RWPT) coupled with Kernel Density Estimation (KDE) has been recently proposed to simulate reactive transport in porous media. KDE provides an optimal estimation of the area of influence of particles, which is a key element in simulating nonlinear chemical reactions. However, several important drawbacks can be identified: (1) the optimal KDE method is computationally intensive and therefore cannot be used at each time step of the simulation; (2) it does not take advantage of prior information about the physical system and the previous history of the solute plume; (3) even if the kernel is optimal, the relative error in RWPT simulations typically increases over time as the particle density diminishes by dilution. To overcome these problems, we propose an adaptive branching random walk methodology that incorporates the physics and the particle history and maintains accuracy with time. The method allows particles to efficiently split and merge when necessary, as well as to optimally adapt their local kernel shape without having to recalculate the kernel size. We illustrate the advantage of the method by simulating complex reactive transport problems in randomly heterogeneous porous media.

  17. Electrochemical synthesis and characterization of zinc oxalate nanoparticles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shamsipur, Mojtaba, E-mail: mshamsipur@yahoo.com; Roushani, Mahmoud; Department of Chemistry, Ilam University, Ilam

    2013-03-15

    Highlights: ► Synthesis of zinc oxalate nanoparticles via electrolysis of a zinc plate anode in sodium oxalate solutions. ► Design of a Taguchi orthogonal array to identify the optimal experimental conditions. ► Controlling the size and shape of particles via applied voltage and oxalate concentration. ► Characterization of zinc oxalate nanoparticles by SEM, UV–vis, FT-IR and TG–DTA. - Abstract: A rapid, clean and simple electrodeposition method was designed for the synthesis of zinc oxalate nanoparticles. Zinc oxalate nanoparticles of different sizes and shapes were electrodeposited by electrolysis of a zinc plate anode in sodium oxalate aqueous solutions. It was found that the size and shape of the product could be tuned by the electrolysis voltage, oxalate ion concentration, and stirring rate of the electrolyte solution. A Taguchi orthogonal array design was employed to identify the optimal experimental conditions. The morphological characterization of the product was carried out by scanning electron microscopy. UV–vis and FT-IR spectroscopies were also used to characterize the electrodeposited nanoparticles. The TG–DTA studies of the nanoparticles indicated that the main thermal degradation occurs in two steps over a temperature range of 350–430 °C. In contrast to the existing methods, the present study describes a process which can be easily scaled up for the production of nano-sized zinc oxalate powder.

  18. Location and Size Planning of Distributed Photovoltaic Generation in Distribution network System Based on K-means Clustering Analysis

    NASA Astrophysics Data System (ADS)

    Lu, Siqi; Wang, Xiaorong; Wu, Junyong

    2018-01-01

    The paper presents a method, based on a data-driven K-means clustering algorithm, to generate the planning scenarios for the location and size planning of distributed photovoltaic (PV) units in a distribution network. Taking the power losses of the network, the installation and maintenance costs of distributed PV, the profit of distributed PV, and the voltage offset as objectives, and the locations and sizes of distributed PV as decision variables, the Pareto optimal front is obtained through a self-adaptive genetic algorithm (GA), and the solutions are ranked by the technique for order preference by similarity to an ideal solution (TOPSIS). Finally, the planning schemes at the top of the ranking list are selected, based on different planning emphases, after a detailed analysis. The proposed method is applied to a 10-kV distribution network in Gansu Province, China, and the results are discussed.
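    A rough sketch of the data-driven scenario-generation step under stated assumptions: historical daily profiles (hypothetical random arrays here) are clustered with k-means, and the cluster centroids, weighted by cluster size, serve as planning scenarios for the subsequent siting and sizing optimization. The feature construction and the number of clusters are illustrative, not the paper's exact settings.

      import numpy as np
      from sklearn.cluster import KMeans

      # Hypothetical history: 365 days x 24 hourly values of normalized net load
      rng = np.random.default_rng(0)
      daily_profiles = rng.random((365, 24))

      kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(daily_profiles)
      scenarios = kmeans.cluster_centers_                          # representative daily profiles
      weights = np.bincount(kmeans.labels_) / len(daily_profiles)  # scenario probabilities

      for w, s in zip(weights, scenarios):
          print(f"scenario weight {w:.2f}, peak value {s.max():.2f}")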

  19. Fabrication of polydimethylsiloxane (PDMS) nanofluidic chips with controllable channel size and spacing.

    PubMed

    Peng, Ran; Li, Dongqing

    2016-10-07

    The ability to create reproducible and inexpensive nanofluidic chips is essential to the fundamental research and applications of nanofluidics. This paper presents a novel and cost-effective method for fabricating a single nanochannel or multiple nanochannels in PDMS chips with controllable channel size and spacing. Single nanocracks or nanocrack arrays, positioned by artificial defects, are first generated on a polystyrene surface with controllable size and spacing by a solvent-induced method. Two sets of optimal working parameters are developed to replicate the nanocracks onto the polymer layers to form the nanochannel molds. The nanochannel molds are used to make the bi-layer PDMS microchannel-nanochannel chips by simple soft lithography. An alignment system is developed for bonding the nanofluidic chips under an optical microscope. Using this method, high quality PDMS nanofluidic chips with a single nanochannel or multiple nanochannels of sub-100 nm width and height and centimeter length can be obtained with high repeatability.

  20. On the use of big-bang method to generate low-energy structures of atomic clusters modeled with pair potentials of different ranges.

    PubMed

    Marques, J M C; Pais, A A C C; Abreu, P E

    2012-02-05

    The efficiency of the so-called big-bang method for the optimization of atomic clusters is analysed in detail for Morse pair potentials with different ranges; here, we have used Morse potentials with four different ranges, from long-ranged (ρ = 3) to short-ranged (ρ = 14) interactions. Specifically, we study the efficacy of the method in discovering low-energy structures, including the putative global minimum, as a function of the potential range and the cluster size. A new global minimum structure for the long-ranged (ρ = 3) Morse potential at a cluster size of n = 240 is reported. The present results are useful for assessing the maximum cluster size, for each type of interaction, at which the global minimum can be discovered with a limited number of big-bang trials. Copyright © 2011 Wiley Periodicals, Inc.
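    A minimal sketch of one big-bang trial under stated assumptions: atoms are placed at random inside a highly compressed region and then relaxed by a local minimizer of the reduced Morse pair energy. The compression radius, cluster size, and choice of minimizer are illustrative; in practice many independent trials would be run to search for low-energy structures.

      import numpy as np
      from scipy.optimize import minimize

      def morse_energy(flat_coords, rho):
          """Total reduced Morse energy: sum over pairs of exp(rho(1-r)) * (exp(rho(1-r)) - 2)."""
          xyz = flat_coords.reshape(-1, 3)
          diff = xyz[:, None, :] - xyz[None, :, :]
          r = np.sqrt((diff ** 2).sum(-1))
          iu = np.triu_indices(len(xyz), k=1)   # each pair counted once
          e = np.exp(rho * (1.0 - r[iu]))
          return float(np.sum(e * (e - 2.0)))

      def big_bang_trial(n_atoms=13, rho=3.0, compression_radius=0.3, seed=0):
          rng = np.random.default_rng(seed)
          # "Big bang": start from a highly compressed random configuration
          start = compression_radius * rng.standard_normal(3 * n_atoms)
          result = minimize(morse_energy, start, args=(rho,), method='L-BFGS-B')
          return result.fun

      print(big_bang_trial())  # energy of one locally minimized structure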

  1. Modeling and Simulation of A Microchannel Cooling System for Vitrification of Cells and Tissues.

    PubMed

    Wang, Y; Zhou, X M; Jiang, C J; Yu, Y T

    The microchannel heat exchange system has several advantages and can be used to enhance heat transfer for vitrification. The aim of this study was to evaluate the microchannel cooling method and to analyze the effects of key parameters such as channel structure, flow rate, and sample size. A computational fluid dynamics model is applied to study the two-phase flow in microchannels and the related heat transfer process. The fluid-solid coupling problem is solved with a whole-field solution method (i.e., the flow profile in the channels and the temperature distribution in the system are simulated simultaneously). Simulation indicates that a cooling rate >10⁴ °C/min is easily achievable using the microchannel method with a high flow rate for a broad range of sample sizes. Channel size and the material used have a significant impact on cooling performance. Computational fluid dynamics is useful for optimizing the design and operation of the microchannel system.

  2. Size-exclusion chromatography (HPLC-SEC) technique optimization by simplex method to estimate molecular weight distribution of agave fructans.

    PubMed

    Moreno-Vilet, Lorena; Bostyn, Stéphane; Flores-Montaño, Jose-Luis; Camacho-Ruiz, Rosa-María

    2017-12-15

    Agave fructans are increasingly important in the food industry and nutrition sciences as a potential ingredient of functional food; thus, practical analysis tools to characterize them are needed. In view of the importance of molecular weight for the functional properties of agave fructans, this study had the purpose of optimizing a method to determine their molecular weight distribution by HPLC-SEC for industrial application. The optimization was carried out using a simplex method. The optimum conditions obtained were a column temperature of 61.7 °C using tri-distilled water without salt, a pH adjusted to 5.4, and a flow rate of 0.36 mL/min. The exclusion range is from a degree of polymerization of 1 to 49 (180–7966 Da). This proposed method represents an accurate and fast alternative to standard methods involving multiple detection or hydrolysis of fructans. The industrial applications of this technique might include quality control, the study of fractionation processes, and determination of purity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Optimization of radiation shielding material aiming at compactness, lightweight, and low activation for a vehicle-mounted accelerator-driven D-T neutron source.

    PubMed

    Cai, Yao; Hu, Huasi; Lu, Shuangying; Jia, Qinggang

    2018-05-01

    To minimize the size and weight of a vehicle-mounted accelerator-driven D-T neutron source and protect workers from unnecessary irradiation after the equipment shutdown, a method to optimize radiation shielding material aiming at compactness, lightweight, and low activation for the fast neutrons was developed. The method employed genetic algorithm, combining MCNP and ORIGEN codes. A series of composite shielding material samples were obtained by the method step by step. The volume and weight needed to build a shield (assumed as a coaxial tapered cylinder) were adopted to compare the performance of the materials visually and conveniently. The results showed that the optimized materials have excellent performance in comparison with the conventional materials. The "MCNP6-ACT" method and the "rigorous two steps" (R2S) method were used to verify the activation grade of the shield irradiated by D-T neutrons. The types of radionuclide, the energy spectrum of corresponding decay gamma source, and the variation in decay gamma dose rate were also computed. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Simultaneous optimization of loading pattern and burnable poison placement for PWRs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alim, F.; Ivanov, K.; Yilmaz, S.

    2006-07-01

    To solve the in-core fuel management optimization problem, GARCO-PSU (Genetic Algorithm Reactor Core Optimization - Pennsylvania State Univ.) was developed. This code is applicable to all types and geometries of PWR core structures with an unlimited number of fuel assembly (FA) types in the inventory. For this reason, an innovative genetic algorithm was developed by modifying the classical representation of the genotype. In-core fuel management heuristic rules are introduced into GARCO. The core reload design optimization has two parts: loading pattern (LP) optimization and burnable poison (BP) placement optimization. These parts depend on each other, but it is difficult to solve the combined problem due to its large size. Separating the problem into two parts provides a practical way to solve it; however, the result of this approach does not reflect the real optimal solution. GARCO-PSU solves the LP optimization and BP placement optimization simultaneously in an efficient manner. (authors)

  5. Performance analysis of the FDTD method applied to holographic volume gratings: Multi-core CPU versus GPU computing

    NASA Astrophysics Data System (ADS)

    Francés, J.; Bleda, S.; Neipp, C.; Márquez, A.; Pascual, I.; Beléndez, A.

    2013-03-01

    The finite-difference time-domain method (FDTD) allows electromagnetic field distribution analysis as a function of time and space. The method is applied to analyze holographic volume gratings (HVGs) for the near-field distribution at optical wavelengths. Usually, this application requires the simulation of wide areas, which implies more memory and processing time. In this work, we propose a specific implementation of the FDTD method including several add-ons for a precise simulation of optical diffractive elements. Values in the near-field region are computed considering the illumination of the grating by a plane wave at different angles of incidence and including absorbing boundaries as well. We compare the results obtained by FDTD with those obtained using a matrix method (MM) applied to diffraction gratings. In addition, we have developed two optimized versions of the algorithm, for both CPU and GPU, in order to analyze the improvement of using the new NVIDIA Fermi GPU architecture versus a highly tuned multi-core CPU as a function of the simulation size. In particular, the optimized CPU implementation takes advantage of the arithmetic and data transfer streaming SIMD (single instruction multiple data) extensions (SSE) included explicitly in the code and also of multi-threading by means of OpenMP directives. A good agreement between the results obtained using both the FDTD and MM methods is obtained, thus validating our methodology. Moreover, the performance of the GPU is compared to the SSE+OpenMP CPU implementation, and it is quantitatively determined that a highly optimized CPU program can be competitive for a wide range of simulation sizes, whereas GPU computing becomes more powerful for large-scale simulations.

  6. A batch sliding window method for local singularity mapping and its application for geochemical anomaly identification

    NASA Astrophysics Data System (ADS)

    Xiao, Fan; Chen, Zhijun; Chen, Jianguo; Zhou, Yongzhang

    2016-05-01

    In this study, a novel batch sliding window (BSW) based singularity mapping approach is proposed. Compared with the traditional sliding window (SW) technique, which requires an empirically predetermined, fixed maximum window size and relies on an outlier-sensitive least-squares (LS) linear regression, the BSW approach automatically determines the optimal size of the largest window for each estimated position and uses robust linear regression (RLR), which is insensitive to outliers. In the case study, tin geochemical data from Gejiu, Yunnan, were processed by the BSW singularity mapping approach. The results show that the BSW approach improves the accuracy of the calculated singularity exponent values because the optimal maximum window size is determined for each location. The use of RLR in the BSW approach also smooths the distribution of singularity index values, suppressing the noise-like fluctuations that usually make a singularity map rough and discontinuous. Furthermore, the Student's t-statistic diagram indicates a strong spatial correlation between high geochemical anomalies and known tin polymetallic deposits. Target areas within high tin geochemical anomalies probably have much higher potential for the exploration of new tin polymetallic deposits than other areas, particularly those areas that show strong tin geochemical anomalies but contain no known tin polymetallic deposits.
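
    The core of the windowing step can be illustrated with a small sketch: for a chosen grid cell, mean concentrations are computed over a nested set of square windows, and the local singularity index is obtained from the slope of a log-log regression of mean concentration against window size. A robust Theil-Sen slope from SciPy stands in here for the paper's robust linear regression, and the synthetic grid, the window sizes, and the alpha = slope + 2 relation for 2-D maps are illustrative assumptions rather than the authors' exact BSW implementation (in particular, the automatic selection of the optimal maximum window size is omitted).

      import numpy as np
      from scipy.stats import theilslopes

      def local_singularity(grid, i, j, half_widths=(1, 2, 3, 4, 5)):
          """Estimate a local singularity index at cell (i, j) of a 2-D concentration
          grid from the slope of log(mean concentration) versus log(window size).
          theilslopes provides a robust (outlier-insensitive) slope estimate."""
          sizes, means = [], []
          for r in half_widths:
              window = grid[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
              sizes.append(2 * r + 1)        # window edge length in cells
              means.append(window.mean())    # average concentration in the window
          slope = theilslopes(np.log(means), np.log(sizes))[0]
          return slope + 2.0                 # alpha = slope + E for a 2-D map (E = 2)

      # toy example: smooth background with one enriched cell
      rng = np.random.default_rng(0)
      grid = rng.lognormal(mean=1.0, sigma=0.2, size=(64, 64))
      grid[32, 32] *= 50.0
      print(local_singularity(grid, 32, 32))   # markedly below 2 (local enrichment)
      print(local_singularity(grid, 10, 10))   # close to 2 (background)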

  7. A tool for simulating parallel branch-and-bound methods

    NASA Astrophysics Data System (ADS)

    Golubeva, Yana; Orlov, Yury; Posypkin, Mikhail

    2016-01-01

    The Branch-and-Bound method is known as one of the most powerful but very resource-consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution. Therefore, the design and study of load balancing algorithms is a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, search tree sizes, and supercomputer interconnect characteristics, thereby fostering deep study of load distribution strategies. The process of solving the optimization problem by the B&B method is replaced by a stochastic branching process. Data exchanges are modeled using the concept of logical time. The user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.

  8. Optimal sample sizes for the design of reliability studies: power consideration.

    PubMed

    Shieh, Gwowen

    2014-09-01

    Intraclass correlation coefficients are used extensively to measure the reliability or degree of resemblance among group members in multilevel research. This study concerns the problem of the necessary sample size to ensure adequate statistical power for hypothesis tests concerning the intraclass correlation coefficient in the one-way random-effects model. In view of the incomplete and problematic numerical results in the literature, the approximate sample size formula constructed from Fisher's transformation is reevaluated and compared with an exact approach across a wide range of model configurations. These comprehensive examinations showed that the Fisher transformation method is appropriate only under limited circumstances, and therefore it is not recommended as a general method in practice. For advance design planning of reliability studies, the exact sample size procedures are fully described and illustrated for various allocation and cost schemes. Corresponding computer programs are also developed to implement the suggested algorithms.
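
    The Fisher-transformation approach that the paper reevaluates can be sketched numerically. The snippet below uses the Fisher-type transform of the intraclass correlation and a commonly quoted large-sample variance approximation to solve for the number of groups; the constants differ slightly between references and this is not the exact formula or procedure examined in the paper, so it should be read only as an illustration of the approximate method.

      from math import ceil, log
      from scipy.stats import norm

      def fisher_icc_groups(rho0, rho1, k, alpha=0.05, power=0.80):
          """Approximate number of groups N for a one-sided test of H0: ICC = rho0
          versus H1: ICC = rho1 with k members per group, based on the transform
          Z(rho) = 0.5*ln((1 + (k-1)*rho) / (1 - rho)) and the assumed large-sample
          variance Var(Z) ~ k / (2*(k-1)*(N-2))."""
          z = lambda rho: 0.5 * log((1.0 + (k - 1) * rho) / (1.0 - rho))
          za, zb = norm.ppf(1.0 - alpha), norm.ppf(power)
          n = 2.0 + k * (za + zb) ** 2 / (2.0 * (k - 1) * (z(rho1) - z(rho0)) ** 2)
          return ceil(n)

      print(fisher_icc_groups(rho0=0.6, rho1=0.8, k=4))   # groups needed under this approximation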

  9. [Calculating the optimum size of a hemodialysis unit based on infrastructure potential].

    PubMed

    Avila-Palomares, Paula; López-Cervantes, Malaquías; Durán-Arenas, Luis

    2010-01-01

    To estimate the optimum size of hemodialysis units that maximizes production given capital constraints. A national study in Mexico was conducted in 2009. Three possible methods for estimating a unit's optimum size were analyzed: hemodialysis services production under a monopolistic market, production under a perfectly competitive market, and production maximization given capital constraints. The third method was considered best based on the assumptions made in this paper; an optimally sized unit should have 16 dialyzers (15 active and one backup dialyzer) and a purification system able to supply all of them. It also requires one nephrologist and five nurses per shift, considering four shifts per day. Empirical evidence shows serious inefficiencies in the operation of units throughout the country. Most units fail to maximize production because they do not fully utilize equipment and personnel, particularly their water purifier capacity, which happens to be the most expensive asset for these units.

  10. Design optimization of space structures

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos

    1991-01-01

    The topology-shape-size optimization of space structures is investigated through Kikuchi's homogenization method. The method starts from a 'design domain block,' which is a region of space into which the structure is to materialize. This domain is initially filled with a finite element mesh, typically regular. Force and displacement boundary conditions corresponding to applied loads and supports are applied at specific points in the domain. An optimal structure is to be 'carved out' of the design under two conditions: (1) a cost function is to be minimized, and (2) equality or inequality constraints are to be satisfied. The 'carving' process is accomplished by letting microstructure holes develop and grow in elements during the optimization process. These holes have a rectangular shape in two dimensions and a cubical shape in three dimensions, and may also rotate with respect to the reference axes. The properties of the perforated element are obtained through an homogenization procedure. Once a hole reaches the volume of the element, that element effectively disappears. The project has two phases. In the first phase the method was implemented as the combination of two computer programs: a finite element module, and an optimization driver. In the second part, focus is on the application of this technique to planetary structures. The finite element part of the method was programmed for the two-dimensional case using four-node quadrilateral elements to cover the design domain. An element homogenization technique different from that of Kikuchi and coworkers was implemented. The optimization driver is based on an augmented Lagrangian optimizer, with the volume constraint treated as a Courant penalty function. The optimizer has to be especially tuned to this type of optimization because the number of design variables can reach into the thousands. The driver is presently under development.

  11. Memory-optimized shift operator alternating direction implicit finite difference time domain method for plasma

    NASA Astrophysics Data System (ADS)

    Song, Wanjun; Zhang, Hou

    2017-11-01

    By introducing the alternating direction implicit (ADI) technique and a memory-optimized algorithm into the shift operator (SO) finite difference time domain (FDTD) method, a memory-optimized SO-ADI FDTD method for nonmagnetized collisional plasma is proposed, and the corresponding formulae of the proposed method for programming are deduced. In order to further improve the computational efficiency, an iterative method rather than Gaussian elimination is employed to solve the equation set in the derivation of the formulae. Complicated transformations and convolutions are avoided in the proposed method compared with the Z-transform (ZT) ADI FDTD method and the piecewise linear JE recursive convolution (PLJERC) ADI FDTD method. The numerical dispersion of the SO-ADI FDTD method with different plasma frequencies and electron collision frequencies is analyzed and the appropriate ratio of grid size to the minimum wavelength is given. The accuracy of the proposed method is validated by a reflection coefficient test on a nonmagnetized collisional plasma sheet. The testing results show that the proposed method is advantageous for improving computational efficiency and saving computer memory. The reflection coefficient of a perfect electric conductor (PEC) sheet covered by multilayer plasma and the RCS of objects coated by plasma are calculated by the proposed method and the simulation results are analyzed.

  12. Optimal marker-assisted selection to increase the effective size of small populations.

    PubMed

    Wang, J

    2001-02-01

    An approach to the optimal utilization of marker and pedigree information in minimizing the rates of inbreeding and genetic drift at the average locus of the genome (not just the marked loci) in a small diploid population is proposed, and its efficiency is investigated by stochastic simulations. The approach is based on estimating the expected pedigree of each chromosome by using marker and individual pedigree information and minimizing the average coancestry of selected chromosomes by quadratic integer programming. It is shown that the approach is much more effective and much less computer demanding in implementation than previous ones. For pigs with 10 offspring per mother genotyped for two markers (each with four alleles at equal initial frequency) per chromosome of 100 cM, the approach can increase the average effective size for the whole genome by approximately 40 and 55% if mating ratios (the number of females mated with a male) are 3 and 12, respectively, compared with the corresponding values obtained by optimizing between-family selection using pedigree information only. The efficiency of the marker-assisted selection method increases with increasing amount of marker information (number of markers per chromosome, heterozygosity per marker) and family size, but decreases with increasing genome size. For less prolific species, the approach is still effective if the mating ratio is large so that a high marker-assisted selection pressure on the rarer sex can be maintained.

  13. Grain Size and Phase Purity Characterization of U3Si2 Pellet Fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoggan, Rita E.; Tolman, Kevin R.; Cappia, Fabiola

    Characterization of U3Si2 fresh fuel pellets is important for quality assurance and validation of the finished product. Grain size measurement methods, phase identification methods using scanning electron microscopes equipped with energy dispersive spectroscopy and x-ray diffraction, and phase quantification methods via image analysis have been developed and implemented on U3Si2 pellet samples. A wide variety of samples have been characterized including representative pellets from an initial irradiation experiment, and samples produced using optimized methods to enhance phase purity from an extended fabrication effort. The average grain size for initial pellets was between 16 and 18 µm. The typical average grain size for pellets from the extended fabrication was between 20 and 30 µm with some samples exhibiting irregular grain growth. Pellets from the latter half of extended fabrication had a bimodal grain size distribution consisting of coarsened grains (>80 µm) surrounded by the typical (20-30 µm) grain structure around the surface. Phases identified in initial uranium silicide pellets included: U3Si2 as the main phase composing about 80 vol. %, Si-rich phases (USi and U5Si4) composing about 13 vol. %, and UO2 composing about 5 vol. %. Initial batches from the extended U3Si2 pellet fabrication had similar phases and phase quantities. The latter half of the extended fabrication pellet batches did not contain Si-rich phases, and had between 1-5% UO2, achieving a U3Si2 phase purity between 95 vol. % and 98 vol. %. The amount of UO2 in sintered U3Si2 pellets is correlated to the length of time between U3Si2 powder fabrication and pellet formation. These measurements provide information necessary to optimize fabrication efforts and a baseline for future work on this fuel compound.

  14. TH-CD-209-05: Impact of Spot Size and Spacing On the Quality of Robustly-Optimized Intensity-Modulated Proton Therapy Plans for Lung Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, W; Ding, X; Hu, Y

    Purpose: To investigate how spot size and spacing affect plan quality, especially plan robustness and the impact of the interplay effect, of robustly-optimized intensity-modulated proton therapy (IMPT) plans for lung cancer. Methods: Two robustly-optimized IMPT plans were created for 10 lung cancer patients: (1) one for a proton beam with an in-air energy-dependent large spot size at isocenter (σ: 5–15 mm) and spacing (1.53σ); (2) the other for a proton beam with a small spot size (σ: 2–6 mm) and spacing (5 mm). Both plans were generated on the average CTs with the internal-gross-tumor-volume density overridden to irradiate the internal target volume (ITV). The root-mean-square-dose volume histograms (RVH) measured the sensitivity of the dose to uncertainties, and the areas under the RVH curves were used to evaluate plan robustness. Dose evaluation software was developed to model time-dependent spot delivery to incorporate the interplay effect with randomized starting phases of each field per fraction. Patient anatomy voxels were mapped from phase to phase via deformable image registration to score doses. Dose-volume-histogram indices including ITV coverage, homogeneity, and organs-at-risk (OAR) sparing were compared using the Student t-test. Results: Compared to large spots, small spots resulted in significantly better OAR sparing with comparable ITV coverage and homogeneity in the nominal plan. Plan robustness was comparable for the ITV and most OARs. With the interplay effect considered, significantly better OAR sparing with comparable ITV coverage and homogeneity is observed using smaller spots. Conclusion: Robust optimization with smaller spots significantly improves OAR sparing with comparable plan robustness and a similar impact of the interplay effect compared to larger spots. A small spot size requires the use of a larger number of spots, which gives the optimizer more freedom to render a plan more robust. The ratio between spot size and spacing was found to be more relevant in determining plan robustness and the impact of the interplay effect than spot size alone. This research was supported by the National Cancer Institute Career Developmental Award K25CA168984, by the Fraternal Order of Eagles Cancer Research Fund Career Development Award, by The Lawrence W. and Marilyn W. Matteson Fund for Cancer Research, by Mayo Arizona State University Seed Grant, and by The Kemper Marley Foundation.

  15. Extraction optimization and UHPLC method development for determination of the 20-hydroxyecdysone in Sida tuberculata leaves.

    PubMed

    da Rosa, Hemerson S; Koetz, Mariana; Santos, Marí Castro; Jandrey, Elisa Helena Farias; Folmer, Vanderlei; Henriques, Amélia Teresinha; Mendez, Andreas Sebastian Loureiro

    2018-04-01

    Sida tuberculata (ST) is a Malvaceae species widely distributed in Southern Brazil. In traditional medicine, ST has been employed as a hypoglycemic, hypocholesterolemic, anti-inflammatory, and antimicrobial agent. Additionally, this species is chemically characterized mainly by flavonoids, alkaloids, and phytoecdysteroids. The present work aimed to optimize the extraction technique and to validate an UHPLC method for the determination of 20-hydroxyecdysone (20HE) in ST leaves. A Box-Behnken Design (BBD) was used in the method optimization. The extraction methods tested were static and dynamic maceration, ultrasound, ultra-turrax, and reflux. In the Box-Behnken design, three parameters (particle size, time, and plant:solvent ratio) were evaluated at three levels (-1, 0, +1). In the method validation, the parameters of selectivity, specificity, linearity, limits of detection and quantification (LOD, LOQ), precision, accuracy, and robustness were evaluated. The results indicate static maceration as the better technique for obtaining the 20HE peak area in the ST extract. The optimal extraction, according to response surface methodology, was achieved with a granulometry of 710 nm, 9 days of maceration, and a plant:solvent ratio of 1:54 (w/v). The developed UHPLC-PDA analytical method showed full viability of performance, proving to be selective, linear, precise, accurate, and robust for 20HE detection in ST leaves. The average content of 20HE was 0.56% per dry extract. Thus, the optimization of the extraction method for ST leaves increased the concentration of 20HE in the crude extract, and a reliable method was successfully developed according to validation requirements and in agreement with current legislation. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. Analytical approaches to optimizing system "Semiconductor converter-electric drive complex"

    NASA Astrophysics Data System (ADS)

    Kormilicin, N. V.; Zhuravlev, A. M.; Khayatov, E. S.

    2018-03-01

    In the electric drives of the machine-building industry, the problem of optimizing the drive in terms of mass-size indicators is acute. The article offers analytical methods that ensure the minimization of the mass of a multiphase semiconductor converter. In multiphase electric drives, the phase-current waveform that makes the best use of the active materials of the "semiconductor converter-electric drive complex" differs from the sinusoidal form. It is shown that under certain restrictions on the phase current form, it is possible to obtain an analytical solution. In particular, if one assumes the shape of the phase current to be rectangular, the optimal shape of the control actions will depend on the width of the interpolar gap. In the general case, the proposed algorithm can be used to solve the problem under consideration by numerical methods.

  17. Biophysical characterization of influenza virus subpopulations using field flow fractionation and multiangle light scattering: correlation of particle counts, size distribution and infectivity.

    PubMed

    Wei, Ziping; McEvoy, Matt; Razinkov, Vladimir; Polozova, Alla; Li, Elizabeth; Casas-Finet, Jose; Tous, Guillermo I; Balu, Palani; Pan, Alfred A; Mehta, Harshvardhan; Schenerman, Mark A

    2007-09-01

    Adequate biophysical characterization of influenza virions is important for vaccine development. The influenza virus vaccines are produced from the allantoic fluid of developing chicken embryos. The process of viral replication produces a heterogeneous mixture of infectious and non-infectious viral particles with varying states of aggregation. The study of the relative distribution and behavior of different subpopulations and their inter-correlation can assist in the development of a robust process for a live virus vaccine. This report describes a field flow fractionation and multiangle light scattering (FFF-MALS) method optimized for the analysis of size distribution and total particle counts. The FFF-MALS method was compared with several other methods such as transmission electron microscopy (TEM), atomic force microscopy (AFM), size exclusion chromatography followed by MALS (SEC-MALS), quantitative reverse transcription polymerase chain reaction (RT Q-PCR), median tissue culture dose (TCID(50)), and the fluorescent focus assay (FFA). The correlation between the various methods for determining total particle counts, infectivity and size distribution is reported. The pros and cons of each of the analytical methods are discussed.

  18. Design and synthesis of a novel multifunctional stabilizer for highly stable DL-tetrahydropalmatine nanosuspensions and in vitro study

    NASA Astrophysics Data System (ADS)

    Yan, Beibei; Wang, Yancai; Wang, Lulu; Zhou, Yuqi; Shang, Xueyun; Zhao, Juan; Liu, Yangyang; Du, Juan

    2018-05-01

    The present study aimed to prepare stable DL-tetrahydropalmatine (DL-THP) nanosuspensions of optimized formulation with PEGylated chitosan as a multifunctional stabilizer using the antisolvent precipitation method. A central composite design with three factors at five levels (a 5³ full factorial) was applied to design the experimental program, and response surface methodology analysis was used to optimize the experimental conditions. The effects of critical influencing factors such as PEGylated chitosan concentration, operating temperature, and ultrasonic energy on particle size and zeta potential were investigated. For the optimized nanosuspension formulation, the particle size was 269 nm and the zeta potential was 37.4 mV. Also, the DL-THP nanosuspensions maintained good physical stability after 2 months, indicating the potential of the multifunctional stabilizer for stable nanosuspension formulations. Hence, the present findings indicate that PEGylated chitosan could be used as an ideal stabilizer to form a physically stable nanosuspension formulation.

  19. X-ray optics simulation and beamline design for the APS upgrade

    NASA Astrophysics Data System (ADS)

    Shi, Xianbo; Reininger, Ruben; Harder, Ross; Haeffner, Dean

    2017-08-01

    The upgrade of the Advanced Photon Source (APS) to a Multi-Bend Achromat (MBA) will increase the brightness of the APS by between two and three orders of magnitude. The APS upgrade (APS-U) project includes a list of feature beamlines that will take full advantage of the new machine. Many of the existing beamlines will be also upgraded to profit from this significant machine enhancement. Optics simulations are essential in the design and optimization of these new and existing beamlines. In this contribution, the simulation tools used and developed at APS, ranging from analytical to numerical methods, are summarized. Three general optical layouts are compared in terms of their coherence control and focusing capabilities. The concept of zoom optics, where two sets of focusing elements (e.g., CRLs and KB mirrors) are used to provide variable beam sizes at a fixed focal plane, is optimized analytically. The effects of figure errors on the vertical spot size and on the local coherence along the vertical direction of the optimized design are investigated.

  20. Differential-Evolution Control Parameter Optimization for Unmanned Aerial Vehicle Path Planning

    PubMed Central

    Kok, Kai Yit; Rajendran, Parvathy

    2016-01-01

    The differential evolution algorithm has been widely applied to unmanned aerial vehicle (UAV) path planning. At present, four tuning parameters exist for the differential evolution algorithm, namely population size, differential weight, crossover, and generation number. These tuning parameters must be set, together with the user-defined weighting between path quality and computational cost. However, the optimum settings of these tuning parameters vary with the application. Instead of trial and error, this paper presents a method for optimizing the differential evolution tuning parameters for UAV path planning. The parameters that this research focuses on are population size, differential weight, crossover, and generation number. The developed algorithm enables the user to simply define the desired weighting between path quality and computational cost and to converge within the minimum number of generations required for that setting. In conclusion, the proposed optimization of the differential evolution tuning parameters for UAV path planning expedites planning and improves both the final output path and the computational cost. PMID:26943630
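
    The four tuning parameters named above map directly onto the arguments of SciPy's differential evolution implementation, which makes the trade-off easy to experiment with. The sketch below optimizes a toy two-dimensional waypoint cost (distance to a goal plus an obstacle penalty); the cost function, bounds, and parameter values are invented for illustration and are not the paper's UAV planner or its recommended settings.

      import numpy as np
      from scipy.optimize import differential_evolution

      GOAL = np.array([8.0, 6.0])
      OBSTACLE, RADIUS = np.array([4.0, 3.0]), 1.5

      def path_cost(waypoint):
          """Toy cost: distance to the goal plus a stiff penalty for entering the obstacle."""
          d_goal = np.linalg.norm(waypoint - GOAL)
          d_obs = np.linalg.norm(waypoint - OBSTACLE)
          return d_goal + 100.0 * max(0.0, RADIUS - d_obs)

      # population size -> popsize, differential weight -> mutation,
      # crossover -> recombination, generation number -> maxiter
      result = differential_evolution(path_cost,
                                      bounds=[(0.0, 10.0), (0.0, 10.0)],
                                      popsize=20, mutation=0.7,
                                      recombination=0.9, maxiter=200, seed=1)
      print(result.x, result.fun)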

  1. Automated geometric optimization for robotic HIFU treatment of liver tumors.

    PubMed

    Williamson, Tom; Everitt, Scott; Chauhan, Sunita

    2018-05-01

    High intensity focused ultrasound (HIFU) represents a non-invasive method for the destruction of cancerous tissue within the body. Heating of targeted tissue by focused ultrasound transducers results in the creation of ellipsoidal lesions at the target site, the locations of which can have a significant impact on treatment outcomes. Towards this end, this work describes a method for the optimization of lesion positions within arbitrary tumors, with specific anatomical constraints. A force-based optimization framework was extended to the case of arbitrary tumor position and constrained orientation. Analysis of the approximate reachable treatment volume for the specific case of treatment of liver tumors was performed based on four transducer configurations and constraint conditions derived. Evaluation was completed utilizing simplified spherical and ellipsoidal tumor models and randomly generated tumor volumes. The total volume treated, lesion overlap and healthy tissue ablated was evaluated. Two evaluation scenarios were defined and optimized treatment plans assessed. The optimization framework resulted in improvements of up to 10% in tumor volume treated, and reductions of up to 20% in healthy tissue ablated as compared to the standard lesion rastering approach. Generation of optimized plans proved feasible for both sub- and intercostally located tumors. This work describes an optimized method for the planning of lesion positions during HIFU treatment of liver tumors. The approach allows the determination of optimal lesion locations and orientations, and can be applied to arbitrary tumor shapes and sizes. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Multi-objective shape optimization of plate structure under stress criteria based on sub-structured mixed FEM and genetic algorithms

    NASA Astrophysics Data System (ADS)

    Garambois, Pierre; Besset, Sebastien; Jézéquel, Louis

    2015-07-01

    This paper presents a methodology for the multi-objective (MO) shape optimization of plate structures under stress criteria, based on a mixed Finite Element Model (FEM) enhanced with a sub-structuring method. The optimization is performed with a classical Genetic Algorithm (GA) based on Pareto-optimal solutions and considers thickness distribution parameters and antagonistic objectives, among them stress criteria. We implement a displacement-stress Dynamic Mixed FEM (DM-FEM) for plate structure vibration analysis. Such a model gives privileged access to the stress within the plate structure compared to the primal classical FEM, and features a linear dependence on the thickness parameters. A sub-structuring reduction method is also applied in order to reduce the size of the mixed FEM and split the given structure into smaller ones, each with its own thickness parameters. Those methods combined enable a fast and stress-wise efficient structural analysis, and improve the performance of the repetitive GA. A few cases of minimizing the mass and the maximum von Mises stress within a plate structure under a dynamic load demonstrate the relevance of our method with promising results. It is able to satisfy multiple damage criteria with different thickness distributions, while using a smaller FEM.

  3. A stochastic method for stand-alone photovoltaic system sizing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cabral, Claudia Valeria Tavora; Filho, Delly Oliveira; Martins, Jose Helvecio

    Photovoltaic systems utilize solar energy to generate electrical energy to meet load demands. Optimal sizing of these systems includes the characterization of solar radiation. Solar radiation at the Earth's surface has random characteristics and has been the focus of various academic studies. The objective of this study was to stochastically analyze parameters involved in the sizing of photovoltaic generators and develop a methodology for sizing of stand-alone photovoltaic systems. Energy storage for isolated systems and solar radiation were analyzed stochastically due to their random behavior. For the development of the proposed methodology, stochastic analysis tools including the Markov chain and the beta probability density function were studied. The obtained results were compared with those from stand-alone sizing using the deterministic Sandia method, and the stochastic model presented more reliable values. Both models present advantages and disadvantages; however, the stochastic one is more complex and provides more reliable and realistic results. (author)

  4. Performance, optimization, and latest development of the SRI family of rotary cryocoolers

    NASA Astrophysics Data System (ADS)

    Dovrtel, Klemen; Megušar, Franc

    2017-05-01

    In this paper the SRI family of Le-tehnika rotary cryocoolers (SRI401, SRI423/SRI421, and SRI474) is presented. The Stirling coolers' cooling power ranges from 0.25 W to 0.75 W at 77 K, with an available temperature range of 60 K to 150 K, and the units are matched to typical dewar detector sizes and power supply voltages. The DDCA performance optimization procedure is presented; it includes steady-state cooler performance mapping and optimization as well as cooldown optimization. The current cryogenic performance status and the reliability evaluation method and figures are presented for the existing and new units. The latest improved SRI401 demonstrated an MTTF close to 25,000 hours, and the test is still ongoing.

  5. Design and evaluation of liposomal formulation of pilocarpine nitrate.

    PubMed

    Rathod, S; Deshpande, S G

    2010-03-01

    A prolonged-release drug delivery system for pilocarpine nitrate was prepared by optimizing the thin-film hydration method. Egg phosphatidylcholine and cholesterol were used to make multilamellar vesicles. The effect of vesicle charge was studied by incorporating dicetylphosphate and stearylamine. Various factors that may affect the size, shape, encapsulation efficiency, and release rate were studied. Liposomes in the size range 0.2 to 1 µm were obtained by optimizing the process. The encapsulation efficiencies of neutral, positively, and negatively charged liposomes were found to be 32.5, 35.4, and 34.2 percent, respectively. The in vitro drug release rate was studied on a specially designed model. The biological response, in terms of reduction in intraocular pressure, was observed in rabbit eyes. Pilocarpine nitrate liposomes were lyophilized and stability studies were conducted.

  6. The determination of total burn surface area: How much difference?

    PubMed

    Giretzlehner, M; Dirnberger, J; Owen, R; Haller, H L; Lumenta, D B; Kamolz, L-P

    2013-09-01

    Burn depth and burn size are crucial determinants for assessing patients suffering from burns. Therefore, a correct evaluation of these factors is essential for selecting the appropriate treatment in modern burn care. Burn surface assessment is subject to considerable differences among clinicians. This work investigated the accuracy among experts based on conventional surface estimation methods (e.g. "Rule of Palm", "Rule of Nines" or "Lund-Browder Chart"). The estimation results were compared to a computer-based evaluation method. Survey data were collected during one national and one international burn conference. The poll confirmed deviations of burn depth/size estimates of up to 62% in relation to the mean value of all participants. In comparison to the computer-based method, overestimation of up to 161% was found. We suggest introducing improved methods for burn depth/size assessment in clinical routine in order to efficiently allocate and distribute the available resources for practicing burn care. Copyright © 2013 Elsevier Ltd and ISBI. All rights reserved.

  7. Design of gefitinib-loaded poly (l-lactic acid) microspheres via a supercritical anti-solvent process for dry powder inhalation.

    PubMed

    Lin, Qing; Liu, Guijin; Zhao, Ziyi; Wei, Dongwei; Pang, Jiafeng; Jiang, Yanbin

    2017-10-30

    To develop a safer, more stable and potent formulation of gefitinib (GFB), microspheres of GFB encapsulated in poly(l-lactic acid) (PLLA) have been prepared by supercritical anti-solvent (SAS) technology in this study. Operating factors were optimized using a selected OA16 (4⁵) orthogonal array design, and the properties of the raw material and SAS-processed samples were characterized by different methods. The results show that the GFB-loaded PLLA particles prepared were spherical, having a smaller and narrower particle size compared with raw GFB. The optimal GFB-loaded PLLA sample was prepared with less aggregation, the highest GFB loading (15.82%) and a smaller size (D50 = 2.48 μm, which meets the size requirement for dry powder inhalers). The results of XRD and DSC indicate that GFB is encapsulated into the PLLA matrix in a polymorphic form different from raw GFB. FT-IR results show that the chemical structure of GFB does not change after the SAS process. The results of in vitro release show that the optimal sample released more slowly than raw GFB particles. Moreover, the results of in vitro anti-cancer trials show that the optimal sample had a higher cytotoxicity than raw GFB. After blending with sieved lactose, the flowability and aerosolization performance of the optimal sample for DPI were improved, with the angle of repose, emitted dose and fine particle fraction improving from 38.4° to 23°, from 63.21% to >90%, and from 23.37% to >30%, respectively. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Droplets size evolution of dispersion in a stirred tank

    NASA Astrophysics Data System (ADS)

    Kysela, Bohus; Konfrst, Jiri; Chara, Zdenek; Sulc, Radek; Jasikova, Darina

    2018-06-01

    Dispersion of two immiscible liquids is commonly used in the chemical industry as well as in the metallurgical industry, e.g., in extraction processes. The governing property is the droplet size distribution. The droplet sizes are given by the physical properties of both liquids and the flow properties inside the stirred tank. The first investigation stage is focused on in-situ droplet size measurement using image analysis and on optimizing the evaluation method to achieve maximum reproducibility of the results. The obtained experimental results are compared with a multiphase flow simulation based on the Euler-Euler approach combined with PBM (Population Balance Modelling). The population balance model was, in this specific case, simplified with the assumption of pure droplet breakage.

  9. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    PubMed

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.

  10. Bayesian Spatial Design of Optimal Deep Tubewell Locations in Matlab, Bangladesh.

    PubMed

    Warren, Joshua L; Perez-Heydrich, Carolina; Yunus, Mohammad

    2013-09-01

    We introduce a method for statistically identifying the optimal locations of deep tubewells (dtws) to be installed in Matlab, Bangladesh. Dtw installations serve to mitigate exposure to naturally occurring arsenic found at groundwater depths less than 200 meters, a serious environmental health threat for the population of Bangladesh. We introduce an objective function, which incorporates both arsenic level and nearest town population size, to identify optimal locations for dtw placement. Assuming complete knowledge of the arsenic surface, we then demonstrate how minimizing the objective function over a domain favors dtws placed in areas with high arsenic values and close to largely populated regions. Given only a partial realization of the arsenic surface over a domain, we use a Bayesian spatial statistical model to predict the full arsenic surface and estimate the optimal dtw locations. The uncertainty associated with these estimated locations is correctly characterized as well. The new method is applied to a dataset from a village in Matlab and the estimated optimal locations are analyzed along with their respective 95% credible regions.
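
    The placement criterion can be mimicked with a toy objective that rewards candidate sites where a (here synthetic) arsenic surface is high and which lie close to large populations, evaluated exhaustively over a grid; the functional form, the weights, and the data below are placeholders and not the paper's Bayesian spatial model or its actual objective function.

      import numpy as np

      rng = np.random.default_rng(2)
      arsenic = rng.gamma(shape=2.0, scale=50.0, size=(50, 50))   # synthetic arsenic surface
      towns = np.array([[10, 40], [30, 5], [45, 25]])             # (row, col) of nearby towns
      pops = np.array([5000.0, 12000.0, 3000.0])                  # town populations

      def score(site):
          """Higher is better: a deep tubewell is most useful where arsenic is high
          and a large population lives nearby (weights are arbitrary for the sketch)."""
          i, j = site
          dist = np.sqrt(((towns - np.array(site)) ** 2).sum(axis=1))
          access = (pops / (1.0 + dist)).sum()
          return arsenic[i, j] + 0.05 * access

      candidates = [(i, j) for i in range(50) for j in range(50)]
      best = max(candidates, key=score)
      print("best site:", best, "score:", round(score(best), 2))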

  11. Selection of optimal sensors for predicting performance of polymer electrolyte membrane fuel cell

    NASA Astrophysics Data System (ADS)

    Mao, Lei; Jackson, Lisa

    2016-10-01

    In this paper, sensor selection algorithms are investigated based on a sensitivity analysis, and the capability of the optimal sensors to predict PEM fuel cell performance is also studied using test data. A fuel cell model is developed for generating the sensitivity matrix relating sensor measurements to fuel cell health parameters. From the sensitivity matrix, two sensor selection approaches, the largest-gap method and an exhaustive brute-force search, are applied to find the optimal sensors providing reliable predictions. Based on the results, a sensor selection approach considering both sensor sensitivity and noise resistance is proposed to find the optimal sensor set with minimum size. Furthermore, the performance of the optimal sensor set in predicting fuel cell performance is studied using test data from a PEM fuel cell system. Results demonstrate that with the optimal sensors, the performance of the PEM fuel cell can be predicted with good quality.
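
    The exhaustive brute-force variant mentioned above can be sketched in a few lines: enumerate every sensor subset of a chosen size and keep the one whose rows of the sensitivity matrix best distinguish the health parameters. The random matrix, the subset size, and the smallest-singular-value criterion are stand-ins for the paper's fuel cell model and its sensitivity/noise-resistance criteria.

      import itertools
      import numpy as np

      rng = np.random.default_rng(0)
      n_sensors, n_params = 8, 3
      S = rng.normal(size=(n_sensors, n_params))   # sensitivity of each sensor to each health parameter

      def informativeness(rows):
          """Smallest singular value of the selected sub-matrix: larger values mean
          the health parameters are better separated by the chosen sensors."""
          return np.linalg.svd(S[list(rows), :], compute_uv=False).min()

      best = max(itertools.combinations(range(n_sensors), 4), key=informativeness)
      print("selected sensors:", best, "criterion:", round(informativeness(best), 3))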

  12. On optimal infinite impulse response edge detection filters

    NASA Technical Reports Server (NTRS)

    Sarkar, Sudeep; Boyer, Kim L.

    1991-01-01

    The authors outline the design of an optimal, computationally efficient, infinite impulse response edge detection filter. The optimal filter is computed based on Canny's high signal to noise ratio, good localization criteria, and a criterion on the spurious response of the filter to noise. An expression for the width of the filter, which is appropriate for infinite-length filters, is incorporated directly in the expression for spurious responses. The three criteria are maximized using the variational method and nonlinear constrained optimization. The optimal filter parameters are tabulated for various values of the filter performance criteria. A complete methodology for implementing the optimal filter using approximating recursive digital filtering is presented. The approximating recursive digital filter is separable into two linear filters operating in two orthogonal directions. The implementation is very simple and computationally efficient, has a constant time of execution for different sizes of the operator, and is readily amenable to real-time hardware implementation.
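
    The recursive, separable implementation idea can be illustrated with a much simpler stand-in than the optimal filter derived in the paper: a first-order IIR smoother run forward and backward along each image axis (so its response is symmetric and its cost is independent of the effective operator size), followed by a plain gradient to highlight edges. The filter order, the coefficients, and the test image below are illustrative assumptions.

      import numpy as np
      from scipy.signal import lfilter

      def iir_smooth(image, alpha=0.3):
          """Separable first-order recursive smoothing, y[n] = alpha*x[n] + (1-alpha)*y[n-1],
          applied forward and backward along each axis and averaged."""
          b, a = [alpha], [1.0, -(1.0 - alpha)]
          def smooth_axis(x, axis):
              fwd = lfilter(b, a, x, axis=axis)
              bwd = np.flip(lfilter(b, a, np.flip(x, axis=axis), axis=axis), axis=axis)
              return 0.5 * (fwd + bwd)
          return smooth_axis(smooth_axis(image, axis=0), axis=1)

      rng = np.random.default_rng(1)
      img = np.zeros((64, 64))
      img[:, 32:] = 1.0                              # vertical step edge
      img += 0.2 * rng.normal(size=img.shape)        # additive noise
      gy, gx = np.gradient(iir_smooth(img))
      edge_strength = np.hypot(gx, gy)
      print(edge_strength.argmax(axis=1)[:5])        # strongest response sits near column 32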

  13. Strain-Based Damage Determination Using Finite Element Analysis for Structural Health Management

    NASA Technical Reports Server (NTRS)

    Hochhalter, Jacob D.; Krishnamurthy, Thiagaraja; Aguilo, Miguel A.

    2016-01-01

    A damage determination method is presented that relies on in-service strain sensor measurements. The method employs a gradient-based optimization procedure combined with the finite element method for solution to the forward problem. It is demonstrated that strains, measured at a limited number of sensors, can be used to accurately determine the location, size, and orientation of damage. Numerical examples are presented to demonstrate the general procedure. This work is motivated by the need to provide structural health management systems with a real-time damage characterization. The damage cases investigated herein are characteristic of point-source damage, which can attain critical size during flight. The procedure described can be used to provide prognosis tools with the current damage configuration.

  14. Extracting physicochemical features to predict protein secondary structure.

    PubMed

    Huang, Yin-Fu; Chen, Shu-Ying

    2013-01-01

    We propose a protein secondary structure prediction method based on position-specific scoring matrix (PSSM) profiles and four physicochemical features including conformation parameters, net charge, hydrophobicity, and side chain mass. First, the SVM with the optimal window size and the optimal parameters of the kernel function is found. Then, we train the SVM using the PSSM profiles generated from PSI-BLAST and the physicochemical features extracted from the CB513 data set. Finally, we use the filter to refine the predicted results from the trained SVM. For all the performance measures of our method, Q3 reaches 79.52, SOV94 reaches 86.10, and SOV99 reaches 74.60; all the measures are higher than those of the SVMpsi method and the SVMfreq method. This validates that considering these physicochemical features when predicting protein secondary structure yields better performance.
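
    The window-size and kernel-parameter search described above can be sketched with scikit-learn. The profile and labels below are random stand-ins for the PSI-BLAST PSSM profiles and CB513 annotations, and the candidate window sizes and RBF parameter grid are arbitrary; the point is only to show the nested selection of a window size together with the kernel parameters.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import GridSearchCV

      rng = np.random.default_rng(0)
      L, n_feat = 300, 20
      profile = rng.normal(size=(L, n_feat))     # stand-in for a PSI-BLAST PSSM
      labels = rng.integers(0, 3, size=L)        # stand-in for H/E/C states

      def window_features(profile, w):
          """Concatenate the rows of a (2w+1)-residue window centred on each position."""
          pad = np.zeros((w, profile.shape[1]))
          padded = np.vstack([pad, profile, pad])
          return np.array([padded[i:i + 2 * w + 1].ravel() for i in range(profile.shape[0])])

      best = None
      for w in (5, 7, 9):                        # candidate half-window sizes
          X = window_features(profile, w)
          search = GridSearchCV(SVC(kernel="rbf"), {"C": [1, 10], "gamma": [1e-3, 1e-2]}, cv=3)
          search.fit(X, labels)
          if best is None or search.best_score_ > best[2]:
              best = (w, search.best_params_, search.best_score_)
      print("half-window:", best[0], "params:", best[1], "cv accuracy:", round(best[2], 3))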

  15. Extracting Physicochemical Features to Predict Protein Secondary Structure

    PubMed Central

    Chen, Shu-Ying

    2013-01-01

    We propose a protein secondary structure prediction method based on position-specific scoring matrix (PSSM) profiles and four physicochemical features including conformation parameters, net charge, hydrophobicity, and side chain mass. First, the SVM with the optimal window size and the optimal parameters of the kernel function is found. Then, we train the SVM using the PSSM profiles generated from PSI-BLAST and the physicochemical features extracted from the CB513 data set. Finally, we use the filter to refine the predicted results from the trained SVM. For all the performance measures of our method, Q3 reaches 79.52, SOV94 reaches 86.10, and SOV99 reaches 74.60; all the measures are higher than those of the SVMpsi method and the SVMfreq method. This validates that considering these physicochemical features when predicting protein secondary structure yields better performance. PMID:23766688

  16. Transethosomal gels as carriers for the transdermal delivery of colchicine: statistical optimization, characterization, and ex vivo evaluation

    PubMed Central

    Abdulbaqi, Ibrahim M; Darwis, Yusrida; Assi, Reem Abou; Khan, Nurzalina Abdul Karim

    2018-01-01

    Introduction Colchicine is used for the treatment of gout, pseudo-gout, familial Mediterranean fever, and many other illnesses. Its oral administration is associated with poor bioavailability and severe gastrointestinal side effects. The drug is also known to have a low therapeutic index. Thus, to overcome these drawbacks, the transdermal delivery of colchicine was investigated using transethosomal gels as potential carriers. Methods Colchicine-loaded transethosomes (TEs) were prepared by the cold method and statistically optimized using three sets of 2⁴ factorial design experiments. The optimized formulations were incorporated into a Carbopol 940® gel base. The prepared colchicine-loaded transethosomal gels were further characterized for vesicular size, dispersity, zeta potential, drug content, pH, viscosity, yield, rheological behavior, and ex vivo skin permeation through Sprague Dawley rats’ back skin. Results The results showed that the colchicine-loaded TEs had an aspherical, irregular shape, a nanometric size range, and high entrapment efficiency. All the formulated gels exhibited non-Newtonian plastic flow without thixotropy. Colchicine-loaded transethosomal gels were able to significantly enhance the skin permeation parameters of the drug in comparison to the non-ethosomal gel. Conclusion These findings suggested that the transethosomal gels are promising carriers for the transdermal delivery of colchicine, providing an alternative route for drug administration. PMID:29670336

  17. Genome-Wide Chromosomal Targets of Oncogenic Transcription Factors

    DTIC Science & Technology

    2008-04-01

    [Figure caption fragments from the report: (a) comparison between STAGE and ChIP-chip when the same sample was analyzed by both methods, with the gray line indicating all predicted STAGE targets; numbers of single-hit tags (y-axis) plotted against the frequencies of those tags in the random (gray bars) and experimental (black bars) tag sets; a window size of 500 bp gave an optimal separation between random and real data.]

  18. Low energy isomers of (H2O)25 from a hierarchical method based on Monte Carlo Temperature Basin Paving and Molecular Tailoring Approaches benchmarked by full MP2 calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sahu, Nityananda; Gadre, Shridhar R.; Bandyopadhyay, Pradipta

    We report new global minimum candidate structures for the (H2O)25 cluster that are lower in energy than the ones reported previously and correspond to hydrogen bonded networks with 42 hydrogen bonds and an interior, fully coordinated water molecule. These were obtained as a result of a hierarchical approach based on initial Monte Carlo Temperature Basin Paving (MCTBP) sampling of the cluster's Potential Energy Surface (PES) with the Effective Fragment Potential (EFP), subsequent geometry optimization using the Molecular Tailoring fragmentation Approach (MTA) and final refinement at the second-order Møller-Plesset perturbation (MP2) level of theory. The MTA geometry optimizations used between 14 and 18 main fragments with maximum sizes between 11 and 14 water molecules and an average size of 10 water molecules, whose energies and gradients were computed at the MP2 level. The MTA-MP2 optimized geometries were found to be quite close (within < 0.5 kcal/mol) to the ones obtained from the MP2 optimization of the whole cluster. The grafting of the MTA-MP2 energies yields electronic energies that are within < 5×10⁻⁴ a.u. of the MP2 results for the whole cluster while preserving their energy order. The MTA-MP2 method was also found to reproduce the MP2 harmonic vibrational frequencies in both the HOH bending and the OH stretching regions.

  19. A multi-product green supply chain under government supervision with price and demand uncertainty

    NASA Astrophysics Data System (ADS)

    Hafezalkotob, Ashkan; Zamani, Soma

    2018-05-01

    In this paper, a bi-level game-theoretic model is proposed to investigate the effects of governmental financial intervention on a green supply chain. The problem is formulated as a bi-level program for a green supply chain that produces various products with different environmental pollution levels. The formulation also accounts for uncertainties in the market demand and the sale prices of raw materials and products. The model is further transformed into a single-level nonlinear programming problem by replacing the lower-level optimization problem with its Karush-Kuhn-Tucker optimality conditions. A genetic algorithm is applied as the solution methodology for the nonlinear programming model. Finally, to investigate the validity of the proposed method, the computational results obtained through the genetic algorithm are compared with the global optimal solution attained by an enumerative method. Analytical results indicate that the proposed GA offers better solutions in large-size problems. Also, we conclude that financial intervention by the government, consisting of green taxation and subsidization, is an effective method to stabilize the performance of green supply chain members.

  20. TU-EF-304-07: Monte Carlo-Based Inverse Treatment Plan Optimization for Intensity Modulated Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; UT Southwestern Medical Center, Dallas, TX; Tian, Z

    2015-06-15

    Purpose: Intensity-modulated proton therapy (IMPT) is increasingly used in proton therapy. For IMPT optimization, Monte Carlo (MC) is desired for spot dose calculations because of its high accuracy, especially in cases with a high level of heterogeneity. It is also preferred in biological optimization problems due to the capability of computing quantities related to biological effects. However, MC simulation is typically too slow to be used for this purpose. Although GPU-based MC engines have become available, the achieved efficiency is still not ideal. The purpose of this work is to develop a new optimization scheme to include GPU-based MC into IMPT. Methods: A conventional approach using MC in IMPT simply calls the MC dose engine repeatedly for each spot dose calculation. However, this is not the optimal approach, because of the unnecessary computations on some spots that turned out to have very small weights after solving the optimization problem. GPU-memory writing conflict occurring at a small beam size also reduces computational efficiency. To solve these problems, we developed a new framework that iteratively performs MC dose calculations and plan optimizations. At each dose calculation step, the particles were sampled from different spots altogether with a Metropolis algorithm, such that the particle number is proportional to the latest optimized spot intensity. Simultaneously transporting particles from multiple spots also mitigated the memory writing conflict problem. Results: We have validated the proposed MC-based optimization scheme in one prostate case. The total computation time of our method was ∼5–6 min on one NVIDIA GPU card, including both spot dose calculation and plan optimization, whereas a conventional method naively using the same GPU-based MC engine was ∼3 times slower. Conclusion: A fast GPU-based MC dose calculation method along with a novel optimization workflow is developed. The high efficiency makes it attractive for clinical usage.

  1. Engineering two-wire optical antennas for near field enhancement

    NASA Astrophysics Data System (ADS)

    Yang, Zhong-Jian; Zhao, Qian; Xiao, Si; He, Jun

    2017-07-01

    We study the optimization of near field enhancement in the two-wire optical antenna system. By varying the nanowire sizes we obtain the optimized side-length (width and height) for the maximum field enhancement with a given gap size. The optimized side-length applies to a broadband range (λ = 650-1000 nm). The ratio of extinction cross section to field concentration size is found to be closely related to the field enhancement behavior. We also investigate two experimentally feasible cases which are antennas on glass substrate and mirror, and find that the optimized side-length also applies to these systems. It is also found that the optimized side-length shows a tendency of increasing with the gap size. Our results could find applications in field-enhanced spectroscopies.

  2. Planning and Scheduling for Fleets of Earth Observing Satellites

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Jonsson, Ari; Morris, Robert; Smith, David E.; Norvig, Peter (Technical Monitor)

    2001-01-01

    We address the problem of scheduling observations for a collection of earth observing satellites. This scheduling task is a difficult optimization problem, potentially involving many satellites, hundreds of requests, constraints on when and how to service each request, and resources such as instruments, recording devices, transmitters, and ground stations. High-fidelity models are required to ensure the validity of schedules; at the same time, the size and complexity of the problem makes it unlikely that systematic optimization search methods will be able to solve them in a reasonable time. This paper presents a constraint-based approach to solving the Earth Observing Satellites (EOS) scheduling problem, and proposes a stochastic heuristic search method for solving it.

  3. Multidisciplinary Design Techniques Applied to Conceptual Aerospace Vehicle Design. Ph.D. Thesis Final Technical Report

    NASA Technical Reports Server (NTRS)

    Olds, John Robert; Walberg, Gerald D.

    1993-01-01

    Multidisciplinary design optimization (MDO) is an emerging discipline within aerospace engineering. Its goal is to bring structure and efficiency to the complex design process associated with advanced aerospace launch vehicles. Aerospace vehicles generally require input from a variety of traditional aerospace disciplines - aerodynamics, structures, performance, etc. As such, traditional optimization methods cannot always be applied. Several multidisciplinary techniques and methods were proposed as potentially applicable to this class of design problem. Among the candidate options are calculus-based (or gradient-based) optimization schemes and parametric schemes based on design of experiments theory. A brief overview of several applicable multidisciplinary design optimization methods is included. Methods from the calculus-based class and the parametric class are reviewed, but the research application reported focuses on methods from the parametric class. A vehicle of current interest was chosen as a test application for this research. The rocket-based combined-cycle (RBCC) single-stage-to-orbit (SSTO) launch vehicle combines elements of rocket and airbreathing propulsion in an attempt to produce an attractive option for launching medium-sized payloads into low Earth orbit. The RBCC SSTO presents a particularly difficult problem for traditional one-variable-at-a-time optimization methods because of the lack of an adequate experience base and the highly coupled nature of the design variables. MDO, however, with its structured approach to design, is well suited to this problem. The results of the application of Taguchi methods, central composite designs, and response surface methods to the design optimization of the RBCC SSTO are presented. Attention is given to the aspect of Taguchi methods that attempts to locate a 'robust' design - that is, a design that is least sensitive to uncontrollable influences on the design. Near-optimum minimum dry weight solutions are determined for the vehicle. A summary and evaluation of the various parametric MDO methods employed in the research are included. Recommendations for additional research are provided.

  4. Non-linear Multidimensional Optimization for use in Wire Scanner Fitting

    NASA Astrophysics Data System (ADS)

    Henderson, Alyssa; Terzic, Balsa; Hofler, Alicia; CASA and Accelerator Ops Collaboration

    2013-10-01

    To ensure experiment efficiency and quality from the Continuous Electron Beam Accelerator at Jefferson Lab, beam energy, size, and position must be measured. Wire scanners are devices inserted into the beamline to produce measurements which are used to obtain beam properties. Extracting physical information from the wire scanner measurements begins by fitting Gaussian curves to the data. This study focuses on optimizing and automating this curve-fitting procedure. We use a hybrid approach combining the efficiency of the Newton Conjugate Gradient (NCG) method with the global convergence of three nature-inspired (NI) optimization approaches: genetic algorithm, differential evolution, and particle swarm. In this Python-implemented approach, augmenting the locally-convergent NCG with one of the globally-convergent methods ensures the quality, robustness, and automation of curve-fitting. After comparing the methods, we establish that given an initial data-derived guess, each finds a solution with the same chi-square, a measure of the agreement of the fit with the data. NCG is the fastest method, so it is the first to attempt data-fitting. The curve-fitting procedure escalates to one of the globally-convergent NI methods only if NCG fails, thereby ensuring a successful fit. This method allows for an optimal signal fit and can be easily applied to similar problems. Financial support from DoE, NSF, ODU, DoD, and Jefferson Lab.
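
    The escalation strategy described above (fast local fit first, global search only on failure) can be sketched as follows. SciPy's curve_fit stands in for the NCG local step and differential evolution stands in for the nature-inspired global step; the Gaussian-plus-offset model, the initial-guess heuristics, and the synthetic beam profile are illustrative assumptions rather than the Jefferson Lab implementation.

      import numpy as np
      from scipy.optimize import curve_fit, differential_evolution

      def gaussian(x, amp, mu, sigma, offset):
          return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

      def chi_square(params, x, y):
          return np.sum((y - gaussian(x, *params)) ** 2)

      def fit_profile(x, y):
          # data-derived initial guess: peak height, centroid, rough width, baseline
          guess = [y.max() - y.min(), x[np.argmax(y)], 0.1 * (x[-1] - x[0]), y.min()]
          try:
              popt, _ = curve_fit(gaussian, x, y, p0=guess, maxfev=2000)   # fast local fit
              return popt, chi_square(popt, x, y)
          except RuntimeError:
              # escalate to a globally convergent search only if the local fit fails
              bounds = [(0.0, 10.0 * abs(guess[0]) + 1.0), (x[0], x[-1]),
                        (1e-6, x[-1] - x[0]), (y.min() - 1.0, y.max() + 1.0)]
              res = differential_evolution(chi_square, bounds, args=(x, y), seed=0)
              return res.x, res.fun

      rng = np.random.default_rng(3)
      x = np.linspace(-5.0, 5.0, 200)
      y = gaussian(x, 2.0, 0.7, 0.8, 0.1) + 0.05 * rng.normal(size=x.size)
      params, chi2 = fit_profile(x, y)
      print(params, chi2)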

  5. Voltage scheduling for low power/energy

    NASA Astrophysics Data System (ADS)

    Manzak, Ali

    2001-07-01

    Power considerations have become an increasingly dominant factor in the design of both portable and desk-top systems. An effective way to reduce power consumption is to lower the supply voltage, since power is quadratically related to voltage. This dissertation considers the problem of lowering the supply voltage at (i) the system level and at (ii) the behavioral level. At the system level, the voltage of the variable voltage processor is dynamically changed with the work load. Processors with limited-size buffers as well as those with very large buffers are considered. Given the task arrival times, deadline times, execution times, periods and switching activities, task scheduling algorithms that minimize energy or peak power are developed for the processors equipped with very large buffers. A relation between the operating voltages of the tasks for minimum energy/power is determined using the Lagrange multiplier method, and an iterative algorithm that utilizes this relation is developed. Experimental results show that the voltage assignment obtained by the proposed algorithm is very close to that of the optimal energy assignment (0.1% error) and the optimal peak power assignment (1% error). Next, on-line and off-line minimum-energy task scheduling algorithms are developed for processors with limited-size buffers. These algorithms have polynomial time complexity and present optimal (off-line) and close-to-optimal (on-line) solutions. A procedure to calculate the minimum buffer size given information about the size of the task (maximum, minimum), execution time (best case, worst case) and deadlines is also presented. At the behavioral level, resources operating at multiple voltages are used to minimize power while maintaining the throughput. Such a scheme has the advantage of allowing modules on the critical paths to be assigned to the highest voltage levels (thus meeting the required timing constraints) while allowing modules on non-critical paths to be assigned to lower voltage levels (thus reducing the power consumption). A polynomial time resource- and latency-constrained scheduling algorithm is developed to distribute the available slack among the nodes such that power consumption is minimized. The algorithm is iterative and utilizes the slack based on the Lagrange multiplier method.
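
    To illustrate the Lagrange-multiplier reasoning at the system level, the sketch below solves a deliberately simplified continuous-voltage assignment: per-cycle energy is taken proportional to V², clock frequency proportional to V, and all tasks share a single deadline. These modeling choices and the numbers are assumptions for illustration only, not the dissertation's formulation, which additionally handles arrival times, per-task deadlines, buffer limits, and switching activity.

```python
import numpy as np
from scipy.optimize import minimize

# Simplified model (an assumption): for task i with w_i cycles, energy_i = w_i * v_i**2
# and execution time_i = w_i / v_i (frequency proportional to voltage). Minimize total
# energy subject to the total execution time fitting within a common deadline.
w = np.array([2.0e6, 5.0e6, 1.0e6])   # task workloads in cycles
deadline = 1.2e7                       # available time, in cycle-periods at unit voltage

def energy(v):
    return np.sum(w * v ** 2)

constraints = [{"type": "ineq", "fun": lambda v: deadline - np.sum(w / v)}]
bounds = [(0.3, 1.0)] * len(w)         # normalized supply-voltage range
res = minimize(energy, x0=np.full(len(w), 1.0), bounds=bounds,
               constraints=constraints, method="SLSQP")
print("voltages:", res.x, "energy:", res.fun)
```

    In this simplified setting the Lagrange condition drives every task to the same voltage, which matches the intuition that spreading the work evenly over the available time minimizes energy.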

  6. Mean-Field Description of Ionic Size Effects with Non-Uniform Ionic Sizes: A Numerical Approach

    PubMed Central

    Zhou, Shenggao; Wang, Zhongming; Li, Bo

    2013-01-01

    Ionic size effects are significant in many biological systems. Mean-field descriptions of such effects can be efficient but also challenging. When ionic sizes are different, explicit formulas in such descriptions are not available for the dependence of the ionic concentrations on the electrostatic potential, i.e., there are no explicit, Boltzmann-type distributions. This work begins with a variational formulation of the continuum electrostatics of an ionic solution with such non-uniform ionic sizes as well as multiple ionic valences. An augmented Lagrange multiplier method is then developed and implemented to numerically solve the underlying constrained optimization problem. The method is shown to be accurate and efficient, and is applied to ionic systems with non-uniform ionic sizes such as the sodium chloride solution. Extensive numerical tests demonstrate that the mean-field model and numerical method capture qualitatively some significant ionic size effects, particularly those for multivalent ionic solutions, such as the stratification of multivalent counterions near a charged surface. The ionic valence-to-volume ratio is found to be the key physical parameter in the stratification of concentrations. All these are not well described by the classical Poisson–Boltzmann theory, or the generalized Poisson–Boltzmann theory that treats uniform ionic sizes. Finally, various issues such as the close packing, limitation of the continuum model, and generalization of this work to molecular solvation are discussed. PMID:21929014
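
    The augmented Lagrange multiplier idea used for the constrained optimization can be illustrated on a toy equality-constrained problem. The sketch below is generic and does not reproduce the paper's electrostatic free-energy functional or its size constraints; the objective, constraint, and penalty schedule are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize f(x) = x0**2 + 2*x1**2 subject to g(x) = x0 + x1 - 1 = 0.
def f(x):
    return x[0] ** 2 + 2 * x[1] ** 2

def g(x):
    return x[0] + x[1] - 1.0

def augmented_lagrangian(x, lam, mu):
    return f(x) + lam * g(x) + 0.5 * mu * g(x) ** 2

x = np.array([0.0, 0.0])
lam, mu = 0.0, 10.0
for _ in range(20):
    # Inner unconstrained minimization of the augmented Lagrangian.
    x = minimize(lambda z: augmented_lagrangian(z, lam, mu), x, method="BFGS").x
    # Multiplier update; increase the penalty if the constraint is still violated.
    lam += mu * g(x)
    if abs(g(x)) > 1e-8:
        mu *= 2.0
    else:
        break

print(x, g(x))   # analytic solution: x = (2/3, 1/3)
```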

  7. Study of flutter related computational procedures for minimum weight structural sizing of advanced aircraft, supplemental data

    NASA Technical Reports Server (NTRS)

    Oconnell, R. F.; Hassig, H. J.; Radovcich, N. A.

    1975-01-01

    Computational aspects of (1) flutter optimization (minimization of structural mass subject to specified flutter requirements), (2) methods for solving the flutter equation, and (3) efficient methods for computing generalized aerodynamic force coefficients in the repetitive analysis environment of computer-aided structural design are discussed. Specific areas included: a two-dimensional Regula Falsi approach to solving the generalized flutter equation; method of incremented flutter analysis and its applications; the use of velocity potential influence coefficients in a five-matrix product formulation of the generalized aerodynamic force coefficients; options for computational operations required to generate generalized aerodynamic force coefficients; theoretical considerations related to optimization with one or more flutter constraints; and expressions for derivatives of flutter-related quantities with respect to design variables.
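
    The flutter application above uses a two-dimensional Regula Falsi iteration on the generalized flutter equation; as a reminder of the underlying bracketing idea, a minimal one-dimensional regula falsi root finder is sketched below. The test function is arbitrary and the sketch does not attempt the two-variable (speed/frequency) version used in flutter analysis.

```python
def regula_falsi(f, a, b, tol=1e-10, max_iter=100):
    """Find a root of f in [a, b] by the method of false position (regula falsi)."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must bracket a root")
    c = a
    for _ in range(max_iter):
        # Secant-style interpolation point, always kept inside the bracket.
        c = b - fb * (b - a) / (fb - fa)
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

# Usage: root of a simple cubic that changes sign on [1, 3].
print(regula_falsi(lambda v: v**3 - 2*v - 5, 1.0, 3.0))
```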

  8. Model-based Estimation for Pose, Velocity of Projectile from Stereo Linear Array Image

    NASA Astrophysics Data System (ADS)

    Zhao, Zhuxin; Wen, Gongjian; Zhang, Xing; Li, Deren

    2012-01-01

    The pose (position and attitude) and velocity of an in-flight projectile have a major influence on its performance and accuracy. A cost-effective method for measuring gun-boosted projectiles is proposed. The method uses a single linear-array image collected by a stereo vision system that combines a digital line-scan camera with a mirror near the muzzle. From the projectile's stereo image, the motion parameters (pose and velocity) are acquired using a model-based optimization algorithm. The algorithm achieves optimal estimation of the parameters by matching the stereo projection of the projectile with that of a 3D model of the same size. The speed and the AOA (angle of attack) can subsequently be determined. Experiments were conducted to test the proposed method.

  9. DMS cyclone separation processes for optimization of plastic wastes recycling and their implications.

    PubMed

    Gent, Malcolm Richard; Menendez, Mario; Toraño, Javier; Torno, Susana

    2011-06-01

    It is demonstrated that substantial reductions in the plastics presently disposed of in landfills can be achieved by cyclone density media separation (DMS). In comparison with the size fraction of plastics presently processed by industrial density separations (generally 6.4 to 9.5 mm), cyclone DMS methods are demonstrated to effectively process a substantially greater range of particle sizes (from 0.5 up to 120 mm). A single-stage separation using a cylindrical cyclone is shown to attain virtually 100% purity and recoveries >99% for high-density fractions, and >98% purity and recovery for low-density products. Four alternative schemas of multi-stage separations are presented and analyzed as proposed methods to obtain total low- and high-density plastics fraction recoveries while maintaining near 100% purities. Preliminary tests of two of these schemas indicate the potential for product purities and recoveries >99.98% for both density fractions. A preliminary economic comparison of capital costs suggests that cyclone DMS methods are comparable with other DMS processes even when their high volume capacity is not fully exploited for recycling operations.

  10. Optimal reactive planning with security constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, W.R.; Cheng, D.T.Y.; Dixon, A.M.

    1995-12-31

    The National Grid Company (NGC) of England and Wales has developed a computer program, SCORPION, to help system planners optimize the location and size of new reactive compensation plant on the transmission system. The reactive power requirements of the NGC system have risen as a result of increased power flows and the shorter timescale on which power stations are commissioned and withdrawn from service. In view of the high costs involved, it is important that reactive compensation be installed as economically as possible, without compromising security. Traditional methods based on iterative use of a load flow program are labor intensive and subjective. SCORPION determines a near-optimal pattern of new reactive sources which are required to satisfy voltage constraints for normal and contingent states of operation of the transmission system. The algorithm processes the system states sequentially, instead of optimizing all of them simultaneously. This allows a large number of system states to be considered with an acceptable run time and computer memory requirement. Installed reactive sources are treated as continuous, rather than discrete, variables. However, the program has a restart facility which enables the user to add realistically sized reactive sources explicitly and thereby work towards a realizable solution to the planning problem.

  11. Modelling and optimization of semi-solid processing of 7075 Al alloy

    NASA Astrophysics Data System (ADS)

    Binesh, B.; Aghaie-Khafri, M.

    2017-09-01

    The new modified strain-induced melt activation (SIMA) process presented by Binesh and Aghaie-Khafri was optimized using a response surface methodology to improve the thixotropic characteristics of semi-solid 7075 alloy. The responses, namely the average grain size and the shape factor, were considered as functions of three independent input variables: effective strain, isothermal holding temperature and time. Mathematical models for the responses were developed using the regression analysis technique, and the adequacy of the models was validated by the analysis of variance method. The calculated results correlated fairly well with the experiments. It was found that all the first- and second-order terms of the independent parameters and the interactive terms of the effective strain and holding time were statistically significant for the responses. In order to simultaneously optimize the responses, the desirable values for the effective strain, holding temperature and time were predicted to be 5.1, 609 °C and 14 min, respectively, when employing the desirability function approach. Based on the optimization results, a significant improvement in the average grain size and shape factor of the semi-solid slurry prepared by the new modified SIMA process was observed.
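
    The general response-surface workflow used here (fit a second-order polynomial to designed-experiment data, then search the fitted surface for the optimum settings) can be sketched as follows. The factors, data, and coefficients are synthetic placeholders, not the 7075 alloy measurements, and only two of the three factors are shown for brevity.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical coded factors: x1 ~ effective strain, x2 ~ holding time.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(15, 2))
# Hypothetical response (e.g., average grain size) with a built-in minimum plus noise.
y = 40 + 8 * (X[:, 0] - 0.3) ** 2 + 5 * (X[:, 1] + 0.2) ** 2 \
    + 3 * X[:, 0] * X[:, 1] + rng.normal(0, 0.5, size=15)

# Design matrix for a full second-order (quadratic) response surface.
def features(x1, x2):
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

beta, *_ = np.linalg.lstsq(features(X[:, 0], X[:, 1]), y, rcond=None)

def predicted(x):
    return features(np.array([x[0]]), np.array([x[1]]))[0] @ beta

# Search the fitted surface for the factor settings minimizing the response.
opt = minimize(predicted, x0=[0.0, 0.0], bounds=[(-1, 1), (-1, 1)])
print("fitted coefficients:", beta)
print("optimal coded settings:", opt.x)
```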

  12. Optimizing cost-efficiency in mean exposure assessment - cost functions reconsidered

    PubMed Central

    2011-01-01

    Background Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Methods Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied on 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Results Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. Conclusions The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions however impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios. PMID:21600023
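
    A reduced, two-stage version of the allocation problem (subjects times repeated measurements, with power-function cost terms) can be searched by brute force as sketched below. The variance components, unit costs, and exponents are illustrative assumptions, and the study itself treats a three-stage nested model.

```python
import numpy as np

# Illustrative variance components and cost parameters (not from the study).
var_between, var_within = 4.0, 9.0        # between-subject and within-subject variance
c_subject, c_measure = 50.0, 10.0         # unit costs for subjects and measurements
a_subject, a_measure = 1.0, 0.8           # cost-function exponents (non-linear if != 1)
budget = 2000.0

def variance_of_mean(n_subjects, n_repeats):
    # Precision of the estimated group mean exposure in a two-stage nested model.
    return var_between / n_subjects + var_within / (n_subjects * n_repeats)

def total_cost(n_subjects, n_repeats):
    return (c_subject * n_subjects ** a_subject
            + c_measure * (n_subjects * n_repeats) ** a_measure)

best = None
for n in range(2, 200):                   # number of subjects
    for k in range(1, 20):                # measurements per subject
        if total_cost(n, k) <= budget:
            v = variance_of_mean(n, k)
            if best is None or v < best[0]:
                best = (v, n, k)

print("min variance %.4f with %d subjects x %d repeats" % best)
```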

  13. Analysis of the solar/wind resources in Southern Spain for optimal sizing of hybrid solar-wind power generation systems

    NASA Astrophysics Data System (ADS)

    Quesada-Ruiz, S.; Pozo-Vazquez, D.; Santos-Alamillos, F. J.; Lara-Fanego, V.; Ruiz-Arias, J. A.; Tovar-Pescador, J.

    2010-09-01

    A drawback common to solar and wind energy systems is their unpredictable nature and dependence on weather and climate over a wide range of time scales. In addition, the variation of the energy output may not match the time distribution of the load demand. This can be partially solved by the use of batteries for energy storage in stand-alone systems. The problem caused by the variable nature of the solar and wind resources can also be partially overcome by using energy systems that exploit both renewable resources in a combined manner, that is, hybrid wind-solar systems. Since the two resources can show complementary characteristics at certain locations, the independent use of solar or wind systems results in considerable oversizing of the battery system compared to the use of hybrid solar-wind systems. Nevertheless, to date there is no single recognized method for properly sizing these hybrid wind-solar systems. In this work, we present a method for sizing wind-solar hybrid systems in southern Spain. The method is based on the analysis of the wind and solar resources on a daily scale, particularly their temporal complementarity. The method aims to minimize the size of the energy storage system while providing the most reliable supply.

  14. Clustering methods for the optimization of atomic cluster structure

    NASA Astrophysics Data System (ADS)

    Bagattini, Francesco; Schoen, Fabio; Tigli, Luca

    2018-04-01

    In this paper, we propose a revised global optimization method and apply it to large scale cluster conformation problems. In the 1990s, the so-called clustering methods were considered among the most efficient general purpose global optimization techniques; however, their usage has quickly declined in recent years, mainly due to the inherent difficulties of clustering approaches in high-dimensional spaces. Inspired by the machine learning literature, we redesigned clustering methods in order to deal with molecular structures in a reduced feature space. Our aim is to show that by suitably choosing a good set of geometrical features coupled with a very efficient descent method, an effective optimization tool is obtained which is capable of finding, with a very high success rate, all known putative optima for medium-sized clusters without any prior information, both for Lennard-Jones and Morse potentials. The main result is that, beyond being a reliable approach, the proposed method, based on the idea of starting a computationally expensive deep local search only when it seems worth doing so, is capable of saving a huge amount of searches with respect to an analogous algorithm which does not employ a clustering phase. In this paper, we are not claiming the superiority of the proposed method compared to specific, refined, state-of-the-art procedures, but rather indicating a quite straightforward way to save local searches by means of a clustering scheme working in a reduced variable space, which might prove useful when included in many modern methods.
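
    The core idea, spending expensive local searches only on one representative per cluster of promising sample points, can be sketched on a standard multimodal test function. The feature space, clustering choice (plain k-means here), and objective are placeholders rather than the paper's geometrical features or atomic-cluster potentials.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.cluster.vq import kmeans2

def rastrigin(x):
    return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

rng = np.random.default_rng(0)
samples = rng.uniform(-5.12, 5.12, size=(400, 2))
values = np.array([rastrigin(s) for s in samples])

# Cluster the sampled points; one representative per cluster replaces
# hundreds of expensive local searches.
centroids, labels = kmeans2(samples, k=20, minit="points", seed=0)
best_overall = None
for c in range(20):
    members = np.where(labels == c)[0]
    if members.size == 0:
        continue
    rep = samples[members[np.argmin(values[members])]]   # best point in the cluster
    res = minimize(rastrigin, rep, method="L-BFGS-B",
                   bounds=[(-5.12, 5.12)] * 2)
    if best_overall is None or res.fun < best_overall.fun:
        best_overall = res

print(best_overall.x, best_overall.fun)   # Rastrigin's global minimum is at the origin
```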

  15. Spin Glass Patch Planting

    NASA Technical Reports Server (NTRS)

    Wang, Wenlong; Mandra, Salvatore; Katzgraber, Helmut G.

    2016-01-01

    In this paper, we propose a patch planting method for creating arbitrarily large spin glass instances with known ground states. The scaling of the computational complexity of these instances with various block numbers and sizes is investigated and compared with random instances using population annealing Monte Carlo and the quantum annealing DW2X machine. The method can be useful for benchmarking tests for future generation quantum annealing machines, classical and quantum mechanical optimization algorithms.

  16. Optimizing the design and in vitro evaluation of bioreactive glucose oxidase-microspheres for enhanced cytotoxicity against multidrug resistant breast cancer cells.

    PubMed

    Cheng, Ji; Liu, Qun; Shuhendler, Adam J; Rauth, Andrew M; Wu, Xiao Yu

    2015-06-01

    Glucose oxidase (GOX) encapsulated in alginate-chitosan microspheres (GOX-MS) was shown in our previous work to produce reactive oxygen species (ROS) in situ and exhibit anticancer effects in vitro and in vivo. The purpose of present work was to optimize the design and thus enhance the efficacy of GOX-MS against multidrug resistant (MDR) cancer cells. GOX-MS with different mean diameters of 4, 20 or 140 μm were prepared using an emulsification-internal gelation-adsorption-chitosan coating method with varying compositions and conditions. The GOX loading efficiency, loading level, relative bioactivity of GOX-MS, and GOX leakage were determined and optimal chitosan concentrations in the coating solution were identified. The influence of particle size on cellular uptake, ROS generation, cytotoxicity and their underlying mechanisms was investigated. At the same GOX dose and incubation time, smaller sized GOX-MS produced larger amounts of H2O2 in cell culture medium and greater cytotoxicity toward murine breast cancer MDR (EMT6/AR1.0) and wild type (EMT6/WT) cells. Fluorescence and confocal laser scanning microscopy revealed significant uptake of small sized (4 μm) GOX-MS by both MDR and WT cells, but no cellular uptake of large (140 μm) GOX-MS. The GOX-MS were equally effective in killing both MDR cells and WT cells. The cytotoxicity of the GOX formulations was positively correlated with membrane damage and lipid peroxidation. GOX-MS induced greater membrane damage and lipid peroxidation in MDR cells than the WT cells. These results suggest that the optimized, small micron-sized GOX-MS are highly effective against MDR breast cancer cells. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Standing wave design and optimization of a simulated moving bed chromatography for separation of xylobiose and xylose under the constraints on product concentration and pressure drop.

    PubMed

    Lee, Chung-Gi; Choi, Jae-Hwan; Park, Chanhun; Wang, Nien-Hwa Linda; Mun, Sungyong

    2017-12-08

    The feasibility of a simulated moving bed (SMB) technology for the continuous separation of high-purity xylobiose (X2) from the output of a β-xylosidase X1→X2 reaction has recently been confirmed. To ensure high economical efficiency of the X2 production method based on the use of xylose (X1) as a starting material, it is essential to accomplish the comprehensive optimization of the X2-separation SMB process in such a way that its X2 productivity can be maximized while maintaining the X2 product concentration from the SMB as high as possible in consideration of a subsequent lyophilization step. To address this issue, a suitable SMB optimization tool for the aforementioned task was prepared based on standing wave design theory. The prepared tool was then used to optimize the SMB operation parameters, column configuration, total column number, adsorbent particle size, and X2 yield while meeting the constraints on X2 purity, X2 product concentration, and pressure drop. The results showed that the use of a larger particle size caused the productivity to be limited by the constraint on X2 product concentration, and a maximum productivity was attained by choosing the particle size such that the effect of the X2-concentration limiting factor could be balanced with that of pressure-drop limiting factor. If the target level of X2 product concentration was elevated, higher productivity could be achieved by decreasing particle size, raising the level of X2 yield, and increasing the column number in the zones containing the front and rear of X2 solute band. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Optimization of the Critical Parameters of the Spherical Agglomeration Crystallization Method by the Application of the Quality by Design Approach.

    PubMed

    Gyulai, Orsolya; Kovács, Anita; Sovány, Tamás; Csóka, Ildikó; Aigner, Zoltán

    2018-04-20

    This research work presents the use of the Quality by Design (QbD) concept for optimization of the spherical agglomeration crystallization method in the case of the active agent, ambroxol hydrochloride (AMB HCl). AMB HCl spherical crystals were formulated by the spherical agglomeration method, which was applied as an antisolvent technique. Spherical crystals have good flowing properties, which makes the direct compression tableting method applicable. This means that the amount of additives used can be reduced and smaller tablets can be formed. For the risk assessment, LeanQbD Software was used. According to its results, four independent variables (mixing type and time, dT (temperature difference between solvent and antisolvent), and composition (solvent/antisolvent volume ratio)) and three dependent variables (mean particle size, aspect ratio, and roundness) were selected. Based on these, a 2-3 mixed-level factorial design was constructed, crystallization was accomplished, and the results were evaluated using the Statistica for Windows 13 program. Product assays revealed improvements in the mean particle size (from ~13 to ~200 µm), roundness (from ~2.4 to ~1.5), aspect ratio (from ~1.7 to ~1.4), and flow properties, while polymorphic transitions were avoided.

  19. Super-cool paints: optimizing composition with a modified four-flux model

    NASA Astrophysics Data System (ADS)

    Gali, Marc A.; Arnold, Matthew D.; Gentle, Angus R.; Smith, Geoffrey B.

    2017-09-01

    The scope for maximizing the albedo of a painted surface to produce low cost new and retro-fitted super-cool roofing is explored systematically. The aim is easy-to-apply, low-cost paint formulations yielding albedos in the range 0.90 to 0.95. This requires raising the near-infrared (NIR) spectral reflectance into this range, while not reducing the more easily obtained high visible reflectance values. Our modified version of the four-flux method has enabled results on more complex composites. Key parameters to be optimized include fill factors, particle size and material (including the use of more than one mean size), thickness, and substrate and binder materials. The model used is a variation of the classical four-flux method that solves the energy transfer problem through four balance differential equations. We use a different approach to the characteristic parameters to define the absorptance and scattering of the complete composite. This generalization allows the inclusion of size dispersion of the pigment particles and of various binder resins, including the acrylic-based resins most commonly in use. Thus, the pigment scattering model has to take account of the matrix having loss in the NIR. A paint ranking index, representing the fraction of time at a sub-ambient temperature, is introduced and is aimed specifically at separating paints with albedo above 0.80.

  20. Resource Allocation and Seed Size Selection in Perennial Plants under Pollen Limitation.

    PubMed

    Huang, Qiaoqiao; Burd, Martin; Fan, Zhiwei

    2017-09-01

    Pollen limitation may affect resource allocation patterns in plants, but its role in the selection of seed size is not known. Using an evolutionarily stable strategy model of resource allocation in perennial iteroparous plants, we show that under density-independent population growth, pollen limitation (i.e., a reduction in ovule fertilization rate) should increase the optimal seed size. At any level of pollen limitation (including none), the optimal seed size maximizes the ratio of juvenile survival rate to the resource investment needed to produce one seed (including both ovule production and seed provisioning); that is, the optimum maximizes the fitness effect per unit cost. Seed investment may affect allocation to postbreeding adult survival. In our model, pollen limitation increases individual seed size but decreases overall reproductive allocation, so that pollen limitation should also increase the optimal allocation to postbreeding adult survival. Under density-dependent population growth, the optimal seed size is inversely proportional to ovule fertilization rate. However, pollen limitation does not affect the optimal allocation to postbreeding adult survival and ovule production. These results highlight the importance of allocation trade-offs in the effect pollen limitation has on the ecology and evolution of seed size and postbreeding adult survival in perennial plants.

  1. Optimization of the fabrication of novel stealth PLA-based nanoparticles by dispersion polymerization using D-optimal mixture design.

    PubMed

    Adesina, Simeon K; Wight, Scott A; Akala, Emmanuel O

    2014-11-01

    Nanoparticle size is important in drug delivery. Clearance of nanoparticles by cells of the reticuloendothelial system has been reported to increase with increasing particle size. Further, nanoparticles should be small enough to avoid lung or spleen filtering effects. Endocytosis and accumulation in tumor tissue by the enhanced permeability and retention effect are also processes that are influenced by particle size. We present the results of studies designed to optimize cross-linked biodegradable stealth polymeric nanoparticles fabricated by dispersion polymerization. Nanoparticles were fabricated using different amounts of macromonomer, initiators, crosslinking agent and stabilizer in a dioxane/DMSO/water solvent system. Nanoparticle formation was confirmed by scanning electron microscopy (SEM). Particle size was measured by dynamic light scattering (DLS). A D-optimal mixture statistical experimental design was used for the experimental runs, followed by model generation (Scheffé polynomial) and optimization with the aid of computer software. Model verification was done by comparing particle size data of some suggested solutions to the predicted particle sizes. The data showed that average particle sizes follow the same trend as predicted by the model. Negative terms in the model corresponding to the cross-linking agent and stabilizer indicate the important factors for minimizing particle size.

  2. [Survival strategy of photosynthetic organisms. 1. Variability of the extent of light-harvesting pigment aggregation as a structural factor optimizing the function of oligomeric photosynthetic antenna. Model calculations].

    PubMed

    Fetisova, Z G

    2004-01-01

    In accordance with our concept of rigorous optimization of photosynthetic machinery by a functional criterion, this series of papers continues purposeful search in natural photosynthetic units (PSU) for the basic principles of their organization that we predicted theoretically for optimal model light-harvesting systems. This approach allowed us to determine the basic principles for the organization of a PSU of any fixed size. This series of papers deals with the problem of structural optimization of light-harvesting antenna of variable size controlled in vivo by the light intensity during the growth of organisms, which accentuates the problem of antenna structure optimization because optimization requirements become more stringent as the PSU increases in size. In this work, using mathematical modeling for the functioning of natural PSUs, we have shown that the aggregation of pigments of model light-harvesting antenna, being one of universal optimizing factors, furthermore allows controlling the antenna efficiency if the extent of pigment aggregation is a variable parameter. In this case, the efficiency of antenna increases with the size of the elementary antenna aggregate, thus ensuring the high efficiency of the PSU irrespective of its size; i.e., variation in the extent of pigment aggregation controlled by the size of light-harvesting antenna is biologically expedient.

  3. Relationships of maternal body size and morphology with egg and clutch size in the diamondback terrapin, Malaclemys terrapin (Testudines: Emydidae)

    USGS Publications Warehouse

    Kern, Maximilian M.; Guzy, Jacquelyn C.; Lovich, Jeffrey E.; Gibbons, J. Whitfield; Dorcas, Michael E.

    2016-01-01

    Because resources are finite, female animals face trade-offs between the size and number of offspring they are able to produce during a single reproductive event. Optimal egg size (OES) theory predicts that any increase in resources allocated to reproduction should increase clutch size with minimal effects on egg size. Variations of OES predict that egg size should be optimized, although not necessarily constant across a population, because optimality is contingent on maternal phenotypes, such as body size and morphology, and recent environmental conditions. We examined the relationships among body size variables (pelvic aperture width, caudal gap height, and plastron length), clutch size, and egg width of diamondback terrapins from separate but proximate populations at Kiawah Island and Edisto Island, South Carolina. We found that terrapins do not meet some of the predictions of OES theory. Both populations exhibited greater variation in egg size among clutches than within, suggesting an absence of optimization except as it may relate to phenotype/habitat matching. We found that egg size appeared to be constrained by more than just pelvic aperture width in Kiawah terrapins but not in the Edisto population. Terrapins at Edisto appeared to exhibit osteokinesis in the caudal region of their shells, which may aid in the oviposition of large eggs.

  4. Alcohol Warning Label Awareness and Attention: A Multi-method Study.

    PubMed

    Pham, Cuong; Rundle-Thiele, Sharyn; Parkinson, Joy; Li, Shanshi

    2018-01-01

    Evaluation of alcohol warning labels requires careful consideration to ensure that research captures more than awareness, given that labels may not be prominent enough to attract attention. This study investigates attention to current in-market alcohol warning labels and examines whether attention can be enhanced through theoretically informed design. Attention scores obtained through self-report methods are compared to objective measures (eye-tracking). A multi-method experimental design was used delivering four conditions, namely control, colour, size, and colour and size. The first study (n = 559) involved a self-report survey to measure attention. The second study (n = 87) utilized eye-tracking to measure fixation count, fixation duration, and time to first fixation. Analysis of variance (ANOVA) was utilized. Eye-tracking identified that 60% of participants looked at the current in-market alcohol warning label while 81% looked at the optimized design (larger and red). In line with the observed attention, self-reported attention increased for the optimized design. The current study casts doubt on dominant practices (largely self-report) that have been used to evaluate alcohol warning labels. Awareness cannot be used in isolation to assess warning label effectiveness in cases where attention does not occur 100% of the time. Mixed methods permit objective data collection methodologies to be triangulated with surveys to assess warning label effectiveness. Attention should be incorporated as a measure in warning label effectiveness evaluations. Colour and size changes to the existing Australian warning labels, aided by theoretically informed design, increased attention. © The Author 2017. Medical Council on Alcohol and Oxford University Press. All rights reserved.

  5. Conceptual Design and Structural Optimization of NASA Environmentally Responsible Aviation (ERA) Hybrid Wing Body Aircraft

    NASA Technical Reports Server (NTRS)

    Quinlan, Jesse R.; Gern, Frank H.

    2016-01-01

    Simultaneously achieving the fuel consumption and noise reduction goals set forth by NASA's Environmentally Responsible Aviation (ERA) project requires innovative and unconventional aircraft concepts. In response, advanced hybrid wing body (HWB) aircraft concepts have been proposed and analyzed as a means of meeting these objectives. For the current study, several HWB concepts were analyzed using the Hybrid wing body Conceptual Design and structural optimization (HCDstruct) analysis code. HCDstruct is a medium-fidelity finite element based conceptual design and structural optimization tool developed to fill the critical analysis gap existing between lower order structural sizing approaches and detailed, often finite element based sizing methods for HWB aircraft concepts. Whereas prior versions of the tool used a half-model approach in building the representative finite element model, a full wing-tip-to-wing-tip modeling capability was recently added to HCDstruct, which alleviated the symmetry constraints at the model centerline in place of a free-flying model and allowed for more realistic center body, aft body, and wing loading and trim response. The latest version of HCDstruct was applied to two ERA reference cases, including the Boeing Open Rotor Engine Integration On an HWB (OREIO) concept and the Boeing ERA-0009H1 concept, and results agreed favorably with detailed Boeing design data and related Flight Optimization System (FLOPS) analyses. Following these benchmark cases, HCDstruct was used to size NASA's ERA HWB concepts and to perform a related scaling study.

  6. Formulation and optimization by experimental design of eco-friendly emulsions based on d-limonene.

    PubMed

    Pérez-Mosqueda, Luis M; Trujillo-Cayado, Luis A; Carrillo, Francisco; Ramírez, Pablo; Muñoz, José

    2015-04-01

    d-Limonene is a naturally occurring solvent that can replace more polluting chemicals in agrochemical formulations. In the present work, a comprehensive study of the influence of the dispersed phase mass fraction, ϕ, and of the surfactant/oil ratio, R, on the emulsion stability and droplet size distribution of d-limonene-in-water emulsions stabilized by a non-ionic triblock copolymer surfactant has been carried out. A full factorial 3² experimental design was conducted in order to optimize the emulsion formulation. The independent variables, ϕ and R, were studied in the ranges 10-50 wt% and 0.02-0.1, respectively. The emulsions studied were mainly destabilized by both creaming and Ostwald ripening. Therefore, the initial droplet size and an overall destabilization parameter, the so-called turbiscan stability index, were used as dependent variables. The optimal formulation, comprising minimum droplet size and maximum stability, was achieved at ϕ = 50 wt% and R = 0.062. Furthermore, the surface response methodology allowed us to obtain a formulation yielding sub-micron emulsions by using a single-step rotor/stator homogenizer process instead of the more commonly used two-step emulsification methods. In addition, the optimal formulation was further improved against Ostwald ripening by adding silicone oil to the dispersed phase. The combination of these experimental findings allowed us to gain a deeper insight into the stability of these emulsions, which can be applied to the rational development of new formulations with potential application in agrochemical formulations. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Variable porosity of the pipeline embolization device in straight and curved vessels: a guide for optimal deployment strategy.

    PubMed

    Shapiro, M; Raz, E; Becske, T; Nelson, P K

    2014-04-01

    Low-porosity endoluminal devices for the treatment of intracranial aneurysms, also known as flow diverters, have been in experimental and clinical use for close to 10 years. Despite rigorous evidence of their safety and efficacy in well-controlled trials, a number of key factors concerning their use remain poorly defined. Among these, none has received more attention to date than the debate on how many devices are optimally required to achieve a safe, effective, and economical outcome. Additional, related questions concern device sizing relative to the parent artery and optimal method of deployment of the devices. While some or all of these issues may be ultimately answered on an empiric basis via subgroup analysis of growing treatment cohorts, we believe that careful in vitro examination of relevant device properties can also help guide its in vivo use. We conducted a number of benchtop experiments to investigate the varied porosity of Pipeline Embolization Devices deployed in a simulated range of parent vessel diameters and applied these results toward conceptualizing optimal treatment strategies of fusiform and wide-neck aneurysms. The results of our studies confirm a predictable parabolic variability in device porosity based on the respective comparative sizes of the device and recipient artery, as well as device curvature. Even modest oversizing leads to a significant increase in porosity. The experiments demonstrate various deleterious effects of device oversizing relative to the parent artery and provide strategies for addressing size mismatches when they are unavoidable.

  8. Solid lipid nanoparticles as vesicles for oral delivery of olmesartan medoxomil: formulation, optimization and in vivo evaluation.

    PubMed

    Nooli, Mounika; Chella, Naveen; Kulhari, Hitesh; Shastri, Nalini R; Sistla, Ramakrishna

    2017-04-01

    Olmesartan medoxomil (OLM) is an antihypertensive drug with low oral bioavailability (28%) resulting from poor aqueous solubility, presystemic metabolism and P-glycoprotein mediated efflux. The present investigation studies the role of lipid nanocarriers in enhancing the OLM bioavailability through oral delivery. Solid lipid nanoparticles (SLN) were prepared by a solvent emulsion-evaporation method. Statistical tools such as regression analysis and Pareto charts were used to detect the important factors affecting the formulations. Formulation and process parameters were then optimized using mean effect plots and contour plots. The formulations were characterized for particle size, size distribution, surface charge, percentage of drug entrapped in the nanoparticles, drug-excipient interactions, powder X-ray diffraction analysis and in vitro drug release. The optimized formulation comprised glyceryl monostearate, soya phosphatidylcholine and Tween 80 as lipid, co-emulsifier and surfactant, respectively, with an average particle size of 100 nm, PDI of 0.291, zeta potential of -23.4 mV and 78% entrapment efficiency. Pharmacokinetic evaluation in male Sprague Dawley rats revealed a 2.32-fold enhancement in the relative bioavailability of the drug from SLN when compared to that of plain OLM on oral administration. In conclusion, SLN show promise as a vehicle for oral delivery of drugs like OLM.

  9. Gram-scale fractionation of nanodiamonds by density gradient ultracentrifugation.

    PubMed

    Peng, Wei; Mahfouz, Remi; Pan, Jun; Hou, Yuanfang; Beaujuge, Pierre M; Bakr, Osman M

    2013-06-07

    Size is a defining characteristic of nanoparticles; it influences their optical and electronic properties as well as their interactions with molecules and macromolecules. Producing nanoparticles with narrow size distributions remains one of the main challenges to their utilization. At this time, the number of practical approaches to optimize the size distribution of nanoparticles in many interesting materials systems, including diamond nanocrystals, remains limited. Diamond nanocrystals synthesized by detonation protocols - so-called detonation nanodiamonds (DNDs) - are promising systems for drug delivery, photonics, and composites. DNDs are composed of primary particles with diameters mainly <10 nm and their aggregates (ca. 10-500 nm). Here, we introduce a large-scale approach to rate-zonal density gradient ultracentrifugation to obtain monodispersed fractions of nanoparticles in high yields. We use this method to fractionate a highly concentrated and stable aqueous solution of DNDs and to investigate the size distribution of various fractions by dynamic light scattering, analytical ultracentrifugation, transmission electron microscopy and powder X-ray diffraction. This fractionation method enabled us to separate gram-scale amounts of DNDs into several size ranges within a relatively short period of time. In addition, the high product yields obtained for each fraction allowed us to apply the fractionation method iteratively to a particular size range of particles and to collect various fractions of highly monodispersed primary particles. Our method paves the way for in-depth studies of the physical and optical properties, growth, and aggregation mechanism of DNDs. Applications requiring DNDs with specific particle or aggregate sizes are now within reach.

  10. Addressing the minimum fleet problem in on-demand urban mobility.

    PubMed

    Vazifeh, M M; Santi, P; Resta, G; Strogatz, S H; Ratti, C

    2018-05-01

    Information and communication technologies have opened the way to new solutions for urban mobility that provide better ways to match individuals with on-demand vehicles. However, a fundamental unsolved problem is how best to size and operate a fleet of vehicles, given a certain demand for personal mobility. Previous studies [1-5] either do not provide a scalable solution or require changes in human attitudes towards mobility. Here we provide a network-based solution to the following 'minimum fleet problem': given a collection of trips (specified by origin, destination and start time), determine the minimum number of vehicles needed to serve all the trips without incurring any delay to the passengers. By introducing the notion of a 'vehicle-sharing network', we present an optimal computationally efficient solution to the problem, as well as a nearly optimal solution amenable to real-time implementation. We test both solutions on a dataset of 150 million taxi trips taken in the city of New York over one year [6]. The real-time implementation of the method with near-optimal service levels allows a 30 per cent reduction in fleet size compared to current taxi operation. Although constraints on driver availability and the existence of abnormal trip demands may lead to a relatively larger optimal value for the fleet size than that predicted here, the fleet size remains robust for a wide range of variations in historical trip demand. These predicted reductions in fleet size follow directly from a reorganization of taxi dispatching that could be implemented with a simple urban app; they do not assume ride sharing [7-9], nor require changes to regulations, business models, or human attitudes towards mobility to become effective. Our results could become even more relevant in the years ahead as fleets of networked, self-driving cars become commonplace [10-14].
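
    The 'vehicle-sharing network' idea can be illustrated as a minimum path cover problem: connect trip i to trip j whenever one vehicle can finish trip i and still reach the start of trip j in time; the minimum fleet then equals the number of trips minus the size of a maximum matching in the corresponding bipartite graph. The sketch below assumes Euclidean travel times on abstract coordinates and is an illustration of that construction, not the authors' implementation.

```python
import networkx as nx
from networkx.algorithms import bipartite

# Each trip: (start_time, end_time, origin, destination); coordinates are abstract and
# the travel time between two points is taken as their Euclidean distance (an assumption).
trips = [
    (0.0, 4.0, (0, 0), (3, 0)),
    (5.0, 8.0, (3, 1), (6, 1)),
    (1.0, 3.0, (5, 5), (5, 7)),
    (9.0, 12.0, (6, 0), (9, 0)),
]

def travel_time(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

# Bipartite "shareability" graph: edge i -> j if one vehicle can serve trip j after trip i.
g = nx.Graph()
left = [("L", i) for i in range(len(trips))]
right = [("R", j) for j in range(len(trips))]
g.add_nodes_from(left, bipartite=0)
g.add_nodes_from(right, bipartite=1)
for i, (_, end_i, _, dest_i) in enumerate(trips):
    for j, (start_j, _, orig_j, _) in enumerate(trips):
        if i != j and end_i + travel_time(dest_i, orig_j) <= start_j:
            g.add_edge(("L", i), ("R", j))

matching = bipartite.maximum_matching(g, top_nodes=left)
n_chained = sum(1 for u in matching if u[0] == "L")   # each matched left node chains two trips
print("minimum fleet size:", len(trips) - n_chained)  # prints 2 for this toy instance
```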

  11. Nanoliter microfluidic hybrid method for simultaneous screening and optimization validated with crystallization of membrane proteins

    PubMed Central

    Li, Liang; Mustafi, Debarshi; Fu, Qiang; Tereshko, Valentina; Chen, Delai L.; Tice, Joshua D.; Ismagilov, Rustem F.

    2006-01-01

    High-throughput screening and optimization experiments are critical to a number of fields, including chemistry and structural and molecular biology. The separation of these two steps may introduce false negatives and a time delay between initial screening and subsequent optimization. Although a hybrid method combining both steps may address these problems, miniaturization is required to minimize sample consumption. This article reports a “hybrid” droplet-based microfluidic approach that combines the steps of screening and optimization into one simple experiment and uses nanoliter-sized plugs to minimize sample consumption. Many distinct reagents were sequentially introduced as ≈140-nl plugs into a microfluidic device and combined with a substrate and a diluting buffer. Tests were conducted in ≈10-nl plugs containing different concentrations of a reagent. Methods were developed to form plugs of controlled concentrations, index concentrations, and incubate thousands of plugs inexpensively and without evaporation. To validate the hybrid method and demonstrate its applicability to challenging problems, crystallization of model membrane proteins and handling of solutions of detergents and viscous precipitants were demonstrated. By using 10 μl of protein solution, ≈1,300 crystallization trials were set up within 20 min by one researcher. This method was compatible with growth, manipulation, and extraction of high-quality crystals of membrane proteins, demonstrated by obtaining high-resolution diffraction images and solving a crystal structure. This robust method requires inexpensive equipment and supplies, should be especially suitable for use in individual laboratories, and could find applications in a number of areas that require chemical, biochemical, and biological screening and optimization. PMID:17159147

  12. Constituents of Quality of Life and Urban Size

    ERIC Educational Resources Information Center

    Royuela, Vicente; Surinach, Jordi

    2005-01-01

    Do cities have an optimal size? In seeking to answer this question, various theories, including Optimal City Size Theory, the supply-oriented dynamic approach and the city network paradigm, have been forwarded that considered a city's population size as a determinant of location costs and benefits. However, the generalised growth in wealth that…

  13. The most precise computations using Euler's method in standard floating-point arithmetic applied to modelling of biological systems.

    PubMed

    Kalinina, Elizabeth A

    2013-08-01

    The explicit Euler's method is known to be very easy and effective in implementation for many applications. This article extends results previously obtained for the systems of linear differential equations with constant coefficients to arbitrary systems of ordinary differential equations. Optimal (providing minimum total error) step size is calculated at each step of Euler's method. Several examples of solving stiff systems are included. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
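
    One common way to choose the Euler step adaptively is a step-doubling local error estimate, as sketched below. This is a generic illustration of step-size control for the explicit Euler method, not the article's specific minimum-total-error step-size formula; the tolerance, safety factor, and test problem are assumptions.

```python
import numpy as np

def euler_adaptive(f, t0, y0, t_end, tol=1e-4, h0=0.1):
    """Explicit Euler with crude step-size control via a step-doubling error estimate."""
    t, y, h = t0, np.asarray(y0, dtype=float), h0
    ts, ys = [t], [y.copy()]
    while t < t_end:
        h = min(h, t_end - t)
        k1 = f(t, y)
        full = y + h * k1                                  # one Euler step of size h
        half = y + 0.5 * h * k1
        two_half = half + 0.5 * h * f(t + 0.5 * h, half)   # two steps of size h/2
        err = np.max(np.abs(full - two_half))              # local error estimate
        if err <= tol or h < 1e-12:
            t, y = t + h, two_half                         # accept the more accurate value
            ts.append(t)
            ys.append(y.copy())
        # Euler's local error is O(h^2), so rescale the step by sqrt(tol/err).
        h *= min(2.0, max(0.2, 0.9 * np.sqrt(tol / max(err, 1e-16))))
    return np.array(ts), np.array(ys)

# Usage: moderately stiff linear test problem y' = -50*(y - cos(t)).
ts, ys = euler_adaptive(lambda t, y: -50.0 * (y - np.cos(t)), 0.0, np.array([0.0]), 1.0)
print(len(ts), ys[-1])
```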

  14. Preparation and physicochemical characterization of microemulsion based on macadamia nut oil

    NASA Astrophysics Data System (ADS)

    Tu, Xinghao; Chen, Hong; Du, Liqing

    2018-03-01

    The objective of the present work was to study the preparation, optimization, and characterization of nanostructured lipid carriers (NLCs) based on macadamia nut oil. NLCs with various macadamia nut oil contents were successfully prepared by an optimized microfluidization method using stearic acid as the solid lipid and Pluronic F68 as the surfactant. As a result, NLCs with a particle size of about 286 nm were obtained, and the polydispersity indices (PI) of all developed NLCs were below 0.2, which indicates a narrow size distribution. The encapsulation efficiency and loading capability were also investigated. Physical stability tests demonstrated that the particles were stable at room temperature and at low temperature. Differential scanning calorimetry (DSC) showed that the inner structure and recrystallinity of the lipid matrix within the NLCs were greatly influenced by the content of macadamia nut oil.

  15. An Adaptive Niching Genetic Algorithm using a niche size equalization mechanism

    NASA Astrophysics Data System (ADS)

    Nagata, Yuichi

    Niching GAs have been widely investigated as a means of applying genetic algorithms (GAs) to multimodal function optimization problems. In this paper, we suggest a new niching GA that attempts to form niches, each consisting of an equal number of individuals. The proposed GA can also be applied to combinatorial optimization problems by defining a distance metric in the search space. We apply the proposed GA to the job-shop scheduling problem (JSP) and demonstrate that the proposed niching method enhances the ability to maintain niches and improves the performance of GAs.
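
    The niche-size equalization mechanism itself is specific to this paper, but the niching idea it refines can be sketched with classical fitness sharing: an individual's fitness is discounted by the number of neighbours within a niche radius, so that several peaks keep subpopulations alive instead of one peak absorbing the whole population. The objective, niche radius, and GA operators below (selection and mutation only, no crossover) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def f(x):                      # multimodal objective with 5 peaks on [0, 1]
    return np.sin(5 * np.pi * x) ** 2

def shared_fitness(pop, raw, sigma_share=0.1):
    # Discount raw fitness by the niche count: crowded individuals share fitness.
    d = np.abs(pop[:, None] - pop[None, :])
    sharing = np.where(d < sigma_share, 1 - d / sigma_share, 0.0)
    return raw / sharing.sum(axis=1)

pop = rng.uniform(0, 1, 60)
for _ in range(200):
    fit = shared_fitness(pop, f(pop))
    probs = fit / fit.sum()
    parents = rng.choice(pop, size=pop.size, p=probs)                   # roulette selection
    pop = np.clip(parents + rng.normal(0, 0.02, pop.size), 0, 1)        # Gaussian mutation

# Count survivors near each of the five peaks (x = 0.1, 0.3, 0.5, 0.7, 0.9).
peaks = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
print([int(np.sum(np.abs(pop - p) < 0.05)) for p in peaks])
```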

  16. [Imaging anatomy of cranial nerves].

    PubMed

    Hermier, M; Leal, P R L; Salaris, S F; Froment, J-C; Sindou, M

    2009-04-01

    Knowledge of the anatomy of the cranial nerves is mandatory for optimal radiological exploration and interpretation of the images in normal and pathological conditions. CT is the method of choice for the study of the skull base and its foramina. MRI explores the cranial nerves and their vascular relationships precisely. Because of their small size, it is essential to obtain images with high spatial resolution. The MRI sequences optimize contrast between nerves and surrounding structures (cerebrospinal fluid, fat, bone structures and vessels). This chapter discusses the radiological anatomy of the cranial nerves.

  17. Functionality limit of classical simulated annealing

    NASA Astrophysics Data System (ADS)

    Hasegawa, M.

    2015-09-01

    By analyzing the system dynamics in the landscape paradigm, the optimization function of classical simulated annealing is reviewed on random traveling salesman problems. The properly functioning region of the algorithm is experimentally determined in the size-time plane, and the influence of its boundary on the scalability test is examined in the standard framework of this method. From both results, an empirical choice of temperature length is plausibly explained as a minimum requirement for the algorithm to maintain its scalability within its functionality limit. The study exemplifies the applicability of computational physics analysis to optimization algorithm research.
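
    For reference, the classical simulated annealing loop being analyzed can be sketched on a small random traveling salesman instance with a geometric cooling schedule. The "temperature length" discussed above corresponds to the number of moves attempted at each temperature (steps_per_T below); all settings and the instance size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
cities = rng.random((30, 2))
dist = np.linalg.norm(cities[:, None] - cities[None, :], axis=2)

def tour_length(tour):
    return dist[tour, np.roll(tour, -1)].sum()

tour = rng.permutation(len(cities))
best, best_len = tour.copy(), tour_length(tour)
T, cooling, steps_per_T = 1.0, 0.95, 200   # "temperature length" = steps_per_T

while T > 1e-3:
    for _ in range(steps_per_T):
        i, j = sorted(rng.integers(0, len(cities), 2))
        if i == j:
            continue
        candidate = tour.copy()
        candidate[i:j + 1] = candidate[i:j + 1][::-1]          # 2-opt style segment reversal
        delta = tour_length(candidate) - tour_length(tour)
        if delta < 0 or rng.random() < np.exp(-delta / T):      # Metropolis acceptance
            tour = candidate
            if tour_length(tour) < best_len:
                best, best_len = tour.copy(), tour_length(tour)
    T *= cooling

print("best tour length:", round(best_len, 3))
```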

  18. Spatial Variability of Organic Carbon in a Fractured Mudstone and Its Effect on the Retention and Release of Trichloroethene (TCE)

    NASA Astrophysics Data System (ADS)

    Sole-Mari, G.; Fernandez-Garcia, D.

    2016-12-01

    Random Walk Particle Tracking (RWPT) coupled with Kernel Density Estimation (KDE) has been recently proposed to simulate reactive transport in porous media. KDE provides an optimal estimation of the area of influence of particles which is a key element to simulate nonlinear chemical reactions. However, several important drawbacks can be identified: (1) the optimal KDE method is computationally intensive and thereby cannot be used at each time step of the simulation; (2) it does not take advantage of the prior information about the physical system and the previous history of the solute plume; (3) even if the kernel is optimal, the relative error in RWPT simulations typically increases over time as the particle density diminishes by dilution. To overcome these problems, we propose an adaptive branching random walk methodology that incorporates the physics, the particle history and maintains accuracy with time. The method allows particles to efficiently split and merge when necessary as well as to optimally adapt their local kernel shape without having to recalculate the kernel size. We illustrate the advantage of the method by simulating complex reactive transport problems in randomly heterogeneous porous media.

  19. Design and Optimization of Composite Automotive Hatchback Using Integrated Material-Structure-Process-Performance Method

    NASA Astrophysics Data System (ADS)

    Yang, Xudong; Sun, Lingyu; Zhang, Cheng; Li, Lijun; Dai, Zongmiao; Xiong, Zhenkai

    2018-03-01

    The application of polymer composites as a substitute for metal is an effective approach to reducing vehicle weight. However, the final performance of composite structures is determined not only by the material types, structural designs and manufacturing process, but also by their mutual constraints. Hence, an integrated "material-structure-process-performance" method is proposed for the conceptual and detailed design of composite components. The material selection is based on the principles of composite mechanics, such as the rule of mixtures for laminates. The design of component geometry, dimensions and stacking sequence is determined by parametric modeling and size optimization. The selection of process parameters is based on multi-physical field simulation. The stiffness and modal constraint conditions were obtained from the numerical analysis of the metal benchmark under typical load conditions. The optimal design was found by multi-disciplinary optimization. Finally, the proposed method was validated by an application case of an automotive hatchback using carbon fiber reinforced polymer. Compared with the metal benchmark, the weight of the composite hatchback is reduced by 38.8%; simultaneously, its torsion and bending stiffness increase by 3.75% and 33.23%, respectively, and the first frequency also increases by 44.78%.

  20. Synthesis of Nanometric-Sized Barium Titanate Powders Using Acetylacetone as the Chelating Agent in a Sol-Precipitation Process

    NASA Astrophysics Data System (ADS)

    Hung, Kun Ming; Hsieh, Ching Shieh; Yang, Wein Duo; Tsai, Hui Ju

    2007-03-01

    Nanometric-sized barium titanate powders were prepared using titanium isopropoxide as the raw material and acetylacetone as a chelating agent, in a strongly alkaline solution (pH > 13), through the sol-precipitation method. The preparatory variables affect the extent of cross-linking in the structure, change the mode of condensation of the gels, and even control the particle size of the powder. At a higher temperature (100°C) with more water content (molar ratio of water to titanium isopropoxide of 25) or less acetylacetone (molar ratio of acetylacetone to titanium isopropoxide of 1), the powder-forming reaction is rapid and the particles formed are finer, at 60-80 nm. On the contrary, at a lower temperature (40°C) with less water content (molar ratio of water to titanium isopropoxide of 5) or more acetylacetone (acetylacetone to titanium isopropoxide of 7), the reaction is slow and the particle size of the powder is larger. The optimal preparatory conditions were obtained using an experimental statistical method; as a result, nanometric-sized BaTiO3 powder with an average particle size of about 50 nm was prepared.

  1. SU-F-T-540: Comprehensive Fluence Delivery Optimization with Multileaf Collimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weppler, S; Villarreal-Barajas, J; Department of Medical Physics, Tom Baker Cancer Center, Calgary, Alberta

    2016-06-15

    Purpose: Multileaf collimator (MLC) leaf sequencing is performed via commercial black-box implementations, to which a user has limited or no access. We have developed an explicit, generic MLC sequencing model to serve as a tool for future investigations of fluence map optimization, fluence delivery optimization, and rotational collimator delivery methods. Methods: We have developed a novel, comprehensive model to effectively account for a variety of transmission and penumbra effects previously treated on an ad hoc basis in the literature. As the model is capable of quantifying a variety of effects, we utilize the asymmetric leakage intensity across each leaf to deliver fluence maps with pixel size smaller than the narrowest leaf width. Developed using linear programming and mixed integer programming formulations, the model is implemented using state-of-the-art open-source solvers. To demonstrate the versatility of the algorithm, a graphical user interface (GUI) was developed in MATLAB capable of accepting custom leaf specifications and transmission parameters. As a preliminary proof-of-concept, we have sequenced the leaves of a Varian 120 Leaf Millennium MLC for five prostate cancer patient fields and one head and neck field. Predetermined fluence maps have been processed by data smoothing methods to obtain pixel sizes of 2.5 cm². The quality of output was analyzed using computer simulations. Results: For the prostate fields, an average root mean squared error (RMSE) of 0.82 and gamma (0.5 mm/0.5%) of 91.4% were observed, compared to RMSE and gamma (0.5 mm/0.5%) values of 7.04 and 34.0% when the leakage considerations were omitted. Similar results were observed for the head and neck case. Conclusion: A model to sequence MLC leaves to optimality has been proposed. Future work will involve extensive testing and evaluation of the method on clinical MLCs and comparison with black-box leaf sequencing algorithms currently used by commercial treatment planning systems.

  2. The optimal design of stepped wedge trials with equal allocation to sequences and a comparison to other trial designs.

    PubMed

    Thompson, Jennifer A; Fielding, Katherine; Hargreaves, James; Copas, Andrew

    2017-12-01

    Background/Aims We sought to optimise the design of stepped wedge trials with an equal allocation of clusters to sequences and explored sample size comparisons with alternative trial designs. Methods We developed a new expression for the design effect for a stepped wedge trial, assuming that observations are equally correlated within clusters and an equal number of observations in each period between sequences switching to the intervention. We minimised the design effect with respect to (1) the fraction of observations before the first and after the final sequence switches (the periods with all clusters in the control or intervention condition, respectively) and (2) the number of sequences. We compared the design effect of this optimised stepped wedge trial to the design effects of a parallel cluster-randomised trial, a cluster-randomised trial with baseline observations, and a hybrid trial design (a mixture of cluster-randomised trial and stepped wedge trial) with the same total cluster size for all designs. Results We found that a stepped wedge trial with an equal allocation to sequences is optimised by obtaining all observations after the first sequence switches and before the final sequence switches to the intervention; this means that the first sequence remains in the control condition and the last sequence remains in the intervention condition for the duration of the trial. With this design, the optimal number of sequences is [Formula: see text], where [Formula: see text] is the cluster-mean correlation, [Formula: see text] is the intracluster correlation coefficient, and m is the total cluster size. The optimal number of sequences is small when the intracluster correlation coefficient and cluster size are small and large when the intracluster correlation coefficient or cluster size is large. A cluster-randomised trial remains more efficient than the optimised stepped wedge trial when the intracluster correlation coefficient or cluster size is small. A cluster-randomised trial with baseline observations always requires a larger sample size than the optimised stepped wedge trial. The hybrid design can always give an equally or more efficient design, but will be at most 5% more efficient. We provide a strategy for selecting a design if the optimal number of sequences is unfeasible. For a non-optimal number of sequences, the sample size may be reduced by allowing a proportion of observations before the first or after the final sequence has switched. Conclusion The standard stepped wedge trial is inefficient. To reduce sample sizes when a hybrid design is unfeasible, stepped wedge trial designs should have no observations before the first sequence switches or after the final sequence switches.

  3. Parallel Molecular Distributed Detection With Brownian Motion.

    PubMed

    Rogers, Uri; Koh, Min-Sung

    2016-12-01

    This paper explores the in vivo distributed detection of an undesired biological agent's (BA's) biomarkers by a group of biologically sized nanomachines in an aqueous medium under drift. The term distributed indicates that the system information relative to the BA's presence is dispersed across the collection of nanomachines, where each nanomachine possesses limited communication, computation, and movement capabilities. Using Brownian motion with drift, a probabilistic detection and optimal data fusion framework, coined molecular distributed detection, is introduced that combines theory from both molecular communication and distributed detection. Using the optimal data fusion framework as a guide, simulation indicates that a sub-optimal fusion method exists, allowing for a significant reduction in implementation complexity while retaining BA detection accuracy.
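
    The paper's optimal fusion rule for the molecular setting is not reproduced here. As a generic illustration of fusing dispersed binary detections, the sketch below applies a Chair-Varshney-style log-likelihood fusion, assuming each nanomachine's detection and false-alarm probabilities are known; all probabilities are hypothetical.

```python
import numpy as np

def fuse_decisions(u, p_d, p_f, prior_h1=0.5):
    """Chair-Varshney style log-likelihood fusion of binary local decisions.

    u   : local decisions (1 = biomarker detected, 0 = not detected)
    p_d : per-sensor detection probabilities P(u_i = 1 | agent present)
    p_f : per-sensor false-alarm probabilities P(u_i = 1 | agent absent)
    Returns the fused decision and the fusion statistic.
    """
    u = np.asarray(u, dtype=float)
    p_d, p_f = np.asarray(p_d, dtype=float), np.asarray(p_f, dtype=float)
    llr = u * np.log(p_d / p_f) + (1.0 - u) * np.log((1.0 - p_d) / (1.0 - p_f))
    statistic = llr.sum() + np.log(prior_h1 / (1.0 - prior_h1))
    return int(statistic > 0.0), statistic

if __name__ == "__main__":
    # three hypothetical nanomachines with unequal reliability
    decision, stat = fuse_decisions(u=[1, 0, 1],
                                    p_d=[0.9, 0.7, 0.8],
                                    p_f=[0.05, 0.2, 0.1])
    print("fused decision:", decision, " statistic:", round(stat, 3))
```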

  4. Impact of Spot Size and Spacing on the Quality of Robustly Optimized Intensity Modulated Proton Therapy Plans for Lung Cancer.

    PubMed

    Liu, Chenbin; Schild, Steven E; Chang, Joe Y; Liao, Zhongxing; Korte, Shawn; Shen, Jiajian; Ding, Xiaoning; Hu, Yanle; Kang, Yixiu; Keole, Sameer R; Sio, Terence T; Wong, William W; Sahoo, Narayan; Bues, Martin; Liu, Wei

    2018-06-01

    To investigate how spot size and spacing affect plan quality, robustness, and interplay effects of robustly optimized intensity modulated proton therapy (IMPT) for lung cancer. Two robustly optimized IMPT plans were created for 10 lung cancer patients: one by a large-spot machine with in-air energy-dependent large spot size at isocenter (σ: 6-15 mm) and spacing (1.3 σ), and the other by a small-spot machine with in-air energy-dependent small spot size (σ: 2-6 mm) and spacing (5 mm). Both plans were generated by optimizing radiation dose to the internal target volume on averaged 4-dimensional computed tomography scans using an in-house-developed IMPT planning system. The dose-volume histogram band method was used to evaluate plan robustness. Dose evaluation software was developed to model time-dependent spot delivery to incorporate interplay effects with randomized starting phases for each field per fraction. Patient anatomy voxels were mapped phase-to-phase via deformable image registration, and doses were scored using in-house-developed software. Dose-volume histogram indices, including internal target volume dose coverage, homogeneity, and organ-at-risk (OAR) sparing, were compared using the Wilcoxon signed-rank test. Compared with the large-spot machine, the small-spot machine resulted in significantly lower heart and esophagus mean doses, with comparable target dose coverage, homogeneity, and protection of other OARs. Plan robustness was comparable for targets and most OARs. With interplay effects considered, significantly lower heart and esophagus mean doses with comparable target dose coverage and homogeneity were observed using smaller spots. Robust optimization with a small-spot machine significantly improves heart and esophagus sparing, with comparable plan robustness and interplay effects compared with robust optimization with a large-spot machine. A small-spot machine uses a larger number of spots to cover the same tumors compared with a large-spot machine, which gives the planning system more freedom to compensate for the higher sensitivity to uncertainties and interplay effects for lung cancer treatments. Copyright © 2018 Elsevier Inc. All rights reserved.

  5. Optimization of a large-scale microseismic monitoring network in northern Switzerland

    NASA Astrophysics Data System (ADS)

    Kraft, Toni; Mignan, Arnaud; Giardini, Domenico

    2013-10-01

    We have developed a network optimization method for regional-scale microseismic monitoring networks and applied it to optimize the densification of the existing seismic network in northeastern Switzerland. The new network will build the backbone of a 10-yr study on the neotectonic activity of this area that will help to better constrain the seismic hazard imposed on nuclear power plants and waste repository sites. This task defined the requirements regarding location precision (0.5 km in epicentre and 2 km in source depth) and detection capability [magnitude of completeness Mc = 1.0 (ML)]. The goal of the optimization was to find the geometry and size of the network that met these requirements. Existing stations in Switzerland, Germany and Austria were considered in the optimization procedure. We based the optimization on the simulated annealing approach proposed by Hardt & Scherbaum, which aims to minimize the volume of the error ellipsoid of the linearized earthquake location problem (D-criterion). We have extended their algorithm to: calculate traveltimes of seismic body waves using a finite difference ray tracer and the 3-D velocity model of Switzerland, calculate seismic body-wave amplitudes at arbitrary stations assuming the Brune source model and using scaling and attenuation relations recently derived for Switzerland, and estimate the noise level at arbitrary locations within Switzerland using a first-order ambient seismic noise model based on 14 land-use classes defined by the EU-project CORINE and open GIS data. We calculated optimized geometries for networks with 10-35 added stations and tested the stability of the optimization result by repeated runs with changing initial conditions. Further, we estimated the attainable magnitude of completeness (Mc) for the different sized optimal networks using the Bayesian Magnitude of Completeness (BMC) method introduced by Mignan et al. The algorithm developed in this study is also applicable to smaller optimization problems, for example, small local monitoring networks. Possible applications are volcano monitoring, the surveillance of induced seismicity associated with geotechnical operations and many more. Our algorithm is especially useful to optimize networks in populated areas with heterogeneous noise conditions and if complex velocity structures or existing stations have to be considered.
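
    The optimization described above relies on a 3-D velocity model, finite-difference ray tracing, and an empirical ambient-noise model, none of which are reproduced here. The sketch below is a minimal illustration of the same idea under strong simplifying assumptions: a homogeneous velocity, surface stations, a fixed grid of trial hypocentres, and a D-criterion proxy (mean -log det(GᵀG)) minimized by simulated annealing. All values are illustrative, not those of the Swiss network.

```python
import numpy as np

rng = np.random.default_rng(0)
V = 6.0                                    # assumed homogeneous P-wave velocity, km/s
SOURCES = np.array([[x, y, z] for x in (-20.0, 0.0, 20.0)
                              for y in (-20.0, 0.0, 20.0)
                              for z in (2.0, 8.0)])        # trial hypocentres, km

def d_criterion(stations):
    """D-criterion proxy: mean -log det(GtG) over trial sources (lower is better)."""
    cost = 0.0
    for src in SOURCES:
        rows = []
        for st in stations:
            sta = np.array([st[0], st[1], 0.0])            # stations at the surface
            r = np.linalg.norm(src - sta)
            # travel-time derivatives w.r.t. source coordinates and origin time
            rows.append(np.append((src - sta) / (V * r), 1.0))
        G = np.array(rows)
        _, logdet = np.linalg.slogdet(G.T @ G)
        cost -= logdet
    return cost / len(SOURCES)

def anneal(n_sta=10, n_iter=2000, t0=1.0, cooling=0.998):
    """Simulated annealing over 2-D station coordinates inside a 60 x 60 km box."""
    stations = rng.uniform(-30.0, 30.0, size=(n_sta, 2))
    cost = d_criterion(stations)
    best, best_cost, temp = stations.copy(), cost, t0
    for _ in range(n_iter):
        cand = stations.copy()
        k = rng.integers(n_sta)
        cand[k] = np.clip(cand[k] + rng.normal(0.0, 2.0, 2), -30.0, 30.0)
        c = d_criterion(cand)
        if c < cost or rng.random() < np.exp((cost - c) / temp):
            stations, cost = cand, c
            if c < best_cost:
                best, best_cost = cand.copy(), c
        temp *= cooling
    return best, best_cost

if __name__ == "__main__":
    geometry, score = anneal()
    print("optimized station coordinates (km):\n", np.round(geometry, 1))
    print("mean -log det(GtG):", round(score, 3))
```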

  6. The evolution of island gigantism and body size variation in tortoises and turtles

    PubMed Central

    Jaffe, Alexander L.; Slater, Graham J.; Alfaro, Michael E.

    2011-01-01

    Extant chelonians (turtles and tortoises) span almost four orders of magnitude of body size, including the startling examples of gigantism seen in the tortoises of the Galapagos and Seychelles islands. However, the evolutionary determinants of size diversity in chelonians are poorly understood. We present a comparative analysis of body size evolution in turtles and tortoises within a phylogenetic framework. Our results reveal a pronounced relationship between habitat and optimal body size in chelonians. We found strong evidence for separate, larger optimal body sizes for sea turtles and island tortoises, the latter showing support for the rule of island gigantism in non-mammalian amniotes. Optimal sizes for freshwater and mainland terrestrial turtles are similar and smaller, although the range of body size variation in these forms is qualitatively greater. The greater number of potential niches in freshwater and terrestrial environments may mean that body size relationships are more complicated in these habitats. PMID:21270022

  7. Sustainable Sizing.

    PubMed

    Robinette, Kathleen M; Veitch, Daisy

    2016-08-01

    To provide a review of sustainable sizing practices that reduce waste, increase sales, and simultaneously produce safer, better fitting, accommodating products. Sustainable sizing involves a set of methods good for both the environment (sustainable environment) and business (sustainable business). Sustainable sizing methods reduce (1) materials used, (2) the number of sizes or adjustments, and (3) the amount of product unsold or marked down for sale. This reduces waste and cost. The methods can also increase sales by fitting more people in the target market and produce happier, loyal customers with better fitting products. This is a mini-review of methods that result in more sustainable sizing practices. It also reviews and contrasts current statistical and modeling practices that lead to poor fit and sizing. Fit-mapping and the use of cases are two excellent methods suited for creating sustainable sizing, when real people (vs. virtual people) are used. These methods are described and reviewed. Evidence presented supports the view that virtual fitting with simulated people and products is not yet effective. Fit-mapping and cases with real people and actual products result in good design and products that are fit for person, fit for purpose, with good accommodation and comfortable, optimized sizing. While virtual models have been shown to be ineffective for predicting or representing fit, there is an opportunity to improve them by adding fit-mapping data to the models. This will require saving fit data, product data, anthropometry, and demographics in a standardized manner. For this success to extend to the wider design community, the development of a standardized method of data collection for fit-mapping with a globally shared fit-map database is needed. It will enable the world community to build knowledge of fit and accommodation and generate effective virtual fitting for the future. A standardized method of data collection that tests products' fit methodically and quantitatively will increase our predictive power to determine fit and accommodation, thereby facilitating improved, effective design. These methods apply to all products people wear, use, or occupy. © 2016, Human Factors and Ergonomics Society.

  8. Multiregion apodized photon sieve with enhanced efficiency and enlarged pinhole sizes.

    PubMed

    Liu, Tao; Zhang, Xin; Wang, Lingjie; Wu, Yanxiong; Zhang, Jizhen; Qu, Hemeng

    2015-08-20

    A novel multiregion-structure apodized photon sieve is proposed. The number of regions, the apodization window values, and the pinhole sizes of each pinhole ring are all optimized to enhance the energy efficiency and enlarge the pinhole sizes. The design theory and principles are presented and discussed in detail. Two numerically designed apodized photon sieves with the same diameter are given as examples. Comparisons show that the multiregion apodized photon sieve has a 25.5% higher energy efficiency and that its minimum pinhole size is enlarged by 27.5%. Meanwhile, the two apodized photon sieves have the same form of normalized intensity distribution at the focal plane. This method could improve the flexibility of the design and fabrication of the apodized photon sieve.

  9. Determining the optimal vaccine vial size in developing countries: a Monte Carlo simulation approach.

    PubMed

    Dhamodharan, Aswin; Proano, Ruben A

    2012-09-01

    Outreach immunization services, in which health workers immunize children in their own communities, are indispensable for improving vaccine coverage in rural areas of developing countries. One of the challenges faced by these services is how to reduce high levels of vaccine wastage, in particular the open vial wastage (OVW) that results from the vaccine doses remaining in a vial after the time for safe use, counted from opening the vial, has elapsed. This wastage is highly dependent on the choice of vial size and the expected number of participants for which the outreach session is planned (i.e., the session size). The use of single-dose vials results in zero OVW, but it increases the vaccine purchase, transportation, and holding costs per dose compared to those resulting from using larger vial sizes. The OVW also decreases when more people are immunized in a session. However, controlling the actual number of people who show up to an outreach session in rural areas of developing countries depends largely on factors that are out of the control of the immunization planners. This paper integrates a binary integer-programming model with a Monte Carlo simulation method to determine the choice of vial size and the optimal reorder point level to implement an (nQ, r, T) lot-sizing policy that provides the best tradeoff between procurement costs and wastage.
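
    A minimal sketch of the Monte Carlo part only, under the assumptions that session attendance is Poisson distributed and that every dose left in an opened vial at the end of a session is wasted; the binary integer-programming model and the (nQ, r, T) reorder policy of the paper are not modeled, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_wastage(vial_size, mean_session_size=12, n_sessions=10_000):
    """Monte Carlo estimate of the open vial wastage (OVW) rate for one vial size."""
    attendance = rng.poisson(mean_session_size, size=n_sessions)
    vials_opened = np.ceil(attendance / vial_size)
    doses_drawn = vials_opened * vial_size
    wasted = doses_drawn - attendance        # leftover doses assumed unusable
    return wasted.sum() / doses_drawn.sum()

if __name__ == "__main__":
    for size in (1, 5, 10, 20):
        print(f"{size:2d}-dose vial: OVW rate = {simulate_wastage(size):5.1%}")
```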

  10. Single cell isolation process with laser induced forward transfer.

    PubMed

    Deng, Yu; Renaud, Philippe; Guo, Zhongning; Huang, Zhigang; Chen, Ying

    2017-01-01

    A viable single cell is crucial for studies of single cell biology. In this paper, laser-induced forward transfer (LIFT) was used to isolate individual cells within a closed chamber designed to avoid contamination and maintain humidity. HeLa cells were used to study the impact of laser pulse energy, laser spot size, sacrificial layer thickness, and working distance. The size distribution, number, and proliferation ratio of the separated cells were statistically evaluated. Glycerol was used to increase the viscosity of the medium, and alginate was introduced to soften the landing process. The roles of laser pulse energy, spot size, and titanium thickness in energy absorption during the LIFT process were theoretically analyzed with the Lambert-Beer law and a thermal conduction model. After comprehensive analysis, mechanical damage was found to be the dominant factor affecting the size and proliferation ratio of the isolated cells. An orthogonal experiment was conducted, and the optimal conditions were determined as: laser pulse energy, 9 μJ; spot size, 60 μm; titanium thickness, 12 nm; working distance, 700 μm; glycerol, 2%; and alginate depth, greater than 1 μm. With these conditions, along with continuous incubation, a single cell could be transferred by LIFT with one shot, with limited effect on cell size and viability. LIFT conducted in a closed chamber under optimized conditions is a promising method for reliably isolating single cells.

  11. Tabu search and binary particle swarm optimization for feature selection using microarray data.

    PubMed

    Chuang, Li-Yeh; Yang, Cheng-Huei; Yang, Cheng-Hong

    2009-12-01

    Gene expression profiles have great potential as a medical diagnosis tool because they represent the state of a cell at the molecular level. In cancer-type classification research, available training datasets generally have a fairly small sample size compared to the number of genes involved. This fact poses an unprecedented challenge to some classification methodologies due to training data limitations. Therefore, a good method for selecting genes relevant for sample classification is needed to improve predictive accuracy and to avoid incomprehensibility due to the large number of genes investigated. In this article, we propose combining tabu search (TS) and binary particle swarm optimization (BPSO) for feature selection. BPSO acts as a local optimizer each time the TS has been run for a single generation. The K-nearest neighbor method with leave-one-out cross-validation and a support vector machine with one-versus-rest serve as evaluators of the TS and BPSO. The proposed method is applied to 11 classification problems taken from the literature. Experimental results show that our method simplifies features effectively and either obtains higher classification accuracy or uses fewer features compared to other feature selection methods.
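
    The sketch below illustrates only the BPSO half of the proposed hybrid, using 1-nearest-neighbor leave-one-out accuracy as the fitness on a synthetic dataset; the tabu search outer loop, the SVM evaluator, and the microarray data are omitted, and all parameter values are illustrative rather than those of the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=60, n_features=40, n_informative=6,
                           n_redundant=4, random_state=1)

def fitness(mask):
    """LOOCV accuracy of 1-NN on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=1),
                             X[:, mask.astype(bool)], y, cv=LeaveOneOut())
    return scores.mean()

def bpso(n_particles=20, n_iter=20, w=0.7, c1=1.5, c2=1.5):
    n_feat = X.shape[1]
    pos = rng.integers(0, 2, size=(n_particles, n_feat))
    vel = rng.normal(0, 0.1, size=(n_particles, n_feat))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    gbest_fit = pbest_fit.max()
    for _ in range(n_iter):
        r1 = rng.random((n_particles, n_feat))
        r2 = rng.random((n_particles, n_feat))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        prob = 1.0 / (1.0 + np.exp(-vel))           # sigmoid transfer function
        pos = (rng.random((n_particles, n_feat)) < prob).astype(int)
        fit = np.array([fitness(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        if fit.max() > gbest_fit:
            gbest, gbest_fit = pos[fit.argmax()].copy(), fit.max()
    return gbest, gbest_fit

if __name__ == "__main__":
    mask, acc = bpso()
    print(f"selected {mask.sum()} of {mask.size} features, LOOCV accuracy {acc:.3f}")
```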

  12. Dynamic Obstacle Avoidance for Unmanned Underwater Vehicles Based on an Improved Velocity Obstacle Method

    PubMed Central

    Zhang, Wei; Wei, Shilin; Teng, Yanbin; Zhang, Jianku; Wang, Xiufang; Yan, Zheping

    2017-01-01

    In view of a dynamic obstacle environment with motion uncertainty, we present a dynamic collision avoidance method based on collision risk assessment and an improved velocity obstacle method. First, through the fusion optimization of forward-looking sonar data, the redundancy of the data is reduced and the position, size, and velocity information of the obstacles is obtained, which provides an accurate decision-making basis for the subsequent collision avoidance step. Second, according to the minimum meeting time and the minimum distance between the obstacle and the unmanned underwater vehicle (UUV), this paper establishes a collision risk assessment model and screens the key obstacles to be avoided. Finally, the optimization objective function is established based on the improved velocity obstacle method, and the UUV motion characteristics are used to calculate the reachable velocity sets. The optimal collision-avoidance velocity of the UUV is searched for in velocity space, and the corresponding heading and speed commands are calculated and output to the motion control module. This constitutes the complete dynamic obstacle avoidance process. The simulation results show that the proposed method achieves a better collision avoidance effect in a dynamic environment and has good adaptability to unknown dynamic environments. PMID:29186878
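
    The paper's full pipeline (sonar data fusion, collision risk screening, and UUV reachable-velocity sets) is not reproduced here. The sketch below shows only a minimal geometric velocity obstacle test for a circular safety region and a constant-velocity obstacle, plus a naive sampling step that picks the collision-free velocity closest to the desired one; names and numbers are illustrative assumptions.

```python
import numpy as np

def in_velocity_obstacle(p_uuv, p_obs, v_obs, r_combined, v_candidate):
    """True if a candidate velocity leads to collision with a constant-velocity
    obstacle (infinite time horizon, circular combined safety region)."""
    rel_p = np.asarray(p_obs, float) - np.asarray(p_uuv, float)
    rel_v = np.asarray(v_candidate, float) - np.asarray(v_obs, float)
    if np.linalg.norm(rel_p) <= r_combined:
        return True                          # already inside the safety region
    speed2 = float(np.dot(rel_v, rel_v))
    if speed2 == 0.0:
        return False
    # closest approach of the relative-motion ray to the obstacle centre
    t_star = max(np.dot(rel_p, rel_v) / speed2, 0.0)
    return np.linalg.norm(rel_p - rel_v * t_star) < r_combined

def choose_velocity(p_uuv, v_desired, obstacles, v_max=2.0, n_samples=400):
    """Sample reachable velocities, return the collision-free one closest to v_desired."""
    rng = np.random.default_rng(0)
    best, best_cost = None, np.inf
    for _ in range(n_samples):
        cand = rng.uniform(-v_max, v_max, size=2)
        if np.linalg.norm(cand) > v_max:
            continue
        if any(in_velocity_obstacle(p_uuv, p, v, r, cand) for p, v, r in obstacles):
            continue
        cost = np.linalg.norm(cand - v_desired)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best

if __name__ == "__main__":
    # one obstacle: position (m), velocity (m/s), combined safety radius (m)
    obstacles = [((10.0, 0.0), (-1.0, 0.0), 3.0)]
    v = choose_velocity(np.zeros(2), np.array([1.5, 0.0]), obstacles)
    print("collision-free velocity closest to the desired one:", v)
```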

  13. Microelectromechanical resonator and method for fabrication

    DOEpatents

    Wittwer, Jonathan W [Albuquerque, NM; Olsson, Roy H [Albuquerque, NM

    2009-11-10

    A method is disclosed for the robust fabrication of a microelectromechanical (MEM) resonator. In this method, a pattern of holes is formed in the resonator mass with the position, size and number of holes in the pattern being optimized to minimize an uncertainty Δf in the resonant frequency f₀ of the MEM resonator due to manufacturing process variations (e.g. edge bias). A number of different types of MEM resonators are disclosed which can be formed using this method, including capacitively transduced Lamé, wineglass and extensional resonators, and piezoelectric length-extensional resonators.

  14. Microelectromechanical resonator and method for fabrication

    DOEpatents

    Wittwer, Jonathan W [Albuquerque, NM; Olsson, Roy H [Albuquerque, NM

    2010-01-26

    A method is disclosed for the robust fabrication of a microelectromechanical (MEM) resonator. In this method, a pattern of holes is formed in the resonator mass with the position, size and number of holes in the pattern being optimized to minimize an uncertainty Δf in the resonant frequency f₀ of the MEM resonator due to manufacturing process variations (e.g. edge bias). A number of different types of MEM resonators are disclosed which can be formed using this method, including capacitively transduced Lamé, wineglass and extensional resonators, and piezoelectric length-extensional resonators.

  15. A coherent discrete variable representation method on a sphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Hua-Gen

    Here, the coherent discrete variable representation (ZDVR) has been extended for constructing a multidimensional potential-optimized DVR basis on a sphere. In order to deal with the non-constant Jacobian in spherical angles, two direct product primitive basis methods are proposed so that the original ZDVR technique can be properly implemented. The method has been demonstrated by computing the lowest states of a two dimensional (2D) vibrational model. Results show that the extended ZDVR method gives accurate eigenvalues and exponential convergence with increasing ZDVR basis size.

  16. A coherent discrete variable representation method on a sphere

    DOE PAGES

    Yu, Hua-Gen

    2017-09-05

    Here, the coherent discrete variable representation (ZDVR) has been extended for constructing a multidimensional potential-optimized DVR basis on a sphere. In order to deal with the non-constant Jacobian in spherical angles, two direct product primitive basis methods are proposed so that the original ZDVR technique can be properly implemented. The method has been demonstrated by computing the lowest states of a two dimensional (2D) vibrational model. Results show that the extended ZDVR method gives accurate eigenvalues and exponential convergence with increasing ZDVR basis size.

  17. Simulation and optimization of faceted structure for illumination

    NASA Astrophysics Data System (ADS)

    Liu, Lihong; Engel, Thierry; Flury, Manuel

    2016-04-01

    The re-direction of incoherent light using a surface containing only facets with specific angular values is proposed. A new photometric approach is adopted since the size of each facet is large in comparison with the wavelength. A reflective configuration is employed to avoid the dispersion problems of materials. The irradiance distribution of the reflected beam is determined by the angular position of each facet. In order to obtain the specific irradiance distribution, the angular position of each facet is optimized using Zemax OpticStudio 15 software. A detector is placed in the direction which is perpendicular to the reflected beam. According to the incoherent irradiance distribution on the detector, a merit function needs to be defined to pilot the optimization process. The two dimensional angular position of each facet is defined as a variable which is optimized within a specified varying range. Because the merit function needs to be updated, a macro program is carried out to update this function within Zemax. In order to reduce the complexity of the manual operation, an automatic optimization approach is established. Zemax is in charge of performing the optimization task and sending back the irradiance data to Matlab for further analysis. Several simulation results are given for the verification of the optimization method. The simulation results are compared to those obtained with the LightTools software in order to verify our optimization method.

  18. An adequacy-constrained integrated planning method for effective accommodation of DG and electric vehicles in smart distribution systems

    NASA Astrophysics Data System (ADS)

    Tan, Zhukui; Xie, Baiming; Zhao, Yuanliang; Dou, Jinyue; Yan, Tong; Liu, Bin; Zeng, Ming

    2018-06-01

    This paper presents a new integrated planning framework for effectively accommodating electric vehicles in smart distribution systems (SDS). The proposed method collectively incorporates the various investment options available to the utility, including distributed generation (DG), capacitors, and network reinforcement. Using a back-propagation algorithm combined with cost-benefit analysis, the optimal network upgrade plan and the allocation and sizing of the selected components are determined, with the purpose of minimizing the total system capital and operating costs of DG and EV accommodation. Furthermore, a new iterative reliability test method is proposed. It checks the optimization results by subsequently simulating the reliability level of the planning scheme and modifies the generation reserve margin to guarantee acceptable adequacy levels for each year of the planning horizon. Numerical results based on a 32-bus distribution system verify the effectiveness of the proposed method.

  19. [Exploration of one-step preparation of Ganoderma lucidum multicomponent microemulsion].

    PubMed

    He, Jun-Jie; Chen, Yan; Du, Meng; Cao, Wei; Yuan, Ling; Zheng, Li-Yan

    2013-03-01

    To explore a one-step method for the preparation of a Ganoderma lucidum multicomponent microemulsion, the formulation of the microemulsion was optimized according to the dissolution characteristics of the triterpenes and polysaccharides in Ganoderma lucidum. The optimal blank microemulsion was used as a solvent to sonicate the Ganoderma lucidum powder to prepare the multicomponent microemulsion, and its physicochemical properties were compared with those of the microemulsion made by the conventional method. The results showed that the multicomponent microemulsion was characterized by a size of (43.32 +/- 6.82) nm, a polydispersity index (PDI) of 0.173 +/- 0.025, and a zeta potential of -(3.98 +/- 0.82) mV. The contents of Ganoderma lucidum triterpenes and polysaccharides were (5.95 +/- 0.32) and (7.58 +/- 0.44) mg·mL⁻¹, respectively. Sonicating Ganoderma lucidum powder with the blank microemulsion could thus prepare the multicomponent microemulsion. Compared with the conventional method, this method is simple and low cost, and is suitable for industrial production.

  20. School Cost Functions: A Meta-Regression Analysis

    ERIC Educational Resources Information Center

    Colegrave, Andrew D.; Giles, Margaret J.

    2008-01-01

    The education cost literature includes econometric studies attempting to determine economies of scale, or estimate an optimal school or district size. Not only do their results differ, but the studies use dissimilar data, techniques, and models. To derive value from these studies requires that the estimates be made comparable. One method to do…

  1. DFT energy optimization of a large carbohydrate: cyclomaltohexaicosaose (CA-26)

    USDA-ARS?s Scientific Manuscript database

    CA-26 is the largest cyclodextrin (546 atoms) for which refined X-ray structural data is available. Because of its size, 26 D-glucose residues, it is beyond the scope of study of most ab initio or density functional methods, and to date has only been computationally examined using empirical force fi...

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sornadurai, D.; Ravindran, T. R.; Paul, V. Thomas

    Synthesis parameters are optimized in order to grow single crystals of multiferroic BiFeO₃. Pyramid (tetrahedron)-shaped single crystals 2-3 mm in size were successfully obtained by the solvothermal method. Scanning electron microscopy with EDAX confirmed the phase formation. Raman scattering spectra of the bulk BiFeO₃ single crystals have been measured and match well with reported spectra.

  3. Structures for the 3rd Generation Reusable Concept Vehicle

    NASA Technical Reports Server (NTRS)

    Hrinda, Glenn A.

    2001-01-01

    A major goal of NASA is to create an advanced space transportation system that provides a safe, affordable highway through the air and into space. The long-term plans are to reduce the risk of crew loss to 1 in 1,000,000 missions and to reduce the cost of reaching low-Earth orbit by a factor of 100 from today's costs. A third-generation reusable concept vehicle (RCV) was developed to assess the technologies required to meet NASA's space access goals. The vehicle will launch from Cape Kennedy carrying a 25,000 lb payload to the International Space Station (ISS). The system is an air-breathing launch vehicle (ABLV), a hypersonic lifting body with rockets, and uses triple-point hydrogen and liquid oxygen propellant. The focus of this paper is on the structural concepts and analysis methods used in developing the third-generation reusable launch vehicle (RLV). Member sizes, concepts, and material selections are discussed, as well as the analysis methods used in optimizing the structure. Analysis based on the HyperSizer structural sizing software is discussed. Design trades required to optimize structural weight are presented.

  4. Design of off-statistics axial-flow fans by means of vortex law optimization

    NASA Astrophysics Data System (ADS)

    Lazari, Andrea; Cattanei, Andrea

    2014-12-01

    Off-statistics input data sets are common in axial-flow fan design and may easily result in some violation of the requirements of a good aerodynamic blade design. In order to circumvent this problem, in the present paper a solution to the radial equilibrium equation is found which minimizes the outlet kinetic energy and fulfills the aerodynamic constraints, thus ensuring that the resulting blade has acceptable aerodynamic performance. The presented method is based on the optimization of a three-parameter vortex law and of the meridional channel size. The aerodynamic quantities to be employed as constraints are identified and suitable ranges of variation are proposed. The method is validated by means of a design with critical input data values and CFD analysis. Then, by means of systematic computations with different input data sets, correlations and charts are obtained which are analogous to classic correlations based on statistical investigations of existing machines. Such new correlations help size a fan of given characteristics as well as study the feasibility of a given design.

  5. Impact of tube current modulation on lesion conspicuity index in hi-resolution chest computed tomography

    NASA Astrophysics Data System (ADS)

    Szczepura, Katy; Tomkinson, David; Manning, David

    2017-03-01

    Tube current modulation is a method employed in CT in an attempt to optimize the radiation dose to the patient. The acceptable noise (noise index) can be varied based on the level of optimization required; higher accepted noise reduces the patient dose. Recent research [1] suggests that measuring the conspicuity index (C.I.) of focal lesions within an image is more reflective of a clinical reader's ability to perceive focal lesions than traditional physical measures such as contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR). Software has been developed and validated to calculate the C.I. in DICOM images. The aim of this work is to assess the impact of tube current modulation on conspicuity index and CTDIvol, to indicate the benefits and limitations of tube current modulation on lesion detectability. Method: An anthropomorphic chest phantom ("Lungman") was used with inserted lesions of varying size and Hounsfield unit (HU) value; the range of HU values and sizes was chosen to represent the variation found in focal lesions, which meant that some lesions had negative HU values.

  6. Optimization of L-asparaginase production from novel Enterobacter sp., by submerged fermentation using response surface methodology.

    PubMed

    Erva, Rajeswara Reddy; Goswami, Ajgebi Nath; Suman, Priyanka; Vedanabhatla, Ravali; Rajulapati, Satish Babu

    2017-03-16

    The culture conditions and nutritional rations influencing the production of an extracellular antileukemic enzyme by the novel Enterobacter aerogenes KCTC2190/MTCC111 were optimized in shake-flask culture. Process variables such as pH, temperature, incubation time, carbon and nitrogen sources, inducer concentration, and inoculum size were taken into account. In the present study, the highest enzyme activity achieved by the traditional one-variable-at-a-time method was 7.6 IU/mL, a 2.6-fold increase compared to the initial value. Further, L-asparaginase production was optimized using response surface methodology, and the validated experimental result at the optimized process variables gave 18.35 IU/mL of L-asparaginase activity, which is 2.4 times higher than that of the traditional optimization approach. The study explored E. aerogenes MTCC111 as a potent and promising bacterial source for a high yield of the antileukemic drug.

  7. Optimization to the Culture Conditions for Phellinus Production with Regression Analysis and Gene-Set Based Genetic Algorithm

    PubMed Central

    Li, Zhongwei; Xin, Yuezhen; Wang, Xun; Sun, Beibei; Xia, Shengyu; Li, Hui

    2016-01-01

    Phellinus is a fungus known as one of the key components in drugs used to prevent cancers. With the purpose of finding optimized culture conditions for Phellinus production in the laboratory, numerous single-factor experiments were performed and a large amount of experimental data was generated. In this work, we use the data collected from these experiments for regression analysis, from which a mathematical model for predicting Phellinus production is obtained. Subsequently, a gene-set based genetic algorithm is developed to optimize the values of the parameters involved in the culture conditions, including inoculum size, pH value, initial liquid volume, temperature, seed age, fermentation time, and rotation speed. The optimized values of these parameters are in accordance with biological experimental results, which indicates that our method has good predictive power for culture condition optimization. PMID:27610365
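
    The fitted regression model and the gene-set based genetic algorithm of the paper are not reproduced here. The sketch below is a generic real-coded genetic algorithm over the seven culture parameters, maximizing a placeholder yield function that stands in for the fitted regression model; the parameter ranges, GA settings, and yield surface are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# bounds: inoculum size (%), pH, initial liquid volume (mL), temperature (C),
# seed age (h), fermentation time (d), rotation speed (rpm) -- illustrative ranges
BOUNDS = np.array([[2, 10], [4, 8], [50, 150], [22, 32], [24, 96], [4, 10], [100, 200]],
                  dtype=float)

def predicted_yield(x):
    """Placeholder stand-in for a fitted regression model of Phellinus production."""
    optimum = BOUNDS.mean(axis=1)
    span = BOUNDS[:, 1] - BOUNDS[:, 0]
    return float(np.exp(-np.sum(((x - optimum) / span) ** 2)))

def genetic_algorithm(pop_size=40, n_gen=100, mut_rate=0.2):
    lo, hi = BOUNDS[:, 0], BOUNDS[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, len(BOUNDS)))
    for _ in range(n_gen):
        fit = np.array([predicted_yield(ind) for ind in pop])
        # binary tournament selection
        parents = pop[[max(rng.integers(pop_size, size=2), key=lambda i: fit[i])
                       for _ in range(pop_size)]]
        # uniform crossover with the neighbouring parent
        mask = rng.random(pop.shape) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Gaussian mutation, clipped to the bounds
        mutate = rng.random(pop.shape) < mut_rate
        children = np.clip(children + mutate * rng.normal(0, 0.05 * (hi - lo), pop.shape),
                           lo, hi)
        children[0] = pop[fit.argmax()]            # elitism: keep the current best
        pop = children
    fit = np.array([predicted_yield(ind) for ind in pop])
    return pop[fit.argmax()], fit.max()

if __name__ == "__main__":
    best, best_fit = genetic_algorithm()
    print("optimized culture conditions:", np.round(best, 2))
    print("predicted (placeholder) yield:", round(best_fit, 3))
```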

  8. Performance optimization of an MHD generator with physical constraints

    NASA Technical Reports Server (NTRS)

    Pian, C. C. P.; Seikel, G. R.; Smith, J. M.

    1979-01-01

    A technique is described which optimizes the power output of a Faraday MHD generator operating under a prescribed set of electrical and magnetic constraints. The method does not rely on complicated numerical optimization techniques. Instead, the magnetic field and the electrical loading are adjusted at each streamwise location such that the resultant generator design operates at the most limiting of the cited stress levels. The simplicity of the procedure makes it ideal for optimizing generator designs for system analysis studies of power plants. The resultant locally optimum channel designs are, however, not necessarily the globally optimum designs. The results of generator performance calculations are presented for a plant of approximately 2000 MWe. The difference between the maximum-power generator design and the optimal design which maximizes net MHD power is described. The sensitivity of the generator performance to the various operational parameters is also presented.

  9. SU-F-BRD-13: Quantum Annealing Applied to IMRT Beamlet Intensity Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nazareth, D; Spaans, J

    Purpose: We report on the first application of quantum annealing (QA) to the process of beamlet intensity optimization for IMRT. QA is a new technology, which employs novel hardware and software techniques to address various discrete optimization problems in many fields. Methods: We apply the D-Wave Inc. proprietary hardware, which natively exploits quantum mechanical effects for improved optimization. The new QA algorithm, running on this hardware, is most similar to simulated annealing, but relies on natural processes to directly minimize the free energy of a system. A simple quantum system is slowly evolved into a classical system representing the objective function. To apply QA to IMRT-type optimization, two prostate cases were considered. A reduced number of beamlets was employed, due to the current QA hardware limitation of ∼500 binary variables. The beamlet dose matrices were computed using CERR, and an objective function was defined based on typical clinical constraints, including dose-volume objectives. The objective function was discretized, and the QA method was compared to two standard optimization methods, simulated annealing and Tabu search, run on a conventional computing cluster. Results: Based on several runs, the average final objective function value achieved by the QA was 16.9 for the first patient, compared with 10.0 for Tabu and 6.7 for the SA. For the second patient, the values were 70.7 for the QA, 120.0 for Tabu, and 22.9 for the SA. The QA algorithm required 27-38% of the time required by the other two methods. Conclusion: In terms of objective function value, the QA performance was similar to Tabu but less effective than the SA. However, its speed was 3-4 times faster than the other two methods. This initial experiment suggests that QA-based heuristics may offer significant speedup over conventional clinical optimization methods, as quantum annealing hardware scales to larger sizes.

  10. Efficient Nondomination Level Update Method for Steady-State Evolutionary Multiobjective Optimization.

    PubMed

    Li, Ke; Deb, Kalyanmoy; Zhang, Qingfu; Zhang, Qiang

    2017-09-01

    Nondominated sorting (NDS), which divides a population into several nondomination levels (NDLs), is a basic step in many evolutionary multiobjective optimization (EMO) algorithms. It has been widely studied in the generational evolution model, where environmental selection is performed after generating a whole population of offspring. However, in a steady-state evolution model, where the population is updated right after the generation of a new candidate, NDS can be extremely time consuming. This is especially severe when the number of objectives and the population size become large. In this paper, we propose an efficient NDL update method to reduce the cost of maintaining the NDL structure in steady-state EMO. Instead of performing the NDS from scratch, our method only updates the NDLs of a limited number of solutions by extracting knowledge from the current NDL structure. Note that our NDL update method is performed twice at each iteration: once after reproduction and once after environmental selection. Extensive experiments demonstrate that, compared to five other state-of-the-art NDS methods, our proposed method avoids a significant amount of unnecessary comparisons, not only on synthetic data sets but also in some real optimization scenarios. Last but not least, we find that our proposed method is also useful for the generational evolution model.
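
    For context, the sketch below implements the standard from-scratch fast nondominated sorting that the proposed NDL update method avoids recomputing at every steady-state insertion; it defines the NDL structure being maintained but does not implement the paper's incremental update.

```python
import numpy as np

def fast_nondominated_sort(objs):
    """Standard nondominated sorting (all objectives minimized).
    Returns a list of nondomination levels, each a list of solution indices."""
    objs = np.asarray(objs, dtype=float)
    n = len(objs)
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    dom_count = np.zeros(n, dtype=int)      # number of solutions dominating i
    levels = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if np.all(objs[i] <= objs[j]) and np.any(objs[i] < objs[j]):
                dominated_by[i].append(j)
            elif np.all(objs[j] <= objs[i]) and np.any(objs[j] < objs[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            levels[0].append(i)
    k = 0
    while levels[k]:
        nxt = []
        for i in levels[k]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        levels.append(nxt)
        k += 1
    return levels[:-1]

if __name__ == "__main__":
    points = [(1, 5), (2, 3), (3, 1), (4, 4), (5, 2), (2.5, 2.5)]
    for lvl, members in enumerate(fast_nondominated_sort(points), start=1):
        print(f"NDL {lvl}: {members}")
```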

  11. Cancer stem cells and cell size: A causal link?

    PubMed

    Li, Qiuhui; Rycaj, Kiera; Chen, Xin; Tang, Dean G

    2015-12-01

    The majority of normal animal cells are 10-20 μm in diameter. Many signaling mechanisms, notably the PI3K/Akt/mTOR, Myc, and Hippo pathways, tightly control and coordinate cell growth, cell size, cell division, and cell number during homeostasis. These regulatory mechanisms are frequently deregulated during tumorigenesis, resulting in wide variations in cell size and increased proliferation in cancer cells. Here, we first review the evidence that primitive stem cells in adult tissues are quiescent and generally smaller than their differentiated progeny, suggesting a correlation between small cell size and stemness. Conversely, increased cell size positively correlates with differentiation phenotypes. We then discuss cancer stem cells (CSCs) and present some evidence that correlates cell size with CSC activity. Overall, a causal link between CSCs and cell size is relatively weak and remains to be rigorously assessed. In the future, optimizing methods for isolating cells based on size should help elucidate the connection between cancer cell size and CSC characteristics. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Development of nanostructured lipid carriers containing salicylic acid for dermal use based on the Quality by Design method.

    PubMed

    Kovács, A; Berkó, Sz; Csányi, E; Csóka, I

    2017-03-01

    The aim of our present work was to evaluate the applicability of the Quality by Design (QbD) methodology in the development and optimization of nanostructured lipid carriers containing salicylic acid (NLC SA). Within the QbD methodology, special emphasis is laid on the adaptation of the initial risk assessment step in order to properly identify the critical material attributes and critical process parameters in formulation development. NLC SA products were formulated by the ultrasonication method using Compritol 888 ATO as the solid lipid, Miglyol 812 as the liquid lipid, and Cremophor RH 60® as the surfactant. LeanQbD Software and StatSoft. Inc. Statistica for Windows 11 were employed to identify the risks. Three highly critical quality attributes (CQAs) for NLC SA were identified, namely particle size, particle size distribution, and aggregation. Five attributes of medium influence were identified, including dissolution rate, dissolution efficiency, pH, lipid solubility of the active pharmaceutical ingredient (API), and entrapment efficiency. Three critical material attributes (CMAs) and critical process parameters (CPPs) were identified: surfactant concentration, solid lipid/liquid lipid ratio, and ultrasonication time. The CMAs and CPPs are considered independent variables and the CQAs are defined as dependent variables. A 2³ factorial design was used to evaluate the roles of the independent and dependent variables. Based on our experiments, an optimal formulation can be obtained when the surfactant concentration is set to 5%, the solid lipid/liquid lipid ratio is 7:3, and the ultrasonication time is 20 min. The optimal NLC SA showed a narrow size distribution (0.857±0.014) with a mean particle size of 114±2.64 nm. The NLC SA product showed a significantly higher in vitro drug release compared to the microparticle reference preparation containing salicylic acid (MP SA). Copyright © 2016 Elsevier B.V. All rights reserved.
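
    The abstract uses a 2³ factorial design over the three CMAs/CPPs. As a hedged illustration of how such a design is laid out and its main effects estimated, the sketch below generates the coded design matrix and fits a main-effects model to hypothetical particle-size responses; the response values and effect magnitudes are invented for illustration and are not the study's data.

```python
import itertools
import numpy as np

# coded 2^3 full factorial design: -1 = low level, +1 = high level
factor_names = ["surfactant_conc", "solid_liquid_ratio", "sonication_time"]
design = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)

# hypothetical measured particle sizes (nm) for the eight runs, in standard order
particle_size = np.array([165, 150, 140, 128, 135, 118, 122, 114], dtype=float)

# fit a main-effects model  y = b0 + b1*x1 + b2*x2 + b3*x3  by least squares
X = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(X, particle_size, rcond=None)

print(f"intercept (mean size): {coef[0]:.1f} nm")
for name, b in zip(factor_names, coef[1:]):
    # in coded units, the low-to-high effect is twice the regression coefficient
    print(f"effect of {name}: {2 * b:+.1f} nm per low->high change")
```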

  13. New inhalation-optimized itraconazole nanoparticle-based dry powders for the treatment of invasive pulmonary aspergillosis

    PubMed Central

    Duret, Christophe; Wauthoz, Nathalie; Sebti, Thami; Vanderbist, Francis; Amighi, Karim

    2012-01-01

    Purpose Itraconazole (ITZ) dry powders for inhalation (DPI) composed of nanoparticles (NP) embedded in carrier microparticles were prepared and characterized. Methods DPIs were initially produced by reducing the ITZ particle size to the nanometer range using high-pressure homogenization with tocopherol polyethylene 1000 succinate (TPGS, 10% w/w ITZ) as a stabilizer. The optimized nanosuspension and the initial microsuspension were then spray-dried with different proportions of or in the absence of mannitol and/or sodium taurocholate. DPI characterization was performed using scanning electron microscopy for morphology, laser diffraction to evaluate the size-reduction process, and the size of the dried NP when reconstituted in aqueous media, impaction studies using a multistage liquid impactor to determine the aerodynamic performance and fine-particle fraction that is theoretically able to reach the lung, and dissolution studies to determine the solubility of ITZ. Results Scanning electron microscopy micrographs showed that the DPI particles were composed of mannitol microparticles with embedded nano- or micro-ITZ crystals. The formulations prepared from the nanosuspension exhibited good flow properties and better fine-particle fractions, ranging from 46.2% ± 0.5% to 63.2% ± 1.7% compared to the 23.1% ± 0.3% that was observed with the formulation produced from the initial microsuspension. Spray-drying affected the NP size by inducing irreversible aggregation, which was able to be minimized by the addition of mannitol and sodium taurocholate before the drying procedure. The ITZ NP-based DPI considerably increased the ITZ solubility (58 ± 2 increased to 96 ± 1 ng/mL) compared with that of raw ITZ or an ITZ microparticle-based DPI (<10 ng/mL). Conclusion Embedding ITZ NP in inhalable microparticles is a very effective method to produce DPI formulations with optimal aerodynamic properties and enhanced ITZ solubility. These formulations could be applied to other poorly water-soluble drugs and could be a very effective alternative for treating invasive pulmonary aspergillosis. PMID:23093903

  14. Efficient Bayesian mixed model analysis increases association power in large cohorts

    PubMed Central

    Loh, Po-Ru; Tucker, George; Bulik-Sullivan, Brendan K; Vilhjálmsson, Bjarni J; Finucane, Hilary K; Salem, Rany M; Chasman, Daniel I; Ridker, Paul M; Neale, Benjamin M; Berger, Bonnie; Patterson, Nick; Price, Alkes L

    2014-01-01

    Linear mixed models are a powerful statistical tool for identifying genetic associations and avoiding confounding. However, existing methods are computationally intractable in large cohorts, and may not optimize power. All existing methods require time cost O(MN²) (where N = #samples and M = #SNPs) and implicitly assume an infinitesimal genetic architecture in which effect sizes are normally distributed, which can limit power. Here, we present a far more efficient mixed model association method, BOLT-LMM, which requires only a small number of O(MN)-time iterations and increases power by modeling more realistic, non-infinitesimal genetic architectures via a Bayesian mixture prior on marker effect sizes. We applied BOLT-LMM to nine quantitative traits in 23,294 samples from the Women's Genome Health Study (WGHS) and observed significant increases in power, consistent with simulations. Theory and simulations show that the boost in power increases with cohort size, making BOLT-LMM appealing for GWAS in large cohorts. PMID:25642633

  15. Nano-sized crystalline drug production by milling technology.

    PubMed

    Moribe, Kunikazu; Ueda, Keisuke; Limwikrant, Waree; Higashi, Kenjirou; Yamamoto, Keiji

    2013-01-01

    Nano-formulation of poorly water-soluble drugs has been developed to enhance drug dissolution. In this review, we introduce nano-milling technology described in recently published papers. Factors affecting the size of drug crystals are compared based on the preparation methods and drug and excipient types. A top-down approach using the comminution process is a method conventionally used to prepare crystalline drug nanoparticles. Wet milling using media is well studied and several wet-milled drug formulations are now on the market. Several trials on drug nanosuspension preparation using different apparatuses, materials, and conditions have been reported. Wet milling using a high-pressure homogenizer is another alternative to preparing production-scale drug nanosuspensions. Dry milling is a simple method of preparing a solid-state drug nano-formulation. The effect of size on the dissolution of a drug from nanoparticles is an area of fundamental research, but it is sometimes incorrectly evaluated. Here, we discuss evaluation procedures and the associated problems. Lastly, the importance of quality control, process optimization, and physicochemical characterization are briefly discussed.

  16. On-chip generation of microbubbles as a practical technology for manufacturing contrast agents for ultrasonic imaging

    PubMed Central

    Hettiarachchi, Kanaka; Talu, Esra; Longo, Marjorie L.; Dayton, Paul A.; Lee, Abraham P.

    2007-01-01

    This paper presents a new manufacturing method to generate monodisperse microbubble contrast agents with polydispersity index (σ) values of <2% through microfluidic flow-focusing. Micron-sized lipid shell-based perfluorocarbon (PFC) gas microbubbles for use as ultrasound contrast agents were produced using this method. The poly(dimethylsiloxane) (PDMS)-based devices feature expanding nozzle geometry with a 7 μm orifice width, and are robust enough for consistent production of microbubbles with runtimes lasting several hours. With high-speed imaging, we characterized relationships between channel geometry, liquid flow rate Q, and gas pressure P in controlling bubble sizes. By a simple optimization of the channel geometry and Q and P, bubbles with a mean diameter of <5 μm can be obtained, ideal for various ultrasonic imaging applications. This method demonstrates the potential of microfluidics as an efficient means for custom-designing ultrasound contrast agents with precise size distributions, different gas compositions and new shell materials for stabilization, and for future targeted imaging and therapeutic applications. PMID:17389962

  17. Co-Design Method and Wafer-Level Packaging Technique of Thin-Film Flexible Antenna and Silicon CMOS Rectifier Chips for Wireless-Powered Neural Interface Systems.

    PubMed

    Okabe, Kenji; Jeewan, Horagodage Prabhath; Yamagiwa, Shota; Kawano, Takeshi; Ishida, Makoto; Akita, Ippei

    2015-12-16

    In this paper, a co-design method and a wafer-level packaging technique for a flexible antenna and a CMOS rectifier chip for use in a small-sized implantable system on the brain surface are proposed. The proposed co-design method optimizes the system architecture and can help avoid the use of external matching components, resulting in the realization of a small-sized system. In addition, the technique employed to assemble a silicon large-scale integration (LSI) chip on the very thin parylene film (5 μm) enables the integration of the rectifier circuits and the flexible antenna (rectenna). In the demonstration of wireless power transmission (WPT), the fabricated flexible rectenna achieved a maximum efficiency of 0.497% with a distance of 3 cm between antennas. In addition, WPT with radio waves allows a misalignment of 185% of the antenna size, implying that misalignment has less effect on the WPT characteristics compared with electromagnetic induction.

  18. Co-Design Method and Wafer-Level Packaging Technique of Thin-Film Flexible Antenna and Silicon CMOS Rectifier Chips for Wireless-Powered Neural Interface Systems

    PubMed Central

    Okabe, Kenji; Jeewan, Horagodage Prabhath; Yamagiwa, Shota; Kawano, Takeshi; Ishida, Makoto; Akita, Ippei

    2015-01-01

    In this paper, a co-design method and a wafer-level packaging technique for a flexible antenna and a CMOS rectifier chip for use in a small-sized implantable system on the brain surface are proposed. The proposed co-design method optimizes the system architecture and can help avoid the use of external matching components, resulting in the realization of a small-sized system. In addition, the technique employed to assemble a silicon large-scale integration (LSI) chip on the very thin parylene film (5 μm) enables the integration of the rectifier circuits and the flexible antenna (rectenna). In the demonstration of wireless power transmission (WPT), the fabricated flexible rectenna achieved a maximum efficiency of 0.497% with a distance of 3 cm between antennas. In addition, WPT with radio waves allows a misalignment of 185% of the antenna size, implying that misalignment has less effect on the WPT characteristics compared with electromagnetic induction. PMID:26694407

  19. The effect of nanoparticle size on theranostic systems: the optimal particle size for imaging is not necessarily optimal for drug delivery

    NASA Astrophysics Data System (ADS)

    Dreifuss, Tamar; Betzer, Oshra; Barnoy, Eran; Motiei, Menachem; Popovtzer, Rachela

    2018-02-01

    Theranostics is an emerging field, defined as the combination of therapeutic and diagnostic capabilities in the same material. Nanoparticles are considered an efficient platform for theranostics, particularly in cancer treatment, as they offer substantial advantages over both common imaging contrast agents and chemotherapeutic drugs. However, the development of theranostic nanoplatforms raises an important question: Is the optimal particle for imaging also optimal for therapy? Are the specific parameters required for maximal drug delivery similar to those required for imaging applications? Herein, we examined this issue by investigating the effect of nanoparticle size on tumor uptake and imaging. Anti-epidermal growth factor receptor (EGFR)-conjugated gold nanoparticles (GNPs) of different sizes (diameter range: 20-120 nm) were injected into tumor-bearing mice, and their uptake by tumors was measured, as well as their tumor visualization capabilities as a tumor-targeted CT contrast agent. Interestingly, the results showed that different particle sizes led to the highest tumor uptake and the highest contrast enhancement, meaning that the optimal particle size for drug delivery is not necessarily optimal for tumor imaging. These results have important implications for the design of theranostic nanoplatforms.

  20. Development of biopolymers based interpenetrating polymeric network of capecitabine: A drug delivery vehicle to extend the release of the model drug.

    PubMed

    Upadhyay, Mansi; Adena, Sandeep Kumar Reddy; Vardhan, Harsh; Yadav, Sarita K; Mishra, Brahmeshwar

    2018-04-27

    This research aims at the development and optimization of capecitabine-loaded interpenetrating polymeric network (IPN) microbeads by the ionotropic gelation method, using the polymers locust bean gum and sodium alginate and a QbD approach. FMEA was performed to identify the risks influencing the CQAs. BBD was applied to study the effect of the factors (polymer ratio, amount of cross-linker, and curing time) on the responses (particle size, % drug entrapment, and % drug release). Polynomial equations and 3-D graphs were plotted to relate the factors and responses. The results for the optimized batch, viz. particle size (457.92 ± 1.6 μm), % drug entrapment (74.11 ± 3.1%), and % drug release (90.23 ± 2.1%), were close to the predicted values generated by Minitab® 17. The characterization techniques SEM, EDX, FTIR, DSC, and XRD were also performed for the optimized batch. To study water transport inside the IPN microbeads, a swelling study was performed. In vitro drug release of the optimized batch showed controlled drug release for 12 h. A pharmacokinetic study carried out following oral administration in Albino Wistar rats showed that the optimized microbeads had better PK parameters than the free drug. In vitro cytotoxicity against HT-29 cells revealed a significant reduction of cell growth when treated with the optimized formulation, indicating IPN microbeads as an effective dosage form for treating colon cancer. Copyright © 2018. Published by Elsevier B.V.

  1. High efficient perovskite solar cell material CH3NH3PbI3: Synthesis of films and their characterization

    NASA Astrophysics Data System (ADS)

    Bera, Amrita Mandal; Wargulski, Dan Ralf; Unold, Thomas

    2018-04-01

    Hybrid organometal perovskites have emerged as promising solar cell materials and have exhibited solar cell efficiencies of more than 20%. Thin films of the methylammonium lead iodide (CH3NH3PbI3) perovskite have been synthesized by two different methods (one-step and two-step), and their morphological properties have been studied by scanning electron microscopy and optical microscope imaging. The morphology of the perovskite layer is one of the most important parameters affecting solar cell efficiency. The morphology of the films revealed that the two-step method provides better surface coverage than the one-step method; however, the grain sizes were smaller in the case of the two-step method. The films prepared by the two-step method on different substrates revealed that the grain size also depends on the substrate, with an increase in grain size found from the glass substrate to FTO with a TiO2 blocking layer to FTO, without any change in the surface coverage area. The present study reveals that improved film quality can be obtained by the two-step method through optimization of the synthesis processes.

  2. Quartz crystal microbalance as a sensing active element for rupture scanning within frequency band.

    PubMed

    Dultsev, F N; Kolosovsky, E A

    2011-02-14

    A new method based on the use of a quartz crystal microbalance (QCM) as an active sensing element is developed, optimized and tested in a model system to measure the rupture force and deduce the size distribution of nanoparticles. As suggested by model predictions, the QCM is shaped as a strip. The ratio of rupture signals at the second and third harmonics versus the geometric position of a body on the QCM surface is investigated theoretically. Recommendations concerning the use of the method for measuring the nanoparticle size distribution are presented. It is shown experimentally for an ensemble of test particles with a characteristic size within 20-30 nm that the proposed method allows one to determine the particle size distribution. On the basis of the position and value of the measured rupture signal, a histogram of the particle size distribution and the percentage of each size fraction were determined. The main merits of the bond-rupture method are its rapid response, simplicity and the ability to discriminate between specific and non-specific interactions. The method is highly sensitive with respect to mass (the sensitivity is generally dependent on the chemical nature of receptor and analyte and may reach 8×10⁻¹⁴ g mm⁻²) and applicable to measuring rupture forces either for weak bonds, for example hydrogen bonds, or for strong covalent bonds (10⁻¹¹-10⁻⁹ N). This procedure may become a good alternative to existing methods, such as AFM or optical methods for detecting biological objects, and find a broad range of applications both in laboratory research and in biosensing for various purposes. Possible applications include medicine, diagnostics, environmental or agricultural monitoring. Copyright © 2010 Elsevier B.V. All rights reserved.

  3. Optimization of air gap for two-dimensional imaging system using synchrotron radiation

    NASA Astrophysics Data System (ADS)

    Zeniya, Tsutomu; Takeda, Tohoru; Yu, Quanwen; Hyodo, Kazuyuki; Yuasa, Tetsuya; Aiyoshi, Yuji; Hiranaka, Yukio; Itai, Yuji; Akatsuka, Takao

    2000-11-01

    Since synchrotron radiation (SR) has several excellent properties such as high brilliance, a broad continuous energy spectrum and small divergence, x-ray images with high contrast and high spatial resolution can be obtained by using SR. In 2D imaging using SR, the air gap method is very effective for reducing scatter contamination. However, to use the air gap method, the geometrical effect of the finite source size of SR must be considered, because the spatial resolution of the image is degraded by the air gap. For 2D x-ray imaging with SR, x-ray mammography was chosen to examine the effect of the air gap method. We theoretically discussed the optimization of the air gap distance using the effective scatter point source model proposed by Muntz, and performed experiments with a newly manufactured monochromator with asymmetrical reflection and an imaging plate.

  4. Transparent ceramic garnet scintillator optimization via composition and co-doping for high-energy resolution gamma spectrometers (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Cherepy, Nerine J.; Payne, Stephen A.; Seeley, Zachary M.; Beck, Patrick R.; Swanberg, Erik L.; Hunter, Steven L.

    2016-09-01

    Breakthrough energy resolution, R(662keV) <4%, has been achieved with an oxide scintillator, Cerium-doped Gadolinium Yttrium Gallium Aluminum Garnet, or GYGAG(Ce), by optimizing fabrication conditions. Here we describe the dependence of scintillation light yield and energy resolution on several variables: (1) Stoichiometry, in particular Gd/Y and Ga/Al ratios which modify the bandgap energy, (2) Processing methods, including vacuum vs. oxygen sintering, and (3) Trace co-dopants that influence the formation of Ce4+ and modify the intra-bandgap trap distribution. To learn about how chemical composition influences the scintillation properties of transparent ceramic garnet scintillators, we have measured: scintillation decay component amplitudes; intensity and duration of afterglow; thermoluminescence glow curve peak positions and amplitudes; integrated light yield; light yield non-proportionality, as measured in the Scintillator Light Yield Non-Proportionality Characterization Instrument (SLYNCI); and energy resolution for gamma spectroscopy. Optimized GYGAG(Ce) provides R(662 keV) =3.0%, for 0.05 cm3 size ceramics with Silicon photodiode readout, and R(662 keV) =4.6%, at 2 in3 size with PMT readout.

  5. Conceptual design of the 6 MW Mod-5A wind turbine generator

    NASA Technical Reports Server (NTRS)

    Barton, R. S.; Lucas, W. C.

    1982-01-01

    The General Electric Company, Advanced Energy Programs Department, is designing under DOE/NASA sponsorship the MOD-5A wind turbine system, which must generate electricity for 3.75 cents/kWh (1980 dollars) or less. During the Conceptual Design Phase, completed in March 1981, the MOD-5A WTG system size and features were established as a result of tradeoff and optimization studies driven by minimizing the system cost of energy (COE). This led to a 400-ft rotor diameter. The resulting MOD-5A system is defined in this paper along with the operational and environmental factors that drive various portions of the design. Development of weight and cost estimating relationships (WCERs) and their use in optimizing the MOD-5A are discussed. The results of major tradeoff studies are also presented. Subsystem COE contributions for the 100th unit are shown along with the method of computation. Detailed descriptions of the major subsystems are given so that the results of the various trade and optimization studies can be more readily visualized.

  6. Sparse matrix multiplications for linear scaling electronic structure calculations in an atom-centered basis set using multiatom blocks.

    PubMed

    Saravanan, Chandra; Shao, Yihan; Baer, Roi; Ross, Philip N; Head-Gordon, Martin

    2003-04-15

    A sparse matrix multiplication scheme with multiatom blocks is reported, a tool that can be very useful for developing linear-scaling methods with atom-centered basis functions. Compared to conventional element-by-element sparse matrix multiplication schemes, efficiency is gained by the use of the highly optimized basic linear algebra subroutines (BLAS). However, some sparsity is lost in the multiatom blocking scheme because these matrix blocks will in general contain negligible elements. As a result, an optimal block size that minimizes the CPU time by balancing these two effects is recovered. In calculations on linear alkanes, polyglycines, estane polymers, and water clusters the optimal block size is found to be between 40 and 100 basis functions, where about 55-75% of the machine peak performance was achieved on an IBM RS6000 workstation. In these calculations, the blocked sparse matrix multiplications can be 10 times faster than a standard element-by-element sparse matrix package. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 618-622, 2003
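    The block-size trade-off described above can be illustrated with a small, self-contained sketch (Python/NumPy rather than the authors' BLAS-based implementation): nonzeros are grouped into fixed-size blocks, blocks whose entries are all negligible are dropped, and the product is formed with dense per-block multiplications. The block size, tolerance and banded test matrix are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): block-sparse matrix product.
# Nonzeros are grouped into b-by-b blocks; each retained block is multiplied with
# dense BLAS (via numpy), trading some lost sparsity for higher flop rates.
import numpy as np

def to_blocks(A, b, tol=1e-12):
    """Return {(I, J): dense block} for blocks whose max |entry| exceeds tol."""
    n = A.shape[0]
    blocks = {}
    for I in range(0, n, b):
        for J in range(0, n, b):
            blk = A[I:I+b, J:J+b]
            if np.abs(blk).max() > tol:
                blocks[(I, J)] = blk.copy()
    return blocks

def block_matmul(Ablocks, Bblocks, n, b):
    C = np.zeros((n, n))
    for (I, K), a in Ablocks.items():
        for (K2, J), bmat in Bblocks.items():
            if K == K2:
                C[I:I+b, J:J+b] += a @ bmat   # dense BLAS call per block pair
    return C

if __name__ == "__main__":
    n, b = 240, 40                      # 40 lies in the optimal range reported above
    A = np.triu(np.tril(np.random.rand(n, n), 25), -25)   # banded, hence sparse
    Ab = to_blocks(A, b)
    C = block_matmul(Ab, Ab, n, b)
    print(np.allclose(C, A @ A))        # sanity check against the dense product
```

    Timing `block_matmul` for a range of block sizes on a fixed sparse matrix reproduces the qualitative effect in the abstract: very small blocks lose the BLAS advantage, very large blocks lose sparsity, and the minimum lies in between.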

  7. Design of focused and restrained subsets from extremely large virtual libraries.

    PubMed

    Jamois, Eric A; Lin, Chien T; Waldman, Marvin

    2003-11-01

    With the current and ever-growing offering of reagents, along with the vast palette of organic reactions, virtual libraries accessible to combinatorial chemists can reach sizes of billions of compounds or more. Extracting practical-size subsets for experimentation has remained an essential step in the design of combinatorial libraries. A typical approach to computational library design involves enumeration of structures and properties for the entire virtual library, which may be impractical for such large libraries. This study describes a new approach termed on-the-fly optimization (OTFO), where descriptors are computed as needed within the subset optimization cycle and without intermediate enumeration of structures. Results reported herein highlight the advantages of coupling an ultra-fast descriptor calculation engine to subset optimization capabilities. We also show that enumeration of properties for the entire virtual library may be not only impractical but also wasteful. Successful design of focused and restrained subsets can be achieved while sampling only a small fraction of the virtual library. We also investigate the stability of the method and compare results obtained from simulated annealing (SA) and genetic algorithms (GA).
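    A minimal sketch of the general idea, assuming a toy descriptor and a simulated-annealing search (not the OTFO engine or its descriptors): candidate products are addressed by index only, and descriptors are computed and cached lazily for the products the optimizer actually visits, so only a small fraction of the virtual library is ever evaluated.

```python
# Hedged sketch of on-the-fly subset optimization: descriptors are computed
# lazily for candidates as the annealer visits them, so the full virtual
# library is never enumerated. Descriptor and score are toy placeholders.
import math, random

random.seed(0)
LIBRARY_SIZE = 10_000_000          # virtual products, addressed by index only
_cache = {}

def descriptor(idx):
    """Toy stand-in for an ultra-fast descriptor engine; memoized per product."""
    if idx not in _cache:
        rng = random.Random(idx)                   # deterministic pseudo-property
        _cache[idx] = rng.gauss(400.0, 60.0)       # e.g. a molecular-weight-like value
    return _cache[idx]

def score(subset, target=350.0):
    """Lower is better: mean distance of the subset from a target property."""
    return sum(abs(descriptor(i) - target) for i in subset) / len(subset)

def anneal(k=100, steps=20000, t0=5.0):
    current = random.sample(range(LIBRARY_SIZE), k)
    s = score(current)
    best, best_s = list(current), s
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-6
        cand = list(current)
        cand[random.randrange(k)] = random.randrange(LIBRARY_SIZE)  # swap one member
        cs = score(cand)
        if cs < s or random.random() < math.exp((s - cs) / t):
            current, s = cand, cs
            if s < best_s:
                best, best_s = list(current), s
    return best, best_s, len(_cache)

subset, val, evaluated = anneal()
print(f"score={val:.2f}, products ever evaluated: {evaluated} of {LIBRARY_SIZE}")
```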

  8. [Optimize preparation of compound licorice microemulsion with D-optimal design].

    PubMed

    Ma, Shu-Wei; Wang, Yong-Jie; Chen, Cheng; Qiu, Yue; Wu, Qing

    2018-03-01

    In order to increase the solubility of essential oil in compound licorice microemulsion and improve the efficacy of the decoction for treating chronic eczema, this study prepared the decoction as a microemulsion. The essential oil was used as the oil phase of the microemulsion and the extract as the water phase. The microemulsion region and the maximum water capacity were then obtained by plotting pseudo-ternary phase diagrams, to determine the appropriate types of surfactant and cosurfactant and the Km value, the mass ratio between surfactant and cosurfactant. With particle size and skin retention of active ingredients as the indices, the microemulsion formulation was optimized by the D-optimal design method, and the in vitro release behavior of the optimized formulation was investigated. The results showed that the microemulsion was optimal with Tween-80 as the surfactant and anhydrous ethanol as the cosurfactant. When the Km value was 1, the area of the microemulsion region was largest, while an extract concentration of 0.5 g·mL⁻¹ had the lowest effect on the particle size distribution of the microemulsion. The final optimized formulation was as follows: 9.4% Tween-80, 9.4% anhydrous ethanol, 1.0% peppermint oil and 80.2% of the 0.5 g·mL⁻¹ extract. The microemulsion prepared under these conditions had low viscosity, good stability and high skin retention of drug; the in vitro release experiment showed that the microemulsion had a sustained-release effect on glycyrrhizic acid and liquiritin, basically achieving the expected purpose of the project. Copyright© by the Chinese Pharmaceutical Association.

  9. Construction and cellular uptake behavior of redox-sensitive docetaxel prodrug-loaded liposomes.

    PubMed

    Ren, Guolian; Jiang, Mengjuan; Guo, Weiling; Sun, Bingjun; Lian, He; Wang, Yongjun; He, Zhonggui

    2018-01-01

    A redox-responsive docetaxel (DTX) prodrug consisting of a disulfide linkage between DTX and vitamin E (DTX-SS-VE) was synthesized in our laboratory and was successfully formulated into liposomes. The aim of this study was to optimize the formulation and investigate the cellular uptake of DTX prodrug-loaded liposomes (DPLs). The content of DTX-SS-VE was determined by ultrahigh-performance liquid chromatography (UPLC). The formulation and process were optimized using entrapment efficiency (EE), drug-loading (DL), particle size and polydispersity index (PDI) as the evaluation indices. The optimal formulation was as follows: drug/lipid ratio of 1:12, cholesterol/lipid ratio of 1:10, hydration temperature of 40 °C, sonication power and time of 400 W and 5 min. The EE, DL and particle size of the optimized DPLs were 97.60 ± 0.03%, 7.09 ± 0.22% and 93.06 ± 0.72 nm, respectively. DPLs had good dilution stability under the physiological conditions over 24 h. In addition, DPLs were found to enter tumor cells via different pathways and released DTX from the prodrug to induce apoptosis. Taken together, the optimized formulation and process were found to be a simple, stable and applicable method for the preparation of DPLs that could successfully escape from lysosomes.

  10. Medial-based deformable models in nonconvex shape-spaces for medical image segmentation.

    PubMed

    McIntosh, Chris; Hamarneh, Ghassan

    2012-01-01

    We explore the application of genetic algorithms (GA) to deformable models through the proposition of a novel method for medical image segmentation that combines GA with nonconvex, localized, medial-based shape statistics. We replace the more typical gradient descent optimizer used in deformable models with GA, and the convex, implicit, global shape statistics with nonconvex, explicit, localized ones. Specifically, we propose GA to reduce typical deformable model weaknesses pertaining to model initialization, pose estimation and local minima, through the simultaneous evolution of a large number of models. Furthermore, we constrain the evolution, and thus reduce the size of the search-space, by using statistically-based deformable models whose deformations are intuitive (stretch, bulge, bend) and are driven in terms of localized principal modes of variation, instead of modes of variation across the entire shape that often fail to capture localized shape changes. Although GA are not guaranteed to achieve the global optima, our method compares favorably to the prevalent optimization techniques, convex/nonconvex gradient-based optimizers and to globally optimal graph-theoretic combinatorial optimization techniques, when applied to the task of corpus callosum segmentation in 50 mid-sagittal brain magnetic resonance images.

  11. Effects of tree-to-tree variations on sap flux-based transpiration estimates in a forested watershed

    NASA Astrophysics Data System (ADS)

    Kume, Tomonori; Tsuruta, Kenji; Komatsu, Hikaru; Kumagai, Tomo'omi; Higashi, Naoko; Shinohara, Yoshinori; Otsuki, Kyoichi

    2010-05-01

    To estimate forest stand-scale water use, we assessed how sample sizes affect confidence of stand-scale transpiration (E) estimates calculated from sap flux (Fd) and sapwood area (AS_tree) measurements of individual trees. In a Japanese cypress plantation, we measured Fd and AS_tree in all trees (n = 58) within a 20 × 20 m study plot, which was divided into four 10 × 10 subplots. We calculated E from stand AS_tree (AS_stand) and mean stand Fd (JS) values. Using Monte Carlo analyses, we examined potential errors associated with sample sizes in E, AS_stand, and JS by using the original AS_tree and Fd data sets. Consequently, we defined optimal sample sizes of 10 and 15 for AS_stand and JS estimates, respectively, in the 20 × 20 m plot. Sample sizes greater than the optimal sample sizes did not decrease potential errors. The optimal sample sizes for JS changed according to plot size (e.g., 10 × 10 m and 10 × 20 m), while the optimal sample sizes for AS_stand did not. As well, the optimal sample sizes for JS did not change in different vapor pressure deficit conditions. In terms of E estimates, these results suggest that the tree-to-tree variations in Fd vary among different plots, and that plot size to capture tree-to-tree variations in Fd is an important factor. This study also discusses planning balanced sampling designs to extrapolate stand-scale estimates to catchment-scale estimates.
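    The Monte Carlo logic behind such sample-size assessments can be sketched as follows, with synthetic sap-flux values standing in for the measured data set: repeatedly subsample m trees, recompute the stand mean, and record the potential (here, 95th-percentile) relative error as a function of m.

```python
# Illustrative Monte Carlo resampling (synthetic data, not the study's measurements):
# how the potential error of the stand-mean sap flux shrinks with sample size.
import numpy as np

rng = np.random.default_rng(1)
Fd_all = rng.lognormal(mean=0.0, sigma=0.4, size=58)   # 58 trees, as in the plot
JS_true = Fd_all.mean()

def potential_error(m, n_draws=5000):
    """95th percentile of |relative error| when only m trees are sampled."""
    means = np.array([rng.choice(Fd_all, size=m, replace=False).mean()
                      for _ in range(n_draws)])
    return np.percentile(np.abs(means - JS_true) / JS_true, 95)

for m in (5, 10, 15, 20, 30, 58):
    print(f"n = {m:2d}  ->  95% potential error ~ {potential_error(m):.1%}")
```

    Past some m the curve flattens, which is the behavior the abstract uses to define an optimal sample size; the same resampling applied to sapwood areas gives the corresponding curve for AS_stand.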

  12. Screening reservoir systems by considering the efficient trade-offs—informing infrastructure investment decisions on the Blue Nile

    NASA Astrophysics Data System (ADS)

    Geressu, Robel T.; Harou, Julien J.

    2015-12-01

    Multi-reservoir system planners should consider how new dams impact downstream reservoirs and the potential contribution of each component to coordinated management. We propose an optimized multi-criteria screening approach to identify the best performing designs, i.e., the selection, size and operating rules of new reservoirs within multi-reservoir systems. Reservoir release operating rules and storage sizes are optimized concurrently for each separate infrastructure design under consideration. Outputs reveal system trade-offs using multi-dimensional scatter plots where each point represents an approximately Pareto-optimal design. The method is applied to proposed Blue Nile River reservoirs in Ethiopia, where trade-offs between total and firm energy output, aggregate storage, and downstream irrigation and energy provision are evaluated for the best performing designs. This proof-of-concept study shows that the recommended Blue Nile system designs depend on whether monthly firm energy or annual energy is prioritized. 39 TWh/yr of energy potential is available from the proposed Blue Nile reservoirs. The results show that, depending on the amount of energy deemed sufficient, the current maximum capacities of the planned reservoirs could be larger than they need to be. The method can also be used to identify which of the proposed reservoir types and storage sizes would allow the highest downstream benefits to Sudan under different upstream operating objectives (i.e., operation to maximize either average annual energy or firm energy). The proposed approach identifies the most promising system designs, reveals how they imply different trade-offs between metrics of system performance, and helps system planners assess the sensitivity of overall performance to the design parameters of component reservoirs.
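    A minimal sketch of the screening step, assuming illustrative objective values rather than the study's simulation outputs: candidate designs are filtered down to the non-dominated (approximately Pareto-optimal) set over annual energy, firm energy and aggregate storage.

```python
# Sketch of the screening idea: keep only non-dominated designs among candidates
# evaluated on several objectives (maximize annual and firm energy, minimize
# aggregate storage). Names and objective values are illustrative placeholders.
from typing import List, Tuple

Design = Tuple[str, float, float, float]   # (name, annual TWh, firm TWh, storage km3)

def dominates(a: Design, b: Design) -> bool:
    """a dominates b if it is no worse in every objective and better in at least one."""
    no_worse = a[1] >= b[1] and a[2] >= b[2] and a[3] <= b[3]
    strictly = a[1] > b[1] or a[2] > b[2] or a[3] < b[3]
    return no_worse and strictly

def pareto_front(designs: List[Design]) -> List[Design]:
    return [d for d in designs
            if not any(dominates(o, d) for o in designs if o is not d)]

candidates = [
    ("A: 2 dams, small",  28.0, 14.0,  40.0),
    ("B: 3 dams, medium", 35.0, 18.0,  95.0),
    ("C: 3 dams, large",  39.0, 19.0, 160.0),
    ("D: 2 dams, large",  30.0, 13.0, 150.0),   # dominated by B
]
for d in pareto_front(candidates):
    print(d)
```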

  13. Improving the Curie depth estimation through optimizing the spectral block dimensions of the aeromagnetic data in the Sabalan geothermal field

    NASA Astrophysics Data System (ADS)

    Akbar, Somaieh; Fathianpour, Nader

    2016-12-01

    The Curie point depth is of great importance in characterizing geothermal resources. In this study, the Curie iso-depth map was produced using the well-known method of dividing the aeromagnetic dataset into overlapping blocks and analyzing the power spectral density of each block separately. Determining the optimum block dimension is vital in improving the resolution and accuracy of estimating the Curie point depth. To investigate the relation between the optimal block size and the power spectral density, forward magnetic modeling was implemented on an artificial prismatic body with specified characteristics. The top, centroid, and bottom depths of the body were estimated by the spectral analysis method for different block dimensions. The results showed that the optimal block size can be taken as the smallest block size whose corresponding power spectrum exhibits an absolute maximum at small wavenumbers. The Curie depth map of the Sabalan geothermal field and its surrounding areas, in northwestern Iran, was produced using a grid of 37 blocks with dimensions ranging from 10 × 10 to 50 × 50 km2 and at least 50% overlap with adjacent blocks. The Curie point depth was estimated in the range of 5 to 21 km. The promising areas, with Curie point depths of less than 8.5 km, are located around Mount Sabalan and encompass more than 90% of the known geothermal resources in the study area. Moreover, the Curie point depth estimated by the improved spectral analysis is in good agreement with the depth calculated from the thermal gradient data measured in one of the exploratory wells in the region.
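    The block-wise spectral estimate can be sketched as follows (a Tanaka-type centroid method; the wavenumber convention, fitting bands and binning below are illustrative assumptions, not the authors' exact processing chain): the top depth comes from the slope of ln sqrt(P) versus wavenumber, the centroid depth from ln(sqrt(P)/k), and the Curie (bottom) depth is Zb = 2*Z0 - Zt.

```python
# Hedged sketch of the block-wise spectral estimate used in Curie-depth studies
# (Tanaka-type centroid method); conventions vary between papers, so treat the
# factors and fitting bands as illustrative rather than the authors' recipe.
import numpy as np

def radial_spectrum(block, dx_km):
    """Radially averaged power spectrum of a detrended square aeromagnetic block."""
    block = block - block.mean()
    P = np.abs(np.fft.fftshift(np.fft.fft2(block)))**2
    n = block.shape[0]
    k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, d=dx_km))    # rad/km
    kx, ky = np.meshgrid(k, k)
    kr, Pr = np.hypot(kx, ky).ravel(), P.ravel()
    bins = np.linspace(kr[kr > 0].min(), kr.max(), 40)
    idx = np.digitize(kr, bins)
    kk = np.array([kr[idx == i].mean() for i in range(1, len(bins)) if (idx == i).any()])
    pp = np.array([Pr[idx == i].mean() for i in range(1, len(bins)) if (idx == i).any()])
    return kk, pp

def curie_depth(kk, pp, top_band, centroid_band):
    """Zt from ln sqrt(P) vs k, Z0 from ln(sqrt(P)/k) vs k, Zb = 2*Z0 - Zt (km)."""
    def slope(x, y, band):
        m = (x >= band[0]) & (x <= band[1])
        return np.polyfit(x[m], y[m], 1)[0]
    Zt = -slope(kk, np.log(np.sqrt(pp)), top_band)
    Z0 = -slope(kk, np.log(np.sqrt(pp) / kk), centroid_band)
    return 2 * Z0 - Zt
```

    Applied to each overlapping block of the gridded data, `radial_spectrum` followed by `curie_depth` (with fitting bands chosen from the spectrum of that block) yields one Curie-depth sample per block center, which is then interpolated into the iso-depth map.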

  14. Optimization and scale up of microfluidic nanolipomer production method for preclinical and potential clinical trials.

    PubMed

    Gdowski, Andrew; Johnson, Kaitlyn; Shah, Sunil; Gryczynski, Ignacy; Vishwanatha, Jamboor; Ranjan, Amalendu

    2018-02-12

    The process of optimizing and fabricating nanoparticles for preclinical studies can be challenging and time consuming. Traditional small-scale laboratory synthesis techniques suffer from batch-to-batch variability. Additionally, the parameters used in the original formulation must be re-optimized due to differences in fabrication techniques for clinical production. Several low-flow microfluidic synthesis processes have been reported in recent years for developing nanoparticles that are a hybrid between polymeric nanoparticles and liposomes. However, the use of high-flow microfluidic synthesis techniques has not been described for this type of nanoparticle system, which we term nanolipomers. In this manuscript, we describe the successful optimization and functional assessment of nanolipomers fabricated using a microfluidic synthesis method under high-flow parameters. The optimal total flow rate for synthesis of these nanolipomers was found to be 12 ml/min with a flow rate ratio of 1:1 (organic phase:aqueous phase). A PLGA polymer concentration of 10 mg/ml and a DSPE-PEG lipid concentration of 10% w/v provided optimal size, PDI and stability. Drug loading and encapsulation of a representative hydrophobic small-molecule drug, curcumin, was optimized; a high encapsulation efficiency of 58.8% and drug loading of 4.4% were achieved at 7.5% w/w initial concentration of curcumin/PLGA polymer. The final size and polydispersity index of the optimized nanolipomer were 102.11 nm and 0.126, respectively. Functional assessment of uptake of the nanolipomers in C4-2B prostate cancer cells showed uptake at 1 h and increased uptake at 24 h. The nanolipomer was more effective in the cell viability assay than the free drug. Finally, assessment of in vivo retention of these nanolipomers in mice revealed retention for up to 2 h, with complete clearance by 24 h. In this study, we have demonstrated that a nanolipomer formulation can be successfully synthesized and easily scaled up through a high-flow microfluidic system with optimal characteristics. The process of developing nanolipomers using this methodology is significant because the same optimized parameters used for small batches could be translated into manufacturing large-scale batches for clinical trials through parallel flow systems.

  15. Determination of the optimal sample size for a clinical trial accounting for the population size.

    PubMed

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N* in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
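    The square-root scaling can be illustrated with a toy decision-theoretic calculation (a two-arm trial with normal outcomes, a normal prior on the unknown effect, and a simple gain function; this is not the paper's utility or its asymptotic expression): the gain-maximizing per-arm sample size grows roughly like sqrt(N) as the population N grows.

```python
# Toy illustration of O(sqrt(N)) optimal trial size (not the paper's utility):
# a two-arm trial with n patients per arm estimates an unknown effect delta
# (normal prior); the remaining N - 2n patients all receive whichever arm
# looks better after the trial.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
DELTAS = rng.normal(0.0, 0.2, size=10_000)   # prior draws of the unknown treatment effect
SIGMA = 1.0                                  # outcome standard deviation

def expected_gain(ns, N):
    """Expected population benefit versus giving everyone the standard arm."""
    a = np.sqrt(ns)[:, None] / (SIGMA * np.sqrt(2.0))
    p_pick_new = norm.cdf(DELTAS[None, :] * a)      # P(new arm wins the trial | delta)
    per_patient = (p_pick_new * DELTAS[None, :]).mean(axis=1)
    return (N - 2 * ns) * per_patient               # trial patients excluded from the gain

for N in (10_000, 100_000, 1_000_000):
    ns = np.unique(np.geomspace(2, N // 4, 600).astype(int))
    n_opt = int(ns[np.argmax(expected_gain(ns, N))])
    print(f"N = {N:>9,d}   optimal n per arm ~ {n_opt:>6d}   n/sqrt(N) ~ {n_opt / np.sqrt(N):.2f}")
```

    The printed ratio n/sqrt(N) stays roughly constant across the three population sizes, which is the qualitative behavior the asymptotic result formalizes.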

  16. Multi-GPU implementation of a VMAT treatment plan optimization algorithm.

    PubMed

    Tian, Zhen; Peng, Fei; Folkerts, Michael; Tan, Jun; Jia, Xun; Jiang, Steve B

    2015-06-01

    Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, GPU's relatively small memory size cannot handle cases with a large dose-deposition coefficient (DDC) matrix in cases of, e.g., those with a large target size, multiple targets, multiple arcs, and/or small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example problem to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on a CPU in coordinate list format (COO). On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of beamlet price, the first step in PP, is accomplished using multi-GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of PP and MP problems are implemented on CPU or a single GPU due to their modest problem scale and computational loads. Barzilai and Borwein algorithm with a subspace step scheme is adopted here to solve the MP problem. A head and neck (H&N) cancer case is then used to validate the authors' method. The authors also compare their multi-GPU implementation with three different single GPU implementation strategies, i.e., truncating DDC matrix (S1), repeatedly transferring DDC matrix between CPU and GPU (S2), and porting computations involving DDC matrix to CPU (S3), in terms of both plan quality and computational efficiency. Two more H&N patient cases and three prostate cases are used to demonstrate the advantages of the authors' method. The authors' multi-GPU implementation can finish the optimization process within ∼ 1 min for the H&N patient case. S1 leads to an inferior plan quality although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23-46 s. Conversely, to obtain clinically comparable or acceptable plans for all six of these VMAT cases that the authors have tested in this paper, the optimization time needed in a commercial TPS system on CPU was found to be in an order of several minutes. The results demonstrate that the multi-GPU implementation of the authors' column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality. 
The authors' study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
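    The data-layout step described above (the DDC matrix held in COO format on the CPU, then split by beam angle into per-GPU CSR submatrices) can be sketched on the CPU side with SciPy. The matrix sizes, beam assignment and four-way split below are illustrative, and the actual device transfers and peer-to-peer access are outside the sketch.

```python
# CPU-side sketch (SciPy) of the data layout described above: the sparse DDC
# matrix is kept in COO format, then split by beam-angle groups into CSR
# submatrices, one per GPU. Device transfers are not shown here.
import numpy as np
import scipy.sparse as sp

def split_ddc_by_beam(ddc_coo, beam_of_beamlet, n_gpus=4):
    """Return per-GPU CSR submatrices, with columns grouped by beam angle."""
    csr = ddc_coo.tocsr()
    beams = np.unique(beam_of_beamlet)
    groups = np.array_split(beams, n_gpus)              # contiguous beam-angle groups
    subs = []
    for g in groups:
        cols = np.flatnonzero(np.isin(beam_of_beamlet, g))
        subs.append(csr[:, cols].tocsr())                # voxels x beamlets of this group
    return subs, groups

if __name__ == "__main__":
    n_voxels, n_beamlets, n_beams = 50_000, 8_000, 180
    rng = np.random.default_rng(0)
    ddc = sp.random(n_voxels, n_beamlets, density=1e-3, format="coo", random_state=0)
    beam_of_beamlet = rng.integers(0, n_beams, size=n_beamlets)
    subs, groups = split_ddc_by_beam(ddc, beam_of_beamlet)
    for i, (s, g) in enumerate(zip(subs, groups)):
        print(f"GPU {i}: beams {g[0]}-{g[-1]}, shape {s.shape}, nnz {s.nnz}")
```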

  17. Controlling dental enamel-cavity ablation depth with optimized stepping parameters along the focal plane normal using a three axis, numerically controlled picosecond laser.

    PubMed

    Yuan, Fusong; Lv, Peijun; Wang, Dangxiao; Wang, Lei; Sun, Yuchun; Wang, Yong

    2015-02-01

    The purpose of this study was to establish a depth-control method for enamel-cavity ablation by optimizing the timing of the focal-plane-normal stepping and the single-step size of a three-axis, numerically controlled picosecond laser. Although it has been proposed that picosecond lasers may be used to ablate dental hard tissue, the viability of such a depth-control method in enamel-cavity ablation remains uncertain. Forty-two enamel slices with approximately level surfaces were prepared and subjected to two-dimensional ablation by a picosecond laser. The additive-pulse layer, n, was set to 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70. A three-dimensional microscope was then used to measure the ablation depth, d, to obtain a quantitative function relating n and d. Six enamel slices were then subjected to three-dimensional ablation to produce 10 cavities, with the additive-pulse layer and single-step size set to the corresponding values. The difference between the theoretical and measured values was calculated for both the cavity depth and the ablation depth of a single step, and used to determine the minimum-difference values of the additive-pulse layer (n) and the single-step size (d). When the additive-pulse layer and the single-step size were set to 5 and 45, respectively, the depth error reached a minimum of 2.25 μm, and 450 μm deep enamel cavities were produced. When performing three-dimensional ablation of enamel with a picosecond laser, adjusting the timing of the focal-plane-normal stepping and the single-step size allows the ablation-depth error to be controlled to the order of micrometers.

  18. Integration of Rotor Aerodynamic Optimization with the Conceptual Design of a Large Civil Tiltrotor

    NASA Technical Reports Server (NTRS)

    Acree, C. W., Jr.

    2010-01-01

    Coupling of aeromechanics analysis with vehicle sizing is demonstrated with the CAMRAD II aeromechanics code and NDARC sizing code. The example is optimization of cruise tip speed with rotor/wing interference for the Large Civil Tiltrotor (LCTR2) concept design. Free-wake models were used for both rotors and the wing. This report is part of a NASA effort to develop an integrated analytical capability combining rotorcraft aeromechanics, structures, propulsion, mission analysis, and vehicle sizing. The present paper extends previous efforts by including rotor/wing interference explicitly in the rotor performance optimization and implicitly in the sizing.

  19. Finite-element design and optimization of a three-dimensional tetrahedral porous titanium scaffold for the reconstruction of mandibular defects.

    PubMed

    Luo, Danmei; Rong, Qiguo; Chen, Quan

    2017-09-01

    Reconstruction of segmental defects in the mandible remains a challenge for maxillofacial surgery. The use of porous scaffolds is a potential method for repairing these defects. Now, additive manufacturing techniques provide a solution for the fabrication of porous scaffolds with specific geometrical shapes and complex structures. The goal of this study was to design and optimize a three-dimensional tetrahedral titanium scaffold for the reconstruction of mandibular defects. With a fixed strut diameter of 0.45mm and a mean cell size of 2.2mm, a tetrahedral structural porous scaffold was designed for a simulated anatomical defect derived from computed tomography (CT) data of a human mandible. An optimization method based on the concept of uniform stress was performed on the initial scaffold to realize a minimal-weight design. Geometric and mechanical comparisons between the initial and optimized scaffold show that the optimized scaffold exhibits a larger porosity, 81.90%, as well as a more homogeneous stress distribution. These results demonstrate that tetrahedral structural titanium scaffolds are feasible structures for repairing mandibular defects, and that the proposed optimization scheme has the ability to produce superior scaffolds for mandibular reconstruction with better stability, higher porosity, and less weight. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  20. Efficient 3D porous microstructure reconstruction via Gaussian random field and hybrid optimization.

    PubMed

    Jiang, Z; Chen, W; Burkhart, C

    2013-11-01

    Obtaining an accurate three-dimensional (3D) structure of a porous microstructure is important for assessing the material properties based on finite element analysis. Whereas directly obtaining 3D images of the microstructure is impractical under many circumstances, two sets of methods have been developed in literature to generate (reconstruct) 3D microstructure from its 2D images: one characterizes the microstructure based on certain statistical descriptors, typically two-point correlation function and cluster correlation function, and then performs an optimization process to build a 3D structure that matches those statistical descriptors; the other method models the microstructure using stochastic models like a Gaussian random field and generates a 3D structure directly from the function. The former obtains a relatively accurate 3D microstructure, but computationally the optimization process can be very intensive, especially for problems with large image size; the latter generates a 3D microstructure quickly but sacrifices the accuracy due to issues in numerical implementations. A hybrid optimization approach of modelling the 3D porous microstructure of random isotropic two-phase materials is proposed in this paper, which combines the two sets of methods and hence maintains the accuracy of the correlation-based method with improved efficiency. The proposed technique is verified for 3D reconstructions based on silica polymer composite images with different volume fractions. A comparison of the reconstructed microstructures and the optimization histories for both the original correlation-based method and our hybrid approach demonstrates the improved efficiency of the approach. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
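    The Gaussian-random-field half of such a hybrid can be sketched in a few lines (illustrative only; the descriptor-matching optimization that gives the hybrid method its accuracy is not shown): smoothed white noise is thresholded to the target volume fraction, and the two-point correlation is obtained by FFT autocorrelation.

```python
# Minimal sketch of the Gaussian-random-field step of such reconstructions:
# smoothed white noise is thresholded to the target volume fraction and the
# two-point correlation is computed via periodic FFT autocorrelation. The
# subsequent descriptor-matching optimization of the hybrid method is omitted.
import numpy as np
from scipy.ndimage import gaussian_filter

def grf_microstructure(shape=(64, 64, 64), vol_frac=0.35, corr_len=4.0, seed=0):
    rng = np.random.default_rng(seed)
    field = gaussian_filter(rng.standard_normal(shape), sigma=corr_len)
    thresh = np.quantile(field, 1.0 - vol_frac)       # keep the top vol_frac as phase 1
    return (field > thresh).astype(np.uint8)

def two_point_correlation(phase):
    """S2(r) along one axis via periodic FFT autocorrelation of the indicator."""
    f = np.fft.rfftn(phase)
    s2 = np.fft.irfftn(f * np.conj(f), s=phase.shape) / phase.size
    return s2[:phase.shape[0] // 2, 0, 0]             # 1-D cut along the first axis

micro = grf_microstructure()
s2 = two_point_correlation(micro)
print(f"volume fraction: {micro.mean():.3f}")
print(f"S2(0) = {s2[0]:.3f} (should equal the volume fraction)")
```

    In the correlation-based route, the same S2 computation is evaluated repeatedly inside an optimization loop; generating the starting structure from a random field, as above, is what lets the hybrid approach reach a good match with far fewer iterations.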

  1. Mitigation of Adverse Effects Caused by Shock Wave Boundary Layer Interactions Through Optimal Wall Shaping

    NASA Technical Reports Server (NTRS)

    Liou, May-Fun; Lee, Byung Joon

    2013-01-01

    It is known that the adverse effects of shock wave boundary layer interactions in high speed inlets include reduced total pressure recovery and highly distorted flow at the aerodynamic interface plane (AIP). This paper presents a design method for flow control which creates perturbations in geometry. These perturbations are tailored to change the flow structures in order to minimize shock wave boundary layer interactions (SWBLI) inside supersonic inlets. Optimizing the shape of two dimensional micro-size bumps is shown to be a very effective flow control method for two-dimensional SWBLI. In investigating the three dimensional SWBLI, a square duct is employed as a baseline. To investigate the mechanism whereby the geometric elements of the baseline, i.e. the bottom wall, the sidewall and the corner, exert influence on the flow's aerodynamic characteristics, each element is studied and optimized separately. It is found that arrays of micro-size bumps on the bottom wall of the duct have little effect in improving total pressure recovery though they are useful in suppressing the incipient separation in three-dimensional problems. Shaping sidewall geometry is effective in re-distributing flow on the side wall and results in a less distorted flow at the exit. Subsequently, a near 50% reduction in distortion is achieved. A simple change in corner geometry resulted in a 2.4% improvement in total pressure recovery.

  2. Optimism and Physical Health: A Meta-analytic Review

    PubMed Central

    Rasmussen, Heather N.; Greenhouse, Joel B.

    2010-01-01

    Background Prior research links optimism to physical health, but the strength of the association has not been systematically evaluated. Purpose The purpose of this study is to conduct a meta-analytic review to determine the strength of the association between optimism and physical health. Methods The findings from 83 studies, with 108 effect sizes (ESs), were included in the analyses, using random-effects models. Results Overall, the mean ES characterizing the relationship between optimism and physical health outcomes was 0.17, p<.001. ESs were larger for studies using subjective (versus objective) measures of physical health. Subsidiary analyses were also conducted grouping studies into those that focused solely on mortality, survival, cardiovascular outcomes, physiological markers (including immune function), immune function only, cancer outcomes, outcomes related to pregnancy, physical symptoms, or pain. In each case, optimism was a significant predictor of health outcomes or markers, all p<.001. Conclusions Optimism is a significant predictor of positive physical health outcomes. PMID:19711142

  3. Support vector machine firefly algorithm based optimization of lens system.

    PubMed

    Shamshirband, Shahaboddin; Petković, Dalibor; Pavlović, Nenad T; Ch, Sudheer; Altameem, Torki A; Gani, Abdullah

    2015-01-01

    Lens system design is an important factor in image quality. The main aspect of the lens system design methodology is the optimization procedure. Since optimization is a complex, nonlinear task, soft computing optimization algorithms can be used. There are many tools that can be employed to measure optical performance, but the spot diagram is the most useful. The spot diagram gives an indication of the image of a point object. In this paper, the spot size radius is considered as the optimization criterion. An intelligent soft computing scheme, support vector machines (SVMs) coupled with the firefly algorithm (FFA), is implemented. The performance of the proposed estimators is confirmed by the simulation results. The results of the proposed SVM-FFA model were compared with support vector regression (SVR), artificial neural networks, and genetic programming methods. The results show that the SVM-FFA model performs more accurately than the other methodologies. Therefore, SVM-FFA can be used as an efficient soft computing technique in the optimization of lens system designs.

  4. Multidisciplinary optimization of aeroservoelastic systems using reduced-size models

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay

    1992-01-01

    Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.

  5. Towards inverse modeling of turbidity currents: The inverse lock-exchange problem

    NASA Astrophysics Data System (ADS)

    Lesshafft, Lutz; Meiburg, Eckart; Kneller, Ben; Marsden, Alison

    2011-04-01

    A new approach is introduced for turbidite modeling, leveraging the potential of computational fluid dynamics methods to simulate the flow processes that led to turbidite formation. The practical use of numerical flow simulation for the purpose of turbidite modeling so far is hindered by the need to specify parameters and initial flow conditions that are a priori unknown. The present study proposes a method to determine optimal simulation parameters via an automated optimization process. An iterative procedure matches deposit predictions from successive flow simulations against available localized reference data, as in practice may be obtained from well logs, and aims at convergence towards the best-fit scenario. The final result is a prediction of the entire deposit thickness and local grain size distribution. The optimization strategy is based on a derivative-free, surrogate-based technique. Direct numerical simulations are performed to compute the flow dynamics. A proof of concept is successfully conducted for the simple test case of a two-dimensional lock-exchange turbidity current. The optimization approach is demonstrated to accurately retrieve the initial conditions used in a reference calculation.

  6. Solar electricity supply isolines of generation capacity and storage.

    PubMed

    Grossmann, Wolf; Grossmann, Iris; Steininger, Karl W

    2015-03-24

    The recent sharp drop in the cost of photovoltaic (PV) electricity generation accompanied by globally rapidly increasing investment in PV plants calls for new planning and management tools for large-scale distributed solar networks. Of major importance are methods to overcome intermittency of solar electricity, i.e., to provide dispatchable electricity at minimal costs. We find that pairs of electricity generation capacity G and storage S that give dispatchable electricity and are minimal with respect to S for a given G exhibit a smooth relationship of mutual substitutability between G and S. These isolines between G and S support the solving of several tasks, including the optimal sizing of generation capacity and storage, optimal siting of solar parks, optimal connections of solar parks across time zones for minimizing intermittency, and management of storage in situations of far below average insolation to provide dispatchable electricity. G-S isolines allow determining the cost-optimal pair (G,S) as a function of the cost ratio of G and S. G-S isolines provide a method for evaluating the effect of geographic spread and time zone coverage on costs of solar electricity.

  7. Solar electricity supply isolines of generation capacity and storage

    PubMed Central

    Grossmann, Wolf; Grossmann, Iris; Steininger, Karl W.

    2015-01-01

    The recent sharp drop in the cost of photovoltaic (PV) electricity generation accompanied by globally rapidly increasing investment in PV plants calls for new planning and management tools for large-scale distributed solar networks. Of major importance are methods to overcome intermittency of solar electricity, i.e., to provide dispatchable electricity at minimal costs. We find that pairs of electricity generation capacity G and storage S that give dispatchable electricity and are minimal with respect to S for a given G exhibit a smooth relationship of mutual substitutability between G and S. These isolines between G and S support the solving of several tasks, including the optimal sizing of generation capacity and storage, optimal siting of solar parks, optimal connections of solar parks across time zones for minimizing intermittency, and management of storage in situations of far below average insolation to provide dispatchable electricity. G−S isolines allow determining the cost-optimal pair (G,S) as a function of the cost ratio of G and S. G−S isolines provide a method for evaluating the effect of geographic spread and time zone coverage on costs of solar electricity. PMID:25755261
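    Reading a cost-optimal pair off a G-S isoline reduces to a one-dimensional search once the isoline has been tabulated; the sketch below uses made-up isoline values and cost ratios purely to illustrate that selection step, not data from the paper.

```python
# Sketch of using a G-S isoline: given tabulated pairs (G, S_min(G)) that each
# deliver dispatchable supply, pick the pair minimizing total cost for a given
# cost ratio. The isoline values below are illustrative only.
import numpy as np

# generation capacity G (multiples of average load) vs. minimal storage S (hours)
G = np.array([1.2, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0])
S = np.array([900., 400., 180., 110.,  80.,  55.,  45.])   # mutual substitutability

def optimal_pair(cost_G, cost_S):
    """Return the (G, S) point on the isoline with the lowest total cost."""
    total = cost_G * G + cost_S * S
    i = int(np.argmin(total))
    return G[i], S[i], total[i]

for ratio in (0.1, 1.0, 10.0):          # cost of storage relative to generation
    g, s, c = optimal_pair(cost_G=1.0, cost_S=ratio)
    print(f"cost_S/cost_G = {ratio:>4}:  optimal G = {g}, S = {s} (cost {c:.0f})")
```

    As the relative cost of storage rises, the optimum slides along the isoline toward more generation capacity and less storage, which is exactly the substitutability the abstract describes.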

  8. Development of poly-l-lysine-coated calcium-alginate microspheres encapsulating fluorescein-labeled dextrans

    NASA Astrophysics Data System (ADS)

    Charron, Luc; Harmer, Andrea; Lilge, Lothar

    2005-09-01

    A technique to produce fluorescent cell phantom standards based on calcium alginate microspheres with encapsulated fluorescein-labeled dextrans is presented. An electrostatic ionotropic gelation method is used to create the microspheres which are then exposed to an encapsulation method using poly-l-lysine to trap the dextrans inside. Both procedures were examined in detail to find the optimal parameters producing cell phantoms meeting our requirements. Size distributions favoring 10-20 microns microspheres were obtained by varying the high voltage and needle size parameters. Typical size distributions of the samples were centered at 150 μm diameter. Neither the molecular weight nor the charge of the dextrans had a significant effect on their retention in the microspheres, though anionic dextrans were chosen to help in future capillary electrophoresis work. Increasing the exposure time of the microspheres to the poly-l-lysine solution decreased the leakage rates of fluorescein-labeled dextrans.

  9. "RCL-Pooling Assay": A Simplified Method for the Detection of Replication-Competent Lentiviruses in Vector Batches Using Sequential Pooling.

    PubMed

    Corre, Guillaume; Dessainte, Michel; Marteau, Jean-Brice; Dalle, Bruno; Fenard, David; Galy, Anne

    2016-02-01

    Nonreplicative recombinant HIV-1-derived lentiviral vectors (LV) are increasingly used in gene therapy of various genetic diseases, infectious diseases, and cancer. Before they are used in humans, preparations of LV must undergo extensive quality control testing. In particular, testing of LV must demonstrate the absence of replication-competent lentiviruses (RCL) with suitable methods, on representative fractions of vector batches. Current methods based on cell culture are challenging because high titers of vector batches translate into high volumes of cell culture to be tested in RCL assays. As vector batch sizes and titers are continuously increasing owing to improvements in production and purification methods, it became necessary for us to modify the current RCL assay based on the detection of p24 in cultures of indicator cells. Here, we propose a practical optimization of this method using a pairwise pooling strategy enabling easier testing of higher vector inoculum volumes. These modifications significantly decrease material handling and operator time, leading to a cost-effective method, while maintaining optimal sensitivity of the RCL testing. This optimized "RCL-pooling assay" improves the feasibility of quality control of large-scale batches of clinical-grade LV while maintaining the same sensitivity.

  10. Energy Storage Sizing Taking Into Account Forecast Uncertainties and Receding Horizon Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri; Hug, Gabriela; Li, Xin

    Energy storage systems (ESS) have the potential to be very beneficial for applications such as reducing the ramping of generators, peak shaving, and balancing not only the variability introduced by renewable energy sources, but also the uncertainty introduced by errors in their forecasts. Optimal usage of storage may result in reduced generation costs and an increased use of renewable energy. However, optimally sizing these devices is a challenging problem. This paper aims to provide the tools to optimally size an ESS under the assumption that it will be operated under a model predictive control scheme and that the forecasts of the renewable energy resources include prediction errors. A two-stage stochastic model predictive control is formulated and solved, where the optimal usage of the storage is simultaneously determined along with the optimal generation outputs and size of the storage. Wind forecast errors are taken into account in the optimization problem via probabilistic constraints for which an analytical form is derived. This allows the stochastic optimization problem to be solved directly, without using sampling-based approaches, and the storage to be sized to account not only for a wide range of potential scenarios, but also for a wide range of potential forecast errors. In the proposed formulation, we account for the fact that errors in the forecast affect how the device is operated later in the horizon and that a receding horizon scheme is used in operation to optimally use the available storage.
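    The flavor of the analytical probabilistic constraint can be sketched for Gaussian forecast errors (an assumption made here for illustration; the paper's two-stage stochastic MPC and sizing formulation are not reproduced): a chance constraint on meeting net load becomes a deterministic reserve requirement via the Gaussian quantile.

```python
# Sketch of the analytical chance-constraint idea: with Gaussian wind-forecast
# errors, P(generation + discharge < net load) <= eps becomes a deterministic
# reserve requirement via the Gaussian quantile. Illustrative values only; the
# paper's two-stage stochastic MPC and storage-sizing problem are not shown.
import numpy as np
from scipy.stats import norm

def required_discharge(net_load_forecast, wind_error_sigma, eps=0.05):
    """
    Minimum storage discharge so that the hourly balance constraint holds with
    probability 1 - eps, given independent Gaussian forecast errors.
    """
    z = norm.ppf(1.0 - eps)                          # one-sided safety factor
    return np.maximum(net_load_forecast + z * wind_error_sigma, 0.0)

hours = np.arange(24)
net_load = 50 + 20 * np.sin(2 * np.pi * (hours - 6) / 24)    # MW, forecast residual load
sigma = 3.0 + 0.1 * hours                                     # error grows with lead time
d = required_discharge(net_load, sigma)
print(f"peak required discharge: {d.max():.1f} MW")
print(f"energy over the horizon: {d.sum():.0f} MWh  (a lower bound on storage size)")
```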

  11. Microwave Nondestructive Evaluation of Dielectric Materials with a Metamaterial Lens

    NASA Technical Reports Server (NTRS)

    Shreiber, Daniel; Gupta, Mool; Cravey, Robin L.

    2008-01-01

    A novel microwave Nondestructive Evaluation (NDE) sensor was developed in an attempt to increase the sensitivity of the microwave NDE method for detection of defects small relative to a wavelength. The sensor was designed on the basis of a negative index material (NIM) lens. Characterization of the lens was performed to determine its resonant frequency, index of refraction, focus spot size, and optimal focusing length (for proper sample location). A sub-wavelength spot size (3 dB) of 0.48 lambda was obtained. The proof of concept for the sensor was achieved when a fiberglass sample with a 3 mm diameter through hole (perpendicular to the propagation direction of the wave) was tested. The hole was successfully detected with an 8.2 cm wavelength electromagnetic wave. This method is able to detect a defect that is 0.037 lambda. This method has certain advantages over other far field and near field microwave NDE methods currently in use.

  12. Optimizing flurbiprofen-loaded NLC by central composite factorial design for ocular delivery.

    PubMed

    Gonzalez-Mira, E; Egea, M A; Souto, E B; Calpena, A C; García, M L

    2011-01-28

    The purpose of this study was to design and optimize a new topical delivery system for ocular administration of flurbiprofen (FB), based on lipid nanoparticles. These particles, called nanostructured lipid carriers (NLC), were composed of a fatty acid (stearic acid (SA)) as the solid lipid and a mixture of Miglyol(®) 812 and castor oil (CO) as the liquid lipids, prepared by the hot high pressure homogenization method. After selecting the critical variables influencing the physicochemical characteristics of the NLC (the liquid lipid (i.e. oil) concentration with respect to the total lipid (cOil/L (wt%)), the surfactant and the flurbiprofen concentration, on particle size, polydispersity index and encapsulation efficiency), a three-factor five-level central rotatable composite design was employed to plan and perform the experiments. Morphological examination, crystallinity and stability studies were also performed to accomplish the optimization study. The results showed that increasing cOil/L (wt%) was followed by an enhanced tendency to produce smaller particles, but the liquid to solid lipid proportion should not exceed 30 wt% due to destabilization problems. Therefore, a 70:30 ratio of SA to oil (miglyol + CO) was selected to develop an optimal NLC formulation. The smaller particles obtained when increasing surfactant concentration led to the selection of 3.2 wt% of Tween(®) 80 (non-ionic surfactant). The positive effect of the increase in FB concentration on the encapsulation efficiency (EE) and its total solubilization in the lipid matrix led to the selection of 0.25 wt% of FB in the formulation. The optimal NLC showed an appropriate average size for ophthalmic administration (228.3 nm) with a narrow size distribution (0.156), negatively charged surface (-33.3 mV) and high EE (∼90%). The in vitro experiments proved that sustained release FB was achieved using NLC as drug carriers. Optimal NLC formulation did not show toxicity on ocular tissues.

  13. Optimizing flurbiprofen-loaded NLC by central composite factorial design for ocular delivery

    NASA Astrophysics Data System (ADS)

    Gonzalez-Mira, E.; Egea, M. A.; Souto, E. B.; Calpena, A. C.; García, M. L.

    2011-01-01

    The purpose of this study was to design and optimize a new topical delivery system for ocular administration of flurbiprofen (FB), based on lipid nanoparticles. These particles, called nanostructured lipid carriers (NLC), were composed of a fatty acid (stearic acid (SA)) as the solid lipid and a mixture of Miglyol® 812 and castor oil (CO) as the liquid lipids, prepared by the hot high pressure homogenization method. After selecting the critical variables influencing the physicochemical characteristics of the NLC (the liquid lipid (i.e. oil) concentration with respect to the total lipid (cOil/L (wt%)), the surfactant and the flurbiprofen concentration, on particle size, polydispersity index and encapsulation efficiency), a three-factor five-level central rotatable composite design was employed to plan and perform the experiments. Morphological examination, crystallinity and stability studies were also performed to accomplish the optimization study. The results showed that increasing cOil/L (wt%) was followed by an enhanced tendency to produce smaller particles, but the liquid to solid lipid proportion should not exceed 30 wt% due to destabilization problems. Therefore, a 70:30 ratio of SA to oil (miglyol + CO) was selected to develop an optimal NLC formulation. The smaller particles obtained when increasing surfactant concentration led to the selection of 3.2 wt% of Tween® 80 (non-ionic surfactant). The positive effect of the increase in FB concentration on the encapsulation efficiency (EE) and its total solubilization in the lipid matrix led to the selection of 0.25 wt% of FB in the formulation. The optimal NLC showed an appropriate average size for ophthalmic administration (228.3 nm) with a narrow size distribution (0.156), negatively charged surface (-33.3 mV) and high EE (~90%). The in vitro experiments proved that sustained release FB was achieved using NLC as drug carriers. Optimal NLC formulation did not show toxicity on ocular tissues.

  14. Vascularized networks with two optimized channel sizes

    NASA Astrophysics Data System (ADS)

    Wang, K.-M.; Lorente, S.; Bejan, A.

    2006-07-01

    This paper reports the development of optimal vascularization for supplying self-healing smart materials with liquid that fills and seals the cracks that may occur throughout their volume. The vascularization consists of two-dimensional grids of interconnected orthogonal channels with two hydraulic diameters (D1, D2). The smallest square loop is designed to match the size (d) of the smallest crack. The network is sealed with respect to the outside and is filled with pressurized liquid. In this work, the crack site is modelled as a small spherical volume of diameter d. When a crack is formed, fluid flows from neighbouring channels to the crack site. This volume-to-point flow is optimized using two formulations: (1) incompressible liquid from steady constant-strength sources located in every node of the grid and from sources located equidistantly on the perimeter of the vascularized body of length scale L and (2) slightly compressible liquid from an initially pressurized grid discharging in time-dependent fashion into one crack site. The flow in every channel is laminar and fully developed. The objectives are (a) to minimize the global resistance to the flow from the grid to the crack site and (b) to minimize the time of discharge from the pressurized grid to the crack site. It is shown that methods (a) and (b) yield similar results. There is an optimal ratio of channel diameters D2/D1 < 1, and it decreases as the grid fineness (L/d) increases. The global flow resistance of the grid with optimized ratio of diameters is approximately half of the resistance of the corresponding grid with one channel size (D1 = D2). The optimized ratio of diameters and the minimized global resistance depend on how the grid intersects the crack site: this effect is minor and stresses the robustness of the vascularized design.

  15. Formulation and optimization of solid lipid nanoparticle formulation for pulmonary delivery of budesonide using Taguchi and Box-Behnken design.

    PubMed

    Emami, J; Mohiti, H; Hamishehkar, H; Varshosaz, J

    2015-01-01

    Budesonide is a potent non-halogenated corticosteroid with high anti-inflammatory effects. The lungs are an attractive route for non-invasive drug delivery with advantages for both systemic and local applications. The aim of the present study was to develop, characterize and optimize a solid lipid nanoparticle system to deliver budesonide to the lungs. Budesonide-loaded solid lipid nanoparticles were prepared by the emulsification-solvent diffusion method. The impact of various processing variables including surfactant type and concentration, lipid content organic and aqueous volume, and sonication time were assessed on the particle size, zeta potential, entrapment efficiency, loading percent and mean dissolution time. Taguchi design with 12 formulations along with Box-Behnken design with 17 formulations was developed. The impact of each factor upon the eventual responses was evaluated, and the optimized formulation was finally selected. The size and morphology of the prepared nanoparticles were studied using scanning electron microscope. Based on the optimization made by Design Expert 7(®) software, a formulation made of glycerol monostearate, 1.2 % polyvinyl alcohol (PVA), weight ratio of lipid/drug of 10 and sonication time of 90 s was selected. Particle size, zeta potential, entrapment efficiency, loading percent, and mean dissolution time of adopted formulation were predicted and confirmed to be 218.2 ± 6.6 nm, -26.7 ± 1.9 mV, 92.5 ± 0.52 %, 5.8 ± 0.3 %, and 10.4 ± 0.29 h, respectively. Since the preparation and evaluation of the selected formulation within the laboratory yielded acceptable results with low error percent, the modeling and optimization was justified. The optimized formulation co-spray dried with lactose (hybrid microparticles) displayed desirable fine particle fraction, mass median aerodynamic diameter (MMAD), and geometric standard deviation of 49.5%, 2.06 μm, and 2.98 μm; respectively. Our results provide fundamental data for the application of SLNs in pulmonary delivery system of budesonide.
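    The structure of a three-factor Box-Behnken design, as used here, is easy to generate in coded levels; the factor names in the sketch below are placeholders, not necessarily the study's exact factors.

```python
# Sketch of a three-factor Box-Behnken design in coded levels (-1, 0, +1):
# for each pair of factors a 2^2 factorial is run with the third factor at its
# center level, plus replicated center points. Factor names are placeholders.
from itertools import combinations, product

def box_behnken(n_factors=3, center_runs=5):
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a, b in product((-1, 1), repeat=2):
            row = [0] * n_factors
            row[i], row[j] = a, b
            runs.append(row)
    runs.extend([[0] * n_factors] * center_runs)
    return runs

factors = ["surfactant conc.", "lipid/drug ratio", "sonication time"]
design = box_behnken()
print(f"{len(design)} runs")                     # 12 edge runs + 5 center points
for row in design:
    print({f: lvl for f, lvl in zip(factors, row)})
```

    Fitting a quadratic response-surface model to the measured responses at these runs is what allows software such as Design Expert to predict the optimum reported in the abstract.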

  16. Optimization of the dressing parameters in cylindrical grinding based on a generalized utility function

    NASA Astrophysics Data System (ADS)

    Aleksandrova, Irina

    2016-01-01

    Existing studies of the dressing process focus on the major influence of the dressing conditions on the grinding response variables. In practice, however, the dressing conditions are often chosen based on the experience of qualified staff or on data from reference books, or optimal dressing parameters are used that are valid only for particular methods and particular dressing and grinding conditions. This paper presents a methodology for optimization of the dressing parameters in cylindrical grinding. A generalized utility function is chosen as the optimization parameter; it is a complex indicator reflecting the economic, dynamic, and manufacturing characteristics of the grinding process. The developed methodology is applied to the dressing of aluminium oxide grinding wheels using experimental diamond roller dressers of different grit sizes made of medium- and high-strength synthetic diamonds of types AC32 and AC80. To solve the optimization problem, a model of the generalized utility function is created which reflects the combined impact of the dressing parameters. The model is built on the results of a complex study and modeling of grinding wheel lifetime, cutting ability, production rate, and cutting forces during grinding, which are closely related to the dressing conditions (dressing speed ratio, radial in-feed of the diamond roller dresser, and dress-out time), the ratio of diamond roller dresser grit size to grinding wheel grit size, the type of synthetic diamonds, and the direction of dressing. Dressing parameters are determined for which the generalized utility function has a maximum and which guarantee an optimum combination of the lifetime and cutting ability of the abrasive wheels, the magnitude of the tangential cutting force, and the production rate of the grinding process. The results demonstrate that the grinding process can be controlled and optimized by selecting suitable dressing parameters.
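
    The aggregation step behind a generalized utility function can be sketched as follows in Python: each response is mapped to a desirability in [0, 1], the desirabilities are combined as a weighted geometric mean, and the dressing-parameter grid is searched for the maximum. The response models, ranges, and weights below are invented placeholders; the paper fits its own experimental models.

      import numpy as np

      # Hedged sketch of the utility-aggregation step only; all numbers are illustrative.
      # Dressing variables: q = dressing speed ratio, f = radial in-feed (mm), t = dress-out time (s).

      def desirability(value, lo, hi, maximize=True):
          """Map a response linearly onto [0, 1]; 1 is best."""
          d = np.clip((value - lo) / (hi - lo), 0.0, 1.0)
          return d if maximize else 1.0 - d

      def responses(q, f, t):
          """Placeholder response surfaces (not the paper's fitted models)."""
          lifetime  = 60 - 600 * f + 5 * q - 0.5 * t     # wheel lifetime, min
          ability   = 1.2 + 20 * f - 0.3 * q + 0.02 * t  # cutting-ability index
          force     = 80 + 1000 * f - 20 * q             # tangential cutting force, N
          prod_rate = 10 + 200 * f + 5 * q - 0.1 * t     # production rate, cm^3/min
          return lifetime, ability, force, prod_rate

      weights = np.array([0.3, 0.25, 0.2, 0.25])         # assumed importance weights

      best_u, best_x = -1.0, None
      for q in np.linspace(0.2, 0.8, 13):                # dressing speed ratio
          for f in np.linspace(0.01, 0.05, 9):           # radial in-feed, mm
              for t in np.linspace(0.0, 10.0, 6):        # dress-out time, s
                  life, ab, fc, pr = responses(q, f, t)
                  d = np.array([desirability(life, 20, 80),
                                desirability(ab, 1.0, 2.5),
                                desirability(fc, 50, 150, maximize=False),
                                desirability(pr, 10, 25)])
                  utility = float(np.prod(d ** weights)) # weighted geometric mean
                  if utility > best_u:
                      best_u, best_x = utility, (round(q, 2), round(f, 3), round(t, 1))
      print("max generalized utility:", round(best_u, 3), "at (q, f, t) =", best_x)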

  17. Formulation and optimization of solid lipid nanoparticle formulation for pulmonary delivery of budesonide using Taguchi and Box-Behnken design

    PubMed Central

    Emami, J.; Mohiti, H.; Hamishehkar, H.; Varshosaz, J.

    2015-01-01

    Budesonide is a potent non-halogenated corticosteroid with strong anti-inflammatory effects. The lungs are an attractive route for non-invasive drug delivery, with advantages for both systemic and local applications. The aim of the present study was to develop, characterize, and optimize a solid lipid nanoparticle system to deliver budesonide to the lungs. Budesonide-loaded solid lipid nanoparticles were prepared by the emulsification-solvent diffusion method. The impact of processing variables, including surfactant type and concentration, lipid content, organic and aqueous phase volumes, and sonication time, was assessed on particle size, zeta potential, entrapment efficiency, loading percent, and mean dissolution time. A Taguchi design with 12 formulations followed by a Box-Behnken design with 17 formulations was developed. The impact of each factor on the eventual responses was evaluated, and the optimized formulation was selected. The size and morphology of the prepared nanoparticles were studied using scanning electron microscopy. Based on the optimization performed with Design Expert 7® software, a formulation made of glycerol monostearate, 1.2 % polyvinyl alcohol (PVA), a lipid/drug weight ratio of 10, and a sonication time of 90 s was selected. The particle size, zeta potential, entrapment efficiency, loading percent, and mean dissolution time of the adopted formulation were predicted and confirmed as 218.2 ± 6.6 nm, -26.7 ± 1.9 mV, 92.5 ± 0.52 %, 5.8 ± 0.3 %, and 10.4 ± 0.29 h, respectively. Since the preparation and evaluation of the selected formulation in the laboratory yielded acceptable results with a low error percentage, the modeling and optimization were justified. The optimized formulation co-spray dried with lactose (hybrid microparticles) displayed a desirable fine particle fraction, mass median aerodynamic diameter (MMAD), and geometric standard deviation of 49.5%, 2.06 μm, and 2.98, respectively. Our results provide fundamental data for the application of SLNs in a pulmonary delivery system for budesonide. PMID:26430454

  18. WE-DE-207B-11: Implementation of Size-Specific 3D Beam Modulation Filters On a Dedicated Breast CT Platform Using Breast Immobilization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez, A; Boone, J

    Purpose: To implement a 3D beam modulation filter (3D-BMF) in dedicated breast CT (bCT) and to develop a method for conforming the patient’s breast to a pre-defined shape, optimizing the effects of the filter. This work expands on previous work reporting the methodology for designing a 3D-BMF that can spare unnecessary dose and improve signal equalization at the detector by preferentially filtering the beam in the thinner anterior and peripheral breast regions. Methods: Effective diameter profiles were measured for 219 segmented bCT images, grouped into volume quintiles, and averaged within each group to represent the range of breast sizes found clinically. These profiles were then used to generate five size-specific computational phantoms and to fabricate five size-specific UHMW phantoms. Each computational phantom was used to design a size-specific 3D-BMF using previously reported methods. Glandular dose values and projection images were simulated in MCNP6 with and without the 3D-BMF using the system specifications of our prototype bCT scanner “Doheny”. Lastly, thermoplastic was molded around each of the five phantom sizes to produce a series of breast immobilizers for conforming the patient’s breast during bCT acquisition. Results: After incorporating the 3D-BMF, MC simulations estimated an 80% average reduction in the detector dynamic range requirements across all phantom sizes. The glandular dose was reduced on average by 57% after normalizing by the number of quanta reaching the detector under the thickest region of the breast. Conclusion: A series of bCT-derived breast phantoms were used to design size-specific 3D-BMFs and breast immobilizers that can be used on the bCT platform to conform the patient’s breast and therefore optimally exploit the benefits of the 3D-BMF. Current efforts are focused on fabricating several prototype 3D-BMFs and performing phantom scans on Doheny for MC simulation validation and image quality analysis. Research reported in this paper was supported in part by the National Cancer Institute of the National Institutes of Health under award R01CA181081. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
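
    A much-simplified, monoenergetic version of the beam-modulation idea can be written down directly: choose the filter thickness so that every ray sees the same total attenuation as the thickest breast path. The Python sketch below does this for a circular breast profile; the attenuation coefficients, breast radius, and filter material are assumed values, and the actual 3D-BMF design in this work is Monte Carlo based.

      import numpy as np

      # Simplified, monoenergetic sketch of beam modulation (not the MCNP6-based design):
      # the filter thickness t(x) is chosen so that every ray sees the same total
      # attenuation as the thickest breast path,
      #   mu_f * t(x) + mu_b * d(x) = mu_b * d_max,
      # which equalizes the detector signal and removes dose from thin regions.

      mu_breast = 0.022   # 1/mm, approx. breast tissue at bCT energies (assumed)
      mu_filter = 0.25    # 1/mm, assumed filter material

      x = np.linspace(-70.0, 70.0, 281)                        # fan coordinate, mm
      radius = 70.0                                            # breast effective radius, mm (assumed)
      d = 2.0 * np.sqrt(np.clip(radius**2 - x**2, 0.0, None))  # chord length through a circular breast

      t = mu_breast * (d.max() - d) / mu_filter                # filter thickness profile, mm

      # Detector-side dynamic range (ratio of max to min transmission) with and without the filter.
      trans_open   = np.exp(-mu_breast * d)
      trans_filter = np.exp(-(mu_breast * d + mu_filter * t))
      print("dynamic range without filter: %.1f" % (trans_open.max() / trans_open.min()))
      print("dynamic range with filter:    %.2f" % (trans_filter.max() / trans_filter.min()))
      print("max filter thickness: %.1f mm" % t.max())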

  19. A chaos wolf optimization algorithm with self-adaptive variable step-size

    NASA Astrophysics Data System (ADS)

    Zhu, Yong; Jiang, Wanlu; Kong, Xiangdong; Quan, Lingxiao; Zhang, Yongshun

    2017-10-01

    To address the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with a self-adaptive variable step size is proposed. The algorithm is based on the swarm intelligence of the wolf pack and simulates the predation behavior and prey-distribution habits of wolves. It comprises three intelligent behaviors, namely migration, summoning, and siege, together with a "winner-take-all" competition rule and a "survival of the fittest" update mechanism, and it combines the strategies of self-adaptive variable step-size search and chaos optimization. The CWOA was applied to parameter optimization of twelve typical complex nonlinear functions, and the results were compared with those of several existing algorithms, including the classical genetic algorithm, the particle swarm optimization algorithm, and the leader wolf pack search algorithm. The results indicate that the CWOA has superior optimization ability, with advantages in optimization accuracy and convergence rate, as well as high robustness and global search ability.
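
    A simplified optimizer in the spirit of the algorithm described above is sketched below in Python: chaotic (logistic-map) initialization of the pack, moves toward the lead wolf with a self-adaptively shrinking step size, per-wolf greedy acceptance, and chaotic re-seeding of the worst wolf. It is not the authors' exact CWOA; the pack size, iteration budget, and test function are illustrative choices.

      import numpy as np

      # Simplified CWOA-like sketch (not the authors' algorithm).

      def sphere(x):
          return float(np.sum(x**2))

      def logistic_init(n_wolves, dim, lower, upper, seed=0.37):
          """Chaotic initialization: iterate the logistic map to fill the pack."""
          z = np.empty((n_wolves, dim))
          c = seed
          for i in range(n_wolves):
              for j in range(dim):
                  c = 4.0 * c * (1.0 - c)      # logistic map, fully chaotic at r = 4
                  z[i, j] = c
          return lower + z * (upper - lower)

      def cwoa_like(f, dim=10, n_wolves=30, iters=300, lower=-5.0, upper=5.0):
          rng = np.random.default_rng(1)
          pack = logistic_init(n_wolves, dim, lower, upper)
          fitness = np.array([f(w) for w in pack])
          for k in range(iters):
              step = (upper - lower) * 0.5 * (1.0 - k / iters)   # self-adaptive step size
              leader = pack[np.argmin(fitness)].copy()
              for i in range(n_wolves):
                  # migration / siege: move toward the leader with a random perturbation
                  trial = pack[i] + rng.uniform(0, 1, dim) * (leader - pack[i]) \
                          + step * rng.uniform(-1, 1, dim)
                  trial = np.clip(trial, lower, upper)
                  ft = f(trial)
                  if ft < fitness[i]:           # winner-take-all per wolf
                      pack[i], fitness[i] = trial, ft
              # survival of the fittest: re-seed the worst wolf chaotically
              worst = np.argmax(fitness)
              pack[worst] = logistic_init(1, dim, lower, upper, seed=rng.uniform(0.1, 0.9))[0]
              fitness[worst] = f(pack[worst])
          best = np.argmin(fitness)
          return pack[best], fitness[best]

      x_best, f_best = cwoa_like(sphere)
      print("best fitness on the sphere function:", f_best)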

  20. Robust Face Recognition via Multi-Scale Patch-Based Matrix Regression.

    PubMed

    Gao, Guangwei; Yang, Jian; Jing, Xiaoyuan; Huang, Pu; Hua, Juliang; Yue, Dong

    2016-01-01

    In many real-world applications such as smart card solutions, law enforcement, surveillance and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme based on which the ensemble of multi-scale outputs can be achieved optimally. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms.
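
    The multi-scale fusion idea can be illustrated with a toy Python sketch: at each patch scale, the test patch is coded over each class's training patches (here by ordinary least squares rather than the paper's nuclear-norm matrix regression solver), the class is scored by the nuclear norm of the two-dimensional residual, and the per-scale scores are fused. The synthetic two-class data and equal-weight fusion are illustrative assumptions.

      import numpy as np

      # Hedged toy sketch (not the authors' method): least-squares patch coding with
      # a nuclear-norm residual score, fused over several patch scales.

      rng = np.random.default_rng(0)

      def patches(img, size):
          """Split a square image into non-overlapping size x size patches."""
          h, w = img.shape
          return [img[i:i+size, j:j+size]
                  for i in range(0, h - size + 1, size)
                  for j in range(0, w - size + 1, size)]

      def class_score(test_patch, train_patches):
          """Nuclear norm of the residual after least-squares coding."""
          A = np.column_stack([p.ravel() for p in train_patches])   # patch dictionary
          y = test_patch.ravel()
          x, *_ = np.linalg.lstsq(A, y, rcond=None)
          residual = (y - A @ x).reshape(test_patch.shape)
          return np.sum(np.linalg.svd(residual, compute_uv=False))  # nuclear norm

      # Synthetic data: 2 classes, 5 training images each, 32 x 32 pixels.
      classes = [[rng.normal(c, 1.0, (32, 32)) for _ in range(5)] for c in (0.0, 3.0)]
      test = rng.normal(3.0, 1.0, (32, 32))      # drawn from class 1

      scales = (8, 16, 32)
      fused = np.zeros(len(classes))
      for s in scales:
          test_p = patches(test, s)
          for c, train_imgs in enumerate(classes):
              score = 0.0
              for k, tp in enumerate(test_p):
                  train_p = [patches(img, s)[k] for img in train_imgs]
                  score += class_score(tp, train_p)
              fused[c] += score / len(test_p)    # equal-weight fusion across scales
      print("fused residual scores per class:", fused, "-> predicted class", int(np.argmin(fused)))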
