Conceptual design optimization study
NASA Technical Reports Server (NTRS)
Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.
1990-01-01
The feasibility of applying multilevel functional decomposition and optimization techniques to the conceptual design of advanced fighter aircraft was investigated. Applying functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach is proposed, combining functional decomposition techniques for generating linear sensitivity derivatives of aerodynamic and mass properties with existing techniques for sizing, mission performance, and optimization.
Evaluation of ultrasonics and optimized radiography for 2219-T87 aluminum weldments
NASA Technical Reports Server (NTRS)
Clotfelter, W. N.; Hoop, J. M.; Duren, P. C.
1975-01-01
Ultrasonic studies are described which are specifically directed toward the quantitative measurement of randomly located defects previously found in aluminum welds with radiography or with dye penetrants. Experimental radiographic studies were also made to optimize techniques for welds of the thickness range to be used in fabricating the External Tank of the Space Shuttle. Conventional and innovative ultrasonic techniques were applied to the flaw size measurement problem. Advantages and disadvantages of each method are discussed. Flaw size data obtained ultrasonically were compared to radiographic data and to real flaw sizes determined by destructive measurements. Considerable success was achieved with pulse echo techniques and with 'pitch and catch' techniques. The radiographic work described demonstrates that careful selection of film exposure parameters for a particular application must be made to obtain optimized flaw detectability. Thus, film exposure techniques can be improved even though radiography is an old weld inspection method.
NASA Technical Reports Server (NTRS)
Johnson, Theodore F.; Waters, W. Allen; Singer, Thomas N.; Haftka, Raphael T.
2004-01-01
A next generation reusable launch vehicle (RLV) will require thermally efficient and lightweight cryogenic propellant tank structures. Since these tanks will be weight-critical, analytical tools must be developed to aid in sizing the thickness of insulation layers and the structural geometry for optimal performance. Finite element method (FEM) models of the tank and insulation layers were created to analyze the thermal performance of the cryogenic insulation layer and thermal protection system (TPS) of the tanks. The thermal conditions of ground-hold and re-entry/soak-through for a typical RLV mission were used in the thermal sizing study. A general-purpose nonlinear FEM analysis code, capable of using temperature- and pressure-dependent material properties, was used as the thermal analysis code. Mechanical loads from ground handling and proof-pressure testing were used to size the structural geometry of an aluminum cryogenic tank wall. Nonlinear deterministic optimization and reliability optimization techniques were the analytical tools used to size the geometry of the isogrid stiffeners and the thickness of the skin. The results from the sizing study indicate that a commercial FEM code can be used for thermal analyses to size the insulation thicknesses as temperature and pressure vary. The results from the structural sizing study show that combined deterministic and reliability optimization techniques can yield alternative, lighter designs than deterministic optimization methods alone.
Optomechanical study and optimization of cantilever plate dynamics
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Pryputniewicz, Ryszard J.
1995-06-01
Optimum dynamic characteristics of an aluminum cantilever plate containing holes of different sizes, located at arbitrary positions on the plate, are studied computationally and experimentally. The objective function of this optimization is the minimization/maximization of the natural frequencies of the plate in terms of such design variables as the sizes and locations of the holes. The optimization process is performed using the finite element method and mathematical programming techniques in order to obtain the natural frequencies and the optimum conditions of the plate, respectively. The modal behavior of the resultant optimal plate layout is studied experimentally through the use of holographic interferometry techniques. Comparisons of the computational and experimental results show good agreement between theory and test. The comparisons also show that the combined, or hybrid, use of experimental and computational techniques is complementary and proves to be a very efficient tool for performing optimization studies of mechanical components.
NASA Astrophysics Data System (ADS)
Mishra, S. K.; Sahithi, V. V. D.; Rao, C. S. P.
2016-09-01
The lot sizing problem deals with finding optimal order quantities that minimize the ordering and holding costs of a product mix. When multiple items at multiple levels with capacity restrictions are considered, the lot sizing problem becomes NP-hard. Many heuristics developed in the past have failed due to problem size, computational complexity, and time. The authors, however, developed a PSO-based technique, the iterative improvement binary particle swarm optimization (IIBPSO) method, to address very large capacitated multi-item multi-level lot sizing (CMIMLLS) problems. First, a binary particle swarm optimization (BPSO) algorithm is used to find a solution in a reasonable time; then an iterative improvement local search mechanism is employed to refine the solution obtained by the BPSO algorithm. This hybrid mechanism of applying local search to the global solution is found to improve solution quality with respect to time; the IIBPSO method thus performs best and shows excellent results.
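The abstract gives no implementation details; the following sketch illustrates the hybrid idea on a toy single-level, uncapacitated lot sizing instance: a binary PSO searches over order-period decisions, and an iterative-improvement bit-flip local search then polishes the best particle. All data, parameter values, and the cost model are illustrative assumptions, not the authors' CMIMLLS formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

d = np.array([20, 50, 10, 80, 40, 30, 60, 25])  # toy period demands
K, h = 100.0, 1.0                               # ordering cost, unit holding cost


def cost(y):
    """Ordering + holding cost when y[t] = 1 means an order is placed in period t."""
    y = y.copy()
    y[0] = 1                                    # period-1 demand must be covered
    orders = np.flatnonzero(y)
    total = K * len(orders)
    for i, s in enumerate(orders):
        e = orders[i + 1] if i + 1 < len(orders) else len(d)
        for t in range(s, e):
            total += h * (t - s) * d[t]         # units for period t held t-s periods
    return total


def bpso(n=30, iters=200):
    """Binary PSO: positions are resampled from a sigmoid of the velocities."""
    T = len(d)
    X = rng.integers(0, 2, (n, T))
    V = rng.normal(0.0, 1.0, (n, T))
    P, pf = X.copy(), np.array([cost(x) for x in X])
    g = P[pf.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, T))
        V = 0.7 * V + 1.5 * r1 * (P - X) + 1.5 * r2 * (g - X)
        X = (rng.random((n, T)) < 1.0 / (1.0 + np.exp(-V))).astype(int)
        c = np.array([cost(x) for x in X])
        better = c < pf
        P[better], pf[better] = X[better], c[better]
        g = P[pf.argmin()].copy()
    return g


def local_search(y):
    """Iterative improvement: flip single order decisions while the cost drops."""
    y = y.copy()
    y[0] = 1
    best = cost(y)
    improved = True
    while improved:
        improved = False
        for t in range(1, len(y)):              # period 1 always orders
            y[t] ^= 1
            c = cost(y)
            if c < best:
                best, improved = c, True
            else:
                y[t] ^= 1                       # revert the flip
    return y, best


y, c = local_search(bpso())
print("order in periods:", np.flatnonzero(y) + 1, "| total cost:", c)
```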
Optimization and characterization of liposome formulation by mixture design.
Maherani, Behnoush; Arab-tehrany, Elmira; Kheirolomoom, Azadeh; Reshetov, Vadzim; Stebe, Marie José; Linder, Michel
2012-02-07
This study presents the application of the mixture design technique to develop an optimal liposome formulation by varying the lipids in type and percentage (DOPC, POPC, and DPPC) in the liposome composition. Ten lipid mixtures were generated by the simplex-centroid design technique and liposomes were prepared by the extrusion method. Liposomes were characterized with respect to size, phase transition temperature, ζ-potential, lamellarity, fluidity, and efficiency in loading calcein. The results were then applied to estimate the coefficients of the mixture design model and to find the optimal lipid composition with improved entrapment efficiency, size, transition temperature, fluidity, and ζ-potential of liposomes. The response optimization of the experiments identified the liposome formulation DOPC 46%, POPC 12%, and DPPC 42%. The optimal liposome formulation had an average diameter of 127.5 nm, a phase-transition temperature of 11.43 °C, a ζ-potential of -7.24 mV, a fluidity value (1/P, measured with the TMA-DPH probe) of 2.87, and an encapsulation efficiency of 20.24%. The experimental characterization of the optimal liposome formulation was in good agreement with the predictions of the mixture design technique.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirayama, S; Fujimoto, R
Purpose: The purpose was to demonstrate a developed acceleration technique for dose optimization and to investigate its applicability to the optimization process in a treatment planning system (TPS) for proton therapy. Methods: In the developed technique, the dose matrix is divided into two parts, main and halo, based on beam sizes. The boundary of the two parts is varied depending on the beam energy and water equivalent depth by utilizing the beam size as a single threshold parameter. The optimization is executed with two levels of iterations. In the inner loop, doses from the main part are updated, whereas doses from the halo part remain constant. In the outer loop, the doses from the halo part are recalculated. We implemented this technique in the optimization process of the TPS and investigated, in benchmarks, the dependence of the speedup effect on the target volume and the applicability to worst-case optimization (WCO). Results: We created irradiation plans for various cubic targets and measured the optimization time while varying the target volume. The speedup effect improved as the target volume increased, and the calculation speed increased by a factor of six for a 1000 cm3 target. An IMPT plan for the RTOG benchmark phantom was created in consideration of ±3.5% range uncertainties using the WCO. Beams were irradiated at 0, 45, and 315 degrees. The target's prescribed dose and the OAR's Dmax were set to 3 Gy and 1.5 Gy, respectively. Using the developed technique, the calculation speed increased by a factor of 1.5. Meanwhile, no significant difference in the calculated DVHs was found before and after incorporating the technique into the WCO. Conclusion: The developed technique could be adapted to the TPS's optimization and was particularly effective for large target cases.
Kassem, Mohamed A A; ElMeshad, Aliaa N; Fares, Ahmed R
2017-05-01
Lacidipine (LCDP) is a highly lipophilic calcium channel blocker of poor aqueous solubility, leading to poor oral absorption. This study aims to prepare and optimize LCDP nanosuspensions using an antisolvent sonoprecipitation technique to enhance the solubility and dissolution of LCDP. A three-factor, three-level Box-Behnken design was employed to optimize the formulation variables to obtain an LCDP nanosuspension of small and uniform particle size. The formulation variables were as follows: stabilizer-to-drug ratio (A), sodium deoxycholate percentage (B), and sonication time (C). LCDP nanosuspensions were assessed for particle size, zeta potential, and polydispersity index. The formula with the highest desirability (0.969) was chosen as the optimized formula. The values of the formulation variables (A, B, and C) in the optimized nanosuspension were 1.5, 100%, and 8 min, respectively. The optimal LCDP nanosuspension had a particle size (PS) of 273.21 nm, a zeta potential (ZP) of -32.68 mV, and a polydispersity index (PDI) of 0.098. The LCDP nanosuspension was characterized using x-ray powder diffraction, differential scanning calorimetry, and transmission electron microscopy. The LCDP nanosuspension showed a saturation solubility 70 times that of raw LCDP, in addition to a significantly enhanced dissolution rate due to particle size reduction and decreased crystallinity. These results suggest that the optimized LCDP nanosuspension could be promising for improving the oral absorption of LCDP.
NASA Astrophysics Data System (ADS)
Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.
2011-08-01
This paper proposes a novel optimization approach for the least cost design of looped water distribution systems (WDSs). Three distinct steps are involved in the proposed optimization approach. In the first step, the shortest-distance tree within the looped network is identified using the Dijkstra graph theory algorithm, for which an extension is proposed to find the shortest-distance tree for multisource WDSs. In the second step, a nonlinear programming (NLP) solver is employed to optimize the pipe diameters for the shortest-distance tree (chords of the shortest-distance tree are allocated the minimum allowable pipe sizes). Finally, in the third step, the original looped water network is optimized using a differential evolution (DE) algorithm seeded with diameters in the proximity of the continuous pipe sizes obtained in step two. As such, the proposed optimization approach combines the traditional deterministic optimization technique of NLP with the emerging evolutionary algorithm DE via the proposed network decomposition. The proposed methodology has been tested on four looped WDSs with the number of decision variables ranging from 21 to 454. Results obtained show the proposed approach is able to find optimal solutions with significantly less computational effort than other optimization techniques.
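A minimal sketch of the first step, extracting the shortest-distance tree on which the NLP and DE stages then build, using a plain Dijkstra implementation. The four-node network below is invented, not one of the paper's test systems, and the paper's multisource extension is not reproduced.

```python
import heapq


def shortest_distance_tree(adj, source):
    """Dijkstra's algorithm; returns each node's distance and tree predecessor.
    Tree pipes are the (pred[n], n) edges; all other pipes are the chords."""
    dist = {n: float("inf") for n in adj}
    pred = {n: None for n in adj}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                          # stale queue entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v], pred[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return dist, pred


# toy looped network: node -> [(neighbour, pipe length), ...]
adj = {
    "R": [("A", 100), ("B", 150)],
    "A": [("R", 100), ("B", 60), ("C", 120)],
    "B": [("R", 150), ("A", 60), ("C", 80)],
    "C": [("A", 120), ("B", 80)],
}
dist, pred = shortest_distance_tree(adj, "R")
all_edges = {tuple(sorted((u, v))) for u in adj for v, _ in adj[u]}
tree_edges = {tuple(sorted((pred[n], n))) for n in adj if pred[n] is not None}
chords = all_edges - tree_edges
print("tree pipes (sized by NLP):", sorted(tree_edges))
print("chords (assigned minimum diameter):", sorted(chords))
```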
NASA Technical Reports Server (NTRS)
Rao, R. G. S.; Ulaby, F. T.
1977-01-01
The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sampling procedure should be used, based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained with simple random sampling procedures.
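The "optimal allocation" recommended in conclusion (3) is the classical Neyman allocation, in which each stratum receives samples in proportion to its size times its standard deviation. A minimal sketch with invented stratum statistics:

```python
import numpy as np


def neyman_allocation(N_h, S_h, n_total):
    """Neyman optimal allocation: n_h proportional to N_h * S_h.
    Rounding means the realized total may differ from n_total by a sample or two."""
    N_h, S_h = np.asarray(N_h, float), np.asarray(S_h, float)
    w = N_h * S_h
    return np.maximum(1, np.rint(n_total * w / w.sum()).astype(int))


# three soil-moisture strata (e.g. dry / moderate / wet cells); invented numbers
N_h = [40, 35, 25]      # stratum sizes (candidate sample points per stratum)
S_h = [2.1, 4.5, 7.8]   # observed stratum standard deviations (% moisture)
n_h = neyman_allocation(N_h, S_h, n_total=20)
print("samples per stratum:", n_h)  # more variable (wetter) strata get more samples
```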
NASA Astrophysics Data System (ADS)
Kazemzadeh Azad, Saeid
2018-01-01
In spite of considerable research work on the development of efficient algorithms for discrete sizing optimization of steel truss structures, only a few studies have addressed non-algorithmic issues affecting the general performance of algorithms. For instance, an important question is whether starting the design optimization from a feasible solution is fruitful or not. This study is an attempt to investigate the effect of seeding the initial population with feasible solutions on the general performance of metaheuristic techniques. To this end, the sensitivity of recently proposed metaheuristic algorithms to the feasibility of initial candidate designs is evaluated through practical discrete sizing of real-size steel truss structures. The numerical experiments indicate that seeding the initial population with feasible solutions can improve the computational efficiency of metaheuristic structural optimization algorithms, especially in the early stages of the optimization. This paves the way for efficient metaheuristic optimization of large-scale structural systems.
Drying step optimization to obtain large-size transparent magnesium-aluminate spinel samples
NASA Astrophysics Data System (ADS)
Petit, Johan; Lallemant, Lucile
2017-05-01
In transparent ceramics processing, the green body elaboration step is probably the most critical one. Among the known techniques, wet shaping processes are particularly interesting because they enable the particles to find an optimum position on their own. Nevertheless, the presence of water molecules leads to drying issues. During water removal, the water concentration gradient induces cracks that limit the sample size: laboratory samples are generally less damaged because of their small size, but upscaling the samples for industrial applications leads to an increasing cracking probability. Thanks to optimization of the drying step, large-size spinel samples were obtained.
Optimization of scaffold design for bone tissue engineering: A computational and experimental study.
Dias, Marta R; Guedes, José M; Flanagan, Colleen L; Hollister, Scott J; Fernandes, Paulo R
2014-04-01
In bone tissue engineering, the scaffold has not only to allow the diffusion of cells, nutrients and oxygen but also provide adequate mechanical support. One way to ensure the scaffold has the right properties is to use computational tools to design such a scaffold, coupled with additive manufacturing to build the scaffolds to the resulting optimized design specifications. In this study a topology optimization algorithm is proposed as a technique to design scaffolds that meet specific requirements for mass transport and mechanical load bearing. Several micro-structures obtained computationally are presented. Designed scaffolds were then built using selective laser sintering, and the actual features of the fabricated scaffolds were measured and compared to the designed values. It was possible to obtain scaffolds with an internal geometry that reasonably matched the computational design (within 14% of the porosity target, 40% for strut size and 55% for throat size in the building direction, and 15% for strut size and 17% for throat size perpendicular to the building direction). These results support the use of this kind of computational algorithm to design optimized scaffolds with specific target properties and confirm the value of these techniques for bone tissue engineering. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.
Hwang, Ho Sik; Cho, Kyong Jin; Rand, Gabriel; Chuck, Roy S; Kwon, Ji Won
2018-06-07
In our study we describe a method that optimizes the size of excision and autografting for primary pterygia along with the use of intraoperative MMC and fibrin glue. Our objective is to propose a simple, optimized pterygium surgical technique with excellent aesthetic outcomes and low rates of recurrence and other adverse events. Retrospective chart review of 78 consecutive patients with stage III primary pterygia who underwent an optimal excision technique by three experienced surgeons. The technique consisted of removal of the pterygium head; excision of the pterygium body and Tenon's layer limited in proportion to the length of the head; application of intraoperative mitomycin C to the defect; harvest of a superior bulbar limbal conjunctival graft; and adherence of the graft with fibrin glue. Outcomes included operative time, follow-up period, pterygium recurrence, occurrences of incorrectly sized grafts, and other complications. All patients were followed up for more than a year. Of the 78 patients, there were 2 cases of pterygium recurrence (2.6%). There was one case of wound dehiscence secondary to a small-sized donor conjunctiva and one case of an over-sized donor conjunctiva, neither of which required surgical correction. There were no toxic complications associated with the use of mitomycin C. Correlating the excision of the pterygium body and underlying Tenon's layer to the length of the pterygium head, along with the use of intraoperative mitomycin C, limbal conjunctival autografting, and fibrin adhesion, resulted in excellent outcomes with a low rate of recurrence for primary pterygia.
Formal optimization of hovering performance using free wake lifting surface theory
NASA Technical Reports Server (NTRS)
Chung, S. Y.
1986-01-01
Free wake techniques for performance prediction and optimization of hovering rotors are discussed. The influence functions due to vortex rings, vortex cylinders, and source or vortex sheets are presented. The vortex core sizes of rotor wake vortices are calculated and their importance is discussed. A lifting body theory for finite-thickness bodies is developed for pressure calculation, and hence performance prediction, of hovering rotors. A numerical optimization technique based on free wake lifting line theory is presented and discussed. It is demonstrated that formal optimization can be used with an implicit, nonlinear objective or cost function, such as the hovering rotor performance used in this report.
Optimal Control Surface Layout for an Aeroservoelastic Wingbox
NASA Technical Reports Server (NTRS)
Stanford, Bret K.
2017-01-01
This paper demonstrates a technique for locating the optimal control surface layout of an aeroservoelastic Common Research Model wingbox, in the context of maneuver load alleviation and active flutter suppression. The combinatorial actuator layout design is solved using ideas borrowed from topology optimization, where the effectiveness of a given control surface is tied to a layout design variable, which varies from zero (the actuator is removed) to one (the actuator is retained). These layout design variables are optimized concurrently with a large number of structural wingbox sizing variables and control surface actuation variables, in order to minimize the sum of structural weight and actuator weight. Results are presented that demonstrate interdependencies between structural sizing patterns and optimal control surface layouts, for both static and dynamic aeroelastic physics.
Optimal synthesis and characterization of Ag nanofluids by electrical explosion of wires in liquids
2011-01-01
Silver nanoparticles were produced by electrical explosion of wires in liquids with no additive. In this study, we optimized the fabrication method and examined the effects of manufacturing process parameters. The morphology and size of the Ag nanoparticles were determined using transmission electron microscopy and field-emission scanning electron microscopy. Size and zeta potential were analyzed using dynamic light scattering. A response optimization technique showed that optimal conditions were achieved when the capacitance was 30 μF, the wire length was 38 mm, the liquid volume was 500 mL, and the liquid was deionized water. The average Ag nanoparticle size in water was 118.9 nm and the zeta potential was -42.5 mV. The critical heat flux of the 0.001-vol.% Ag nanofluid was higher than that of pure water. PMID:21711757
Determination of calibration parameters of a VRX CT system using an “Amoeba” algorithm
Jordan, Lawrence M.; DiBianca, Frank A.; Melnyk, Roman; Choudhary, Apoorva; Shukla, Hemant; Laughter, Joseph; Gaber, M. Waleed
2008-01-01
Efforts to improve the spatial resolution of CT scanners have focused mainly on reducing the source and detector element sizes, ignoring losses from the size of the secondary-ionization charge “clouds” created by the detected x-ray photons, i.e., the “physics limit.” This paper focuses on implementing a technique called “projective compression,” which allows further reduction in effective cell size while overcoming the physics limit as well. Projective compression signifies detector geometries in which the apparent cell size is smaller than the physical cell size, allowing large resolution boosts. A realization of this technique has been developed with a dual-arm “variable-resolution x-ray” (VRX) detector. Accurate values of the geometrical parameters are needed to convert VRX outputs to formats ready for optimal image reconstruction by standard CT techniques. The required calibrating data are obtained by scanning a rotating pin and fitting a theoretical parametric curve (using a multi-parameter minimization algorithm) to the resulting pin sinogram. Excellent fits are obtained for both detector-arm sections with an average (maximum) fit deviation of ~0.05 (0.1) detector cell width. Fit convergence and sensitivity to starting conditions are considered. Pre- and post-optimization reconstructions of the alignment pin and a biological subject reconstruction after calibration are shown. PMID:19430581
Optimal design in pediatric pharmacokinetic and pharmacodynamic clinical studies.
Roberts, Jessica K; Stockmann, Chris; Balch, Alfred; Yu, Tian; Ward, Robert M; Spigarelli, Michael G; Sherwin, Catherine M T
2015-03-01
It is not trivial to conduct clinical trials with pediatric participants. Ethical, logistical, and financial considerations add to the complexity of pediatric studies. Optimal design theory allows investigators the opportunity to apply mathematical optimization algorithms to define how to structure their data collection to answer focused research questions. These techniques can be used to determine an optimal sample size, optimal sample times, and the number of samples required for pharmacokinetic and pharmacodynamic studies. The aim of this review is to demonstrate how to determine the optimal sample size, optimal sample times, and number of samples required from each patient by presenting specific examples using optimal design tools. Additionally, this review discusses the relative usefulness of sparse vs rich data. This review is intended to educate the clinician, as well as the basic research scientist, who plans to conduct a pharmacokinetic/pharmacodynamic clinical trial in pediatric patients. © 2015 John Wiley & Sons Ltd.
Optimization of Turbine Blade Design for Reusable Launch Vehicles
NASA Technical Reports Server (NTRS)
Shyy, Wei
1998-01-01
To facilitate design optimization of turbine blade shape for reusable launch vehicles, appropriate techniques need to be developed to process and estimate the characteristics of the design variables and the response of the output with respect to variations of the design variables. The purpose of this report is to offer insight into developing appropriate techniques for supporting such design and optimization needs. Neural network and polynomial-based techniques are applied to process aerodynamic data obtained from computational simulations for flows around a two-dimensional airfoil and a generic three-dimensional wing/blade. For the two-dimensional airfoil, a two-layered radial-basis network is designed and trained, and the performances of two different design functions for radial-basis networks are compared: one based on an accuracy requirement, the other on a limit on the network size. While the number of neurons needed to satisfactorily reproduce the information depends on the size of the data, the neural network technique is shown to be more accurate for large data sets (up to 765 simulations have been used) than the polynomial-based response surface method. For the three-dimensional wing/blade case, smaller aerodynamic data sets (between 9 and 25 simulations) are considered, and both the neural network and the polynomial-based response surface techniques improve their performance as the data size increases. It is found that, while the relative performance of the two network types, a radial-basis network and a back-propagation network, depends on the amount of input data, the number of iterations required for the radial-basis network is less than that for the back-propagation network.
van der Ploeg, Tjeerd; Austin, Peter C; Steyerberg, Ewout W
2014-12-22
Modern modelling techniques may potentially provide more accurate predictions of binary outcomes than classical techniques. We aimed to study the predictive performance of different modelling techniques in relation to the effective sample size ("data hungriness"). We performed simulation studies based on three clinical cohorts: 1282 patients with head and neck cancer (with 46.9% 5 year survival), 1731 patients with traumatic brain injury (22.3% 6 month mortality) and 3181 patients with minor head injury (7.6% with CT scan abnormalities). We compared three relatively modern modelling techniques, support vector machines (SVM), neural nets (NN), and random forests (RF), with two classical techniques, logistic regression (LR) and classification and regression trees (CART). We created three large artificial databases with 20 fold, 10 fold and 6 fold replication of subjects, in which we generated dichotomous outcomes according to different underlying models. We applied each modelling technique to increasingly larger development parts (100 repetitions). The area under the ROC curve (AUC) indicated the performance of each model in the development part and in an independent validation part. Data hungriness was defined by plateauing of the AUC and small optimism (difference between the mean apparent AUC and the mean validated AUC <0.01). We found that a stable AUC was reached by LR at approximately 20 to 50 events per variable, followed by CART, SVM, NN and RF models. Optimism decreased with increasing sample sizes and the same ranking of techniques. The RF, SVM and NN models showed instability and high optimism even with >200 events per variable. Modern modelling techniques such as SVM, NN and RF may need over 10 times as many events per variable as classical modelling techniques such as LR to achieve a stable AUC and small optimism. This implies that such modern techniques should only be used in medical prediction problems if very large data sets are available.
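The study's design (grow a development set, then compare apparent and validated AUC to measure optimism) can be sketched for the least data-hungry case, logistic regression. This scikit-learn-based sketch uses a synthetic data-generating model with illustrative sizes and coefficients, not the paper's cohorts:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
P = 10                                              # number of predictors
beta = np.linspace(0.5, 0.05, P)                    # true log-odds coefficients


def draw(m):
    """Draw m subjects from the assumed logistic data-generating model."""
    X = rng.normal(size=(m, P))
    lin = X @ beta - 1.0                            # intercept sets the event rate
    y = (rng.random(m) < 1.0 / (1.0 + np.exp(-lin))).astype(int)
    return X, y


Xv, yv = draw(5000)                                 # large independent validation set
for n in [100, 200, 500, 1000, 5000]:
    Xd, yd = draw(n)                                # development set of size n
    model = LogisticRegression(max_iter=1000).fit(Xd, yd)
    auc_app = roc_auc_score(yd, model.predict_proba(Xd)[:, 1])  # apparent AUC
    auc_val = roc_auc_score(yv, model.predict_proba(Xv)[:, 1])  # validated AUC
    print(f"n={n:5d}  EPV={yd.sum() / P:6.1f}  optimism={auc_app - auc_val:+.3f}")
```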
A rapid method for optimization of the rocket propulsion system for single-stage-to-orbit vehicles
NASA Technical Reports Server (NTRS)
Eldred, C. H.; Gordon, S. V.
1976-01-01
A rapid analytical method for the optimization of rocket propulsion systems is presented for a vertical take-off, horizontal landing, single-stage-to-orbit launch vehicle. This method utilizes trade-offs between propulsion characteristics affecting flight performance and engine system mass. The performance results from a point-mass trajectory optimization program are combined with a linearized sizing program to establish vehicle sizing trends caused by propulsion system variations. The linearized sizing technique was developed for the class of vehicle systems studied herein. The specific examples treated are the optimization of nozzle expansion ratio and lift-off thrust-to-weight ratio to achieve either minimum gross mass or minimum dry mass. Assumed propulsion system characteristics are high chamber pressure, liquid oxygen and liquid hydrogen propellants, conventional bell nozzles, and the same fixed nozzle expansion ratio for all engines on a vehicle.
Optimizing probability of detection point estimate demonstration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
The paper provides a discussion on optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false (POF) calls while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered a conservative estimate of the flaw size with minimum 90% probability and 95% confidence, denoted as α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e., the 90% probability flaw size, to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution. The difference between the median or average of the 29 flaws and α90 is also expressed as a proportion of the standard deviation of the probability density distribution. In general, it is concluded that, if the probability of detection increases with flaw size, the average of the 29 flaw sizes will always be larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet requirements on the minimum required PPD, the maximum allowable POF, the flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
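The binomial arithmetic behind the 29-flaw criterion can be checked directly: detecting all 29 flaws demonstrates a POD above 90% at 95% confidence, because a procedure whose true POD were only 0.90 would pass with probability 0.9^29, roughly 0.047, which is below 0.05. A short scipy sketch of the probability of passing the demonstration as a function of the true POD (values illustrative):

```python
from scipy.stats import binom

n = 29                               # flaws in the demonstration set

# Passing the classical 29/29 criterion requires detecting every flaw.
# If the true POD at this flaw size were only 0.90, the chance of passing is:
print(binom.pmf(n, n, 0.90))         # ~0.047 < 0.05, hence 90/95 is demonstrated

# PPD as a function of the procedure's true POD at the demonstrated flaw size:
for pod in (0.90, 0.95, 0.98, 0.995):
    ppd = binom.pmf(n, n, pod)       # probability that all n flaws are detected
    print(f"true POD {pod:.3f} -> probability of passing {ppd:.3f}")
```

Even a very capable procedure carries a nontrivial risk of failing the demonstration, which is what motivates optimizing the flaw-set sizes in the first place.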
Wahyuningsih, Hesty; K Cayami, Ferdy; Bahrudin, Udin; A Sobirin, Mochamad; Ep Mundhofir, Farmaditya; Mh Faradz, Sultana; Hisatome, Ichiro
2017-03-01
High resolution melting (HRM) is a post-PCR technique for variant screening and genotyping based on the different melting points of DNA fragments. The advantages of this technique are that it is fast, simple, and efficient and has a high output, particularly for screening of a large number of samples. APOA1 encodes apolipoprotein A1 (apoA1), a major component of high density lipoprotein cholesterol (HDL-C). This study aimed to obtain an optimal quantitative polymerase chain reaction (qPCR)-HRM condition for screening of APOA1 variants. Genomic DNA was isolated from a peripheral blood sample using the salting out method. APOA1 was amplified using the RotorGeneQ 5Plex HRM. The PCR product was visualized with the HRM amplification curve and confirmed using gel electrophoresis. The melting profile was confirmed by examining the melting curve. Five sets of primers covering the translated region of APOA1 exons were designed, with expected PCR product sizes of 100-400 bp. The amplified segments of DNA were amplicons 2, 3, 4A, 4B, and 4C. Amplicons 2, 3, and 4B were optimized at an annealing temperature of 60 °C over 40 PCR cycles; amplicon 4A at 62 °C over 45 PCR cycles; and amplicon 4C at 63 °C over 50 PCR cycles. In addition to suitable procedures for DNA isolation and quantification, primer design, and estimation of PCR product size, the data of this study showed that the appropriate annealing temperature and number of PCR cycles are important factors in optimizing the HRM technique for variant screening in APOA1.
NASA Astrophysics Data System (ADS)
Hu, K. M.; Li, Hua
2018-07-01
A novel technique for the multi-parameter optimization of distributed piezoelectric actuators is presented in this paper. The proposed method is designed to improve the performance of multi-mode vibration control in cylindrical shells. The optimization parameters of the actuator patch configuration include position, size, and tilt angle. The modal control force of tilted orthotropic piezoelectric actuators is derived and the multi-parameter cylindrical shell optimization model is established. The linear quadratic energy index is employed as the optimization criterion. A geometric constraint is proposed to prevent overlap between tilted actuators, which is incorporated into a genetic algorithm that searches for the optimal configuration parameters. A simply supported closed cylindrical shell with two actuators serves as a case study. The vibration control efficiencies of various parameter sets are evaluated via frequency response and transient response simulations. The results show that the linear quadratic energy index decreased by 14.0% for position and size optimization compared to position optimization alone; by 16.8% for position and tilt angle optimization; and by 25.9% for position, size, and tilt angle optimization. This indicates that adding configuration optimization parameters is an efficient approach to improving the vibration control performance of piezoelectric actuators on shells.
Optimizing Aspect-Oriented Mechanisms for Embedded Applications
NASA Astrophysics Data System (ADS)
Hundt, Christine; Stöhr, Daniel; Glesner, Sabine
As applications for small embedded mobile devices are getting larger and more complex, it becomes inevitable to adopt more advanced software engineering methods from the field of desktop application development. Aspect-oriented programming (AOP) is a promising approach due to its advanced modularization capabilities. However, existing AOP languages tend to add a substantial overhead in both execution time and code size which restricts their practicality for small devices with limited resources. In this paper, we present optimizations for aspect-oriented mechanisms at the level of the virtual machine. Our experiments show that these optimizations yield a considerable performance gain along with a reduction of the code size. Thus, our optimizations establish the base for using advanced aspect-oriented modularization techniques for developing Java applications on small embedded devices.
Focus characterization at an X-ray free-electron laser by coherent scattering and speckle analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sikorski, Marcin; Song, Sanghoon; Schropp, Andreas
2015-04-14
X-ray focus optimization and characterization based on coherent scattering and quantitative speckle size measurements was demonstrated at the Linac Coherent Light Source. Its performance as a single-pulse free-electron laser beam diagnostic was tested for two typical focusing configurations. The results derived from the speckle size/shape analysis show the effectiveness of this technique in finding the location, size, and shape of the focus. In addition, its single-pulse compatibility enables users to capture pulse-to-pulse fluctuations in focus properties, in contrast with other techniques that require scanning and averaging.
Development of an improved high efficiency thin silicon solar cell
NASA Technical Reports Server (NTRS)
Lindmayer, J.
1978-01-01
Efforts were concerned with optimizing techniques for thinning silicon slices in NaOH etches, initial investigations of surface texturing, variation of furnace treatments to improve cell efficiency, initial efforts on optimization of gridline and cell sizes, and Pilot Line fabrication of quantities of 2 cm × 2 cm, 50-micron-thick cells.
Development of a fast and feasible spectrum modeling technique for flattening filter free beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, Woong; Bush, Karl; Mok, Ed
Purpose: To develop a fast and robust technique for the determination of optimized photon spectra for flattening filter free (FFF) beams to be applied in convolution/superposition dose calculations. Methods: A two-step optimization method was developed to derive optimal photon spectra for FFF beams. In the first step, a simple functional form of the photon spectra proposed by Ali ['Functional forms for photon spectra of clinical linacs,' Phys. Med. Biol. 57, 31-50 (2011)] is used to determine generalized shapes of the photon spectra. In this method, the photon spectra were defined over a range of field sizes to account for the variation of the scattered-photon contribution with field size. Percent depth doses (PDDs) for each field size were measured and calculated to define a cost function, and a collapsed cone convolution (CCC) algorithm was used to calculate the PDDs. In the second step, the generalized functional form of the photon spectra was fine-tuned in a process whereby the weights of the photon fluence became the free optimization parameters. A line search method was used for the optimization, and first-order derivatives with respect to the optimization parameters were derived from the CCC algorithm to speed up the optimization. The derived photon spectra were evaluated, and the dose distributions using the optimized spectra were validated. Results: The optimal spectra show small variations with field size for the 6 MV FFF beam and relatively large variations for the 10 MV FFF beam. The mean energy of the optimized 6 MV FFF spectra decreased from 1.31 MeV for a 3 × 3 cm² field to 1.21 MeV for a 40 × 40 cm² field, and from 2.33 MeV at 3 × 3 cm² to 2.18 MeV at 40 × 40 cm² for the 10 MV FFF beam. The developed method significantly improved the agreement between the calculated and measured PDDs. Root mean square differences in the optimized PDDs ranged from 0.41% (3 × 3 cm²) down to 0.21% (40 × 40 cm²) for the 6 MV FFF beam, and from 0.35% (3 × 3 cm²) down to 0.29% (40 × 40 cm²) for the 10 MV FFF beam. The first-order derivatives from the functional form improved the computational speed by up to a factor of 20 compared with other techniques. Conclusions: The derived photon spectra resulted in good agreement with measured PDDs over the range of field sizes investigated. The suggested method is easily applicable to commercial radiation treatment planning systems since it only requires measured PDDs as input.
Jan, Show-Li; Shieh, Gwowen
2016-08-31
The 2 × 2 factorial design is widely used for assessing the existence of interaction and the extent of generalizability of two factors where each factor had only two levels. Accordingly, research problems associated with the main effects and interaction effects can be analyzed with the selected linear contrasts. To correct for the potential heterogeneity of variance structure, the Welch-Satterthwaite test is commonly used as an alternative to the t test for detecting the substantive significance of a linear combination of mean effects. This study concerns the optimal allocation of group sizes for the Welch-Satterthwaite test in order to minimize the total cost while maintaining adequate power. The existing method suggests that the optimal ratio of sample sizes is proportional to the ratio of the population standard deviations divided by the square root of the ratio of the unit sampling costs. Instead, a systematic approach using optimization technique and screening search is presented to find the optimal solution. Numerical assessments revealed that the current allocation scheme generally does not give the optimal solution. Alternatively, the suggested approaches to power and sample size calculations give accurate and superior results under various treatment and cost configurations. The proposed approach improves upon the current method in both its methodological soundness and overall performance. Supplementary algorithms are also developed to aid the usefulness and implementation of the recommended technique in planning 2 × 2 factorial designs.
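The existing allocation rule (sample size ratio proportional to the ratio of standard deviations divided by the square root of the cost ratio) contrasts with a direct screening search; the sketch below computes approximate Welch test power from the noncentral t distribution and screens group-size pairs for the minimum-cost design meeting a power target. This is a scipy-based, two-group simplification of the paper's 2 × 2 contrast setting, and all numbers are invented.

```python
import numpy as np
from scipy import stats


def welch_power(n1, n2, delta, s1, s2, alpha=0.05):
    """Approximate power of the two-sided Welch test for a mean difference delta."""
    v1, v2 = s1**2 / n1, s2**2 / n2
    se = np.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))  # Satterthwaite df
    tcrit = stats.t.ppf(1 - alpha / 2, df)
    nc = delta / se                              # noncentrality parameter
    return 1 - stats.nct.cdf(tcrit, df, nc) + stats.nct.cdf(-tcrit, df, nc)


def cheapest_allocation(delta, s1, s2, c1, c2, target=0.80, n_max=200):
    """Screen (n1, n2) pairs for the minimum-cost design with power >= target."""
    best = None
    for n1 in range(2, n_max):
        if best is not None and c1 * n1 + 2 * c2 >= best[0]:
            break                                # no cheaper design is possible
        for n2 in range(2, n_max):
            if welch_power(n1, n2, delta, s1, s2) >= target:
                cost = c1 * n1 + c2 * n2         # smallest feasible n2 for this n1
                if best is None or cost < best[0]:
                    best = (cost, n1, n2)
                break
    return best


# illustrative: group 2 is twice as variable but three times cheaper to sample
print(cheapest_allocation(delta=1.0, s1=1.0, s2=2.0, c1=3.0, c2=1.0))
```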
MULTI-SCALE MODELING AND APPROXIMATION ASSISTED OPTIMIZATION OF BARE TUBE HEAT EXCHANGERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bacellar, Daniel; Ling, Jiazhen; Aute, Vikrant
2014-01-01
Air-to-refrigerant heat exchangers are very common in air-conditioning, heat pump, and refrigeration applications. In these heat exchangers, there is a great benefit in terms of size, weight, refrigerant charge, and heat transfer coefficient in moving from conventional channel sizes (~9 mm) to smaller channel sizes (<5 mm). This work investigates new designs for air-to-refrigerant heat exchangers with tube outer diameters ranging from 0.5 to 2.0 mm. The goal of this research is to develop and optimize the design of these heat exchangers and compare their performance with existing state-of-the-art designs. The air-side performance of various tube bundle configurations is analyzed using a Parallel Parameterized CFD (PPCFD) technique. PPCFD allows fast parametric CFD analyses of various geometries with topology change. Approximation techniques drastically reduce the number of CFD evaluations required during optimization. The Maximum Entropy Design method is used for sampling and the Kriging method is used for metamodeling. Metamodels are developed for the air-side heat transfer coefficients and pressure drop as functions of the tube-bundle dimensions and air velocity. The metamodels are then integrated with an air-to-refrigerant heat exchanger design code. This integration allows a multi-scale analysis of the air-side performance of heat exchangers, including air-to-refrigerant heat transfer and phase change. The overall optimization is carried out using a multi-objective genetic algorithm. The optimal designs found exhibit a 50 percent size reduction, a 75 percent decrease in air-side pressure drop, and doubled air-side heat transfer coefficients compared to a high-performance compact microchannel heat exchanger with the same capacity and flow rates.
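The Kriging metamodeling step, fitting a Gaussian-process surrogate to a handful of expensive CFD samples so the genetic algorithm can query performance cheaply, can be sketched with scikit-learn. The "CFD" response below is a stand-in analytic function, and plain random sampling replaces the paper's Maximum Entropy Design:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(2)


def fake_cfd(x):
    """Stand-in for a CFD run: x = (tube diameter [mm], air velocity [m/s])."""
    d, v = x[:, 0], x[:, 1]
    return 200 * v**0.6 / d**0.4    # mock air-side heat transfer coefficient


# small sampling plan in the design space (0.5-2.0 mm diameter, 1-5 m/s velocity)
X = rng.uniform([0.5, 1.0], [2.0, 5.0], size=(30, 2))
y = fake_cfd(X)

# Kriging surrogate: constant * squared-exponential kernel with per-dim scales
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.5, 1.0]),
                              normalize_y=True).fit(X, y)

# the optimizer can now evaluate thousands of candidates without new CFD runs
Xq = rng.uniform([0.5, 1.0], [2.0, 5.0], size=(5, 2))
mu, sd = gp.predict(Xq, return_std=True)
for q, m, s in zip(Xq, mu, sd):
    print(f"d={q[0]:.2f} mm, v={q[1]:.2f} m/s -> h ~ {m:7.1f} +/- {s:.1f}")
```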
Synthesis of aircraft structures using integrated design and analysis methods
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Goetz, R. C.
1978-01-01
Systematic research to develop and validate methods for structural sizing of an airframe designed with the use of composite materials and active controls is reported. This research program includes procedures for computing aeroelastic loads, static and dynamic aeroelasticity, analysis and synthesis of active controls, and optimization techniques. Development of the methods is concerned with the most effective ways of integrating and sequencing the procedures in order to generate structural sizing and the associated active control system that is optimal with respect to a given merit function constrained by strength and aeroelasticity requirements.
Investigating the effects of PDC cutters geometry on ROP using the Taguchi technique
NASA Astrophysics Data System (ADS)
Jamaludin, A. A.; Mehat, N. M.; Kamaruddin, S.
2017-10-01
At times, polycrystalline diamond compact (PDC) bit performance drops, affecting the rate of penetration (ROP). The objective of this project is to investigate the effect of PDC cutter geometry and optimize it. An intensive study of cutter geometry can further enhance ROP performance. A relatively extended analysis was carried out, and four significant geometry factors that directly influence ROP were identified: cutter size, back rake angle, side rake angle, and chamfer angle. An appropriate optimization technique that effectively controls all influential geometry factors during cutter manufacturing is introduced and adopted in this project. Adopting an L9 Taguchi orthogonal array, a simulation experiment is conducted using explicit dynamics finite element analysis. Through a structured Taguchi analysis, ANOVA confirms that the most significant geometry factor for improving ROP is cutter size (99.16% percentage contribution). The optimized cutter is expected to drill with a high ROP that can reduce rig time, which in turn may reduce the total drilling cost.
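The Taguchi workflow described here (run an L9 orthogonal array, then rank factors by their signal-to-noise effect ranges) can be sketched generically. The array is the standard L9(3^4); the ROP responses are invented placeholders for the finite element results:

```python
import numpy as np

# standard L9 orthogonal array: 9 runs x 4 three-level factors (levels 0, 1, 2)
L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
               [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
               [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])
factors = ["cutter size", "back rake", "side rake", "chamfer"]

# invented ROP responses for the 9 simulated runs (larger is better)
rop = np.array([12.1, 13.4, 14.0, 15.2, 16.8, 14.9, 18.3, 17.5, 16.1])

# larger-the-better S/N ratio per run: -10 log10(1/y^2), i.e. 20 log10(y)
sn = -10 * np.log10(1.0 / rop**2)

for j, name in enumerate(factors):
    means = [sn[L9[:, j] == lvl].mean() for lvl in range(3)]
    print(f"{name:12s} mean S/N by level: {np.round(means, 2)} "
          f"range: {max(means) - min(means):.2f}")
# the factor with the largest S/N range is the most influential on ROP
```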
Multiple sensitive estimation and optimal sample size allocation in the item sum technique.
Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz
2018-01-01
For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, have not been studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys, conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Jangdey, Manmohan Singh; Gupta, Anshita; Saraf, Shailendra; Saraf, Swarnlata
2017-11-01
The aim of this work was to apply a Box-Behnken design to optimize transfersomes formulated by a modified rotary evaporation sonication technique using the surfactant Tween 80. The response surface methodology used three factors at three levels. The prepared formulations were characterized for vesicle shape, size, entrapment efficiency (%), stability, and in vitro permeation. The results showed a drug entrapment of 84.24%, with an average vesicle size of 35.41 nm and a drug loading of 8.042%. The optimized formulation showed good stability and is a promising approach to improving the permeability of apigenin in sustained release over a prolonged period of time.
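For reference, the three-factor, three-level Box-Behnken design referred to above consists of 12 edge-midpoint runs plus replicated center points and can be constructed directly in coded units; the mapping of columns to actual formulation factors is not given in the abstract, so the assignment in the final comment is hypothetical:

```python
import itertools
import numpy as np


def box_behnken_3(n_center=3):
    """Three-factor Box-Behnken design in coded units (-1, 0, +1)."""
    runs = []
    for i, j in itertools.combinations(range(3), 2):   # vary factors pairwise
        for a, b in itertools.product((-1, 1), repeat=2):
            run = [0, 0, 0]
            run[i], run[j] = a, b                      # third factor at center
            runs.append(run)
    runs += [[0, 0, 0]] * n_center                     # replicated center runs
    return np.array(runs)


design = box_behnken_3()
print(design.shape)   # (15, 3): 12 edge midpoints + 3 center points
# columns could code e.g. lipid amount, surfactant level, sonication time
```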
Optimized random phase only holograms.
Zea, Alejandro Velez; Barrera Ramirez, John Fredy; Torroba, Roberto
2018-02-15
We propose a simple and efficient technique capable of generating Fourier phase only holograms with a reconstruction quality similar to the results obtained with the Gerchberg-Saxton (G-S) algorithm. Our proposal is to use the traditional G-S algorithm to optimize a random phase pattern for the resolution, pixel size, and target size of the general optical system without any specific amplitude data. This produces an optimized random phase (ORAP), which is used for fast generation of phase only holograms of arbitrary amplitude targets. This ORAP needs to be generated only once for a given optical system, avoiding the need for costly iterative algorithms for each new target. We show numerical and experimental results confirming the validity of the proposal.
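A numpy sketch of one reading of the proposal: run a standard G-S loop once against a flat amplitude window to pre-optimize a random image-plane phase (the ORAP), then generate a hologram for any new target in a single non-iterative step. The system parameters and construction details are assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 256
win = np.zeros((N, N))
win[96:160, 96:160] = 1.0                            # target region of the system


def gs_orap(window, iters=100):
    """G-S loop against a flat window: returns the image-plane phase (ORAP)
    that makes the hologram-plane amplitude nearly uniform."""
    img_phase = rng.uniform(-np.pi, np.pi, window.shape)
    for _ in range(iters):
        holo = np.fft.ifft2(window * np.exp(1j * img_phase))
        holo = np.exp(1j * np.angle(holo))           # phase-only constraint
        img_phase = np.angle(np.fft.fft2(holo))      # propagate, keep phase
    return img_phase


orap = gs_orap(win)                                  # computed once per system

# fast, non-iterative hologram for an arbitrary new target amplitude:
target = win * np.linspace(0, 1, N)[None, :]         # smooth amplitude in window
hologram = np.angle(np.fft.ifft2(target * np.exp(1j * orap)))
recon = np.abs(np.fft.fft2(np.exp(1j * hologram)))   # qualitative check
r = np.corrcoef(recon[win > 0], target[win > 0])[0, 1]
print(f"reconstruction/target correlation inside window: {r:.3f}")
```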
Discrete size optimization of steel trusses using a refined big bang-big crunch algorithm
NASA Astrophysics Data System (ADS)
Hasançebi, O.; Kazemzadeh Azad, S.
2014-01-01
This article presents a methodology for design optimization of steel truss structures based on a refined big bang-big crunch (BB-BC) algorithm. It is shown that a standard formulation of the BB-BC algorithm occasionally falls short of producing acceptable solutions to problems of discrete size optimum design of steel trusses. A reformulation of the algorithm is proposed and implemented for design optimization of various discrete truss structures according to American Institute of Steel Construction Allowable Stress Design (AISC-ASD) specifications. Furthermore, the performance of the proposed BB-BC algorithm is compared to its standard version as well as other well-known metaheuristic techniques. The numerical results confirm the efficiency of the proposed algorithm in practical design optimization of truss structures.
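The standard BB-BC loop that the article refines alternates a "big crunch" contraction to a fitness-weighted center of mass with a "big bang" re-scatter whose radius shrinks each iteration. A generic continuous-variable sketch follows; the article's discrete section selection and AISC-ASD constraint handling are not reproduced, and the objective is a placeholder:

```python
import numpy as np

rng = np.random.default_rng(4)


def sphere(x):
    """Placeholder objective to minimize; assumed positive-valued."""
    return float(np.sum(x**2))


def bb_bc(f, lo, hi, pop=40, iters=200):
    dim = len(lo)
    X = rng.uniform(lo, hi, (pop, dim))              # initial big bang
    best, fbest = None, np.inf
    for k in range(1, iters + 1):
        fit = np.array([f(x) for x in X])
        if fit.min() < fbest:
            fbest, best = fit.min(), X[fit.argmin()].copy()
        # big crunch: center of mass weighted by inverse fitness
        w = 1.0 / (fit + 1e-12)
        center = (w[:, None] * X).sum(axis=0) / w.sum()
        # big bang: scatter around the center, radius shrinking with iteration k
        radius = (hi - lo) * rng.standard_normal((pop, dim)) / k
        X = np.clip(center + radius, lo, hi)
    return best, fbest


lo, hi = np.full(5, -10.0), np.full(5, 10.0)
print(bb_bc(sphere, lo, hi))
```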
Habitat Design Optimization and Analysis
NASA Technical Reports Server (NTRS)
SanSoucie, Michael P.; Hull, Patrick V.; Tinker, Michael L.
2006-01-01
Long-duration surface missions to the Moon and Mars will require habitats for the astronauts. The materials chosen for the habitat walls play a direct role in the protection against the harsh environments found on the surface. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Advanced optimization techniques are necessary for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat design optimization tool utilizing genetic algorithms has been developed. Genetic algorithms use a "survival of the fittest" philosophy, where the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multi-objective formulation of structural analysis, heat loss, radiation protection, and meteoroid protection. This paper presents the research and development of this tool.
Integrated topology and shape optimization in structural design
NASA Technical Reports Server (NTRS)
Bremicker, M.; Chirehdast, M.; Kikuchi, N.; Papalambros, P. Y.
1990-01-01
Structural optimization procedures usually start from a given design topology and vary its proportions or boundary shapes to achieve optimality under various constraints. Two different categories of structural optimization are distinguished in the literature, namely sizing and shape optimization. A major restriction in both cases is that the design topology is considered fixed and given. Questions concerning the general layout of a design (such as whether a truss or a solid structure should be used) as well as more detailed topology features (e.g., the number and connectivities of bars in a truss or the number of holes in a solid) have to be resolved by design experience before formulating the structural optimization model. Design quality of an optimized structure still depends strongly on engineering intuition. This article presents a novel approach for initiating formal structural optimization at an earlier stage, where the design topology is rigorously generated in addition to selecting shape and size dimensions. A three-phase design process is discussed: an optimal initial topology is created by a homogenization method as a gray level image, which is then transformed to a realizable design using computer vision techniques; this design is then parameterized and treated in detail by sizing and shape optimization. A fully automated process is described for trusses. Optimization of two dimensional solid structures is also discussed. Several application-oriented examples illustrate the usefulness of the proposed methodology.
NASA Technical Reports Server (NTRS)
Adams, J. R.; Hawley, S. W.; Peterson, G. R.; Salinger, S. S.; Workman, R. A.
1971-01-01
A hardware and software specification covering requirements for the computer enhancement of structural weld radiographs was considered. Three scanning systems were used to digitize more than 15 weld radiographs. The performance of these systems was evaluated by determining modulation transfer functions and noise characteristics. Enhancement techniques were developed and applied to the digitized radiographs. The scanning parameters of spot size and spacing and film density were studied to optimize the information content of the digital representation of the image.
PSO-Based Smart Grid Application for Sizing and Optimization of Hybrid Renewable Energy Systems
Mohamed, Mohamed A.; Eltamaly, Ali M.; Alolah, Abdulrahman I.
2016-01-01
This paper introduces an optimal sizing algorithm for a hybrid renewable energy system using smart grid load management application based on the available generation. This algorithm aims to maximize the system energy production and meet the load demand with minimum cost and highest reliability. This system is formed by photovoltaic array, wind turbines, storage batteries, and diesel generator as a backup source of energy. Demand profile shaping as one of the smart grid applications is introduced in this paper using load shifting-based load priority. Particle swarm optimization is used in this algorithm to determine the optimum size of the system components. The results obtained from this algorithm are compared with those from the iterative optimization technique to assess the adequacy of the proposed algorithm. The study in this paper is performed in some of the remote areas in Saudi Arabia and can be expanded to any similar regions around the world. Numerous valuable results are extracted from this study that could help researchers and decision makers. PMID:27513000
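A minimal particle swarm optimization loop of the kind applied here to component sizing is sketched below. The decision vector (counts of PV arrays, wind turbines, and battery strings), the unit costs, and the demand-shortfall penalty standing in for reliability are all illustrative assumptions, not the paper's model.

```python
import random

def cost(x):
    # x = (n_pv, n_wt, n_batt); invented unit costs plus a penalty that
    # stands in for loss-of-power-supply-probability constraints.
    n_pv, n_wt, n_batt = (max(0, round(v)) for v in x)
    capacity = 0.25 * n_pv + 1.5 * n_wt + 0.1 * n_batt   # kW-equivalent
    demand = 50.0
    shortfall = max(0.0, demand - capacity) ** 2
    return 600 * n_pv + 2500 * n_wt + 200 * n_batt + 1e4 * shortfall

def pso(dim=3, swarm=30, iters=300, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(0, 100) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=cost)
    return [round(v) for v in gbest], cost(gbest)

print(pso())
```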
Yoo, Do Guen; Lee, Ho Min; Sadollah, Ali; Kim, Joong Hoon
2015-01-01
Water supply systems are mainly classified into branched and looped network systems. The main difference between these two systems is that, in a branched network system, the flow within each pipe is a known value, whereas in a looped network system, the flow in each pipe is considered an unknown value. Therefore, the analysis of a looped network system is a more complex task. This study aims to develop a technique for estimating the optimal pipe diameter for a looped agricultural irrigation water supply system using the harmony search algorithm, an optimization technique. This study mainly serves two purposes. The first is to develop an algorithm and a program for estimating a cost-effective pipe diameter for agricultural irrigation water supply systems using optimization techniques. The second is to validate the developed program by applying the proposed optimized cost-effective pipe diameter to an actual study region (Saemangeum project area, zone 6). The results suggest that the optimal design program, which applies optimization theory and enhances user convenience, can be effectively applied to real looped agricultural irrigation water supply systems.
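The harmony search mechanics the study uses (harmony memory consideration, pitch adjustment, random selection) can be sketched compactly. The diameter catalogue, costs, and the crude capacity penalty below are invented surrogates; a faithful version would couple the search to a hydraulic solver for the looped network.

```python
import random

DIAMS = [80, 100, 150, 200, 250, 300, 400]       # mm, illustrative catalogue
COST = {80: 23, 100: 32, 150: 50, 200: 60, 250: 90, 300: 120, 400: 170}

def objective(design):
    # Invented surrogate: pipe cost plus a penalty when total flow area is
    # too small to meet demand (a stand-in for hydraulic simulation).
    area = sum(3.1416 * (d / 2000.0) ** 2 for d in design)
    penalty = max(0.0, 0.25 - area) * 1e5
    return sum(COST[d] for d in design) + penalty

def harmony_search(n_pipes=8, hms=20, iters=2000, hmcr=0.9, par=0.3):
    memory = [[random.choice(DIAMS) for _ in range(n_pipes)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for j in range(n_pipes):
            if random.random() < hmcr:                 # memory consideration
                v = random.choice(memory)[j]
                if random.random() < par:              # pitch adjustment
                    i = DIAMS.index(v) + random.choice((-1, 1))
                    v = DIAMS[max(0, min(len(DIAMS) - 1, i))]
            else:                                      # random selection
                v = random.choice(DIAMS)
            new.append(v)
        worst = max(memory, key=objective)
        if objective(new) < objective(worst):
            memory[memory.index(worst)] = new
    return min(memory, key=objective)

print(harmony_search())
```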
Application of cokriging techniques for the estimation of hail size
NASA Astrophysics Data System (ADS)
Farnell, Carme; Rigo, Tomeu; Martin-Vide, Javier
2018-01-01
There are primarily two ways of estimating hail size: the first is the direct interpolation of point observations, and the second is the transformation of remote sensing fields into measurements of hail properties. Both techniques have advantages and limitations as regards generating the resultant map of hail damage. This paper presents a new methodology that combines the above mentioned techniques in an attempt to minimise the limitations and take advantage of the benefits of interpolation and the use of remote sensing data. The methodology was tested for several episodes with good results being obtained for the estimation of hail size at practically all the points analysed. The study area presents a large database of hail episodes, and for this reason, it constitutes an optimal test bench.
Effective Padding of Multi-Dimensional Arrays to Avoid Cache Conflict Misses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Changwan; Bao, Wenlei; Cohen, Albert
Caches are used to significantly improve performance. Even with high degrees of set-associativity, the number of accessed data elements mapping to the same set in a cache can easily exceed the degree of associativity, causing conflict misses and lowered performance, even if the working set is much smaller than cache capacity. Array padding (increasing the size of array dimensions) is a well-known optimization technique that can reduce conflict misses. In this paper, we develop the first algorithms for optimal padding of arrays for a set-associative cache for arbitrary tile sizes. In addition, we develop the first solution to padding for nested tiles and multi-level caches. The techniques are implemented in the PAdvisor tool. Experimental results with multiple benchmarks demonstrate significant performance improvement from the use of PAdvisor for padding.
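To see why padding helps, one can count how many distinct cache sets the row starts of an array touch for a given leading dimension. The sketch below assumes a typical 32 KiB, 8-way, 64-byte-line cache; it illustrates the conflict mechanism only and is not the PAdvisor algorithm.

```python
# Count distinct cache sets touched by the first element of each row for a
# given leading dimension. Cache geometry (32 KiB, 8-way, 64-B lines) is a
# typical L1 configuration, assumed here for illustration.
LINE = 64
SETS = 32 * 1024 // (LINE * 8)   # 64 sets

def sets_touched(n_rows, row_elems, elem_size=8):
    stride = row_elems * elem_size                  # bytes between row starts
    return len({(r * stride // LINE) % SETS for r in range(n_rows)})

# A 512x512 double array: the row stride is a multiple of the set span, so
# column-wise accesses hammer a single set and conflict badly.
print(sets_touched(64, 512))   # -> 1 set: severe conflict misses
# Padding each row by one extra cache line (8 doubles) breaks the alignment.
print(sets_touched(64, 520))   # -> 64 sets: conflicts largely avoided
```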
A new hybrid meta-heuristic algorithm for optimal design of large-scale dome structures
NASA Astrophysics Data System (ADS)
Kaveh, A.; Ilchi Ghazaan, M.
2018-02-01
In this article a hybrid algorithm based on a vibrating particles system (VPS) algorithm, multi-design variable configuration (Multi-DVC) cascade optimization, and an upper bound strategy (UBS) is presented for global optimization of large-scale dome truss structures. The new algorithm is called MDVC-UVPS in which the VPS algorithm acts as the main engine of the algorithm. The VPS algorithm is one of the most recent multi-agent meta-heuristic algorithms mimicking the mechanisms of damped free vibration of single degree of freedom systems. In order to handle a large number of variables, cascade sizing optimization utilizing a series of DVCs is used. Moreover, the UBS is utilized to reduce the computational time. Various dome truss examples are studied to demonstrate the effectiveness and robustness of the proposed method, as compared to some existing structural optimization techniques. The results indicate that the MDVC-UVPS technique is a powerful search and optimization method for optimizing structural engineering problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michelsen, H. A.; Schulz, C.; Smallwood, G. J.
The understanding of soot formation in combustion processes and the optimization of practical combustion systems require in situ measurement techniques that can provide important characteristics, such as particle concentrations and sizes, under a variety of conditions. Of equal importance are techniques suitable for characterizing soot particles produced from incomplete combustion and emitted into the environment. Also, the production of engineered nanoparticles, such as carbon blacks, may benefit from techniques that allow for online monitoring of these processes.
Performance optimization of an MHD generator with physical constraints
NASA Technical Reports Server (NTRS)
Pian, C. C. P.; Seikel, G. R.; Smith, J. M.
1979-01-01
A technique has been described which optimizes the power output of a Faraday MHD generator operating under a prescribed set of electrical and magnetic constraints. The method does not rely on complicated numerical optimization techniques. Instead, the magnetic field and the electrical loading are adjusted at each streamwise location such that the resultant generator design operates at the most limiting of the cited stress levels. The simplicity of the procedure makes it ideal for optimizing generator designs for system analysis studies of power plants. The resultant locally optimum channel designs are, however, not necessarily the global optimum designs. The results of generator performance calculations are presented for an approximately 2000 MWe size plant. The difference between the maximum-power generator design and the optimal design which maximizes net MHD power is described. The sensitivity of the generator performance to the various operational parameters is also presented.
Optimization of printing techniques for electrochemical biosensors
NASA Astrophysics Data System (ADS)
Zainuddin, Ahmad Anwar; Mansor, Ahmad Fairuzabadi Mohd; Rahim, Rosminazuin Ab; Nordin, Anis Nurashikin
2017-03-01
Electrochemical biosensors show great promise for point-of-care applications due to their low cost, portability and compatibility with microfluidics. The miniature size of these sensors provides advantages in terms of sensitivity and specificity, and allows them to be mass produced in arrays. The most reliable fabrication technique for these sensors is lithography followed by metal deposition using sputtering or chemical vapor deposition. This approach, usually done in the cleanroom, requires expensive masking followed by deposition. Recently, cheaper printing techniques such as screen printing and ink-jet printing have become popular due to their low cost, ease of fabrication and mask-less nature. In this paper, two different printing techniques, namely inkjet and screen printing, are demonstrated for an electrochemical biosensor. For the ink-jet printing technique, the optimization of key printing parameters for obtaining high-quality droplets, such as pulse voltage, drop spacing, waveform settings, in-house temperature and cure annealing, is discussed. These factors are compared with screen-printing parameters such as mesh size, emulsion thickness, minimum line spacing and curing times. The reliability and reproducibility of the sensors are evaluated using the scotch tape test, resistivity and profile-meter measurements. It was found that inkjet printing is superior because it is mask-less, has a minimum resolution of 100 µm compared to 200 µm for screen printing, and has a higher reproducibility rate of 90% compared to 78% for screen printing.
NASA Technical Reports Server (NTRS)
Henderson, M. L.
1979-01-01
The benefits to high lift system maximum lift and, alternatively, to high lift system complexity, of applying analytic design and analysis techniques to the design of high lift sections for flight conditions were determined, and two high lift sections were designed to flight conditions. The influence of the high lift section on the sizing and economics of a specific energy efficient transport (EET) was clarified using a computerized sizing technique and an existing advanced airplane design data base. The impact of the best design resulting from the design applications studies on EET sizing and economics was evaluated. Flap technology trade studies, climb and descent studies, and augmented stability studies are included along with a description of the baseline high lift system geometry, a calculation of lift and pitching moment when separation is present, and an inverse boundary layer technique for pressure distribution synthesis and optimization.
Lean and Efficient Software: Whole-Program Optimization of Executables
2015-09-30
Many levels of library interfaces (where some libraries are dynamically linked and some are provided in binary form only) significantly limit ... software at build time. The opportunity: our objective in this project is to substantially improve the performance, size, and robustness of binary executables by using static and dynamic binary program analysis techniques to perform whole-program optimization directly on compiled programs.
Software For Integer Programming
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1992-01-01
Improved Exploratory Search Technique for Pure Integer Linear Programming Problems (IESIP) program optimizes objective function of variables subject to confining functions or constraints, using discrete optimization or integer programming. Enables rapid solution of problems up to 10 variables in size. Integer programming required for accuracy in modeling systems containing small number of components, distribution of goods, scheduling operations on machine tools, and scheduling production in general. Written in Borland's TURBO Pascal.
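In the same spirit, a small pure integer program can be solved by exhaustive search over the discrete grid, which is viable at the roughly 10-variable scale IESIP targets. The objective and constraints below are an invented example, and the sketch is in Python rather than the program's TURBO Pascal.

```python
from itertools import product

# Toy integer program: maximize 3x + 4y + 2z
# subject to 2x + 3y + z <= 14, x + 2y + 3z <= 10, 0 <= x, y, z <= 5.
# At this size, exhaustive search over the integer grid is sufficient.
best, best_val = None, float("-inf")
for x, y, z in product(range(6), repeat=3):
    if 2*x + 3*y + z <= 14 and x + 2*y + 3*z <= 10:
        val = 3*x + 4*y + 2*z
        if val > best_val:
            best, best_val = (x, y, z), val

print(best, best_val)   # -> (5, 1, 1) 21 for this example
```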
Mohamed, Ahmed F; Elarini, Mahdi M; Othman, Ahmed M
2014-05-01
One of the most recent optimization techniques applied to the optimal design of a photovoltaic system to supply an isolated load demand is the Artificial Bee Colony (ABC) algorithm. The proposed methodology is applied to optimize the cost of the PV system, including the photovoltaic modules, a battery bank, a battery charge controller, and an inverter. Two objective functions are proposed: the first is the PV module output power, which is to be maximized, and the second is the life cycle cost (LCC), which is to be minimized. The analysis is performed based on solar radiation and ambient temperature measured at Helwan city, Egypt. A comparison between the ABC algorithm and Genetic Algorithm (GA) optimal results is made. Another location, Zagazig city, is selected to check the validity of the ABC algorithm in other locations. The ABC algorithm yields better optima than the GA. The results encourage the use of PV systems to electrify the rural sites of Egypt.
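A compact sketch of the employed/onlooker/scout structure of ABC follows, applied to an invented life-cycle-cost surrogate over PV, battery, and inverter sizes. Note that the onlooker phase is simplified here to a second greedy pass rather than the fitness-proportional selection of the full algorithm.

```python
import random

def lcc(x):
    # Placeholder life-cycle-cost surrogate over (PV kW, battery kWh,
    # inverter kW); coefficients invented, with a penalty when capacity
    # falls short of an assumed demand.
    pv, batt, inv = (max(0.0, v) for v in x)
    penalty = max(0.0, 30.0 - (0.2 * pv + 0.05 * batt)) ** 2
    return 900 * pv + 150 * batt + 300 * inv + 1e3 * penalty

def abc(dim=3, n_food=20, iters=300, limit=30, lo=0.0, hi=300.0):
    foods = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    trials = [0] * n_food
    def neighbor(i):
        k, d = random.randrange(n_food), random.randrange(dim)
        new = foods[i][:]
        new[d] += random.uniform(-1, 1) * (foods[i][d] - foods[k][d])
        return new
    for _ in range(iters):
        for _phase in range(2):                      # employed, then onlooker bees
            for i in range(n_food):
                cand = neighbor(i)
                if lcc(cand) < lcc(foods[i]):
                    foods[i], trials[i] = cand, 0
                else:
                    trials[i] += 1
        for i in range(n_food):                      # scout bees replace stale food
            if trials[i] > limit:
                foods[i] = [random.uniform(lo, hi) for _ in range(dim)]
                trials[i] = 0
    return min(foods, key=lcc)

print([round(v, 1) for v in abc()])
```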
NASA Astrophysics Data System (ADS)
Tsou, Jia-Chi; Hejazi, Seyed Reza; Rasti Barzoki, Morteza
2012-12-01
The economic production quantity (EPQ) model is a well-known and commonly used inventory control technique. However, the model is built on an unrealistic assumption that all the produced items need to be of perfect quality. Having relaxed this assumption, some researchers have studied the effects of the imperfect products on the inventory control techniques. This article, thus, attempts to develop an EPQ model with continuous quality characteristic and rework. To this end, this study assumes that a produced item follows a general distribution pattern, with its quality being perfect, imperfect or defective. The analysis of the model developed indicates that there is an optimal lot size, which generates minimum total cost. Moreover, the results show that the optimal lot size of the model equals that of the classical EPQ model in case imperfect quality percentage is zero or even close to zero.
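For reference, the classical EPQ lot size that this model reduces to as the imperfect fraction approaches zero is Q* = sqrt(2KD / (h(1 - D/P))). A short computation with illustrative cost parameters:

```python
from math import sqrt

def epq(K, D, P, h):
    """Classical EPQ lot size: Q* = sqrt(2KD / (h(1 - D/P))).

    K: setup cost per run, D: demand rate, P: production rate (P > D),
    h: holding cost per unit per period. The article's model reduces to
    this as the imperfect-quality percentage approaches zero.
    """
    return sqrt(2 * K * D / (h * (1 - D / P)))

# Illustrative numbers only.
print(round(epq(K=200.0, D=1000.0, P=4000.0, h=5.0), 1))   # ~326.6 units
```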
Qu, Haiou; Wang, Jiang; Wu, Yong; Zheng, Jiwen; Krishnaiah, Yellela S R; Absar, Mohammad; Choi, Stephanie; Ashraf, Muhammad; Cruz, Celia N; Xu, Xiaoming
2018-03-01
Commonly used characterization techniques such as cryogenic-transmission electron microscopy (cryo-TEM) and batch-mode dynamic light scattering (DLS) are either time consuming or unable to offer high resolution to discern the poly-dispersity of complex drug products like cyclosporine ophthalmic emulsions. Here, a size-based separation and characterization method for globule size distribution using an asymmetric flow field flow fractionation (AF4) is reported for comparative assessment of cyclosporine ophthalmic emulsion drug products (model formulation) with a wide size span and poly-dispersity. Cyclosporine emulsion formulations that are qualitatively (Q1) and quantitatively (Q2) the same as Restasis® were prepared in house with varying manufacturing processes and analyzed using the optimized AF4 method. Based on our results, the commercially available cyclosporine ophthalmic emulsion has a globule size span from 30 nm to a few hundred nanometers with majority smaller than 100 nm. The results with in-house formulations demonstrated the sensitivity of AF4 in determining the differences in the globule size distribution caused by the changes to the manufacturing process. It is concluded that the optimized AF4 is a potential analytical technique for comprehensive understanding of the microstructure and assessment of complex emulsion drug products with high poly-dispersity.
[Analysis of visible extinction spectrum of particle system and selection of optimal wavelength].
Sun, Xiao-gang; Tang, Hong; Yuan, Gui-bin
2008-09-01
In the total light scattering particle sizing technique, the extinction spectrum of a particle system contains information about the particle size and refractive index. The visible extinction spectra of common monomodal and bimodal R-R particle size distributions were computed, and the variation in the visible extinction spectrum with particle size and refractive index was analyzed. The wavelengths at which the second-order differential extinction spectrum was discontinuous were selected as measurement wavelengths. Furthermore, the minimum and maximum wavelengths in the visible region were also selected as measurement wavelengths. The genetic algorithm was used as the inversion method under the dependent model. The computer simulation and experiments illustrate that it is feasible to analyze the extinction spectrum and use this selection method of the optimal wavelength in total light scattering particle sizing. The rough contour of the particle size distribution can be determined after the analysis of the visible extinction spectrum, so the search range of the particle size parameter is reduced in the optimal algorithm, and a more accurate inversion result can then be obtained using the selection method. The inversion results for monomodal and bimodal distributions remain satisfactory when 1% stochastic noise is added to the transmission extinction measurement values.
Mehrabanian, Mehran; Nasr-Esfahani, Mojtaba
2011-01-01
Nanohydroxyapatite (n-HA)/nylon 6,6 composite scaffolds were produced by means of the salt-leaching/solvent casting technique. NaCl with a distinct range size was used with the aim of optimizing the pore network. Composite powders with different n-HA contents (40%, 60%) for scaffold fabrication were synthesized and tested. The composite scaffolds thus obtained were characterized for their microstructure, mechanical stability and strength, and bioactivity. The microstructure of the composite scaffolds possessed a well-developed interconnected porosity with approximate optimal pore size ranging from 200 to 500 μm, ideal for bone regeneration and vascularization. The mechanical properties of the composite scaffolds were evaluated by compressive strength and modulus tests, and the results confirmed their similarity to cortical bone. To characterize bioactivity, the composite scaffolds were immersed in simulated body fluid for different lengths of time and results monitored by scanning electron microscopy and energy dispersive X-ray microanalysis to determine formation of an apatite layer on the scaffold surface. PMID:21904455
Configuration-shape-size optimization of space structures by material redistribution
NASA Technical Reports Server (NTRS)
Vandenbelt, D. N.; Crivelli, L. A.; Felippa, C. A.
1993-01-01
This project investigates the configuration-shape-size optimization (CSSO) of orbiting and planetary space structures. The project embodies three phases. In the first one the material-removal CSSO method introduced by Kikuchi and Bendsoe (KB) is further developed to gain understanding of finite element homogenization techniques as well as associated constrained optimization algorithms that must carry along a very large number (thousands) of design variables. In the CSSO-KB method an optimal structure is 'carved out' of a design domain initially filled with finite elements, by allowing perforations (microholes) to develop, grow and merge. The second phase involves 'materialization' of space structures from the void, thus reversing the carving process. The third phase involves analysis of these structures for construction and operational constraints, with emphasis in packaging and deployment. The present paper describes progress in selected areas of the first project phase and the start of the second one.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habib, Hany F; El Hariri, Mohamad; Elsayed, Ahmed
Microgrids' adaptive protection techniques rely on communication signals from the point of common coupling to adjust the corresponding relays' settings for either grid-connected or islanded modes of operation. However, during communication outages or in the event of a cyberattack, relay settings are not changed, and adaptive protection schemes are rendered unsuccessful. Due to their fast response, supercapacitors, which are present in the microgrid to feed pulse loads, could also be utilized to enhance the resiliency of adaptive protection schemes to communication outages. Proper sizing of the supercapacitors is therefore important in order to maintain a stable system operation and also regulate the protection scheme's cost. This paper presents a two-level optimization scheme for minimizing the supercapacitor size along with optimizing its controllers' parameters. The latter leads to a reduction of the supercapacitor fault current contribution and an increase in that of other AC resources in the microgrid in the extreme case of a fault occurring simultaneously with a pulse load. It was also shown that the size of the supercapacitor can be reduced if the pulse load is temporarily disconnected during the transient fault period. Simulations showed that the resulting supercapacitor size and the optimized controller parameters from the proposed two-level optimization scheme fed enough fault current for different types of faults while minimizing the cost of the protection scheme.
Improving alpine-region spectral unmixing with optimal-fit snow endmembers
NASA Technical Reports Server (NTRS)
Painter, Thomas H.; Roberts, Dar A.; Green, Robert O.; Dozier, Jeff
1995-01-01
Surface albedo and snow-covered-area (SCA) are crucial inputs to the hydrologic and climatologic modeling of alpine and seasonally snow-covered areas. Because the spectral albedo and thermal regime of pure snow depend on grain size, areal distribution of snow grain size is required. Remote sensing has been shown to be an effective (and necessary) means of deriving maps of grain size distribution and snow-covered-area. Developed here is a technique whereby maps of grain size distribution improve estimates of SCA from spectral mixture analysis with AVIRIS data.
Application of Box-Behnken design to prepare gentamicin-loaded calcium carbonate nanoparticles.
Maleki Dizaj, Solmaz; Lotfipour, Farzaneh; Barzegar-Jalali, Mohammad; Zarrintan, Mohammad-Hossein; Adibkia, Khosro
2016-09-01
The aim of this research was to prepare and optimize calcium carbonate (CaCO3) nanoparticles as carriers for gentamicin sulfate. A chemical precipitation method was used to prepare the gentamicin sulfate-loaded CaCO3 nanoparticles. A 3-factor, 3-level Box-Behnken design was used for the optimization procedure, with the molar ratio of CaCl2:Na2CO3 (X1), the concentration of drug (X2), and the speed of homogenization (X3) as the independent variables. The particle size and entrapment efficiency were considered as response variables. Mathematical equations and response surface plots were used, along with the counter plots, to relate the dependent and independent variables. The results indicated that the speed of homogenization was the main variable contributing to particle size and entrapment efficiency. The combined effect of all three independent variables was also evaluated. Using the response optimization design, the optimized X1-X3 levels were predicted. An optimized formulation was then prepared according to these levels, resulting in a particle size of 80.23 nm and an entrapment efficiency of 30.80%. It was concluded that the chemical precipitation technique, together with the Box-Behnken experimental design methodology, could be successfully used to optimize the formulation of drug-incorporated calcium carbonate nanoparticles.
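The design-generation and model-fitting steps can be sketched directly: the 15-run, 3-factor Box-Behnken matrix in coded units, followed by a least-squares fit of the full quadratic response-surface model. The response values below are synthetic placeholders, not the measured particle sizes or entrapment efficiencies.

```python
import numpy as np
from itertools import combinations

def box_behnken_3():
    # 12 edge midpoints (two factors at +/-1, the third at 0) + 3 center runs.
    runs = []
    for i, j in combinations(range(3), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0, 0, 0]
                row[i], row[j] = a, b
                runs.append(row)
    runs += [[0, 0, 0]] * 3
    return np.array(runs, dtype=float)

def quadratic_model_matrix(X):
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

X = box_behnken_3()
# y would hold the measured responses (e.g., particle size in nm) per run;
# these values are synthetic placeholders.
rng = np.random.default_rng(0)
y = 80 + 10 * X[:, 2] - 5 * X[:, 0] + rng.normal(0, 1, len(X))
coef, *_ = np.linalg.lstsq(quadratic_model_matrix(X), y, rcond=None)
print(np.round(coef, 2))   # intercept, linear, interaction, quadratic terms
```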
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walz-Flannigan, A; Lucas, J; Buchanan, K
Purpose: Manual technique selection in radiography is needed for imaging situations where proper positioning for AEC is difficult, where prostheses are present, for non-bucky imaging, or for guiding image repeats. Basic information about how to provide consistent image signal and contrast for various kV and tissue thicknesses is needed to create manual technique charts, and is relevant for physicists involved in technique chart optimization. Guidance on technique combinations and rules of thumb to provide consistent image signal still in use today is based on optical density measurements of screen-film combinations and older generation x-ray systems. Tools such as a kV-scale chart can be useful for knowing how to modify mAs when kV is changed in order to maintain a consistent image receptor signal level. We evaluate these tools for modern equipment for use in optimizing properly size-scaled techniques. Methods: We used a water phantom to measure calibrated signal change for CR and DR (with grid) for various beam energies. Tube current values were calculated that would yield a consistent image signal response. Data were fit to provide sufficient granularity to compose a technique-scale chart. Tissue thickness was approximated as equivalent to 80% of water depth. Results: We created updated technique-scale charts providing mAs and kV combinations that achieve consistent signal for CR and DR over various tissue-equivalent thicknesses. We show how this information can be used to create properly scaled size-based manual technique charts. Conclusion: Relative scaling of mAs and kV for constant signal (i.e., the shape of the curve) appears substantially similar between film-screen and CR/DR. This supports the notion that image-receptor-related differences are minor factors for relative (not absolute) changes in mAs with varying kV. However, as demonstrated, the creation of these difficult-to-find detailed technique scales yields useful tools for manual chart optimization.
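For context, the classic textbook rule of thumb for adjusting mAs when kV changes, the approximate fifth-power relation behind the "15% rule", can be written as below; the exponent is the standard approximation, not the value fitted from the water-phantom data described above.

```python
def mas_for_new_kv(mas_old, kv_old, kv_new, exponent=5.0):
    """Classic kV-mAs rule of thumb for roughly constant receptor signal:
    mAs_new = mAs_old * (kV_old / kV_new) ** 5.

    The exponent is the textbook approximation, not a value derived from
    the study's measurements.
    """
    return mas_old * (kv_old / kv_new) ** exponent

# Example: moving from 70 kV to 81 kV (about +15%) at an initial 20 mAs
# roughly halves the required mAs.
print(round(mas_for_new_kv(20.0, 70, 81), 1))
```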
Holmes, W J M; Timmons, M J; Kauser, S
2015-10-01
Techniques used to estimate implant size for primary breast augmentation have evolved since the 1970s. Currently no consensus exists on the optimal method to select implant size for primary breast augmentation. In 2013 we asked United Kingdom consultant plastic surgeons who were full members of BAPRAS or BAAPS what was their technique for implant size selection for primary aesthetic breast augmentation. We also asked what was the range of implant sizes they commonly used. The answers to question one were grouped into four categories: experience, measurements, pre-operative external sizers and intra-operative sizers. The response rate was 46% (164/358). Overall, 95% (153/159) of all respondents performed some form of pre-operative assessment, the others relied on "experience" only. The most common technique for pre-operative assessment was by external sizers (74%). Measurements were used by 57% of respondents and 3% used intra-operative sizers only. A combination of measurements and sizers was used by 34% of respondents. The most common measurements were breast base (68%), breast tissue compliance (19%), breast height (15%), and chest diameter (9%). The median implant size commonly used in primary breast augmentation was 300cc. Pre-operative external sizers are the most common technique used by UK consultant plastic surgeons to select implant size for primary breast augmentation. We discuss the above findings in relation to the evolution of pre-operative planning techniques for breast augmentation.
Matrix Dissolution Techniques Applied to Extract and Quantify Precipitates from a Microalloyed Steel
NASA Astrophysics Data System (ADS)
Lu, Junfang; Wiskel, J. Barry; Omotoso, Oladipo; Henein, Hani; Ivey, Douglas G.
2011-07-01
Microalloyed steels possess good strength and toughness, as well as excellent weldability; these attributes are necessary for oil and gas pipelines in northern climates. These properties are attributed in part to the presence of nanosized carbide and carbonitride precipitates. To understand the strengthening mechanisms and to optimize the strengthening effects, it is necessary to quantify the size distribution, volume fraction, and chemical speciation of these precipitates. However, characterization techniques suitable for quantifying fine precipitates are limited because of their fine sizes, wide particle size distributions, and low volume fractions. In this article, two matrix dissolution techniques have been developed to extract precipitates from a Grade100 (yield strength of 690 MPa) microalloyed steel. Relatively large volumes of material can be analyzed, and statistically significant quantities of precipitates of different sizes are collected. Transmission electron microscopy (TEM) and X-ray diffraction (XRD) are combined to analyze the chemical speciation of these precipitates. Rietveld refinement of XRD patterns is used to quantify fully the relative amounts of the precipitates. The size distribution of the nanosized precipitates is quantified using dark-field imaging in the TEM.
NASA Astrophysics Data System (ADS)
Sengupta, Avery; Gupta, Surashree Sen; Ghosh, Mahua
2013-03-01
The purpose of the present study was to obtain optimal processing conditions for the preparation of a uniform-sized nanoemulsion of conjugated linolenic acid (CLnA) rich oil, to increase the oxidative stability of CLnA, by using a high-speed disperser (HSD) and ultrasonication. The emulsifiers used were egg phospholipid and soya protein isolate. The effects of oil concentration [0.05 to 1.25 % (w/w)], emulsifier ratio [0.1:0.9 to 0.9:0.1 (phospholipid:protein)], speed of the HSD (2,000 to 12,000 rpm) and duration of HSD and sonication treatments (10 to 50 min) were observed. Optimization was performed with and without response surface methodology (RSM). The optimum compositional variables were an oil concentration of 1 % and a phospholipid:protein molar ratio of 0.5:0.5. Maximum size reduction occurred at an HSD speed of 10,000 rpm. HSD should be administered for 40 min followed by 40 min of ultrasonication. The size of the droplets in the nanoemulsion ranged between 173 ± 1.20 and 183 ± 0.94 nm. Nanoemulsification is a size-reduction technique in which the oil present in the emulsion can be easily stabilized, which increases the shelf-life of the oil. In the present study, the derived reaction parameters were optimized using RSM to produce a nanoemulsion of CLnA-rich oil of minimum droplet size and maximum stability.
Design optimization of steel frames using an enhanced firefly algorithm
NASA Astrophysics Data System (ADS)
Carbas, Serdar
2016-12-01
Mathematical modelling of real-world-sized steel frames under the Load and Resistance Factor Design-American Institute of Steel Construction (LRFD-AISC) steel design code provisions, where the steel profiles for the members are selected from a table of steel sections, turns out to be a discrete nonlinear programming problem. Finding the optimum design of such design optimization problems using classical optimization techniques is difficult. Metaheuristic algorithms provide an alternative way of solving such problems. The firefly algorithm (FFA) belongs to the swarm intelligence group of metaheuristics. The standard FFA has the drawback of being caught up in local optima in large-sized steel frame design problems. This study attempts to enhance the performance of the FFA by suggesting two new expressions for the attractiveness and randomness parameters of the algorithm. Two real-world-sized design examples are designed by the enhanced FFA and its performance is compared with standard FFA as well as with particle swarm and cuckoo search algorithms.
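The core firefly move the enhancement builds on, with attractiveness beta(r) = beta0 * exp(-gamma * r^2) plus a randomness term, is sketched below on a placeholder objective; the article's new expressions for the attractiveness and randomness parameters are not reproduced here.

```python
import random, math

def sphere(x):               # placeholder objective (minimize)
    return sum(v * v for v in x)

def firefly(dim=5, n=25, iters=200, beta0=1.0, gamma=1.0, alpha=0.2):
    xs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        snapshot = sorted(xs, key=sphere)          # lower cost = brighter
        for i in range(n):
            for j_pos in snapshot:
                if sphere(j_pos) < sphere(xs[i]):  # move toward brighter flies
                    r2 = sum((a - b) ** 2 for a, b in zip(xs[i], j_pos))
                    beta = beta0 * math.exp(-gamma * r2)
                    xs[i] = [a + beta * (b - a)
                             + alpha * random.uniform(-0.5, 0.5)
                             for a, b in zip(xs[i], j_pos)]
        alpha *= 0.98                              # decaying randomness
    return min(xs, key=sphere)

print([round(v, 3) for v in firefly()])
```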
Low cost Ku-band earth terminals for voice/data/facsimile
NASA Technical Reports Server (NTRS)
Kelley, R. L.
1977-01-01
A Ku-band satellite earth terminal capable of providing two way voice/facsimile teleconferencing, 128 Kbps data, telephone, and high-speed imagery services is proposed. Optimized terminal cost and configuration are presented as a function of FDMA and TDMA approaches to multiple access. The entire terminal from the antenna to microphones, speakers and facsimile equipment is considered. Component cost versus performance has been projected as a function of size of the procurement and predicted hardware innovations and production techniques through 1985. The lowest cost combinations of components have been determined in a computer optimization algorithm. The system requirements including terminal EIRP and G/T, satellite size, power per spacecraft transponder, satellite antenna characteristics, and link propagation outage were selected using a computerized system cost/performance optimization algorithm. System cost and terminal cost and performance requirements are presented as a function of the size of a nationwide U.S. network. Service costs are compared with typical conference travel costs to show the viability of the proposed terminal.
NASA Astrophysics Data System (ADS)
Asaithambi, Sasikumar; Rajappa, Muthaiah
2018-05-01
In this paper, an automatic design method based on a swarm intelligence approach for CMOS analog integrated circuit (IC) design is presented. The hybrid meta-heuristic optimization technique, namely, the salp swarm algorithm (SSA), is applied to the optimal sizing of a CMOS differential amplifier and a comparator circuit. SSA is a nature-inspired optimization algorithm which mimics the navigating and hunting behavior of salps. The hybrid SSA is applied to optimize the circuit design parameters and to minimize the MOS transistor sizes. The proposed swarm intelligence approach was successfully implemented for automatic design and optimization of CMOS analog ICs using Generic Process Design Kit (GPDK) 180 nm technology. The circuit design parameters and design specifications are validated through a Simulation Program with Integrated Circuit Emphasis (SPICE) simulator. To investigate the efficiency of the proposed approach, comparisons have been carried out with other simulation-based circuit design methods. The performance of hybrid SSA-based CMOS analog IC designs is better than that of previously reported studies.
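The salp chain update is simple enough to sketch: the leading half of the chain orbits the food source (best solution so far) with coefficient c1 = 2*exp(-(4t/T)^2), while each follower averages with its predecessor. The objective below is a placeholder, not a transistor-sizing cost model.

```python
import random, math

def cost(x):                     # placeholder for the circuit-sizing objective
    return sum((v - 1.0) ** 2 for v in x)

def ssa(dim=4, n=30, iters=300, lb=-5.0, ub=5.0):
    salps = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(n)]
    food = min(salps, key=cost)[:]
    for t in range(1, iters + 1):
        c1 = 2 * math.exp(-(4 * t / iters) ** 2)
        for i in range(n):
            if i < n // 2:                       # leaders orbit the food source
                for d in range(dim):
                    c2, c3 = random.random(), random.random()
                    step = c1 * ((ub - lb) * c2 + lb)
                    salps[i][d] = food[d] + step if c3 >= 0.5 else food[d] - step
                    salps[i][d] = min(ub, max(lb, salps[i][d]))
            else:                                # followers chain behind
                salps[i] = [(a + b) / 2 for a, b in zip(salps[i], salps[i - 1])]
        best = min(salps, key=cost)
        if cost(best) < cost(food):
            food = best[:]
    return food

print([round(v, 3) for v in ssa()])
```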
NASA Astrophysics Data System (ADS)
Yang, Weizhu; Yue, Zhufeng; Li, Lei; Wang, Peiyan
2016-01-01
An optimization procedure combining an automated finite element modelling (AFEM) technique with a ground structure approach (GSA) is proposed for structural layout and sizing design of aircraft wings. The AFEM technique, based on CATIA VBA scripting and PCL programming, is used to generate models automatically considering the arrangement of inner systems. GSA is used for local structural topology optimization. The design procedure is applied to a high-aspect-ratio wing. The arrangement of the integral fuel tank, landing gear and control surfaces is considered. For the landing gear region, a non-conventional initial structural layout is adopted. The positions of components, the number of ribs and local topology in the wing box and landing gear region are optimized to obtain a minimum structural weight. Constraints include tank volume, strength, buckling and aeroelastic parameters. The results show that the combined approach leads to a greater weight saving, i.e. 26.5%, compared with three additional optimizations based on individual design approaches.
Directed Bee Colony Optimization Algorithm to Solve the Nurse Rostering Problem.
Rajeswari, M; Amudhavel, J; Pothula, Sujatha; Dhavachelvan, P
2017-01-01
The Nurse Rostering Problem (NRP) is an NP-hard combinatorial optimization and scheduling problem for assigning a set of nurses to shifts per day while considering both hard and soft constraints. A novel metaheuristic technique is required for solving it. This work proposes a metaheuristic called the Directed Bee Colony Optimization Algorithm, using the Modified Nelder-Mead Method, for solving the NRP. To solve the NRP, the authors used a multiobjective mathematical programming model and proposed a methodology for the adaptation of a Multiobjective Directed Bee Colony Optimization (MODBCO). MODBCO is used successfully for solving the multiobjective problem of optimizing scheduling problems. MODBCO integrates deterministic local search, a multiagent particle system environment, and the honey bee decision-making process. The performance of the algorithm is assessed using the standard dataset INRC2010, which reflects many real-world cases that vary in size and complexity. The experimental analysis uses statistical tools to show the uniqueness of the algorithm on the assessment criteria.
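The local-refinement half of such a hybrid can be illustrated with SciPy's Nelder-Mead implementation. The penalty function below is an invented continuous relaxation (the real NRP objective is combinatorial), and the standard rather than the authors' modified Nelder-Mead method is used.

```python
import numpy as np
from scipy.optimize import minimize

def penalty(x):
    # Placeholder continuous relaxation of roster soft-constraint violations;
    # the actual NRP objective is combinatorial and far more involved.
    return np.sum((x - np.array([2.0, 1.0, 3.0])) ** 2) + 0.1 * np.sum(np.abs(x))

# A candidate produced by the bee-colony exploration phase (invented starting
# point), refined locally with Nelder-Mead.
candidate = np.array([0.0, 0.0, 0.0])
result = minimize(penalty, candidate, method="Nelder-Mead")
print(result.x, result.fun)
```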
Lunar Habitat Optimization Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
SanScoucie, M. P.; Hull, P. V.; Tinker, M. L.; Dozier, G. V.
2007-01-01
Long-duration surface missions to the Moon and Mars will require bases to accommodate habitats for the astronauts. Transporting the materials and equipment required to build the necessary habitats is costly and difficult. The materials chosen for the habitat walls play a direct role in protection against each of the mentioned hazards. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Clearly, an optimization method is warranted for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat wall design tool utilizing genetic algorithms (GAs) has been developed. GAs use a "survival of the fittest" philosophy where the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multiobjective formulation of up-mass, heat loss, structural analysis, meteoroid impact protection, and radiation protection. This Technical Publication presents the research and development of this tool as well as a technique for finding the optimal GA search parameters.
Ramamoorthy, Ambika; Ramachandran, Rajeswari
2016-01-01
Power grids are becoming smarter along with technological development. The benefits of the smart grid can be enhanced through the integration of renewable energy sources. In this paper, several studies have been made to reconfigure a conventional network into a smart grid. Among all renewable sources, solar power takes the prominent position due to its availability in abundance. The proposed methodology presented in this paper is aimed at minimizing network power losses and improving voltage stability within the framework of system operation and security constraints in a transmission system. The locations and capacities of DGs have a significant impact on system losses in a transmission system. In this paper, combined nature-inspired algorithms are presented for optimal location and sizing of DGs. This paper proposes a two-step optimization technique in order to integrate DG. In the first step, the best size of DG is determined through PSO metaheuristics, and the results obtained through PSO are tested for reverse power flow by a negative load approach to find possible bus locations. Then, the optimal location is found by the Loss Sensitivity Factor (LSF) and weak (WK) bus methods, and the results are compared. In the second step, the optimal sizing of DGs is determined by PSO, GSA, and hybrid PSOGSA algorithms. Apart from optimal sizing and siting of DGs, different scenarios with numbers of DGs (3, 4, and 5) and PQ capacities of DGs (P alone, Q alone, and both P and Q) are also analyzed in this paper. A detailed performance analysis is carried out on the IEEE 30-bus system to demonstrate the effectiveness of the proposed methodology. PMID:27057557
Optimized Dose Distribution of Gammamed Plus Vaginal Cylinders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Supe, Sanjay S.; Bijina, T.K.; Varatharaj, C.
2009-04-01
Endometrial carcinoma is the most common malignancy arising in the female genital tract. Intracavitary vaginal cuff irradiation may be given alone or with external beam irradiation in patients determined to be at risk for locoregional recurrence. Vaginal cylinders are often used to deliver a brachytherapy dose to the vaginal apex and upper vagina or the entire vaginal surface in the management of postoperative endometrial cancer or cervical cancer. The dose distributions of HDR vaginal cylinders must be evaluated carefully, so that clinical experience with LDR techniques can be used in guiding optimal use of HDR techniques. The aim of this study was to optimize dose distribution for Gammamed Plus vaginal cylinders. Placement of dose optimization points was evaluated for its effect on optimized dose distributions. Two different dose optimization point models were used in this study, namely non-apex (dose optimization points only on the periphery of the cylinder) and apex (dose optimization points on the periphery and along the curvature, including the apex points). Thirteen dwell positions were used for the HDR dosimetry to obtain a 6-cm active length. Thus 13 optimization points were available at the periphery of the cylinder. The coordinates of the points along the curvature depended on the cylinder diameters and were chosen for each cylinder so that four points were distributed evenly in the curvature portion of the cylinder. The diameter of the vaginal cylinders varied from 2.0 to 4.0 cm. An iterative optimization routine was utilized for all optimizations. The effects of various optimization routines (iterative, geometric, equal times) were studied for the 3.0-cm diameter vaginal cylinder. The effect of source travel step size on the optimized dose distributions for vaginal cylinders was also evaluated. All optimizations in this study were carried out for a dose of 6 Gy at the dose optimization points. For both the non-apex and apex models, doses for the apex point and three dome points were higher for the apex model than for the non-apex model. Mean doses to the optimization points for both cylinder models and all cylinder diameters were 6 Gy, matching the prescription dose of 6 Gy. The iterative optimization routine resulted in the highest dose to the apex and dome points. The mean dose to the optimization points was 6.01 Gy for iterative optimization, much higher than the 5.74 Gy for the geometric and equal times routines. A step size of 1 cm gave the highest dose to the apex point. This step size was superior in terms of mean dose to the optimization points. The selection of dose optimization points for the derivation of optimized dose distributions for vaginal cylinders affects the dose distributions.
Ahmad, Zaki Uddin; Chao, Bing; Konggidinata, Mas Iwan; Lian, Qiyu; Zappi, Mark E; Gang, Daniel Dianchen
2018-04-27
Numerous research efforts in the adsorption area have relied on experimental approaches, which are based on trial and error and are extremely time consuming. Molecular simulation is a new tool that can be used to design and predict the performance of an adsorbent. This research proposed a simulation technique that can greatly reduce the time needed to design the adsorbent. In this study, a new rhombic ordered mesoporous carbon (OMC) model is proposed and constructed with various pore sizes and oxygen contents using the Materials Visualizer Module to optimize the structure of OMC for resorcinol adsorption. The specific surface area, pore volume, small-angle X-ray diffraction pattern, and resorcinol adsorption capacity were calculated with the Forcite and Sorption modules in the Materials Studio package. The simulation results were validated experimentally by synthesizing OMC with different pore sizes and oxygen contents prepared via the hard template method employing an SBA-15 silica scaffold. Boric acid was used as the pore-expanding reagent to synthesize OMC with different pore sizes (from 4.6 to 11.3 nm) and varying oxygen contents (from 11.9% to 17.8%). Based on the simulation and experimental validation, the optimal pore size for maximum adsorption of resorcinol was found to be 6 nm.
NASA Technical Reports Server (NTRS)
Lucas, S. H.; Scotti, S. J.
1989-01-01
The nonlinear mathematical programming method (formal optimization) has had many applications in engineering design. A figure illustrates the use of optimization techniques in the design process. The design process begins with the design problem, such as the classic example of the two-bar truss designed for minimum weight as seen in the leftmost part of the figure. If formal optimization is to be applied, the design problem must be recast in the form of an optimization problem consisting of an objective function, design variables, and constraint function relations. The middle part of the figure shows the two-bar truss design posed as an optimization problem. The total truss weight is the objective function, the tube diameter and truss height are design variables, with stress and Euler buckling considered as constraint function relations. Lastly, the designer develops or obtains analysis software containing a mathematical model of the object being optimized, and then interfaces the analysis routine with existing optimization software such as CONMIN, ADS, or NPSOL. This final stage of software development can be both tedious and error-prone. The Sizing and Optimization Language (SOL), a special-purpose computer language whose goal is to make the software implementation phase of optimum design easier and less error-prone, is presented.
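The two-bar truss example described above maps naturally onto a general-purpose optimizer. The sketch below uses SciPy's SLSQP with total weight as the objective, tube diameter and truss height as design variables, and stress and Euler buckling as constraints; the load, span, wall thickness, and material values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Two-bar truss: two thin tubes from supports 2B apart meeting at height h,
# carrying an apex load P. All numeric values are illustrative.
P = 150e3        # apex load, N
B = 0.75         # half-span, m
t = 0.003        # tube wall thickness, m
E, rho, sigma_a = 210e9, 7850.0, 250e6   # steel-like properties

def member(d, h):
    L = np.hypot(B, h)                    # member length
    F = P * L / (2 * h)                   # axial force per member
    sigma = F / (np.pi * d * t)           # thin-tube axial stress
    sigma_cr = np.pi**2 * E * (d**2 + t**2) / (8 * L**2)   # Euler buckling
    return L, sigma, sigma_cr

def weight(x):
    d, h = x
    L, _, _ = member(d, h)
    return 2 * rho * np.pi * d * t * L    # objective: total truss weight

cons = [
    {"type": "ineq", "fun": lambda x: sigma_a - member(*x)[1]},        # stress
    {"type": "ineq", "fun": lambda x: member(*x)[2] - member(*x)[1]},  # buckling
]
res = minimize(weight, x0=[0.05, 1.0], bounds=[(0.01, 0.3), (0.2, 3.0)],
               constraints=cons, method="SLSQP")
print(res.x, res.fun)   # optimal (diameter, height) and weight
```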
Kollipara, Sivacharan; Bende, Girish; Movva, Snehalatha; Saha, Ranendra
2010-11-01
Polymeric carrier systems of paclitaxel (PCT) offer advantages over the only available formulation, Taxol®, in terms of enhancing therapeutic efficacy and eliminating adverse effects. The objective of the present study was to prepare poly(lactic-co-glycolic acid) nanoparticles containing PCT using an emulsion solvent evaporation technique. Critical factors involved in the processing method were identified and optimized by a scientific, efficient rotatable central composite design aiming at low mean particle size and high entrapment efficiency. Twenty different experiments were designed, and each formulation was evaluated for mean particle size and entrapment efficiency. The optimized formulation was evaluated for in vitro drug release, and absorption characteristics were studied using an in situ rat intestinal permeability study. The amount of polymer and the duration of ultrasonication were found to have significant effects on mean particle size and entrapment efficiency. First-order interactions of the amount of miglyol with the amount of polymer were significant for mean particle size, whereas second-order interactions of the polymer were significant for both mean particle size and entrapment efficiency. The developed quadratic model showed high correlation (R(2) > 0.85) between the predicted response and the studied factors. The optimized formulation had a low mean particle size (231.68 nm) and high entrapment efficiency (95.18%) with 4.88% drug content. The optimized formulation showed controlled release of PCT for more than 72 hours. The in situ absorption study showed faster absorption and an enhanced extent of absorption of PCT from nanoparticles compared to the pure drug. The poly(lactic-co-glycolic acid) nanoparticles containing PCT may be of clinical importance in enhancing its oral bioavailability.
NASA Astrophysics Data System (ADS)
Fadzilah, R. Hanum; Sobhana, B. Arianto; Mahfud, M.
2015-12-01
A microwave-assisted extraction technique was employed to extract essential oil from ginger. The optimal conditions for microwave-assisted extraction of ginger were determined by response surface methodology. A central composite rotatable design was applied to evaluate the effects of three independent variables: microwave power of 400-800 W (X1), feed-solvent ratio of 0.33-0.467 (X2), and feed size of 1 cm, 0.25 cm, or less than 0.2 cm (X3). The correlation analysis of the mathematical modelling indicated that a quadratic polynomial could be employed to optimize the microwave-assisted extraction of ginger. The optimal conditions to obtain the highest yield of essential oil were a microwave power of 597.163 W, the corresponding optimal feed-solvent ratio, and a feed size of less than 0.2 cm.
NASA Astrophysics Data System (ADS)
Edwards, L. L.; Harvey, T. F.; Freis, R. P.; Pitovranov, S. E.; Chernokozhin, E. V.
1992-10-01
The accuracy associated with assessing the environmental consequences of an accidental release of radioactivity is highly dependent on our knowledge of the source term characteristics and, in the case when the radioactivity is condensed on particles, the particle size distribution, all of which are generally poorly known. This paper reports on the development of a numerical technique that integrates radiological measurements with atmospheric dispersion modeling, resulting in more accurate estimates of the particle-size distribution and particle injection height when compared with measurements of high explosive dispersal of (239)Pu. The estimation model is based on a non-linear least squares regression scheme coupled with the ARAC three-dimensional atmospheric dispersion models. The viability of the approach is evaluated by estimation of ADPIC model input parameters such as the ADPIC particle size mean aerodynamic diameter, the geometric standard deviation, and the largest size. Additionally, we estimate an optimal 'coupling coefficient' between the particles and an explosive cloud rise model. The experimental data are taken from the Clean Slate 1 field experiment conducted during 1963 at the Tonopah Test Range in Nevada. The regression technique optimizes the agreement between the measured and model-predicted concentrations of (239)Pu by varying the model input parameters within their respective ranges of uncertainty. The technique generally estimated the measured concentrations within a factor of 1.5, with the worst estimate being within a factor of 5, very good in view of the complexity of the concentration measurements, the uncertainties associated with the meteorological data, and the limitations of the models. The best fit also suggests a smaller mean diameter and a smaller geometric standard deviation for the particle size, as well as a slightly weaker particle-to-cloud coupling, than previously reported.
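The estimation idea, adjusting source-term parameters until a forward dispersion model reproduces the measurements, can be sketched as a nonlinear least-squares fit. The forward model below is a toy stand-in for the ARAC/ADPIC codes, and the lognormal parameters and synthetic data are invented.

```python
import numpy as np
from scipy.optimize import least_squares

def forward(params, x):
    # Toy stand-in for the ADPIC forward model: predicted ground-level
    # concentration versus downwind distance x, driven by a lognormal
    # size distribution (median mu, spread sigma) and an injection-height
    # scale H. Purely illustrative physics.
    mu, sigma, H = params
    settling = np.exp(-x / (H * 10.0))
    return 100.0 * settling * np.exp(-0.5 * ((np.log(x) - mu) / sigma) ** 2)

x_obs = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])      # km, synthetic
true = (2.0, 0.8, 1.2)
rng = np.random.default_rng(1)
c_obs = forward(true, x_obs) * (1 + 0.1 * rng.standard_normal(len(x_obs)))

# Vary the model inputs within bounded ranges to match the measurements.
res = least_squares(lambda p: forward(p, x_obs) - c_obs,
                    x0=[1.0, 1.0, 2.0],
                    bounds=([0.1, 0.1, 0.1], [5.0, 3.0, 10.0]))
print(np.round(res.x, 2))   # recovered (mu, sigma, H)
```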
NASA Astrophysics Data System (ADS)
Kumar, Ajay; Raghuwanshi, Sanjeev Kumar
2016-06-01
Optical switching is one of the most essential phenomena in the optical domain. Electro-optic effect-based switching can be used to generate effective combinational and sequential logic circuits. Processing digital computational techniques in the optical domain brings considerable advantages of optical communication technology, e.g. immunity to electromagnetic interference, compact size, signal security, parallel computing, and larger bandwidth. The paper describes efficient techniques to implement a single-bit magnitude comparator and a 1's complement calculator using the concepts of the electro-optic effect. The proposed techniques are simulated in MATLAB, and their suitability is verified using the highly reliable OptiBPM software. The circuits are analyzed in order to specify optimized device parameters for performance-affecting quantities, e.g. crosstalk, extinction ratio, and signal losses through the curved and straight waveguide sections.
Hooda, Aashima; Nanda, Arun; Jain, Manish; Kumar, Vikash; Rathee, Permender
2012-12-01
The current study involves the development and optimization of the drug entrapment and ex vivo bioadhesion of a multiunit chitosan-based floating system containing ranitidine HCl, prepared by the ionotropic gelation method for gastroretentive delivery. Chitosan, being cationic, non-toxic, biocompatible, biodegradable, and bioadhesive, is frequently used as a material for drug delivery systems and can transport a drug to an acidic environment, where it enhances the transport of polar drugs across epithelial surfaces. The effects of various process variables, such as drug-polymer ratio, concentration of sodium tripolyphosphate, and stirring speed, on physicochemical properties such as drug entrapment efficiency, particle size, and bioadhesion were optimized using a central composite design and analyzed using response surface methodology. The observed responses coincided well with the predicted values given by the optimization technique. The optimized microspheres showed a drug entrapment efficiency of 74.73%, a particle size of 707.26 μm, and bioadhesion of 71.68% in simulated gastric fluid (pH 1.2) after 8 h, with a floating lag time of 40 s. The average size of all the dried microspheres ranged from 608.24 to 720.80 μm. The drug entrapment efficiency of the microspheres ranged from 41.67% to 87.58%, and bioadhesion ranged from 62% to 86%. An accelerated stability study was performed on the optimized formulation as per ICH guidelines, and no significant change was found in drug content on storage. Copyright © 2012 Elsevier B.V. All rights reserved.
Bayesian Optimization for Neuroimaging Pre-processing in Brain Age Classification and Prediction
Lancaster, Jenessa; Lorenz, Romy; Leech, Rob; Cole, James H.
2018-01-01
Neuroimaging-based age prediction using machine learning is proposed as a biomarker of brain aging, relating to cognitive performance, health outcomes and progression of neurodegenerative disease. However, even leading age-prediction algorithms contain measurement error, motivating efforts to improve experimental pipelines. T1-weighted MRI is commonly used for age prediction, and the pre-processing of these scans involves normalization to a common template and resampling to a common voxel size, followed by spatial smoothing. Resampling parameters are often selected arbitrarily. Here, we sought to improve brain-age prediction accuracy by optimizing resampling parameters using Bayesian optimization. Using data on N = 2003 healthy individuals (aged 16–90 years) we trained support vector machines to (i) distinguish between young (<22 years) and old (>50 years) brains (classification) and (ii) predict chronological age (regression). We also evaluated generalisability of the age-regression model to an independent dataset (CamCAN, N = 648, aged 18–88 years). Bayesian optimization was used to identify optimal voxel size and smoothing kernel size for each task. This procedure adaptively samples the parameter space to evaluate accuracy across a range of possible parameters, using independent sub-samples to iteratively assess different parameter combinations to arrive at optimal values. When distinguishing between young and old brains, a classification accuracy of 88.1% was achieved (optimal voxel size = 11.5 mm³, smoothing kernel = 2.3 mm). For predicting chronological age, a mean absolute error (MAE) of 5.08 years was achieved (optimal voxel size = 3.73 mm³, smoothing kernel = 3.68 mm). This was compared to performance using default values of 1.5 mm³ and 4 mm respectively, resulting in MAE = 5.48 years, though this 7.3% improvement was not statistically significant. When assessing generalisability, best performance was achieved when applying the entire Bayesian optimization framework to the new dataset, out-performing the parameters optimized for the initial training dataset. Our study outlines the proof-of-principle that neuroimaging models for brain-age prediction can use Bayesian optimization to derive case-specific pre-processing parameters. Our results suggest that different pre-processing parameters are selected when optimization is conducted in specific contexts. This potentially motivates use of optimization techniques at many different points during the experimental process, which may improve statistical sensitivity and reduce opportunities for experimenter-led bias. PMID:29483870
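A sketch of the adaptive search described above, using the Gaussian-process-based gp_minimize from scikit-optimize. The objective below is a synthetic stand-in for "cross-validated MAE of the age-regression pipeline at a given voxel size and smoothing kernel"; the search ranges echo the values discussed but are assumptions.

```python
from skopt import gp_minimize

def pipeline_mae(params):
    """Placeholder for: resample scans to voxel_size, smooth with the given
    kernel, train the SVM age-regression model, and return cross-validated
    mean absolute error (years). A smooth synthetic bowl stands in here."""
    voxel_size, kernel = params
    return 5.0 + 0.2 * (voxel_size - 3.7) ** 2 + 0.1 * (kernel - 3.7) ** 2

result = gp_minimize(
    pipeline_mae,
    dimensions=[(1.0, 12.0),   # voxel size (mm^3), assumed search range
                (0.0, 8.0)],   # smoothing kernel FWHM (mm), assumed range
    n_calls=30,                # each call evaluates one parameter combination
    random_state=0,
)
print("best (voxel size, kernel):", result.x, "MAE:", result.fun)
```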
Price of Fairness in Kidney Exchange
2014-05-01
solver uses branch-and-price, a technique that proves optimality by incrementally generating only a small part of the model during tree search [8]... factors like failure probability and chain position, as in the probabilistic model). We will use this multiplicative re-weighting in our experiments in... Table 2 gives the average loss in efficiency for each of these models over multiple generated pool sizes, with 40 runs per pool size per model, under
Generating high-quality single droplets for optical particle characterization with an easy setup
NASA Astrophysics Data System (ADS)
Xu, Jie; Ge, Baozhen; Meng, Rui
2018-06-01
High-quality, micro-sized single droplets are significant for optical particle characterization. We develop a single-droplet generator (SDG) based on a piezoelectric inkjet technique, with the advantages of low cost and easy setup. By optimizing the pulse parameters, we generate single droplets of various sizes. Further investigations reveal that the SDG generates single droplets of high quality, demonstrating good sphericity, monodispersity, and stability over a length of several millimeters.
NASA Astrophysics Data System (ADS)
Mozaffari, Ahmad; Vajedi, Mahyar; Chehresaz, Maryyeh; Azad, Nasser L.
2016-03-01
The urgent need to meet increasingly tight environmental regulations and new fuel economy requirements has motivated system science researchers and automotive engineers to take advantage of emerging computational techniques to further advance hybrid electric vehicle and plug-in hybrid electric vehicle (PHEV) designs. In particular, research has focused on vehicle powertrain system design optimization, to reduce the fuel consumption and total energy cost while improving the vehicle's driving performance. In this work, two different natural optimization machines, namely the synchronous self-learning Pareto strategy and the elitist non-dominated sorting genetic algorithm, are implemented for component sizing of a specific power-split PHEV platform with a Toyota plug-in Prius as the baseline vehicle. To do this, a high-fidelity model of the Toyota plug-in Prius is employed for the numerical experiments using the Autonomie simulation software. Based on the simulation results, it is demonstrated that Pareto-based algorithms can successfully optimize the design parameters of the vehicle powertrain.
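Both optimizers named above rank candidate component sizings by Pareto dominance. As an illustration of that core idea only (not of the Autonomie-based study), the sketch below extracts the first non-dominated front from a set of two-objective evaluations, with both objectives minimized.

```python
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated points; all objectives are minimized.
    A point is dominated if another point is no worse in every objective
    and strictly better in at least one."""
    n = objectives.shape[0]
    keep = []
    for i in range(n):
        others = np.delete(objectives, i, axis=0)
        dominated = np.any(
            np.all(others <= objectives[i], axis=1)
            & np.any(others < objectives[i], axis=1)
        )
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical (fuel consumption, energy cost) pairs for candidate sizings.
designs = np.array([[5.2, 3.1], [4.8, 3.5], [5.0, 2.9], [5.5, 3.6], [4.9, 3.0]])
print("non-dominated designs:", pareto_front(designs))
```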
Gdowski, Andrew; Johnson, Kaitlyn; Shah, Sunil; Gryczynski, Ignacy; Vishwanatha, Jamboor; Ranjan, Amalendu
2018-02-12
The process of optimization and fabrication of nanoparticle synthesis for preclinical studies can be challenging and time consuming. Traditional small-scale laboratory synthesis techniques suffer from batch-to-batch variability. Additionally, the parameters used in the original formulation must be re-optimized due to differences in fabrication techniques for clinical production. Several low-flow microfluidic synthesis processes have been reported in recent years for developing nanoparticles that are a hybrid between polymeric nanoparticles and liposomes. However, the use of high-flow microfluidic synthetic techniques has not been described for this type of nanoparticle system, which we term nanolipomers. In this manuscript, we describe the successful optimization and functional assessment of nanolipomers fabricated using a microfluidic synthesis method under high-flow parameters. The optimal total flow rate for synthesis of these nanolipomers was found to be 12 ml/min at a flow rate ratio of 1:1 (organic phase:aqueous phase). A PLGA polymer concentration of 10 mg/ml and a DSPE-PEG lipid concentration of 10% w/v provided optimal size, PDI and stability. Drug loading and encapsulation of a representative hydrophobic small-molecule drug, curcumin, were optimized: a high encapsulation efficiency of 58.8% and drug loading of 4.4% were achieved at 7.5% w/w initial concentration of curcumin/PLGA polymer. The final size and polydispersity index of the optimized nanolipomer were 102.11 nm and 0.126, respectively. Functional assessment of uptake of the nanolipomers in C4-2B prostate cancer cells showed uptake at 1 h and increased uptake at 24 h. The nanolipomer was more effective in the cell viability assay than the free drug. Finally, assessment of in vivo retention of these nanolipomers in mice revealed retention for up to 2 h, with complete clearance by 24 h. In this study, we have demonstrated that a nanolipomer formulation can be successfully synthesized and easily scaled up through a high-flow microfluidic system with optimal characteristics. The process of developing nanolipomers using this methodology is significant, as the same optimized parameters used for small batches could be translated into manufacturing large-scale batches for clinical trials through parallel flow systems.
NASA Technical Reports Server (NTRS)
Burrows, R. R.
1972-01-01
A particular type of three-impulse transfer between two circular orbits is analyzed. The possibility of three plane changes is recognized, and the problem is to optimally distribute these plane changes to minimize the sum of the individual impulses. Numerical difficulties and their solution are discussed. Numerical results obtained from a conjugate gradient technique are presented both for the case where the individual plane changes are unconstrained and for the case where they are constrained. Perhaps not unexpectedly, multiple minima are found. The techniques presented could be extended to the finite-burn case, but the contents are primarily addressed to preliminary mission design and vehicle sizing.
NASA Astrophysics Data System (ADS)
Selvam, Kayalvizhi; Vinod Kumar, D. M.; Siripuram, Ramakanth
2017-04-01
In this paper, an optimization technique called peer enhanced teaching-learning based optimization (PeTLBO) is used in a multi-objective problem domain. The PeTLBO algorithm is parameter-less, which reduces the computational burden. The proposed peer enhanced multi-objective TLBO (PeMOTLBO) algorithm has been utilized to find a set of non-dominated optimal solutions [distributed generation (DG) location and sizing in a distribution network]. The objectives considered are real power loss and voltage deviation, subject to voltage limits and the maximum penetration level of DG in the distribution network. Since the DG considered is capable of injecting real and reactive power into the distribution network, the power factor is taken as 0.85 leading. The proposed peer enhanced multi-objective optimization technique provides different trade-off solutions; to find the best compromise solution, a fuzzy set theory approach has been used. The effectiveness of the proposed PeMOTLBO is tested on the IEEE 33-bus and Indian 85-bus distribution systems. The performance is validated with Pareto fronts and two performance metrics (C-metric and S-metric), by comparing with the robust multi-objective technique non-dominated sorting genetic algorithm-II and with the basic TLBO.
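The fuzzy set theory step mentioned above is commonly implemented with linear membership functions: each objective value on the Pareto front is mapped to a satisfaction degree in [0, 1], and the solution with the highest normalized aggregate membership is taken as the best compromise. A generic sketch of that selection (not the PeMOTLBO code itself) follows; the front values are hypothetical.

```python
import numpy as np

def best_compromise(front):
    """front: (n_solutions, n_objectives) array of minimized objective values
    (e.g. real power loss and voltage deviation) for non-dominated solutions.
    Linear membership: 1 at the per-objective best, 0 at the worst."""
    f_min = front.min(axis=0)
    f_max = front.max(axis=0)
    mu = (f_max - front) / (f_max - f_min)   # per-objective satisfaction
    score = mu.sum(axis=1) / mu.sum()        # normalized aggregate membership
    return np.argmax(score)

# Hypothetical Pareto front: (power loss in kW, voltage deviation in p.u.)
front = np.array([[102.0, 0.041], [118.0, 0.030], [95.0, 0.055], [130.0, 0.026]])
print("best compromise solution index:", best_compromise(front))
```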
NASA Technical Reports Server (NTRS)
Skillen, Michael D.; Crossley, William A.
2008-01-01
This report documents a series of investigations to develop an approach for structural sizing of various morphing wing concepts. For the purposes of this report, a morphing wing is one whose planform can make significant shape changes in flight: increasing wing area by 50% or more from the lowest possible area, changing sweep by 30° or more, and/or increasing aspect ratio by as much as 200% from the lowest possible value. These significant changes in geometry mean that the underlying load-bearing structure changes geometry. While most finite element analysis packages provide some sort of structural optimization capability, these codes are not amenable to making significant changes in the stiffness matrix to reflect the large morphing wing planform changes. The investigations presented here use a finite element code capable of aeroelastic analysis in three different optimization approaches: a "simultaneous analysis" approach, a "sequential" approach, and an "aggregate" approach.
NASA Technical Reports Server (NTRS)
Schmit, Ryan
2010-01-01
To develop new flow control techniques: a) knowledge of the flow physics with and without control; b) how flow control affects the flow physics (what works to optimize the design?); c) energy or work efficiency of the control technique (cost - risk - benefit analysis); d) supportability, e.g. size of equipment, computational power, power supply (allows the designer to include flow control in plans).
Graphic design of pinhole cameras
NASA Technical Reports Server (NTRS)
Edwards, H. B.; Chu, W. P.
1979-01-01
The paper describes a graphic technique for the analysis and optimization of pinhole size and focal length. The technique is based on the use of the transfer function of optical elements described by Scott (1959) to construct the transfer function of a circular pinhole camera. This transfer function is the response of a component or system to a pattern of lines having a sinusoidally varying radiance at varying spatial frequencies. Some specific examples of graphic design are presented.
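The trade-off being optimized, diffraction blur growing as the pinhole shrinks while geometric blur grows as it widens, also admits a classic closed-form rule of thumb attributed to Lord Rayleigh, d ≈ 1.9·sqrt(λf). The sketch below evaluates that well-known approximation for a few focal lengths; it is offered for orientation and is not the paper's transfer-function construction.

```python
import math

def rayleigh_pinhole_diameter(focal_length_mm, wavelength_nm=550.0):
    """Classic rule-of-thumb optimal pinhole diameter d = 1.9*sqrt(lambda*f),
    balancing diffraction blur against geometric blur (green light default)."""
    wavelength_mm = wavelength_nm * 1e-6
    return 1.9 * math.sqrt(wavelength_mm * focal_length_mm)

for f in (25.0, 50.0, 100.0, 200.0):  # focal lengths in mm
    print(f"f = {f:5.0f} mm -> d ~ {rayleigh_pinhole_diameter(f):.3f} mm")
```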
NASA Astrophysics Data System (ADS)
Sengbusch, Evan R.
Physical properties of proton interactions in matter give them a theoretical advantage over photons in radiation therapy for cancer treatment, but they are seldom used relative to photons. The primary barriers to wider acceptance of proton therapy are the technical feasibility, size, and price of proton therapy systems. Several aspects of the proton therapy landscape are investigated, and new techniques for treatment planning, optimization, and beam delivery are presented. The results of these investigations suggest a means by which proton therapy can be delivered more efficiently, effectively, and to a much larger proportion of eligible patients. An analysis of the existing proton therapy market was performed. Personal interviews with over 30 radiation oncology leaders were conducted with regard to the current and future use of proton therapy. In addition, global proton therapy market projections are presented. The results of these investigations serve as motivation and guidance for the subsequent development of treatment system designs and treatment planning, optimization, and beam delivery methods. A major factor impacting the size and cost of proton treatment systems is the maximum energy of the accelerator. Historically, 250 MeV has been the accepted value, but there is minimal quantitative evidence in the literature that supports this standard. A retrospective study of 100 patients is presented that quantifies the maximum proton kinetic energy requirements for cancer treatment, and the impact of those results with regard to treatment system size, cost, and neutron production is discussed. This study is subsequently expanded to include 100 cranial stereotactic radiosurgery (SRS) patients, and the results are discussed in the context of a proposed dedicated proton SRS treatment system. Finally, novel proton therapy optimization and delivery techniques are presented. Algorithms are developed that optimize treatment plans over beam angle, spot size, spot spacing, beamlet weight, the number of delivered beamlets, and the number of delivery angles. These methods are evaluated via treatment planning studies including left-sided whole breast irradiation, lung stereotactic body radiotherapy, nasopharyngeal carcinoma, and whole brain radiotherapy with hippocampal avoidance. Improvements in efficiency and efficacy relative to traditional proton therapy and intensity modulated photon radiation therapy are discussed.
NASA Astrophysics Data System (ADS)
Mazoyer, J.; Pueyo, L.; N'Diaye, M.; Fogarty, K.; Zimmerman, N.; Soummer, R.; Shaklan, S.; Norman, C.
2018-01-01
High-contrast imaging and spectroscopy provide unique constraints for exoplanet formation models as well as for planetary atmosphere models. Instrumentation techniques in this field have greatly improved over the last two decades, with the development of stellar coronagraphy, in parallel with specific methods of wavefront sensing and control. Next-generation space- and ground-based telescopes will enable the characterization of cold solar-system-like planets for the first time, and maybe even in situ detection of bio-markers. However, the growth of primary mirror diameters, necessary for these detections, comes with an increase in their complexity (segmentation, secondary mirror features). These discontinuities in the aperture can greatly limit the performance of coronagraphic instruments. In this context, we introduced a new technique, Active Correction of Aperture Discontinuities-Optimized Stroke Minimization (ACAD-OSM), to correct for the diffractive effects of aperture discontinuities in the final image plane of a coronagraph, using deformable mirrors. In this paper, we present several tools that can be used to optimize the performance of this technique for its application to future large missions. In particular, we analyzed the influence of the deformable mirror setup (size and separating distance) and found that there is an optimal point for this setup, optimizing the performance of the instrument in contrast and throughput while minimizing the strokes applied to the deformable mirrors. These results will help us design future coronagraphic instruments to obtain the best performance.
Islam, Md Mainul; Shareef, Hussain; Mohamed, Azah
2017-01-01
The electric vehicle (EV) is considered a premium solution to global warming and various types of pollution. Nonetheless, a key concern is the recharging of EV batteries. Therefore, this study proposes a novel approach that considers the costs of transportation loss, buildup, and substation energy loss and that incorporates harmonic power loss into optimal rapid charging station (RCS) planning. A novel optimization technique, called binary lightning search algorithm (BLSA), is proposed to solve the optimization problem. BLSA is also applied to a conventional RCS planning method. A comprehensive analysis is conducted to assess the performance of the two RCS planning methods by using the IEEE 34-bus test system as the power grid. The comparative studies show that the proposed BLSA is better than other optimization techniques. The daily total cost in RCS planning of the proposed method, including harmonic power loss, decreases by 10% compared with that of the conventional method. PMID:29220396
Rodríguez-Dorado, Rosalia; Landín, Mariana; Altai, Ayça; Russo, Paola; Aquino, Rita P; Del Gaudio, Pasquale
2018-03-01
Numerous studies have focused on the encapsulation of hydrophobic compounds such as oils. Oils can provide numerous health benefits as a synergic ingredient combined with other hydrophobic active ingredients. However, stable microparticles for pharmaceutical purposes are difficult to achieve when common techniques are used. In this work, sunflower oil was encapsulated in calcium-alginate capsules by the prilling technique in co-axial configuration. Core-shell beads were produced by inverse gelation directly at the nozzle, using a w/o emulsion containing aqueous calcium chloride solution in sunflower oil pumped through the inner nozzle, while an aqueous alginate solution, coming out of the annular nozzle, produced the bead shell. Artificial intelligence tools were proposed to optimize the numerous prilling process variables. Homogeneous and spherical microcapsules with a narrow size distribution and a thin alginate shell were obtained when parameters such as the w/o constituents, polymer concentrations, flow rates, and frequency of vibration were optimized by two commercial software packages, FormRules® and INForm®, which implement neurofuzzy logic and artificial neural networks together with genetic algorithms, respectively. This technique constitutes an innovative approach for the microencapsulation of hydrophobic compounds. Copyright © 2018 Elsevier B.V. All rights reserved.
Modeling and Optimization for Morphing Wing Concept Generation
NASA Technical Reports Server (NTRS)
Skillen, Michael D.; Crossley, William A.
2007-01-01
This report consists of two major parts: 1) the approach to develop morphing wing weight equations, and 2) the approach to size morphing aircraft. Combined, these techniques allow the morphing aircraft to be sized with estimates of the morphing wing weight that are more credible than estimates currently available; aircraft sizing results prior to this study incorporated morphing wing weight estimates based on general heuristics for fixed-wing flaps (a comparable "morphing" component) but, in general, these results were unsubstantiated. This report will show that the method of morphing wing weight prediction does, in fact, drive the aircraft sizing code to different results and that accurate morphing wing weight estimates are essential to credible aircraft sizing results.
Okabe, Kenji; Jeewan, Horagodage Prabhath; Yamagiwa, Shota; Kawano, Takeshi; Ishida, Makoto; Akita, Ippei
2015-12-16
In this paper, a co-design method and a wafer-level packaging technique for a flexible antenna and a CMOS rectifier chip for use in a small-sized implantable system on the brain surface are proposed. The proposed co-design method optimizes the system architecture and helps avoid the use of external matching components, resulting in a small-sized system. In addition, the technique employed to assemble a silicon large-scale integration (LSI) chip on the very thin parylene film (5 μm) enables the integration of the rectifier circuits and the flexible antenna (rectenna). In a demonstration of wireless power transmission (WPT), the fabricated flexible rectenna achieved a maximum efficiency of 0.497% with a distance of 3 cm between antennas. In addition, WPT with radio waves allows a misalignment of 185% relative to antenna size, implying that misalignment has less effect on the WPT characteristics than with electromagnetic induction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, B; Liu, B; Li, Y
2016-06-15
Purpose: Treatment plan optimization in multi-Co-60 source focused radiotherapy with multiple isocenters is challenging, because the dose distribution is normalized to the maximum dose during optimization and evaluation, and objective functions are traditionally defined on the relative dosimetric distribution. This study presents an alternative absolute dose-volume constraint (ADC) based deterministic optimization framework (ADC-DOF). Methods: The initial isocenters are placed on the eroded target surface. The collimator size is chosen based on the area of the 2D contour on the corresponding axial slice. The isocenter spacing is determined by adjacent collimator sizes. The weights are optimized by minimizing the deviation from the ADCs using the steepest descent technique. An iterative procedure is developed to reduce the number of isocenters, where the isocenter with the lowest weight is removed without affecting plan quality. ADC-DOF is compared with a genetic algorithm (GA) using the same arbitrarily shaped target (254 cc), with a 15 mm margin ring structure representing normal tissues. Results: For ADC-DOF, the ADCs imposed on the target are D100 > 10 Gy and D50, D10, D0 < 12 Gy, 15 Gy, and 20 Gy, respectively, and on the ring D40 < 10 Gy. The resulting target D100, D50, D10, and D0 are 9.9 Gy, 12.0 Gy, 14.1 Gy, and 16.2 Gy, and the ring D40 is 10.2 Gy. The objectives of the GA are to maximize the 50% isodose target coverage (TC) while minimizing the dose delivered to the ring structure, which results in 97% TC and a 47.2% average dose in the ring structure. For the ADC-DOF (GA) technique, 20 out of 38 (10 out of 12) initial isocenters are used in the final plan, and the computation time is 8.7 s (412.2 s) on an i5 computer. Conclusion: We have developed a new optimization technique using ADCs and deterministic optimization. Compared with GA, ADC-DOF uses more isocenters but is faster and more robust, and achieves better conformity. For future work, we will focus on developing a more effective mechanism for initial isocenter determination.
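The deterministic core of the method, steepest descent on non-negative isocenter weights against penalties on violated absolute dose constraints, can be sketched generically. The dose-influence matrices, limits, and step size below are hypothetical, and per-voxel floor/ceiling penalties stand in for the percentile-type dose-volume constraints; the real framework also handles isocenter placement, collimator selection, and isocenter removal.

```python
import numpy as np

rng = np.random.default_rng(2)
A_target = rng.uniform(0.5, 1.5, (200, 20))  # dose per unit weight: 200 target voxels x 20 isocenters
A_ring = rng.uniform(0.0, 0.6, (150, 20))    # 150 normal-tissue (ring) voxels
d_min, d_max = 10.0, 10.0                    # Gy: target floor and ring ceiling (assumed)

w = np.full(20, 0.5)                         # isocenter weights (beam-on times)
step = 1e-4
for _ in range(5000):
    short = np.maximum(d_min - A_target @ w, 0.0)   # target underdose per voxel
    excess = np.maximum(A_ring @ w - d_max, 0.0)    # ring overdose per voxel
    grad = -A_target.T @ short + A_ring.T @ excess  # gradient of 0.5 * sum of squared violations
    w = np.maximum(w - step * grad, 0.0)            # steepest descent + nonnegativity projection

print("final target D_min: %.2f Gy, ring D_max: %.2f Gy" %
      ((A_target @ w).min(), (A_ring @ w).max()))
```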
SEEK: A FORTRAN optimization program using a feasible directions gradient search
NASA Technical Reports Server (NTRS)
Savage, M.
1995-01-01
This report describes the use of computer program 'SEEK' which works in conjunction with two user-written subroutines and an input data file to perform an optimization procedure on a user's problem. The optimization method uses a modified feasible directions gradient technique. SEEK is written in ANSI standard Fortran 77, has an object size of about 46K bytes, and can be used on a personal computer running DOS. This report describes the use of the program and discusses the optimizing method. The program use is illustrated with four example problems: a bushing design, a helical coil spring design, a gear mesh design, and a two-parameter Weibull life-reliability curve fit.
Denora, Nunzio; Lopedota, Angela; Perrone, Mara; Laquintana, Valentino; Iacobazzi, Rosa M; Milella, Antonella; Fanizza, Elisabetta; Depalo, Nicoletta; Cutrignelli, Annalisa; Lopalco, Antonio; Franco, Massimo
2016-10-01
This work describes N-acetylcysteine (NAC)- and glutathione (GSH)-glycol chitosan (GC) polymer conjugates engineered as a potential platform for formulating micro- (MP) and nano- (NP) particles via spray-drying techniques. These conjugates are mucoadhesive over the range of urine pH, 5.0-7.0, which makes them advantageous for intravesical drug delivery and the treatment of local bladder diseases. NAC- and GSH-GC conjugates were generated with a synthetic approach that optimized reaction times and purification in order to minimize the oxidation of thiol groups. In this way, the resulting amounts of free thiol groups immobilized per gram of NAC- and GSH-GC conjugates were 6.3 and 3.6 mmol, respectively. These polymers were fully characterized by molecular weight, surface sulfur content, solubility at different pH values, degree of substitution, and degree of swelling. Mucoadhesion properties were evaluated in artificial urine by turbidimetric and zeta (ζ)-potential measurements, demonstrating good mucoadhesion, in particular for NAC-GC at pH 5.0. Starting from the thiolated polymers, MP and NP were prepared using the Büchi B-191 and Nano Büchi B-90 spray dryers, respectively. The two resulting formulations were evaluated for yield, size, oxidation of thiol groups, and ex vivo mucoadhesion. The new spray-drying technique provided NP of suitable size (<1 μm) for catheter administration, a low degree of oxidation, and sufficient mucoadhesion, with 9% and 18% of GSH- and NAC-GC based NP retained on pig bladder mucosa after 3 h of exposure, respectively. The aim of the present study was first to optimize the synthesis of NAC-GC and GSH-GC, preserving the oxidation state of the thiol moieties by introducing several optimizations of the previously reported synthetic procedures that increase the mucoadhesive properties and avoid pH-dependent aggregation. Second, starting from these optimized thiomers, we studied the feasibility of manufacturing MP and NP by spray-drying techniques. The aim of this second step was to produce mucoadhesive drug delivery systems of adequate size for vesical administration by catheter, with mucoadhesive properties comparable to those of the processed polymers, while avoiding thiol oxidation during formulation. MP of acceptable size produced by the Büchi B-191 spray dryer were compared with NP made with the Nano Büchi B-90 apparatus. Copyright © 2016 Acta Materialia Inc. All rights reserved.
Nekkanti, Vijaykumar; Marwah, Ashwani; Pillai, Raviraj
2015-01-01
Design of experiments (DOE), a component of Quality by Design (QbD), is the systematic and simultaneous evaluation of process variables to develop a product with predetermined quality attributes. This article presents a case study to understand the effects of process variables in a bead milling process used for the manufacture of drug nanoparticles. Experiments were designed and results were computed according to a 3-factor, 3-level face-centered central composite design (CCD). The factors investigated were motor speed, pump speed and bead volume. The responses analyzed for evaluating these effects and interactions were milling time, particle size and process yield. Process validation batches were executed using the optimum process conditions obtained from the Design-Expert® software to evaluate both the repeatability and reproducibility of the bead milling technique. Milling time was optimized to <5 h to obtain the desired particle size (d90 < 400 nm). The desirability function was used to optimize the response variables, and the observed responses were in agreement with experimental values. These results demonstrated the reliability of the selected model for the manufacture of drug nanoparticles with predictable quality attributes. The optimization of bead milling process variables by applying DOE resulted in a considerable decrease in milling time to achieve the desired particle size. The study indicates the applicability of the DOE approach to optimize critical process parameters in the manufacture of drug nanoparticles.
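The desirability approach referenced above (in the style of Derringer and Suich) maps each response onto [0, 1] and combines the responses by geometric mean; the settings maximizing the combined desirability are taken as optimal. A generic sketch with hypothetical limits for the three responses in this study follows.

```python
import numpy as np

def d_smaller_is_better(y, low, high):
    """Desirability 1 at/below `low`, 0 at/above `high`, linear in between."""
    return np.clip((high - y) / (high - low), 0.0, 1.0)

def d_larger_is_better(y, low, high):
    """Desirability 0 at/below `low`, 1 at/above `high`."""
    return np.clip((y - low) / (high - low), 0.0, 1.0)

def overall_desirability(time_h, size_nm, yield_pct):
    d1 = d_smaller_is_better(time_h, 2.0, 8.0)       # milling time (h), assumed limits
    d2 = d_smaller_is_better(size_nm, 300.0, 800.0)  # d90 particle size (nm), assumed
    d3 = d_larger_is_better(yield_pct, 70.0, 98.0)   # process yield (%), assumed
    return (d1 * d2 * d3) ** (1.0 / 3.0)             # geometric mean

# Compare two hypothetical operating points predicted by the fitted models.
print(overall_desirability(4.5, 390.0, 92.0))
print(overall_desirability(6.0, 520.0, 95.0))
```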
Comparison of Acceleration Techniques for Selected Low-Level Bioinformatics Operations
Langenkämper, Daniel; Jakobi, Tobias; Feld, Dustin; Jelonek, Lukas; Goesmann, Alexander; Nattkemper, Tim W.
2016-01-01
Within recent years, clock rates of modern processors have stagnated while the demand for computing power continues to grow. This applies particularly to the fields of life sciences and bioinformatics, where new technologies keep creating rapidly growing piles of raw data with increasing speed. The number of cores per processor increased in an attempt to compensate for slight increments of clock rates. This technological shift demands changes in software development, especially in the field of high performance computing, where parallelization techniques are gaining in importance due to the pressing issue of large datasets generated by, e.g., modern genomics. This paper presents an overview of state-of-the-art manual and automatic acceleration techniques and lists some applications employing these in different areas of sequence informatics. Furthermore, we provide examples of automatic acceleration of two use cases to show typical problems and gains of transforming a serial application into a parallel one. The paper should aid the reader in deciding on a technique for the problem at hand. We compare four different state-of-the-art automatic acceleration approaches (OpenMP, PluTo-SICA, PPCG, and OpenACC). Their performance as well as their applicability for selected use cases is discussed. While optimizations targeting the CPU worked better in the complex k-mer use case, optimizers for Graphics Processing Units (GPUs) performed better in the matrix multiplication example. However, performance is only superior above a certain problem size, due to data migration overhead. We show that automatic code parallelization is feasible with current compiler software and yields significant increases in execution speed. Automatic optimizers for the CPU are mature, and usually no additional manual adjustment is required. In contrast, some automatic parallelizers targeting GPUs still lack maturity and are limited to simple statements and structures. PMID:26904094
Poirier, Frédéric J A M; Faubert, Jocelyn
2012-06-22
Facial expressions are important for human communication. Face perception studies often measure the impact of major degradations (e.g., noise, inversion, short presentations, masking, alterations) on natural expression recognition performance. Here, we introduce a novel face perception technique using rich and undegraded stimuli. Participants modified faces to create optimal representations of given expressions. Using sliders, participants adjusted 53 face components (including 37 dynamic), covering head, eye, eyebrow, mouth, and nose shape and position. Data were collected from six participants and 10 conditions (six emotions + pain + gender + neutral). Some expressions had unique features (e.g., a frown for anger, an upward-curved mouth for happiness), whereas others had shared features (e.g., open eyes and mouth for surprise and fear). Happiness was different from the other emotions. Surprise was different from the other emotions except fear. Weighted-sum morphing provides acceptable gender-neutral and dynamic stimuli. Many features were correlated, including (1) head size with internal feature sizes, as related to gender, (2) internal feature scaling, and (3) eyebrow height and eye openness, as related to surprise and fear. These findings demonstrate the method's validity for measuring optimal facial expressions, which we argue is a more direct measure of their internal representations.
Design of experiments for microencapsulation applications: A review.
Paulo, Filipa; Santos, Lúcia
2017-08-01
Microencapsulation techniques have been intensively explored by many research sectors, such as the pharmaceutical and food industries. Microencapsulation allows the active ingredient to be protected from the external environment, masks undesired flavours, and enables controlled release of compounds, among other benefits. The purpose of this review is to provide a background on design of experiments in the context of microencapsulation research. Optimization processes are required for accurate research in these fields and, therefore, for the right implementation of micro-sized techniques at industrial scale. This article critically reviews the use of response surface methodologies in pharmaceutical and food microencapsulation research. A survey of optimization procedures reported in the literature in the last few years is also presented. Copyright © 2017 Elsevier B.V. All rights reserved.
Singh, Gurjeet; Sharma, Shailesh; Gupta, Ghanshyam Das
2017-07-01
The present study emphasized the use of solid dispersion technology to overcome the drawbacks associated with the highly effective antihypertensive drug telmisartan, using different polymers (poloxamer 188 and locust bean gum) and methods (modified solvent evaporation and lyophilization). It is based on a comparison between the selected polymers and methods for enhancing solubility through particle size reduction. The results showed different particle size, solubility, and dissolution profiles for the formulated amorphous systems, depicting the great influence of the polymer/method used. The resulting amorphous solid dispersions were characterized using x-ray diffraction (XRD), differential scanning calorimetry, scanning electron microscopy (SEM), transmission electron microscopy (TEM), and particle size analysis. The optimized solid dispersion (TEL 19), prepared with modified locust bean gum using the lyophilization technique, showed a reduced particle size of 184.5 ± 3.7 nm and a maximum solubility of 702 ± 5.47 μg/mL in water, which is quite high compared to the pure drug (≤1 μg/mL). This study showed that the appropriate selection of carrier may lead to the development of solid dispersion formulations with the desired solubility and dissolution profiles. The optimized dispersion was later formulated into fast-dissolving tablets, and further optimization was done to obtain tablets with the desired properties.
Manga, Mohamed S; York, David W
2017-09-12
Stirred cell membrane emulsification (SCME) has been employed to prepare concentrated Pickering oil-in-water emulsions solely stabilized by fumed silica nanoparticles. The optimal conditions under which highly stable, low-polydispersity concentrated emulsions can be produced using the SCME approach are highlighted. Optimization of the oil flux rates and paddle stirrer speeds is critical to achieving control over the droplet size and size distribution. Investigating the influence of the oil volume fraction highlights the criticality of the initial particle loading in the continuous phase for the final droplet size and polydispersity. At a particle loading of 4 wt %, both the droplet size and polydispersity increase as the oil volume fraction rises above 50%. As more interfacial area is produced, the number of particles available in the continuous phase diminishes, and the kinetics of particle adsorption to the interface are correspondingly reduced, resulting in larger, more polydisperse droplets. Increasing the particle loading to 10 wt % leads to significant improvements in both size and polydispersity, with oil volume fractions as high as 70% produced with coefficient of variation values as low as ~30%, compared to ~75% using conventional homogenization techniques.
Alternative Constraint Handling Technique for Four-Bar Linkage Path Generation
NASA Astrophysics Data System (ADS)
Sleesongsom, S.; Bureerat, S.
2018-03-01
This paper proposes an extension of the path-generation concept from our previous work by adding a new constraint handling technique. The proposed technique was initially designed for problems without prescribed timing by avoiding the timing constraint, while the remaining constraints are handled with a new constraint handling technique, a kind of penalty method. In the comparative study, path generation optimisation problems are solved using self-adaptive population size teaching-learning based optimization (SAP-TLBO) and the original TLBO. Two traditional path generation test problems are used to test the proposed technique. The results show that the new technique can be applied to path generation problems without prescribed timing and gives better results than the previous technique. Furthermore, SAP-TLBO outperforms the original TLBO.
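A static penalty of the kind described, where infeasibility is added to the objective so that an unconstrained optimizer such as TLBO can rank candidates directly, looks generically like the sketch below. The linkage objective, constraint, and parameter ranges are placeholders, not the paper's formulation.

```python
import numpy as np

def penalized(objective, constraints, rho=1e3):
    """Wrap a minimization objective with a quadratic penalty on
    violated inequality constraints g_i(x) <= 0."""
    def f(x):
        violation = sum(max(g(x), 0.0) ** 2 for g in constraints)
        return objective(x) + rho * violation
    return f

# Placeholder path-generation stand-ins: a tracking error to minimize,
# plus a Grashof-type feasibility condition expressed as g(x) <= 0.
tracking_error = lambda x: np.sum((x - np.array([1.0, 2.0, 0.5])) ** 2)
grashof = lambda x: (x[0] + x[2]) - (x[1] + 1.5)   # hypothetical constraint

f = penalized(tracking_error, [grashof])
candidates = np.random.default_rng(3).uniform(0.0, 3.0, (500, 3))
best = min(candidates, key=f)   # any population-based optimizer could rank with f
print("best candidate:", best, "penalized value:", f(best))
```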
Fabrication of aluminum-carbon composites
NASA Technical Reports Server (NTRS)
Novak, R. C.
1973-01-01
A screening, optimization, and evaluation program for unidirectional carbon-aluminum composites is reported. During the screening phase, both large-diameter monofilament and small-diameter multifilament reinforcements were utilized to determine optimum precursor tape making and consolidation techniques. Difficulty was encountered in impregnating and consolidating the multifiber reinforcements. Large-diameter monofilament reinforcement was found easier to fabricate into composites and was selected for the optimization phase, in which the hot pressing parameters were refined and the size of the fabricated panels was scaled up. After process optimization, the mechanical properties of the carbon-aluminum composites were characterized in tension, stress-rupture and creep, mechanical fatigue, thermal fatigue, thermal aging, thermal expansion, and impact.
Sample preparation techniques for the determination of trace residues and contaminants in foods.
Ridgway, Kathy; Lalljie, Sam P D; Smith, Roger M
2007-06-15
The determination of trace residues and contaminants in complex matrices, such as food, often requires extensive sample extraction and preparation prior to instrumental analysis. Sample preparation is often the bottleneck in analysis, and there is a need to minimise the number of steps to reduce both time and sources of error. There is also a move towards more environmentally friendly techniques, which use less solvent and smaller sample sizes. Smaller sample size becomes important when dealing with real-life problems, such as consumer complaints and alleged chemical contamination. Optimal sample preparation can reduce analysis time and sources of error, enhance sensitivity, and enable unequivocal identification, confirmation and quantification. This review considers all aspects of sample preparation, covering general extraction techniques, such as Soxhlet and pressurised liquid extraction, microextraction techniques such as liquid phase microextraction (LPME), and more selective techniques, such as solid phase extraction (SPE), solid phase microextraction (SPME) and stir bar sorptive extraction (SBSE). The applicability of each technique in food analysis, particularly for the determination of trace organic contaminants in foods, is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorman, A; Seabrook, G; Brakken, A
Purpose: Small surgical devices and needles are used in many surgical procedures. Conventionally, an x-ray film is taken to identify missing devices/needles if the post-procedure count is incorrect. There are no data to indicate the smallest surgical devices/needles that can be identified with digital radiography (DR), or its optimized acquisition technique. Methods: In this study, the DR equipment used is a Canon RadPro mobile with a CXDI-70c wireless DR plate, and the same DR plate on a fixed Siemens Multix unit. The small surgical devices and needles tested include Rubber Shod, Bulldog, Fogarty Hydrogrip, and needles with sizes 3-0 C-T1 through 8-0 BV175-6. They are imaged with PMMA block phantoms with thicknesses of 2-8 inches, and an abdomen phantom. Various DR techniques are used. Images are reviewed on the portable x-ray acquisition display, a clinical workstation, and a diagnostic workstation. Results: All small surgical devices and needles are visible in portable DR images with 2-8 inches of PMMA. However, when they are imaged with the abdomen phantom plus 2 inches of PMMA, needles smaller than 9.3 mm in length cannot be visualized at the optimized technique of 81 kV and 16 mAs. There is no significant difference in visualization with various techniques, or between the mobile and fixed radiography units. However, there is a noticeable difference in visualizing the smallest needle on a diagnostic reading workstation compared to the acquisition display on a portable x-ray unit. Conclusion: DR images should be reviewed on a diagnostic reading workstation. Using optimized DR techniques, the smallest needle that can be identified in all phantom studies is 9.3 mm. Sample DR images of various small surgical devices/needles available on the diagnostic workstation for comparison may improve their identification. Further in vivo study is needed to confirm the optimized digital radiography technique for identification of lost small surgical devices and needles.
School Cost Functions: A Meta-Regression Analysis
ERIC Educational Resources Information Center
Colegrave, Andrew D.; Giles, Margaret J.
2008-01-01
The education cost literature includes econometric studies attempting to determine economies of scale, or estimate an optimal school or district size. Not only do their results differ, but the studies use dissimilar data, techniques, and models. To derive value from these studies requires that the estimates be made comparable. One method to do…
Oral controlled release optimization of pellets prepared by extrusion-spheronization processing.
Bianchini, R; Vecchio, C
1989-06-01
Controlled release high dosage forms of a typical drug such as indobufen were prepared as multiple-unit doses by employing extrusion-spheronization processing and subsequent film coating operations. The effects of drug particle size, drug/binder ratio, extruder screen size and preparation reproducibility on the physical properties of the spherical granules were evaluated. Controlled release optimization was carried out on the same granules by coating with polymeric membranes of different thicknesses consisting of water-soluble and insoluble substances. The film coating was applied from an organic solution using a pan coating technique. Drug diffusion is enabled by dissolution of the soluble part of the membrane, leaving small channels in the polymer coat. Further preparations were conducted to evaluate coatings applied from an aqueous dispersion (pseudolatex) using an air suspension coating technique. In this system, drug diffusion is governed by the intrinsic pore network of the membrane. The most promising preparations, having the desired in vitro release, were metered into hard capsules to obtain the drug unit dosage. Accelerated stability tests were carried out to assess the influence of time and the other storage parameters on the drug release profile.
Multiple-hopping trajectories near a rotating asteroid
NASA Astrophysics Data System (ADS)
Shen, Hong-Xin; Zhang, Tian-Jiao; Li, Zhao; Li, Heng-Nian
2017-03-01
We present a study of the transfer orbits connecting landing points of irregularly shaped asteroids. The landing points do not touch the surface of the asteroids and are chosen several meters above the surface. The ant colony optimization technique is used to calculate multiple-hopping trajectories near an arbitrary irregular asteroid. This new method has three steps: (1) the search for the maximal clique of candidate target landing points; (2) leg optimization connecting all landing point pairs; and (3) hopping sequence optimization. In particular, this method is applied to asteroids 433 Eros and 216 Kleopatra. We impose a critical constraint on the target landing points to allow for extensive exploration of the asteroid: the relative distance between all the visited target positions should be larger than a minimum allowed value. Ant colony optimization is applied to find the set and sequence of targets, and the differential evolution algorithm is used to solve for the hopping orbits. The minimum velocity-increment tours of hopping trajectories connecting all the landing positions are obtained by ant colony optimization. The results for asteroids of different sizes indicate that the cost of the minimum velocity-increment tour depends on the size of the asteroid.
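Step (3) is a traveling-salesman-style sequencing problem over the selected landing points, which is what the ant colony machinery addresses. Below is a compact, generic ACO for visit-order optimization; the random points, Euclidean "hop cost" proxy, and parameter values are arbitrary, whereas the study couples the sequencing with differential-evolution hop optimization.

```python
import numpy as np

rng = np.random.default_rng(4)
pts = rng.uniform(0.0, 1.0, (8, 3))                      # hypothetical landing points
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)  # pairwise hop-cost proxy

n, ants, iters, alpha, beta, rho = len(pts), 20, 100, 1.0, 2.0, 0.1
tau = np.ones((n, n))                                    # pheromone trails
best_len, best_tour = np.inf, None

for _ in range(iters):
    for _ in range(ants):
        tour = [rng.integers(n)]                         # random start point
        while len(tour) < n:
            i = tour[-1]
            mask = np.ones(n, bool)
            mask[tour] = False                           # exclude visited points
            w = (tau[i] ** alpha) * ((1.0 / (D[i] + 1e-12)) ** beta)
            w[~mask] = 0.0
            tour.append(int(rng.choice(n, p=w / w.sum())))
        length = sum(D[tour[k], tour[k + 1]] for k in range(n - 1))
        if length < best_len:
            best_len, best_tour = length, tour
    tau *= (1.0 - rho)                                   # pheromone evaporation
    for k in range(n - 1):                               # reinforce best tour found
        tau[best_tour[k], best_tour[k + 1]] += 1.0 / best_len

print("best visiting order:", best_tour, "cost:", round(best_len, 3))
```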
NASA Astrophysics Data System (ADS)
Triplett, Michael D.; Rathman, James F.
2009-04-01
Using statistical experimental design methodologies, the solid lipid nanoparticle design space was found to be more robust than previously shown in the literature. Formulation and high-shear homogenization process effects on solid lipid nanoparticle size distribution, stability, drug loading, and drug release have been investigated. Experimentation indicated stearic acid as the optimal lipid, sodium taurocholate as the optimal cosurfactant, an optimum lecithin to sodium taurocholate ratio of 3:1, and an inverse relationship of mixing time and speed with nanoparticle size and polydispersity. Having defined the base solid lipid nanoparticle system, β-carotene was incorporated into stearic acid nanoparticles to investigate the effects of introducing a drug into the base system. The presence of β-carotene produced a significant effect on the optimal formulation and process conditions, but the design space was found to be robust enough to accommodate the drug. β-Carotene entrapment efficiency averaged 40%. β-Carotene was retained in the nanoparticles for 1 month. As demonstrated herein, solid lipid nanoparticle technology can be sufficiently robust from a design standpoint to become commercially viable.
Joint optimization of source, mask, and pupil in optical lithography
NASA Astrophysics Data System (ADS)
Li, Jia; Lam, Edmund Y.
2014-03-01
Mask topography effects need to be taken into consideration for more advanced resolution enhancement techniques in optical lithography. However, a rigorous 3D mask model achieves high accuracy at a large computational cost. This work develops a combined source, mask and pupil optimization (SMPO) approach, taking advantage of the fact that pupil phase manipulation can partially compensate for mask topography effects. We first design the pupil wavefront function by incorporating primary and secondary spherical aberration through the coefficients of the Zernike polynomials, and achieve the optimal source-mask pair under the condition of an aberrated pupil. Evaluations against conventional source-mask optimization (SMO) without pupil aberrations show that SMPO provides improved performance in terms of pattern fidelity and process window size.
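The rotationally symmetric Zernike terms used for the pupil wavefront can be written out directly. The sketch below follows the commonly used Noll-normalized radial polynomials for primary and secondary spherical aberration (an assumed convention, since normalizations vary) with arbitrary coefficients.

```python
import numpy as np

def spherical_aberration_phase(rho, c_primary, c_secondary):
    """Pupil phase (in waves) over normalized radius rho in [0, 1] built from
    the rotationally symmetric Zernike terms (Noll-normalized, assumed):
      primary spherical   Z(4,0): sqrt(5) * (6 rho^4 - 6 rho^2 + 1)
      secondary spherical Z(6,0): sqrt(7) * (20 rho^6 - 30 rho^4 + 12 rho^2 - 1)
    """
    z40 = np.sqrt(5.0) * (6 * rho**4 - 6 * rho**2 + 1)
    z60 = np.sqrt(7.0) * (20 * rho**6 - 30 * rho**4 + 12 * rho**2 - 1)
    return c_primary * z40 + c_secondary * z60

rho = np.linspace(0.0, 1.0, 5)
print(spherical_aberration_phase(rho, c_primary=0.05, c_secondary=-0.02))
```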
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, J; Sisniega, A; Zbijewski, W
Purpose: To design a dedicated x-ray cone-beam CT (CBCT) system suitable for deployment at the point of care and offering reliable detection of acute intracranial hemorrhage (ICH), traumatic brain injury (TBI), stroke, and other head and neck injuries. Methods: A comprehensive task-based image quality model was developed to guide system design and optimization of a prototype head scanner suitable for imaging of acute TBI and ICH. Previously reported models were expanded to include the effects of the x-ray scatter correction necessary for detection of low-contrast ICH and the contribution of bit depth (digitization noise) to imaging performance. The task-based detectability index provided the objective function for optimization of system geometry, x-ray source, detector type, anti-scatter grid, and technique at 10-25 mGy dose. Optimal characteristics were experimentally validated using a custom head phantom with 50 HU contrast ICH inserts imaged on a CBCT imaging bench allowing variation of system geometry, focal spot size, detector, grid selection, and x-ray technique. Results: The model guided selection of a system geometry with a nominal source-detector distance of 1100 mm and an optimal magnification of 1.50. A focal spot size of ~0.6 mm was sufficient for the spatial resolution requirements of ICH detection. Imaging at 90 kVp yielded the best tradeoff between noise and contrast. The model quantified tradeoffs between flat-panel and CMOS detectors with respect to electronic noise, field of view, and readout speed required for imaging of ICH. An anti-scatter grid was shown to provide modest benefit in conjunction with post-acquisition scatter correction. Images of the head phantom demonstrate visualization of millimeter-scale simulated ICH. Conclusions: Performance consistent with acute TBI and ICH detection is feasible with model-based system design and robust artifact correction in a dedicated head CBCT system. Further improvements can be achieved with the incorporation of model-based iterative reconstruction techniques, also within the scope of the task-based optimization framework. David Foos and Xiaohui Wang are employees of Carestream Health.
Montoro Bustos, Antonio R; Petersen, Elijah J; Possolo, Antonio; Winchester, Michael R
2015-09-01
Single particle inductively coupled plasma-mass spectrometry (spICP-MS) is an emerging technique that enables simultaneous measurement of nanoparticle size and number quantification of metal-containing nanoparticles at realistic environmental exposure concentrations. Such measurements are needed to understand the potential environmental and human health risks of nanoparticles. Before spICP-MS can be considered a mature methodology, additional work is needed to standardize this technique, including an assessment of the reliability and variability of size distribution measurements and the transferability of the technique among laboratories. This paper presents the first post hoc interlaboratory comparison study of the spICP-MS technique. Measurement results provided by six expert laboratories for two National Institute of Standards and Technology (NIST) gold nanoparticle reference materials (RM 8012 and RM 8013) were employed. The general agreement in particle size between spICP-MS measurements and measurements by six reference techniques demonstrates the reliability of spICP-MS and validates its sizing capability. However, the precision of the spICP-MS measurement was better for the larger 60 nm gold nanoparticles, and evaluation of spICP-MS precision indicates substantial variability among laboratories, with lower variability between operators within laboratories. Global particle number concentration and Au mass concentration recovery were quantitative for RM 8013, but significantly lower and more variable for RM 8012. Statistical analysis did not suggest an optimal dwell time, because this parameter did not significantly affect either the measured mean particle size or the ability to count nanoparticles. Finally, the spICP-MS data were often best fit with several single non-Gaussian distributions or mixtures of Gaussian distributions, rather than the more frequently used normal or log-normal distributions.
NASA Astrophysics Data System (ADS)
Sharqawy, Mostafa H.
2016-12-01
Pore network models (PNMs) of Berea and Fontainebleau sandstones were constructed using nonlinear programming (NLP) and optimization methods. The constructed PNMs are considered a digital representation of the rock samples, built by matching the macroscopic properties of the porous media, and were used to conduct fluid transport simulations including single- and two-phase flow. The PNMs consisted of cubic networks of randomly distributed pore and throat sizes with various connectivity levels. The networks were optimized such that the upper and lower bounds of the pore sizes are determined using the capillary tube bundle model and the Nelder-Mead method instead of guessing them, which significantly reduces the optimization time. An open-source PNM framework was employed to conduct transport and percolation simulations such as invasion percolation and Darcian flow. The PNM was subsequently used to compute the macroscopic properties: porosity, absolute permeability, specific surface area, breakthrough capillary pressure, and primary drainage curve. The pore networks were optimized so that the simulated macroscopic properties are in excellent agreement with the experimental measurements. This study demonstrates that nonlinear programming and optimization methods provide a promising approach to pore network modeling when computed tomography imaging is not readily available.
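As a hedged sketch of the bounding step described above, the snippet below uses Nelder-Mead to choose pore-radius bounds in a capillary tube bundle model so that the model's permeability matches a measured value. The target values, the uniform radius distribution, and the error metric are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

phi_meas, k_meas = 0.20, 1.0e-12  # assumed porosity (-) and permeability (m^2)

def bundle_error(bounds, n_tubes=10000, seed=0):
    r_lo, r_hi = np.abs(bounds)           # keep radii positive
    if r_lo >= r_hi:
        return 1e6                        # penalize inverted bounds
    rng = np.random.default_rng(seed)
    r = rng.uniform(r_lo, r_hi, n_tubes)  # uniformly distributed tube radii
    # Hagen-Poiseuille bundle: k = phi * <r^4> / (8 <r^2>)
    k_model = phi_meas * np.mean(r**4) / (8.0 * np.mean(r**2))
    return (np.log(k_model) - np.log(k_meas)) ** 2

res = minimize(bundle_error, x0=[1e-6, 50e-6], method="Nelder-Mead")
print("optimized radius bounds (m):", np.abs(res.x))
```

The optimized bounds would then seed the random pore and throat sizes of the network, avoiding the trial-and-error guessing the abstract mentions.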
NASA Astrophysics Data System (ADS)
Protopopescu, V.; D'Helon, C.; Barhen, J.
2003-06-01
A constant-time solution of the continuous global optimization problem (GOP) is obtained by using an ensemble algorithm. We show that under certain assumptions, the solution can be guaranteed by mapping the GOP onto a discrete unsorted search problem, whereupon Brüschweiler's ensemble search algorithm is applied. For adequate sensitivities of the measurement technique, the query complexity of the ensemble search algorithm depends linearly on the size of the function's domain. Advantages and limitations of an eventual NMR implementation are discussed.
Medial-based deformable models in nonconvex shape-spaces for medical image segmentation.
McIntosh, Chris; Hamarneh, Ghassan
2012-01-01
We explore the application of genetic algorithms (GA) to deformable models through the proposition of a novel method for medical image segmentation that combines GA with nonconvex, localized, medial-based shape statistics. We replace the more typical gradient descent optimizer used in deformable models with GA, and the convex, implicit, global shape statistics with nonconvex, explicit, localized ones. Specifically, we propose GA to reduce typical deformable model weaknesses pertaining to model initialization, pose estimation and local minima, through the simultaneous evolution of a large number of models. Furthermore, we constrain the evolution, and thus reduce the size of the search-space, by using statistically-based deformable models whose deformations are intuitive (stretch, bulge, bend) and are driven in terms of localized principal modes of variation, instead of modes of variation across the entire shape that often fail to capture localized shape changes. Although GA are not guaranteed to achieve the global optima, our method compares favorably to the prevalent optimization techniques, convex/nonconvex gradient-based optimizers and to globally optimal graph-theoretic combinatorial optimization techniques, when applied to the task of corpus callosum segmentation in 50 mid-sagittal brain magnetic resonance images.
NASA Astrophysics Data System (ADS)
Espitia, Paula Judith Perez; Soares, Nilda de Fátima Ferreira; Teófilo, Reinaldo F.; Vitor, Débora M.; Coimbra, Jane Sélia dos Reis; de Andrade, Nélio José; de Sousa, Frederico B.; Sinisterra, Rubén D.; Medeiros, Eber Antonio Alves
2013-01-01
Single primary nanoparticles of zinc oxide (nanoZnO) tend to form particle collectives, resulting in loss of antimicrobial activity. This work studied the effects of probe sonication conditions (power, time, and the presence of a dispersing agent, Na4P2O7) on the size of nanoZnO particles. The nanoZnO dispersion was optimized by response surface methodology (RSM) and characterized by the zeta potential (ZP) technique. NanoZnO antimicrobial activity was investigated at different concentrations (1, 5, and 10% w/w) against four foodborne pathogens and four spoilage microorganisms. The presence of the dispersing agent had a significant effect on the size of dispersed nanoZnO. The minimum size after sonication was 238 nm. Optimal dispersion was achieved at 200 W for 45 min of sonication in the presence of the dispersing agent. ZP analysis indicated that the ZnO nanoparticle surface charge was altered by the addition of the dispersing agent and by changes in pH. At the tested concentrations and optimal dispersion, nanoZnO had no antimicrobial activity against Pseudomonas aeruginosa, Lactobacillus plantarum, and Listeria monocytogenes. However, it did have antimicrobial activity against Escherichia coli, Salmonella choleraesuis, Staphylococcus aureus, Saccharomyces cerevisiae, and Aspergillus niger. Based on the antimicrobial activity of optimized nanoZnO against some foodborne pathogens and spoilage microorganisms, nanoZnO is a promising antimicrobial for food preservation, with potential application in polymers intended as food-contact surfaces.
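The RSM workflow above amounts to fitting a quadratic response surface to designed experiments and reading off the optimum. A minimal sketch under invented data follows; the design points and sizes are placeholders, not the study's measurements.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# (power W, time min) -> measured particle size (nm); synthetic stand-in data
X = np.array([[100, 15], [100, 45], [200, 15], [200, 45], [150, 30]])
y = np.array([420.0, 310.0, 330.0, 238.0, 290.0])

# Quadratic (second-order) response surface, the standard RSM model form.
model = LinearRegression().fit(PolynomialFeatures(2).fit_transform(X), y)

# Evaluate the fitted surface on a grid and report the predicted optimum.
pw, tm = np.meshgrid(np.linspace(100, 200, 51), np.linspace(15, 45, 31))
grid = np.column_stack([pw.ravel(), tm.ravel()])
pred = model.predict(PolynomialFeatures(2).fit_transform(grid))
best = grid[np.argmin(pred)]
print("predicted minimum size at power=%.0f W, time=%.0f min" % tuple(best))
```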
Coelho, Pedro G; Hollister, Scott J; Flanagan, Colleen L; Fernandes, Paulo R
2015-03-01
Bone scaffolds for tissue regeneration require an optimal trade-off between biological and mechanical criteria. Optimal designs may be obtained using topology optimization (homogenization approach) and prototypes produced using additive manufacturing techniques. However, the process from design to manufacture remains a research challenge and will be a requirement of FDA design controls for engineered scaffolds. This work investigates how the design-to-manufacture chain affects the reproducibility of complex optimized design characteristics in the manufactured product. The design and prototypes are analyzed taking into account the computational assumptions and the final mechanical properties determined through mechanical tests. The scaffold is an assembly of unit cells, so scale-size effects on the mechanical response under finite periodicity are investigated and compared with the predictions of the homogenization method, which assumes infinitely repeated unit cells in the limit. Results show that a limited number of unit cells (3-5 repeated on a side) introduces some scale effects, but the discrepancies are below 10%. Higher discrepancies are found when comparing the experimental data to numerical simulations, due to differences between the manufactured and designed scaffold feature shapes and sizes as well as micro-porosities introduced by the manufacturing process. However, good regression correlations (R² > 0.85) were found between numerical and experimental values, with slopes close to 1 for 2 out of 3 designs. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
Badr-Eldin, Shaimaa M; Ahmed, Osamaa AA
2016-01-01
Sildenafil citrate (SLD) is a selective cyclic guanosine monophosphate-specific phosphodiesterase type 5 inhibitor used for the oral treatment of erectile dysfunction and, more recently, for other indications, including pulmonary hypertension. The challenges facing the oral administration of the drug include poor bioavailability and a short duration of action that requires frequent administration. Thus, the objective of this work was to formulate optimized SLD nano-transfersomal transdermal films with enhanced and controlled permeation, aiming to surmount these challenges and hence improve the drug bioavailability. SLD nano-transfersomes were prepared using a modified lipid hydration technique. A central composite design was applied for the optimization of SLD nano-transfersomes with minimized vesicular size. The independent variables studied were drug-to-phospholipid molar ratio, surfactant hydrophilic lipophilic balance, and hydration medium pH. The optimized SLD nano-transfersomes were developed and evaluated for vesicular size and morphology and then incorporated into hydroxypropyl methyl cellulose transdermal films. The optimized transfersomes were unilamellar and spherical in shape with a vesicular size of 130 nm. The optimized SLD nano-transfersomal films exhibited enhanced ex vivo permeation parameters with a controlled profile compared to SLD control films. Furthermore, enhanced bioavailability and extended absorption were demonstrated by SLD nano-transfersomal films, as reflected by their significantly higher maximum plasma concentration (Cmax) and area under the curve and longer time to maximum plasma concentration (Tmax) compared to control films. These results highlight the potential of optimized SLD nano-transfersomal films to enhance the transdermal permeation and bioavailability of the drug, with the possible consequence of reducing the dose and administration frequency.
Interplanetary Program to Optimize Simulated Trajectories (IPOST). Volume 2: Analytic manual
NASA Technical Reports Server (NTRS)
Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.
1992-01-01
The Interplanetary Program to Optimize Space Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three degree of freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the Stanford NPSOL algorithm. The IPOST structure allows subproblems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.
Selection of optimal sensors for predicting performance of polymer electrolyte membrane fuel cell
NASA Astrophysics Data System (ADS)
Mao, Lei; Jackson, Lisa
2016-10-01
In this paper, sensor selection algorithms based on a sensitivity analysis are investigated, and the capability of the optimal sensors to predict PEM fuel cell performance is studied using test data. A fuel cell model is developed to generate the sensitivity matrix relating sensor measurements to fuel cell health parameters. From the sensitivity matrix, two sensor selection approaches, the largest-gap method and an exhaustive brute-force search, are applied to find the optimal sensors providing reliable predictions. Based on the results, a sensor selection approach considering both sensor sensitivity and noise resistance is proposed to find the optimal sensor set of minimum size. Furthermore, the performance of the optimal sensor set in predicting fuel cell performance is studied using test data from a PEM fuel cell system. Results demonstrate that with the optimal sensors, PEM fuel cell performance can be predicted with good quality.
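A hedged sketch of the exhaustive brute-force search mentioned above: choose the sensor subset whose rows of the sensitivity matrix make the health parameters most identifiable. The sensitivity matrix and scoring criterion (smallest singular value) are illustrative assumptions, not the paper's specific formulation.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
S = rng.normal(size=(8, 3))   # 8 candidate sensors x 3 health parameters (synthetic)

def subset_score(rows):
    # Smallest singular value of the reduced sensitivity matrix: larger
    # values mean the parameters are better conditioned for estimation.
    return np.linalg.svd(S[list(rows)], compute_uv=False)[-1]

k = 4  # desired sensor-set size
best = max(combinations(range(S.shape[0]), k), key=subset_score)
print("selected sensors:", best, "score: %.3f" % subset_score(best))
```

The brute-force search is exact but exponential in the number of candidate sensors, which is why the largest-gap heuristic is attractive for larger sensor pools.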
Optimization of few-mode-fiber based mode converter for mode division multiplexing transmission
NASA Astrophysics Data System (ADS)
Xie, Yiwei; Fu, Songnian; Zhang, Minming; Tang, M.; Shum, P.; Liu, Deming
2013-10-01
Few-mode-fiber (FMF) based mode division multiplexing (MDM) is a promising technique for increasing transmission capacity beyond that of single-mode fibers. We propose and numerically investigate a fiber-optic mode converter (MC) using long-period gratings (LPGs) fabricated on the FMF by a point-by-point CO2 laser inscription technique. In order to precisely excite three modes (LP01, LP11, and LP02), both untilted and tilted LPGs are comprehensively optimized with respect to the length, index modulation depth, and tilt angle of the LPG, in order to achieve a mode contrast ratio (MCR) of more than 20 dB with low wavelength dependence. It is found that the proposed MCs have clear advantages: high MCR, low mode crosstalk, easy fabrication and maintenance, and compact size.
A novel approach for dimension reduction of microarray.
Aziz, Rabia; Verma, C K; Srivastava, Namita
2017-12-01
This paper proposes a new hybrid search technique for feature (gene) selection (FS) using Independent Component Analysis (ICA) and the Artificial Bee Colony (ABC) algorithm, called ICA+ABC, to select informative genes for a Naïve Bayes (NB) classifier. An important trait of this technique is the optimization of the ICA feature vector using ABC. ICA+ABC is a hybrid search algorithm that combines the benefits of an extraction approach, to reduce the size of the data, with a wrapper approach, to optimize the reduced feature vectors. The technique is evaluated on six standard gene expression classification datasets. Extensive experiments were conducted to compare the performance of ICA+ABC with the results of the recently published Minimum Redundancy Maximum Relevance (mRMR)+ABC algorithm for the NB classifier. To further assess how ICA+ABC performs as a feature selector with the NB classifier, the combination of ICA with popular filter techniques and with other bio-inspired algorithms, such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), was also compared. The results show that ICA+ABC has a significant ability to generate small subsets of genes from the ICA feature vector that markedly improve the classification accuracy of the NB classifier compared to previously suggested methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
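The extraction-plus-wrapper idea can be illustrated compactly: ICA reduces the gene space, then a wrapper search picks components that maximize Naive Bayes cross-validated accuracy. In this sketch a simple greedy forward search stands in for the ABC optimizer, and the dataset is synthetic; both are assumptions for demonstration only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import FastICA
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Stand-in for a microarray dataset: many features, few samples.
X, y = make_classification(n_samples=100, n_features=500, n_informative=20,
                           random_state=0)
Z = FastICA(n_components=20, random_state=0).fit_transform(X)  # extraction step

selected, remaining, best_acc = [], list(range(Z.shape[1])), 0.0
while remaining:
    # Wrapper step: try adding each remaining component, keep the best.
    scores = [(cross_val_score(GaussianNB(), Z[:, selected + [j]], y,
                               cv=5).mean(), j) for j in remaining]
    acc, j = max(scores)
    if acc <= best_acc:
        break                      # stop when no component improves accuracy
    best_acc = acc
    selected.append(j)
    remaining.remove(j)
print("selected components:", selected, "CV accuracy: %.3f" % best_acc)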
NASA Astrophysics Data System (ADS)
Koziel, Slawomir; Bekasiewicz, Adrian
2018-02-01
In this article, a simple yet efficient and reliable technique for fully automated multi-objective design optimization of antenna structures using sequential domain patching (SDP) is discussed. The optimization procedure according to SDP is a two-step process: (i) obtaining the initial set of Pareto-optimal designs representing the best possible trade-offs between considered conflicting objectives, and (ii) Pareto set refinement for yielding the optimal designs at the high-fidelity electromagnetic (EM) simulation model level. For the sake of computational efficiency, the first step is realized at the level of a low-fidelity (coarse-discretization) EM model by sequential construction and relocation of small design space segments (patches) in order to create a path connecting the extreme Pareto front designs obtained beforehand. The second stage involves response correction techniques and local response surface approximation models constructed by reusing EM simulation data acquired in the first step. A major contribution of this work is an automated procedure for determining the patch dimensions. It allows for appropriate selection of the number of patches for each geometry variable so as to ensure reliability of the optimization process while maintaining its low cost. The importance of this procedure is demonstrated by comparing it with uniform patch dimensions.
Quantitative CT: technique dependence of volume estimation on pulmonary nodules
NASA Astrophysics Data System (ADS)
Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Colsher, James; Amurao, Maxwell; Samei, Ehsan
2012-03-01
Current estimation of lung nodule size typically relies on uni- or bi-dimensional techniques. While new three-dimensional volume estimation techniques using MDCT have improved size estimation of nodules with irregular shapes, the effect of acquisition and reconstruction parameters on accuracy (bias) and precision (variance) of the new techniques has not been fully investigated. To characterize the volume estimation performance dependence on these parameters, an anthropomorphic chest phantom containing synthetic nodules was scanned and reconstructed with protocols across various acquisition and reconstruction parameters. Nodule volumes were estimated by a clinical lung analysis software package, LungVCAR. Precision and accuracy of the volume assessment were calculated across the nodules and compared between protocols via a generalized estimating equation analysis. Results showed that the precision and accuracy of nodule volume quantifications were dependent on slice thickness, with different dependences for different nodule characteristics. Other parameters including kVp, pitch, and reconstruction kernel had lower impact. Determining these technique dependences enables better volume quantification via protocol optimization and highlights the importance of consistent imaging parameters in sequential examinations.
Bei, Yong-Yan; Zhou, Xiao-Feng; You, Ben-Gang; Yuan, Zhi-Qiang; Chen, Wei-Liang; Xia, Peng; Liu, Yang; Jin, Yong; Hu, Xiao-Juan; Zhu, Qiao-Ling; Zhang, Chun-Ge; Zhang, Xue-Nong; Zhang, Liang
2013-01-01
Lactose-palmitoyl-trimethyl-chitosan (Lac-TPCS), a novel amphipathic self-assembling polymer, was synthesized for the administration of insoluble drugs to reduce their adverse effects. A central composite design was used to study the preparation technique of harmine (HM)-loaded self-assembled micelles based on Lac-TPCS (Lac-TPCS/HM). Three preparation methods and single factors were screened, including solvent type, HM amount, hydration volume, and temperature. The optimal preparation technique was identified after investigating the influence of two independent factors, HM amount and hydration volume, on four indexes, i.e., encapsulation efficiency (EE), drug-loading amount (LD), particle size, and polydispersity index (PDI). Analysis of variance showed a high coefficient of determination of 0.916 to 0.994, ensuring a satisfactory fit of the predicted prescription. The maximum predicted values of the optimal prescription were 91.62%, 14.20%, 183.3 nm, and 0.214 for EE, LD, size, and PDI, respectively, when the HM amount was 1.8 mg and the hydration volume was 9.6 mL. HM-loaded micelles were successfully characterized by Fourier-transform infrared spectroscopy, differential scanning calorimetry, X-ray diffraction, and a fluorescence-quenching experiment. Sustained release of Lac-TPCS/HM reached 65.3% in 72 hours at pH 7.4, while free HM released about 99.7% under the same conditions.
NASA Technical Reports Server (NTRS)
Mu, Qiaozhen; Wu, Aisheng; Xiong, Xiaoxiong; Doelling, David R.; Angal, Amit; Chang, Tiejun; Bhatt, Rajendra
2017-01-01
MODIS reflective solar bands are calibrated on-orbit using a solar diffuser and near-monthly lunar observations. To monitor the performance and effectiveness of the on-orbit calibrations, pseudo-invariant targets such as deep convective clouds (DCCs), Libya-4, and Dome-C are used to track the long-term stability of the MODIS Level 1B product. However, the current MODIS operational DCC technique (DCCT) simply uses the criteria set for the 0.65-µm band. We optimize several critical DCCT parameters, including the 11-µm IR-band brightness temperature (BT11) threshold for DCC identification, the DCC core size and uniformity used to locate DCCs at convection centers, the data collection time interval, and the probability distribution function (PDF) bin increment for each channel. The mode reflectances corresponding to the PDF peaks are utilized as the DCC reflectances. Results show that the BT11 threshold and time interval are most critical for the shortwave infrared (SWIR) bands. The bidirectional reflectance distribution function model is most effective in reducing the DCC anisotropy for the visible channels. The uniformity filters and PDF bin size have minimal impact on the visible channels and a larger impact on the SWIR bands. The newly optimized DCCT will be used for future evaluation of MODIS on-orbit calibration by the MODIS Characterization Support Team.
Multiphase complete exchange: A theoretical analysis
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1993-01-01
Complete Exchange requires each of N processors to send a unique message to each of the remaining N-1 processors. For a circuit-switched hypercube with N = 2^d processors, the Direct and Standard algorithms for Complete Exchange are optimal for very large and very small message sizes, respectively. For intermediate sizes, a hybrid Multiphase algorithm is better. This carries out Direct exchanges on a set of subcubes whose dimensions are a partition of the integer d. The best such algorithm for a given message size m could hitherto only be found by enumerating all partitions of d. The Multiphase algorithm is analyzed assuming a high-performance communication network. It is proved that only algorithms corresponding to equipartitions of d (partitions in which the maximum and minimum elements differ by at most 1) can possibly be optimal. The run times of these algorithms plotted against m form a hull of optimality. It is proved that, although there is an exponential number of partitions, (1) the number of faces on this hull is Θ(√d), (2) the hull can be found in Θ(√d) time, and (3) once it has been found, the optimal algorithm for any given m can be found in Θ(log d) time. These results provide a very fast technique for minimizing communication overhead in many important applications, such as matrix transpose, fast Fourier transform, and ADI.
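An illustrative sketch of the equipartition idea: for each number of phases k there is exactly one equipartition of d, so sweeping k and evaluating a per-phase cost traces the hull. The cost model below, T = Σᵢ (2^dᵢ − 1)(t_s + t_b·m·2^(d−dᵢ)), is an assumption chosen only to reproduce the qualitative behavior (Direct best for large m, Standard best for small m); it is not the paper's validated model.

```python
def equipartition(d, k):
    q, r = divmod(d, k)
    return [q + 1] * r + [q] * (k - r)   # r parts of size q+1, k-r parts of size q

def cost(parts, m, d, t_s=100.0, t_b=1.0):
    # Assumed model: each phase on a subcube of dimension e needs 2^e - 1
    # exchanges, each carrying m * 2^(d-e) data at startup cost t_s.
    return sum((2**e - 1) * (t_s + t_b * m * 2**(d - e)) for e in parts)

d = 10                                    # hypercube dimension, N = 2^10
for m in (1, 64, 4096):                   # message sizes
    best_k = min(range(1, d + 1), key=lambda k: cost(equipartition(d, k), m, d))
    print("m=%5d -> optimal number of phases k=%d" % (m, best_k))
```

With these parameters the sweep recovers the expected endpoints: many single-dimension phases (the Standard algorithm) win for tiny messages, one full-dimension phase (the Direct algorithm) wins for large ones.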
Optimization of wave rotors for use as gas turbine engine topping cycles
NASA Technical Reports Server (NTRS)
Wilson, Jack; Paxson, Daniel E.
1995-01-01
Use of a wave rotor as a topping cycle for a gas turbine engine can improve specific power and reduce specific fuel consumption. Maximum improvement requires the wave rotor to be optimized for best performance at the mass flow of the engine. The optimization is a trade-off between losses due to friction and passage opening time, and rotational effects. An experimentally validated, one-dimensional CFD code, which includes these effects, has been used to calculate wave rotor performance, and find the optimum configuration. The technique is described, and results given for wave rotors sized for engines with sea level mass flows of 4, 26, and 400 lb/sec.
Eye aberration analysis with Zernike polynomials
NASA Astrophysics Data System (ADS)
Molebny, Vasyl V.; Chyzh, Igor H.; Sokurenko, Vyacheslav M.; Pallikaris, Ioannis G.; Naoumidis, Leonidas P.
1998-06-01
New horizons for accurate photorefractive sight correction, afforded by novel flying-spot technologies, require adequate measurement of the photorefractive properties of an eye. Existing techniques of eye refraction mapping yield measurements at a finite number of points in the eye aperture, which must then be approximated by a 3D surface. A technique of wavefront approximation with Zernike polynomials is described that optimizes the number of polynomial coefficients. The optimization criterion is the closest proximity of the resulting continuous surface to the values calculated at the given discrete points. The methodology includes statistical evaluation of the minimal root mean square deviation (RMSD) of transverse aberrations: the maximal coefficient indices of the Zernike polynomials are varied consecutively, the coefficients are recalculated, and the RMSD is computed; optimization terminates at the minimal value of RMSD. Formulas are given for computing ametropia and the size of the spot of light on the retina caused by spherical aberration, coma, and astigmatism. Results are illustrated with experimental data that could be of interest for other applications where detailed evaluation of eye parameters is needed.
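A minimal sketch of the approximation step, under assumptions: least-squares fitting of wavefront samples with a growing set of low-order Zernike terms, keeping the expansion order that minimizes the RMS deviation. The sample wavefront is synthetic and the term set is truncated for brevity.

```python
import numpy as np

# Low-order Zernike terms on the unit pupil (Cartesian form).
ZERNIKE = [
    lambda x, y: np.ones_like(x),              # piston
    lambda x, y: x,                            # tilt x
    lambda x, y: y,                            # tilt y
    lambda x, y: 2 * (x**2 + y**2) - 1,        # defocus
    lambda x, y: x**2 - y**2,                  # astigmatism 0/90
    lambda x, y: 2 * x * y,                    # astigmatism 45
    lambda x, y: (3 * (x**2 + y**2) - 2) * x,  # coma x
    lambda x, y: (3 * (x**2 + y**2) - 2) * y,  # coma y
]

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
rho = np.sqrt(rng.uniform(0, 1, 200))          # uniform sampling over the disk
x, y = rho * np.cos(theta), rho * np.sin(theta)
w = 0.5 * (2 * (x**2 + y**2) - 1) + 0.1 * (x**2 - y**2)  # synthetic wavefront

for n_terms in range(3, len(ZERNIKE) + 1):
    A = np.column_stack([z(x, y) for z in ZERNIKE[:n_terms]])
    coeffs, *_ = np.linalg.lstsq(A, w, rcond=None)
    rmsd = np.sqrt(np.mean((A @ coeffs - w) ** 2))
    print("terms=%d  RMSD=%.2e" % (n_terms, rmsd))
```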
Sparse Solutions for Single Class SVMs: A Bi-Criterion Approach
NASA Technical Reports Server (NTRS)
Das, Santanu; Oza, Nikunj C.
2011-01-01
In this paper we propose an innovative learning algorithm - a variation of the one-class nu Support Vector Machine (SVM) learning algorithm - that produces sparser solutions with much reduced computational complexity. The proposed technique returns an approximate solution, nearly as good as the solution set obtained by the classical approach, by minimizing the original risk function along with a regularization term. We introduce a bi-criterion optimization that helps guide the search towards the optimal set in much reduced time. The outcome of the proposed learning technique was compared with the benchmark one-class SVM algorithm, which more often leads to solutions with redundant support vectors. Throughout the analysis, the problem size for both optimization routines was kept consistent. We have tested the proposed algorithm on a variety of data sources under different conditions to demonstrate its effectiveness. In all cases the proposed algorithm closely preserves the accuracy of standard one-class nu SVMs while reducing both training time and test time by several factors.
Procedures for analysis of debris relative to Space Shuttle systems
NASA Technical Reports Server (NTRS)
Kim, Hae Soo; Cummings, Virginia J.
1993-01-01
Debris samples collected from various Space Shuttle systems have been submitted to the Microchemical Analysis Branch. This investigation was initiated to develop optimal techniques for the analysis of debris. Optical microscopy provides information about the morphology and size of crystallites, particle sizes, amorphous phases, glass phases, and poorly crystallized materials. Scanning electron microscopy with energy dispersive spectrometry is utilized for information on surface morphology and qualitative elemental content of debris. Analytical electron microscopy with wavelength dispersive spectrometry provides information on the quantitative elemental content of debris.
Spacecraft configuration study for second generation mobile satellite system
NASA Technical Reports Server (NTRS)
Louie, M.; Vonstentzsch, W.; Zanella, F.; Hayes, R.; Mcgovern, F.; Tyner, R.
1985-01-01
A high-power, high-performance communications satellite bus being developed is designed to satisfy a broad range of multimission payload requirements in a cost-effective manner and is compatible with both STS and expendable launchers. Results are presented of tradeoff studies conducted to optimize the second generation mobile satellite system for mass, power, and physical size. Investigations of the 20-meter antenna configuration, transponder linearization techniques, needed spacecraft modifications, and spacecraft power, dissipation, mass, and physical size indicate that the advanced spacecraft bus is capable of supporting the required payload for the satellite.
Ahmad, Iqbal; Akhter, Sohail; Anwar, Mohammed; Zafar, Sobiya; Sharma, Rakesh Kumar; Ali, Asgar; Ahmad, Farhan Jalees
2017-05-15
The aim of this study was to develop Thymoquinone (TQ)-loaded PEGylated liposomes using a supercritical anti-solvent (SAS) process for enhanced blood circulation and greater radioprotection. The SAS process for PEGylated liposome synthesis was optimized by a Box-Behnken design. Spherical liposomes with a particle size of 195.6 ± 5.56 nm and entrapment efficiency (%EE) of 89.4 ± 3.69% were obtained. The optimized SAS process parameters, temperature, pressure, and solution flow rate, were 35 °C, 140 bar, and 0.18 mL/min, respectively, while 7.5 mmol phospholipid, 0.75 mmol cholesterol, and 1 mmol TQ were the optimized formulation ingredients. Incorporation of MPEG-2000-DSPE (5% w/w) provided the PEGylated liposomes (FV-17B; particle size = 231.3 ± 6.74 nm, %EE = 91.9 ± 3.45%, maximum TQ release >70% in 24 h). Pharmacokinetics of FV-17B in mice demonstrated distinctly superior systemic circulation time for TQ in plasma. The effectiveness of radioprotection by FV-17B in a mouse model was demonstrated by non-significant body weight change, normal vital blood components (WBCs, RBCs, and platelets), normal micronuclei and spleen index, and increased survival probability in the post-irradiation animal group as compared to controls (plain TQ and a marketed formulation). Altogether, the results indicate that the SAS process could serve as a single-step, environmentally friendly technique for the development of stable long-circulating TQ-loaded liposomes for effective radioprotection. Copyright © 2017 Elsevier B.V. All rights reserved.
Helgeson, Melvin D; Kang, Daniel G; Lehman, Ronald A; Dmitriev, Anton E; Luhmann, Scott J
2013-08-01
There is currently no reliable technique for intraoperative assessment of pedicle screw fixation strength and optimal screw size. Several studies have evaluated pedicle screw insertional torque (IT) and its direct correlation with pullout strength. However, pedicle screw IT has limited clinical application, as it must be measured during screw placement and rarely causes the spine surgeon to change screw size. To date, no study has evaluated tapping IT, which precedes screw insertion, and its ability to predict pedicle screw pullout strength. The objective of this study was to investigate tapping IT and its ability to predict pedicle screw pullout strength and optimal screw size. In vitro human cadaveric biomechanical analysis. Twenty fresh-frozen human cadaveric thoracic vertebral levels were prepared and scanned with dual-energy radiographic absorptiometry for bone mineral density (BMD). All specimens were osteoporotic, with a mean BMD of 0.60 ± 0.07 g/cm². Five specimens (n=10) were used to perform a pilot study, as there were no previously established values for optimal tapping IT. Each pedicle in the pilot study was measured using a digital caliper as well as computed tomography measurements, and the optimal screw size was determined to be equal to or the first size smaller than the pedicle diameter. The optimal tap size was then selected as the tap diameter 1 mm smaller than the optimal screw size. During optimal tap size insertion, all peak tapping IT values were found to be between 2 in-lbs and 3 in-lbs. Therefore, the threshold tapping IT value for optimal pedicle screw and tap size was determined to be 2.5 in-lbs, and a comparison tapping IT value of 1.5 in-lbs was selected. Next, 15 test specimens (n=30) were measured with digital calipers, probed, tapped, and instrumented using a paired comparison between the two threshold tapping IT values (Group 1: 1.5 in-lbs; Group 2: 2.5 in-lbs), randomly assigned to the left or right pedicle on each specimen. Each pedicle was incrementally tapped to increasing size (3.75, 4.00, 4.50, and 5.50 mm) until the threshold value was reached based on the assigned group. Pedicle screw size was determined by adding 1 mm to the tap size that crossed the threshold torque value. Torque measurements were recorded with each revolution during tap and pedicle screw insertion. Each specimen was then individually potted, and pedicle screws were pulled out in line with the screw axis at a rate of 0.25 mm/sec. Peak pullout strength (POS) was measured in Newtons (N). The peak tapping IT was significantly increased (50%) in Group 2 (3.23 ± 0.65 in-lbs) compared with Group 1 (2.15 ± 0.56 in-lbs) (p=.0005). The peak screw IT was also significantly increased (19%) in Group 2 (8.99 ± 2.27 in-lbs) compared with Group 1 (7.52 ± 2.96 in-lbs) (p=.02). The pedicle screw pullout strength was also significantly increased (23%) in Group 2 (877.9 ± 235.2 N) compared with Group 1 (712.3 ± 223.1 N) (p=.017). The mean pedicle screw diameter was significantly increased in Group 2 (5.70 ± 1.05 mm) compared with Group 1 (5.00 ± 0.80 mm) (p=.0002). There was also an increased rate of optimal pedicle screw size selection in Group 2, with 9 of 15 (60%) pedicle screws within 1 mm of the measured pedicle width, compared with 4 of 15 (26.7%) in Group 1. There was a moderate correlation of tapping IT with both screw IT (r=0.54; p=.002) and pedicle screw POS (r=0.55; p=.002).
Our findings suggest that tapping IT directly correlates with pedicle screw IT, pedicle screw pullout strength, and optimal pedicle screw size. Therefore, tapping IT may be used during thoracic pedicle screw instrumentation as an adjunct to preoperative imaging and clinical experience to maximize fixation strength and optimize pedicle "fit and fill" with the largest screw possible. However, further prospective, in vivo studies are necessary to evaluate the intraoperative use of tapping IT to predict screw loosening/complications. Published by Elsevier Inc.
Development of pH sensitive microparticles of Karaya gum: By response surface methodology.
Raizaday, Abhay; Yadav, Hemant K S; Kumar, S Hemanth; Kasina, Susmitha; Navya, M; Tashi, C
2015-12-10
The objective of the proposed work was to prepare pH-sensitive microparticles (MP) of Karaya gum, using distilled water as a solvent, by a spray drying technique. Different formulations were designed, prepared, and evaluated employing response surface methodology and an optimal design of experiments technique using Design Expert® ver 8.0.1 software. SEM photographs showed that the MP were roughly spherical in shape and free from cracks. The particle size and encapsulation efficiency of the optimized MP were found to be 3.89-6.5 μm and 81-94%, respectively, with good flow properties. At the end of the 12th hour, the in vitro drug release was 96.9% for the optimized formulation in pH 5.6 phosphate buffer. Low prediction errors were observed for Cmax and AUC0-∞, which demonstrated that the Frusemide IVIVC model was valid. Hence it can be concluded that pH-sensitive MP of Karaya gum were effectively prepared by a spray drying technique using aqueous solvents and can be used for treating various diseases such as chronic hypertension, ulcerative colitis, and diverticulitis. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Patel, Vinay Kumar; Chauhan, Shivani; Katiyar, Jitendra Kumar
2018-04-01
In this study, a novel natural fiber, sour-weed (botanically known as Rumex acetosella), is introduced for the first time as a natural reinforcement for a polyester matrix. The natural fiber based polyester composites were fabricated by a hand lay-up technique using different fiber sizes and weight percentages. For the sour-weed/polyester composites, physical properties (density, water absorption, and hardness), mechanical properties (tensile and impact properties), and wear properties (sand abrasion and sliding wear) were investigated for fiber sizes of 0.6 mm, 5 mm, 10 mm, 15 mm, and 20 mm at 3, 6, and 9 weight percent loading in the polyester matrix. Furthermore, using the averaged results, the multi-criteria optimization technique TOPSIS was employed to rank the composites. From the optimized results, the composite reinforced with 15 mm fibers at 6 wt% loading was ranked best, exhibiting the best overall properties among all fabricated composites: average tensile strength of 34.33 MPa, average impact strength of 10 J, average hardness of 12 Hv, average specific sand abrasion wear rate of 0.0607 mm³ N⁻¹ m⁻¹, average specific sliding wear rate of 0.00290 mm³ N⁻¹ m⁻¹, average water absorption of 3.446%, and average density of 1.013.
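TOPSIS itself is a standard, well-defined procedure: normalize the decision matrix, weight it, and rank alternatives by closeness to the ideal solution. The sketch below shows the mechanics; the decision matrix and weights are illustrative placeholders, not the paper's measured values.

```python
import numpy as np

# rows = candidate composites, columns = criteria
X = np.array([[34.3, 10.0, 12.0, 0.061],
              [28.1,  8.0, 10.5, 0.075],
              [31.0,  9.5, 11.2, 0.068]])
benefit = np.array([True, True, True, False])  # False = cost criterion (wear)
w = np.array([0.3, 0.3, 0.2, 0.2])             # assumed criterion weights

R = X / np.linalg.norm(X, axis=0)              # vector normalization
V = R * w                                      # weighted normalized matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.linalg.norm(V - ideal, axis=1)      # distance to ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)       # distance to anti-ideal solution
closeness = d_neg / (d_pos + d_neg)
print("TOPSIS ranking (best first):", np.argsort(-closeness))
```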
Large-volume protein crystal growth for neutron macromolecular crystallography.
Ng, Joseph D; Baird, James K; Coates, Leighton; Garcia-Ruiz, Juan M; Hodge, Teresa A; Huang, Sijay
2015-04-01
Neutron macromolecular crystallography (NMC) is the prevailing method for the accurate determination of the positions of H atoms in macromolecules. As neutron sources are becoming more available to general users, finding means to optimize the growth of protein crystals to sizes suitable for NMC is extremely important. Historically, much has been learned about growing crystals for X-ray diffraction. However, owing to new-generation synchrotron X-ray facilities and sensitive detectors, protein crystal sizes as small as in the nano-range have become adequate for structure determination, lessening the necessity to grow large crystals. Here, some of the approaches, techniques and considerations for the growth of crystals to significant dimensions that are now relevant to NMC are revisited. These include experimental strategies utilizing solubility diagrams, ripening effects, classical crystallization techniques, microgravity and theoretical considerations.
Shrimankar, D D; Sathe, S R
2016-01-01
Sequence alignment is an important tool for describing the relationships between DNA sequences. Many sequence alignment algorithms exist, differing in efficiency, in their models of the sequences, and in the relationship between sequences. The focus of this study is to obtain an optimal alignment between two sequences of biological data, particularly DNA sequences. The algorithm is discussed with particular emphasis on time, speedup, and efficiency optimizations. Parallel programming presents a number of critical challenges to application developers. Today's supercomputers often consist of clusters of SMP nodes, and programming paradigms such as OpenMP and MPI are used to write parallel codes for such architectures. However, OpenMP programs cannot scale beyond a single SMP node, whereas MPI programs can span multiple SMP nodes at the cost of internode communication. In this work, we explore the tradeoffs between using OpenMP and MPI. We demonstrate that communication overhead is incurred even in OpenMP loop execution, and that it increases with the number of participating cores. We also present a communication model that approximates the overhead from communication in OpenMP loops. Our results hold across a large variety of input data files. We have developed our own load balancing and cache optimization techniques for the message-passing model. Our experimental results show that these techniques give optimum performance of our parallel algorithm for various input parameter sizes, such as sequence size and tile size, on a wide variety of multicore architectures.
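For reference, the core problem being parallelized is classic dynamic-programming alignment. Below is a compact serial Needleman-Wunsch sketch for global DNA alignment; the study's tiling, load balancing, and OpenMP/MPI machinery are out of scope here, and the scoring values are arbitrary defaults.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    # score[i][j] = best score aligning a[:i] with b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)
    return score[n][m]

print(needleman_wunsch("GATTACA", "GCATGCA"))  # optimal global alignment score
```

The anti-diagonal (wavefront) dependency structure of this table is what makes tiled parallel execution, and hence the tile-size tuning discussed above, effective.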
Eichmiller, Jessica J; Miller, Loren M; Sorensen, Peter W
2016-01-01
Few studies have examined capture and extraction methods for environmental DNA (eDNA) to identify techniques optimal for detection and quantification. In this study, precipitation, centrifugation and filtration eDNA capture methods and six commercially available DNA extraction kits were evaluated for their ability to detect and quantify common carp (Cyprinus carpio) mitochondrial DNA using quantitative PCR in a series of laboratory experiments. Filtration methods yielded the most carp eDNA, and a glass fibre (GF) filter performed better than a similar pore size polycarbonate (PC) filter. Smaller pore sized filters had higher regression slopes of biomass to eDNA, indicating that they were potentially more sensitive to changes in biomass. Comparison of DNA extraction kits showed that the MP Biomedicals FastDNA SPIN Kit yielded the most carp eDNA and was the most sensitive for detection purposes, despite minor inhibition. The MoBio PowerSoil DNA Isolation Kit had the lowest coefficient of variation in extraction efficiency between lake and well water and had no detectable inhibition, making it most suitable for comparisons across aquatic environments. Of the methods tested, we recommend using a 1.5 μm GF filter, followed by extraction with the MP Biomedicals FastDNA SPIN Kit for detection. For quantification of eDNA, filtration through a 0.2-0.6 μm pore size PC filter, followed by extraction with MoBio PowerSoil DNA Isolation Kit was optimal. These results are broadly applicable for laboratory studies on carps and potentially other cyprinids. The recommendations can also be used to inform choice of methodology for field studies. © 2015 John Wiley & Sons Ltd.
Williams, P Stephen
2016-05-01
Asymmetrical flow field-flow fractionation (As-FlFFF) has become the most commonly used of the field-flow fractionation techniques. However, because of the interdependence of the channel flow and the cross flow through the accumulation wall, it is the most difficult of the techniques to optimize, particularly for programmed cross flow operation. For the analysis of polydisperse samples, the optimization should ideally be guided by the predicted fractionating power. Many experimentalists, however, neglect fractionating power and rely on light scattering detection simply to confirm apparent selectivity across the breadth of the eluted peak. The size information returned by the light scattering software is assumed to dispense with any reliance on theory to predict retention, and any departure of theoretical predictions from experimental observations is therefore considered of no importance. Separation depends on efficiency as well as selectivity, however, and efficiency can be a strong function of retention. The fractionation of a polydisperse sample by field-flow fractionation never provides a perfectly separated series of monodisperse fractions at the channel outlet. The outlet stream has some residual polydispersity, and it will be shown in this manuscript that the residual polydispersity is inversely related to the fractionating power. Due to the strong dependence of light scattering intensity and its angular distribution on the size of the scattering species, the outlet polydispersity must be minimized if reliable size data are to be obtained from the light scattering detector signal. It is shown that light scattering detection should be used with careful control of fractionating power to obtain optimized analysis of polydisperse samples. Part I is concerned with isocratic operation of As-FlFFF, and part II with programmed operation.
Sadasivan, Chander; Brownstein, Jeremy; Patel, Bhumika; Dholakia, Ronak; Santore, Joseph; Al-Mufti, Fawaz; Puig, Enrique; Rakian, Audrey; Fernandez-Prada, Kenneth D; Elhammady, Mohamed S; Farhat, Hamad; Fiorella, David J; Woo, Henry H; Aziz-Sultan, Mohammad A; Lieber, Baruch B
2013-03-01
Endovascular coiling of cerebral aneurysms remains limited by coil compaction and associated recanalization. Recent coil designs that effect higher packing densities may be far from optimal, because the hemodynamic forces causing compaction are not well understood and detailed data regarding the location and distribution of coil masses are unavailable. We present an in vitro methodology to characterize coil masses deployed within aneurysms by quantifying intra-aneurysmal void spaces. Eight identical aneurysms were packed with coils by both balloon- and stent-assist techniques. The samples were embedded, sequentially sectioned, and imaged. Empty spaces between the coils were numerically filled with circles (2D) in the planar images and with spheres (3D) in the three-dimensional composite images. The 2D and 3D void size histograms were analyzed for local variations and by fitting theoretical probability distribution functions. Balloon-assist packing densities (31±2%) were lower (p=0.04) than those of the stent-assist group (40±7%). The maximum and average 2D and 3D void sizes were higher (p=0.03 to 0.05) in the balloon-assist group than in the stent-assist group. None of the void size histograms were normally distributed; theoretical probability distribution fits suggest that the histograms are most probably exponentially distributed, with decay constants of 6-10 mm. Significant (p≤0.001 to p=0.03) spatial trends were noted in the void sizes, but correlation coefficients were generally low (|r|≤0.35). The methodology we present can provide valuable input data for numerical calculations of hemodynamic forces impinging on intra-aneurysmal coil masses and can be used to compare and optimize coil configurations as well as coiling techniques.
Malzert-Fréon, A; Hennequin, D; Rault, S
2010-11-01
Lipidic nanoparticles (NP), formulated from a phase inversion temperature process, have been studied with chemometric techniques to assess the influence of the four major components (Solutol®, Labrasol®, Labrafac®, water) on their average diameter and their size distribution. Typically, these NP present a monodisperse size lower than 200 nm, as determined by dynamic light scattering measurements. From the application of the partial least squares (PLS) regression technique to the experimental data collected during definition of the feasibility zone, it was established that the NP present a core-shell structure in which Labrasol® is well encapsulated and contributes to the structuring of the NP. Even though this solubility enhancer is regarded as a pure surfactant in the literature, the oil moieties of this macrogolglyceride mixture significantly influence its properties. Furthermore, the results have shown that the PLS technique can also be used to predict sizes for given relative proportions of components, and it was established that, from a mixture design, the quantitative mixture composition needed to reach a targeted size and a targeted polydispersity index (PDI) can be easily predicted. Hence, statistical models can be a useful tool to control and optimize the size characteristics of NP. © 2010 Wiley-Liss, Inc. and the American Pharmacists Association
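The prediction step can be illustrated with a hedged sketch: regress size and PDI on the four-component mixture proportions with PLS, then predict the responses of a new composition. The compositions and response models below are synthetic placeholders, not the published data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
# columns: Solutol, Labrasol, Labrafac, water (fractions summing to 1)
comp = rng.dirichlet([2, 2, 2, 6], size=30)
size = 200 - 150 * comp[:, 0] + 80 * comp[:, 2] + rng.normal(0, 5, 30)   # nm
pdi = 0.10 + 0.2 * comp[:, 1] + rng.normal(0, 0.01, 30)
Y = np.column_stack([size, pdi])

pls = PLSRegression(n_components=2).fit(comp, Y)   # two latent variables
new_mix = np.array([[0.25, 0.15, 0.10, 0.50]])     # candidate composition
print("predicted size (nm), PDI:", pls.predict(new_mix).round(3))
```

Inverting this model over a mixture design, i.e., searching compositions whose predicted (size, PDI) hits a target, is the prediction use the abstract describes.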
Automatic differentiation evaluated as a tool for rotorcraft design and optimization
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Young, Katherine C.
1995-01-01
This paper investigates the use of automatic differentiation (AD) as a means of generating sensitivity analyses in rotorcraft design and optimization. This technique transforms an existing computer program into a new program that performs sensitivity analysis in addition to the original analysis. Where the original FORTRAN program calculates a set of dependent (output) variables from a set of independent (input) variables, the new FORTRAN program also calculates the partial derivatives of the dependent variables with respect to the independent variables. The AD technique is a systematic implementation of the chain rule of differentiation; it produces derivatives to machine accuracy at a cost comparable with that of finite-differencing methods. For this study, an analysis code consisting of the Langley-developed hover analysis HOVT, the comprehensive rotor analysis CAMRAD/JA, and associated preprocessors is processed through the AD preprocessor ADIFOR 2.0. The resulting derivatives are compared with derivatives obtained from finite-differencing techniques. The derivatives obtained with ADIFOR 2.0 are exact within machine accuracy and, unlike derivatives obtained with finite-differencing techniques, do not depend on the selection of a step size.
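A toy illustration of the chain-rule mechanics that tools such as ADIFOR automate: forward-mode differentiation with dual numbers yields machine-accurate derivatives with no finite-difference step-size tuning. This is a conceptual sketch, not ADIFOR's source-transformation approach.

```python
import math

class Dual:
    """Value paired with its derivative; arithmetic applies the chain rule."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.der * o.val + self.val * o.der)  # product rule
    __rmul__ = __mul__

def dsin(x):
    # Chain rule for sin applied to a dual number.
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

# d/dx [x*sin(x) + 3x] at x = 2.0; seed der = 1 for the independent variable.
x = Dual(2.0, 1.0)
f = x * dsin(x) + 3 * x
print(f.val, f.der)  # derivative equals sin(2) + 2*cos(2) + 3, exact to machine precision
```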
The dual role of fragments in fragment-assembly methods for de novo protein structure prediction
Handl, Julia; Knowles, Joshua; Vernon, Robert; Baker, David; Lovell, Simon C.
2013-01-01
In fragment-assembly techniques for protein structure prediction, models of protein structure are assembled from fragments of known protein structures. This process is typically guided by a knowledge-based energy function and uses a heuristic optimization method. The fragments play two important roles in this process: they define the set of structural parameters available, and they also assume the role of the main variation operators that are used by the optimiser. Previous analysis has typically focused on the first of these roles. In particular, the relationship between local amino acid sequence and local protein structure has been studied by a range of authors. The correlation between the two has been shown to vary with the window length considered, and the results of these analyses have informed directly the choice of fragment length in state-of-the-art prediction techniques. Here, we focus on the second role of fragments and aim to determine the effect of fragment length from an optimization perspective. We use theoretical analyses to reveal how the size and structure of the search space changes as a function of insertion length. Furthermore, empirical analyses are used to explore additional ways in which the size of the fragment insertion influences the search both in a simulation model and for the fragment-assembly technique, Rosetta.
NASA Astrophysics Data System (ADS)
Roy Choudhury, Raja; Roy Choudhury, Arundhati; Kanti Ghose, Mrinal
2013-01-01
A semi-analytical model with three optimizing parameters and a novel non-Gaussian function as the fundamental modal field solution has been proposed to predict various propagation parameters of graded-index fibers accurately, with less computational burden than numerical methods. In our semi-analytical formulation, the core parameter U, which is usually uncertain, noisy, or even discontinuous, is optimized by the Nelder-Mead method of nonlinear unconstrained minimization, an efficient and compact direct search method that requires no derivative information. Three optimizing parameters are included in the formulation of the fundamental modal field of an optical fiber to make it more flexible and accurate than other available approximations. Employing a variational technique, Petermann I and II spot sizes have been evaluated for triangular- and trapezoidal-index fibers with the proposed fundamental modal field. It is demonstrated that the results of the proposed solution match the numerical results identically over a wide range of normalized frequencies. This approximation can also be used in the study of doped and nonlinear fiber amplifiers.
Modelling and optimization of semi-solid processing of 7075 Al alloy
NASA Astrophysics Data System (ADS)
Binesh, B.; Aghaie-Khafri, M.
2017-09-01
The new modified strain-induced melt activation (SIMA) process presented by Binesh and Aghaie-Khafri was optimized using a response surface methodology to improve the thixotropic characteristics of semi-solid 7075 alloy. The responses, namely the average grain size and the shape factor, were considered as functions of three independent input variables: effective strain, isothermal holding temperature and time. Mathematical models for the responses were developed using the regression analysis technique, and the adequacy of the models was validated by the analysis of variance method. The calculated results correlated fairly well with the experiments. It was found that all the first- and second-order terms of the independent parameters and the interactive terms of the effective strain and holding time were statistically significant for the responses. In order to simultaneously optimize the responses, the desirable values for the effective strain, holding temperature and time were predicted to be 5.1, 609 °C and 14 min, respectively, when employing the desirability function approach. Based on the optimization results, a significant improvement in the average grain size and shape factor of the semi-solid slurry prepared by the new modified SIMA process was observed.
Moolakkadath, Thasleem; Aqil, Mohd; Ahad, Abdul; Imam, Syed Sarim; Iqbal, Babar; Sultana, Yasmin; Mujeeb, Mohd; Iqbal, Zeenat
2018-05-07
The present study was conducted to optimize a transethosome formulation for dermal fisetin delivery. The optimization of the formulation was carried out using a Box-Behnken design. The independent variables were Lipoid S 100, ethanol, and sodium cholate. The prepared formulations were characterized by vesicle size, entrapment efficiency, and an in vitro skin penetration study. Vesicle-skin interaction, confocal laser scanning microscopy, and dermatokinetic studies were performed with the optimized formulation. Results of the present study demonstrated that the optimized formulation presented a vesicle size of 74.21 ± 2.65 nm, zeta potential of -11.0 mV, entrapment efficiency of 68.31 ± 1.48%, and flux of 4.13 ± 0.17 µg/cm²/h. The TEM image of the optimized formulation exhibited sealed, spherical vesicles. Results of thermoanalytical techniques demonstrated that the prepared transethosome formulation fluidized the rigid membrane of rat skin for smoother penetration of fisetin transethosomes. The confocal study showed good distribution and penetration of Rhodamine B-loaded transethosome vesicles into the deeper layers of rat skin compared with the Rhodamine B hydroalcoholic solution. These data reveal that the developed transethosome vesicle formulation is a potentially useful drug carrier for fisetin dermal delivery.
Interplanetary Program to Optimize Simulated Trajectories (IPOST). Volume 1: User's guide
NASA Technical Reports Server (NTRS)
Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.
1992-01-01
IPOST is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three-degree-of-freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the Stanford NPSOL algorithm. The IPOST structure allows sub-problems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.
Comparisons of neural networks to standard techniques for image classification and correlation
NASA Technical Reports Server (NTRS)
Paola, Justin D.; Schowengerdt, Robert A.
1994-01-01
Neural network techniques for multispectral image classification and spatial pattern detection are compared to the standard techniques of maximum-likelihood classification and spatial correlation. The neural network produced a more accurate classification than maximum likelihood for a Landsat scene of Tucson, Arizona. Some of the errors in the maximum-likelihood classification are illustrated using decision region and class probability density plots. As expected, the main drawback to the neural network method is the long time required for the training stage. The network was trained using several different hidden layer sizes to optimize both the classification accuracy and training speed, and it was found that one node per class was optimal. The performance improved when 3x3 local windows of image data were entered into the net. This modification introduces texture into the classification without explicit calculation of a texture measure. Larger windows were successfully used for the detection of spatial features in Landsat and Magellan synthetic aperture radar imagery.
Haggag, Sawsan M S; Farag, A A M; Abdel Refea, M
2013-02-01
Nano Al(III)-8-hydroxy-5-nitrosoquinolate [Al(III)-(HNOQ)(3)] thin films were synthesized by the rapid, direct, simple and efficient successive ion layer adsorption and reaction (SILAR) technique. The factors for optimized thin-film formation were evaluated. Stoichiometry and structure were confirmed by elemental analysis and FT-IR. The particle size (27-71 nm) was determined using scanning electron microscopy (SEM). Thermal stability and thermal parameters were determined by thermogravimetric analysis (TGA). Optical properties were investigated using spectrophotometric measurements of transmittance and reflectance at normal incidence. The refractive index, n, and absorption index, k, were determined. The spectral behavior of the absorption coefficient in the intrinsic absorption region revealed a direct allowed transition with a 2.45 eV band gap. The current-voltage (I-V) characteristics of the [Al(III)-(HNOQ)(3)]/p-Si heterojunction were measured at room temperature. The forward and reverse I-V characteristics were analyzed. The calculated zero-bias barrier height (Φ(b)) and ideality factor (n) showed strong bias dependence. The energy distribution of interface states (N(ss)) was obtained. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Christen, Hans M.; Ohkubo, Isao; Rouleau, Christopher M.; Jellison, Gerald E., Jr.; Puretzky, Alex A.; Geohegan, David B.; Lowndes, Douglas H.
2005-01-01
Parallel (multi-sample) approaches, such as discrete combinatorial synthesis or continuous compositional-spread (CCS), can significantly increase the rate of materials discovery and process optimization. Here we review our generalized CCS method, based on pulsed-laser deposition, in which the synchronization between laser firing and substrate translation (behind a fixed slit aperture) yields the desired variations of composition and thickness. In situ alloying makes this approach applicable to the non-equilibrium synthesis of metastable phases. Deposition on a heater plate with a controlled spatial temperature variation can additionally be used for growth-temperature-dependence studies. Composition and temperature variations are controlled on length scales large enough to yield sample sizes sufficient for conventional characterization techniques (such as temperature-dependent measurements of resistivity or magnetic properties). This technique has been applied to various experimental studies, and we present here the results for the growth of electro-optic materials (SrxBa1-xNb2O6) and magnetic perovskites (Sr1-xCaxRuO3), and discuss the application to the understanding and optimization of catalysts used in the synthesis of dense forests of carbon nanotubes.
Evolutionary computing for the design search and optimization of space vehicle power subsystems
NASA Technical Reports Server (NTRS)
Kordon, Mark; Klimeck, Gerhard; Hanks, David; Hua, Hook
2004-01-01
Evolutionary computing has proven to be a straightforward and robust approach for optimizing a wide range of difficult analysis and design problems. This paper discusses the application of these techniques to an existing space vehicle power subsystem resource and performance analysis simulation in a parallel processing environment. Our preliminary results demonstrate that this approach has the potential to improve the space system trade study process by allowing engineers to statistically weight subsystem goals of mass, cost and performance and then automatically size power elements based on anticipated performance of the subsystem rather than on worst-case estimates.
Oster, C G; Kissel, T
2005-05-01
Recently, several research groups have shown the potential of microencapsulated DNA as an adjuvant for DNA immunization and in tissue engineering approaches. Among techniques generally used for microencapsulation of hydrophilic drug substances into hydrophobic polymers, the modified W/O/W double-emulsion method and spray drying of water-in-oil dispersions take a prominent position. The key parameters for optimized microspheres are particle size, encapsulation efficiency, continuous DNA release and stabilization of DNA against enzymatic and mechanical degradation. This study investigates the possibility of encapsulating DNA while avoiding the shear forces that readily degrade DNA during microencapsulation. DNA microparticles were prepared with polyethylenimine (PEI) as a complexation agent for DNA. Polycations are capable of stabilizing DNA against enzymatic as well as mechanical degradation. Further, complexation was hypothesized to facilitate encapsulation by reducing the size of the macromolecule. This study additionally evaluated the possibility of encapsulating lyophilized DNA and lyophilized DNA/PEI complexes. For this purpose, the spray drying and double-emulsion techniques were compared. The size of the microparticles was characterized by laser diffractometry and the particles were visualized by scanning electron microscopy (SEM). DNA encapsulation efficiencies were investigated photometrically after complete hydrolysis of the particles. Finally, the DNA release characteristics from the particles were studied. Particles smaller than 10 µm, the threshold for phagocytic uptake, could be prepared with these techniques. The encapsulation efficiency ranged from 35% to 100% for low theoretical DNA loadings. DNA complexation with PEI 25 kDa prior to the encapsulation process reduced the initial burst release of DNA for all techniques used. Spray-dried particles without PEI exhibited high burst releases, whereas double-emulsion techniques showed continuous release rates.
NASA Astrophysics Data System (ADS)
Song, Young Joo; Woo, Jong Hun; Shin, Jong Gye
2009-12-01
Today, many middle-sized shipbuilding companies in Korea are experiencing strong competition from shipbuilding companies in other nations. This competition is particularly affecting small- and middle-sized shipyards, rather than the major shipyards that have their own support systems and development capabilities. The acquisition of techniques that would enable maximization of production efficiency and minimization of the gap between planning and execution would increase the competitiveness of small- and middle-sized Korean shipyards. In this paper, research on a simulation-based support system for ship production management, which can be applied to the shipbuilding processes of middle-sized shipbuilding companies, is presented. The simulation research includes layout optimization, load balancing, work stage operation planning, block logistics, and integrated material management. Each item is integrated into a network system with a value chain that includes all shipbuilding processes.
Hamdan, Sadeque; Cheaitou, Ali
2017-08-01
This data article provides detailed optimization input and output datasets and optimization code for the published research work titled "Dynamic green supplier selection and order allocation with quantity discounts and varying supplier availability" (Hamdan and Cheaitou, 2017, In press) [1]. Researchers may use these datasets as a baseline for future comparison and extensive analysis of the green supplier selection and order allocation problem with all-unit quantity discount and a varying number of suppliers. More particularly, the datasets presented in this article allow researchers to generate the exact optimization outputs obtained by the authors of Hamdan and Cheaitou (2017, In press) [1] using the provided optimization code and then to use them for comparison with the outputs of other techniques or methodologies such as heuristic approaches. Moreover, this article includes the randomly generated optimization input data and the related outputs that are used as input data for the statistical analysis presented in Hamdan and Cheaitou (2017, In press) [1], in which two different approaches for ranking potential suppliers are compared. This article also provides the time analysis data used in Hamdan and Cheaitou (2017, In press) [1] to study the effect of the problem size on the computation time, as well as an additional time analysis dataset. The input data for the time study are generated randomly, with the problem size varied, and are then used by the optimization problem to obtain the corresponding optimal outputs as well as the corresponding computation time.
Removal of 10-nm contaminant particles from Si wafers using CO2 bullet particles.
Kim, Inho; Hwang, Kwangseok; Lee, Jinwon
2012-04-11
Removal of nanometer-sized contaminant particles (CPs) from substrates is essential in successful fabrication of nanoscale devices. The particle beam technique that uses nanometer-sized bullet particles (BPs) moving at supersonic velocity was improved by operating it at room temperature to achieve higher velocity and size uniformity of BPs and was successfully used to remove CPs as small as 10 nm. CO2 BPs were generated by gas-phase nucleation and growth in a supersonic nozzle; appropriate size and velocity of the BPs were obtained by optimizing the nozzle contours and CO2/He mixture fraction. Cleaning efficiency greater than 95% was attained. BP velocity was the most important parameter affecting removal of CPs in the 10-nm size range. Compared to cryogenic Ar or N2 particles, CO2 BPs were more uniform in size and had higher velocity and, therefore, cleaned CPs more effectively.
Pudda, Catherine; Boizot, François; Verplanck, Nicolas; Revol-Cavalier, Frédéric; Berthier, Jean; Thuaire, Aurélie
2018-01-01
Particle separation in microfluidic devices is a common problem in sample preparation for biology. Deterministic lateral displacement (DLD) is efficiently implemented as a size-based fractionation technique to separate two populations of particles around a specific size. However, real biological samples contain components of many different sizes, and a single DLD separation step is not sufficient to purify these complex samples. When connecting several DLD modules in series, pressure balancing at the DLD outlets of each step becomes critical to ensure optimal separation efficiency. A generic microfluidic platform is presented in this paper to optimize pressure balancing when DLD separation is connected either to another DLD module or to a different microfluidic function. This is made possible by generating droplets at T-junctions connected to the DLD outlets. Droplets act as pressure controllers, which simultaneously perform the encapsulation of DLD-sorted particles and the balancing of output pressures. The optimized pressures to apply on DLD modules and on T-junctions are determined by a general model that ensures the equilibrium of the entire platform. The proposed separation platform is completely modular and reconfigurable, since the same predictive model applies to any cascaded DLD modules of the droplet-based cartridge. PMID:29768490
Saad, Wael E; Nicholson, David B
2013-06-01
Since the conception of balloon-occluded retrograde transvenous obliteration (BRTO) of gastric varices 25 years ago, the placement of an indwelling balloon for hours has been central to the BRTO procedure. Numerous variables and variations of the BRTO procedure have been described, including methods to reduce sclerosant, combining percutaneous transhepatic obliteration, varying sclerosant, and using multiple sclerosants within the same procedure. However, the consistent feature of BRTO has always remained the indwelling balloon. Placing an indwelling balloon over hours for the BRTO procedure is a logistical burden that taxes the interventional radiology team and hospital resources. Substituting the balloon with hardware (coils or Amplatzer vascular plugs [AVPs] or both) is technically feasible and its risks most likely correlate with gastrorenal shunt (GRS) size. The current authors use packed 0.018- or 0.035-in coils or both for small gastric variceal systems (GRS sizes A and B) and AVPs for GRS sizes up to size E (sizes A through E). The current authors recommend an indwelling balloon (no hardware substitute) for very large gastric variceal systems (GRS size F). Substituting hardware for the indwelling balloon in size F and potentially size E GRS can also be risky. The current article describes the techniques of placing up to 16-mm AVPs through balloon occlusion guide catheters and then deflating the balloon once it has been substituted with the AVPs. In addition, 22-mm AVPs can be placed through sheaths once the balloon occlusion catheters are removed to further augment the 16-mm Amplatzer occlusion. To date, there are no studies describing, let alone evaluating, the clinical feasibility of performing BRTO without indwelling balloons. The described techniques have been successfully performed by the current authors. However, the long-term safety and effectiveness of these techniques is yet to be determined. Copyright © 2013 Elsevier Inc. All rights reserved.
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides the most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
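The notion of robust power across an effect size range can be made concrete with a small calculation. The sketch below evaluates the power of a two-arm fixed-sample z-test over a grid of plausible standardized effects, a crude stand-in for the paper's optimality criterion; the sample sizes and effect range are arbitrary illustration values.

```python
import numpy as np
from scipy.stats import norm

# Power of a two-arm, fixed-sample, one-sided z-test at level alpha as a
# function of standardized effect size delta and per-arm sample size n.
def power(delta, n_per_arm, alpha=0.025):
    return norm.cdf(delta * np.sqrt(n_per_arm / 2) - norm.ppf(1 - alpha))

deltas = np.linspace(0.2, 0.5, 7)           # plausible standardized effects
for n in (100, 150, 200):                   # candidate fixed sample sizes
    p = power(deltas, n)
    # "Robust power" in the spirit of the abstract: how well power holds up
    # over the whole effect range, not just at a single planning value.
    print(f"n={n}: min power {p.min():.2f}, mean power {p.mean():.2f}")
```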
NASA Astrophysics Data System (ADS)
Krishna, Hemanth; Kumar, Hemantha; Gangadharan, Kalluvalappil
2017-08-01
A magnetorheological (MR) fluid damper offers a cost-effective solution for semiactive vibration control in an automobile suspension. The performance of an MR damper depends significantly on the electromagnetic circuit incorporated into it. The force developed by an MR fluid damper is highly influenced by the magnetic flux density induced in the fluid flow gap. In the present work, the optimization of the electromagnetic circuit of an MR damper is discussed in order to maximize the magnetic flux density. The optimization procedure was based on genetic algorithm and design-of-experiments techniques. The results show that a fluid flow gap size of less than 1.12 mm causes a significant increase in magnetic flux density.
Paswan, Suresh K; Saini, T R
2017-12-01
Emulsifiers at considerably high levels are used in the preparation of drug-loaded polymeric nanoparticles by the emulsification solvent evaporation method. This poses a serious problem for the formulator, owing to the toxicity of these surfactants when the product is to be administered by the parenteral route. The final product therefore has to be freed of the surfactants used, which is a cumbersome job with conventional purification techniques. A solvent-resistant stirred cell ultrafiltration unit (Millipore) was used in this study with a polyethersulfone ultrafiltration membrane (Biomax®) of NMWL 300 kDa pore size as the membrane filter. The purification efficiency of this technique was compared with the conventional centrifugation technique. The ultrafiltration flow rate was optimized for removal of surfactant (polyvinyl alcohol) impurities to acceptable levels in 1-3.5 h from a nanoparticle dispersion of tamoxifen prepared by the emulsification solvent evaporation method. The present investigations demonstrate the application of the solvent-resistant stirred cell ultrafiltration technique for the removal of toxic surfactant (PVA) impurities from polymeric drug nanoparticles (tamoxifen) prepared by the emulsification solvent evaporation method. This technique offers the added benefit of producing a more concentrated nanoparticle dispersion without causing the significant particle size growth observed with other purification techniques, e.g., centrifugation and ultracentrifugation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McMordie Stoughton, Kate; Duan, Xiaoli; Wendel, Emily M.
This technology evaluation was prepared by Pacific Northwest National Laboratory on behalf of the U.S. Department of Energy’s Federal Energy Management Program (FEMP). The technology evaluation assesses techniques for optimizing reverse osmosis (RO) systems to increase RO system performance and water efficiency. This evaluation provides a general description of RO systems, the influence of RO systems on water use, and key areas where RO systems can be optimized to reduce water and energy consumption. The evaluation is intended to help facility managers at Federal sites understand the basic concepts of the RO process and system optimization options, enabling them to make informed decisions during the system design process for either new projects or recommissioning of existing equipment. This evaluation is focused on commercial-sized RO systems generally treating more than 80 gallons per hour.
Mahdizadeh Barzoki, Zahra; Emam-Djomeh, Zahra; Mortazavian, Elaheh; Rafiee-Tehrani, Niyousha; Behmadi, Homa; Rafiee-Tehrani, Morteza; Moosavi-Movahedi, Ali Akbar
2018-06-01
This study aims at the mathematical optimization by Box-Behnken statistical design, fabrication by the ionic gelation technique and in vitro characterization of insulin nanoparticles containing a thiolated N-dimethyl ethyl chitosan (DMEC-Cys) conjugate. The optimized insulin nanoparticles were then loaded into a buccal film, in vitro drug release from the films was investigated, and the diffusion coefficient was predicted. The optimized nanoparticles were shown to have a mean particle size of 148 nm, zeta potential of 15.5 mV, PdI of 0.26 and AE of 97.56%. Cell viability after incubation with the optimized nanoparticles and films was assessed using an MTT biochemical assay. The in vitro release study, FTIR and cytotoxicity results also indicated that nanoparticles made of this thiolated polymer are suitable candidates for oral insulin delivery. Copyright © 2018 Elsevier B.V. All rights reserved.
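For readers unfamiliar with the Box-Behnken design used here and in several of the studies above, the design matrix for three factors is easy to generate by hand: edge midpoints of the factor cube plus replicated centre points. The factor ranges in the sketch are illustrative, not the study's actual levels.

```python
import itertools
import numpy as np

# Box-Behnken design for three factors: all +/-1 combinations on each pair of
# factors with the third held at its centre level, plus replicated centre runs.
def box_behnken_3(n_center=3):
    runs = []
    for i, j in itertools.combinations(range(3), 2):
        for a, b in itertools.product((-1, 1), repeat=2):
            row = [0, 0, 0]
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0, 0, 0]] * n_center
    return np.array(runs)

design = box_behnken_3()
print(design.shape)   # (15, 3): 12 edge-midpoint runs + 3 centre points

# Map coded levels (-1, 0, +1) to actual factor ranges; the ranges below are
# illustrative placeholders for e.g. polymer, cross-linker and time levels.
lows, highs = np.array([50.0, 10.0, 5.0]), np.array([100.0, 40.0, 25.0])
actual = lows + (design + 1) / 2 * (highs - lows)
print(actual[:3])
```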
NASA Astrophysics Data System (ADS)
Teves, André da Costa; Lima, Cícero Ribeiro de; Passaro, Angelo; Silva, Emílio Carlos Nelli
2017-03-01
Electrostatic or capacitive accelerometers are among the highest volume microelectromechanical systems (MEMS) products nowadays. The design of such devices is a complex task, since they depend on many performance requirements, which are often conflicting. Therefore, optimization techniques are often used in the design stage of these MEMS devices. Because of problems with reliability, the technology of MEMS is not yet well established. Thus, in this work, size optimization is combined with the reliability-based design optimization (RBDO) method to improve the performance of accelerometers. To account for uncertainties in the dimensions and material properties of these devices, the first order reliability method is applied to calculate the probabilities involved in the RBDO formulation. Practical examples of bulk-type capacitive accelerometer designs are presented and discussed to evaluate the potential of the implemented RBDO solver.
Investigation of thermal conduction in symmetric and asymmetric nanoporous structures
NASA Astrophysics Data System (ADS)
Yu, Ziqi; Ferrer-Argemi, Laia; Lee, Jaeho
2017-12-01
Nanoporous structures with a critical dimension comparable to or smaller than the phonon mean free path have demonstrated significant thermal conductivity reductions that are attractive for thermoelectric applications, but the presence of various geometric parameters complicates the understanding of governing mechanisms. Here, we use a ray tracing technique to investigate phonon boundary scattering phenomena in Si nanoporous structures of varying pore shapes, pore alignments, and pore size distributions, and identify mechanisms that are primarily responsible for thermal conductivity reductions. Our simulation results show that the neck size, or the smallest distance between nearest pores, is the key parameter in understanding nanoporous structures of varying pore shapes and the same porosities. When the neck size and the porosity are both identical, asymmetric pore shapes provide a lower thermal conductivity compared with symmetric pore shapes, due to localized heat fluxes. Asymmetric nanoporous structures show possibilities of realizing thermal rectification even with fully diffuse surface boundaries, in which optimal arrangements of triangular pores show a rectification ratio up to 13 when the injection angles are optimally controlled. For symmetric nanoporous structures, hexagonal-lattice pores achieve larger thermal conductivity reductions than square-lattice pores due to the limited line of sight for phonons. We also show that nanoporous structures of alternating pore size distributions from large to small pores yield a lower thermal conductivity compared with those of uniform pore size distributions in the given porosity. These findings advance the understanding of phonon boundary scattering phenomena in complex geometries and enable optimal designs of artificial nanostructures for thermoelectric energy harvesting and solid-state cooling systems.
Estimation method for serial dilution experiments.
Ben-David, Avishai; Davidson, Charles E
2014-12-01
Titration of microorganisms in infectious or environmental samples is a cornerstone of quantitative microbiology. A simple method is presented to estimate the microbial counts obtained with the serial dilution technique for microorganisms that can grow on bacteriological media and develop into a colony. The number (concentration) of viable microbial organisms is estimated from a single dilution plate (assay) without the need for replicate plates. Our method selects the best agar plate with which to estimate the microbial counts, and takes into account the colony size and plate area that both contribute to the likelihood of miscounting the number of colonies on a plate. The estimate of the optimal count given by our method can be used to narrow the search for the best (optimal) dilution plate and saves time. The required inputs are the plate size, the microbial colony size, and the serial dilution factors. The proposed approach shows relative accuracy well within ±0.1 log10 from data produced by computer simulations. The method maintains this accuracy even in the presence of dilution errors of up to 10% (for both the aliquot and diluent volumes), microbial counts between 10^4 and 10^12 colony-forming units, dilution ratios from 2 to 100, and plate size to colony size ratios between 6.25 and 200. Published by Elsevier B.V.
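A toy version of the plate-selection idea can be written down directly. The sketch below is a simplified reading of the approach (the packing fraction and the preference rule are assumptions, not the paper's estimator): among plates whose expected colony count stays below a crowding limit set by the colony-to-plate area ratio, it prefers the highest count, which has the lowest relative Poisson error.

```python
# Pick the dilution plate whose expected colony count is countable yet as
# large as possible (relative Poisson error ~ 1/sqrt(count)). The packing
# fraction 0.2 is an illustrative assumption, not the paper's model.
def best_dilution(stock_cfu_per_ml, dilution_factor, n_dilutions,
                  plate_area_mm2=6362.0, colony_area_mm2=4.0, aliquot_ml=0.1):
    # Crowding-limited ceiling: only part of the plate can hold distinguishable colonies.
    max_countable = 0.2 * plate_area_mm2 / colony_area_mm2
    best = None
    for k in range(n_dilutions):
        expected = stock_cfu_per_ml * aliquot_ml / dilution_factor**k
        if expected <= max_countable:
            if best is None or expected > best[1]:
                best = (k, expected)
    return best

# 10^9 CFU/mL stock, tenfold dilutions -> dilution index and expected colonies
print(best_dilution(1e9, 10, 10))
```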
LLE Review 117 (October-December 2008)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bittle, W., editor
2009-05-28
This volume of the LLE Review, covering October-December 2008, features 'Demonstration of the Shock-Timing Technique for Ignition Targets at the National Ignition Facility' by T. R. Boehly, V. N. Goncharov, S. X. Hu, J. A. Marozas, T. C. Sangster, D. D. Meyerhofer (LLE), D. Munro, P. M. Celliers, D. G. Hicks, G. W. Collins, H. F. Robey, O. L. Landen (LLNL), and R. E. Olson (SNL). In this article (p. 1) the authors report on a technique to measure the velocity and timing of shock waves in a capsule contained within hohlraum targets. This technique is critical for optimizing the drive profiles for high-performance inertial-confinement-fusion capsules, which are compressed by multiple precisely timed shock waves. The shock-timing technique was demonstrated on OMEGA using surrogate hohlraum targets heated to 180 eV and fitted with a re-entrant cone and quartz window to facilitate velocity measurements using velocity interferometry. Cryogenic experiments using targets filled with liquid deuterium further demonstrated the entire timing technique in a hohlraum environment. Direct-drive cryogenic targets with multiple spherical shocks were also used to validate this technique, including convergence effects at relevant pressures (velocities) and sizes. These results provide confidence that shock velocity and timing can be measured in NIF ignition targets, thereby optimizing these critical parameters.
NASA Astrophysics Data System (ADS)
Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.
2016-11-01
The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup, improving the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to that achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, with a factor-of-2 increase in intensity, within the same divergence limits of ±2°. This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions, such as extreme pressures associated with small sample sizes.
Interplanetary program to optimize simulated trajectories (IPOST). Volume 4: Sample cases
NASA Technical Reports Server (NTRS)
Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.
1992-01-01
The Interplanetary Program to Optimize Simulated Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three-degree-of-freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the Stanford NPSOL algorithm. The IPOST structure allows sub-problems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.
Support vector machine firefly algorithm based optimization of lens system.
Shamshirband, Shahaboddin; Petković, Dalibor; Pavlović, Nenad T; Ch, Sudheer; Altameem, Torki A; Gani, Abdullah
2015-01-01
Lens system design is an important factor in image quality. The main aspect of the lens system design methodology is the optimization procedure. Since optimization is a complex, nonlinear task, soft computing optimization algorithms can be used. There are many tools that can be employed to measure optical performance, but the spot diagram is the most useful. The spot diagram gives an indication of the image of a point object. In this paper, the spot size radius is considered an optimization criterion. An intelligent soft computing scheme, support vector machines (SVMs) coupled with the firefly algorithm (FFA), is implemented. The performance of the proposed estimators is confirmed with the simulation results. The result of the proposed SVM-FFA model has been compared with support vector regression (SVR), artificial neural networks, and generic programming methods. The results show that the SVM-FFA model performs more accurately than the other methodologies. Therefore, SVM-FFA can be used as an efficient soft computing technique in the optimization of lens system designs.
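The coupling of SVM regression with a firefly-style search can be sketched compactly. Below, a minimal firefly algorithm tunes the C and gamma hyperparameters of an SVR on synthetic stand-in data; the dataset, parameter ranges and algorithm constants are all illustrative assumptions rather than the paper's setup.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in for spot-size data: radius as a function of two lens
# design variables (the paper's actual dataset is not reproduced here).
X = rng.uniform(-1, 1, size=(80, 2))
y = 0.5 + X[:, 0]**2 + 0.3*np.sin(3*X[:, 1]) + 0.05*rng.standard_normal(80)

def fitness(p):  # p = (log10 C, log10 gamma); brightness = cross-validated R^2
    model = SVR(C=10**p[0], gamma=10**p[1])
    return cross_val_score(model, X, y, cv=3, scoring="r2").mean()

# Minimal firefly algorithm over the 2-D hyperparameter space.
n, iters, beta0, absorb, alpha = 12, 25, 1.0, 1.0, 0.2
pos = rng.uniform([-1, -2], [3, 1], size=(n, 2))     # (log10 C, log10 gamma)
bright = np.array([fitness(p) for p in pos])
for _ in range(iters):
    for i in range(n):
        for j in range(n):
            if bright[j] > bright[i]:                 # i moves toward brighter j
                r2 = np.sum((pos[i] - pos[j])**2)
                beta = beta0 * np.exp(-absorb * r2)   # distance-damped attraction
                pos[i] += beta * (pos[j] - pos[i]) + alpha * rng.uniform(-0.5, 0.5, 2)
        bright[i] = fitness(pos[i])
best = pos[np.argmax(bright)]
print("C=%.3g, gamma=%.3g, CV R^2=%.3f" % (10**best[0], 10**best[1], bright.max()))
```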
Multidisciplinary optimization of aeroservoelastic systems using reduced-size models
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1992-01-01
Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.
Quantum money with nearly optimal error tolerance
NASA Astrophysics Data System (ADS)
Amiri, Ryan; Arrazola, Juan Miguel
2017-06-01
We present a family of quantum money schemes with classical verification which display a number of benefits over previous proposals. Our schemes are based on hidden matching quantum retrieval games and they tolerate noise up to 23%, which we conjecture reaches 25% asymptotically as the dimension of the underlying hidden matching states is increased. Furthermore, we prove that 25% is the maximum tolerable noise for a wide class of quantum money schemes with classical verification, meaning our schemes are almost optimally noise tolerant. We use methods in semidefinite programming to prove security in a substantially different manner to previous proposals, leading to two main advantages: first, coin verification involves only a constant number of states (with respect to coin size), thereby allowing for smaller coins; second, the reusability of coins within our scheme grows linearly with the size of the coin, which is known to be optimal. Last, we suggest methods by which the coins in our protocol could be implemented using weak coherent states and verified using existing experimental techniques, even in the presence of detector inefficiencies.
Application of the LSQR algorithm in non-parametric estimation of aerosol size distribution
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Lew, Zhongyuan; Ruan, Liming; Tan, Heping; Luo, Kun
2016-05-01
Based on the Least Squares QR decomposition (LSQR) algorithm, the aerosol size distribution (ASD) is retrieved in a non-parametric approach. The direct problem is solved by the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer Law. An optimal wavelength selection method is developed to improve the retrieval accuracy of the ASD. The optimal wavelength set is selected by a method that makes the measurement signals sensitive to wavelength and effectively decreases the ill-conditioning of the coefficient matrix of the linear system, enhancing the anti-interference ability of the retrieval results. Two common kinds of monomodal and bimodal ASDs, log-normal (L-N) and Gamma distributions, are estimated, respectively. Numerical tests show that the LSQR algorithm can be successfully applied to retrieve the ASD with high stability in the presence of random noise and low susceptibility to the shape of the distributions. Finally, the ASD measured experimentally over Harbin, China, is recovered reasonably well. All the results confirm that the LSQR algorithm combined with the optimal wavelength selection method is an effective and reliable technique for the non-parametric estimation of ASD.
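The core linear-inversion step is straightforward with the LSQR routine in SciPy. The sketch below discretizes an assumed log-normal ASD, forms a smooth placeholder kernel standing in for the ADA extinction efficiency, and recovers the distribution from noisy synthetic extinction data; the damping parameter plays the regularizing role that the optimal wavelength selection serves in the paper.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Non-parametric retrieval sketch: discretize the size distribution n(r) on a
# grid and solve the linear system K @ n = tau (spectral extinction) with LSQR.
radii = np.linspace(0.1, 2.0, 60)                 # micrometres
wavelengths = np.linspace(0.4, 1.0, 20)           # micrometres (chosen set)
# Smooth stand-in for the ADA extinction kernel (not the actual ADA formula).
K = np.array([[np.pi * r**2 * (2.0 - 2.0 * np.sinc(r / w))
               for r in radii] for w in wavelengths])

true_n = np.exp(-0.5 * ((np.log(radii) - np.log(0.6)) / 0.35)**2)  # log-normal ASD
tau = K @ true_n
tau_noisy = tau * (1 + 0.01 * np.random.default_rng(1).standard_normal(tau.size))

# damp > 0 adds Tikhonov-style regularization, tempering the ill-conditioning
# of this underdetermined system (20 measurements, 60 unknowns).
sol = lsqr(K, tau_noisy, damp=1e-3)[0]
print("relative error: %.3f" % (np.linalg.norm(sol - true_n) / np.linalg.norm(true_n)))
```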
Tapping mode SPM local oxidation nanolithography with sub-10 nm resolution
NASA Astrophysics Data System (ADS)
Nishimura, S.; Ogino, T.; Takemura, Y.; Shirakashi, J.
2008-03-01
Tapping mode SPM local oxidation nanolithography with sub-10 nm resolution is investigated by optimizing the applied bias voltage (V), scanning speed (S) and the oscillation amplitude of the cantilever (A). We fabricated Si oxide wires with an average width of 9.8 nm (V = 17.5 V, S = 250 nm/s, A = 292 nm). In SPM local oxidation with tapping mode operation, it is possible to decrease the size of the water meniscus by enhancing the oscillation amplitude of the cantilever. Hence, it seems that a water meniscus with sub-10 nm dimensions could be formed by precisely optimizing the oxidation conditions. Moreover, we quantitatively explain the size (width and height) of the Si oxide wires with a model based on the oxidation ratio, which is defined as the oxidation time divided by the period of the cantilever oscillation. The model allows us to understand the mechanism of local oxidation in tapping mode operation with amplitude modulation. The results imply that sub-10 nm resolution can be achieved using the tapping mode SPM local oxidation technique with optimization of the cantilever dynamics.
Large-volume protein crystal growth for neutron macromolecular crystallography
Ng, Joseph D.; Baird, James K.; Coates, Leighton; ...
2015-03-30
Neutron macromolecular crystallography (NMC) is the prevailing method for the accurate determination of the positions of H atoms in macromolecules. As neutron sources are becoming more available to general users, finding means to optimize the growth of protein crystals to sizes suitable for NMC is extremely important. Historically, much has been learned about growing crystals for X-ray diffraction. However, owing to new-generation synchrotron X-ray facilities and sensitive detectors, protein crystal sizes as small as in the nano-range have become adequate for structure determination, lessening the necessity to grow large crystals. Here, some of the approaches, techniques and considerations for the growth of crystals to significant dimensions that are now relevant to NMC are revisited. These include experimental strategies utilizing solubility diagrams, ripening effects, classical crystallization techniques, microgravity and theoretical considerations.
On the dynamic rounding-off in analogue and RF optimal circuit sizing
NASA Astrophysics Data System (ADS)
Kotti, Mouna; Fakhfakh, Mourad; Fino, Maria Helena
2014-04-01
Frequently used approaches to solve discrete multivariable optimisation problems consist of computing solutions using a continuous optimisation technique. Then, using heuristics, the variables are rounded off to their nearest available discrete values to obtain a discrete solution. Indeed, in many engineering problems, and particularly in analogue circuit design, component values, such as the geometric dimensions of the transistors, the number of fingers in an integrated capacitor or the number of turns in an integrated inductor, cannot be chosen arbitrarily, since they have to obey some technology sizing constraints. However, rounding off the variable values a posteriori can lead to infeasible solutions (solutions that are located too close to the feasible solution frontier) or degradation of the obtained results (expulsion from the neighbourhood of a 'sharp' optimum), depending on how the added perturbation affects the solution. Discrete optimisation techniques, such as the dynamic rounding-off (DRO) technique, are therefore needed to overcome the previously mentioned situation. In this paper, we deal with an improvement of the DRO technique. We propose a particle swarm optimisation (PSO)-based DRO technique, and we show, via some analogue and RF examples, the necessity of implementing such a routine in continuous optimisation algorithms.
The Use of a Code-generating System for the Derivation of the Equations for Wind Turbine Dynamics
NASA Astrophysics Data System (ADS)
Ganander, Hans
2003-10-01
For many reasons, the size of wind turbines on the rapidly growing wind energy market is increasing. The relations between the aeroelastic properties of these new large turbines change. Modifications of turbine designs and control concepts are also influenced by the growing size. All these trends require the development of computer codes for design and certification. Moreover, there is a strong desire for design optimization procedures, which require fast codes. General codes, e.g. finite element codes, normally allow such modifications and improvements of existing wind turbine models relatively easily. However, the calculation times of such codes are unfavourably long, certainly for optimization use. The use of an automatic code-generating system is an alternative that addresses both key issues, the code and the design optimization. This technique can be used for the rapid generation of codes for particular wind turbine simulation models. These ideas have been followed in the development of new versions of the wind turbine simulation code VIDYN. The equations of the simulation model were derived according to the Lagrange equation using Mathematica®, which was directed to output the results in Fortran code format. In this way the simulation code is automatically adapted to an actual turbine model, in terms of subroutines containing the equations of motion, definitions of parameters and degrees of freedom. Since the start in 1997, these methods, constituting a systematic way of working, have been used to develop specific efficient calculation codes. The experience with this technique has been very encouraging, inspiring the continued development of new versions of the simulation code as the need has arisen, and the interest in design optimization is growing.
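The same derive-then-generate workflow can be reproduced with open tools. The sketch below uses SymPy in place of Mathematica to apply the Lagrange equation to a single pendulum (standing in for one turbine degree of freedom) and emits the resulting acceleration expression as Fortran source; it mirrors the workflow described above, not the VIDYN code itself.

```python
import sympy as sp
from sympy import fcode

# Derive an equation of motion symbolically and emit Fortran for a fast
# simulation kernel. A single pendulum stands in for a turbine DOF.
t = sp.symbols("t")
m, L, g = sp.symbols("m L g", positive=True)
theta = sp.Function("theta")(t)

T = sp.Rational(1, 2) * m * (L * theta.diff(t))**2   # kinetic energy
V = -m * g * L * sp.cos(theta)                        # potential energy
Lag = T - V

# Lagrange equation: d/dt(dL/dq') - dL/dq = 0, solved for the acceleration.
eom = sp.diff(Lag.diff(theta.diff(t)), t) - Lag.diff(theta)
acc = sp.solve(sp.Eq(eom, 0), theta.diff(t, 2))[0]

print(fcode(acc, assign_to="thetadd", source_format="free", standard=95))
# -> thetadd = -g*sin(theta(t))/L   (free-form Fortran 95)
```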
Task-based modeling and optimization of a cone-beam CT scanner for musculoskeletal imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prakash, P.; Zbijewski, W.; Gang, G. J.
2011-10-15
Purpose: This work applies a cascaded systems model for cone-beam CT imaging performance to the design and optimization of a system for musculoskeletal extremity imaging. The model provides a quantitative guide to the selection of system geometry, source and detector components, acquisition techniques, and reconstruction parameters. Methods: The model is based on cascaded systems analysis of the 3D noise-power spectrum (NPS) and noise-equivalent quanta (NEQ) combined with factors of system geometry (magnification, focal spot size, and scatter-to-primary ratio) and anatomical background clutter. The model was extended to task-based analysis of detectability index (d') for tasks ranging in contrast and frequency content, and d' was computed as a function of system magnification, detector pixel size, focal spot size, kVp, dose, electronic noise, voxel size, and reconstruction filter to examine trade-offs and optima among such factors in multivariate analysis. The model was tested quantitatively versus the measured NPS and qualitatively in cadaver images as a function of kVp, dose, pixel size, and reconstruction filter under conditions corresponding to the proposed scanner. Results: The analysis quantified trade-offs among factors of spatial resolution, noise, and dose. System magnification (M) was a critical design parameter with strong effect on spatial resolution, dose, and x-ray scatter, and a fairly robust optimum was identified at M ~ 1.3 for the imaging tasks considered. The results suggested kVp selection in the range of ~65-90 kVp, the lower end (65 kVp) maximizing subject contrast and the upper end maximizing NEQ (90 kVp). The analysis quantified fairly intuitive results, e.g., ~0.1-0.2 mm pixel size (and a sharp reconstruction filter) optimal for high-frequency tasks (bone detail) compared to ~0.4 mm pixel size (and a smooth reconstruction filter) for low-frequency (soft-tissue) tasks. This result suggests a specific protocol for 1 x 1 (full-resolution) projection data acquisition followed by full-resolution reconstruction with a sharp filter for high-frequency tasks along with 2 x 2 binning reconstruction with a smooth filter for low-frequency tasks. The analysis guided selection of specific source and detector components implemented on the proposed scanner. The analysis also quantified the potential benefits and points of diminishing return in focal spot size, reduced electronic noise, finer detector pixels, and low-dose limits of detectability. Theoretical results agreed quantitatively with the measured NPS and qualitatively with evaluation of cadaver images by a musculoskeletal radiologist. Conclusions: A fairly comprehensive model for 3D imaging performance in cone-beam CT combines factors of quantum noise, system geometry, anatomical background, and imaging task. The analysis provided a valuable, quantitative guide to design, optimization, and technique selection for a musculoskeletal extremities imaging system under development.
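The task-based detectability index at the heart of this analysis reduces to a frequency-domain integral that is easy to evaluate numerically. The sketch below computes d' for a prewhitening observer from an illustrative NEQ and a Gaussian (soft-tissue-like) task function; both spectra are placeholders, not the cascaded-systems model of the paper.

```python
import numpy as np
from scipy.integrate import trapezoid

# Prewhitening-observer detectability: d'^2 = integral of NEQ(f) * |W_task(f)|^2
# over spatial frequency. NEQ and the task function below are illustrative
# placeholders, not the paper's cascaded-systems model.
f = np.linspace(0.01, 2.0, 400)              # cycles/mm, one frequency axis
fx, fy = np.meshgrid(f, f)
fr = np.hypot(fx, fy)                        # radial spatial frequency

neq = 1e5 * np.exp(-fr / 0.8)                # quanta/mm^2, decaying with frequency
sigma = 0.5                                  # mm; Gaussian blob (soft-tissue-like task)
W = np.exp(-2 * (np.pi * sigma * fr) ** 2)   # difference-signal spectrum of the blob

# Integrate over the positive-frequency quadrant; multiply by 4 for symmetry.
d_prime = np.sqrt(4 * trapezoid(trapezoid(neq * W**2, f, axis=1), f))
print(f"d' = {d_prime:.1f}")
```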
An optimized nanoparticle separator enabled by electron beam induced deposition
NASA Astrophysics Data System (ADS)
Fowlkes, J. D.; Doktycz, M. J.; Rack, P. D.
2010-04-01
Size-based separations technologies will inevitably benefit from advances in nanotechnology. Direct-write nanofabrication provides a useful mechanism for depositing/etching nanoscale elements in environments otherwise inaccessible to conventional nanofabrication techniques. Here, electron beam induced deposition was used to deposit an array of nanoscale features in a 3D environment with minimal material proximity effects outside the beam-interaction region. Specifically, the membrane component of a nanoparticle separator was fabricated by depositing a linear array of sharply tipped nanopillars, with a singular pitch, designed for sub-50 nm nanoparticle permeability. The nanopillar membrane was used in a dual capacity to control the flow of nanoparticles in the transaxial direction of the array while facilitating the sealing of the cellular-sized compartment in the paraxial direction. An optimized growth recipe resulted which (1) maximized the growth efficiency of the membrane (which minimizes proximity effects) and (2) preserved the fidelity of the spacing between nanopillars (which maximizes the size-based gating quality of the membrane) while (3) maintaining sharp nanopillar apexes for impaling an optically transparent polymeric lid critical for device sealing.
Pourmortazavi, Seied Mahdi; Taghdiri, Mehdi; Makari, Vajihe; Rahimi-Nasrabadi, Mehdi
2015-02-05
The present study deals with the green synthesis of silver nanoparticles using the aqueous extract of Eucalyptus oleosa, a green synthesis procedure without any catalyst, template or surfactant. Colloidal silver nanoparticles were synthesized by reacting aqueous AgNO3 with E. oleosa leaf extract under non-photomediated conditions. The significance of several synthesis conditions, namely silver nitrate concentration, concentration of the plant extract, time of the synthesis reaction and temperature of the plant extraction procedure, for the particle size of the synthesized silver particles was investigated and optimized. The contributions of the studied factors to controlling the particle size of the reduced silver were quantitatively evaluated via analysis of variance (ANOVA). The results showed that silver nanoparticles can be synthesized by tuning the significant parameters, and performing the synthesis procedure at optimum conditions yields silver nanoparticles with an average size of 21 nm. Ultraviolet-visible spectroscopy was used to monitor the formation of the silver nanoparticles. Meanwhile, the produced silver nanoparticles were characterized by scanning electron microscopy, energy-dispersive X-ray, and FT-IR techniques. Copyright © 2014 Elsevier B.V. All rights reserved.
Whittington, P N; George, N
1992-08-05
The optimization of microbial flocculation for subsequent biomass separation must relate the floc properties to separation process criteria. The effects of flocculant type, dose, and hydrodynamic conditions on floc formation in laminar tube flow were determined for an Escherichia coli system. Combined with an on-line aggregation sensor, this technique allows the flocculation process to be rapidly optimized. This is important, because interbatch variation in fermentation broth has consequences for flocculation control and subsequent downstream processing. Changing tube diameter and length while maintaining a constant flow rate allowed independent study of the effects of shear and time on the flocculation rate and floc characteristics. Tube flow at higher shear rates increased the rate and completeness of flocculation, but reduced the maximum floc size attained. The mechanism for this size limitation does not appear to be fracture or erosion of existing flocs. Rearrangement of particles within the flocs appears to be most likely. The Camp number predicted the extent of flocculation obtained in terms of the reduction in primary particle number, but not in terms of floc size.
Efficient QoS-aware Service Composition
NASA Astrophysics Data System (ADS)
Alrifai, Mohammad; Risse, Thomas
Web service composition requests are usually combined with end-to-end QoS requirements, which are specified in terms of non-functional properties (e.g. response time, throughput and price). The goal of QoS-aware service composition is to find the best combination of services such that their aggregated QoS values meet these end-to-end requirements. Local selection techniques are very efficient but fall short in handling global QoS constraints. Global optimization techniques, on the other hand, can handle global constraints, but their poor performance renders them inappropriate for applications with dynamic and real-time requirements. In this paper we address this problem and propose a solution that combines global optimization with local selection techniques to achieve better performance. The proposed solution consists of two steps: first, we use mixed integer linear programming (MILP) to find the optimal decomposition of global QoS constraints into local constraints; second, we use local search to find the best web services that satisfy these local constraints. Unlike existing MILP-based global planning solutions, the size of the MILP model in our case is much smaller and independent of the number of available services, yielding faster computation and more scalability. Preliminary experiments have been conducted to evaluate the performance of the proposed solution.
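The two-step structure (global decomposition, then local selection) can be illustrated without a MILP solver. In the sketch below, a proportional split of the global response-time budget stands in for the MILP decomposition step, and the candidate services and QoS values are invented examples.

```python
# Two-step sketch of the hybrid approach: (1) split a global response-time
# budget into per-task local budgets (here proportionally to each task's
# fastest candidate, a simple stand-in for the MILP decomposition), then
# (2) pick per task the cheapest service meeting its local budget.
candidates = {   # task -> list of (service, response_time_ms, price)
    "flight":  [("s1", 120, 9.0), ("s2", 300, 3.0), ("s3", 80, 14.0)],
    "hotel":   [("h1", 200, 5.0), ("h2", 90, 11.0)],
    "payment": [("p1", 150, 6.0), ("p2", 60, 12.0)],
}
GLOBAL_BUDGET_MS = 500.0

mins = {t: min(rt for _, rt, _ in cs) for t, cs in candidates.items()}
scale = GLOBAL_BUDGET_MS / sum(mins.values())
local_budget = {t: m * scale for t, m in mins.items()}   # step 1: decomposition

composition, total_rt = {}, 0.0
for task, services in candidates.items():                # step 2: local selection
    # Assumes each task has at least one feasible service (true for this toy data).
    feasible = [s for s in services if s[1] <= local_budget[task]]
    best = min(feasible, key=lambda s: s[2])             # cheapest feasible
    composition[task] = best[0]
    total_rt += best[1]

print(composition, f"total response time {total_rt:.0f} ms")
```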
Ahmed, Tarek A; El-Say, Khalid M
2016-06-10
The goal was to develop an optimized transdermal finasteride (FNS) film loaded with drug microplates (MIC), utilizing a two-step optimization, to reduce the dosing schedule and the inconsistency in gastrointestinal absorption. First, a 3-level factorial design was implemented to prepare optimized FNS-MIC of minimum particle size. Second, a Box-Behnken design matrix was used to develop the optimized transdermal FNS-MIC film. Interactions among MIC components were studied using physicochemical characterization tools. The film components, namely hydroxypropyl methyl cellulose (X1), dimethyl sulfoxide (X2) and propylene glycol (X3), were optimized for their effects on the film thickness (Y1) and elongation percent (Y2), and on the FNS steady-state flux (Y3), permeability coefficient (Y4) and diffusion coefficient (Y5) following ex vivo permeation through rat skin. A morphological study of the optimized MIC and transdermal film was also carried out. The results revealed that stabilizer concentration and anti-solvent percentage significantly affected the MIC formulation. Optimized FNS-MIC with a particle size of 0.93 µm were successfully prepared, with no interaction observed among their components. An enhancement in the aqueous solubility of FNS-MIC of more than 23% was achieved. All the studied variables and most of their interaction and quadratic effects significantly affected the studied responses (Y1-Y5). Morphological observation showed non-spherical, short rod- and flake-like small plates homogeneously distributed in the optimized transdermal film. The ex vivo study showed enhanced FNS permeation from the film loaded with MIC compared with that containing the pure drug. MIC is thus a successful technique to enhance the aqueous solubility and skin permeation of a poorly water-soluble drug, especially when loaded into transdermal films. Copyright © 2016 Elsevier B.V. All rights reserved.
Moreno-Vilet, Lorena; Bostyn, Stéphane; Flores-Montaño, Jose-Luis; Camacho-Ruiz, Rosa-María
2017-12-15
Agave fructans are increasingly important in the food industry and nutrition sciences as a potential ingredient of functional food; thus, practical analysis tools to characterize them are needed. In view of the importance of molecular weight for the functional properties of agave fructans, this study aims to optimize a method to determine their molecular weight distribution by HPLC-SEC for industrial application. The optimization was carried out using a simplex method. The optimum conditions obtained were a column temperature of 61.7°C using tri-distilled water without salt, adjusted to pH 5.4, and a flow rate of 0.36 mL/min. The exclusion range covers degrees of polymerization from 1 to 49 (180-7966 Da). The proposed method represents an accurate and fast alternative to standard methods involving multiple detection or hydrolysis of fructans. The industrial applications of this technique might be for quality control, the study of fractionation processes and the determination of purity. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Krasteva, Denitza T.
1998-01-01
Multidisciplinary design optimization (MDO) for large-scale engineering problems poses many challenges (e.g., the design of an efficient concurrent paradigm for global optimization based on disciplinary analyses, expensive computations over vast data sets, etc.). This work focuses on the application of distributed schemes for massively parallel architectures to MDO problems, as a tool for reducing computation time and solving larger problems. The specific problem considered here is configuration optimization of a high speed civil transport (HSCT), and the efficient parallelization of the embedded paradigm for reasonable design space identification. Two distributed dynamic load balancing techniques (random polling and global round robin with message combining) and two necessary termination detection schemes (global task count and token passing) were implemented and evaluated in terms of effectiveness and scalability to large problem sizes and a thousand processors. The effect of certain parameters on execution time was also inspected. Empirical results demonstrated stable performance and effectiveness for all schemes, and the parametric study showed that the selected algorithmic parameters have a negligible effect on performance.
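Random polling with a global task count is simple to prototype serially. The toy simulation below is only a sketch of the distributed scheme (a single loop stands in for concurrent processors, and the global counter is read directly rather than via a distributed termination protocol).

```python
import random

# Toy simulation of receiver-initiated random polling with a global task
# count for termination detection (serial stand-in for the distributed scheme).
random.seed(42)
P = 8
queues = [list(range(100)) if p == 0 else [] for p in range(P)]  # skewed start
remaining = sum(len(q) for q in queues)       # global task count
steps = polls = 0

while remaining > 0:                          # terminate when the count hits zero
    steps += 1
    for p in range(P):
        if queues[p]:
            queues[p].pop()                   # do one unit of work
            remaining -= 1
        else:
            victim = random.randrange(P)      # idle: poll a random victim
            polls += 1
            if victim != p and len(queues[victim]) > 1:
                half = len(queues[victim]) // 2
                queues[p] = [queues[victim].pop() for _ in range(half)]

print(f"finished in {steps} rounds with {polls} polls")
```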
Optimal design of the first stage of the plate-fin heat exchanger for the EAST cryogenic system
NASA Astrophysics Data System (ADS)
Qingfeng, JIANG; Zhigang, ZHU; Qiyong, ZHANG; Ming, ZHUANG; Xiaofei, LU
2018-03-01
The size of the heat exchanger is an important factor determining the dimensions of the cold box in helium cryogenic systems. In this paper, a counter-flow multi-stream plate-fin heat exchanger is optimized by means of a spatial interpolation method coupled with a hybrid genetic algorithm. Compared with empirical correlations, this spatial interpolation algorithm based on a kriging model can be adopted to more precisely predict the Colburn heat transfer factors and Fanning friction factors of offset-strip fins. Moreover, strict computational fluid dynamics simulations can be carried out to predict the heat transfer and friction performance in the absence of reliable experimental data. Within the constraints of heat exchange requirements, maximum allowable pressure drop, existing manufacturing techniques and structural strength, a mathematical model of an optimized design with discrete and continuous variables based on a hybrid genetic algorithm is established in order to minimize the volume. The results show that for the first-stage heat exchanger in the EAST refrigerator, the structural size could be decreased from the original 2.200 × 0.600 × 0.627 m³ to the optimized 1.854 × 0.420 × 0.340 m³, with a large reduction in volume. The current work demonstrates that the proposed method could be a useful tool to achieve optimization in an actual engineering project during the practical design process.
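The kriging-style surrogate at the heart of this method can be sketched with an off-the-shelf Gaussian-process regressor: fit Colburn j-factor samples against log-Reynolds number, then let the optimizer query the cheap model. The training data below are invented stand-ins for the paper's offset-strip-fin CFD results.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical training data: log10(Reynolds number) vs. Colburn j-factor,
# standing in for CFD samples of offset-strip fins.
log_re = np.array([[2.5], [2.8], [3.1], [3.4], [3.7], [4.0]])
j_factor = np.array([0.021, 0.016, 0.012, 0.0095, 0.0078, 0.0065])

kernel = ConstantKernel(1.0) * RBF(length_scale=0.5)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(log_re, j_factor)

# The optimizer can now query the surrogate cheaply, with an uncertainty estimate.
j_pred, j_std = gp.predict(np.array([[3.25]]), return_std=True)
print(f"predicted j-factor at Re = 10^3.25: {j_pred[0]:.4f} +/- {j_std[0]:.4f}")
```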
Optimizing communication satellites payload configuration with exact approaches
NASA Astrophysics Data System (ADS)
Stathakis, Apostolos; Danoy, Grégoire; Bouvry, Pascal; Talbi, El-Ghazali; Morelli, Gianluigi
2015-12-01
The satellite communications market is competitive and rapidly evolving. The payload, which is in charge of applying frequency conversion and amplification to the signals received from Earth before their retransmission, is made of various components. These include reconfigurable switches that permit the re-routing of signals based on market demand or because of some hardware failure. In order to meet modern requirements, the size and the complexity of current communication payloads are increasing significantly. Consequently, the optimal payload configuration, which was previously done manually by the engineers with the use of computerized schematics, is now becoming a difficult and time consuming task. Efficient optimization techniques are therefore required to find the optimal set(s) of switch positions to optimize some operational objective(s). In order to tackle this challenging problem for the satellite industry, this work proposes two Integer Linear Programming (ILP) models. The first one is single-objective and focuses on the minimization of the length of the longest channel path, while the second one is bi-objective and additionally aims at minimizing the number of switch changes in the payload switch matrix. Experiments are conducted on a large set of instances of realistic payload sizes using the CPLEX® solver and two well-known exact multi-objective algorithms. Numerical results demonstrate the efficiency and limitations of the ILP approach on this real-world problem.
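The single-objective ILP can be shown in miniature with an off-the-shelf solver: binary variables encode switch states, each channel's path length is linear in those states, and an auxiliary variable linearizes the min-max objective. All coefficients and the routing constraint below are invented; the real models encode actual payload switch matrices.

```python
import pulp

# Toy stand-in for the payload model: 3 switches, 2 channels.
# A channel's path length = base length + extra length when a switch is thrown.
base = {"ch1": 4.0, "ch2": 5.0}
extra = {("ch1", "s1"): 2.0, ("ch1", "s2"): 1.0,
         ("ch2", "s2"): 3.0, ("ch2", "s3"): 1.5}

prob = pulp.LpProblem("payload_minmax", pulp.LpMinimize)
s = {k: pulp.LpVariable(k, cat="Binary") for k in ("s1", "s2", "s3")}
z = pulp.LpVariable("longest_path", lowBound=0)

prob += z  # objective: minimize the longest channel path
for ch in base:
    length = base[ch] + pulp.lpSum(extra.get((ch, k), 0.0) * s[k] for k in s)
    prob += length <= z

# Hypothetical routing requirement: at least one of s1/s3 must be thrown.
prob += s["s1"] + s["s3"] >= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({k: v.value() for k, v in s.items()}, "longest path =", z.value())
```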
Upadhyay, Mansi; Adena, Sandeep Kumar Reddy; Vardhan, Harsh; Yadav, Sarita K; Mishra, Brahmeshwar
2018-04-27
This research aimed at the development and optimization of capecitabine-loaded interpenetrating polymeric network (IPN) microbeads prepared by the ionotropic gelation method using the polymers locust bean gum and sodium alginate, following a QbD approach. FMEA was performed to recognize the risks influencing CQAs. A Box-Behnken design (BBD) was applied to study the effect of factors (polymer ratio, amount of cross-linker, and curing time) on responses (particle size, % drug entrapment, and % drug release). Polynomial equations and 3-D graphs were plotted to relate factors and responses. The results of the optimized batch, viz. particle size (457.92 ± 1.6 μm), % drug entrapment (74.11 ± 3.1%), and % drug release (90.23 ± 2.1%), were close to the predicted values generated by Minitab® 17. Characterization by SEM, EDX, FTIR, DSC, and XRD was also performed for the optimized batch. A swelling study was conducted to examine water transport inside the IPN microbeads. In vitro drug release of the optimized batch showed controlled release for 12 h. A pharmacokinetic study following oral administration in Albino Wistar rats showed that the optimized microbeads had better PK parameters than the free drug. In vitro cytotoxicity against HT-29 cells revealed a significant reduction in cell growth when treated with the optimized formulation, indicating that IPN microbeads are an effective dosage form for treating colon cancer. Copyright © 2018. Published by Elsevier B.V.
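A Box-Behnken study like this one ultimately fits a second-order polynomial to the measured responses. The sketch below assembles a standard three-factor BBD (12 edge runs plus center points) and fits the full quadratic model by least squares; the response values are invented, standing in for measured particle sizes.

```python
import numpy as np

# Coded factor settings (x1 = polymer ratio, x2 = cross-linker, x3 = curing time)
# in a three-factor Box-Behnken layout, plus a made-up response (particle size, um).
X = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
              [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
              [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
              [0, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
y = np.array([470, 455, 480, 460, 465, 450, 475, 458,
              462, 472, 468, 466, 457, 459, 458.0])

def design_matrix(X):
    """Full quadratic model: intercept, linear, two-way interaction, squared terms."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3, x1**2, x2**2, x3**2])

coef, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
print("fitted polynomial coefficients:", np.round(coef, 2))
```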
Attractors in Sequence Space: Agent-Based Exploration of MHC I Binding Peptides.
Jäger, Natalie; Wisniewska, Joanna M; Hiss, Jan A; Freier, Anja; Losch, Florian O; Walden, Peter; Wrede, Paul; Schneider, Gisbert
2010-01-12
Ant Colony Optimization (ACO) is a meta-heuristic that utilizes a computational analogue of ant trail pheromones to solve combinatorial optimization problems. The size of the ant colony and the representation of the ants' pheromone trails are specific to the given optimization problem. In the present study, we employed ACO to generate novel peptides that stabilize MHC I protein on the plasma membrane of a murine lymphoma cell line. A jury of feedforward neural network classifiers served as the fitness function for peptide design by ACO. Bioactive murine MHC I H-2K(b) stabilizing as well as nonstabilizing octapeptides were designed, synthesized, and tested. These peptides reveal residue motifs that are relevant for MHC I receptor binding. We demonstrate how the performance of the implemented ACO algorithm depends on the colony size and the size of the search space. The actual peptide design process by ACO constitutes a search path in sequence space that can be visualized as trajectories on a self-organizing map (SOM). By projecting the sequence space onto a SOM, we visualize the convergence of the different solutions that emerge during the optimization process in sequence space. The SOM representation reveals attractors in sequence space for MHC I binding peptides. The combination of ACO and SOM enables systematic peptide optimization. This technique allows for the rational design of various types of bioactive peptides with minimal experimental effort. Here, we demonstrate its successful application to the design of MHC I binding and nonbinding peptides which exhibit substantial bioactivity in a cell-based assay. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
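A minimal sketch of position-wise ACO for octapeptide design: a pheromone table over 20 residues × 8 positions biases sequence sampling, and the best ant's trail is reinforced each iteration. The scoring function is an arbitrary stand-in for the neural-network jury used in the study.

```python
import random

AA = "ACDEFGHIKLMNPQRSTVWY"
LENGTH, ANTS, ITERATIONS, RHO = 8, 30, 50, 0.1

def score(seq):
    """Stand-in fitness; the study used a jury of neural network classifiers."""
    return sum(ord(c) % 7 for c in seq) / (7.0 * len(seq))

random.seed(1)
pher = [{a: 1.0 for a in AA} for _ in range(LENGTH)]  # pheromone per position/residue

best_seq, best_fit = None, -1.0
for _ in range(ITERATIONS):
    ants = []
    for _ in range(ANTS):
        seq = "".join(random.choices(AA, weights=[pher[i][a] for a in AA])[0]
                      for i in range(LENGTH))
        ants.append((score(seq), seq))
    fit, seq = max(ants)
    if fit > best_fit:
        best_fit, best_seq = fit, seq
    for i in range(LENGTH):              # evaporate, then reinforce the best trail
        for a in AA:
            pher[i][a] *= (1.0 - RHO)
        pher[i][seq[i]] += RHO * fit

print(best_seq, round(best_fit, 3))
```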
StreamSqueeze: a dynamic stream visualization for monitoring of event data
NASA Astrophysics Data System (ADS)
Mansmann, Florian; Krstajic, Milos; Fischer, Fabian; Bertini, Enrico
2012-01-01
While automated analytical solutions for data streams are already in place for clear-cut situations, only a few visual approaches have been proposed in the literature for exploratory analysis tasks on dynamic information. However, because of the competitive or security-related advantages that real-time information gives in domains such as finance, business, or networking, we are convinced that there is a need for exploratory visualization tools for data streams. Under the conditions that new events have higher relevance and that smooth transitions enable traceability of items, we propose a novel dynamic stream visualization called StreamSqueeze. In this technique the degree of interest of recent items is expressed through an increase in size, so recent events can be shown with more detail. The technique has two main benefits: first, the layout algorithm arranges items in several lists of various sizes and optimizes the positions within each list so that the transition of an item from one list to another triggers the fewest visual changes; second, the animation scheme ensures that for 50 percent of the time an item has a static screen position where reading is most effective, and then continuously shrinks and moves to its next static position in the subsequent list. To demonstrate the capability of our technique, we apply it to large and high-frequency news and syslog streams and show how it maintains optimal stability of the layout under the conditions given above.
OpenMDAO: Framework for Flexible Multidisciplinary Design, Analysis and Optimization Methods
NASA Technical Reports Server (NTRS)
Heath, Christopher M.; Gray, Justin S.
2012-01-01
The OpenMDAO project is underway at NASA to develop a framework which simplifies the implementation of state-of-the-art tools and methods for multidisciplinary design, analysis, and optimization. Foremost, OpenMDAO has been designed to handle variable problem formulations, encourage reconfigurability, and promote model reuse. This work demonstrates the concept of iteration hierarchies in OpenMDAO to achieve a flexible environment for supporting advanced optimization methods which include adaptive sampling and surrogate modeling techniques. In this effort, two efficient global optimization methods were applied to solve a constrained single-objective and a constrained multi-objective version of a joint aircraft/engine sizing problem. The aircraft model, NASA's next-generation advanced single-aisle civil transport, is being studied as part of the Subsonic Fixed Wing project to help meet simultaneous program goals for reduced fuel burn, emissions, and noise. This analysis serves as a realistic test problem to demonstrate the flexibility and reconfigurability offered by OpenMDAO.
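Since OpenMDAO is open source, its flavor can be shown directly. The sketch below uses the current openmdao.api interface (which postdates the 2012-era framework described above) to drive a toy analysis component with a gradient-based optimizer; the real study wrapped aircraft/engine sizing analyses and global optimizers instead.

```python
import openmdao.api as om

prob = om.Problem()
# A toy analysis component standing in for the aircraft/engine sizing models.
prob.model.add_subsystem(
    "parab", om.ExecComp("f = (x - 3.0)**2 + x*y + (y + 4.0)**2"), promotes=["*"])

prob.driver = om.ScipyOptimizeDriver()
prob.driver.options["optimizer"] = "SLSQP"

prob.model.add_design_var("x", lower=-10.0, upper=10.0)
prob.model.add_design_var("y", lower=-10.0, upper=10.0)
prob.model.add_objective("f")

prob.setup()
prob.run_driver()
print(prob.get_val("x"), prob.get_val("y"), prob.get_val("f"))
```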
Tiley, J S; Viswanathan, G B; Shiveley, A; Tschopp, M; Srinivasan, R; Banerjee, R; Fraser, H L
2010-08-01
Precipitates of the ordered L1₂ gamma' phase (dispersed in the face-centered cubic, or FCC, gamma matrix) were imaged in Rene 88 DT, a commercial multicomponent Ni-based superalloy, using energy-filtered transmission electron microscopy (EFTEM). Imaging was performed using the Cr, Co, Ni, Ti, and Al elemental L-absorption edges in the energy loss spectrum. Manual and automated segmentation procedures were utilized for identification of precipitate boundaries and measurement of precipitate sizes. The automated region growing technique for precipitate identification in images was found to measure precipitate diameters accurately. In addition, the region growing technique provided a repeatable method for optimizing segmentation techniques for varying EFTEM conditions. (c) 2010 Elsevier Ltd. All rights reserved.
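Region growing itself is compact enough to sketch: starting from a seed pixel, the region absorbs 4-connected neighbors whose intensity stays within a tolerance of the running region mean. The array contents and tolerance below are illustrative, not the paper's EFTEM settings.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=20.0):
    """Return a boolean mask of pixels reachable from `seed` whose intensity
    stays within `tol` of the running region mean (4-connectivity)."""
    mask = np.zeros(img.shape, dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    total, count = float(img[seed]), 1
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                    and not mask[nr, nc]
                    and abs(float(img[nr, nc]) - total / count) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
                total += float(img[nr, nc])
                count += 1
    return mask

# Tiny synthetic "precipitate": a bright square on a dark background.
img = np.zeros((8, 8)); img[2:6, 2:6] = 100.0
print(region_grow(img, seed=(3, 3)).sum(), "pixels in grown region")
```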
AUTOMATIC GENERATION OF FFT FOR TRANSLATIONS OF MULTIPOLE EXPANSIONS IN SPHERICAL HARMONICS
Mirkovic, Dragan; Pettitt, B. Montgomery; Johnsson, S. Lennart
2009-01-01
The fast multipole method (FMM) is an efficient algorithm for calculating electrostatic interactions in molecular simulations and a promising alternative to Ewald summation methods. Translation of multipole expansions in spherical harmonics is the most important operation of the fast multipole method, and fast Fourier transform (FFT) acceleration of this operation is among the fastest ways of improving its performance. The technique relies on highly optimized implementations of FFT routines for the desired expansion sizes, which need to incorporate knowledge of the symmetries and zero elements in the input arrays. Here a method is presented for the automatic generation of such highly optimized routines. PMID:19763233
Optimal placement of actuators and sensors in control augmented structural optimization
NASA Technical Reports Server (NTRS)
Sepulveda, A. E.; Schmit, L. A., Jr.
1990-01-01
A control-augmented structural synthesis methodology is presented in which actuator and sensor placement is treated in terms of (0,1) variables. Structural member sizes and control variables are treated simultaneously as design variables. A multiobjective utopian approach is used to obtain a compromise solution for inherently conflicting objective functions such as structural mass, control effort, and number of actuators. Constraints are imposed on transient displacements, natural frequencies, actuator forces, and dynamic stability, as well as on controllability and observability of the system. The combinatorial aspects of the mixed (0,1)-continuous variable design optimization problem are made tractable by combining approximation concepts with branch and bound techniques. Some numerical results for example problems are presented to illustrate the efficacy of the design procedure set forth.
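The branch-and-bound component can be shown on a toy placement problem: select actuator locations to maximize a benefit score within a cost budget, pruning any branch whose optimistic (LP-relaxation) bound cannot beat the incumbent. The benefits and costs are invented; the paper couples such enumeration with approximation concepts on the structural model.

```python
def branch_and_bound(benefit, cost, budget):
    """Maximize total benefit of selected locations within `budget` using
    depth-first branch and bound. Items are pre-sorted by benefit/cost so the
    greedy fractional bound equals the LP relaxation, a valid pruning bound."""
    items = sorted(zip(benefit, cost), key=lambda bc: bc[0] / bc[1], reverse=True)
    n, best = len(items), [0.0]

    def bound(i, value, remaining):
        # Optimistic completion: take remaining items fractionally.
        for b, c in items[i:]:
            if c <= remaining:
                value, remaining = value + b, remaining - c
            else:
                return value + b * remaining / c
        return value

    def recurse(i, value, remaining):
        best[0] = max(best[0], value)
        if i == n or bound(i, value, remaining) <= best[0]:
            return  # prune: the optimistic bound cannot beat the incumbent
        b, c = items[i]
        if c <= remaining:
            recurse(i + 1, value + b, remaining - c)  # branch: place actuator i
        recurse(i + 1, value, remaining)              # branch: skip actuator i

    recurse(0, 0.0, budget)
    return best[0]

# Hypothetical per-location controllability benefits and actuator costs.
print(branch_and_bound([9.0, 7.0, 6.0, 3.0], [4.0, 3.0, 3.0, 1.0], budget=7.0))
```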
Defect design of insulation systems for photovoltaic modules
NASA Technical Reports Server (NTRS)
Mon, G. R.
1981-01-01
A defect-design approach to sizing electrical insulation systems for terrestrial photovoltaic modules is presented. It consists of gathering voltage-breakdown statistics on various thicknesses of candidate insulation films; for a designated voltage, module failure probabilities are then calculated for enumerated combinations of film thickness and number of layers. Cost analysis then selects the most economical insulation system. A manufacturing yield problem is solved to exemplify the technique. Results for unaged Mylar suggest using fewer layers of thicker films. Defect design incorporates the effects of flaws in optimal insulation system selection and obviates choosing a tolerable failure rate, since the optimization process accomplishes that. Exposure to weathering and voltage stress reduces the voltage-withstanding capability of module insulation films. Defect design, applied to aged polyester films, promises to yield reliable, cost-optimal insulation systems.
Surgical Site Infiltration for Abdominal Surgery: A Novel Neuroanatomical-based Approach
Janis, Jeffrey E.; Haas, Eric M.; Ramshaw, Bruce J.; Nihira, Mikio A.; Dunkin, Brian J.
2016-01-01
Background: Provision of optimal postoperative analgesia should facilitate postoperative ambulation and rehabilitation. An optimal multimodal analgesia technique would include the use of nonopioid analgesics, including local/regional analgesic techniques such as surgical site local anesthetic infiltration. This article presents a novel approach to surgical site infiltration techniques for abdominal surgery based upon neuroanatomy. Methods: Literature searches were conducted for studies reporting the neuroanatomical sources of pain after abdominal surgery. Studies identified by the preceding search were also reviewed for relevant publications, which were retrieved manually. Results: Based on neuroanatomy, an optimal surgical site infiltration technique would consist of systematic, extensive, meticulous administration of local anesthetic into the peritoneal (or preperitoneal), subfascial, and subdermal tissue planes. The volume of local anesthetic depends on the size of the incision, such that 1 to 1.5 mL is injected per 1 to 2 cm of surgical incision per layer. It is best to infiltrate with a 22-gauge, 1.5-inch needle. The needle is inserted approximately 0.5 to 1 cm into the tissue plane, and local anesthetic solution is injected while slowly withdrawing the needle, which should reduce the risk of intravascular injection. Conclusions: Meticulous, systematic, and extensive surgical site local anesthetic infiltration in the various tissue planes, including the peritoneal, musculofascial, and subdermal tissues where pain foci originate, provides excellent postoperative pain relief. This approach should be combined with the use of other nonopioid analgesics, with opioids reserved for rescue. Further well-designed studies are necessary to assess the analgesic efficacy of the proposed infiltration technique. PMID:28293525
Ranjan, Amalendu P; Mukerjee, Anindita; Helson, Lawrence; Vishwanatha, Jamboor K
2012-08-31
Nanoparticle-based delivery of anticancer drugs has been widely investigated. However, a very important process for research and development in any pharmaceutical industry is scaling up nanoparticle formulation techniques so as to produce large batches for preclinical and clinical trials. This process is not only critical but also difficult, as it involves various formulation parameters to be modulated all in the same process. In our present study, we formulated curcumin-loaded poly(lactic acid-co-glycolic acid) nanoparticles (PLGA-CURC). This improved the bioavailability of curcumin, a potent natural anticancer drug, making it suitable for cancer therapy. Post formulation, we optimized our process by Response Surface Methodology (RSM) using Central Composite Design (CCD) and scaled up the formulation process in four stages, with the final scale-up process yielding 5 g of curcumin-loaded nanoparticles within the laboratory setup. The nanoparticles formed after the scale-up process were characterized for particle size, drug loading and encapsulation efficiency, surface morphology, in vitro release kinetics, and pharmacokinetics. Stability analysis and gamma sterilization were also carried out. Results revealed that process scale-up is being mastered for elaboration to the 5 g level. The mean nanoparticle size of the scaled-up batch was found to be 158.5±9.8 nm and the drug loading was determined to be 10.32±1.4%. The in vitro release study illustrated a slow sustained release corresponding to 75% drug over a period of 10 days. The pharmacokinetic profile of PLGA-CURC in rats following i.v. administration followed a two-compartment model, with the area under the curve (AUC0-∞) being 6.139 mg/L h. Gamma sterilization showed no significant change in the particle size or drug loading of the nanoparticles. Stability analysis revealed long-term physicochemical stability of the PLGA-CURC formulation. A successful effort towards formulating, optimizing, and scaling up PLGA-CURC by using a solid-oil/water emulsion technique was demonstrated. The process used CCD-RSM for optimization and was further scaled up to produce 5 g of PLGA-CURC with almost similar physicochemical characteristics to those of the primary formulated batch.
Design optimization of space structures
NASA Technical Reports Server (NTRS)
Felippa, Carlos
1991-01-01
The topology-shape-size optimization of space structures is investigated through Kikuchi's homogenization method. The method starts from a 'design domain block,' which is a region of space into which the structure is to materialize. This domain is initially filled with a finite element mesh, typically regular. Force and displacement boundary conditions corresponding to applied loads and supports are applied at specific points in the domain. An optimal structure is to be 'carved out' of the design domain under two conditions: (1) a cost function is to be minimized, and (2) equality or inequality constraints are to be satisfied. The 'carving' process is accomplished by letting microstructure holes develop and grow in elements during the optimization process. These holes have a rectangular shape in two dimensions and a cubical shape in three dimensions, and may also rotate with respect to the reference axes. The properties of the perforated element are obtained through a homogenization procedure. Once a hole reaches the volume of the element, that element effectively disappears. The project has two phases. In the first phase the method was implemented as the combination of two computer programs: a finite element module and an optimization driver. In the second phase, the focus is on the application of this technique to planetary structures. The finite element part of the method was programmed for the two-dimensional case using four-node quadrilateral elements to cover the design domain. An element homogenization technique different from that of Kikuchi and coworkers was implemented. The optimization driver is based on an augmented Lagrangian optimizer, with the volume constraint treated as a Courant penalty function. The optimizer has to be specially tuned to this type of optimization because the number of design variables can reach into the thousands. The driver is presently under development.
Berthaume, Michael A.; Dumont, Elizabeth R.; Godfrey, Laurie R.; Grosse, Ian R.
2014-01-01
Teeth are often assumed to be optimal for their function, which allows researchers to derive dietary signatures from tooth shape. Most tooth shape analyses normalize for tooth size, potentially masking the relationship between relative food item size and tooth shape. Here, we model how relative food item size may affect optimal tooth cusp radius of curvature (RoC) during the fracture of brittle food items using a parametric finite-element (FE) model of a four-cusped molar. Morphospaces were created for four different food item sizes by altering cusp RoCs to determine whether optimal tooth shape changed as food item size changed. The morphospaces were also used to investigate whether variation in efficiency metrics (i.e. stresses, energy and optimality) changed as food item size changed. We found that optimal tooth shape changed as food item size changed, but that all optimal morphologies were similar, with one dull cusp that promoted high stresses in the food item and three cusps that acted to stabilize the food item. There were also positive relationships between food item size and the coefficients of variation for stresses in food item and optimality, and negative relationships between food item size and the coefficients of variation for stresses in the enamel and strain energy absorbed by the food item. These results suggest that relative food item size may play a role in selecting for optimal tooth shape, and the magnitude of these selective forces may change depending on food item size and which efficiency metric is being selected. PMID:25320068
Adaptive Batch Mode Active Learning.
Chakraborty, Shayok; Balasubramanian, Vineeth; Panchanathan, Sethuraman
2015-08-01
Active learning techniques have gained popularity to reduce human effort in labeling data instances for inducing a classifier. When faced with large amounts of unlabeled data, such algorithms automatically identify the exemplar and representative instances to be selected for manual annotation. More recently, there have been attempts toward a batch mode form of active learning, where a batch of data points is simultaneously selected from an unlabeled set. Real-world applications require adaptive approaches for batch selection in active learning, depending on the complexity of the data stream in question. However, the existing work in this field has primarily focused on static or heuristic batch size selection. In this paper, we propose two novel optimization-based frameworks for adaptive batch mode active learning (BMAL), where the batch size as well as the selection criteria are combined in a single formulation. We exploit gradient-descent-based optimization strategies as well as properties of submodular functions to derive the adaptive BMAL algorithms. The solution procedures have the same computational complexity as existing state-of-the-art static BMAL techniques. Our empirical results on the widely used VidTIMIT and the mobile biometric (MOBIO) data sets portray the efficacy of the proposed frameworks and also certify the potential of these approaches in being used for real-world biometric recognition applications.
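The submodular flavor of batch selection can be sketched with greedy selection: pick, one point at a time, the unlabeled instance that most increases a diversity-plus-uncertainty objective. The objective below (facility-location coverage plus an uncertainty bonus) is a generic stand-in, not the paper's exact formulation.

```python
import numpy as np

def greedy_batch(sim, uncertainty, k, alpha=0.5):
    """Select k points maximizing a submodular score:
    facility-location coverage of the pool plus an uncertainty bonus."""
    n = sim.shape[0]
    chosen, covered = [], np.zeros(n)
    for _ in range(k):
        best_gain, best_j = -np.inf, -1
        for j in range(n):
            if j in chosen:
                continue
            gain = np.maximum(covered, sim[:, j]).sum() - covered.sum()
            gain += alpha * uncertainty[j]
            if gain > best_gain:
                best_gain, best_j = gain, j
        chosen.append(best_j)
        covered = np.maximum(covered, sim[:, best_j])
    return chosen

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))                                   # toy unlabeled pool
sim = np.exp(-np.linalg.norm(X[:, None] - X[None], axis=-1))   # RBF similarities
uncertainty = rng.uniform(size=40)                             # stand-in classifier uncertainty
print(greedy_batch(sim, uncertainty, k=5))
```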
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming
2014-10-01
Four improved Ant Colony Optimization (ACO) algorithms, i.e., the probability density function based ACO (PDF-ACO) algorithm, the Region ACO (RACO) algorithm, the Stochastic ACO (SACO) algorithm, and the Homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e., the Rosin-Rammler (R-R) distribution function, the normal (N-N) distribution function, and the logarithmic normal (L-N) distribution function, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows a reasonable agreement between the original distribution function and the general distribution function when only the variation of the length of the rotational semi-axis is considered.
NASA Astrophysics Data System (ADS)
Zainal Ariffin, S.; Razlan, A.; Ali, M. Mohd; Efendee, A. M.; Rahman, M. M.
2018-03-01
Background/Objectives: The paper discusses the optimum cutting parameters and coolant technique conditions (1.0 mm nozzle orifice, wet, and dry) to optimize surface roughness, temperature, and tool wear in the machining process, based on the selected setting parameters. The selected cutting parameters for this study were cutting speed, feed rate, depth of cut, and coolant technique condition. Methods/Statistical Analysis: Experiments were conducted and investigated based on Design of Experiments (DOE) with the Response Surface Method. The research on the aggressive machining of aluminum alloy (A319) for automotive applications is an effort to understand the machining concepts widely used in a variety of manufacturing industries, especially the automotive industry. Findings: The results show that surface roughness, temperature, and tool wear, the dominant failure modes, increase during machining, and that the 1.0 mm nozzle orifice can also help minimize built-up edge on the A319. Surface roughness, productivity, and the optimization of cutting speed in the technical and commercial aspects of A319 manufacturing processes are discussed as further work for the automotive component industry. Applications/Improvements: The research results are also beneficial in minimizing the costs incurred and improving the productivity of manufacturing firms. According to the mathematical models and equations generated by CCD-based RSM, experiments were performed, and a cutting coolant technique using the selected nozzle size that reduces tool wear, surface roughness, and temperature was obtained. The results have been analyzed and optimization has been carried out for selecting cutting parameters, showing that the effectiveness and efficiency of the system can be identified, which helps to solve potential problems.
Johnson, Perry B; Monterroso, Maria I; Yang, Fei; Mellon, Eric
2017-11-25
This work explores how the choice of prescription isodose line (IDL) affects the dose gradient, target coverage, and treatment time for Gamma Knife radiosurgery when a smaller shot is encompassed within a larger shot at the same stereotactic coordinates (shot within shot technique). Beam profiles for the 4, 8, and 16 mm collimator settings were extracted from the treatment planning system and characterized using Gaussian fits. The characterized data were used to create over 10,000 shot within shot configurations by systematically changing collimator weighting and choice of prescription IDL. Each configuration was quantified in terms of the dose gradient, target coverage, and beam-on time. By analyzing these configurations, it was found that there are regions of overlap in target size where a higher prescription IDL provides equivalent dose fall-off to a plan prescribed at the 50% IDL. Furthermore, the data indicate that treatment times within these regions can be reduced by up to 40%. An optimization strategy was devised to realize these gains. The strategy was tested for seven patients treated for 1-4 brain metastases (20 lesions total). For a single collimator setting, the gradient in the axial plane was steepest when prescribed to the 56-63% (4 mm), 62-70% (8 mm), and 77-84% (16 mm) IDL, respectively. Through utilization of the optimization technique, beam-on time was reduced by more than 15% in 16/20 lesions. The volume of normal brain receiving 12 Gy or above also decreased in many cases, and in only one instance increased by more than 0.5 cm³. This work demonstrates that IDL optimization using the shot within shot technique can reduce treatment times without degrading treatment plan quality.
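Characterizing a dose profile with a Gaussian fit, as done above for the extracted collimator beam profiles, can be sketched as follows; the synthetic profile data stand in for values extracted from a treatment planning system.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

# Synthetic cross-profile for a nominal 8 mm shot (positions in mm),
# standing in for data extracted from the planning system.
x = np.linspace(-10, 10, 81)
y = gaussian(x, amp=1.0, mu=0.0, sigma=3.4) \
    + 0.01 * np.random.default_rng(0).normal(size=x.size)

(amp, mu, sigma), _ = curve_fit(gaussian, x, y, p0=[1.0, 0.0, 3.0])
fwhm = 2.3548 * sigma  # full width at half maximum of the fitted profile
print(f"fit: amp={amp:.3f}, mu={mu:.3f} mm, FWHM={fwhm:.2f} mm")
```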
Chen, Zhenning; Shao, Xinxing; Xu, Xiangyang; He, Xiaoyuan
2018-02-01
The technique of digital image correlation (DIC), which has been widely used for noncontact deformation measurement in both the scientific and engineering fields, is greatly affected in its performance by the quality of the speckle pattern. This study was concerned with the optimization of the digital speckle pattern (DSP) for DIC in consideration of both accuracy and efficiency. The root-mean-square error of the inverse compositional Gauss-Newton algorithm and the average number of iterations were used as quality metrics. Moreover, the influence of subset size and image noise level, which are the basic parameters in the quality assessment formulations, was also considered. The simulated binary speckle patterns were first compared with Gaussian speckle patterns and captured DSPs. Both single-radius and multi-radius DSPs were optimized. Experimental tests and analyses were conducted to obtain the optimized and recommended DSP. The vector diagram of the optimized speckle pattern was also uploaded as a reference.
Optimal planning and design of a renewable energy based supply system for microgrids
Hafez, Omar; Bhattacharya, Kankar
2012-03-03
This paper presents a technique for optimal planning and design of hybrid renewable energy systems for microgrid applications. The Distributed Energy Resources Customer Adoption Model (DER-CAM) is used to determine the optimal size and type of distributed energy resources (DERs) and their operating schedules for a sample utility distribution system. Using the DER-CAM results, an evaluation is performed of the electrical performance of the distribution circuit when the DERs selected by the DER-CAM optimization analyses are incorporated. Results of analyses regarding the economic benefits of utilizing the optimal locations identified for the selected DERs within the system are also presented. The actual Brookhaven National Laboratory (BNL) campus electrical network is used as an example to show the effectiveness of this approach. The results show that these technical and economic analyses of hybrid renewable energy systems are essential for the efficient utilization of renewable energy resources for microgrid applications.
Microfabrication of three-dimensional filters for liposome extrusion
NASA Astrophysics Data System (ADS)
Baldacchini, Tommaso; Nuñez, Vicente; LaFratta, Christopher N.; Grech, Joseph S.; Vullev, Valentine I.; Zadoyan, Ruben
2015-03-01
Liposomes play a relevant role in the biomedical field of drug delivery. The ability of these lipid vesicles to encapsulate and transport a variety of bioactive molecules has fostered their use in several therapeutic applications, from cancer treatments to the administration of drugs with antiviral activities. Size and uniformity are key parameters to take into consideration when preparing liposomes; these factors greatly influence their effectiveness in both in vitro and in vivo experiments. A popular technique employed to achieve the optimal liposome dimension (around 100 nm in diameter) and uniform size distribution is repetitive extrusion through a polycarbonate filter. We investigated two femtosecond laser direct writing techniques for the fabrication of three-dimensional filters within a microfluidics chip for liposomes extrusion. The miniaturization of the extrusion process in a microfluidic system is the first step toward a complete solution for lab-on-a-chip preparation of liposomes from vesicles self-assembly to optical characterization.
Backwards compatible high dynamic range video compression
NASA Astrophysics Data System (ADS)
Dolzhenko, Vladimir; Chesnokov, Vyacheslav; Edirisinghe, Eran A.
2014-02-01
This paper presents a two-layer CODEC architecture for high dynamic range video compression. The base layer contains the tone-mapped video stream encoded with 8 bits per component, which can be decoded using conventional equipment. The base layer content is optimized for rendering on low dynamic range displays. The enhancement layer contains the image difference, in a perceptually uniform color space, between the result of the inverse tone-mapped base layer content and the original video stream. Prediction of the high dynamic range content reduces the redundancy in the transmitted data while still preserving highlights and out-of-gamut colors. The perceptually uniform color space enables the use of standard rate-distortion optimization algorithms. We present techniques for efficient implementation and encoding of non-uniform tone mapping operators with low overhead in terms of bitstream size and number of operations. The transform representation is based on a human visual system model and is suitable for global and local tone mapping operators. The compression techniques include predicting the transform parameters from previously decoded frames and from already decoded data for the current frame. Different video compression techniques are compared: backwards compatible and non-backwards compatible, using AVC and HEVC codecs.
Chen, Yu; Dong, Fengqing; Wang, Yonghong
2016-09-01
With determined components and experimental reproducibility, the chemically defined medium (CDM) and the minimal chemically defined medium (MCDM) are used in many metabolism and regulation studies. This research aimed to develop a chemically defined medium supporting high-cell-density growth of Bacillus coagulans, which is a promising producer of lactic acid and other bio-chemicals. In this study, a systematic methodology combining experimental techniques with flux balance analysis (FBA) was proposed to design and simplify a CDM. The single-omission and single-addition techniques were employed to determine the essential and stimulatory compounds, before the optimization of their concentrations by statistical methods. In addition, to improve growth rationally, in silico omission and addition were performed by FBA based on the construction of a medium-size metabolic model of B. coagulans 36D1. Thus, CDMs were developed that support considerable biomass production by at least five B. coagulans strains, including the two model strains B. coagulans 36D1 and ATCC 7050.
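Flux balance analysis reduces to a linear program: maximize a growth objective subject to steady-state stoichiometry S·v = 0 and flux bounds. The three-reaction toy network below is illustrative, far smaller than the metabolic model used in the study.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: A_ext -> A (uptake), A -> B (conversion), B -> biomass (growth).
# Rows: metabolites A, B; columns: reactions [uptake, convert, growth].
S = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
bounds = [(0, 10.0), (0, None), (0, None)]  # uptake capped at 10 units

# linprog minimizes, so negate the growth coefficient to maximize it.
c = np.array([0.0, 0.0, -1.0])
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal growth flux:", -res.fun, "flux vector:", res.x)
```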
A Hybrid Optimization Framework with POD-based Order Reduction and Design-Space Evolution Scheme
NASA Astrophysics Data System (ADS)
Ghoman, Satyajit S.
The main objective of this research is to develop an innovative multi-fidelity multi-disciplinary design, analysis and optimization suite that integrates certain solution generation codes and newly developed innovative tools to improve the overall optimization process. The research performed herein is divided into two parts: (1) the development of an MDAO framework by integration of variable-fidelity physics-based computational codes, and (2) enhancements to such a framework by incorporating innovative features extending its robustness. The first part of this dissertation describes the development of a conceptual Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment (M3DOE), in the context of aircraft wing optimization. M3DOE provides the user a capability to optimize configurations with a choice of (i) the level of fidelity desired, (ii) the use of a single-step or multi-step optimization strategy, and (iii) a combination of a series of structural and aerodynamic analyses. The modularity of M3DOE allows it to be a part of other inclusive optimization frameworks. M3DOE is demonstrated within the context of shape and sizing optimization of the wing of a Generic Business Jet aircraft. Two different optimization objectives, viz. dry weight minimization and cruise range maximization, are studied by conducting one low-fidelity and two high-fidelity optimization runs to demonstrate the application scope of M3DOE. The second part of this dissertation describes the development of an innovative hybrid optimization framework that extends the robustness of M3DOE by employing a proper orthogonal decomposition-based design-space order reduction scheme combined with the evolutionary algorithm technique. The POD method of extracting dominant modes from an ensemble of candidate configurations is used for the design-space order reduction. The snapshot of the candidate population is updated iteratively using the evolutionary-algorithm technique of fitness-driven retention. This strategy capitalizes on the advantages of evolutionary algorithms as well as POD-based reduced order modeling, while overcoming the shortcomings inherent in these techniques. When linked with M3DOE, this strategy offers a computationally efficient methodology for problems with a high level of complexity and a challenging design space. This newly developed framework is demonstrated for its robustness on a nonconventional supersonic tailless air vehicle wing shape optimization problem.
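The POD step can be sketched directly: stack candidate designs as snapshot columns, take the SVD, and keep the leading modes as a reduced basis for the design space. The random snapshot matrix below stands in for an ensemble of candidate wing configurations.

```python
import numpy as np

rng = np.random.default_rng(0)
# Snapshot matrix: each column is one candidate design vector
# (e.g., wing shape/sizing variables); random data as a stand-in.
snapshots = rng.normal(size=(200, 30))

# POD via thin SVD: columns of U are the orthonormal POD modes.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99) + 1)   # modes capturing 99% of the energy
basis = U[:, :r]

# Any candidate design is now represented by r coefficients instead of 200 values.
coeffs = basis.T @ snapshots[:, 0]
print(f"kept {r} of {len(s)} modes; reconstruction error:",
      np.linalg.norm(basis @ coeffs - snapshots[:, 0]))
```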
NASA Technical Reports Server (NTRS)
Welstead, Jason; Crouse, Gilbert L., Jr.
2014-01-01
Empirical sizing guidelines such as tail volume coefficients have long been used in the early aircraft design phases for sizing stabilizers, resulting in conservatively stable aircraft. While successful, this results in increased empty weight, reduced performance, and greater procurement and operational cost relative to an aircraft with optimally sized surfaces. Including flight dynamics in the conceptual design process allows the design to move away from empirical methods while implementing modern control techniques. A challenge of flight dynamics and control is the numerous design variables, which change fluidly throughout the conceptual design process, required to evaluate the system response to some disturbance. This research addresses that challenge not by implementing higher-order tools, such as computational fluid dynamics, but instead by linking the lower-order tools typically used within the conceptual design process so each discipline feeds into the others. In this research, flight dynamics and control was incorporated into the conceptual design process along with the traditional disciplines of vehicle sizing, weight estimation, aerodynamics, and performance. For the controller, a linear quadratic regulator structure with constant gains has been specified to reduce the user input. Coupling all the disciplines in the conceptual design phase allows the aircraft designer to explore larger design spaces where stabilizers are sized according to dynamic response constraints rather than historical static margin and volume coefficient guidelines.
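A constant-gain linear quadratic regulator of the kind specified for the controller can be computed in a few lines: solve the continuous-time algebraic Riccati equation for the linearized dynamics and form K = R⁻¹BᵀP. The two-state system below is a generic illustration, not the aircraft model from the study.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Generic linearized dynamics x' = Ax + Bu (a stand-in, not the aircraft model).
A = np.array([[0.0, 1.0],
              [-0.5, -0.2]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # penalize state deviation
R = np.array([[1.0]])      # penalize control effort

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # constant LQR gain matrix
print("LQR gains:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```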
Dorati, Rossella; DeTrizio, Antonella; Spalla, Melissa; Migliavacca, Roberta; Pagani, Laura; Pisani, Silvia; Chiesa, Enrica; Modena, Tiziana; Genta, Ida
2018-01-01
Nanotechnology is a promising approach both for restoring or enhancing the activity of old and conventional antimicrobial agents and for treating intracellular infections by providing intracellular targeting and sustained release of drug inside infected cells. The present paper introduces a formulation study of gentamicin-loaded biodegradable nanoparticles (Nps). A solid-oil-in-water technique was studied for gentamicin sulfate nanoencapsulation using uncapped poly(lactide-co-glycolide) (PLGA-H) and poly(lactide-co-glycolide)-co-polyethylene glycol (PLGA-PEG) blends. A screening design was applied to optimize drug payload, Np size and size distribution, and stability and resuspendability after freeze-drying. PLGA-PEG concentration was the most significant factor influencing particle size and drug content (DC): 8 w/w% DC and 200 nm Nps were obtained. Stirring rate was the most influential factor for size distribution (PDI): 700 rpm gave a homogeneous Np dispersion (PDI = 1). Further experimental parameters, investigated by a 2³ screening design, were polymer blend composition (PLGA-PEG and PLGA-H) and the polyvinyl alcohol (PVA) and methanol concentrations in the aqueous phase. Drug content was increased to 10.5 w/w%. Nanoparticle lyophilization was studied by adding the cryoprotectants polyvinylpyrrolidone K17 and K32 and sodium carboxymethylcellulose. The freeze-drying protocol was optimized by a mixture design. A freely resuspendable freeze-dried Np powder with stable Np size and payload was developed. The powder was tested on clinical bacterial isolates, demonstrating that gentamicin sulfate kept its activity after encapsulation. PMID:29329209
Realm of Thermoalkaline Lipases in Bioprocess Commodities.
Lajis, Ahmad Firdaus B
2018-01-01
For decades, microbial lipases have been notable biocatalysts, efficiently catalyzing various processes in many important industries. Biocatalysts are less corrosive to industrial equipment, and due to their substrate specificity and regioselectivity they produce less harmful waste, which promotes environmental sustainability. At present, thermostable and alkaline-tolerant lipases have gained enormous interest as biocatalysts due to their stability and robustness under high-temperature and alkaline operating environments. Several characteristics of thermostable and alkaline-tolerant lipases are discussed. Their molecular weights and resistance towards a range of temperatures, pH values, metals, and surfactants are compared. Their industrial applications in biodiesel, biodetergents, biodegreasing, and other types of bioconversions are also described. This review also discusses advances in fermentation processes for thermostable and alkaline-tolerant lipase production, focusing on process development in microorganism selection and strain improvement, culture medium optimization via several optimization techniques (i.e., one-factor-at-a-time, response surface methodology, and artificial neural networks), and other fermentation parameters (i.e., inoculum size, temperature, pH, agitation rate, dissolved oxygen tension (DOT), and aeration rate). Two common fermentation techniques for thermostable and alkaline-tolerant lipase production, solid-state and submerged fermentation, are compared and discussed. Recent optimization approaches using evolutionary algorithms (i.e., Genetic Algorithm, Differential Evolution, and Particle Swarm Optimization) are also highlighted in this article.
Multi-hop path tracing of mobile robot with multi-range image
NASA Astrophysics Data System (ADS)
Choudhury, Ramakanta; Samal, Chandrakanta; Choudhury, Umakanta
2010-02-01
It is well known that image processing depends heavily upon the image representation technique. This paper seeks the optimal path of a mobile robot in a specified area where obstacles are predefined and may be modified. The optimal path is represented using the quadtree method. Given the rising interest in the use of quadtrees, we use successive subdivision of the image into quadrants, from which the quadtree is developed. In the quadtree, obstacle-free areas and partially filled areas are represented with different notations. After development of the quadtree, the algorithm finds the optimal path by employing a neighbor-finding technique, with a view to moving the robot from source to destination. The algorithm traverses the entire tree and locates the common ancestor for computation. The computation and the algorithm aim at easing the robot's ability to trace the optimal path with the help of adjacencies between neighboring nodes, determining such adjacencies in the horizontal, vertical, and diagonal directions. In this paper, efforts have been made to determine the movement to adjacent blocks in the quadtree, to detect transitions between blocks of equal size, and finally to generate the result.
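A minimal quadtree over a binary occupancy grid can be sketched as follows: a region becomes a leaf when it is uniformly free or uniformly blocked, and is otherwise split into four quadrants. Path planning would then search free leaves via neighbor finding; the grid below is illustrative.

```python
import numpy as np

def build_quadtree(grid, r0, c0, size):
    """Recursively subdivide grid[r0:r0+size, c0:c0+size].
    Returns 'free', 'full', or a dict of four child quadrants."""
    block = grid[r0:r0 + size, c0:c0 + size]
    if not block.any():
        return "free"          # uniformly obstacle-free leaf
    if block.all():
        return "full"          # uniformly blocked leaf
    h = size // 2              # mixed: split into NW, NE, SW, SE quadrants
    return {"NW": build_quadtree(grid, r0,     c0,     h),
            "NE": build_quadtree(grid, r0,     c0 + h, h),
            "SW": build_quadtree(grid, r0 + h, c0,     h),
            "SE": build_quadtree(grid, r0 + h, c0 + h, h)}

# 8x8 occupancy grid with one rectangular obstacle (True = blocked).
grid = np.zeros((8, 8), dtype=bool)
grid[2:4, 4:8] = True
print(build_quadtree(grid, 0, 0, 8))
```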
Implementation and Optimization of miniGMG - a Compact Geometric Multigrid Benchmark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Samuel; Kalamkar, Dhiraj; Singh, Amik
2012-12-01
Multigrid methods are widely used to accelerate the convergence of iterative solvers for linear systems in a number of different application areas. In this report, we describe miniGMG, our compact geometric multigrid benchmark designed to proxy the multigrid solves found in AMR applications. We explore optimization techniques for geometric multigrid on existing and emerging multicore systems including the Opteron-based Cray XE6, Intel Sandy Bridge and Nehalem-based Infiniband clusters, as well as manycore-based architectures including NVIDIA's Fermi and Kepler GPUs and Intel's Knights Corner (KNC) co-processor. This report examines a variety of novel techniques including communication aggregation, threaded wavefront-based DRAM communication avoidance, dynamic threading decisions, SIMDization, and fusion of operators. We quantify performance through each phase of the V-cycle for both single-node and distributed-memory experiments and provide detailed analysis for each class of optimization. Results show our optimizations yield significant speedups across a variety of subdomain sizes while simultaneously demonstrating the potential of multi- and manycore processors to dramatically accelerate single-node performance. However, our analysis also indicates that improvements in networks and communication will be essential to reap the potential of manycore processors in large-scale multigrid calculations.
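The structure of the geometric multigrid V-cycle that miniGMG proxies can be sketched for a 1D Poisson problem: smooth, restrict the residual, recurse on the coarse grid, prolong the correction, smooth again. The fixed choices here (weighted-Jacobi smoother, injection restriction, two smoothing sweeps) are illustrative, not miniGMG's actual kernels.

```python
import numpy as np

def smooth(u, f, h, sweeps=2):
    """Weighted-Jacobi smoothing for -u'' = f with zero boundary values."""
    for _ in range(sweeps):
        u[1:-1] += 0.8 * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def v_cycle(u, f, h):
    u = smooth(u, f, h)                       # pre-smooth
    if len(u) <= 3:
        return u                              # coarsest grid: smoothing suffices
    r = residual(u, f, h)
    r_coarse = r[::2].copy()                  # restrict residual (injection)
    e_coarse = v_cycle(np.zeros_like(r_coarse), r_coarse, 2 * h)
    e = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), e_coarse)
    u += e                                    # prolong and apply the correction
    return smooth(u, f, h)                    # post-smooth

n = 65                                        # 2^6 + 1 points on [0, 1]
h = 1.0 / (n - 1)
f = np.ones(n)                                # right-hand side
u = np.zeros(n)
for it in range(10):
    u = v_cycle(u, f, h)
    print(it, np.linalg.norm(residual(u, f, h)))
```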
Complex fluid flow and heat transfer analysis inside a calandria based reactor using CFD technique
NASA Astrophysics Data System (ADS)
Kulkarni, P. S.
2017-04-01
A series of numerical experiments has been carried out on a calandria-based reactor to optimize the design and increase the overall heat transfer efficiency using the Computational Fluid Dynamics (CFD) technique. Fluid flow and heat transfer inside the calandria are governed by many geometric and flow parameters, such as the orientation of the inlet, inlet mass flow rate, fuel channel configuration (in-line, staggered, etc.), and the locations of the inlet and outlet. It is well established that heat transfer is greater wherever forced convection dominates, but for geometries like the calandria it is very difficult to achieve forced convection flow everywhere; it in turn depends strongly on the direction of the inlet jet. In the present paper the initial design was optimized with respect to the inlet jet angle, and the optimized design was numerically tested for different heat load and mass flow conditions. To further increase the heat removal capacity of the calandria, further numerical studies were carried out for different inlet geometries. In all the analyses the same overall geometry size and the same number of tubes were considered. The work gives good insight into the fluid flow and heat transfer inside the calandria and offers a guideline for optimizing the design and/or enhancing the capacity of the present design.
Amstutz, Harlan C; Takamura, Karren M; Le Duff, Michel J
2011-04-01
The results of metal-on-metal Conserve® Plus hip resurfacings with up to 14 years of follow-up, with and without the risk factors of small component size and/or large femoral defects, were compared as performed with either first- or second-generation surgical techniques. There was a 99.7% survivorship at ten years for ideal hips (large components and small defects) and a 95.3% survivorship for hips with risk factors. The optimized technique has measurably improved durability in patients with risk factors at the 8-year mark. The lessons learned can help offset the observed learning curve of resurfacing. Copyright © 2011 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Tian; Cui, Xiaoxin; Ni, Yewen; Liao, Kai; Liao, Nan; Yu, Dunshan; Cui, Xiaole
2017-04-01
With shrinking transistor feature size, the fin-type field-effect transistor (FinFET) has become the most promising option in low-power circuit design due to its superior capability to suppress leakage. To support the VLSI digital system flow based on logic synthesis, we have designed an optimized high-performance low-power FinFET standard cell library employing the mixed FBB/RBB technique in the existing stacked structure of each cell. This paper presents the reliability evaluation of the optimized cells under process and operating-environment variations based on Monte Carlo analysis. The variations are modelled with Gaussian distributions of the device parameters, and 10,000 sweeps are conducted in the simulation to obtain the statistical properties of the worst-case delay and input-dependent leakage for each cell. For comparison, a set of non-optimal cells that adopt the same topology without employing the mixed biasing technique is also generated. Experimental results show that the optimized cells achieve standard deviation reductions of up to 39.1% and 30.7% in worst-case delay and input-dependent leakage respectively, while the shrinkage of the normalized deviation in worst-case delay and input-dependent leakage can be up to 98.37% and 24.13%, respectively, which demonstrates that our optimized cells are less sensitive to variability and exhibit greater reliability. Project supported by the National Natural Science Foundation of China (No. 61306040), the State Key Development Program for Basic Research of China (No. 2015CB057201), the Beijing Natural Science Foundation (No. 4152020), and the Natural Science Foundation of Guangdong Province, China (No. 2015A030313147).
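The Monte Carlo evaluation pattern is easy to sketch: draw device parameters from Gaussian distributions, push each sample through a delay model, and report worst-case statistics. The first-order delay model and parameter values below are invented, not the cell library's characterization data.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # sweeps, matching the count used in the reported analysis

# Gaussian process variation on threshold voltage and gate length
# (nominal values and sigmas are illustrative, not from the paper).
vth = rng.normal(0.30, 0.02, N)      # volts
lg = rng.normal(14e-9, 0.7e-9, N)    # meters
vdd = 0.8

# Toy first-order alpha-power delay model for one cell.
alpha = 1.3
delay = lg * vdd / np.maximum(vdd - vth, 1e-3) ** alpha

mean, std = delay.mean(), delay.std()
print(f"delay mean = {mean:.3e}, sigma = {std:.3e}, "
      f"normalized sigma = {std / mean:.2%}, "
      f"3-sigma worst case = {mean + 3 * std:.3e}")
```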
Cell separation using tilted-angle standing surface acoustic waves
Ding, Xiaoyun; Peng, Zhangli; Lin, Sz-Chin Steven; Geri, Michela; Li, Sixing; Li, Peng; Chen, Yuchao; Dao, Ming; Suresh, Subra; Huang, Tony Jun
2014-01-01
Separation of cells is a critical process for studying cell properties, disease diagnostics, and therapeutics. Cell sorting by acoustic waves offers a means to separate cells on the basis of their size and physical properties in a label-free, contactless, and biocompatible manner. The separation sensitivity and efficiency of currently available acoustic-based approaches, however, are limited, thereby restricting their widespread application in research and health diagnostics. In this work, we introduce a unique configuration of tilted-angle standing surface acoustic waves (taSSAW), which are oriented at an optimally designed inclination to the flow direction in the microfluidic channel. We demonstrate that this design significantly improves the efficiency and sensitivity of acoustic separation techniques. To optimize our device design, we carried out systematic simulations of cell trajectories, matching closely with experimental results. Using numerically optimized design of taSSAW, we successfully separated 2- and 10-µm-diameter polystyrene beads with a separation efficiency of ∼99%, and separated 7.3- and 9.9-µm-polystyrene beads with an efficiency of ∼97%. We illustrate that taSSAW is capable of effectively separating particles–cells of approximately the same size and density but different compressibility. Finally, we demonstrate the effectiveness of the present technique for biological–biomedical applications by sorting MCF-7 human breast cancer cells from nonmalignant leukocytes, while preserving the integrity of the separated cells. The method introduced here thus offers a unique route for separating circulating tumor cells, and for label-free cell separation with potential applications in biological research, disease diagnostics, and clinical practice. PMID:25157150
NASA Technical Reports Server (NTRS)
Judge, Russell A.; Snell, Edward H.
1999-01-01
Part of the challenge of macromolecular crystal growth for structure determination is obtaining an appropriate number of crystals with a crystal volume suitable for X-ray analysis. In this respect, an understanding of the effect of solution conditions on macromolecule nucleation rates is advantageous. This study investigated the effects of solution conditions on the nucleation rate and final crystal size of two crystal systems: tetragonal lysozyme and glucose isomerase. Batch crystallization plates were prepared at given solution concentrations and incubated at set temperatures over one week. The number of crystals per well, together with their sizes and axial ratios, was recorded and correlated with solution conditions. Duplicate experiments indicate the reproducibility of the technique. Results for each system showing the effect of supersaturation, incubation temperature, and solution pH on nucleation rates will be presented and discussed. In the case of lysozyme, having optimized solution conditions to produce an appropriate number of crystals of a suitable size, a batch of crystals was prepared under exactly the same conditions. Fifty of these crystals were analyzed by X-ray techniques. The results indicate that even under the same crystallization conditions, a marked variation in crystal properties exists.
Efficiency and optimal size of hospitals: Results of a systematic search
Guglielmo, Annamaria
2017-01-01
Background National Health Systems managers have been subject in recent years to considerable pressure to increase concentration and allow mergers. This pressure has been justified by a belief that larger hospitals lead to lower average costs and better clinical outcomes through the exploitation of economies of scale. In this context, the opportunity to measure scale efficiency is crucial to address the question of optimal productive size and to manage a fair allocation of resources. Methods and findings This paper analyses the stance of existing research on scale efficiency and optimal size of the hospital sector. We performed a systematic search of the past 45 years (1969–2014) of research published in peer-reviewed scientific journals recorded by the Social Sciences Citation Index concerning this topic. We classified articles by the journal’s category, research topic, hospital setting, method, and primary data analysis technique. Results showed that most of the studies focused on the analysis of technical and scale efficiency or on input/output ratios using Data Envelopment Analysis. We also found increasing interest concerning the effect of possible changes in hospital size on quality of care. Conclusions Studies analysed in this review showed that economies of scale are present for merging hospitals. Results supported the current policy of expanding larger hospitals and restructuring/closing smaller hospitals. In terms of beds, studies reported consistent evidence of economies of scale for hospitals with 200–300 beds. Diseconomies of scale can be expected to occur below 200 beds and above 600 beds. PMID:28355255
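Since Data Envelopment Analysis figures so centrally in this literature, here is a minimal sketch of the input-oriented CCR envelopment model solved as a linear program; the hospital inputs and outputs below are invented for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Toy data (assumed): 5 hospitals, inputs = [beds, staff], output = [treated cases]
X = np.array([[200, 300], [350, 500], [600, 900], [250, 420], [800, 1300]], float)
Y = np.array([[5000], [9000], [14000], [6500], [16000]], float)
n = len(X)

def ccr_efficiency(o):
    """Input-oriented CCR (constant returns to scale) envelopment LP for DMU o:
       min theta  s.t.  X'lam <= theta * x_o,  Y'lam >= y_o,  lam >= 0."""
    c = np.r_[1.0, np.zeros(n)]                 # decision vector: [theta, lam_1..lam_n]
    A_in = np.c_[-X[o], X.T]                    # sum_j lam_j x_ij - theta x_io <= 0
    A_out = np.c_[np.zeros(Y.shape[1]), -Y.T]   # -sum_j lam_j y_rj <= -y_ro
    A = np.vstack([A_in, A_out])
    b = np.r_[np.zeros(X.shape[1]), -Y[o]]
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (n + 1))
    return res.x[0]                             # theta = 1 means scale-efficient

for o in range(n):
    print(f"hospital {o}: efficiency = {ccr_efficiency(o):.3f}")
```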
Impact of implant size on cement filling in hip resurfacing arthroplasty.
de Haan, Roel; Buls, Nico; Scheerlinck, Thierry
2014-01-01
Larger proportions of cement within femoral resurfacing implants might result in thermal bone necrosis. We postulate that smaller components are filled with proportionally more cement, causing an elevated failure rate. A total of 19 femoral heads were fitted with polymeric replicas of ReCap (Biomet) resurfacing components fixed with low-viscosity cement. Two specimens were used for each even size between 40 and 56 mm and one for size 58 mm. All specimens were imaged with computed tomography, and the cement thickness and bone density were analyzed. The average cement mantle thickness was 2.63 mm and was not correlated with the implant size. However, specimens with low bone density had thicker cement mantles regardless of size. The average filling index was 36.65% and was correlated with both implant size and bone density. Smaller implants and specimens with lower bone density contained proportionally more cement than larger implants. According to a linear regression model, bone density but not implant size influenced cement thickness. However, both implant size and bone density had a significant impact on the filling index. Large proportions of cement within the resurfacing head have the potential to generate thermal bone necrosis and implant failure. When considering hip resurfacing in patients with a small femoral head and/or osteoporotic bone, extra care should be taken to avoid thermal bone necrosis, and alternative cementing techniques or even cementless implants should be considered. This study should help delimit the indications for hip resurfacing and guide the choice of an optimal cementing technique taking implant size into account.
Parameter Optimization for Turbulent Reacting Flows Using Adjoints
NASA Astrophysics Data System (ADS)
Lapointe, Caelan; Hamlington, Peter E.
2017-11-01
The formulation of a new adjoint solver for topology optimization of turbulent reacting flows is presented. This solver provides novel configurations (e.g., geometries and operating conditions) based on desired system outcomes (i.e., objective functions) for complex reacting flow problems of practical interest. For many such problems, it would be desirable to know optimal values of design parameters (e.g., physical dimensions, fuel-oxidizer ratios, and inflow-outflow conditions) prior to real-world manufacture and testing, which can be expensive, time-consuming, and dangerous. However, computational optimization of these problems is made difficult by the complexity of most reacting flows, necessitating the use of gradient-based optimization techniques in order to explore a wide design space at manageable computational cost. The adjoint method is an attractive way to obtain the required gradients, because the cost of the method is determined by the dimension of the objective function rather than the size of the design space. Here, the formulation of a novel solver is outlined that enables gradient-based parameter optimization of turbulent reacting flows using the discrete adjoint method. Initial results and an outlook for future research directions are provided.
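A minimal sketch of the discrete adjoint bookkeeping for a generic parameterized linear system (the reacting-flow solver itself is abstracted away; matrices and objective are invented). The point the abstract makes is visible directly: one adjoint solve yields the gradient with respect to all 200 parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 200                      # state size, number of design parameters

A0 = np.eye(n) * 5.0
Bs = [rng.standard_normal((n, n)) * 0.01 for _ in range(m)]   # dA/dp_k
b = rng.standard_normal(n)
p = rng.standard_normal(m) * 0.1

def assemble(p):
    A = A0.copy()
    for pk, Bk in zip(p, Bs):
        A += pk * Bk
    return A

# Forward solve and objective J(u) = 0.5 * u^T u
A = assemble(p)
u = np.linalg.solve(A, b)
J = 0.5 * u @ u

# One adjoint solve gives the whole gradient, regardless of m:
#   A^T lam = dJ/du,   dJ/dp_k = -lam^T (dA/dp_k) u
lam = np.linalg.solve(A.T, u)
grad = np.array([-lam @ (Bk @ u) for Bk in Bs])

# Verify one component by finite differences
k, eps = 3, 1e-6
p2 = p.copy(); p2[k] += eps
u2 = np.linalg.solve(assemble(p2), b)
fd = (0.5 * u2 @ u2 - J) / eps
print(grad[k], fd)   # should agree to ~1e-5
```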
Optimization Model for Web Based Multimodal Interactive Simulations.
Halic, Tansel; Ahn, Woojin; De, Suvranu
2015-07-15
This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications where visual quality and the performance of simulations directly influence user experience, overloading of hardware resources may result in unsatisfactory reduction in the quality of the simulation and user satisfaction. However, optimization of simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize the performance of graphical rendering and simulation performance while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user specified design requirements in the optimization phase to ensure best possible computational resource allocation. The optimum solution is used for rendering (e.g. texture size, canvas resolution) and simulation parameters (e.g. simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach. PMID:26085713
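A minimal sketch of the kind of mixed integer program described, assuming a toy quality/cost model and SciPy's milp solver; the discrete levels, scores, and device budget are invented for illustration, not taken from the paper:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical rendering options: three texture levels and three canvas
# resolutions, each with a quality score and a GPU cost (arbitrary units).
tex_quality = np.array([1.0, 2.0, 3.5]); tex_cost = np.array([1.0, 2.5, 6.0])
res_quality = np.array([1.0, 2.2, 4.0]); res_cost = np.array([1.5, 3.0, 7.0])
budget = 8.0   # measured capability of the client device (identification phase)

c = -np.r_[tex_quality, res_quality]          # maximize quality = minimize -quality
constraints = [
    LinearConstraint(np.r_[tex_cost, res_cost], -np.inf, budget),    # resource budget
    LinearConstraint(np.r_[np.ones(3), np.zeros(3)], 1, 1),          # exactly one texture level
    LinearConstraint(np.r_[np.zeros(3), np.ones(3)], 1, 1),          # exactly one resolution
]
res = milp(c, constraints=constraints,
           integrality=np.ones(6), bounds=Bounds(0, 1))   # binary selection variables
x = np.round(res.x).astype(int)
print("texture level:", x[:3].argmax(), " resolution level:", x[3:].argmax())
```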
Weight optimization of plane truss using genetic algorithm
NASA Astrophysics Data System (ADS)
Neeraja, D.; Kamireddy, Thejesh; Santosh Kumar, Potnuru; Simha Reddy, Vijay
2017-11-01
Optimization of structures on the basis of weight has many practical benefits in every engineering field. Structural efficiency is closely tied to weight, and hence weight optimization gains prime importance. In the field of civil engineering, weight-optimized structural elements are economical and easier to transport to the site. In this study, a genetic optimization algorithm for weight optimization of steel trusses, considering shape, size and topology aspects, has been developed in MATLAB. Material strength and buckling stability criteria have been adopted from IS 800-2007, the construction steel code. The constraints considered in the present study are fabrication, basic nodes, displacements, and compatibility. A genetic algorithm is a natural-selection search technique that combines good solutions to a problem over many generations to improve the results. All solutions are generated randomly and represented individually by binary strings analogous to natural chromosomes. The outcome of the study is a MATLAB program which can optimise a steel truss and display the optimised topology along with element shapes, deflections, and stress results.
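A minimal sketch of such a genetic algorithm applied to discrete member sizing, assuming a made-up five-member catalogue problem with fixed axial forces and a simple stress-penalty fitness (not the paper's IS 800-2007 checks or its shape/topology variables):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sizing problem: pick a discrete cross-sectional area for each of 5
# members to minimize weight, subject to a stress limit under given forces.
areas = np.array([200., 400., 600., 800., 1200.])   # mm^2 catalogue
lengths = np.array([2.0, 2.0, 2.8, 2.8, 4.0])       # m
forces = np.array([50e3, 60e3, 90e3, 40e3, 120e3])  # N (assumed axial forces)
sigma_allow = 150.0                                 # MPa
rho = 7850e-9                                       # kg/mm^3

BITS = 3   # 3 bits per member index into `areas` (indices > 4 wrap around)

def decode(chrom):
    idx = chrom.reshape(5, BITS) @ (1 << np.arange(BITS)[::-1])
    return areas[idx % len(areas)]

def fitness(chrom):
    A = decode(chrom)
    weight = np.sum(rho * A * lengths * 1000.0)      # kg
    stress = forces / A                              # MPa
    penalty = np.sum(np.maximum(stress / sigma_allow - 1.0, 0.0))
    return weight * (1.0 + 10.0 * penalty)           # penalized objective

pop = rng.integers(0, 2, (60, 5 * BITS))
for gen in range(100):
    f = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(f)[:30]]                # truncation selection
    kids = parents[rng.integers(0, 30, 60)].copy()
    mates = parents[rng.integers(0, 30, 60)]
    cut = rng.integers(1, 5 * BITS, 60)              # one-point crossover
    for i in range(60):
        kids[i, cut[i]:] = mates[i, cut[i]:]
    flip = rng.random(kids.shape) < 0.02             # bit-flip mutation
    kids[flip] ^= 1
    pop = kids
    pop[0] = parents[0]                              # elitism

print("areas (mm^2):", decode(pop[0]), " penalized weight:", fitness(pop[0]))
```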
Minimum-sized ideal reactor for continuous alcohol fermentation using immobilized microorganism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamane, T.; Shimizu, S.
Recently, alcohol fermentation has gained considerable attention with the aim of lowering production cost in the production of both fuel ethanol and alcoholic beverages. The overall cost is a summation of the costs of various subsystems such as raw material (sugar, starch, and cellulosic substances) treatment, the fermentation process, and alcohol separation from water solutions; lowering the cost of the fermentation process is very important in lowering the total cost. Several new techniques have been developed for economic continuous ethanol production: use of a continuous wine fermentor with no mechanical stirring, cell recycle combined with continuous removal of ethanol under vacuum, a technique involving a bed of yeast admixed with an inert carrier, and use of immobilized yeast reactors in packed-bed columns and in a three-stage double conical fluidized-bed bioreactor. All these techniques lead to increases, to a greater or lesser extent, in reactor productivity, which in turn result in a reduction of the reactor size for a given production rate and a particular conversion. Since an improvement in the fermentation process often leads to a reduction of fermentor size and hence a lowering of the initial construction cost, it is important to determine theoretically the minimum-size arrangement of ideal reactors from the viewpoint of liquid backmixing. In this short communication, the minimum-sized ideal reactor for continuous alcohol fermentation using immobilized cells is discussed on the basis of a mathematical model. The solution will serve for designing an optimal bioreactor. (26 refs.)
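A minimal sketch of the underlying reactor-sizing comparison, assuming illustrative Monod kinetics with linear ethanol product inhibition (all constants invented); it contrasts the two ideal backmixing limits, plug flow (none) and a single CSTR (complete):

```python
import numpy as np
from scipy.integrate import quad

# Assumed kinetics for an immobilized-cell ethanol fermentor
S0, Pm = 100.0, 90.0       # feed sugar [g/L], inhibitory ethanol level [g/L]
Yps = 0.48                 # ethanol yield [g ethanol / g sugar]
k, Ks = 5.0, 1.0           # max volumetric rate [g/(L h)], saturation const [g/L]
F = 2.0                    # feed flow rate [L/h]

def rate(X):
    """Volumetric sugar consumption rate at conversion X."""
    S = S0 * (1.0 - X)                 # residual sugar
    P = Yps * S0 * X                   # ethanol formed
    return k * S / (Ks + S) * max(1.0 - P / Pm, 1e-9)

X_target = 0.95
# Plug flow (no backmixing): V = F*S0 * integral_0^X dX / r(X)
V_pfr = F * S0 * quad(lambda X: 1.0 / rate(X), 0, X_target)[0]
# Single CSTR (complete backmixing) operates entirely at the slow, final rate:
V_cstr = F * S0 * X_target / rate(X_target)
print(f"PFR volume  : {V_pfr:7.1f} L")
print(f"CSTR volume : {V_cstr:7.1f} L")   # much larger when rate falls with conversion
```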
Adapted all-numerical correlator for face recognition applications
NASA Astrophysics Data System (ADS)
Elbouz, M.; Bouzidi, F.; Alfalou, A.; Brosseau, C.; Leonard, I.; Benkelfat, B.-E.
2013-03-01
In this study, we suggest and validate an all-numerical implementation of a VanderLugt correlator which is optimized for face recognition applications. The main goal of this implementation is to take advantage of the benefits of correlation methods (detection, localization, and identification of a target object within a scene) while exploiting the reconfigurability of numerical approaches. This technique requires a numerical implementation of the optical Fourier transform, and we pay special attention to adapting the correlation filter to this numerical implementation. A further goal of this work is to reduce the size of the filter in order to decrease the memory space required for real-time applications. To fulfil this requirement, we code the reference images with 8 bits and study the effect of this coding on the performance of several composite filters (phase-only filter, binary phase-only filter). The resulting saturation effect degrades the correlator's decision performance when filters contain up to nine references. We therefore propose an optimization based on a segmented composite filter. Tests with different faces demonstrate that this approach significantly reduces the above-mentioned saturation effect while minimizing the size of the learning database.
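A minimal sketch of numerical correlation with a phase-only filter, using a synthetic random patch in place of a face reference (the 64x64 patch size and noise level are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

def pof_correlate(scene, reference):
    """Correlate a scene with a phase-only filter (POF) built from a reference.

    POF: H = conj(R) / |R| keeps only the spectral phase of the reference,
    which sharpens the correlation peak compared with a classical matched filter.
    """
    R = np.fft.fft2(reference, s=scene.shape)
    H = np.conj(R) / np.maximum(np.abs(R), 1e-12)
    C = np.fft.ifft2(np.fft.fft2(scene) * H)
    return np.abs(C)

# Synthetic 64x64 'reference' embedded in a noisy 256x256 scene
ref = rng.random((64, 64))
scene = 0.2 * rng.random((256, 256))
scene[100:164, 50:114] += ref
corr = pof_correlate(scene, ref)
peak = np.unravel_index(corr.argmax(), corr.shape)
print("detected top-left corner:", peak)   # expect (100, 50)
```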
Sparse feature learning for instrument identification: Effects of sampling and pooling methods.
Han, Yoonchang; Lee, Subin; Nam, Juhan; Lee, Kyogu
2016-05-01
Feature learning for music applications has recently received considerable attention from many researchers. This paper reports on a sparse feature learning algorithm for musical instrument identification and, in particular, focuses on the effects of the frame sampling techniques for dictionary learning and the pooling methods for feature aggregation. To this end, two frame sampling techniques are examined: fixed and proportional random sampling. Furthermore, the effect of using onset frames was analyzed for both proposed sampling methods. Regarding summarization of the feature activations, a standard-deviation pooling method is used and compared with the commonly used max- and average-pooling techniques. Using more than 47,000 recordings of 24 instruments from various performers, playing styles, and dynamics, a number of tuning parameters are examined, including the analysis frame size, the dictionary size, and the type of frequency scaling, as well as the different sampling and pooling methods. The results show that the combination of proportional sampling and standard-deviation pooling achieves the best overall performance of 95.62%, while the optimal parameter set varies among the instrument classes.
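A minimal sketch of the sampling-then-pooling step, with random activations standing in for learned sparse features (array sizes and the 20% sampling rate are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

# Sparse feature activations for one recording: (n_frames, dictionary_size)
activations = np.abs(rng.standard_normal((500, 128))) * (rng.random((500, 128)) < 0.1)

def pool(acts, method):
    """Aggregate frame-level activations into one clip-level feature vector."""
    if method == "max":
        return acts.max(axis=0)
    if method == "average":
        return acts.mean(axis=0)
    if method == "std":                  # standard-deviation pooling
        return acts.std(axis=0)
    raise ValueError(method)

# Proportional random sampling of frames before pooling (e.g., keep 20%)
idx = rng.choice(len(activations), size=int(0.2 * len(activations)), replace=False)
sampled = activations[idx]

for m in ("max", "average", "std"):
    print(m, pool(sampled, m)[:4])
```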
NASA Astrophysics Data System (ADS)
Wawrzynczyk, Dominika; Szeremeta, Janusz; Samoc, Marek; Nyk, Marcin
2015-11-01
Spectrally resolved nonlinear optical properties of colloidal InP@ZnS core-shell quantum dots of various sizes were investigated with the Z-scan technique and the two-photon fluorescence excitation method using a femtosecond laser system tunable in the range from 750 nm to 1600 nm. In principle, both techniques should provide comparable results and can be used interchangeably for determining nonlinear optical absorption parameters, finding maximal values of the cross sections, and optimizing them. We observed slight differences between the two-photon absorption cross sections measured by the two techniques and attributed them to the presence of non-radiative paths of absorption or relaxation. The largest two-photon absorption cross section σ2, equal to 2200 GM, was found for the 4.3 nm InP@ZnS quantum dots, while the two-photon excitation action cross section σ2Φ was found to be 682 GM at 880 nm. The properties of these cadmium-free colloidal quantum dots can be potentially useful for nonlinear bioimaging.
Phase-contrast x-ray computed tomography for biological imaging
NASA Astrophysics Data System (ADS)
Momose, Atsushi; Takeda, Tohoru; Itai, Yuji
1997-10-01
We have shown so far that 3D structures in biological soft tissues such as cancer can be revealed by phase-contrast x-ray computed tomography using an x-ray interferometer. As a next step, we aim at applications of this technique to in vivo observation, including radiographic applications. For this purpose, a field of view larger than a few centimeters is desired. Therefore, a larger x-ray interferometer should be used with x-rays of higher energy. We have evaluated the optimal x-ray energy from the standpoint of dose as a function of sample size. Moreover, the spatial resolution required of the image sensor is discussed as a function of x-ray energy and sample size, based on a requirement in the analysis of interference fringes.
Fluorescence hyperspectral imaging technique for foreign substance detection on fresh-cut lettuce.
Mo, Changyeun; Kim, Giyoung; Kim, Moon S; Lim, Jongguk; Cho, Hyunjeong; Barnaby, Jinyoung Yang; Cho, Byoung-Kwan
2017-09-01
Non-destructive methods based on fluorescence hyperspectral imaging (HSI) techniques were developed to detect worms on fresh-cut lettuce. The optimal wavebands for detecting the worms were investigated using one-way ANOVA and correlation analyses. The worm detection imaging algorithm, RSI-I(492-626)/492, provided a prediction accuracy of 99.0%. The fluorescence HSI techniques indicated that spectral images with a pixel size of 1 × 1 mm had the best classification accuracy for worms. The overall results demonstrate that fluorescence HSI techniques have the potential to detect worms on fresh-cut lettuce. In the future, we will focus on developing a multi-spectral imaging system to detect foreign substances such as worms, slugs and earthworms on fresh-cut lettuce. © 2017 Society of Chemical Industry.
High voltage spark carbon fiber detection system
NASA Technical Reports Server (NTRS)
Yang, L. C.
1980-01-01
The pulse discharge technique was used to determine the length and density of carbon fibers released from fiber composite materials during a fire or aircraft accident. Specifications are given for the system which uses the ability of a carbon fiber to initiate spark discharge across a high voltage biased grid to achieve accurate counting and sizing of fibers. The design of the system was optimized, and prototype hardware proved satisfactory in laboratory and field tests.
Low Cost High Performance Phased Array Antennas with Beam Steering Capabilities
2009-12-01
...characteristics of BSTO, the RF vacuum sputtering technique has been used and the effects of sputtering parameters such as substrate... were investigated. By varying the sputtering parameters, various sets of BSTO films have been deposited on different substrates and various sizes of CPW phase shifters have been fabricated... measurement of phase shifters; optimization of the sputtering parameters for BSTO deposition (first and second BSTO film samples)...
Optimal focal-plane restoration
NASA Technical Reports Server (NTRS)
Reichenbach, Stephen E.; Park, Stephen K.
1989-01-01
Image restoration can be implemented efficiently by calculating the convolution of the digital image and a small kernel during image acquisition. Processing the image in the focal-plane in this way requires less computation than traditional Fourier-transform-based techniques such as the Wiener filter and constrained least-squares filter. Here, the values of the convolution kernel that yield the restoration with minimum expected mean-square error are determined using a frequency analysis of the end-to-end imaging system. This development accounts for constraints on the size and shape of the spatial kernel and all the components of the imaging system. Simulation results indicate the technique is effective and efficient.
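A minimal sketch of the frequency-domain reasoning, assuming a Gaussian blur for the system transfer function and a power-law image spectrum (both invented); it builds the unconstrained Wiener filter and then truncates its impulse response to a 5x5 kernel, a crude stand-in for the paper's directly constrained minimum-MSE derivation:

```python
import numpy as np

N = 64
u = np.fft.fftfreq(N)
U, V = np.meshgrid(u, u, indexing="ij")

H = np.exp(-8.0 * (U**2 + V**2))         # assumed end-to-end transfer function (blur)
S = 1.0 / (1e-3 + U**2 + V**2)           # assumed scene power spectrum
Npow = 1e-2                              # assumed white-noise power

W = np.conj(H) * S / (np.abs(H)**2 * S + Npow)    # Wiener restoration filter
w = np.real(np.fft.ifft2(W))                      # its spatial impulse response

# Keep only a 5x5 neighbourhood around the origin -> small focal-plane kernel
k = 2
rows = np.r_[0:k + 1, N - k:N]                    # wrap-around indices near origin
kernel = np.zeros_like(w)
kernel[np.ix_(rows, rows)] = w[np.ix_(rows, rows)]
print("energy captured by 5x5 kernel:", np.sum(kernel**2) / np.sum(w**2))
```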
Shah, Viral H; Jobanputra, Amee
2018-01-01
The present investigation focused on developing, optimizing, and evaluating a novel liposome-loaded nail lacquer formulation for increasing the transungual permeation flux of terbinafine HCl for efficient treatment of onychomycosis. A three-factor, three-level Box-Behnken design was employed for optimizing the process and formulation parameters of the liposomal formulation. Liposomes were formulated by the thin film hydration technique followed by sonication. Drug-to-lipid ratio, sonication amplitude, and sonication time were screened as independent variables, while particle size, PDI, entrapment efficiency, and zeta potential were selected as quality attributes for the liposomal formulation. Multiple regression analysis was employed to construct a second-order quadratic polynomial equation and contour plots. A design space (overlay plot) was generated to optimize the liposomal system, with software-suggested levels of the independent variables that could be transformed to the desired responses. The optimized liposome formulation was characterized and dispersed in nail lacquer, which was further evaluated for different parameters. The results showed that the optimized terbinafine HCl-loaded liposome formulation exhibited a particle size of 182 nm, PDI of 0.175, zeta potential of -26.8 mV, and entrapment efficiency of 80%. The transungual permeability flux of terbinafine HCl through the liposome-dispersed nail lacquer formulation was observed to be significantly higher in comparison to nail lacquer with a permeation enhancer. The developed formulation was also observed to be as efficient as pure drug dispersion in its antifungal activity. Thus, it was concluded that the developed formulation can serve as an efficient tool for enhancing the permeability of terbinafine HCl across the human nail plate, thereby improving its therapeutic efficiency.
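A minimal sketch of how a three-factor Box-Behnken design is constructed and fitted with the usual second-order response-surface model; the response values below are synthetic stand-ins, not the study's data:

```python
import numpy as np
from itertools import combinations

def box_behnken(n_center=3):
    """Three-factor Box-Behnken design in coded units (-1, 0, +1):
    all (+/-1, +/-1) pairs on each factor pair, third factor at 0, plus centers."""
    runs = []
    for i, j in combinations(range(3), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                r = [0, 0, 0]; r[i], r[j] = a, b
                runs.append(r)
    runs += [[0, 0, 0]] * n_center
    return np.array(runs, float)

X = box_behnken()                 # 15 runs x 3 factors
# Pretend these are measured particle sizes for the 15 runs (synthetic demo)
rng = np.random.default_rng(4)
true = lambda x: 180 + 25*x[:, 0] - 10*x[:, 1] + 8*x[:, 0]*x[:, 1] + 12*x[:, 2]**2
y = true(X) + rng.normal(0, 2, len(X))

def design_matrix(X):
    """Full quadratic model: intercept, linear, two-way interaction, square terms."""
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(3)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(3), 2)]
    cols += [X[:, i]**2 for i in range(3)]
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
print("fitted coefficients:", np.round(coef, 1))
```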
Tang, Pei Fang
2011-01-01
Stroke is a leading cause of long-term disability. Impairments resulting from stroke lead to persistent difficulties with walking and subsequently, improved walking ability is one of the highest priorities for people living with a stroke. In addition, walking ability has important health implications in providing protective effects against secondary complications common after a stroke such as heart disease or osteoporosis. This paper systematically reviews common gait training strategies (neurodevelopmental techniques, muscle strengthening, treadmill training, intensive mobility exercises) to improve walking ability. The results (descriptive summaries as well as pooled effect sizes) from randomized controlled trials are presented and implications for optimal gait training strategies are discussed. Novel and emerging gait training strategies are highlighted and research directions proposed to enable the optimal recovery and maintenance of walking ability. PMID:17939776
NASA Astrophysics Data System (ADS)
Hashimoto, Hiroyuki; Takaguchi, Yusuke; Nakamura, Shizuka
Instability of the calculation process and growth in calculation time caused by the increasing size of continuous optimization problems remain the major issues to be solved before the technique can be applied to practical industrial systems. This paper proposes an enhanced quadratic programming algorithm based on the interior point method, aimed mainly at improving calculation stability. The proposed method has a dynamic estimation mechanism for active constraints on variables, which fixes variables that approach their upper/lower limits and afterwards releases the fixed ones as needed during the optimization process. It can be considered an algorithm-level integration of the solution strategy of the active-set method into the interior point method framework. We describe numerical results on the commonly used benchmark problems called “CUTEr” to show the effectiveness of the proposed method. Furthermore, test results on large-sized ELD problems (Economic Load Dispatch problems in electric power supply scheduling) are also described as a practical industrial application.
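The fix-and-release idea can be illustrated with a much simpler box-constrained QP solver; the projected-gradient sketch below (with a random positive-definite Q) is only a stand-in for the paper's interior-point/active-set hybrid, not a reimplementation of it:

```python
import numpy as np

def box_qp_projected_gradient(Q, c, lo, hi, iters=500):
    """Minimize 0.5 x^T Q x + c^T x subject to lo <= x <= hi.

    Variables that hit a bound are 'fixed' by the projection and are
    automatically 'released' on a later iteration if the gradient starts
    pointing back into the box -- a crude analogue of dynamic active-set
    estimation."""
    L = np.linalg.eigvalsh(Q).max()        # Lipschitz constant of the gradient
    x = np.clip(np.zeros_like(c), lo, hi)
    for _ in range(iters):
        g = Q @ x + c
        x = np.clip(x - g / L, lo, hi)     # gradient step, then projection
    return x

rng = np.random.default_rng(5)
n = 8
A = rng.standard_normal((n, n))
Q = A @ A.T + n * np.eye(n)                # symmetric positive definite
c = rng.standard_normal(n)
x = box_qp_projected_gradient(Q, c, lo=np.zeros(n), hi=np.ones(n))
print("solution:", np.round(x, 3))
print("variables active at a bound:", int(np.sum((x < 1e-9) | (x > 1 - 1e-9))))
```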
Effects of hierarchical structures and insulating liquid media on adhesion
NASA Astrophysics Data System (ADS)
Yang, Weixu; Wang, Xiaoli; Li, Hanqing; Song, Xintao
2017-11-01
The effects of hierarchical structures and insulating liquid media on adhesion are investigated through a numerical adhesive contact model established in this paper, in which hierarchical structures are considered by introducing the height distribution into the surface gap equation, and media are taken into account through the Hamaker constant in the Lifshitz-Hamaker approach. Computational methods such as the inexact Newton method, the biconjugate gradient stabilized (Bi-CGSTAB) method, and the fast Fourier transform (FFT) technique are employed to obtain the adhesive force. It is shown that hierarchically structured surfaces exhibit excellent anti-adhesive properties compared with flat, micro-, or nano-structured surfaces. The adhesion force depends more on the sizes of nanostructures than on those of microstructures, and the optimal ranges of nanostructure pitch and maximum height for small adhesion force are presented. Insulating liquid media effectively decrease the adhesive interaction, and 1-bromonaphthalene exhibits the smallest adhesion force among the five selected media. In addition, the effects of hierarchical structures with optimal sizes on reducing adhesion are more pronounced than those of the selected insulating liquid media.
Dynamic resource allocation in conservation planning
Golovin, D.; Krause, A.; Gardner, B.; Converse, S.J.; Morey, S.
2011-01-01
Consider the problem of protecting endangered species by selecting patches of land to be used for conservation purposes. Typically, the availability of patches changes over time, and recommendations must be made dynamically. This is a challenging prototypical example of a sequential optimization problem under uncertainty in computational sustainability. Existing techniques do not scale to problems of realistic size. In this paper, we develop an efficient algorithm for adaptively making recommendations for dynamic conservation planning, and prove that it obtains near-optimal performance. We further evaluate our approach on a detailed reserve design case study of conservation planning for three rare species in the Pacific Northwest of the United States. Copyright © 2011, Association for the Advancement of Artificial Intelligence. All rights reserved.
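A minimal sketch of the static version of this selection problem, assuming a synthetic landscape and a simple species-coverage objective; the cost-benefit greedy rule shown is a standard heuristic for such coverage problems, not the paper's adaptive near-optimal algorithm:

```python
import numpy as np

rng = np.random.default_rng(6)

# presence[i, s] = 1 if patch i supports species s (synthetic demo landscape)
n_patches, n_species = 40, 3
presence = (rng.random((n_patches, n_species)) < 0.15).astype(float)
cost = rng.uniform(1, 4, n_patches)
budget = 12.0

def coverage(selected):
    """Number of species covered by at least one selected patch."""
    if not selected:
        return 0.0
    return np.minimum(presence[list(selected)].sum(axis=0), 1).sum()

# Cost-benefit greedy: repeatedly add the affordable patch with the best
# marginal coverage gain per unit cost.
selected, spent = set(), 0.0
while True:
    base = coverage(selected)
    best, best_ratio = None, 0.0
    for i in set(range(n_patches)) - selected:
        if spent + cost[i] > budget:
            continue
        gain = coverage(selected | {i}) - base
        if gain / cost[i] > best_ratio:
            best, best_ratio = i, gain / cost[i]
    if best is None:
        break
    selected.add(best); spent += cost[best]

print("patches:", sorted(selected), " cost:", round(spent, 2),
      " species covered:", coverage(selected))
```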
Garson, Christopher D; Li, Bing; Acton, Scott T; Hossack, John A
2008-06-01
The active surface technique using gradient vector flow allows semi-automated segmentation of ventricular borders. The accuracy of the algorithm depends on the optimal selection of several key parameters. We investigated the use of conservation of myocardial volume for quantitative assessment of each of these parameters using synthetic and in vivo data. We predicted that for a given set of model parameters, strong conservation of volume would correlate with accurate segmentation. The metric was most useful when applied to the gradient vector field weighting and temporal step-size parameters, but less effective in guiding an optimal choice of the active surface tension and rigidity parameters.
Grid sensitivity capability for large scale structures
NASA Technical Reports Server (NTRS)
Nagendra, Gopal K.; Wallerstein, David V.
1989-01-01
The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.
Sukhbir, S; Yashpal, S; Sandeep, A
2016-09-01
Nefopam hydrochloride (NFH) is a non-opioid centrally acting analgesic drug used to treat chronic conditions such as neuropathic pain. In the current research, sustained-release nefopam hydrochloride-loaded nanospheres (NFH-NS) were successfully synthesized using a binary mixture of Eudragit RL 100 and RS 100 with sorbitan monooleate as surfactant by the quasi-emulsion solvent diffusion technique and optimized by a 3(5) Box-Behnken design to evaluate the effects of process and formulation variables. Fourier transform infrared spectroscopy (FTIR), differential scanning calorimetry (DSC) and X-ray diffraction (XRD) affirmed the absence of drug-polymer incompatibility and confirmed the formation of nanospheres. The desirability function value obtained from Design-Expert software for the optimized formulation was 0.920. The optimized batch of NFH-NS had a mean particle size of 328.36 nm ± 2.23, % entrapment efficiency (% EE) of 84.97 ± 1.23, % process yield of 83.60 ± 1.31 and % drug loading (% DL) of 21.41 ± 0.89. Dynamic light scattering (DLS), zeta potential analysis and scanning electron microscopy (SEM) validated the size, charge and shape of the nanospheres, respectively. An in-vitro drug release study revealed a biphasic release pattern from the optimized nanospheres. The Korsmeyer-Peppas model gave the best kinetic fit, with a release exponent less than 0.45. A chronic constriction injury (CCI) model of the optimized NFH-NS in Wistar rats produced a significant difference in neuropathic pain behavior (p < 0.05) as compared to free NFH over 10 h, indicating sustained action. Long-term and accelerated stability testing of the optimized NFH-NS revealed a degradation rate constant of 1.695 × 10(-4) and a shelf life of 621 days at 25 ± 2 °C/60% ± 5% RH.
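As a quick consistency check on the stability figures, first-order kinetics relate the reported rate constant and shelf life directly (a worked calculation, assuming the rate constant is per day):

```python
import math

# Shelf life (t90) for first-order degradation: time to 10% drug loss.
# t90 = ln(10/9) / k ≈ 0.1054 / k. With the reported k (assumed per day):
k = 1.695e-4                      # degradation rate constant [1/day]
t90 = math.log(10 / 9) / k
print(f"t90 = {t90:.0f} days")    # ≈ 622 days, consistent with the reported 621
```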
TU-H-BRC-05: Stereotactic Radiosurgery Optimized with Orthovoltage Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fagerstrom, J; Culberson, W; Bender, E
2016-06-15
Purpose: To achieve improved stereotactic radiosurgery (SRS) dose distributions using orthovoltage energy fluence modulation with inverse planning optimization techniques. Methods: A pencil beam model was used to calculate dose distributions from the institution’s orthovoltage unit at 250 kVp. Kernels for the model were derived using Monte Carlo methods as well as measurements with radiochromic film. The orthovoltage photon spectra, modulated by varying thicknesses of attenuating material, were approximated using open-source software. A genetic algorithm search heuristic routine was used to optimize added tungsten filtration thicknesses to approach rectangular function dose distributions at depth. Optimizations were performed for depths of 2.5, 5.0, and 7.5 cm, with cone sizes of 8, 10, and 12 mm. Results: Circularly-symmetric tungsten filters were designed based on the results of the optimization, to modulate the orthovoltage beam across the aperture of an SRS cone collimator. For each depth and cone size combination examined, the beam flatness and 80–20% and 90–10% penumbrae were calculated for both standard, open cone-collimated beams as well as for the optimized, filtered beams. For all configurations tested, the modulated beams were able to achieve improved penumbra widths and flatness statistics at depth, with flatness improving between 33 and 52%, and penumbrae improving between 18 and 25% for the modulated beams compared to the unmodulated beams. Conclusion: A methodology has been described that may be used to optimize the spatial distribution of added filtration material in an orthovoltage SRS beam to result in dose distributions at depth with improved flatness and penumbrae compared to standard open cones. This work provides the mathematical foundation for a novel, orthovoltage energy fluence-modulated SRS system.
The provision of clearances accuracy in piston - cylinder mating
NASA Astrophysics Data System (ADS)
Glukhov, V. I.; Shalay, V. V.
2017-08-01
The paper is aimed at increasing the quality of pumping equipment in the oil and gas industry. The main purpose of the study is to stabilize maximum values of productivity and durability of the pumping equipment based on selective assembly of the cylinder-piston kinematic mating according to an optimization criterion. It is shown that the minimum clearance in the piston-cylinder mating is formed by the maximum material dimensions. It is proved that the maximum material dimensions are characterized by their own laws of distribution within the tolerance limits for the diameters of the cylinder bore and the outer cylindrical surface of the piston. Their dispersion zones should therefore be divided into size groups with a group tolerance equal to half the tolerance for the minimum clearance. Techniques for measuring the material dimensions - the smallest cylinder diameter and the largest piston diameter according to the envelope condition - are developed for sorting them into size groups. Reliable control of dimensional precision ensures optimal minimum clearances of the piston-cylinder mating in all the size groups of the pumping equipment, as necessary for increasing the equipment productivity and durability during production, operation and repair.
Design of pressure-sensing diaphragm for MEMS capacitance diaphragm gauge considering size effect
NASA Astrophysics Data System (ADS)
Li, Gang; Li, Detian; Cheng, Yongjun; Sun, Wenjun; Han, Xiaodong; Wang, Chengxiang
2018-03-01
The MEMS capacitance diaphragm gauge with a full range of (1-1000) Pa is considered here owing to its wide application prospects. The design of the pressure-sensing diaphragm is the key to achieving balanced performance for this kind of gauge. The optimization process for a pressure-sensing diaphragm with an island design in a capacitance diaphragm gauge based on MEMS techniques is reported in this work. For micro-components in the micro scale range, mechanical properties are very different from those in the macro scale range, so the size effect should not be ignored. The modified strain gradient elasticity theory, which accounts for the size effect, has been applied to determine the bending rigidity of the pressure-sensing diaphragm, which is then used in the numerical model to calculate the deflection-pressure relation of the diaphragm. From the deflection curves, the capacitance variation can be determined by integrating over the radius of the diaphragm. Finally, the design of the diaphragm has been optimized based on three parameters: sensitivity, linearity and ground capacitance. With this design, a full range of (1-1000) Pa can be achieved while balanced sensitivity, resolution and linearity are maintained.
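A minimal sketch of the deflection-to-capacitance step, assuming a classical clamped circular plate profile, a crude multiplicative stand-in for the strain-gradient stiffening, and invented dimensions (the paper's island design and full strain-gradient model are not reproduced here):

```python
import numpy as np
from scipy.integrate import quad

eps0 = 8.854e-12
a = 1.0e-3           # diaphragm radius [m] (assumed)
t = 10e-6            # thickness [m]
g = 3.0e-6           # electrode gap [m]
E, nu = 170e9, 0.22  # silicon-like elastic constants (assumed)
D = E * t**3 / (12 * (1 - nu**2))   # classical bending rigidity
# Strain gradient elasticity stiffens thin structures; modeled here only as
# a simple multiplicative correction to D (illustrative, not the real theory).
D_eff = 1.15 * D

def capacitance(p, D):
    w0 = p * a**4 / (64 * D)                    # clamped-plate center deflection
    w = lambda r: w0 * (1 - (r / a)**2)**2      # small-deflection profile
    integrand = lambda r: 2 * np.pi * r * eps0 / (g - w(r))
    return quad(integrand, 0, a)[0]             # integrate over the radius

C0 = capacitance(0.0, D_eff)                    # ground capacitance
for p in (1.0, 100.0, 1000.0):                  # pressure [Pa]
    print(f"p = {p:7.1f} Pa: dC = {(capacitance(p, D_eff) - C0)*1e15:8.3f} fF")
```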
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Dongxu, E-mail: dongxu-wang@uiowa.edu; Dirksen, Blake; Hyer, Daniel E.
Purpose: To determine the plan quality of proton spot scanning (SS) radiosurgery as a function of spot size (in-air sigma) in comparison to x-ray radiosurgery for treating peripheral brain lesions. Methods: Single-field optimized (SFO) proton SS plans with sigma ranging from 1 to 8 mm, cone-based x-ray radiosurgery (Cone), and x-ray volumetric modulated arc therapy (VMAT) plans were generated for 11 patients. Plans were evaluated using secondary cancer risk and brain necrosis normal tissue complication probability (NTCP). Results: For all patients, secondary cancer is a negligible risk compared to brain necrosis NTCP. Secondary cancer risk was lower in proton SS plans than in photon plans regardless of spot size (p = 0.001). Brain necrosis NTCP increased monotonically from an average of 2.34/100 (range 0.42/100–4.49/100) to 6.05/100 (range 1.38/100–11.6/100) as sigma increased from 1 to 8 mm, compared to the average of 6.01/100 (range 0.82/100–11.5/100) for Cone and 5.22/100 (range 1.37/100–8.00/100) for VMAT. An in-air sigma less than 4.3 mm was required for proton SS plans to reduce NTCP over photon techniques for the cohort of patients studied with statistical significance (p = 0.0186). Proton SS plans with in-air sigma larger than 7.1 mm had significantly greater brain necrosis NTCP than photon techniques (p = 0.0322). Conclusions: For treating peripheral brain lesions—where proton therapy would be expected to have the greatest depth-dose advantage over photon therapy—the lateral penumbra strongly impacts the SS plan quality relative to photon techniques: proton beamlet sigma at patient surface must be small (<7.1 mm for three-beam single-field optimized SS plans) in order to achieve comparable or smaller brain necrosis NTCP relative to photon radiosurgery techniques. Achieving such small in-air sigma values at low energy (<70 MeV) is a major technological challenge in commercially available proton therapy systems.
Abelha, T F; Phillips, T W; Bannock, J H; Nightingale, A M; Dreiss, C A; Kemal, E; Urbano, L; deMello, J C; Green, M; Dailey, L A
2017-02-02
This study compares the performance of a microfluidic technique and a conventional bulk method to manufacture conjugated polymer nanoparticles (CPNs) embedded within a biodegradable poly(ethylene glycol) methyl ether-block-poly(lactide-co-glycolide) (PEG5K-PLGA55K) matrix. The influence of PEG5K-PLGA55K and the conjugated polymers cyano-substituted poly(p-phenylene vinylene) (CN-PPV) and poly(9,9-dioctylfluorene-2,1,3-benzothiadiazole) (F8BT) on the physicochemical properties of the CPNs was also evaluated. Both techniques enabled CPN production with high end-product yields (∼70-95%). However, while the bulk technique (solvent displacement) under optimal conditions generated small nanoparticles (∼70-100 nm) with similar optical properties (quantum yields ∼35%), the microfluidic approach produced larger CPNs (140-260 nm) with significantly superior quantum yields (49-55%) and tailored emission spectra. CPNs containing CN-PPV showed smaller size distributions and tuneable emission spectra compared to F8BT systems prepared under the same conditions. The presence of PEG5K-PLGA55K did not affect the size or optical properties of the CPNs and provided a neutral net electric charge, as is often required for biomedical applications. The microfluidic flow-based device was successfully used for the continuous preparation of CPNs over a 24-hour period. On the basis of the results presented here, it can be concluded that the microfluidic device used in this study can be used to optimize the production of bright CPNs with tailored properties and good reproducibility.
Energetic constraints, size gradients, and size limits in benthic marine invertebrates.
Sebens, Kenneth P
2002-08-01
Populations of marine benthic organisms occupy habitats with a range of physical and biological characteristics. In the intertidal zone, energetic costs increase with temperature and aerial exposure, and prey intake increases with immersion time, generating size gradients with small individuals often found at upper limits of distribution. Wave action can have similar effects, limiting feeding time or success, although certain species benefit from wave dislodgment of their prey; this also results in gradients of size and morphology. The difference between energy intake and metabolic (and/or behavioral) costs can be used to determine an energetic optimal size for individuals in such populations. Comparisons of the energetic optimal size to the maximum predicted size based on mechanical constraints, and the ensuing mortality schedule, provides a mechanism to study and explain organism size gradients in intertidal and subtidal habitats. For species where the energetic optimal size is well below the maximum size that could persist under a certain set of wave/flow conditions, it is probable that energetic constraints dominate. When the opposite is true, populations of small individuals can dominate habitats with strong dislodgment or damage probability. When the maximum size of individuals is far below either energetic optima or mechanical limits, other sources of mortality (e.g., predation) may favor energy allocation to early reproduction rather than to continued growth. Predictions based on optimal size models have been tested for a variety of intertidal and subtidal invertebrates including sea anemones, corals, and octocorals. This paper provides a review of the optimal size concept, and employs a combination of the optimal energetic size model and life history modeling approach to explore energy allocation to growth or reproduction as the optimal size is approached.
Hazelaar, Colien; van Eijnatten, Maureen; Dahele, Max; Wolff, Jan; Forouzanfar, Tymour; Slotman, Ben; Verbakel, Wilko F A R
2018-01-01
Imaging phantoms are widely used for testing and optimization of imaging devices without the need to expose humans to irradiation. However, commercially available phantoms are commonly manufactured in simple, generic forms and sizes and therefore do not resemble the clinical situation for many patients. Using 3D printing techniques, we created a life-size phantom based on a clinical CT scan of the thorax from a patient with lung cancer. It was assembled from bony structures printed in gypsum, lung structures consisting of airways, blood vessels >1 mm, and outer lung surface, three lung tumors printed in nylon, and soft tissues represented by silicone (poured into a 3D-printed mold). Kilovoltage x-ray and CT images of the phantom closely resemble those of the real patient in terms of size, shapes, and structures. Surface comparison using 3D models obtained from the phantom and the 3D models used for printing showed mean differences <1 mm for all structures. Tensile tests of the materials used for the phantom show that the phantom is able to endure radiation doses over 24,000 Gy. It is feasible to create an anthropomorphic thorax phantom using 3D printing and molding techniques. The phantom closely resembles a real patient in terms of spatial accuracy and is currently being used to evaluate x-ray-based imaging quality and positional verification techniques for radiotherapy. © 2017 American Association of Physicists in Medicine.
Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation
NASA Astrophysics Data System (ADS)
Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah
2018-04-01
The CNC machine is controlled by manipulating cutting parameters that directly influence process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function. Nonetheless, industry still uses traditional techniques to obtain those values, and lack of knowledge of optimization techniques is the main reason this issue persists. Therefore, the simple and easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers easily understand and determine the best optimal parameters for their turning operations. This new system consists of two stages: modelling and optimization. For modelling the input-output and in-process parameters, a hybrid of the Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between the academic world and industry by introducing a simple yet easy-to-implement optimization technique that gives accurate results while remaining fast.
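A minimal sketch of the Extreme Learning Machine half of that hybrid: a random, untrained hidden layer followed by a least-squares output layer. The "cutting data" are synthetic stand-ins and the hyperparameters arbitrary:

```python
import numpy as np

rng = np.random.default_rng(7)

def elm_fit(X, y, n_hidden=50):
    """Extreme Learning Machine: random hidden layer, least-squares output layer."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                     # hidden activations (fixed, untrained)
    beta = np.linalg.pinv(H) @ y               # output weights via pseudo-inverse
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Synthetic stand-in for turning data: predict surface roughness from
# (speed, feed, depth of cut); the relationship below is invented.
X = rng.uniform(0, 1, (200, 3))
y = 0.5 + 2 * X[:, 1]**2 + 0.3 * X[:, 0] * X[:, 2] + rng.normal(0, 0.02, 200)

model = elm_fit(X[:150], y[:150])
pred = elm_predict(X[150:], model)
print("test RMSE:", np.sqrt(np.mean((pred - y[150:])**2)))
```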
Tyagi, Himanshu; Kushwaha, Ajay; Kumar, Anshuman; Aslam, Mohammed
2016-12-01
The synthesis of gold nanoparticles using the citrate reduction process has been revisited. A simplified room-temperature approach to the standard Turkevich synthesis is employed to obtain fairly monodisperse gold nanoparticles. The role of initial pH, alongside the concentration ratio of the reactants, is explored for size control of the Au nanoparticles. The particle size distribution has been investigated using UV-vis spectroscopy and transmission electron microscopy (TEM). At the optimal pH of 5, the gold nanoparticles obtained are highly monodisperse, spherical in shape, and have a narrow size distribution (sharp surface plasmon at 520 nm). For other pH conditions, particles are non-uniform and polydisperse, showing a red-shift in the plasmon peak due to aggregation and a large particle size distribution. The room-temperature approach results in a highly stable "colloidal" suspension of gold nanoparticles; a stability test by absorption spectroscopy indicates no sign of aggregation for a month. The rate of reduction of auric ionic species by citrate ions is determined via UV absorbance studies. The size of nanoparticles under various conditions is then predicted using a theoretical model that incorporates nucleation, growth, and aggregation processes. A faster rate of reduction yields a better size distribution for optimized pH and reactant concentrations. The model involves solving the population balance equation for the continuously evolving particle size distribution by discretization techniques. The particle sizes estimated from the simulations (13 to 25 nm) are close to the experimental ones (10 to 32 nm) and corroborate the similarity of reaction processes at 300 and 373 K (classical Turkevich reaction). Thus, substituting the experimentally measured rate of disappearance of auric ionic species into the theoretical model enables us to capture the unusual experimental observations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Machnes, S.; Institute for Theoretical Physics, University of Ulm, D-89069 Ulm; Sander, U.
2011-08-15
For paving the way to novel applications in quantum simulation, computation, and technology, increasingly large quantum systems have to be steered with high precision. It is a typical task amenable to numerical optimal control to turn the time course of pulses, i.e., piecewise constant control amplitudes, iteratively into an optimized shape. Here, we present a comparative study of optimal-control algorithms for a wide range of finite-dimensional applications. We focus on the most commonly used algorithms: GRAPE methods, which update all controls concurrently, and Krotov-type methods, which do so sequentially. Guidelines for their use are given and open research questions are pointed out. Moreover, we introduce a unifying algorithmic framework, DYNAMO (dynamic optimization platform), designed to provide the quantum-technology community with a convenient MATLAB-based tool set for optimal control. In addition, it gives researchers in optimal-control techniques a framework for benchmarking and comparing newly proposed algorithms with the state of the art. It allows a mix-and-match approach with various types of gradients, update and step-size methods as well as subspace choices. Open-source code including examples is made available at http://qlib.info.
Integrated design optimization research and development in an industrial environment
NASA Astrophysics Data System (ADS)
Kumar, V.; German, Marjorie D.; Lee, S.-J.
1989-04-01
An overview is given of a design optimization project that has been in progress at the GE Research and Development Center for the past few years. The objective of this project is to develop a methodology and a software system for design automation and optimization of structural/mechanical components and systems. The effort focuses on research and development issues and also on optimization applications that can be related to real-life industrial design problems. The overall technical approach is based on integration of numerical optimization techniques, finite element methods, CAE and software engineering, and artificial intelligence/expert systems (AI/ES) concepts. The role of each of these engineering technologies in the development of a unified design methodology is illustrated. A software system DESIGN-OPT has been developed for both size and shape optimization of structural components subjected to static as well as dynamic loadings. By integrating this software with an automatic mesh generator, a geometric modeler and an attribute specification computer code, a software module SHAPE-OPT has been developed for shape optimization. Details of these software packages together with their applications to some 2- and 3-dimensional design problems are described.
Elsayed, Ibrahim; Sayed, Sinar
2017-01-01
Ocular drug delivery systems suffer from rapid drainage, intractable corneal permeation and short dosing intervals. Transcorneal drug permeation could increase drug availability and efficiency in the aqueous humor. The aim of this study was to develop and optimize nanostructured formulations that provide accurate doses, long contact time and enhanced drug permeation. Nanovesicles were designed based on a Box–Behnken model and prepared using the thin film hydration technique. The formed nanodispersions were evaluated by measuring the particle size, polydispersity index, zeta potential, entrapment efficiency and gelation temperature. The obtained desirability values were utilized to develop an optimized nanostructured in situ gel and insert. The optimized formulations were imaged by transmission and scanning electron microscopy. In addition, the rheological characteristics, in vitro drug diffusion, ex vivo and in vivo permeation, and safety of the optimized formulations were investigated. The optimized insert formulation was found to have a relatively lower viscosity and higher diffusion, ex vivo and in vivo permeation, when compared to the optimized in situ gel. The lyophilized nanostructured insert could thus be considered a promising carrier and transporter for drugs across the cornea, with high biocompatibility and effectiveness. PMID:29133980
El-Say, Khalid M; El-Helw, Abdel-Rahim M; Ahmed, Osama A A; Hosny, Khaled M; Ahmed, Tarek A; Kharshoum, Rasha M; Fahmy, Usama A; Alsawahli, Majed
2015-01-01
The purpose was to improve the encapsulation efficiency of cetirizine hydrochloride (CTZ) microspheres, as a model for water-soluble drugs, and to control its release by applying response surface methodology. A 3(3) Box-Behnken design was used to determine the effect of drug/polymer ratio (X1), surfactant concentration (X2) and stirring speed (X3) on the mean particle size (Y1), percentage encapsulation efficiency (Y2) and cumulative percent drug released over 12 h (Y3). The emulsion solvent evaporation (ESE) technique was applied utilizing Eudragit RS100 as the coating polymer and Span 80 as the surfactant. All formulations were evaluated for micromeritic properties and morphologically characterized by scanning electron microscopy (SEM). The relative bioavailability of the optimized microspheres was compared with a CTZ marketed product after oral administration to healthy human volunteers using a double-blind, randomized, cross-over design. The results revealed that the mean particle sizes of the microspheres ranged from 62 to 348 µm and the entrapment efficiency ranged from 36.3% to 70.1%. The optimized CTZ microspheres exhibited a slow and controlled release over 12 h. The pharmacokinetic data of the optimized CTZ microspheres showed a prolonged tmax, decreased Cmax and an AUC0-∞ of 3309 ± 211 ng h/ml, indicating improved relative bioavailability of 169.4% compared with the marketed tablets.
Singh, Samipta; Singh, Mahendra; Tripathi, Chandra Bhushan; Arya, Malti; Saraf, Shubhini A
2016-02-01
Athlete's foot is a fungal infection of the foot, caused by Trichophyton species, that produces a dry, itchy, flaky condition of the skin. In this study, the potential of an ultra-small nanostructured lipid carrier (usNLC)-based topical gel of miconazole nitrate for the treatment of athlete's foot was evaluated. Nanostructured lipid carriers (NLCs) prepared by the melt emulsification and sonication technique were characterized for particle size, drug entrapment, zeta potential and drug release. The optimized usNLC showed a particle size of 53.79 nm, entrapment efficiency of 86.77%, zeta potential of -12.9 mV and polydispersity index (PDI) of 0.27. Drug release studies of the usNLC showed an initial fast release followed by sustained release, with 91.99% of the drug released in 24 h. The optimized usNLCs were incorporated into a carbopol-934 gel and evaluated for pH (6.8), viscosity (36,400 mPa s) and texture. Antifungal activity against Trichophyton mentagrophytes exhibited a wider zone of inhibition, 6.6 ± 1.5 mm for the optimized usNLC3 gel vis-à-vis the marketed gel formulation (3.7 ± 1.2 mm). The hen's egg test-chorioallantoic membrane (HET-CAM) irritation test confirmed the optimized usNLC gel to be non-irritant to the chorioallantoic membrane. Improved dermal delivery of miconazole by the usNLC gel could thus be achieved for treatment of athlete's foot.
Jafarpoor, Mina; Li, Jia; White, Jacob K; Rutkove, Seward B
2013-05-01
Electrical impedance myography (EIM) is a technique for the evaluation of neuromuscular diseases, including amyotrophic lateral sclerosis and muscular dystrophy. In this study, we evaluated how alterations in the size and conductivity of muscle and the thickness of subcutaneous fat impact the EIM data, with the aim of identifying an optimized electrode configuration for EIM measurements. Finite element models were developed for the human upper arm based on anatomic data; material properties of the tissues were obtained from rat measurements and published sources. The developed model matched the frequency-dependent character of the data. Of the three major EIM parameters - resistance, reactance, and phase - the reactance was least susceptible to alterations in subcutaneous fat thickness, regardless of electrode arrangement. For example, a quadrupling of fat thickness resulted in a 375% increase in resistance at 35 kHz but only a 29% reduction in reactance. By further optimizing the electrode configuration, the change in reactance could be reduced to just 0.25%. For a fixed 30 mm distance between the sense electrodes centered between the excitation electrodes, an 80 mm distance between the excitation electrodes was found to provide the best balance, with a less than 1% change in reactance despite a doubling of subcutaneous fat thickness or halving of muscle size. These analyses describe a basic approach for further electrode configuration optimization for EIM.
NASA Astrophysics Data System (ADS)
Venkatesh, C.; Sundara Moorthy, N.; Venkatesan, R.; Aswinprasad, V.
The moving parts of any mechanism and machine are always subjected to significant wear due to friction, and addressing wear problems is of utmost importance. The difficulty of replacing worn-out parts grows when those parts are very precise. Advances in surface engineering ensure minimal surface wear through the introduction of polycrystalline nano-nickel coatings, whose enhanced tribological properties are achieved through refinement of grain size and increased surface hardness. In this study, we focus on optimizing the parameters of pulsed electrodeposition to develop such a coating. Taguchi's method coupled with gray relational analysis was employed, considering pulse frequency, average current density and duty cycle as the chief process parameters, and grain size and hardness as the responses. In total, nine experiments were conducted as per the L9 design of experiments. Additionally, the response graph method was applied to determine the most significant parameter influencing both responses. To improve the degree of validation, a confirmation test was carried out and the predicted gray grade was computed with the optimized parameters. A significant improvement in gray grade was observed for the optimal parameters.
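A minimal sketch of the gray relational calculation applied to an L9 array, using invented grain-size and hardness responses (smaller-is-better and larger-is-better, respectively) and the customary distinguishing coefficient of 0.5:

```python
import numpy as np

# Synthetic L9 responses (illustrative numbers, not the study's data):
# grain size [nm] (smaller is better) and hardness [HV] (larger is better).
grain = np.array([42., 35., 50., 28., 31., 45., 38., 26., 33.])
hard  = np.array([420., 460., 390., 510., 480., 400., 450., 530., 470.])

def normalize(x, larger_better):
    if larger_better:
        return (x - x.min()) / (x.max() - x.min())
    return (x.max() - x) / (x.max() - x.min())

def grey_coeff(z, zeta=0.5):
    """Gray relational coefficient from normalized responses z in [0, 1]."""
    delta = 1.0 - z                          # deviation from the ideal sequence
    return (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

G = np.column_stack([
    grey_coeff(normalize(grain, larger_better=False)),
    grey_coeff(normalize(hard,  larger_better=True)),
])
grade = G.mean(axis=1)                       # gray relational grade per run
print("best L9 run:", grade.argmax() + 1, " grades:", np.round(grade, 3))
```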
MULTI-OBJECTIVE OPTIMIZATION OF MICROSTRUCTURE IN WROUGHT MAGNESIUM ALLOYS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radhakrishnan, Balasubramaniam; Gorti, Sarma B; Simunovic, Srdjan
2013-01-01
The microstructural features that govern the mechanical properties of wrought magnesium alloys include grain size, crystallographic texture, and twinning. Several processes based on shear deformation have been developed that promote grain refinement, weakening of the basal texture, as well as the shift of the peak intensity away from the center of the basal pole figure - features that promote room temperature ductility in Mg alloys. At ORNL, we are currently exploring the concept of introducing nano-twins within sub-micron grains as a possible mechanism for simultaneously improving strength and ductility by exploiting a potential dislocation glide along the twin-matrix interface, a mechanism that was originally proposed for face-centered cubic materials. Specifically, we have developed an integrated modeling and optimization framework in order to identify the combinations of grain size, texture and twin spacing that can maximize strength-ductility combinations. A micromechanical model that relates microstructure to material strength is coupled with a failure model that relates ductility to a critical shear strain and a critical hydrostatic stress. The micromechanical model is combined with an optimization tool based on a genetic algorithm. A multi-objective optimization technique is used to explore the strength-ductility space in a systematic fashion and identify optimum combinations of the microstructural parameters that will simultaneously maximize strength and ductility in the alloy.
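At its core, the multi-objective step retains the non-dominated strength-ductility pairs from the model evaluations; here is a minimal Pareto-filter sketch, where the strength and ductility samples are hypothetical stand-ins for micromechanical model output:

```python
import numpy as np

def pareto_front(points):
    """Return a boolean mask of non-dominated points (both objectives maximized)."""
    n = len(points)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        if not mask[i]:
            continue
        # Point i is dominated if another point is >= in both objectives
        # and strictly > in at least one.
        dominated = np.all(points >= points[i], axis=1) & np.any(points > points[i], axis=1)
        if dominated.any():
            mask[i] = False
    return mask

# Hypothetical (strength MPa, ductility %) evaluations of candidate microstructures
rng = np.random.default_rng(0)
evals = np.column_stack([rng.uniform(150, 350, 50), rng.uniform(2, 25, 50)])
front = evals[pareto_front(evals)]
print(f"{len(front)} non-dominated strength-ductility combinations found")
```

A genetic algorithm then breeds new microstructural parameter sets biased toward this front rather than toward a single weighted objective.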
Gilliam, David S.
2018-01-01
Acropora cervicornis is the most widely used coral species for reef restoration in the greater Caribbean. However, outplanting methodologies (e.g., colony density, size, host genotype, and attachment technique) vary greatly and to date have not been evaluated for optimality across multiple sites. Two experiments were completed during this study. The first evaluated the effects of attachment technique, colony size, and genotype by outplanting 405 A. cervicornis colonies from ten genotypes, four size classes, and three attachment techniques (epoxy, nail and cable tie, or puck) across three sites. Colony survival, health condition, tissue productivity, and growth were assessed across one year for this experiment. The second experiment assessed the effect of colony density by outplanting colonies in plots of one, four, or 25 corals per 4 m2 across four separate sites. Plot survival and condition were evaluated across two years for this experiment in order to better capture the effect of increasing cover. Colonies attached with a nail and cable tie had the highest survival regardless of colony size. Small corals had the lowest survival but the greatest productivity. The majority of colony loss was attributed to missing colonies and was highest for pucks and small epoxied colonies. Disease and predation were observed at all sites but did not affect all genotypes; however, due to the overall low prevalence of either condition, no significant differences were found in any comparison. Low density plots had significantly higher survival and significantly lower prevalence of disease, predation, and missing colonies than high density plots. These results indicate that to increase initial outplant success, colonies of many genotypes should be outplanted to multiple sites using a nail and cable tie, in low densities, and with colonies over 15 cm total linear extension. PMID:29507829
Real-time inverse planning for Gamma Knife radiosurgery.
Wu, Q Jackie; Chankong, Vira; Jitprapaikulsarn, Suradet; Wessels, Barry W; Einstein, Douglas B; Mathayomchan, Boonyanit; Kinsella, Timothy J
2003-11-01
The challenges of real-time Gamma Knife inverse planning are the large number of variables involved and the a priori unknown search space. With limited collimator sizes, shots have to be heavily overlapped to form a smooth prescription isodose line that conforms to the irregular target shape. Such overlaps greatly influence the total number of shots per plan, making pre-determination of the total number of shots impractical. However, this total number of shots usually defines the search space, a pre-requisite for most optimization methods. Since each shot covers only part of the target, a collection of shots at different locations and with various collimator sizes makes up the global dose distribution that conforms to the target. Hence, planning or placing these shots is a combinatorial optimization process that is computationally expensive by nature. We have previously developed a theory of shot placement and optimization based on skeletonization. The real-time inverse planning process, reported in this paper, is an expansion and the clinical implementation of this theory. The complete planning process consists of two steps. The first step is to determine an optimal number of shots, including locations and sizes, and to assign an initial collimator size to each of the shots. The second step is to fine-tune the weights using a linear-programming technique. The objective function is to minimize the total dose to the target boundary (i.e., maximize dose conformity). Results for an ellipsoid test target and ten clinical cases are presented. The clinical cases are also compared with the physician's manual plans. The target coverage is more than 99% for the manual plans and 97% for all the inverse plans. The RTOG PITV conformity indices for the manual plans are between 1.16 and 3.46, compared to 1.36 to 2.4 for the inverse plans. All the inverse plans are generated in less than 2 min, making real-time inverse planning a reality.
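The second step, fine-tuning shot weights by linear programming, can be sketched with scipy.optimize.linprog; the dose matrices and prescription level below are toy stand-ins, not the authors' data:

```python
import numpy as np
from scipy.optimize import linprog

# Toy problem: n_shots pre-placed shots, dose-per-unit-weight matrices for
# boundary points and target points (all values hypothetical).
rng = np.random.default_rng(1)
n_shots, n_boundary, n_target = 8, 40, 60
D_boundary = rng.uniform(0.1, 1.0, (n_boundary, n_shots))
D_target = rng.uniform(0.5, 2.0, (n_target, n_shots))
prescription = 1.0  # normalized prescription dose

# Objective: minimize total boundary dose, sum(D_boundary @ w).
c = D_boundary.sum(axis=0)
# Constraint: every target point receives at least the prescription dose,
# written as -D_target @ w <= -prescription for linprog's A_ub form.
res = linprog(c, A_ub=-D_target, b_ub=-np.full(n_target, prescription),
              bounds=[(0, None)] * n_shots, method="highs")
print("optimal shot weights:", np.round(res.x, 3))
```

Minimizing boundary dose while enforcing target coverage is one way to express the conformity objective described above.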
On the Optimization of Aerospace Plane Ascent Trajectory
NASA Astrophysics Data System (ADS)
Al-Garni, Ahmed; Kassem, Ayman Hamdy
A hybrid heuristic optimization technique based on genetic algorithms and particle swarm optimization has been developed and tested for trajectory optimization problems with multiple constraints and a multi-objective cost function. The technique is used to calculate control settings for two types of ascent trajectory (constant dynamic pressure and minimum-fuel-minimum-heat) for a two-dimensional model of an aerospace plane. A thorough statistical analysis compares the hybrid technique with both basic genetic algorithms and particle swarm optimization with respect to convergence and execution time. Genetic algorithm optimization showed better execution time performance, while particle swarm optimization showed better convergence performance. The hybrid optimization technique, benefiting from both methods, showed robust overall performance, balancing convergence behavior and execution time.
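One way such a GA-PSO hybrid can be structured is standard PSO updates with a GA crossover/mutation step applied to the worst particles each iteration; the following sketch assumes that structure along with invented swarm parameters and a Rosenbrock test function, and is not the authors' implementation:

```python
import numpy as np

def hybrid_ga_pso(f, bounds, n_particles=30, iters=200, seed=0):
    """Minimal hybrid: PSO velocity updates, with GA crossover/mutation
    applied each iteration to the worst quartile of the swarm."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        # GA step: replace the worst quartile with mutated crossovers of two
        # randomly chosen personal bests.
        worst = np.argsort(fx)[-n_particles // 4:]
        for i in worst:
            a, b = pbest[rng.integers(n_particles, size=2)]
            child = np.where(rng.random(dim) < 0.5, a, b)      # uniform crossover
            child += rng.normal(0, 0.05 * (hi - lo), dim)      # Gaussian mutation
            x[i] = np.clip(child, lo, hi)
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Usage: minimize a 2D Rosenbrock function as a stand-in cost
rosen = lambda p: (1 - p[0])**2 + 100 * (p[1] - p[0]**2)**2
best, val = hybrid_ga_pso(rosen, np.array([[-2.0, 2.0], [-2.0, 2.0]]))
print(best, val)
```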
Instrumentation for studying binder burnout in an immobilized plutonium ceramic wasteform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, M; Pugh, D; Herman, C
The Plutonium Immobilization Program produces a ceramic wasteform that utilizes organic binders. Several techniques and instruments were developed to study binder burnout on full size ceramic samples in a production environment. This approach provides a method for developing process parameters on a production scale to optimize throughput, product quality, offgas behavior, and plant emissions. These instruments allow for offgas analysis, large-scale TGA, product quality observation, and thermal modeling. Using these tools, results from lab-scale techniques such as laser dilatometry studies and traditional TGA/DTA analysis can be integrated. Often, the sintering step of a ceramification process is the limiting process step that controls the production throughput. Therefore, optimization of sintering behavior is important for overall process success. Furthermore, the capabilities of this instrumentation allow a better understanding of plant emissions of key gases: volatile organic compounds (VOCs), volatile inorganics including some halide compounds, NOx, SOx, carbon dioxide, and carbon monoxide.
Application of hanging drop technique to optimize human IgG formulations.
Li, Guohua; Kasha, Purna C; Late, Sameer; Banga, Ajay K
2010-01-01
The purpose of this work is to assess the hanging drop technique for screening excipients to develop optimal formulations of human immunoglobulin G (IgG). A microdrop of human IgG and test solution hanging from a cover slide and undergoing vapour diffusion was monitored with a stereomicroscope. Aqueous solutions of IgG at different pH values, salt concentrations and excipient compositions were prepared and characterized. A low concentration of either sodium/potassium phosphate or McIlvaine buffer favoured the solubility of IgG. The addition of sucrose favoured the stability of this antibody, while the addition of NaCl caused more aggregation. Antimicrobial preservatives were also screened, and a complex effect at different buffer conditions was observed. Dynamic light scattering, differential scanning calorimetry and size exclusion chromatography studies were performed to further validate the results. In conclusion, hanging drop is a very easy and effective approach to screening protein formulations in the early stage of formulation development.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aswad, Z.A.R.; Al-Hadad, S.M.S.
1983-03-01
The powerful Rosenbrock search technique, which optimizes both the search directions, using the Gram-Schmidt procedure, and the step size, using the Fibonacci line search method, has been used to optimize the drilling program of an oil well drilled in the Bai-Hassan oil field in Kirkuk, Iraq, using the two-dimensional drilling model of Galle and Woods. This model shows the effect of the two major controllable variables, weight on bit and rotary speed, on the drilling rate, while considering other controllable variables such as the mud properties, hydrostatic pressure, hydraulic design, and bit selection. The effect of tooth dullness on the drilling rate is also considered. Increasing the weight on the drill bit, with a small increase or decrease in rotary speed, resulted in a significant decrease in the drilling cost for most bit runs. It was found that a 48% reduction in this cost and a 97-hour savings in the total drilling time were possible under certain conditions.
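The Fibonacci line search used for the step size is a standard bracketing method; a sketch follows, applied to a toy unimodal cost-per-foot model in which the cost function and weight-on-bit range are hypothetical:

```python
import numpy as np

def fibonacci_search(f, a, b, n=20):
    """Fibonacci line search: shrinks [a, b] to locate the minimizer of a
    unimodal function using n function evaluations."""
    fib = [1, 1]
    while len(fib) < n + 1:
        fib.append(fib[-1] + fib[-2])
    x1 = a + (fib[n - 2] / fib[n]) * (b - a)
    x2 = a + (fib[n - 1] / fib[n]) * (b - a)
    f1, f2 = f(x1), f(x2)
    for k in range(1, n - 1):
        if f1 > f2:                      # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + (fib[n - k - 1] / fib[n - k]) * (b - a)
            f2 = f(x2)
        else:                            # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + (fib[n - k - 2] / fib[n - k]) * (b - a)
            f1 = f(x1)
    return 0.5 * (a + b)

# Usage: optimal weight-on-bit (klb, hypothetical) minimizing cost per foot
cost = lambda w: (w - 42.0)**2 / 50.0 + 3.0   # toy unimodal cost model
print(f"optimal weight on bit ~ {fibonacci_search(cost, 20.0, 60.0):.2f} klb")
```

In the Rosenbrock procedure, each Gram-Schmidt search direction would be minimized along with a one-dimensional search of exactly this kind.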
Modeling target normal sheath acceleration using handoffs between multiple simulations
NASA Astrophysics Data System (ADS)
McMahon, Matthew; Willis, Christopher; Mitchell, Robert; King, Frank; Schumacher, Douglass; Akli, Kramer; Freeman, Richard
2013-10-01
We present a technique to model the target normal sheath acceleration (TNSA) process using full-scale LSP PIC simulations. The technique allows for a realistic laser, a full size target and pre-plasma, and sufficient propagation length for the accelerated ions and electrons. A first simulation using a 2D Cartesian grid models the laser-plasma interaction (LPI) self-consistently and includes field ionization. Electrons accelerated by the laser are imported into a second simulation using a 2D cylindrical grid optimized for the initial TNSA process and incorporating an equation of state. Finally, all of the particles are imported into a third simulation optimized for the propagation of the accelerated ions and utilizing a static field solver for initialization. We also show the use of 3D LPI simulations. Simulation results are compared to recent ion acceleration experiments using the SCARLET laser at The Ohio State University. This work was performed with support from AFOSR under contract # FA9550-12-1-0341, DARPA, and allocations of computing time from the Ohio Supercomputing Center.
Hao, Jifu; Fang, Xinsheng; Zhou, Yanfang; Wang, Jianzhu; Guo, Fengguang; Li, Fei; Peng, Xinsheng
2011-01-01
The purpose of the present study was to optimize a solid lipid nanoparticle (SLN) of chloramphenicol by investigating the relationship between design factors and experimental data using response surface methodology. A Box-Behnken design was constructed using solid lipid (X(1)), surfactant (X(2)), and drug/lipid ratio (X(3)) level as independent factors. SLN was successfully prepared by a modified method of melt-emulsion ultrasonication and low temperature-solidification technique using glyceryl monostearate as the solid lipid, and poloxamer 188 as the surfactant. The dependent variables were entrapment efficiency (EE), drug loading (DL), and turbidity. Properties of SLN such as the morphology, particle size, zeta potential, EE, DL, and drug release behavior were investigated, respectively. As a result, the nanoparticle designed showed nearly spherical particles with a mean particle size of 248 nm. The polydispersity index of particle size was 0.277 ± 0.058 and zeta potential was -8.74 mV. The EE (%) and DL (%) could reach up to 83.29% ± 1.23% and 10.11% ± 2.02%, respectively. In vitro release studies showed a burst release at the initial stage followed by a prolonged release of chloramphenicol from SLN up to 48 hours. The release kinetics of the optimized formulation best fitted the Peppas-Korsmeyer model. These results indicated that the chloramphenicol-loaded SLN could potentially be exploited as a delivery system with improved drug entrapment efficiency and controlled drug release.
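For concreteness, here is a sketch of how a three-factor Box-Behnken design matrix is constructed and a full quadratic response surface fitted by least squares; the 15 entrapment-efficiency responses are invented for illustration, not the study's data:

```python
import numpy as np
from itertools import combinations

def box_behnken(k=3):
    """Coded Box-Behnken design for k factors: +/-1 two-factor combinations
    with the remaining factors at 0, plus center points."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * k
                row[i], row[j] = a, b
                runs.append(row)
    runs += [[0] * k] * 3                     # three center replicates
    return np.array(runs, dtype=float)

X = box_behnken(3)                            # X1 lipid, X2 surfactant, X3 drug/lipid
# Hypothetical entrapment-efficiency responses for the 15 runs (%)
y = np.array([78, 81, 74, 83, 70, 85, 72, 80, 76, 82, 75, 84, 86, 85, 87], float)

# Fit the full quadratic response surface: b0 + linear + interactions + squares
terms = [np.ones(len(X)), X[:, 0], X[:, 1], X[:, 2],
         X[:, 0] * X[:, 1], X[:, 0] * X[:, 2], X[:, 1] * X[:, 2],
         X[:, 0]**2, X[:, 1]**2, X[:, 2]**2]
A = np.column_stack(terms)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("quadratic model coefficients:", np.round(coef, 2))
```

The fitted surface can then be searched (or inspected on contour plots) for the factor settings that maximize EE and DL while keeping turbidity acceptable.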
A Model for Designing Adaptive Laboratory Evolution Experiments.
LaCroix, Ryan A; Palsson, Bernhard O; Feist, Adam M
2017-04-15
The occurrence of mutations is a cornerstone of the evolutionary theory of adaptation, capitalizing on the rare chance that a mutation confers a fitness benefit. Natural selection is increasingly being leveraged in laboratory settings for industrial and basic science applications. Despite increasing deployment, there are no standardized procedures available for designing and performing adaptive laboratory evolution (ALE) experiments. Thus, there is a need to optimize the experimental design, specifically for determining when to consider an experiment complete and for balancing outcomes with available resources (i.e., laboratory supplies, personnel, and time). To design and to better understand ALE experiments, a simulator, ALEsim, was developed, validated, and applied to the optimization of ALE experiments. The effects of various passage sizes were experimentally determined and subsequently evaluated with ALEsim to explain differences in experimental outcomes. Furthermore, a beneficial mutation rate of 10^-6.9 to 10^-8.4 mutations per cell division was derived. A retrospective analysis of ALE experiments revealed that passage sizes typically employed in serial passage batch culture ALE experiments led to inefficient production and fixation of beneficial mutations. ALEsim and the results described here will aid in the design of ALE experiments to fit the exact needs of a project while taking into account the resources required, and will lower the barriers to entry for this experimental technique. IMPORTANCE ALE is a widely used scientific technique to increase scientific understanding, as well as to create industrially relevant organisms. The manner in which ALE experiments are conducted is highly manual and uniform, with little optimization for efficiency. Such inefficiencies result in suboptimal experiments that can take multiple months to complete. With the availability of automation and computer simulations, we can now perform these experiments in an optimized fashion and can design experiments to generate greater fitness in an accelerated time frame, thereby pushing the limits of what adaptive laboratory evolution can achieve. Copyright © 2017 American Society for Microbiology.
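A toy serial-passage simulator in the spirit of ALEsim (not the published code) illustrates why passage size matters: each passage injects mutants at an assumed beneficial mutation rate, applies deterministic selection during regrowth, and samples the bottleneck binomially; all parameter values are illustrative.

```python
import numpy as np

def simulate_ale(passages=300, n_final=1e9, passage_frac=1e-3,
                 mu=10**-7.5, s=0.1, seed=0):
    """Toy serial-passage ALE: track the frequency of a beneficial mutant
    (fitness advantage s) arising at rate mu per cell division, with a
    bottleneck of passage_frac at every transfer."""
    rng = np.random.default_rng(seed)
    freq = 0.0
    for p in range(passages):
        n0 = n_final * passage_frac
        divisions = n_final - n0                # cell divisions per growth cycle
        # New mutants arise during growth (deterministic expectation here)
        freq += mu * divisions / n_final
        # Deterministic selection over ~log2(1/passage_frac) generations
        gens = np.log2(1.0 / passage_frac)
        freq = freq * (1 + s)**gens / (freq * (1 + s)**gens + (1 - freq))
        # Binomial sampling at the bottleneck can lose rare mutants entirely
        freq = rng.binomial(int(n0), min(freq, 1.0)) / n0
        if freq > 0.99:
            return p + 1                        # passage at which mutant fixed
    return None

print("mutant fixed at passage:", simulate_ale())
```

Shrinking passage_frac in this sketch makes the bottleneck more likely to discard newly arisen mutants, reproducing qualitatively the inefficiency the retrospective analysis describes.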
Safari, Hanieh; Adili, Reheman; Holinstat, Michael; Eniola-Adefeso, Omolola
2018-05-15
Though the emulsion solvent evaporation (ESE) technique has been previously modified to produce rod-shaped particles, it cannot generate small-sized rods for drug delivery applications due to the inherent coupling of, and contradicting requirements for, the formation versus stretching of droplets. Separating droplet formation from the stretching step should enable the creation of submicron droplets that are then stretched in a second stage by manipulation of the system viscosity along with the surface-active molecule and oil-phase solvent. A two-step ESE protocol is evaluated in which oil droplets are formed at low viscosity, followed by a step increase in the aqueous phase viscosity to stretch the droplets. Different surface-active molecules and oil phase solvents were evaluated to optimize the yield of biodegradable PLGA rods. Rods were assessed for drug loading via an imaging agent and for vascular-targeted delivery via blood flow adhesion assays. The two-step ESE method generated PLGA rods with major and minor axes down to 3.2 µm and 700 nm, respectively. Chloroform and sodium metaphosphate were the optimal solvent and surface-active molecule, respectively, for submicron rod fabrication. Rods demonstrated faster release of Nile Red compared to spheres and successfully targeted an inflamed endothelium under shear flow in vitro and in vivo. Copyright © 2018 Elsevier Inc. All rights reserved.
2D Inviscid and Viscous Inverse Design Using Continuous Adjoint and Lax-Wendroff Formulation
NASA Astrophysics Data System (ADS)
Proctor, Camron Lisle
The continuous adjoint (CA) technique for optimization and/or inverse design of aerodynamic components has seen nearly 30 years of documented success in academia. The benefits of using CA versus a direct sensitivity analysis are shown repeatedly in the literature. However, the use of CA in industry is relatively unheard of. The sparseness of industry contributions to the field may be attributed to the tediousness of the derivation and/or to the difficulties in implementation due to the lack of well-documented adjoint numerical methods. The focus of this work has been to thoroughly document the techniques required to build a two-dimensional CA inverse-design tool. To this end, this work begins with a short background on computational fluid dynamics (CFD) and the use of optimization tools in conjunction with CFD tools to solve aerodynamic optimization problems. A thorough derivation of the continuous adjoint equations and the accompanying gradient calculations for inviscid and viscous constraining equations follows the introduction. Next, the numerical techniques used for solving the partial differential equations (PDEs) governing the flow equations and the adjoint equations are described. Numerical techniques for the supplementary equations are discussed briefly. Subsequently, a verification of the efficacy of the inverse-design tool for the inviscid adjoint equations, as well as possible numerical implementation pitfalls, are discussed. The NACA0012 airfoil is used as the initial airfoil with the NACA16009 surface pressure distribution as the design target, and vice versa. Using a Savitzky-Golay gradient filter, convergence (defined as a cost function < 1E-5) is reached in approximately 220 design iterations using 121 design variables. The inviscid inverse-design results are followed by a discussion of the viscous inverse-design results and the techniques used to further the convergence of the optimizer. Limiting the step size in the line-search optimization is shown to slightly decrease the final cost function at significant computational cost. A gradient damping technique is presented and shown to increase the convergence rate of the optimization in viscous problems at a negligible increase in computational cost, but is insufficient to converge the solution. Systematically including adjacent surface vertices in the perturbation of a design variable, itself a surface vertex, is shown to affect the convergence capability of the viscous optimizer. Finally, a comparison of using inviscid adjoint equations, as opposed to viscous adjoint equations, on viscous flow is presented, and the inviscid adjoint paired with viscous flow is found to reduce the cost function further than the viscous adjoint for the presented problem.
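Smoothing a noisy surface gradient with a Savitzky-Golay filter, as used above before the line search, is a one-liner with SciPy; the gradient signal, window length, and polynomial order below are assumptions for illustration:

```python
import numpy as np
from scipy.signal import savgol_filter

# Hypothetical raw adjoint gradient sampled at 121 surface design variables
rng = np.random.default_rng(2)
s = np.linspace(0.0, 1.0, 121)                  # normalized arc length
raw_gradient = np.sin(2 * np.pi * s) + 0.15 * rng.normal(size=s.size)

# Smooth the noisy surface gradient before the line search: an 11-point
# window and a cubic polynomial are typical, tunable choices.
smooth_gradient = savgol_filter(raw_gradient, window_length=11, polyorder=3)
print("max smoothing correction:", np.abs(smooth_gradient - raw_gradient).max())
```

Filtering suppresses point-to-point oscillations in the shape gradient that would otherwise produce a ragged, non-smooth airfoil surface after each design update.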
Ofei, K T; Holst, M; Rasmussen, H H; Mikkelsen, B E
2015-08-01
The trolley meal system allows hospital patients to select food items and portion sizes directly from the food trolley. The nutritional status of the patient may be compromised if the portions selected do not meet recommended intakes for energy, protein and micronutrients. The aim of this study was to investigate: (1) the portion sizes served and consumed and the plate waste generated, (2) the extent to which the size of meal portions served contributes to daily recommended intakes for energy and protein, (3) the predictive effect of the served portion sizes on plate waste in patients screened for nutritional risk by NRS-2002, and (4) the applicability of the dietary intake monitoring system (DIMS) as a technique to monitor plate waste. A prospective observational cohort study was conducted in two hospital wards over five weekdays. The DIMS was used to collect paired before- and after-meal consumption photos and to measure the weight of plate content. The proportion of energy and protein consumed by both groups at each meal session could contribute up to 15% of the total daily recommended intake. A linear mixed model identified a positive relationship between meal portion size and plate waste (P = 0.002) and increased food waste in patients at nutritional risk during supper (P = 0.001). Meal portion size was associated with the level of plate waste produced. Being at nutritional risk further increased the extent of waste, regardless of the portion size served at supper. The use of DIMS as an innovative technique might be a promising way to monitor plate waste for optimizing meal portion size servings and minimizing food waste. Copyright © 2015 Elsevier Ltd. All rights reserved.
Realm of Thermoalkaline Lipases in Bioprocess Commodities
2018-01-01
For decades, microbial lipases have been notable biocatalysts, efficiently catalyzing various processes in many important industries. Biocatalysts are less corrosive to industrial equipment, and due to their substrate specificity and regioselectivity they produce less harmful waste, which promotes environmental sustainability. At present, thermostable and alkaline-tolerant lipases have gained enormous interest as biocatalysts due to their stability and robustness under high temperature and alkaline operating environments. Several characteristics of thermostable and alkaline-tolerant lipases are discussed. Their molecular weights and resistance towards a range of temperatures, pH, metals, and surfactants are compared. Their industrial applications in biodiesel, biodetergents, biodegreasing, and other types of bioconversions are also described. This review also discusses advances in the fermentation process for thermostable and alkaline-tolerant lipase production, focusing on process development in microorganism selection and strain improvement, culture medium optimization via several optimization techniques (i.e., one-factor-at-a-time, response surface methodology, and artificial neural networks), and other fermentation parameters (i.e., inoculum size, temperature, pH, agitation rate, dissolved oxygen tension (DOT), and aeration rate). The two common fermentation techniques for thermostable and alkaline-tolerant lipase production, solid-state and submerged fermentation, are compared and discussed. Recent optimization approaches using evolutionary algorithms (i.e., Genetic Algorithm, Differential Evolution, and Particle Swarm Optimization) are also highlighted in this article. PMID:29666707
Double emulsion solvent evaporation techniques used for drug encapsulation.
Iqbal, Muhammad; Zafar, Nadiah; Fessi, Hatem; Elaissari, Abdelhamid
2015-12-30
Double emulsions are complex systems, also called "emulsions of emulsions", in which the droplets of the dispersed phase contain one or more types of smaller dispersed droplets themselves. Double emulsions have the potential for encapsulation of both hydrophobic as well as hydrophilic drugs, cosmetics, foods and other high value products. Techniques based on double emulsions are commonly used for the encapsulation of hydrophilic molecules, which suffer from low encapsulation efficiency because of rapid drug partitioning into the external aqueous phase when using single emulsions. The main issue when using double emulsions is their production in a well-controlled manner, with homogeneous droplet size by optimizing different process variables. In this review special attention has been paid to the application of double emulsion techniques for the encapsulation of various hydrophilic and hydrophobic anticancer drugs, anti-inflammatory drugs, antibiotic drugs, proteins and amino acids and their applications in theranostics. Moreover, the optimized ratio of the different phases and other process parameters of double emulsions are discussed. Finally, the results published regarding various types of solvents, stabilizers and polymers used for the encapsulation of several active substances via double emulsion processes are reported. Copyright © 2015 Elsevier B.V. All rights reserved.
Bradshaw, Peter L.; Colville, Jonathan F.; Linder, H. Peter
2015-01-01
We used a very large dataset (>40% of all species) from the endemic-rich Cape Floristic Region (CFR) to explore the impact of different weighting techniques, coefficients used to calculate similarity among cells, and clustering approaches on biogeographical regionalisation. The results were used to revise the biogeographical subdivision of the CFR. We show that weighted data (down-weighting widespread species), similarity calculated using Kulczynski's second measure, and clustering using UPGMA resulted in the optimal classification. This maximized the number of endemic species, the number of centres recognized, and the number of operational geographic units assigned to centres of endemism (CoEs). We developed a dendrogram branch order cut-off (BOC) method to locate the optimal cut-off points on the dendrogram for defining candidate clusters. The Kulczynski second-measure dendrograms were combined using consensus, identifying areas of conflict which could be due to biotic element overlap or transitional areas. Post-clustering GIS manipulation substantially enhanced the endemic composition and geographic size of candidate CoEs. Although there was broad spatial congruence with previous phytogeographic studies, our techniques allowed for the recovery of additional phytogeographic detail not previously described for the CFR. PMID:26147438
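Kulczynski's second measure on binary presence/absence data and UPGMA (average-linkage) clustering can be combined in a few lines with SciPy; the cell-by-species matrix here is random stand-in data, not the CFR dataset:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def kulczynski2(x, y):
    """Kulczynski's second measure for binary presence/absence vectors:
    mean of the two conditional shared-species proportions."""
    a = np.sum((x == 1) & (y == 1))
    b = np.sum((x == 1) & (y == 0))
    c = np.sum((x == 0) & (y == 1))
    if a + b == 0 or a + c == 0:
        return 0.0
    return 0.5 * (a / (a + b) + a / (a + c))

# Hypothetical presence/absence matrix: 12 grid cells x 40 species
rng = np.random.default_rng(3)
cells = (rng.random((12, 40)) < 0.3).astype(int)

n = len(cells)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = 1.0 - kulczynski2(cells[i], cells[j])

# UPGMA = average-linkage hierarchical clustering on the condensed distances
tree = linkage(squareform(dist), method="average")
clusters = fcluster(tree, t=4, criterion="maxclust")
print("cell cluster assignments:", clusters)
```

The branch-order cut-off step described above would replace the fixed maxclust criterion with data-driven cut points on the dendrogram.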
An improved exploratory search technique for pure integer linear programming problems
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1990-01-01
The development is documented of a heuristic method for the solution of pure integer linear programming problems. The procedure draws its methodology from the ideas of Hooke and Jeeves type 1 and 2 exploratory searches, greedy procedures, and neighborhood searches. It uses an efficient rounding method to obtain its first feasible integer point from the optimal continuous solution obtained via the simplex method. Since this method is based entirely on simple addition or subtraction of one to each variable of a point in n-space and the subsequent comparison of candidate solutions to a given set of constraints, it facilitates significant complexity improvements over existing techniques. It also obtains the same optimal solution found by the branch-and-bound technique in 44 of 45 small to moderate size test problems. Two example problems are worked in detail to show the inner workings of the method. Furthermore, using an established weighted scheme for comparing computational effort involved in an algorithm, a comparison of this algorithm is made to the more established and rigorous branch-and-bound method. A computer implementation of the procedure, in PC compatible Pascal, is also presented and discussed.
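A compressed sketch of the core idea follows: round the continuous optimum, restore feasibility, then probe ±1 moves on each variable, keeping feasible improvements. This simplifies the documented procedure by omitting the Hooke-Jeeves pattern moves and the greedy ordering:

```python
import numpy as np
from itertools import product

def feasible(x, A, b):
    """Check A @ x <= b for a candidate integer point."""
    return np.all(A @ x <= b + 1e-9)

def exploratory_integer_search(x_lp, c, A, b, max_iters=100):
    """Heuristic sketch: round the continuous LP optimum, then repeatedly
    probe +/-1 moves on each variable, keeping feasible improvements."""
    x = np.round(x_lp).astype(int)
    # Pull the rounded point into the feasible region by greedy -1 moves
    while not feasible(x, A, b) and x.max() > 0:
        x[np.argmax(x)] -= 1
    best, best_val = x.copy(), c @ x
    for _ in range(max_iters):
        improved = False
        for i, step in product(range(len(x)), (-1, 1)):
            cand = best.copy()
            cand[i] += step
            if cand[i] >= 0 and feasible(cand, A, b) and c @ cand > best_val:
                best, best_val = cand, c @ cand
                improved = True
        if not improved:
            break
    return best, best_val

# Usage: maximize 5x0 + 4x1 s.t. 6x0 + 4x1 <= 24, x0 + 2x1 <= 6, x >= 0 integer
A = np.array([[6.0, 4.0], [1.0, 2.0]])
b = np.array([24.0, 6.0])
c = np.array([5.0, 4.0])
print(exploratory_integer_search(np.array([3.0, 1.5]), c, A, b))
```

Like the documented heuristic, this local search can stall short of the true integer optimum, which is consistent with the original matching branch-and-bound in 44 of 45 problems rather than all 45.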
Extinction-sedimentation inversion technique for measuring size distribution of artificial fogs
NASA Technical Reports Server (NTRS)
Deepak, A.; Vaughan, O. H.
1978-01-01
In measuring the size distribution of artificial fog particles, it is important that the natural state of the particles not be disturbed by the measuring device, as occurs when samples are drawn through tubes. This paper describes a method for carrying out such a measurement by allowing the fog particles to settle in quiet air inside an enclosure traversed by a parallel beam of light that measures the optical depth as a function of time. An analytic function fit to the optical depth time-decay curve can be directly inverted to yield the size distribution. Results of one such experiment performed on artificial fogs are shown as an example. The forward-scattering corrections to the measured extinction coefficient are also discussed, with the aim of optimizing the experimental design so that the error due to forward scattering is minimized.
Water supply pipe dimensioning using hydraulic power dissipation
NASA Astrophysics Data System (ADS)
Sreemathy, J. R.; Rashmi, G.; Suribabu, C. R.
2017-07-01
Proper sizing of the pipes in water distribution networks plays an important role in the overall design of any water supply system. Several approaches have been applied to design networks from an economical point of view. Traditional optimization techniques and population-based stochastic algorithms are widely used to optimize networks, but their use is mostly limited to the research level because they are difficult for practicing engineers, design engineers and consulting firms to apply. Moreover, the non-availability of commercial software for optimal water distribution system design forces practicing engineers to adopt either trial-and-error or experience-based design. This paper presents a simple approach based on the power dissipated in each pipeline as a parameter for designing the network economically, though not to the level of the global minimum cost.
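The dissipation parameter is simply P = ρ g Q h_f per pipe; the sketch below computes it using the SI Hazen-Williams head-loss formula, with pipe data and roughness coefficient chosen purely for illustration:

```python
import numpy as np

RHO, G = 1000.0, 9.81          # water density (kg/m^3), gravity (m/s^2)

def hazen_williams_headloss(q, d, length, c=130.0):
    """Head loss (m) in a pipe of diameter d (m) and length (m) carrying
    flow q (m^3/s), using the SI Hazen-Williams formula."""
    return 10.67 * length * q**1.852 / (c**1.852 * d**4.87)

def power_dissipated(q, d, length, c=130.0):
    """Hydraulic power dissipated in the pipe (W): P = rho * g * q * h_f."""
    return RHO * G * q * hazen_williams_headloss(q, d, length, c)

# Usage: compare candidate diameters for a 500 m pipe carrying 25 L/s
for d in (0.10, 0.15, 0.20, 0.25):
    p = power_dissipated(0.025, d, 500.0)
    print(f"D = {d:.2f} m -> dissipation = {p:8.1f} W")
```

Ranking candidate diameters by dissipation gives the designer a physically transparent proxy for operating cost without running a full optimization.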
Lyubimov, Artem Y; Uervirojnangkoorn, Monarin; Zeldin, Oliver B; Brewster, Aaron S; Murray, Thomas D; Sauter, Nicholas K; Berger, James M; Weis, William I; Brunger, Axel T
2016-06-01
Serial femtosecond crystallography (SFX) uses an X-ray free-electron laser to extract diffraction data from crystals not amenable to conventional X-ray light sources owing to their small size or radiation sensitivity. However, a limitation of SFX is the high variability of the diffraction images that are obtained. As a result, it is often difficult to determine optimal indexing and integration parameters for the individual diffraction images. Presented here is a software package, called IOTA, which uses a grid-search technique to determine optimal spot-finding parameters that can in turn affect the success of indexing and the quality of integration on an image-by-image basis. Integration results can be filtered using a priori information about the Bravais lattice and unit-cell dimensions and analyzed for unit-cell isomorphism, facilitating an improvement in subsequent data-processing steps.
Optimizing oil spill cleanup efforts: A tactical approach and evaluation framework.
Grubesic, Tony H; Wei, Ran; Nelson, Jake
2017-12-15
Although anthropogenic oil spills vary in size, duration and severity, their broad impacts on complex social, economic and ecological systems can be significant. Questions pertaining to the operational challenges associated with the tactical allocation of human resources, cleanup equipment and supplies to areas impacted by a large spill are particularly salient when developing mitigation strategies for extreme oiling events. The purpose of this paper is to illustrate the application of advanced oil spill modeling techniques in combination with a developed mathematical model to spatially optimize the allocation of response crews and equipment for cleaning up an offshore oil spill. The results suggest that the detailed simulations and optimization model are a good first step in allowing both communities and emergency responders to proactively plan for extreme oiling events and develop response strategies that minimize the impacts of spills. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Novel Space Partitioning Algorithm to Improve Current Practices in Facility Placement
Jimenez, Tamara; Mikler, Armin R; Tiwari, Chetan
2012-01-01
In the presence of naturally occurring and man-made public health threats, the feasibility of regional bio-emergency contingency plans plays a crucial role in the mitigation of such emergencies. While the analysis of in-place response scenarios provides a measure of quality for a given plan, it requires human judgment to identify improvements in plans that are otherwise likely to fail. Since resource constraints and government mandates limit the availability of service provided in case of an emergency, computational techniques can determine optimal locations for providing emergency response, assuming that the uniform distribution of demand across homogeneous resources will yield an optimal service outcome. This paper presents an algorithm that recursively partitions the geographic space into sub-regions while equally distributing the population across the partitions. For this method, we have proven the existence of an upper bound on the deviation from the optimal population size for sub-regions. PMID:23853502
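A minimal version of the recursive equal-population split uses alternating median cuts on the x and y axes; real implementations must also respect geography and resource constraints, and the demand points below are synthetic:

```python
import numpy as np

def partition(points, depth, regions=None):
    """Recursively split a set of (x, y) demand points into 2**depth
    sub-regions of (near-)equal population via alternating median cuts."""
    if regions is None:
        regions = []
    if depth == 0:
        regions.append(points)
        return regions
    axis = depth % 2                       # alternate x / y split axes
    median = np.median(points[:, axis])
    left = points[points[:, axis] <= median]
    right = points[points[:, axis] > median]
    partition(left, depth - 1, regions)
    partition(right, depth - 1, regions)
    return regions

# Usage: 10,000 hypothetical residents -> 8 service regions
rng = np.random.default_rng(4)
pop = rng.random((10_000, 2))
regions = partition(pop, depth=3)
print("region sizes:", [len(r) for r in regions])
```

Because each cut is a median, the deviation from equal population per region stays small, which is the property the paper's upper-bound result formalizes.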
Wang, Jie-sheng; Han, Shuang; Shen, Na-na; Li, Shu-xia
2014-01-01
To meet the forecasting targets for key technology indicators in the flotation process, a BP neural network soft-sensor model based on feature extraction from flotation froth images and optimized by a shuffled cuckoo search algorithm is proposed. Based on digital image processing techniques, the color features in HSI color space, the visual features based on the gray-level co-occurrence matrix, and the shape characteristics based on the geometric theory of flotation froth images are extracted, respectively, as the input variables of the proposed soft-sensor model. The isometric mapping (Isomap) method is then used to reduce the input dimension, network size, and learning time of the BP neural network. Finally, a shuffled cuckoo search algorithm is adopted to optimize the BP neural network soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy. PMID:25133210
Teng, Chaoyi; Demers, Hendrix; Brodusch, Nicolas; Waters, Kristian; Gauvin, Raynald
2018-06-04
A number of techniques for the characterization of rare earth minerals (REM) have been developed and are widely applied in the mining industry. However, most are limited to global analysis due to their low spatial resolution. In this work, phase map analyses were performed on REM with an annular silicon drift detector (aSDD) attached to a field emission scanning electron microscope. The optimal conditions for the aSDD were explored, and the high-resolution phase maps generated at a low accelerating voltage identify phases at the micron scale. In comparisons between the annular and a conventional SDD, the aSDD operated at optimized conditions makes the phase map a practical solution for choosing an appropriate grinding size, judging the efficiency of different separation processes, and optimizing a REM beneficiation flowsheet.
The application of phase grating to CLM technology for the sub-65nm node optical lithography
NASA Astrophysics Data System (ADS)
Yoon, Gi-Sung; Kim, Sung-Hyuck; Park, Ji-Soong; Choi, Sun-Young; Jeon, Chan-Uk; Shin, In-Kyun; Choi, Sung-Woon; Han, Woo-Sung
2005-06-01
As a promising technology for sub-65nm node optical lithography, CLM (Chrome-Less Mask) technology, among the RETs (Resolution Enhancement Techniques) for low k1, has been researched worldwide in recent years. CLM has several advantages, such as a relatively simple manufacturing process and competitive performance compared to phase-edge PSMs. For low-k1 lithography, we have researched the CLM technique as a good solution, especially for the sub-65nm node. As a step toward developing sub-65nm node optical lithography, we applied CLM technology to 80nm-node lithography with the mesa and trench methods. From the analysis of CLM technology in 80nm lithography, we found that there is an optimal shutter size for best performance, that the increment of wafer ADI CD varies with pattern pitch, and that there is a limitation in patterning various shapes and sizes due to the OPC dead-zone - the OPC dead-zone in the CLM technique is the specific region of shutter size that does not make the wafer CD increase beyond a specific size. Small patterns are also easily broken while fabricating the CLM mask by the mesa method. Generally, the trench method has better optical performance than the mesa method. These issues have so far restricted the application of CLM technology to a small field. We approached these issues with a 3-D topographic simulation tool and found that they could be overcome by applying phase grating in trench-type CLM. With the simulation data, we made test masks that had many kinds of patterns under many different conditions and analyzed their performance through AIMS fab 193 and exposure on wafer. Finally, we have developed a CLM technology which is free of the OPC dead-zone and of pattern breakage in the fabrication process. Therefore, we can apply the CLM technique to sub-65nm node optical lithography, including logic devices.
Image Correlation Pattern Optimization for Micro-Scale In-Situ Strain Measurements
NASA Technical Reports Server (NTRS)
Bomarito, G. F.; Hochhalter, J. D.; Cannon, A. H.
2016-01-01
The accuracy and precision of digital image correlation (DIC) is a function of three primary ingredients: image acquisition, image analysis, and the subject of the image. Development of the first two (i.e., image acquisition techniques and image correlation algorithms) has led to widespread use of DIC; however, fewer developments have been focused on the third ingredient. Typically, subjects of DIC images are mechanical specimens with either a natural surface pattern or a pattern applied to the surface. Research in the area of DIC patterns has primarily been aimed at identifying which surface patterns are best suited for DIC, by comparing patterns to each other. Because the easiest and most widespread methods of applying patterns have a high degree of randomness associated with them (e.g., airbrush, spray paint, particle decoration, etc.), less effort has been spent on the exact construction of ideal patterns. With the development of patterning techniques such as microstamping and lithography, patterns can be applied to a specimen pixel by pixel from a patterned image. In these cases, especially because the patterns are reused many times, an optimal pattern is sought such that the error introduced into DIC by the pattern is minimized. DIC consists of tracking the motion of an array of nodes from a reference image to a deformed image. Every pixel in the images has an associated intensity (grayscale) value, with discretization depending on the bit depth of the image. Because individual pixel matching by intensity value yields a non-unique, scale-dependent problem, subsets around each node are used for identification. A correlation criterion is used to find the best match of a particular subset of a reference image within a deformed image; the reader is referred to the references for enumerations of typical correlation criteria. As illustrated by Schreier and Sutton and by Lu and Cary, systematic errors can be introduced by representing the underlying deformation with under-matched shape functions. An important implication, as discussed by Sutton et al., is that in the presence of highly localized deformations (e.g., crack fronts), error can be reduced by minimizing the subset size. In other words, smaller subsets allow the more accurate resolution of localized deformations. Conversely, the choice of optimal subset size has been widely studied and the general consensus is that larger subsets with more information content are less prone to random error. Thus, an optimal subset size balances the systematic error from under-matched deformations with the random error from measurement noise. The alternative approach pursued in the current work is to choose a small subset size and optimize the information content within it (i.e., optimizing an applied DIC pattern), rather than finding an optimal subset size. In the literature, many pattern quality metrics have been proposed, e.g., sum of square of subset intensity gradients (SSSIG), mean subset fluctuation, gray-level co-occurrence, autocorrelation-based metrics, and speckle-based metrics. The majority of these metrics were developed to quantify the quality of common pseudo-random patterns after they have been applied, and were not created with the intent of pattern generation. As such, it is found that none of the metrics examined in this study is fit to be the objective function of a pattern-generation optimization. In some cases, such as with speckle-based metrics, application to pixel-by-pixel patterns is ill-conditioned and requires somewhat arbitrary extensions. In other cases, such as with the SSSIG, trivial solutions exist for the optimum of the metric that are ill-suited for DIC (such as a checkerboard pattern). In the current work, a multi-metric optimization method is proposed whereby quality is viewed as a combination of individual quality metrics. Specifically, SSSIG and two autocorrelation metrics with generally competing objectives are used. Thus, each metric can be viewed as a constraint imposed upon the others, precluding the achievement of their trivial solutions. In this way, the optimization produces a pattern which balances the benefits of multiple quality metrics. The resulting pattern, along with randomly generated patterns, is subjected to numerical deformations and analyzed with DIC software. The optimized pattern is shown to outperform the randomly generated patterns.
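For concreteness, here is a sketch of the SSSIG metric evaluated subset-by-subset; comparing a random binary pattern with a checkerboard illustrates the trivial-optimum problem noted above, since the checkerboard scores highly on SSSIG while being ill-suited for DIC (the subset size and patterns are illustrative assumptions):

```python
import numpy as np

def sssig(pattern, subset=21):
    """Sum of square of subset intensity gradients (SSSIG) per subset,
    a common DIC pattern-quality metric: higher values indicate subsets
    with more matchable gradient content."""
    gy, gx = np.gradient(pattern.astype(float))
    g2 = gx**2 + gy**2
    h, w = pattern.shape
    scores = []
    for r in range(0, h - subset + 1, subset):
        for c in range(0, w - subset + 1, subset):
            scores.append(g2[r:r + subset, c:c + subset].sum())
    return np.array(scores)

# Usage: compare a random binary pattern against a (degenerate) checkerboard
rng = np.random.default_rng(5)
random_pattern = (rng.random((105, 105)) < 0.5).astype(float) * 255
checker = np.indices((105, 105)).sum(axis=0) % 2 * 255.0
print("random  min-subset SSSIG:", sssig(random_pattern).min())
print("checker min-subset SSSIG:", sssig(checker).min())
```

The high checkerboard score despite its non-uniqueness under translation is exactly why the paper pairs SSSIG with autocorrelation metrics as mutual constraints.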
NASA Astrophysics Data System (ADS)
Winnett, James; Mallick, Kajal K.
2014-04-01
Commercially pure titanium (Ti) and its alloys, in particular, titanium-vanadium-aluminium (Ti-6Al-4V), have been used as biomaterials due to their mechanical similarities to bone, good biocompatibility, and inertness in vivo. The introduction of porosity to the scaffolds leads to optimized mechanical properties and enhanced biological activity. The adaptive foam reticulation (AFR) technique has been previously used to generate hydroxyapatite bioscaffolds with enhanced cell behavior due to the generation of macroporous structures with microporous struts that provided routes for cell infiltration as well as attachment sites. Sacrificial polyurethane templates of 45 ppi and 90 ppi were coated in biomaterial-based slurries containing either Ti or Ti-6Al-4V as the biomaterial and camphene as the porogen. The resultant macropore sizes of 100-550 μm corresponded well with the initial template pore sizes while camphene produced micropores of 1-10 μm, with the level of microporosity related to the amount of porogen inclusion.
Imaging TiO2 nanoparticles on GaN nanowires with electrostatic force microscopy
NASA Astrophysics Data System (ADS)
Xie, Ting; Wen, Baomei; Liu, Guannan; Guo, Shiqi; Motayed, Abhishek; Murphy, Thomas; Gomez, R. D.
Gallium nitride (GaN) nanowires functionalized with metal-oxide nanoparticles have been explored extensively for gas sensing applications in the past few years. These sensors have several advantages over conventional schemes, including miniature size, low power consumption and fast response and recovery times. The morphology of the oxide functionalization layer is critical to achieving faster response and recovery times, with the optimal size distribution of nanoparticles being in the range of 10 to 30 nm. However, it is challenging to characterize these nanoparticles on GaN nanowires using common techniques such as scanning electron microscopy, transmission electron microscopy, and x-ray diffraction. Here, we demonstrate electrostatic force microscopy in combination with atomic force microscopy as a non-destructive technique for morphological characterization of dispersed TiO2 nanoparticles on GaN nanowires. We also discuss the applicability of this method to other material systems with a proposed tip-surface capacitor model. This project was sponsored through N5 Sensors and the Maryland Industrial Partnerships (MIPS, #5418).
Toward Imaging of Small Objects with XUV Radiation
NASA Astrophysics Data System (ADS)
Sayrac, Muhammed; Kolomenski, Alexandre A.; Boran, Yakup; Schuessler, Hans
The coherent diffraction imaging (CDI) technique has the potential to capture high-resolution images of nano- or micron-sized structures when using XUV radiation obtained by the high-harmonic generation (HHG) process. When a small object is exposed to XUV radiation, a diffraction pattern of the object is created. Advances in coherent HHG enable photon fluxes sufficient for XUV imaging. Diffractive imaging with coherent tabletop XUV beams has made nanometer-scale resolution imaging possible by replacing the imaging optics with a computer reconstruction algorithm. In this study, we present our initial work on diffractive imaging using a tabletop XUV source. An initial investigation of the imaging of a micron-sized mesh with an optimized HHG source is demonstrated. This work was supported in part by the Robert A. Welch Foundation Grant No. A1546 and the Qatar Foundation under the grant NPRP 8-735-1-154. M. Sayrac acknowledges support from the Ministry of National Education of the Republic of Turkey.
Effects of Solution Chemistry on Nano-Bubbles Transport in Saturated Porous Media
NASA Astrophysics Data System (ADS)
Hamamoto, S.; Takemura, T.; Suzuki, K.; Nihei, N.; Nishimura, T.
2017-12-01
Nano-bubbles (NBs) have considerable potential for the remediation of soil and groundwater contaminated by organic compounds, especially when used in conjunction with bioremediation technologies. Understanding the transport mechanisms of NBs in soils is essential to optimize NB-based remediation techniques. In this study, one-dimensional column transport experiments using 0.1 mm glass beads were conducted, in which NBs created with oxygen gas at different pH and ionic strength were injected into the column at a constant flow rate. The NB concentration in the effluent was quantified using a resonant mass measurement technique. The effects of the solution chemistry of the NB water on NB transport in the porous media were investigated. The results showed that attachment of NBs was enhanced under higher ionic strength and lower pH conditions, caused by the reduced repulsive force between the NBs and the glass beads. In addition, bubble size distributions in the effluents showed that relatively larger NBs were retained in the column. This trend was more significant at lower pH.
Dangre, Pankaj; Gilhotra, Ritu; Dhole, Shashikant
2016-10-01
The present investigation aimed to design a statistically optimized self-microemulsifying drug delivery system (SMEDDS) of eprosartan mesylate (EM). Preliminary screening was carried out to find a suitable combination of excipients for the formulation. A 3^2 full factorial design was employed to determine the effect of the independent variables on the dependent (response) variables. The independent variables studied were the concentration of oil (X1) and the ratio of Smix (X2), whereas the dependent variables were emulsification time (s), globule size (nm), polydispersity index (PDI), and zeta potential (mV); multiple linear regression analysis (MLRA) was employed to understand the influence of the independent variables on the dependent variables. Furthermore, a numerical optimization technique using the desirability function was used to develop an optimized formulation with the desired values of the dependent variables. The optimized SMEDDS formulation of eprosartan mesylate (EMF-O) exhibited an emulsification time of 118.45 ± 1.64 s, a globule size of 196.81 ± 1.29 nm, a zeta potential of -9.34 ± 1.2 mV, and a polydispersity index of 0.354 ± 0.02. For the in vitro dissolution study, the optimized formulation (EMF-O) and the pure drug were separately entrapped in dialysis bags, and the study indicated a higher release of the drug from EMF-O. In vivo pharmacokinetic studies in Wistar rats using PKSolver software revealed a 2.1-fold increase in the oral bioavailability of EM from EMF-O compared with a plain suspension of the pure drug.
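A sketch of the desirability-function calculation behind such numerical optimization follows; the nine responses and the desirability bounds are invented for illustration, not the study's measurements:

```python
import numpy as np

def d_min(y, low, high):
    """Smaller-is-better desirability: 1 at/below low, 0 at/above high."""
    return np.clip((high - y) / (high - low), 0.0, 1.0)

def d_target(y, low, target, high):
    """Target-is-best desirability, linear on both sides of the target."""
    left = np.clip((y - low) / (target - low), 0.0, 1.0)
    right = np.clip((high - y) / (high - target), 0.0, 1.0)
    return np.where(y <= target, left, right)

# Hypothetical responses for nine 3^2 factorial runs
emul_time = np.array([140, 132, 125, 128, 118, 122, 135, 120, 126], float)  # s
globule   = np.array([260, 240, 210, 225, 197, 205, 250, 200, 215], float)  # nm
pdi       = np.array([0.52, 0.45, 0.38, 0.42, 0.35, 0.37, 0.48, 0.36, 0.40])

# Overall desirability = geometric mean of the individual desirabilities
D = (d_min(emul_time, 110, 150) * d_min(globule, 190, 270)
     * d_target(pdi, 0.2, 0.35, 0.6)) ** (1 / 3)
print("best run:", int(np.argmax(D)) + 1, "D =", round(float(D.max()), 3))
```

The geometric mean penalizes any run with a near-zero individual desirability, so the selected formulation must be acceptable on every response simultaneously.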
Wines, Michael P; Johnson, Valerie M; Lock, Brad; Antonio, Fred; Godwin, James C; Rush, Elizabeth M; Guyer, Craig
2015-01-01
Optimal husbandry techniques are desirable for any headstart program, but frequently are unknown for rare species. Here we describe key reproductive variables and determine optimal incubation temperature and diet diversity for Eastern Indigo Snakes (Drymarchon couperi) grown in laboratory settings. Optimal incubation temperature was estimated from two variables dependent on temperature, shell dimpling, a surrogate for death from fungal infection, and deviation of an egg from an ovoid shape, a surrogate for death from developmental anomalies. Based on these relationships and size at hatching we determined optimal incubation temperature to be 26°C. Additionally, we used incubation data to assess the effect of temperature on duration of incubation and size of hatchlings. We also examined hatchling diets necessary to achieve optimal growth over a 21-month period. These snakes exhibited a positive linear relationship between total mass eaten and growth rate, when individuals were fed less than 1711 g of prey, and displayed constant growth for individuals exceeding 1711 g of prey. Similarly, growth rate increased linearly with increasing diet diversity up to a moderately diverse diet, followed by constant growth for higher levels of diet diversity. Of the two components of diet diversity, diet evenness played a stronger role than diet richness in explaining variance in hatchling growth. These patterns document that our goal of satiating snakes was achieved for some individuals but not others and that diets in which total grams consumed over the first 21 months of life is distributed equivalently among at least three prey genera yielded the fastest growth rates for hatchling snakes. © 2015 Wiley Periodicals, Inc.
Classification and treatment of periprosthetic supracondylar femur fractures.
Ricci, William
2013-02-01
Locked plating and retrograde nailing are two accepted methods for treatment of periprosthetic distal femur fractures. Each has relative benefits and potential pitfalls. Appropriate patient selection and knowledge of the specific femoral component geometry are required to optimally choose between these two methods. Locked plating may be applied to most periprosthetic distal femur fractures. The fracture pattern, simple or comminuted, will dictate the specific plating technique, compression plating or bridge plating. Nailing requires an open intercondylar box and a distal fragment of sufficient size to allow interlocking. With proper patient selection and proper techniques, good results can be obtained with either method.
Lalonde, Michel; Wells, R Glenn; Birnie, David; Ruddy, Terrence D; Wassenaar, Richard
2014-07-01
Phase analysis of single photon emission computed tomography (SPECT) radionuclide angiography (RNA) has been investigated for its potential to predict the outcome of cardiac resynchronization therapy (CRT). However, phase analysis may be limited in its potential at predicting CRT outcome, as valuable information may be lost by assuming that time-activity curves (TAC) follow a simple sinusoidal shape. A new method, cluster analysis, is proposed which directly evaluates the TACs and may lead to a better understanding of dyssynchrony patterns and CRT outcome. Cluster analysis algorithms were developed and optimized to maximize their ability to predict CRT response. Forty-nine patients (N = 27 with ischemic etiology) received a SPECT RNA scan as well as positron emission tomography (PET) perfusion and viability scans prior to undergoing CRT. A semiautomated algorithm sampled the left ventricle wall to produce 568 TACs from SPECT RNA data. The TACs were then subjected to two different cluster analysis techniques, K-means and normal average, where several input metrics were also varied to determine the optimal settings for the prediction of CRT outcome. Each TAC was assigned to a cluster group based on the comparison criteria, and global and segmental cluster sizes and scores were used as measures of dyssynchrony and used to predict response to CRT. A repeated random twofold cross-validation technique was used to train and validate the cluster algorithm. Receiver operating characteristic (ROC) analysis was used to calculate the area under the curve (AUC) and compare results to those obtained for SPECT RNA phase analysis and PET scar size analysis methods. Using the normal average cluster analysis approach, the septal wall produced statistically significant results for predicting CRT response in the ischemic population (ROC AUC = 0.73; p < 0.05 vs. equal-chance ROC AUC = 0.50), with an optimal operating point of 71% sensitivity and 60% specificity. Cluster analysis results were similar to SPECT RNA phase analysis (ROC AUC = 0.78, p = 0.73 vs. cluster AUC; sensitivity/specificity = 59%/89%) and PET scar size analysis (ROC AUC = 0.73, p = 1.0 vs. cluster AUC; sensitivity/specificity = 76%/67%). A SPECT RNA cluster analysis algorithm was developed for the prediction of CRT outcome and produced results equivalent to those obtained from Fourier and scar analyses.
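A minimal k-means clustering of time-activity curves, the kind of grouping the K-means variant above builds on, can be sketched as follows; the synthetic TACs (normal vs. delayed contraction) are illustrative, not patient data:

```python
import numpy as np

def kmeans_tacs(tacs, k=3, iters=50, seed=6):
    """Plain k-means over time-activity curves: each TAC is a vector of
    counts across the cardiac cycle; clusters group similar contraction
    patterns without assuming a sinusoidal shape."""
    rng = np.random.default_rng(seed)
    centers = tacs[rng.choice(len(tacs), k, replace=False)]
    for _ in range(iters):
        # Assign each curve to the nearest center (Euclidean distance)
        d = np.linalg.norm(tacs[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([tacs[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Hypothetical 568 TACs x 16 frames: normal + delayed contraction groups
rng = np.random.default_rng(7)
t = np.linspace(0, 2 * np.pi, 16)
normal = 1 - 0.3 * np.cos(t)
delayed = 1 - 0.3 * np.cos(t - 0.8)
tacs = np.vstack([normal + 0.05 * rng.normal(size=(400, 16)),
                  delayed + 0.05 * rng.normal(size=(168, 16))])
labels, _ = kmeans_tacs(tacs, k=2)
print("cluster sizes:", np.bincount(labels))
```

Cluster sizes and per-segment membership then serve as the dyssynchrony scores fed into the ROC analysis.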
Multi-GPU implementation of a VMAT treatment plan optimization algorithm.
Tian, Zhen; Peng, Fei; Folkerts, Michael; Tan, Jun; Jia, Xun; Jiang, Steve B
2015-06-01
Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, a GPU's relatively small memory cannot hold cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or a small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present the detailed techniques employed for the GPU implementation. The authors also use this particular problem as an example to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on the CPU in coordinate list (COO) format. On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of the beamlet price, the first step of the PP, is distributed across the GPUs, and a fast inter-GPU data transfer scheme is realized using peer-to-peer access. The remaining steps of the PP and MP are implemented on the CPU or a single GPU owing to their modest problem scale and computational load. The Barzilai-Borwein algorithm with a subspace-step scheme is adopted to solve the MP. A head and neck (H&N) cancer case is then used to validate the authors' method. The authors also compare their multi-GPU implementation with three different single-GPU strategies, i.e., truncating the DDC matrix (S1), repeatedly transferring the DDC matrix between CPU and GPU (S2), and porting computations involving the DDC matrix to the CPU (S3), in terms of both plan quality and computational efficiency. Two more H&N patient cases and three prostate cases are used to demonstrate the advantages of the authors' method. The authors' multi-GPU implementation can finish the optimization process within ∼1 min for the H&N patient case. S1 leads to inferior plan quality, although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23-46 s. In contrast, to obtain clinically comparable or acceptable plans for all six of the VMAT cases tested in this paper, the optimization time needed by a commercial CPU-based treatment planning system was found to be on the order of several minutes. The results demonstrate that the multi-GPU implementation of the authors' column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality.
The authors' study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
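The DDC-partitioning scheme described above can be sketched in a single process with scipy: the sparse matrix is held in COO format on the host and split by beam-angle group into CSR submatrices, one per GPU, over which the beamlet-price product is evaluated. The four-way contiguous column split and all sizes below are illustrative assumptions; actual GPU allocation and peer-to-peer transfers are beyond this sketch.

# Conceptual sketch of the DDC-matrix partitioning step: the sparse matrix is
# held in COO format and split column-wise by beam-angle group into CSR
# submatrices, one per GPU. scipy stands in for the actual GPU storage;
# sizes and the 4-way split are illustrative.
import numpy as np
from scipy import sparse

n_voxels, n_beamlets = 10000, 4000
rng = np.random.default_rng(1)
ddc_coo = sparse.random(n_voxels, n_beamlets, density=0.01,
                        format="coo", random_state=1)

# Assume beamlets are ordered by beam angle, so a contiguous column split
# assigns each angular sector to one of four GPUs.
bounds = np.linspace(0, n_beamlets, 5, dtype=int)
per_gpu = [ddc_coo.tocsc()[:, bounds[i]:bounds[i + 1]].tocsr()
           for i in range(4)]

# The beamlet price (pricing-problem step) is then a matrix-vector product
# that each GPU would evaluate on its own submatrix.
dual = rng.random(n_voxels)
prices = np.concatenate([m.T @ dual for m in per_gpu])
print(prices.shape)  # (4000,)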
OPC for curved designs in application to photonics on silicon
NASA Astrophysics Data System (ADS)
Orlando, Bastien; Farys, Vincent; Schneider, Loïc; Cremer, Sébastien; Postnikov, Sergei V.; Millequant, Matthieu; Dirrenberger, Mathieu; Tiphine, Charles; Bayle, Sébastian; Tranquillin, Céline; Schiavone, Patrick
2016-03-01
Today's designs for photonic devices on silicon rely on non-Manhattan features such as curves and a wide variety of angles, with minimum feature sizes below 100 nm. Industrial manufacturing of such devices requires an optimized process window with 193 nm lithography. Therefore, the Resolution Enhancement Techniques (RET) commonly used for CMOS manufacturing are required. However, most RET algorithms are based on Manhattan fragmentation (0°, 45°, and 90°), which can generate large CD dispersion on masks for photonic designs. Industrial implementation of RET solutions for photonic designs is challenging, as most currently available OPC tools are CMOS-oriented. Discrepancies between the design and the final result induced by RET techniques can degrade photonic device performance. We propose a novel sizing algorithm that adjusts design edge fragments while preserving the topology of the original structures. The results of implementing the algorithm in rule-based sizing, SRAF placement, and model-based correction are discussed in this paper. Corrections based on this novel algorithm were applied and characterized on real photonic devices. The obtained results demonstrate the validity of the proposed correction method as integrated in the Inscale software of Aselta Nanographics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Depauw, N; Patel, S; MacDonald, S
Purpose: Deep inspiration breath-hold (DIBH) techniques have been shown to carry significant dosimetric advantages in conventional radiotherapy of left-sided breast cancer. The purpose of this study is to evaluate the use of DIBH techniques for post-mastectomy radiation therapy (PMRT) using proton pencil beam scanning (PBS). Method: Ten PMRT patients, with or without breast implant, underwent two helical CT scans: one with free breathing and the other with deep inspiration breath-hold. A prescription of 50.4 Gy(RBE) to the whole chest wall and lymphatics (axillary, supraclavicular, and internal mammary nodes) was considered. PBS plans were generated for each patient's CT scan using Astroid, an in-house treatment planning system, with the institution's conventional clinical PMRT parameters; that is, using a single en-face field with a spot size varying from 8 mm to 14 mm as a function of energy. Similar optimization parameters were used in both plans in order to ensure appropriate comparison. Results: Regardless of the technique (free breathing or DIBH), the generated plans were well within clinical acceptability. DIBH allowed for higher target coverage with better sparing of the cardiac structures. The lung doses were also slightly improved. While the use of DIBH techniques might be of interest, it is technically challenging, as it would require fast PBS delivery as well as synchronization of the beam delivery with a gating system, neither of which is currently available at the institution. Conclusion: DIBH techniques display some dosimetric advantages over free-breathing treatment for PBS PMRT patients, which warrants further investigation. Plans will also be generated with smaller spot sizes (2.5 mm to 5.5 mm and 5 mm to 9 mm), corresponding to new-generation machines, in order to further quantify the dosimetric advantages of DIBH as a function of spot size.
Kim, Yongbok; Modrick, Joseph M.; Pennington, Edward C.
2016-01-01
The objective of this work is to present commissioning procedures to clinically implement a three-dimensional (3D), image-based treatment planning system (TPS) for high-dose-rate (HDR) brachytherapy (BT) for gynecological (GYN) cancer. The physical dimensions of the GYN applicators and their values in the virtual applicator library agreed within 0.4 mm of their nominal values. Reconstruction uncertainties of the titanium tandem and ovoids (T&O) were less than 0.4 mm in CT phantom studies and on average between 0.8-1.0 mm on MRI when compared with X-rays. In-house software, HDRCalculator, was developed to independently check HDR plan parameters, such as active tandem or cylinder probe length, ovoid or cylinder size, source calibration and treatment dates, and the difference between the average Point A dose and the prescription dose. Dose-volume histograms were validated using another independent TPS. Comprehensive procedures to commission the volume optimization algorithms and processes in 3D image-based planning are presented. For the difference between line and volume optimizations, the average absolute differences were 1.4% for total reference air kerma (TRAK) and 1.1% for Point A dose. Volume optimization consistency tests between versions resulted in average absolute differences of 0.2% for TRAK and 0.9 s (0.2%) for total treatment time. The data revealed that the optimizer should run for at least 1 min in order to avoid dwell time changes of more than 0.6%. For clinical GYN T&O cases, three different volume optimization techniques (graphical optimization, pure inverse planning, and hybrid inverse optimization) were investigated by comparing them against a conventional Point A technique. End-to-end testing was performed using a T&O phantom to ensure no errors or inconsistencies occurred from imaging through to planning and delivery. The proposed commissioning procedures provide a clinically safe implementation technique for a 3D image-based TPS for HDR BT for GYN cancer. PACS number(s): 87.55.D- PMID:27074463
Arce, Pedro; Lagares, Juan Ignacio
2018-01-25
We have verified the GAMOS/Geant4 simulation model of a 6 MV VARIAN Clinac 2100 C/D linear accelerator by the procedure of adjusting the initial beam parameters to fit the percentage depth dose and cross-profile dose experimental data at different depths in a water phantom. Thanks to the use of a wide range of field sizes, from 2 × 2 cm² to 40 × 40 cm², a small phantom voxel size, and high statistics, fine precision in the determination of the beam parameters has been achieved. This precision has allowed us to make a thorough study of the different physics models and parameters that Geant4 offers. The three Geant4 electromagnetic physics sets of models, i.e. Standard, Livermore, and Penelope, have been compared to the experiment, testing the four different models of angular bremsstrahlung distributions as well as the three available multiple-scattering models, and optimizing the most relevant Geant4 electromagnetic physics parameters. Before the fitting, a comprehensive CPU time optimization has been done, using several of the Geant4 efficiency improvement techniques plus a few more developed in GAMOS.
NASA Astrophysics Data System (ADS)
Li, Lei; Yu, Long; Yang, Kecheng; Li, Wei; Li, Kai; Xia, Min
2018-04-01
The multiangle dynamic light scattering (MDLS) technique can better estimate particle size distributions (PSDs) than single-angle dynamic light scattering. However, determining the inversion range, angular weighting coefficients, and scattering angle combination is difficult yet fundamental to the reconstruction of both unimodal and multimodal distributions. In this paper, we propose a self-adapting regularization method called the wavelet iterative recursion nonnegative Tikhonov-Phillips-Twomey (WIRNNT-PT) algorithm. This algorithm combines a wavelet multiscale strategy with an appropriate inversion method and self-adaptively addresses several key issues, including the choice of weighting coefficients, the inversion range, and the optimal inversion method selected from two regularization algorithms, for estimating the PSD from MDLS measurements. In addition, the angular dependence of MDLS for estimating the PSDs of polymeric latexes is thoroughly analyzed. The dependence of the results on the number and range of measurement angles was analyzed in depth to identify the optimal scattering angle combination. Numerical simulations and experimental results for unimodal and multimodal distributions are presented to demonstrate both the validity of the WIRNNT-PT algorithm and the angular dependence of MDLS, and show that the proposed algorithm with a six-angle analysis in the 30°-130° range can be satisfactorily applied to retrieve PSDs from MDLS measurements.
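The abstract does not give the WIRNNT-PT equations, but one of its building blocks, a nonnegative Tikhonov inversion of the ill-posed scattering equation, can be sketched as follows. The exponential kernel, the decay-rate model, and the regularization weight are placeholder assumptions, not the paper's multiangle formulation.

# Minimal sketch of one building block the paper combines: a nonnegative
# Tikhonov inversion of the ill-posed equation g = A f, solved by stacking
# the regularizer into a nonnegative least-squares problem. The kernel here
# is a generic placeholder, not the full multiangle MDLS model.
import numpy as np
from scipy.optimize import nnls

n_tau, n_d = 60, 40
diam = np.linspace(10, 1000, n_d)          # candidate diameters (nm)
tau = np.logspace(-6, -2, n_tau)           # correlation lags (s)
gamma = 1e3 / diam                         # placeholder decay rates ~ 1/d
A = np.exp(-np.outer(tau, gamma))          # exponential kernel matrix

f_true = np.exp(-0.5 * ((diam - 300) / 60) ** 2)
g = A @ f_true + 1e-3 * np.random.default_rng(2).standard_normal(n_tau)

lam = 1e-2
L = np.eye(n_d)                            # identity (0th-order) regularizer
A_aug = np.vstack([A, np.sqrt(lam) * L])
g_aug = np.concatenate([g, np.zeros(n_d)])
f_hat, _ = nnls(A_aug, g_aug)              # nonnegative Tikhonov solution
print(f_hat.argmax(), f_true.argmax())     # recovered vs. true mode bin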
Anarjan, Navideh; Jafarizadeh-Malmiri, Hoda; Nehdi, Imededdine Arbi; Sbihi, Hassen Mohamed; Al-Resayes, Saud Ibrahim; Tan, Chin Ping
2015-01-01
Nanodispersion systems allow incorporation of lipophilic bioactives, such as astaxanthin (a fat soluble carotenoid) into aqueous systems, which can improve their solubility, bioavailability, and stability, and widen their uses in water-based pharmaceutical and food products. In this study, response surface methodology was used to investigate the influences of homogenization time (0.5–20 minutes) and speed (1,000–9,000 rpm) in the formation of astaxanthin nanodispersions via the solvent-diffusion process. The product was characterized for particle size and astaxanthin concentration using laser diffraction particle size analysis and high performance liquid chromatography, respectively. Relatively high determination coefficients (ranging from 0.896 to 0.969) were obtained for all suggested polynomial regression models. The overall optimal homogenization conditions were determined by multiple response optimization analysis to be 6,000 rpm for 7 minutes. In vitro cellular uptake of astaxanthin from the suggested individual and multiple optimized astaxanthin nanodispersions was also evaluated. The cellular uptake of astaxanthin was found to be considerably increased (by more than five times) as it became incorporated into optimum nanodispersion systems. The lack of a significant difference between predicted and experimental values confirms the suitability of the regression equations connecting the response variables studied to the independent parameters. PMID:25709435
NASA Astrophysics Data System (ADS)
Hagan, Aaron; Sawant, Amit; Folkerts, Michael; Modiri, Arezoo
2018-01-01
We report on the design, implementation and characterization of a multi-graphic processing unit (GPU) computational platform for higher-order optimization in radiotherapy treatment planning. In collaboration with a commercial vendor (Varian Medical Systems, Palo Alto, CA), a research prototype GPU-enabled Eclipse (V13.6) workstation was configured. The hardware consisted of dual 8-core Xeon processors, 256 GB RAM and four NVIDIA Tesla K80 general purpose GPUs. We demonstrate the utility of this platform for large radiotherapy optimization problems through the development and characterization of a parallelized particle swarm optimization (PSO) four dimensional (4D) intensity modulated radiation therapy (IMRT) technique. The PSO engine was coupled to the Eclipse treatment planning system via a vendor-provided scripting interface. Specific challenges addressed in this implementation were (i) data management and (ii) non-uniform memory access (NUMA). For the former, we alternated between parameters over which the computation process was parallelized. For the latter, we reduced the amount of data required to be transferred over the NUMA bridge. The datasets examined in this study were approximately 300 GB in size, including 4D computed tomography images, anatomical structure contours and dose deposition matrices. For evaluation, we created a 4D-IMRT treatment plan for one lung cancer patient and analyzed computation speed while varying several parameters (number of respiratory phases, GPUs, PSO particles, and data matrix sizes). The optimized 4D-IMRT plan enhanced sparing of organs at risk by an average reduction of 26% in maximum dose, compared to the clinical optimized IMRT plan, where the internal target volume was used. We validated our computation time analyses in two additional cases. The computation speed in our implementation did not monotonically increase with the number of GPUs. The optimal number of GPUs (five, in our study) is directly related to the hardware specifications. The optimization process took 35 min using 50 PSO particles, 25 iterations and 5 GPUs.
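A minimal particle swarm optimizer of the kind used as the search engine here can be sketched as follows. The toy quadratic objective and all hyperparameters (inertia, acceleration constants) are illustrative stand-ins for the treatment-plan cost function and the authors' settings (50 particles, 25 iterations).

# Minimal particle swarm optimization (PSO) sketch; the objective is a toy
# surrogate and the hyperparameters are illustrative, not the authors' setup.
import numpy as np

def pso(objective, dim, n_particles=50, iters=25, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))      # particle positions
    v = np.zeros_like(x)                            # particle velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Toy quadratic standing in for the treatment-plan cost function.
best_x, best_f = pso(lambda p: np.sum(p ** 2), dim=10)
print(best_f)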
Das, Tony; Mustapha, Jihad; Indes, Jeffrey; Vorhies, Robert; Beasley, Robert; Doshi, Nilesh; Adams, George L
2014-01-01
The purpose of the CONFIRM registry series was to evaluate the use of orbital atherectomy (OA) in peripheral lesions of the lower extremities, as well as to optimize the technique of OA. Methods of treating calcified arteries (historically a strong predictor of treatment failure) have improved significantly over the past decade and now include minimally invasive endovascular treatments such as OA, which offers unique versatility in modifying calcific lesions above and below the knee. A total of 3135 patients undergoing OA by more than 350 physicians at over 200 US institutions were enrolled on an "all-comers" basis, resulting in registries that provided site-reported patient demographics, ABI, Rutherford classification, co-morbidities, lesion characteristics, plaque morphology, device usage parameters, and procedural outcomes. Treatment with OA reduced pre-procedural stenosis from an average of 88% to 35%. Final residual stenosis after adjunctive treatments, typically low-pressure percutaneous transluminal angioplasty (PTA), averaged 10%. Plaque removal was most effective for severely calcified lesions and least effective for soft plaque. Shorter spin times and smaller crown sizes significantly lowered procedural complications, which included slow flow (4.4%), embolism (2.2%), and spasm (6.3%), emphasizing the importance of treatment regimens that focus on plaque modification over maximizing luminal gain. The OA technique optimization, which resulted in a change of device usage across the CONFIRM registry series, corresponded to a lower incidence of adverse events irrespective of calcium burden or co-morbidities. Copyright © 2013 The Authors. Wiley Periodicals, Inc.
Gangurde, Avinash Bhaskar; Sav, Ajay Kumar; Javeer, Sharadchandra Dagadu; Moravkar, Kailas K; Pawar, Jaywant N; Amin, Purnima D
2015-01-01
Choline bitartrate (CBT) is a vital nutrient for fetal brain development and memory function. It is hygroscopic, which causes stability-related problems during storage such as the development of a fishy odor and discoloration. A microencapsulation method was adopted to resolve the stability problem, with hydrogenated soya bean oil (HSO) used as the encapsulating agent. An industrially feasible modified extrusion-spheronization technique was selected for microencapsulation. HSO was used as the encapsulating agent, hydroxypropyl methyl cellulose E5/E15 as the binder, and microcrystalline cellulose as the spheronization aid. The formulated pellets were evaluated for parameters such as flow properties, morphological characteristics, hardness-friability index (HFI), drug content, encapsulation efficiency, and in vitro drug release. The optimized formulations were also characterized for particle size (by laser diffractometry) and by differential scanning calorimetry, powder X-ray diffractometry (PXRD), Fourier transform infrared spectroscopy, and scanning electron microscopy. The results showed that coating of 90% and 60% CBT was successful with respect to all desired evaluation parameters. The optimized formulation was kept for a 6-month stability study as per ICH guidelines, and there was no change in color, moisture content, or drug content, and no fishy odor was observed. Microencapsulated pellets of CBT using HSO as the encapsulating agent were developed using a modified extrusion-spheronization technique. The optimized formulations, CBT 90% (F5) and CBT 60% (F10), were found to be stable for 4 months and 6 months, respectively, at accelerated conditions.
A compact inflow control device for simulating flight fan noise
NASA Technical Reports Server (NTRS)
Homyak, L.; Mcardle, J. G.; Heidelberg, L. J.
1983-01-01
Inflow control devices (ICDs) of various shapes and sizes have been used to simulate in-flight fan tone noise during ground static tests. A small, simple, inexpensive ICD design was optimized from previous design and fabrication techniques. This compact, two-fan-diameter ICD exhibits satisfactory acoustic performance without causing noise attenuation or redirection. In addition, it generates no important new noise sources. Design and construction details of the compact ICD are discussed and acoustic performance test results are presented.
From 1D to 3D: Tunable Sub-10 nm Gaps in Large Area Devices.
Zhou, Ziwei; Zhao, Zhiyuan; Yu, Ye; Ai, Bin; Möhwald, Helmuth; Chiechi, Ryan C; Yang, Joel K W; Zhang, Gang
2016-04-20
Tunable sub-10 nm 1D nanogaps are fabricated based on nanoskiving. The electric field in nanogaps of different sizes is investigated theoretically and experimentally, yielding a nonmonotonic dependence and an optimized gap width (5 nm). 2D nanogap arrays are fabricated to pack denser gaps by combining surface patterning techniques. Innovatively, 3D multistory nanogaps are built via a stacking procedure, providing higher integration and a much improved electric field. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yen, T. W.; Lai, S. K., E-mail: sklai@coll.phy.ncu.edu.tw
2015-02-28
In this work, we present modifications to the well-known basin hopping (BH) optimization algorithm [D. J. Wales and J. P. Doye, J. Phys. Chem. A 101, 5111 (1997)] by incorporating the unique and specific nature of interactions among valence electrons and ions in carbon atoms, calculating the cluster's total energy with density functional tight-binding (DFTB) theory, using it to find the lowest energy structures of carbon clusters and, from these optimized atomic and electronic structures, studying their varied forms of topological transitions, which include a linear chain, a monocyclic to a polycyclic ring, and a fullerene/cage-like geometry. In this modified BH (MBH) algorithm, we define a spatial volume within which the cluster's lowest energy structure is to be searched for, and in addition introduce a cut-and-splice genetic operator to improve the search performance for the energy minimum relative to the original BH technique. The present MBH/DFTB algorithm is, therefore, characteristically distinguishable from the original BH technique commonly applied to nonmetallic and metallic clusters, technically more thorough and natural in describing the intricate couplings between valence electrons and ions in a carbon cluster, and thus theoretically sound in putting these two charged components on an equal footing. The proposed modified minimization algorithm should be more appropriate, accurate, and precise in the description of a carbon cluster. We evaluate the present algorithm, its energy-minimum searching in particular, by its optimization robustness. Specifically, we first check the MBH/DFTB technique on two representative carbon clusters of larger size, i.e., C₆₀ and C₇₂, against the popular cut-and-splice approach [D. M. Deaven and K. M. Ho, Phys. Rev. Lett. 75, 288 (1995)] that is normally combined with the genetic algorithm method for finding the cluster's energy minimum, before employing it to investigate carbon clusters in the size range C₃-C₂₄ and studying their topological transitions. An effort was also made to compare our MBH/DFTB results, re-optimized by full density functional theory (DFT) calculations, with some early DFT-based studies.
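For orientation, plain basin hopping is available in scipy and can be sketched on a surrogate cluster energy as below. The Lennard-Jones stand-in, the 8-atom size, and the step settings are assumptions; the paper's MBH/DFTB additions (search-volume restriction, cut-and-splice operator, DFTB energies) are not reproduced here.

# Hedged sketch of plain basin hopping on a toy cluster energy; the paper's
# MBH/DFTB method replaces this surrogate with DFTB total energies and adds
# a restricted search volume plus a cut-and-splice genetic operator.
import numpy as np
from scipy.optimize import basinhopping

def lj_energy(flat_coords):
    # Lennard-Jones energy as a stand-in for the DFTB total energy.
    xyz = flat_coords.reshape(-1, 3)
    d = np.linalg.norm(xyz[:, None] - xyz[None, :], axis=-1)
    iu = np.triu_indices(len(xyz), k=1)
    r = d[iu]
    return np.sum(4.0 * (r ** -12 - r ** -6))

rng = np.random.default_rng(3)
x0 = rng.uniform(-1.5, 1.5, 3 * 8)           # random 8-atom starting geometry
res = basinhopping(lj_energy, x0, niter=50, stepsize=0.5)
print(res.fun)                                # lowest energy found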
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wawrzynczyk, Dominika; Szeremeta, Janusz; Samoc, Marek
Spectrally resolved nonlinear optical properties of colloidal InP@ZnS core-shell quantum dots of various sizes were investigated with the Z-scan technique and the two-photon fluorescence excitation method using a femtosecond laser system tunable in the range from 750 nm to 1600 nm. In principle, both techniques should provide comparable results and can be used interchangeably for determination of the nonlinear optical absorption parameters, finding maximal values of the cross sections and optimizing them. We observed slight differences between the two-photon absorption cross sections measured by the two techniques and attribute them to the presence of non-radiative paths of absorption or relaxation. The largest two-photon absorption cross section σ₂, for the 4.3 nm InP@ZnS quantum dots, was 2200 GM, while the two-photon excitation action cross section σ₂Φ was found to be 682 GM at 880 nm. The properties of these cadmium-free colloidal quantum dots can be potentially useful for nonlinear bioimaging.
NASA Astrophysics Data System (ADS)
Sudhakar, P.; Sheela, K. Anitha; Ramakrishna Rao, D.; Malladi, Satyanarayana
2016-05-01
In recent years, weather modification activities have been pursued in many countries through cloud seeding techniques to facilitate increased and timely precipitation from clouds. To induce and accelerate the precipitation process, clouds are artificially seeded with suitable materials such as silver iodide, sodium chloride, or other hygroscopic materials. The success of cloud seeding can be predicted with confidence if the precipitation process, involving the aerosol, the ice-water balance, the water vapor content, and the size of the seeding material in relation to the aerosol in the cloud, is monitored in real time and optimized. A project on the enhancement of rainfall through cloud seeding is being implemented jointly with Kerala State Electricity Board Ltd., Trivandrum, Kerala, India, in the catchment areas of the reservoir of one of its hydroelectric projects. A dual-polarization lidar is being used to monitor and measure the microphysical properties, extinction coefficient, size distribution, and related parameters of the clouds. The lidar makes use of Mie, Rayleigh, and Raman scattering techniques for the various measurements proposed. The measurements with the dual-polarization lidar are carried out in real time to obtain the various parameters during cloud seeding operations. In this paper, we present the details of the multi-wavelength dual-polarization lidar being used and the methodology to monitor the various cloud parameters involved in the precipitation process. The necessary retrieval algorithms for deriving the microphysical properties of clouds, aerosol characteristics, and water vapor profiles are incorporated in a software package running under LabVIEW for online and offline analysis. Details of the simulation studies and the theoretical model developed for the optimization of the various parameters are discussed.
NASA Astrophysics Data System (ADS)
Kuramochi, Kazuki; Akiyama, Kazunori; Ikeda, Shiro; Tazaki, Fumie; Fish, Vincent L.; Pu, Hung-Yi; Asada, Keiichi; Honma, Mareki
2018-05-01
We propose a new imaging technique for interferometry using sparse modeling, utilizing two regularization terms: the ℓ1-norm and a new function named total squared variation (TSV) of the brightness distribution. First, we demonstrate that our technique may achieve a superresolution of ∼30% compared with the traditional CLEAN beam size using synthetic observations of two point sources. Second, we present simulated observations of three physically motivated static models of Sgr A* with the Event Horizon Telescope (EHT) to show the performance of the proposed techniques in greater detail. Remarkably, in both the image and gradient domains, the optimal beam size minimizing root-mean-squared errors is ≲10% of the traditional CLEAN beam size for ℓ1+TSV regularization, and non-convolved reconstructed images have smaller errors than beam-convolved reconstructed images. This indicates that TSV is well matched to the expected physical properties of the astronomical images and that the traditional post-processing technique of Gaussian convolution in interferometric imaging may not be required. We also propose a feature-extraction method to detect circular features from the image of a black hole shadow and use it to evaluate the performance of the image reconstruction. With this method and reconstructed images, the EHT can constrain the radius of the black hole shadow with an accuracy of ∼10%-20% in present simulations for Sgr A*, suggesting that the EHT would be able to provide useful independent measurements of the mass of the supermassive black hole in Sgr A* and also of another primary target, M87.
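The two regularizers can be written down compactly; the sketch below assembles an illustrative objective with a chi-square data term, an ℓ1 penalty, and TSV, where the FFT-sampled visibilities, mask, and weights are toy assumptions rather than the EHT measurement model. Unlike total variation, TSV is differentiable everywhere, which makes such an objective amenable to gradient-based solvers.

# Illustrative objective for sparse-modeling imaging: chi-square data term
# plus l1 and total-squared-variation (TSV) penalties. The Fourier sampling
# and the lambda weights are toy placeholders, not the EHT pipeline.
import numpy as np

def tsv(img):
    # Total squared variation: sum of squared neighbor differences.
    dx = np.diff(img, axis=1)
    dy = np.diff(img, axis=0)
    return np.sum(dx ** 2) + np.sum(dy ** 2)

def objective(img, vis_obs, mask, lam_l1=1e-2, lam_tsv=1e-1):
    vis_model = np.fft.fft2(img)[mask]          # sampled model visibilities
    chi2 = np.sum(np.abs(vis_model - vis_obs) ** 2)
    return chi2 + lam_l1 * np.sum(np.abs(img)) + lam_tsv * tsv(img)

rng = np.random.default_rng(4)
truth = np.zeros((32, 32)); truth[15:17, 15:17] = 1.0
mask = rng.random((32, 32)) < 0.2               # sparse (u,v) coverage
vis_obs = np.fft.fft2(truth)[mask]
print(objective(truth, vis_obs, mask))          # near the penalty floor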
An Energy-Efficient Mobile Sink-Based Unequal Clustering Mechanism for WSNs.
Gharaei, Niayesh; Abu Bakar, Kamalrulnizam; Mohd Hashim, Siti Zaiton; Hosseingholi Pourasl, Ali; Siraj, Mohammad; Darwish, Tasneem
2017-08-11
Network lifetime and energy efficiency are crucial performance metrics used to evaluate wireless sensor networks (WSNs). Decreasing and balancing the energy consumption of nodes can be employed to increase network lifetime. In cluster-based WSNs, one objective of applying clustering is to decrease the energy consumption of the network. In fact, the clustering technique will be considered effective if the energy consumed by sensor nodes decreases after applying clustering; however, this aim will not be achieved if the cluster size is not properly chosen. Therefore, in this paper, the energy consumption of nodes before clustering is considered to determine the optimal cluster size. A two-stage genetic algorithm (GA) is employed to determine the optimal interval of cluster sizes and derive the exact value from that interval. Furthermore, the energy hole is an inherent problem that leads to a remarkable decrease in the network's lifespan. This problem stems from the asynchronous energy depletion of nodes located in different layers of the network. For this reason, we propose the Circular Motion of Mobile-Sink with Varied Velocity Algorithm (CM2SV2) to balance the energy consumption of cluster heads (CHs). According to the results, these strategies largely increase the network's lifetime by decreasing the energy consumption of sensors and balancing the energy consumption among CHs.
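A toy version of the cluster-size search can be written as a simple GA. The surrogate energy model below (intra-cluster cost falling with cluster count, cluster-head cost rising) and all GA settings are assumptions, not the paper's two-stage formulation.

# Toy genetic-algorithm sketch for picking a cluster count that minimizes a
# surrogate per-round energy model; model and settings are illustrative.
import numpy as np

rng = np.random.default_rng(5)

def energy(k):
    # Placeholder network energy per round as a function of cluster count k.
    return 50.0 / k + 0.4 * k            # intra-cluster cost falls, CH cost rises

pop = rng.integers(2, 50, size=20)       # candidate cluster counts
for _ in range(40):
    fit = np.array([energy(k) for k in pop])
    parents = pop[np.argsort(fit)[:10]]              # truncation selection
    kids = rng.choice(parents, size=10)              # clone then mutate
    kids = np.clip(kids + rng.integers(-3, 4, 10), 2, 50)
    pop = np.concatenate([parents, kids])

print(pop[np.argmin([energy(k) for k in pop])])      # ~ sqrt(50/0.4) = 11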
Controlled and tunable polymer particles' production using a single microfluidic device
NASA Astrophysics Data System (ADS)
Amoyav, Benzion; Benny, Ofra
2018-04-01
Microfluidics technology offers a new platform to control liquids under flow in small volumes. The advantage of using small-scale reactions for droplet generation, along with the capacity to control the preparation parameters, makes microfluidic chips an attractive technology for optimizing encapsulation formulations. However, one drawback of this methodology is the difficulty of obtaining a wide range of droplet sizes, from sub-micron to microns, using a single chip design. Typically, droplet chips are used for micron-dimension particles, while nanoparticle synthesis requires complex chip designs (i.e., microreactors and staggered herringbone micromixers). Here, we introduce the development of a highly tunable and controlled encapsulation technique, using two polymer compositions, for generating particles ranging from micron to nano size using the same simple single microfluidic chip design. Poly(lactic-co-glycolic acid) (PLGA 50:50) or PLGA/polyethylene glycol polymeric particles were prepared with a focused-flow chip, yielding monodisperse particle batches. We show that by varying flow rate, solvent, surfactant, and polymer composition, we were able to optimize particle size and decrease the polydispersity index, using simple chip designs with no further adjustments or costs. Utilizing this platform, which offers tight tuning of particle properties, could provide an important tool for formulation development and potentially pave the way towards better precision nanomedicine.
"Optimal" Size and Schooling: A Relative Concept.
ERIC Educational Resources Information Center
Swanson, Austin D.
Issues in economies of scale and optimal school size are discussed in this paper, which seeks to explain the curvilinear nature of the educational cost curve as a function of "transaction costs" and to establish "optimal size" as a relative concept. Based on the argument that educational consolidation has facilitated diseconomies of scale, the…
Zhang, Lin; Huttin, Olivier; Marie, Pierre-Yves; Felblinger, Jacques; Beaumont, Marine; Chillou, Christian DE; Girerd, Nicolas; Mandry, Damien
2016-11-01
To compare three widely used methods for myocardial infarct (MI) sizing on late gadolinium-enhanced (LGE) magnetic resonance (MR) images: manual delineation and two semiautomated techniques (full-width at half-maximum [FWHM] and n-standard deviation [SD]). 3T phase-sensitive inversion-recovery (PSIR) LGE images of 114 patients after an acute MI (2-4 days and 6 months) were analyzed by two independent observers to determine both total and core infarct sizes (TIS/CIS). Manual delineation served as the reference for determination of optimal thresholds for the semiautomated methods after thresholding at multiple values. Reproducibility and accuracy were expressed as overall bias ± 95% limits of agreement. Mean infarct sizes by the manual method were 39.0%/24.4% for the acute MI group (TIS/CIS) and 29.7%/17.3% for the chronic MI group. The optimal thresholds (ie, providing the closest mean value to the manual method) were FWHM30% and 3SD for the TIS measurement and FWHM45% and 6SD for the CIS measurement (paired t-test; all P > 0.05). The best reproducibility was obtained using FWHM. For TIS measurement in the acute MI group, intra-/interobserver agreements, from Bland-Altman analysis, with FWHM30%, 3SD, and manual delineation were -0.02 ± 7.74%/-0.74 ± 5.52%, 0.31 ± 9.78%/2.96 ± 16.62%, and -2.12 ± 8.86%/0.18 ± 16.12%, respectively; in the chronic MI group, the corresponding values were 0.23 ± 3.5%/-2.28 ± 15.06%, -0.29 ± 10.46%/3.12 ± 13.06%, and 1.68 ± 6.52%/-2.88 ± 9.62%, respectively. A similar trend in reproducibility was obtained for CIS measurement. However, the semiautomated methods produced inconsistent results (variabilities of 24-46%) compared to manual delineation. The FWHM technique was the most reproducible method for infarct sizing in both acute and chronic MI. However, both the FWHM and n-SD methods showed limited accuracy compared to manual delineation. J. Magn. Reson. Imaging 2016;44:1206-1217. © 2016 International Society for Magnetic Resonance in Medicine.
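The two semiautomated criteria are simple to state; the sketch below applies an FWHM-style fraction-of-maximum threshold and an n-SD threshold to a synthetic image. The fractions 0.30/0.45 correspond to the FWHM30/FWHM45 settings in the study, while the image values and the remote-region choice are placeholders.

# Minimal sketch of semiautomated infarct sizing on an LGE-like array:
# FWHM-style thresholding at a fraction of the maximal signal, and the
# n-SD rule relative to remote myocardium. Data are synthetic placeholders.
import numpy as np

def fwhm_infarct_mask(myocardium, threshold_fraction=0.5):
    # FWHM50: classify pixels >= fraction of max signal as infarct;
    # FWHM30/FWHM45 from the study correspond to fractions 0.30/0.45.
    return myocardium >= threshold_fraction * myocardium.max()

def nsd_infarct_mask(myocardium, remote, n_sd=3.0):
    # n-SD method: infarct if signal exceeds remote mean + n * remote SD.
    return myocardium >= remote.mean() + n_sd * remote.std()

rng = np.random.default_rng(6)
myo = rng.normal(100, 10, (64, 64))
myo[20:30, 20:30] += 150                          # bright "infarct" patch
remote = myo[45:60, 45:60]                        # assumed remote region
print(fwhm_infarct_mask(myo, 0.30).mean() * 100)  # infarct size, % of pixels
print(nsd_infarct_mask(myo, remote, 3).mean() * 100)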
NASA Astrophysics Data System (ADS)
Zhu, ZhengXi
Nanoparticles loaded with hydrophobic components (e.g., active pharmaceutical ingredients, medical diagnostic agents, nutritional or personal care chemicals, catalysts, dyes/pigments, and substances with exceptional magnetic/optical/electronic/thermal properties) have tremendous industrial applications. The common desire is to efficiently generate nanoparticles with a desired size, size distribution, and size stability. Recently, the Flash NanoPrecipitation (FNP) technique, a fast, continuous, and easily scalable process, has been developed to efficiently generate hydrophobe-loaded nanoparticles. This dissertation extended this technique, optimized process conditions and material formulations, and gave new insights into the mechanism and kinetics of nanoparticle formation. It demonstrated successful generation of spherical beta-carotene nanoparticles with an average diameter of 50-100 nm (90 wt% of nanoparticles below 200 nm), good size stability (an average diameter maintained below 200 nm for at least one week in saline), and much higher loading (80-90 wt%) than traditional carriers such as micelles and polymersomes (typically <20 wt%). Moreover, the nanoparticles are amorphous and expected to have a high dissolution rate and bioavailability. Regarding the mechanism and kinetics of nanoparticle formation, considerable evidence supported kinetically frozen structures of the nanoparticles rather than thermodynamic equilibrium micelles, and time scales of particle formation via FNP were proposed. To optimize the material formulations, either polyelectrolytes (i.e., epsilon-polylysine, branched and linear poly(ethylene imine), and chitosan) or amphiphilic diblock copolymers (i.e., polystyrene-b-poly(ethylene glycol) (PS-b-PEG), polycaprolactone-b-poly(ethylene glycol) (PCL-b-PEG), poly(lactic acid)-b-poly(ethylene glycol) (PLA-b-PEG), and poly(lactic-co-glycolic acid)-b-poly(ethylene glycol) (PLGA-b-PEG)) were selectively screened to study nanoparticle size, distribution, and stability. The effects of the molecular weight of the polymers and of pH were also studied. Chitosan and PLGA-b-PEG best stabilized the beta-carotene nanoparticles. The solubility of the hydrophobic drug solute in the aqueous mixture was considered to dominate nanoparticle stability (i.e., size and morphology) through Ostwald ripening and recrystallization: the lower the solubility of the drug, the greater the stability of the nanoparticles. Chemically bonding drug compounds with cleavable hydrophobic moieties to form prodrugs was used to enhance their hydrophobicity and thus the nanoparticle stability, opening a generic strategy to enhance the stability of nanoparticles formed via FNP. Beta-carotene, paclitaxel, a paclitaxel prodrug, betulin, hydrocortisone, and a hydrocortisone prodrug were studied as the drugs. The solubility parameter (delta) and the octanol/water partition coefficient (LogP) provide hydrophobicity indicators for the compounds. LogP showed a good correlation with nanoparticle stability, and an empirical rule was built to conveniently predict particle stability for randomly selected drugs. To optimize the process conditions, a two-stream confined impinging jet mixer (CIJ) and a four-stream confined vortex jet mixer were used, and the particle size was studied by varying drug and polymer concentrations and flow rate (corresponding to Reynolds number (Re)). To extend the FNP technique, this dissertation demonstrated successful creation of stabilized nanoparticles by integrating into FNP an in-situ reactive coupling of a hydrophilic polymer block with a hydrophobic one. The kinetics of this fast coupling reaction were studied. This dissertation also introduced polyelectrolytes (i.e., epsilon-polylysine, poly(ethylene imine), and chitosan) into FNP to electrosterically stabilize nanoparticles.
Afify, Enas A. M. R.; Elsayed, Ibrahim; Gad, Mary K.; Mohamed, Magdy I.
2018-01-01
Dorzolamide hydrochloride is frequently administered to control the intra-ocular pressure associated with glaucoma. The aim of this study was to develop and optimize self-assembled nanostructures of dorzolamide hydrochloride and L-α-phosphatidylcholine to improve the pharmacokinetic parameters and extend the drug's pharmacological action. Self-assembled nanostructures were prepared using a modified thin-film hydration technique. The formulae compositions were designed based on a response surface statistical design. The prepared self-assembled nanostructures were characterized by testing their drug content, particle size, polydispersity index, zeta potential, partition coefficient, and release half-life and extent. The optimized formulae, having the highest drug content, zeta potential, partition coefficient, and release half-life and extent with the lowest particle size and polydispersity index, were subjected to further investigation of their physicochemical and morphological characteristics and their in vivo pharmacokinetic and pharmacodynamic profiles. The optimized formulae were prepared at pH 8.7 (F5 and F6) and composed of L-α-phosphatidylcholine and drug mixed at ratios of 1:1 and 2:1 w/w, respectively. They showed significantly higher Cmax, AUC0-24, and AUC0-∞ in the aqueous humor, with extended control over the intra-ocular pressure, when compared to the marketed product, Trusopt®. The study introduced novel and promising self-assembled formulae able to permeate a higher drug amount through the cornea and achieve a sustained pharmacological effect at the site of action. PMID:29401498
da Rosa, Hemerson S; Koetz, Mariana; Santos, Marí Castro; Jandrey, Elisa Helena Farias; Folmer, Vanderlei; Henriques, Amélia Teresinha; Mendez, Andreas Sebastian Loureiro
2018-04-01
Sida tuberculata (ST) is a Malvaceae species widely distributed in southern Brazil. In traditional medicine, ST has been employed as a hypoglycemic, hypocholesterolemic, anti-inflammatory, and antimicrobial agent. Additionally, this species is chemically characterized mainly by flavonoids, alkaloids, and phytoecdysteroids. The present work aimed to optimize the extractive technique and to validate a UHPLC method for the determination of 20-hydroxyecdysone (20HE) in ST leaves. A Box-Behnken design (BBD) was used in the method optimization. The extractive methods tested were static and dynamic maceration, ultrasound, Ultra-Turrax, and reflux. In the Box-Behnken design, three parameters were evaluated at three levels (-1, 0, +1): particle size, time, and plant:solvent ratio. In the method validation, the parameters of selectivity, specificity, linearity, limits of detection and quantification (LOD, LOQ), precision, accuracy, and robustness were evaluated. The results indicate static maceration as the better technique for maximizing the 20HE peak area in the ST extract. The optimal extraction from response surface methodology was achieved with a granulometry of 710 μm, 9 days of maceration, and a plant:solvent ratio of 1:54 (w/v). The developed UHPLC-PDA analytical method showed full viability of performance, proving to be selective, linear, precise, accurate, and robust for 20HE detection in ST leaves. The average content of 20HE was 0.56% of the dry extract. Thus, the optimization of the extractive method for ST leaves increased the concentration of 20HE in the crude extract, and a reliable method was successfully developed according to validation requirements and in agreement with current legislation. Copyright © 2018 Elsevier Inc. All rights reserved.
SU-E-I-43: Pediatric CT Dose and Image Quality Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, G; Singh, R
2014-06-01
Purpose: To design an approach to optimize radiation dose and image quality for pediatric CT imaging, and to evaluate its expected performance. Methods: A methodology was designed to quantify relative image quality as a function of CT image acquisition parameters. Image contrast and image noise were used to indicate the expected conspicuity of objects, and a wide-cone system was used to minimize scan time for motion avoidance. A decision framework was designed to select acquisition parameters as a weighted combination of image quality and dose. Phantom tests were used to acquire images at multiple techniques to demonstrate expected contrast, noise, and dose. Anthropomorphic phantoms with contrast inserts were imaged on a 160 mm CT system with tube voltage capabilities as low as 70 kVp. Previously acquired clinical images were used in conjunction with simulation tools to emulate images at different tube voltages and currents to assess human observer preferences. Results: Examination of image contrast, noise, dose, and tube/generator capabilities indicates a clinical-task- and object-size-dependent optimization. Phantom experiments confirm that system modeling can be used to achieve the desired image quality and noise performance. Observer studies indicate that clinical utilization of this optimization requires a modified approach to achieve the desired performance. Conclusion: This work indicates the potential to optimize radiation dose and image quality for pediatric CT imaging. In addition, the methodology can be used in an automated parameter selection feature that can suggest techniques given a limited number of user inputs. G Stevens and R Singh are employees of GE Healthcare.
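The decision framework's weighted trade-off can be illustrated with toy surrogate models. In the sketch below, the CNR and dose models, the weights, and the candidate technique grid are invented placeholders, not the vendor's implementation.

# Hedged sketch of the parameter-selection idea: score candidate (kVp, mAs)
# technique pairs by a weighted combination of an image-quality surrogate
# (contrast-to-noise ratio) and dose, then pick the best. All models and
# constants are placeholders.
import itertools
import numpy as np

def cnr(kvp, mas):
    # Toy CNR model: contrast falls with kVp, noise falls with dose.
    return (120.0 / kvp) * np.sqrt(mas)

def dose(kvp, mas):
    # Toy dose model, roughly proportional to mAs and kVp^2.
    return mas * (kvp / 100.0) ** 2

w_iq, w_dose = 1.0, 0.05
candidates = itertools.product([70, 80, 100, 120], [20, 50, 100, 200])
best = max(candidates, key=lambda p: w_iq * cnr(*p) - w_dose * dose(*p))
print("selected technique (kVp, mAs):", best)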
Ahmed, Tarek A
2016-01-01
In this study, optimized freeze-dried finasteride nanoparticles (NPs) were prepared from drug nanosuspension formulation that was developed using the bottom-up technique. The effects of four formulation and processing variables that affect the particle size and solubility enhancement of the NPs were explored using the response surface optimization design. The optimized formulation was morphologically characterized using transmission electron microscopy (TEM). Physicochemical interaction among the studied components was investigated. Crystalline change was investigated using X-ray powder diffraction (XRPD). Crystal growth of the freeze-dried NPs was compared to the corresponding aqueous drug nanosuspension. Freeze-dried NPs formulation was subsequently loaded into hard gelatin capsules that were examined for in vitro dissolution and pharmacokinetic behavior. Results revealed that in most of the studied variables, some of the quadratic and interaction effects had a significant effect on the studied responses. TEM image illustrated homogeneity and shape of the prepared NPs. No interaction among components was noticed. XRPD confirmed crystalline state change in the optimized NPs. An enhancement in the dissolution rate of more than 2.5 times from capsules filled with optimum drug NPs, when compared to capsules filled with pure drug, was obtained. Crystal growth, due to Ostwald ripening phenomenon and positive Gibbs free energy, was reduced following lyophilization of the nanosuspension formulation. Pharmacokinetic parameters from drug NPs were superior to that of pure drug and drug microparticles. In conclusion, freeze-dried NPs based on drug nanosuspension formulation is a successful technique in enhancing stability, solubility, and in vitro dissolution of poorly water-soluble drugs with possible impact on the drug bioavailability. PMID:26893559
IESIP - AN IMPROVED EXPLORATORY SEARCH TECHNIQUE FOR PURE INTEGER LINEAR PROGRAMMING PROBLEMS
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1994-01-01
IESIP, an Improved Exploratory Search Technique for Pure Integer Linear Programming Problems, addresses the problem of optimizing an objective function of one or more variables subject to a set of confining functions or constraints by a method called discrete optimization or integer programming. Integer programming is based on a specific form of the general linear programming problem in which all variables in the objective function and all variables in the constraints are integers. While more difficult, integer programming is required for accuracy when modeling systems with small numbers of components such as the distribution of goods, machine scheduling, and production scheduling. IESIP establishes a new methodology for solving pure integer programming problems by utilizing a modified version of the univariate exploratory move developed by Robert Hooke and T.A. Jeeves. IESIP also takes some of its technique from the greedy procedure and the idea of unit neighborhoods. A rounding scheme uses the continuous solution found by traditional methods (simplex or other suitable technique) and creates a feasible integer starting point. The Hooke and Jeeves exploratory search is modified to accommodate integers and constraints and is then employed to determine an optimal integer solution from the feasible starting solution. The user-friendly IESIP allows for rapid solution of problems up to 10 variables in size (limited by DOS allocation). Sample problems compare IESIP solutions with the traditional branch-and-bound approach. IESIP is written in Borland's TURBO Pascal for IBM PC series computers and compatibles running DOS. Source code and an executable are provided. The main memory requirement for execution is 25K. This program is available on a 5.25 inch 360K MS DOS format diskette. IESIP was developed in 1990. IBM is a trademark of International Business Machines. TURBO Pascal is registered by Borland International.
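The core loop, rounding a continuous optimum and then making unit exploratory moves, can be sketched as follows. The two-variable model, its constraints, and the given rounded start are illustrative, and the full method's pattern moves and greedy elements are omitted, so the sketch only reaches a locally optimal integer point.

# Hedged sketch of the IESIP idea: round a continuous solution to a feasible
# integer point, then run a Hooke-Jeeves-style exploratory search over unit
# moves. The tiny model below is illustrative only.
import numpy as np

c = np.array([5.0, 4.0])                  # maximize c @ x
A = np.array([[6.0, 4.0], [1.0, 2.0]])    # subject to A @ x <= b, x >= 0, integer
b = np.array([24.0, 6.0])

def feasible(x):
    return np.all(x >= 0) and np.all(A @ x <= b)

x = np.floor(np.array([3.0, 1.5]))        # rounded continuous optimum (given)
assert feasible(x)

improved = True
while improved:                           # unit-neighborhood exploratory moves
    improved = False
    for i in range(len(x)):
        for step in (+1, -1):
            trial = x.copy(); trial[i] += step
            if feasible(trial) and c @ trial > c @ x:
                x, improved = trial, True

print(x, c @ x)                           # best integer point found by unit moves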
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okada, Shusuke, E-mail: shusuke-okada@aist.go.jp; Takagi, Kenta; Ozaki, Kimihiro
Submicron-sized Sm₂Fe₁₇ powder samples were fabricated by a non-pulverizing process through reduction-diffusion of precursors prepared by a wet-chemical technique. Three precursors having different morphologies, which were micron-sized porous Sm-Fe oxide-impregnated iron nitrate, acicular goethite-impregnated samarium nitrate, and a conventional Sm-Fe coprecipitate, were prepared and subjected to hydrogen reduction and reduction-diffusion treatment to clarify whether these precursors could be converted to Sm₂Fe₁₇ without impurity phases and which precursor is the most attractive for producing submicron-sized Sm₂Fe₁₇ powder. As a result, all three precursors were successfully converted to Sm₂Fe₁₇ powders without impurity phases, and the synthesis route using iron-oxide particle-impregnated samarium oxide was revealed to have the greatest potential among the three routes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Vega, F F; Cantu-Paz, E; Lopez, J I
The population size of genetic algorithms (GAs) affects the quality of the solutions and the time required to find them. While progress has been made in estimating the population sizes required to reach a desired solution quality for certain problems, in practice the sizing of populations is still usually performed by trial and error. These trials might lead to a population that is large enough to reach a satisfactory solution, but there may still be opportunities to reduce the computational cost by reducing the size of the population. This paper presents a technique called plague that periodically removes a number of individuals from the population as the GA executes. Recently, the usefulness of the plague has been demonstrated for genetic programming. The objective of this paper is to extend the study of plagues to genetic algorithms. We experiment with deceptive trap functions, a tunably difficult problem for GAs, and the experiments show that plagues can save computational time while maintaining solution quality and reliability.
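A minimal plague operator drops a few of the worst individuals every few generations while a plain GA runs. In the sketch below, run on OneMax rather than trap functions, the culling rate, the population floor, and the GA settings are illustrative assumptions.

# Minimal sketch of the "plague" operator: a plain GA on OneMax where the
# worst individuals are periodically removed, shrinking the population
# (and its cost) over time. Rates and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(7)
n_bits, pop_size = 40, 120
pop = rng.integers(0, 2, (pop_size, n_bits))

for gen in range(60):
    fit = pop.sum(axis=1)                         # OneMax fitness
    pop = pop[np.argsort(fit)[::-1]]              # sort best-first
    if gen % 5 == 4 and len(pop) > 30:            # plague: cull 10 worst
        pop = pop[:-10]
    parents = pop[: len(pop) // 2]
    kids = parents[rng.integers(0, len(parents), len(pop) - len(parents))].copy()
    flip = rng.random(kids.shape) < 1.0 / n_bits  # bit-flip mutation
    kids[flip] ^= 1
    pop = np.vstack([parents, kids])

print(len(pop), pop.sum(axis=1).max())            # shrunken pop, best fitness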
Malhotra, Rajesh; Gaba, Sahil; Wahal, Naman; Kumar, Vijay; Srivastava, Deep N; Pandit, Hemant
2018-02-28
Oxford unicompartmental knee replacement (OUKR) has shown excellent long-term clinical outcomes as well as implant survival when used for the correct indications with optimal surgical technique. Anteromedial osteoarthritis is highly prevalent in Indian patients, and OUKR is the ideal treatment option in such cases. Uncertainty prevails about the best method to determine femoral component size in OUKR. Preoperative templating has been shown to be inaccurate, while height- and gender-based guidelines based on European populations might not apply to Indian patients. The Microplasty instrumentation, introduced in 2012, added the sizing spoon, which has the dual function of femoral component sizing and determining the level of the tibia cut. We aimed to check the accuracy of the sizing spoon and to determine whether the present guidelines are appropriate for use in Indian patients. A total of 130 consecutive Oxford mobile-bearing medial cemented UKRs performed using the Microplasty instrumentation were included. The ideal femoral component size for each knee was recorded by looking for overhang and underhang on the post-operative lateral knee radiograph. The accuracy of the previous guidelines was determined by applying them to our study population. Previously published guidelines (which were based on Western populations) proved accurate in only 37% of cases. Hence, based on the demographics of our study population, we formulated modified height- and gender-based guidelines to better suit the Indian population. The accuracy of the modified guidelines was estimated to be 74%. The overall accuracy of the sizing spoon (75%), when used as an intraoperative guide, was similar to that of the modified guidelines. Existing guidelines for femoral component sizing do not work in Indian patients. Modified guidelines and the intraoperative spoon should be used to choose the optimal implant size when performing OUKR in Indian patients.
Tian, Mingliang; Xu, Shengyong; Wang, Jinguo; Kumar, Nitesh; Wertz, Eric; Li, Qi; Campbell, Paul M; Chan, Moses H W; Mallouk, Thomas E
2005-04-01
A simple method for penetrating the barrier layer of an anodic aluminum oxide (AAO) film and for detaching the AAO film from residual Al foil was developed by reversing the bias voltage in situ after the anodization process is completed. With this technique, we have been able to obtain large pieces of free-standing AAO membranes with regular pore sizes of sub-10 nm. By combining Ar ion milling and wetting enhancement processes, Au nanowires were grown in the sub-10 nm pores of the AAO films. Further scaling down of the pore size and extension to the deposition of nanowires and nanotubes of materials other than Au should be possible by further optimizing this procedure.
NASA Astrophysics Data System (ADS)
Goncharov, K. A.; Denisov, I. A.
2017-10-01
The article considers how the size of the air gap between the linear motor elements affects the stability of the traction drive in the trolley movement mechanism of a bridge-type crane. The main factors affecting the air gap size and the causes of their occurrence are described. A technique for calculating the magnitude of air gap variation arising from the overall deformation of the crane's metal structure is described. Recommendations on the need to install additional equipment for load trolleys of various designs are given. Optimal values of the trolley base length are proposed; observing these values ensures normal operation of the traction drive.
Experimental study of acoustic agglomeration and fragmentation on coal-fired ash
NASA Astrophysics Data System (ADS)
Shen, Guoqing; Huang, Xiaoyu; He, Chunlong; Zhang, Shiping; An, Liansuo; Wang, Liang; Chen, Yanqiao; Li, Yongsheng
2018-02-01
Inhalable particles, a major component of air pollution, and especially fine particles, do great harm to the human body owing to their small size and their absorption of hazardous components. However, the removal efficiency of current particle-filtering devices is low. Acoustic agglomeration is considered a very effective pretreatment technique for removing such particles: fine particles collide, agglomerate, and grow in the sound field, after which they can easily be removed by conventional particle-removal devices. In this paper, the agglomeration and fragmentation of three different kinds of particles with different size distributions are studied experimentally in a sound field. An optimal frequency of 1200 Hz is found for the different particles. The agglomeration efficiency of inhalable particles increases with increasing SPL for the unimodal particles with diameters less than 10 μm. For the bimodal particles, the optimal SPLs are 115 and 120 dB, with agglomeration efficiencies of 25% and 55%, respectively. Considerable agglomeration effectiveness is obtained only in a narrow SPL range, and it decreases significantly beyond that range owing to particle fragmentation.
Protein docking by the interface structure similarity: how much structure is needed?
Sinha, Rohita; Kundrotas, Petras J; Vakser, Ilya A
2012-01-01
The increasing availability of co-crystallized protein-protein complexes provides an opportunity to use template-based modeling for protein-protein docking. Structure alignment techniques are useful in detection of remote target-template similarities. The size of the structure involved in the alignment is important for the success of modeling. This paper describes a systematic large-scale study to find the optimal definition/size of the interfaces for structure alignment-based docking applications. The results showed that structural areas corresponding to cutoff values <12 Å across the interface inadequately represent structural details of the interfaces. With increases of the cutoff beyond 12 Å, the success rate for the benchmark set of 99 protein complexes did not increase significantly for higher-accuracy models and decreased for lower-accuracy models. The 12 Å cutoff was optimal in our interface alignment-based docking, and a likely best choice for large-scale (e.g., on the scale of the entire genome) applications to protein interaction networks. The results provide guidelines for docking approaches, including high-throughput applications to modeled structures.
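To make the cutoff definition concrete, the sketch below extracts an interface at a given distance cutoff from the atomic coordinates of two docking partners. It is a minimal illustration of the idea, not the authors' alignment pipeline; the array names and the use of plain NumPy are assumptions.

```python
import numpy as np

def interface_atoms(coords_a, coords_b, cutoff=12.0):
    """Mark atoms of chains A and B lying within `cutoff` angstroms of
    any atom of the partner chain -- one common way to define a docking
    interface (12 A is the optimum reported above)."""
    # pairwise distances between every atom of A and every atom of B
    d = np.linalg.norm(coords_a[:, None, :] - coords_b[None, :, :], axis=-1)
    return (d < cutoff).any(axis=1), (d < cutoff).any(axis=0)

# usage with stand-in coordinates (N x 3 arrays in angstroms)
rng = np.random.default_rng(0)
a, b = rng.uniform(0, 50, (200, 3)), rng.uniform(0, 50, (180, 3))
mask_a, mask_b = interface_atoms(a, b)
print(mask_a.sum(), "interface atoms in A;", mask_b.sum(), "in B")
```

Larger cutoffs sweep in more of the monomer surface, which matches the trade-off reported above: more structure helps remote template detection up to a point, after which it no longer improves the higher-accuracy models.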
NASA Astrophysics Data System (ADS)
Lu, Xuekun; Taiwo, Oluwadamilola O.; Bertei, Antonio; Li, Tao; Li, Kang; Brett, Dan J. L.; Shearing, Paul R.
2017-11-01
Effective microstructural properties are critical in determining the electrochemical performance of solid oxide fuel cells (SOFCs), particularly when operating at high current densities. A novel tubular SOFC anode with a hierarchical microstructure, composed of self-organized micro-channels and sponge-like regions, has been fabricated by a phase inversion technique to mitigate concentration losses. However, since pore sizes span over two orders of magnitude, the determination of the effective transport parameters using image-based techniques remains challenging. Pioneering steps are made in this study to characterize and optimize the microstructure by coupling multi-length scale 3D tomography and modeling. The results conclusively show that embedding finger-like micro-channels into the tubular anode can improve the mass transport by 250% and the permeability by 2-3 orders of magnitude. Our parametric study shows that increasing the porosity in the spongy layer beyond 10% enhances the effective transport parameters of the spongy layer at an exponential rate, but linearly for the full anode. For the first time, local and global mass transport properties are correlated to the microstructure, which is of wide interest for rationalizing the design optimization of SOFC electrodes and more generally for hierarchical materials in batteries and membranes.
Optimization of ultrahigh-speed multiplex PCR for forensic analysis.
Gibson-Daw, Georgiana; Crenshaw, Karin; McCord, Bruce
2018-01-01
In this paper, we demonstrate the design and optimization of an ultrafast PCR amplification technique, used with a seven-locus multiplex that is compatible with conventional capillary electrophoresis systems as well as newer microfluidic chip devices. The procedure involves the use of a high-speed polymerase and a rapid cycling protocol to permit multiplex PCR amplification of forensic short tandem repeat loci in 6.5 min. We describe the selection and optimization of master mix reagents such as enzyme, buffer, MgCl2, and dNTPs, as well as primer ratios, total volume, and cycle conditions, in order to get the best profile in the shortest time possible. Sensitivity and reproducibility studies are also described. The amplification process utilizes a small high-speed thermocycler and compact laptop, making it portable and potentially useful for rapid, inexpensive on-site genotyping. The seven loci of the multiplex were taken from conventional STR genotyping kits and selected for their size and lack of overlap. Analysis was performed using conventional capillary electrophoresis and microfluidics with fluorescent detection. Overall, this technique provides a more rapid method for sample screening of suspects and victims. Graphical abstract: Rapid amplification of forensic DNA using high-speed thermal cycling followed by capillary or microfluidic electrophoresis.
Space Instrument Optimization by Implementing of Generic Three Bodies Circular Restricted Problem
NASA Astrophysics Data System (ADS)
Nejat, Cyrus
2011-01-01
In this study, the main discussion emphasizes spacecraft operation, with a concentration on stationary points in space. To achieve these objectives, the circular restricted problem was solved for selected approaches. The equations of motion of the three-body restricted problem were shown to apply in cases other than Lagrange's (1736-1813 A.D.) achievements, by means of the proposed CN (Cyrus Nejat) theorem along with appropriate comments. In addition to the five Lagrange points, two other points, CN1 and CN2, were found to be unstable equilibrium points at a very large distance with respect to the Lagrange points, but stable at infinity. A very interesting simulation of the Milky Way Galaxy and the Andromeda Galaxy was created to find the Lagrange points, CN points (Cyrus Nejat points), and CN lines (Cyrus Nejat lines). The equations of motion were rearranged, by means of a decoupling concept, in such a way that the transfer trajectory would be conical. The main objective was to make a halo orbit transfer about the CN lines. The author therefore proposes that the corresponding sizing designs, developed with optimization techniques, be considered in future approaches, since optimization techniques are effective procedures for searching for the most ideal response of a system.
NASA Astrophysics Data System (ADS)
Pandya, Samir; Tandel, Digisha; Chodavadiya, Nisarg
2018-05-01
CdS is one of the most important compounds in the II-VI group of semiconductors, with numerous applications in nanoparticle and nanocrystalline form. Semiconductor nanoparticles (also known as quantum dots) belong to a state of matter in the transition region between molecules and solids, and have attracted a great deal of attention because of their unique electrical and optical properties compared to bulk materials. Besides optoelectronics, the nanocrystalline form is used mostly in catalysis and fluid technology. With these observations in mind, the presented work was carried out on nanocrystalline material preparation. In the present work, CdS nanocrystalline powder was synthesized by a simple and cost-effective chemical technique to grow cadmium sulphide (CdS) nanoparticles at 200 °C with different concentrations of cadmium. The synthesis parameters were optimized. The synthesized powder was structurally characterized by X-ray diffraction and a particle size analyzer. In the XRD analysis, microstructural parameters such as lattice strain, dislocation density and crystallite size were analysed. The broadened diffraction peaks indicated nanocrystalline particles of the material. In addition, the size of the prepared particles was measured by a particle size analyzer; the results show an average CdS particle size ranging from 80 to 100 nm. Overall, this work can be very useful for the synthesis of nanocrystalline CdS powder.
NASA Astrophysics Data System (ADS)
Lu, Siqi; Wang, Xiaorong; Wu, Junyong
2018-01-01
The paper presents a method, based on a data-driven K-means clustering algorithm, to generate planning scenarios for the location and size planning of distributed photovoltaic (PV) units in the network. Taking the power losses of the network, the installation and maintenance costs of distributed PV, the profit of distributed PV and the voltage offset as objectives, and the locations and sizes of distributed PV as decision variables, the Pareto optimal front is obtained through a self-adaptive genetic algorithm (GA), and solutions are ranked by the technique for order preference by similarity to an ideal solution (TOPSIS). Finally, the planning schemes at the top of the ranking list are selected according to different planning emphases after detailed analysis. The proposed method is applied to a 10-kV distribution network in Gansu Province, China, and the results are discussed.
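For readers unfamiliar with TOPSIS, the sketch below shows the ranking step on a small Pareto set. It is a generic illustration under assumed objective values and weights, not the authors' planning code.

```python
import numpy as np

def topsis(X, weights, benefit):
    """Rank alternatives (rows of X) by closeness to the ideal solution.
    X: m x n decision matrix; weights: n criterion weights (sum to 1);
    benefit: True where larger is better (PV profit), False where
    smaller is better (losses, costs, voltage offset)."""
    V = X / np.linalg.norm(X, axis=0) * weights           # normalize, weight
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)             # distance to ideal
    d_neg = np.linalg.norm(V - anti, axis=1)              # distance to anti-ideal
    closeness = d_neg / (d_pos + d_neg)                   # higher is better
    return np.argsort(-closeness), closeness

# four Pareto solutions x four objectives: losses, cost, profit, voltage offset
X = np.array([[1.2, 300.0, 50.0, 0.04],
              [1.0, 340.0, 55.0, 0.05],
              [1.5, 280.0, 48.0, 0.03],
              [1.1, 320.0, 60.0, 0.06]])
order, c = topsis(X, weights=np.array([0.3, 0.2, 0.3, 0.2]),
                  benefit=np.array([False, False, True, False]))
print(order, np.round(c, 3))
```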
NASA Astrophysics Data System (ADS)
Charron, Luc; Harmer, Andrea; Lilge, Lothar
2005-09-01
A technique to produce fluorescent cell phantom standards based on calcium alginate microspheres with encapsulated fluorescein-labeled dextrans is presented. An electrostatic ionotropic gelation method is used to create the microspheres, which are then exposed to an encapsulation method using poly-l-lysine to trap the dextrans inside. Both procedures were examined in detail to find the optimal parameters producing cell phantoms meeting our requirements. Size distributions favoring 10-20 μm microspheres were obtained by varying the high-voltage and needle-size parameters; typical size distributions of the samples were centered at 150 μm diameter. Neither the molecular weight nor the charge of the dextrans had a significant effect on their retention in the microspheres, though anionic dextrans were chosen to help in future capillary electrophoresis work. Increasing the exposure time of the microspheres to the poly-l-lysine solution decreased the leakage rates of fluorescein-labeled dextrans.
Luo, Dehui; Wan, Xiang; Liu, Jiming; Tong, Tiejun
2018-06-01
The era of big data is coming, and evidence-based medicine is attracting increasing attention as a way to improve decision making in medical practice by integrating evidence from well designed and conducted clinical research. Meta-analysis is a statistical technique widely used in evidence-based medicine for analytically combining the findings from independent clinical trials to provide an overall estimate of a treatment's effectiveness. The sample mean and standard deviation are two commonly used statistics in meta-analysis, but some trials report the median, the minimum and maximum values, or sometimes the first and third quartiles instead. Thus, to pool results in a consistent format, researchers need to transform that information back to the sample mean and standard deviation. In this article, we investigate the optimal estimation of the sample mean for meta-analysis from both theoretical and empirical perspectives. A major drawback in the literature is that the sample size, despite its importance, is either ignored or used in a stepwise but somewhat arbitrary manner, e.g. in the famous method proposed by Hozo et al. We solve this issue by incorporating the sample size in a smoothly changing weight in the estimators to reach the optimal estimation. Our proposed estimators not only improve on the existing ones significantly but also share the same virtue of simplicity. The real data application indicates that our proposed estimators can serve as "rules of thumb" and will be widely applied in evidence-based medicine.
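As a concrete illustration of the size-dependent weighting idea, the sketch below estimates the sample mean from a reported minimum, median and maximum; the weight 4/(4 + n^0.75) follows the smoothly changing form proposed in this line of work, though readers should consult the paper for the exact estimators in all reporting scenarios.

```python
def mean_from_min_med_max(a, m, b, n):
    """Estimate the sample mean from minimum (a), median (m), maximum (b)
    and sample size (n).  The weight on the extremes shrinks smoothly as
    n grows, unlike the classic Hozo et al. rule (a + 2m + b)/4, which
    ignores n entirely."""
    w = 4.0 / (4.0 + n ** 0.75)   # assumed form of the smooth weight
    return w * (a + b) / 2.0 + (1.0 - w) * m

# a skewed toy example: the estimate moves toward the median as n grows
for n in (10, 50, 250):
    print(n, round(mean_from_min_med_max(2.0, 10.0, 30.0, n), 2))
```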
A generalized sizing method for revolutionary concepts under probabilistic design constraints
NASA Astrophysics Data System (ADS)
Nam, Taewoo
Internal combustion (IC) engines that consume hydrocarbon fuels have dominated the propulsion systems of air-vehicles for the first century of aviation. In recent years, however, growing concern over rapid climate changes and national energy security has galvanized the aerospace community into delving into new alternatives that could challenge the dominance of the IC engine. Nevertheless, traditional aircraft sizing methods have significant shortcomings for the design of such unconventionally powered aircraft. First, the methods are specialized for aircraft powered by IC engines, and thus are not flexible enough to assess revolutionary propulsion concepts that produce propulsive thrust through a completely different energy conversion process. Another deficiency associated with the traditional methods is that a user of these methods must rely heavily on experts' experience and advice for determining appropriate design margins. However, the introduction of revolutionary propulsion systems and energy sources is very likely to entail an unconventional aircraft configuration, which inexorably disqualifies the conjecture of such "connoisseurs" as a means of risk management. Motivated by such deficiencies, this dissertation aims at advancing two aspects of aircraft sizing: (1) to develop a generalized aircraft sizing formulation applicable to a wide range of unconventionally powered aircraft concepts and (2) to formulate a probabilistic optimization technique that is able to quantify appropriate design margins that are tailored towards the level of risk deemed acceptable to a decision maker. A more generalized aircraft sizing formulation, named the Architecture Independent Aircraft Sizing Method (AIASM), was developed for sizing revolutionary aircraft powered by alternative energy sources by modifying several assumptions of the traditional aircraft sizing method. Along with advances in deterministic aircraft sizing, a non-deterministic sizing technique, named the Probabilistic Aircraft Sizing Method (PASM), was developed. The method allows one to quantify adequate design margins to account for the various sources of uncertainty via the application of the chance-constrained programming (CCP) strategy to AIASM. In this way, PASM can also provide insights into a good compromise between cost and safety.
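The chance-constrained idea at the heart of PASM can be sketched compactly: replace a hard constraint g(x) <= 0 that depends on uncertain parameters by the requirement that it holds with probability 1 - alpha. The toy model below is not AIASM or PASM; the performance relation, parameter distribution and all numbers are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
theta = rng.normal(1.0, 0.1, 2000)       # uncertain technology factor (assumed)

def chance_constraint(x, alpha=0.05):
    # toy sizing relation: feasible when g = 0.6*theta - x/100 <= 0;
    # CCP demands the empirical (1-alpha)-quantile of g be non-positive,
    # so the design margin is set by the distribution tail, not by expert judgment
    g = 0.6 * theta - x[0] / 100.0
    return -np.quantile(g, 1 - alpha)    # >= 0 means constraint satisfied

res = minimize(lambda x: x[0], x0=[80.0],    # minimize a gross-weight proxy
               constraints=[{"type": "ineq", "fun": chance_constraint}])
print(res.x)   # the sized value carries a margin against the 95th percentile
```

The margin thus emerges from the acceptable risk level alpha rather than from a connoisseur's rule of thumb, which is the essence of the probabilistic sizing described above.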
Ooi, Chia Huey; Chetty, Madhu; Teng, Shyh Wei
2006-06-23
Due to the large number of genes in a typical microarray dataset, feature selection looks set to play an important role in reducing noise and computational cost in gene expression-based tissue classification while improving accuracy at the same time. Surprisingly, this does not appear to be the case for all multiclass microarray datasets. The reason is that many feature selection techniques applied on microarray datasets are either rank-based and hence do not take into account correlations between genes, or are wrapper-based, which require high computational cost, and often yield difficult-to-reproduce results. In studies where correlations between genes are considered, attempts to establish the merit of the proposed techniques are hampered by evaluation procedures which are less than meticulous, resulting in overly optimistic estimates of accuracy. We present two realistically evaluated correlation-based feature selection techniques which incorporate, in addition to the two existing criteria involved in forming a predictor set (relevance and redundancy), a third criterion called the degree of differential prioritization (DDP). DDP functions as a parameter to strike the balance between relevance and redundancy, providing our techniques with the novel ability to differentially prioritize the optimization of relevance against redundancy (and vice versa). This ability proves useful in producing optimal classification accuracy while using reasonably small predictor set sizes for nine well-known multiclass microarray datasets. For multiclass microarray datasets, especially the GCM and NCI60 datasets, DDP enables our filter-based techniques to produce accuracies better than those reported in previous studies which employed similarly realistic evaluation procedures.
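A minimal sketch of how a DDP-style score might weigh relevance against antiredundancy for a candidate predictor set follows; the exact scoring functions in the paper differ in detail, so treat the formula and names here as illustrative assumptions.

```python
import numpy as np

def ddp_score(relevance, abs_corr, subset, alpha=0.5):
    """Score a predictor set as relevance**alpha * antiredundancy**(1-alpha).
    alpha (the DDP) near 1 prioritizes relevance; near 0, low redundancy.
    relevance: per-gene relevance to the class, scaled to (0, 1];
    abs_corr: gene-gene absolute correlation matrix."""
    idx = np.asarray(subset)
    V = relevance[idx].mean()                           # mean relevance
    sub = abs_corr[np.ix_(idx, idx)]
    n = len(idx)
    U = 1.0 - (sub.sum() - n) / max(n * (n - 1), 1)     # 1 - mean off-diag corr
    return (V ** alpha) * (U ** (1.0 - alpha))
```

Greedy forward selection with this score, sweeping alpha, would then trace out the relevance-redundancy trade-off that the DDP parameter controls.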
Tschauner, Sebastian; Marterer, Robert; Gübitz, Michael; Kalmar, Peter I; Talakic, Emina; Weissensteiner, Sabine; Sorantin, Erich
2016-02-01
Accurate collimation helps to reduce unnecessary irradiation and improves radiographic image quality, which is especially important in the radiosensitive paediatric population. For AP/PA chest radiographs in children, a minimal field size (MinFS) from "just above the lung apices" to "T12/L1" with age-dependent tolerance is suggested by the 1996 European Commission (EC) guidelines, which were examined qualitatively and quantitatively at a paediatric radiology division. Five hundred ninety-eight unprocessed chest X-rays (45% boys, 55% girls; mean age 3.9 years, range 0-18 years) were analysed with a self-developed tool. Qualitative standards were assessed based on the EC guidelines, as well as the overexposed field size and needlessly irradiated tissue compared to the MinFS. While qualitative guideline recommendations were satisfied, mean overexposure of +45.1 ± 18.9% (range +10.2% to +107.9%) and tissue overexposure of +33.3 ± 13.3% were found. Only 4% (26/598) of the examined X-rays completely fulfilled the EC guidelines. This study presents a new chest radiography quality control tool which allows assessment of field sizes, distances, overexposures and quality parameters based on the EC guidelines. Utilising this tool, we detected inadequate field sizes, inspiration depths, and patient positioning. Furthermore, some debatable EC guideline aspects were revealed. • European Guidelines on X-ray quality recommend exposed field sizes for common examinations. • The major failing in paediatric radiographic imaging techniques is inappropriate field size. • Optimal handling of radiographic units can reduce radiation exposure to paediatric patients. • Constant quality control helps ensure optimal chest radiographic image acquisition in children.
Size-selected Ag nanoparticles with five-fold symmetry.
Gracia-Pinilla, Miguelángel; Ferrer, Domingo; Mejía-Rosales, Sergio; Pérez-Tijerina, Eduardo
2009-05-15
Silver nanoparticles were synthesized using the inert gas aggregation technique. We found the optimal experimental conditions to synthesize nanoparticles at different sizes: 1.3 ± 0.2, 1.7 ± 0.3, 2.5 ± 0.4, 3.7 ± 0.4, 4.5 ± 0.9, and 5.5 ± 0.3 nm. We were able to investigate the dependence of the size of the nanoparticles on the synthesis parameters. Our data suggest that the aggregation of clusters (dimers, trimers, etc.) in the active zone of the nanocluster source is the predominant physical mechanism for the formation of the nanoparticles. Our experiments were carried out in conditions that kept the density of nanoparticles low, so that the formation of large nanoparticles by coalescence processes was avoided. In order to preserve the structural and morphological properties, the impact energy of the clusters landing on the substrate was controlled such that the acceleration energy of the nanoparticles was around 0.1 eV/atom, ensuring soft-landing deposition. High-resolution transmission electron microscopy images showed that the nanoparticles were icosahedral in shape, preferentially oriented with a five-fold axis perpendicular to the substrate surface. Our results show that synthesis by the inert gas aggregation technique is a very promising alternative for producing metal nanoparticles when control of both size and shape is critical for the development of practical applications.
Sol-gel antireflective spin-coating process for large-size shielding windows
NASA Astrophysics Data System (ADS)
Belleville, Philippe F.; Prene, Philippe; Mennechez, Francoise; Bouigeon, Christian
2002-10-01
Interest in antireflective coatings applied to large-area glass components is growing every day for potential applications such as building or shop windows. Today, because of the use of large-size components, the sol-gel process is a competitive route for antireflective coating mass production. The dip-coating technique commonly used for liquid deposition implies a safety hazard due to coating-solution handling and storage when large amounts of highly flammable solvents are used. Spin-coating, on the other hand, is a low-consumption liquid-deposition technique. Although it is mainly used to coat small circular substrates, we have developed a spin-coating machine able to coat large rectangular windows (up to 1 x 1.7 m2). Both the solutions and the coating conditions have been optimized to deposit optical layers with accurate and uniform thickness and to strongly limit edge effects. An experimental single-layer antireflective coating deposition process for large-area shielding windows (1000 x 1700 x 20 mm3) is described. Results show that the as-developed process can produce low specular reflection values (down to 1% per side) on white-glass windows over the visible range (460-750 nm). A low-temperature curing process (120°C) used after sol-gel deposition enables the antireflective coating to meet the abrasion-resistance requirements of the US-MIL-C-0675C moderate abrasion test.
Impact of Company Size on Manufacturing Improvement Practices: An empirical study
NASA Astrophysics Data System (ADS)
Syan, C. S.; Ramoutar, K.
2014-07-01
There is a constant search for ways to achieve a competitive advantage through new manufacturing techniques. The best-performing manufacturing companies tend to use world-class manufacturing (WCM) practices. Although the last few years have witnessed phenomenal growth in the use of WCM techniques, their effectiveness is not well understood, specifically in the context of less developed countries. This paper presents an empirical study investigating the impact of company size on improving manufacturing performance in manufacturing organizations based in Trinidad and Tobago (T&T). Empirical data were collected via a questionnaire survey sent to 218 manufacturing firms in T&T. Five company sizes and seven industry sectors were studied. The analysis of survey data was performed with the aid of the Statistical Package for the Social Sciences (SPSS) software. The study identified factors that facilitate and impede improvement in manufacturing performance; their relative impact depends on company size and industry sector. Findings indicate that T&T manufacturers are still practicing traditional approaches when compared with world-class manufacturers. In the majority of organizations, these practices were not 100% implemented even though the implementation process started more than 5 years ago. The findings provide some insights for formulating more optimal operational strategies and, later, developing action plans towards more effective implementation of WCM by T&T manufacturers.
Xiong, Yu; Georgieva, Radostina; Steffen, Axel; Smuda, Kathrin; Bäumler, Hans
2018-03-15
The Co-precipitation Crosslinking Dissolution technique (CCD technique) allows a few-step fabrication of particles composed of different biopolymers and bioactive agents under mild conditions. The morphology and properties of the fabricated biopolymer particles depend on the fabrication conditions, the nature of the biopolymers and additives, and also on the choice of the inorganic templates for co-precipitation. Here, we investigate the influence of an acidic biopolymer, hyaluronic acid (HA), on the formation of particles from bovine hemoglobin and bovine serum albumin applying co-precipitation with CaCO3 and MnCO3. CaCO3-templated biopolymer particles are almost spherical, with particle sizes from 2 to 20 µm and protein entrapment efficiencies from 13 to 77%. The presence of HA causes significant structural changes in the particles and decreases the protein entrapment efficiency. In contrast, MnCO3-templated particles exhibit a uniform peanut shape and submicron size, with a remarkably high protein entrapment efficiency of nearly 100%. Addition of HA has no influence on the protein entrapment efficiency or on the morphology and size of these particles. These effects can be attributed to the strong interaction of Mn2+ with proteins and its much weaker interaction with HA. Therefore, the entrapment efficiency, size and structure of biopolymer particles can be optimized by varying the mineral templates and additives.
Genome size of Alexandrium catenella and Gracilariopsis lemaneiformis estimated by flow cytometry
NASA Astrophysics Data System (ADS)
Du, Qingwei; Sui, Zhenghong; Chang, Lianpeng; Wei, Huihui; Liu, Yuan; Mi, Ping; Shang, Erlei; Zeeshan, Niaz; Que, Zhou
2016-08-01
The flow cytometry (FCM) technique has been widely applied to estimate the genome size of various higher plants; however, there are few reports of its application in algae. In this study, an optimized FCM procedure was used to estimate the genome sizes of two eukaryotic algae. For analyzing Alexandrium catenella, an important red tide species, whole cells instead of isolated nuclei were studied, and chicken erythrocytes were used as an internal reference. The genome size of A. catenella was estimated to be 56.48 ± 4.14 Gb (1C), approximately nineteen times larger than the human genome. For analyzing Gracilariopsis lemaneiformis, an important economical red alga, purified nuclei were employed, and Arabidopsis thaliana and Chondrus crispus were used as internal references. The genome size of Gp. lemaneiformis was 97.35 ± 2.58 Mb (1C) and 112.73 ± 14.00 Mb (1C), respectively, depending on the internal reference. The results of this research will promote related studies on the genomics and evolution of these two species.
Passive acoustic measurement of bedload grain size distribution using self-generated noise
NASA Astrophysics Data System (ADS)
Petrut, Teodor; Geay, Thomas; Gervaise, Cédric; Belleudy, Philippe; Zanker, Sebastien
2018-01-01
Monitoring sediment transport processes in rivers is of particular interest to engineers and scientists who assess the stability of rivers and hydraulic structures. Various methods for describing sediment transport processes have been proposed using conventional or surrogate measurement techniques. This paper addresses the passive acoustic monitoring of bedload transport in rivers, and especially the estimation of the bedload grain size distribution from self-generated noise. It discusses the feasibility of linking the acoustic signal spectrum shape to the bedload grain sizes involved in elastic impacts with the river bed, treated as a massive slab. The bedload grain size distribution is estimated by a regularized algebraic inversion scheme fed with the power spectral density of river noise estimated from one hydrophone. The inversion methodology relies upon a physical model that predicts the acoustic field generated by the collision between rigid bodies. Here we propose an analytic model of the acoustic energy spectrum generated by impacts between a sphere and a slab. The proposed model computes the power spectral density of bedload noise as a linear system of analytic energy spectra weighted by the grain size distribution. The algebraic system of equations is then solved by least-squares optimization with solution regularization, and the inversion leads directly to an estimate of the bedload grain size distribution. The inversion method was applied to real acoustic data from passive acoustic experiments performed on the Isère River in France. The inversion of in situ measured spectra yields good estimates of the grain size distribution, fairly close to what was estimated by physical sampling instruments. These results illustrate the potential of the hydrophone technique as a standalone method that could provide high spatial and temporal resolution measurements of sediment transport in rivers.
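The inversion step admits a compact sketch: if the columns of A hold the analytic impact spectra of each grain-size class, the measured spectrum p is modeled as A·w and the weights w are recovered by regularized, non-negative least squares. This is an illustrative reimplementation, not the authors' code; the Tikhonov form and the parameter lam are assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def invert_gsd(A, p, lam=1e-2):
    """Recover a grain-size distribution w >= 0 from a measured power
    spectral density p, given spectra A (one column per size class,
    from the sphere-on-slab impact model).  Tikhonov regularization is
    applied by augmenting the least-squares system; lam trades data fit
    against solution smoothness."""
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
    p_aug = np.concatenate([p, np.zeros(n)])
    w, _ = nnls(A_aug, p_aug)       # non-negative least squares
    return w / w.sum()              # normalize to a distribution
```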
Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F
2014-07-10
In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials in which the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget, or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at the individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels, but may lose much efficiency when the variance ratio is misspecified. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one, but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.
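For orientation, the classic effects-only result already conveys how costs and the ICC shape the optimum: with cluster-level cost c_cluster, per-person cost c_subject and intra-cluster correlation rho, the efficiency-maximizing cluster size for a fixed budget is sqrt((c_cluster/c_subject)(1-rho)/rho). The cost-effectiveness setting of this paper adds the cost/effect correlations and the variance ratio on top of this; the sketch below covers only the simpler textbook case.

```python
import math

def optimal_cluster_size(c_cluster, c_subject, icc):
    """Persons per cluster that maximize efficiency for a fixed budget
    in the classic effects-only cluster randomized trial model."""
    return math.sqrt((c_cluster / c_subject) * (1 - icc) / icc)

# e.g. recruiting a cluster costs 500, measuring a person costs 20 (assumed)
print(round(optimal_cluster_size(500, 20, icc=0.05), 1))   # ~21.8 persons
```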
Chopped random-basis quantum optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caneva, Tommaso; Calarco, Tommaso; Montangero, Simone
2011-08-15
In this work, we describe in detail the chopped random basis (CRAB) optimal control technique recently introduced to optimize time-dependent density matrix renormalization group simulations [P. Doria, T. Calarco, and S. Montangero, Phys. Rev. Lett. 106, 190501 (2011)]. Here, we study the efficiency of this control technique in optimizing different quantum processes and we show that in the considered cases we obtain results equivalent to those obtained via different optimal control methods while using less resources. We propose the CRAB optimization as a general and versatile optimal control technique.
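The core of CRAB is easy to state: expand a correction to a guess pulse in a truncated ("chopped") basis of randomized Fourier components and optimize the few expansion coefficients with a gradient-free direct search. The sketch below applies this to a stand-in figure of merit; in a real application the cost would be an infidelity returned by a quantum dynamics (e.g. t-DMRG) simulation, and the basis size and frequencies here are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize

T = 1.0
t = np.linspace(0.0, T, 200)
rng = np.random.default_rng(2)
nc = 4                                    # number of kept basis functions
# randomized harmonics: integer frequencies perturbed by random offsets
wk = 2 * np.pi * (np.arange(1, nc + 1) + rng.uniform(-0.5, 0.5, nc)) / T

def pulse(c):
    """CRAB ansatz: guess pulse (here constant 1) times a truncated
    randomized Fourier correction with coefficients c."""
    return 1.0 + sum(c[2 * k] * np.sin(wk[k] * t) + c[2 * k + 1] * np.cos(wk[k] * t)
                     for k in range(nc))

def cost(c):
    # stand-in for 1 - fidelity from a simulation of the controlled dynamics
    target = np.sin(np.pi * t / T) ** 2
    return np.mean((pulse(c) - target) ** 2)

res = minimize(cost, np.zeros(2 * nc), method="Nelder-Mead",
               options={"maxiter": 4000})
print(res.fun)    # residual cost after optimizing just 8 coefficients
```

Because only a handful of coefficients are optimized, the search stays low-dimensional, which is the resource saving highlighted above.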
Strategies for Fermentation Medium Optimization: An In-Depth Review
Singh, Vineeta; Haque, Shafiul; Niwas, Ram; Srivastava, Akansha; Pasupuleti, Mukesh; Tripathi, C. K. M.
2017-01-01
Optimization of the production medium is required to maximize metabolite yield. This can be achieved using a wide range of techniques, from classical "one-factor-at-a-time" approaches to modern statistical and mathematical techniques such as artificial neural networks (ANN) and genetic algorithms (GA). Every technique comes with its own advantages and disadvantages, and despite their drawbacks, some techniques are applied to obtain the best results. Using various optimization techniques in combination can also provide the desired results. In this article an attempt has been made to review the media optimization techniques currently applied during fermentation for metabolite production. A comparative analysis of the merits and demerits of various conventional as well as modern optimization techniques has been done, and a logical basis for the design of the fermentation medium is given. Overall, this review provides a rationale for selecting a suitable optimization technique for media design employed during fermentation for metabolite production. PMID:28111566
New evidence favoring multilevel decomposition and optimization
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Polignone, Debra A.
1990-01-01
The issue of the utility of multilevel decomposition and optimization remains controversial. To date, only the structural optimization community has actively developed and promoted multilevel optimization techniques. However, even this community acknowledges that multilevel optimization is ideally suited for a rather limited set of problems. It is warned that decomposition typically requires eliminating local variables by using global variables and that this in turn causes ill-conditioning of the multilevel optimization by adding equality constraints. The purpose is to suggest a new multilevel optimization technique. This technique uses behavior variables, in addition to design variables and constraints, to decompose the problem. The new technique removes the need for equality constraints, simplifies the decomposition of the design problem, simplifies the programming task, and improves the convergence speed of multilevel optimization compared to conventional optimization.
Bär, David; Debus, Heiko; Brzenczek, Sina; Fischer, Wolfgang; Imming, Peter
2018-03-20
Near-infrared spectroscopy is frequently used by the pharmaceutical industry to monitor and optimize several production processes. In combination with chemometrics, a mathematical-statistical technique, the following advantages of near-infrared spectroscopy can be exploited: it is a fast, non-destructive, non-invasive, and economical analytical method. One of the most advanced and popular chemometric techniques is the partial least squares algorithm, owing to its applicability in routine use and the quality of its results. The required reference analytics enable the analysis of various parameters of interest, for example, moisture content and particle size. Parameters such as the correlation coefficient, root mean square error of prediction, root mean square error of calibration, and root mean square error of validation were used to evaluate the applicability and robustness of the analytical methods developed. This study investigates a Naproxen Sodium granulation process using near-infrared spectroscopy and the development of water content and particle-size methods. For the water content method, one should consider a maximum water content of about 21% in the granulation process, which must be confirmed by loss on drying. Further influences to be considered are the constantly changing product temperature, rising to about 54 °C, the creation of hydrated states of Naproxen Sodium when using a maximum of about 21% water content, and the large proportion, about 87%, of Naproxen Sodium in the formulation. A combination of these influences was considered in developing the near-infrared spectroscopy method for the water content of Naproxen Sodium granules. The root mean square error was 0.25% for the calibration dataset and 0.30% for the validation dataset, obtained after several stages of optimization using multiplicative scatter correction and the first derivative. Using laser diffraction, the granules were analyzed for particle size, obtaining the summary sieve sizes of >63 μm and >100 μm. The following influences should be considered for application in routine production: constant changes in water content up to 21% and a product temperature up to 54 °C. The different stages of optimization result in a root mean square error of 2.54% for the calibration dataset and 3.53% for the validation set, using the Kubelka-Munk conversion and the first derivative, for the near-infrared spectroscopy method for particle sizes >63 μm. For the near-infrared spectroscopy method for particle sizes >100 μm, the root mean square error was 3.47% for the calibration dataset and 4.51% for the validation set, using the same pre-treatments. The robustness and suitability of this methodology have already been demonstrated by its recent successful implementation in a routine granulate production process.
NASA Astrophysics Data System (ADS)
Heinse, R.; Jones, S. B.; Bingham, G.; Bugbee, B.
2006-12-01
Rigorous management of restricted root zones utilizing coarse-textured porous media greatly benefits from optimizing the gas-water balance within plant-growth media. Geophysical techniques can help to quantify root-zone parameters such as water content, air-filled porosity, temperature and nutrient concentration to better assess root-system performance. The efficiency of plant growth amid high root densities and limited volumes is critically linked to maintaining a favorable balance of water content and air-filled porosity, while ensuring fluxes adequate to replenish water at the decreasing hydraulic conductivities encountered during uptake. Volumes adjacent to roots also need to be optimized to provide adequate nutrients throughout the plant's life cycle while avoiding excessive salt concentrations. Our objectives were to (1) design and model an optimized root-zone system using optimized porous media layers, (2) verify our design by monitoring the water content distribution and tracking nutrient release and transport, and (3) mimic water and nutrient uptake using plants or wicks to draw water from the root system. We developed a unique root-zone system using layered Ottawa sands that promotes vertically uniform water contents and air-filled porosities. Watering was achieved by maintaining a shallow saturated layer at the bottom of the column and allowing capillarity to draw water upward; coarser particle sizes formed the bottom layers, with finer particle sizes forming the layers above. The depth of each layer was designed to optimize water content based on measurements and modeling of the wetting water retention curves, with layer boundaries chosen to retain saturation between 50 and 85 percent. The saturation distribution was verified by dual-probe heat-pulse water-content sensors. The nutrient experiment involved embedding slow-release fertilizer in the porous media in order to detect variations in electrical resistivity versus time during the release, diffusion and uptake of nutrients. The experiment required a specific geometry for the acquisition of ERT data, using the heat-pulse water-content sensors' steel needles as electrodes. ERT data were analyzed using the sensed water contents and deriving pore-water resistivities using Archie's law. This design should provide a more optimal root-zone environment, maintaining a more uniform water content and on-demand supply of water, than designs with one particle size at all column heights. The monitoring capability offers an effective means to describe the relationship between root-system performance and plant growth.
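The last step, deriving pore-water resistivity from the ERT bulk resistivity and the sensed water contents, follows directly from Archie's law. A minimal sketch is below; the empirical constants a, m, n are set to values typical for unconsolidated sands, an assumption, since the abstract does not list them.

```python
def pore_water_resistivity(rho_bulk, porosity, saturation, a=1.0, m=1.5, n=2.0):
    """Invert Archie's law, rho_bulk = a * rho_w * porosity**(-m) *
    saturation**(-n), for the pore-water resistivity rho_w, which tracks
    nutrient (salt) concentration in the root zone."""
    return rho_bulk * porosity ** m * saturation ** n / a

theta, phi = 0.24, 0.35        # sensed water content and porosity (assumed)
print(pore_water_resistivity(rho_bulk=120.0, porosity=phi,
                             saturation=theta / phi))   # ohm-m
```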
NASA Astrophysics Data System (ADS)
Michaelis, Dirk; Schroeder, Andreas
2012-11-01
Tomographic PIV has triggered vivid activity, reflected in a large number of publications covering both development of the technique and a wide range of fluid dynamic experiments. The maturing of tomographic PIV allows its application in medium to large scale wind tunnels. The limiting factor for wind tunnel application is the small size of the measurement volume, typically about 50 × 50 × 15 mm3. The aim of this study is optimization towards large measurement volumes and high spatial resolution, performing cylinder wake measurements in a 1 meter wind tunnel. The main limiting factors for the volume size are the laser power and the camera sensitivity. Therefore, a high-power laser with 800 mJ per pulse is used together with low-noise sCMOS cameras mounted in the forward scattering direction to gain intensity from the Mie scattering characteristics; a mirror is used to bounce the light back, so that all cameras are in forward scattering. Since the achievable particle density grows with the number of cameras, eight cameras are used for high spatial resolution. These optimizations lead to a volume size of 230 × 200 × 52 mm3 = 2392 cm3, more than 60 times larger than previously achieved. 281 × 323 × 68 vectors are calculated with a spacing of 0.76 mm. The achieved measurement volume size and spatial resolution are regarded as a major step forward in the application of tomographic PIV in wind tunnels. Supported by EU project no. 265695.
Design of optimized piezoelectric HDD-sliders
NASA Astrophysics Data System (ADS)
Nakasone, Paulo H.; Yoo, Jeonghoon; Silva, Emilio C. N.
2010-04-01
As storage data density in hard-disk drives (HDDs) increases for constant or shrinking drive sizes, precision positioning of HDD heads becomes a more relevant issue, to ensure that enormous amounts of data are properly written and read. Since the traditional single-stage voice coil motor (VCM) cannot satisfy the positioning requirements of high-density tracks-per-inch (TPI) HDDs, dual-stage servo systems have been proposed to overcome this problem, using VCMs to coarsely move the HDD head while piezoelectric actuators provide fine and fast positioning. Thus, the aim of this work is to apply the topology optimization method (TOM) to design novel piezoelectric HDD heads, finding the optimal placement of base plate and piezoelectric material for high-precision positioning of HDD heads. The topology optimization method is a structural optimization technique that combines the finite element method (FEM) with optimization algorithms. The laminated finite element employs the MITC (mixed interpolation of tensorial components) formulation to provide accurate and reliable results. The topology optimization uses a rational approximation of material properties to vary the material properties between 'void' and 'filled' portions. The design problem consists in generating optimal structures that provide maximal displacements, appropriate structural stiffness and avoidance of resonance phenomena. These requirements are achieved by applying formulations that maximize displacements, minimize structural compliance and maximize resonance frequencies. This paper presents the implementation of the algorithms and shows results that confirm the feasibility of this approach.
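The rational material interpolation mentioned above (often called RAMP) can be written in a few lines; the sketch below is a generic scalar version with assumed penalty and bound values, not the paper's laminated piezoelectric formulation.

```python
import numpy as np

def ramp(rho, e_void=1e-9, e_solid=1.0, q=8.0):
    """Rational Approximation of Material Properties: interpolate an
    element property between 'void' and 'filled' as a rational function
    of the design density rho in [0, 1].  The penalty q keeps stiffness
    low at intermediate densities, pushing the optimizer toward crisp
    0/1 material layouts."""
    return e_void + (e_solid - e_void) * rho / (1.0 + q * (1.0 - rho))

# stiffness stays low until rho approaches 1
print(np.round(ramp(np.linspace(0.0, 1.0, 5)), 4))
```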
NASA Technical Reports Server (NTRS)
Dudgeon, J. E.
1972-01-01
A computerized simulation of a planar phased array of circular waveguide elements, including mutual coupling and wide-angle impedance matching, is reported, with special emphasis on circular polarization. The computer program has as variable inputs: frequency, polarization, grid geometry, element size, dielectric waveguide fill, dielectric plugs in the waveguide for impedance matching, and dielectric sheets covering the array surface for the purpose of wide-angle impedance matching. Parameter combinations are found which produce reflection peaks interior to grating lobes, while dielectric cover sheets are successfully employed to extend the usable scan range of a phased array. The most exciting results came from the application of computer-aided optimization techniques to the design of this type of array.
A Heuristics Approach for Classroom Scheduling Using Genetic Algorithm Technique
NASA Astrophysics Data System (ADS)
Ahmad, Izah R.; Sufahani, Suliadi; Ali, Maselan; Razali, Siti N. A. M.
2018-04-01
Reshuffling and arranging classrooms based on audience capacity, facilities, lecture times and other constraints can make classroom scheduling highly complex. To enhance productivity in classroom planning, this paper proposes a heuristic approach for timetabling optimization. A new algorithm was produced to address the timetabling problem in a university. The proposed heuristic approach leads to better utilization of the available classroom space for a given timetable of courses at the university. A genetic algorithm, implemented in the Java programming language, was used in this study, aiming to reduce conflicts and optimize fitness. The algorithm considered the number of students in each class, class time, class size, the time availability of each classroom and the lecturer in charge of each class.
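A minimal genetic-algorithm skeleton for this kind of room-assignment problem is sketched below in Python (the paper's implementation is in Java); the chromosome encodes a room per class, and the fitness counts double-bookings and capacity violations. All data and parameter values are toy assumptions.

```python
import random

random.seed(7)
CLASSES = [{"size": s, "slot": t} for s, t in
           [(40, 0), (25, 0), (60, 1), (30, 1), (80, 2)]]   # (students, timeslot)
ROOMS = [35, 50, 90]                                        # room capacities

def violations(chrom):
    """Number of conflicts: a room double-booked in a timeslot, or a
    class larger than its room.  Zero means a feasible timetable."""
    pen, seen = 0, set()
    for cls, room in zip(CLASSES, chrom):
        if (room, cls["slot"]) in seen:
            pen += 1                      # room already taken in this slot
        seen.add((room, cls["slot"]))
        if cls["size"] > ROOMS[room]:
            pen += 1                      # capacity exceeded
    return pen

def evolve(pop_size=30, gens=100, pmut=0.2):
    pop = [[random.randrange(len(ROOMS)) for _ in CLASSES]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=violations)                  # elitist selection
        elite = pop[:pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(CLASSES))           # 1-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < pmut:                        # mutation
                child[random.randrange(len(CLASSES))] = random.randrange(len(ROOMS))
            children.append(child)
        pop = elite + children
    return min(pop, key=violations)

best = evolve()
print(best, "violations:", violations(best))
```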
Technical considerations for implementation of x-ray CT polymer gel dosimetry.
Hilts, M; Jirasek, A; Duzenli, C
2005-04-21
Gel dosimetry is the most promising 3D dosimetry technique in current radiation therapy practice. X-ray CT has been shown to be a feasible method of reading out polymer gel dosimeters and, with the high accessibility of CT scanners to cancer hospitals, presents an exciting possibility for clinical implementation of gel dosimetry. In this study we report on technical considerations for implementation of x-ray CT polymer gel dosimetry. Specifically phantom design, CT imaging methods, imaging time requirements and gel dose response are investigated. Where possible, recommendations are made for optimizing parameters to enhance system performance. The dose resolution achievable with an optimized system is calculated given voxel size and imaging time constraints. Results are compared with MRI and optical CT polymer gel dosimetry results available in the literature.
NASA Technical Reports Server (NTRS)
Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)
2002-01-01
The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
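The underlying exterior penalty idea is compact enough to sketch: solve a sequence of unconstrained subproblems whose objective adds an increasing quadratic penalty on constraint violations, so memory use stays close to that of unconstrained optimization. The sketch below is a generic illustration, not the BIGDOT code; the inner optimizer and penalty schedule are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def exterior_penalty(f, gs, x0, r0=1.0, growth=5.0, outer=8):
    """Minimize f(x) subject to g(x) <= 0 for each g in gs by solving
    min f(x) + r * sum(max(0, g(x))**2) for an increasing penalty r."""
    x, r = np.asarray(x0, dtype=float), r0
    for _ in range(outer):
        phi = lambda x, r=r: f(x) + r * sum(max(0.0, g(x)) ** 2 for g in gs)
        x = minimize(phi, x, method="BFGS").x   # unconstrained subproblem
        r *= growth                             # tighten constraint enforcement
    return x

# toy problem: min (x - 2)^2 subject to x <= 1; the optimum is x = 1
print(exterior_penalty(lambda x: (x[0] - 2.0) ** 2,
                       [lambda x: x[0] - 1.0], x0=[0.0]))
```

Because the penalized objective needs no active-set bookkeeping or constraint gradients held in memory simultaneously, the approach scales to very large numbers of variables and constraints, which is the property exploited above.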
NASA Astrophysics Data System (ADS)
Lagos, Soledad R.; Velis, Danilo R.
2018-02-01
We perform the location of microseismic events generated in hydraulic fracturing monitoring scenarios using two global optimization techniques, Very Fast Simulated Annealing (VFSA) and Particle Swarm Optimization (PSO), and compare them against the classical grid search (GS). To this end, we present an integrated and optimized workflow that concatenates into an automated bash script the different steps that lead from raw 3C data to the microseismic event locations. First, we carry out the automatic detection, denoising and identification of the P- and S-waves. Secondly, we estimate their corresponding backazimuths using polarization information, and propose a simple energy-based criterion to automatically decide which is the most reliable estimate. Finally, after using the backazimuth information to restrict the size of the search space, we perform the location using the aforementioned algorithms for usual 2D and 3D hydraulic fracturing scenarios. We assess the impact of restricting the search space and show the advantages of using either VFSA or PSO over GS to attain significant speed-ups.
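To illustrate the optimization stage, the sketch below locates a source by minimizing a P travel-time misfit with a bare-bones PSO in a homogeneous velocity model. The geometry, velocity and swarm parameters are invented, and the real workflow described above also uses S-wave picks and backazimuth-restricted search spaces.

```python
import numpy as np

rng = np.random.default_rng(3)
receivers = np.array([[0.0, 0.0], [400.0, 0.0], [0.0, 400.0], [400.0, 400.0]])
true_src, vp = np.array([180.0, 240.0]), 3000.0            # m and m/s (toy)
t_obs = np.linalg.norm(receivers - true_src, axis=1) / vp  # synthetic P picks

def misfit(x):
    """L2 misfit between observed and predicted P travel times."""
    return np.sum((np.linalg.norm(receivers - x, axis=1) / vp - t_obs) ** 2)

def pso(n=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=0.0, hi=400.0):
    x = rng.uniform(lo, hi, (n, 2))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([misfit(p) for p in x])
    gbest = pbest[pcost.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random((2, n, 1))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)              # stay inside the search space
        cost = np.array([misfit(p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        gbest = pbest[pcost.argmin()]
    return gbest

print(pso())    # converges near true_src without evaluating a full grid
```

The speed-up over grid search comes from evaluating the misfit only at swarm positions rather than at every node of a dense grid.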
Towards inverse modeling of turbidity currents: The inverse lock-exchange problem
NASA Astrophysics Data System (ADS)
Lesshafft, Lutz; Meiburg, Eckart; Kneller, Ben; Marsden, Alison
2011-04-01
A new approach is introduced for turbidite modeling, leveraging the potential of computational fluid dynamics methods to simulate the flow processes that led to turbidite formation. The practical use of numerical flow simulation for turbidite modeling has so far been hindered by the need to specify parameters and initial flow conditions that are a priori unknown. The present study proposes a method to determine optimal simulation parameters via an automated optimization process. An iterative procedure matches deposit predictions from successive flow simulations against available localized reference data, as in practice may be obtained from well logs, and aims at convergence towards the best-fit scenario. The final result is a prediction of the entire deposit thickness and local grain size distribution. The optimization strategy is based on a derivative-free, surrogate-based technique. Direct numerical simulations are performed to compute the flow dynamics. A proof of concept is successfully conducted for the simple test case of a two-dimensional lock-exchange turbidity current. The optimization approach is demonstrated to accurately retrieve the initial conditions used in a reference calculation.
Thatai, Purva; Sapra, Bharti
2017-08-01
The present study aimed to optimize, develop, and evaluate a microemulsion and a microemulsion-based gel as vehicles for transungual drug delivery of terbinafine hydrochloride for the treatment of onychomycosis. A D-optimal mixture experimental design was adopted to optimize the composition of the microemulsion, with the amounts of oil (X1), Smix (mixture of surfactant and cosurfactant; X2), and water (X3) as the independent variables. The formulations were assessed for permeation (µg/cm2/h; Y1), particle size (nm; Y2), and solubility of the drug in the formulation (mg/mL; Y3). The microemulsion containing 3.05% oil, 24.98% Smix, and 71.96% water was selected as the optimized formulation. The microemulsion-based gel showed better penetration (∼5-fold) as well as more retention (∼9-fold) in the animal hoof compared to the commercial cream. The techniques used to screen penetration enhancers (hydration enhancement factor, ATR-FTIR, SEM, and DSC) revealed a synergistic effect of the combination of urea and N-acetyl cysteine in disrupting the structure of the hoof, leading to enhanced penetration of the drug.
Hunik, J H; Tramper, J
1993-01-01
Immobilization of biocatalysts in kappa-carrageenan gel beads is nowadays a widely used technique, and several methods are used to produce the gel beads. The gel-bead production rate is usually sufficient for the relatively small quantities needed in bench-scale experiments. The droplet diameter can, within limits, be adjusted to the desired size, but it is difficult to predict because of the non-Newtonian fluid behavior of the kappa-carrageenan solution. Here we present the further scale-up of the extrusion technique, with theory to predict the droplet diameters for non-Newtonian fluids. The emphasis is on droplet formation, which is the rate-limiting step in this extrusion technique. Uniform droplets were formed by breaking up a capillary jet with the sinusoidal signal of a vibration exciter. At the maximum production rate of 27.6 dm3/h, uniform droplets with a diameter of (2.1 +/- 0.12) x 10(-3) m were obtained; this maximum flow rate was limited by the power transfer from the vibration exciter to the liquid flow. A good prediction of the droplet diameter was possible by estimating the local viscosity from shear-rate calculations and an experimental relation between shear rate and viscosity. In this way the theory for Newtonian fluids could be used for the non-Newtonian kappa-carrageenan solution. The calculated optimal break-up frequencies and droplet sizes were in good agreement with those found in the experiments.
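The Newtonian part of the prediction can be sketched from mass conservation: each disturbance wavelength of the cylindrical jet pinches into one droplet, so (pi/4)·d_jet²·lambda = (pi/6)·d³. The numbers below use the Rayleigh optimum wavelength for an inviscid jet and the production rate quoted above; they are order-of-magnitude checks, not the paper's non-Newtonian calculation.

```python
import math

def droplet_diameter(d_jet, wavelength):
    """Droplet diameter from volume conservation over one wavelength:
    (pi/4) d_jet**2 * wavelength = (pi/6) d**3."""
    return (1.5 * d_jet ** 2 * wavelength) ** (1.0 / 3.0)

d_jet = 1.0e-3                      # jet diameter, m (assumed)
lam = 4.508 * d_jet                 # Rayleigh optimum for an inviscid jet
q = 27.6e-3 / 3600.0                # 27.6 dm3/h converted to m3/s
v_jet = q / (math.pi / 4.0 * d_jet ** 2)
print(droplet_diameter(d_jet, lam))     # ~1.9e-3 m, near the 2.1 mm reported
print(v_jet / lam)                      # matching excitation frequency, Hz
```

As the abstract explains, replacing the Newtonian viscosity with a local value estimated from the shear rate is what lets this framework carry over to the non-Newtonian kappa-carrageenan solution.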
TH-C-12A-04: Dosimetric Evaluation of a Modulated Arc Technique for Total Body Irradiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsiamas, P; Czerminska, M; Makrigiorgos, G
2014-06-15
Purpose: A simplified Total Body Irradiation (TBI) technique was developed to work with minimal requirements in a compact linac room without a custom motorized TBI couch. Results were compared to our existing fixed-gantry double 4 MV linac TBI system with the patient prone and simultaneous AP/PA irradiation. Methods: A modulated arc irradiates the patient positioned prone/supine along the craniocaudal axis. A simplified inverse planning method was developed to optimize dose rate as a function of gantry angle for various patient sizes without the need for a graphical 3D treatment planning system; this method can be easily adapted and used with minimal resources. A fixed maximum field size (40×40 cm2) is used to decrease radiation delivery time. The dose rate as a function of gantry angle was optimized to give uniform dose inside rectangular phantoms of various sizes, and custom VMAT DICOM plans were generated using a DICOM editor tool. Monte Carlo simulations, film and ionization chamber dosimetry for various setups were used to derive and test an extended-SSD beam model based on PDD/OAR profiles for Varian 6EX/TX. Measurements were obtained using solid water phantoms. The dose rate modulation function was determined for various patient sizes (100 cm - 200 cm); depending on the size of the patient, the arc range varied from 100° to 120°. Results: A PDD/OAR-based beam model for modulated arc TBI therapy was developed. The lateral dose profiles produced were similar to the profiles of our existing TBI facility. Calculated delivery time and full arc depended on the size of the patient (∼8 min/100° to 10 min/120°, 100 cGy). Dose heterogeneity varied by about ±5% to ±10% depending on the patient size and distance to the surface (buildup region). Conclusion: TBI using a simplified modulated arc along the craniocaudal axis of different-size patients positioned on the floor can be achieved without graphical/inverse 3D planning.
Avadhani, Kiran S; Manikkath, Jyothsna; Tiwari, Mradul; Chandrasekhar, Misra; Godavarthi, Ashok; Vidya, Shimoga M; Hariharapura, Raghu C; Kalthur, Guruprasad; Udupa, Nayanabhirama; Mutalik, Srinivas
2017-11-01
The present work attempts to develop and statistically optimize transfersomes containing EGCG and hyaluronic acid to synergize the UV radiation-protective ability of both compounds, along with imparting antioxidant and anti-aging effects. Transfersomes were prepared by thin film hydration technique, using soy phosphatidylcholine and sodium cholate, combined with high-pressure homogenization. They were characterized with respect to size, polydispersity index, zeta potential, morphology, entrapment efficiency, Fourier Transform Infrared Spectroscopy (FTIR), Differential Scanning Calorimetry (DSC), X-ray Diffraction (XRD), in vitro antioxidant activity and ex vivo skin permeation studies. Cell viability, lipid peroxidation, intracellular ROS levels and expression of MMPs (2 and 9) were determined in human keratinocyte cell lines (HaCaT). The composition of the transfersomes was statistically optimized by Design of Experiments using Box-Behnken design with four factors at three levels. The optimized transfersome formulation showed vesicle size, polydispersity index and zeta potential of 101.2 ± 6.0 nm, 0.245 ± 0.069 and -44.8 ± 5.24 mV, respectively. FTIR and DSC showed no interaction between EGCG and the selected excipients. XRD results revealed no form conversion of EGCG in its transfersomal form. The optimized transfersomes were found to increase the cell viability and reduce the lipid peroxidation, intracellular ROS and expression of MMPs in HaCaT cells. The optimized transfersomal formulation of EGCG and HA exhibited considerably higher skin permeation and deposition of EGCG than that observed with plain EGCG. The results underline the potential application of the developed transfersomes in sunscreen cream/lotions for improvement of UV radiation-protection along with deriving antioxidant and anti-aging effects.
Wu, Xiao; Hayes, Don; Zwischenberger, Joseph B; Kuhn, Robert J; Mansour, Heidi M
2013-01-01
The aim of this study was to design, develop, and optimize respirable tacrolimus microparticles and nanoparticles and multifunctional tacrolimus lung surfactant mimic particles for targeted dry powder inhalation delivery as a pulmonary nanomedicine. Particles were rationally designed and produced at different pump rates by advanced spray-drying particle engineering design from organic solution in closed mode. In addition, multifunctional tacrolimus lung surfactant mimic dry powder particles were prepared by co-dissolving tacrolimus and lung surfactant mimic phospholipids in methanol, followed by advanced co-spray-drying particle engineering design technology in closed mode. The lung surfactant mimic phospholipids were 1,2-dipalmitoyl-sn-glycero-3-phosphocholine and 1,2-dipalmitoyl-sn-glycero-3-[phosphor-rac-1-glycerol]. Laser diffraction particle sizing indicated that the particle size distributions were suitable for pulmonary delivery, whereas scanning electron microscopy imaging indicated that these particles had both optimal particle morphology and surface morphology. Increasing the pump rate percent of tacrolimus solution resulted in a larger particle size. X-ray powder diffraction patterns and differential scanning calorimetry thermograms indicated that spray drying produced particles with higher amounts of amorphous phase. X-ray powder diffraction and differential scanning calorimetry also confirmed the preservation of the phospholipid bilayer structure in the solid state for all engineered respirable particles. Furthermore, it was observed in hot-stage micrographs that raw tacrolimus displayed a liquid crystal transition following the main phase transition, which is consistent with its interfacial properties. Water vapor uptake and lyotropic phase transitions in the solid state at varying levels of relative humidity were determined by gravimetric vapor sorption technique. Water content in the various powders was very low and well within the levels necessary for dry powder inhalation, as quantified by Karl Fisher coulometric titration. Conclusively, advanced spray-drying particle engineering design from organic solution in closed mode was successfully used to design and optimize solid-state particles in the respirable size range necessary for targeted pulmonary delivery, particularly for the deep lung. These particles were dry, stable, and had optimal properties for dry powder inhalation as a novel pulmonary nanomedicine. PMID:23403805
Deposition of Size-Selected Cu Nanoparticles by Inert Gas Condensation
2010-01-01
Nanometer size-selected Cu clusters in the size range of 1–5 nm have been produced by a plasma-gas-condensation-type cluster deposition apparatus, which combines glow-discharge sputtering with an inert gas condensation technique. With this method, by controlling the experimental conditions, it was possible to produce nanoparticles with strict control of size. The structure and size of the Cu nanoparticles were determined by mass spectrometry and confirmed by atomic force microscopy (AFM) and scanning transmission electron microscopy (STEM) measurements. To preserve the structural and morphological properties, the cluster impact energy was controlled; the acceleration energy of the nanoparticles was kept near 0.1 eV/atom to remain in the soft-landing regime. From measurements in STEM-HAADF mode, we found that the nanoparticle sizes are close to the experimentally set values, which was also confirmed by AFM observations. The results are relevant, since they demonstrate that proper optimization of operating conditions can lead to desired cluster sizes as well as desired cluster size distributions. The efficiency of the method for obtaining size-selected Cu cluster films, assembled as a random stacking of nanometer-size crystallites, was also demonstrated. The deposition of size-selected metal clusters represents a novel method of preparing Cu nanostructures, with high potential in optical and catalytic applications. PMID:20652132
Engineering two-wire optical antennas for near field enhancement
NASA Astrophysics Data System (ADS)
Yang, Zhong-Jian; Zhao, Qian; Xiao, Si; He, Jun
2017-07-01
We study the optimization of near field enhancement in the two-wire optical antenna system. By varying the nanowire sizes we obtain the optimized side-length (width and height) for the maximum field enhancement with a given gap size. The optimized side-length applies to a broadband range (λ = 650-1000 nm). The ratio of extinction cross section to field concentration size is found to be closely related to the field enhancement behavior. We also investigate two experimentally feasible cases, antennas on a glass substrate and on a mirror, and find that the optimized side-length also applies to these systems. The optimized side-length also tends to increase with the gap size. Our results could find applications in field-enhanced spectroscopies.
Summary of Optimization Techniques That Can Be Applied to Suspension System Design
DOT National Transportation Integrated Search
1973-03-01
Summaries are presented of the analytic techniques available for three levitated vehicle suspension optimization problems: optimization of passive elements for fixed configuration; optimization of a free passive configuration; optimization of a free ...
Lossless compression techniques for maskless lithography data
NASA Astrophysics Data System (ADS)
Dai, Vito; Zakhor, Avideh
2002-07-01
Future lithography systems must produce more dense chips with smaller feature sizes, while maintaining the throughput of one wafer per sixty seconds per layer achieved by today's optical lithography systems. To achieve this throughput with a direct-write maskless lithography system, using 25 nm pixels for 50 nm feature sizes, requires data rates of about 10 Tb/s. In a previous paper, we presented an architecture which achieves this data rate contingent on consistent 25 to 1 compression of lithography data, and on implementation of a decoder-writer chip with a real-time decompressor fabricated on the same chip as the massively parallel array of lithography writers. In this paper, we examine the compression efficiency of a spectrum of techniques suitable for lithography data, including two industry standards JBIG and JPEG-LS, a wavelet based technique SPIHT, general file compression techniques ZIP and BZIP2, our own 2D-LZ technique, and a simple list-of-rectangles representation RECT. Layouts rasterized both to black-and-white pixels, and to 32 level gray pixels are considered. Based on compression efficiency, JBIG, ZIP, 2D-LZ, and BZIP2 are found to be strong candidates for application to maskless lithography data, in many cases far exceeding the required compression ratio of 25. To demonstrate the feasibility of implementing the decoder-writer chip, we consider the design of a hardware decoder based on ZIP, the simplest of the four candidate techniques. The basic algorithm behind ZIP compression is Lempel-Ziv 1977 (LZ77), and the design parameters of LZ77 decompression are optimized to minimize circuit usage while maintaining compression efficiency.
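As a rough illustration of how such candidates can be screened, the sketch below compares two of the general-purpose compressors on a synthetic black-and-white raster; real layout data, JBIG, SPIHT, and 2D-LZ are outside its scope, and the 25:1 target is the figure quoted above.

```python
import bz2
import zlib
import numpy as np

# Synthetic stand-in for a rasterized layout: a black-and-white image with
# repetitive rectangular features, packed 8 pixels per byte.
tile = np.zeros((64, 64), dtype=np.uint8)
tile[8:24, 8:56] = 1                      # a "wire" rectangle
layout = np.tile(tile, (16, 16))          # 1024 x 1024 pixels
raw = np.packbits(layout).tobytes()

for name, comp in (("zip/deflate", zlib.compress), ("bzip2", bz2.compress)):
    ratio = len(raw) / len(comp(raw))
    print(f"{name}: {ratio:.1f}:1 (target 25:1)")
```

Highly regular synthetic data compresses far better than real layouts, which is exactly why the paper benchmarks on rasterized production data rather than toy inputs.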
A COMPARISON OF GALAXY COUNTING TECHNIQUES IN SPECTROSCOPICALLY UNDERSAMPLED REGIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Specian, Mike A.; Szalay, Alex S., E-mail: mspecia1@jhu.edu, E-mail: szalay@jhu.edu
2016-11-01
Accurate measures of galactic overdensities are invaluable for precision cosmology. Obtaining these measurements is complicated when members of one’s galaxy sample lack radial depths, most commonly derived via spectroscopic redshifts. In this paper, we utilize the Sloan Digital Sky Survey’s Main Galaxy Sample to compare seven methods of counting galaxies in cells when many of those galaxies lack redshifts. These methods fall into three categories: assigning galaxies discrete redshifts, scaling the numbers counted using regions’ spectroscopic completeness properties, and employing probabilistic techniques. We split spectroscopically undersampled regions into three types—those inside the spectroscopic footprint, those outside but adjacent to it, and those distant from it. Through Monte Carlo simulations, we demonstrate that the preferred counting techniques are a function of region type, cell size, and redshift. We conclude by reporting optimal counting strategies under a variety of conditions.
Multi-Resolution Unstructured Grid-Generation for Geophysical Applications on the Sphere
NASA Technical Reports Server (NTRS)
Engwirda, Darren
2015-01-01
An algorithm for the generation of non-uniform unstructured grids on ellipsoidal geometries is described. This technique is designed to generate high quality triangular and polygonal meshes appropriate for general circulation modelling on the sphere, including applications to atmospheric and ocean simulation, and numerical weather prediction. Using a recently developed Frontal-Delaunay-refinement technique, a method for the construction of high-quality unstructured ellipsoidal Delaunay triangulations is introduced. A dual polygonal grid, derived from the associated Voronoi diagram, is also optionally generated as a by-product. Compared to existing techniques, it is shown that the Frontal-Delaunay approach typically produces grids with near-optimal element quality and smooth grading characteristics, while imposing relatively low computational expense. Initial results are presented for a selection of uniform and non-uniform ellipsoidal grids appropriate for large-scale geophysical applications. The use of user-defined mesh-sizing functions to generate smoothly graded, non-uniform grids is discussed.
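As a toy illustration of a user-defined sizing function, the sketch below grades target edge lengths smoothly from coarse at the equator to fine near the poles; the functional form and resolutions are invented for illustration, not taken from the paper.

```python
import numpy as np

# Illustrative mesh-sizing function h(lon, lat) on the sphere: target edge
# length in kilometres, refined smoothly toward high latitudes.
def mesh_size_km(lon_deg: float, lat_deg: float,
                 h_coarse: float = 300.0, h_fine: float = 50.0) -> float:
    w = np.sin(np.radians(lat_deg)) ** 2   # 0 at the equator, 1 at the poles
    return h_coarse * (1.0 - w) + h_fine * w

print(mesh_size_km(0.0, 0.0), mesh_size_km(0.0, 90.0))  # 300.0 50.0
```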
NASA Technical Reports Server (NTRS)
Smith, Suzanne Weaver; Beattie, Christopher A.
1991-01-01
On-orbit testing of a large space structure will be required to complete the certification of any mathematical model for the structure dynamic response. The process of establishing a mathematical model that matches measured structure response is referred to as model correlation. Most model correlation approaches have an identification technique to determine structural characteristics from the measurements of the structure response. This problem is approached with one particular class of identification techniques - matrix adjustment methods - which use measured data to produce an optimal update of the structure property matrix, often the stiffness matrix. New methods were developed for identification to handle problems of the size and complexity expected for large space structures. Further development and refinement of these secant-method identification algorithms were undertaken. Also, evaluation of these techniques as an approach for model correlation and damage location was initiated.
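One concrete example of a secant-type symmetric matrix update is the Powell-symmetric-Broyden formula, sketched below for a stiffness matrix. This is a generic illustration of the matrix adjustment class, not the specific algorithms developed in the study; the test matrices are invented.

```python
import numpy as np

def psb_stiffness_update(K: np.ndarray, s: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Symmetric secant update: returns K+ with K+ @ s == y (measured
    displacement s, measured load y) while keeping K+ symmetric."""
    r = y - K @ s                       # residual of the secant equation
    ss = float(s @ s)
    return K + (np.outer(r, s) + np.outer(s, r)) / ss \
             - (float(r @ s) / ss**2) * np.outer(s, s)

K = np.diag([4.0, 2.0, 1.0])            # analytic stiffness (toy)
s = np.array([1.0, 0.5, 0.2])           # "measured" displacement pattern
y = np.array([4.5, 1.2, 0.4])           # "measured" load
K1 = psb_stiffness_update(K, s, y)
print(np.allclose(K1 @ s, y), np.allclose(K1, K1.T))  # True True
```

The update reproduces the measured response exactly in the tested direction while perturbing the analytic matrix as little as possible, which is the essential idea behind matrix adjustment for model correlation.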
NASA Technical Reports Server (NTRS)
Margetan, Frank J.; Leckey, Cara A.; Barnard, Dan
2012-01-01
The size and shape of a delamination in a multi-layered structure can be estimated in various ways from an ultrasonic pulse/echo image. For example, the -6 dB contours of measured response provide one simple estimate of the boundary. More sophisticated approaches can be imagined where one adjusts the proposed boundary to bring measured and predicted UT images into optimal agreement. Such approaches require suitable models of the inspection process. In this paper we explore issues pertaining to model-based size estimation for delaminations in carbon fiber reinforced laminates. In particular we consider the influence on sizing when the delamination is non-planar or partially transmitting in certain regions. Two models for predicting broadband sonic time-domain responses are considered: (1) a fast "simple" model using paraxial beam expansions and Kirchhoff and phase-screen approximations; and (2) the more exact (but computationally intensive) 3D elastodynamic finite integration technique (EFIT). Model-to-model and model-to-experiment comparisons are made for delaminations in uniaxial composite plates, and the simple model is then used to critique the -6 dB rule for delamination sizing.
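The -6 dB rule itself is simple to state in code: threshold the amplitude map at half its peak value and measure the extent of the resulting contour. The sketch below uses an invented Gaussian response in place of measured or modeled data.

```python
import numpy as np

# Simulated pulse/echo amplitude map (C-scan) with a bright delamination patch.
x = np.linspace(-10, 10, 201)                       # scan positions, mm
X, Y = np.meshgrid(x, x)
amp = np.exp(-((X / 4.0) ** 2 + (Y / 2.5) ** 2))    # peak response = 1

mask = amp >= 0.5 * amp.max()   # -6 dB = half the peak amplitude
dx = x[1] - x[0]
width_x = mask.any(axis=0).sum() * dx
width_y = mask.any(axis=1).sum() * dx
print(f"-6 dB extent: {width_x:.1f} mm x {width_y:.1f} mm")
```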
Luo, Danmei; Rong, Qiguo; Chen, Quan
2017-09-01
Reconstruction of segmental defects in the mandible remains a challenge for maxillofacial surgery. The use of porous scaffolds is a potential method for repairing these defects. Now, additive manufacturing techniques provide a solution for the fabrication of porous scaffolds with specific geometrical shapes and complex structures. The goal of this study was to design and optimize a three-dimensional tetrahedral titanium scaffold for the reconstruction of mandibular defects. With a fixed strut diameter of 0.45 mm and a mean cell size of 2.2 mm, a tetrahedral structural porous scaffold was designed for a simulated anatomical defect derived from computed tomography (CT) data of a human mandible. An optimization method based on the concept of uniform stress was performed on the initial scaffold to realize a minimal-weight design. Geometric and mechanical comparisons between the initial and optimized scaffold show that the optimized scaffold exhibits a larger porosity, 81.90%, as well as a more homogeneous stress distribution. These results demonstrate that tetrahedral structural titanium scaffolds are feasible structures for repairing mandibular defects, and that the proposed optimization scheme has the ability to produce superior scaffolds for mandibular reconstruction with better stability, higher porosity, and less weight. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
Das, Tony; Mustapha, Jihad; Indes, Jeffrey; Vorhies, Robert; Beasley, Robert; Doshi, Nilesh; Adams, George L
2014-01-01
Objectives: The purpose of the CONFIRM registry series was to evaluate the use of orbital atherectomy (OA) in peripheral lesions of the lower extremities, as well as to optimize the technique of OA. Background: Methods of treating calcified arteries (historically a strong predictor of treatment failure) have improved significantly over the past decade and now include minimally invasive endovascular treatments, such as OA, which offers unique versatility in modifying calcific lesions above and below the knee. Methods: Patients (3135) undergoing OA by more than 350 physicians at over 200 US institutions were enrolled on an “all-comers” basis, resulting in registries that provided site-reported patient demographics, ABI, Rutherford classification, co-morbidities, lesion characteristics, plaque morphology, device usage parameters, and procedural outcomes. Results: Treatment with OA reduced pre-procedural stenosis from an average of 88% to 35%. Final residual stenosis after adjunctive treatments, typically low-pressure percutaneous transluminal angioplasty (PTA), averaged 10%. Plaque removal was most effective for severely calcified lesions and least effective for soft plaque. Shorter spin times and smaller crown sizes significantly lowered procedural complications, which included slow flow (4.4%), embolism (2.2%), and spasm (6.3%), emphasizing the importance of treatment regimens that focus on plaque modification over maximizing luminal gain. Conclusion: The OA technique optimization, which resulted in a change of device usage across the CONFIRM registry series, corresponded to a lower incidence of adverse events irrespective of calcium burden or co-morbidities. © 2013 The Authors. Wiley Periodicals, Inc. PMID:23737432
Gangurde, Avinash Bhaskar; Sav, Ajay Kumar; Javeer, Sharadchandra Dagadu; Moravkar, Kailas K; Pawar, Jaywant N; Amin, Purnima D
2015-01-01
Introduction: Choline bitartrate (CBT) is a vital nutrient for fetal brain development and memory function. It is hygroscopic in nature, which is associated with stability-related problems during storage, such as the development of a fishy odor and discoloration. Aim: A microencapsulation method was adopted to resolve the stability problem, with hydrogenated soya bean oil (HSO) used as the encapsulating agent. Materials and Methods: An industrially feasible modified extrusion-spheronization technique was selected for microencapsulation. HSO was used as the encapsulating agent, hydroxypropyl methyl cellulose E5/E15 as the binder and microcrystalline cellulose as the spheronization aid. Formulated pellets were evaluated for parameters such as flow property, morphological characteristics, hardness-friability index (HFI), drug content, encapsulation efficiency, and in vitro drug release. The optimized formulations were also characterized for particle size (by laser diffractometry), differential scanning calorimetry, powder X-ray diffractometry (PXRD), Fourier transform infrared spectroscopy, and scanning electron microscopy. Results and Discussion: The results from the study showed that coating of 90% and 60% CBT was successful with respect to all desired evaluation parameters. The optimized formulation was kept on a 6-month stability study as per ICH guidelines, and there was no change in color, moisture content, or drug content, and no fishy odor was observed. Conclusion: Microencapsulated pellets of CBT using HSO as the encapsulating agent were developed using a modified extrusion-spheronization technique. The optimized formulations, CBT 90% (F5) and CBT 60% (F10), were found to be stable for 4 and 6 months, respectively, at accelerated conditions. PMID:26682198
Training Scalable Restricted Boltzmann Machines Using a Quantum Annealer
NASA Astrophysics Data System (ADS)
Kumar, V.; Bass, G.; Dulny, J., III
2016-12-01
Machine learning and the optimization involved therein is of critical importance for commercial and military applications. Due to the computational complexity of many-variable optimization, the conventional approach is to employ meta-heuristic techniques to find suboptimal solutions. Quantum Annealing (QA) hardware offers a completely novel approach with the potential to obtain significantly better solutions with large speed-ups compared to traditional computing. In this presentation, we describe our development of new machine learning algorithms tailored for QA hardware. We are training restricted Boltzmann machines (RBMs) using QA hardware on large, high-dimensional commercial datasets. Traditional optimization heuristics such as contrastive divergence and other closely related techniques are slow to converge, especially on large datasets. Recent studies have indicated that QA hardware when used as a sampler provides better training performance compared to conventional approaches. Most of these studies have been limited to moderately-sized datasets due to the hardware restrictions imposed by existing QA devices, which make it difficult to solve real-world problems at scale. In this work we develop novel strategies to circumvent this issue. We discuss scale-up techniques such as enhanced embedding and partitioned RBMs which allow large commercial datasets to be learned using QA hardware. We present our initial results obtained by training an RBM as an autoencoder on an image dataset. The results obtained so far indicate that the convergence rates can be improved significantly by increasing RBM network connectivity. These ideas can be readily applied to generalized Boltzmann machines and we are currently investigating this in an ongoing project.
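For reference, the conventional baseline mentioned above, one step of contrastive divergence (CD-1) for a binary RBM, can be sketched in a few lines of numpy; the dimensions and learning rate are arbitrary, and bias terms are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n_vis, n_hid, lr = 64, 16, 0.05
W = 0.01 * rng.standard_normal((n_vis, n_hid))
v0 = (rng.random((100, n_vis)) < 0.5).astype(float)  # toy binary data batch

# One CD-1 step: sample hidden units, reconstruct visibles, re-infer hiddens.
ph0 = sigmoid(v0 @ W)
h0 = (rng.random(ph0.shape) < ph0).astype(float)
v1 = sigmoid(h0 @ W.T)                                # mean-field reconstruction
ph1 = sigmoid(v1 @ W)

W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)         # <vh>_data - <vh>_model
print(W.shape)
```

The QA-based approach replaces the cheap one-step reconstruction with samples drawn from the annealer, which is where the reported convergence gains come from.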
Zhou, Zhengzhen; Guo, Laodong
2015-06-19
Colloidal retention characteristics, recovery and size distribution of model macromolecules and natural dissolved organic matter (DOM) were systematically examined using an asymmetrical flow field-flow fractionation (AFlFFF) system under various membrane size cutoffs and carrier solutions. Polystyrene sulfonate (PSS) standards with known molecular weights (MW) were used to determine their permeation and recovery rates by membranes with different nominal MW cutoffs (NMWCO) within the AFlFFF system. Based on a ≥90% recovery rate for PSS standards by the AFlFFF system, the actual NMWCOs were determined to be 1.9 kDa for the 0.3 kDa membrane, 2.7 kDa for the 1 kDa membrane, and 33 kDa for the 10 kDa membrane, respectively. After membrane calibration, natural DOM samples were analyzed with the AFlFFF system to determine their colloidal size distribution and the influence from membrane NMWCOs and carrier solutions. Size partitioning of DOM samples showed a predominant colloidal size fraction in the <5 nm or <10 kDa size range, consistent with the size characteristics of humic substances as the main terrestrial DOM component. Recovery of DOM by the AFlFFF system, as determined by UV-absorbance at 254 nm, decreased significantly with increasing membrane NMWCO, from 45% by the 0.3 kDa membrane to 2-3% by the 10 kDa membrane. Since natural DOM is mostly composed of lower MW substances (<10 kDa) and the actual membrane cutoffs are normally larger than their manufacturer ratings, a 0.3 kDa membrane (with an actual NMWCO of 1.9 kDa) is highly recommended for colloidal size characterization of natural DOM. Among the three carrier solutions, borate buffer seemed to provide the highest recovery and optimal separation of DOM. Rigorous calibration with macromolecular standards and optimization of system conditions are a prerequisite for quantifying colloidal size distribution using the flow field-flow fractionation technique. In addition, the coupling of AFlFFF with fluorescence EEMs could provide new insights into DOM heterogeneity in different colloidal size fractions. Copyright © 2015 Elsevier B.V. All rights reserved.
Resource Allocation and Seed Size Selection in Perennial Plants under Pollen Limitation.
Huang, Qiaoqiao; Burd, Martin; Fan, Zhiwei
2017-09-01
Pollen limitation may affect resource allocation patterns in plants, but its role in the selection of seed size is not known. Using an evolutionarily stable strategy model of resource allocation in perennial iteroparous plants, we show that under density-independent population growth, pollen limitation (i.e., a reduction in ovule fertilization rate) should increase the optimal seed size. At any level of pollen limitation (including none), the optimal seed size maximizes the ratio of juvenile survival rate to the resource investment needed to produce one seed (including both ovule production and seed provisioning); that is, the optimum maximizes the fitness effect per unit cost. Seed investment may affect allocation to postbreeding adult survival. In our model, pollen limitation increases individual seed size but decreases overall reproductive allocation, so that pollen limitation should also increase the optimal allocation to postbreeding adult survival. Under density-dependent population growth, the optimal seed size is inversely proportional to ovule fertilization rate. However, pollen limitation does not affect the optimal allocation to postbreeding adult survival and ovule production. These results highlight the importance of allocation trade-offs in the effect pollen limitation has on the ecology and evolution of seed size and postbreeding adult survival in perennial plants.
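The "fitness effect per unit cost" criterion can be stated compactly. In the notation below (an assumed formalization in the spirit of Smith-Fretwell seed-size models, not the authors' own symbols), $f(s)$ is juvenile survival from a seed of provisioning size $s$, $c$ is the resource cost of one ovule, and $p$ is the ovule fertilization rate, so producing one seed costs $c/p + s$:

$$ s^{*} = \arg\max_{s}\; \frac{f(s)}{c/p + s}, \qquad f'(s^{*})\left(\frac{c}{p} + s^{*}\right) = f(s^{*}). $$

Lowering $p$ (stronger pollen limitation) raises the effective per-seed cost $c/p$, so the marginal condition is satisfied at a larger $s^{*}$, consistent with the density-independent prediction described above.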
Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data.
Kim, Sehwi; Jung, Inkyung
2017-01-01
The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes. However, little research has been done on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed to use the Gini coefficient to optimize the maximum reported cluster size. However, the method has been developed and evaluated only for the Poisson model. We adopt the Gini coefficient to be applicable to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model. The Gini coefficient most often picked the optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes with very high accuracy. It seems that we can obtain a more refined collection of clusters by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns.
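A schematic of the Gini-based choice, with invented case counts and a simplified criterion rather than the exact procedure of Han et al. or its ordinal extension: among candidate maximum reported cluster sizes, select the one whose reported clusters yield the most unequal (highest-Gini) distribution of cases.

```python
import numpy as np

def gini(x: np.ndarray) -> float:
    """Gini coefficient of a non-negative sample (0 = equal, ->1 = unequal)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return float((n + 1 - 2 * (cum / cum[-1]).sum()) / n)

# Hypothetical case counts of the clusters reported under each candidate
# maximum reported cluster size (as a fraction of the population at risk).
candidates = {0.10: [40, 38], 0.25: [70, 12, 9], 0.50: [120]}
best = max(candidates, key=lambda k: gini(np.array(candidates[k])))
print(f"chosen maximum reported cluster size: {best:.0%} of population")
```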
Adesina, Simeon K.; Wight, Scott A.; Akala, Emmanuel O.
2015-01-01
Purpose: Nanoparticle size is important in drug delivery. Clearance of nanoparticles by cells of the reticuloendothelial system has been reported to increase with increase in particle size. Further, nanoparticles should be small enough to avoid lung or spleen filtering effects. Endocytosis and accumulation in tumor tissue by the enhanced permeability and retention effect are also processes that are influenced by particle size. We present the results of studies designed to optimize crosslinked biodegradable stealth polymeric nanoparticles fabricated by dispersion polymerization. Methods: Nanoparticles were fabricated using different amounts of macromonomer, initiators, crosslinking agent and stabilizer in a dioxane/DMSO/water solvent system. Confirmation of nanoparticle formation was by scanning electron microscopy (SEM). Particle size was measured by dynamic light scattering (DLS). D-optimal mixture statistical experimental design was used for the experimental runs, followed by model generation (Scheffe polynomial) and optimization with the aid of computer software. Model verification was done by comparing particle size data of some suggested solutions to the predicted particle sizes. Results and Conclusion: Data showed that average particle sizes follow the same trend as predicted by the model. Negative terms in the model corresponding to the crosslinking agent and stabilizer indicate the important factors for minimizing particle size. PMID:24059281
Fetisova, Z G
2004-01-01
In accordance with our concept of rigorous optimization of photosynthetic machinery by a functional criterion, this series of papers continues a purposeful search in natural photosynthetic units (PSUs) for the basic principles of their organization that we predicted theoretically for optimal model light-harvesting systems. This approach allowed us to determine the basic principles for the organization of a PSU of any fixed size. This series of papers deals with the problem of structural optimization of a light-harvesting antenna of variable size, controlled in vivo by the light intensity during the growth of organisms; this accentuates the problem of antenna structure optimization because the optimization requirements become more stringent as the PSU increases in size. In this work, using mathematical modeling of the functioning of natural PSUs, we show that the aggregation of pigments in a model light-harvesting antenna, being one of the universal optimizing factors, also allows control of the antenna efficiency if the extent of pigment aggregation is a variable parameter. In this case, the efficiency of the antenna increases with the size of the elementary antenna aggregate, ensuring high efficiency of the PSU irrespective of its size; i.e., variation in the extent of pigment aggregation controlled by the size of the light-harvesting antenna is biologically expedient.
Kern, Maximilian M.; Guzy, Jacquelyn C.; Lovich, Jeffrey E.; Gibbons, J. Whitfield; Dorcas, Michael E.
2016-01-01
Because resources are finite, female animals face trade-offs between the size and number of offspring they are able to produce during a single reproductive event. Optimal egg size (OES) theory predicts that any increase in resources allocated to reproduction should increase clutch size with minimal effects on egg size. Variations of OES predict that egg size should be optimized, although not necessarily constant across a population, because optimality is contingent on maternal phenotypes, such as body size and morphology, and recent environmental conditions. We examined the relationships among body size variables (pelvic aperture width, caudal gap height, and plastron length), clutch size, and egg width of diamondback terrapins from separate but proximate populations at Kiawah Island and Edisto Island, South Carolina. We found that terrapins do not meet some of the predictions of OES theory. Both populations exhibited greater variation in egg size among clutches than within, suggesting an absence of optimization except as it may relate to phenotype/habitat matching. We found that egg size appeared to be constrained by more than just pelvic aperture width in Kiawah terrapins but not in the Edisto population. Terrapins at Edisto appeared to exhibit osteokinesis in the caudal region of their shells, which may aid in the oviposition of large eggs.
Javadrashid, Reza; Golamian, Masoud; Shahrzad, Maryam; Hajalioghli, Parisa; Shahmorady, Zahra; Fouladi, Daniel F; Sadrarhami, Shohreh; Akhoundzadeh, Leila
2017-05-01
The study sought to compare the usefulness of 4 imaging modalities in visualizing various intraorbital foreign bodies (IOFBs) of different sizes. Six different materials, including metal, wood, plastic, stone, glass, and graphite, were cut into cylindrical shapes in 4 sizes (dimensions: 0.5, 1, 2, and 3 mm) and placed intraorbitally in the extraocular space of a fresh sheep's head. Four skilled radiologists rated the visibility of the objects individually using plain radiography, spiral computed tomography (CT), magnetic resonance imaging (MRI), and cone-beam computed tomography (CBCT) in accordance with a previously described grading system. Excluding wood, all embedded foreign bodies were best visualized in CT and CBCT images with almost equal accuracies. Wood could only be detected using MRI, and then only when fragments were more than 2 mm in size. There were 3 false-positive MRI reports, suggesting air bubbles as wood IOFBs. Because of lower cost and use of less radiation in comparison with conventional CT, CBCT can be used as the initial imaging technique in cases with suspected IOFBs. The optimal imaging technique for wood IOFBs is yet to be defined. Copyright © 2016 Canadian Association of Radiologists. Published by Elsevier Inc. All rights reserved.
Longest, P Worth; Hindle, Michael
2012-03-01
The objective of this study was to investigate the hygroscopic growth of combination drug and excipient submicrometer aerosols for respiratory drug delivery using in vitro experiments and a newly developed computational fluid dynamics (CFD) model. Submicrometer combination drug and excipient particles were generated experimentally using both the capillary aerosol generator and the Respimat inhaler. Aerosol hygroscopic growth was evaluated in vitro and with CFD in a coiled tube geometry designed to provide residence times and thermodynamic conditions consistent with the airways. The in vitro results and CFD predictions both indicated that the initially submicrometer particles increased in mean size to a range of 1.6-2.5 μm for the 50:50 combination of a non-hygroscopic drug (budesonide) and different hygroscopic excipients. CFD results matched the in vitro predictions to within 10% and highlighted gradual and steady size increase of the droplets, which will be effective for minimizing extrathoracic deposition and producing deposition deep within the respiratory tract. Enhanced excipient growth (EEG) appears to provide an effective technique to increase pharmaceutical aerosol size, and the developed CFD model will provide a powerful design tool for optimizing this technique to produce high efficiency pulmonary delivery.
Improving piezo actuators for nanopositioning tasks
NASA Astrophysics Data System (ADS)
Seeliger, Martin; Gramov, Vassil; Götz, Bernt
2018-02-01
In recent years, numerous applications have emerged on the market with seemingly contradicting demands. On one side, structure sizes have decreased, while on the other, the overall sample size and speed of operation have increased. Although the use of piezoelectric positioning solutions has become a standard in the field of micro- and nanopositioning, surface inspection and manipulation, piezosystem jena has now enhanced performance beyond simple control loop tuning and actuator design. In automated manufacturing machines, a given signal has to be tracked quickly and precisely. However, control systems naturally decrease the ability to follow this signal in real time. piezosystem jena developed a new signal feed-forward system bypassing the PID control. This way, signal tracking errors were reduced by a factor of three compared to a conventionally optimized PID control. Of course, PID values still have to be adjusted to specific conditions, e.g. changing additional mass, to optimize the performance. This can now be done with a new automatic tuning tool designed to analyze the current setup, find the best fitting configuration, and also gather and display theoretical as well as experimental performance data. Thus, the control quality of a mechanical setup can be improved within a few minutes without the need for external calibration equipment. Furthermore, new mechanical optimization techniques that focus not only on the positioning device, but also take the whole setup into account, limit parasitic motion to a few nanometers.
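The idea of feeding the command signal forward around the feedback loop can be illustrated with a minimal discrete-time controller; the first-order plant model, gains, and inverse-plant feed-forward term below are invented for illustration and are not piezosystem jena's implementation.

```python
import numpy as np

# Toy first-order plant x[k+1] = a*x[k] + b*u[k] with PID feedback plus a
# feed-forward term that bypasses the loop by inverting the plant model.
a, b, dt = 0.95, 0.05, 1e-4
kp, ki, kd = 8.0, 400.0, 0.0005

x, integ, prev_err = 0.0, 0.0, 0.0
ref = lambda k: np.sin(2 * np.pi * 50 * k * dt)   # 50 Hz tracking task

for k in range(2000):
    r, r_next = ref(k), ref(k + 1)
    err = r - x
    integ += err * dt
    u_pid = kp * err + ki * integ + kd * (err - prev_err) / dt
    u_ff = (r_next - a * r) / b                   # inverse-plant feed-forward
    x = a * x + b * (u_pid + u_ff)
    prev_err = err

print(f"final tracking error: {abs(ref(2000) - x):.2e}")
```

The feed-forward path supplies the bulk of the drive signal in real time, leaving the PID loop to correct only residual errors, which is why tracking improves without retuning the loop itself.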
Abrego, Guadalupe; Alvarado, Helen L; Egea, Maria A; Gonzalez-Mira, Elizabeth; Calpena, Ana C; Garcia, Maria L
2014-10-01
Pranoprofen (PF)-loaded poly(lactic-co-glycolic) acid (PLGA) nanoparticles (NPs) were optimized and characterized as a means of exploring novel formulations to improve the biopharmaceutical profile of this drug. These systems were prepared using the solvent displacement technique, with polyvinyl alcohol (PVA) as a stabilizer. A factorial design was applied to study the influence of several factors (the pH of the aqueous phase and the stabilizer, polymer and drug concentrations) on the physicochemical properties of the NPs. After optimization, the study was performed at two different aqueous phase pH values (4.50 and 5.50), two concentrations of PF (1.00 and 1.50 mg/mL), three of PVA (5, 10, and 25 mg/mL), and two of PLGA (9.00 and 9.50 mg/mL). These conditions produced NPs with a particle size appropriate for ocular administration (around 350 nm) and high entrapment efficiency (80%). To improve their stability, the optimized NPs were lyophilized. X-ray, FTIR, and differential scanning calorimetry analysis confirmed the drug was dispersed inside the particles. The release profiles of PF from the primary nanosuspensions and rehydrated freeze-dried NPs were similar and exhibited a sustained drug delivery pattern. The ocular tolerance was assessed by an HET-CAM test. No signs of ocular irritancy were detected (score 0). © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
In-situ implant containing PCL-curcumin nanoparticles developed using design of experiments.
Kasinathan, Narayanan; Amirthalingam, Muthukumar; Reddy, Neetinkumar D; Jagani, Hitesh V; Volety, Subrahmanyam M; Rao, Josyula Venkata
2016-01-01
Polymeric delivery systems are useful in reducing pharmacokinetic limitations, viz., poor absorption and rapid elimination, associated with the clinical use of curcumin. Design of experiments is a precise and cost-effective tool for analyzing the effect of independent variables and their interactions on product attributes. The aim was to evaluate the effect of process variables involved in the preparation of curcumin-loaded polycaprolactone (PCL) nanoparticles (CPN). In the present experiment, CPNs were prepared by the emulsification solvent evaporation technique. The effect of the independent variables on the dependent variable was analyzed using design of experiments. Anticancer activity of CPN was studied using the Ehrlich ascites carcinoma (EAC) model. An in-situ implant was developed using PLGA as the polymer. The effect of the independent variables was studied in two stages. First, the effect of drug-polymer ratio, homogenization speed and surfactant concentration on size was studied using a factorial design. The interaction of homogenization speed with homogenization time on the mean particle size of CPN was then evaluated using a central composite design. In the second stage, the effect of these variables (under the conditions optimized for producing particles <500 nm) on percentage drug encapsulation was evaluated using a factorial design. CPN prepared under optimized conditions were able to control the development of EAC in Swiss albino mice and enhanced their survival time. The PLGA-based in-situ implant containing CPN prepared under optimized conditions showed sustained drug release. This implant could be further evaluated for pharmacological activities.
Mosmeri, Hamid; Alaie, Ebrahim; Shavandi, Mahmoud; Dastgheib, Seyed Mohammad Mehdi; Tasharrofi, Saeideh
2017-08-14
Nano-size calcium peroxide (nCaO2) is an appropriate oxygen source which can meet the needs of in situ chemical oxidation (ISCO) for contaminant remediation from groundwater. In the present study, an easy-to-handle procedure for the synthesis of CaO2 nanoparticles has been investigated. Modeling and optimization of the synthesis process were performed by application of response surface methodology (RSM) and the central composite rotatable design (CCRD) method. Synthesized nanoparticles were characterized by XRD and FESEM techniques. The optimal synthesis conditions were found to be 5:1, 570 rpm and 10 °C for the H2O2:CaSO2 ratio, mixing rate and reaction temperature, respectively. Predicted values were in good agreement with experimental results (R² values were 0.915 and 0.965 for CaO2 weight and nanoparticle size, respectively). To study the efficiency of the synthesized nanoparticles for benzene removal from groundwater, batch experiments were performed in biotic and abiotic (chemical removal) conditions with 100, 200, 400, and 800 mg/L of nanoparticles over 70 days. Results indicated that application of 400 mg/L of CaO2 in biotic condition was able to remediate benzene completely from groundwater after 60 days. Furthermore, comparison of biotic and abiotic experiments showed a great potential of microbial stimulation using CaO2 nanoparticles in benzene remediation from groundwater.
A comparison of optimal MIMO linear and nonlinear models for brain machine interfaces
NASA Astrophysics Data System (ADS)
Kim, S.-P.; Sanchez, J. C.; Rao, Y. N.; Erdogmus, D.; Carmena, J. M.; Lebedev, M. A.; Nicolelis, M. A. L.; Principe, J. C.
2006-06-01
The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.
Chordwise implementation of pneumatic artificial muscles to actuate a trailing edge flap
NASA Astrophysics Data System (ADS)
Vocke, R. D., III; Kothera, C. S.; Wereley, N. M.
2018-07-01
This work describes the theoretical design and experimental validation of a rotorcraft-specific trailing edge flap powered by pneumatic artificial muscle actuators. The actuators in this work are co-located outboard on the rotor blade with the flap and arranged with a chordwise orientation where diameter and length restrictions can severely limit the operating range of the system. Techniques for addressing this configuration, such as introducing a bias contraction and mechanism optimization, are discussed and a numerical optimization is performed for an actuation system sized for implementation on a medium utility helicopter rotor. The optimized design achieves ±10° of deflection at 1/rev, and maintains at least ±2° half peak-to-peak deflection out to 10/rev, indicating that the system has the actuation authority and bandwidth necessary for both primary control and vibration/noise reduction. Portions of this paper were presented at the AHS 70th Annual Forum, Montréal, Québec, Canada, May 20–22, 2014.
User's manual for the BNW-I optimization code for dry-cooled power plants. [AMCIRC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Braun, D.J.; Daniel, D.J.; De Mier, W.V.
1977-01-01
This appendix provides a listing, called Program AMCIRC, of the BNW-1 optimization code for determining, for a particular size power plant, the optimum dry cooling tower design using ammonia flow in the heat exchanger tubes. The optimum design is determined by repeating the design of the cooling system over a range of design conditions in order to find the cooling system with the smallest incremental cost. This is accomplished by varying five parameters of the plant and cooling system over ranges of values. These parameters are varied systematically according to techniques that perform pattern and gradient searches. The dry cooling system optimized by program AMCIRC is composed of a condenser/reboiler (condensation of steam and boiling of ammonia), piping system (transports ammonia vapor to, and ammonia liquid from, the dry cooling towers), and circular tower system (vertical one-pass heat exchangers situated in circular configurations with cocurrent ammonia flow in the tubes of the heat exchanger). (LCL)
Feinstein, Wei P; Brylinski, Michal
2015-01-01
Computational approaches have emerged as an instrumental methodology in modern research. For example, virtual screening by molecular docking is routinely used in computer-aided drug discovery. One of the critical parameters for ligand docking is the size of a search space used to identify low-energy binding poses of drug candidates. Currently available docking packages often come with a default protocol for calculating the box size; however, many of these procedures have not been systematically evaluated. In this study, we investigate how the docking accuracy of AutoDock Vina is affected by the selection of a search space. We propose a new procedure for calculating the optimal docking box size that maximizes the accuracy of binding pose prediction against a non-redundant and representative dataset of 3,659 protein-ligand complexes selected from the Protein Data Bank. Subsequently, we use the Directory of Useful Decoys, Enhanced to demonstrate that the optimized docking box size also yields an improved ranking in virtual screening. Binding pockets in both datasets are derived from the experimental complex structures and, additionally, predicted by eFindSite. A systematic analysis of ligand binding poses generated by AutoDock Vina shows that the highest accuracy is achieved when the dimensions of the search space are 2.9 times larger than the radius of gyration of a docking compound. Subsequent virtual screening benchmarks demonstrate that this optimized docking box size also improves compound ranking. For instance, using predicted ligand binding sites, the average enrichment factor calculated for the top 1% (10%) of the screening library is 8.20 (3.28) for the optimized protocol, compared to 7.67 (3.19) for the default procedure. Depending on the evaluation metric, the optimal docking box size gives better ranking in virtual screening for about two-thirds of target proteins. This fully automated procedure can be used to optimize docking protocols in order to improve the ranking accuracy in production virtual screening simulations. Importantly, the optimized search space systematically yields better results than the default method not only for experimental pockets, but also for those predicted from protein structures. A script for calculating the optimal docking box size is freely available at www.brylinski.org/content/docking-box-size. Graphical Abstract: We developed a procedure to optimize the box size in molecular docking calculations. The left panel shows the predicted binding pose of NADP (green sticks) compared to the experimental complex structure of human aldose reductase (blue sticks) using a default protocol. The right panel shows the docking accuracy using an optimized box size.
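The reported rule of thumb is easy to apply in practice. The sketch below computes a ligand's radius of gyration from heavy-atom coordinates and sets a cubic box edge to 2.9 times that value; the coordinates and the equal-mass weighting of atoms are assumptions for illustration.

```python
import numpy as np

def optimal_box_edge(coords: np.ndarray, scale: float = 2.9) -> float:
    """Cubic docking-box edge length from ligand coordinates (N x 3 array),
    using the 2.9 x radius-of-gyration rule; atoms weighted equally."""
    center = coords.mean(axis=0)
    rg = np.sqrt(((coords - center) ** 2).sum(axis=1).mean())
    return scale * rg

lig = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0],
                [3.0, 0.5, 0.0], [4.5, 0.5, 1.0]])  # toy heavy-atom coords (A)
print(f"box edge: {optimal_box_edge(lig):.1f} A per side")
```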
Machine learning for medical images analysis.
Criminisi, A
2016-10-01
This article discusses the application of machine learning for the analysis of medical images. Specifically: (i) We show how a special type of learning models can be thought of as automatically optimized, hierarchically-structured, rule-based algorithms, and (ii) We discuss how the issue of collecting large labelled datasets applies to both conventional algorithms as well as machine learning techniques. The size of the training database is a function of model complexity rather than a characteristic of machine learning methods. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
Small-angle scattering from 3D Sierpinski tetrahedron generated using chaos game
NASA Astrophysics Data System (ADS)
Slyamov, Azat
2017-12-01
We approximate a three-dimensional version of the deterministic Sierpinski gasket (SG), also known as the Sierpinski tetrahedron (ST), by using the chaos game representation (CGR). Structural properties of the fractal, generated by both the deterministic and CGR algorithms, are determined using the small-angle scattering (SAS) technique. We calculate the corresponding monodisperse structure factor of the ST using an optimized Debye formula. We show that scattering from the CGR of the ST recovers basic fractal properties, such as the fractal dimension, iteration number, scaling factor, overall size of the system and the number of units composing the fractal.
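The chaos game for the ST takes only a few lines: starting from an arbitrary point, repeatedly jump halfway (scaling factor 1/2) toward a randomly chosen vertex of the tetrahedron, as sketched below with invented vertex coordinates.

```python
import numpy as np

# Vertices of a regular tetrahedron (alternate corners of a cube).
V = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], dtype=float)

rng = np.random.default_rng(1)
n_points = 100_000
pts = np.empty((n_points, 3))
x = np.zeros(3)                          # arbitrary starting point
for i in range(n_points):
    x = 0.5 * (x + V[rng.integers(4)])   # jump halfway to a random vertex
    pts[i] = x

print(pts.min(axis=0), pts.max(axis=0))  # points fill the Sierpinski tetrahedron
```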
A coherent discrete variable representation method on a sphere
Yu, Hua -Gen
2017-09-05
Here, the coherent discrete variable representation (ZDVR) has been extended for constructing a multidimensional potential-optimized DVR basis on a sphere. In order to deal with the non-constant Jacobian in spherical angles, two direct product primitive basis methods are proposed so that the original ZDVR technique can be properly implemented. The method has been demonstrated by computing the lowest states of a two-dimensional (2D) vibrational model. Results show that the extended ZDVR method gives accurate eigenvalues and exponential convergence with increasing ZDVR basis size.
Optimal design of neural stimulation current waveforms.
Halpern, Mark
2009-01-01
This paper contains results on the design of electrical signals for delivering charge through electrodes to achieve neural stimulation. A generalization of the usual constant current stimulation phase to a stepped current waveform is presented. The electrode current design is then formulated as the calculation of the current step sizes to minimize the peak electrode voltage while delivering a specified charge in a given number of time steps. This design problem can be formulated as a finite linear program, or alternatively by using techniques for discrete-time linear system design.
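A minimal version of that linear program, under an assumed series resistance-capacitance electrode model (R, C, time step, and charge target are all invented), can be written with scipy: minimize the peak voltage subject to per-step voltage bounds and a total-charge equality.

```python
import numpy as np
from scipy.optimize import linprog

# Electrode modeled as series resistance R and capacitance C (assumption).
# Decision variables: N current steps i_1..i_N plus the peak voltage t.
N, dt = 10, 50e-6                 # 10 steps of 50 us
R, C, Q = 1e3, 100e-9, 100e-9     # 1 kOhm, 100 nF, 100 nC target charge

# Voltage after step k: V_k = R*i_k + (dt/C) * sum_{j<=k} i_j  <=  t
A_ub = np.zeros((N, N + 1))
for k in range(N):
    A_ub[k, :k + 1] = dt / C      # charge accumulated on the capacitance
    A_ub[k, k] += R               # resistive drop of the current step
    A_ub[k, N] = -1.0             # ... must stay below the peak variable t
b_ub = np.zeros(N)

A_eq = np.append(np.full(N, dt), 0.0).reshape(1, -1)   # total charge = Q
c = np.append(np.zeros(N), 1.0)                        # minimize peak voltage

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[Q],
              bounds=[(0, None)] * N + [(0, None)])
print(res.x[:N])                  # optimal current step sizes (A)
```

The optimizer naturally tapers the current steps so that the resistive and capacitive voltage contributions equalize across the phase, which is the intuition behind stepped rather than constant-current stimulation.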
Quantum Support Vector Machine for Big Data Classification
NASA Astrophysics Data System (ADS)
Rebentrost, Patrick; Mohseni, Masoud; Lloyd, Seth
2014-09-01
Supervised machine learning is the classification of new data based on already classified training examples. In this work, we show that the support vector machine, an optimized binary classifier, can be implemented on a quantum computer, with complexity logarithmic in the size of the vectors and the number of training examples. In cases where classical sampling algorithms require polynomial time, an exponential speedup is obtained. At the core of this quantum big data algorithm is a nonsparse matrix exponentiation technique for efficiently performing a matrix inversion of the training data inner-product (kernel) matrix.
The detection of fatigue cracks by nondestructive testing methods
NASA Technical Reports Server (NTRS)
Rummel, W. D.; Todd, P. H., Jr.; Frecska, S. A.; Rathke, R. A.
1974-01-01
X-radiographic, penetrant, ultrasonic, eddy current, holographic, and acoustic emission techniques were optimized and applied to the evaluation of 2219-T87 aluminum alloy test specimens. One hundred eighteen specimens containing a total of 328 fatigue cracks were evaluated. The cracks ranged in length from 0.500 inch (1.27 cm) to 0.007 inch (0.018 cm) and in depth from 0.178 inch (0.451 cm) to 0.001 inch (0.003 cm). Specimen thicknesses were nominally 0.060 inch (0.152 cm) and 0.210 inch (0.532 cm), and surface finishes were nominally 32 and 125 rms and 64 and 200 rms, respectively. Specimens were evaluated in the as-milled surface condition, in the chemically milled surface condition and, after proof loading, in a randomized inspection sequence. Results of the nondestructive test (NDT) evaluations were compared with actual crack sizes obtained by measurement of the fractured specimens. Inspection data were then analyzed to provide a statistical basis for determining the threshold crack detection sensitivity (the largest crack size that would be missed) for each of the inspection techniques at a 95% probability and 95% confidence level.
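The 95% probability / 95% confidence criterion has a convenient zero-failure form: if all n cracks at a given size are found, then n = 59 detections in 59 trials demonstrates 95% POD at 95% confidence. The sketch below is the standard binomial argument, not the study's exact statistical treatment.

```python
import math

# Zero-failure demonstration: smallest n (all cracks detected) such that a
# probability of detection >= pod is shown at confidence >= conf, i.e. the
# smallest n with pod**n <= 1 - conf.
def required_trials(pod: float = 0.95, conf: float = 0.95) -> int:
    return math.ceil(math.log(1.0 - conf) / math.log(pod))

print(required_trials())            # 59 -> the "59 of 59" rule for 95/95
print(required_trials(0.90, 0.95))  # 29 -> the familiar "29 of 29" rule
```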
High-resolution non-destructive three-dimensional imaging of integrated circuits.
Holler, Mirko; Guizar-Sicairos, Manuel; Tsai, Esther H R; Dinapoli, Roberto; Müller, Elisabeth; Bunk, Oliver; Raabe, Jörg; Aeppli, Gabriel
2017-03-15
Modern nanoelectronics has advanced to a point at which it is impossible to image entire devices and their interconnections non-destructively because of their small feature sizes and the complex three-dimensional structures resulting from their integration on a chip. This metrology gap implies a lack of direct feedback between design and manufacturing processes, and hampers quality control during production, shipment and use. Here we demonstrate that X-ray ptychography-a high-resolution coherent diffractive imaging technique-can create three-dimensional images of integrated circuits of known and unknown designs with a lateral resolution in all directions down to 14.6 nanometres. We obtained detailed device geometries and corresponding elemental maps, and show how the devices are integrated with each other to form the chip. Our experiments represent a major advance in chip inspection and reverse engineering over the traditional destructive electron microscopy and ion milling techniques. Foreseeable developments in X-ray sources, optics and detectors, as well as adoption of an instrument geometry optimized for planar rather than cylindrical samples, could lead to a thousand-fold increase in efficiency, with concomitant reductions in scan times and voxel sizes.
Gravitational wave searches with pulsar timing arrays: Cancellation of clock and ephemeris noises
NASA Astrophysics Data System (ADS)
Tinto, Massimo
2018-04-01
We propose a data processing technique to cancel monopole and dipole noise sources (such as clock and ephemeris noises, respectively) in pulsar timing array searches for gravitational radiation. These noises are the dominant sources of correlated timing fluctuations in the lower part (≈10⁻⁹-10⁻⁸ Hz) of the gravitational wave band accessible by pulsar timing experiments. After deriving the expressions that reconstruct these noises from the timing data, we estimate the gravitational wave sensitivity of our proposed processing technique to single-source signals to be at least one order of magnitude higher than that achievable by directly processing the timing data from an equal-size array. Since arrays can generate pairs of clock- and ephemeris-free timing combinations that are no longer affected by correlated noises, we implement with them the cross-correlation statistic to search for an isotropic stochastic gravitational wave background. We find the resulting optimal signal-to-noise ratio to be more than one order of magnitude larger than that obtainable by correlating pairs of timing data from arrays of equal size.
Highly porous 3D nanofiber scaffold using an electrospinning technique.
Kim, Geunhyung; Kim, WanDoo
2007-04-01
A successful 3D tissue-engineering scaffold must have a highly porous structure and good mechanical stability. High porosity and optimally designed pore size provide structural space for cell accommodation and migration and enable the exchange of nutrients between the scaffold and the environment. Poly(epsilon-caprolactone) fibers were electrospun using an auxiliary electrode and a chemical blowing agent (BA), and characterized according to porosity, pore size, and mechanical properties. We also investigated the effect of the BA on the electrospinning processability. The growth characteristics of human dermal fibroblast cells cultured in the webs showed good adhesion to the blown web relative to a normal electrospun mat. The blown nanofiber web had good tensile properties and high porosity compared to a typical electrospun nanofiber scaffold. (c) 2006 Wiley Periodicals, Inc.
Impact of dynamic distribution of floc particles on flocculation effect.
Nan, Jun; He, Weipeng; Song, Xinin; Li, Guibai
2009-01-01
Polyaluminum chloride (PAC) was used as the coagulant, with kaolin providing suspended particles in water. Online instruments, including a turbidimeter and a particle counter, were used to monitor the flocculation process. An evaluation model demonstrating the impact on the flocculation effect was established based on the multiple linear regression analysis method. The index weight of each channel quantitatively described how variations of the floc particle population in different size ranges cause the decrease in turbidity. The study showed that floc particles in different size ranges contributed differently to the decrease in turbidity and that the index weight of a channel could effectively indicate the degree to which the dynamic distribution of floc particles affects the flocculation effect. Therefore, the parameter may significantly benefit the development of coagulation and sedimentation techniques as well as optimal coagulant selection.
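A minimal sketch of such an evaluation model, assuming, as the abstract describes, that the turbidity decrement is regressed on the change in floc counts per particle-counter size channel, with the fitted coefficients playing the role of the channels' index weights. All data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_channels = 120, 6     # hypothetical: 6 counter size channels

# X[i, j]: change in floc count for size channel j at sampling time i.
X = rng.normal(size=(n_samples, n_channels))
true_w = np.array([0.8, 0.5, 0.3, 0.1, -0.05, -0.2])   # assumed weights
y = X @ true_w + rng.normal(0, 0.1, n_samples)          # turbidity decrement

# Multiple linear regression: the coefficient on each channel quantifies how
# strongly population change in that size range drives the turbidity decrease.
A = np.column_stack([X, np.ones(n_samples)])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
print("estimated index weights per channel:", np.round(w[:-1], 2))
```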
Zhang, Chun-Hui; Zhang, Chun-Mei; Guo, Guang-Can; Wang, Qin
2018-02-19
At present, most measurement-device-independent quantum key distribution (MDI-QKD) schemes are based on weak coherent sources (WCS) and are limited in transmission distance under realistic experimental conditions, e.g., considering finite-size-key effects. Hence, in this paper we propose a new biased decoy-state scheme using heralded single-photon sources (HSPS) for three-intensity MDI-QKD, where we prepare the decoy pulses only in the X basis and adopt both collective constraints and joint parameter estimation techniques. Compared with former schemes using WCS or HSPS, after implementing full parameter optimization, our scheme gives a distinctly reduced quantum bit error rate in the X basis and thus shows excellent performance, especially when the data size is relatively small.
Constituents of Quality of Life and Urban Size
ERIC Educational Resources Information Center
Royuela, Vicente; Surinach, Jordi
2005-01-01
Do cities have an optimal size? In seeking to answer this question, various theories, including Optimal City Size Theory, the supply-oriented dynamic approach and the city network paradigm, have been put forward that consider a city's population size as a determinant of location costs and benefits. However, the generalised growth in wealth that…
Multi-disciplinary optimization of aeroservoelastic systems
NASA Technical Reports Server (NTRS)
Karpel, Mardechay
1992-01-01
The purpose of the research project was to continue the development of new methods for efficient aeroservoelastic analysis and optimization. The main targets were as follows: to complete the development of analytical tools for the investigation of flutter with large stiffness changes; to continue the work on efficient continuous gust response and sensitivity derivatives; and to advance the techniques for calculating dynamic loads with control and unsteady aerodynamic effects. An efficient and highly accurate mathematical model for time-domain analysis of flutter during which large structural changes occur was developed in cooperation with Carol D. Wieseman of NASA LaRC. The model was based on the second-year work 'Modal Coordinates for Aeroelastic Analysis with Large Local Structural Variations'. The work on continuous gust response was completed. An abstract of the paper 'Continuous Gust Response and Sensitivity Derivatives Using State-Space Models' was submitted for presentation at the 33rd Israel Annual Conference on Aviation and Astronautics, Feb. 1993. The abstract is given in Appendix A. The work extends the optimization model to deal with continuous gust objectives in a way that facilitates their inclusion in the efficient multi-disciplinary optimization scheme. Currently under development is a work designed to extend the analysis and optimization capabilities to loads and stress considerations. The work addresses aircraft dynamic loads in response to impulsive and non-impulsive excitation. It extends the formulations of the mode-displacement and summation-of-forces methods to include modes with significant local distortions, and load modes. An abstract of the paper, 'Structural Dynamic Loads in Response to Impulsive Excitation', is given in Appendix B. Another work performed this year under the Grant was 'Size-Reduction Techniques for the Determination of Efficient Aeroservoelastic Models', given in Appendix C.
Constraining the atmosphere of GJ 1214b using an optimal estimation technique
NASA Astrophysics Data System (ADS)
Barstow, J. K.; Aigrain, S.; Irwin, P. G. J.; Fletcher, L. N.; Lee, J.-M.
2013-09-01
We explore cloudy, extended H2-He atmosphere scenarios for the warm super-Earth GJ 1214b using an optimal estimation retrieval technique. This planet, orbiting an M4.5 star only 13 pc from the Earth, is of particular interest because it lies between the Earth and Neptune in size and may be a member of a new class of planet that is neither terrestrial nor gas giant. Its relatively flat transmission spectrum has so far made atmospheric characterization difficult. The Non-linear optimal Estimator for MultivariatE spectral analySIS (NEMESIS) algorithm is used to explore the degenerate model parameter space for a cloudy, H2-He-dominated atmosphere scenario. Optimal estimation is a data-led approach that allows solutions beyond the range permitted by ab initio equilibrium model atmosphere calculations, and as such prevents restriction by prior expectations. We show that optimal estimation retrieval is a powerful tool for this kind of study, and present an exploration of the degenerate atmospheric scenarios for GJ 1214b. Whilst we find a family of solutions that provide a very good fit to the data, the quality and coverage of these data are insufficient for us to more precisely determine the abundances of cloud and trace gases given an H2-He atmosphere, and we also cannot rule out the possibility of a high molecular weight atmosphere. Future ground- and space-based observations will provide the opportunity to confirm or rule out an extended H2-He atmosphere, but more precise constraints will be limited by intrinsic degeneracies in the retrieval problem, such as variations in cloud top pressure and temperature.
NASA Astrophysics Data System (ADS)
Holmes, Timothy W.
2001-01-01
A detailed tomotherapy inverse treatment planning method is described which incorporates leakage and head scatter corrections during each iteration of the optimization process, allowing these effects to be directly accounted for in the optimized dose distribution. It is shown that the conventional inverse planning method for optimizing incident intensity can be extended to include a 'concurrent' leaf sequencing operation from which the leakage and head scatter corrections are determined. The method is demonstrated using the steepest-descent optimization technique with constant step size and a least-squared error objective. The method was implemented using the MATLAB scientific programming environment and its feasibility demonstrated for 2D test cases simulating treatment delivery using a single coplanar rotation. The results indicate that this modification does not significantly affect convergence of the intensity optimization method when exposure times of individual leaves are stratified to a large number of levels (>100) during leaf sequencing. In general, the addition of aperture-dependent corrections, especially 'head scatter', reduces incident fluence in local regions of the modulated fan beam, resulting in increased exposure times for individual collimator leaves. These local variations can result in 5% or greater local variation in the optimized dose distribution compared to the uncorrected case. The overall efficiency of the modified intensity optimization algorithm is comparable to that of the original unmodified case.
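The core of the intensity optimization described above, steepest descent with a constant step size on a least-squared-error dose objective, can be sketched in a few lines. The dose-deposition matrix and problem sizes below are synthetic placeholders, and the paper's concurrent leaf sequencing with leakage and head-scatter corrections is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
n_beamlets, n_voxels = 50, 200
D = rng.random((n_voxels, n_beamlets)) * 0.1   # synthetic dose-deposition matrix
d_presc = np.ones(n_voxels)                    # prescribed dose per voxel

w = np.zeros(n_beamlets)                       # beamlet intensities / exposure times
step = 1e-3                                    # constant step size, as in the abstract

for _ in range(500):
    residual = D @ w - d_presc                 # current dose error
    grad = 2.0 * D.T @ residual                # gradient of ||D w - d||^2
    w = np.maximum(w - step * grad, 0.0)       # descend, keeping intensities >= 0

print("final least-squares objective:", np.sum((D @ w - d_presc) ** 2))
```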
Pixel-based OPC optimization based on conjugate gradients.
Ma, Xu; Arce, Gonzalo R
2011-01-31
Optical proximity correction (OPC) methods are resolution enhancement techniques (RET) used extensively in the semiconductor industry to improve the resolution and pattern fidelity of optical lithography. In pixel-based OPC (PBOPC), the mask is divided into small pixels, each of which is modified during the optimization process. Two critical issues in PBOPC are the computational complexity of the optimization process and the manufacturability of the optimized mask. Most current OPC optimization methods apply the steepest descent (SD) algorithm to improve image fidelity, augmented by regularization penalties to reduce the complexity of the mask. Although simple to implement, the SD algorithm converges slowly. The existing regularization penalties, moreover, fall short of meeting the mask rule check (MRC) requirements often used in semiconductor manufacturing. This paper focuses on developing OPC optimization algorithms based on the conjugate gradient (CG) method, which exhibits much faster convergence than the SD algorithm. The imaging formation process is represented by the Fourier series expansion model, which approximates the partially coherent system as a sum of coherent systems. In order to obtain more desirable manufacturability properties of the mask pattern, an MRC penalty is proposed to enlarge the linear size of the sub-resolution assist features (SRAFs), as well as the distances between the SRAFs and the main body of the mask. Finally, a projection method is developed to further reduce the complexity of the optimized mask pattern.
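For contrast with steepest descent, the sketch below shows a standard linear conjugate-gradient solver applied to the normal equations of a synthetic least-squares fidelity problem. The paper's actual cost function, with its Fourier-series imaging model and MRC penalty, is considerably richer; this only illustrates why CG converges faster per iteration than SD on the same quadratic.

```python
import numpy as np

def conjugate_gradient(A, b, x0, iters=50):
    """Linear CG for A x = b, with A symmetric positive definite."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)           # exact line search along p
        x += alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)     # conjugate direction update
        p = r_new + beta * p
        r = r_new
    return x

rng = np.random.default_rng(3)
M = rng.random((80, 40))                      # synthetic "imaging" operator
A = M.T @ M + 0.1 * np.eye(40)                # normal equations + regularization
b = M.T @ rng.random(80)
x = conjugate_gradient(A, b, np.zeros(40))
print("residual norm:", np.linalg.norm(A @ x - b))
```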
Karashima, Masatoshi; Kimoto, Kouya; Yamamoto, Katsuhiko; Kojima, Takashi; Ikeda, Yukihiro
2016-10-01
The aim of the present study was to develop a novel solubilization technique consisting of a nano-cocrystal suspension, integrating cocrystal and nanocrystal formulation technologies to maximize solubilization beyond current solubilizing technologies. Monodisperse carbamazepine-saccharin, indomethacin-saccharin, and furosemide-caffeine nano-cocrystal suspensions, as well as a furosemide-cytosine nano-salt suspension, were successfully prepared with particle sizes of less than 300 nm by wet milling with the stabilizers hydroxypropyl methylcellulose and sodium dodecyl sulfate. Interestingly, the properties of the resultant nano-cocrystal suspensions changed dramatically depending on the physicochemical and structural properties of the cocrystals. In the formulation optimization, the concentration and ratio of the stabilizers also influenced the zeta potentials and particle sizes of the resultant nano-cocrystal suspensions. Raman spectroscopic analysis revealed that the crystalline structures of the cocrystals were maintained in the nanosuspensions, which were physically stable for at least one month. Furthermore, their dissolution profiles were significantly improved over current solubilization-enabling technologies, nanocrystals, and cocrystals. In the present study, we demonstrated that nano-cocrystal formulations can be a promising new option for solubilization techniques to improve the absorption of poorly soluble drugs, and can expand the development potential of poorly soluble candidates in the pharmaceutical industry. Copyright © 2016 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mai, Sebastian; Marquetand, Philipp; González, Leticia
2014-08-21
An efficient perturbational treatment of spin-orbit coupling within the framework of high-level multi-reference techniques has been implemented in the most recent version of the COLUMBUS quantum chemistry package, extending the existing fully variational two-component (2c) multi-reference configuration interaction singles and doubles (MRCISD) method. The proposed scheme follows related implementations of quasi-degenerate perturbation theory (QDPT) model space techniques. Our model space is built either from uncontracted, large-scale scalar relativistic MRCISD wavefunctions or based on the scalar-relativistic solutions of the linear-response-theory-based multi-configurational averaged quadratic coupled cluster method (LRT-MRAQCC). The latter approach allows for a consistent, approximately size-consistent and size-extensive treatment of spin-orbit coupling. The approach is described in detail and compared to a number of related techniques. The inherent accuracy of the QDPT approach is validated by comparing cuts of the potential energy surfaces of acrolein and its S, Se, and Te analogues with the corresponding data obtained from matching fully variational spin-orbit MRCISD calculations. The conceptual availability of approximate analytic gradients with respect to geometrical displacements is an attractive feature of the 2c-QDPT-MRCISD and 2c-QDPT-LRT-MRAQCC methods for structure optimization and ab initio molecular dynamics simulations.
Nageeb El-Helaly, Sara; Habib, Basant A; Abd El-Rahman, Mohamed K
2018-07-01
This study aims to investigate factors affecting liposomal systems of weakly basic drugs. A resolution-V fractional factorial design (2^(5-1)) is used as an example of the screening designs that are best employed as a preliminary step before proceeding to detailed factor-effect or optimization studies. Five factors likely to affect liposomal systems of weakly basic drugs were investigated using Amisulpride as a model drug. The factors studied were A: preparation technique; B: phosphatidylcholine (PhC) amount (mg); C: cholesterol:PhC molar ratio; D: hydration volume (ml); and E: sonication type. The levels investigated were, respectively: ammonium sulphate-pH gradient or transmembrane zinc chelation-pH gradient technique; 200 or 400 mg; 0 or 0.5; 10 or 20 ml; and bath or probe sonication. The responses measured were particle size (PS, nm), zeta potential (ZP, mV) and entrapment efficiency percent (EE%). An ion-selective electrode was used as a novel method for measuring unentrapped drug concentration and calculating entrapment efficiency without the need for liposomal separation. The factors mainly affecting the studied responses were cholesterol:PhC ratio and hydration volume for PS, preparation technique for ZP, and preparation technique and hydration volume for EE%. The applied 2^(5-1) design enabled the use of only 16 trial combinations for screening the influence of five factors on liposomal systems of weakly basic drugs. This demonstrates the value of screening experiments before extensive investigation of selected factors in detailed optimization studies. Copyright © 2018 Elsevier B.V. All rights reserved.
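The screening design itself is easy to reproduce: a 2^(5-1) design of resolution V is obtained by aliasing the fifth factor with the four-factor interaction of the others (E = ABCD is a standard generator; the paper does not state which generator was used). The sketch below prints the 16 coded runs.

```python
from itertools import product

# Resolution-V 2^(5-1) design: 16 runs, fifth factor from E = ABCD, so E is
# aliased only with a four-factor interaction (defining relation I = ABCDE).
factors = "ABCDE"
print(" ".join(factors))
for a, b, c, d in product((-1, 1), repeat=4):
    e = a * b * c * d
    print(" ".join(f"{v:+d}" for v in (a, b, c, d, e)))
```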
NASA Astrophysics Data System (ADS)
Bostaph, Ekaterina
This research aimed to study the potential for breaking through object size limitations of current X-ray computed tomography (CT) systems by implementing a limited angle scanning technique. CT stands out among other industrial nondestructive inspection (NDI) methods due to its unique ability to perform 3D volumetric inspection, unmatched micro-focus resolution, and objectivity that allows for automated result interpretation. This work attempts to advance NDI technique to enable microstructural material characterization and structural diagnostics of composite structures, where object sizes often prohibit the application of full 360° CT. Even in situations where the objects can be accommodated within existing micro-CT configuration, achieving sufficient magnification along with full rotation may not be viable. An effort was therefore made to achieve high-resolution scans from projection datasets with limited angular coverage (less than 180°) by developing effective reconstruction algorithms in conjunction with robust scan acquisition procedures. Internal features of inspected objects barely distinguishable in a 2D X-ray radiograph can be enhanced by additional projections that are reconstructed to a stack of slices, dramatically improving depth perception, a technique referred to as digital tomosynthesis. Building on the success of state-of-the-art medical tomosynthesis systems, this work sought to explore the feasibility of this technique for composite structures in aerospace applications. The challenge lies in the fact that the slices generated in medical tomosynthesis are too thick for relevant industrial applications. In order to adapt this concept to composite structures, reconstruction algorithms were expanded by implementation of optimized iterative stochastic methods (capable of reducing noise and refining scan quality) which resulted in better depth perception. The optimal scan acquisition procedure paired with the improved reconstruction algorithm facilitated higher in-plane and depth resolution compared to the clinical application. The developed limited angle tomography technique was demonstrated to be able to detect practically significant manufacturing defects (voids) and structural damage (delaminations) critical to structural integrity of composite parts. Keeping in mind the intended real-world aerospace applications where objects often have virtually unlimited in-plane dimensions, the developed technique of partial scanning could potentially extend the versatility of CT-based inspection and enable game changing NDI systems.
The evolution of island gigantism and body size variation in tortoises and turtles
Jaffe, Alexander L.; Slater, Graham J.; Alfaro, Michael E.
2011-01-01
Extant chelonians (turtles and tortoises) span almost four orders of magnitude of body size, including the startling examples of gigantism seen in the tortoises of the Galapagos and Seychelles islands. However, the evolutionary determinants of size diversity in chelonians are poorly understood. We present a comparative analysis of body size evolution in turtles and tortoises within a phylogenetic framework. Our results reveal a pronounced relationship between habitat and optimal body size in chelonians. We found strong evidence for separate, larger optimal body sizes for sea turtles and island tortoises, the latter showing support for the rule of island gigantism in non-mammalian amniotes. Optimal sizes for freshwater and mainland terrestrial turtles are similar and smaller, although the range of body size variation in these forms is qualitatively greater. The greater number of potential niches in freshwater and terrestrial environments may mean that body size relationships are more complicated in these habitats. PMID:21270022
Curcumin loaded pH-sensitive nanoparticles for the treatment of colon cancer.
Prajakta, Dandekar; Ratnesh, Jain; Chandan, Kumar; Suresh, Subramanian; Grace, Samuel; Meera, Venkatesh; Vandana, Patravale
2009-10-01
The investigation was aimed at designing pH-sensitive, polymeric nanoparticles of curcumin, a natural anti-cancer agent, for the treatment of colon cancer. The objective was to enhance the bioavailability of curcumin while reducing the required dose through selective targeting to the colon. Eudragit S100 was chosen to aid targeting since the polymer dissolves at colonic pH, resulting in selective colonic release of the entrapped drug. A solvent emulsion-evaporation technique was employed to formulate the nanoparticles. Various process parameters were optimized, and the optimized formulation was evaluated for particle size distribution and encapsulation efficiency before being subjected to freeze-drying. The freeze-dried product was characterized for particle size, drug content, thermal behaviour (DSC), and particle morphology. The anti-cancer potential of the formulation was demonstrated by MTT assay in the HT-29 cell line. Nanometric, homogeneous, spherical particles were obtained with an encapsulation efficiency of 72%. Freeze-dried nanoparticles exhibited a negative surface charge, a drug content of >99%, and the presence of drug in amorphous form, which may result in enhanced absorption. The MTT assay demonstrated almost double the inhibition of cancerous cells by the nanoparticles compared to curcumin alone at the concentrations tested. The enhanced action may be attributed to size-influenced improved cellular uptake and may result in a reduction of the overall dose requirement. The results indicate the potential for in vivo studies to establish the clinical application of the formulation.
Yang, Fei; Chen, Chen; Zhou, QianRong; Gong, YiMing; Li, RuiXue; Li, ChiChi; Klämpfl, Florian; Freund, Sebastian; Wu, XingWen; Sun, Yang; Li, Xiang; Schmidt, Michael; Ma, Duan; Yu, YouCheng
2017-01-01
Fabricating Ti alloy based dental implants with a defined porous scaffold structure is a promising strategy for improving the osteoinduction of implants. In this study, we use the Laser Beam Melting (LBM) 3D printing technique to fabricate porous Ti6Al4V dental implant prototypes with three controlled pore sizes (200, 350 and 500 μm). The mechanical stress distribution in the surrounding bone tissue is characterized by photoelastography and associated finite element simulation. For in-vitro studies, experiments on the implants' biocompatibility and osteogenic capability are conducted to evaluate the cellular response correlated to the porous structure. As preliminary results, porous structured implants show lower stress-shielding of the surrounding bone at the implant neck and a denser stress distribution at the bottom site compared to the reference implant. From the cell proliferation tests and the immunofluorescence images, the 350 and 500 μm pore sized implants demonstrate better biocompatibility in terms of cell growth, migration and adhesion. Osteogenic gene expression of the 350 μm group is significantly increased, along with the ALP activity. All these suggest that a pore size of 350 μm provides an optimal potential for improving the mechanical shielding to the surrounding bones and the osteoinduction of the implant itself. PMID:28350007
Simonoska Crcarevska, Maja; Dimitrovska, Aneta; Sibinovska, Nadica; Mladenovska, Kristina; Slavevska Raicki, Renata; Glavas Dodov, Marija
2015-07-15
Microsponges drug delivery system (MDDC) was prepared by double emulsion-solvent-diffusion technique using rotor-stator homogenization. Quality by design (QbD) concept was implemented for the development of MDDC with potential to be incorporated into semisolid dosage form (gel). Quality target product profile (QTPP) and critical quality attributes (CQA) were defined and identified, accordingly. Critical material attributes (CMA) and Critical process parameters (CPP) were identified using quality risk management (QRM) tool, failure mode, effects and criticality analysis (FMECA). CMA and CPP were identified based on results obtained from principal component analysis (PCA-X&Y) and partial least squares (PLS) statistical analysis along with literature data, product and process knowledge and understanding. FMECA identified amount of ethylcellulose, chitosan, acetone, dichloromethane, span 80, tween 80 and water ratio in primary/multiple emulsions as CMA and rotation speed and stirrer type used for organic solvent removal as CPP. The relationship between identified CPP and particle size as CQA was described in the design space using design of experiments - one-factor response surface method. Obtained results from statistically designed experiments enabled establishment of mathematical models and equations that were used for detailed characterization of influence of identified CPP upon MDDC particle size and particle size distribution and their subsequent optimization. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Xiao, Fan; Chen, Zhijun; Chen, Jianguo; Zhou, Yongzhang
2016-05-01
In this study, a novel batch sliding window (BSW) based singularity mapping approach was proposed. Compared to the traditional sliding window (SW) technique, which suffers from the empirical predetermination of a fixed maximum window size and the outlier sensitivity of least-squares (LS) linear regression, the BSW based singularity mapping approach automatically determines the optimal size of the largest window for each estimated position and utilizes robust linear regression (RLR), which is insensitive to outlier values. In the case study, tin geochemical data from Gejiu, Yunnan, were processed by the BSW based singularity mapping approach. The results show that the BSW approach can improve the accuracy of the calculated singularity exponent values owing to the determination of the optimal maximum window size. The use of the RLR method in the BSW approach smooths the distribution of singularity index values, with few, if any, of the highly fluctuating noise-like values that usually make a singularity map rough and discontinuous. Furthermore, the Student's t-statistic diagram indicates a strong spatial correlation between high geochemical anomaly and known tin polymetallic deposits. The target areas within high tin geochemical anomalies probably have much higher potential for the exploration of new tin polymetallic deposits than other areas, particularly areas that show strong tin geochemical anomalies but in which no tin polymetallic deposits have yet been found.
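The heart of singularity mapping is a log-log fit: on a 2D map the mean concentration within a window of size eps scales as eps^(alpha - 2), so the singularity index alpha follows from the slope of a regression of log(mean) on log(size). The sketch below uses a simple Huber-weighted iteratively reweighted least squares as a stand-in for the authors' RLR; the data, window sizes, and IRLS details are illustrative only.

```python
import numpy as np

def huber_line_fit(x, y, delta=1.0, iters=20):
    """Robust slope/intercept via iteratively reweighted least squares."""
    A = np.column_stack([x, np.ones_like(x)])
    w = np.ones_like(y)
    for _ in range(iters):
        coef, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
        r = y - A @ coef
        s = np.median(np.abs(r)) / 0.6745 + 1e-12        # robust scale (MAD)
        w = np.sqrt(np.minimum(1.0, delta * s / (np.abs(r) + 1e-12)))
    return coef                                           # (slope, intercept)

# Synthetic windows: mean concentration mu(eps) ~ eps**(alpha - 2), one outlier.
eps = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
alpha_true = 1.6
mu = eps ** (alpha_true - 2.0)
mu[2] *= 3.0                                              # outlier window

slope, _ = huber_line_fit(np.log(eps), np.log(mu))
print("estimated singularity index:", round(slope + 2.0, 2))
```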
Pandey, Sonia; Patel, Payal; Gupta, Arti
2018-05-21
In the present investigation a factorial design approach was applied to develop solid lipid nanoparticles (SLN) of Glibenclamide (GLB), a poorly water-soluble drug (BCS class II) used in the treatment of type 2 diabetes. The prime objectives of this experiment were to optimize the SLN formulation of Glibenclamide and improve the therapeutic effectiveness of the developed formulation. Glibenclamide-loaded SLNs (GLB-SLN) were fabricated by a high-speed homogenization technique. A 3² factorial design approach was employed to assess the influence of two independent variables, namely the amounts of Poloxamer 188 and glyceryl monostearate, on entrapment efficiency (%EE, Y1), particle size (nm, Y2), and % drug release at 8 h (Q8, Y3) and 24 h (Q24, Y4) of the prepared SLNs. Differential scanning calorimetry analysis revealed the compatibility of the drug with the lipid matrix and surfactant, while transmission electron and scanning electron microscopy studies indicated the size and shape of the SLN. The entrapment efficiency, particle size, Q8 and Q24 of the optimized SLNs were 88.93%, 125 nm, 31.12±0.951% and 86.07±1.291%, respectively. The optimized GLB-SLN formula was derived from an overlay plot. Three-dimensional response surface plots and regression equations confirmed the influence of the selected independent variables on the measured responses. In vivo testing of the GLB-SLN in diabetic albino rats demonstrated a significant antidiabetic effect of GLB-SLN. The hypoglycemic effect obtained by GLB-SLN remained significantly higher than that given by the drug alone and the marketed formulation, further confirming the higher therapeutic effectiveness of the GLB-SLN formulation. Our findings suggest the feasibility of the investigated system for oral administration of Glibenclamide. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Optimal Body Size and Limb Length Ratios Associated with 100-m Personal-Best Swim Speeds.
Nevill, Alan M; Oxford, Samuel W; Duncan, Michael J
2015-08-01
This study aims to identify optimal body size and limb segment length ratios associated with 100-m personal-best (PB) swim speeds in children and adolescents. Fifty national-standard youth swimmers (21 males and 29 females age 11-16 yr; mean ± SD age, 13.5 ± 1.5 yr) participated in the study. Anthropometry comprised stature; body mass; skinfolds; maturity offset; upper arm, lower arm, and hand lengths; and upper leg, lower leg, and foot lengths. Swimming performance was taken as the PB time recorded in competition for the 100-m freestyle swim. To identify the optimal body size and body composition components associated with 100-m PB swim speeds (having controlled for age and maturity offset), we adopted a multiplicative allometric log-linear regression model, which was refined using backward elimination. Lean body mass was the single most important whole-body characteristic. Stature and body mass did not contribute to the model, suggesting that the advantage of longer levers was limb-specific rather than a general whole-body advantage. The allometric model also identified that having greater limb segment length ratios [i.e., arm ratio = (lower arm)/(upper arm); foot-to-leg ratio = (foot)/(lower leg)] was key to PB swim speeds. It is only by adopting multiplicative allometric models that the above-mentioned ratios could have been derived. The advantage of having a greater lower arm is clear; however, having a shorter upper arm (achieved by adopting a closer elbow angle technique or by possessing a naturally endowed shorter upper arm), at the same time, is a new insight into swimming performance. A greater foot-to-lower-leg ratio suggests that a combination of larger feet and shorter lower leg length may also benefit PB swim speeds.
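The multiplicative allometric model becomes an ordinary linear regression after a log transform, which is what makes exponents on limb-ratio terms directly estimable. A sketch with synthetic predictors (the variable names and exponents are invented for illustration, not the study's fitted values):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50
lean_mass = rng.uniform(30, 60, n)      # kg
arm_ratio = rng.uniform(0.9, 1.2, n)    # (lower arm) / (upper arm)
foot_leg = rng.uniform(0.9, 1.1, n)     # (foot) / (lower leg)

# Multiplicative model: speed = a * mass^b1 * arm^b2 * foot^b3 * error.
speed = (0.3 * lean_mass**0.4 * arm_ratio**0.8 * foot_leg**0.5
         * np.exp(rng.normal(0, 0.03, n)))

# Taking logs turns it into a linear model, fitted by ordinary least squares.
X = np.column_stack([np.log(lean_mass), np.log(arm_ratio),
                     np.log(foot_leg), np.ones(n)])
coef, *_ = np.linalg.lstsq(X, np.log(speed), rcond=None)
print("estimated exponents b1, b2, b3:", np.round(coef[:3], 2))
```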
Ogunwuyi, O; Adesina, S; Akala, E O
2015-03-01
We report here our efforts on the development of stealth biodegradable crosslinked poly-ε-caprolactone nanoparticles by free radical dispersion polymerization suitable for the delivery of bioactive agents. The uniqueness of the dispersion polymerization technique is that it is surfactant free, thereby obviating the problems known to be associated with the use of surfactants in the fabrication of nanoparticles for biomedical applications. Aided by statistical software for experimental design and analysis, we used a D-optimal mixture statistical experimental design to generate thirty batches of nanoparticles prepared by varying the proportions of the components (poly-ε-caprolactone macromonomer, crosslinker, initiators and stabilizer) in an acetone/water system. The morphology of the nanoparticles was examined using scanning electron microscopy (SEM). Particle size and zeta potential were measured by dynamic light scattering (DLS). Scheffé polynomial models were generated to predict particle size (nm) and particle surface zeta potential (mV) as functions of the proportions of the components. Solutions were returned from simultaneous optimization of the response variables for component combinations to (a) minimize nanoparticle size (small nanoparticles are internalized into diseased organs easily, avoid reticuloendothelial clearance and lung filtration) and (b) maximize the magnitude of the negative zeta potential, as it is known that, following injection into the blood stream, nanoparticles with a positive zeta potential pose a threat of causing transient embolism and rapid clearance compared to negatively charged particles. In vitro availability isotherms show that the nanoparticles sustained the release of docetaxel for 72 to 120 hours depending on the formulation. The data show that nanotechnology platforms for controlled delivery of bioactive agents can be developed based on these nanoparticles.
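Scheffé polynomial models for mixtures have no intercept: a quadratic one consists of linear blending terms plus pairwise interaction terms in the component proportions (which sum to one). A sketch of fitting such a model by least squares, with random blends standing in for the paper's D-optimal design points and a made-up response:

```python
import numpy as np
from itertools import combinations

def scheffe_quadratic_terms(X):
    """Expand mixture proportions (rows summing to 1) into Scheffe terms."""
    pairs = [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
    return np.column_stack([X] + pairs)      # no intercept, by construction

rng = np.random.default_rng(6)
X = rng.dirichlet(np.ones(4), size=30)       # 30 blends of 4 components
y = (120 * X[:, 0] + 90 * X[:, 1] + 60 * X[:, 2] + 150 * X[:, 3]
     + 40 * X[:, 0] * X[:, 1] + rng.normal(0, 1.0, 30))   # e.g. size in nm

Z = scheffe_quadratic_terms(X)
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
print("linear blending coefficients:", np.round(beta[:4], 1))
```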
Irshad, Rabia; Tahir, Kamran; Li, Baoshan; Ahmad, Aftab; R Siddiqui, Azka; Nazir, Sadia
2017-05-01
A green approach to fabricating nanoparticles has evolved into a revolutionary discipline. Eco-compatible reaction set-ups, use of non-toxic materials and production of highly active biological and photocatalytic products are a few benefits of this greener approach. Here, we introduce a green method to synthesize Fe oxide NPs using Punica granatum peel extract. The formation of Fe oxide NPs was optimized using different concentrations of peel extract (20 mL, 40 mL and 60 mL) to achieve small size and better morphology. The results indicate that the FeNPs obtained using the 40 mL concentration of peel extract possess the smallest size. The morphology, size and crystallinity of the NPs were confirmed using various techniques, i.e. UV-Vis spectroscopy, X-ray diffraction, scanning electron microscopy and electron diffraction spectroscopy. The biochemicals responsible for reduction and stabilization of the FeNPs were identified by FT-IR analysis. The biogenic FeNPs were tested for their size-dependent antibacterial activity. The biogenic FeNPs prepared at the 40 mL extract concentration exhibited the strongest antibacterial activity against Pseudomonas aeruginosa, i.e. 22 (±0.5) mm, compared with the FeNPs at 20 mL and 60 mL extract concentrations, i.e. 18 (±0.4) mm and 14 (±0.3) mm, respectively. The optimized FeNPs with 40 mL peel extract are not only highly active for ROS generation but also show no hemolytic activity. Thus, FeNPs synthesized using the greener approach are found to have high antibacterial activity along with biocompatibility. This high antibacterial activity can be attributed to their small size and large surface area. Copyright © 2017 Elsevier B.V. All rights reserved.
Formulation of a dry powder influenza vaccine for nasal delivery.
Garmise, Robert J; Mar, Kevin; Crowder, Timothy M; Hwang, C Robin; Ferriter, Matthew; Huang, Juan; Mikszta, John A; Sullivan, Vincent J; Hickey, Anthony J
2006-03-10
The purpose of this research was to prepare a dry powder vaccine formulation containing whole inactivated influenza virus (WIIV) and a mucoadhesive compound suitable for nasal delivery. Powders containing WIIV and either lactose or trehalose were produced by lyophilization. A micro-ball mill was used to reduce the lyophilized cake to sizes suitable for nasal delivery. Chitosan flakes were reduced in size using a cryo-milling technique. Milled powders were sieved to aggregate sizes between 45 and 125 µm and characterized for particle size and distribution, morphology, and flow properties. Powders were blended in the micro-ball mill without the ball. Lyophilization followed by milling produced irregularly shaped, polydisperse particles with a median primary particle diameter of approximately 21 µm and a yield of approximately 37% of particles in the 45 to 125 µm particle size range. Flow properties of lactose and trehalose powders after lyophilization followed by milling and sieving were similar. Cryo-milling produced a small yield of particles in the desired size range (<10%). Lyophilization followed by milling and sieving produced particles suitable for nasal delivery with different physicochemical properties as a function of processing conditions and components of the formulation. Further optimization of particle size and morphology is required for these powders to be suitable for clinical evaluation.
Effect of crowd size on patient volume at a large, multipurpose, indoor stadium.
De Lorenzo, R A; Gray, B C; Bennett, P C; Lamparella, V J
1989-01-01
A prediction of the patient volume expected at "mass gatherings" is desirable in order to provide optimal on-site emergency medical care. While several methods of predicting patient loads have been suggested, a reliable technique has not been established. This study examines the frequency of medical emergencies at the Syracuse University Carrier Dome, a 50,500-seat indoor stadium. Patient volume and level of care at collegiate basketball and football games, as well as rock concerts, over a 7-year period were examined and tabulated. This information was analyzed using simple regression and nonparametric statistical methods to determine the level of correlation between crowd size and patient volume. These analyses demonstrated no statistically significant increase in patient volume with increasing crowd size for basketball and football events. There was a small but statistically significant increase in patient volume with increasing crowd size for concerts. A comparison of similar crowd sizes for each of the three event types showed that patient frequency is greatest for concerts and smallest for basketball. The study suggests that crowd size alone has only a minor influence on patient volume at any given event. Structuring medical services based solely on expected crowd size, without considering other influences such as event type and duration, may give poor results.
Maddineni, Sindhuri; Battu, Sunil Kumar; Morott, Joe; Majumdar, Soumyajit; Repka, Michael A.
2014-01-01
The objective of the present study was to develop techniques for an abuse-deterrent (AD) platform utilizing a hot melt extrusion (HME) process. Formulation optimization was accomplished by utilizing a Box-Behnken design of experiments to determine the effect of three formulation factors, PolyOx™ WSR301, Benecel™ K15M, and Carbopol 71G, each studied at three levels, on the tamper-resistance (TR) attributes of the produced melt-extruded pellets. A response surface methodology was utilized to identify the optimized formulation. Lidocaine hydrochloride was used as a model drug, and suitable formulation ingredients were employed as carrier matrices and processing aids. All of the formulations were evaluated for TR attributes such as particle size post-milling, gelling, and percentage of drug extracted in water and alcohol. All of the DOE formulations demonstrated sufficient hardness and elasticity and could not be reduced to fine particles (<150 µm), a desirable feature to prevent snorting. In addition, all of the formulations exhibited good gelling tendency in water with minimal extraction of drug in the aqueous medium. Moreover, Benecel™ K15M in combination with PolyOx™ WSR301 could be utilized to produce pellets with TR potential. HME has been demonstrated to be a viable technique with the potential to develop novel abuse-deterrent formulations. PMID:24433429
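For reference, the three-factor Box-Behnken design used above comprises the twelve runs in which each pair of factors sits at ±1 while the remaining factor is held at its centre level, plus replicated centre points. A small generator in coded units (generic factor order; the centre-point count is an assumption):

```python
import numpy as np
from itertools import combinations, product

def box_behnken(k, center_runs=3):
    """Box-Behnken design for k three-level factors, coded -1 / 0 / +1."""
    runs = []
    for i, j in combinations(range(k), 2):        # each factor pair at +/-1 ...
        for a, b in product((-1, 1), repeat=2):   # ... others held at centre
            row = [0] * k
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0] * k] * center_runs               # replicated centre points
    return np.array(runs)

print(box_behnken(3))                              # 12 edge runs + 3 centre runs
```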
A global optimization algorithm for protein surface alignment
2010-01-01
Background A relevant problem in drug design is the comparison and recognition of protein binding sites. Binding site recognition is generally based on geometry, often combined with physico-chemical properties of the site, since the conformation, size and chemical composition of the protein surface are all relevant for the interaction with a specific ligand. Several matching strategies have been designed for the recognition of protein-ligand binding sites and of protein-protein interfaces, but the problem cannot be considered solved. Results In this paper we propose a new method for local structural alignment of protein surfaces based on continuous global optimization techniques. Given the three-dimensional structures of two proteins, the method finds the isometric transformation (rotation plus translation) that best superimposes active regions of the two structures. We draw our inspiration from the well-known Iterative Closest Point (ICP) method for three-dimensional (3D) shape registration. Our main contribution is the adoption of a controlled random search as a more efficient global optimization approach, along with a new dissimilarity measure. The reported computational experience and comparisons show the viability of the proposed approach. Conclusions Our method performs well in detecting similarity in binding sites when it in fact exists. In the future we plan to do a more comprehensive evaluation of the method by considering large datasets of non-redundant proteins and applying a clustering technique to the results of all comparisons to classify binding sites. PMID:20920230
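The ICP scheme this method builds on alternates two steps: match every point to its closest counterpart, then compute the optimal rigid superposition of the matches (the Kabsch/SVD solution). A minimal sketch on synthetic 3D point sets; the paper's actual contribution, the controlled random search wrapper and its dissimilarity measure, is not reproduced here.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(P, Q):
    """One ICP iteration: closest-point matching, then Kabsch superposition."""
    matches = Q[cKDTree(Q).query(P)[1]]           # nearest neighbour in Q
    p0, q0 = P.mean(axis=0), matches.mean(axis=0)
    H = (P - p0).T @ (matches - q0)               # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q0 - R @ p0
    return P @ R.T + t                            # apply rotation + translation

rng = np.random.default_rng(7)
Q = rng.random((100, 3))                          # "surface" point cloud
theta = 0.3                                       # modest misalignment
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
P = Q[:60] @ Rz.T + np.array([0.10, -0.05, 0.02])  # subset, rotated + shifted

for _ in range(30):
    P = icp_step(P, Q)
print("rms distance after ICP:", np.sqrt((cKDTree(Q).query(P)[0] ** 2).mean()))
```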
Space Reclamation for Uncoordinated Checkpointing in Message-Passing Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Wang, Yi-Min
1993-01-01
Checkpointing and rollback recovery are techniques that can provide efficient recovery from transient process failures. In a message-passing system, the rollback of a message sender may cause the rollback of the corresponding receiver, and the system needs to roll back to a consistent set of checkpoints called recovery line. If the processes are allowed to take uncoordinated checkpoints, the above rollback propagation may result in the domino effect which prevents recovery line progression. Traditionally, only obsolete checkpoints before the global recovery line can be discarded, and the necessary and sufficient condition for identifying all garbage checkpoints has remained an open problem. A necessary and sufficient condition for achieving optimal garbage collection is derived and it is proved that the number of useful checkpoints is bounded by N(N+1)/2, where N is the number of processes. The approach is based on the maximum-sized antichain model of consistent global checkpoints and the technique of recovery line transformation and decomposition. It is also shown that, for systems requiring message logging to record in-transit messages, the same approach can be used to achieve optimal message log reclamation. As a final topic, a unifying framework is described by considering checkpoint coordination and exploiting piecewise determinism as mechanisms for bounding rollback propagation, and the applicability of the optimal garbage collection algorithm to domino-free recovery protocols is demonstrated.
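The consistency notion behind recovery lines can be made concrete with vector clocks: a cut with one checkpoint per process is consistent exactly when no process has recorded the receipt of a message that, at the sender's checkpoint, has not yet been sent. The check below is the standard textbook test for a consistent cut, not the thesis's optimal garbage-collection algorithm.

```python
def consistent_cut(vclocks):
    """vclocks[i] is the vector clock at process i's chosen checkpoint.

    The cut is consistent iff no process j has observed more of process i's
    events than process i itself had recorded at its own checkpoint:
        vclocks[i][i] >= vclocks[j][i]  for all i, j.
    """
    n = len(vclocks)
    return all(vclocks[i][i] >= vclocks[j][i]
               for i in range(n) for j in range(n))

# P1's checkpoint reflects a message from P0 that P0's checkpoint has not yet
# sent (an orphan message), so the first cut cannot be a recovery line.
print(consistent_cut([[0, 0], [1, 3]]))   # False: rollback would propagate
print(consistent_cut([[2, 0], [1, 3]]))   # True: a valid recovery line
```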
The Effect of Deposition Conditions on Adhesion Strength of Ti and Ti6Al4V Cold Spray Splats
NASA Astrophysics Data System (ADS)
Goldbaum, Dina; Shockley, J. Michael; Chromik, Richard R.; Rezaeian, Ahmad; Yue, Stephen; Legoux, Jean-Gabriel; Irissou, Eric
2012-03-01
Cold spray is a complex process in which many parameters have to be considered in order to achieve optimized material deposition and properties. In the cold spray process, deposition velocity influences the degree of material deformation and material adhesion. While most materials can be easily deposited at relatively low deposition velocity (<700 m/s), this is not the case for high yield strength materials like Ti and its alloys. In the present study, we evaluate the effects of deposition velocity, powder size, particle position in the gas jet, gas temperature, and substrate temperature on the adhesion strength of cold sprayed Ti and Ti6Al4V splats. A micromechanical test technique was used to shear individual splats of Ti or Ti6Al4V and measure their adhesion strength. The splats were deposited onto Ti or Ti6Al4V substrates over a range of deposition conditions with either nitrogen or helium as the propelling gas. Splat adhesion testing coupled with microstructural characterization was used to define the strength, type and continuity of the bonded interface between splat and substrate material. The results demonstrated that optimization of spray conditions makes it possible to obtain splats with continuous bonding along the splat/substrate interface and measured adhesion strengths approaching the shear strength of the bulk material. The parameters shown to improve splat adhesion included increasing the splat deposition velocity well above the critical deposition velocity of the tested material, increasing the temperature of both the powder and the substrate material, decreasing the powder size, and optimizing the flow dynamics of the cold spray gun nozzle. Through comparisons to the literature, the adhesion strength of Ti splats measured with the splat adhesion technique correlated well with the cohesion strength of Ti coatings deposited under similar conditions and measured with the tubular coating tensile (TCT) test.
Automating Structural Analysis of Spacecraft Vehicles
NASA Technical Reports Server (NTRS)
Hrinda, Glenn A.
2004-01-01
A major effort within NASA's vehicle analysis discipline has been to automate structural analysis and sizing optimization during conceptual design studies of advanced spacecraft. Traditional spacecraft structural sizing has involved detailed finite element analysis (FEA) requiring large degree-of-freedom (DOF) finite element models (FEM). Creation and analysis of these models can be time consuming and limit model size during conceptual design. The goal is to find an optimal design that meets the mission requirements but produces the lightest structure. A structural sizing tool called HyperSizer has been successfully used in the conceptual design phase of a reusable launch vehicle and a planetary exploration spacecraft. The program couples with FEA to enable system-level performance assessments and weight predictions, including design optimization of material selections and sizing of spacecraft members. The software's analysis capabilities are based on established aerospace structural methods for strength, stability and stiffness that produce adequately sized members and reliable structural weight estimates. The software also helps to identify potential structural deficiencies early in the conceptual design so changes can be made without wasted time. HyperSizer's automated analysis and sizing optimization increases productivity and brings standardization to a systems study. These benefits are illustrated by examining two different types of conceptual spacecraft designed using the software: a hypersonic air-breathing, single stage to orbit (SSTO), reusable launch vehicle (RLV) and an aeroshell for a planetary exploration vehicle used for aerocapture at Mars. By presenting the two different vehicle types, the software's flexibility is demonstrated, with an emphasis on reducing aeroshell structural weight. Member sizes, concepts and material selections are discussed, as well as the HyperSizer-based analysis methods used in optimizing the structure and the design trades required to optimize structural weight.
NASA Astrophysics Data System (ADS)
Ödén, Jakob; Toma-Dasu, Iuliana; Yu, Cedric X.; Feigenberg, Steven J.; Regine, William F.; Mutaf, Yildirim D.
2013-07-01
The GammaPod™, manufactured by Xcision Medical Systems, is a novel stereotactic breast irradiation device. It consists of a hemispherical source carrier containing 36 Cobalt-60 sources, a tungsten collimator with two built-in collimation sizes, a dynamically controlled patient support table, and a breast immobilization cup that also functions as the stereotactic frame for the patient. The dosimetric output of the GammaPod™ was modelled using a Monte Carlo based treatment planning system. For the comparison, three-dimensional (3D) models of commonly used intra-cavitary breast brachytherapy techniques utilizing single-lumen balloon, multi-lumen balloon and peripheral-catheter multi-lumen implant devices were created, and the corresponding 3D dose calculations were performed using the American Association of Physicists in Medicine Task Group-43 formalism. Dose distributions for clinically relevant target volumes were optimized using the dosimetric goals set forth in the National Surgical Adjuvant Breast and Bowel Project Protocol B-39. For clinical scenarios assuming similar target sizes and proximity to critical organs, dose coverage, dose fall-off profiles beyond the target and skin doses at given distances beyond the target were calculated for GammaPod™ and compared with the doses achievable by the brachytherapy techniques. The dosimetric goals within the protocol guidelines were fulfilled for all target sizes and irradiation techniques. For central targets, at small distances from the target edge (up to approximately 1 cm) the brachytherapy techniques generally have a steeper dose fall-off gradient compared to GammaPod™, while at longer distances (more than about 1 cm) the relation is generally the opposite. For targets close to the skin, the relative skin doses were considerably lower for GammaPod™ than for any of the brachytherapy techniques. In conclusion, GammaPod™ allows adequate and more uniform dose coverage of centrally and peripherally located targets with an acceptable dose fall-off and a lower relative skin dose than the brachytherapy techniques considered in this study.
Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms
NASA Astrophysics Data System (ADS)
Lee, Chien-Cheng; Huang, Shin-Sheng; Shih, Cheng-Yuan
2010-12-01
This paper presents a novel and effective method for facial expression recognition covering happiness, disgust, fear, anger, sadness, surprise, and the neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select effective Gabor features, a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as the learner in the boosting algorithm. RDA combines the strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA), solving the small-sample-size and ill-posed problems that QDA and LDA suffer from through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experiment results demonstrate that our approach can accurately and robustly recognize facial expressions.
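The RDA regularization itself is compact: each class covariance is blended toward the pooled covariance (interpolating from QDA toward LDA) and then shrunk toward a scaled identity to handle small samples. A sketch following Friedman's formulation, with simplified sample weighting and synthetic data; the lambda and gamma values below stand in for the parameters the paper tunes with PSO.

```python
import numpy as np

def rda_covariances(class_covs, class_ns, lam, gamma):
    """Regularized discriminant analysis covariances (Friedman-style).

    lam   blends each class covariance toward the pooled one (QDA -> LDA);
    gamma shrinks the blend toward a scaled identity for stability.
    """
    N = sum(class_ns)
    pooled = sum(n * S for n, S in zip(class_ns, class_covs)) / N
    p = pooled.shape[0]
    out = []
    for S in class_covs:
        Sk = (1 - lam) * S + lam * pooled
        Sk = (1 - gamma) * Sk + gamma * (np.trace(Sk) / p) * np.eye(p)
        out.append(Sk)
    return out

rng = np.random.default_rng(8)
X1, X2 = rng.normal(size=(12, 5)), rng.normal(size=(9, 5))   # small samples
covs = rda_covariances([np.cov(X1.T), np.cov(X2.T)], [12, 9], lam=0.5, gamma=0.1)
print("condition numbers:", [round(float(np.linalg.cond(S)), 1) for S in covs])
```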
Global optimization of cholic acid aggregates
NASA Astrophysics Data System (ADS)
Jójárt, Balázs; Viskolcz, Béla; Poša, Mihalj; Fejer, Szilard N.
2014-04-01
In spite of recent investigations into the potential pharmaceutical importance of bile acids as drug carriers, the structure of bile acid aggregates is largely unknown. Here, we used global optimization techniques to find the lowest energy configurations for clusters composed between 2 and 10 cholate molecules, and evaluated the relative stabilities of the global minima. We found that the energetically most preferred geometries for small aggregates are in fact reverse micellar arrangements, and the classical micellar behaviour (efficient burial of hydrophobic parts) is achieved only in systems containing more than five cholate units. Hydrogen bonding plays a very important part in keeping together the monomers, and among the size range considered, the most stable structure was found to be the decamer, having 17 hydrogen bonds. Molecular dynamics simulations showed that the decamer has the lowest dissociation propensity among the studied aggregation numbers.
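Cluster structures like these are commonly located with basin-hopping-style global optimization: random perturbations followed by local minimization, accepting moves between minima. The sketch below shows the shape of such a search with scipy's basinhopping on a toy Lennard-Jones cluster in reduced units, not the cholate force field used in the study.

```python
import numpy as np
from scipy.optimize import basinhopping

def lj_energy(flat_coords):
    """Total Lennard-Jones energy of a cluster (reduced units)."""
    x = flat_coords.reshape(-1, 3)
    d = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
    r = d[np.triu_indices(len(x), k=1)]        # unique pair distances
    return np.sum(4.0 * (r**-12 - r**-6))

rng = np.random.default_rng(9)
x0 = rng.uniform(-1.0, 1.0, size=7 * 3)        # random 7-atom starting cluster
res = basinhopping(lj_energy, x0, niter=200,
                   minimizer_kwargs={"method": "L-BFGS-B"})
print("lowest energy found:", res.fun)         # LJ7 global minimum ~ -16.505
```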
Neurocomputing strategies in decomposition based structural design
NASA Technical Reports Server (NTRS)
Szewczyk, Z.; Hajela, P.
1993-01-01
The present paper explores the applicability of neurocomputing strategies in decomposition based structural optimization problems. It is shown that the modeling capability of a backpropagation neural network can be used to detect weak couplings in a system, and to effectively decompose it into smaller, more tractable, subsystems. When such partitioning of a design space is possible, parallel optimization can be performed in each subsystem, with a penalty term added to its objective function to account for constraint violations in all other subsystems. Dependencies among subsystems are represented in terms of global design variables, and a neural network is used to map the relations between these variables and all subsystem constraints. A vector quantization technique, referred to as a z-Network, can effectively be used for this purpose. The approach is illustrated with applications to minimum weight sizing of truss structures with multiple design constraints.
Remote Sensing of Precipitation from Airborne and Spaceborne Radar. Chapter 13
NASA Technical Reports Server (NTRS)
Munchak, S. Joseph
2017-01-01
Weather radar measurements from airborne or satellite platforms can be an effective remote sensing tool for examining the three-dimensional structures of clouds and precipitation. This chapter describes some fundamental properties of radar measurements and their dependence on the particle size distribution (PSD) and radar frequency. The inverse problem of solving for the vertical profile of PSD from a profile of measured reflectivity is stated as an optimal estimation problem for single- and multi-frequency measurements. Phenomena that can change the measured reflectivity Z(sub m) from its intrinsic value Z(sub e), namely attenuation, non-uniform beam filling, and multiple scattering, are described and mitigation of these effects in the context of the optimal estimation framework is discussed. Finally, some techniques involving the use of passive microwave measurements to further constrain the retrieval of the PSD are presented.
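For a linear forward model, the optimal-estimation retrieval described above has a closed-form maximum a posteriori solution; nonlinear radar forward models simply iterate this update. A sketch in Rodgers-style notation with a synthetic Jacobian, covariances, and prior:

```python
import numpy as np

rng = np.random.default_rng(10)
m, n = 8, 4                        # 8 reflectivity measurements, 4 PSD parameters

K = rng.normal(size=(m, n))        # Jacobian of the (linearized) forward model
S_e = 0.05 * np.eye(m)             # measurement-error covariance
S_a = 1.0 * np.eye(n)              # a priori covariance
x_a = np.zeros(n)                  # a priori PSD state

x_true = rng.normal(size=n)
y = K @ x_true + rng.multivariate_normal(np.zeros(m), S_e)

# MAP solution of J(x) = (y-Kx)' Se^-1 (y-Kx) + (x-xa)' Sa^-1 (x-xa)
S_hat = np.linalg.inv(K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a))
x_hat = x_a + S_hat @ K.T @ np.linalg.inv(S_e) @ (y - K @ x_a)
print("retrieved state:", np.round(x_hat, 2))
print("true state     :", np.round(x_true, 2))
```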
NASA Astrophysics Data System (ADS)
Dreifuss, Tamar; Betzer, Oshra; Barnoy, Eran; Motiei, Menachem; Popovtzer, Rachela
2018-02-01
Theranostics is an emerging field, defined as combination of therapeutic and diagnostic capabilities in the same material. Nanoparticles are considered as an efficient platform for theranostics, particularly in cancer treatment, as they offer substantial advantages over both common imaging contrast agents and chemotherapeutic drugs. However, the development of theranostic nanoplatforms raises an important question: Is the optimal particle for imaging also optimal for therapy? Are the specific parameters required for maximal drug delivery, similar to those required for imaging applications? Herein, we examined this issue by investigating the effect of nanoparticle size on tumor uptake and imaging. Anti-epidermal growth factor receptor (EGFR)-conjugated gold nanoparticles (GNPs) in different sizes (diameter range: 20-120 nm) were injected to tumor bearing mice and their uptake by tumors was measured, as well as their tumor visualization capabilities as tumor-targeted CT contrast agent. Interestingly, the results showed that different particles led to highest tumor uptake or highest contrast enhancement, meaning that the optimal particle size for drug delivery is not necessarily optimal for tumor imaging. These results have important implications on the design of theranostic nanoplatforms.
[Preparation of Oenothera biennis Oil Solid Lipid Nanoparticles Based on Microemulsion Technique].
Piao, Lin-mei; Jin, Yong; Cui, Yan-lin; Yin, Shou-yu
2015-06-01
To study the preparation of Oenothera biennis oil solid lipid nanoparticles and to evaluate their quality. The solid lipid nanoparticles were prepared by a microemulsion technique. The optimum conditions were determined by orthogonal design, examining the entrapment efficiency, the mean diameter of the particles and other attributes. The optimal preparation of Oenothera biennis oil solid lipid nanoparticles was as follows: Oenothera biennis oil dosage 300 mg, glycerol monostearate-Oenothera biennis oil (2:3), Oenothera biennis oil-RH-40/PEG-400 (1:2), RH-40/PEG-400 (1:2). The resulting nanoparticles had an average encapsulation efficiency of (89.89 ± 0.71)%, an average particle size of 44.43 ± 0.08 nm, and a zeta potential of 64.72 ± 1.24 mV. The preparation process is simple, stable and feasible.
A Simulation-Optimization Model for the Management of Seawater Intrusion
NASA Astrophysics Data System (ADS)
Stanko, Z.; Nishikawa, T.
2012-12-01
Seawater intrusion is a common problem in coastal aquifers where excessive groundwater pumping can lead to chloride contamination of a freshwater resource. Simulation-optimization techniques have been developed to determine optimal management strategies while mitigating seawater intrusion. The simulation models are often density-independent groundwater-flow models that may assume a sharp interface and/or use equivalent freshwater heads. The optimization methods are often linear-programming (LP) based techniques that require simplifications of the real-world system. However, seawater intrusion is a highly nonlinear, density-dependent flow and transport problem, which requires the use of nonlinear-programming (NLP) or global-optimization (GO) techniques. NLP approaches are difficult because of the need for gradient information; therefore, we have chosen a GO technique for this study. Specifically, we have coupled a multi-objective genetic algorithm (GA) with a density-dependent groundwater-flow and transport model to simulate and identify strategies that optimally manage seawater intrusion. GA is a heuristic approach, often chosen when seeking optimal solutions to highly complex and nonlinear problems where LP or NLP methods cannot be applied. The GA utilized in this study is the Epsilon-Nondominated Sorted Genetic Algorithm II (ɛ-NSGAII), which can approximate a pareto-optimal front between competing objectives. This algorithm has several key features: real and/or binary variable capabilities; an efficient sorting scheme; preservation and diversity of good solutions; dynamic population sizing; constraint handling; parallelizable implementation; and user-controlled precision for each objective. The simulation model is SEAWAT, the USGS model that couples MODFLOW with MT3DMS for variable-density flow and transport. ɛ-NSGAII and SEAWAT were efficiently linked together through a C-Fortran interface. The simulation-optimization model was first tested by using a published density-independent flow model test case that was originally solved using a sequential LP method with the USGS's Ground-Water Management Process (GWM). For the problem formulation, the objective is to maximize net groundwater extraction, subject to head and head-gradient constraints. The decision variables are pumping rates at fixed wells, and the system's state is represented by freshwater hydraulic head. The results of the proposed algorithm were similar to the published results (within 1%); discrepancies may be attributed to differences in the simulators and inherent differences between LP and GA. The GWM test case was then extended to a density-dependent flow and transport version. As formulated, the optimization problem is infeasible because of the density effects on hydraulic head. Therefore, the sum of the squared constraint violations (SSC) was used as a second objective. The result is a pareto curve showing optimal pumping rates versus the SSC. Analysis of this curve indicates that a net-extraction rate similar to the test case can be obtained with a minor violation of the vertical head-gradient constraints. This study shows that a coupled ɛ-NSGAII/SEAWAT model can be used for the management of seawater intrusion in coastal groundwater. In the future, the proposed methodology will be applied to a real-world seawater intrusion and resource management problem for Santa Barbara, CA.
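The raw output of a multi-objective GA such as ɛ-NSGAII is a set of non-dominated designs. The sketch below extracts that Pareto front from candidate (net extraction, squared constraint violation) pairs by brute-force dominance checking; random candidates stand in for SEAWAT-evaluated solutions.

```python
import numpy as np

def pareto_front(F):
    """Indices of non-dominated rows of F, where every column is minimized."""
    n = len(F)
    return [i for i in range(n)
            if not any(np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
                       for j in range(n))]

rng = np.random.default_rng(11)
pumping = rng.uniform(0, 100, 200)                  # candidate extraction rates
ssc = 0.002 * pumping**2 + rng.uniform(0, 2, 200)   # violation grows with pumping

# Cast both objectives as minimization: maximize pumping == minimize -pumping.
F = np.column_stack([-pumping, ssc])
front = pareto_front(F)
print(f"{len(front)} non-dominated designs out of 200")
```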
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sahbaee, P; Zhang, Y; Solomon, J
Purpose: To substantiate the interdependency of contrast dose, radiation dose, and image quality in CT towards the patient-specific optimization of the imaging protocols. Methods: The study deployed two phantom platforms. A variable-sized (12, 18, 23, 30, 37 cm) phantom (Mercury-3.0) containing an iodinated insert (8.5 mgI/ml) was imaged on a representative CT scanner at multiple CTDI values (0.7-22.6 mGy). The contrast and noise were measured from the reconstructed images for each phantom diameter. Contrast-to-noise ratios (CNR), which are linearly related to iodine concentration, were calculated for 16 iodine-concentration levels (0-8.5 mgI/ml). The analysis was extended to a recently developed suite of 58 virtual human models (5D XCAT) with added contrast dynamics. Emulating a contrast-enhanced abdominal imaging procedure and targeting a peak enhancement in the aorta, each XCAT phantom was “imaged” using a simulation platform (CatSim, GE). 3D surfaces for each patient/size established the relationship between iodine concentration, dose, and CNR. The ratios of change in iodine concentration versus dose (IDR) that yield a constant change in CNR were calculated for each patient size. Results: Mercury phantom results show the size dependence of image quality on CTDI and iodine-concentration levels. For desired image-quality values, the iso-contour lines reflect the trade-off between contrast-material and radiation doses. For a fixed iodine concentration (4 mgI/mL), the IDR values for low (1.4 mGy) and high (11.5 mGy) dose levels were 1.02, 1.07, 1.19, 1.65, 1.54, and 3.14, 3.12, 3.52, 3.76, 4.06, respectively, across the five sizes. The simulation data from the XCAT models confirmed the empirical results from the Mercury phantom. Conclusion: The iodine concentration, image quality, and radiation dose are interdependent. The understanding of the relationships between iodine concentration, image quality, and radiation dose will allow for a more comprehensive optimization of CT imaging devices and techniques, providing the methodology to balance iodine concentration and dose based on the patient’s attributes.
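As a back-of-envelope illustration of the iso-CNR trade-off this abstract describes, one can model contrast as linear in iodine concentration and noise as scaling with the inverse square root of dose. The constants below are assumptions for illustration, not values from the study:

```python
# Toy CNR model: contrast ~ iodine concentration, noise ~ 1/sqrt(CTDI).
import numpy as np

k_contrast = 30.0   # HU per (mgI/ml), assumed calibration constant
noise_ref = 12.0    # HU noise at the reference dose (assumed)
dose_ref = 5.0      # reference CTDI, mGy (assumed)

def cnr(iodine_mg_ml, ctdi_mgy):
    contrast = k_contrast * iodine_mg_ml              # linear in iodine
    noise = noise_ref * np.sqrt(dose_ref / ctdi_mgy)  # ~ dose^(-1/2)
    return contrast / noise

# Holding CNR fixed, iodine can be traded against dose as 1/sqrt(dose):
for scale in (1.0, 2.0, 4.0):
    d = dose_ref * scale
    iodine = 4.0 * np.sqrt(dose_ref / d)
    print(f"CTDI={d:4.1f} mGy -> iodine={iodine:4.2f} mgI/ml, CNR={cnr(iodine, d):.2f}")
```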
Karthivashan, Govindarajan; Masarudin, Mas Jaffri; Kura, Aminu Umar; Abas, Faridah; Fakurazi, Sharida
2016-01-01
This study involves adaptation of bulk or sequential technique to load multiple flavonoids in a single phytosome, which can be termed as “flavonosome”. Three widely established and therapeutically valuable flavonoids, such as quercetin (Q), kaempferol (K), and apigenin (A), were quantified in the ethyl acetate fraction of Moringa oleifera leaves extract and were commercially obtained and incorporated in a single flavonosome (QKA–phosphatidylcholine) through four different methods of synthesis – bulk (M1) and serialized (M2) co-sonication and bulk (M3) and sequential (M4) co-loading. The study also established an optimal formulation method based on screening the synthesized flavonosomes with respect to their size, charge, polydispersity index, morphology, drug–carrier interaction, antioxidant potential through in vitro 1,1-diphenyl-2-picrylhydrazyl kinetics, and cytotoxicity evaluation against human hepatoma cell line (HepaRG). Furthermore, entrapment and loading efficiency of flavonoids in the optimal flavonosome have been identified. Among the four synthesis methods, sequential loading technique has been optimized as the best method for the synthesis of QKA–phosphatidylcholine flavonosome, which revealed an average diameter of 375.93±33.61 nm, with a zeta potential of −39.07±3.55 mV, and the entrapment efficiency was >98% for all the flavonoids, whereas the drug-loading capacity of Q, K, and A was 31.63%±0.17%, 34.51%±2.07%, and 31.79%±0.01%, respectively. The in vitro 1,1-diphenyl-2-picrylhydrazyl kinetics of the flavonoids indirectly depicts the release kinetic behavior of the flavonoids from the carrier. The QKA-loaded flavonosome had no indication of toxicity toward human hepatoma cell line as shown by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide result, wherein even at the higher concentration of 200 µg/mL, the flavonosomes exert >85% of cell viability. These results suggest that sequential loading technique may be a promising nanodrug delivery system for loading multiflavonoids in a single entity with sustained activity as an antioxidant, hepatoprotective, and hepatosupplement candidate. PMID:27555765
BMP analysis system for watershed-based stormwater management.
Zhen, Jenny; Shoemaker, Leslie; Riverson, John; Alvi, Khalid; Cheng, Mow-Soung
2006-01-01
Best Management Practices (BMPs) are measures for mitigating nonpoint source (NPS) pollution caused mainly by stormwater runoff. Established urban and newly developing areas must develop cost-effective means for restoring or minimizing impacts, and planning future growth. Prince George's County in Maryland, USA, a fast-growing region in the Washington, DC metropolitan area, has developed a number of tools to support analysis and decision making for stormwater management planning and design at the watershed level. These tools support watershed analysis, innovative BMPs, and optimization. Application of these tools can help achieve environmental goals and lead to significant cost savings. This project includes software development that utilizes GIS information and technology, integrates BMP process simulation models, and applies system optimization techniques for BMP planning and selection. The system employs the ESRI ArcGIS as the platform, and provides GIS-based visualization and support for developing networks including sequences of land uses, BMPs, and stream reaches. The system also provides interfaces for BMP placement, BMP attribute data input, and decision optimization management. The system includes a stand-alone BMP simulation and evaluation module, which complements both research and regulatory nonpoint source control assessment efforts, and allows flexibility in examining various BMP design alternatives. Process-based simulation of BMPs provides a technique that is sensitive to local climate and rainfall patterns. The system incorporates a meta-heuristic optimization technique to find the most cost-effective BMP placement and implementation plan given a control target, or a fixed cost. A case study is presented to demonstrate the application of the Prince George's County system. The case study involves a highly urbanized area in the Anacostia River (a tributary to the Potomac River) watershed southeast of Washington, DC. An innovative system of management practices is proposed to minimize runoff, improve water quality, and provide water reuse opportunities. Proposed management techniques include bioretention, green roof, and rooftop runoff collection (rain barrel) systems. The modeling system was used to identify the most cost-effective combinations of management practices to help minimize the frequency and size of runoff events and resulting combined sewer overflows to the Anacostia River.
Optimal deployment of thermal energy storage under diverse economic and climate conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeForest, Nicholas; Mendes, Gonçalo; Stadler, Michael
2014-04-01
This paper presents an investigation of the economic benefit of thermal energy storage (TES) for cooling, across a range of economic and climate conditions. Chilled water TES systems are simulated for a large office building in four distinct locations: Miami in the U.S.; Lisbon, Portugal; Shanghai, China; and Mumbai, India. Optimal system size and operating schedules are determined using the optimization model DER-CAM, such that total cost, including electricity and amortized capital costs, is minimized. The economic impacts of each optimized TES system are then compared to systems sized using a simple heuristic method, which bases system size on a fraction (50% and 100%) of total on-peak summer cooling loads. Results indicate that TES systems of all sizes can be effective in reducing annual electricity costs (5%-15%) and peak electricity consumption (13%-33%). The investigation also identifies a number of criteria which drive TES investment, including low capital costs, electricity tariffs with high power demand charges, and prolonged cooling seasons. In locations where these drivers clearly exist, the heuristically sized systems capture much of the value of optimally sized systems; between 60% and 100% in terms of net present value. However, in instances where these drivers are less pronounced, the heuristic tends to oversize systems, and optimization becomes crucial to ensure economically beneficial deployment of TES, increasing the net present value of heuristically sized systems by as much as 10 times in some instances.
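The oversizing effect noted in the conclusion is easy to reproduce with a toy net-present-value model in which savings saturate once storage covers the shiftable on-peak cooling load. All inputs below are assumed placeholders, not DER-CAM results:

```python
# Toy TES sizing: NPV of amortized savings minus capital, with saturating savings.
capital_per_kwh = 30.0       # $/kWh of TES capacity (assumed)
savings_per_kwh_yr = 6.0     # $/kWh-yr from demand-charge reduction (assumed)
shiftable_load = 1200.0      # kWh of on-peak cooling that can be shifted (assumed)
r, years = 0.07, 15          # discount rate and horizon (assumed)

def npv(size_kwh):
    annual = savings_per_kwh_yr * min(size_kwh, shiftable_load)  # savings saturate
    pv = sum(annual / (1 + r) ** t for t in range(1, years + 1))
    return pv - capital_per_kwh * size_kwh

for size in (600, 1200, 2400):  # e.g., heuristic 50%, 100%, and oversized
    print(f"size={size:5d} kWh  NPV=${npv(size):,.0f}")
```

Running this shows a positive NPV for the well-matched sizes and a negative NPV for the oversized system, which is the qualitative behavior the abstract reports.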
Performance Analysis and Design Synthesis (PADS) computer program. Volume 1: Formulation
NASA Technical Reports Server (NTRS)
1972-01-01
The program formulation for PADS computer program is presented. It can size launch vehicles in conjunction with calculus-of-variations optimal trajectories and can also be used as a general-purpose branched trajectory optimization program. In the former use, it has the Space Shuttle Synthesis Program as well as a simplified stage weight module for optimally sizing manned recoverable launch vehicles. For trajectory optimization alone or with sizing, PADS has two trajectory modules. The first trajectory module uses the method of steepest descent; the second employs the method of quasilinearization, which requires a starting solution from the first trajectory module.
Wu, Fei; Sioshansi, Ramteen
2017-05-25
Electric vehicles (EVs) hold promise to improve the energy efficiency and environmental impacts of transportation. However, widespread EV use can impose significant stress on electricity-distribution systems due to their added charging loads. This paper proposes a centralized EV charging-control model, which schedules the charging of EVs that have flexibility. This flexibility stems from EVs that are parked at the charging station for a longer duration of time than is needed to fully recharge the battery. The model is formulated as a two-stage stochastic optimization problem. The model captures the use of distributed energy resources and uncertainties around EV arrival times and charging demands upon arrival, non-EV loads on the distribution system, energy prices, and availability of energy from the distributed energy resources. We use a Monte Carlo-based sample-average approximation technique and an L-shaped method to solve the resulting optimization problem efficiently. We also apply a sequential sampling technique to dynamically determine the optimal size of the randomly sampled scenario tree to give a solution with a desired quality at minimal computational cost. Here, we demonstrate the use of our model on a Central-Ohio-based case study. We show the benefits of the model in reducing charging costs, negative impacts on the distribution system, and unserved EV-charging demand compared to simpler heuristics. Lastly, we also conduct sensitivity analyses, to show how the model performs and the resulting costs and load profiles when the design of the station or EV-usage parameters are changed.
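The sample-average-approximation idea at the heart of this model can be sketched in a few lines. The following is a deliberately simplified two-stage problem (day-ahead purchase plus per-scenario balancing), not the paper's formulation; prices and demands are toy values, and scipy's LP solver stands in for the L-shaped method:

```python
# SAA sketch: minimize c1*x + E_s[p_s * y_s] s.t. x + y_s >= d_s, x, y_s >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
S = 200                                  # sampled scenarios
c1 = 0.10                                # $/kWh day-ahead price (assumed)
p = rng.uniform(0.15, 0.40, S)           # real-time balancing prices (assumed)
d = rng.normal(800.0, 120.0, S).clip(0)  # EV charging demand per scenario, kWh

# Decision vector: [x, y_1, ..., y_S]; scenario costs weighted by 1/S.
c = np.concatenate([[c1], p / S])
A_ub = np.zeros((S, S + 1))
A_ub[:, 0] = -1.0
A_ub[np.arange(S), np.arange(S) + 1] = -1.0
b_ub = -d                                # encodes x + y_s >= d_s

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * (S + 1), method="highs")
print(f"day-ahead purchase x = {res.x[0]:.1f} kWh, expected cost = ${res.fun:.2f}")
```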
NASA Astrophysics Data System (ADS)
Franchin, A.; Downard, A. J.; Kangasluoma, J.; Nieminen, T.; Lehtipalo, K.; Steiner, G.; Manninen, H. E.; Petäjä, T.; Flagan, R. C.; Kulmala, M.
2015-06-01
Reliable and reproducible measurements of atmospheric aerosol particle number size distributions below 10 nm require optimized classification instruments with high particle transmission efficiency. Almost all DMAs have an unfavorable potential gradient at the outlet (e.g., long column, Vienna type) or at the inlet (nano-radial DMA). This feature prevents them from achieving a good transmission efficiency for the smallest nanoparticles. We developed a new high-transmission inlet for the Caltech nano-radial DMA (nRDMA) that increases the transmission efficiency to 12% for ions as small as 1.3 nm in mobility equivalent diameter (corresponding to 1.2 × 10⁻⁴ m² V⁻¹ s⁻¹ in electrical mobility). We successfully deployed the nRDMA, equipped with the new inlet, in chamber measurements, using a Particle Size Magnifier (PSM) and a booster Condensation Particle Counter (CPC) as a counter. With this setup, we were able to measure size distributions of ions between 1.3 and 6 nm, corresponding to a mobility range from 1.2 × 10⁻⁴ to 5.8 × 10⁻⁶ m² V⁻¹ s⁻¹. The system was modeled, tested in the laboratory and used to measure negative ions at ambient concentrations in the CLOUD 7 measurement campaign at CERN. We achieved a higher size resolution than techniques currently used in field measurements, and maintained a good transmission efficiency at moderate inlet and sheath air flows (2.5 and 30 LPM, respectively). In this paper, by measuring size distributions at high size resolution down to 1.3 nm, we extend the limit of the current technology. The current setup is limited to ion measurements. However, we envision that future research focused on the charging mechanisms could extend the technique to measure neutral aerosol particles as well, so that it will be possible to measure size distributions of ambient aerosols from 1 nm to 1 μm.
Rangus, Mojca; Mazaj, Matjaž; Dražić, Goran; Popova, Margarita; Tušar, Nataša Novak
2014-01-01
Iron-functionalized disordered mesoporous silica (FeKIL-2) is a promising, environmentally friendly, cost-effective and highly efficient catalyst for the elimination of volatile organic compounds (VOCs) from polluted air via catalytic oxidation. In this study, we investigated the type of catalytically active iron sites for different iron concentrations in FeKIL-2 catalysts using advanced characterization of the local environment of iron atoms by a combination of X-ray Absorption Spectroscopy Techniques (XANES, EXAFS) and Atomic-Resolution Scanning Transmission Electron Microscopy (AR STEM). We found that the molar ratio Fe/Si ≤ 0.01 leads to the formation of stable, mostly isolated Fe3+ sites in the silica matrix, while higher iron content Fe/Si > 0.01 leads to the formation of oligonuclear iron clusters. STEM imaging and EELS techniques confirmed the existence of these clusters. Their size ranges from one to a few nanometers, and they are unevenly distributed throughout the material. The size of the clusters was also found to be similar, regardless of the nominal concentration of iron (Fe/Si = 0.02 and Fe/Si = 0.05). From the results obtained from sample characterization and model catalytic tests, we established that the enhanced activity of FeKIL-2 with the optimal Fe/Si = 0.01 ratio can be attributed to: (1) the optimal concentration of stable isolated Fe3+ in the silica support; and (2) accelerated diffusion of the reactants in disordered mesoporous silica (FeKIL-2) when compared to ordered mesoporous silica materials (FeSBA-15, FeMCM-41). PMID:28788674
Organizational Decision Making
1975-08-01
the lack of formal techniques typically used by large organizations, digress on the advantages of formal over informal ... optimization; for example, one might do a number of optimization calculations, each time using a different measure of effectiveness as the optimized ... final decision. The next level of computer application involves the use of computerized optimization techniques. Optimization ...
Stability-Constrained Aerodynamic Shape Optimization with Applications to Flying Wings
NASA Astrophysics Data System (ADS)
Mader, Charles Alexander
A set of techniques is developed that allows the incorporation of flight dynamics metrics as an additional discipline in a high-fidelity aerodynamic optimization. Specifically, techniques for including static stability constraints and handling qualities constraints in a high-fidelity aerodynamic optimization are demonstrated. These constraints are developed from stability derivative information calculated using high-fidelity computational fluid dynamics (CFD). Two techniques are explored for computing the stability derivatives from CFD. One technique uses an automatic differentiation adjoint technique (ADjoint) to efficiently and accurately compute a full set of static and dynamic stability derivatives from a single steady solution. The other technique uses a linear regression method to compute the stability derivatives from a quasi-unsteady time-spectral CFD solution, allowing for the computation of static, dynamic and transient stability derivatives. Based on the characteristics of the two methods, the time-spectral technique is selected for further development, incorporated into an optimization framework, and used to conduct stability-constrained aerodynamic optimization. This stability-constrained optimization framework is then used to conduct an optimization study of a flying wing configuration. This study shows that stability constraints have a significant impact on the optimal design of flying wings and that, while static stability constraints can often be satisfied by modifying the airfoil profiles of the wing, dynamic stability constraints can require a significant change in the planform of the aircraft in order for the constraints to be satisfied.
Effects of tree-to-tree variations on sap flux-based transpiration estimates in a forested watershed
NASA Astrophysics Data System (ADS)
Kume, Tomonori; Tsuruta, Kenji; Komatsu, Hikaru; Kumagai, Tomo'omi; Higashi, Naoko; Shinohara, Yoshinori; Otsuki, Kyoichi
2010-05-01
To estimate forest stand-scale water use, we assessed how sample sizes affect the confidence of stand-scale transpiration (E) estimates calculated from sap flux (Fd) and sapwood area (AS_tree) measurements of individual trees. In a Japanese cypress plantation, we measured Fd and AS_tree in all trees (n = 58) within a 20 × 20 m study plot, which was divided into four 10 × 10 subplots. We calculated E from stand AS_tree (AS_stand) and mean stand Fd (JS) values. Using Monte Carlo analyses, we examined potential errors associated with sample sizes in E, AS_stand, and JS by using the original AS_tree and Fd data sets. Consequently, we defined optimal sample sizes of 10 and 15 for AS_stand and JS estimates, respectively, in the 20 × 20 m plot. Sample sizes greater than the optimal sample sizes did not decrease potential errors. The optimal sample sizes for JS changed according to plot size (e.g., 10 × 10 m and 10 × 20 m), while the optimal sample sizes for AS_stand did not. Likewise, the optimal sample sizes for JS did not change under different vapor pressure deficit conditions. In terms of E estimates, these results suggest that the tree-to-tree variations in Fd vary among different plots, and that plot size to capture tree-to-tree variations in Fd is an important factor. This study also discusses planning balanced sampling designs to extrapolate stand-scale estimates to catchment-scale estimates.
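The Monte Carlo analysis described here has a simple core: repeatedly subsample n trees from the full census and see how the error of the subsample mean shrinks with n. The sketch below uses synthetic per-tree sap-flux values as a stand-in for the 58 measured trees:

```python
# Monte Carlo subsampling: error of the mean sap flux versus sample size n.
import numpy as np

rng = np.random.default_rng(7)
fd_all = rng.lognormal(mean=0.0, sigma=0.4, size=58)  # assumed per-tree Fd values
true_mean = fd_all.mean()

for n in (5, 10, 15, 20, 30):
    trials = [rng.choice(fd_all, size=n, replace=False).mean()
              for _ in range(5000)]
    rel_err = np.percentile(np.abs(np.array(trials) / true_mean - 1), 95)
    print(f"n={n:2d}: 95th-percentile relative error = {rel_err:.3f}")
```

With data like these, the error curve flattens beyond roughly n = 15, which mirrors the diminishing returns the study reports for JS.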
Determination of the optimal sample size for a clinical trial accounting for the population size.
Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin
2017-07-01
The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one-parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or the expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes.
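The square-root scaling can be checked numerically with a toy Bayesian regret model (not the paper's utility function): a two-arm Bernoulli trial enrolls n patients per arm, picks the apparently better arm, and applies it to the remaining N - 2n patients; the treatment effect is drawn from an assumed prior. Brute-force minimization of total expected regret then shows n growing roughly like sqrt(N):

```python
# Toy check of O(sqrt(N)) optimal trial size under an assumed prior on the effect.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
p0, tau = 0.40, 0.05                         # control rate, prior sd of effect (assumed)
deltas = np.abs(rng.normal(0, tau, 20_000))  # prior draws of |treatment effect|

def total_regret(n, N):
    se = np.sqrt(2 * p0 * (1 - p0) / n)      # sd of the estimated effect
    p_wrong = norm.cdf(-deltas / se)         # P(trial picks the worse arm)
    per_future = np.mean(deltas * p_wrong)   # expected regret per future patient
    in_trial = n * np.mean(deltas)           # half of the 2n trial patients lose delta
    return (N - 2 * n) * per_future + in_trial

for N in (10_000, 40_000, 160_000):
    ns = np.arange(10, N // 4, 10)
    n_opt = ns[np.argmin([total_regret(n, N) for n in ns])]
    print(f"N={N:>7,}: optimal n per arm = {n_opt:5d}, n/sqrt(N) = {n_opt/np.sqrt(N):.2f}")
```

The ratio n/sqrt(N) stays roughly constant as N grows, consistent with the asymptotic result.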
Integration of Rotor Aerodynamic Optimization with the Conceptual Design of a Large Civil Tiltrotor
NASA Technical Reports Server (NTRS)
Acree, C. W., Jr.
2010-01-01
Coupling of aeromechanics analysis with vehicle sizing is demonstrated with the CAMRAD II aeromechanics code and NDARC sizing code. The example is optimization of cruise tip speed with rotor/wing interference for the Large Civil Tiltrotor (LCTR2) concept design. Free-wake models were used for both rotors and the wing. This report is part of a NASA effort to develop an integrated analytical capability combining rotorcraft aeromechanics, structures, propulsion, mission analysis, and vehicle sizing. The present paper extends previous efforts by including rotor/wing interference explicitly in the rotor performance optimization and implicitly in the sizing.
Usage of CO2 microbubbles as flow-tracing contrast media in X-ray dynamic imaging of blood flows.
Lee, Sang Joon; Park, Han Wook; Jung, Sung Yong
2014-09-01
X-ray imaging techniques have been employed to visualize various biofluid flow phenomena in a non-destructive manner. X-ray particle image velocimetry (PIV) was developed to measure velocity fields of blood flows to obtain hemodynamic information. A time-resolved X-ray PIV technique that is capable of measuring the velocity fields of blood flows under real physiological conditions was recently developed. However, technical limitations remained in the measurement of blood flows with high image contrast and sufficient biocompatibility. In this study, CO2 microbubbles were developed as a flow-tracing contrast medium for X-ray PIV measurements of biofluid flows. Human serum albumin and CO2 gas were mechanically agitated to fabricate CO2 microbubbles. The optimal fabrication conditions of CO2 microbubbles were found by comparing the size and amount of microbubbles fabricated under various operating conditions. The average size and quantity of CO2 microbubbles were measured by using a synchrotron X-ray imaging technique with a high spatial resolution. The quantity and size of the fabricated microbubbles decrease with increasing speed and operation time of the mechanical agitation. The feasibility of CO2 microbubbles as a flow-tracing contrast medium was checked for a 40% hematocrit blood flow. Particle images of the blood flow were consecutively captured by the time-resolved X-ray PIV system to obtain velocity field information of the flow. The experimental results were compared with a theoretically estimated velocity profile. Results show that the CO2 microbubbles can be used as an effective flow-tracing contrast medium in X-ray PIV experiments.
Electromechanical characterization of individual micron-sized metal coated polymer particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bazilchuk, Molly; Kristiansen, Helge; Conpart AS, Skjetten 2013
Micron-sized polymer particles with nanoscale metal coatings are essential in conductive adhesives for electronics assembly. The particles function in a compressed state in the adhesives. The link between mechanical properties and electrical conductivity is thus of the utmost importance in the formation of good electrical contact. A custom flat punch set-up based on nanoindentation has been developed to simultaneously deform and electrically probe individual particles. The set-up has a sufficiently low internal resistance to allow the measurement of sub-Ohm contact resistances. Additionally, the set-up can capture mechanical failure of the particles. Combining this data yields a fundamental understanding of contact behavior. We demonstrate that this method can clearly distinguish between particles of different sizes, with different thicknesses of metal coating, and different metallization schemes. The technique provides good repeatability and physical insight into the behavior of these particles that can guide adhesive design and the optimization of bonding processes.
Hydrothermal Synthesis of Hydroxyapatite Nanorods for Rapid Formation of Bone-Like Mineralization
NASA Astrophysics Data System (ADS)
Hoai, Tran Thanh; Nga, Nguyen Kim; Giang, Luu Truong; Huy, Tran Quang; Tuan, Phan Nguyen Minh; Binh, Bui Thi Thanh
2017-08-01
Hydroxyapatite (HAp) is an excellent biomaterial for bone repair and regeneration. The biological functions of HAp particles, such as biomineralization, cell adhesion, and cell proliferation, can be enhanced when their size is reduced to the nanoscale. In this work, HAp nanoparticles were synthesized by the hydrothermal technique with addition of cetyltrimethylammonium bromide (CTAB). These particles were also characterized, and their size controlled by modifying the CTAB concentration and hydrothermal duration. The results show that most HAp nanoparticles were rod-like in shape, exhibiting the most uniform and smallest size (mean diameter and length of 39 nm and 125 nm, respectively) at optimal conditions of 0.64 g CTAB and hydrothermal duration of 12 h. Moreover, good biomineralization capability of the HAp nanorods was confirmed through in vitro tests in simulated body fluid. A bone-like mineral layer of synthesized HAp nanorods formed rapidly after 7 days. This study shows that highly bioactive HAp nanorods can be easily prepared by the hydrothermal method, being a potential nanomaterial for bone regeneration.
Toropova, Alla P; Toropov, Andrey A; Benfenati, Emilio; Puzyn, Tomasz; Leszczynska, Danuta; Leszczynski, Jerzy
2014-10-01
The development of quantitative structure-activity relationships for nanomaterials requires representing the molecular structure of extremely complex molecular systems. Obviously, various characteristics of a nanomaterial could impact the associated biochemical endpoints. The following features of TiO2 and ZnO nanoparticles (n=42) are considered here: (i) engineered size (nm); (ii) size in water suspension (nm); (iii) size in phosphate buffered saline (PBS, nm); (iv) concentration (mg/L); and (v) zeta potential (mV). The damage to cellular membranes (units/L) is selected as the endpoint. Quantitative features-activity relationships (QFARs) are calculated by the Monte Carlo technique for three random splits of the membrane-damage data into training and validation sets. The obtained models are characterized by the following average statistics: 0.78
Design optimization of large-size format edge-lit light guide units
NASA Astrophysics Data System (ADS)
Hastanin, J.; Lenaerts, C.; Fleury-Frenette, K.
2016-04-01
In this paper, we present an original method of dot-pattern generation dedicated to the design optimization of large-size format light guide plates (LGPs), such as photo-bioreactors, in which the number of dots greatly exceeds the maximum allowable number of optical objects supported by most common ray-tracing software. In the proposed method, in order to simplify the computational problem, the original optical system is replaced by an equivalent one. Accordingly, the original dot pattern is split into multiple small sections, inside which the dot-size variation is less than the typical printing resolution of ink dots. These sections are then replaced by equivalent cells with a continuous diffusing film. After that, we adjust the two-dimensional TIS (Total Integrated Scatter) distribution over the grid of equivalent cells, using an iterative optimization procedure. Finally, the obtained optimal TIS distribution is converted into the dot-size distribution by applying an appropriate conversion rule. An original semi-empirical equation dedicated to rectangular large-size LGPs is proposed for the initial guess of the TIS distribution. It significantly reduces the total time needed for dot-pattern optimization.
Basheti, Iman A; Reddel, Helen K; Armour, Carol L; Bosnic-Anticevich, Sinthia Z
2005-05-01
Optimal effects of asthma medications are dependent on correct inhaler technique. In a telephone survey, 77/87 patients reported that their Turbuhaler technique had not been checked by a health care professional. In a subsequent pilot study, 26 patients were randomized to receive one of 3 Turbuhaler counseling techniques, administered in the community pharmacy. Turbuhaler technique was scored before and 2 weeks after counseling (optimal technique = score 9/9). At baseline, 0/26 patients had optimal technique. After 2 weeks, optimal technique was achieved by 0/7 patients receiving standard verbal counseling (A), 2/8 receiving verbal counseling augmented with emphasis on Turbuhaler position during priming (B), and 7/9 receiving augmented verbal counseling plus physical demonstration (C) (Fisher's exact test for A vs C, p = 0.006). Satisfactory technique (4 essential steps correct) also improved (A: 3/8 to 4/7; B: 2/9 to 5/8; and C: 1/9 to 9/9 patients) (A vs C, p = 0.1). Counseling in Turbuhaler use represents an important opportunity for community pharmacists to improve asthma management, but physical demonstration appears to be an important component to effective Turbuhaler training for educating patients toward optimal Turbuhaler technique.
Energy Storage Sizing Taking Into Account Forecast Uncertainties and Receding Horizon Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Kyri; Hug, Gabriela; Li, Xin
Energy storage systems (ESS) have the potential to be very beneficial for applications such as reducing the ramping of generators, peak shaving, and balancing not only the variability introduced by renewable energy sources, but also the uncertainty introduced by errors in their forecasts. Optimal usage of storage may result in reduced generation costs and an increased use of renewable energy. However, optimally sizing these devices is a challenging problem. This paper aims to provide the tools to optimally size an ESS under the assumption that it will be operated under a model predictive control scheme and that the forecasts of the renewable energy resources include prediction errors. A two-stage stochastic model predictive control is formulated and solved, where the optimal usage of the storage is simultaneously determined along with the optimal generation outputs and size of the storage. Wind forecast errors are taken into account in the optimization problem via probabilistic constraints for which an analytical form is derived. This allows the stochastic optimization problem to be solved directly, without using sampling-based approaches, and the storage to be sized to account not only for a wide range of potential scenarios, but also for a wide range of potential forecast errors. In the proposed formulation, we account for the fact that errors in the forecast affect how the device is operated later in the horizon and that a receding horizon scheme is used in operation to optimally use the available storage.
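The "analytical form" of a probabilistic constraint is the key trick: under a Gaussian forecast-error assumption, a chance constraint collapses to a deterministic inequality through the normal quantile. A minimal sketch with assumed toy numbers, not the paper's case study:

```python
# Chance constraint P(gen + wind >= load) >= 1 - eps with Gaussian error
# reformulated as gen >= load - forecast + z_{1-eps} * sigma.
from scipy.stats import norm

load, wind_forecast = 120.0, 40.0   # MW (assumed)
sigma_err, eps = 6.0, 0.05          # forecast-error sd and violation budget (assumed)

z = norm.ppf(1 - eps)               # normal quantile, ~1.645 for eps = 0.05
gen_min = load - wind_forecast + z * sigma_err
print(f"z = {z:.3f}, deterministic generation floor = {gen_min:.1f} MW")
```

Because the constraint becomes an ordinary linear inequality, the stochastic program can be handed to a standard solver without scenario sampling, which is exactly the advantage the abstract claims.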
Wavelet and adaptive methods for time dependent problems and applications in aerosol dynamics
NASA Astrophysics Data System (ADS)
Guo, Qiang
Time dependent partial differential equations (PDEs) are widely used as mathematical models of environmental problems. Aerosols are now clearly identified as an important factor in many environmental aspects of climate and radiative forcing processes, as well as in the health effects of air quality. The mathematical models for aerosol dynamics with respect to size distribution are nonlinear partial differential and integral equations, which describe processes of condensation, coagulation and deposition. Simulating the general aerosol dynamic equations in time, particle size and space exhibits serious difficulties because the size dimension ranges from a few nanometers to several micrometers, while the spatial dimension is usually described in kilometers. Therefore, it is an important and challenging task to develop efficient techniques for solving time dependent dynamic equations. In this thesis, we develop and analyze efficient wavelet and adaptive methods for the time dependent dynamic equations on particle size and further apply them to spatial aerosol dynamic systems. A wavelet Galerkin method is proposed to solve the aerosol dynamic equations in time and particle size, because the aerosol distribution changes strongly along the size direction and the wavelet technique can resolve this very efficiently. Daubechies' wavelets are considered in the study because they possess useful properties such as orthogonality, compact support and exact representation of polynomials to a certain degree. Another problem encountered in the solution of the aerosol dynamic equations results from the hyperbolic form due to the condensation growth term. We propose a new characteristic-based fully adaptive multiresolution numerical scheme for solving the aerosol dynamic equation, which combines the attractive advantages of the adaptive multiresolution technique and the method of characteristics. On the theoretical side, the global existence and uniqueness of solutions of continuous-time wavelet numerical methods for the nonlinear aerosol dynamics are proved by using Schauder's fixed point theorem and a variational technique. Optimal error estimates are derived for both continuous- and discrete-time wavelet Galerkin schemes. We further derive a reliable and efficient a posteriori error estimate, which is based on stable multiresolution wavelet bases, and an adaptive space-time algorithm for the efficient solution of linear parabolic differential equations. The adaptive space refinement strategies, based on the locality of the corresponding multiresolution processes, are proved to converge. Finally, we develop efficient numerical methods by combining the wavelet methods proposed in the previous parts with a splitting technique to solve the spatial aerosol dynamic equations. Wavelet methods along the particle-size direction and an upstream finite difference method along the spatial direction are used alternately in each time interval. Numerical experiments are conducted to show the effectiveness of the developed methods.
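The adaptivity exploited in this thesis rests on a simple property of Daubechies wavelets: a sharply peaked size distribution is represented by few significant coefficients. A small sketch of that idea using the PyWavelets library, with an assumed toy distribution standing in for the aerosol data:

```python
# Decompose a peaked profile with Daubechies-4 wavelets, threshold small
# coefficients (the essence of adaptivity), and check the reconstruction error.
import numpy as np
import pywt

x = np.linspace(0, 1, 1024)
n_of_r = np.exp(-((x - 0.2) / 0.01) ** 2) + 0.3 * np.exp(-((x - 0.6) / 0.1) ** 2)

coeffs = pywt.wavedec(n_of_r, "db4", level=6)
flat, slices = pywt.coeffs_to_array(coeffs)
mask = np.abs(flat) > 1e-3 * np.abs(flat).max()   # keep significant coefficients
compressed = pywt.array_to_coeffs(np.where(mask, flat, 0.0), slices,
                                  output_format="wavedec")
recon = pywt.waverec(compressed, "db4")

print(f"kept {mask.sum()} of {flat.size} coefficients, "
      f"max error = {np.abs(recon[:n_of_r.size] - n_of_r).max():.2e}")
```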
NASA Technical Reports Server (NTRS)
Olds, John Robert; Walberg, Gerald D.
1993-01-01
Multidisciplinary design optimization (MDO) is an emerging discipline within aerospace engineering. Its goal is to bring structure and efficiency to the complex design process associated with advanced aerospace launch vehicles. Aerospace vehicles generally require input from a variety of traditional aerospace disciplines - aerodynamics, structures, performance, etc. As such, traditional optimization methods cannot always be applied. Several multidisciplinary techniques and methods were proposed as potentially applicable to this class of design problem. Among the candidate options are calculus-based (or gradient-based) optimization schemes and parametric schemes based on design-of-experiments theory. A brief overview of several applicable multidisciplinary design optimization methods is included. Methods from the calculus-based class and the parametric class are reviewed, but the research application reported focuses on methods from the parametric class. A vehicle of current interest was chosen as a test application for this research. The rocket-based combined-cycle (RBCC) single-stage-to-orbit (SSTO) launch vehicle combines elements of rocket and airbreathing propulsion in an attempt to produce an attractive option for launching medium-sized payloads into low earth orbit. The RBCC SSTO presents a particularly difficult problem for traditional one-variable-at-a-time optimization methods because of the lack of an adequate experience base and the highly coupled nature of the design variables. MDO, however, with its structured approach to design, is well suited to this problem. The results of the application of Taguchi methods, central composite designs, and response surface methods to the design optimization of the RBCC SSTO are presented. Attention is given to the aspect of Taguchi methods that attempts to locate a 'robust' design - that is, a design that is least sensitive to uncontrollable influences on the design. Near-optimum minimum dry weight solutions are determined for the vehicle. A summary and evaluation of the various parametric MDO methods employed in the research are included. Recommendations for additional research are provided.
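The response-surface step in this class of parametric methods amounts to fitting a quadratic model to design-of-experiments samples and locating its stationary point. A minimal sketch, with an assumed mock "dry weight" function standing in for the vehicle synthesis code:

```python
# Fit a quadratic response surface to sampled design points and find its optimum.
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(25, 2))                       # coded design variables
y = (5 + (X[:, 0] - 0.3) ** 2 + 2 * (X[:, 1] + 0.2) ** 2   # mock responses
     + 0.05 * rng.normal(size=25))                         # measurement noise

# Model: y ~ b0 + b1 x1 + b2 x2 + b3 x1^2 + b4 x2^2 + b5 x1 x2
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
b, *_ = np.linalg.lstsq(A, y, rcond=None)

# Stationary point: solve grad = 0, i.e. H x = -[b1, b2]
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
x_star = np.linalg.solve(H, -np.array([b[1], b[2]]))
print("fitted optimum (coded units):", np.round(x_star, 3))  # ~ (0.3, -0.2)
```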
NASA Astrophysics Data System (ADS)
Gunes, Ersin Fatih
Turkey is located between Europe, which has increasing demand for natural gas, and the geographies of the Middle East, Asia and Russia, which have rich and strong natural gas supply. Because of this geographical location, Turkey has strategic importance with respect to energy sources. To supply this demand, a pipeline network configuration with optimal and efficient lengths, pressures, diameters and number of compressor stations is needed. Because Turkey has a currently operating, constructed network topology, obtaining an optimal configuration of the pipelines, including an optimal number of compressor stations at optimal locations, is the focus of this study. Identifying a network design with the lowest costs is important because of the high maintenance and set-up costs. The quantity of compressor stations, the lengths of the pipeline segments, the diameter sizes and the pressures at compressor stations are considered to be decision variables in this study. Two existing optimization models were selected and applied to the case study of Turkey. Because of the fixed cost of investment, both models are formulated as mixed-integer nonlinear programs, which require branch and bound combined with nonlinear programming solution methods. The differences between these two models relate to factors that can affect the network system of natural gas, such as wall thickness, material balance, compressor isentropic head and the amount of gas to be delivered. The results obtained by these two techniques are compared with each other and with the current system. Major differences between the results concern costs, pressures and flow rates. These solution techniques are able to find a minimum-cost solution for each model, both of which cost less than the current system while satisfying all the constraints on diameter, length, flow rate and pressure. These results give the big picture of an ideal configuration for the future-state network for the country of Turkey.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guillermin, M.; Colombier, J. P.; Audouard, E.
2010-07-15
With an interest in pulsed laser deposition and remote spectroscopy techniques, we explore here the potential of laser pulses temporally tailored on ultrafast time scales to control the expansion and the excitation degree of various ablation products, including atomic species and nanoparticulates. Taking advantage of automated pulse-shaping techniques, an adaptive procedure based on spectroscopic feedback is applied to regulate the irradiance and enhance the optical emission of monocharged aluminum ions with respect to the neutral signal. This leads to optimized pulses usually consisting of a series of femtosecond peaks distributed on a longer picosecond sequence. The ablation features induced by the optimized pulse are compared with those determined by picosecond pulses generated by imposed second-order dispersion or by double-pulse sequences with adjustable picosecond separation. This makes it possible to analyze the influence of fast- and slow-varying envelope features on the material heating and the resulting plasma excitation degree. Using various optimal pulse forms, including designed asymmetric shapes, we analyze the establishment of surface pre-excitation that enables conditions of enhanced radiation coupling. Thin films elaborated by unshaped femtosecond laser pulses and by optimized, stretched, or double-pulse sequences are compared, indicating that the nanoparticle generation efficiency is strongly influenced by the temporal shaping of the laser irradiation. A thermodynamic scenario involving supercritical heating is proposed to explain the enhanced ionization rates and lower particulate density for optimal pulses. Numerical one-dimensional hydrodynamic simulations of the excited matter support the interpretation of the experimental results in terms of the relative efficiency of various relaxation paths for excited matter above or below the thermodynamic stability limits. The calculation results underline the role of the temperature and density gradients along the ablated plasma plume, which lead to spatially distinct locations of the excited species. Moreover, the nanoparticle sizes are computed based on liquid-layer ejection followed by Rayleigh-Taylor instability decomposition, in good agreement with the experimental findings.
Operations research applications in nuclear energy
NASA Astrophysics Data System (ADS)
Johnson, Benjamin Lloyd
This dissertation consists of three papers; the first is published in Annals of Operations Research, the second is nearing submission to INFORMS Journal on Computing, and the third is the predecessor of a paper nearing submission to Progress in Nuclear Energy. We apply operations research techniques to nuclear waste disposal and nuclear safeguards. Although these fields are different, they allow us to showcase some benefits of using operations research techniques to enhance nuclear energy applications. The first paper, "Optimizing High-Level Nuclear Waste Disposal within a Deep Geologic Repository," presents a mixed-integer programming model that determines where to place high-level nuclear waste packages in a deep geologic repository to minimize heat load concentration. We develop a heuristic that increases the size of solvable model instances. The second paper, "Optimally Configuring a Measurement System to Detect Diversions from a Nuclear Fuel Cycle," introduces a simulation-optimization algorithm and an integer-programming model to find the best, or near-best, resource-limited nuclear fuel cycle measurement system with a high degree of confidence. Given location-dependent measurement method precisions, we (i) optimize the configuration of n methods at n locations of a hypothetical nuclear fuel cycle facility, (ii) find the most important location at which to improve method precision, and (iii) determine the effect of measurement frequency on near-optimal configurations and objective values. Our results correspond to existing outcomes but we obtain them at least an order of magnitude faster. The third paper, "Optimizing Nuclear Material Control and Accountability Measurement Systems," extends the integer program from the second paper to locate measurement methods in a larger, hypothetical nuclear fuel cycle scenario given fixed purchase and utilization budgets. This paper also presents two mixed-integer quadratic programming models to increase the precision of existing methods given a fixed improvement budget and to reduce the measurement uncertainty in the system while limiting improvement costs. We quickly obtain similar or better solutions compared to several intuitive analyses that take much longer to perform.
Paudel, Anjan; Ameeduzzafar; Imam, Syed Sarim; Fazil, Mohd; Khan, Shahroz; Hafeez, Abdul; Ahmad, Farhan Jalees; Ali, Asgar
2017-01-01
The objective of this study was to formulate and optimize Candesartan Cilexetil (CC)-loaded nanostructured lipid carriers (NLCs) for enhanced oral bioavailability. Glycerol monostearate (GMS), oleic acid, Tween 80 and Span 40 were selected as the solid lipid, liquid lipid, surfactant and co-surfactant, respectively. The CC-NLCs were prepared by a hot emulsion probe sonication technique and optimized using an experimental design approach. The formulated CC-NLCs were evaluated for various physicochemical parameters, and the optimized formulation (CC-NLC-Opt) was further assessed for in vivo pharmacokinetic and pharmacodynamic activity. The optimized formulation (CC-NLC-Opt) showed a particle size of 183.5±5.89 nm, PDI of 0.228±0.13, zeta potential of -28.2±0.99 mV, and entrapment efficiency of 88.9±3.69%. The comparative in vitro release study revealed that CC-NLC-Opt showed significantly better (p<0.05) release and enhanced permeation as compared to the CC suspension. The in vivo pharmacokinetic study showed a many-fold increase in oral bioavailability over the CC suspension, which was further confirmed by antihypertensive activity in a murine model. Thus, the results of the ex vivo permeation, pharmacokinetic and pharmacodynamic studies suggest the potential of CC-NLCs for improved oral delivery.
NASA Astrophysics Data System (ADS)
Mangal, S. K.; Sharma, Vivek
2018-02-01
Magnetorheological (MR) fluids belong to a class of smart materials whose rheological characteristics, such as yield stress and viscosity, change in the presence of an applied magnetic field. In this paper, the constituents of an MR fluid are optimized with on-state yield stress as the response parameter. For this, 18 samples of MR fluid are prepared using an L-18 orthogonal array. These samples are experimentally tested on a purpose-built electromagnet setup. It has been found that the yield stress of an MR fluid mainly depends on the volume fraction of the iron particles and the type of carrier fluid used in it. The optimal combination of the input parameters for the fluid is found to be: mineral oil with a volume percentage of 67%, iron powder of 300 mesh size with a volume percentage of 32%, oleic acid with a volume percentage of 0.5%, and tetra-methyl-ammonium-hydroxide with a volume percentage of 0.7%. This optimal combination of input parameters gives a numerically predicted on-state yield stress of 48.197 kPa. An experimental confirmation test on the optimized MR fluid sample was then carried out, and the measured response was found to match the numerically obtained value well (less than 1% error).
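Orthogonal-array optimization of this kind reduces to a main-effects analysis: average the response over all runs that share a factor level and pick the best level per factor. A minimal sketch with a balanced placeholder design and randomized responses, not the paper's L-18 data:

```python
# Taguchi-style main-effects analysis for an 18-run, 4-factor, 3-level design.
import numpy as np

rng = np.random.default_rng(5)
# Balanced placeholder design: each factor sees each level in 6 of 18 runs.
levels = np.column_stack([rng.permutation(np.repeat(np.arange(3), 6))
                          for _ in range(4)])
yield_stress = rng.uniform(20, 50, 18)   # placeholder measured responses, kPa

for f in range(4):
    means = [yield_stress[levels[:, f] == l].mean() for l in range(3)]
    best = int(np.argmax(means))         # level maximizing mean yield stress
    print(f"factor {f}: level means = {np.round(means, 1)}, best level = {best}")
```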
Optimal placement of tuning masses for vibration reduction in helicopter rotor blades
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1988-01-01
Described are methods for reducing vibration in helicopter rotor blades by determining optimum sizes and locations of tuning masses through formal mathematical optimization techniques. An optimization procedure is developed which employs the tuning masses and corresponding locations as design variables which are systematically changed to achieve low values of shear without a large mass penalty. The finite-element structural analysis of the blade and the optimization formulation require development of discretized expressions for two performance parameters: modal shaping parameter and modal shear amplitude. Matrix expressions for both quantities and their sensitivity derivatives are developed. Three optimization strategies are developed and tested. The first is based on minimizing the modal shaping parameter which indirectly reduces the modal shear amplitudes corresponding to each harmonic of airload. The second strategy reduces these amplitudes directly, and the third strategy reduces the shear as a function of time during a revolution of the blade. The first strategy works well for reducing the shear for one mode responding to a single harmonic of the airload, but has been found in some cases to be ineffective for more than one mode. The second and third strategies give similar results and show excellent reduction of the shear with a low mass penalty.
Rizwanullah, Md; Amin, Saima; Ahmad, Javed
2017-01-01
In the present study, rosuvastatin calcium (ROS-Ca)-loaded nanostructured lipid carriers (NLCs) were developed and optimized for improved efficacy. The ROS-Ca-loaded NLC was prepared using a melt emulsification ultrasonication technique and optimized by a Box-Behnken statistical design. The optimized NLC was composed of glyceryl monostearate (solid lipid) and Capmul MCM EP (liquid lipid) as the lipid phase (3% w/v), with poloxamer 188 (1%) and Tween 80 (1%) as surfactants. The mean particle size, polydispersity index (PDI), zeta potential (ζ) and entrapment efficiency (%) of the optimized NLC formulation were observed to be 150.3 ± 4.67 nm, 0.175 ± 0.022, -32.9 ± 1.36 mV and 84.95 ± 5.63%, respectively. The NLC formulation showed better in vitro release in simulated intestinal fluid (pH 6.8) than the API suspension. Confocal laser scanning showed deeper permeation of the formulation across rat intestine compared to a rhodamine B dye solution. A pharmacokinetic study in female albino Wistar rats showed a 5.4-fold increase in relative bioavailability with the NLC compared to the API suspension. The optimized NLC formulation also showed a significant (p < 0.01) lipid-lowering effect in hyperlipidemic rats. Therefore, NLCs represent great potential for improved efficacy of ROS-Ca after oral administration.
Fares, Ahmed R; ElMeshad, Aliaa N; Kassem, Mohamed A A
2018-11-01
This study aims at preparing and optimizing lacidipine (LCDP) polymeric micelles using thin film hydration technique in order to overcome LCDP solubility-limited oral bioavailability. A two-factor three-level central composite face-centered design (CCFD) was employed to optimize the formulation variables to obtain LCDP polymeric micelles of high entrapment efficiency and small and uniform particle size (PS). Formulation variables were: Pluronic to drug ratio (A) and Pluronic P123 percentage (B). LCDP polymeric micelles were assessed for entrapment efficiency (EE%), PS and polydispersity index (PDI). The formula with the highest desirability (0.959) was chosen as the optimized formula. The values of the formulation variables (A and B) in the optimized polymeric micelles formula were 45% and 80%, respectively. Optimum LCDP polymeric micelles had entrapment efficiency of 99.23%, PS of 21.08 nm and PDI of 0.11. Optimum LCDP polymeric micelles formula was physically characterized using transmission electron microscopy. LCDP polymeric micelles showed saturation solubility approximately 450 times that of raw LCDP in addition to significantly enhanced dissolution rate. Bioavailability study of optimum LCDP polymeric micelles formula in rabbits revealed a 6.85-fold increase in LCDP bioavailability compared to LCDP oral suspension.
A chaos wolf optimization algorithm with self-adaptive variable step-size
NASA Astrophysics Data System (ADS)
Zhu, Yong; Jiang, Wanlu; Kong, Xiangdong; Quan, Lingxiao; Zhang, Yongshun
2017-10-01
To explore the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with self-adaptive variable step size was proposed. The algorithm is based on the swarm intelligence of the wolf pack, fully simulating the predation behavior and prey-distribution habits of wolves. It possesses three intelligent behaviors: migration, summons and siege. A "winner-take-all" competition rule and a "survival of the fittest" update mechanism also characterize the algorithm. Moreover, it combines the strategies of self-adaptive variable step-size search and chaos optimization. The CWOA was applied to parameter optimization of twelve typical, complex nonlinear functions, and the obtained results were compared with those of many existing algorithms, including the classical genetic algorithm, the particle swarm optimization algorithm and the leader wolf pack search algorithm. The results indicate that the CWOA possesses preferable optimization ability, with advantages in optimization accuracy and convergence rate. Furthermore, it demonstrates high robustness and global searching ability.
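A compact sketch of the named ingredients, chaotic (logistic-map) initialization, a lead wolf that the pack converges on, and a step size that shrinks adaptively, follows. This is one illustrative reading of the algorithm, not the authors' implementation:

```python
# Toy wolf-pack optimizer: logistic-map init, leader attraction, shrinking step.
import numpy as np

def sphere(x):                          # test function to minimize
    return float(np.sum(x ** 2))

DIM, PACK, ITERS = 5, 30, 200
lo, hi = -5.0, 5.0
rng = np.random.default_rng(0)

# Chaotic initialization: iterate the logistic map z <- 4z(1-z).
z = np.linspace(0.01, 0.99, PACK * DIM).reshape(PACK, DIM)
for _ in range(60):
    z = 4.0 * z * (1.0 - z)
wolves = lo + (hi - lo) * z

step = 0.5 * (hi - lo)                  # self-adaptive step size
for _ in range(ITERS):
    fitness = np.array([sphere(w) for w in wolves])
    leader = wolves[np.argmin(fitness)].copy()
    # Summons/siege: random exploratory kick plus attraction toward the leader.
    wolves = (wolves + rng.uniform(-1, 1, wolves.shape) * step
              + 0.5 * (leader - wolves))
    wolves = np.clip(wolves, lo, hi)
    # "Survival of the fittest": the worst wolf is replaced by the leader.
    wolves[np.argmax([sphere(w) for w in wolves])] = leader
    step *= 0.98                        # shrink the step as the pack converges

print("best value found:", min(sphere(w) for w in wolves))
```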
Multi-GPU implementation of a VMAT treatment plan optimization algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tian, Zhen, E-mail: Zhen.Tian@UTSouthwestern.edu, E-mail: Xun.Jia@UTSouthwestern.edu, E-mail: Steve.Jiang@UTSouthwestern.edu; Folkerts, Michael; Tan, Jun
Purpose: Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, a GPU’s relatively small memory size cannot handle cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors’ group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example problem to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. Methods: The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors’ method, the sparse DDC matrix is first stored on a CPU in coordinate list format (COO). On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of beamlet price, the first step in PP, is accomplished using multiple GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of the PP and MP problems are implemented on a CPU or a single GPU due to their modest problem scale and computational loads. The Barzilai-Borwein algorithm with a subspace step scheme is adopted here to solve the MP problem. A head and neck (H and N) cancer case is then used to validate the authors’ method. The authors also compare their multi-GPU implementation with three different single-GPU implementation strategies, i.e., truncating the DDC matrix (S1), repeatedly transferring the DDC matrix between CPU and GPU (S2), and porting computations involving the DDC matrix to the CPU (S3), in terms of both plan quality and computational efficiency. Two more H and N patient cases and three prostate cases are used to demonstrate the advantages of the authors’ method. Results: The authors’ multi-GPU implementation can finish the optimization process within ∼1 min for the H and N patient case. S1 leads to an inferior plan quality although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23-46 s. Conversely, to obtain clinically comparable or acceptable plans for all six of the VMAT cases that the authors have tested in this paper, the optimization time needed in a commercial TPS system on CPU was found to be on the order of several minutes. Conclusions: The results demonstrate that the multi-GPU implementation of the authors’ column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality. The authors’ study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
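The storage scheme described in the Methods, a COO master copy on the host split by beam-angle column blocks into per-device CSR pieces, can be sketched with scipy standing in for the GPU sparse containers. Sizes and the partitioning rule below are toy assumptions, not the clinical cases:

```python
# Split a COO dose-deposition matrix into per-"GPU" CSR column blocks and
# compute the beamlet-price step as a blockwise transpose-vector product.
import numpy as np
from scipy.sparse import random as sprandom, csr_matrix

rng = np.random.default_rng(4)
n_voxels, n_beamlets, n_gpus = 5000, 1200, 4
ddc = sprandom(n_voxels, n_beamlets, density=0.01, format="coo", random_state=42)

cols_per_gpu = n_beamlets // n_gpus
blocks = []
for g in range(n_gpus):
    lo, hi = g * cols_per_gpu, (g + 1) * cols_per_gpu
    keep = (ddc.col >= lo) & (ddc.col < hi)       # entries for this column block
    blocks.append(csr_matrix((ddc.data[keep],
                              (ddc.row[keep], ddc.col[keep] - lo)),
                             shape=(n_voxels, cols_per_gpu)))

# Pricing step: each device applies its block's transpose to the dual vector.
dual = rng.standard_normal(n_voxels)
prices = np.concatenate([b.T @ dual for b in blocks])
print(prices.shape)  # (n_beamlets,)
```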
Comparing kinetic curves in liquid chromatography
NASA Astrophysics Data System (ADS)
Kurganov, A. A.; Kanat'eva, A. Yu.; Yakubenko, E. E.; Popova, T. P.; Shiryaeva, V. E.
2017-01-01
Five equations for kinetic curves, which connect the number of theoretical plates N and the analysis time t0 for five different versions of optimization depending on the parameters being varied (e.g., mobile phase flow rate, pressure drop, sorbent grain size), are obtained by means of mathematical modeling. It is found that a method based on the optimization of the sorbent grain size at fixed pressure is most suitable for the optimization of rapid separations. It is noted that the advantages of this method are limited to the region of relatively low efficiency; in the region of high efficiency, the advantage passes to a method based on optimizing both the sorbent grain size and the pressure drop across the column.
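The fixed-pressure grain-size optimization can be illustrated with a toy kinetic-plot calculation: run each particle diameter at its van Deemter optimum, let the allowed pressure drop set the column length through Darcy's law, and record N and t0. The plate-height and permeability constants below are generic textbook-style assumptions, not the paper's equations:

```python
# Toy kinetic plot: N and t0 versus particle diameter at a fixed pressure drop.
import numpy as np

dP, eta, phi = 4e7, 1e-3, 700.0      # pressure drop (Pa), viscosity, flow resistance
Dm = 1e-9                            # analyte diffusivity, m^2/s (assumed)
A, B, C = 1.0, 2.0, 0.1              # reduced van Deemter coefficients (assumed)

for dp in (5e-6, 3e-6, 2e-6, 1e-6):  # sorbent grain diameters, m
    nu_opt = np.sqrt(B / C)                   # reduced velocity at minimum h
    h_min = A + B / nu_opt + C * nu_opt       # reduced plate height
    u = nu_opt * Dm / dp                      # linear velocity, m/s
    L = dP * dp ** 2 / (phi * eta * u)        # Darcy: length allowed by dP
    N, t0 = L / (h_min * dp), L / u
    print(f"dp={dp*1e6:3.1f} um: L={L*100:7.1f} cm, N={N:9.0f}, t0={t0:8.1f} s")
```

The output reproduces the familiar trade-off behind such kinetic curves: at fixed pressure, smaller grains give much shorter analysis times at the cost of attainable plate count, so the best grain size depends on the efficiency region targeted.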
NASA Technical Reports Server (NTRS)
Wincheski, Buzz; Williams, Phillip; Simpson, John
2007-01-01
The use of eddy current techniques for the detection of outer diameter damage in tubing and many complex aerospace structures often requires the use of an inner diameter probe due to a lack of access to the outside of the part. In small bore structures the probe size and orientation are constrained by the inner diameter of the part, complicating the optimization of the inspection technique. Detection of flaws through a significant remaining wall thickness becomes limited not only by the standard depth of penetration, but also geometrical aspects of the probe. Recently, an orthogonal eddy current probe was developed for detection of such flaws in Space Shuttle Primary Reaction Control System (PRCS) Thrusters. In this case, the detection of deeply buried stress corrosion cracking by an inner diameter eddy current probe was sought. Probe optimization was performed based upon the limiting spatial dimensions, flaw orientation, and required detection sensitivity. Analysis of the probe/flaw interaction was performed through the use of finite and boundary element modeling techniques. Experimental data for the flaw detection capabilities, including a probability of detection study, will be presented along with the simulation data. The results of this work have led to the successful deployment of an inspection system for the detection of stress corrosion cracking in Space Shuttle Primary Reaction Control System (PRCS) Thrusters.