Meta-control of combustion performance with a data mining approach
NASA Astrophysics Data System (ADS)
Song, Zhe
Large-scale combustion processes are complex and pose challenges for optimizing their performance. Traditional approaches based on thermodynamics have limitations in finding optimal operational regions due to the time-varying nature of the process. Recent advances in information technology enable people to collect large volumes of process data easily and continuously. The collected process data contain rich information about the process and, to some extent, represent a digital copy of the process over time. Although large volumes of data exist in industrial combustion processes, they are not fully utilized to the level where the process can be optimized. Data mining is an emerging science that finds patterns or models in large data sets. It has found many successful applications in business marketing, medical, and manufacturing domains. The focus of this dissertation is on applying data mining to industrial combustion processes, and ultimately optimizing combustion performance. However, the philosophy, methods, and frameworks discussed in this research can also be applied to other industrial processes. Optimizing an industrial combustion process has two major challenges. One is that the underlying process model changes over time, so obtaining an accurate process model is nontrivial. The other is that a process model with high fidelity is usually highly nonlinear, so solving the optimization problem requires efficient heuristics. This dissertation is set to solve these two major challenges. The major contribution of this four-year research is a data-driven solution for optimizing the combustion process, in which the process model or knowledge is identified from the process data, and optimization is then executed by evolutionary algorithms to search for optimal operating regions.
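The closing idea of the abstract above, identify a process model from data, then let an evolutionary algorithm search it for optimal operating regions, can be sketched minimally. The efficiency function and its two operational settings below are invented for illustration; a real application would fit such a model from historical combustion data.

```python
import random

# Hypothetical data-driven process model: combustion efficiency as a function
# of two operational settings (say, air/fuel ratio and feed rate). In practice
# this surrogate would be identified from process data, not written by hand.
def efficiency(x, y):
    return -((x - 1.6) ** 2) - 0.5 * (y - 3.0) ** 2 + 0.95

def evolve(generations=200, pop_size=20, sigma=0.1, seed=0):
    """Simple greedy evolutionary search: mutate the best-known setting with
    Gaussian noise and keep any candidate that improves efficiency."""
    rng = random.Random(seed)
    best = (rng.uniform(0, 3), rng.uniform(0, 6))
    best_fit = efficiency(*best)
    for _ in range(generations):
        for _ in range(pop_size):
            cand = (best[0] + rng.gauss(0, sigma), best[1] + rng.gauss(0, sigma))
            fit = efficiency(*cand)
            if fit > best_fit:
                best, best_fit = cand, fit
    return best, best_fit

(best_x, best_y), fit = evolve()
```

The search converges toward the model's optimum (here near x = 1.6, y = 3.0) without needing gradients, which is why evolutionary heuristics suit the highly nonlinear models the dissertation describes.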
Jiang, Zheng; Wang, Hong; Wu, Qi-nan
2015-06-01
To optimize the process of polysaccharide extraction from Spirodela polyrrhiza, five factors related to the polysaccharide extraction rate were screened by Plackett-Burman design. Based on this screening, three factors, alcohol volume fraction, extraction temperature, and material-to-liquid ratio, were investigated by Box-Behnken response surface methodology. The order of the three factors' effects on the extraction rate of polysaccharide from Spirodela polyrrhiza was: extraction temperature, alcohol volume fraction, material-to-liquid ratio. According to the Box-Behnken response surface, the best extraction conditions were: alcohol volume fraction of 81%, material-to-liquid ratio of 1:42, extraction temperature of 100 degrees C, and extraction time of 60 min, repeated four times. The combination of Plackett-Burman design and Box-Behnken response surface methodology used to optimize the extraction process in this study is effective and stable.
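The core of the response-surface step above is fitting a low-order polynomial to designed-experiment data and reading the optimum off the fitted surface. A minimal one-factor sketch, with illustrative extraction-rate data that are not from the study: fit a quadratic in temperature by least squares, then take the parabola's vertex as the predicted optimum.

```python
# Least-squares fit of y = a + b*x + c*x**2 via the 3x3 normal equations,
# solved by Gauss-Jordan elimination (pure stdlib, no numpy).
def fit_quadratic(points):
    n = len(points)
    sx = sum(x for x, _ in points); sx2 = sum(x**2 for x, _ in points)
    sx3 = sum(x**3 for x, _ in points); sx4 = sum(x**4 for x, _ in points)
    sy = sum(y for _, y in points); sxy = sum(x * y for x, y in points)
    sx2y = sum(x * x * y for x, y in points)
    m = [[n, sx, sx2, sy], [sx, sx2, sx3, sxy], [sx2, sx3, sx4, sx2y]]
    for i in range(3):
        piv = m[i][i]
        m[i] = [v / piv for v in m[i]]
        for j in range(3):
            if j != i:
                m[j] = [vj - m[j][i] * vi for vj, vi in zip(m[j], m[i])]
    return m[0][3], m[1][3], m[2][3]  # a, b, c

# Illustrative (invented) extraction rate vs. temperature observations.
data = [(70, 2.1), (80, 3.0), (90, 3.5), (100, 3.6), (110, 3.3)]
a, b, c = fit_quadratic(data)
optimum_T = -b / (2 * c)  # vertex of the fitted parabola (c < 0: a maximum)
```

Box-Behnken designs do the same thing in three factors with interaction terms; the vertex calculation generalizes to setting the gradient of the fitted surface to zero.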
Matias-Guiu, Pau; Rodríguez-Bencomo, Juan José; Pérez-Correa, José R; López, Francisco
2018-04-15
Developing new distillation strategies can help the spirits industry improve quality, safety, and process efficiency. Batch stills equipped with a packed column and an internal partial condenser are an innovative experimental system allowing fast and flexible management of rectification. In this study, the levels of four factors (heart-cut volume, head-cut volume, pH, and cooling flow rate of the internal partial condenser during the head-cut fraction) were optimized with respect to 18 major volatile compounds of Muscat spirits using response surface methodology and desirability function approaches. Results showed that high rectification at the beginning of the heart-cut enhances the overall positive aroma compounds of the product while reducing off-flavor compounds. In contrast, the optimum levels of the heart-cut volume, head-cut volume, and pH factors varied depending on the process goal. Finally, three optimal operational conditions (head off-flavor reduction, flowery terpenic enhancement, and fruity ester enhancement) were evaluated by chemical and sensory analysis. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Parallel Pipelined Renderer for the Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Chiueh, Tzi-Cker; Ma, Kwan-Liu
1997-01-01
This paper presents a strategy for efficiently rendering time-varying volume data sets on a distributed-memory parallel computer. Time-varying volume data occupy large amounts of storage space, and visualizing them requires reading large files continuously or periodically throughout the course of the visualization process. Instead of using all the processors to collectively render one volume at a time, a pipelined rendering process is formed by partitioning the processors into groups to render multiple volumes concurrently. In this way, the overall rendering time may be greatly reduced because the pipelined rendering tasks are overlapped with the I/O required to load each volume into a group of processors; moreover, parallelization overhead may be reduced as a result of partitioning the processors. We modify an existing parallel volume renderer to exploit various levels of rendering parallelism and to study how the partitioning of processors may lead to optimal rendering performance. Two factors which are important to the overall execution time are resource utilization efficiency and pipeline startup latency. The optimal partitioning configuration is the one that balances these two factors. Tests on Intel Paragon computers show that in general optimal partitionings do exist for a given rendering task and result in a 40-50% saving in overall rendering time.
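The partitioning trade-off described above can be illustrated with a crude cost model, assumed here rather than taken from the paper: each group of processors repeatedly loads a volume (fixed I/O cost) and renders it (work divided over the group plus a per-processor overhead), and groups work concurrently on disjoint subsets of the time steps. All numbers are invented for illustration.

```python
import math

# Illustrative cost model: g groups share n_proc processors and n_vols volumes.
def total_time(g, n_proc=64, n_vols=100, io=2.0, work=60.0, overhead=0.05):
    per_group = n_proc // g
    # Render cost: work split across the group, plus parallelization overhead
    # that grows with group size.
    render = work / per_group + overhead * per_group
    # Each group sequentially processes its share of the volumes.
    return math.ceil(n_vols / g) * (io + render)

# Too few groups: per-processor overhead dominates. Too many groups: each
# group is too small to render a volume quickly. The optimum lies between.
best_g = min([1, 2, 4, 8, 16, 32, 64], key=total_time)
```

Under this model the best configuration is an intermediate number of groups, mirroring the paper's finding that an optimal partitioning exists and balances utilization against pipeline effects.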
Optimization of the performance of the polymerase chain reaction in silicon-based microstructures.
Taylor, T B; Winn-Deen, E S; Picozza, E; Woudenberg, T M; Albin, M
1997-01-01
We have demonstrated the ability to perform real-time, homogeneous, sequence-specific detection of PCR products in silicon microstructures. Optimal design and processing result in performance (yield and specificity) for high surface-to-volume silicon structures equivalent to that of larger-volume reactions in polypropylene tubes. Amplifications in volumes as small as 0.5 microl, with thermal cycling times reduced as much as 5-fold from those of conventional systems, have been demonstrated for the microstructures. PMID:9224619
Simulation and optimization of volume holographic imaging systems in Zemax.
Wissmann, Patrick; Oh, Se Baek; Barbastathis, George
2008-05-12
We present a new methodology for ray-tracing analysis of volume holographic imaging (VHI) systems. Using the k-sphere formulation, we apply geometrical relationships to describe the volumetric diffraction effects imposed on rays passing through a volume hologram. We explain the k-sphere formulation in conjunction with the ray-tracing process and describe its implementation in a Zemax UDS (User Defined Surface). We conclude with examples of simulation and optimization results and demonstrate the consistency and usefulness of the proposed model.
A practical and sensitive method to assess volatile organic compounds (VOCs) from JP-8 jet fuel in human whole blood was developed by modifying previously established liquid-liquid extraction procedures, optimizing extraction times, solvent volume, specific sample processing te...
Liu, Huolong; Galbraith, S C; Ricart, Brendon; Stanton, Courtney; Smith-Goettler, Brandye; Verdi, Luke; O'Connor, Thomas; Lee, Sau; Yoon, Seongkyu
2017-06-15
In this study, the influence of key process variables (screw speed, throughput, and liquid-to-solid (L/S) ratio) of a continuous twin screw wet granulation (TSWG) was investigated using a central composite face-centered (CCF) experimental design method. Regression models were developed to predict the process responses (motor torque, granule residence time), granule properties (size distribution, volume average diameter, yield, relative width, flowability), and tablet properties (tensile strength). The effects of the three key process variables were analyzed via contour and interaction plots. The experimental results demonstrated that all the process responses, granule properties, and tablet properties are influenced by changes in screw speed, throughput, and L/S ratio. The TSWG process was optimized to produce granules with a specific volume average diameter of 150 μm and a yield of 95% based on the developed regression models. A design space (DS) was built, based on a volume average granule diameter between 90 and 200 μm and a granule yield larger than 75%, with a failure probability analysis using Monte Carlo simulations. Validation experiments successfully confirmed the robustness and accuracy of the DS generated using the CCF experimental design in optimizing a continuous TSWG process. Copyright © 2017 Elsevier B.V. All rights reserved.
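The failure-probability analysis mentioned above can be sketched as a Monte Carlo loop: perturb the process settings, push them through the fitted regression models, and count how often the predicted granule properties fall outside the design-space limits. The linear models and all coefficients below are invented stand-ins for the study's CCF-fitted models.

```python
import random

# Hypothetical regression models for granule diameter (um) and yield (%) as
# functions of L/S ratio and screw speed (rpm). Coefficients are illustrative.
def diameter(ls, speed):
    return 50 + 400 * ls - 0.1 * speed

def yield_pct(ls, speed):
    return 60 + 80 * ls + 0.02 * speed

def failure_probability(ls, speed, n=10000, sd_ls=0.02, sd_speed=10, seed=1):
    """Monte Carlo estimate of the chance a batch violates the design space
    (diameter in [90, 200] um and yield > 75%) under input variability."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        l = rng.gauss(ls, sd_ls)        # variability in L/S ratio
        s = rng.gauss(speed, sd_speed)  # variability in screw speed
        d, y = diameter(l, s), yield_pct(l, s)
        if not (90 <= d <= 200 and y > 75):
            fails += 1
    return fails / n

p_center = failure_probability(0.25, 300)  # operating point well inside the DS
p_edge = failure_probability(0.12, 300)    # operating point near the DS boundary
```

Mapping the failure probability over the factor space is what turns the regression models into a probabilistic design space rather than a deterministic one.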
Carro, N; García, I; Ignacio, M-C; Llompart, M; Yebra, M-C; Mouteira, A
2002-10-01
A sample-preparation procedure (extraction and saponification) using microwave energy is proposed for the determination of organochlorine pesticides in oyster samples. A Plackett-Burman factorial design was used to optimize the microwave-assisted extraction and mild saponification on a freeze-dried sample spiked with a mixture of aldrin, endrin, dieldrin, heptachlor, heptachlor epoxide, isodrin, trans-nonachlor, p,p'-DDE, and p,p'-DDD. Six variables were considered in the optimization process: solvent volume, extraction time, extraction temperature, amount of acetone (%) in the extraction solvent, amount of sample, and volume of NaOH solution. The results show that the amount of sample is statistically significant for dieldrin, aldrin, p,p'-DDE, heptachlor, and trans-nonachlor, and solvent volume for dieldrin, aldrin, and p,p'-DDE. The volume of NaOH solution is statistically significant for aldrin and p,p'-DDE only. Extraction temperature and extraction time appear to be the main factors determining the efficiency of the extraction process for isodrin and p,p'-DDE, respectively. The optimized procedure was compared with conventional Soxhlet extraction.
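Plackett-Burman screening designs like the one above (and in the Spirodela abstract earlier) estimate main effects of many two-level factors in very few runs. A minimal sketch of the classic 12-run design, built by cycling the standard generator row, together with the contrast used to judge whether a factor (e.g. solvent volume) is significant:

```python
# Standard 12-run Plackett-Burman generator row (11 factors at +1/-1 levels).
GEN = [1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1]

def plackett_burman_12():
    """Cycle the generator to form 11 runs, then append an all -1 run.
    Columns are balanced (six +1, six -1) and mutually orthogonal."""
    rows = [GEN[-s:] + GEN[:-s] if s else list(GEN) for s in range(11)]
    rows.append([-1] * 11)
    return rows

def main_effect(design, responses, factor):
    """Mean response at the factor's +1 level minus the mean at its -1 level."""
    hi = [r for row, r in zip(design, responses) if row[factor] == 1]
    lo = [r for row, r in zip(design, responses) if row[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

design = plackett_burman_12()
# Illustrative responses that depend only on factor 3 with effect size 10.
responses = [10 + 5 * row[3] for row in design]
```

Because the columns are orthogonal, the contrast for factor 3 recovers its effect exactly while the contrasts for inactive factors come out zero, which is what makes the design an efficient screen.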
Bioleaching of nickel from spent petroleum catalyst using Acidithiobacillus thiooxidans DSM-11478.
Sharma, Mohita; Bisht, Varsha; Singh, Bina; Jain, Pratiksha; Mandal, Ajoy K; Lal, Banwari; Sarma, Priyangshu M
2015-06-01
The present work deals with the optimization of culture conditions and process parameters for bioleaching of spent petroleum catalyst collected from a petroleum refinery. The efficacy of Ni bioleaching from spent petroleum catalyst was determined using a pure culture of Acidithiobacillus thiooxidans DSM-11478. The culture conditions of pH, temperature, and headspace-volume-to-media-volume ratio were optimized. EDX analysis was done to confirm the presence of Ni in the spent catalyst after roasting it to decoke its surface. The optimum temperature for A. thiooxidans DSM-11478 growth was found to be 32 degrees C. The enhanced recovery of nickel at very low pH was attributed to the higher acidic strength of the sulfuric acid produced in the culture medium by the bacterium. During the bioleaching process, 89% of the Ni present in the catalyst waste could be successfully recovered under optimized conditions. This environmentally friendly bioleaching process proved more efficient than the chemical method. Taking leads from the lab-scale results, bioleaching in larger volumes (1, 5, and 10 L) was also performed to provide guidelines for taking up this technology for in situ industrial waste management.
Dose-mass inverse optimization for minimally moving thoracic lesions
NASA Astrophysics Data System (ADS)
Mihaylov, I. B.; Moros, E. G.
2015-05-01
In the past decade, several different radiotherapy treatment plan evaluation and optimization schemes have been proposed as viable approaches aiming for dose escalation or increased healthy tissue sparing. In particular, it has been argued that dose-mass plan evaluation and treatment plan optimization might be viable alternatives to the standard of care, which is realized through dose-volume evaluation and optimization. The purpose of this investigation is to apply dose-mass optimization to a cohort of lung cancer patients and compare the achievable healthy tissue sparing to that achievable through dose-volume optimization. Fourteen non-small cell lung cancer (NSCLC) patient plans were studied retrospectively. The range of tumor motion was less than 0.5 cm, and motion management was not considered in the treatment planning process. For each case, dose-volume (DV)-based and dose-mass (DM)-based optimization was performed. Nine-field step-and-shoot IMRT was used, with all of the optimization parameters kept the same between DV and DM optimizations. Commonly used dosimetric indices (DIs), such as the dose to 1% of the spinal cord volume, the dose to 50% of the esophageal volume, and the doses to 20 and 30% of healthy lung volumes, were used for cross-comparison. Similarly, mass-based indices (MIs), such as the doses to 20 and 30% of healthy lung masses, 1% of spinal cord mass, and 33% of heart mass, were also tallied. Statistical equivalence tests were performed to quantify the findings for the entire patient cohort. Both DV and DM plans for each case were normalized such that 95% of the planning target volume received the prescribed dose. DM optimization resulted in greater organ-at-risk (OAR) sparing than DV optimization. The average sparing of cord, heart, and esophagus was 23, 4, and 6%, respectively. For the majority of the DIs, DM optimization resulted in lower lung doses.
On average, the doses to 20 and 30% of healthy lung were lower by approximately 3 and 4%, whereas lung volumes receiving 2000 and 3000 cGy were lower by 3 and 2%, respectively. The behavior of MIs was very similar. The statistical analyses of the results again indicated better healthy anatomical structure sparing with DM optimization. The presented findings indicate that dose-mass-based optimization results in statistically significant OAR sparing as compared to dose-volume-based optimization for NSCLC. However, the sparing is case-dependent and it is not observed for all tallied dosimetric endpoints.
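The dose-volume vs. dose-mass distinction above comes down to the weighting used when ranking voxels: equal voxel volumes for a DVH index, density-weighted voxel masses for a DMH index. A toy illustration (invented numbers, not patient data) where the hottest voxels sit in low-density lung tissue, so the mass-based index reports a lower dose than the volume-based one:

```python
def dose_at_top_fraction(doses, weights, fraction):
    """Minimum dose received by the hottest `fraction` of the total weight:
    a D_x index of the DVH when weights are volumes, of the DMH when masses."""
    order = sorted(zip(doses, weights), reverse=True)  # hottest voxels first
    target = fraction * sum(w for _, w in order)
    acc = 0.0
    for d, w in order:
        acc += w
        if acc >= target:
            return d
    return order[-1][0]

doses = [10, 20, 30, 40, 50]           # Gy per voxel (illustrative)
volumes = [1.0] * 5                    # equal voxel volumes
density = [1.0, 1.0, 1.0, 0.3, 0.3]    # hottest voxels are low-density lung

d20_volume = dose_at_top_fraction(doses, volumes, 0.20)  # DVH-style D20
d20_mass = dose_at_top_fraction(doses, density, 0.20)    # DMH-style D20
```

Because the hot voxels carry little mass, the same plan scores differently under the two metrics, which is the mechanism behind the differing OAR-sparing results reported in the abstract.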
Closed-Loop Multitarget Optimization for Discovery of New Emulsion Polymerization Recipes
2015-01-01
Self-optimization of chemical reactions enables faster optimization of reaction conditions or discovery of molecules with required target properties. The technology of self-optimization has been expanded to the discovery of new process recipes for the manufacture of complex functional products. A new machine-learning algorithm, specifically designed for multiobjective target optimization with an explicit aim to minimize the number of “expensive” experiments, guides the discovery process. This “black-box” approach assumes no a priori knowledge of the chemical system and hence is particularly suited to rapid development of processes to manufacture specialist low-volume, high-value products. The approach was demonstrated in the discovery of process recipes for a semibatch emulsion copolymerization, targeting a specific particle size and full conversion. PMID:26435638
Speed and convergence properties of gradient algorithms for optimization of IMRT.
Zhang, Xiaodong; Liu, Helen; Wang, Xiaochun; Dong, Lei; Wu, Qiuwen; Mohan, Radhe
2004-05-01
Gradient algorithms are the most commonly employed search methods in the routine optimization of IMRT plans. It is well known that local minima can exist for dose-volume-based and biology-based objective functions. The purpose of this paper is to compare the relative speed of different gradient algorithms, to investigate strategies for accelerating the optimization process, to assess the validity of these strategies, and to study the convergence properties of these algorithms for dose-volume and biological objective functions. With these aims in mind, we implemented Newton's, conjugate gradient (CG), and steepest descent (SD) algorithms for dose-volume- and EUD-based objective functions. Our implementation of Newton's algorithm approximates the second derivative matrix (Hessian) by its diagonal. The standard SD algorithm and the CG algorithm with "line minimization" were also implemented. In addition, we investigated the use of a variation of the CG algorithm, called the "scaled conjugate gradient" (SCG) algorithm. To accelerate the optimization process, we investigated the validity of a "hybrid optimization" strategy, in which approximations to calculated dose distributions are used during most of the iterations. Published studies have indicated that getting trapped in local minima is not a significant problem. To investigate this issue further, we first obtained, by trial and error and starting with uniform intensity distributions, the parameters of the dose-volume- or EUD-based objective functions that produced IMRT plans satisfying the clinical requirements. Using the resulting optimized intensity distributions as the initial guess, we investigated the possibility of getting trapped in a local minimum. For most of the results presented, we used a lung cancer case. To illustrate the generality of our methods, the results for a prostate case are also presented.
For both dose-volume- and EUD-based objective functions, Newton's method far outperforms the other algorithms in terms of speed. The SCG algorithm, which avoids expensive "line minimization," can speed up the standard CG algorithm by at least a factor of 2. For the same initial conditions, all algorithms converge essentially to the same plan. However, we demonstrate that for any of the algorithms studied, starting with previously optimized intensity distributions as the initial guess but with different objective function parameters, the solution frequently gets trapped in local minima. We found that an initial intensity distribution obtained from IMRT optimization using objective function parameters that favor a specific anatomic structure would lead to a local minimum corresponding to that structure. Our results indicate that, among the gradient algorithms tested, Newton's method appears to be the fastest by far. Different gradient algorithms have the same convergence properties for dose-volume- and EUD-based objective functions. The hybrid dose calculation strategy is valid and can significantly accelerate the optimization process. The degree of acceleration achieved depends on the type of optimization problem being addressed (e.g., IMRT optimization, intensity-modulated beam configuration optimization, or objective function parameter optimization). Under special conditions, gradient algorithms will get trapped in local minima, and reoptimization, starting with the results of a previous optimization, will lead to solutions that are generally not significantly different from the local minimum.
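The speed gap the abstract reports between Newton's method with a diagonal Hessian and steepest descent is easy to see on a small test problem. The ill-conditioned quadratic below is a stand-in for a dose-based objective, not the paper's actual function: dividing each gradient component by its curvature rescales the step per coordinate, while plain steepest descent must use a single small step to stay stable.

```python
# f(x) = 0.5 * (A*x0**2 + B*x1**2): curvatures differ by 100x (ill-conditioned).
A, B = 1.0, 100.0

def grad(x):
    return [A * x[0], B * x[1]]

def steepest_descent(x, step=0.009, iters=500):
    # Step must satisfy step < 2/B for stability, so progress along the
    # low-curvature coordinate is slow.
    for _ in range(iters):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

def diagonal_newton(x, iters=5):
    # Divide each gradient component by the diagonal Hessian entry;
    # for a quadratic this lands on the minimum immediately.
    for _ in range(iters):
        g = grad(x)
        x = [x[0] - g[0] / A, x[1] - g[1] / B]
    return x

sd = steepest_descent([1.0, 1.0])
nw = diagonal_newton([1.0, 1.0])
```

On real IMRT objectives the diagonal is only an approximation of the Hessian, so Newton's method does not converge in one step, but the per-coordinate curvature scaling is the source of its speed advantage.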
High-productivity DRIE solutions for 3D-SiP and MEMS volume manufacturing
NASA Astrophysics Data System (ADS)
Puech, M.; Thevenoud, J. M.; Launay, N.; Arnal, N.; Godinat, P.; Andrieu, B.; Gruffat, J. M.
2006-12-01
Emerging 3D-SiP technologies and high-volume MEMS applications require high-productivity mass-production DRIE systems. The Alcatel DRIE product range has recently been optimized to reach the highest process and hardware production performance. A study based on the sub-micron high-aspect-ratio structures encountered in the most stringent 3D-SiP applications has been carried out. The optimization of the Bosch process parameters has shown ultra-high silicon etch rates, with unrivaled uniformity and repeatability, leading to excellent process yields. In parallel, the most recent hardware and proprietary design optimizations, including vacuum pumping lines, process chamber, wafer chucks, pressure control system, and gas delivery, are discussed. A key factor in achieving the highest performance was Alcatel's recognized expertise in vacuum and plasma science technologies. These improvements have been monitored in a mass-production environment for a mobile phone application. Field data analysis shows a significant reduction in cost of ownership thanks to increased throughput and much lower running costs. These benefits are now available for all 3D-SiP and high-volume MEMS applications. Typical etched patterns include tapered trenches for CMOS imagers, through-silicon via holes for die stacking, well-controlled profile angles for 3D high-precision inertial sensors, and large exposed-area features for inkjet printer heads and silicon microphones.
Collecting conditions usage metadata to optimize current and future ATLAS software and processing
NASA Astrophysics Data System (ADS)
Rinaldi, L.; Barberis, D.; Formica, A.; Gallas, E. J.; Oda, S.; Rybkin, G.; Verducci, M.; ATLAS Collaboration
2017-10-01
Conditions data (for example: alignment, calibration, data quality) are used extensively in the processing of real and simulated data in ATLAS. The volume and variety of the conditions data needed by different types of processing are quite diverse, so optimizing its access requires a careful understanding of conditions usage patterns. These patterns can be quantified by mining representative log files from each type of processing and gathering detailed information about conditions usage for that type of processing into a central repository.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirayama, S; Fujimoto, R
Purpose: The purpose was to demonstrate a developed acceleration technique for dose optimization and to investigate its applicability to the optimization process in a treatment planning system (TPS) for proton therapy. Methods: In the developed technique, the dose matrix is divided into two parts, main and halo, based on beam sizes. The boundary between the two parts is varied depending on the beam energy and water-equivalent depth by utilizing the beam size as a single threshold parameter. The optimization is executed with two levels of iterations. In the inner loop, doses from the main part are updated, whereas doses from the halo part remain constant. In the outer loop, the doses from the halo part are recalculated. We implemented this technique in the optimization process of the TPS and investigated, in benchmarks, the dependence of the speedup effect on the target volume and the applicability to worst-case optimization (WCO). Results: We created irradiation plans for various cubic targets and measured the optimization time while varying the target volume. The speedup effect improved as the target volume increased, and the calculation speed increased by a factor of six for a 1000 cm3 target. An IMPT plan for the RTOG benchmark phantom was created in consideration of ±3.5% range uncertainties using the WCO. Beams were irradiated at 0, 45, and 315 degrees. The target's prescribed dose and the OAR's Dmax were set to 3 Gy and 1.5 Gy, respectively. Using the developed technique, the calculation speed increased by a factor of 1.5. Meanwhile, no significant difference in the calculated DVHs was found before and after incorporating the technique into the WCO. Conclusion: The developed technique could be adapted to the TPS's optimization. The technique was effective particularly for large target cases.
Optimizing separate phase light hydrocarbon recovery from contaminated unconfined aquifers
NASA Astrophysics Data System (ADS)
Cooper, Grant S.; Peralta, Richard C.; Kaluarachchi, Jagath J.
A modeling approach is presented that optimizes separate-phase recovery of light non-aqueous phase liquids (LNAPL) for a single dual-extraction well in a homogeneous, isotropic unconfined aquifer. A simulation/regression/optimization (S/R/O) model is developed to predict, analyze, and optimize the oil recovery process. The approach combines detailed simulation, nonlinear regression, and optimization. The S/R/O model utilizes nonlinear regression equations describing system response to time-varying water pumping and oil skimming. Regression equations are developed for residual oil volume and free oil volume. The S/R/O model determines optimized time-varying (stepwise) pumping rates which minimize residual oil volume and maximize free oil recovery while causing free oil volume to decrease by a specified amount. This S/R/O modeling approach implicitly immobilizes the free product plume by reversing the water table gradient while achieving containment. Application to a simple representative problem illustrates the S/R/O model's utility for problem analysis and remediation design. When compared with the best steady pumping strategies, the optimal stepwise pumping strategy improves free oil recovery by 11.5% and reduces the amount of residual oil left in the system due to pumping by 15%. The S/R/O model approach offers promise for enhancing the design of free-phase LNAPL recovery systems and for helping hydrogeologists, engineers, and regulators make cost-effective operation and management decisions.
Design, simulation, and optimization of an RGB polarization independent transmission volume hologram
NASA Astrophysics Data System (ADS)
Mahamat, Adoum Hassan
Volume phase holographic (VPH) gratings have been designed for use in many areas of science and technology, such as optical communication, medical imaging, spectroscopy, and astronomy. The goal of this dissertation is to design a volume phase holographic grating that provides diffraction efficiencies of at least 70% across the entire visible wavelength range, and higher than 90% for red, green, and blue light, when the incident light is unpolarized. First, the complete design, simulation, and optimization of the volume hologram are presented. The optimization is done using a Monte Carlo analysis to solve for the index modulation needed to provide the higher diffraction efficiencies. The solutions are determined by solving the diffraction efficiency equations derived from Kogelnik's two-wave coupled-wave theory. The hologram is further optimized using rigorous coupled-wave analysis to correct for the effects of absorption omitted by Kogelnik's method. Second, the fabrication or recording process of the volume hologram is described in detail. The active region of the volume hologram is created by the interference of two coherent beams within the thin film. Third, the experimental setup and measurement of properties including the diffraction efficiencies of the volume hologram and the thickness of the active region are conducted. Fourth, the polarimetric response of the volume hologram is investigated. The polarization study is developed to provide insight into the effect of the refractive index modulation on the polarization state and diffraction efficiency of the incident light.
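The index-modulation search described above rests on Kogelnik's closed-form efficiency expression. For a lossless, unslanted transmission grating at the Bragg condition (s-polarization), the on-Bragg efficiency is eta = sin²(π·Δn·d / (λ·cosθ)), where Δn is the index modulation, d the grating thickness, λ the wavelength, and θ the propagation angle inside the medium. A minimal sketch of solving for the smallest Δn reaching a target efficiency; the film thickness, wavelength, and angle below are illustrative, not the dissertation's values.

```python
import math

def efficiency(dn, thickness, wavelength, theta):
    """Kogelnik on-Bragg efficiency of a lossless transmission volume grating."""
    nu = math.pi * dn * thickness / (wavelength * math.cos(theta))
    return math.sin(nu) ** 2

def smallest_dn_for(target, thickness, wavelength, theta, step=1e-5):
    """Scan the index modulation upward until the target efficiency is met
    (efficiency rises monotonically until nu reaches pi/2)."""
    dn = 0.0
    while efficiency(dn, thickness, wavelength, theta) < target:
        dn += step
    return dn

# Illustrative parameters: 10 um film, 532 nm (green), 0.2 rad internal angle.
dn_needed = smallest_dn_for(0.9, 10e-6, 532e-9, 0.2)
```

A Monte Carlo search over recording parameters, as in the dissertation, would evaluate this same expression (or its rigorous coupled-wave refinement) at each sampled design point.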
[Study on extraction and purification process of total ginsenosides from Radix Ginseng].
Xie, Li-Ling; Ren, Li; Lai, Xian-Sheng; Cao, Jun-Hui; Mo, Quan-Yi; Chen, Wei-Wen
2009-10-01
To optimize the technological parameters of the extraction and purification process of total ginsenosides from Radix Ginseng. With the contents of ginsenoside Rg1, ginsenoside Re, and ginsenoside Rb1 as markers, an orthogonal design was adopted to optimize the extraction process. The purification process was studied by optimizing the elution ratio, with total ginsenosides as the marker. HPLC and spectrophotometry were employed for the study. The optimum conditions were as follows: extracting twice with 8 times the volume of 75% ethanol for 120 minutes each, at an extraction temperature of 85 degrees C; AB-8 macroporous resin was selected, and the eluant was 4 BV of 70% ethanol. The optimized conditions for extracting and purifying total ginsenosides from Radix Ginseng are feasible.
[Modeling and analysis of volume conduction based on field-circuit coupling].
Tang, Zhide; Liu, Hailong; Xie, Xiaohui; Chen, Xiufa; Hou, Deming
2012-08-01
Numerical simulations of volume conduction can be used to analyze the process of energy transfer and explore the effects of physical factors on energy transfer efficiency. We analyzed the 3D quasi-static electric field by the finite element method and developed a 3D coupled field-circuit model of volume conduction based on the coupling between the circuit and the electric field. The model includes a circuit simulation of the volume conduction to provide direct theoretical guidance for energy transfer optimization design. A field-circuit coupling model with circular cylinder electrodes was established on the platform of the software FEM3.5. Based on this, the effects of electrode cross-section area, electrode distance, and circuit parameters on the performance of the volume conduction system were obtained, which provides a basis for the optimized design of energy transfer efficiency.
Optimizing Endoscope Reprocessing Resources Via Process Flow Queuing Analysis.
Seelen, Mark T; Friend, Tynan H; Levine, Wilton C
2018-05-04
The Massachusetts General Hospital (MGH) is merging its older endoscope processing facilities into a single new facility that will enable high-level disinfection of endoscopes for both the ORs and Endoscopy Suite, leveraging economies of scale for improved patient care and optimal use of resources. Finalized resource planning was necessary for the merging of facilities to optimize staffing and make final equipment selections to support the nearly 33,000 annual endoscopy cases. To accomplish this, we employed operations management methodologies, analyzing the physical process flow of scopes throughout the existing Endoscopy Suite and ORs and mapping the future state capacity of the new reprocessing facility. Further, our analysis required the incorporation of historical case and reprocessing volumes in a multi-server queuing model to identify any potential wait times as a result of the new reprocessing cycle. We also performed sensitivity analysis to understand the impact of future case volume growth. We found that our future-state reprocessing facility, given planned capital expenditures for automated endoscope reprocessors (AERs) and pre-processing sinks, could easily accommodate current scope volume well within the necessary pre-cleaning-to-sink reprocessing time limit recommended by manufacturers. Further, in its current planned state, our model suggested that the future endoscope reprocessing suite at MGH could support an increase in volume of at least 90% over the next several years. Our work suggests that with simple mathematical analysis of historic case data, significant changes to a complex perioperative environment can be made with ease while keeping patient safety as the top priority.
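The multi-server queuing model described above can be sketched with the standard M/M/c (Erlang C) formulas: scopes arrive at rate λ, each of c sinks (or AERs) processes at rate μ, and the Erlang C probability gives the chance an arriving scope must wait. The arrival and service rates below are illustrative, not the MGH figures.

```python
import math

def erlang_c_wait_probability(lam, mu, c):
    """Erlang C: probability an arrival waits in an M/M/c queue."""
    a = lam / mu            # offered load in Erlangs
    rho = a / c             # server utilization; must be < 1 for stability
    num = a**c / (math.factorial(c) * (1 - rho))
    den = sum(a**k / math.factorial(k) for k in range(c)) + num
    return num / den

def mean_wait(lam, mu, c):
    """Expected queueing delay (same time unit as the rates)."""
    return erlang_c_wait_probability(lam, mu, c) / (c * mu - lam)

# Illustrative sizing question: 10 scopes/hour arriving, each sink handles
# 2 scopes/hour; compare 8 sinks against 10.
w8 = mean_wait(10, 2, 8)
w10 = mean_wait(10, 2, 10)
```

Sweeping c and λ in such a model is how one checks whether reprocessing stays within the manufacturers' pre-cleaning-to-sink time limit and how much case-volume growth (here, the reported 90%) the facility can absorb.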
Alejo-Alvarez, Luz; Guzmán-Fierro, Víctor; Fernández, Katherina; Roeckel, Marlene
2016-11-01
A full-scale process for the treatment of 80 tons per day of poultry manure was designed and optimized. A total ammonia nitrogen (TAN) balance was performed at steady state, considering the stoichiometry and the kinetic data from anaerobic digestion (AD) and anaerobic ammonia oxidation. The equipment, reactor design, investment costs, and operational costs were considered. Volume and cost objective functions were used to optimize the process in terms of three variables: the water recycle ratio, the protein conversion during AD, and the TAN conversion in the process. The processes with and without water recycle were compared; savings of 70% and 43% in annual fresh water consumption and heating costs, respectively, were achieved. The optimal process complies with the Chilean environmental legislation limit of 0.05 g total nitrogen/L.
Fong, Erika J.; Huang, Chao; Hamilton, Julie; ...
2015-11-23
Here, a major advantage of microfluidic devices is the ability to manipulate small sample volumes, thus reducing reagent waste and preserving precious sample. However, to achieve robust sample manipulation it is necessary to address device integration with the macroscale environment. To realize repeatable, sensitive particle separation with microfluidic devices, this protocol presents a complete automated and integrated microfluidic platform that enables precise processing of 0.15–1.5 ml samples using microfluidic devices. Important aspects of this system include a modular device layout and robust fixtures resulting in reliable and flexible world-to-chip connections, and fully automated fluid handling which accomplishes closed-loop sample collection, system cleaning, and priming steps to ensure repeatable operation. Different microfluidic devices can be used interchangeably with this architecture. Here we incorporate an acoustofluidic device, detail its characterization and performance optimization, and demonstrate its use for size-separation of biological samples. By using real-time feedback during separation experiments, sample collection is optimized to conserve and concentrate sample. Although requiring the integration of multiple pieces of equipment, advantages of this architecture include the ability to process unknown samples with no additional system optimization, ease of device replacement, and precise, robust sample processing.
2017-01-01
Summary: The present study was done to optimize power ultrasound processing for maximizing the diastase activity of honey and minimizing its hydroxymethylfurfural (HMF) content using response surface methodology. An experimental design with treatment time (1-15 min), amplitude (20-100%) and volume (40-80 mL) as independent variables under controlled temperature conditions was studied, and it was concluded that a treatment time of 8 min, an amplitude of 60% and a volume of 60 mL give optimal diastase activity and HMF content, i.e. 32.07 Schade units and 30.14 mg/kg, respectively. Further thermal profile analyses were done with initial heating temperatures of 65, 75, 85 and 95 °C until the temperature of the honey reached 65 °C, followed by a holding time of 25 min at 65 °C, and the results were compared with the thermal profile of honey treated with optimized power ultrasound. Quality characteristics such as moisture, pH, diastase activity, HMF content, colour parameters and total colour difference were least affected by the optimized power ultrasound treatment. Microbiological analysis also showed lower counts of aerobic mesophilic bacteria in ultrasonically treated honey than in thermally processed honey samples, as well as complete destruction of coliforms, yeasts and moulds. Thus, it was concluded that power ultrasound under the suggested operating conditions is an alternative nonthermal processing technique for honey. PMID:29540991
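A response-surface optimization of this kind can be sketched numerically: fit a second-order polynomial over coded factor levels, then locate its optimum inside the design region. The design, response model, and optimum location below are synthetic, not data from the study:

```python
import numpy as np

def quadratic_design_matrix(X):
    """Second-order RSM model terms for 3 coded factors: intercept,
    linear terms, two-way interactions, and pure quadratics."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([
        np.ones(len(X)), x1, x2, x3,
        x1 * x2, x1 * x3, x2 * x3,
        x1 ** 2, x2 ** 2, x3 ** 2,
    ])

# All 27 runs of a three-level design in coded units (-1, 0, +1);
# a Box-Behnken design would use a ~15-run subset of this space.
levels = [-1.0, 0.0, 1.0]
X = np.array([(a, b, c) for a in levels for b in levels for c in levels])

# Synthetic response with a known optimum at (0.5, -0.25, 0.0).
def true_response(Z):
    return (30.0 - (Z[:, 0] - 0.5) ** 2
            - 2.0 * (Z[:, 1] + 0.25) ** 2 - Z[:, 2] ** 2)

rng = np.random.default_rng(0)
y = true_response(X) + rng.normal(0.0, 0.01, len(X))

# Fit the second-order model and locate its optimum on a fine grid.
beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
g = np.linspace(-1.0, 1.0, 41)
grid = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T
best = grid[np.argmax(quadratic_design_matrix(grid) @ beta)]
print(best)
```

In practice the coded optimum is mapped back to physical units (e.g. minutes, % amplitude, mL) before confirmation runs.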
NASA Astrophysics Data System (ADS)
Prameswari, I. K.; Manuhara, G. J.; Amanto, B. S.; Atmaka, W.
2018-05-01
Tapioca starch application in bread processing changes the water absorption of the dough, while sufficient mixing time ensures optimal water absorption. This research aims to determine the effect of variations in water volume and mixing time on the physical properties of tapioca starch-wheat composite bread, and to identify the best method for composite bread processing. This research used a Complete Randomized Factorial Design (CRFD) with two factors: water volume (111.8 ml, 117.4 ml, 123 ml) and mixing time (16 minutes, 17 minutes 36 seconds, 19 minutes 12 seconds). The results showed that water volume significantly affected dough volume, bread volume and specific volume, baking expansion, and crust thickness. Mixing time significantly affected dough volume and specific volume, bread volume and specific volume, baking expansion, bread height, and crust thickness. The combination of water volume and mixing time significantly affected all physical property parameters except crust thickness.
Xu, Hongyi; Li, Yang; Zeng, Danielle
2017-01-02
Process integration and optimization is the key enabler of the Integrated Computational Materials Engineering (ICME) of carbon fiber composites. In this paper, automated workflows are developed for two types of composites: Sheet Molding Compound (SMC) short fiber composites, and multi-layer unidirectional (UD) composites. For SMC, the proposed workflow integrates material processing simulation, microstructure representative volume element (RVE) models, material property prediction and structure performance simulation to enable multiscale, multidisciplinary analysis and design. Processing parameters, microstructure parameters and vehicle subframe geometry parameters are defined as the design variables; the stiffness and weight of the structure are defined as the responses. For the multi-layer UD structure, this work focuses on the discussion of different design representation methods and their impacts on the optimization performance. Challenges in ICME process integration and optimization are also summarized and highlighted. Two case studies are conducted to demonstrate the integrated process and its application in optimization.
High Productivity DRIE solutions for 3D-SiP and MEMS Volume Manufacturing
NASA Astrophysics Data System (ADS)
Puech, M.; Thevenoud, JM; Launay, N.; Arnal, N.; Godinat, P.; Andrieu, B.; Gruffat, JM
2006-04-01
Emerging 3D-SiP technologies and high volume MEMS applications require high productivity mass production DRIE systems. The Alcatel DRIE product range has recently been optimised to reach the highest process and hardware production performances. A study based on the sub-micron high aspect ratio structures encountered in the most stringent 3D-SiP applications has been carried out. The optimization of the Bosch process parameters has resulted in ultra high silicon etch rates, with unrivalled uniformity and repeatability, leading to an excellent process. In parallel, the most recent hardware and proprietary design optimizations, including vacuum pumping lines, process chamber, wafer chucks, pressure control system and gas delivery, are discussed. These improvements have been monitored in a mass production environment for a mobile phone application. Field data analysis shows a significant reduction of cost of ownership thanks to increased throughput and much lower running costs. These benefits are now available for all 3D-SiP and high volume MEMS applications. Typical etched patterns include tapered trenches for CMOS imagers, through-silicon via holes for die stacking, well controlled profile angles for 3D high precision inertial sensors, and large exposed area features for inkjet printer heads and silicon microphones.
Ott, Denise; Kralisch, Dana; Denčić, Ivana; Hessel, Volker; Laribi, Yosra; Perrichon, Philippe D; Berguerand, Charline; Kiwi-Minsker, Lioubov; Loeb, Patrick
2014-12-01
As the demand for new drugs is rising, the pharmaceutical industry faces the quest of shortening development time, and thus, reducing the time to market. Environmental aspects typically still play a minor role within the early phase of process development. Nevertheless, it is highly promising to rethink, redesign, and optimize process strategies as early as possible in active pharmaceutical ingredient (API) process development, rather than later at the stage of already established processes. The study presented herein deals with a holistic life-cycle-based process optimization and intensification of a pharmaceutical production process targeting a low-volume, high-value API. Striving for process intensification by transfer from batch to continuous processing, as well as an alternative catalytic system, different process options are evaluated with regard to their environmental impact to identify bottlenecks and improvement potentials for further process development activities. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Chen, Yantian; Bloemen, Veerle; Impens, Saartje; Moesen, Maarten; Luyten, Frank P; Schrooten, Jan
2011-12-01
Cell seeding into scaffolds plays a crucial role in the development of efficient bone tissue engineering constructs. Hence, it becomes imperative to identify the key factors that quantitatively predict reproducible and efficient seeding protocols. In this study, the optimization of a cell seeding process was investigated using design of experiments (DOE) statistical methods. Five seeding factors (cell type, scaffold type, seeding volume, seeding density, and seeding time) were selected and investigated by means of two response parameters, critically related to the cell seeding process: cell seeding efficiency (CSE) and cell-specific viability (CSV). In addition, cell spatial distribution (CSD) was analyzed by Live/Dead staining assays. Analysis identified a number of statistically significant main factor effects and interactions. Among the five seeding factors, only seeding volume and seeding time significantly affected CSE and CSV. Also, cell and scaffold type were involved in the interactions with other seeding factors. Within the investigated ranges, optimal conditions in terms of CSV and CSD were obtained when seeding cells in a regular scaffold with an excess of medium. The results of this case study contribute to a better understanding and definition of optimal process parameters for cell seeding. A DOE strategy can identify and optimize critical process variables to reduce the variability and assists in determining which variables should be carefully controlled during good manufacturing practice production to enable a clinically relevant implant.
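The main effects reported by such a DOE analysis are simple contrasts: the mean response at a factor's high level minus the mean at its low level. A minimal sketch using a hypothetical two-level factorial on the two significant factors (seeding volume and seeding time), with invented efficiency values:

```python
import itertools
import numpy as np

# Hypothetical two-level factorial on the two factors the study found
# significant (seeding volume, seeding time), coded -1/+1 per run.
runs = np.array(list(itertools.product([-1, 1], repeat=2)), float)

# Invented cell seeding efficiency (%) for runs (-,-), (-,+), (+,-), (+,+).
cse = np.array([62.0, 70.0, 75.0, 85.0])

def main_effect(factor_col, response):
    """Average response at the high level minus at the low level."""
    return response[factor_col > 0].mean() - response[factor_col < 0].mean()

effect_volume = main_effect(runs[:, 0], cse)
effect_time = main_effect(runs[:, 1], cse)
print(effect_volume, effect_time)
```

A factor whose effect is large relative to run-to-run noise would be flagged as statistically significant in the full analysis.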
Luque-Oliveros, Manuel; Garcia-Carpintero, Maria Angeles; Cauli, Omar
2017-01-01
Patients undergoing cardiac surgery with extracorporeal circulation (ECC) frequently present haemorrhages as a complication associated with high morbidity and mortality. One of the factors that influences this risk is the volume of blood infused during surgery. The objective of this study was to determine the optimal volume of autologous blood that can be processed during cardiac surgery with ECC. We also determined the number of salvaged red blood cells to be reinfused into the patient in order to minimize the risk of haemorrhage in the postoperative period. This was an observational retrospective cross-sectional study performed in 162 ECC cardiac surgery patients. Data regarding the sociodemographic profiles of the patients, their pathologies and surgical treatments, and the blood volume recovered, processed, and reinfused after cell salvage were collected. We also evaluated the occurrence of postoperative haemorrhage. The volume of blood infused after cell salvage had a statistically significant effect (p < 0.01) on the risk of post-operative haemorrhage; the receiver operating characteristic sensitivity was 0.813 and the optimal blood volume cut-off was 1800 ml. The best clinical outcome (16.7% of patients presenting haemorrhages) was in patients that had received less than 1800 ml of recovered and processed autologous blood, which represented a volume of up to 580 ml reinfused red blood cells. The optimum thresholds for autologous processed blood and red blood cells reinfused into the patient were 1800 and 580 ml, respectively. Increasing these thresholds augmented the risk of haemorrhage as an immediate postoperative period complication. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Robust design of microchannel cooler
NASA Astrophysics Data System (ADS)
He, Ye; Yang, Tao; Hu, Li; Li, Leimin
2005-12-01
Microchannel coolers offer a new method for cooling high power diode lasers, with the advantages of small volume, high thermal dissipation efficiency and low cost when mass-produced. In order to reduce the sensitivity of the design to manufacturing errors and other disturbances, the Taguchi method, a robust design technique, was chosen to optimize three parameters important to the cooling performance of a roof-like microchannel cooler. The hydromechanical and thermal mathematical model of the varying-section microchannel was solved with the finite volume method in FLUENT. A special program was written to automate the design process and improve efficiency. The optimal design presented compromises between cooling performance and robustness. This design method proves to be effective.
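The core quantity in a Taguchi analysis is the signal-to-noise ratio, which scores each parameter setting on both mean performance and insensitivity to disturbances. A minimal sketch, assuming a "larger is better" response; the heat-removal numbers and settings are invented, not from the paper:

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi signal-to-noise ratio (dB) for a 'larger is better'
    response; it rewards a high mean AND low variability."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Hypothetical replicated results (W of heat removed) for two settings
# of one cooler parameter; the numbers are illustrative only.
setting_a = [98.0, 102.0, 100.0]   # consistent performance
setting_b = [110.0, 70.0, 120.0]   # higher peak but much noisier

sn_a = sn_larger_is_better(setting_a)
sn_b = sn_larger_is_better(setting_b)
# Robust design picks the setting with the higher S/N ratio: here the
# consistent setting wins despite setting_b's higher best-case value.
print(sn_a > sn_b)
```

This penalization of variability is what distinguishes robust design from optimizing the mean response alone.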
Dry Volume Fracturing Simulation of Shale Gas Reservoir
NASA Astrophysics Data System (ADS)
Xu, Guixi; Wang, Shuzhong; Luo, Xiangrong; Jing, Zefeng
2017-11-01
Application of CO2 dry fracturing technology to shale gas reservoir development in China has the advantages of no water consumption, little reservoir damage and promotion of CH4 desorption. This paper uses Meyer simulation to study the extension of complex fracture networks and their distribution characteristics in shale gas reservoirs during the CO2 dry volume fracturing process. The simulation results prove the validity of the modified CO2 dry fracturing fluid used in shale volume fracturing and provide a theoretical basis for subsequent studies on interval optimization of dry volume fracturing in shale reservoirs.
Optimization of squalene produced from crude palm oil waste
NASA Astrophysics Data System (ADS)
Wandira, Irda; Legowo, Evita H.; Widiputri, Diah I.
2017-01-01
Squalene is a hydrocarbon originally, and still mostly, extracted from shark liver oil. Due to environmental concerns over shark hunting, there have been efforts to extract squalene from alternative sources, such as Palm Fatty Acid Distillate (PFAD), one of the wastes of crude palm oil (CPO) processing. Previous research has shown that squalene can be extracted from PFAD using a saponification process followed by liquid-liquid extraction, although the method had yet to be optimized to maximize the amount of squalene extracted from PFAD. The optimization addressed both steps of the squalene extraction method: saponification and liquid-liquid extraction (LLE). The factors studied in the saponification optimization were KOH concentration and saponification duration, while in the LLE optimization the factors were the volumes of distilled water and dichloromethane. The optimum percentage of squalene content in the extract (24.08%) was achieved by saponifying the PFAD with 50% w/v KOH for 60 minutes and subjecting the saponified PFAD to LLE with 100 ml of distilled water and three additions of fresh dichloromethane, 75 ml each; these conditions define the optimum squalene extraction method.
Kessel, Kerstin A; Habermehl, Daniel; Jäger, Andreas; Floca, Ralf O; Zhang, Lanlan; Bendl, Rolf; Debus, Jürgen; Combs, Stephanie E
2013-06-07
In radiation oncology recurrence analysis is an important part in the evaluation process and clinical quality assurance of treatment concepts. With the example of 9 patients with locally advanced pancreatic cancer we developed and validated interactive analysis tools to support the evaluation workflow. After an automatic registration of the radiation planning CTs with the follow-up images, the recurrence volumes are segmented manually. Based on these volumes the DVH (dose volume histogram) statistic is calculated, followed by the determination of the dose applied to the region of recurrence and the distance between the boost and recurrence volume. We calculated the percentage of the recurrence volume within the 80%-isodose volume and compared it to the location of the recurrence within the boost volume, boost + 1 cm, boost + 1.5 cm and boost + 2 cm volumes. Recurrence analysis of 9 patients demonstrated that all recurrences except one occurred within the defined GTV/boost volume; one recurrence developed beyond the field border/outfield. With the defined distance volumes in relation to the recurrences, we could show that 7 recurrent lesions were within the 2 cm radius of the primary tumor. Two large recurrences extended beyond the 2 cm, however, this might be due to very rapid growth and/or late detection of the tumor progression. The main goal of using automatic analysis tools is to reduce time and effort conducting clinical analyses. We showed a first approach and use of a semi-automated workflow for recurrence analysis, which will be continuously optimized. In conclusion, despite the limitations of the automatic calculations we contributed to in-house optimization of subsequent study concepts based on an improved and validated target volume definition.
Optimisation des proprietes physiques d'un composite carbone epoxy fabrique par le procede RFI
NASA Astrophysics Data System (ADS)
Koanda, Mahamat Mamadou Lamine
The RFI (Resin Film Infusion) process is a composite materials manufacturing process. Known especially for the small investment it requires, RFI processes are more and more widely used in the aeronautical industry. However, a number of aspects of this process are still not well controlled. The quality of the final part depends on which process is used. In the case of RFI, controlling physical characteristics such as thickness, fiber volume fraction and void content remains a major challenge. This dissertation deals with the optimization of the physical properties of a carbon composite manufactured with the RFI process. The ASTM D3171 and ASTM D792 standards were used to measure the void content and fiber volume fraction. First, we introduced different layup sequences in the RFI process and evaluated their impact on the physical properties of the final product. The experiments show that the primary mode A, with the resin film at the bottom, results in much better quality, with controlled fiber volume fraction and void content. Mode B (film in the symmetry plane) yields results identical to mode A except for more irregular thicknesses. Mode C (film distributed symmetrically in the laminate) produces locally unacceptable void contents. Mode D (resin film on top of the laminate) yields much better results than mode A, with the exception of more irregular thicknesses. Making gaps and overlaps with the resin film has negative effects beyond 2.54
Linear energy transfer incorporated intensity modulated proton therapy optimization
NASA Astrophysics Data System (ADS)
Cao, Wenhua; Khabazian, Azin; Yepes, Pablo P.; Lim, Gino; Poenisch, Falk; Grosshans, David R.; Mohan, Radhe
2018-01-01
The purpose of this study was to investigate the feasibility of incorporating linear energy transfer (LET) into the optimization of intensity modulated proton therapy (IMPT) plans. Because increased LET correlates with increased biological effectiveness of protons, high LETs in target volumes and low LETs in critical structures and normal tissues are preferred in an IMPT plan. However, if not explicitly incorporated into the optimization criteria, different IMPT plans may yield similar physical dose distributions but greatly different LET, specifically dose-averaged LET, distributions. Conventionally, the IMPT optimization criteria (or cost function) only includes dose-based objectives in which the relative biological effectiveness (RBE) is assumed to have a constant value of 1.1. In this study, we added LET-based objectives for maximizing LET in target volumes and minimizing LET in critical structures and normal tissues. Due to the fractional programming nature of the resulting model, we used a variable reformulation approach so that the optimization process is computationally equivalent to conventional IMPT optimization. In this study, five brain tumor patients who had been treated with proton therapy at our institution were selected. Two plans were created for each patient based on the proposed LET-incorporated optimization (LETOpt) and the conventional dose-based optimization (DoseOpt). The optimized plans were compared in terms of both dose (assuming a constant RBE of 1.1 as adopted in clinical practice) and LET. Both optimization approaches were able to generate comparable dose distributions. The LET-incorporated optimization achieved not only pronounced reduction of LET values in critical organs, such as brainstem and optic chiasm, but also increased LET in target volumes, compared to the conventional dose-based optimization. However, on occasion, there was a need to tradeoff the acceptability of dose and LET distributions. 
Our conclusion is that the inclusion of LET-dependent criteria in the IMPT optimization could lead to similar dose distributions as the conventional optimization but superior LET distributions in target volumes and normal tissues. This may have substantial advantages in improving tumor control and reducing normal tissue toxicities.
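The idea of adding LET-based objectives alongside dose-based ones can be sketched as a single weighted least-squares problem. The paper's actual model uses fractional programming with a variable reformulation; everything below, influence matrices included, is an invented toy, not clinical data:

```python
import numpy as np

# Toy influence matrices: dose and LET-weighted contribution of each of
# 2 beamlets to each of 3 target voxels (hypothetical numbers).
D = np.array([[1.0, 0.2],
              [0.3, 1.0],
              [0.5, 0.5]])
L = np.array([[2.0, 8.0],
              [8.0, 2.0],
              [5.0, 5.0]])

d_target = np.array([2.0, 2.0, 2.0])  # prescribed dose per voxel
l_target = np.array([9.0, 9.0, 9.0])  # desired (high) LET in the target
w_let = 0.1                           # relative weight of the LET objective

# Stack both objectives into one least-squares problem:
#   minimize ||D x - d||^2 + w_let * ||L x - l||^2
A = np.vstack([D, np.sqrt(w_let) * L])
b = np.concatenate([d_target, np.sqrt(w_let) * l_target])
x, *_ = np.linalg.lstsq(A, b, rcond=None)
x = np.clip(x, 0.0, None)  # beamlet weights must be non-negative
print(x, D @ x)
```

Raising `w_let` trades dose conformity for higher target LET, which mirrors the dose/LET tradeoff the abstract notes was occasionally necessary.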
Hurtado, F J; Kaiser, A S; Zamora, B
2015-03-15
Continuous stirred tank reactors (CSTR) are widely used in wastewater treatment plants to reduce the organic matter and microorganism present in sludge by anaerobic digestion. The present study carries out a numerical analysis of the fluid dynamic behaviour of a CSTR in order to optimize the process energetically. The characterization of the sludge flow inside the digester tank, the residence time distribution and the active volume of the reactor under different criteria are determined. The effects of design and power of the mixing system on the active volume of the CSTR are analyzed. The numerical model is solved under non-steady conditions by examining the evolution of the flow during the stop and restart of the mixing system. An intermittent regime of the mixing system, which kept the active volume between 94% and 99%, is achieved. The results obtained can lead to the eventual energy optimization of the mixing system of the CSTR. Copyright © 2014 Elsevier Ltd. All rights reserved.
Numerical simulation and optimization of casting process for complex pump
NASA Astrophysics Data System (ADS)
Liu, Xueqin; Dong, Anping; Wang, Donghong; Lu, Yanling; Zhu, Guoliang
2017-09-01
The complex shape of the casting pump body has large complicated structure and uniform wall thickness, which easy give rise to casting defects. The numerical simulation software ProCAST is used to simulate the initial top gating process, after analysis of the material and structure characteristics of the high-pressure pump. The filling process was overall smooth, not there the water shortage phenomenon. But the circular shrinkage defects appear at the bottom of casting during solidification process. Then, the casting parameters were optimized and adding cold iron in the bottom. The shrinkage weight was reduced from 0.00167g to 0.0005g. The porosity volume was reduced from 1.39cm3 to 0.41cm3. The optimization scheme is simulated and actual experimented. The defect has been significantly improved.
Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang
2012-10-21
A new heuristic algorithm based on the so-called geometric distance sorting technique is proposed for solving fluence map optimization with dose-volume constraints, one of the most essential tasks of inverse planning in IMRT. The framework of the proposed method is basically an iterative process which begins with a simple linearly constrained quadratic optimization model without considering any dose-volume constraints; dose constraints for the voxels violating the dose-volume constraints are then gradually added to the quadratic optimization model, step by step, until all the dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linearly constrained quadratic program. For choosing the proper candidate voxels for the current round of constraint adding, a so-called geometric distance, defined in the transformed standard quadratic form of the fluence map optimization model, is used to guide the selection of the voxels. The new geometric distance sorting technique can mostly reduce the unexpected increase of the objective function value caused, inevitably, by constraint adding. It can be regarded as an upgrade of the traditional dose sorting technique. A geometric explanation of the proposed method is also given, and a proposition is proved to support our heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure stable iteration convergence. The new algorithm is tested on four cases (head-and-neck, prostate, lung and oropharyngeal) and compared with the algorithm based on the traditional dose sorting technique. Experimental results showed that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes.
It is a more efficient optimization technique to some extent for choosing constraints than the dose sorting method. By integrating a smart constraint adding/deleting scheme within the iteration framework, the new technique builds up an improved algorithm for solving the fluence map optimization with dose-volume constraints.
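The voxel-selection step at the heart of such constraint-adding iterations can be shown in isolation. The doses and the simple dose-based ranking below are invented for illustration; the paper's contribution is replacing that ranking with its geometric-distance sorting:

```python
import numpy as np

# Hypothetical OAR doses (Gy) from an intermediate fluence-map solution.
dose = np.array([0.6, 1.4, 1.1, 0.9, 1.8, 1.2])
d_max = 1.0         # voxel dose cap in the dose-volume constraint
frac_allowed = 1/3  # at most this fraction of voxels may exceed d_max

n_allowed = round(frac_allowed * len(dose))  # voxels permitted above d_max
violating = np.where(dose > d_max)[0]        # candidates for new constraints

# Number of voxels that must be pulled under d_max by the next QP solve.
n_to_constrain = max(0, len(violating) - n_allowed)

# Traditional dose sorting: constrain the least-violating voxels first
# (cheapest to fix), leaving the allowed fraction to the hottest voxels.
order = violating[np.argsort(dose[violating])]
new_constraints = sorted(order[:n_to_constrain].tolist())
print(n_to_constrain, new_constraints)
```

In the full algorithm, each selection round is followed by an interior-point QP solve, and the loop repeats until the dose-volume constraint is met.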
Sizing a rainwater harvesting cistern by minimizing costs
NASA Astrophysics Data System (ADS)
Pelak, Norman; Porporato, Amilcare
2016-10-01
Rainwater harvesting (RWH) has the potential to reduce water-related costs by providing an alternate source of water, in addition to relieving pressure on public water sources and reducing stormwater runoff. Existing methods for determining the optimal size of the cistern component of a RWH system have various drawbacks, such as specificity to a particular region, dependence on numerical optimization, and/or failure to consider the costs of the system. In this paper a formulation is developed for the optimal cistern volume which incorporates the fixed and distributed costs of a RWH system while also taking into account the random nature of the depth and timing of rainfall, with a focus on RWH to supply domestic, nonpotable uses. With rainfall inputs modeled as a marked Poisson process, and by comparing the costs associated with building a cistern with the costs of externally supplied water, an expression for the optimal cistern volume is found which minimizes the water-related costs. The volume is a function of the roof area, water use rate, climate parameters, and costs of the cistern and of the external water source. This analytically tractable expression makes clear the dependence of the optimal volume on the input parameters. An analysis of the rainfall partitioning also characterizes the efficiency of a particular RWH system configuration and its potential for runoff reduction. The results are compared to the RWH system at the Duke Smart Home in Durham, NC, USA to show how the method could be used in practice.
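The paper's analytic idea, trading a cistern's fixed and per-volume costs against the expected cost of external water under stochastic rainfall, can be caricatured numerically. The cost model and every parameter below are illustrative assumptions, not the paper's expressions:

```python
import numpy as np

c_fix = 500.0    # fixed system cost, annualized ($/yr)        (assumed)
c_vol = 0.05     # distributed cistern cost ($/L/yr)           (assumed)
demand = 200.0   # non-potable water demand (L/day)            (assumed)
price = 0.004    # price of externally supplied water ($/L)    (assumed)
S = 2000.0       # storage scale over which capture saturates (L)

def annual_cost(V):
    """Cistern cost plus expected cost of external make-up water. The
    exponential unmet-demand term is a crude stand-in for the paper's
    marked Poisson rainfall analysis (illustrative shape only)."""
    unmet = np.exp(-V / S)            # fraction of demand left unmet
    return c_fix + c_vol * V + 365.0 * demand * price * unmet

V_grid = np.linspace(0.0, 6000.0, 6001)       # candidate volumes (L)
V_opt = V_grid[np.argmin(annual_cost(V_grid))]
# For this cost shape the optimum is analytic:
#   V* = S * ln(365 * demand * price / (c_vol * S))
print(V_opt)
```

As in the paper, the optimum shifts with roof area, demand, climate and prices; here those dependencies enter through `S`, `demand` and the cost coefficients.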
Simultaneous optimization method for absorption spectroscopy postprocessing.
Simms, Jean M; An, Xinliang; Brittelle, Mack S; Ramesh, Varun; Ghandhi, Jaal B; Sanders, Scott T
2015-05-10
A simultaneous optimization method is proposed for absorption spectroscopy postprocessing. This method is particularly useful for thermometry measurements based on congested spectra, as commonly encountered in combustion applications of H2O absorption spectroscopy. A comparison test demonstrated that the simultaneous optimization method had greater accuracy, greater precision, and was more user-independent than the common step-wise postprocessing method previously used by the authors. The simultaneous optimization method was also used to process experimental data from an environmental chamber and a constant volume combustion chamber, producing results with errors on the order of only 1%.
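The simultaneous approach can be contrasted with step-wise fitting in a toy two-line thermometry problem: for each trial temperature the best-fit concentration has a closed form, and the (temperature, concentration) pair minimizing the joint residual over both lines is chosen in one pass. The Boltzmann-style line model and all numbers are illustrative assumptions, not the authors' H2O spectral model:

```python
import numpy as np

E = np.array([100.0, 1000.0])          # lower-state energies (arb. units)
strengths = lambda T: np.exp(-E / T)   # Boltzmann-like line strengths

T_true, n_true = 800.0, 2.0
meas = n_true * strengths(T_true)      # noiseless synthetic absorbances

def fit(measured):
    """Simultaneous fit: scan T; the least-squares concentration given T
    is closed-form, and both lines contribute to one joint residual."""
    best = (np.inf, None, None)
    for T in np.linspace(300.0, 1500.0, 1201):
        S = strengths(T)
        n = S @ measured / (S @ S)              # best n for this T
        r = np.sum((n * S - measured) ** 2)
        if r < best[0]:
            best = (r, T, n)
    return best[1], best[2]

T_fit, n_fit = fit(meas)
print(T_fit, n_fit)
```

With noisy, congested spectra, fitting all parameters against the joint residual is what gives the simultaneous method its accuracy and user-independence advantage over step-wise processing.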
Abdominal fat volume estimation by stereology on CT: a comparison with manual planimetry.
Manios, G E; Mazonakis, M; Voulgaris, C; Karantanas, A; Damilakis, J
2016-03-01
To deploy and evaluate a stereological point-counting technique on abdominal CT for the estimation of visceral (VAF) and subcutaneous abdominal fat (SAF) volumes. Stereological volume estimations based on point counting and systematic sampling were performed on images from 14 consecutive patients who had undergone abdominal CT. For the optimization of the method, five sampling intensities in combination with 100 and 200 points were tested. The optimum stereological measurements were compared with VAF and SAF volumes derived by the standard technique of manual planimetry on the same scans. Optimization analysis showed that the selection of 200 points along with the sampling intensity 1/8 provided efficient volume estimations in less than 4 min for VAF and SAF together. The optimized stereology showed strong correlation with planimetry (VAF: r = 0.98; SAF: r = 0.98). No statistical differences were found between the two methods (VAF: P = 0.81; SAF: P = 0.83). The 95% limits of agreement were also acceptable (VAF: -16.5%, 16.1%; SAF: -10.8%, 10.7%) and the repeatability of stereology was good (VAF: CV = 4.5%, SAF: CV = 3.2%). Stereology may be successfully applied to CT images for the efficient estimation of abdominal fat volume and may constitute a good alternative to the conventional planimetric technique. Abdominal obesity is associated with increased risk of disease and mortality. Stereology may quantify visceral and subcutaneous abdominal fat accurately and consistently. The application of stereology to estimating abdominal volume fat reduces processing time. Stereology is an efficient alternative method for estimating abdominal fat volume.
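The point-counting estimate itself is the Cavalieri formula: volume is approximately (number of points hitting the region) times (area represented by each grid point) times (slice spacing). A minimal sketch with invented counts, not the study's data:

```python
# Stereological (Cavalieri) volume estimate from point counting on CT.
# All numbers below are hypothetical, for illustration only.

area_per_point = 0.5   # cm^2 of image area represented by each grid point
slice_spacing = 0.8    # cm between sampled CT slices (sampling intensity)
points_hit = [41, 55, 62, 48, 37]  # fat-region hits on 5 sampled slices

# V ~= sum(points) * area per point * slice spacing
volume_cm3 = sum(points_hit) * area_per_point * slice_spacing
print(volume_cm3)
```

Denser grids and more slices reduce the estimation error at the cost of counting time, which is exactly the efficiency tradeoff the optimization in the study addresses.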
Applications of colored petri net and genetic algorithms to cluster tool scheduling
NASA Astrophysics Data System (ADS)
Liu, Tung-Kuan; Kuo, Chih-Jen; Hsiao, Yung-Chin; Tsai, Jinn-Tsong; Chou, Jyh-Horng
2005-12-01
In this paper, we propose a method which uses Coloured Petri Nets (CPN) and genetic algorithms (GA) to obtain an optimal deadlock-free schedule and to solve the re-entrant problem for the flexible process of a cluster tool. The processes of a cluster tool for producing a wafer can usually be classified into three types: 1) sequential process, 2) parallel process, and 3) sequential-parallel process. However, these processes are not economical enough for producing a variety of wafers in small volumes. This paper therefore proposes the flexible process, in which the operations for fabricating wafers are arranged freely to achieve the best utilization of the cluster tool. The flexible process, however, may have deadlock and re-entrant problems, which can be detected by CPN. On the other hand, GAs have been applied to find optimal schedules for many types of manufacturing processes. We therefore integrate CPN and GAs to obtain an optimal schedule that resolves the deadlock and re-entrant problems for the flexible process of the cluster tool.
Ghatnur, Shashidhar M.; Parvatam, Giridhar; Balaraman, Manohar
2015-01-01
Background: Cordyceps sinensis (CS) is a traditional Chinese medicine that contains potent active metabolites such as nucleosides and polysaccharides. The submerged cultivation technique is studied for the large-scale production of CS biomass and metabolites. Objective: To optimize culture conditions for large-scale production of CS1197 biomass and metabolites. Materials and Methods: The CS1197 strain of CS was isolated from dead larvae of natural CS, and its authenticity was confirmed by the presence of the two major markers adenosine and cordycepin using high performance liquid chromatography and mass spectrometry. A three-level Box-Behnken design was employed to optimize the process parameters culturing temperature, pH, and inoculum volume for biomass yield, adenosine and cordycepin. The experimental results were regressed to a second-order polynomial equation by multiple regression analysis for the prediction of biomass yield, adenosine and cordycepin production. Multiple responses were optimized based on the desirability function method. Results: The desirability function suggested the process conditions of temperature 28°C, pH 7 and inoculum volume 10% for optimal production of nutraceuticals in the biomass. Water extracts from dried CS1197 mycelia showed good inhibition of 2,2-diphenyl-1-picrylhydrazyl (DPPH) and 2,2'-azinobis-(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) free radicals. Conclusion: The results suggest that a response surface methodology-desirability function coupled approach can successfully optimize the culture conditions for CS1197. Summary: The CS1197 strain was authenticated by the presence of adenosine and cordycepin, and the culturing period was determined to be 14 days. The content of nucleosides in natural CS was found to be higher than in cultured CS1197 mycelium. A Box-Behnken design was used to optimize the critical cultural conditions: temperature, pH and inoculum volume. The water extract showed good antioxidant activity, proving a credible source of natural antioxidants. 
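The desirability-function step combines the fitted responses into a single score to maximize. A minimal sketch, assuming linear "larger is better" desirabilities; the response values, units and ranges are invented, not the study's data:

```python
import numpy as np

def desirability_max(y, lo, hi):
    """Derringer-type 'larger is better' desirability: 0 below lo,
    1 above hi, linear ramp in between."""
    return float(np.clip((y - lo) / (hi - lo), 0.0, 1.0))

# Hypothetical predicted responses at one candidate culture condition
# (e.g. 28 degrees C, pH 7, 10% inoculum); ranges are assumptions.
d_biomass    = desirability_max(12.0, lo=5.0, hi=15.0)  # biomass (g/L)
d_adenosine  = desirability_max(0.90, lo=0.2, hi=1.0)   # adenosine (mg/g)
d_cordycepin = desirability_max(0.45, lo=0.1, hi=0.5)   # cordycepin (mg/g)

# Composite desirability is the geometric mean, so one poor response
# drags the overall score down; RSM maximizes D over the factor space.
D = (d_biomass * d_adenosine * d_cordycepin) ** (1 / 3)
print(round(D, 3))
```

The geometric mean is the usual choice precisely because a single zero desirability (an unacceptable response) makes the whole condition unacceptable.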
PMID:26929580
Development of Parametric Mass and Volume Models for an Aerospace SOFC/Gas Turbine Hybrid System
NASA Technical Reports Server (NTRS)
Tornabene, Robert; Wang, Xiao-yen; Steffen, Christopher J., Jr.; Freeh, Joshua E.
2005-01-01
In aerospace power systems, mass and volume are key considerations to produce a viable design. The utilization of fuel cells is being studied for a commercial aircraft electrical power unit. Based on preliminary analyses, a SOFC/gas turbine system may be a potential solution. This paper describes the parametric mass and volume models that are used to assess an aerospace hybrid system design. The design tool utilizes input from the thermodynamic system model and produces component sizing, performance, and mass estimates. The software is designed such that the thermodynamic model is linked to the mass and volume model to provide immediate feedback during the design process. It allows for automating an optimization process that accounts for mass and volume in its figure of merit. Each component in the system is modeled with a combination of theoretical and empirical approaches. A description of the assumptions and design analyses is presented.
Optimization of extraction of high purity all-trans-lycopene from tomato pulp waste.
Poojary, Mahesha M; Passamonti, Paolo
2015-12-01
The aim of this work was to optimize the extraction of pure all-trans-lycopene from the pulp fractions of tomato processing waste. A full factorial design (FFD) consisting of four independent variables, extraction temperature (30-50 °C), time (1-60 min), percentage of acetone in n-hexane (25-75%, v/v) and solvent volume (10-30 ml), was used to investigate the effects of process variables on the extraction. The absolute amount of lycopene present in the pulp waste was found to be 0.038 mg/g. The optimal conditions for extraction were as follows: extraction temperature 20 °C, time 40 min, a solvent composition of 25% acetone in n-hexane (v/v) and solvent volume 40 ml. Under these conditions, the maximal recovery of lycopene was 94.7%. The HPLC-DAD analysis demonstrated that lycopene was obtained in the all-trans configuration at a very high purity grade of 98.3%, while the amounts of cis-isomers and other carotenoids were limited. Copyright © 2015 Elsevier Ltd. All rights reserved.
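A full factorial design like the one described is enumerated mechanically as the cross product of the factor levels. The sketch below mirrors the abstract's four variables but assumes a two-level design for illustration; the actual study may have used more levels per factor:

```python
from itertools import product

# Hypothetical two-level full factorial over the four extraction variables
factors = {
    "temperature_C": (30, 50),
    "time_min": (1, 60),
    "acetone_pct": (25, 75),
    "solvent_ml": (10, 30),
}

# Every combination of one level per factor: 2^4 = 16 experimental runs
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
```

Each dict in `runs` is one experiment to perform; the measured lycopene recovery for each run is then regressed against the factors to locate the optimum.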
Design optimization of space structures
NASA Technical Reports Server (NTRS)
Felippa, Carlos
1991-01-01
The topology-shape-size optimization of space structures is investigated through Kikuchi's homogenization method. The method starts from a 'design domain block,' which is a region of space into which the structure is to materialize. This domain is initially filled with a finite element mesh, typically regular. Force and displacement boundary conditions corresponding to applied loads and supports are applied at specific points in the domain. An optimal structure is to be 'carved out' of the design under two conditions: (1) a cost function is to be minimized, and (2) equality or inequality constraints are to be satisfied. The 'carving' process is accomplished by letting microstructure holes develop and grow in elements during the optimization process. These holes have a rectangular shape in two dimensions and a cubical shape in three dimensions, and may also rotate with respect to the reference axes. The properties of the perforated element are obtained through a homogenization procedure. Once a hole reaches the volume of the element, that element effectively disappears. The project has two phases. In the first phase the method was implemented as the combination of two computer programs: a finite element module and an optimization driver. In the second phase, the focus is on the application of this technique to planetary structures. The finite element part of the method was programmed for the two-dimensional case using four-node quadrilateral elements to cover the design domain. An element homogenization technique different from that of Kikuchi and coworkers was implemented. The optimization driver is based on an augmented Lagrangian optimizer, with the volume constraint treated as a Courant penalty function. The optimizer has to be especially tuned to this type of optimization because the number of design variables can reach into the thousands. The driver is presently under development.
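Treating a constraint as a Courant (quadratic) penalty, as the abstract describes for the volume constraint, can be illustrated on a toy problem; the objective, constraint, step size, and penalty weight below are invented for the sketch, not taken from the project:

```python
def courant_penalty(f, g, mu):
    """Quadratic (Courant) penalty turning 'min f(x) s.t. g(x) = 0'
    into the unconstrained problem 'min f(x) + (mu/2) g(x)^2'."""
    return lambda x: f(x) + 0.5 * mu * g(x) ** 2

# Toy problem: minimize x^2 subject to x = 1, so g(x) = x - 1
phi = courant_penalty(lambda x: x * x, lambda x: x - 1.0, mu=100.0)

# Crude gradient descent with a numerical gradient; as mu grows,
# the unconstrained minimizer approaches the constrained one (x = 1)
x = 0.0
for _ in range(2000):
    grad = (phi(x + 1e-6) - phi(x - 1e-6)) / 2e-6
    x -= 0.01 * grad
```

For this quadratic toy the penalized minimizer is mu / (2 + mu), slightly short of the exact constrained solution, which is the characteristic bias of a finite penalty weight.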
Dong, T T X; Zhao, K J; Huang, W Z; Leung, K W; Tsim, K W K
2005-08-01
The root of Panax notoginseng (Radix Notoginseng, Sanqi) is a commonly used traditional Chinese medicine, which is mainly cultivated in Wenshan of Yunnan, China. The identified active constituents in Radix Notoginseng include saponin, flavonoid and polysaccharide; however, the levels of these active constituents vary greatly with different extraction processes. This variation causes a serious problem in standardizing the herbal extract. By using HPLC and spectrophotometry, the contents of notoginsenoside R(1), ginsenoside R(g1), R(b1), R(d), and flavonoids were determined in the extracts of Radix Notoginseng that were derived from different processes of extraction according to an orthogonal array experimental design having three variable parameters: nature of extraction solvent, extraction volume and extraction time. The nature of extraction solvent and extraction volume were two distinct factors in obtaining those active constituents, while the time of extraction was a subordinate factor. The optimized condition of extraction therefore is considered to be 20 volumes of water and extracted for 24 h. In good agreement with the amount of active constituents, the activity of anti-platelet aggregation was found to be the highest in the extract that contained a better yield of the active constituents. The current results provide an optimized extraction method for the quality control of Radix Notoginseng. Copyright (c) 2005 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Chen, Congjin; Li, Xin; Tong, Zhangfa; Li, Yue; Li, Mingfei
2014-10-01
Granular fir-based activated carbon (GFAC) was modified with H2O2, and an orthogonal array experimental design method was used to optimize the process. The properties of the original and modified GFAC were characterized by N2 adsorption-desorption isotherms, the Brunauer-Emmett-Teller (BET) equation, the Barrett-Joyner-Halenda (BJH) equation, field emission scanning electron microscopy (FESEM), and Fourier transform infrared spectroscopy (FT-IR) analysis, etc. When 10.00 g of GFAC with a particle size of 0.25-0.85 mm was modified by 150.0 ml of aqueous H2O2 solution, the optimized conditions were found to be as follows: aqueous H2O2 solution concentration 1.0 mol·l-1, modification temperature 30.0 °C, modification time 4.0 h. Modified under the optimized conditions, the decolorization of caramel, methylene blue adsorption, phenol adsorption and iodine number of the modified GFAC increased by 500.0%, 59.7%, 32.5%, and 15.1%, respectively. The original and optimally modified GFAC exhibited hybrid Type I-IV adsorption isotherms with H4 hysteresis. The BET surface area, micropore area, total pore volume, micropore volume, and microporosity of the modified GFAC increased by 7.33%, 11.25%, 3.89%, 14.23%, and 9.91%, respectively, whereas the average pore width decreased by 3.16%. In addition, the amount of surface oxygen groups (such as carbonyl or carboxyl) increased in the modified GFAC.
NASA Technical Reports Server (NTRS)
Welstead, Jason
2014-01-01
This research focused on incorporating stability and control into a multidisciplinary design optimization on a Boeing 737-class advanced concept called the D8.2b. A new method of evaluating the aircraft handling performance using quantitative evaluation of the system response to disturbances, including perturbations, continuous turbulence, and discrete gusts, is presented. A multidisciplinary design optimization was performed using the D8.2b transport aircraft concept. The configuration was optimized for minimum fuel burn using a design range of 3,000 nautical miles. Optimization cases were run using fixed tail volume coefficients, static trim constraints, and static trim and dynamic response constraints. A Cessna 182T model was used to test the various dynamic analysis components, ensuring the analysis was behaving as expected. Results of the optimizations show that including stability and control in the design process drastically alters the optimal design, indicating that stability and control should be included in conceptual design to avoid system level penalties later in the design process.
Modeling and Analysis of Power Processing Systems (MAPPS). Volume 1: Technical report
NASA Technical Reports Server (NTRS)
Lee, F. C.; Rahman, S.; Carter, R. A.; Wu, C. H.; Yu, Y.; Chang, R.
1980-01-01
Computer aided design and analysis techniques were applied to power processing equipment. Topics covered include: (1) discrete time domain analysis of switching regulators for performance analysis; (2) design optimization of power converters using augmented Lagrangian penalty function technique; (3) investigation of current-injected multiloop controlled switching regulators; and (4) application of optimization for Navy VSTOL energy power system. The generation of the mathematical models and the development and application of computer aided design techniques to solve the different mathematical models are discussed. Recommendations are made for future work that would enhance the application of the computer aided design techniques for power processing systems.
Zhu, Bo; Liu, Jianli; Gao, Weidong
2017-09-01
This paper reports on the process optimization of ultrasonic-assisted alcoholic-alkaline treatment to prepare granular cold water swelling (GCWS) starches. In this work, three statistical approaches, Plackett-Burman design, steepest-ascent path analysis and Box-Behnken design, were successfully combined to investigate the effects on cold-water solubility of the major treatment process variables, including starch concentration, ethanol volume fraction, sodium hydroxide dosage, ultrasonic power and treatment time, and the drying operation (that is, vacuum degree and drying time). Results revealed that ethanol volume fraction, sodium hydroxide dosage, applied power and ultrasonic treatment time were significant factors that affected the cold-water solubility of GCWS starches. The maximum cold-water solubility was obtained when treated at 400 W of applied power for 27.38 min. The optimum volume fraction of ethanol and sodium hydroxide dosage were 66.85% and 53.76 mL, respectively. The theoretical values (93.87%) and the observed values (93.87%) were in reasonably good agreement, and the deviation was less than 1%. Verification and repeated trial results indicated that the ultrasound-assisted alcoholic-alkaline treatment could be successfully used for the preparation of granular cold water swelling starches at room temperature and gave an excellent improvement in the cold-water solubility of GCWS starches. Copyright © 2016. Published by Elsevier B.V.
Kim, Yongbok; Modrick, Joseph M.; Pennington, Edward C.
2016-01-01
The objective of this work is to present commissioning procedures to clinically implement a three‐dimensional (3D), image‐based, treatment‐planning system (TPS) for high‐dose‐rate (HDR) brachytherapy (BT) for gynecological (GYN) cancer. The physical dimensions of the GYN applicators and their values in the virtual applicator library differed by less than 0.4 mm from their nominal values. Reconstruction uncertainties of the titanium tandem and ovoids (T&O) were less than 0.4 mm on CT phantom studies and on average between 0.8‐1.0 mm on MRI when compared with X‐rays. In‐house software, HDRCalculator, was developed to check HDR plan parameters such as independently verifying active tandem or cylinder probe length and ovoid or cylinder size, source calibration and treatment date, and differences between average Point A dose and prescription dose. Dose‐volume histograms were validated using another independent TPS. Comprehensive procedures to commission volume optimization algorithms and processes in 3D image‐based planning were presented. For the difference between line and volume optimizations, the average absolute differences as a percentage were 1.4% for total reference air KERMA (TRAK) and 1.1% for Point A dose. Volume optimization consistency tests between versions resulted in average absolute differences of 0.2% for TRAK and 0.9 s (0.2%) for total treatment time. The data revealed that the optimizer should run for at least 1 min in order to avoid more than 0.6% dwell time changes. For clinical GYN T&O cases, three different volume optimization techniques (graphical optimization, pure inverse planning, and hybrid inverse optimization) were investigated by comparing them against a conventional Point A technique. End‐to‐end testing was performed using a T&O phantom to ensure no errors or inconsistencies occurred from imaging through to planning and delivery. 
The proposed commissioning procedures provide a clinically safe implementation technique for 3D image‐based TPS for HDR BT for GYN cancer. PACS number(s): 87.55.D‐ PMID:27074463
EUV process establishment through litho and etch for N7 node
NASA Astrophysics Data System (ADS)
Kuwahara, Yuhei; Kawakami, Shinichiro; Kubota, Minoru; Matsunaga, Koichi; Nafus, Kathleen; Foubert, Philippe; Mao, Ming
2016-03-01
Extreme ultraviolet lithography (EUVL) technology is steadily reaching high volume manufacturing for the 16 nm half-pitch node and beyond. However, some challenges remain, for example scanner availability and resist performance (resolution, CD uniformity (CDU), LWR, etch behavior and so on). Advanced EUV patterning on the ASML NXE:3300/CLEAN TRACK LITHIUS Pro Z-EUV litho cluster is launched at imec, allowing for finer pitch patterns for L/S and CH. Tokyo Electron Ltd. and imec are continuously collaborating to develop manufacturing-quality POR processes for the NXE:3300. TEL's technologies to enhance CDU, defectivity and LWR/LER can improve patterning performance. The patterning is characterized and optimized in both litho and etch for a more complete understanding of the final patterning performance. This paper reports on post-litho CDU improvement by litho process optimization and also post-etch LWR reduction by litho and etch process optimization.
NASA Astrophysics Data System (ADS)
Mori, Kensaku; Suenaga, Yasuhito; Toriwaki, Jun-ichiro
2003-05-01
This paper describes a software-based fast volume rendering (VolR) method on a PC platform using multimedia instructions, such as SIMD instructions, which are currently available in PC CPUs. This method achieves fast rendering speed through highly optimized software rather than an improved rendering algorithm. In volume rendering using a ray casting method, the system requires fast execution of the following processes: (a) interpolation of voxel or color values at sample points, (b) computation of normal vectors (gray-level gradient vectors), (c) calculation of shaded values obtained by dot products of normal vectors and light-source direction vectors, (d) memory access to a huge area, and (e) efficient ray skipping at translucent regions. The proposed software implements these fundamental processes in volume rendering by using special instruction sets for multimedia processing. The proposed software can generate virtual endoscopic images of a 3-D volume of 512x512x489 voxel size by volume rendering with perspective projection, specular reflection, and on-the-fly normal vector computation on a conventional PC without any special hardware at thirteen frames per second. Semi-translucent display is also possible.
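Step (a), interpolating voxel values at sample points along each ray, is conventionally done with trilinear interpolation. A plain NumPy sketch (without the SIMD-level optimization the paper is about) might look like this:

```python
import numpy as np

def trilinear(volume, x, y, z):
    """Trilinearly interpolate a voxel value at a fractional position (x, y, z),
    with the volume indexed as volume[z, y, x]."""
    x0, y0, z0 = int(x), int(y), int(z)
    dx, dy, dz = x - x0, y - y0, z - z0
    c = volume[z0:z0 + 2, y0:y0 + 2, x0:x0 + 2].astype(float)
    # Collapse one axis at a time: x, then y, then z
    c = c[:, :, 0] * (1 - dx) + c[:, :, 1] * dx
    c = c[:, 0] * (1 - dy) + c[:, 1] * dy
    return float(c[0] * (1 - dz) + c[1] * dz)

# Tiny test volume where voxel value = 9z + 3y + x (a linear field)
vol = np.arange(27).reshape(3, 3, 3)
sample = trilinear(vol, 0.5, 0.5, 0.5)
```

Because the test field is linear, the interpolated value at (0.5, 0.5, 0.5) equals the field evaluated there; the SIMD version in the paper computes the same weighted sum for several rays or samples in parallel.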
A new study of the kinetics of curd production in the process of cheese manufacture.
Muñoz, Susana Vargas; Torres, Maykel González; Guerrero, Francisco Quintanilla; Talavera, Rogelio Rodríguez
2017-11-01
We studied the role played by temperature and rennet concentration in the coagulation process for cheese manufacture and the evaluation of their kinetics. We concluded that temperature is the main factor that determines the kinetics. The rennet concentration was unimportant, probably due to the fast action of the enzyme chymosin. The dynamic light scattering technique allowed measurement of the aggregates' size and their formation kinetics. The volume fraction of solids was determined from viscosity measurements, showing profiles that are in agreement with the size profiles. The results indicate that the formation of the aggregates for rennet cheese is strongly dependent on temperature and rennet concentration. The results revealed that at 35.5 °C the volume fraction of solids has the maximum slope, indicating that at this temperature the curd is formed rapidly. The optimal temperature throughout the process was established. Second-order kinetics were obtained for the process. We observed a quadratic dependence between the rennet volume and the volume fraction of solids (curd), thereby indicating that the kinetics of the curd production should be of order two.
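Second-order kinetics of the kind reported follow the integrated rate law 1/C = 1/C0 + kt, so plotting 1/C against t gives a straight line whose slope is the rate constant. A minimal sketch of recovering k from synthetic (invented, not the paper's) concentration data:

```python
import numpy as np

def second_order_conc(c0, k, t):
    """Integrated second-order rate law: 1/C = 1/C0 + k*t."""
    return 1.0 / (1.0 / c0 + k * t)

# Synthetic data generated with a hypothetical rate constant
c0, k_true = 2.0, 0.15
t = np.linspace(0.0, 30.0, 10)
conc = second_order_conc(c0, k_true, t)

# For second-order kinetics, 1/C versus t is linear with slope k
k_fit, intercept = np.polyfit(t, 1.0 / conc, 1)
```

With real measurements the linearity of the 1/C plot (versus, say, a ln C plot for first order) is what identifies the reaction order.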
Primary detection of hardwood log defects using laser surface scanning
Ed Thomas; Liya Thomas; Lamine Mili; Roger Ehrich; A. Lynn Abbott; Clifford Shaffer; Clifford Shaffer
2003-01-01
The use of laser technology to scan hardwood log surfaces for defects holds great promise for improving processing efficiency and the value and volume of lumber produced. External and internal defect detection to optimize hardwood log and lumber processing is one of the top four technological needs in the nation's hardwood industry. The location, type, and...
NASA Astrophysics Data System (ADS)
Au, How Meng
The aircraft design process traditionally starts with a given set of top-level requirements. These requirements can be aircraft performance related, such as the fuel consumption, cruise speed, or takeoff field length, or aircraft geometry related, such as the cabin height or cabin volume. This thesis proposes a new aircraft design process in which some of the top-level requirements are not explicitly specified. Instead, these previously specified parameters are now determined through the use of the Price-Per-Value-Factor (PPVF) index. This design process is well suited for design projects where a general consensus on the top-level requirements does not exist. One example is the design of small commuter airliners. The above-mentioned value factor comprises productivity, cabin volume, cabin height, cabin pressurization, mission fuel consumption, and field length, each weighted to a different exponent. The relative magnitudes and positive/negative signs of these exponents are in agreement with general experience. The value factors of the commuter aircraft are shown to have improved over a period of four decades. In addition, the purchase price is shown to vary linearly with the value factor. The initial aircraft sizing process can be manpower intensive if the calculations are done manually. By incorporating automation into the process, the design cycle can be shortened considerably. The Fortran program functions and subroutines in this dissertation, in addition to the design and optimization methodologies described above, contribute to the reduction of manpower required for the initial sizing process. By combining the new design process mentioned above and the PPVF as the objective function, an optimization study is conducted on the design of a 20-seat regional jet. Handbook methods for aircraft design are written into a Fortran code. A genetic algorithm is used as the optimization scheme. 
The result of the optimization shows that aircraft designed to this PPVF index can be competitive compared to existing turboprop commuter aircraft. The process developed can be applied to other classes of aircraft with the designer modifying the cost function based upon the design goals.
Techniques for optimal crop selection in a controlled ecological life support system
NASA Technical Reports Server (NTRS)
Mccormack, Ann; Finn, Cory; Dunsky, Betsy
1993-01-01
A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.
Techniques for optimal crop selection in a controlled ecological life support system
NASA Technical Reports Server (NTRS)
Mccormack, Ann; Finn, Cory; Dunsky, Betsy
1992-01-01
A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.
A low threshold nanocavity in a two-dimensional 12-fold photonic quasicrystal
NASA Astrophysics Data System (ADS)
Ren, Jie; Sun, XiaoHong; Wang, Shuai
2018-05-01
In this article, a low threshold nanocavity is built and investigated in a two-dimensional 12-fold holographic photonic quasicrystal (PQC). The cavity is formed by using the method of multi-beam common-path interference. By finely adjusting the structure parameters of the cavity, the Q factor and the mode volume are optimized, which are the two keys to a low threshold on the basis of the Purcell effect. Finally, an optimal cavity is obtained with a Q value of 6023 and a mode volume of 1.24 × 10⁻¹² cm³. On the other hand, by Fourier transformation of the electric field components in the cavity, the in-plane wave vectors are calculated and fitted to evaluate the cavity performance. The performance analysis of the cavity further proves the effectiveness of the optimization process. This has guiding significance for the research of low threshold nano-lasers.
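The Purcell effect invoked above ties the emission enhancement to the Q/V ratio through the standard formula F_p = (3 / 4π²) (λ/n)³ (Q/V). A sketch using the abstract's Q and mode volume, with an assumed wavelength and refractive index (neither is given in the abstract):

```python
import math

def purcell_factor(q, mode_volume_m3, wavelength_m, n):
    """Standard Purcell factor: F_p = (3 / (4 pi^2)) * (lambda/n)^3 * (Q / V)."""
    return (3.0 / (4.0 * math.pi ** 2)) * (wavelength_m / n) ** 3 * (q / mode_volume_m3)

# Q and mode volume from the abstract; wavelength and index are assumptions
Q = 6023
V = 1.24e-12 * 1e-6  # 1.24e-12 cm^3 converted to m^3
F_p = purcell_factor(Q, V, wavelength_m=1.55e-6, n=3.4)
```

The linear dependence on Q and inverse dependence on V is why the abstract treats those two quantities as the keys to lowering the lasing threshold.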
Silva, Filipa V M; Martins, Rui C; Silva, Cristina L M
2003-01-01
Cupuaçu (Theobroma grandiflorum) is an Amazonian tropical fruit with a great economic potential. Pasteurization, by a hot-filling technique, was suggested for the preservation of this fruit pulp at room temperature. The process was implemented with local communities in Brazil. The process was modeled, and a computer program was written in Turbo Pascal. The relative importance among the pasteurization process variables (initial product temperature, heating rate, holding temperature and time, container volume and shape, cooling medium type and temperature) on the microbial target and quality was investigated, by performing simulations according to a screening factorial design. Afterward, simulations of the different processing conditions were carried out. The holding temperature (T(F)) and time (t(hold)) affected the pasteurization value (P), and the container volume (V) largely influenced the quality parameters. The process was optimized for retail (1 L) and industrial (100 L) size containers, by maximizing volume average quality in terms of color lightness and sensory "fresh notes" and minimizing volume average total color difference and sensory "cooked notes". Equivalent processes were designed and simulated (P at 91 °C = 4.6 min, based on Alicyclobacillus acidoterrestris spores) and the final quality (color, flavor, and aroma attributes) was evaluated. Color was slightly affected by the pasteurization processes, and few differences were observed between the six equivalent treatments designed (T(F) between 80 and 97 °C). T(F) ≥ 91 °C minimized "cooked notes" and maximized "fresh notes" of cupuaçu pulp aroma and flavor for the 1 L container. Concerning the 100 L size, the "cooked notes" development can be minimized with T(F) ≥ 91 °C, but overall the quality was greatly degraded as a result of the long cooling times. A more efficient method to speed up the cooling phase was recommended, especially for the industrial size of containers.
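The pasteurization value used here (P referenced to 91 °C) is conventionally the time integral of the lethal rate 10^((T - T_ref)/z) over the product's thermal history. A minimal sketch, assuming a z-value of 10 °C, which is a common convention and not stated in the abstract:

```python
def pasteurization_value(temps_C, dt_min, t_ref=91.0, z=10.0):
    """Pasteurization value P = sum of 10**((T - t_ref)/z) * dt over the
    thermal history. t_ref = 91 C matches the abstract's reference
    temperature; the z-value of 10 C is an assumed convention."""
    return sum(10.0 ** ((T - t_ref) / z) * dt_min for T in temps_C)

# Holding at exactly 91 C accumulates lethality at one minute per minute,
# so a 5-minute hold gives P slightly above the target of 4.6 min
P = pasteurization_value([91.0] * 5, dt_min=1.0)
```

Each 10 °C above the reference multiplies the lethal rate by ten (for z = 10 °C), which is why the hotter equivalent processes in the abstract need much shorter holding times.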
Bosch Ojeda, Catalina; Sánchez Rojas, Fuensanta; Cano Pavón, José Manuel
2007-09-01
Ceramic and glass are some of the more recent engineering materials and those that are most resistant to environmental conditions. They belong to advanced materials in that they are being developed for the aerospace and electronics industries. In the last decade, a new class of ceramic materials has been the focus of particular attention. The materials were produced with natural, renewable resources (wood or wood-based products). In this work, we have synthesised a new biomorphic ceramic material from oak wood and Si infiltration. After the material characterization, we optimized the dissolution of the sample by acid attack in an oven under microwave irradiation. Experimental designs were used as a multivariate strategy for the evaluation of the effects of varying several variables at the same time. The optimization was performed in two steps, using a factorial design for preliminary evaluation and a Draper-Lin design for determination of the critical experimental conditions. Five variables (time, power, volume of HNO3, volume of H2SO4 and volume of HF) were considered as factors, with the concentrations of different metal ions as the response in the optimization process. Interactions between analytical factors and their optimal levels were investigated using a Draper-Lin design.
Nekkanti, Vijaykumar; Marwah, Ashwani; Pillai, Raviraj
2015-01-01
Design of experiments (DOE), a component of Quality by Design (QbD), is the systematic and simultaneous evaluation of process variables to develop a product with predetermined quality attributes. This article presents a case study to understand the effects of process variables in a bead milling process used for the manufacture of drug nanoparticles. Experiments were designed and results were computed according to a 3-factor, 3-level face-centered central composite design (CCD). The factors investigated were motor speed, pump speed and bead volume. Responses analyzed for evaluating these effects and interactions were milling time, particle size and process yield. Process validation batches were executed using the optimum process conditions obtained from the software Design-Expert® to evaluate both the repeatability and reproducibility of the bead milling technique. Milling time was optimized to <5 h to obtain the desired particle size (d90 < 400 nm). The desirability function was used to optimize the response variables, and the predicted responses were in agreement with experimental values. These results demonstrated the reliability of the selected model for the manufacture of drug nanoparticles with predictable quality attributes. The optimization of bead milling process variables by applying DOE resulted in a considerable decrease in milling time to achieve the desired particle size. The study indicates the applicability of the DOE approach to optimize critical process parameters in the manufacture of drug nanoparticles.
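A 3-factor face-centered CCD like the one described has a standard coded-point layout: 8 factorial corners, 6 axial points at α = 1 (on the face centers), plus center points. A sketch that enumerates it, assuming a single center point (the study's number of center replicates is not stated):

```python
from itertools import product

def face_centered_ccd(n_factors):
    """Coded design points for a face-centered central composite design (alpha = 1)."""
    # Two-level factorial corners: all combinations of -1 / +1
    factorial = [list(p) for p in product((-1, 1), repeat=n_factors)]
    # Axial points: one factor at +/-1, all others at 0
    axial = []
    for i in range(n_factors):
        for a in (-1, 1):
            pt = [0] * n_factors
            pt[i] = a
            axial.append(pt)
    center = [[0] * n_factors]
    return factorial + axial + center

points = face_centered_ccd(3)  # 8 factorial + 6 axial + 1 center = 15 points
```

Each coded coordinate (-1, 0, +1) is then mapped onto the real low/mid/high settings of motor speed, pump speed, and bead volume to produce the run sheet.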
Non-rigid Reconstruction of Casting Process with Temperature Feature
NASA Astrophysics Data System (ADS)
Lin, Jinhua; Wang, Yanjie; Li, Xin; Wang, Ying; Wang, Lu
2017-09-01
Off-line reconstruction of rigid scenes has made great progress in the past decade. However, the on-line reconstruction of non-rigid scenes is still a very challenging task. The casting process is a non-rigid reconstruction problem: it is a highly dynamic molding process lacking geometric features. In order to reconstruct the casting process robustly, an on-line fusion strategy is proposed for dynamic reconstruction of the casting process. Firstly, the geometric and flowing features of the casting are parameterized in the manner of a TSDF (truncated signed distance field), which is a volumetric block; the parameterized casting guarantees real-time tracking and optimal deformation of the casting process. Secondly, the data structure of the volume grid is extended to hold a temperature value, and a temperature interpolation function is built to generate the temperature of each voxel. This data structure allows for dynamic tracking of the temperature of the casting during deformation stages. Then, sparse RGB features are extracted from the casting scene to search for correspondences between the geometric representation and the depth constraint. The extracted color data guarantees robust tracking of the flowing motion of the casting. Finally, the optimal deformation of the target space is transformed into a nonlinear regularized variational optimization problem. This optimization step achieves smooth and optimal deformation of the casting process. The experimental results show that the proposed method can reconstruct the casting process robustly and reduce drift in the process of non-rigid reconstruction of the casting.
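The extended voxel structure described, a TSDF block carrying a temperature channel, might be sketched as below. The weighted running-average fusion rule and the truncation distance are standard TSDF practice, assumed here rather than taken from the paper:

```python
import numpy as np

class TemperatureTSDF:
    """Voxel grid storing a truncated signed distance plus a temperature channel."""

    def __init__(self, shape):
        self.tsdf = np.ones(shape, dtype=np.float32)       # truncated signed distance
        self.weight = np.zeros(shape, dtype=np.float32)    # integration weight
        self.temperature = np.zeros(shape, dtype=np.float32)

    def integrate(self, idx, sdf, temp, trunc=0.1):
        """Fuse one depth/temperature observation by weighted running average."""
        d = float(np.clip(sdf / trunc, -1.0, 1.0))
        w = self.weight[idx]
        self.tsdf[idx] = (self.tsdf[idx] * w + d) / (w + 1.0)
        self.temperature[idx] = (self.temperature[idx] * w + temp) / (w + 1.0)
        self.weight[idx] = w + 1.0

grid = TemperatureTSDF((8, 8, 8))
grid.integrate((1, 2, 3), sdf=0.05, temp=900.0)  # one near-surface sample at 900 degrees
```

Repeated integrations average out sensor noise in both channels, which is what lets the temperature field be tracked through the deformation stages.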
VOLUMNECT: measuring volumes with Kinect
NASA Astrophysics Data System (ADS)
Quintino Ferreira, Beatriz; Griné, Miguel; Gameiro, Duarte; Costeira, João. Paulo; Sousa Santos, Beatriz
2014-03-01
This article presents a solution for volume measurement in object packing using 3D cameras (such as the Microsoft KinectTM). We target application scenarios, such as warehouses or distribution and logistics companies, where it is important to promptly compute package volumes, yet high accuracy is not pivotal. Our application automatically detects cuboid objects using the depth camera data, computes their volumes, and sorts them, allowing space optimization. The proposed methodology applies simple computer vision and image processing methods to a point cloud, such as connected components, morphological operations and the Harris corner detector, producing encouraging results, namely an accuracy in volume measurement of 8 mm. Aspects that can be further improved are identified; nevertheless, the current solution is already promising, turning out to be cost effective for the envisaged scenarios.
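Once a cuboid has been detected, its volume is just the product of its three extents. The sketch below substitutes an axis-aligned bounding box for the paper's connected-components and Harris-corner pipeline, and uses invented box dimensions and a synthetic point cloud:

```python
import numpy as np

def cuboid_volume_from_points(points):
    """Estimate a cuboid's volume as the product of its bounding-box extents.
    Assumes the cloud is already segmented and axis-aligned."""
    points = np.asarray(points, dtype=float)
    extent = points.max(axis=0) - points.min(axis=0)
    return float(np.prod(extent))

# Synthetic cloud sampled inside a hypothetical 0.4 x 0.3 x 0.2 m box
rng = np.random.default_rng(0)
pts = rng.uniform([0.0, 0.0, 0.0], [0.4, 0.3, 0.2], size=(1000, 3))
vol = cuboid_volume_from_points(pts)  # slightly under the true 0.024 m^3
```

In a real pipeline the cloud would first be rotated into the box's own frame (e.g. from detected face normals); otherwise the axis-aligned bounding box overestimates the volume of a tilted package.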
Modeling and Analysis of Power Processing Systems (MAPPS). Volume 2: Appendices
NASA Technical Reports Server (NTRS)
Lee, F. C.; Radman, S.; Carter, R. A.; Wu, C. H.; Yu, Y.; Chang, R.
1980-01-01
The computer programs and derivations generated in support of the modeling and design optimization program are presented. Programs for the buck regulator, boost regulator, and buck-boost regulator are described. The computer program for the design optimization calculations is presented. Constraints for the boost and buck-boost converter were derived. Derivations of state-space equations and transfer functions are presented. Computer lists for the converters are presented, and the input parameters justified.
Cloud Optimized Image Format and Compression
NASA Astrophysics Data System (ADS)
Becker, P.; Plesea, L.; Maurer, T.
2015-04-01
Cloud-based image storage and processing require a re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud-based elastic storage and computation environments. This paper will provide details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced with the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm, which enables it to be efficiently accessed using JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.
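"Controlled lossy" compression of the kind LERC provides bounds the per-pixel reconstruction error by a user-chosen maximum. A toy quantizer with that same guarantee (not the actual LERC algorithm, which adds block partitioning and bit packing on top) can be sketched as:

```python
import numpy as np

def lossy_quantize(values, max_error):
    """Controlled-lossy encoding: quantize with step 2*max_error so that no
    reconstructed value deviates from the original by more than max_error."""
    step = 2.0 * max_error
    indices = np.round(np.asarray(values, dtype=float) / step).astype(np.int64)
    return indices, step

def dequantize(indices, step):
    """Reconstruct approximate values from quantization indices."""
    return indices * step

data = np.array([10.03, 10.97, 250.5, 251.49])
idx, step = lossy_quantize(data, max_error=0.5)
restored = dequantize(idx, step)
```

The small-integer indices are what compress well (they can be bit-packed or entropy coded), while the error bound holds by construction; setting max_error to zero degenerates to lossless handling in LERC itself.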
Neuroimaging markers associated with maintenance of optimal memory performance in late-life.
Dekhtyar, Maria; Papp, Kathryn V; Buckley, Rachel; Jacobs, Heidi I L; Schultz, Aaron P; Johnson, Keith A; Sperling, Reisa A; Rentz, Dorene M
2017-06-01
Age-related memory decline has been well-documented; however, some individuals reach their 8th-10th decade while maintaining strong memory performance. To determine which demographic and biomarker factors differentiated top memory performers (aged 75+, top 20% for memory) from their peers and whether top memory performance was maintained over 3 years. Clinically normal adults (n=125, CDR=0; age: 79.5±3.57 years) from the Harvard Aging Brain Study underwent cognitive testing and neuroimaging (amyloid PET, MRI) at baseline and 3-year follow-up. Participants were grouped into Optimal (n=25) vs. Typical (n=100) performers using performance on 3 challenging memory measures. Non-parametric tests were used to compare groups. There were no differences in age, sex, or education between Optimal vs. Typical performers. The Optimal group performed better in Processing Speed (p=0.016) and Executive Functioning (p<0.001). Optimal performers had larger hippocampal volumes at baseline compared with Typical performers (p=0.027) but no differences in amyloid burden (p=0.442). Twenty-three of the 25 Optimal performers had longitudinal data; 16 maintained top memory performance while 7 declined. Non-Maintainers additionally declined in Executive Functioning but not Processing Speed. Longitudinally, there were no hippocampal volume differences between Maintainers and Non-Maintainers; however, Non-Maintainers exhibited higher amyloid burden at baseline than Maintainers (p=0.008). Excellent memory performance in late life does not guarantee protection against cognitive decline. Those who maintain an optimal memory into the 8th and 9th decades may have lower levels of AD pathology. Copyright © 2017. Published by Elsevier Ltd.
Yang, Fengjian; Yang, Lei; Wang, Wenjie; Liu, Yang; Zhao, Chunjian; Zu, Yuangang
2012-01-01
In order to screen a suitable resin for the preparative simultaneous separation and purification of syringin, eleutheroside E and isofraxidin from Acanthopanax senticosus, the adsorption and desorption properties of 17 widely used commercial macroporous resins were evaluated. According to our results, HPD100C, which adsorbs by the molecular tiers model, was the best macroporous resin, offering higher adsorption and desorption capacities and higher adsorption speed for syringin, eleutheroside E and isofraxidin than other resins. Dynamic adsorption and desorption tests were carried out to optimize the process parameters. The optimal conditions were as follows: for adsorption, processing volume: 24 BV, flow rate: 2 BV/h; for desorption, ethanol–water solution: 60:40 (v/v), eluent volume: 4 BV, flow rate: 3 BV/h. Under the above conditions, the contents of syringin, eleutheroside E and isofraxidin increased 174-fold, 20-fold and 5-fold and their recoveries were 80.93%, 93.97% and 93.79%, respectively. PMID:22942746
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beltran, C; Kamal, H
Purpose: To provide a multicriteria optimization algorithm for intensity modulated radiation therapy using pencil proton beam scanning. Methods: Intensity modulated radiation therapy using pencil proton beam scanning requires efficient optimization algorithms to overcome the uncertainties in the Bragg peak locations. This work is focused on optimization algorithms that are based on Monte Carlo simulation of the treatment planning and use the weights and the dose volume histogram (DVH) control points to steer toward desired plans. The proton beam treatment planning process based on single objective optimization (representing a weighted sum of multiple objectives) usually leads to time-consuming iterations involving treatment planning team members. We provide a time-efficient multicriteria optimization algorithm developed to run on an NVIDIA GPU (Graphical Processing Unit) cluster. The running time of the multicriteria optimization algorithm benefits from up-sampling of the CT voxel size in the calculations without loss of fidelity. Results: We will present preliminary results of multicriteria optimization for intensity modulated proton therapy based on DVH control points. The results will show optimization results for a phantom case and a brain tumor case. Conclusion: The multicriteria optimization of intensity modulated radiation therapy using pencil proton beam scanning provides a novel tool for treatment planning. Work supported by a grant from Varian Inc.
Optimal Design of Magnetic Components in Plasma Cutting Power Supply
NASA Astrophysics Data System (ADS)
Jiang, J. F.; Zhu, B. R.; Zhao, W. N.; Yang, X. J.; Tang, H. J.
2017-10-01
Phase-shifted transformers and DC reactors are usually needed in a chopper plasma cutting power supply. Because of the high power rating, the loss of the magnetic components may reach several kilowatts, which seriously affects the conversion efficiency. Therefore, it is necessary to research and design low-loss magnetic components by means of efficient magnetic materials and optimal design methods. The main task in this paper is to compare the core loss of different magnetic materials and to analyze the influence of transformer structure, winding arrangement and wire structure on the characteristics of the magnetic components. A further task is to select suitable magnetic material, structure and wire in order to reduce the loss and volume of the magnetic components. Based on these outcomes, an optimization design process for the transformer and DC reactor in a chopper plasma cutting power supply is proposed, generating a set of candidate solutions. These solutions are analyzed and compared before the optimal solution is determined, in order to reduce the volume and power loss of the two magnetic components and improve the conversion efficiency of the plasma cutting power supply.
Technologies for imaging neural activity in large volumes
Ji, Na; Freeman, Jeremy; Smith, Spencer L.
2017-01-01
Neural circuitry has evolved to form distributed networks that act dynamically across large volumes. Because it collects data from individual planes, conventional microscopy cannot sample circuitry across large volumes at the temporal resolution relevant to neural circuit function and behavior. Here, we review emerging technologies for rapid volume imaging of neural circuitry. We focus on two critical challenges: the inertia of optical systems, which limits imaging speed, and aberrations, which restrict the imaging volume. Optical sampling time must be long enough to ensure high-fidelity measurements, but optimized sampling strategies and point spread function engineering can facilitate rapid volume imaging of neural activity within this constraint. We also discuss new computational strategies for the processing and analysis of volume imaging data of increasing size and complexity. Together, optical and computational advances are providing a broader view of neural circuit dynamics and helping to elucidate how brain regions work in concert to support behavior. PMID:27571194
Biodiesel production by direct transesterification of microalgal biomass with co-solvent.
Zhang, Yan; Li, Ya; Zhang, Xu; Tan, Tianwei
2015-11-01
In this study, a direct transesterification process using 75% ethanol and a co-solvent was studied to reduce the energy consumption of the lipid extraction process and improve the conversion yield of microalgae biodiesel. The addition of a certain amount of co-solvent (n-hexane is most preferable) was required for the direct transesterification of microalgae biomass. With the optimal reaction conditions of an n-hexane to 75% ethanol volume ratio of 1:2, mixed solvent dosage 6.0 mL, reaction temperature 90°C, reaction time 2.0 h and catalyst volume 0.6 mL, the direct transesterification process of microalgal biomass resulted in a high conversion yield of up to 90.02 ± 0.55 wt.%. Copyright © 2015 Elsevier Ltd. All rights reserved.
An n-material thresholding method for improving integerness of solutions in topology optimization
Watts, Seth; Tortorelli, Daniel A.
2016-04-10
It is common in solving topology optimization problems to replace an integer-valued characteristic function design field with the material volume fraction field, a real-valued approximation of the design field that permits "fictitious" mixtures of materials during intermediate iterations in the optimization process. This is reasonable so long as one can interpolate properties for such materials and so long as the final design is integer valued. For this purpose, we present a method for smoothly thresholding the volume fractions of an arbitrary number of material phases which specify the design. This method is trivial for two-material design problems, for example, the canonical topology design problem of specifying the presence or absence of a single material within a domain, but it becomes more complex when three or more materials are used, as often occurs in material design problems. We take advantage of the similarity in properties between the volume fractions and the barycentric coordinates on a simplex to derive a thresholding method which is applicable to an arbitrary number of materials. As we show in a sensitivity analysis, this method has smooth derivatives, allowing it to be used in gradient-based optimization algorithms. Finally, we present results that show synergistic effects when used with Solid Isotropic Material with Penalty and Rational Approximation of Material Properties material interpolation functions, popular methods of ensuring integerness of solutions.
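One simple way to realize a smooth, differentiable push of volume fractions toward integer values, in the spirit of (though not identical to) the thresholding described above, is power-law sharpening of the barycentric coordinates followed by renormalization:

```python
def threshold(fracs, p=3.0):
    """Smoothly push a vector of material volume fractions (barycentric
    coordinates: nonnegative, summing to 1) toward the nearest simplex
    vertex. As p -> infinity the output approaches an integer-valued
    indicator (1 for the dominant material, 0 elsewhere), while for
    finite p it remains smooth in the interior, so gradients exist.
    Illustrative only -- not the exact method of Watts and Tortorelli."""
    powered = [f ** p for f in fracs]
    total = sum(powered)
    return [w / total for w in powered]

x = [0.6, 0.3, 0.1]          # a "fictitious" three-material mixture
y = threshold(x, p=3.0)
assert abs(sum(y) - 1.0) < 1e-12   # output stays on the simplex
assert y[0] > x[0]                 # dominant phase is amplified
assert max(threshold(x, p=20.0)) > 0.999  # high p -> nearly integer
```

Because the map keeps the fractions on the simplex and is smooth away from the vertices, it can sit inside a gradient-based optimizer without breaking sensitivity analysis.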
Application of a Model for Quenching and Partitioning in Hot Stamping of High-Strength Steel
NASA Astrophysics Data System (ADS)
Zhu, Bin; Liu, Zhuang; Wang, Yanan; Rolfe, Bernard; Wang, Liang; Zhang, Yisheng
2018-04-01
Application of the quenching and partitioning process in hot stamping has proven to be an effective method to improve the plasticity of advanced high-strength steels (AHSSs). In this study, the hot stamping and partitioning process of advanced high-strength steel 30CrMnSi2Nb is investigated with a hot stamping mold. Given the specific partitioning time and temperature, the influence of quenching temperature on the volume fractions of the evolving microstructure and on the mechanical properties of the above steel is studied in detail. In addition, a model for the quenching and partitioning process is applied to predict the carbon diffusion and interface migration during partitioning, which determine the retained austenite volume fraction and final properties of the part. The predicted trends of the retained austenite volume fraction agree with the experimental results. In both cases, the volume fraction of retained austenite first increases and then decreases with increasing quenching temperature. The optimal quenching temperature is approximately 290 °C for 30CrMnSi2Nb with the partition conditions of 425 °C and 20 seconds. It is suggested that the model can be used to help determine the process parameters that retain as much austenite as possible.
Liu, Zeyu; Su, Zhetong; Yang, Ming; Zou, Wenquan
2010-10-01
To screen the factors that significantly affect indirubin generation in the process of preparing indigo naturalis, optimize their level combination, and determine the optimum technology for indirubin generation. Using the concentration of indirubin (mg x g(-1)) generated from fresh leaf as the index, and Plackett-Burman design and Box-Behnken response surface analysis as the statistical methods, we screened the significantly influencing factors and the optimal level combination. The optimized soaking and indirubin-forming process in preparing indigo naturalis was identified as follows: wax not removed before immersion, immersion pH 7, solvent volume to leaf weight ratio (mL:g) of 15, soaking without protection from light for 48 h at 60 degrees C, ventilation time of 180 min, and ammonia water added to adjust the pH to 10.5. The soaking and indirubin-forming process in preparing indigo naturalis was thus optimized systematically. The study clarifies the impact of the various factors on the active ingredient indirubin, makes controlled industrialized production of indigo naturalis feasible, and lays the foundation for understanding its processing principle.
Multiple-objective optimization in precision laser cutting of different thermoplastics
NASA Astrophysics Data System (ADS)
Tamrin, K. F.; Nukman, Y.; Choudhury, I. A.; Shirley, S.
2015-04-01
Thermoplastics are increasingly being used in the biomedical, automotive and electronics industries due to their excellent physical and chemical properties. Because the process is localized and non-contact, laser cutting can produce precise cuts with a small heat-affected zone (HAZ). Precision laser cutting of various materials is important in high-volume manufacturing processes to minimize operational cost, reduce error and improve product quality. This study uses grey relational analysis to determine a single optimized set of cutting parameters for three different thermoplastics. The set of optimized processing parameters is determined based on the highest relational grade and was found at low laser power (200 W), high cutting speed (0.4 m/min) and low compressed air pressure (2.5 bar). This result matches the objective set in the present study. Analysis of variance (ANOVA) is then carried out to ascertain the relative influence of process parameters on the cutting characteristics. It was found that the laser power has the dominant effect on HAZ for all thermoplastics.
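The grey relational analysis used above follows a standard recipe: normalize each response, measure each setting's deviation from the ideal, convert deviations to grey relational coefficients, and average them into a grade. A minimal sketch (the distinguishing coefficient zeta = 0.5 is the usual default; the example response values are hypothetical, not the study's data):

```python
def grey_relational_grade(responses, smaller_better, zeta=0.5):
    """responses: rows = parameter settings, cols = quality characteristics.
    Returns one grade per setting; the highest grade marks the optimum."""
    cols = list(zip(*responses))
    norm_cols = []
    for col, sb in zip(cols, smaller_better):
        lo, hi = min(col), max(col)
        if sb:   # smaller-the-better, e.g. heat-affected zone
            norm_cols.append([(hi - v) / (hi - lo) for v in col])
        else:    # larger-the-better, e.g. cut quality
            norm_cols.append([(v - lo) / (hi - lo) for v in col])
    norm_rows = list(zip(*norm_cols))
    # deviation of each normalized response from the ideal value 1
    devs = [[1.0 - v for v in row] for row in norm_rows]
    dmin = min(min(r) for r in devs)
    dmax = max(max(r) for r in devs)
    coeffs = [[(dmin + zeta * dmax) / (d + zeta * dmax) for d in row]
              for row in devs]
    return [sum(row) / len(row) for row in coeffs]

# three hypothetical settings, responses = (HAZ in mm, quality score)
grades = grey_relational_grade([(0.30, 7.0), (0.18, 9.0), (0.25, 6.0)],
                               smaller_better=[True, False])
assert grades.index(max(grades)) == 1  # setting 2 is best on both criteria
```

Collapsing several responses into one grade is what lets a single parameter set be chosen for all three thermoplastics at once.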
[PRIORITY TECHNOLOGIES OF THE MEDICAL WASTE DISPOSAL SYSTEM].
Samutin, N M; Butorina, N N; Starodubova, N Yu; Korneychuk, S S; Ustinov, A K
2015-01-01
The annual production of waste in health care institutions (HCI) tends to increase because of the growth of health care provision for the population. Among the many criteria for selecting the optimal HCI waste treatment technology, the most important is the epidemiological and chemical safety of the final products. An environmentally friendly method of thermal disinfection of medical waste is the use of medical waste sterilizers intended for hospitals, medical centers, laboratories and other health care facilities that process small and medium volumes of all types of Class B and C waste. The optimal method of centralized disposal of medical waste is thermal processing of the collected material.
Im, Sung-Ju; Choi, Jungwon; Lee, Jung-Gil; Jeong, Sanghyun; Jang, Am
2018-03-01
A new concept of volume-retarded osmosis and low-pressure membrane (VRO-LPM) hybrid process was developed and evaluated for the first time in this study. Commercially available forward osmosis (FO) and ultrafiltration (UF) membranes were employed in a VRO-LPM hybrid process to overcome the energy limitations of draw solution (DS) regeneration and permeate production in the FO process. To evaluate its feasibility as a water reclamation process, and to optimize the operational conditions, cross-flow FO and dead-end mode UF processes were individually evaluated. For the FO process, a DS concentration of 0.15 g mL(-1) of polysulfonate styrene (PSS) was determined to be optimal, having a high flux with a low reverse salt flux. The UF membrane with a molecular weight cut-off of 1 kDa was chosen for its high PSS rejection in the LPM process. As a single process, UF (LPM) exhibited a higher flux than FO, but this could be controlled by adjusting the effective membrane areas of the FO and UF membranes in the VRO-LPM system. The VRO-LPM hybrid process only required a circulation pump for the FO process. This led to a decrease in the specific energy consumption of the VRO-LPM process for potable water production, to a level similar to that of the single FO process. Therefore, the newly developed VRO-LPM hybrid process, with an appropriate DS selection, can be used as an energy efficient water production method, and can outperform conventional water reclamation processes. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fehrmann, Henning; Aign, Joerg
2013-07-01
In nuclear power plants (NPP), ion exchange (IX) resins are used in several systems for water treatment. Spent resins can contain a significant amount of contaminants, which makes treatment of spent resins for disposal mandatory. Several treatment processes are available, such as direct immobilization with technologies like cementation, bitumisation, polymer solidification, or use of a high integrity container (HIC). These technologies usually come with a significant increase in final waste volume. Hot Resin Supercompaction (HRSC) is a thermal treatment process which reduces the resin waste volume significantly. For a mixture of powdered and bead resins, the HRSC process has demonstrated a volume reduction of up to 75% [1]. For bead resins alone, however, the HRSC process is challenging because the bead resins' compaction properties are unfavorable: the bead resin material does not form a solid block after compaction and shows a high spring-back effect. The volume reduction of bead resins is not as good as for the mixture described in [1]. The compaction properties of bead resin waste can be significantly improved by grinding the beads to powder. The grinding also eliminates the need for a powder additive. Westinghouse has developed a modular grinding process to grind the bead resin to powder. The developed process requires no circulation of resins and enables selective adjustment of particle size and distribution to achieve optimal results in the HRSC or in any other following process. A special grinding tool setup is used to minimize maintenance and radiation exposure to personnel. (authors)
Hu, Yifan; Iordan, Alexandru D.; Moore, Matthew; Dolcos, Florin
2016-01-01
Converging evidence identifies trait optimism and the orbitofrontal cortex (OFC) as personality and brain factors influencing anxiety, but the nature of their relationships remains unclear. Here, the mechanisms underlying the protective role of trait optimism and of increased OFC volume against symptoms of anxiety were investigated in 61 healthy subjects, who completed measures of trait optimism and anxiety, and underwent structural scanning using magnetic resonance imaging. First, the OFC gray matter volume (GMV) was associated with increased optimism, which in turn was associated with reduced anxiety. Second, trait optimism mediated the relation between the left OFC volume and anxiety, thus demonstrating that increased GMV in this brain region protects against symptoms of anxiety through increased optimism. These results provide novel evidence about the brain–personality mechanisms protecting against anxiety symptoms in healthy functioning, and identify potential targets for preventive and therapeutic interventions aimed at reducing susceptibility and increasing resilience against emotional disturbances. PMID:26371336
Souza, C A; Oliveira, T C; Crovella, S; Santos, S M; Rabêlo, K C N; Soriano, E P; Carvalho, M V D; Junior, A F Caldas; Porto, G G; Campello, R I C; Antunes, A A; Queiroz, R A; Souza, S M
2017-04-28
The use of Y chromosome haplotypes, important for the detection of sexual crimes in forensics, has gained prominence with the use of databases that incorporate these genetic profiles in their systems. Here, we optimized and validated an amplification protocol for Y chromosome profile retrieval in reference samples using less material than commercial kits require. FTA® (Flinders Technology Associates) cards were used to support the oral cells of male individuals, which were amplified directly using the SwabSolution reagent (Promega). First, we optimized and validated the process to define the volume and cycling conditions. Three reference samples and nineteen 1.2 mm-diameter perforated discs were used per sample. Amplification of one or two discs (samples) with the PowerPlex® Y23 kit (Promega) was performed using 25, 26, and 27 thermal cycles. Reagent volumes of 20%, 32%, and 100%, one disc, and 26 cycles were used for the control per sample. Thereafter, all samples (N = 270) were amplified using 27 cycles, one disc, and 32% reagents (optimized conditions). Data were analyzed through a study of the balance between fluorophore colors. In the samples analyzed with the 20% volume, an imbalance was observed in peak heights, both within and between dyes. In samples amplified with 32% reagents, the values obtained for the intra-color and inter-color standard balance calculations used to verify the quality of the analyzed peaks were similar to those of samples amplified with 100% of the recommended volume. The quality of the profiles obtained with 32% reagents was suitable for insertion into databases.
Hadi, Pejman; Yeung, Kit Ying; Guo, Jiaxin; Wang, Huaimin; McKay, Gordon
2016-04-01
This paper aims at the sustainable development of activated carbons for value-added applications from the waste tyre pyrolysis product, tyre char, in order to make pyrolysis economically favorable. Two activation process parameters, activation temperature (900, 925, 950 and 975 °C) and residence time (2, 4 and 6 h), with steam as the activating agent, have been investigated. The textural properties of the produced tyre char activated carbons have been characterized by nitrogen adsorption-desorption experiments at -196 °C. The activation process has resulted in the production of mesoporous activated carbons, confirmed by the existence of hysteresis loops in the N2 adsorption-desorption curves and the pore size distribution curves obtained by the BJH method. The BET surface area, total pore volume and mesopore volume of the activated carbons from tyre char have been improved to 732 m(2)/g, 0.91 cm(3)/g and 0.89 cm(3)/g, respectively. It has been observed that the BET surface area, mesopore volume and total pore volume increased linearly with burn-off during activation in the range of experimental parameters studied. Thus, the yield-normalized surface area, defined as the surface area of the activated carbon per gram of the precursor, has been introduced to optimize the activation conditions. Accordingly, the optimized activation conditions have been demonstrated to be an activation temperature of 975 °C and an activation time of 4 h. Copyright © 2016 Elsevier Ltd. All rights reserved.
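The yield-normalized surface area introduced above is simply the BET area weighted by the activation yield, balancing the gain in porosity against the mass lost to burn-off. A sketch (all numbers are hypothetical examples, not the study's measured pairs):

```python
def yield_normalized_surface_area(bet_m2_per_g, yield_fraction):
    """Surface area per gram of char precursor (m^2 per g precursor).
    Longer/hotter activation raises BET area but lowers yield through
    burn-off, so maximizing this product locates the best trade-off."""
    return bet_m2_per_g * yield_fraction

# hypothetical screening: condition -> (BET area m^2/g, activation yield)
candidates = {"950C_6h": (820.0, 0.20), "975C_4h": (732.0, 0.35)}
best = max(candidates,
           key=lambda k: yield_normalized_surface_area(*candidates[k]))
assert best == "975C_4h"  # higher yield outweighs slightly lower area
```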
Basu-Roy, Somapriya; Kar, Sanjay Kumar; Das, Sounik; Lahiri, Annesha
2017-01-01
Purpose: This study compares dose-volume parameters evaluated using different forward planning-optimization techniques, involving two applicator systems in intracavitary brachytherapy for cervical cancer. It looks for the best applicator-optimization combination to fulfill recommended dose-volume objectives in different high-dose-rate (HDR) fractionation schedules. Material and methods: We used tandem-ring and Fletcher-style tandem-ovoid applicators in the same patients in two fractions of brachytherapy. Six plans were generated for each patient utilizing 3 forward optimization techniques for each applicator used: equal dwell weights/times ('no optimization'), 'manual dwell weights/times', and 'graphical'. Plans were normalized to left point A and a dose of 8 Gy was prescribed. Dose-volume and dose-point parameters were compared. Results: Without graphical optimization, the maximum width and thickness of the volume enclosed by the 100% isodose line, the dose to 90% and 100% of the clinical target volume (CTV), and the minimum, maximum, median, and average doses to both rectum and bladder are significantly higher with the Fletcher applicator. Even when graphical optimization is applied, the dose to both points B, the minimum dose to the CTV, and the treatment time; the dose to 2 cc (D2cc) of rectum and the rectal point dose; the D2cc and the minimum, maximum, median, and average doses to the sigmoid colon; and the D2cc of bladder remain significantly higher with this applicator. The dose to the bladder point is similar (p > 0.05) between the two applicators after all optimization techniques. Conclusions: The Fletcher applicator generates higher doses to both the CTV and organs at risk (2 cc volumes) after all optimization techniques. Dose restriction to the rectum is possible using graphical optimization only during selected HDR fractionation schedules. The bladder always receives a dose higher than recommended, and 2 cc of sigmoid colon always receives a permissible dose. Contrarily, graphical optimization with ring applicators fulfills all dose-volume objectives in all HDR fractionations practiced.
PMID:29204164
Orozco, Raquel; Godfrey, Scott; Coffman, Jon; Amarikwa, Linus; Parker, Stephanie; Hernandez, Lindsay; Wachuku, Chinenye; Mai, Ben; Song, Brian; Hoskatti, Shashidhar; Asong, Jinkeng; Shamlou, Parviz; Bardliving, Cameron; Fiadeiro, Marcus
2017-07-01
We designed, built or 3D printed, and screened tubular reactors that minimize axial dispersion to serve as incubation chambers for continuous virus inactivation of biological products. Empirical residence time distribution data were used to derive each tubular design's volume equivalent to a theoretical plate (VETP) at various process flow rates. One design, the Jig in a Box (JIB), yielded the lowest VETP, indicating optimal radial mixing and minimal axial dispersion. A minimum residence time (MRT) approach was employed, where the MRT is the minimum time the product spends in the tubular reactor. This incubation time is typically 60 minutes in a batch process. We provide recommendations for combinations of flow rates and device dimensions for operation of the JIB connected in series that will meet a 60-min MRT. The results show that under a wide range of flow rates and corresponding volumes, it takes 75 ± 3 min for 99% of the product to exit the reactor while meeting the 60-min MRT criterion and fulfilling the constraint of keeping the differential pressure drop under 5 psi. Under these conditions, the VETP increases slightly from 3 to 5 mL, though the number of theoretical plates stays constant at about 1326 ± 88. We also demonstrated that the final design volume was only 6% ± 1% larger than the ideal plug flow volume. Using such a device would enable continuous viral inactivation in a truly continuous process or in the effluent of a batch chromatography column. Viral inactivation studies would be required to validate such a design. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 33:954-965, 2017. © 2017 American Institute of Chemical Engineers.
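One standard way to derive VETP from pulse-tracer residence time distribution data is the tanks-in-series moment method: N = mean^2/variance of E(t), then VETP = V/N. The authors' empirical procedure may differ in detail; this is a generic sketch with synthetic data:

```python
import math

def vetp_from_rtd(times, conc, reactor_volume_ml):
    """Tanks-in-series estimate from a pulse-tracer RTD:
    N = mean^2 / variance of E(t), VETP = reactor volume / N.
    Moments are computed with the trapezoidal rule."""
    def trapz(ys):
        return sum((ys[i] + ys[i + 1]) / 2.0 * (times[i + 1] - times[i])
                   for i in range(len(times) - 1))
    area = trapz(conc)
    e = [c / area for c in conc]                       # normalized RTD E(t)
    mean = trapz([t * ei for t, ei in zip(times, e)])
    var = trapz([(t - mean) ** 2 * ei for t, ei in zip(times, e)])
    n_plates = mean ** 2 / var
    return reactor_volume_ml / n_plates, n_plates

# synthetic near-Gaussian tracer pulse: mean 10 min, sigma 1 min -> N ~ 100
times = [i * 0.25 for i in range(81)]
conc = [math.exp(-((t - 10.0) ** 2) / 2.0) for t in times]
vetp, n = vetp_from_rtd(times, conc, reactor_volume_ml=600.0)
assert abs(n - 100.0) < 1.0 and abs(vetp - 6.0) < 0.1
```

A narrower, more symmetric pulse (less axial dispersion) gives more plates and hence a lower VETP, which is exactly the figure of merit used to rank the reactor designs.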
Microfluidics: a transformational tool for nanomedicine development and production.
Garg, Shyam; Heuck, Gesine; Ip, Shell; Ramsay, Euan
2016-11-01
Microfluidic devices are microscale fluidic circuits used to manipulate liquids at the nanoliter scale. The ability to control the mixing of fluids and the continuous nature of the process make them apt for solvent/antisolvent precipitation of drug-delivery nanoparticles. This review describes the use of numerous microfluidic designs for the formulation and production of lipid nanoparticles, liposomes and polymer nanoparticles to encapsulate and deliver small-molecule or genetic payloads. The advantages of microfluidics are illustrated through examples from the literature comparing conventional processes such as beaker and T-tube mixing to microfluidic approaches. Particular emphasis is placed on examples of microfluidic nanoparticle formulations that have been tested in vitro and in vivo. Fine control of process parameters afforded by microfluidics allows unprecedented optimization of nanoparticle quality and encapsulation efficiency. Automation improves the reproducibility and optimization of formulations. Furthermore, the continuous nature of the microfluidic process is inherently scalable, allowing optimization at low volumes, which is advantageous with scarce or costly materials, as well as scale-up through process parallelization. Given these advantages, microfluidics is poised to become the new paradigm for nanomedicine formulation and production.
NASA Astrophysics Data System (ADS)
Li, Junfeng; Wei, Zhengying
2017-11-01
Process optimization and microstructure characterization of Ti6Al4V manufactured by selective laser melting (SLM) were investigated in this article. The relative density of samples fabricated by SLM is influenced by the main process parameters: laser power, scan speed and hatch distance. The volume energy density (VED) was defined to account for the combined effect of the main process parameters on the relative density. The results show that the relative density changes with VED, and the optimized process window is 55-60 J/mm3. Furthermore, comparing laser power, scan speed and hatch distance by the Taguchi method, it was found that scan speed had the greatest effect on the relative density. Comparing the microstructure of the cross-section of the specimen at different scan speeds, it was found that the microstructures at different speeds had similar characteristics: all consisted of needle-like martensite distributed in the β matrix, but the microstructure becomes finer with increasing scan speed, while lower scan speeds lead to coarsening of the microstructure.
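The volume energy density combines the main process parameters into a single exposure measure. The common convention (which may differ in detail from this paper's definition) also divides by layer thickness to obtain J/mm3; the parameter values below are typical SLM settings chosen for illustration, not the study's:

```python
def volume_energy_density(power_w, scan_speed_mm_s, hatch_mm, layer_mm):
    """VED in J/mm^3: laser power divided by the volume swept per second
    (scan speed x hatch distance x layer thickness). The layer-thickness
    term is the standard convention; the paper's exact definition may vary."""
    return power_w / (scan_speed_mm_s * hatch_mm * layer_mm)

ved = volume_energy_density(power_w=280.0, scan_speed_mm_s=1200.0,
                            hatch_mm=0.14, layer_mm=0.03)
assert 55.0 <= ved <= 60.0  # lands inside the reported optimal window
```

Because scan speed sits in the denominator alongside hatch distance and layer thickness, different parameter combinations can yield the same VED, which is why the Taguchi comparison of individual parameters is still informative.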
Optimizing product life cycle processes in design phase
NASA Astrophysics Data System (ADS)
Faneye, Ola. B.; Anderl, Reiner
2002-02-01
Life cycle concepts not only serve as a basis for helping product developers understand the dependencies between products and their life cycles, they also help in identifying potential opportunities for improvement in products. Common traditional concepts focus mainly on energy and material flow across life phases, necessitating the availability of metrics derived from a reference product. Knowledge of life cycle processes gained from an existing product is directly reused in its redesign. Nevertheless, depending on sales volume, the environmental impact before product optimization can be substantial. With modern information technologies today, computer-aided life cycle methodologies can be applied well before product use. On the basis of a virtual prototype, life cycle processes are analyzed and optimized using simulation techniques. This preventive approach not only helps minimize (or even eliminate) environmental burdens caused by the product; costs incurred due to changes in the real product can also be avoided. The paper highlights the relationship between product and life cycle and presents a computer-based methodology for optimizing the product life cycle during design, as presented by SFB 392: Design for Environment - Methods and Tools at Technical University, Darmstadt.
NASA Astrophysics Data System (ADS)
Vignesh, S.; Dinesh Babu, P.; Surya, G.; Dinesh, S.; Marimuthu, P.
2018-02-01
The ultimate goal of all production entities is to select the process parameters that yield maximum strength and minimum wear and friction. Friction and wear are serious problems in most industries and are influenced by the working set of parameters, the oxidation characteristics and the mechanism of wear formation. The experimental input parameters, sliding distance, applied load, and temperature, are utilized in finding the optimized solution for achieving the desired output responses: coefficient of friction, wear rate, and volume loss. The optimization is performed with the Elitist Non-dominated Sorting Genetic Algorithm (NSGA-II), an evolutionary algorithm. The regression equations obtained using Response Surface Methodology (RSM) are used in determining the optimum process parameters. Further, the results achieved through the desirability approach in RSM are compared with the optimized solution obtained through NSGA-II. The results show that the proposed evolutionary technique is more effective and faster than the desirability approach.
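The ranking step at the heart of NSGA-II is non-dominated sorting: a candidate survives the first front if no other candidate is at least as good in every objective and strictly better in at least one. A minimal sketch with all objectives minimized (the response values are hypothetical, not the study's data):

```python
def dominates(a, b):
    """a dominates b when a is no worse in every objective and strictly
    better in at least one (all objectives minimized, e.g. coefficient
    of friction, wear rate, volume loss)."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """First non-dominated front -- the core ranking step of NSGA-II."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# hypothetical (coefficient of friction, wear rate) response pairs
pts = [(0.42, 1.9), (0.35, 2.4), (0.50, 1.5), (0.45, 2.8)]
front = pareto_front(pts)
assert (0.45, 2.8) not in front   # dominated by (0.42, 1.9)
assert len(front) == 3            # the other three trade off against each other
```

Unlike the desirability approach, which collapses the objectives into one score before optimizing, this ranking preserves the full trade-off surface until a decision maker picks a point from the front.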
Translator for Optimizing Fluid-Handling Components
NASA Technical Reports Server (NTRS)
Landon, Mark; Perry, Ernest
2007-01-01
A software interface has been devised to facilitate optimization of the shapes of valves, elbows, fittings, and other components used to handle fluids under extreme conditions. This software interface translates data files generated by PLOT3D (a NASA grid-based plotting-and-data-display program) and by computational fluid dynamics (CFD) software into a format in which the files can be read by Sculptor, which is a shape-deformation-and-optimization program. Sculptor enables the user to interactively, smoothly, and arbitrarily deform the surfaces and volumes in two- and three-dimensional CFD models. Sculptor also includes design-optimization algorithms that can be used in conjunction with the arbitrary-shape-deformation components to perform automatic shape optimization. In the optimization process, the output of the CFD software is used as feedback while the optimizer strives to satisfy design criteria that could include, for example, improved values of pressure loss, velocity, flow quality, mass flow, etc.
Optimal sampling and quantization of synthetic aperture radar signals
NASA Technical Reports Server (NTRS)
Wu, C.
1978-01-01
Some theoretical and experimental results on optimal sampling and quantization of synthetic aperture radar (SAR) signals are presented, including a derived theoretical relationship between the pixel signal-to-noise ratio of processed SAR images and the number of quantization bits per sampled signal, assuming homogeneous extended targets. With this relationship known, the problem of optimally allocating a fixed data bit-volume (for a specified surface area and resolution criterion) between the number of samples and the number of bits per sample can be solved. The results indicate that to achieve the best possible image quality for a fixed bit rate and a given resolution criterion, one should quantize individual samples coarsely and thereby maximize the number of multiple looks. The theoretical results are then compared with simulation results obtained by processing aircraft SAR data.
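The trade-off described, splitting a fixed bit budget between the number of samples (looks) and bits per sample, can be sketched with a toy quality model. The model below is an assumption for illustration, not the paper's derived relationship: it only encodes that multilooking averages down speckle while coarse quantization adds noise.

```python
# Toy bit-allocation model (assumed, not from the paper): multilook
# averaging improves a relative SNR proxy in proportion to the number of
# looks, while b-bit uniform quantization adds noise power ~ 2**(-2*b).
def image_snr(total_bits, bits_per_sample, speckle_var=1.0):
    looks = total_bits // bits_per_sample       # samples affordable in the budget
    quant_var = 2.0 ** (-2 * bits_per_sample)   # quantization noise power
    return looks / (speckle_var + quant_var)    # relative quality proxy

budget = 64                                     # fixed data bit-volume
best = max(range(1, 9), key=lambda b: image_snr(budget, b))
```

Even this crude model reproduces the abstract's qualitative conclusion: the best allocation quantizes coarsely (1 bit per sample here) to maximize the number of looks.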
Aerodynamic shape optimization of a HSCT type configuration with improved surface definition
NASA Technical Reports Server (NTRS)
Thomas, Almuttil M.; Tiwari, Surendra N.
1994-01-01
Two distinct parametrization procedures for generating free-form surfaces to represent aerospace vehicles are presented. The first is representation using spline functions such as nonuniform rational B-splines (NURBS); the second is a novel geometric parametrization using solutions to a suitably chosen partial differential equation. The main idea is to develop a surface that is more versatile and can be used in an optimization process. An unstructured volume grid is generated by an advancing-front algorithm and solutions are obtained using an Euler solver. Grid sensitivity with respect to surface design parameters and aerodynamic sensitivity coefficients based on potential flow are obtained using an automatic differentiation precompiler software tool. Aerodynamic shape optimization of a complete aircraft with twenty-four design variables is performed. High-speed civil transport (HSCT) configurations are targeted to demonstrate the process.
Guo, Liang; Tan, Shufang; Li, Xiao; Lee, Hian Kee
2016-03-18
An automated procedure, combining low-density-solvent-based solvent-demulsification dispersive liquid-liquid microextraction (DLLME) with gas chromatography-mass spectrometry analysis, was developed for the determination of polycyclic aromatic hydrocarbons (PAHs) in environmental water samples. Capitalizing on a two-rail commercial autosampler, the procedure enabled fast solvent transfer using a large-volume syringe dedicated to the DLLME process and convenient extract collection using a small-volume microsyringe for better GC performance. Extraction parameters, including the type and volume of the extraction solvent, the type and volume of the dispersive and demulsification solvents, the extraction and demulsification times, and the speed of solvent injection, were investigated and optimized. Under the optimized conditions, the linearity ranged from 0.1 to 50 μg/L, 0.2 to 50 μg/L, or 0.5 to 50 μg/L, depending on the analyte. Limits of detection were determined to be between 0.023 and 0.058 μg/L. The method was applied to determine PAHs in environmental water samples. Copyright © 2016 Elsevier B.V. All rights reserved.
Optimal synthesis and design of the number of cycles in the leaching process for surimi production.
Reinheimer, M Agustina; Scenna, Nicolás J; Mussati, Sergio F
2016-12-01
Water consumption required during the leaching stage in the surimi manufacturing process strongly depends on the design and on the number and size of stages connected in series for the soluble protein extraction target, and it is considered the main contributor to the operating costs. Therefore, the optimal synthesis and design of the leaching stage is essential to minimize the total annual cost. In this study, a mathematical optimization model for the optimal design of the leaching operation is presented. Precisely, a detailed Mixed Integer Nonlinear Programming (MINLP) model including operating and geometric constraints was developed based on our previous optimization model (NLP model). Aspects of quality, water consumption and the main operating parameters were considered. The minimization of total annual cost, which trades off investment and operating costs, led to an optimal solution with fewer stages (2 instead of 3) and larger leaching tanks compared with previous results. An analysis was performed to investigate how the optimal solution was influenced by variations in the unit costs of fresh water, waste treatment and capital investment.
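The capital-versus-operating trade-off over an integer number of stages can be sketched by enumeration. All numbers and the cost model below are hypothetical stand-ins; the paper's MINLP includes detailed operating and geometric constraints that this sketch omits:

```python
# Hypothetical cost model for choosing the number of leaching stages in
# series: more stages cost more capital, but each stage then needs to
# extract less, reducing fresh-water use. Illustrative only.

def total_annual_cost(n_stages, target_recovery=0.95,
                      capital_per_stage=20000.0, water_cost=1.5):
    # Per-stage recovery needed so the stages in series meet the target:
    per_stage = 1 - (1 - target_recovery) ** (1 / n_stages)
    # Assumed: water use per stage grows sharply with per-stage recovery demand.
    water_per_stage = 1e4 * per_stage / (1 - per_stage)
    return n_stages * (capital_per_stage + water_cost * water_per_stage)

best_n = min(range(1, 6), key=total_annual_cost)
```

With these made-up coefficients the minimum falls at three stages; the point is only that the cost curve is U-shaped in the stage count, which is the structure the MINLP optimizes rigorously.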
Computer simulation of storm runoff for three watersheds in Albuquerque, New Mexico
Knutilla, R.L.; Veenhuis, J.E.
1994-01-01
Rainfall-runoff data from three watersheds were selected for calibration and verification of the U.S. Geological Survey's Distributed Routing Rainfall-Runoff Model. The watersheds chosen are residentially developed. The conceptually based model uses an optimization process that adjusts selected parameters to achieve the best fit between measured and simulated runoff volumes and peak discharges. Three of these optimization parameters represent soil-moisture conditions, three represent infiltration, and one accounts for effective impervious area. Each watershed modeled was divided into overland-flow segments and channel segments. The overland-flow segments were further subdivided to reflect pervious and impervious areas. Each overland-flow and channel segment was assigned representative values of area, slope, percentage of imperviousness, and roughness coefficients. Rainfall-runoff data for each watershed were separated into two sets for use in calibration and verification. For model calibration, seven input parameters were optimized to attain a best fit of the data. For model verification, parameter values were set using values from model calibration. The standard error of estimate for calibration of runoff volumes ranged from 19 to 34 percent, and for peak discharge calibration ranged from 27 to 44 percent. The standard error of estimate for verification of runoff volumes ranged from 26 to 31 percent, and for peak discharge verification ranged from 31 to 43 percent.
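The calibration fit reported above is summarized as a percent standard error of estimate between measured and simulated values. The abstract does not give the exact formula the study used; the sketch below assumes a log-residual definition commonly used in rainfall-runoff calibration, with hypothetical volumes:

```python
import math

# Percent standard error of estimate from measured vs. simulated runoff
# volumes. Assumption: log-residual form often used in rainfall-runoff
# studies; the study's exact definition is not stated in the abstract.
def percent_standard_error(measured, simulated):
    residuals = [math.log(s / m) for m, s in zip(measured, simulated)]
    n = len(residuals)
    s2 = sum(r * r for r in residuals) / (n - 1)
    return 100.0 * math.sqrt(s2)

# Hypothetical storm runoff volumes (measured vs. simulated):
measured  = [1.2, 3.4, 0.8, 2.1, 5.0]
simulated = [1.0, 3.9, 0.9, 2.4, 4.2]
err = percent_standard_error(measured, simulated)
```

For these made-up events the error comes out near 17 percent, i.e. in the same general range as the 19-44 percent values reported for the three watersheds.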
Brito, Isabelle L; de Souza, Evandro Leite; Felex, Suênia Samara Santos; Madruga, Marta Suely; Yamashita, Fábio; Magnani, Marciane
2015-09-01
The aim of this study was to develop a gluten-free formulation of quinoa (Chenopodium quinoa Willd.)-based cookies using a mixture experimental design to optimize a ternary mixture of quinoa flour, quinoa flakes and corn starch for the parameters of colour, specific volume and hardness. Nutritional and sensory aspects of the optimized formulation were also assessed. Corn starch had a positive effect on the lightness of the cookies, but increased amounts of quinoa flour and quinoa flakes in the mixture resulted in a darker product. Quinoa flour showed a negative effect on the specific volume, producing less bulky cookies, and quinoa flour and quinoa flakes had a positive synergistic effect on the hardness of the cookies. According to the results, and considering the desirability profile for colour, hardness and specific volume in gluten-free cookies, the optimized formulation contains 30 % quinoa flour, 25 % quinoa flakes and 45 % corn starch. The quinoa-based cookie obtained was characterized as a product rich in dietary fibre and a good source of essential amino acids, linolenic acid and minerals, with good sensory acceptability. These findings report for the first time the application of quinoa processed as flour and flakes, in mixture with corn starch, as an alternative ingredient for formulations of gluten-free cookie-type biscuits.
Dolcos, Sanda; Hu, Yifan; Iordan, Alexandru D; Moore, Matthew; Dolcos, Florin
2016-02-01
Converging evidence identifies trait optimism and the orbitofrontal cortex (OFC) as personality and brain factors influencing anxiety, but the nature of their relationships remains unclear. Here, the mechanisms underlying the protective role of trait optimism and of increased OFC volume against symptoms of anxiety were investigated in 61 healthy subjects, who completed measures of trait optimism and anxiety, and underwent structural scanning using magnetic resonance imaging. First, the OFC gray matter volume (GMV) was associated with increased optimism, which in turn was associated with reduced anxiety. Second, trait optimism mediated the relation between the left OFC volume and anxiety, thus demonstrating that increased GMV in this brain region protects against symptoms of anxiety through increased optimism. These results provide novel evidence about the brain-personality mechanisms protecting against anxiety symptoms in healthy functioning, and identify potential targets for preventive and therapeutic interventions aimed at reducing susceptibility and increasing resilience against emotional disturbances. © The Author (2015). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
2007-11-14
Artificial intelligence and education, Volume 1: Learning environments and tutoring systems. Hillsdale, NJ: Erlbaum. Wickens, C.D. (1984). Processing…and how to use it to best optimize the learning process. Some researchers (see Loftin & Savely, 1991) have proposed adding intelligent systems to the…is experienced as the cognitive centers in an individual's brain process visual, tactile, kinesthetic, olfactory, proprioceptive, and auditory
Mathematical Optimization Techniques
NASA Technical Reports Server (NTRS)
Bellman, R. (Editor)
1963-01-01
The papers collected in this volume were presented at the Symposium on Mathematical Optimization Techniques held in the Santa Monica Civic Auditorium, Santa Monica, California, on October 18-20, 1960. The objective of the symposium was to bring together, for the purpose of mutual education, mathematicians, scientists, and engineers interested in modern optimization techniques. Some 250 persons attended. The techniques discussed included recent developments in linear, integer, convex, and dynamic programming as well as the variational processes surrounding optimal guidance, flight trajectories, statistical decisions, structural configurations, and adaptive control systems. The symposium was sponsored jointly by the University of California, with assistance from the National Science Foundation, the Office of Naval Research, the National Aeronautics and Space Administration, and The RAND Corporation, through Air Force Project RAND.
NASA Astrophysics Data System (ADS)
Bruder, Friedrich-Karl; Fäcke, Thomas; Grote, Fabian; Hagen, Rainer; Hönel, Dennis; Koch, Eberhard; Rewitz, Christian; Walze, Günther; Wewer, Brita
2017-05-01
Volume holographic optical elements (vHOEs) have gained wide attention as optical combiners for use in smart glasses and augmented reality (SG and AR, respectively) consumer electronics and in automotive head-up display applications. The unique characteristics of these diffractive grating structures - being lightweight, thin and flat - make them perfectly suitable for use in integrated optical components like spectacle lenses and car windshields. While transparent in the off-Bragg condition, they provide full color capability and adjustable diffraction efficiency. The instant-developing photopolymer Bayfol® HX film provides an ideal technology platform for optimizing the performance of vHOEs in a wide range of applications. Important for any commercialization are simple and robust mass production schemes. In this paper, we present an efficient and easy-to-control one-beam recording scheme to copy a so-called master vHOE in a step-and-repeat process. In this contact-copy scheme, Bayfol® HX film is laminated to a master stack before being exposed by a scanning laser line. Subsequently, the film is delaminated in a controlled fashion and bleached. We explain the working principles of the one-beam copy concept, discuss the opto-mechanical construction and outline the downstream process of the installed vHOE replication line. Moreover, we focus on aspects like performance optimization of the copy vHOE, the bleaching process and the suitable choice of protective cover film in the re-lamination step, preparing the integration of the vHOE into the final device.
Numerical grid generation in computational field simulations. Volume 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soni, B.K.; Thompson, J.F.; Haeuser, J.
1996-12-31
To enhance CFS technology to its next level of applicability (i.e., to create acceptance of CFS in integrated product and process development involving multidisciplinary optimization), the basic requirements are: rapid turn-around time, reliable and accurate simulation, affordability, and appropriate linkage to other engineering disciplines. In response to this demand, there has been considerable growth in grid-generation-related research activities involving automation, parallel processing, linkage with CAD-CAM systems, CFS with dynamic motion and moving boundaries, and strategies and algorithms associated with multi-block structured, unstructured, hybrid, hexahedral, and Cartesian grids, along with applicability to various disciplines including biomedical, semiconductor, geophysical, ocean modeling, and multidisciplinary optimization.
NASA Astrophysics Data System (ADS)
Wahyudi, Slamet Imam; Adi, Henny Pratiwi; Santoso, Esti; Heikoop, Rick
2017-03-01
Settlement in the Jati District, Kudus Regency, Central Java Province, Indonesia, is growing rapidly. Former paddy fields are being converted into new residential, industrial and office areas. Rain water collects in the small Kencing River, which flows into the larger Wulan River. Under current conditions, however, during high-intensity rainfall the water elevation of the Wulan River rises above that of the Kencing River, so that water cannot drain by gravity and the area is inundated. To reduce the flooding, a polder drainage system is required, providing a long channel as water storage and pumping water into the Wulan River. How can optimal values of the water storage volume, the drainage channels and the pump capacity be obtained? The answer determines the efficiency of the operation and maintenance of the polder system. The purpose of this study is to develop scenarios for water storage volume and water gate operation, and to obtain the optimal pump operation for removing water from the Kencing River to the Wulan River. The research method consists of several steps. First, a detailed field orientation is carried out, followed by the collection of secondary data including maps and rainfall records. The maps are processed into the watershed (catchment area), while the rainfall data are processed into runoff discharge. The team then collects primary data by topographic surveying to determine the surface area and volume of the water storage. The analysis determines the flood discharge, the channel hydraulics, the water storage volume and the corresponding pump capacity. By simulating the storage volume and pump capacity under several scenarios, the optimum values can be determined. The results serve as a guideline for the construction, operation and maintenance of the polder drainage system.
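The storage-versus-pump trade-off at the heart of the polder design can be sketched with a simple mass balance over an inflow hydrograph. The hydrograph and pump capacities below are hypothetical; the study derives them from measured topography and rainfall:

```python
# Mass-balance sketch of the polder trade-off: a larger pump needs less
# storage, while a larger storage channel tolerates a smaller pump.
# Numbers are illustrative only.

def required_storage(inflow_m3s, pump_m3s, dt_s=3600.0):
    """Peak storage volume (m^3) needed so the polder never overflows."""
    storage = peak = 0.0
    for q_in in inflow_m3s:
        storage = max(0.0, storage + (q_in - pump_m3s) * dt_s)
        peak = max(peak, storage)
    return peak

hydrograph = [0.5, 2.0, 4.0, 3.0, 1.0, 0.5]   # hourly inflow, m^3/s (hypothetical)
small_pump = required_storage(hydrograph, pump_m3s=1.0)
large_pump = required_storage(hydrograph, pump_m3s=2.5)
```

For this made-up storm, the 1.0 m³/s pump requires 21,600 m³ of storage while the 2.5 m³/s pump requires only 7,200 m³; scenario simulation then picks the combination with the lowest overall cost.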
Fojtu, Michaela; Gumulec, Jaromir; Balvan, Jan; Raudenska, Martina; Sztalmachova, Marketa; Polanska, Hana; Smerkova, Kristyna; Adam, Vojtech; Kizek, Rene; Masarik, Michal
2014-02-01
The determination of serum mRNA has gained a lot of attention in recent years, particularly from the perspective of disease markers. Streptavidin-modified paramagnetic particles (SMPs) are an interesting technique, mainly due to possible automated isolation and high efficiency. The aim of this study was to optimize a serum isolation protocol to reduce the consumption of chemicals and sample volume. The following factors were optimized: the amounts of (i) paramagnetic particles, (ii) oligo(dT)20 probe, and (iii) serum, and (iv) the binding sequence (SMPs, oligo(dT)20, serum vs. oligo(dT)20, serum and SMPs). RNA content was measured, and the expression of metallothionein-2A as a possible prostate cancer marker was analyzed to demonstrate measurable RNA content suitable for RT-PCR detection. Isolation is possible over a range of serum volumes (10-200 μL) without altering efficiency or purity. The amount of SMPs can be reduced to as little as 5 μL, with optimal results within 10-30 μL. The volume of oligo(dT)20 does not affect efficiency when used within 0.1-0.4 μL. The optimized protocol was also modified to fit the needs of automated one-step single-tube analysis, with efficiency identical to the conventional setup. The one-step analysis protocol is considered a promising simplification, making RNA isolation suitable for an automated process. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Wang, Lan-Ying; Cheong, Kit-Leong; Wu, Ding-Tao; Meng, Lan-Zhen; Zhao, Jing; Li, Shao-Ping
2015-08-01
The optimal fermentation conditions and medium for the production of bioactive polysaccharides from the mycelium of Cordyceps sinensis fungus UM01 were investigated using orthogonal design and high performance size exclusion chromatography coupled with multi-angle laser light scattering and a refractive index detector (HPSEC-MALLS-RID). Results showed that the optimal temperature, initial pH, rotation speed, medium capacity (ratio of medium volume to flask volume) and inoculum volume for mycelium growth were 15 °C, pH 6.0, 150 rpm, 2/5 (v/v), and 3% (v/v), respectively. Furthermore, the bioactive polysaccharides from the mycelium of C. sinensis fungus UM01 were determined to be polysaccharide fractions with molecular weights above 10 kDa. The optimal fermentation medium was determined as a composition of glucose 30.0 g/L, sucrose 30.0 g/L, KH2PO4 1.0 g/L, CaCl2 0.5 g/L, yeast extract 3.0 g/L, and MgCl2 0.1 g/L, according to the maximum amount of bioactive polysaccharides (486.16±19.60 mg/L) measured by HPSEC-MALLS/RID. These results are helpful for establishing an efficient and controllable fermentation process for the industrial production of bioactive polysaccharides from C. sinensis UM01, and beneficial for developing a unique health and functional product in the future. Copyright © 2015 Elsevier B.V. All rights reserved.
Data Storing Proposal from Heterogeneous Systems into a Specialized Repository
NASA Astrophysics Data System (ADS)
Václavová, Andrea; Tanuška, Pavol; Jánošík, Ján
2016-12-01
The aim of this paper is to analyze and propose an appropriate system for processing and simultaneously storing a vast volume of structured and unstructured data. The paper consists of three parts. The first part addresses the issue of structured and unstructured data. The second part provides a detailed analysis of data repositories and a subsequent evaluation indicating which system would be optimal for the given type and volume of data. The third part focuses on using the gathered information to transfer data to the proposed repository.
Study on Edge Thickening Flow Forming Using the Finite Elements Analysis
NASA Astrophysics Data System (ADS)
Kim, Young Jin; Park, Jin Sung; Cho, Chongdu
2011-08-01
This study examines the forming features of the flow stress property and an incremental forming method for increasing the thickness of material. Recently, optimized forming methods have been widely studied through finite element analysis to optimize forming process conditions in many different forming fields. An optimal forming method should be adopted to meet geometric requirements, such as the reduction in volume per unit length of material in forging, rolling, spinning, etc. However, conventional studies have not dealt with the issue of increasing volume per unit length. For this study we use the finite element method and model a gear part of an automotive engine flywheel, which is a weld assembly of a plate and a gear of different thicknesses. In the simulation, an optimized forming condition for gear machining, considering the thickness of the outer edge of the flywheel, is studied using finite element analysis of the thickness-increasing forming method. It is concluded that the forming method for increasing the thickness per unit length for gear machining is reasonable, based on finite element analysis and forming tests.
Effects of solution volume on hydrogen production by pulsed spark discharge in ethanol solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xin, Y. B.; Sun, B., E-mail: sunb88@dlmu.edu.cn; Zhu, X. M.
2016-07-15
Hydrogen production from ethanol solution (ethanol/water) by pulsed spark discharge was optimized by varying the volume of the ethanol solution (liquid volume). The hydrogen yield initially increased and then decreased with increasing solution volume, reaching 1.5 l/min at a solution volume of 500 ml. The characteristics of the pulsed spark discharge were studied in this work; the results showed that the peak current intensity, the rate of current rise, and the energy efficiency of hydrogen production can be changed by varying the volume of ethanol solution. Meanwhile, the mechanism of hydrogen production was analyzed by monitoring the hydrogen production process and the state of the free radicals. The analysis showed that decreasing the retention time of gas production and appropriately increasing the volume of ethanol solution can enhance the hydrogen yield. Through this research, a high-yield, large-scale method of hydrogen production can be achieved that is more suitable for industrial application.
Conjecture Mapping to Optimize the Educational Design Research Process
ERIC Educational Resources Information Center
Wozniak, Helen
2015-01-01
While educational design research promotes closer links between practice and theory, reporting its outcomes from iterations across multiple contexts is often constrained by the volumes of data generated, and the context bound nature of the research outcomes. Reports tend to focus on a single iteration of implementation without further research to…
ERIC Educational Resources Information Center
Schlager, Kenneth J.
2008-01-01
This report describes a communications system engineering planning process that demonstrates an ability to design and deploy cost-effective broadband networks in low density rural areas. The emphasis in on innovative solutions and systems optimization because of the marginal nature of rural telecommunications infrastructure investments. Otherwise,…
NASA Astrophysics Data System (ADS)
Areeprasert, C.; Leelachaikul, P.; Jangkobpattana, G.; Phumprasop, K.; Kiattiwat, T.
2018-02-01
This paper presents an investigation of the carbonization process of simulated municipal solid waste (MSW). The simulated MSW consists of representative food residue (68%), plastic waste (20%), paper (8%), and textile (4%). Laboratory-scale carbonization was performed using a vertical-type pyrolyzer, varying the carbonization temperature (300, 350, 400, and 450 °C) and heating rate (5, 10, 15, and 20 °C/min). The biochar product was black in appearance and its volume was significantly reduced. A low carbonization temperature (300 °C) might not completely decompose the plastic materials in the MSW. Results showed that carbonization at a temperature of 400 °C with a heating rate of 5 °C/min was the optimal condition. The biochar yield at the optimal condition was 50.6%, with a heating value of 26.85 MJ/kg. The energy input of the process was attributed to water evaporation and the decomposition of plastics and paper, and the energy output was highest at the optimal condition. The energy output-to-input ratio was around 1.3-1.7, showing the feasibility of the carbonization process under all heating-rate conditions.
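The reported output-to-input ratio can be checked with a back-of-envelope energy balance. The biochar yield (50.6%) and heating value (26.85 MJ/kg) come from the abstract; the process heat demand below is an assumed placeholder, chosen only to land inside the reported 1.3-1.7 band:

```python
# Back-of-envelope energy check for the carbonization result.
char_yield = 0.506      # kg biochar per kg simulated MSW (from the abstract)
char_hhv = 26.85        # MJ/kg biochar (from the abstract)
process_heat = 8.0      # MJ per kg MSW, ASSUMED (water evaporation + decomposition)

energy_output = char_yield * char_hhv   # MJ recovered in biochar per kg MSW
ratio = energy_output / process_heat    # output-to-input energy ratio
```

With these numbers the biochar carries about 13.6 MJ per kg of feed, giving a ratio of roughly 1.7, consistent with the upper end of the abstract's range.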
NASA Astrophysics Data System (ADS)
Zhang, Pengpeng; Hunt, Margie; Happersett, Laura; Yang, Jie; Zelefsky, Michael; Mageras, Gig
2013-11-01
To develop an optimization algorithm for volumetric modulated arc therapy which incorporates an electromagnetic tracking (EMT) guided gating strategy and is robust to residual intra-fractional motion uncertainties. In a computer simulation, intra-fractional motion traces from prior treatments with EMT were converted to a probability distribution function (PDF), truncated using a patient specific action volume that encloses allowed deviations from the planned position, and renormalized to yield a new PDF with EMT-gated interventions. In lieu of a conventional planning target volume (PTV), multiple instances of clinical target volume (CTV) and organs at risk (OARs) were replicated and displaced to extreme positions inside the action volume representing possible delivery scenarios. When optimizing the volumetric modulated arc therapy plan, doses to the CTV and OARs were calculated as a sum of doses to the replicas weighted by the PDF to account for motion. A treatment plan meeting the clinical constraints was produced and compared to the counterpart conventional margin (PTV) plan. EMT traces from a separate testing database served to simulate motion during gated delivery. Dosimetric end points extracted from dose accumulations for each motion trace were utilized to evaluate potential clinical benefit. Five prostate cases from a hypofractionated protocol (42.5 Gy in 5 fractions) were retrospectively investigated. The patient specific gating window resulted in tight anterior and inferior action levels (∼1 mm) to protect rectal wall and bladder wall, and resulted in an average of four beam interruptions per fraction in the simulation. The robust-optimized plans achieved the same average CTV D95 coverage of 40.5 Gy as the PTV-optimized plans, but with reduced patient-averaged rectum wall D1cc by 2.2 Gy (range 0.7 to 4.7 Gy) and bladder wall mean dose by 2.9 Gy (range 2.0 to 3.4 Gy). 
Integration of an intra-fractional motion management strategy into the robust optimization process is feasible and may yield improved OAR sparing compared to the standard margin approach.
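The robust objective described above, summing doses to displaced CTV replicas weighted by the motion PDF, can be sketched as follows. The doses, displacement scenarios, and weights are hypothetical; the actual optimizer applies this weighting inside volumetric modulated arc therapy plan optimization:

```python
import numpy as np

# Sketch of the PDF-weighted dose used in the robust objective: the
# expected dose to a structure is the motion-probability-weighted sum of
# doses computed with the structure displaced to each scenario position.

def expected_dose(replica_doses, motion_pdf):
    """replica_doses: (n_scenarios, n_voxels) array; motion_pdf: scenario weights."""
    w = np.asarray(motion_pdf, dtype=float)
    w = w / w.sum()                 # renormalize, as after truncation by the gating window
    return w @ np.asarray(replica_doses, dtype=float)

# Three hypothetical displacement scenarios of a 4-voxel CTV (Gy), with
# the planned position most probable:
doses = [[40.0, 41.0, 42.0, 40.5],
         [38.0, 40.0, 41.0, 39.0],
         [39.0, 40.5, 41.5, 40.0]]
pdf = [0.6, 0.2, 0.2]
d = expected_dose(doses, pdf)
```

Each voxel's expected dose blends the planned-position dose with the rarer displaced scenarios, which is what lets the optimizer meet CTV coverage without a full PTV margin.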
NASA Astrophysics Data System (ADS)
Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.
2013-06-01
In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVHs estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. 
Smaller objects with fine details, such as the kidneys, required more iterations and less smoothing at early time points post-radiopharmaceutical administration but more smoothing and fewer iterations at later time points when the total organ activity was lower. The results of this study demonstrate the importance of using optimal reconstruction and regularization parameters. Optimal results were obtained with different parameters at each time point, but using a single set of parameters for all time points produced near-optimal dose-volume histograms.
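A cumulative dose-volume histogram of the kind evaluated above is simply, for each dose level, the fraction of the structure's voxels receiving at least that dose. A minimal sketch with hypothetical voxel doses:

```python
import numpy as np

# Cumulative DVH: DVH(d) = fraction of the structure's voxels receiving
# at least dose d. Voxel doses below are hypothetical.

def cumulative_dvh(voxel_doses, dose_bins):
    doses = np.asarray(voxel_doses, dtype=float)
    return np.array([(doses >= d).mean() for d in dose_bins])

voxel_doses = [1.0, 2.0, 2.0, 3.0, 4.0]   # Gy, one value per voxel
bins = [0.0, 1.5, 2.5, 3.5]
dvh = cumulative_dvh(voxel_doses, bins)   # volume fractions at each dose level
```

Noise and partial volume effects in the QSPECT activity estimate propagate into the voxel doses and hence distort this curve, which is why the reconstruction and regularization parameters matter.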
Mosaddeghi, Mohammad Reza; Pajoum Shariati, Farshid; Vaziri Yazdi, Seyed Ali; Nabi Bidhendi, Gholamreza
2018-06-21
The wastewater produced in a pulp and paper industry is one of the most polluted industrial wastewaters, and therefore its treatment requires complex processes. One of the simple and feasible processes in pulp and paper wastewater treatment is coagulation and flocculation. Overusing a chemical coagulant can produce a large volume of sludge and increase costs and health concerns. Therefore, the use of natural and plant-based coagulants has recently attracted the attention of researchers. One of the advantages of using Ocimum basilicum as a coagulant is a reduction in the amount of chemical coagulant required. In this study, the effect of basil mucilage has been investigated as a plant-based coagulant together with alum for treatment of paper recycling wastewater. Response surface methodology (RSM) was used to optimize the process of chemical coagulation based on a central composite rotatable design (CCRD). Quadratic models for colour reduction and TSS removal with coefficients of determination R² > 0.96 were obtained using analysis of variance. Under optimal conditions, removal efficiencies of colour and total suspended solids (TSS) were 85% and 82%, respectively.
DOT National Transportation Integrated Search
1988-10-01
This second volume of the study entitled, Optimizing Wartime Materiel Delivery: An Overview of DOD Containerization Efforts, outlines a framework for action to address containerization issues identified in Volume I. The objectives of the study inclu...
Stormwater runoff characterized by GIS determined source areas and runoff volumes.
Liu, Yang; Soonthornnonda, Puripus; Li, Jin; Christensen, Erik R
2011-02-01
Runoff coefficients are usually considered in isolation for each drainage area with resulting large uncertainties in the areas and coefficients. Accurate areas and coefficients are obtained here by optimizing runoff coefficients for characteristic Geographic Information Systems (GIS) subareas within each drainage area so that the resulting runoff coefficients of each drainage area are consistent with those obtained from runoff and rainfall volumes. Lack of fit can indicate that the ArcGIS information is inaccurate or, more likely, that the drainage area needs adjustment. Results for 18 drainage areas in Milwaukee, WI for 2000-2004 indicate runoff coefficients ranging from 0.123 for a mostly residential area to 0.679 for a freeway-related area, with a standard error of 0.047. Optimized runoff coefficients are necessary input parameters for monitoring, and for the analysis and design of in situ stormwater unit operations and processes for the control of both urban runoff quantity and quality.
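The fitting step described above, adjusting subarea runoff coefficients so predicted runoff matches measured volumes, can be sketched as a least-squares problem. The land-use classes, rainfall volumes, and "measured" runoff below are hypothetical, and the clip to [0, 1] stands in for a properly bound-constrained solver:

```python
import numpy as np

# Hypothetical data: 3 GIS land-use classes (e.g. roof, lawn, pavement)
# over 4 storm events.  A[j, i] = rainfall volume falling on class i
# during event j (subarea area x rainfall depth); V[j] = measured runoff.
A = np.array([[120., 300.,  80.],
              [ 60., 150.,  40.],
              [200., 500., 130.],
              [ 90., 220.,  60.]])
true_c = np.array([0.90, 0.10, 0.85])
V = A @ true_c                       # synthetic noise-free "measurements"

# Ordinary least squares for the class coefficients, clipped to the
# physically meaningful range [0, 1].
c, *_ = np.linalg.lstsq(A, V, rcond=None)
c = np.clip(c, 0.0, 1.0)
```

For this noise-free example the fit recovers the class coefficients exactly; with real rainfall data, residual lack of fit is the diagnostic the abstract mentions for inaccurate GIS information or drainage-area boundaries.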
Luo, Xiongbiao; Mori, Kensaku
2014-06-01
Endoscope 3-D motion tracking, which seeks to synchronize pre- and intra-operative images in endoscopic interventions, is usually performed as video-volume registration that optimizes the similarity between endoscopic video and pre-operative images. The tracking performance, in turn, depends significantly on whether a similarity measure can successfully characterize the difference between video sequences and volume rendering images driven by pre-operative images. The paper proposes a discriminative structural similarity measure, which uses the degradation of structural information and takes image correlation or structure, luminance, and contrast into consideration, to boost video-volume registration. By applying the proposed similarity measure to endoscope tracking, it was demonstrated to be more accurate and robust than several available similarity measures, e.g., local normalized cross correlation, normalized mutual information, modified mean square error, or normalized sum squared difference. Based on clinical data evaluation, the tracking error was reduced significantly from at least 14.6 mm to 4.5 mm, and processing was accelerated to more than 30 frames per second using a graphics processing unit.
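Structural similarity measures of the kind the paper builds on combine luminance, contrast, and structure (correlation) terms. A minimal global (single-window) variant in the spirit of Wang et al.'s SSIM, not the paper's discriminative measure itself, with illustrative stabilizing constants, might look like:

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Single-window structural similarity between two images in [0, 1]:
    product of a luminance term and a combined contrast/structure term."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    lum = (2 * mx * my + c1) / (mx**2 + my**2 + c1)
    con_struct = (2 * cov + c2) / (vx + vy + c2)
    return lum * con_struct

a = np.random.default_rng(0).random((32, 32))
s_same = ssim_global(a, a)          # identical images score 1.0
s_inv = ssim_global(a, 1.0 - a)     # inverted image: negative covariance
```

The covariance term is what carries the "structure" information: a rendered volume view that matches the video frame's edge structure scores high even when overall brightness differs, which is why such measures can outperform plain mean-square error for video-volume registration.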
Study of Research and Development Processes through Fuzzy Super FRM Model and Optimization Solutions
Sârbu, Flavius Aurelian; Moga, Monika; Calefariu, Gavrilă; Boșcoianu, Mircea
2015-01-01
The aim of this study is to measure resources for R&D (research and development) at the regional level in Romania and to obtain primary data important for making the right decisions to increase competitiveness and development based on economic knowledge. As motivation, we emphasize that the Super Fuzzy FRM model allows us to determine the state of R&D processes at the regional level using a means different from the statistical survey, while the two optimization methods provide optimization solutions for the R&D actions of the enterprises. To fulfill the above-mentioned aim, in this application-oriented paper we used a questionnaire and, for the interpretation of the results, the Super Fuzzy FRM model, which represents the main novelty of our paper: this theory provides a formalism based on matrix calculus, which allows processing of large volumes of information and delivers results difficult or impossible to see through statistical processing. A further novelty of the paper is the set of optimization solutions presented in this work, given for the situation when the sales price is variable and the quantity sold is constant in time, and for the reverse situation. PMID:25821846
Process optimization by use of design of experiments: Application for liposomalization of FK506.
Toyota, Hiroyasu; Asai, Tomohiro; Oku, Naoto
2017-05-01
Design of experiments (DoE) can accelerate the optimization of drug formulations, especially for complex formulations such as drugs that use delivery systems. Administration of FK506 encapsulated in liposomes (FK506 liposomes) is an effective approach to treat acute stroke in animal studies. To provide FK506 liposomes as a brain protective agent, it is necessary to manufacture these liposomes with good reproducibility. The objective of this study was to confirm the usefulness of DoE for the process-optimization study of FK506 liposomes. The Box-Behnken design was used to evaluate the effect of the process parameters on the properties of FK506 liposomes. The results of multiple regression analysis showed that there was an interaction between the hydration temperature and the freeze-thaw cycle on both the particle size and encapsulation efficiency. An increase in the PBS hydration volume resulted in an increase in encapsulation efficiency. Process parameters had no effect on the ζ-potential. The multiple regression equation showed good predictability of the particle size and the encapsulation efficiency. These results indicated that manufacturing conditions must be taken into consideration to prepare liposomes with desirable properties. DoE would thus be a promising approach to optimizing the conditions for the manufacturing of liposomes. Copyright © 2017 Elsevier B.V. All rights reserved.
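For reference, the three-factor Box-Behnken layout used in studies like this one (12 edge-midpoint runs, with each pair of factors at ±1 and the third at the centre) can be generated in a few lines; the choice of three centre-point replicates below is an assumption, since the abstract does not state the run count:

```python
import itertools
import numpy as np

def box_behnken_3(n_center=3):
    """Three-factor Box-Behnken design in coded units (-1, 0, +1):
    all +/-1 combinations for each pair of factors with the remaining
    factor at 0, plus centre-point replicates."""
    runs = []
    for i, j in itertools.combinations(range(3), 2):
        for a, b in itertools.product((-1, 1), repeat=2):
            row = [0, 0, 0]
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0, 0, 0]] * n_center
    return np.array(runs, float)

X = box_behnken_3()      # 15 runs x 3 coded factors
```

The quadratic response-surface model (main effects, squares, and pairwise interactions such as the hydration-temperature x freeze-thaw term reported above) is then fitted to the measured responses by ordinary multiple regression.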
Haghbin, Amin; Liaghat, Gholamhossein; Arabi, Amir Masoud; Pol, Mohammad Hossein
2017-01-01
In this work, an electrophoretic deposition (EPD) technique has been used for deposition of carbon nanotubes (CNTs) on the surface of glass fiber textures (GTs) to increase the volume conductivity and the interlaminar shear strength (ILSS) of CNT/glass fiber-reinforced polymer (GFRP) composites. Comprehensive experimental studies have been conducted to establish the influence of electric field strength, CNT concentration in the EPD suspension, surface quality of the GTs, and process duration on the quality of the deposited CNT layers. CNT deposition increased remarkably when the surface of the glass fibers was treated with coupling agents. Deposition of CNTs was optimized by measuring the CNT deposition mass and process current density diagrams. The optimum field strength increased the CNT deposition mass by a factor of about 8.5, and the optimum suspension concentration increased the deposition rate by a factor of about 5.5. In the optimum experimental setting, the current density values of EPD were bounded between 0.5 and 1 mA/cm². Based on the cumulative deposition diagram, it was found that the first three minutes of EPD are the effective deposition time. Applying optimized EPD in composite fabrication of treated GTs caused a drastic improvement, on the order of 10⁸ times, in the volume conductivity of the nanocomposite laminate in comparison with simple GT specimens. Optimized CNT deposition also enhanced the ILSS of hierarchical nanocomposites by 42%. PMID:28937635
Characterizing and Optimizing the Performance of the MAESTRO 49-Core Processor
2014-03-27
process large volumes of data, it is necessary during testing to vary the dimensions of the inbound data matrix to determine what effect this has on the ... needed that can process the extra data these systems seek to collect. However, the space environment presents a number of threats, such as ambient or ... induced faults, and that also have sufficient computational power to handle the large flow of data they encounter. This research investigates one
USAF Logistics Process Optimization Study for the Aircraft Asset Sustainment Process. Volume 1.
1998-12-31
solely to have a record that could be matched with the CMOS receipt data. (This problem is caused by DLA systems that currently do not populate CMOS with ... unable to obtain passwords to the Depot D035 systems. Figure 16 shows daily savings as of 30 September 1998 (current time frame) and projects savings ... Engineering, modeling, and systems/software development company; LAN Local Area Network; LFA Large Frame Aircraft; LMA Logistics Management Agency; LMR
Panayi, Efstathios; Peters, Gareth W; Kyriakides, George
2017-01-01
Quantifying the effects of environmental factors over the duration of the growing process on Agaricus Bisporus (button mushroom) yields has been difficult, as common functional data analysis approaches require fixed length functional data. The data available from commercial growers, however, is of variable duration, due to commercial considerations. We employ a recently proposed regression technique termed Variable-Domain Functional Regression in order to be able to accommodate these irregular-length datasets. In this way, we are able to quantify the contribution of covariates such as temperature, humidity and water spraying volumes across the growing process, and for different lengths of growing processes. Our results indicate that optimal oxygen and temperature levels vary across the growing cycle and we propose environmental schedules for these covariates to optimise overall yields.
NASA Technical Reports Server (NTRS)
Welstead, Jason; Crouse, Gilbert L., Jr.
2014-01-01
Empirical sizing guidelines such as tail volume coefficients have long been used in the early aircraft design phases for sizing stabilizers, resulting in conservatively stable aircraft. While successful, this results in increased empty weight, reduced performance, and greater procurement and operational cost relative to an aircraft with optimally sized surfaces. Including flight dynamics in the conceptual design process allows the design to move away from empirical methods while implementing modern control techniques. A challenge of flight dynamics and control is the numerous design variables, which are changing fluidly throughout the conceptual design process, required to evaluate the system response to some disturbance. This research focuses on addressing that challenge not by implementing higher order tools, such as computational fluid dynamics, but instead by linking the lower order tools typically used within the conceptual design process so each discipline feeds into the other. In this research, flight dynamics and control was incorporated into the conceptual design process along with the traditional disciplines of vehicle sizing, weight estimation, aerodynamics, and performance. For the controller, a linear quadratic regulator structure with constant gains has been specified to reduce the user input. Coupling all the disciplines in the conceptual design phase allows the aircraft designer to explore larger design spaces where stabilizers are sized according to dynamic response constraints rather than historical static margin and volume coefficient guidelines.
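A constant-gain linear quadratic regulator of the kind specified above can be sketched by iterating the discrete-time Riccati recursion to a fixed point; the two-state system below is a hypothetical toy, not an aircraft model from this work:

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    """Constant LQR feedback gain K (u = -K x) for the discrete-time
    system x[k+1] = A x[k] + B u[k], by iterating the Riccati recursion
    P <- Q + A' P (A - B K) with K = (R + B' P B)^-1 B' P A."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# toy double-integrator-like plant (illustrative numbers only)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
K = dlqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
```

Because the gain is constant, only the Q and R weighting matrices need to be specified by the user, which is the "reduced user input" advantage the abstract cites; the closed-loop eigenvalues land inside the unit circle, i.e. the regulator stabilizes the plant.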
AAFE man-made noise experiment project. Volume 2: Project and experiment discussions
NASA Technical Reports Server (NTRS)
1974-01-01
An experiment for the acquisition and processing of man-made noise interference data on earth orbital altitudes is discussed. The objectives of the project are to confirm the results of analytical studies concerning radio frequency man-made noise in space. It is stated that the measurements of the amounts and types of noise in frequency bands of interest could allow the allocation and utilization of frequencies to be optimized and would also contribute to the engineering objective of optimizing flight receiving systems. A second objective of the project was to design and fabricate a noise measuring receiver which would demonstrate the feasibility of the experiment design under the project. The procedures for acquiring and processing the electromagnetic radiation data are discussed.
Analysis of laser surgery in non-melanoma skin cancer for optimal tissue removal
NASA Astrophysics Data System (ADS)
Fanjul-Vélez, Félix; Salas-García, Irene; Arce-Diego, José Luis
2015-02-01
Laser surgery is a commonly used technique for tissue ablation or the resection of malignant tumors. It presents advantages over conventional non-optical ablation techniques, like a scalpel or electrosurgery, such as the increased precision of the resected volume, minimization of scars and shorter recovery periods. Laser surgery is employed in medical branches such as ophthalmology or dermatology. The application of laser surgery requires the optimal adjustment of laser beam parameters, taking into account the particular patient and lesion. In this work we present a predictive tool for tissue resection in biological tissue after laser surgery, which allows an a priori knowledge of the tissue ablation volume, area and depth. The model employs a Monte Carlo 3D approach for optical propagation and a rate equation for plasma-induced ablation. The tool takes into account characteristics of the specific lesion to be ablated, mainly the geometric, optical and ablation properties. It also considers the parameters of the laser beam, such as the radius, spatial profile, pulse width, total delivered energy or wavelength. The predictive tool is applied to dermatology tumor resection, particularly to different types of non-melanoma skin cancer tumors: basocellular carcinoma, squamous cell carcinoma and infiltrative carcinoma. The ablation volume, area and depth are calculated for healthy skin and for each type of tumor as a function of the laser beam parameters. The tool could be used for laser surgery planning before the clinical application. The laser parameters could be adjusted for optimal resection volume, by personalizing the process to the particular patient and lesion.
NASA Astrophysics Data System (ADS)
Bucay, Igal; Helal, Ahmed; Dunsky, David; Leviyev, Alex; Mallavarapu, Akhila; Sreenivasan, S. V.; Raizen, Mark
2017-04-01
Ionization of atoms and molecules is an important process in many applications such as mass spectrometry. Ionization is typically accomplished by electron bombardment, which, while scalable to large volumes, is very inefficient due to the small cross section of electron-atom collisions. Photoionization methods can be highly efficient, but are not scalable due to the small ionization volume. Electric field ionization is accomplished using ultra-sharp conducting tips biased to a few kilovolts, but suffers from a low ionization volume and tip fabrication limitations. We report on our progress towards an efficient, robust, and scalable method of atomic and molecular ionization using orderly arrays of sharp, gold-doped silicon nanowires. As demonstrated in earlier work, the presence of the gold greatly enhances the ionization probability, which was attributed to an increase in available acceptor surface states. We present here a novel process used to fabricate the nanowire array, results of simulations aimed at optimizing the configuration of the array, and our progress towards demonstrating efficient and scalable ionization.
Sanchez Lopez, Hector; Freschi, Fabio; Trakic, Adnan; Smith, Elliot; Herbert, Jeremy; Fuentes, Miguel; Wilson, Stephen; Liu, Limei; Repetto, Maurizio; Crozier, Stuart
2014-05-01
This article aims to present a fast, efficient and accurate multi-layer integral method (MIM) for the evaluation of complex spatiotemporal eddy currents in nonmagnetic and thin volumes of irregular geometries induced by arbitrary arrangements of gradient coils. The volume of interest is divided into a number of layers, wherein the thickness of each layer is assumed to be smaller than the skin depth and where one of the linear dimensions is much smaller than the remaining two dimensions. The diffusion equation of the current density is solved in both the time-harmonic and transient domains. The experimentally measured magnetic fields produced by the coil and the induced eddy currents, as well as the corresponding time-decay constants, were in close agreement with the results produced by the MIM. Relevant parameters such as power loss and force induced by the eddy currents in a split cryostat were simulated using the MIM. The proposed method is capable of accurately simulating the current diffusion process inside thin volumes, such as the magnet cryostat. The method permits the a priori calculation of optimal pre-emphasis parameters. The MIM enables unified designs of gradient coil-magnet structures for an optimal mitigation of deleterious eddy current effects. Copyright © 2013 Wiley Periodicals, Inc.
Optimal design of the first stage of the plate-fin heat exchanger for the EAST cryogenic system
NASA Astrophysics Data System (ADS)
Qingfeng, JIANG; Zhigang, ZHU; Qiyong, ZHANG; Ming, ZHUANG; Xiaofei, LU
2018-03-01
The size of the heat exchanger is an important factor determining the dimensions of the cold box in helium cryogenic systems. In this paper, a counter-flow multi-stream plate-fin heat exchanger is optimized by means of a spatial interpolation method coupled with a hybrid genetic algorithm. Compared with empirical correlations, this spatial interpolation algorithm based on a kriging model can be adopted to more precisely predict the Colburn heat transfer factors and Fanning friction factors of offset-strip fins. Moreover, strict computational fluid dynamics simulations can be carried out to predict the heat transfer and friction performance in the absence of reliable experimental data. Within the constraints of heat exchange requirements, maximum allowable pressure drop, existing manufacturing techniques and structural strength, a mathematical model of an optimized design with discrete and continuous variables based on a hybrid genetic algorithm is established in order to minimize the volume. The results show that for the first-stage heat exchanger in the EAST refrigerator, the structural size could be decreased from the original 2.200 × 0.600 × 0.627 m³ to the optimized 1.854 × 0.420 × 0.340 m³, a large reduction in volume. The current work demonstrates that the proposed method could be a useful tool to achieve optimization in an actual engineering project during the practical design process.
Liao, Zhipeng; Chen, Junning; Li, Wei; Darendeliler, M Ali; Swain, Michael; Li, Qing
2016-06-01
This paper aimed to precisely locate centres of resistance (CRe) of maxillary teeth and investigate optimal orthodontic force by identifying the effective zones of orthodontic tooth movement (OTM) from hydrostatic stress thresholds in the periodontal ligament (PDL). We applied distally-directed tipping and bodily forces ranging from 0.075 N to 3 N (7.5 g to 300 g) onto human maxillary teeth. The hydrostatic stress was quantified from nonlinear finite element analysis (FEA) and compared with normal capillary and systolic blood pressure for driving the tissue remodelling. Two biomechanical stimuli featuring localised and volume-averaged hydrostatic stresses were introduced to describe OTM. Locations of CRe were determined through iterative FEA simulation. Accurate locations of the CRes of teeth and ranges of optimal orthodontic forces were obtained. Comparison with clinical results in the literature showed that the volume average of hydrostatic stress in the PDL describes the process of OTM more indicatively. The optimal orthodontic forces obtained from the in-silico modelling study echoed the clinical results in vivo. A universal moment-to-force (M/F) ratio is not recommended due to the variation in patients and loading points. Accurate computational determination of CRe location can be applied in practice to facilitate orthodontic treatment. Global measurement of hydrostatic pressure in the PDL better characterised OTM, implying that OTM occurs only when the majority of the PDL volume is critically stressed. The FEA results provide new insights into relevant orthodontic biomechanics and help establish the optimal orthodontic force for a specific patient. Copyright © 2016 Elsevier Ltd. All rights reserved.
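Per element, the hydrostatic stress compared against blood-pressure thresholds above is just a trace computation on the Cauchy stress tensor; the stress values below are illustrative only, not data from this study:

```python
import numpy as np

def hydrostatic_stress(sigma):
    """Hydrostatic (mean) stress of a 3x3 Cauchy stress tensor:
    sigma_h = trace(sigma) / 3; negative values indicate compression."""
    return np.trace(sigma) / 3.0

# illustrative element stress in the PDL, in kPa
sigma = np.array([[-4.0,  0.5,  0.0],
                  [ 0.5, -3.0,  0.2],
                  [ 0.0,  0.2, -2.0]])
sh = hydrostatic_stress(sigma)   # -3.0 kPa for this example
```

The volume-averaged stimulus the paper favours would then be a weighted mean of such per-element values, sum(sigma_h_e * V_e) / sum(V_e) over the PDL elements, rather than a single localised extremum.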
Dehydration, hemodynamics and fluid volume optimization after induction of general anesthesia.
Li, Yuhong; He, Rui; Ying, Xiaojiang; Hahn, Robert G
2014-01-01
Fluid volume optimization guided by stroke volume measurements reduces complications of colorectal and high-risk surgeries. We studied whether dehydration or a strong hemodynamic response to general anesthesia increases the probability of fluid responsiveness before surgery begins. Cardiac output, stroke volume, central venous pressure and arterial pressures were measured in 111 patients before general anesthesia (baseline), after induction and stepwise after three bolus infusions of 3 ml/kg of 6% hydroxyethyl starch 130/0.4 (n=86) or Ringer's lactate (n=25). A subgroup of 30 patients who received starch were preloaded with 500 ml of Ringer's lactate. Blood volume changes were estimated from the hemoglobin concentration and dehydration was estimated from evidence of renal water conservation in urine samples. Induction of anesthesia decreased the stroke volume to 62% of baseline (mean); administration of fluids restored this value to 84% (starch) and 68% (Ringer's). The optimized stroke volume index was clustered around 35-40 ml/m²/beat. Additional fluid boluses increased the stroke volume by ≥10% (a sign of fluid responsiveness) in patients with dehydration, as suggested by a low cardiac index and central venous pressure at baseline and by high urinary osmolality, creatinine concentration and specific gravity. Preloading and the hemodynamic response to induction did not correlate with fluid responsiveness. The blood volume expanded 2.3 (starch) and 1.8 (Ringer's) times over the infused volume. Fluid volume optimization did not induce a hyperkinetic state but ameliorated the decrease in stroke volume caused by anesthesia. Dehydration, but not the hemodynamic response to the induction, was correlated with fluid responsiveness.
NASA Astrophysics Data System (ADS)
Javad Kazemzadeh-Parsi, Mohammad; Daneshmand, Farhang; Ahmadfard, Mohammad Amin; Adamowski, Jan; Martel, Richard
2015-01-01
In the present study, an optimization approach based on the firefly algorithm (FA) is combined with a finite element simulation method (FEM) to determine the optimum design of pump and treat remediation systems. Three multi-objective functions in which pumping rate and clean-up time are design variables are considered and the proposed FA-FEM model is used to minimize operating costs, total pumping volumes and total pumping rates in three scenarios while meeting water quality requirements. The groundwater lift and contaminant concentration are also minimized through the optimization process. The obtained results show the applicability of the FA in conjunction with the FEM for the optimal design of groundwater remediation systems. The performance of the FA is also compared with the genetic algorithm (GA) and the FA is found to have a better convergence rate than the GA.
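A bare-bones firefly algorithm of the kind coupled with FEM above might look like the following sketch; the parameter values are illustrative, and the analytic cost function is a stand-in for the FEM-evaluated remediation objective:

```python
import numpy as np

def firefly(f, bounds, n=20, iters=100, beta0=1.0, gamma=1.0, alpha=0.1,
            seed=0):
    """Minimise f over a box with a basic firefly algorithm: each firefly
    moves toward every brighter (lower-cost) one, with attractiveness
    decaying as exp(-gamma * r^2), plus a shrinking random walk."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n, len(lo)))
    cost = np.apply_along_axis(f, 1, x)
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:            # j is brighter than i
                    beta = beta0 * np.exp(-gamma * np.sum((x[i] - x[j]) ** 2))
                    x[i] += beta * (x[j] - x[i]) \
                            + alpha * rng.uniform(-1, 1, len(lo))
                    x[i] = np.clip(x[i], lo, hi)
                    cost[i] = f(x[i])
        alpha *= 0.97                            # cool the random walk
    best = np.argmin(cost)
    return x[best], cost[best]

# sanity check on a convex bowl standing in for the FEM objective
xbest, fbest = firefly(lambda v: np.sum(v ** 2), [(-5, 5), (-5, 5)])
```

In the FA-FEM setting of the paper, each cost evaluation would be a finite element simulation of the contaminant plume for a candidate set of pumping rates and clean-up times, so the number of fireflies and iterations trades off optimization quality against simulation cost.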
Dose to mass for evaluation and optimization of lung cancer radiation therapy.
Tyler Watkins, William; Moore, Joseph A; Hugo, Geoffrey D; Siebers, Jeffrey V
2017-11-01
To evaluate potential organ-at-risk dose sparing by using dose-mass-histogram (DMH) objective functions compared with dose-volume-histogram (DVH) objective functions. Treatment plans were retrospectively optimized for 10 locally advanced non-small cell lung cancer patients based on DVH and DMH objectives. DMH objectives were the same as DVH objectives, but with mass replacing volume. Plans were normalized to dose to 95% of the PTV volume (PTV-D95v) or mass (PTV-D95m). For a given optimized dose, DVH and DMH were intercompared to ascertain dose-to-volume vs. dose-to-mass differences. Additionally, the optimized doses were intercompared using DVH and DMH metrics to ascertain differences in optimized plans. Mean dose to volume (D̄v), mean dose to mass (D̄M), and fluence maps were intercompared. For a given dose distribution, DVH and DMH differ by >5% in heterogeneous structures. In homogeneous structures including heart and spinal cord, DVH and DMH are nearly equivalent. At fixed PTV-D95v, DMH optimization did not significantly reduce dose to OARs but reduced PTV-D̄v by 0.20 ± 0.2 Gy (p=0.02) and PTV-D̄M by 0.23 ± 0.3 Gy (p=0.02). Plans normalized to PTV-D95m also resulted in minor PTV dose reductions and esophageal dose sparing (D̄v reduced 0.45 ± 0.5 Gy, p=0.02, and D̄M reduced 0.44 ± 0.5 Gy, p=0.02) compared to DVH-optimized plans. Optimized fluence map comparisons indicate that DMH optimization reduces dose in the periphery of lung PTVs. DVH and DMH dose indices differ by >5% in lung and lung target volumes for fixed dose distributions, but optimizing DMH did not reduce dose to OARs. The primary difference observed between DVH- and DMH-optimized plans was variation in fluence to the periphery of lung target PTVs, where low-density lung surrounds tumor. Copyright © 2017 Elsevier B.V. All rights reserved.
Wang, Xiaojun; Zhuang, Jingshun; Fu, Yingjuan; Tian, Guoyu; Wang, Zhaojiang; Qin, Menghua
2016-04-01
A combined process of lime treatment and mixed bed ion exchange was proposed to separate hemicellulose-derived saccharides (HDS) from prehydrolysis liquor (PHL) of lignocellulose as value added products. The optimization of lime treatment achieved up to 44.2% removal of non-saccharide organic compounds (NSOC), mainly colloidal substances, with negligible HDS degradation at 0.5% lime level and subsequent neutralization by phosphoric acid. The residual NSOC and calcium ions in lime-treated PHL were eliminated by mixed bed ion exchange. The breakthrough curves of HDS and NSOC showed selective retention toward NSOC, leading to 75% HDS recovery with 95% purity at 17 bed volumes of exchange capacity. In addition, macroporous resin showed higher exchange capacity than gel resin as indicated by the triple processing volume. The remarkable selectivity of the combined process suggested the feasibility for HDS separation from PHL. Copyright © 2016 Elsevier Ltd. All rights reserved.
Optimal ventilation of the anesthetized pediatric patient.
Feldman, Jeffrey M
2015-01-01
Mechanical ventilation of the pediatric patient is challenging because small changes in delivered volume can be a significant fraction of the intended tidal volume. Anesthesia ventilators have traditionally been poorly suited to delivering small tidal volumes accurately, and pressure-controlled ventilation has become used commonly when caring for pediatric patients. Modern anesthesia ventilators are designed to deliver small volumes accurately to the patient's airway by compensating for the compliance of the breathing system and delivering tidal volume independent of fresh gas flow. These technology advances provide the opportunity to implement a lung-protective ventilation strategy in the operating room based upon control of tidal volume. This review will describe the capabilities of the modern anesthesia ventilator and the current understanding of lung-protective ventilation. An optimal approach to mechanical ventilation for the pediatric patient is described, emphasizing the importance of using bedside monitors to optimize the ventilation strategy for the individual patient.
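The breathing-circuit compliance loss that modern anesthesia ventilators compensate for is simple arithmetic; the figures below are illustrative, chosen to show why small patients are disproportionately affected:

```python
def delivered_volume(set_tidal_volume_ml, circuit_compliance_ml_per_cmh2o,
                     peak_pressure_cmh2o):
    """Volume reaching the airway when the ventilator does NOT compensate
    for circuit compliance: part of each breath is lost compressing gas
    in the breathing circuit."""
    lost = circuit_compliance_ml_per_cmh2o * peak_pressure_cmh2o
    return set_tidal_volume_ml - lost

# 1 ml/cmH2O circuit at 20 cmH2O peak pressure loses 20 ml per breath:
# negligible for an adult's 500 ml breath, 40% of an infant's 50 ml breath
adult = delivered_volume(500, 1.0, 20)   # 480 ml delivered
infant = delivered_volume(50, 1.0, 20)   # 30 ml delivered
```

Compliance compensation measures the circuit's compliance during the pre-use check and adds the predicted lost volume to each breath, which is what makes accurate volume-targeted (lung-protective) ventilation feasible for small patients.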
Zsigraiova, Zdena; Semiao, Viriato; Beijoco, Filipa
2013-04-01
This work proposes an innovative methodology for the reduction of the operation costs and pollutant emissions involved in waste collection and transportation. Its innovative feature lies in combining vehicle route optimization with that of waste collection scheduling. The latter uses historical data on the filling rate of each container individually to establish the daily circuits of collection points to be visited, which is more realistic than the usual assumption of a single average fill-up rate common to all the system containers. Moreover, this allows for planning the collection scheduling ahead, which permits better system management. The optimization process of the routes to be travelled makes recourse to Geographical Information Systems (GISs) and uses interchangeably two optimization criteria: total spent time and travelled distance. Furthermore, rather than using average values, the relevant parameters influencing fuel consumption and pollutant emissions, such as vehicle speed on different roads and loading weight, are taken into consideration. The established methodology is applied to the glass-waste collection and transportation system of Amarsul S.A., in Barreiro. Moreover, to isolate the influence of the dynamic load on fuel consumption and pollutant emissions, a sensitivity analysis of the vehicle loading process is performed. For that, two hypothetical scenarios are tested: one with the collected volume increasing exponentially along the collection path; the other assuming that the collected volume decreases exponentially along the same path. The results evidence unquestionable beneficial impacts of the optimization on both the operation costs (labor and vehicles maintenance and fuel consumption) and pollutant emissions, regardless of the optimization criterion used.
Nonetheless, such impact is particularly relevant when optimizing for time, yielding substantial improvements to the existing system: potential reductions of 62% for the total spent time, 43% for the fuel consumption and 40% for the emitted pollutants. This results in total cost savings of 57%, labor being the greatest contributor, representing over €11,000 per year for the two vehicles collecting glass-waste. Moreover, it is shown herein that the dynamic loading process of the collection vehicle affects both fuel consumption and pollutant emissions. Copyright © 2012 Elsevier Ltd. All rights reserved.
2009-10-01
FOREWORD: This report represents a portion of the total work conducted under Contract No. FA8650-04-D-3446-25 for the Wright... applied to improve fatigue and corrosion properties of metals. The ability to use a high-energy laser pulse to generate shock waves, inducing a... Laser Peening (LP). In the LP process, favorable residual stresses are induced on a surface to improve the fatigue and fretting properties of metals.
Harmony search optimization for HDR prostate brachytherapy
NASA Astrophysics Data System (ADS)
Panchal, Aditya
In high dose-rate (HDR) prostate brachytherapy, multiple catheters are inserted interstitially into the target volume. The process of treating the prostate involves calculating and determining the best dose distribution to the target and organs-at-risk by means of optimizing the time that the radioactive source dwells at specified positions within the catheters. It is the goal of this work to investigate the use of a new optimization algorithm, known as Harmony Search, in order to optimize dwell times for HDR prostate brachytherapy. The new algorithm was tested on 9 different patients and also compared with the genetic algorithm. Simulations were performed to determine the optimal value of the Harmony Search parameters. Finally, multithreading of the simulation was examined to determine potential benefits. First, a simulation environment was created using the Python programming language and the wxPython graphical interface toolkit, which was necessary to run repeated optimizations. DICOM RT data from Varian BrachyVision was parsed and used to obtain patient anatomy and HDR catheter information. Once the structures were indexed, the volume of each structure was determined and compared to the original volume calculated in BrachyVision for validation. Dose was calculated using the AAPM TG-43 point source model of the GammaMed 192Ir HDR source and was validated against Varian BrachyVision. A DVH-based objective function was created and used for the optimization simulation. Harmony Search and the genetic algorithm were implemented as optimization algorithms for the simulation and were compared against each other. The optimal values for Harmony Search parameters (Harmony Memory Size [HMS], Harmony Memory Considering Rate [HMCR], and Pitch Adjusting Rate [PAR]) were also determined. Lastly, the simulation was modified to use multiple threads of execution in order to achieve faster computational times. 
Experimental results show that the volume calculation implemented in this thesis was within 2% of the values computed by Varian BrachyVision for the prostate, within 3% for the rectum and bladder, and within 6% for the urethra. The calculated dose differed from BrachyVision by only 0.38%. Isodose curves were also generated and were found to be similar to those of BrachyVision. The comparison between Harmony Search and the genetic algorithm showed that Harmony Search was over 4 times faster when compared over multiple data sets. The optimal Harmony Memory Size was found to be 5 or lower; the Harmony Memory Considering Rate was determined to be 0.95, and the Pitch Adjusting Rate was found to be 0.9. Ultimately, the effect of multithreading showed that for intensive computations such as optimization and dose calculation, the threads of execution scale with the number of processors, achieving a speed increase proportional to the number of processor cores. In conclusion, this work showed that Harmony Search is a viable alternative to existing algorithms for HDR prostate brachytherapy optimization. Coupled with the optimal parameters for the algorithm and a multithreaded simulation, this combination can significantly decrease the time spent in the clinic on time-intensive optimization problems such as brachytherapy, IMRT and beam-angle optimization.
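The Harmony Search loop described above can be sketched as follows. This is a minimal, generic implementation using the parameter values reported in the thesis (HMS = 5, HMCR = 0.95, PAR = 0.9); the quadratic surrogate objective and bounds are toy stand-ins for the DVH-based objective and dwell-time variables, not the actual dose model.

```python
import random

def harmony_search(objective, bounds, hms=5, hmcr=0.95, par=0.9, iters=2000, seed=0):
    """Minimize `objective` over the box `bounds` with basic Harmony Search.

    hms  -- Harmony Memory Size
    hmcr -- Harmony Memory Considering Rate
    par  -- Pitch Adjusting Rate
    """
    rng = random.Random(seed)
    # Initialize harmony memory with random solutions.
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                    # draw this value from memory
                x = memory[rng.randrange(hms)][d]
                if rng.random() < par:                 # pitch adjustment: small perturbation
                    x += rng.uniform(-1, 1) * 0.01 * (hi - lo)
            else:                                      # random consideration
                x = rng.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        score = objective(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if score < scores[worst]:                      # replace the worst harmony
            memory[worst], scores[worst] = new, score
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Toy surrogate: quadratic penalty around hypothetical target "dwell times".
target = [2.0, 5.0, 1.0]
obj = lambda t: sum((ti - gi) ** 2 for ti, gi in zip(t, target))
best, val = harmony_search(obj, bounds=[(0.0, 10.0)] * 3)
```

The memory-versus-random choice (HMCR) and the small pitch adjustments (PAR) are what distinguish the search from a plain random walk.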
Automated IMRT planning with regional optimization using planning scripts
Wong, Eugene; Bzdusek, Karl; Lock, Michael; Chen, Jeff Z.
2013-01-01
Intensity‐modulated radiation therapy (IMRT) has become a standard technique in radiation therapy for treating different types of cancers. Various class solutions have been developed to generate IMRT plans efficiently for simple cases (e.g., localized prostate, whole breast). However, for more complex cases (e.g., head and neck, pelvic nodes), it can be time‐consuming for a planner to generate optimized IMRT plans. To generate optimal plans in these more complex cases, which generally have multiple target volumes and organs at risk, it is often necessary to add IMRT optimization structures such as dose-limiting ring structures, adjust the beam geometry, select inverse planning objectives and associated weights, and add further IMRT objectives to reduce cold and hot spots in the dose distribution. These parameters are generally adjusted manually through repeated trial and error during the optimization process. To improve IMRT planning efficiency in these more complex cases, an iterative method that incorporates some of these adjustments automatically in a planning script is designed, implemented, and validated. In particular, regional optimization is implemented iteratively to reduce hot or cold spots during the optimization process: it begins with defining and automatically segmenting hot and cold spots, introduces new objectives and their relative weights into inverse planning, and repeats this process until termination criteria are met. The method has been applied to three clinical sites: prostate with pelvic nodes, head and neck, and anal canal cancers, and has been shown to reduce IMRT planning time significantly for clinical applications with improved plan quality. The IMRT planning scripts have been used for more than 500 clinical cases. PACS numbers: 87.55.D, 87.55.de PMID:23318393
Evaluation of whole genome amplified DNA to decrease material expenditure and increase quality.
Bækvad-Hansen, Marie; Bybjerg-Grauholm, Jonas; Poulsen, Jesper B; Hansen, Christine S; Hougaard, David M; Hollegaard, Mads V
2017-06-01
The overall aim of this study is to evaluate whole genome amplification of DNA extracted from dried blood spot samples. We wish to explore ways of optimizing the amplification process while decreasing the amount of input material and, inherently, the cost. Our primary focus of optimization is on the amount of input material, the amplification reaction volume, the number of replicates, and the amplification time and temperature. Increasing the quality of the amplified DNA and of the subsequent array genotyping results is a secondary aim of this project. This study is based on DNA extracted from dried blood spot samples. The extracted DNA was whole genome amplified using the REPLIg kit and genotyped on the PsychArray BeadChip (assessing > 570,000 SNPs genome wide). We used GenomeStudio to evaluate the quality of the genotype data by call rates and log R ratios. The whole genome amplification process is robust and does not vary between replicates. Altering amplification time, temperature or number of replicates did not affect our results. We found that spot size, i.e. the amount of input material, could be reduced without compromising the quality of the array genotyping data. We also showed that whole genome amplification reaction volumes can be reduced by a factor of 4 without compromising DNA quality. Whole genome amplified DNA samples from dried blood spots are well suited for array genotyping and produce robust and reliable genotype data. However, the amplification process introduces additional noise to the data, making detection of structural variants such as copy number variants difficult. With this study, we explore ways of optimizing the amplification protocol in order to reduce noise and increase data quality. We found that the amplification process was very robust, and that changes in amplification time or temperature did not alter the genotyping calls or the quality of the array data.
Adding additional replicates of each sample also led to insignificant changes in the array data. Thus, the amount of noise introduced by the amplification process was consistent regardless of the changes made to the amplification protocol. We also explored ways of decreasing material expenditure by reducing the spot size or the amplification reaction volume. The reduction did not affect the quality of the genotyping data.
Study of aerodynamic surface control of space shuttle boost and reentry, volume 1
NASA Technical Reports Server (NTRS)
Chang, C. J.; Connor, C. L.; Gill, G. P.
1972-01-01
The optimization technique is described which was used in the study to apply modern optimal control technology to the design of shuttle booster engine reaction control systems and aerodynamic control systems. Complete formulations are presented for both the ascent and reentry portions of the study. These formulations include derivations of the six-degree-of-freedom perturbation equations of motion and the process followed in the control and blending law selections. A total hybrid software concept applied to the study is described in detail. Conclusions and recommendations based on the results of the study are included.
Coverage-based constraints for IMRT optimization
NASA Astrophysics Data System (ADS)
Mescher, H.; Ulrich, S.; Bangert, M.
2017-09-01
Radiation therapy treatment planning requires an incorporation of uncertainties in order to guarantee an adequate irradiation of the tumor volumes. In current clinical practice, uncertainties are accounted for implicitly by expanding the target volume according to generic margin recipes. Alternatively, it is possible to account for uncertainties during treatment planning by explicitly minimizing objectives that describe worst-case treatment scenarios, the expectation value of the treatment, or the coverage probability of the target volumes. In this note we show that approaches relying on objectives to induce a specific coverage of the clinical target volumes are inevitably sensitive to variation of the relative weighting of the objectives. To address this issue, we introduce coverage-based constraints for intensity-modulated radiation therapy (IMRT) treatment planning. Our implementation follows the concept of coverage-optimized planning, which considers explicit error scenarios to calculate and optimize patient-specific probabilities q(\hat{d}, \hat{v}) of covering a specific target volume fraction \hat{v} with a certain dose \hat{d}. Using a constraint-based reformulation of coverage-based objectives, we eliminate the trade-off between coverage and competing objectives during treatment planning. In-depth convergence tests including 324 treatment plan optimizations demonstrate the reliability of coverage-based constraints for varying levels of probability, dose and volume. General clinical applicability of coverage-based constraints is demonstrated for two cases. A sensitivity analysis regarding penalty variations within this planning study, based on IMRT treatment planning using (1) coverage-based constraints, (2) coverage-based objectives, (3) probabilistic optimization, (4) robust optimization and (5) conventional margins, illustrates the potential benefit of coverage-based constraints, which do not require tedious adjustment of target volume objectives.
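The scenario-based coverage probability q(\hat{d}, \hat{v}) at the heart of coverage-optimized planning can be estimated directly from sampled error scenarios. The sketch below is illustrative only: the per-scenario dose distributions are synthetic stand-ins, and the function simply counts scenarios meeting the dose-volume criterion.

```python
import numpy as np

def coverage_probability(scenario_doses, d_hat, v_hat):
    """Fraction of error scenarios in which at least a fraction v_hat
    of the target voxels receives at least dose d_hat.

    scenario_doses -- array of shape (n_scenarios, n_voxels)
    """
    covered_fraction = (scenario_doses >= d_hat).mean(axis=1)
    return float((covered_fraction >= v_hat).mean())

rng = np.random.default_rng(0)
# 200 hypothetical error scenarios of dose (Gy) to 500 target voxels.
doses = rng.normal(loc=60.0, scale=2.0, size=(200, 500))
q = coverage_probability(doses, d_hat=57.0, v_hat=0.95)
```

In a constraint-based formulation, q would be required to exceed a prescribed level rather than being traded off against other objectives via penalty weights.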
A grid generation system for multi-disciplinary design optimization
NASA Technical Reports Server (NTRS)
Jones, William T.; Samareh-Abolhassani, Jamshid
1995-01-01
A general multi-block three-dimensional volume grid generator is presented which is suitable for Multi-Disciplinary Design Optimization. The code is timely, robust, highly automated, and written in ANSI 'C' for platform independence. Algebraic techniques are used to generate and/or modify block face and volume grids to reflect geometric changes resulting from design optimization. Volume grids are generated or modified in a batch environment and controlled via an ASCII user input deck. This allows the code to be incorporated directly into the design loop. Generated volume grids are presented for a High Speed Civil Transport (HSCT) wing/body geometry as well as for a complex HSCT configuration including horizontal and vertical tails, engine nacelles and pylons, and canard surfaces.
Deng, Hui; Zhang, Genlin; Xu, Xiaolin; Tao, Guanghui; Dai, Jiulei
2010-10-15
The preparation of activated carbon (AC) from cotton stalk was investigated in this paper. An orthogonal array experimental design was used to optimize the preparation of AC by microwave-assisted phosphoric acid activation. The optimized parameters were a radiation power of 400 W, a radiation time of 8 min, a phosphoric acid concentration of 50% by volume and an impregnation time of 20 h. The surface characteristics of the AC prepared under the optimized conditions were examined by pore structure analysis, scanning electron microscopy (SEM) and Fourier transform infrared spectroscopy (FT-IR). Pore structure analysis shows that mesopores constitute most of the porosity of the prepared AC. Compared to cotton stalk, different surface functionalities and morphology were formed during the preparation process. The adsorption capacity of the AC was also investigated by removing methylene blue (MB) from aqueous solution. The equilibrium adsorption data were well fitted by the Langmuir isotherm. The maximum adsorption capacity of MB on the prepared AC is 245.70 mg/g. The adsorption process follows the pseudo-second-order kinetic model. 2010 Elsevier B.V. All rights reserved.
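As an illustration of the Langmuir fit mentioned above, the sketch below estimates q_max and K_L from the linearized isotherm Ce/qe = Ce/q_max + 1/(K_L·q_max). The equilibrium data are synthetic, generated from the reported capacity of 245.7 mg/g and an assumed K_L of 0.2 L/mg, not the study's measurements.

```python
import numpy as np

def langmuir_fit(ce, qe):
    """Estimate q_max and K_L from the linearized Langmuir isotherm:
    Ce/qe = Ce/q_max + 1/(K_L * q_max)."""
    slope, intercept = np.polyfit(ce, ce / qe, 1)
    q_max = 1.0 / slope
    k_l = slope / intercept
    return q_max, k_l

# Synthetic equilibrium data from q_max = 245.7 mg/g, K_L = 0.2 L/mg (assumed).
ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])   # equilibrium concentration (mg/L)
qe = 245.7 * 0.2 * ce / (1.0 + 0.2 * ce)       # adsorbed amount (mg/g)
q_max, k_l = langmuir_fit(ce, qe)              # recovers the generating parameters
```

The linearized plot of Ce/qe against Ce is the standard graphical check that equilibrium data follow the Langmuir model.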
Concentrating phenolic acids from Lonicera japonica by nanofiltration technology
NASA Astrophysics Data System (ADS)
Li, Cunyu; Ma, Yun; Li, Hongyang; Peng, Guoping
2017-03-01
Response surface analysis methodology was used to optimize the process of concentrating phenolic acids from Lonicera japonica by nanofiltration. On the basis of the influences of pressure, temperature and circulating volume, the retention rates of neochlorogenic acid, chlorogenic acid and 4-dicaffeoylquinic acid were selected as indices, while the molecular weight cut-off of the nanofiltration membrane, the solute concentration and the pH were selected as influencing factors during the concentration process. The experiment was arranged according to a Box-Behnken central composite design. The optimal concentration conditions were as follows: nanofiltration molecular weight cut-off, 150 Da; solute concentration, 18.34 µg/mL; pH, 4.26. The predicted retention rate under the optimum conditions was 97.99%, and the experimental value was 98.03 ± 0.24%, in accordance with the prediction. These results demonstrate that the combination of Box-Behnken design and response surface analysis can effectively optimize the nanofiltration concentration of Lonicera japonica water extract, and they provide a basis for nanofiltration concentration of heat-sensitive traditional Chinese medicines.
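The response-surface step can be illustrated with a small sketch: fit a second-order model to coded factors by least squares, then solve for the stationary point of the fitted surface. The two-factor grid and response values below are hypothetical stand-ins, not the study's three-factor Box-Behnken data.

```python
import numpy as np

def fit_quadratic_2f(x1, x2, y):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

def stationary_point(b):
    """Solve grad = 0 for the fitted quadratic surface."""
    A = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
    return np.linalg.solve(A, -b[1:3])

# Coded design points (a full 3x3 factorial grid stands in for a real BBD here).
x1, x2 = np.meshgrid([-1.0, 0.0, 1.0], [-1.0, 0.0, 1.0])
x1, x2 = x1.ravel(), x2.ravel()
# Synthetic retention-rate response peaking at coded (x1, x2) = (0.4, -0.2).
y = 98.0 - 3.0 * (x1 - 0.4) ** 2 - 2.0 * (x2 + 0.2) ** 2
b = fit_quadratic_2f(x1, x2, y)
opt = stationary_point(b)
```

Decoding the stationary point back to physical units (cut-off, concentration, pH) gives the kind of optimum reported above.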
Information Sciences Assessment for Asia and Australasia
2009-10-16
...entertainment and home services; Machine Translation for international cooperation; NLU + Affective Computing for education; Intelligent Optimization for... into an emotion. ETTS, embedded Mandarin, music retrieval. Also, research in areas of computer graphics, digital media processing, Intelligent... many from outside China, 40% in phase 2. Sales volume in 2007: 130 × 100 million RMB. SAP (1st), CITI, AIG, EDS, Capgemini, ILOG, Infosys, HCL, Sony.
A novel medical information management and decision model for uncertain demand optimization.
Bi, Ya
2015-01-01
Accurately planning the procurement volume is an effective measure for controlling medicine inventory cost, but uncertain demand makes accurate decisions on procurement volume difficult. For biomedicines whose demand is sensitive to time and season, fitting the uncertain demand with fuzzy mathematics is clearly better than using generic random distribution functions. This work establishes a novel medical information management and decision model for optimization under uncertain demand. The model is based on fuzzy mathematics and a new, comprehensively improved particle swarm algorithm, and it can effectively reduce the medicine inventory cost. The proposed improved particle swarm optimization is a simple and effective algorithm for handling the fuzzy inference, and hence effectively reduces the computational complexity of the management and decision model. The new model can therefore be used for accurate decisions on procurement volume under uncertain demand.
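A minimal particle swarm optimizer of the kind such a model relies on can be sketched as follows; the one-dimensional inventory-cost surrogate is a hypothetical stand-in for the fuzzy-demand cost function, not the paper's model.

```python
import random

def pso(objective, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimizer over a box-constrained search space."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d, (lo, hi) in enumerate(bounds):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy inventory-cost surrogate: quadratic penalty around a demand midpoint of 120 units.
cost = lambda q: 0.5 * (q[0] - 120.0) ** 2 + 10.0
best, val = pso(cost, bounds=[(0.0, 500.0)])
```

In the paper's setting, the objective would instead score a procurement volume against a fuzzy demand membership function.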
NASA Astrophysics Data System (ADS)
Lee, Jeong-Eun; Gen, Mitsuo; Rhee, Kyong-Gu; Lee, Hee-Hyol
This paper deals with building a reusable reverse logistics model that considers the decision between a backorder and the next arrival of goods. An optimization method is proposed that minimizes both the transportation cost and the volume of backorders or next arrivals of goods caused by the Just-in-Time delivery of the final delivery stage between the manufacturer and the processing center. Sub-optimal delivery routes are determined through optimization algorithms using a priority-based genetic algorithm and a hybrid genetic algorithm. Based on a case study of a distilling and sales company in Busan, Korea, the new model of reusable reverse logistics for empty bottles is built and the effectiveness of the proposed method is verified.
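The priority-based genetic algorithm can be sketched as follows: each chromosome is a vector of node priorities, decoded into a visiting order by sorting, and a simple elitist GA evolves the priorities. The five-node cost matrix is hypothetical illustration data, not the Busan case study.

```python
import random

def decode(priority, cost):
    """Visit nodes in descending priority; return the route and its transport cost."""
    route = sorted(range(len(priority)), key=lambda i: -priority[i])
    return route, sum(cost[a][b] for a, b in zip(route, route[1:]))

def priority_ga(cost, pop_size=30, gens=100, seed=2):
    """Minimal elitist priority-based GA for route-cost minimization."""
    rng = random.Random(seed)
    n = len(cost)
    pop = [[rng.random() for _ in range(n)] for _ in range(pop_size)]
    fit = lambda ch: decode(ch, cost)[1]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            p1 = min(rng.sample(pop, 3), key=fit)      # tournament selection
            p2 = min(rng.sample(pop, 3), key=fit)
            child = [rng.choice(g) for g in zip(p1, p2)]  # uniform crossover
            if rng.random() < 0.3:                     # Gaussian mutation on one gene
                k = rng.randrange(n)
                child[k] += rng.gauss(0.0, 0.2)
            nxt.append(child)
        pop = sorted(pop + nxt, key=fit)[:pop_size]    # elitist survival
    return decode(min(pop, key=fit), cost)

# Hypothetical symmetric transport-cost matrix for 5 collection/processing nodes.
C = [[0, 2, 9, 10, 7],
     [2, 0, 6, 4, 3],
     [9, 6, 0, 8, 5],
     [10, 4, 8, 0, 6],
     [7, 3, 5, 6, 0]]
route, total = priority_ga(C)
```

The appeal of the priority encoding is that any real-valued chromosome decodes to a valid route, so standard crossover and mutation need no repair step.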
Optimal synthesis and characterization of Ag nanofluids by electrical explosion of wires in liquids
2011-01-01
Silver nanoparticles were produced by electrical explosion of wires in liquids with no additive. In this study, we optimized the fabrication method and examined the effects of the manufacturing process parameters. The morphology and size of the Ag nanoparticles were determined using transmission electron microscopy and field-emission scanning electron microscopy. Size and zeta potential were analyzed using dynamic light scattering. A response optimization technique showed that optimal conditions were achieved when the capacitance was 30 μF, the wire length was 38 mm, the liquid volume was 500 mL, and the liquid was deionized water. The average Ag nanoparticle size in water was 118.9 nm and the zeta potential was -42.5 mV. The critical heat flux of the 0.001-vol.% Ag nanofluid was higher than that of pure water. PMID:21711757
NASA Astrophysics Data System (ADS)
Chen, CHAI; Yiik Diew, WONG
2017-02-01
This study provides an integrated strategy, encompassing microscopic simulation, safety assessment, and multi-attribute decision-making, to optimize traffic performance at the downstream merging areas of signalized intersections. A Fuzzy Cellular Automata (FCA) model is developed to replicate microscopic movement and merging behavior. Based on simulation experiments, the proposed FCA approach is able to provide capacity and safety evaluations of different traffic scenarios. The results are then evaluated through data envelopment analysis (DEA) and the analytic hierarchy process (AHP). Optimized geometric layouts and control strategies are then suggested for various traffic conditions. An optimal lane-drop distance, dependent on traffic volume and speed limit, can thus be established at the downstream merging area.
Optimization of photo-Fenton process for the treatment of prednisolone.
Díez, Aida María; Ribeiro, Ana Sofia; Sanromán, Maria Angeles; Pazos, Marta
2018-03-29
Prednisolone is a widely prescribed synthetic glucocorticoid that is toxic to a number of non-target aquatic organisms. Its extensive consumption generates environmental concern because it is detected in wastewater samples at concentrations ranging from ng/L to μg/L, which calls for the application of suitable degradation processes. Among current treatment options, advanced oxidation processes (AOPs) present a viable alternative. In this work, different AOPs, namely Fenton (F), photo-Fenton (UV/F), photolysis (UV), and hydrogen peroxide/photolysis (UV/H2O2), were compared in terms of pollutant removal and energetic efficiency. A light-emitting diode (LED) was selected as the source of UV radiation. The UV/F process showed the best performance, reaching high levels of both degradation and mineralization with low energy consumption. Its optimization was conducted with the iron concentration, the H2O2 concentration and the working volume as operational parameters. Using response surface methodology with a Box-Behnken design, the effects of the independent variables and their interactions on the process response were effectively evaluated. Different responses were analyzed, taking into account prednisolone removal (TOC and drug abatement) and the associated energy consumption. The obtained model showed improved UV/F performance when treating smaller volumes and when adding high concentrations of H2O2 and Fe2+. The model was successfully validated, with only 5% discrepancy between the model and the experimental results. The performance of the process in a real wastewater matrix was also tested, achieving complete mineralization and detoxification after 8 h. In addition, prednisolone degradation products were identified. Finally, the low energy consumption confirmed the viability of the process.
Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.
Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier
2017-07-10
A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings from 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, including colour-difference-based metrics, gamut-based metrics, memory-based metrics and combined metrics. The proposed GVI was found to outperform the existing counterparts, especially under conditions where correlated colour temperatures differed.
Even-Or, Ehud; Di Mola, Maria; Ali, Muhammad; Courtney, Sarah; McDougall, Elizabeth; Alexander, Sarah; Schechter, Tal; Whitlock, James A; Licht, Christoph; Krueger, Joerg
2017-06-01
The manufacturing of cellular products for immunotherapy, such as chimeric antigen receptor T cells, requires successful collection of mononuclear cells. Collections from children with high-risk leukemia present a challenge, especially because the established COBE Spectra apheresis device is being replaced by the novel Spectra Optia device (Optia) in many institutions. Published experience for mononuclear cell collections in children with Optia is lacking. Our aim was to compare the two collection devices and describe modified settings on the Optia to optimize mononuclear cell collections. As a quality initiative, we retrospectively collected and compared data from mononuclear cell collections on both devices. Collected data included patient's clinical characteristics; collection parameters, including precollection lymphocyte/CD3 counts, total blood volumes processed, runtimes, and side effects (including complete blood count and electrolyte changes); and product characteristics, including volumes and cell counts. Collection efficiencies and collection ratios were calculated. Twenty-six mononuclear cell collections were performed on 20 pediatric patients: 11 with COBE and 15 with Optia. Adequate mononuclear cell products were successfully collected with a single procedure from all patients except one, with mean calculated mononuclear cell collection efficiency that was significantly higher from Optia collections compared with COBE collections (57.9 ± 4.6% vs 40.3 ± 6.2%, respectively; p = 0.04). CD3-positive yields were comparable on both machines (p = 0.34) with significantly smaller blood volumes processed on Optia. Collected products had larger volumes on Optia. No significant side effects attributed to the procedure were noted. Mononuclear cell apheresis using the Optia device in children is more efficient and is as safe as that with the COBE device. © 2017 AABB.
Quintero, Catherine; Kariv, Ilona
2009-06-01
To meet the needs of the increasingly rapid and parallelized lead optimization process, a fully integrated local compound storage and liquid handling system was designed and implemented to automate the generation of assay-ready plates directly from newly submitted and cherry-picked compounds. A key feature of the system is the ability to create project- or assay-specific compound-handling methods, which provide flexibility for any combination of plate types, layouts, and plate bar-codes. Project-specific workflows can be created by linking methods for processing new and cherry-picked compounds and control additions to produce a complete compound set for both biological testing and local storage in one uninterrupted workflow. A flexible cherry-pick approach allows for multiple, user-defined strategies to select the most appropriate replicate of a compound for retesting. Examples of custom selection parameters include available volume, compound batch, and number of freeze/thaw cycles. This adaptable and integrated combination of software and hardware provides a basis for reducing cycle time, fully automating compound processing, and ultimately increasing the rate at which accurate, biologically relevant results can be produced for compounds of interest in the lead optimization process.
Epoxidized Natural Rubber/Chitosan Network Binder for Silicon Anode in Lithium-Ion Battery.
Lee, Sang Ha; Lee, Jeong Hun; Nam, Dong Ho; Cho, Misuk; Kim, Jaehoon; Chanthad, Chalathorn; Lee, Youngkwan
2018-05-16
The polymeric binder is extremely important for Si-based anodes in lithium-ion batteries due to the large volume variation during the charging/discharging process. Here, natural-rubber-incorporated chitosan networks were designed as a binder material to obtain both adhesion and elasticity. Chitosan can strongly anchor Si particles through hydrogen bonding, while the natural rubber can stretch reversibly during the volume variation of the Si particles, resulting in high cyclic performance. The prepared electrode exhibited specific capacities of 1350 mAh/g after 1600 cycles at a current density of 8 A/g and 2310 mAh/g after 500 cycles at a current density of 1 A/g. Furthermore, a cycle test with limited lithiation capacity was conducted to study the optimal binder properties at varying degrees of the volume expansion of silicon, and it was found that the elastic property of the binder material is strongly required when large volume expansion of Si occurs.
NASA Astrophysics Data System (ADS)
Szczepanik, M.; Poteralski, A.
2016-11-01
The paper is devoted to the application of evolutionary methods and the finite element method to the optimization of shell structures. The optimization of the thickness of a car wheel (a shell) by minimization of a stress functional is considered. The car wheel geometry is built from three surfaces of revolution: the central surface with the holes for the fastening bolts, the surface of the ring of the wheel, and the surface connecting the two mentioned earlier. The last one is subjected to the optimization process. The structures are discretized by triangular finite elements and subjected to volume constraints. Using the proposed method, the material properties or thickness of the finite elements are changed evolutionarily and some of them are eliminated. As a result, the optimal shape, topology and material or thickness of the structures are obtained. The numerical examples demonstrate that the method based on evolutionary computation is an effective technique for computer-aided optimal design.
Modulation of red cell mass by neocytolysis in space and on Earth
NASA Technical Reports Server (NTRS)
Rice, L.; Alfrey, C. P.
2000-01-01
Astronauts predictably experience anemia after return from space. Upon entering microgravity, the blood volume in the extremities pools centrally and plasma volume decreases, causing plethora and erythropoietin suppression. There ensues neocytolysis, selective hemolysis of the youngest circulating red cells, allowing rapid adaptation to the space environment but becoming maladaptive on re-entry to a gravitational field. The existence of this physiologic control process was confirmed in polycythemic high-altitude dwellers transported to sea level. Pathologic neocytolysis contributes to the anemia of renal failure. Understanding the process has implications for optimizing erythropoietin-dosing schedules and the therapy of other human disorders. Human and rodent models of neocytolysis are being created to help find out how interactions between endothelial cells, reticuloendothelial phagocytes and young erythrocytes are altered, and to shed light on the expression of surface adhesion molecules underlying this process. Thus, unraveling a problem for space travelers has uncovered a physiologic process controlling the red cell mass that can be applied to human disorders on Earth.
Application of a 2-step process for the biological treatment of sulfidic spent caustics.
de Graaff, Marco; Klok, Johannes B M; Bijmans, Martijn F M; Muyzer, Gerard; Janssen, Albert J H
2012-03-01
This research demonstrates the feasibility and advantages of a 2-step process for the biological treatment of sulfidic spent caustics under halo-alkaline conditions (i.e. pH 9.5; Na(+) = 0.8 M). Experiments with synthetically prepared solutions were performed in a continuously fed system consisting of two gas-lift reactors in series operated at aerobic conditions at 35 °C. The detoxification of sulfide to thiosulfate in the first step allowed the successful biological treatment of total-S loading rates up to 33 mmol L(-1) day(-1). In the second, biological step, the remaining sulfide and thiosulfate was completely converted to sulfate by haloalkaliphilic sulfide oxidizing bacteria. Mathematical modeling of the 2-step process shows that under the prevailing conditions an optimal reactor configuration consists of 40% 'abiotic' and 60% 'biological' volume, whilst the total reactor volume is 22% smaller than for the 1-step process. Copyright © 2011 Elsevier Ltd. All rights reserved.
Niasar, Hojatallah Seyedy; Li, Hanning; Das, Sreejon; Kasanneni, Tirumala Venkateswara Rao; Ray, Madhumita B; Xu, Chunbao Charles
2018-04-01
This study employed a Box-Behnken design and response surface methodology to optimize activation parameters for the production of activated petroleum coke (APC) adsorbent from petroleum coke (PC), so as to achieve the highest adsorption capacity for three model naphthenic acids (NAs). APC adsorbent with a BET surface area of 1726 m2/g and a total pore volume of 0.85 cc/g was produced at the optimum activation conditions (KOH/coke mass ratio of 3.0, activation temperature of 790 °C, and activation time of 3.47 h). The effects of the activation parameters on the adsorption performance (adsorption capacity and kinetics) were investigated. With the APC obtained at the optimum activation conditions, maximum adsorption capacities of 451, 362, and 320 mg/g were achieved for 2-naphthoic acid, diphenylacetic acid and cyclohexanepentanoic acid (CP), respectively. Although APC adsorbents with a higher specific surface area and pore volume generally provide better adsorption capacity, the textural properties (surface area and pore volume) are not the only parameters determining the adsorption capacity of APC adsorbents. Other parameters, such as surface functionalities, play effective roles in the adsorption capacity of the produced APC adsorbents for NAs. The KOH activation process, in particular the acid washing step, distinctly reduced the sulfur and metal contents of the raw PC, decreasing the potential for metal leaching from APC adsorbents during adsorption. Copyright © 2018 Elsevier Ltd. All rights reserved.
Clinical implementation of stereotaxic brain implant optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenow, U.F.; Wojcicka, J.B.
1991-03-01
This optimization method for stereotaxic brain implants is based on seed/strand configurations of the basic type developed for the National Cancer Institute (NCI) atlas of regular brain implants. Irregular target volume shapes are determined from delineation in a stack of contrast enhanced computed tomography scans. The neurosurgeon may then select up to ten directions, or entry points, of surgical approach of which the program finds the optimal one under the criterion of smallest target volume diameter. Target volume cross sections are then reconstructed in 5-mm-spaced planes perpendicular to the implantation direction defined by the entry point and the target volume center. This information is used to define a closed line in an implant cross section along which peripheral seed strands are positioned and which has now an irregular shape. Optimization points are defined opposite peripheral seeds on the target volume surface to which the treatment dose rate is prescribed. Three different optimization algorithms are available: linear least-squares programming, quadratic programming with constraints, and a simplex method. The optimization routine is implemented into a commercial treatment planning system. It generates coordinate and source strength information of the optimized seed configurations for further dose rate distribution calculation with the treatment planning system, and also the coordinate settings for the stereotaxic Brown-Roberts-Wells (BRW) implantation device.
Prostate Brachytherapy Seed Reconstruction with Gaussian Blurring and Optimal Coverage Cost
Lee, Junghoon; Liu, Xiaofeng; Jain, Ameet K.; Song, Danny Y.; Burdette, E. Clif; Prince, Jerry L.; Fichtinger, Gabor
2009-01-01
Intraoperative dosimetry in prostate brachytherapy requires localization of the implanted radioactive seeds. A tomosynthesis-based seed reconstruction method is proposed. A three-dimensional volume is reconstructed from Gaussian-blurred projection images and candidate seed locations are computed from the reconstructed volume. A false positive seed removal process, formulated as an optimal coverage problem, iteratively removes “ghost” seeds that are created by tomosynthesis reconstruction. In an effort to minimize pose errors that are common in conventional C-arms, initial pose parameter estimates are iteratively corrected by using the detected candidate seeds as fiducials, which automatically “focuses” the collected images and improves successive reconstructed volumes. Simulation results imply that the implanted seed locations can be estimated with a detection rate of ≥ 97.9% and ≥ 99.3% from three and four images, respectively, when the C-arm is calibrated and the pose of the C-arm is known. The algorithm was also validated on phantom data sets successfully localizing the implanted seeds from four or five images. In a Phase-1 clinical trial, we were able to localize the implanted seeds from five intraoperative fluoroscopy images with 98.8% (STD=1.6) overall detection rate. PMID:19605321
Zhang, Baofeng; Kilburg, Denise; Eastman, Peter; Pande, Vijay S; Gallicchio, Emilio
2017-04-15
We present an algorithm to efficiently compute accurate volumes and surface areas of macromolecules on graphical processing unit (GPU) devices using an analytic model which represents atomic volumes by continuous Gaussian densities. The volume of the molecule is expressed by means of the inclusion-exclusion formula, which is based on the summation of overlap integrals among multiple atomic densities. The surface area of the molecule is obtained by differentiation of the molecular volume with respect to atomic radii. The many-body nature of the model makes a port to GPU devices challenging. To our knowledge, this is the first reported full implementation of this model on GPU hardware. To accomplish this, we have used recursive strategies to construct the tree of overlaps and to accumulate volumes and their gradients on the tree data structures so as to minimize memory contention. The algorithm is used in the formulation of a surface area-based non-polar implicit solvent model implemented as an open source plug-in (named GaussVol) for the popular OpenMM library for molecular mechanics modeling. GaussVol is 50 to 100 times faster than our best optimized implementation for the CPUs, achieving speeds in excess of 100 ns/day with 1 fs time-step for protein-sized systems on commodity GPUs. © 2017 Wiley Periodicals, Inc.
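The inclusion-exclusion formula over Gaussian atomic densities has a closed-form pairwise term, since the product of two Gaussians is itself a Gaussian. The sketch below truncates the series at two-body overlaps and uses an illustrative prefactor (p = 2), not GaussVol's actual parameterization or tree-based recursion.

```python
import math

P = 2.0  # illustrative Gaussian height prefactor (an assumption, not GaussVol's value)

def kappa(R):
    # choose the decay so the single-Gaussian integral equals the hard-sphere volume
    V = 4.0 / 3.0 * math.pi * R**3
    return math.pi * (P / V) ** (2.0 / 3.0)

def self_volume(R):
    # integral of P * exp(-kappa * r^2) over all space = P * (pi/kappa)^(3/2)
    return P * (math.pi / kappa(R)) ** 1.5

def pair_overlap(c1, R1, c2, R2):
    # overlap integral of two Gaussian densities (product-of-Gaussians identity)
    k1, k2 = kappa(R1), kappa(R2)
    d2 = sum((a - b) ** 2 for a, b in zip(c1, c2))
    pref = P * P * math.exp(-k1 * k2 / (k1 + k2) * d2)
    return pref * (math.pi / (k1 + k2)) ** 1.5

def volume_2body(atoms):
    # inclusion-exclusion truncated at pair overlaps: V = sum V_i - sum V_ij
    V = sum(self_volume(R) for _, R in atoms)
    for i in range(len(atoms)):
        for j in range(i + 1, len(atoms)):
            V -= pair_overlap(atoms[i][0], atoms[i][1], atoms[j][0], atoms[j][1])
    return V
```

The full model continues the alternating series over higher-order overlaps (the "tree of overlaps" in the abstract); for well-separated atoms the two-body truncation already recovers the sum of sphere volumes.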
NASA Astrophysics Data System (ADS)
Cordero-Llana, L.; Selmes, N.; Murray, T.; Scharrer, K.; Booth, A. D.
2012-12-01
Large volumes of water are necessary to propagate cracks to the glacial bed via hydrofracture. Hydrological models have shown that lakes above a critical volume can supply the necessary water for this process, so the ability to measure lake water depth remotely is important for studying these processes. Previously, water depth has been derived from the optical properties of water using high-resolution optical satellite imagery such as ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer), IKONOS, and LANDSAT. These studies used water-reflectance models based on the Bouguer-Lambert-Beer law and lack any estimation of model uncertainties. We propose an optimized model based on Sneed and Hamilton's (2007) approach to estimate water depths in supraglacial lakes and undertake a robust analysis of the errors for the first time. We used atmospherically corrected ASTER and MODIS data as input to the water-reflectance model. Three physical parameters are needed: bed albedo, the water attenuation coefficient, and the reflectance of optically deep water. These parameters were derived for each wavelength using standard calibrations. As a reference dataset, we obtained lake geometries using ICESat measurements over empty lakes. Differences between modeled and reference depths are used in a minimization model to obtain parameters for the water-reflectance model, yielding optimized lake depth estimates. Our key contribution is the development of a Monte Carlo simulation to run the water-reflectance model, which allows us to quantify the uncertainties in water depth and hence water volume. This robust statistical analysis provides a better understanding of the sensitivity of the water-reflectance model to the choice of input parameters, which should contribute to the understanding of the influence of surface-derived meltwater on ice sheet dynamics. Sneed, W.A. and Hamilton, G.S., 2007: Evolution of melt pond volume on the surface of the Greenland Ice Sheet. Geophysical Research Letters, 34, 1-4.
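A minimal version of the Monte Carlo uncertainty propagation described above can be sketched with a single-band Bouguer-Lambert-Beer depth retrieval of the Sneed-Hamilton form, z = [ln(A_d − R_∞) − ln(R_pix − R_∞)] / g. All parameter values and their uncertainties below are invented for illustration, not the study's calibrations.

```python
import math
import random
import statistics

def lake_depth(R_pix, A_d, R_inf, g):
    # single-band Bouguer-Lambert-Beer depth retrieval (Sneed-Hamilton form)
    return (math.log(A_d - R_inf) - math.log(R_pix - R_inf)) / g

def mc_depth(R_pix, n=20000, seed=1):
    # Propagate assumed parameter uncertainties through the model by sampling.
    rng = random.Random(seed)
    depths = []
    for _ in range(n):
        A_d   = rng.gauss(0.45, 0.02)    # lake-bed albedo (hypothetical)
        R_inf = rng.gauss(0.04, 0.005)   # reflectance of optically deep water (hypothetical)
        g     = rng.gauss(0.80, 0.05)    # attenuation coefficient, 1/m (hypothetical)
        if A_d > R_inf and R_pix > R_inf and g > 0:  # keep physically valid draws
            depths.append(lake_depth(R_pix, A_d, R_inf, g))
    return statistics.mean(depths), statistics.stdev(depths)

mean_z, std_z = mc_depth(R_pix=0.20)
print(f"depth = {mean_z:.2f} m, 1-sigma uncertainty = {std_z:.2f} m")
```

Summing per-pixel depth draws over a lake's area turns the same machinery into a water-volume estimate with quantified uncertainty.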
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cox, Brett W., E-mail: coxb@mskcc.org; Spratt, Daniel E.; Lovelock, Michael
2012-08-01
Purpose: Spinal stereotactic radiosurgery (SRS) is increasingly used to manage spinal metastases. However, target volume definition varies considerably and no consensus target volume guidelines exist. This study proposes consensus target volume definitions using common scenarios in metastatic spine radiosurgery. Methods and Materials: Seven radiation oncologists and 3 neurological surgeons with spinal radiosurgery expertise independently contoured target and critical normal structures for 10 cases representing common scenarios in metastatic spine radiosurgery. Each set of volumes was imported into the Computational Environment for Radiotherapy Research. Quantitative analysis was performed using an expectation maximization algorithm for Simultaneous Truth and Performance Level Estimation (STAPLE), with kappa statistics calculating agreement between physicians. Optimized confidence level consensus contours were identified using histogram agreement analysis and characterized to create target volume definition guidelines. Results: Mean STAPLE agreement sensitivity and specificity were 0.76 (range, 0.67-0.84) and 0.97 (range, 0.94-0.99), respectively, for gross tumor volume (GTV) and 0.79 (range, 0.66-0.91) and 0.96 (range, 0.92-0.98), respectively, for clinical target volume (CTV). Mean kappa agreement was 0.65 (range, 0.54-0.79) for GTV and 0.64 (range, 0.54-0.82) for CTV (P<.01 for GTV and CTV in all cases). STAPLE histogram agreement analysis identified optimal consensus contours (80% confidence limit). Consensus recommendations include that the CTV should include abnormal marrow signal suspicious for microscopic invasion and an adjacent normal bony expansion to account for subclinical tumor spread in the marrow space.
No epidural CTV expansion is recommended without epidural disease, and circumferential CTVs encircling the cord should be used only when the vertebral body, bilateral pedicles/lamina, and spinous process are all involved or there is extensive metastatic disease along the circumference of the epidural space. Conclusions: This report provides consensus guidelines for target volume definition for spinal metastases receiving upfront SRS in common clinical situations.
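The kappa statistic used above to quantify inter-physician agreement is chance-corrected overlap between raters. A minimal per-voxel illustration for two binary contour masks follows (an illustrative sketch, not the study's STAPLE implementation):

```python
def cohens_kappa(a, b):
    # Chance-corrected agreement between two binary raters (e.g. per-voxel contour labels).
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n   # observed agreement
    p1a, p1b = sum(a) / n, sum(b) / n            # each rater's positive-label rate
    pe = p1a * p1b + (1 - p1a) * (1 - p1b)       # agreement expected by chance
    return (po - pe) / (1 - pe)
```

Kappa is 1 for identical masks and 0 when agreement is no better than chance, which is why mean values near 0.65 indicate substantial but imperfect contouring consensus.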
DOT National Transportation Integrated Search
1988-10-01
This annotated bibliography, Volume III of the study entitled, Optimizing Wartime Materiel Delivery: An overview of DOD Containerization Efforts, documents studies related to containerization. Several objectives of the study were defined. These inclu...
Hwang, Seung Hwan; Kwon, Shin Hwa; Wang, Zhiqiang; Kim, Tae Hyun; Kang, Young-Hee; Lee, Jae-Yong; Lim, Soon Sung
2016-08-26
Protein tyrosine phosphatase, expressed in insulin-sensitive tissues (such as liver, muscle, and adipose tissue), has a key role in the regulation of insulin signaling and pathway activation, making it a promising target for the treatment of type 2 diabetes mellitus and obesity; response surface methodology (RSM) is an effective statistical technique for optimizing complex processes using a multi-variant approach. In this study, Zea mays L. (purple corn kernel, PCK) and its constituents were investigated for protein tyrosine phosphatase 1β (PTP1β) inhibitory activity, including an enzyme kinetic study, and four extraction parameters (temperature, time, solid-liquid ratio, and solvent volume) were optimized by RSM to improve the total yields of anthocyanins and polyphenols. Seven polyphenols and five anthocyanins were isolated and assessed in the PTP1β assay. Among them, cyanidin-3-(6″-malonylglucoside) and 3'-methoxyhirsutrin showed the highest PTP1β inhibition, with IC50 values of 54.06 and 64.04 μM, respectively. Totals of 4.52 mg gallic acid equivalent/g (GAE/g) of polyphenol content (TPC) and 43.02 mg cyanidin-3-glucoside equivalent/100 g (C3GE/100 g) of anthocyanin content (TAC) were extracted at 40 °C for 8 h with a 33% solid-liquid ratio and a 1:15 solvent volume. Yields were similar to the predicted values of 4.58 mg GAE/g of TPC and 42.28 mg C3GE/100 g of TAC. These results indicate that PCK, 3'-methoxyhirsutrin, and cyanidin-3-(6″-malonylglucoside) might be active natural compounds and that the extraction process can be effectively optimized using response surface methodology.
Fast Raman single bacteria identification: toward a routine in-vitro diagnostic
NASA Astrophysics Data System (ADS)
Douet, Alice; Josso, Quentin; Marchant, Adrien; Dutertre, Bertrand; Filiputti, Delphine; Novelli-Rousseau, Armelle; Espagnon, Isabelle; Kloster-Landsberg, Meike; Mallard, Frédéric; Perraut, Francois
2016-04-01
Timely microbiological results are essential to allow clinicians to optimize the prescribed treatment, ideally at the initial stage of the therapeutic process. Several approaches, such as molecular biology, have been proposed to solve this issue and to provide the microbiological result in a few hours directly from the sample. However fast and sensitive, those methods are not based on single-cell phenotypic information, which presents several drawbacks and limitations. Optical methods have the advantage of single-cell sensitivity and of probing the phenotype of the measured cells. Here we present a process and a prototype that allow automated single-bacterium phenotypic analysis. This prototype is based on the use of digital in-line holography techniques combined with a specially designed Raman spectrometer using a dedicated device to capture bacteria. The localization of a single cell is finely determined by using holograms and a proper propagation kernel. Holographic images are also used to analyze bacteria in the sample, to sort potential pathogens from flora-dwelling species or other biological particles. This accurate localization enables the use of a small confocal volume adapted to the measurement of a single cell. Along with the confocal volume adaptation, we also modified every component of the spectrometer to optimize single-bacterium Raman measurements. This optimization allowed us to acquire informative single-cell spectra using an integration time of only 0.5 s. Identification results obtained with this prototype are presented, based on a database of 65,144 Raman spectra acquired automatically on 48 bacterial strains belonging to 8 species.
Retinal optical coherence tomography at 1 μm with dynamic focus control and axial motion tracking
NASA Astrophysics Data System (ADS)
Cua, Michelle; Lee, Sujin; Miao, Dongkai; Ju, Myeong Jin; Mackenzie, Paul J.; Jian, Yifan; Sarunic, Marinko V.
2016-02-01
High-resolution optical coherence tomography (OCT) retinal imaging is important to noninvasively visualize the various retinal structures to aid in better understanding of the pathogenesis of vision-robbing diseases. However, conventional OCT systems have a trade-off between lateral resolution and depth-of-focus. In this report, we present the development of a focus-stacking OCT system with automatic focus optimization for high-resolution, extended-focal-range clinical retinal imaging by incorporating a variable-focus liquid lens into the sample arm optics. Retinal layer tracking and selection was performed using a graphics processing unit accelerated processing platform for focus optimization, providing real-time layer-specific en face visualization. After optimization, multiple volumes focused at different depths were acquired, registered, and stitched together to yield a single, high-resolution focus-stacked dataset. Using this system, we show high-resolution images of the retina and optic nerve head, from which we extracted clinically relevant parameters such as the nerve fiber layer thickness and lamina cribrosa microarchitecture.
[Preparation procedures of anti-complementary polysaccharides from Houttuynia cordata].
Zhang, Juanjuan; Lu, Yan; Chen, Daofeng
2012-07-01
To establish and optimize the preparation procedures for the anti-complementary polysaccharides from Houttuynia cordata. Based on the yield and anti-complementary activity in vitro, the conditions of the extraction and alcohol precipitation processes were optimized by orthogonal tests. The optimal deproteinization condition was determined according to the amounts of protein removed and polysaccharide retained. The best decoloring method was also optimized by orthogonal experimental design. The optimized preparation procedures are as follows: extract the coarse powder 3 times with a 50-fold volume of water at 90 °C for 2 hours each time; combine the extracts and concentrate appropriately, to the equivalent of 0.12 g of H. cordata per milliliter. Add a 4-fold volume of 90% ethanol to the extract, allow it to stand for 24 hours to precipitate completely, filter, and wash the precipitate successively with anhydrous alcohol, acetone, and anhydrous ether. Dissolve the residue in water and add trichloroacetic acid (TCA) to a concentration of 20% to remove protein. Decolorize with activated carbon at a concentration of 3%, pH 3.0, and 50 °C for 50 min. The above procedures were tested 3 times, resulting in an average polysaccharide yield of 4.03% (RSD 0.96%), average polysaccharide and protein concentrations of 80.97% (RSD 1.5%) and 2.02% (RSD 2.3%), and an average CH50 of 0.079 g·L⁻¹ (RSD 3.6%). The established and optimized procedures are repeatable and reliable for preparing anti-complementary polysaccharides of high quality and activity from H. cordata.
Effects of annealing temperature on the H2-sensing properties of Pd-decorated WO3 nanorods
NASA Astrophysics Data System (ADS)
Lee, Sangmin; Lee, Woo Seok; Lee, Jae Kyung; Hyun, Soong Keun; Lee, Chongmu; Choi, Seungbok
2018-03-01
The temperature of the post-annealing treatment carried out after noble metal deposition onto semiconducting metal oxides (SMOs) must be carefully optimized to maximize the sensing performance of the metal-decorated SMO sensors. WO3 nanorods were synthesized by thermal evaporation of WO3 powders and decorated with Pd nanoparticles using a sol-gel method, followed by an annealing process. The effects of the annealing temperature on the hydrogen gas-sensing properties of the Pd-decorated WO3 nanorods were then examined; the optimal annealing temperature, leading to the highest response of the WO3 nanorod sensor to H2, was determined to be 600 °C. Post-annealing at 600 °C resulted in nanorods with the highest surface area-to-volume ratio, as well as in the optimal size and the largest number of deposited Pd nanoparticles, leading to the highest response and the shortest response/recovery times toward H2. The improved H2-sensing performance of the Pd-decorated WO3 nanorod sensor, compared to a sensor based on pristine WO3 nanorods, is attributed to the enhanced catalytic activity, increased surface area-to-volume ratio, and higher amounts of surface defects.
Prilezhaev dihydroxylation of olefins in a continuous flow process.
van den Broek, Bas A M W; Becker, René; Kössl, Florian; Delville, Mariëlle M E; Nieuwland, Pieter J; Koch, Kaspar; Rutjes, Floris P J T
2012-02-13
Epoxidation of both terminal and non-terminal olefins with peroxy acids is a well-established and powerful tool in a wide variety of chemical processes. In an additional step, the epoxide can be readily converted into the corresponding trans-diol. Batch-wise scale-up, however, is often troublesome because of the thermal instability and explosive character of the peroxy acids involved. This article describes the design and semi-automated optimization of a continuous flow process and subsequent scale-up to preparative production volumes in an intrinsically safe manner. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Foo, Brian; van der Schaar, Mihaela
2010-11-01
In this paper, we discuss distributed optimization techniques for configuring classifiers in a real-time, informationally-distributed stream mining system. Due to the large volume of streaming data, stream mining systems must often cope with overload, which can lead to poor performance and intolerable processing delay for real-time applications. Furthermore, optimizing over an entire system of classifiers is a difficult task since changing the filtering process at one classifier can impact both the feature values of data arriving at classifiers further downstream and thus, the classification performance achieved by an ensemble of classifiers, as well as the end-to-end processing delay. To address this problem, this paper makes three main contributions: 1) Based on classification and queuing theoretic models, we propose a utility metric that captures both the performance and the delay of a binary filtering classifier system. 2) We introduce a low-complexity framework for estimating the system utility by observing, estimating, and/or exchanging parameters between the inter-related classifiers deployed across the system. 3) We provide distributed algorithms to reconfigure the system, and analyze the algorithms based on their convergence properties, optimality, information exchange overhead, and rate of adaptation to non-stationary data sources. We provide results using different video classifier systems.
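The utility metric described in contribution 1 combines classification performance with end-to-end delay. A toy version for a chain of filtering classifiers, assuming an M/M/1 queue at each stage, is sketched below; the utility form, parameter names, and weight are illustrative assumptions, not the paper's exact metric.

```python
def mm1_delay(arrival, service):
    # Expected sojourn time of an M/M/1 queue; unbounded when the stage is overloaded.
    return float('inf') if arrival >= service else 1.0 / (service - arrival)

def chain_utility(arrival_rate, forward_fracs, service_rates, accuracy, w=0.1):
    # Hypothetical utility for a chain of binary filtering classifiers:
    # accuracy minus a weighted sum of per-stage queueing delays. Each classifier
    # forwards only a fraction of its input, thinning the downstream stream.
    lam, total_delay = arrival_rate, 0.0
    for frac, mu in zip(forward_fracs, service_rates):
        total_delay += mm1_delay(lam, mu)
        lam *= frac  # filtering reduces the arrival rate at the next classifier
    return accuracy - w * total_delay

u = chain_utility(5.0, [0.5, 0.5], [10.0, 10.0], accuracy=0.9)
```

This captures the trade-off the paper exploits: filtering more aggressively at upstream classifiers lowers downstream load (and delay) but can discard data that later classifiers need, reducing ensemble accuracy.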
Optimal trajectories for hypersonic launch vehicles
NASA Technical Reports Server (NTRS)
Ardema, Mark D.; Bowles, Jeffrey V.; Whittaker, Thomas
1994-01-01
In this paper, we derive a near-optimal guidance law for the ascent trajectory from earth surface to earth orbit of a hypersonic, dual-mode propulsion, lifting vehicle. Of interest are both the optimal flight path and the optimal operation of the propulsion system. The guidance law is developed from the energy-state approximation of the equations of motion. Because liquid hydrogen fueled hypersonic aircraft are volume sensitive, as well as weight sensitive, the cost functional is a weighted sum of fuel mass and volume; the weighting factor is chosen to minimize gross take-off weight for a given payload mass and volume in orbit.
NASA Astrophysics Data System (ADS)
Mangal, S. K.; Sharma, Vivek
2018-02-01
Magnetorheological (MR) fluids belong to a class of smart materials whose rheological characteristics, such as yield stress and viscosity, change in the presence of an applied magnetic field. In this paper, the MR fluid constituents are optimized with on-state yield stress as the response parameter. For this, 18 samples of MR fluid were prepared using an L-18 orthogonal array. These samples were experimentally tested on a developed and fabricated electromagnet setup. It was found that the yield stress of an MR fluid mainly depends on the volume fraction of the iron particles and the type of carrier fluid used. The optimal combination of input parameters was found to be mineral oil with a volume percentage of 67%, iron powder of 300 mesh size with a volume percentage of 32%, oleic acid with a volume percentage of 0.5%, and tetra-methyl-ammonium-hydroxide with a volume percentage of 0.7%. This optimal combination of input parameters gave a numerically predicted on-state yield stress of 48.197 kPa. An experimental confirmation test on the optimized MR fluid sample was then carried out, and the measured response matched the numerically obtained value well (less than 1% error).
Wu, Kang; Ding, Lijian; Zhu, Peng; Li, Shuang; He, Shan
2018-04-22
The aim of this study was to determine the cumulative effect of fermentation parameters and enhance the production of docosahexaenoic acid (DHA) by Thraustochytrium sp. ATCC 26185 using response surface methodology (RSM). Among the eight variables screened for effects on DHA production by Plackett-Burman design (PBD), the initial pH, inoculum volume, and fermentation volume were found to be most significant. The Box-Behnken design was applied to derive a statistical model for optimizing these three fermentation parameters for DHA production. The optimal parameters for maximum DHA production were an initial pH of 6.89, an inoculum volume of 4.16%, and a fermentation volume of 140.47 mL. The maximum yield of DHA production was 1.68 g/L, which was in agreement with the predicted value. An increase in DHA production was achieved by optimizing the initial pH, inoculum volume, and fermentation volume. This optimization strategy led to a significant increase in the amount of DHA produced, from 1.16 g/L to 1.68 g/L. Thraustochytrium sp. ATCC 26185 is a promising resource for microbial DHA production due to the high yield of DHA it produces and its capacity for large-scale fermentation.
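The Plackett-Burman screening step used above estimates each factor's main effect as the difference between the mean response at its high and low settings. A sketch with the standard 8-run cyclic design (up to seven two-level factors) follows; the response values are hypothetical, not the study's data.

```python
# Standard cyclic Plackett-Burman generator for an 8-run, 7-column design:
# cycle the generator row 7 times, then append the all-low row.
gen = [1, 1, 1, -1, 1, -1, -1]
design = [[gen[(j - i) % 7] for j in range(7)] for i in range(7)]
design.append([-1] * 7)

# Hypothetical screening responses (e.g. DHA titre, g/L) for the 8 runs.
y = [1.05, 1.32, 1.10, 1.41, 0.98, 1.25, 1.02, 0.95]

def main_effect(col):
    # mean response at the high level minus mean response at the low level
    hi = [yi for row, yi in zip(design, y) if row[col] == 1]
    lo = [yi for row, yi in zip(design, y) if row[col] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = [main_effect(c) for c in range(7)]
```

Factors whose effects stand out in magnitude (here, three of the seven columns in the real study) are carried forward into the Box-Behnken optimization.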
Wu, Lijie; Song, Ying; Hu, Mingzhu; Xu, Xu; Zhang, Hanqi; Yu, Aimin; Ma, Qiang; Wang, Ziming
2015-03-01
A simple and efficient integrated microwave processing system (IMPS) was assembled and validated for the first time for the extraction of organophosphorus pesticides from fresh vegetables. Two processes under microwave irradiation, dynamic microwave-assisted extraction (DMAE) and microwave-accelerated solvent elution (MASE), were integrated to simplify sample pretreatment. Extraction, separation, enrichment, and elution were finished in a single step. The organophosphorus pesticides were extracted from the fresh vegetables into hexane with DMAE, and the extract was then directly introduced into the enrichment column packed with activated carbon fiber (ACF). Subsequently, the organophosphorus pesticides trapped on the ACF were eluted with ethyl acetate under microwave irradiation. No further filtration or cleanup was required before analysis of the eluate by gas chromatography-mass spectrometry. Experimental parameters affecting extraction efficiency were investigated and optimized, including microwave output power, kind and volume of extraction solvent, extraction time, amount of sorbent, elution microwave power, kind and volume of elution solvent, and elution solvent flow rate. Under the optimized conditions, the recoveries were in the range of 71.5-105.2%, and the relative standard deviations were lower than 11.6%. The results show that the present method is a simple and effective sample preparation method for the determination of pesticides in solid samples. Copyright © 2014 Elsevier B.V. All rights reserved.
Lee, G H; Hur, W; Bremmon, C E; Flickinger, M C
1996-03-20
A simulation was developed based on experimental data obtained in a 14-L reactor to predict the growth and L-lysine accumulation kinetics, and the change in volume, of a large-scale (250 m³) Bacillus methanolicus methanol-based process. Homoserine auxotrophs of B. methanolicus MGA3 are unique methylotrophs because of their ability to secrete lysine during aerobic growth and threonine starvation at 50 °C. Dissolved methanol (100 mM), pH, dissolved oxygen tension (0.063 atm), and threonine levels were controlled to obtain threonine-limited conditions and high cell density (25 g dry cell weight/L) in a 14-L reactor. As a fed-batch process, the additions of neat methanol (fed on demand), threonine, and other nutrients cause the volume of the fermentation to increase and the final lysine concentration to decrease. In addition, water produced as a result of methanol metabolism contributes to the increase in reactor volume. A three-phase approach was used to predict the rate of change of culture volume based on carbon dioxide production and methanol consumption. This model was used for the evaluation of volume control strategies to optimize lysine productivity. A constant-volume reactor process with variable feeding and continuous removal of broth and cells (VF(cstr)) resulted in higher lysine productivity than a fed-batch process without volume control. This model predicts the variation in lysine productivity with changes in growth and in specific lysine productivity. Simple modifications of the model allow one to investigate other high-lysine-secreting strains with different growth and lysine productivity characteristics. Strain NOA2#13A5-2, which secretes lysine and other end-products, was modeled using both growth- and non-growth-associated lysine productivity. A modified version of this model was used to simulate the change in culture volume of another L-lysine-producing mutant (NOA2#13A52-8A66) with reduced secretion of end-products.
The modified simulation indicated that growth-associated production dominates in strain NOA2#13A52-8A66. (c) 1996 John Wiley & Sons, Inc.
System Engineering Concept Demonstration, Effort Summary. Volume 1
1992-12-01
involve only the system software, user frameworks and user tools. ...analysis, synthesis, optimization, conceptual design of Catalyst. The paper discusses the definition, design, test, and evaluation; operational concept... The conceptual requirements for the Process Model... This approach will allow system engineering practitioners to recognize and tailor the model.
Benefits Analysis of Past Projects. Volume 2. Individual Project Assessments.
1984-11-01
...a nineteenth century one which had been developed for the braiding of fire hoses. Project Results: The program revealed... was found for protecting the drilling and position-sensing optics from expelled metal particles. Process and work-material variables were optimized... HPT vane material. Hastelloy X is a nickel-chromium superalloy used in high-temperature sheet metal applications, such as combustion liners and
Impact of database quality in knowledge-based treatment planning for prostate cancer.
Wall, Phillip D H; Carver, Robert L; Fontenot, Jonas D
2018-03-13
This article investigates dose-volume prediction improvements in a common knowledge-based planning (KBP) method using a Pareto plan database compared with using a conventional, clinical plan database. Two plan databases were created using retrospective, anonymized data of 124 volumetric modulated arc therapy (VMAT) prostate cancer patients. The clinical plan database (CPD) contained planning data from each patient's clinically treated VMAT plan, which were manually optimized by various planners. The multicriteria optimization database (MCOD) contained Pareto-optimal plan data from VMAT plans created using a standardized multicriteria optimization protocol. Overlap volume histograms, incorporating fractional organ-at-risk volumes only within the treatment fields, were computed for each patient and used to match new patient anatomy to similar database patients. For each database patient, CPD and MCOD KBP predictions were generated for D10, D30, D50, D65, and D80 of the bladder and rectum in a leave-one-out manner. Prediction achievability was evaluated through a replanning study on a subset of 31 randomly selected database patients using the best KBP predictions, regardless of plan database origin, as planning goals. MCOD predictions were significantly lower than CPD predictions for all 5 bladder dose-volumes and rectum D50 (P = .004) and D65 (P < .001), whereas CPD predictions for rectum D10 (P = .005) and D30 (P < .001) were significantly less than MCOD predictions. KBP predictions were statistically achievable in the replans for all predicted dose-volumes, excluding D10 of bladder (P = .03) and rectum (P = .04). Compared with clinical plans, replans showed significant average reductions in Dmean for bladder (7.8 Gy; P < .001) and rectum (9.4 Gy; P < .001), while maintaining statistically similar planning target volume, femoral head, and penile bulb dose.
KBP dose-volume predictions derived from Pareto plans were more optimal overall than those resulting from manually optimized clinical plans, which significantly improved KBP-assisted plan quality. This work investigates how the plan quality of knowledge databases affects the performance and achievability of dose-volume predictions from a common knowledge-based planning approach for prostate cancer. Bladder and rectum dose-volume predictions derived from a database of standardized Pareto-optimal plans were compared with those derived from clinical plans manually designed by various planners. Dose-volume predictions from the Pareto plan database were significantly lower overall than those from the clinical plan database, without compromising achievability. Copyright © 2018 Elsevier Inc. All rights reserved.
Optimized volume models of earthquake-triggered landslides
Xu, Chong; Xu, Xiwei; Shen, Lingling; Yao, Qi; Tan, Xibin; Kang, Wenjun; Ma, Siyuan; Wu, Xiyan; Cai, Juntao; Gao, Mingxing; Li, Kang
2016-01-01
In this study, we proposed three optimized models for calculating the total volume of landslides triggered by the 2008 Wenchuan, China Mw 7.9 earthquake. First, we calculated the volume of each deposit of 1,415 landslides triggered by the quake based on pre- and post-quake DEMs at 20 m resolution. These samples were used to fit the conventional landslide “volume-area” power-law relationship and the 3 optimized models we proposed, respectively. Two data-fitting methods, i.e. log-transformed linear and original-data-based nonlinear least squares, were applied to the 4 models. Results show that original-data-based nonlinear least squares combined with an optimized model considering length, width, height, lithology, slope, peak ground acceleration, and slope aspect performs best. This model was subsequently applied to the database of landslides triggered by the quake, except for the two largest ones with known volumes. It indicates that the total volume of the 196,007 landslides is about 1.2 × 10¹⁰ m³ in deposit materials and 1 × 10¹⁰ m³ in source areas, respectively. The total volume estimated from the published relationship between earthquake magnitude and total landslide volume for individual earthquakes is much lower than that obtained in this study, which underlines the necessity of updating the power-law relationship. PMID:27404212
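The two fitting strategies compared above — linear least squares on log-transformed data versus nonlinear least squares on the original data — can be illustrated on synthetic data. The prefactor, exponent, and noise model below are assumptions for demonstration, not the paper's fitted values:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic landslide areas (m^2) and volumes obeying V = a * A**b with
# multiplicative log-normal noise (a_true, b_true are made-up values).
a_true, b_true = 0.05, 1.35
area = 10 ** rng.uniform(2, 5, 500)
volume = a_true * area ** b_true * rng.lognormal(0.0, 0.2, 500)

# Method 1: linear least squares on log-transformed data.
b_log, log10_a = np.polyfit(np.log10(area), np.log10(volume), 1)

# Method 2: nonlinear least squares on the original data, which weights
# the (volumetrically dominant) large landslides much more heavily.
(a_nl, b_nl), _ = curve_fit(lambda A, a, b: a * A ** b, area, volume,
                            p0=(0.1, 1.2), maxfev=10000)

# A modelled "total volume" of the inventory, as in the abstract.
total_volume = (a_nl * area ** b_nl).sum()
```

The nonlinear fit's emphasis on large events is one reason the two methods can yield different total-volume estimates for the same inventory.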
Dosimetric evaluation of total marrow irradiation using 2 different planning systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nalichowski, Adrian, E-mail: nalichoa@karmanos.org; Eagle, Don G.; Burmeister, Jay
This study compared 2 different treatment planning systems (TPSs) for quality and efficiency of total marrow irradiation (TMI) plans. The TPSs used in this study were VOxel-Less Optimization (VoLO) (Accuray Inc, Sunnyvale, CA) using helical dose delivery on a Tomotherapy Hi-Art treatment unit and Eclipse (Varian Medical Systems Inc, Palo Alto, CA) using volumetric modulated arc therapy (VMAT) dose delivery on a Varian iX treatment unit. A total dose of 1200 cGy was prescribed to cover 95% of the planning target volume (PTV). The plans were optimized and calculated based on a single CT data and structure set using the Alderson Rando phantom (The Phantom Laboratory, Salem, NY) and physician-contoured target and organ at risk (OAR) volumes. The OARs were lungs, heart, liver, kidneys, brain, and small bowel. The plans were evaluated based on plan quality, time to optimize the plan and calculate the dose, and beam-on time. The resulting mean and maximum doses to the PTV were 1268 and 1465 cGy for VoLO and 1284 and 1541 cGy for Eclipse, respectively. For 5 of 6 OAR structures the VoLO system achieved lower mean and D10 doses, with reductions ranging from 22% to 52% and 3% to 44%, respectively. Total computational times, including only optimization and dose calculation, were 0.9 hours for VoLO and 3.8 hours for Eclipse. These times do not include user-dependent target delineation and field setup. Both planning systems are capable of creating high-quality plans for total marrow irradiation. The VoLO planning system was able to achieve a more uniform dose distribution throughout the target volume and steeper dose fall-off, resulting in superior OAR sparing. VoLO's graphics processing unit (GPU)-based optimization and dose calculation algorithm also allowed much faster creation of TMI plans.
Optimization-based mesh correction with volume and convexity constraints
D'Elia, Marta; Ridzal, Denis; Peterson, Kara J.; ...
2016-02-24
In this study, we consider the problem of finding a mesh such that 1) it is the closest, with respect to a suitable metric, to a given source mesh having the same connectivity, and 2) the volumes of its cells match a set of prescribed positive values that are not necessarily equal to the cell volumes in the source mesh. This volume correction problem arises in important simulation contexts, such as satisfying a discrete geometric conservation law and solving transport equations by incremental remapping or similar semi-Lagrangian transport schemes. In this paper we formulate volume correction as a constrained optimization problem in which the distance to the source mesh defines an optimization objective, while the prescribed cell volumes, mesh validity and/or cell convexity specify the constraints. We solve this problem numerically using a sequential quadratic programming (SQP) method whose performance scales with the mesh size. To achieve scalable performance we develop a specialized multigrid-based preconditioner for optimality systems that arise in the application of the SQP method to the volume correction problem. Numerical examples illustrate the importance of volume correction, and showcase the accuracy, robustness and scalability of our approach.
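A minimal instance of the volume-correction problem can be posed directly as a constrained optimization and handed to an off-the-shelf SQP-type solver. This 1-D sketch (a 5-node mesh, with cell lengths standing in for cell volumes) uses SciPy's SLSQP rather than the authors' multigrid-preconditioned solver, and the mesh and prescribed volumes are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Source mesh: 5 nodes on [0, 1] with unequal cells; prescribe equal "volumes".
x_src = np.array([0.0, 0.2, 0.45, 0.7, 1.0])
v_target = np.array([0.25, 0.25, 0.25, 0.25])

def cell_volumes(interior):
    x = np.concatenate(([x_src[0]], interior, [x_src[-1]]))
    return np.diff(x)

res = minimize(
    lambda xi: np.sum((xi - x_src[1:-1]) ** 2),   # stay close to the source mesh
    x0=x_src[1:-1],                               # optimize interior nodes only
    method="SLSQP",                               # an SQP-type solver
    # Endpoints are fixed and the target volumes sum to the total length, so
    # the last cell's constraint is implied; impose only the first three.
    constraints={"type": "eq",
                 "fun": lambda xi: cell_volumes(xi)[:-1] - v_target[:-1]},
)
x_corrected = np.concatenate(([x_src[0]], res.x, [x_src[-1]]))
```

Dropping the redundant constraint keeps the equality system full rank, which matters for SQP solvers in general, not just for this toy case.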
NASA Astrophysics Data System (ADS)
Chairunnisak, A.; Arifin, B.; Sofyan, H.; Lubis, M. R.; Darmadi
2018-03-01
This research focuses on Chemical Oxygen Demand (COD) removal from palm oil mill effluent by electrocoagulation and electro-Fenton methods. Initially, the aqueous solution precipitates under acid conditions at a pH of about two. The study examines the degradation of palm oil mill effluent with Fe electrodes in a simple batch reactor, varying parameters such as voltage, NaCl electrolyte concentration, H2O2 volume and operation time. The resulting data were processed using response surface methodology coupled with a Box-Behnken design. The electrocoagulation method achieves an optimum COD reduction of 94.53% at an operating time of 39.28 minutes, 20 volts, and no added electrolyte. For the electro-Fenton process, experiments show that a voltage of 15.78 volts, an electrolyte concentration of 0.06 M and an H2O2 volume of 14.79 ml over 35.92 minutes yield 99.56% degradation. These results indicate that the electro-Fenton process degrades COD of palm oil mill effluent more effectively than the electrocoagulation process.
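The response-surface step — fitting a second-order model to designed experiments and locating its stationary point — can be sketched as follows. The two-factor grid and the hypothetical response are illustrative (a Box-Behnken design would use a specific subset of points plus centre runs), not the paper's data:

```python
import numpy as np

# Coded factor levels on a simple 3x3 grid for two factors.
x1, x2 = np.meshgrid([-1.0, 0.0, 1.0], [-1.0, 0.0, 1.0])
x1, x2 = x1.ravel(), x2.ravel()

# Hypothetical response with its maximum at x1 = 0.3, x2 = -0.2.
y = 95.0 - (x1 - 0.3) ** 2 - 2.0 * (x2 + 0.2) ** 2

# Fit the second-order response-surface model by least squares.
X = np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
b0, b1, b2, b11, b22, b12 = np.linalg.lstsq(X, y, rcond=None)[0]

# Stationary point: set the gradient of the fitted quadratic to zero.
A = np.array([[2 * b11, b12], [b12, 2 * b22]])
optimum = np.linalg.solve(A, np.array([-b1, -b2]))
```

The significant linear, quadratic, and interaction terms mentioned in these abstracts are exactly the columns of X; the optimum operating condition is the stationary point of the fitted surface (here a maximum, since both quadratic coefficients are negative).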
Tailoring nanoparticle designs to target cancer based on tumor pathophysiology
Sykes, Edward A.; Dai, Qin; Sarsons, Christopher D.; Chen, Juan; Rocheleau, Jonathan V.; Hwang, David M.; Zheng, Gang; Cramb, David T.; Rinker, Kristina D.; Chan, Warren C. W.
2016-01-01
Nanoparticles can provide significant improvements in the diagnosis and treatment of cancer. How nanoparticle size, shape, and surface chemistry can affect their accumulation, retention, and penetration in tumors remains heavily investigated, because such findings provide guiding principles for engineering optimal nanosystems for tumor targeting. Currently, the experimental focus has been on particle design and not the biological system. Here, we varied tumor volume to determine whether cancer pathophysiology can influence tumor accumulation and penetration of different sized nanoparticles. Monte Carlo simulations were also used to model the process of nanoparticle accumulation. We discovered that changes in pathophysiology associated with tumor volume can selectively change tumor uptake of nanoparticles of varying size. We further determine that nanoparticle retention within tumors depends on the frequency of interaction of particles with the perivascular extracellular matrix for smaller nanoparticles, whereas transport of larger nanomaterials is dominated by Brownian motion. These results reveal that nanoparticles can potentially be personalized according to a patient’s disease state to achieve optimal diagnostic and therapeutic outcomes. PMID:26884153
Near-Optimal Operation of Dual-Fuel Launch Vehicles
NASA Technical Reports Server (NTRS)
Ardema, M. D.; Chou, H. C.; Bowles, J. V.
1996-01-01
A near-optimal guidance law for the ascent trajectory from earth surface to earth orbit of a fully reusable single-stage-to-orbit pure rocket launch vehicle is derived. Of interest are both the optimal operation of the propulsion system and the optimal flight path. A methodology is developed to investigate the optimal throttle switching of dual-fuel engines. The method is based on selecting propulsion system modes and parameters that maximize a certain performance function. This function is derived from consideration of the energy-state model of the aircraft equations of motion. Because the density of liquid hydrogen is relatively low, the sensitivity to perturbations in volume needs to be taken into consideration as well as the sensitivity to weight. The cost functional is a weighted sum of fuel mass and volume; the weighting factor is chosen to minimize vehicle empty weight for a given payload mass and volume in orbit.
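The weighted-sum cost functional can be illustrated with a crude sketch: scan the hydrogen/kerosene mass split, evaluate fuel mass from the rocket equation and fuel volume from propellant densities, and pick the split minimizing J = m + k·V. All numbers and the mass-averaged-Isp shortcut are assumptions; the paper's energy-state formulation is far more detailed:

```python
import math

# Illustrative constants (assumptions, not the paper's vehicle model).
RHO_H2, RHO_RP1 = 71.0, 810.0        # propellant densities, kg/m^3
ISP_H2, ISP_RP1 = 450.0, 350.0       # vacuum specific impulses, s
DV, G0, M_DRY = 9000.0, 9.81, 5.0e4  # delta-v (m/s), gravity, dry mass (kg)
K_VOLUME = 60.0                      # weighting: mass penalty per m^3 of tank

def fuel_mass_and_volume(f_h2):
    """Propellant mass and volume for a hydrogen mass fraction f_h2,
    using the rocket equation with a crude mass-averaged Isp."""
    isp = f_h2 * ISP_H2 + (1.0 - f_h2) * ISP_RP1
    m = M_DRY * (math.exp(DV / (G0 * isp)) - 1.0)
    v = m * (f_h2 / RHO_H2 + (1.0 - f_h2) / RHO_RP1)
    return m, v

def cost(f_h2):
    m, v = fuel_mass_and_volume(f_h2)
    return m + K_VOLUME * v          # J = fuel mass + k * fuel volume

best_split = min((f / 100.0 for f in range(101)), key=cost)
```

The interesting behaviour is in K_VOLUME: a small value favours the high-Isp but bulky hydrogen, while a large value penalizes its low density, which is exactly the trade the dual-fuel throttle-switching study formalizes.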
Spatial considerations during cryopreservation of a large volume sample.
Kilbride, Peter; Lamb, Stephen; Milne, Stuart; Gibbons, Stephanie; Erro, Eloy; Bundy, James; Selden, Clare; Fuller, Barry; Morris, John
2016-08-01
There have been relatively few studies on the implications of the physical conditions experienced by cells during large volume (litres) cryopreservation - most studies have focused on the problem of cryopreservation of smaller volumes, typically up to 2 ml. This study explores the effects of ice growth by progressive solidification, generally seen during larger-scale cryopreservation, on encapsulated liver hepatocyte spheroids, and it develops a method to reliably sample different regions across the frozen cores of samples experiencing progressive solidification. These issues are examined in the context of a Bioartificial Liver Device, which requires cryopreservation of a 2 L volume in a strict cylindrical geometry for optimal clinical delivery. Progressive solidification cannot be avoided in this arrangement. In such a system, optimal cryoprotectant concentrations and cooling rates are known; however, applying these parameters to a large volume is challenging due to the thermal mass and subsequent thermal lag, and the specific impact of this on the cryopreservation outcome must be established. Under conditions of progressive solidification, the spatial location of encapsulated liver spheroids had a strong impact on post-thaw recovery. Cells in the areas first and last to solidify demonstrated significantly impaired post-thaw function, whereas areas solidifying through the majority of the process exhibited better post-thaw outcomes. Samples in which the ice thawed more rapidly also showed greater viability 24 h post-thaw than slowly thawed samples (75.7 ± 3.9% vs 62.0 ± 7.2%, respectively). These findings have implications for the cryopreservation of large volumes with a rigid shape and for the cryopreservation of a Bioartificial Liver Device. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Development of fire shutters based on numerical optimizations
NASA Astrophysics Data System (ADS)
Novak, Ondrej; Kulhavy, Petr; Martinec, Tomas; Petru, Michal; Srb, Pavel
2018-06-01
This article deals with a prototype concept, a real experiment and a numerical simulation of a layered industrial fire shutter based on new insulating composite materials. The real fire shutter was developed and optimized in the laboratory and subsequently tested in a certified test room. A simulation of the whole concept was carried out as a non-premixed combustion process in the commercial finite-volume software PyroSim. The combustion model, based on a stoichiometrically defined gas mixture and the tested layered samples, showed good agreement with the experimental results, i.e. the thermal distribution inside the shutter and the heat release rate through the sample.
Automated geometric optimization for robotic HIFU treatment of liver tumors.
Williamson, Tom; Everitt, Scott; Chauhan, Sunita
2018-05-01
High intensity focused ultrasound (HIFU) represents a non-invasive method for the destruction of cancerous tissue within the body. Heating of targeted tissue by focused ultrasound transducers results in the creation of ellipsoidal lesions at the target site, the locations of which can have a significant impact on treatment outcomes. Towards this end, this work describes a method for the optimization of lesion positions within arbitrary tumors, with specific anatomical constraints. A force-based optimization framework was extended to the case of arbitrary tumor position and constrained orientation. Analysis of the approximate reachable treatment volume for the specific case of treatment of liver tumors was performed based on four transducer configurations and constraint conditions derived. Evaluation was completed utilizing simplified spherical and ellipsoidal tumor models and randomly generated tumor volumes. The total volume treated, lesion overlap and healthy tissue ablated was evaluated. Two evaluation scenarios were defined and optimized treatment plans assessed. The optimization framework resulted in improvements of up to 10% in tumor volume treated, and reductions of up to 20% in healthy tissue ablated as compared to the standard lesion rastering approach. Generation of optimized plans proved feasible for both sub- and intercostally located tumors. This work describes an optimized method for the planning of lesion positions during HIFU treatment of liver tumors. The approach allows the determination of optimal lesion locations and orientations, and can be applied to arbitrary tumor shapes and sizes. Copyright © 2018 Elsevier Ltd. All rights reserved.
Distributed shared memory for roaming large volumes.
Castanié, Laurent; Mion, Christophe; Cavin, Xavier; Lévy, Bruno
2006-01-01
We present a cluster-based volume rendering system for roaming very large volumes. This system allows a gigabyte-sized probe to be moved inside a total volume of several tens or hundreds of gigabytes in real time. While the size of the probe is limited by the total amount of texture memory on the cluster, the size of the total data set has no theoretical limit. The cluster is used as a distributed graphics processing unit that both aggregates graphics power and graphics memory. A hardware-accelerated volume renderer runs in parallel on the cluster nodes and the final image compositing is implemented using a pipelined sort-last rendering algorithm. Meanwhile, volume bricking and volume paging allow efficient data caching. On each rendering node, a distributed hierarchical cache system implements a global software-based distributed shared memory on the cluster. In case of a cache miss, this system first checks page residency on the other cluster nodes instead of directly accessing local disks. Using two Gigabit Ethernet network interfaces per node, we accelerate data fetching by a factor of 4 compared to directly accessing local disks. The system also implements asynchronous disk access and texture loading, which makes it possible to overlap data loading, volume slicing and rendering for optimal volume roaming.
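The hierarchical lookup described above — local texture cache, then page residency on peer nodes, then local disk — can be sketched with a toy LRU brick cache. Class and tier names are assumptions for illustration, not the system's API:

```python
from collections import OrderedDict

class BrickCache:
    """Toy sketch of a hierarchical brick cache: local LRU memory backed
    first by peer nodes' caches, then by (slow) disk storage."""
    def __init__(self, capacity, peers, disk):
        self.capacity = capacity
        self.local = OrderedDict()    # brick_id -> data, in LRU order
        self.peers = peers            # other nodes' caches to query first
        self.disk = disk              # brick_id -> data, slowest tier
        self.stats = {"local": 0, "peer": 0, "disk": 0}

    def fetch(self, brick_id):
        if brick_id in self.local:                 # 1. local hit
            self.local.move_to_end(brick_id)
            self.stats["local"] += 1
            return self.local[brick_id]
        for peer in self.peers:                    # 2. check peer residency
            if brick_id in peer.local:
                self.stats["peer"] += 1
                return self._insert(brick_id, peer.local[brick_id])
        self.stats["disk"] += 1                    # 3. fall back to disk
        return self._insert(brick_id, self.disk[brick_id])

    def _insert(self, brick_id, data):
        self.local[brick_id] = data
        if len(self.local) > self.capacity:
            self.local.popitem(last=False)         # evict least recently used
        return data

disk = {i: f"brick-{i}" for i in range(8)}
node_a = BrickCache(2, [], disk)
node_b = BrickCache(2, [node_a], disk)
node_a.fetch(3)    # local miss -> disk read
node_b.fetch(3)    # local miss -> served from node_a, avoiding the disk
```

The payoff mirrors the paper's measurement: a peer fetch over the interconnect replaces a much slower disk access whenever any node already holds the brick.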
Purdie, Thomas G; Dinniwell, Robert E; Letourneau, Daniel; Hill, Christine; Sharpe, Michael B
2011-10-01
To present an automated technique for two-field tangential breast intensity-modulated radiotherapy (IMRT) treatment planning. A total of 158 patients with Stage 0, I, and II breast cancer treated using whole-breast IMRT were retrospectively replanned using automated treatment planning tools. The tools developed are integrated into the existing clinical treatment planning system (Pinnacle³) and are designed to perform the manual volume delineation, beam placement, and IMRT treatment planning steps carried out by the treatment planning radiation therapist. The automated algorithm, using only the radio-opaque markers placed at CT simulation as inputs, optimizes the tangential beam parameters to geometrically minimize the amount of lung and heart treated while covering the whole-breast volume. The IMRT parameters are optimized according to the automatically delineated whole-breast volume. The mean time to generate a complete treatment plan was 6 min 50 s ± 1 min 12 s. For the automated plans, 157 of 158 plans (99%) were deemed clinically acceptable, and 138 of 158 plans (87%) were deemed clinically improved or equal to the corresponding clinical plan when reviewed in a randomized, double-blinded study by one experienced breast radiation oncologist. In addition, the automated plans were overall dosimetrically equivalent to the clinical plans when scored for target coverage and lung and heart doses. We have developed robust and efficient automated tools for fully inverse-planned tangential breast IMRT planning that can be readily integrated into clinical practice. The tools produce clinically acceptable plans using only the common anatomic landmarks from the CT simulation process as an input. We anticipate the tools will improve patient access to high-quality IMRT treatment by simplifying the planning process and will reduce the effort and cost of incorporating more advanced planning into clinical practice. Crown Copyright © 2011. Published by Elsevier Inc. All rights reserved.
Viscous Aerodynamic Shape Optimization with Installed Propulsion Effects
NASA Technical Reports Server (NTRS)
Heath, Christopher M.; Seidel, Jonathan A.; Rallabhandi, Sriram K.
2017-01-01
Aerodynamic shape optimization is demonstrated to tailor the under-track pressure signature of a conceptual low-boom supersonic aircraft. Primarily, the optimization reduces nearfield pressure waveforms induced by propulsion integration effects. For computational efficiency, gradient-based optimization is used and coupled to the discrete adjoint formulation of the Reynolds-averaged Navier-Stokes equations. The engine outer nacelle, nozzle, and vertical tail fairing are axisymmetrically parameterized, while the horizontal tail is shaped using a wing-based parameterization. Overall, 48 design variables are coupled to the geometry and used to deform the outer mold line. During the design process, an inequality drag constraint is enforced to avoid a major compromise in aerodynamic performance. Linear elastic mesh morphing is used to deform volume grids between design iterations. The optimization is performed at Mach 1.6 cruise, assuming standard-day atmospheric conditions at 51,707 ft. To reduce uncertainty, a coupled thermodynamic engine cycle model is employed that captures installed inlet performance effects on engine operation.
Chapdelaine, Isabelle; de Roij van Zuijdewijn, Camiel L.M.; Mostovaya, Ira M.; Lévesque, Renée; Davenport, Andrew; Blankestijn, Peter J.; Wanner, Christoph; Nubé, Menso J.; Grooteman, Muriel P.C.
2015-01-01
In post-dilution online haemodiafiltration (ol-HDF), a relationship has been demonstrated between the magnitude of the convection volume and survival. However, to achieve high convection volumes (>22 L per session), a detailed understanding of its determining factors is highly desirable. This manuscript summarizes practical problems and pitfalls that were encountered during the quest for high convection volumes. Specifically, it addresses issues such as type of vascular access, needles, blood flow rate, recirculation, filtration fraction, anticoagulation and dialysers. Finally, five of the main HDF systems in Europe are briefly described as far as HDF prescription and optimization of the convection volume are concerned. PMID:25815176
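As a back-of-envelope check of the >22 L target, the per-session convection volume in post-dilution ol-HDF is roughly the filtration flow integrated over the session, Vconv ≈ Qb × FF × t, plus net fluid removal. A sketch with illustrative values (FF is taken relative to blood flow here; conventions vary):

```python
def convection_volume_l(qb_ml_min, filtration_fraction, hours, fluid_removal_l=0.0):
    """Approximate per-session convection volume in litres: substitution
    volume (blood flow x filtration fraction x time) plus net fluid removal."""
    substitution_l = qb_ml_min * filtration_fraction * hours * 60.0 / 1000.0
    return substitution_l + fluid_removal_l

# Blood flow 400 ml/min at a 25% filtration fraction over a 4 h session.
v = convection_volume_l(qb_ml_min=400, filtration_fraction=0.25, hours=4)
high_volume = v > 22.0   # the threshold the abstract associates with survival
```

The arithmetic makes the abstract's list of determinants concrete: at FF = 0.25, blood flows much below ~370 ml/min (or shortened sessions) cannot reach 22 L without raising the filtration fraction.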
Mahamat, Adoum H; Narducci, Frank A; Schwiegerling, James
2016-03-01
Volume-phase holographic (VPH) gratings have been designed for use in many areas of science and technology, such as optical communication, optical imaging, and astronomy. In this paper, the design of a volume-phase holographic grating, simultaneously optimized to operate in the red, green, and blue wavelengths, is presented along with a study of its fabrication tolerances. The grating is optimized to produce 98% efficiency at λ=532 nm and at least 75% efficiency in the region between 400 and 700 nm, when the incident light is unpolarized. The optimization is done for recording in dichromated gelatin with a thickness of 12 μm, an average refractive index of 1.5, and a refractive index modulation of 0.022.
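The quoted efficiencies can be sanity-checked with Kogelnik's coupled-wave approximation for an unslanted transmission grating at Bragg incidence, η = sin²(πΔn·d / (λ cos θ)). This simplified s-polarization form is an assumption for illustration, not the authors' full design model (which handles unpolarized light across 400-700 nm):

```python
import math

def kogelnik_efficiency(wavelength_m, d_m, delta_n, theta_rad=0.0):
    """First-order diffraction efficiency of an unslanted transmission
    volume grating at Bragg incidence (Kogelnik coupled-wave theory,
    s-polarization): eta = sin^2(pi * dn * d / (lambda * cos(theta)))."""
    nu = math.pi * delta_n * d_m / (wavelength_m * math.cos(theta_rad))
    return math.sin(nu) ** 2

# Parameters from the abstract: 12 um DCG layer, index modulation 0.022.
eta_532 = kogelnik_efficiency(532e-9, 12e-6, 0.022)
```

With the abstract's thickness and index modulation, this formula already predicts near-unity efficiency around 532 nm, consistent with the reported 98% design point.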
NASA Technical Reports Server (NTRS)
Hand, David W.; Crittenden, John C.; Ali, Anisa N.; Bulloch, John L.; Hokanson, David R.; Parrem, David L.
1996-01-01
This thesis includes the development and verification of an adsorption model for analysis and optimization of the adsorption processes within the International Space Station multifiltration beds. The fixed bed adsorption model includes multicomponent equilibrium and both external and intraparticle mass transfer resistances. Single solute isotherm parameters were used in the multicomponent equilibrium description to predict the competitive adsorption interactions occurring during the adsorption process. The multicomponent equilibrium description used the Fictive Component Analysis to describe adsorption in unknown background matrices. Multicomponent isotherms were used to validate the multicomponent equilibrium description. Column studies were used to develop and validate external and intraparticle mass transfer parameter correlations for compounds of interest. The fixed bed model was verified using a shower and handwash ersatz water which served as a surrogate to the actual shower and handwash wastewater.
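Single-solute isotherm parameters of the kind fed into such a multicomponent fixed-bed model are commonly obtained by fitting an isotherm — for example the Freundlich form q = K·C^(1/n) — to batch equilibrium data. The data and parameter values below are illustrative assumptions, not from the thesis:

```python
import numpy as np
from scipy.optimize import curve_fit

def freundlich(c, K, inv_n):
    """Freundlich isotherm: equilibrium loading q = K * C**(1/n)."""
    return K * c ** inv_n

# Synthetic batch equilibrium data: generated from K = 4.5, 1/n = 0.48
# (mg/g per (mg/L)^(1/n)) with a few percent of scatter.
c_eq = np.array([0.05, 0.1, 0.5, 1.0, 5.0, 10.0])                  # mg/L
q_eq = 4.5 * c_eq ** 0.48 * np.array([1.02, 0.97, 1.05, 0.99, 1.01, 0.98])

(K_fit, inv_n_fit), _ = curve_fit(freundlich, c_eq, q_eq, p0=(3.0, 0.5))
```

In the multicomponent setting described above, such single-solute parameters become the inputs to a competitive model (e.g., ideal adsorbed solution theory) rather than being used in isolation.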
Overview of CMOS process and design options for image sensor dedicated to space applications
NASA Astrophysics Data System (ADS)
Martin-Gonthier, P.; Magnan, P.; Corbiere, F.
2005-10-01
With the growth of huge-volume markets (mobile phones, digital cameras, etc.), CMOS technologies for image sensors have improved significantly. New process flows have appeared that optimize parameters such as quantum efficiency, dark current, and conversion gain. Space applications can of course benefit from these improvements. To illustrate this evolution, this paper reports results from three technologies that have been evaluated with test vehicles composed of several sub-arrays designed with space applications as the target. These three technologies are a standard, an improved, and a sensor-optimized CMOS process in the 0.35 μm generation. Measurements focus on quantum efficiency, dark current, conversion gain and noise. Other measurements, such as the Modulation Transfer Function (MTF) and crosstalk, are described in [1]. The results have been compared and three categories of CMOS process for image sensors are identified. Radiation tolerance has also been studied for the improved CMOS process, with the aim of hardening the imager by design. Results at 4, 15, 25 and 50 krad demonstrate good ionizing-dose radiation tolerance when specific design techniques are applied.
Spacelab user implementation assessment study. Volume 1: Concept development and evaluation
NASA Technical Reports Server (NTRS)
1975-01-01
The total matrix of alternate Spacelab processing concepts and the rejection rationale utilized to reduce the matrix of 243 alternates to the final candidate processing concepts are developed. The work breakdown structure used for the systematic estimation and compilation of integration and checkout resources is presented along with descriptors of each element. Program models are provided of the space transportation system, the Spacelab, the orbiter, and the ATL that were used as the basis for the study trades, analyses, and optimizations. Resource requirements for all processing concepts are summarized along with the optimizations of the processing concepts. Concept evaluations including flight-rate sensitivities of the GSE, facilities, Spacelab hardware elements, and personnel are delineated. An analysis is presented of the applicability of the candidate concepts to potential spacelab users. The impact of the use of the western test range as an orbiter/spacelab launch site on the candidate processing concepts is evaluated. An assessment of the geographical co-location of experiment, Spacelab, and orbiter-cargo integration is included. Ownership options of the support module/system igloo are discussed.
User's manual for the BNW-II optimization code for dry/wet-cooled power plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Braun, D.J.; Bamberger, J.A.; Braun, D.J.
1978-05-01
This volume provides a listing of the BNW-II dry/wet ammonia heat rejection optimization code and is an appendix to Volume I which gives a narrative description of the code's algorithms as well as logic, input and output information.
Optimal trajectories for hypersonic launch vehicles
NASA Technical Reports Server (NTRS)
Ardema, Mark D.; Bowles, Jeffrey V.; Whittaker, Thomas
1992-01-01
In this paper, we derive a near-optimal guidance law for the ascent trajectory from Earth surface to Earth orbit of a hypersonic, dual-mode propulsion, lifting vehicle. Of interest are both the optimal flight path and the optimal operation of the propulsion system. The guidance law is developed from the energy-state approximation of the equations of motion. The performance objective is a weighted sum of fuel mass and volume, with the weighting factor selected to give minimum gross take-off weight for a specific payload mass and volume.
Şakıyan, Özge
2015-05-01
The aim of the present work is to optimize the formulation of a functional cake (soy cake) to be baked in an infrared-microwave combination oven. Response surface methodology was utilized for this optimization, and the processing conditions of combination baking were optimized as well. The independent variables were the baking time (8, 9, 10 min), the soy flour concentration (30, 40, 50%) and the DATEM (diacetyltartaric acid esters of monoglycerides) concentration (0.4, 0.6 and 0.8%). The quality parameters examined in the study were specific volume, weight loss, total color change and firmness of the cake samples. The results were analyzed by multiple regression, and the significant linear, quadratic, and interaction terms were used in a second-order mathematical model. The optimum baking time, soy flour concentration and DATEM concentration were found to be 9.5 min, 30% and 0.72%, respectively. The corresponding responses at the optimum point were comparable with those of conventionally baked soy cakes, so high-quality soy cakes can be produced in a very short time using an infrared-microwave combination oven.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fong, Erika J.; Huang, Chao; Hamilton, Julie
Here, a major advantage of microfluidic devices is the ability to manipulate small sample volumes, thus reducing reagent waste and preserving precious sample. However, to achieve robust sample manipulation it is necessary to address device integration with the macroscale environment. To realize repeatable, sensitive particle separation with microfluidic devices, this protocol presents a complete automated and integrated microfluidic platform that enables precise processing of 0.15-1.5 ml samples using microfluidic devices. Important aspects of this system include modular device layout and robust fixtures resulting in reliable and flexible world-to-chip connections, and fully automated fluid handling which accomplishes closed-loop sample collection, system cleaning and priming steps to ensure repeatable operation. Different microfluidic devices can be used interchangeably with this architecture. Here we incorporate an acoustofluidic device, detail its characterization, performance optimization, and demonstrate its use for size-separation of biological samples. By using real-time feedback during separation experiments, sample collection is optimized to conserve and concentrate sample. Although requiring the integration of multiple pieces of equipment, advantages of this architecture include the ability to process unknown samples with no additional system optimization, ease of device replacement, and precise, robust sample processing.
2006-11-26
with controlled micro- and nanostructure for highly selective, high-sensitivity assays. The process was modeled and a procedure for fabricating SERS... We proved the feasibility of the technique and... films templated by colloidal crystals. The control over the film structure allowed optimizing their performance for potential sensor applications.
Improving plan quality for prostate volumetric-modulated arc therapy.
Wright, Katrina; Ferrari-Anderson, Janet; Barry, Tamara; Bernard, Anne; Brown, Elizabeth; Lehman, Margot; Pryor, David
2017-01-01
We critically evaluated the quality and consistency of volumetric-modulated arc therapy (VMAT) prostate planning at a single institution to quantify objective measures for plan quality and establish clear guidelines for plan evaluation and quality assurance. A retrospective analysis was conducted on 34 plans generated on the Pinnacle 3 version 9.4 and 9.8 treatment planning system to deliver 78 Gy in 39 fractions to the prostate only using VMAT. Data were collected on contoured structure volumes, overlaps and expansions, planning target volume (PTV) and organs at risk volumes and relationship, dose volume histogram, plan conformity, plan homogeneity, low-dose wash, and beam parameters. Standard descriptive statistics were used to describe the data. Despite a standardized planning protocol, we found variability was present in all steps of the planning process. Deviations from protocol contours by radiation oncologists and radiation therapists occurred in 12% and 50% of cases, respectively, and the number of optimization parameters ranged from 12 to 27 (median 17). This contributed to conflicts within the optimization process reflected by the mean composite objective value of 0.07 (range 0.01 to 0.44). Methods used to control low-intermediate dose wash were inconsistent. At the PTV rectum interface, the dose-gradient distance from the 74.1 Gy to 40 Gy isodose ranged from 0.6 cm to 2.0 cm (median 1.0 cm). Increasing collimator angle was associated with a decrease in monitor units and a single full 6 MV arc was sufficient for the majority of plans. A significant relationship was found between clinical target volume-rectum distance and rectal tolerances achieved. A linear relationship was determined between the PTV volume and volume of 40 Gy isodose. Objective values and composite objective values were useful in determining plan quality. 
Anatomic geometry and the overlap of structures have a measurable impact on the plan quality achieved for prostate patients treated with VMAT. By evaluating multiple planning variables, we have been able to determine important factors influencing plan quality and to develop predictive models for quality metrics that have been incorporated into our new protocol and will be tested and refined in future studies. Crown Copyright © 2017. Published by Elsevier Inc. All rights reserved.
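Several of the metrics above (D95%, dose homogeneity) are direct functionals of the dose grid. As a hedged illustration, not the authors' Pinnacle implementation, and with entirely synthetic voxel data, they can be computed as:

```python
import numpy as np

def dvh_metric(dose, mask, percent):
    """D_percent%: the dose received by at least `percent`% of the volume."""
    voxels = np.sort(dose[mask])[::-1]                  # descending dose
    idx = int(np.ceil(percent / 100.0 * voxels.size)) - 1
    return voxels[idx]

def homogeneity(dose, mask):
    """D5% - D95%: smaller means a more homogeneous target dose."""
    return dvh_metric(dose, mask, 5) - dvh_metric(dose, mask, 95)

# toy 78 Gy plan: near-uniform target dose with mild noise
rng = np.random.default_rng(0)
dose = rng.normal(78.0, 1.0, size=(20, 20, 20))
ptv = np.zeros(dose.shape, dtype=bool)
ptv[5:15, 5:15, 5:15] = True
d95 = dvh_metric(dose, ptv, 95)
```

The same sorted-voxel curve also yields the full dose-volume histogram if sampled over all dose levels.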
Furdová, Alena; Sramka, Miron; Thurzo, Andrej; Furdová, Adriana
2017-01-01
Objective To determine the use of a 3D printed model of an eye with an intraocular tumor for linear accelerator-based stereotactic radiosurgery. Methods Segmentation software (3D Slicer) created a virtual 3D model of the eye globe with the tumorous mass, based on tissue density from computed tomography and magnetic resonance imaging data. The virtual model was then processed in slicing software (Simplify3D®) and printed on a 3D printer using fused deposition modeling technology; the printing material was polylactic acid. Results In 2015, the stereotactic planning scheme was optimized with the help of a 3D printed model of the patient's eye with intraocular tumor. In the period 2001–2015, a group of 150 patients with uveal melanoma (139 choroidal melanoma and 11 ciliary body melanoma) were treated. The median tumor volume was 0.5 cm3 (0.2–1.6 cm3). The radiation dose was 35.0 Gy at 99% of the dose volume histogram. Conclusion The 3D printed model of the eye with tumor was helpful in optimizing the irradiation plan, which requires high accuracy in defining the targeted tumor mass and critical structures. PMID:28203052
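The density-based segmentation step can be illustrated with a toy sketch: threshold a synthetic scan to a tumor mask and compute the mask's volume. The intensity window and voxel size below are invented for illustration; this is not the 3D Slicer pipeline.

```python
import numpy as np

def mask_volume_cm3(volume, lo, hi, voxel_mm3):
    """Volume (cm^3) of all voxels whose intensity lies in [lo, hi]."""
    mask = (volume >= lo) & (volume <= hi)
    return mask.sum() * voxel_mm3 / 1000.0              # mm^3 -> cm^3

# synthetic scan: background 0, a 10x10x10-voxel "tumor" at intensity 60
scan = np.zeros((64, 64, 64))
scan[20:30, 20:30, 20:30] = 60.0
vol = mask_volume_cm3(scan, lo=50.0, hi=70.0, voxel_mm3=0.5)
```

The resulting mask is what a surface-extraction step would turn into a printable mesh.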
NASA Astrophysics Data System (ADS)
Piao, Linfeng; Park, Hyungmin; Jo, Chris
2016-11-01
We present a theoretical model of the recovery rates of platelets and white blood cells in the process of centrifugal separation of platelet-rich plasma (PRP). For the conditions used in practice, the separation process is modeled as one-dimensional particle sedimentation; a quasi-linear partial differential equation is derived based on kinematic-wave theory. This is solved to determine the positions of the supernatant-suspension and suspension-sediment interfaces, which are used to estimate the recovery rate of the plasma. While correcting Brown's hypothesis (1989), which claims that platelet recovery is linearly proportional to that of plasma, we propose a new correlation model for predicting platelet recovery as a function of the volume of whole blood, centrifugal acceleration and time. For a range of practical parameters, such as hematocrit, volume of whole blood and centrifugation (time and acceleration), the predicted recovery rate shows good agreement with available clinical data. We propose that this model be further used to optimize PRP preparation methods tailored to individual cases. Supported by a Grant (MPSS-CG-2016-02) through the Disaster and Safety Management Institute funded by the Ministry of Public Safety and Security of the Korean government.
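A minimal ingredient of such a sedimentation model, far simpler than the paper's kinematic-wave PDE, is that a particle's settling speed in a centrifuge grows linearly with radius, so a settling front recedes exponentially in time. A sketch with illustrative numbers only (`s` is a lumped sedimentation coefficient, not a value from the paper):

```python
import math

def interface_radius(r0, omega, s, t):
    """Front radius under dr/dt = s * omega**2 * r (exponential solution)."""
    return r0 * math.exp(s * omega ** 2 * t)

def spin_down_time(r0, r1, omega, s):
    """Time for the front to move from radius r0 to r1 (r1 > r0)."""
    return math.log(r1 / r0) / (s * omega ** 2)

# illustrative numbers: ~3000 rpm rotor (omega ~ 314 rad/s), front moving
# from 5 cm to 8 cm with a made-up lumped coefficient s = 1e-7
t_spin = spin_down_time(0.05, 0.08, 314.0, 1.0e-7)
r_end = interface_radius(0.05, 314.0, 1.0e-7, t_spin)
```

The full model additionally tracks hindered settling at finite hematocrit, which is what makes the governing equation quasi-linear.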
Multiobjective optimization of low impact development stormwater controls
NASA Astrophysics Data System (ADS)
Eckart, Kyle; McPhee, Zach; Bolisetti, Tirupati
2018-07-01
Green infrastructure such as Low Impact Development (LID) controls is being employed to manage urban stormwater and restore predevelopment hydrological conditions, besides improving stormwater runoff quality. Since runoff generation and infiltration processes are nonlinear, there is a need to identify optimal combinations of LID controls. A coupled optimization-simulation model was developed by linking the U.S. EPA Stormwater Management Model (SWMM) to the Borg Multiobjective Evolutionary Algorithm (Borg MOEA). The coupled model performs multiobjective optimization, using SWMM simulations to evaluate potential solutions to the optimization problem. The optimization-simulation tool was used to evaluate low impact development (LID) stormwater controls. A SWMM model was developed, calibrated, and validated for a sewershed in Windsor, Ontario, and LID stormwater controls were tested for three different return periods. LID implementation strategies were optimized using the optimization-simulation model for five different implementation scenarios for each of the three storm events, with the objectives of minimizing peak flow in the storm sewers, reducing total runoff, and minimizing cost. For the sewershed in Windsor, Ontario, the peak runoff and the total runoff volume were found to be reduced by 13% and 29%, respectively.
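The multiobjective selection step can be sketched without the Borg MOEA itself: a plain non-dominated (Pareto) filter over candidate LID scenarios. The objective triples below are invented for illustration; the Borg MOEA adds adaptive operators and epsilon-dominance on top of this basic idea.

```python
def dominates(a, b):
    """a dominates b when it is no worse in every objective (all minimized)
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# invented objective triples per scenario: (cost, peak flow, total runoff)
scenarios = {
    "no LID":       (0.0, 100.0, 100.0),
    "rain barrels": (10.0, 95.0, 90.0),
    "bioretention": (40.0, 80.0, 70.0),
    "full build":   (90.0, 78.0, 68.0),
    "bad mix":      (50.0, 90.0, 85.0),   # dominated by "bioretention"
}
front = pareto_front(list(scenarios.values()))
```

In the coupled model, each objective vector would come from a full SWMM simulation of the candidate LID allocation rather than from a lookup table.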
Sassa, Yuko; Taki, Yasuyuki; Takeuchi, Hikaru; Hashizume, Hiroshi; Asano, Michiko; Asano, Kohei; Wakabayashi, Akio; Kawashima, Ryuta
2012-05-01
The abilities to empathize and to systemize, two fundamental dimensions of cognitive style, are characterized by apparent individual differences. These abilities are typically measured using an empathizing quotient (EQ) and a systemizing quotient (SQ) questionnaire, respectively. The purpose of this study was to reveal any correlations between EQ and SQ scores and regional gray matter volumes in healthy children by applying voxel-based morphometry to magnetic resonance images. We collected MRIs of brain structure and administered children's versions of the EQ and SQ questionnaires (EQ-C and SQ-C, respectively) to 261 healthy children aged 5-15 years. Structural MRI data were segmented, normalized, and smoothed using an optimized voxel-based morphometric analysis. Next, we analyzed the correlation between regional gray matter volume and EQ-C and SQ-C scores adjusting for age, sex, and intracranial volume. The EQ-C scores showed significant positive correlations with the regional gray matter volumes of the left fronto-opercular and superior temporal cortices, including the precentral gyrus, the inferior frontal gyrus, the superior temporal gyrus, and the insula, which are functionally related to empathic processing. Additionally, SQ-C scores showed a significant negative correlation with the regional gray matter volume of the left posterior parietal cortex, which is functionally involved in selective attention processing. Our findings suggest that individual differences in cognitive style pertaining to empathizing or systemizing abilities could be explained by differences in the volume of brain structures that are functionally relevant to empathizing and systemizing. Copyright © 2012 Elsevier Inc. All rights reserved.
Design optimum frac jobs using virtual intelligence techniques
NASA Astrophysics Data System (ADS)
Mohaghegh, Shahab; Popa, Andrei; Ameri, Sam
2000-10-01
Designing optimal frac jobs is a complex and time-consuming process. It usually involves the use of a two- or three-dimensional computer model. For the computer models to perform as intended, a wealth of input data is required. The input data include the wellbore configuration and reservoir characteristics such as porosity, permeability, stress and thickness profiles of the pay layers as well as the overburden layers. Other essential information required for the design process includes fracturing fluid type and volume, proppant type and volume, injection rate, proppant concentration and the frac job schedule. Some parameters, such as fluid and proppant types, have discrete possible choices. Other parameters, such as fluid and proppant volumes, assume values from within a range of minimum and maximum values. A potential frac design for a particular pay zone is a combination of all of these parameters. Finding the optimum combination is not a trivial process; it usually requires an experienced engineer and a considerable amount of time to tune the parameters in order to achieve a desirable outcome. This paper introduces a new methodology that integrates two virtual intelligence techniques, namely artificial neural networks and genetic algorithms, to automate and simplify the optimum frac job design process. This methodology requires little input from the engineer beyond the reservoir characterization and wellbore configuration. The software tool developed on the basis of this methodology uses the reservoir characteristics and an optimization criterion indicated by the engineer, for example a certain propped frac length, and provides the details of the optimum frac design that will meet that criterion. An ensemble of neural networks is trained to mimic the two- or three-dimensional frac simulator. Once successfully trained, these networks are capable of providing instantaneous results in response to any set of input parameters.
These networks are then used as the fitness function for a genetic algorithm routine that searches for the best combination of design parameters for the frac job. The genetic algorithm searches through the entire solution space and identifies the optimal combination of parameters to be used in the design process. Considering the complexity of this task, the methodology converges relatively fast, providing the engineer with several near-optimum scenarios for the frac job design. These scenarios, which can be obtained in just a minute or two, are valuable starting points for the design job and save hours of runs on the simulator.
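The surrogate-plus-genetic-algorithm loop can be sketched as follows, with a toy quadratic standing in for the trained neural-network ensemble and a bare-bones GA. The population size, mutation scale and the two normalized design parameters are all invented for illustration.

```python
import random

random.seed(1)

def surrogate(x):
    """Stand-in for the trained neural-network ensemble: predicted design
    quality over two normalized parameters in [0, 1] (toy quadratic with
    its best value at fluid = 0.7, proppant = 0.4)."""
    fluid, proppant = x
    return -(fluid - 0.7) ** 2 - (proppant - 0.4) ** 2

def evolve(fitness, pop_size=40, generations=60, mut=0.1):
    """Elitist GA: keep the top half, breed the rest by blend crossover
    plus Gaussian mutation, clamped to the [0, 1] parameter range."""
    pop = [[random.random(), random.random()] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(u + v) / 2 + random.gauss(0.0, mut) for u, v in zip(a, b)]
            children.append([min(1.0, max(0.0, c)) for c in child])
        pop = parents + children
    return max(pop, key=fitness)

best = evolve(surrogate)
```

Because the surrogate evaluates instantly, the GA can afford thousands of fitness calls that would be prohibitive against the full frac simulator.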
Liu, Wei; Schild, Steven E.; Chang, Joe Y.; Liao, Zhongxing; Chang, Yu-Hui; Wen, Zhifei; Shen, Jiajian; Stoker, Joshua B.; Ding, Xiaoning; Hu, Yanle; Sahoo, Narayan; Herman, Michael G.; Vargas, Carlos; Keole, Sameer; Wong, William; Bues, Martin
2015-01-01
Background To compare the impact of uncertainties and the interplay effect on 3D and 4D robustly optimized intensity-modulated proton therapy (IMPT) plans for lung cancer in an exploratory methodology study. Methods IMPT plans were created for 11 non-randomly selected non-small-cell lung cancer (NSCLC) cases: 3D robustly optimized plans on average CTs, with the internal gross tumor volume density overridden, to irradiate the internal target volume; and 4D robustly optimized plans on 4D CTs to irradiate the clinical target volume (CTV). Regular fractionation (66 Gy[RBE] in 33 fractions) was considered. In 4D optimization, the CTV of individual phases received non-uniform doses to achieve a uniform cumulative dose. Root-mean-square-dose volume histograms (RVHs) measured the sensitivity of the dose to uncertainties, and the areas under the RVH curves (AUCs) were used to evaluate plan robustness. Dose evaluation software modeled time-dependent spot delivery to incorporate the interplay effect, with randomized starting phases of each field per fraction. Dose-volume histogram indices comparing CTV coverage, homogeneity, and normal tissue sparing were evaluated using the Wilcoxon signed-rank test. Results 4D robust optimization led to smaller AUC for the CTV (14.26 vs. 18.61, p=0.001), better CTV coverage (Gy[RBE]) [D95% CTV: 60.6 vs. 55.2, p=0.001], and better CTV homogeneity [D5%–D95% CTV: 10.3 vs. 17.7, p=0.002] in the face of uncertainties. With the interplay effect considered, 4D robust optimization produced plans with better target coverage [D95% CTV: 64.5 vs. 63.8, p=0.0068], comparable target homogeneity, and comparable normal tissue protection. The benefits of 4D robust optimization were most obvious for the 2 typical stage III lung cancer patients.
Conclusions Our exploratory methodology study showed that, compared with 3D robust optimization, 4D robust optimization produced significantly more robust and interplay-effect-resistant plans for the targets, with comparable dose distributions for normal tissues. A further study with a larger and more realistic patient population is warranted to generalize the conclusions. PMID:26725727
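The RVH robustness score can be illustrated directly: the area under the survival curve of per-voxel RMS dose deviations equals the mean RMS deviation over the structure. A hedged sketch with synthetic dose arrays (not clinical data, and not the authors' evaluation software):

```python
import numpy as np

def rvh_auc(nominal, perturbed):
    """Robustness score from perturbed dose distributions.

    `perturbed` is a list of dose arrays, one per uncertainty scenario.
    The RVH plots, for each level d, the volume fraction whose per-voxel
    RMS dose deviation is >= d; the area under that survival curve is
    simply the mean RMS deviation, and smaller means more robust."""
    devs = np.stack([p - nominal for p in perturbed])
    rms = np.sqrt((devs ** 2).mean(axis=0))
    return float(rms.mean())

# synthetic check: +/-1 Gy scenario errors give an AUC of exactly 1
nominal = np.full((10, 10), 60.0)
auc = rvh_auc(nominal, [nominal + 1.0, nominal - 1.0])
```

In practice the perturbed arrays come from setup and range uncertainty scenarios recomputed through the dose engine.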
Fluid therapy and the hypovolemic microcirculation.
Gruartmoner, G; Mesquida, J; Ince, Can
2015-08-01
In shock states, optimizing intravascular volume is crucial to promote adequate oxygen delivery to the tissues. Current practice in fluid management pivots on the Frank-Starling law of the heart, and the effects of fluids are measured by the induced changes in stroke volume. The purpose of this review is to examine the limits of the current macrohemodynamic approach to fluid administration and to introduce microcirculatory assessment as a fundamental part of tissue perfusion monitoring. Macrocirculatory changes induced by volume expansion are not always coupled to proportional changes in microcirculatory perfusion. Loss of hemodynamic coherence limits the value of guiding fluid therapy by macrohemodynamics alone and highlights the importance of evaluating the ultimate target of volume administration, the microcirculation. The current approach to intravascular volume optimization is made from a macrohemodynamic perspective; however, several situations in which macrocirculatory and microcirculatory coherence is lost have been described. Future clinical trials should explore the usefulness of integrating microcirculatory evaluation into fluid optimization.
Catalytic distillation water recovery subsystem
NASA Technical Reports Server (NTRS)
Budininkas, P.; Rasouli, F.
1985-01-01
An integrated engineering breadboard subsystem for the recovery of potable water from untreated urine, based on vapor-phase catalytic ammonia removal, was designed, fabricated and tested. Unlike other evaporative methods, this process catalytically oxidizes the ammonia and volatile hydrocarbons vaporizing with the water into innocuous products; therefore, no pretreatment of the urine is required. Since the subsystem is fabricated from commercially available components, its volume, weight and power requirements are not optimized; however, it is suitable for zero-g operation. The testing program consisted of parametric tests, one month of daily tests and a continuous test of 168 hours duration. The recovered water is clear, odorless and low in ammonia and organic carbon, and requires only an adjustment of its pH to meet potable water standards. The data obtained indicate that the vapor-phase catalytic ammonia removal process, if further developed, would also be competitive with other water recovery systems in weight, volume and power requirements.
Magnetite-doped polydimethylsiloxane (PDMS) for phosphopeptide enrichment.
Sandison, Mairi E; Jensen, K Tveen; Gesellchen, F; Cooper, J M; Pitt, A R
2014-10-07
Reversible phosphorylation plays a key role in numerous biological processes. Mass spectrometry-based approaches are commonly used to analyze protein phosphorylation, but such analysis is challenging, largely due to low phosphorylation stoichiometry. Hence, a number of phosphopeptide enrichment strategies have been developed, including metal oxide affinity chromatography (MOAC). Here, we describe a new material for performing MOAC that employs a magnetite-doped polydimethylsiloxane (PDMS) suitable for the creation of microwell array and microfluidic systems, enabling low-volume, high-throughput analysis. Incubation time and sample loading were explored and optimized, demonstrating that the embedded magnetite is able to enrich phosphopeptides. This substrate-based approach is rapid, straightforward and suitable for simultaneously performing multiple low-volume enrichments.
NASA Astrophysics Data System (ADS)
Eclancher, Bernard; Arntz, Y.; Chambron, Jacques; Prat, Vincent; Perret, C.; Karman, Miklos; Pszota, Agnes; Nemeth, Laszlo
1999-10-01
A hand-sized probe comprising 64 elementary 5 × 5 × 2 mm CdTe detectors has been optimized to detect the γ-tracer 99Tc in the left ventricle of the heart. The system was developed not for imaging but for acquisitions at 33 Hz, in order to follow the labeled blood-volume variations. The γ-count variations were found to be accurately proportional to the known volume variations of an artificial ventricle paced at variable rate and systolic volume. Software for on-line data monitoring and for post-processing has been developed for beat-to-beat assessment of cardiac performance at rest and during physical exercise. The probe was evaluated on 5 subjects in the nuclear department of the Balatonfured Cardiology Hospital. It appears that the probe needs to be better shielded to work properly in the hot environment surrounding the ventricle, but it can provide reliable ventriculography even under heavy exercise load, although the ventricular volume itself remains unknown.
A GPU-accelerated and Monte Carlo-based intensity modulated proton therapy optimization system.
Ma, Jiasen; Beltran, Chris; Seum Wan Chan Tseung, Hok; Herman, Michael G
2014-12-01
Conventional spot scanning intensity modulated proton therapy (IMPT) treatment planning systems (TPSs) optimize proton spot weights based on analytical dose calculations. These analytical dose calculations have been shown to have severe limitations in heterogeneous materials. Monte Carlo (MC) methods do not have these limitations; however, MC-based systems have been of limited clinical use due to the large number of beam spots in IMPT and the extremely long calculation time of traditional MC techniques. In this work, the authors present a clinically applicable IMPT TPS that utilizes a very fast MC calculation. An in-house graphics processing unit (GPU)-based MC dose calculation engine was employed to generate the dose influence map for each proton spot. With the MC-generated influence map, a modified least-squares optimization method was used to achieve the desired dose-volume histograms (DVHs). The intrinsic CT image resolution was adopted for voxelization in simulation and optimization to preserve spatial resolution. The optimizations were computed on a multi-GPU framework to mitigate the memory limitations imposed by the large dose influence maps that resulted from maintaining the intrinsic CT resolution. The effects of tail cutoff and starting condition were studied and minimized in this work. For relatively large and complex three-field head and neck cases, i.e., >100,000 spots with a target volume of ∼1000 cm³ and multiple surrounding critical structures, the optimization together with the initial MC dose influence map calculation was done in a clinically viable time frame (less than 30 min) on a GPU cluster consisting of 24 Nvidia GeForce GTX Titan cards. The in-house MC TPS plans were comparable to commercial TPS plans based on DVH comparisons. A MC-based treatment planning system was developed. The treatment planning can be performed in a clinically viable time frame on a hardware system costing around 45,000 dollars.
The fast calculation and optimization make the system easily expandable to robust and multicriteria optimization.
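The least-squares step on a dose influence map can be sketched as nonnegative least squares solved by projected gradient. This is a toy 3-voxel, 2-spot problem with invented numbers; the clinical system adds DVH-driven modifications and runs the matrix products on GPUs.

```python
import numpy as np

def optimize_weights(D, d_presc, iters=500):
    """Minimize ||D @ w - d_presc||^2 subject to w >= 0 (projected gradient).

    D[i, j] is the dose delivered to voxel i by unit weight of spot j,
    i.e. one column of the Monte Carlo dose influence map per spot."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2      # 1/L step is always stable
    w = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ w - d_presc)
        w = np.maximum(0.0, w - step * grad)    # project onto w >= 0
    return w

# toy problem whose exact nonnegative solution is w = [2, 2]
D = np.array([[1.0, 0.0],
              [0.5, 0.5],
              [0.0, 1.0]])
w = optimize_weights(D, np.array([2.0, 2.0, 2.0]))
```

The nonnegativity projection matters physically: spot weights are delivered monitor units and cannot go negative.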
Clinical knowledge-based inverse treatment planning
NASA Astrophysics Data System (ADS)
Yang, Yong; Xing, Lei
2004-11-01
Clinical IMRT treatment plans are currently made using dose-based optimization algorithms, which do not consider the nonlinear dose-volume effects for tumours and normal structures. The choice of structure-specific importance factors represents an additional degree of freedom of the system and makes rigorous optimization intractable. The purpose of this work is to circumvent these two problems by developing a biologically more sensible yet clinically practical inverse planning framework. To implement this, the dose-volume status of a structure was characterized by using the effective volume in the voxel domain. A new objective function was constructed with the incorporation of the volumetric information of the system, so that the figure of merit of a given IMRT plan depends not only on the dose deviation from the desired distribution but also on the dose-volume status of the involved organs. The conventional importance factor of an organ was factored into a product of two components: (i) a generic importance that parametrizes the relative importance of the organs in the ideal situation when the goals for all the organs are met; and (ii) a dose-dependent factor that quantifies our level of clinical/dosimetric satisfaction for a given plan. The generic importance can be determined a priori and, in most circumstances, does not need adjustment, whereas the second component, which is responsible for the intractable behaviour of the trade-offs seen in conventional inverse planning, was determined automatically. An inverse planning module based on the proposed formalism was implemented and applied to a prostate case and a head-and-neck case. A comparison with the conventional inverse planning technique indicated that, for the same target dose coverage, the critical structure sparing was substantially improved for both cases.
The incorporation of clinical knowledge allows us to obtain better IMRT plans and makes it possible to auto-select the importance factors, greatly facilitating the inverse planning process. The new formalism proposed also reveals the relationship between different inverse planning schemes and gives important insight into the problem of therapeutic plan optimization. In particular, we show that the EUD-based optimization is a special case of the general inverse planning formalism described in this paper.
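The two-component importance factor can be sketched as a per-structure penalty in which a fixed generic weight multiplies a dose-dependent satisfaction term. The linear ramp used below is an invented toy choice, not the paper's exact formula.

```python
import numpy as np

def structure_penalty(dose, prescribed, generic_w, tolerance):
    """Quadratic dose deviation scaled by two importance components:
    a fixed generic weight and a dose-dependent 'satisfaction' factor
    that grows as the structure's clinical goal is violated (toy ramp)."""
    deviation = dose - prescribed
    overdose = np.maximum(0.0, deviation).mean()
    satisfaction = 1.0 + overdose / tolerance   # equals 1.0 when goal met
    return generic_w * satisfaction * np.mean(deviation ** 2)

# small worked example: 3 voxels around a 78 Gy prescription
penalty = structure_penalty(np.array([77.0, 78.0, 79.0]), 78.0,
                            generic_w=1.0, tolerance=5.0)
```

Because the satisfaction factor depends on the current dose, the trade-off between structures shifts automatically during optimization instead of requiring manual importance tuning.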
Thermodynamics fundamentals of energy conversion
NASA Astrophysics Data System (ADS)
Dan, Nicolae
The work reported in chapters 1-5 focuses on the fundamentals of heat transfer, fluid dynamics, thermodynamics and electrical phenomena related to the conversion of one form of energy into another. Chapter 6 is a re-examination of the fundamental heat transfer problem of how to connect a finite-size heat-generating volume to a concentrated sink. Chapter 1 extends to electrical machines the combined thermodynamics and heat transfer optimization approach that has been developed for heat engines. The conversion efficiency at maximum power is 1/2. When, as in specific applications, the operating temperature of the windings must not exceed a specified level, the power output is lower and the efficiency higher. Chapter 2 addresses the fundamental problem of determining the optimal history (regime of operation) of a battery so that the work output is maximum. Chapters 3 and 4 report the energy conversion aspects of an expanding mixture of hot particles, steam and liquid water. At the elemental level, steam annuli develop around the spherical drops as time increases. At the mixture level, the density decreases while the pressure and velocity increase. Chapter 4 describes numerically, based on the finite element method, the time evolution of the expanding mixture of hot spherical particles, steam and water. The fluid particles are moved in time in a Lagrangian manner to simulate the change of the domain configuration. Chapter 5 describes the process of thermal interaction between the molten material and water. In the second part of that chapter the model accounts for the irreversibility due to the flow of the mixture through the cracks of the mixing vessel. The approach presented in chapter 5 is based on exergy analysis and represents a departure from the line of inquiry followed in chapters 3-4. Chapter 6 shows that the geometry of the heat flow path between a volume and one point can be optimized in two fundamentally different ways.
In the "growth" method the structure is optimized starting from the smallest volume element of fixed size. In the "design" method the overall volume is fixed, and the designer works "inward" by increasing the internal complexity of the paths for heat flow.
Dey Paul, Indira; Jayakumar, Chitra; Niwas Mishra, Hari
2016-12-01
In spite of being highly nutritious, the consumption of milk is hindered by its high cholesterol content, which is implicated in numerous cardiac diseases. Supercritical carbon dioxide with ethanol as co-solvent was employed to extract cholesterol from whole milk powder (WMP). This study was undertaken to optimize the process parameters of supercritical fluid extraction (SCFE), viz. extraction temperature, pressure and volume of ethanol. The cholesterol content of WMP was quantified using high-performance liquid chromatography. The impact of the extraction conditions on the fat content (FC), solubility index (SI) and lightness (L*) of the SCFE-treated WMP was also investigated. The process parameters were optimized using response surface methodology. About 46% reduction in cholesterol was achieved at the optimized conditions of 48 °C, 17 MPa and 31 mL co-solvent; the flow rate of expanded CO₂, static time and dynamic time of extraction were 6 L min⁻¹, 10 min and 80 min, respectively. The treated WMP retained its FC, SI and L* at moderate levels of 183.67 g kg⁻¹, 96.3% and 96.90, respectively. This study demonstrated the feasibility of ethanol-modified SCFE of cholesterol from WMP with negligible changes in its physicochemical properties. © 2016 Society of Chemical Industry.
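Response surface methodology fits a low-order polynomial to designed-experiment runs and reads the optimum off the fitted surface. A one-factor sketch with synthetic data; the numbers below are made up, chosen only so that the surface peaks near the reported 48 °C.

```python
import numpy as np

def fit_quadratic(x, y):
    """Least-squares fit of the one-factor RSM model y = b0 + b1*x + b2*x^2."""
    X = np.column_stack([np.ones_like(x), x, x * x])
    coeff, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeff

def stationary_point(coeff):
    """Optimum of the fitted parabola (requires b2 != 0)."""
    _, b1, b2 = coeff
    return -b1 / (2.0 * b2)

# synthetic runs whose response peaks at 48 (invented data, not the study's)
temps = np.array([30.0, 40.0, 48.0, 56.0, 66.0])
resp = -(temps - 48.0) ** 2 + 46.0
opt_temp = stationary_point(fit_quadratic(temps, resp))
```

The real study fits the same kind of model jointly in temperature, pressure and co-solvent volume, with interaction terms between the factors.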
Optimizing Cloud-Based Image Storage, Dissemination and Processing Through Use of MRF and LERC
NASA Astrophysics Data System (ADS)
Becker, Peter; Plesea, Lucian; Maurer, Thomas
2016-06-01
The volume and number of geospatial images being collected continue to increase exponentially with the ever-increasing number of airborne and satellite imaging platforms and the increasing rate of data collection. As a result, the fast storage required to provide access to the imagery is a major cost factor in enterprise image management solutions that handle, process and disseminate the imagery and the information extracted from it. Cloud-based object storage offers significantly cheaper, elastic storage for this imagery, but also brings disadvantages in terms of greater latency for data access and the lack of traditional file access. Although traditional file formats such as GeoTIFF, JPEG 2000 and NITF can be downloaded from such object storage, their structure and available compression are not optimal, and access performance is curtailed. This paper provides details of a solution utilizing a new open image format for storage of and access to geospatial imagery, optimized for cloud storage and processing. MRF (Meta Raster Format) is optimized for large collections of scenes such as those acquired from optical sensors. The format enables optimized data access from cloud storage, along with the use of new compression options which cannot easily be added to existing formats. The paper also provides an overview of LERC, a new image compression scheme usable with MRF that provides very good lossless and controlled lossy compression.
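LERC's controlled lossy mode rests on quantizing values so that no pixel's reconstruction error exceeds a user-set bound (its `MaxZError` parameter). A toy sketch of that core idea, not the actual LERC codec (which additionally bit-packs the codes block by block):

```python
import numpy as np

def quantize(block, max_z_error):
    """Encode a block as integer multiples of 2*max_z_error above its
    minimum; rounding to the nearest step bounds the per-pixel
    reconstruction error by max_z_error."""
    base = float(block.min())
    step = 2.0 * max_z_error
    codes = np.round((block - base) / step).astype(np.int64)
    return base, step, codes

def dequantize(base, step, codes):
    return base + codes * step

rng = np.random.default_rng(7)
data = rng.uniform(0.0, 100.0, size=(32, 32))
restored = dequantize(*quantize(data, max_z_error=0.5))
```

Setting the error bound to zero recovers lossless behavior at the cost of larger integer codes.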
Kante, Karifala; Nieto-Delgado, Cesar; Rangel-Mendez, J Rene; Bandosz, Teresa J
2012-01-30
Activated carbons were prepared from spent ground coffee, with zinc chloride used as the activation agent. The obtained materials were used as media for the separation of hydrogen sulfide from air at ambient conditions. The materials were characterized using nitrogen adsorption, elemental analysis, SEM, FTIR, and thermal analysis. The surface features of the carbons depend on the amount of activation agent used. Even though the residual inorganic matter takes part in H₂S retention via salt formation, the porous surface of the carbons governs the separation process. The chemical activation method chosen resulted in the formation of a large volume of pores with sizes between 10 and 30 Å, optimal for water and hydrogen sulfide adsorption. Even though the activation process can be optimized/changed, the presence of nitrogen in the precursor (caffeine) is a significant asset of this specific organic waste, since nitrogen functional groups play a catalytic role in hydrogen sulfide oxidation. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Cheng, Song; Zhang, Shengzhou; Zhang, Libo; Xia, Hongying; Peng, Jinhui; Wang, Shixing
2017-09-01
Eupatorium adenophorum, a globally distributed exotic weed, was utilized as feedstock for the preparation of activated carbon (AC) via microwave-induced KOH activation. The influences of three vital process parameters - microwave power, activation time and impregnation ratio (IR) - were assessed on the adsorption capacity and yield of the AC. The process parameters were optimized using the Design Expert software and were identified to be a microwave power of 700 W, an activation time of 15 min and an IR of 4, with the resultant iodine adsorption number and yield being 2,621 mg/g and 28.25%, respectively. The key parameters that characterize the AC, such as the Brunauer-Emmett-Teller (BET) surface area, total pore volume and average pore diameter, were estimated to be 3,918 m²/g, 2.383 ml/g and 2.43 nm, respectively, under the optimized process conditions. The surface characteristics of the AC were examined by Fourier transform infrared spectroscopy, scanning electron microscopy and transmission electron microscopy.
Taschek, Marco; Egermann, Jan; Schwarz, Sabrina; Leipertz, Alfred
2005-11-01
Optimum fuel preparation and mixture formation are core issues in the development of modern direct-injection (DI) Diesel engines, as they are crucial for defining the boundary conditions for the subsequent combustion and pollutant formation processes. The local fuel/air ratio can be seen as one of the key parameters for this optimization process, as it allows the characterization and comparison of mixture formation quality. For what is, to the best of our knowledge, the first time, linear Raman spectroscopy is used to detect the fuel/air ratio and its change along a line of a few millimeters directly and nonintrusively inside the combustion bowl of a DI Diesel engine. By careful optimization of the measurement setup, the weak Raman signals could be separated successfully from disturbing interferences. A simultaneous measurement of the densities of air and fuel was possible along a line of about 10 mm length, allowing a time- and space-resolved measurement of the local fuel/air ratio. This could be performed in a nonreacting atmosphere as well as under fired operating conditions. Positioning the measurement volume next to the interaction point of one of the spray jets with the wall of the combustion bowl allowed a near-wall analysis of the mixture formation process for a six-hole nozzle under varying injection and engine conditions. The results clearly show the influence of the nozzle geometry and preinjection on the mixing process. In contrast, modulation of the intake air temperature led merely to minor changes of the fuel concentration in the measurement volume.
Hamzaoui, Mahmoud; Hubert, Jane; Reynaud, Romain; Marchal, Luc; Foucault, Alain; Renault, Jean-Hugues
2012-07-20
The aim of this article was to evaluate the influence of the column design of a hydrostatic support-free liquid-liquid chromatography device on process efficiency when the strong ion-exchange (SIX) development mode is used. The purification of p-hydroxybenzylglucosinolate (sinalbin) from a crude aqueous extract of white mustard seeds (Sinapis alba L.) was achieved on two types of devices: a centrifugal partition chromatograph (CPC) and a centrifugal partition extractor (CPE). They differ in the number, volume and geometry of their partition cells. The SIX-CPE process was evaluated in terms of productivity and sinalbin purification capability as compared to previously optimized SIX-CPC protocols that were carried out on columns of 200 mL and 5700 mL inner volume, respectively. The objective was to determine whether the decrease in partition cell number, the increase in their volume and the use of a "twin cell" design would induce a significant increase in productivity by applying a higher mobile phase flow rate while maintaining a constant separation quality. 4.6 g of sinalbin (92% recovery) was isolated from 25 g of a crude white mustard seed extract, in only 32 min and with a purity of 94.7%, corresponding to a productivity of 28 g per hour and per liter of column volume (g/h/L). Therefore, the SIX-CPE process demonstrates promising industrial technology transfer perspectives for the large-scale isolation of ionized natural products. Copyright © 2012 Elsevier B.V. All rights reserved.
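The headline productivity figure is simple arithmetic, mass recovered divided by run time and column volume; the ~0.31 L column volume used below is inferred from the stated numbers rather than quoted in the abstract:

```python
def productivity(mass_g, time_min, column_volume_l):
    """Chromatographic productivity normalized by run time and column volume (g/h/L)."""
    return mass_g / (time_min / 60.0) / column_volume_l

# 4.6 g of sinalbin in 32 min on an inferred ~0.31 L column gives roughly 28 g/h/L
p = productivity(4.6, 32.0, 0.308)
```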
Fernández, Purificación; Fernández, Ana M; Bermejo, Ana M; Lorenzo, Rosa A; Carro, Antonia M
2013-04-01
The performance of a microwave-assisted extraction and HPLC with photodiode array detection method for the determination of six analgesic and anti-inflammatory drugs in plasma and urine is described, optimized, and validated. Several parameters affecting the extraction technique were optimized using experimental designs. A four-factor (temperature, phosphate buffer pH 4.0 volume, extraction solvent volume, and time) hybrid experimental design was used for extraction optimization in plasma, and a three-factor (temperature, extraction solvent volume, and time) Doehlert design was chosen for extraction optimization in urine. The use of desirability functions revealed the optimal extraction conditions as follows: 67°C, 4 mL phosphate buffer pH 4.0, 12 mL of ethyl acetate and 9 min for plasma, and the same volumes of buffer and ethyl acetate, 115°C and 4 min for urine. Limits of detection ranged from 4 to 45 ng/mL in plasma and from 8 to 85 ng/mL in urine. The reproducibility, evaluated at two concentration levels, was less than 6.5% for both specimens. The recoveries were from 89 to 99% for plasma and from 83 to 99% for urine. The proposed method was successfully applied to plasma and urine samples obtained from analgesic users. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
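The desirability-function step mentioned above combines several response desirabilities into a single score to maximize. A minimal sketch of the standard Derringer-type overall desirability (it assumes the individual d_i in [0, 1] have already been computed from the responses):

```python
def overall_desirability(d_values):
    """Derringer-style overall desirability: geometric mean of the individual
    desirabilities d_i in [0, 1]; zero if any response is unacceptable (d_i = 0)."""
    prod = 1.0
    for d in d_values:
        if d <= 0.0:
            return 0.0
        prod *= d
    return prod ** (1.0 / len(d_values))
```

The optimizer then searches factor settings (temperature, solvent volume, time) that maximize this single score.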
Matsumoto, Akihiro; Murao, Satoshi; Matsumoto, Michiko; Watanabe, Chie; Murakami, Masahiro
The feasibility of fabricating Janus particles based on phase separation between a hard fat and a biocompatible polymer was investigated. The solvent evaporation method used involved preparing an oil-in-water (o/w) emulsion with a mixture of poly(lactic-co-glycolic) acid (PLGA), hard fat, and an organic solvent as the oil phase and a polyvinyl alcohol aqueous solution as the water phase. Janus particles were formed when the solvent was evaporated to reach certain concentrations of PLGA and hard fat in the oil phase, at which phase separation was estimated to occur based on phase diagram analysis. The hard fat hemisphere was shown to be the oil phase using the lipophilic dye Oil Red O. When the solvent evaporation process was performed maintaining a specific volume during the emulsification process, Janus particles were formed within 1.5 h. However, the formed Janus particles were destroyed by stirring for over 6 h. In contrast, only a few Janus particles were formed when enough water to dissolve the oil phase solvent was added to the emulsion immediately after the emulsification process. The optimized volume of the solvent evaporation medium dominantly formed Janus particles and maintained their conformation for over 6 h with stirring. These results indicate that the formation and stability of Janus particles depend on the rate of solvent evaporation. Therefore, optimization of the solvent evaporation rate is critical to obtaining stable PLGA and hard fat Janus particles.
An algorithm for automatic parameter adjustment for brain extraction in BrainSuite
NASA Astrophysics Data System (ADS)
Rajagopal, Gautham; Joshi, Anand A.; Leahy, Richard M.
2017-02-01
Brain extraction (classification of brain and non-brain tissue) in MRI brain images is a crucial pre-processing step for imaging-based anatomical studies of the human brain. Several automated methods and software tools are available for performing this task, but differences in MR image parameters (pulse sequence, resolution) and instrument- and subject-dependent noise and artefacts affect the performance of these automated methods. We describe and evaluate a method that automatically adapts the default parameters of the Brain Surface Extraction (BSE) algorithm to optimize a cost function chosen to reflect accurate brain extraction. BSE uses a combination of anisotropic filtering, Marr-Hildreth edge detection, and binary morphology for brain extraction. Our algorithm automatically adapts four parameters associated with these steps to maximize the brain surface area to volume ratio. We evaluate the method on a total of 109 brain volumes with ground truth brain masks generated by an expert user. A quantitative evaluation of the performance of the proposed algorithm showed an improvement in the mean (s.d.) Dice coefficient from 0.8969 (0.0376) for default parameters to 0.9509 (0.0504) for the optimized case. These results indicate that automatic parameter optimization can result in significant improvements in definition of the brain mask.
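The evaluation metric above, the Dice coefficient, is easy to state precisely: twice the overlap of the two masks divided by the sum of their sizes. A minimal sketch with toy 1-D voxel sets (the masks here are illustrative, not BrainSuite data):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary masks, each given as a set of voxel indices."""
    inter = len(mask_a & mask_b)
    total = len(mask_a) + len(mask_b)
    return 2.0 * inter / total if total else 1.0

# Toy 1-D "masks": ground truth vs. two candidate extractions
truth = set(range(0, 100))
default = set(range(10, 105))    # misses 10 true voxels, adds 5 spurious ones
optimized = set(range(2, 101))   # much closer to the truth
d_default = dice_coefficient(truth, default)
d_optimized = dice_coefficient(truth, optimized)
```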
Madkaikar, M; Gupta, M; Ghosh, K; Swaminathan, S; Sonawane, L; Mohanty, D
2007-01-01
Human cord blood is now an established source of stem cells for haematopoietic reconstitution. Red blood cell (RBC) depletion is required to reduce the cord blood unit volume for commercial banking. Red cell sedimentation using hydroxyethyl starch (HES) is a standard procedure in most cord blood banks. However, while standardising the procedure for cord blood banking, a significant loss of nucleated cells (NC) may be encountered during standard HES sedimentation protocols. This study compares four procedures for cord blood processing to obtain an optimal yield of nucleated cells. Gelatin, dextran, 6% HES and 6% HES with an equal volume of phosphate-buffered saline (PBS) were compared for RBC depletion and NC recovery. Dilution of the cord blood unit with an equal volume of PBS prior to sedimentation with HES resulted in maximum NC recovery (99.5 +/- 1.3%). Although standard procedures using 6% HES are well established in Western countries, they may not be applicable in India, as a variety of factors that can affect RBC sedimentation (e.g., iron deficiency, hypoalbuminaemia, thalassaemia trait, etc.) may reduce RBC sedimentation and thus reduce NC recovery. While diluting cord blood with an equal volume of PBS is a simple method to improve NC recovery, it does involve an additional processing step.
Co-optimization of lithographic and patterning processes for improved EPE performance
NASA Astrophysics Data System (ADS)
Maslow, Mark J.; Timoshkov, Vadim; Kiers, Ton; Jee, Tae Kwon; de Loijer, Peter; Morikita, Shinya; Demand, Marc; Metz, Andrew W.; Okada, Soichiro; Kumar, Kaushik A.; Biesemans, Serge; Yaegashi, Hidetami; Di Lorenzo, Paolo; Bekaert, Joost P.; Mao, Ming; Beral, Christophe; Larivière, Stephane
2017-03-01
Complementary lithography is already being used for advanced logic patterns. The tight pitches for 1D metal layers are expected to be created using spacer-based multiple patterning ArF-i exposures, and the more complex cut/block patterns are made using EUV exposures. At the same time, control requirements for CDU, pattern shift and pitch-walk are approaching sub-nanometer levels to meet edge placement error (EPE) requirements. Local variability terms, such as Line Edge Roughness (LER), local CDU, and Local Placement Error (LPE), are dominant factors in the total edge placement error budget. In the lithography process, improving the imaging contrast when printing the core pattern has been shown to improve the local variability. In the etch process, it has been shown that the fusion of atomic level etching and deposition can also improve these local variations. Co-optimization of lithography and etch processing is expected to further improve the performance over individual optimizations alone. To meet the scaling requirements and keep process complexity to a minimum, EUV is increasingly seen as the platform for delivering the exposures for both the grating and the cut/block patterns beyond N7. In this work, we evaluated the overlay and pattern fidelity of an EUV block printed in a negative tone resist on an ArF-i SAQP grating. High-order overlay modeling and corrections during the exposure can reduce overlay error after development, a significant component of the total EPE. During etch, additional degrees of freedom are available to improve the pattern placement error in single layer processes. Process control of advanced-pitch nanoscale multi-patterning techniques as described above is exceedingly complicated in a high volume manufacturing environment. Incorporating potential patterning optimizations into both design and HVM controls for the lithography process is expected to bring a combined benefit over individual optimizations.
In this work we will show the EPE performance improvement for a 32 nm pitch SAQP + block patterned Metal 2 layer achieved by co-optimizing the lithography and etch processes. Recommendations for further improvements and alternative processes will be given.
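Edge placement error budgets of the kind discussed combine contributions such as overlay, local CDU and LER; a toy quadrature sum of independent terms (illustrative only; production EPE budgets weight and combine terms in more detail):

```python
import math

def epe_budget(overlay_sigma, local_cdu_sigma, ler_sigma):
    """Toy quadrature (root-sum-square) combination of independent error
    contributors into a single edge placement error number, in nm."""
    return math.sqrt(overlay_sigma ** 2 + local_cdu_sigma ** 2 + ler_sigma ** 2)
```

The quadrature form makes the point in the abstract concrete: once local terms dominate, shrinking overlay alone yields diminishing returns on total EPE.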
Lew, Virgilio L; Tiffert, Teresa
2017-01-01
In a healthy adult, the transport of O2 and CO2 between lungs and tissues is performed by about 2 × 10^13 red blood cells, of which around 1.7 × 10^11 are renewed every day, a turnover resulting from an average circulatory lifespan of about 120 days. Cellular lifespan is the result of an evolutionary balance between the energy costs of maintaining cells in a fit functional state versus cell renewal. In this Review we examine how the set of passive and active membrane transporters of mature red blood cells interact to maximize their circulatory longevity, thus minimizing the costs of expensive cell turnover. Red blood cell deformability is critical for optimal rheology and gas exchange functionality during capillary flow, best fulfilled when the volume of each human red blood cell is kept at a fraction of about 0.55-0.60 of the maximal spherical volume allowed by its membrane area, the optimal-volume-ratio range. The extent to which red blood cell volumes can be preserved within or near these narrow optimal-volume-ratio margins determines the potential for circulatory longevity. We show that the low cation permeability of red blood cells allows volume stability to be achieved with extraordinary cost-efficiency, favouring cell longevity over cell turnover. We suggest a mechanism by which the interplay of a declining sodium pump and two passive membrane transporters, the mechanosensitive PIEZO1 channel, a candidate mediator of P_sickle in sickle cells, and the Ca2+-sensitive, K+-selective Gardos channel, can implement red blood cell volume stability around the optimal-volume-ratio range, as required for extended circulatory longevity.
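The optimal-volume-ratio arithmetic above can be made concrete. A closed membrane of area A encloses at most the spherical volume A^(3/2)/(6*sqrt(pi)); a sketch assuming a representative human red cell membrane area of about 135 square micrometers (a typical literature value, not a figure from this abstract):

```python
import math

def max_spherical_volume(area):
    """Largest volume a closed membrane of the given area can enclose (a sphere)."""
    r = math.sqrt(area / (4.0 * math.pi))
    return (4.0 / 3.0) * math.pi * r ** 3

AREA_UM2 = 135.0                                 # assumed RBC membrane area, um^2
v_max = max_spherical_volume(AREA_UM2)           # maximal spherical volume, um^3 (= fL)
optimal_range = (0.55 * v_max, 0.60 * v_max)     # the 0.55-0.60 optimal-volume-ratio band
```

With this assumed area the band comes out at roughly 81-89 fL, consistent with typical human mean corpuscular volumes.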
Defontaine, Anne; Tirel, Olivier; Costet, Nathalie; Beuchée, Alain; Ozanne, Bruno; Gaillot, Théophile; Arnaud, Alexis Pierre; Wodey, Eric
2016-02-01
To determine the optimal saline volume for bladder instillation to measure intravesical pressure in critically ill newborns weighing less than 4.5 kg, and to establish a reference intra-abdominal pressure value in this population. Prospective monocentric study. Neonatal ICU and PICU. Newborns, premature or not, weighing less than 4.5 kg who required a urethral catheter. Patients were classified into two groups according to whether they presented a risk factor for intra-abdominal hypertension. Nine intravesical pressure measurements per patient were performed after instillation of different saline volumes. The first was done without saline instillation, and the rest by increments of 0.5 mL/kg up to a maximum of 4 mL/kg. Linear models for repeated measurements of intravesical pressure with unstructured covariance were used to analyze the variation of intravesical pressure according to the volume instilled. Pairwise comparisons of adjusted mean intravesical pressure values between instillation volumes were done using Tukey tests, corrected for multiple testing, to determine an optimal instillation volume. Forty-seven patients with complete measurements (nine instillation volumes) were included in the analysis. Mean intravesical pressure values were not significantly different when measured after instillation of 0.5, 1, or 1.5 mL/kg, whereas measurements after instillation of 2 mL/kg or more were significantly higher. The median intravesical pressure value in the group without intra-abdominal hypertension risk factors after instillation of 1 mL/kg was 5 mm Hg (2-6 mm Hg). The optimal saline volume for bladder instillation to measure intra-abdominal pressure in newborns weighing less than 4.5 kg was 1 mL/kg. The reference intra-abdominal pressure in this population was found to be 5 mm Hg (2-6 mm Hg).
NASA Astrophysics Data System (ADS)
Bogoslovskii, S. Yu; Kuznetsov, N. N.; Boldyrev, V. S.
2017-11-01
Electrochlorination parameters were optimized in flowing and non-flowing modes for a cell with a volume of 1 l. At a current density of 0.1 A/cm2, in the range of flow rates from 0.8 to 6.0 l/h and with an initial solution temperature below 20°C, the outlet temperature is maintained close to the optimal 40°C. The pH of the solution during electrolysis increases to 8.8-9.4. A process was also studied in which a solution with a temperature of 7-8°C and a sodium chloride concentration of 25 or 35 g/l was used in the non-flowing cell. The dependence of the active chlorine concentration on the electrolysis time varies with the concentration of the initial sodium chloride solution. At a chloride concentration of 25 g/l, the virtually linear relationship makes it easy to choose the electrolysis time needed to obtain the required product concentration.
Optimization of the nitrification process of wastewater resulting from cassava starch production.
Fleck, Leandro; Ferreira Tavares, Maria Hermínia; Eyng, Eduardo; Orssatto, Fabio
2018-05-14
The present study has the objective of optimizing the operational conditions of an aerated reactor applied to the removal of ammoniacal nitrogen from wastewater resulting from the production of cassava starch. An aerated reactor with a usable volume of 4 L and aeration control by rotameter was used. The airflow and cycle time parameters were controlled and their effects on the removal of ammoniacal nitrogen and the conversion to nitrate were evaluated. The highest ammoniacal nitrogen removal, of 96.62%, occurred under conditions of 24 h and 0.15 L min^-1 L_reactor^-1. The highest nitrate conversion, of 24.81%, occurred under conditions of 40.92 h and 0.15 L min^-1 L_reactor^-1. The remaining ammoniacal nitrogen was converted primarily into nitrite, energy, hydrogen and water. The optimal operational values of the aerated reactor are 29.25 h and 0.22 L min^-1 L_reactor^-1. The mathematical models representative of the process satisfactorily describe ammoniacal nitrogen removal efficiency and nitrate conversion, presenting errors of 2.87% and 3.70%, respectively.
Effects of shape parameters on the attractiveness of a female body.
Fan, J; Dai, W; Qian, X; Chau, K P; Liu, Q
2007-08-01
Various researchers have suggested that certain anthropometric ratios can be used to measure female body attractiveness, including the waist to hip ratio, Body Mass Index (BMI), and the body volume divided by the square of the height (Volume-Height Index). Based on a wide range of female subjects and virtual images of bodies with different ratios, Volume-Height Index was found to provide the best fit with female body attractiveness, and the effect of Volume-Height Index can be fitted with two half bell-shaped exponential curves with an optimal Volume-Height Index at 14.2 liter/m2. It is suggested that the general trend of the effect of Volume-Height Index may be culturally invariant, but the optimal value of Volume-Height Index may vary from culture to culture. In addition to Volume-Height Index, other body parameters or ratios which reflect body proportions and the traits of feminine characteristics had smaller but significant effects on female body attractiveness, and such effects were stronger at optimum Volume-Height Index.
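The two half bell-shaped exponential curves can be sketched as a piecewise Gaussian-like function centred on the reported optimum of 14.2 liter/m2; the spread parameters below are illustrative placeholders, not fitted values from the study:

```python
import math

VHI_OPT = 14.2  # optimal Volume-Height Index, liter/m^2 (reported by the study)

def attractiveness(vhi, sigma_low=3.0, sigma_high=6.0):
    """Two half bell-shaped exponential curves joined at the optimum.
    sigma_low / sigma_high are illustrative spreads for the below- and
    above-optimum branches, allowing an asymmetric fall-off."""
    sigma = sigma_low if vhi < VHI_OPT else sigma_high
    return math.exp(-((vhi - VHI_OPT) ** 2) / (2.0 * sigma ** 2))
```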
Rodrigues, Sueli; Pinto, Gustavo A S; Fernandes, Fabiano A N
2008-01-01
Coconut is a tropical fruit largely consumed in many countries. In some areas of the Brazilian coast, coconut shell represents more than 60% of the domestic waste volume. The coconut shell is composed mainly of lignin and cellulose, having a chemical composition very similar to wood and suitable for phenolic extraction. In this work, the use of ultrasound to extract phenolic compounds from coconut shell was evaluated. The effects of temperature, solution-to-solid ratio, pH and extraction time were evaluated through a 2^4 factorial experimental design. The extraction process was also optimized using response surface methodology. At the optimum operating condition (30 degrees C, solution-to-solid ratio of 50, 15 min of extraction and pH 6.5) the process yielded 22.44 mg of phenolic compounds per gram of coconut shell.
Teżyk, Michał; Jakubowska, Emilia; Milanowski, Bartłomiej; Lulek, Janina
2017-10-01
The aim of this study was to optimize the tablet compression process and to identify film-coating critical process parameters (CPPs) affecting critical quality attributes (CQAs) using a quality by design (QbD) approach. Design of experiments (DOE) and regression methods were employed to investigate hardness, disintegration time, and thickness of uncoated tablets depending on slugging and tableting compression force (CPPs). A Plackett-Burman experimental design was applied to identify, among the selected coating process parameters (drying and preheating time, atomization air pressure, spray rate, air volume, inlet air temperature, and drum pressure), those that may influence the hardness and disintegration time of coated tablets. As a result of the research, a design space was established to facilitate an in-depth understanding of the existing relationship between CPPs and CQAs of the intermediate product (uncoated tablets). Screening revealed that spray rate and inlet air temperature are the two most important factors affecting the hardness of coated tablets. At the same time, none of the tested coating factors had an influence on disintegration time. This observation was confirmed by film coating of pilot-size batches.
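A Plackett-Burman screening design of the kind used here assigns each factor a balanced two-level column, with all columns mutually orthogonal. A sketch of the classic 8-run construction, which accommodates the seven coating parameters screened above (the generator row is the standard one, not taken from this paper):

```python
def plackett_burman_8():
    """Plackett-Burman screening design for up to 7 two-level factors (N = 8 runs),
    built by cyclically shifting the standard generator + + + - + - -
    and appending a final all-low run."""
    gen = [1, 1, 1, -1, 1, -1, -1]
    rows = [gen[i:] + gen[:i] for i in range(len(gen))]  # 7 cyclic shifts
    rows.append([-1] * len(gen))                         # all-low run
    return rows

design = plackett_burman_8()
```

Each row is one coating run; each column is one factor's high/low settings, so main effects can be estimated from only 8 runs.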
CrossTalk. The Journal of Defense Software Engineering. Volume 13, Number 6, June 2000
2000-06-01
Techniques for Efficiently Generating and Testing Software: This paper presents a proven process that uses advanced tools to design, develop and test... optimal software. by Keith R. Wegner
Large Software Systems—Back to Basics: Development methods that work on small problems seem to not scale well to...
Ability Requirements for Teamwork: Implications for Human Resource Management, Journal of Management, Vol. 20, No. 2, 1994. 11. Ferguson, Pat, Watts S
Glass Property Data and Models for Estimating High-Level Waste Glass Volume
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vienna, John D.; Fluegel, Alexander; Kim, Dong-Sang
2009-10-05
This report describes recent efforts to develop glass property models that can be used to help estimate the volume of high-level waste (HLW) glass that will result from vitrification of Hanford tank waste. The compositions of acceptable and processable HLW glasses need to be optimized to minimize the waste-form volume and, hence, to save cost. A database of properties and associated compositions for simulated waste glasses was collected for developing property-composition models. This database, although not comprehensive, represents a large fraction of the data on waste-glass compositions and properties that were available at the time of this report. Glass property-composition models were fit to subsets of the database for several key glass properties. These models apply to a significantly broader composition space than those previously published. These models should be considered for interim use in calculating properties of Hanford waste glasses.
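First-order property-composition models of the kind described are linear in the component fractions, so fitting reduces to ordinary least squares. A pure-Python sketch (the demo data are synthetic; the report's actual fitting procedure is more elaborate):

```python
def fit_linear_mixture(X, y):
    """Ordinary least squares for a first-order mixture model
    property = sum_i b_i * x_i, solved via the normal equations
    with Gaussian elimination and partial pivoting."""
    n = len(X[0])
    # Normal equations A b = c, with A = X^T X and c = X^T y
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    c = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    for k in range(n):                       # forward elimination
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        c[k], c[p] = c[p], c[k]
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            for j in range(k, n):
                A[r][j] -= f * A[k][j]
            c[r] -= f * c[k]
    b = [0.0] * n                            # back substitution
    for k in range(n - 1, -1, -1):
        b[k] = (c[k] - sum(A[k][j] * b[j] for j in range(k + 1, n))) / A[k][k]
    return b

# Synthetic demo: the property depends exactly on two component fractions as 2*x0 + 3*x1
X = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [0.3, 0.7]]
y = [2.0, 3.0, 2.5, 2.7]
b = fit_linear_mixture(X, y)
```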
Mass and Volume Optimization of Space Flight Medical Kits
NASA Technical Reports Server (NTRS)
Keenan, A. B.; Foy, Millennia Hope; Myers, Jerry
2014-01-01
Resource allocation is a critical aspect of space mission planning. All resources, including medical resources, are subject to a number of mission constraints such as maximum mass and volume. However, unlike many resources, there is often limited understanding of how to optimize medical resources for a mission. The Integrated Medical Model (IMM) is a probabilistic model that estimates medical event occurrences and mission outcomes for different mission profiles. IMM simulates outcomes and describes the impact of medical events in terms of lost crew time, medical resource usage, and the potential for medically required evacuation. Previously published work describes an approach that uses the IMM to generate optimized medical kits that maximize benefit to the crew subject to mass and volume constraints. We improve upon the results obtained previously and extend our approach to minimize mass and volume while meeting some benefit threshold. METHODS: We frame the medical kit optimization problem as a modified knapsack problem and implement an algorithm utilizing dynamic programming. Using this algorithm, optimized medical kits were generated for 3 mission scenarios with the goal of minimizing the medical kit mass and volume for a specified likelihood of evacuation or Crew Health Index (CHI) threshold. The algorithm was expanded to generate medical kits that maximize likelihood of evacuation or CHI subject to mass and volume constraints. RESULTS AND CONCLUSIONS: In maximizing benefit to crew health subject to certain constraints, our algorithm generates medical kits that more closely resemble the unlimited-resource scenario than previous approaches which leverage medical risk information generated by the IMM.
Our work here demonstrates that this algorithm provides an efficient and objective means of allocating medical resources for spaceflight missions and of addressing tradeoffs between medical resource allocations and crew mission success parameters.
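The modified knapsack problem mentioned in METHODS can be sketched as a two-constraint 0/1 dynamic program; the supply list and integer mass/volume units below are hypothetical, not IMM data:

```python
def optimize_kit(items, max_mass, max_volume):
    """0/1 knapsack over two resources: maximize total benefit subject to
    both a mass and a volume limit. items: list of (benefit, mass, volume)
    tuples with non-negative integer mass/volume units."""
    best = [[0.0] * (max_volume + 1) for _ in range(max_mass + 1)]
    for benefit, mass, vol in items:
        # Iterate limits downward so each item is taken at most once
        for m in range(max_mass, mass - 1, -1):
            for v in range(max_volume, vol - 1, -1):
                cand = best[m - mass][v - vol] + benefit
                if cand > best[m][v]:
                    best[m][v] = cand
    return best[max_mass][max_volume]

# Hypothetical supplies: (benefit score, mass units, volume units)
supplies = [(10, 2, 3), (7, 1, 2), (6, 2, 1), (4, 1, 1)]
```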
Evaluation of an artificial intelligence guided inverse planning system: clinical case study.
Yan, Hui; Yin, Fang-Fang; Willett, Christopher
2007-04-01
An artificial intelligence (AI) guided method for parameter adjustment of inverse planning was implemented on a commercial inverse treatment planning system. For evaluation purposes, four typical clinical cases were tested and the results from plans achieved by the automated and manual methods were compared. The procedure of parameter adjustment mainly consists of three major loops. Each loop is in charge of modifying parameters of one category, which is carried out by a specially customized fuzzy inference system. Physician-prescribed multiple constraints for a selected volume were adopted to account for the tradeoff between the prescription dose to the PTV and dose-volume constraints for critical organs. The search for an optimal parameter combination began with the first constraint and proceeded to the next until a plan with an acceptable dose was achieved. The initial setup of the plan parameters was the same for each case and was adjusted independently by both the manual and automated methods. After the parameters of one category were updated, the intensity maps of all fields were re-optimized and the plan dose was subsequently re-calculated. When the final plan was reached, dose statistics were calculated from both plans and compared. For the planned target volume (PTV), the dose for 95% volume is up to 10% higher in plans using the automated method than in those using the manual method. For critical organs, an average decrease of the plan dose was achieved. However, the automated method cannot improve the plan dose for some critical organs due to limitations of the inference rules currently employed. For normal tissue, there was no significant difference between plan doses achieved by either the automated or the manual method. With the application of the AI-guided method, the basic parameter adjustment task can be accomplished automatically and a comparable plan dose was achieved in comparison with that achieved by the manual method.
Future improvements to incorporate case-specific inference rules are essential to fully automate the inverse planning process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiu, J; Ma, L
2015-06-15
Purpose: To develop a treatment delivery and planning strategy by increasing the number of beams to minimize dose to brain tissue surrounding a target, while maximizing dose coverage to the target. Methods: We analyzed 14 different treatment plans via Leksell PFX and 4C. For standardization, single tumor cases were chosen. Original treatment plans were compared with two optimized plans. The number of beams was increased in treatment plans by varying tilt angles of the patient head, while maintaining the original isocenter and beam positions in the x-, y- and z-axes, collimator size, and beam blocking. PFX optimized plans increased beam numbers with three pre-set tilt angles (70, 90, and 110 degrees), and 4C optimized plans increased beam numbers with tilt angles varying in the range of 30 to 150 degrees. Optimized treatment plans were compared dosimetrically with original treatment plans. Results: Comparing total normal tissue isodose volumes between original and optimized plans, the low-level percentage isodose volumes decreased in all plans. Despite the addition of multiple beams, up to a factor of 25, beam-on times for 1 tilt angle versus 3 or more tilt angles were comparable (<1 min). In 64% (9/14) of the studied cases, the volume percentage decreased by >5%, with the highest value reaching 19%. The addition of more tilt angles correlates with a greater decrease in normal brain irradiated volume. Selectivity and coverage for original and optimized plans remained comparable. Conclusion: Adding a large number of additional focused beams with variable patient head tilt improves dose fall-off for brain radiosurgery. The study demonstrates the technical feasibility of adding beams to decrease dose to the normal tissue surrounding the target volume.
Process Performance of Optima XEx Single Wafer High Energy Implanter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, J. H.; Yoon, Jongyoon; Kondratenko, S.
2011-01-07
To meet the process requirements for well formation in future CMOS memory production, high energy implanters require more robust angle, dose, and energy control while maintaining high productivity. The Optima XEx high energy implanter meets these requirements by integrating a traditional LINAC beamline with a robust single wafer handling system. To achieve beam angle control, Optima XEx can control both the horizontal and vertical beam angles to within 0.1 degrees using advanced beam angle measurement and correction. Accurate energy calibration and energy trim functions accelerate process matching by eliminating energy calibration errors. The large volume process chamber and UDC (upstream dose control) using Faraday cups outside of the process chamber precisely control implant dose regardless of any chamber pressure increase due to PR (photoresist) outgassing. An optimized RF LINAC accelerator improves reliability and enables singly charged phosphorus and boron energies up to 1200 keV and 1500 keV respectively, with higher beam currents. A new single wafer endstation combined with increased beam performance leads to overall increased productivity. We report on the advanced performance of Optima XEx observed during tool installation and volume production at an advanced memory fab.
Design and development of a microfluidic platform for use with colorimetric gold nanoprobe assays
NASA Astrophysics Data System (ADS)
Bernacka-Wojcik, Iwona
Due to the importance and wide applications of DNA analysis, there is a need to make genetic analysis more available and more affordable. As such, the aim of this PhD thesis is to optimize a colorimetric DNA biosensor based on gold nanoprobes developed in CEMOP by reducing its price and the needed volume of solution without compromising the device sensitivity and reliability, towards point-of-care use. Firstly, the price of the biosensor was decreased by replacing the silicon photodetector with a low cost, solution-processed TiO2 photodetector. To further reduce the photodetector price, a novel fabrication method was developed: a cost-effective inkjet printing technology that enabled an increase of the TiO2 surface area. Secondly, the DNA biosensor was optimized by means of microfluidics, which offers the advantages of miniaturization, much lower sample/reagent consumption, and enhanced system performance and functionality by integrating different components. In the developed microfluidic platform, the optical path length was extended by detecting along the channel, and the light was transmitted by optical fibres, enabling the light to be guided very close to the analysed solution. A microfluidic chip with a high aspect ratio (~13) and smooth, nearly vertical sidewalls was fabricated in PDMS using an SU-8 mould for patterning. The platform coupled to the gold nanoprobe assay enabled detection of Mycobacterium tuberculosis using 3 μl of DNA solution, i.e. 20 times less than in the previous state of the art. Subsequently, the bio-microfluidic platform was optimized in terms of cost, electrical signal processing and sensitivity to colour variation, yielding a 160% improvement of the colorimetric AuNP analysis. Planar microlenses were incorporated to converge light into the sample and then to the output fibre core, increasing the signal-to-losses ratio 6 times.
The optimized platform enabled detection of a single nucleotide polymorphism related to obesity risk (FTO) using a target DNA concentration below the limit of detection of the conventionally used microplate reader (i.e. 15 ng/μl) with a 10 times lower solution volume (3 μl). The combination of the unique optical properties of gold nanoprobes with the microfluidic platform resulted in a sensitive and accurate sensor for single nucleotide polymorphism detection that operates with small solution volumes and without the need for substrate functionalization or sophisticated instrumentation. In parallel, to enable on-chip reagent mixing, a PDMS micromixer was developed and optimized for the highest efficiency, low pressure drop and short mixing length. The optimized device shows 80% mixing efficiency at Re = 0.1 in a 2.5 mm long mixer with a pressure drop of 6 Pa, satisfying the requirements for application in the microfluidic platform for DNA analysis.
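The benefit of detecting along the channel rather than through its depth follows from the Beer-Lambert law, A = εcl: absorbance grows linearly with optical path length, so a longer in-plane path raises colorimetric sensitivity for the same tiny sample volume. A sketch with illustrative numbers (typical orders of magnitude, not values measured in the thesis):

```python
# Illustrative values only (typical orders of magnitude for AuNP assays,
# not measurements from the thesis)
eps = 2.7e8                     # AuNP molar absorptivity, M^-1 cm^-1
conc = 1e-9                     # nanoprobe concentration, M

# Absorbance A = eps * conc * path for a short (microplate-like) and a
# long (along-the-channel) optical path, both in cm
absorbance = {path: eps * conc * path for path in (0.1, 1.0)}
gain = absorbance[1.0] / absorbance[0.1]   # sensitivity gain from path length
```

At equal concentration, the ten-fold longer path yields a ten-fold larger absorbance signal, which is why the channel-axis detection geometry pays off.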
NASA Astrophysics Data System (ADS)
Hagan, Aaron; Sawant, Amit; Folkerts, Michael; Modiri, Arezoo
2018-01-01
We report on the design, implementation and characterization of a multi-graphic processing unit (GPU) computational platform for higher-order optimization in radiotherapy treatment planning. In collaboration with a commercial vendor (Varian Medical Systems, Palo Alto, CA), a research prototype GPU-enabled Eclipse (V13.6) workstation was configured. The hardware consisted of dual 8-core Xeon processors, 256 GB RAM and four NVIDIA Tesla K80 general purpose GPUs. We demonstrate the utility of this platform for large radiotherapy optimization problems through the development and characterization of a parallelized particle swarm optimization (PSO) four dimensional (4D) intensity modulated radiation therapy (IMRT) technique. The PSO engine was coupled to the Eclipse treatment planning system via a vendor-provided scripting interface. Specific challenges addressed in this implementation were (i) data management and (ii) non-uniform memory access (NUMA). For the former, we alternated between parameters over which the computation process was parallelized. For the latter, we reduced the amount of data required to be transferred over the NUMA bridge. The datasets examined in this study were approximately 300 GB in size, including 4D computed tomography images, anatomical structure contours and dose deposition matrices. For evaluation, we created a 4D-IMRT treatment plan for one lung cancer patient and analyzed computation speed while varying several parameters (number of respiratory phases, GPUs, PSO particles, and data matrix sizes). The optimized 4D-IMRT plan enhanced sparing of organs at risk by an average reduction of 26% in maximum dose, compared to the clinically optimized IMRT plan, where the internal target volume was used. We validated our computation time analyses in two additional cases. The computation speed in our implementation did not monotonically increase with the number of GPUs.
The optimal number of GPUs (five, in our study) is directly related to the hardware specifications. The optimization process took 35 min using 50 PSO particles, 25 iterations and 5 GPUs.
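The PSO engine itself is vendor-coupled and GPU-parallelized, but the core update rule is compact. A minimal serial sketch on a toy objective (inertia and acceleration weights are hypothetical defaults; this is not the authors' Eclipse-coupled implementation):

```python
import numpy as np

def pso(objective, dim, n_particles=50, n_iters=25, seed=0):
    """Minimal particle swarm optimizer (toy sketch)."""
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.7, 1.5, 1.5                    # inertia, cognitive, social
    x = rng.uniform(-5, 5, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                         # particle velocities
    pbest = x.copy()                             # personal best positions
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)]                # global best position
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

# 50 particles, 25 iterations: the counts reported for the 4D-IMRT run
best, fval = pso(lambda p: np.sum(p**2), dim=3)
```

On the sphere function this converges close to the origin; the clinical problem replaces the objective with dose-based plan quality evaluated through the treatment planning system.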
Optimized method for manufacturing large aspheric surfaces
NASA Astrophysics Data System (ADS)
Zhou, Xusheng; Li, Shengyi; Dai, Yifan; Xie, Xuhui
2007-12-01
Aspheric optics are used increasingly widely in modern optical systems because of their ability to correct aberrations, enhance image quality, enlarge the field of view and extend the range of effect while reducing the weight and volume of the system. With the development of optical technology, the requirements for large-aperture, high-precision aspheric surfaces have become more pressing. The original computer-controlled optical surfacing (CCOS) technique cannot meet the challenge of precision and machining efficiency, a problem that has attracted considerable attention from researchers. Addressing the shortcomings of the original polishing process, an optimized method for manufacturing large aspheric surfaces is put forward in which subsurface damage (SSD), full-aperture errors and errors across the full frequency band are all controlled. A smaller SSD depth can be obtained by using low-hardness tools and small abrasive grains in the grinding process. For full-aperture error control, edge effects can be suppressed by using smaller tools and by amending the material removal function model. For full-frequency-band error control, low-frequency errors can be corrected with the optimized material removal function, while medium-to-high-frequency errors are reduced using the uniform-removal principle. With this optimized method, the accuracy of a K9 glass paraboloid mirror reached 0.055 waves rms (where a wave is 0.6328 μm) in a short time. The results show that the optimized method can effectively guide the manufacturing of large aspheric surfaces.
Genç, Nevim; Doğan, Esra Can; Narcı, Ali Oğuzhan; Bican, Emine
2017-05-01
In this study, a multi-response optimization method using Taguchi's robust design approach is proposed for imidacloprid removal by reverse osmosis. Tests were conducted with different membrane types (BW30, LFC-3, CPA-3), transmembrane pressures (TMP = 20, 25, 30 bar), volume reduction factors (VRF = 2, 3, 4), and pH values (3, 7, 11). The quality and quantity of permeate were optimized using the multi-response characteristics of the total dissolved solid (TDS), conductivity, imidacloprid, and total organic carbon (TOC) rejection ratios and the permeate flux. The optimized conditions were determined as membrane type BW30, TMP 30 bar, VRF 3, and pH 11. Under these conditions, the TDS, conductivity, imidacloprid, and TOC rejections and the permeate flux were 97.50, 97.41, 97.80, and 98.00% and 30.60 L/m²·h, respectively. Membrane type was the most influential factor, with a contribution of 64%. The difference between the predicted and observed values of the multi-response signal/noise ratio (MRSN) is within the confidence interval.
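For responses that should be maximized, such as rejection ratios, Taguchi analysis uses the larger-the-better signal-to-noise ratio, SN = -10 log₁₀(mean(1/y²)). A sketch using the rejection values reported for the optimized run (the paper's MRSN combines several responses with weights not reproduced here):

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi larger-the-better signal-to-noise ratio, in dB."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Rejection ratios (%) reported for the optimized BW30 conditions
rejections = [97.50, 97.41, 97.80, 98.00]
sn = sn_larger_is_better(rejections)
```

A higher SN value indicates a response that is both large and insensitive to noise factors, which is the criterion used to pick the optimal factor levels.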
Orthogonal optimization of a water hydraulic pilot-operated pressure-reducing valve
NASA Astrophysics Data System (ADS)
Mao, Xuyao; Wu, Chao; Li, Bin; Wu, Di
2017-12-01
In order to optimize the overall characteristics of a water hydraulic pilot-operated pressure-reducing valve, a numerical orthogonal experimental design was adopted. Six parameters of the valve (the diameters of the damping plugs, the volume of the spring chamber, the half cone angle of the main spool, the half cone angle of the pilot spool, the mass of the main spool and the diameter of the main spool) were selected as orthogonal factors, each with five levels. An index combining flowrate stability, pressure stability and pressure overshoot stability (iFPOS) was used to judge the merit of each orthogonal trial. An embedded orthogonal process emerged, and a final optimal combination of these parameters was obtained after a total of 50 numerical orthogonal experiments, reducing iFPOS to a fairly low value and giving the valve much better stability. During the optimization it was also found that the diameters of the damping plugs and of the main spool play important roles in the stability characteristics of the valve.
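Factor influence in an orthogonal experiment is commonly judged by range analysis: for each factor, compare the spread of the mean response across its levels. A toy two-factor, two-level sketch with made-up index values (not the study's six-factor, five-level array):

```python
import numpy as np

# Run table: each row is one experiment, each column a factor's level
levels = np.array([[1, 1],
                   [1, 2],
                   [2, 1],
                   [2, 2]])
y = np.array([3.0, 4.0, 7.0, 8.0])           # stability index per run (toy)

# Range analysis: mean response per level, then the spread across levels
ranges = []
for f in range(levels.shape[1]):
    means = [y[levels[:, f] == lv].mean() for lv in (1, 2)]
    ranges.append(max(means) - min(means))
dominant = int(np.argmax(ranges))            # factor with the largest effect
```

In this toy table factor 0 shifts the mean response by 4 while factor 1 shifts it by only 1, so factor 0 would be ranked as the dominant influence, analogous to how the damping-plug and main-spool diameters were identified in the valve study.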
Automated prescription of oblique brain 3D magnetic resonance spectroscopic imaging.
Ozhinsky, Eugene; Vigneron, Daniel B; Chang, Susan M; Nelson, Sarah J
2013-04-01
Two major difficulties encountered in implementing Magnetic Resonance Spectroscopic Imaging (MRSI) in a clinical setting are limited coverage and difficulty in prescription. The goal of this project was to automate completely the process of 3D PRESS MRSI prescription, including placement of the selection box, saturation bands and shim volume, while maximizing the coverage of the brain. The automated prescription technique included acquisition of an anatomical MRI image, optimization of the oblique selection box parameters, optimization of the placement of outer-volume suppression saturation bands, and loading of the calculated parameters into a customized 3D MRSI pulse sequence. To validate the technique and compare its performance with existing protocols, 3D MRSI data were acquired from six exams from three healthy volunteers. To assess the performance of the automated 3D MRSI prescription for patients with brain tumors, the data were collected from 16 exams from 8 subjects with gliomas. This technique demonstrated robust coverage of the tumor, high consistency of prescription and very good data quality within the T2 lesion. Copyright © 2012 Wiley Periodicals, Inc.
Accurate B-spline-based 3-D interpolation scheme for digital volume correlation
NASA Astrophysics Data System (ADS)
Ren, Maodong; Liang, Jin; Wei, Bin
2016-12-01
An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and the Fourier transform technique, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the factors influencing the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth, filter) in the Fourier domain. It is found that the positional error of a filter can be expressed as a function of fractional position and wave number. Then, taking these factors into account, an optimized B-spline-based recursive filter, combining B-spline transforms with a least-squares optimization method, is designed to virtually eliminate the interpolation bias in the sub-voxel matching process. Moreover, since each volumetric image contains different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave number ranges based on Fourier spectrum analysis. Finally, novel software was developed and a series of validation experiments was carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.
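The interpolation-bias idea can be illustrated in one dimension with scipy's cubic B-spline recursive prefilter (a stand-in for the authors' optimized filter): sample a band-limited signal, interpolate it at fractional positions, and compare with ground truth.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Sample a band-limited signal on an integer grid
x = np.arange(64, dtype=float)
signal = np.sin(2 * np.pi * x / 16)        # 16 samples per period

# Interpolate at half-sample-shifted positions (worst case for bias),
# staying away from the borders to avoid boundary effects
frac = x[8:-8] + 0.5
interp = map_coordinates(signal, [frac], order=3, prefilter=True)
truth = np.sin(2 * np.pi * frac / 16)
bias = np.max(np.abs(interp - truth))      # peak interpolation error
```

For this well-sampled sinusoid the cubic B-spline error is tiny; the bias grows with wave number and fractional position, which is exactly the dependence the paper's transfer-function analysis characterizes and its optimized filter suppresses.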
Reducing the Volume of NASA Earth-Science Data
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Braverman, Amy J.; Guillaume, Alexandre
2010-01-01
A computer program reduces data generated by NASA Earth-science missions into representative clusters characterized by centroids and membership information, thereby reducing the large volume of data to a level more amenable to analysis. The program effects an autonomous data-reduction/clustering process to produce a representative distribution and joint relationships of the data, without assuming a specific type of distribution and relationship and without resorting to domain-specific knowledge about the data. The program implements a combination of a data-reduction algorithm known as the entropy-constrained vector quantization (ECVQ) and an optimization algorithm known as the differential evolution (DE). The combination of algorithms generates the Pareto front of clustering solutions that presents the compromise between the quality of the reduced data and the degree of reduction. Similar prior data-reduction computer programs utilize only a clustering algorithm, the parameters of which are tuned manually by users. In the present program, autonomous optimization of the parameters by means of the DE supplants the manual tuning of the parameters. Thus, the program determines the best set of clustering solutions without human intervention.
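The ECVQ-plus-DE combination can be sketched in miniature: let differential evolution place a small codebook that minimizes distortion plus an entropy penalty, replacing manual parameter tuning with an automated search. This toy 1-D version with a hypothetical penalty weight is not the NASA program itself:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Two well-separated 1-D clusters; the task is to place a 2-code codebook
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-3, 0.3, 200), rng.normal(2, 0.3, 200)])
lam = 0.1                                  # entropy penalty weight (toy)

def ecvq_cost(codebook):
    """Mean squared distortion plus an entropy term, ECVQ-style."""
    d = np.abs(data[:, None] - codebook[None, :])   # point-to-code distances
    assign = d.argmin(axis=1)                       # nearest-code assignment
    distortion = np.mean(d[np.arange(len(data)), assign] ** 2)
    p = np.bincount(assign, minlength=len(codebook)) / len(data)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return distortion + lam * entropy

# DE searches the codebook positions instead of manual tuning
res = differential_evolution(ecvq_cost, bounds=[(-5.0, 5.0)] * 2, seed=1)
```

Sweeping `lam` traces the distortion-versus-rate trade-off, which is the one-dimensional analogue of the Pareto front of clustering solutions the program produces.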
Optimizing the Distribution of Leg Muscles for Vertical Jumping
Wong, Jeremy D.; Bobbert, Maarten F.; van Soest, Arthur J.; Gribble, Paul L.; Kistemaker, Dinant A.
2016-01-01
A goal of biomechanics and motor control is to understand the design of the human musculoskeletal system. Here we investigated human functional morphology by making predictions about the muscle volume distribution that is optimal for a specific motor task. We examined a well-studied and relatively simple human movement, vertical jumping. We investigated how high a human could jump if muscle volume were optimized for jumping, and determined how the optimal parameters improve performance. We used a four-link inverted pendulum model of human vertical jumping actuated by Hill-type muscles, that well-approximates skilled human performance. We optimized muscle volume by allowing the cross-sectional area and muscle fiber optimum length to be changed for each muscle, while maintaining constant total muscle volume. We observed, perhaps surprisingly, that the reference model, based on human anthropometric data, is relatively good for vertical jumping; it achieves 90% of the jump height predicted by a model with muscles designed specifically for jumping. Alteration of cross-sectional areas—which determine the maximum force deliverable by the muscles—constitutes the majority of improvement to jump height. The optimal distribution results in large vastus, gastrocnemius and hamstrings muscles that deliver more work, while producing a kinematic pattern essentially identical to the reference model. Work output is increased by removing muscle from rectus femoris, which cannot do work on the skeleton given its moment arm at the hip and the joint excursions during push-off. The gluteus composes a disproportionate amount of muscle volume and jump height is improved by moving it to other muscles. This approach represents a way to test hypotheses about optimal human functional morphology. 
Future studies may extend this approach to address other morphological questions in ethological tasks such as locomotion, and feature other sets of parameters such as properties of the skeletal segments. PMID:26919645
A Cost-Effective Approach to Optimizing Microstructure and Magnetic Properties in Ce₁₇Fe₇₈B₆ Alloys.
Tan, Xiaohua; Li, Heyun; Xu, Hui; Han, Ke; Li, Weidan; Zhang, Fang
2017-07-28
Optimizing fabrication parameters for rapid solidification of Re-Fe-B (Re = rare earth) alloys can lead to nanocrystalline products with hard magnetic properties without any heat treatment. In this work, we enhanced the magnetic properties of Ce₁₇Fe₇₈B₆ ribbons by engineering both the microstructure and the volume fraction of the Ce₂Fe₁₄B phase through optimization of the chamber pressure and the wheel speed used for quenching the liquid. We explored the relationship between these two parameters and propose an approach to identifying the experimental conditions most likely to yield a homogeneous microstructure and reproducible magnetic properties. Optimized experimental conditions resulted in a microstructure with homogeneously dispersed Ce₂Fe₁₄B and CeFe₂ nanocrystals. The best magnetic properties were obtained at a chamber pressure of 0.05 MPa and a wheel speed of 15 m·s⁻¹. Without the conventional heat treatment that is usually required, key magnetic properties were maximized by optimizing the processing parameters of rapid solidification in a cost-effective manner.
Optimizing the construction of devices to control inaccessible surfaces - case study
NASA Astrophysics Data System (ADS)
Niţu, E. L.; Costea, A.; Iordache, M. D.; Rizea, A. D.; Babă, Al
2017-10-01
The modern concept of manufacturing-system evolution requires multi-criteria optimization of technological processes and equipment, prioritizing the associated criteria according to their importance. Technological preparation of manufacturing can be developed, depending on the production volume, up to the limit at which favourable economic effects still cover the costs of designing and building the technological equipment. Devices, as subsystems of the technological system, in the general context of the modernization and diversification of machines, tools, semi-finished products and drives, are made in a multitude of constructive variants, which in many cases does not allow their identification, study and improvement. This paper presents a case study in which a multi-criteria analysis of several structures, based on a novel general optimization method, is used to determine the optimal construction variant of a control device. The rational construction of the control device confirms that the optimization method and the proposed calculation methods are correct, and it yields a different system configuration, new features and functions, and a specific working method for controlling inaccessible surfaces.
NASA Astrophysics Data System (ADS)
Benedek, Judit; Papp, Gábor; Kalmár, János
2018-04-01
Beyond the rectangular prism, the polyhedron can also be used as a discrete volume element to model the density distribution inside 3D geological structures. Evaluating the closed formulae given for its gravitational potential and higher-order derivatives, however, requires twice the runtime of the rectangular-prism computations. Although the "more detailed the better" principle is generally accepted, it is strictly true only for errorless data. As soon as errors are present, any forward gravitational calculation from the model is only one possible realization of the true force field, at a significance level determined by the errors. If the reliability of the input data is really taken into account, "less" can therefore sometimes be statistically equivalent to "more". As a consequence, the processing time of the complex formulae can be significantly reduced by optimizing the number of volume elements on the basis of accuracy estimates of the input data. New algorithms are proposed to minimize the number of model elements defined in both local and global coordinate systems. Common gravity field modelling programs generate optimized models for every computation point (dynamic approach), whereas the static approach provides a single optimized model for all points. Based on the static approach, two different algorithms were developed. The grid-based algorithm starts from the maximum-resolution polyhedral model defined by 3-3 points of each grid cell and generates a new polyhedral surface defined by points selected from the grid. The other algorithm is more general: it also works for irregularly distributed data (scattered points) connected by triangulation. Beyond the description of the optimization schemes, some applications of these algorithms in regional and local gravity field modelling are also presented.
In favourable situations the static approaches can reduce computation time by more than 90% without loss of reliability in the calculated gravity field parameters.
Observing laser ablation dynamics with sub-picosecond temporal resolution
NASA Astrophysics Data System (ADS)
Tani, Shuntaro; Kobayashi, Yohei
2017-04-01
Laser ablation is one of the most fundamental processes in laser processing, and understanding its dynamics is of key importance for controlling and manipulating the outcome. In this study, we propose a novel way of observing the dynamics in the time domain using an electro-optic sampling technique. We found that an electromagnetic field was emitted during the laser ablation process and that the amplitude of the emission was closely correlated with the ablated volume. From the temporal profile of the electromagnetic field, we analyzed the motion of charged particles with sub-picosecond temporal resolution. The proposed method provides new access to laser ablation dynamics and thus opens a new way to optimize laser processing.
A gEUD-based inverse planning technique for HDR prostate brachytherapy: Feasibility study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giantsoudi, D.; Department of Radiation Oncology, Francis H. Burr Proton Therapy Center, Boston, Massachusetts 02114; Baltas, D.
2013-04-15
Purpose: The purpose of this work was to study the feasibility of a new inverse planning technique based on the generalized equivalent uniform dose for image-guided high dose rate (HDR) prostate cancer brachytherapy in comparison to conventional dose-volume based optimization. Methods: The quality of 12 clinical HDR brachytherapy implants for prostate utilizing HIPO (Hybrid Inverse Planning Optimization) is compared with alternative plans, which were produced through inverse planning using the generalized equivalent uniform dose (gEUD). All the common dose-volume indices for the prostate and the organs at risk were considered together with radiobiological measures. The clinical effectiveness of the different dose distributions was investigated by comparing dose volume histogram and gEUD evaluators. Results: Our results demonstrate the feasibility of gEUD-based inverse planning in HDR brachytherapy implants for prostate. A statistically significant decrease in D10 and/or final gEUD values for the organs at risk (urethra, bladder, and rectum) was found while improving dose homogeneity or dose conformity of the target volume. Conclusions: Following the promising results of gEUD-based optimization in intensity modulated radiation therapy treatment optimization, as reported in the literature, the implementation of a similar model in HDR brachytherapy treatment plan optimization is suggested by this study. The potential of improved sparing of organs at risk was shown for various gEUD-based optimization parameter protocols, which indicates the ability of this method to adapt to the user's preferences.
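The gEUD underlying this optimization is the power mean of the dose distribution, gEUD = (Σᵢ vᵢ dᵢᵃ)^(1/a) over fractional volumes vᵢ. A small sketch with illustrative DVH values (not the paper's clinical data):

```python
import numpy as np

def geud(doses, volumes, a):
    """Generalized equivalent uniform dose from a differential DVH:
    gEUD = (sum_i v_i * d_i**a) ** (1/a), with v_i fractional volumes.
    Negative `a` emphasizes cold spots (targets); large positive `a`
    emphasizes hot spots (serial organs at risk)."""
    v = np.asarray(volumes, dtype=float)
    v = v / v.sum()                        # normalize to fractional volume
    d = np.asarray(doses, dtype=float)
    return np.sum(v * d**a) ** (1.0 / a)

# Illustrative three-bin DVH for a target, with a = -10
eud = geud([60.0, 62.0, 58.0], [0.2, 0.5, 0.3], a=-10)
```

A useful sanity check is that a perfectly uniform dose has gEUD equal to that dose for any `a`; for non-uniform distributions the value always lies between the minimum and maximum dose.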
A 3D Freehand Ultrasound System for Multi-view Reconstructions from Sparse 2D Scanning Planes
2011-01-01
Background A significant limitation of existing 3D ultrasound systems comes from the fact that the majority of them work with fixed acquisition geometries. As a result, the users have very limited control over the geometry of the 2D scanning planes. Methods We present a low-cost and flexible ultrasound imaging system that integrates several image processing components to allow for 3D reconstructions from limited numbers of 2D image planes and multiple acoustic views. Our approach is based on a 3D freehand ultrasound system that allows users to control the 2D acquisition imaging using conventional 2D probes. For reliable performance, we develop new methods for image segmentation and robust multi-view registration. We first present a new hybrid geometric level-set approach that provides reliable segmentation performance with relatively simple initializations and minimum edge leakage. Optimization of the segmentation model parameters and its effect on performance is carefully discussed. Second, using the segmented images, a new coarse-to-fine automatic multi-view registration method is introduced. The approach uses a 3D Hotelling transform to initialize an optimization search. Then, the fine scale feature-based registration is performed using a robust, non-linear least squares algorithm. The robustness of the multi-view registration system allows for accurate 3D reconstructions from sparse 2D image planes. Results Volume measurements from multi-view 3D reconstructions are found to be consistently and significantly more accurate than measurements from single view reconstructions. The volume error of multi-view reconstruction is measured to be less than 5% of the true volume. We show that volume reconstruction accuracy is a function of the total number of 2D image planes and the number of views for a calibrated phantom.
In clinical in-vivo cardiac experiments, we show that volume estimates of the left ventricle from multi-view reconstructions are found to be in better agreement with clinical measures than measures from single view reconstructions. Conclusions Multi-view 3D reconstruction from sparse 2D freehand B-mode images leads to more accurate volume quantification compared to single view systems. The flexibility and low-cost of the proposed system allow for fine control of the image acquisition planes for optimal 3D reconstructions from multiple views. PMID:21251284
A 3D freehand ultrasound system for multi-view reconstructions from sparse 2D scanning planes.
Yu, Honggang; Pattichis, Marios S; Agurto, Carla; Beth Goens, M
2011-01-20
A significant limitation of existing 3D ultrasound systems comes from the fact that the majority of them work with fixed acquisition geometries. As a result, the users have very limited control over the geometry of the 2D scanning planes. We present a low-cost and flexible ultrasound imaging system that integrates several image processing components to allow for 3D reconstructions from limited numbers of 2D image planes and multiple acoustic views. Our approach is based on a 3D freehand ultrasound system that allows users to control the 2D acquisition imaging using conventional 2D probes. For reliable performance, we develop new methods for image segmentation and robust multi-view registration. We first present a new hybrid geometric level-set approach that provides reliable segmentation performance with relatively simple initializations and minimum edge leakage. Optimization of the segmentation model parameters and its effect on performance is carefully discussed. Second, using the segmented images, a new coarse-to-fine automatic multi-view registration method is introduced. The approach uses a 3D Hotelling transform to initialize an optimization search. Then, the fine scale feature-based registration is performed using a robust, non-linear least squares algorithm. The robustness of the multi-view registration system allows for accurate 3D reconstructions from sparse 2D image planes. Volume measurements from multi-view 3D reconstructions are found to be consistently and significantly more accurate than measurements from single view reconstructions. The volume error of multi-view reconstruction is measured to be less than 5% of the true volume. We show that volume reconstruction accuracy is a function of the total number of 2D image planes and the number of views for a calibrated phantom.
In clinical in-vivo cardiac experiments, we show that volume estimates of the left ventricle from multi-view reconstructions are found to be in better agreement with clinical measures than measures from single view reconstructions. Multi-view 3D reconstruction from sparse 2D freehand B-mode images leads to more accurate volume quantification compared to single view systems. The flexibility and low-cost of the proposed system allow for fine control of the image acquisition planes for optimal 3D reconstructions from multiple views.
The association between resting functional connectivity and dispositional optimism.
Ran, Qian; Yang, Junyi; Yang, Wenjing; Wei, Dongtao; Qiu, Jiang; Zhang, Dong
2017-01-01
Dispositional optimism is an individual characteristic that plays an important role in human experience. Optimists are people who tend to hold positive expectations for their future. Previous studies have focused on the neural basis of optimism, such as task-related neural activity and brain structure volume. However, the functional connectivity between the brain regions of dispositional optimists is poorly understood. A previous study suggested that the ventromedial prefrontal cortex (vmPFC) is associated with individual differences in dispositional optimism, but it is unclear whether other brain regions combine with the vmPFC to contribute to dispositional optimism. Thus, the present study used the resting-state functional connectivity (RSFC) approach, with the vmPFC as the seed region, to examine whether differences in functional connectivity between the vmPFC and other brain regions are associated with individual differences in dispositional optimism. The results showed that dispositional optimism was significantly positively correlated with the strength of the RSFC between the vmPFC and the middle temporal gyrus (mTG) and negatively correlated with the RSFC between the vmPFC and the inferior frontal gyrus (IFG). These findings suggest that the mTG and IFG, which are associated with emotion processing and emotion regulation, also play an important role in dispositional optimism.
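Seed-based RSFC of the kind used here reduces, at its core, to correlating the seed region's time series with every other region's. A toy sketch on synthetic data (not the study's preprocessing or statistical pipeline):

```python
import numpy as np

# Synthetic "time x regions" data: region 0 acts as the vmPFC-like seed,
# and region 1 is constructed to co-fluctuate with it
rng = np.random.default_rng(0)
n_t, n_regions = 200, 5
ts = rng.standard_normal((n_t, n_regions))
ts[:, 1] += 0.8 * ts[:, 0]                 # couple region 1 to the seed

seed = ts[:, 0]
rsfc = np.array([np.corrcoef(seed, ts[:, i])[0, 1] for i in range(n_regions)])
strongest = int(np.argmax(np.abs(rsfc[1:])) + 1)   # exclude the seed itself
```

In practice each subject's seed-to-region correlation map (Fisher z-transformed) is then related across subjects to the optimism score, which is the group-level step the study reports.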
Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data
NASA Astrophysics Data System (ADS)
Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel
2015-08-01
Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The proposed methodology can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, while also yielding better-quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics obtained by the optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data, and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes of quality equivalent to standard MART, with the benefit of reduced computational time.
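The MART family referenced above shares one multiplicative update: each voxel is scaled by the ratio of measured to projected intensity, raised to a weighted exponent. A toy sketch on a tiny consistent system (illustrative weights and relaxation factor; not the tomographic solver used in the study):

```python
import numpy as np

def mart(W, b, n_iters=200, mu=0.5):
    """Minimal MART: solve W x = b for positive x by multiplicative
    row-by-row corrections (toy sketch, relaxation mu is hypothetical)."""
    x = np.ones(W.shape[1])                  # positive initial guess
    for _ in range(n_iters):
        for i in range(W.shape[0]):          # sweep the "projections"
            proj = W[i] @ x
            if proj > 0:
                # multiplicative update, exponent weighted by W entries
                x = x * (b[i] / proj) ** (mu * W[i] / W[i].max())
    return x

# Tiny consistent system standing in for camera line-of-sight equations
W = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
b = W @ x_true                               # simulated projections
x_rec = mart(W, b)
```

The multiplicative form keeps the reconstruction nonnegative by construction, which is what makes MART-type schemes natural for particle intensity volumes; BIMART and SMART vary how the corrections are batched and relaxed.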
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Wei, E-mail: Liu.Wei@mayo.edu; Schild, Steven E.; Chang, Joe Y.
Purpose: The purpose of this study was to compare the impact of uncertainties and interplay on 3-dimensional (3D) and 4D robustly optimized intensity modulated proton therapy (IMPT) plans for lung cancer in an exploratory methodology study. Methods and Materials: IMPT plans were created for 11 nonrandomly selected non-small cell lung cancer (NSCLC) cases: 3D robustly optimized plans on average CTs with internal gross tumor volume density overridden to irradiate the internal target volume, and 4D robustly optimized plans on 4D computed tomography (CT) to irradiate the clinical target volume (CTV). Regular fractionation (66 Gy [relative biological effectiveness; RBE] in 33 fractions) was considered. In 4D optimization, the CTV of individual phases received nonuniform doses to achieve a uniform cumulative dose. The root-mean-square dose-volume histograms (RVH) measured the sensitivity of the dose to uncertainties, and the areas under the RVH curve (AUCs) were used to evaluate plan robustness. Dose evaluation software modeled time-dependent spot delivery to incorporate the interplay effect with randomized starting phases of each field per fraction. Dose-volume histogram (DVH) indices comparing CTV coverage, homogeneity, and normal tissue sparing were evaluated using the Wilcoxon signed rank test. Results: 4D robust optimization plans led to smaller AUC for CTV (14.26 vs 18.61, respectively; P=.001), better CTV coverage (Gy [RBE]) (D{sub 95%} CTV: 60.6 vs 55.2, respectively; P=.001), and better CTV homogeneity (D{sub 5%}-D{sub 95%} CTV: 10.3 vs 17.7, respectively; P=.002) in the face of uncertainties. With the interplay effect considered, 4D robust optimization produced plans with better target coverage (D{sub 95%} CTV: 64.5 vs 63.8, respectively; P=.0068), comparable target homogeneity, and comparable normal tissue protection. The benefits from 4D robust optimization were most obvious for the 2 typical stage III lung cancer patients.
Conclusions: Our exploratory methodology study showed that, compared to 3D robust optimization, 4D robust optimization produced significantly more robust and interplay-effect-resistant plans for targets, with comparable dose distributions for normal tissues. A further study with a larger and more realistic patient population is warranted to generalize the conclusions.
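The RVH/AUC robustness metric described above can be sketched as follows; the per-voxel scenario doses here are synthetic illustrations, not the study's plans:

```python
import math

def rvh_auc(scenario_doses, nominal, step=0.1):
    """Area under the root-mean-square dose-volume histogram (RVH).
    scenario_doses: one per-voxel dose list per uncertainty scenario."""
    n_vox = len(nominal)
    # RMS deviation from the nominal dose, voxel by voxel.
    rms = []
    for j in range(n_vox):
        devs = [sc[j] - nominal[j] for sc in scenario_doses]
        rms.append(math.sqrt(sum(d * d for d in devs) / len(devs)))
    # RVH: volume fraction whose RMS deviation exceeds d; integrate over d.
    auc, d, top = 0.0, 0.0, max(rms)
    while d <= top:
        auc += step * sum(1 for r in rms if r >= d) / n_vox
        d += step
    return auc

nominal = [60.0] * 4                                    # nominal voxel doses [Gy]
robust = [[60.2, 59.9, 60.1, 60.0], [59.8, 60.1, 59.9, 60.0]]
loose = [[63.0, 57.0, 62.0, 58.0], [57.5, 62.5, 58.5, 61.5]]
auc_robust, auc_loose = rvh_auc(robust, nominal), rvh_auc(loose, nominal)
```

A smaller AUC indicates a plan whose dose distribution is less sensitive to the uncertainty scenarios, which is the sense in which the 4D plans above are "more robust".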
TU-AB-303-01: A Feasibility Study for Dynamic Adaptive Therapy of Non-Small Cell Lung Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, M; Phillips, M
2015-06-15
Purpose: To compare plans for NSCLC optimized using Dynamic Adaptive Therapy (DAT) with conventional IMRT optimization. DAT adapts plans based on changes in the target volume by using dynamic programming techniques to incorporate expected changes into the optimization process. Information gathered during treatment, e.g. from CBCT, is incorporated into the optimization. Methods and materials: DAT is formulated using a stochastic control formalism, which minimizes the total expected number of tumor cells at the end of a treatment course subject to the uncertainty inherent in tumor response and to organs-at-risk (OAR) dose constraints. This formulation allows for a non-stationary dose distribution as well as a non-stationary fractional dose as needed to achieve a series of optimal plans that are conformal to the tumor over time. Sixteen phantom cases with various tumor sizes and locations, and various OAR geometries, were generated. Each case was planned with DAT and conventional IMRT (60Gy/30fx). Tumor volume change over time was obtained by using a daily MVCT-based, two-level cell population model. Monte Carlo simulations were performed for each treatment course to account for uncertainty in tumor response. The same OAR dose constraints were applied for both methods. The frequency of plan modification was varied to 1, 2, 5 (weekly), and 29 (daily). The final average tumor dose and OAR doses were compared to quantify the potential benefit of DAT. Results: The average tumor max, min, mean, and D95 resulting from DAT were 124.0–125.2%, 102.1–114.7%, 113.7–123.4%, and 102.0–115.9% (range dependent on the frequency of plan modification) of those from conventional IMRT. Cord max, esophagus max, lung mean, heart mean, and unspecified tissue D05 resulting from DAT were 84–102.4%, 99.8–106.9%, 66.9–85.6%, 58.2–78.8%, and 85.2–94.0% of those from conventional IMRT.
Conclusions: Significant tumor dose increases and OAR dose reductions, especially for parallel OARs with mean-dose or dose-volume constraints, can be achieved using DAT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, R; Liu, A; Poenisch, F
Purpose: Treatment planning for intensity modulated proton therapy (IMPT) for head and neck cancer is time-consuming due to the large number of organs at risk (OAR) to be considered. As there are many competing objectives and a wide range of acceptable OAR constraints, the final approved plan may not be the most optimal for the given structures. We evaluated the dose reduction to the contralateral parotid achieved by implementing standardized constraints during optimization for scanning beam proton therapy planning. Methods: Twenty-four (24) consecutive patients previously treated for base of tongue carcinoma were retrospectively selected. The doses were 70Gy, 63Gy and 57Gy (SIB in 33 fractions) for high-, intermediate-, and standard-risk clinical target volumes (CTV), respectively; the treatment included the bilateral neck. Scanning beams using MFO with standardized bilateral anterior oblique and PA fields were applied. New plans were then developed and optimized by employing additional contralateral parotid constraints at multiple defined dose levels. Using a step-wise iterative process, the volume-based constraints at each level were then further reduced until target coverage was compromised. The newly developed plans were then compared to the original clinically approved plans using paired Student t tests. Results: All 24 newly optimized treatment plans maintained initial plan quality as compared to the approved plans, and the 98% prescription dose coverage of the CTVs was not compromised. A representative DVH comparison is shown in FIGURE 1. The contralateral parotid doses were reduced at all levels of interest when systematic constraints were applied to V10, V20, V30 and V40Gy (all P<0.0001; TABLE 1). Overall, the mean contralateral parotid doses were reduced by 2.26 Gy on average, a ∼13% relative improvement.
Conclusion: Applying systematic and volume-based contralateral parotid constraints for IMPT planning significantly reduced the dose at all dosimetric levels for patients with base of tongue cancer.
NASA Astrophysics Data System (ADS)
Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.
2016-03-01
Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is being followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a population-based stochastic optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, image processing, and video analysis. The proposed tool, denoted the PSO-Snake model, has already been successfully tested in previous works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm to calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published results obtained manually by an expert.
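As context for the optimizer underlying the PSO-Snake model, a minimal particle swarm optimizer can be sketched as follows; the inertia and acceleration coefficients are common textbook values, not those used by the authors:

```python
import random

def pso(f, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer minimizing f over box bounds."""
    dim = len(bounds)
    rnd = random.Random(42)                      # seeded for reproducibility
    pos = [[rnd.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example: minimize the sphere function; the optimum is at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), [(-5, 5)] * 2)
```

In the PSO-Snake model, the quantity being minimized is the snake's energy rather than a benchmark function, but the swarm update is the same.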
Gent, Malcolm Richard; Menendez, Mario; Toraño, Javier; Torno, Susana
2011-06-01
It is demonstrated that substantial reductions in the plastics presently disposed of in landfills can be achieved by cyclone density media separation (DMS). In comparison with the size fraction of plastics presently processed by industrial density separations (generally 6.4 to 9.5 mm), cyclone DMS methods are demonstrated to effectively process a substantially greater range of particle sizes (from 0.5 up to 120 mm). A single-stage separation using a cylindrical cyclone is shown to attain virtually 100% purity and recoveries >99% for high-density fractions, and >98% purity and recovery for low-density products. Four alternative schemas of multi-stage separations are presented and analyzed as proposed methods to obtain total low- and high-density plastics fraction recoveries while maintaining near 100% purities. Preliminary tests of two of these indicate the potential for product purities and recoveries >99.98% for both density fractions. A preliminary economic comparison of capital costs suggests that cyclone DMS systems are comparable with other DMS processes even though their high volume capacity for recycling operations is not yet optimized.
Design and Optimization of Composite Gyroscope Momentum Wheel Rings
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2007-01-01
Stress analysis and preliminary design/optimization procedures are presented for gyroscope momentum wheel rings composed of metallic, metal matrix composite, and polymer matrix composite materials. The design of these components involves simultaneously minimizing both true part volume and mass, while maximizing angular momentum. The stress analysis results are combined with an anisotropic failure criterion to formulate a new sizing procedure that provides considerable insight into the design of gyroscope momentum wheel ring components. Results compare the performance of two optimized metallic designs, an optimized SiC/Ti composite design, and an optimized graphite/epoxy composite design. The graphite/epoxy design appears to be far superior to the competitors considered unless a much greater premium is placed on volume efficiency compared to mass efficiency.
Barchuk, A A; Podolsky, M D; Tarakanov, S A; Kotsyuba, I Yu; Gaidukov, V S; Kuznetsov, V I; Merabishvili, V M; Barchuk, A S; Levchenko, E V; Filochkina, A V; Arseniev, A I
2015-01-01
This review article analyzes the literature devoted to the description, interpretation and classification of focal (nodal) changes in the lungs detected by computed tomography of the chest cavity. Possible criteria are discussed for determining their most likely character--primary and metastatic tumor processes, inflammation, scarring, autoimmune changes, tuberculosis and others. Identification of the most characteristic, reliable and statistically significant signs of the various pathological processes in the lungs, including the use of modern computer-aided detection and diagnosis systems, will optimize diagnostic measures and ensure processing of a large volume of medical data in a short time.
Global optimization framework for solar building design
NASA Astrophysics Data System (ADS)
Silva, N.; Alves, N.; Pascoal-Faria, P.
2017-07-01
The generative modeling paradigm is a shift from static models to flexible models. It describes a modeling process using functions, methods and operators. The result is an algorithmic description of the construction process. Each evaluation of such an algorithm creates a model instance, which depends on its input parameters (width, height, volume, roof angle, orientation, location). These values are normally chosen according to aesthetic aspects and style. In this study, the model's parameters are automatically generated according to an objective function. A generative model can be optimized according to its parameters; in this way, the best solution for a constrained problem is determined. Besides the establishment of an overall framework design, this work consists of the identification of different building shapes and their main parameters, the creation of an algorithmic description for these main shapes, and the formulation of the objective function, reflecting a building's energy consumption (solar energy, heating and insulation). Additionally, the conception of an optimization pipeline, combining an energy calculation tool with a geometric scripting engine, is presented. The methods developed lead to an automated and optimized 3D shape generation for the projected building (based on the desired conditions and according to specific constraints). The approach proposed will help in the construction of real buildings that consume less energy and contribute to a more sustainable world.
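The pipeline described above, evaluating generative model instances against an energy objective, can be illustrated with a toy sketch. The surrogate objective and every coefficient below are invented for illustration and stand in for the actual energy calculation tool:

```python
import itertools
import math

def energy_score(width, depth, roof_angle_deg, orientation_deg):
    """Hypothetical surrogate: heat-loss proxy grows with envelope area,
    solar-gain proxy depends on orientation and roof angle. Lower is better."""
    height = 3.0                               # fixed storey height [m]
    envelope = 2 * height * (width + depth)    # wall area [m^2]
    roof = width * depth / math.cos(math.radians(roof_angle_deg))
    loss = 0.5 * (envelope + roof)
    gain = (width * height
            * max(0.0, math.cos(math.radians(orientation_deg - 180)))
            * math.sin(math.radians(roof_angle_deg + 30)))
    return loss - gain

# Each grid point is one "model instance" produced by the generative model.
grid = itertools.product([8, 10, 12],          # width [m]
                         [8, 10],              # depth [m]
                         [15, 30, 45],         # roof angle [deg]
                         [90, 180, 270])       # orientation [deg]
best = min(grid, key=lambda p: energy_score(*p))
```

In the real pipeline the exhaustive grid would be replaced by the optimizer, and `energy_score` by a call into the energy calculation tool, but the instance-evaluate-select loop is the same.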
NASA Astrophysics Data System (ADS)
Ünsal, Ismail; Hama-Saleh, R.; Sviridov, Alexander; Bambach, Markus; Weisheit, A.; Schleifenbaum, J. H.
2018-05-01
New technological challenges like electro-mobility pose an increasing demand for cost-efficient processes for the production of product variants. This demand opens the possibility to combine established die-based manufacturing methods and innovative, dieless technologies like additive manufacturing [1, 2]. In this context, additive manufacturing technologies allow for the weight-efficient local reinforcement of parts before and after forming, enabling manufacturers to produce product variants from series parts [3]. Previous work by the authors shows that the optimal shape of the reinforcing structure can be determined using sizing optimization. Sheet metal parts can then be reinforced using laser metal deposition. The material used is a pearlite-reduced, micro-alloyed steel (ZE 630). The aim of this paper is to determine the effect of the additive manufacturing process on the material behavior and the mechanical properties of the base material and the resulting composite material. The parameters of the AM process are optimized to reach similar material properties in the base material and the build-up volume. A metallographic analysis of the parts is presented, where the additive layers, the base material and also the bonding between the additive layers and the base material are analyzed. The paper shows the feasibility of the approach and details the resulting mechanical properties and performance.
Alam, Md Sabir; Garg, Arun; Pottoo, Faheem Hyder; Saifullah, Mohammad Khalid; Tareq, Abu Izneid; Manzoor, Ovais; Mohsin, Mohd; Javed, Md Noushad
2017-11-01
Due to the unique inherent catalytic characteristics of gold nanoparticles of different sizes, shapes and surface functionalizations, their potential applications are being explored in various fields such as drug delivery, biosensors, diagnosis and theranostics. However, conventional processes for the synthesis of these metallic nanoparticles utilize toxic reagents as reducing agents, additional capping agents for stability, and surface functionalization for drug delivery purposes. Hence, in this work the suitability of gum Ghatti for reducing, capping and surface functionalization during the synthesis of stable gold nanoparticles was explored. The role and impact of key process variables, i.e. the volume of chloroauric acid solution, the volume of gum solution and the temperature, each at three levels, as well as the mechanism of formation of the optimized gold nanoparticles, were investigated using a Box-Behnken design. The optimized gold nanoparticles were further characterized by UV spectrophotometry for their surface plasmon resonance (SPR) at around ∼530nm, and by dynamic light scattering (DLS) for hydrodynamic size (112.5nm), PDI (0.222) and zeta potential (-21.3mV), while transmission electron microscopy (TEM) further revealed that the nanoparticles were spherical in shape. Copyright © 2017 Elsevier B.V. All rights reserved.
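A Box-Behnken design for three factors, as used here, places runs at the midpoints of the cube edges plus replicated center points. A sketch of the coded design matrix (the actual factor levels for acid volume, gum volume and temperature are not reproduced here):

```python
import itertools

def box_behnken(n_factors, n_center=3):
    """Coded Box-Behnken design: for each pair of factors, a 2^2 factorial
    at levels +/-1 with all other factors held at the mid level 0,
    plus n_center center runs."""
    runs = []
    for i, j in itertools.combinations(range(n_factors), 2):
        for a, b in itertools.product((-1, 1), repeat=2):
            row = [0] * n_factors
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0] * n_factors for _ in range(n_center)]
    return runs

design = box_behnken(3)   # 3 pairs x 4 corners + 3 center points = 15 runs
```

The 15-run size for three factors is what makes this design attractive compared with a full three-level factorial (27 runs) when fitting a second-order response surface.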
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, X; Wang, J; Hu, W
Purpose: The Varian RapidPlan™ is a commercial knowledge-based optimization process which uses a set of clinically used treatment plans to train a model that can predict individualized dose-volume objectives. The purpose of this study is to evaluate the performance of RapidPlan in generating intensity modulated radiation therapy (IMRT) plans for cervical cancer. Methods: A total of 70 IMRT plans for cervical cancer with varying clinical and physiological indications were enrolled in this study. These patients were all previously treated in our institution. Two prescription levels were usually used in our institution: 45Gy/25 fractions and 50.4Gy/28 fractions. Fifty of these plans were selected to train the RapidPlan model for predicting dose-volume constraints. After model training, the model was validated with 10 plans from the training pool (internal validation) and an additional 20 new plans (external validation). All plans used for the validation were re-optimized with the original beam configuration, and the priorities generated by RapidPlan were manually adjusted to ensure that the re-optimized DVHs fell within the range of the model prediction. DVH quantitative analysis was performed to compare the RapidPlan-generated and the original manually optimized plans. Results: For all the validation cases, RapidPlan-based plans showed similar or superior results compared to the manually optimized ones. RapidPlan improved D98% and homogeneity in both validations. For organs at risk, RapidPlan decreased the mean dose of the bladder by 1.25Gy/1.13Gy (internal/external validation) on average, with p=0.12/p<0.01. The mean doses of the rectum and bowel were also decreased, by an average of 2.64Gy/0.83Gy and 0.66Gy/1.05Gy, with p<0.01/p<0.01 and p=0.04/p<0.01 for the internal/external validation, respectively. Conclusion: The RapidPlan model based cervical cancer plans show the ability to systematically improve IMRT plan quality.
It suggests that RapidPlan has great potential to make the treatment planning process more efficient.
CrossTalk, The Journal of Defense Software Engineering. Volume 27, Number 3. May/June 2014
2014-06-01
Contents excerpts: "…field of software engineering," by Delores M. Etter, Jennifer Webb, and John Howard; "The Problem of Prolific Process: What is the optimal amount and…"; "Programming Will Never Be Obsolete: The creativity of software developers will always be needed to solve problems of the future and to then translate those…"; "…utilized to address some of the complex problems associated with biometric database construction"; "1. A Next Generation Multispectral Iris Biometric".
NASA Technical Reports Server (NTRS)
Boyer, Charles M.; Jackson, Trevor P.; Beyon, Jeffrey Y.; Petway, Larry B.
2013-01-01
Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via Interdisciplinary Design Concept (IDEC) at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through iterative and interactive collaborative design processes. A preliminary design iteration reduced the power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky, over-performing components with smaller components custom-designed for the power system. Mechanical placement collaboration reduced potential electromagnetic interference (EMI). Through application of newly selected electrical components and thermal analysis data, a total electronic chassis redesign was accomplished. An innovative forced-convection tunnel heat sink was employed to meet and exceed project requirements for cooling, mass reduction, and volume reduction. Functionality was a key concern to make efficient use of airflow, and accessibility was also imperative to allow for servicing of chassis internals. The collaborative process provided for accelerated design maturation with substantiated function.
Bermejo, Javier; Yotti, Raquel; Pérez del Villar, Candelas; del Álamo, Juan C; Rodríguez-Pérez, Daniel; Martínez-Legazpi, Pablo; Benito, Yolanda; Antoranz, J Carlos; Desco, M Mar; González-Mansilla, Ana; Barrio, Alicia; Elízaga, Jaime; Fernández-Avilés, Francisco
2013-08-15
In cardiovascular research, relaxation and stiffness are calculated from pressure-volume (PV) curves by separately fitting the data during the isovolumic and end-diastolic phases (end-diastolic PV relationship), respectively. This method is limited because it assumes uncoupled active and passive properties during these phases, it penalizes statistical power, and it cannot account for elastic restoring forces. We aimed to improve this analysis by implementing a method based on global optimization of all PV diastolic data. In 1,000 Monte Carlo experiments, the optimization algorithm recovered the entered parameters of diastolic properties below and above the equilibrium volume (intraclass correlation coefficients = 0.99). Inotropic modulation experiments in 26 pigs modified passive pressure generated by restoring forces due to changes in the operative and/or equilibrium volumes. Volume overload and coronary microembolization caused incomplete relaxation at end diastole (active pressure > 0.5 mmHg), rendering the end-diastolic PV relationship method ill-posed. In 28 patients undergoing PV cardiac catheterization, the new algorithm reduced the confidence intervals of stiffness parameters by one-fifth. The Jacobian matrix allowed visualizing the contribution of each property to instantaneous diastolic pressure on a per-patient basis. The algorithm allowed estimating stiffness from single-beat PV data (derivative of left ventricular pressure with respect to volume at end-diastolic volume: intraclass correlation coefficient = 0.65; error = 0.07 ± 0.24 mmHg/ml). Thus, in clinical and preclinical research, global optimization algorithms provide the most complete, accurate, and reproducible assessment of global left ventricular diastolic chamber properties from PV data. Using global optimization, we were able to fully uncouple relaxation and passive PV curves for the first time in the intact heart.
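The idea of fitting all diastolic PV samples at once, rather than fitting the isovolumic and end-diastolic phases separately, can be illustrated with a toy sketch. The monoexponential passive model and the synthetic data below are illustrative assumptions, not the authors' full formulation (which also includes relaxation and restoring-force terms):

```python
import math

def passive_pressure(V, alpha, beta, V0):
    """Illustrative exponential passive PV relationship."""
    return alpha * (math.exp(beta * (V - V0)) - 1.0)

def fit_global(data, alphas, betas, V0=50.0):
    """Least-squares fit over ALL diastolic samples simultaneously,
    here via a coarse grid search for simplicity."""
    best, best_sse = None, float("inf")
    for a in alphas:
        for b in betas:
            sse = sum((P - passive_pressure(V, a, b, V0)) ** 2 for V, P in data)
            if sse < best_sse:
                best, best_sse = (a, b), sse
    return best, best_sse

# Synthetic diastolic samples generated with alpha=1.0, beta=0.05, V0=50.
data = [(V, passive_pressure(V, 1.0, 0.05, 50.0)) for V in range(60, 130, 5)]
params, sse = fit_global(data,
                         alphas=[0.5 + 0.1 * i for i in range(11)],
                         betas=[0.03 + 0.005 * i for i in range(9)])
```

A production fit would use a gradient-based or global nonlinear least-squares solver rather than a grid, but the key point, one residual pooling every diastolic sample, is the same.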
A Goal Programming Optimization Model for The Allocation of Liquid Steel Production
NASA Astrophysics Data System (ADS)
Hapsari, S. N.; Rosyidi, C. N.
2018-03-01
This research was conducted in one of the largest steel companies in Indonesia, which has several production units and produces a wide range of steel products. One of the important products of the company is billet steel. The company has four Electric Arc Furnaces (EAF), which produce liquid steel that must be processed further into billet steel. The billet steel plant needs to make its production process more efficient to increase productivity. The management has five goals to be achieved, and hence the optimal allocation of liquid steel production is needed to achieve those goals. In this paper, a goal programming optimization model is developed to determine the optimal allocation of liquid steel production in each EAF, to satisfy demand in 3 periods and the company goals, namely maximizing the volume of production, minimizing the cost of raw materials, minimizing maintenance costs, maximizing sales revenues, and maximizing production capacity. From the results of the optimization, only the production-capacity maximization goal cannot achieve its target. However, the model developed in this paper can allocate liquid steel optimally, so that the allocation of production does not exceed the maximum capacity of machine working hours and the maximum production capacity.
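A goal programming model penalizes weighted deviations from each goal rather than enforcing them as hard constraints. A toy sketch with hypothetical capacities, costs, and goal levels (not the company's data) illustrates the structure for two of the goals:

```python
import itertools

# Hypothetical data: four EAFs with per-furnace capacity (tons) and
# raw-material cost per ton, plus goal levels for volume and cost.
CAPACITY = [120, 100, 90, 110]
COST = [2.0, 2.2, 1.8, 2.1]
VOLUME_GOAL, COST_GOAL = 300, 640

def goal_penalty(alloc):
    """Weighted sum of goal deviations: shortfall below the volume goal
    (deviation d-) and overshoot above the cost goal (deviation d+)."""
    volume = sum(alloc)
    cost = sum(a * c for a, c in zip(alloc, COST))
    under_volume = max(0, VOLUME_GOAL - volume)
    over_cost = max(0, cost - COST_GOAL)
    return 5 * under_volume + 1 * over_cost

# Coarse brute-force search over allocations in 10-ton steps, a stand-in
# for solving the goal program with an LP solver.
candidates = itertools.product(*[range(0, c + 1, 10) for c in CAPACITY])
best = min(candidates, key=goal_penalty)
```

The full model would add deviation variables for all five goals and all three periods and hand the resulting linear program to a solver; the weighting of deviations shown here is the essential goal programming mechanism.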
NASA Astrophysics Data System (ADS)
Javed, Hassan; Armstrong, Peter
2015-08-01
The efficiency bar for a Minimum Equipment Performance Standard (MEPS) generally aims to minimize the energy consumption and life cycle cost of a given chiller type and size category serving a typical load profile. Compressor type has a significant impact on chiller performance. The performance of screw and reciprocating compressors is expressed in terms of pressure ratio and speed for a given refrigerant and suction density. The isentropic efficiency of a screw compressor is strongly affected by under- and over-compression (UOC) processes. The simple theoretical physical UOC model involves a compressor-specific (but sometimes unknown) volume index parameter and the real gas properties of the refrigerant used. Isentropic efficiency is estimated by the UOC model and a bi-cubic, used to account for flow, friction and electrical losses. The unknown volume index, a smoothing parameter (to flatten the UOC model peak) and the bi-cubic coefficients are identified by curve fitting to minimize an appropriate residual norm. Chiller performance maps are produced for each compressor type by selecting optimized sub-cooling and condenser fan speed options in a generic component-based chiller model. SEER is then evaluated from the hourly loads (for a typical building in the climate of interest) and the specific power at the same hourly conditions. An empirical UAE cooling load model, scalable to any equipment capacity, is used to establish proposed UAE MEPS. Annual electricity use and cost, determined from SEER and annual cooling load, and chiller component cost data are used to find optimal chiller designs and perform a life-cycle cost comparison between screw and reciprocating compressor-based chillers. This process may be applied to any climate/load model in order to establish optimized MEPS for any country and/or region.
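The UOC loss model mentioned above can be sketched for an ideal gas: the compressor compresses isentropically to its built-in pressure ratio vi**k and then equalizes to the discharge pressure at constant volume, so isentropic efficiency peaks when the system pressure ratio matches the built-in value. The volume index and isentropic exponent below are illustrative, and the real-gas properties and bi-cubic loss terms of the paper are omitted:

```python
def specific_work(p1, p2, vi, k=1.1):
    """Actual compression work per unit suction volume: internal isentropic
    compression to p1*vi**k, then a constant-volume correction to p2
    (the under/over-compression loss term)."""
    p_internal = p1 * vi ** k
    w_internal = k / (k - 1.0) * p1 * (vi ** (k - 1.0) - 1.0)
    return w_internal + (p2 - p_internal) / vi

def isentropic_work(p1, p2, k=1.1):
    """Ideal isentropic work per unit suction volume to discharge pressure p2."""
    return k / (k - 1.0) * p1 * ((p2 / p1) ** ((k - 1.0) / k) - 1.0)

def isentropic_efficiency(p1, p2, vi, k=1.1):
    return isentropic_work(p1, p2, k) / specific_work(p1, p2, vi, k)

# Efficiency peaks when the pressure ratio equals the built-in value vi**k.
vi = 2.2
p_matched = vi ** 1.1
eta_matched = isentropic_efficiency(1.0, p_matched, vi)
eta_under = isentropic_efficiency(1.0, 1.6 * p_matched, vi)  # under-compression
eta_over = isentropic_efficiency(1.0, 0.6 * p_matched, vi)   # over-compression
```

The sharp peak of this model at the matched pressure ratio is what the paper's smoothing parameter flattens before fitting to measured data.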
Ajala, E O; Aberuagba, F; Olaniyan, A M; Onifade, K R
2016-01-01
Shea butter (SB) was extracted from its kernel using n-hexane as solvent in an optimization study. The aims were to determine the optimal operating variables that would give the optimum yield of SB and to study the effect of the solvent on the physico-chemical properties and chemical composition of the SB extracted using n-hexane. A Box-Behnken response surface methodology (RSM) was used for the optimization study, while statistical analysis using ANOVA was used to test the significance of the variables for the process. The variables considered were: sample weight (g), solvent volume (ml) and extraction time (min). The physico-chemical properties of the extracted SB were determined using standard methods, and Fourier Transform Infrared Spectroscopy (FTIR) was used for the chemical composition. The results of the RSM analysis showed that the three variables investigated have a significant effect (p < 0.05) on the % yield of SB, with R² = 0.8989, indicating a good fit of the second-order model. Based on this model, the optimal operating variables for the extraction process were established as: sample weight of 30.04 g, solvent volume of 346.04 ml and extraction time of 40 min, which gave a 66.90% yield of SB. Furthermore, the physico-chemical properties obtained showed that shea butter extracted using the traditional method (SBT) is a more suitable raw material for food, biodiesel production, cosmetics, and medicinal and pharmaceutical purposes than shea butter extracted using the solvent extraction method (SBS). The FTIR results obtained for the two samples were similar to those obtainable from other vegetable oils.
Glocker, Ben; Paragios, Nikos; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir
2007-01-01
In this paper we propose a novel non-rigid volume registration based on discrete labeling and linear programming. The proposed framework reformulates registration as a minimal path extraction in a weighted graph. The space of solutions is represented using a set of labels which are assigned to predefined displacements. The graph topology corresponds to a regular grid superimposed onto the volume. Links between neighboring control points introduce smoothness, while links between the graph nodes and the labels (end-nodes) measure the cost induced to the objective function through the selection of a particular deformation for a given control point once projected to the entire volume domain. Higher-order polynomials are used to express the volume deformation from the deformations of the control points. Efficient linear programming that can guarantee the optimal solution up to a (user-defined) bound is considered to recover the optimal registration parameters. Therefore, the method is gradient free, can encode various similarity metrics (through simple changes to the graph construction), can guarantee a solution within a known bound of the optimum, and is computationally tractable. Experimental validation using simulated data with known deformation, as well as manually segmented data, demonstrates the strong potential of our approach.
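The discrete-label formulation can be illustrated on a 1-D toy analogue: each control point selects a displacement label, and the objective sums a unary data cost with a pairwise smoothness cost between neighbours. On a chain this is solvable exactly by dynamic programming (the paper's LP machinery addresses the general grid-graph case); everything below is an invented toy example:

```python
LABELS = [-2, -1, 0, 1, 2]          # candidate displacement labels

def register_chain(data_cost, smooth_weight=1.0):
    """Exact MAP labelling of a chain: data_cost[i][l] is the cost of
    assigning label index l to control point i."""
    n, L = len(data_cost), len(LABELS)
    cost = [data_cost[0][:]]
    back = []
    for i in range(1, n):
        row, brow = [], []
        for l in range(L):
            # Best predecessor label under cost-so-far plus smoothness.
            best_prev = min(range(L), key=lambda m: cost[-1][m]
                            + smooth_weight * abs(LABELS[l] - LABELS[m]))
            row.append(data_cost[i][l] + cost[-1][best_prev]
                       + smooth_weight * abs(LABELS[l] - LABELS[best_prev]))
            brow.append(best_prev)
        cost.append(row)
        back.append(brow)
    # Backtrack the optimal labelling.
    l = min(range(L), key=lambda m: cost[-1][m])
    labels = [l]
    for brow in reversed(back):
        l = brow[l]
        labels.append(l)
    labels.reverse()
    return [LABELS[l] for l in labels]

# Data terms pull every point toward displacement +1; smoothness agrees.
unary = [[abs(LABELS[l] - 1) for l in range(len(LABELS))] for _ in range(4)]
disp = register_chain(unary)
```

In the actual method the unary cost comes from the similarity metric evaluated under each candidate displacement, which is exactly why the approach is gradient free.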
Decreasing effect and mechanism of moisture content of sludge biomass by granulation process.
Zhao, Xia; Xu, Hao; Shen, Jimin; Yu, Bo; Wang, Xiaochun
2016-01-01
Disposal of a high volume of sludge significantly raises water treatment costs. A method for cultivating aerobic granules in a sequencing batch airlift bioreactor to produce sludge with significantly lower moisture content is described. Results indicate that granulation depended on optimization of the settling time and control of the shear stresses acting on the granules. The diameter of the granules was within the range of 1.0-4.0 mm, and the sludge volume index was stabilized at 40-50 mL g(-1). The specific gravity was increased by 0.0392, and the specific oxygen uptake rate reached 60.126 mg h(-1) g(-1). Moreover, the moisture content in the reactor ranged from 96.73% to 97.67%, and the sludge volume was reduced to approximately 60%, largely due to the presence of extracellular polymeric substances in the granules, as well as changes in their hydrophobic protein content. The removal rates of chemical oxygen demand and [Formula: see text] reach up to 92.6% and 98%, respectively. The removal rate of total phosphorus is over 85%. Therefore, the aerobic granular sludge process exhibits good biological activity.
NASA Astrophysics Data System (ADS)
Rakhmangulov, Aleksandr; Muravev, Dmitri; Mishkurov, Pavel
2016-11-01
Operative data reception on the location and movement of railcars is significant given the constantly growing requirements for timely and safe transportation. A technical solution for improving the efficiency of data collection on rail rolling stock is the implementation of an identification system. Nowadays, there are several such systems, differing in working principle. In the authors' opinion, the most promising for rail transportation is RFID technology, which proposes equipping the railway tracks with stationary data-reading points (RFID readers) that read the onboard sensors on the railcars. However, regardless of the specific type and manufacturer of these systems, their implementation involves significant financing costs for large industrial rail transport systems owning an extensive network of special railway tracks with a large number of stations and loading areas. To reduce the investment costs of creating an identification system for rolling stock on the special railway tracks of industrial enterprises, the authors developed a method based on the idea of priority installation of RFID readers on railway hauls where rail traffic volumes are uneven in structure and intensity, and whose parameters are difficult or impossible to predict on the basis of existing data in an information system. To select the optimal locations of RFID readers, a mathematical model of the staged installation of such readers was developed, depending on the non-uniformity value of the rail traffic volumes passing through specific railway hauls. As a result of this approach, installation of numerous RFID readers at all station tracks and loading areas of industrial railway stations might not be necessary, which reduces the total cost of rolling stock identification and supports the implementation of the method for optimal management of the transportation process.
[Optimization of isolation of the concentrate of stem cells from the umbilical blood].
Tiumina, O V; Savchenko, V G; Gusarova, G I; Pavlov, V V; Zharkov, M N; Volchkov, S E; Rossiev, V A; Gridasov, G N
2005-01-01
To study correlations between body mass and height of the newborn, Apgar scale estimates, gestation time, volume of the obtained umbilical blood (UB), and number of nucleated cells (NC); and to compare manual and automatic modes of UB processing. 330 procurements of UB were made, and 230 (69.7%) samples were frozen. Two techniques of UB processing were compared: 73 cases of double centrifugation with hydroxyethylstarch (HES) and 47 cases using the separator Sepax (Biosafe, Switzerland). Blood cell counts before and after UB processing and the number of CD34+ cells were estimated. A correlation analysis was made of the dependence of the volume of 102 samples of UB on the weight (r = 0.268, p < 0.01) and height of the fetus (r = 0.203, p < 0.05), estimation by Apgar scale (r = -0.092, p < 0.1) and gestation term (r = -0.003, p > 0.1); and of the dependence of the number of NC on the volume of UB (r = 0.102, p < 0.1), mass (r = 0.073, p > 0.1) and height of the fetus (r = 0.121, p > 0.1), gestation time (r = 0.159, p > 0.1), and Apgar scale assessment (r = -0.174, p > 0.1). With manual UB processing, the NC yield was 71.9 +/- 6.7%; with automatic processing, 81 +/- 8.0% (p < 0.05). The percentage of erythrocyte removal was 73 +/- 5.7% and 80.5 +/- 6.1% (p < 0.05), respectively. A weak correlation was found between UB volume and the mass and height of the fetus. The number of NC in UB depends on none of the parameters. Automatic processing of UB provides a greater yield of NC and better elimination of erythrocytes with minimal risk of contamination.
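The statistic reported throughout this abstract is the Pearson correlation coefficient r. A minimal implementation, with made-up illustrative data (not the study's measurements):

```python
# Pearson correlation coefficient r, computed from first principles.
# The data below are hypothetical, for illustration only.
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

weights = [2.9, 3.1, 3.4, 3.6, 4.0]   # hypothetical birth weights, kg
volumes = [60, 72, 75, 81, 90]        # hypothetical cord-blood volumes, mL
print(round(pearson_r(weights, volumes), 3))
```

Values near 0 (as for most parameters in the study) indicate no linear association; the study's largest observed r of 0.268 is still a weak correlation.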
Plasma clots gelled by different amounts of calcium for stem cell delivery.
Gessmann, Jan; Seybold, Dominik; Peter, Elvira; Schildhauer, Thomas Armin; Köller, Manfred
2013-01-01
Freshly prepared autologous plasma clots may serve as a carrier matrix for expanded multipotent mesenchymal stromal cells (MSCs) or bone marrow cells. By varying the calcium concentration, plasma clots with different properties can be produced. The purpose of this in vitro study was to determine the optimal calcium concentrations for the clotting process, intra-clot cell viability, and clot lysis. Different plasma clots were prepared by adding an equal volume of RPMI 1640 (with or without MSCs) to citrate plasma (either containing platelets or platelet-free). Clotting was initiated by the addition of CaCl2 (10 g/100 mL H2O, 10% solution). The final concentration of CaCl2 ranged from 1 to 10% by volume of plasma. Viability and distribution of the MSCs were analysed by calcein-AM/propidium iodide staining. MSC-embedded plasma clots were dissolved with trypsin (0.25%), and recovered cells were further incubated for 1 week under cell culture conditions. The viability of MSCs embedded in clots formed by the addition of 1-8% by volume CaCl2 was not affected by incubation of up to 1 week. In contrast, clots produced by higher volumes of CaCl2 solution (9-10% by volume of plasma) showed decreased numbers of viable cells. Intra-clot cell proliferation was highest in clots produced by the addition of 5% CaCl2 by plasma volume. Osteocalcin release was not influenced in platelet-free plasma but decreased in platelet-containing plasma. Morphological analysis of stained recovered MSCs revealed that lysis of the plasma clot did not affect cell morphology or subsequent spontaneous proliferation. Clot formation and clot stability can be controlled by changing the concentration of CaCl2 added to plasma. The addition of 5% CaCl2 produced a plasma clot with optimal results for stem cell delivery.
Configuration optimization of space structures
NASA Technical Reports Server (NTRS)
Felippa, Carlos; Crivelli, Luis A.; Vandenbelt, David
1991-01-01
The objective is to develop a computer aid for the conceptual/initial design of aerospace structures, allowing configurations and shape to be a priori design variables. The topics are presented in viewgraph form and include the following: Kikuchi's homogenization method; a classical shape design problem; homogenization method steps; a 3D mechanical component design example; forming a homogenized finite element; a 2D optimization problem; treatment of the volume inequality constraint; algorithms for the volume inequality constraint; objective function derivatives, taking advantage of design locality; stiffness variations; variations of potential; and schematics of the optimization problem.
Moon, Chung-Man; Shin, Il-Seon; Jeong, Gwang-Woo
2017-02-01
Background: Non-invasive imaging markers can be used to diagnose Alzheimer's disease (AD) in its early stages, but optimized quantification analyses for measuring brain integrity have been less studied. Purpose: To evaluate white matter volume change and its correlation with neuropsychological scales in patients with AD using diffeomorphic anatomical registration through exponentiated Lie algebra (DARTEL)-based voxel-based morphometry (VBM). Material and Methods: The 21 participants comprised 11 patients with AD and 10 age-matched healthy controls. High-resolution magnetic resonance imaging (MRI) data were processed by VBM analysis based on the DARTEL algorithm. Results: The patients showed significant white matter volume reductions in the posterior limb of the internal capsule, the cerebral peduncle of the midbrain, and the parahippocampal gyrus compared to healthy controls. In the correlation analysis, parahippocampal volume was positively correlated with the Korean mini-mental state examination score in AD. Conclusion: This study provides evidence for localized white matter volume deficits in conjunction with cognitive dysfunction in AD. These findings help clarify the neuroanatomical mechanisms of AD and may improve diagnostic accuracy for AD.
Wang, Yongjiang; Pang, Li; Liu, Xinyu; Wang, Yuansheng; Zhou, Kexun; Luo, Fei
2016-04-01
A comprehensive model of thermal balance and degradation kinetics was developed to determine the optimal reactor volume and insulation material. Biological heat production and five channels of heat loss were considered in the thermal balance model for a representative reactor. Degradation kinetics was developed to make the model applicable to different types of substrates. Simulation of the model showed that the internal energy accumulation of compost was the most significant heat loss channel, followed by heat loss through the reactor wall and the latent heat of water evaporation. A lower proportion of heat loss occurred through the reactor wall when the reactor volume was larger. Insulating materials with low densities and low conductive coefficients were more desirable for building small reactor systems. The model developed could be used to determine the optimal reactor volume and insulation material needed before fabricating a lab-scale composting system. Copyright © 2016 Elsevier Ltd. All rights reserved.
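The scale effect noted above (larger reactors lose a smaller fraction of heat through the wall, because surface area grows more slowly than volume) can be shown with a minimal forward-Euler heat balance. All terms, coefficients, and units below are assumptions for illustration, not the paper's model:

```python
# Toy thermal balance for a composting reactor: biological heat production
# (proportional to volume) vs. conductive wall loss (proportional to surface
# area), integrated with a forward-Euler step. Units are arbitrary but
# internally consistent; geometry is assumed cubic.
def simulate(hours, volume, u_wall=0.5, q_bio=100.0):
    area = 6.0 * volume ** (2.0 / 3.0)        # wall area of a cube of this volume
    temp, ambient, cap = 25.0, 20.0, 10.0 * volume  # heat capacity scales with volume
    for _ in range(hours):
        gain = q_bio * volume                 # biological heat production
        loss = u_wall * area * (temp - ambient)  # conduction through the wall
        temp += (gain - loss) / cap           # forward-Euler temperature update
    return temp

small, large = simulate(48, 0.1), simulate(48, 1.0)
print(round(small, 1), round(large, 1))
```

The larger reactor reaches a higher quasi-steady temperature because its surface-to-volume ratio, and hence its fractional wall loss, is smaller, mirroring the simulation result in the abstract.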
4D Optimization of Scanned Ion Beam Tracking Therapy for Moving Tumors
Eley, John Gordon; Newhauser, Wayne David; Lüchtenborg, Robert; Graeff, Christian; Bert, Christoph
2014-01-01
Motion mitigation strategies are needed to fully realize the theoretical advantages of scanned ion beam therapy for patients with moving tumors. The purpose of this study was to determine whether a new four-dimensional (4D) optimization approach for scanned-ion-beam tracking could reduce dose to avoidance volumes near a moving target while maintaining target dose coverage, compared to an existing 3D-optimized beam tracking approach. We tested these approaches computationally using a simple 4D geometrical phantom and a complex anatomic phantom, that is, a 4D computed tomogram of the thorax of a lung cancer patient. We also validated our findings using measurements of carbon-ion beams with a motorized film phantom. Relative to 3D-optimized beam tracking, 4D-optimized beam tracking reduced the maximum predicted dose to avoidance volumes by 53% in the simple phantom and by 13% in the thorax phantom. 4D-optimized beam tracking provided similar target dose homogeneity in the simple phantom (standard deviation of target dose was 0.4% versus 0.3%) and dramatically superior homogeneity in the thorax phantom (D5-D95 was 1.9% versus 38.7%). Measurements demonstrated that delivery of 4D-optimized beam tracking was technically feasible and confirmed a 42% decrease in maximum film exposure in the avoidance region compared with 3D-optimized beam tracking. In conclusion, we found that 4D-optimized beam tracking can reduce the maximum dose to avoidance volumes near a moving target while maintaining target dose coverage, compared with 3D-optimized beam tracking. PMID:24889215
Optimization of Sour Cherry Juice Spray Drying as Affected by Carrier Material and Temperature
Zorić, Zoran; Pedisić, Sandra; Dragović-Uzelac, Verica
2016-01-01
Response surface methodology was applied for optimization of the sour cherry Marasca juice spray drying process with 20, 30 and 40% of carriers maltodextrin with dextrose equivalent (DE) value of 4–7 and 13–17 and gum arabic, at three drying temperatures: 150, 175 and 200 °C. Increase in carrier mass per volume ratio resulted in lower moisture content and powder hygroscopicity, higher bulk density, solubility and product yield. Higher temperatures decreased the moisture content and bulk density of powders. Temperature of 200 °C and 27% of maltodextrin with 4–7 DE were found to be the most suitable for production of sour cherry Marasca powder. PMID:28115901
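Response surface methodology can return an optimum between the tested levels, which is why the study reports 27% carrier from levels of 20, 30 and 40%. A one-variable toy version of the idea: fit a parabola through three (level, response) points and take its vertex. The yield numbers below are hypothetical:

```python
# Fit y = a*x^2 + b*x + c exactly through three points and return the
# stationary point of the fitted parabola (the response-surface optimum
# in one variable). Data are illustrative, not the study's measurements.
def quadratic_vertex(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3**2 * (y1 - y2) + x2**2 * (y3 - y1) + x1**2 * (y2 - y3)) / denom
    return -b / (2 * a)  # vertex: where the fitted response peaks

# hypothetical powder yields at three carrier ratios (% m/V)
best = quadratic_vertex((20, 78.0), (30, 85.0), (40, 83.0))
print(round(best, 1))
```

The full study fits a multi-variable quadratic model to a designed experiment, but the principle of interpolating an optimum off the tested grid is the same.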
Tan, Joo Shun; Abbasiliasi, Sahar; Kadkhodaei, Saeid; Tam, Yew Joon; Tang, Teck-Kim; Lee, Yee-Ying; Ariff, Arbakariya B
2018-01-04
Demand for high-throughput bioprocessing has increased dramatically, especially in the biopharmaceutical industry, because these technologies are of vital importance to process optimization and media development. This can be efficiently boosted by using a microtiter plate (MTP) cultivation setup embedded in an automated liquid-handling system. The objective of this study was to establish an automated microscale method for upstream and downstream bioprocessing of α-IFN2b production by recombinant Escherichia coli. The extraction performance of α-IFN2b by osmotic shock was compared between two systems: an automated microscale platform and manual extraction in MTP. The amount of α-IFN2b extracted using the automated microscale platform (49.2 μg/L) was comparable to the manual osmotic shock method (48.8 μg/L), but the standard deviation was two times lower than with the manual method. Fermentation parameters in MTP involving inoculum size, agitation speed, working volume and induction profiling revealed that the highest production of α-IFN2b (85.5 μg/L) was attained at an inoculum size of 8%, working volume of 40% and agitation speed of 1000 rpm, with induction at 4 h after inoculation. Although the MTP-scale findings did not scale perfectly to shake-flask culture, microscale technique development serves as a convenient and low-cost solution for process optimization of recombinant protein production.
Xia, Yu; Wang, Yinhang; Li, Wei; Ma, Chunhui; Liu, Shouxin
2017-12-01
Cavitation, which was and is still often looked upon as an unavoidable nuisance in flow systems, can be harnessed to intensify the extraction of active chemical compounds from natural products. In this study, a homogenization-assisted cavitation hybrid rotation extraction method was applied to extract dihydroquercetin (DHQ) from larch (Larix gmelinii) wood root. The extraction parameters were optimized in single-factor experiments with the DHQ extraction yields as the response values. The optimum conditions were as follows: number of extractions, three; ethanol volume fraction for the extraction, 60%; liquid-solid ratio for homogenization, 10 mL/g; homogenization time, 8 min; liquid-solid ratio for cavitation extraction, 9 mL/g; and cavitation extraction time, 35 min. Under these conditions, the DHQ content in the extract was 4.50 ± 0.02 mg/g, and the extraction efficiency was higher than those of traditional techniques. Cavitation improves the extraction rate by increasing mass transfer rates and by rupturing cell walls through the formation of microcavities, leading to higher product yields with reduced processing time and solvent consumption. After the extraction process, macroporous resin column chromatography was used to concentrate and purify the DHQ. Three resins were selected from fifteen macroporous resins for further investigation of their performance. Among these resins, AB-8 resin exhibited relatively better adsorption capacities and desorption ratios for DHQ. The ethanol volume fraction of the solutions for sample loading and desorption, and the flow rates for loading and desorption, were optimized for the macroporous resin column chromatography. Copyright © 2017 Elsevier B.V. All rights reserved.
Leveraging human oversight and intervention in large-scale parallel processing of open-source data
NASA Astrophysics Data System (ADS)
Casini, Enrico; Suri, Niranjan; Bradshaw, Jeffrey M.
2015-05-01
The popularity of cloud computing, along with the increased availability of cheap storage, has led to the necessity of processing and transforming large volumes of open-source data in parallel. One way to handle such extensive volumes of information properly is to take advantage of distributed computing frameworks like Map-Reduce. Unfortunately, an entirely automated approach that excludes human intervention is often unpredictable and error prone. Highly accurate data processing and decision-making can be achieved by supporting an automatic process through human collaboration, in a variety of environments such as warfare, cyber security and threat monitoring. Although this mutual participation seems easily exploitable, human-machine collaboration in the field of data analysis presents several challenges. First, due to the asynchronous nature of human intervention, it is necessary to verify that once a correction is made, all the necessary downstream reprocessing is carried out. Second, it is often necessary to minimize the amount of reprocessing in order to optimize the usage of limited resources. To meet these strict requirements, this paper introduces improvements to an innovative approach for human-machine collaboration in the processing of large amounts of open-source data in parallel.
NASA Astrophysics Data System (ADS)
Morén, B.; Larsson, T.; Carlsson Tedgren, Å.
2018-03-01
High dose-rate brachytherapy is a method for cancer treatment where the radiation source is placed within the body, inside or close to a tumour. For dose planning, mathematical optimization techniques are being used in practice and the most common approach is to use a linear model which penalizes deviations from specified dose limits for the tumour and for nearby organs. This linear penalty model is easy to solve, but its weakness lies in the poor correlation of its objective value and the dose-volume objectives that are used clinically to evaluate dose distributions. Furthermore, the model contains parameters that have no clear clinical interpretation. Another approach for dose planning is to solve mixed-integer optimization models with explicit dose-volume constraints which include parameters that directly correspond to dose-volume objectives, and which are therefore tangible. The two mentioned models take the overall goals for dose planning into account in fundamentally different ways. We show that there is, however, a mathematical relationship between them by deriving a linear penalty model from a dose-volume model. This relationship has not been established before and improves the understanding of the linear penalty model. In particular, the parameters of the linear penalty model can be interpreted as dual variables in the dose-volume model.
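A minimal form of the linear penalty objective described above can be written directly: for each dose point, penalize dose below the target's lower limit or above an organ's upper limit, weighted by per-structure parameters. All numbers below are illustrative, not clinical values:

```python
# Linear penalty objective for dose planning (illustrative sketch):
# sum of weighted one-sided deviations from per-structure dose limits.
def penalty(doses, lower=None, upper=None, w_low=1.0, w_high=1.0):
    total = 0.0
    for d in doses:
        if lower is not None and d < lower:
            total += w_low * (lower - d)      # underdose in the tumour
        if upper is not None and d > upper:
            total += w_high * (d - upper)     # overdose in an organ at risk
    return total

tumour = [9.5, 10.2, 11.0, 8.8]   # Gy at hypothetical tumour dose points
rectum = [4.0, 6.5, 5.2]          # Gy at hypothetical organ-at-risk points
obj = penalty(tumour, lower=10.0) + penalty(rectum, upper=5.0, w_high=2.0)
print(obj)
```

The weights (w_low, w_high) are exactly the parameters the abstract says lack a clear clinical interpretation; the paper's contribution is to show they can be read as dual variables of a dose-volume model.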
Larabell, Carolyn A.; Le Gros, Mark A.; McQueen, David M.; Peskin, Charles S.
2014-01-01
In this work, we examine how volume exclusion caused by regions of high chromatin density might influence the time required for proteins to find specific DNA binding sites. The spatial variation of chromatin density within mouse olfactory sensory neurons is determined from soft X-ray tomography reconstructions of five nuclei. We show that there is a division of the nuclear space into regions of low-density euchromatin and high-density heterochromatin. Volume exclusion experienced by a diffusing protein caused by this varying density of chromatin is modeled by a repulsive potential. The value of the potential at a given point in space is chosen to be proportional to the density of chromatin at that location. The constant of proportionality, called the volume exclusivity, provides a model parameter that determines the strength of volume exclusion. Numerical simulations demonstrate that the mean time for a protein to locate a binding site localized in euchromatin is minimized for a finite, nonzero volume exclusivity. For binding sites in heterochromatin, the mean time is minimized when the volume exclusivity is zero (the protein experiences no volume exclusion). An analytical theory is developed to explain these results. The theory suggests that for binding sites in euchromatin there is an optimal level of volume exclusivity that balances a reduction in the volume searched in finding the binding site, with the height of effective potential barriers the protein must cross during the search process. PMID:23955281
Ground Vehicle System Integration (GVSI) and Design Optimization Model.
1996-07-30
number of stowed kills ... same basic load lasts longer range ... gun/ammo parameters impact system weight and under-armor volume requirements ... round volume ... If internal volume is reduced, the model assumes that the crew's ability to operate while under armor will be impaired. If the size of a vehicle crew is ... changing swept volume will alter under-armor volume requirements for the total system; if system volume is fixed, changing swept volume will ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buddadee, Bancha; Wirojanagud, Wanpen; Watts, Daniel J.
In this paper, a multi-objective optimization model is proposed as a tool to assist in deciding on the proper utilization scheme of excess bagasse produced in the sugarcane industry. Two major scenarios for excess bagasse utilization are considered in the optimization. The first scenario is the typical situation where excess bagasse is used for onsite electricity production. In the second scenario, excess bagasse is processed for offsite ethanol production; the ethanol is then blended with 91-octane gasoline in a proportion of 10% and 90% by volume, respectively, and the mixture is used as an alternative fuel for gasoline vehicles in Thailand. The model proposed in this paper, called 'Environmental System Optimization', comprises the life cycle impact assessment of global warming potential (GWP) and the associated cost, followed by the multi-objective optimization, which finds the optimal proportion of the excess bagasse processed in each scenario. Basic mathematical expressions for the GWP and cost of the entire process of excess bagasse utilization are taken into account in the model formulation and optimization. The outcome of this study is a methodology for decision-making concerning the excess bagasse utilization available in Thailand in view of GWP and economic effects. A demonstration example is presented to illustrate the advantage of the methodology, which may be used by policy makers. The methodology is successfully performed to satisfy both environmental and economic objectives over the whole life cycle of the system. The demonstration example shows that the first scenario results in positive GWP while the second scenario results in negative GWP. The combination of these two scenarios results in positive or negative GWP depending on the weighting given to each objective. The results on the economics of all scenarios show satisfactory outcomes.
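The weighting behaviour described above can be sketched with a weighted-sum scalarization. All coefficients below are made up for illustration: let x be the fraction of excess bagasse burned onsite for electricity and 1-x the fraction processed into ethanol, with one scenario having positive GWP and the other negative GWP:

```python
# Weighted-sum multi-objective sketch (hypothetical coefficients): trade off
# GWP against cost for a split of bagasse between two utilization scenarios.
def objective(x, w_env, gwp=(40.0, -25.0), cost=(10.0, 30.0)):
    gwp_total = x * gwp[0] + (1 - x) * gwp[1]     # kt CO2-eq: scenario 1, 2
    cost_total = x * cost[0] + (1 - x) * cost[1]  # M$: scenario 1, 2
    return w_env * gwp_total + (1 - w_env) * cost_total

def best_fraction(w_env, steps=100):
    """Grid search for the fraction x sent to onsite electricity."""
    grid = [i / steps for i in range(steps + 1)]
    return min(grid, key=lambda x: objective(x, w_env))

# A strong environmental preference favours the negative-GWP ethanol route;
# a strong economic preference favours the cheaper electricity route.
print(best_fraction(0.9), best_fraction(0.1))
```

This reproduces the abstract's qualitative conclusion: the optimal mix, and hence the sign of the combined GWP, flips with the preference weighting given to each objective.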
Optimizing hydraulic fracture design in the diatomite formation, Lost Hills Field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, D.G.; Klins, M.A.; Manrique, J.F.
1996-12-31
Since 1988, over 1.3 billion pounds of proppant have been placed in the Lost Hills Field of Kern County, California, in over 2700 hydraulic fracture treatments involving investments of about $150 million. In 1995, systematic reevaluation of the standard, field trial-based fracture design began. Reservoir, geomechanical, and hydraulic fracture characterization; production and fracture modeling; sensitivity analysis; and field test results were integrated to optimize designs with regard to proppant volume, proppant ramps, and perforating strategy. The results support a reduction in proppant volume from 2500 to 1700 lb/ft, which will save about $50,000 per well, totalling over $3 million per year. Vertical coverage was found to be a key component of fracture quality, which could be optimized by eliminating perforations from lower stress intervals, reducing the total number of perforations, and reducing peak slurry loading from 16 to 12 ppa. A relationship between variations in lithology, pore pressure, and stress was observed. Point-source perforating strategies were investigated and variable multiple fracture behavior was observed. The discussed approach has application in areas where stresses are variable; pay zones are thick; hydraulic fracture design is based primarily on empirical, trial-and-error field test results; and effective, robust predictive models involving real-data feedback have not been incorporated into the design improvement process.
NASA Technical Reports Server (NTRS)
1972-01-01
The Performance Analysis and Design Synthesis (PADS) computer program has a two-fold purpose. It can size launch vehicles in conjunction with calculus-of-variations optimal trajectories and can also be used as a general-purpose branched trajectory optimization program. In the former use, it has the Space Shuttle Synthesis Program as well as a simplified stage weight module for optimally sizing manned recoverable launch vehicles. For trajectory optimization alone or with sizing, PADS has two trajectory modules. The first trajectory module uses the method of steepest descent; the second employs the method of quasilinearization, which requires a starting solution from the first trajectory module. For Volume 1 see N73-13199.
Walter, James S; Posluszny, Joseph; Dieter, Raymond; Dieter, Robert S; Sayers, Scott; Iamsakul, Kiratipath; Staunton, Christine; Thomas, Donald; Rabbat, Mark; Singh, Sanjay
2018-05-01
To optimize maximal respiratory responses with surface stimulation over abdominal and upper thorax muscles using a 12-Channel Neuroprosthetic Platform. Following instrumentation, six anesthetized adult canines were hyperventilated sufficiently to produce respiratory apnea. Six abdominal tests optimized electrode arrangements and stimulation parameters using bipolar sets of 4.5 cm square electrodes. Tests in the upper thorax optimized electrode locations, and forelimb movement was limited to slight-to-moderate. During combined muscle stimulation tests, upper thoracic stimulation was followed immediately by abdominal stimulation. Finally, a model of glottal closure for cough was conducted with the goal of increased peak expiratory flow. Optimized stimulation of abdominal muscles included three sets of bilateral surface electrodes located 4.5 cm dorsal to the lateral line and from the 8th intercostal space to caudal to the 13th rib, 80 or 100 mA current, and 50 Hz stimulation frequency. The maximal expired volume was 343 ± 23 mL (n=3). Optimized upper thorax stimulation included a single bilateral set of electrodes located over the 2nd interspace, 60 to 80 mA, and 50 Hz. The maximal inspired volume was 304 ± 54 mL (n=4). Sequential stimulation of the two muscle groups increased the volume to 600 ± 152 mL (n=2), and the glottal closure maneuver increased the flow. These studies in an adult canine model identified optimal surface stimulation methods for upper thorax and abdominal muscles to induce sufficient volumes for ventilation and cough. Further study with this neuroprosthetic platform is warranted.
Pokharel, Shyam; Rana, Suresh; Blikenstaff, Joseph; Sadeghi, Amir; Prestidge, Bradley
2013-07-08
The purpose of this study is to investigate the effectiveness of the HIPO planning and optimization algorithm for real-time prostate HDR brachytherapy. This study consists of 20 patients who underwent ultrasound-based real-time HDR brachytherapy of the prostate using the treatment planning system Oncentra Prostate (SWIFT version 3.0). The treatment plans for all patients were optimized using inverse dose-volume histogram-based optimization followed by graphical optimization (GRO) in real time. GRO is the manual manipulation of isodose lines slice by slice, so the quality of the plan depends heavily on planner expertise and experience. The data for all patients were retrieved later, and treatment plans were created and optimized using the HIPO algorithm with the same set of dose constraints, number of catheters, and set of contours as in the real-time optimization. The HIPO algorithm is a hybrid because it combines both stochastic and deterministic algorithms. The stochastic algorithm, simulated annealing, searches the optimal catheter distributions for a given set of dose objectives. The deterministic algorithm, dose-volume histogram-based optimization (DVHO), quickly optimizes the three-dimensional dose distribution by moving straight downhill once it is in the advantageous region of the search space given by the stochastic algorithm. The PTV receiving 100% of the prescription dose (V100) was 97.56% and 95.38% with GRO and HIPO, respectively. The mean dose (D(mean)) and minimum dose to 10% volume (D10) for the urethra, rectum, and bladder were all statistically lower with HIPO compared to GRO using Student's paired t-test at the 5% significance level. HIPO can provide treatment plans with comparable target coverage to that of GRO with a reduction in dose to the critical structures.
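The stochastic half of the hybrid (simulated annealing over candidate catheter positions) can be sketched in miniature. This is a highly simplified illustration, not the commercial HIPO implementation: the cost function below is a toy stand-in for the deterministic DVHO score, rewarding spread-out catheters on a 1D grid:

```python
# Simulated annealing over subsets of candidate catheter positions.
# cost() is a hypothetical stand-in for a dose-based score: lower is better,
# and here "better" simply means the chosen positions are well spread out.
import math
import random

def cost(subset):
    return -min(abs(a - b) for a in subset for b in subset if a != b)

def anneal(candidates, k, steps=2000, temp=2.0, seed=1):
    rng = random.Random(seed)
    current = rng.sample(candidates, k)
    best = list(current)
    for step in range(steps):
        t = temp * (1 - step / steps) + 1e-9      # linear cooling schedule
        neighbour = list(current)
        neighbour[rng.randrange(k)] = rng.choice(candidates)  # move one catheter
        if len(set(neighbour)) < k:
            continue                               # skip duplicate positions
        delta = cost(neighbour) - cost(current)
        # accept improvements always, worsenings with Boltzmann probability
        if delta < 0 or rng.random() < math.exp(-delta / t):
            current = neighbour
        if cost(current) < cost(best):
            best = list(current)
    return sorted(best)

print(anneal(list(range(10)), 3))
```

In HIPO proper, each candidate catheter configuration is scored by the deterministic DVHO inner optimization rather than a geometric heuristic; the annealing loop structure is the same.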
NASA Astrophysics Data System (ADS)
Heidari Haratmeh, B.; Rai, A.; Minsker, B. S.
2016-12-01
Green Infrastructure (GI) has become widely known as a sustainable solution for stormwater management in urban environments. Despite growing recognition and acknowledgment, researchers and practitioners lack clear and explicit guidelines on how GI practices should be implemented in urban settings. This study is developing a multi-objective, multi-scale genetic algorithm that handles noisy objectives and determines optimal GI networks for environmental, economic and social objectives. The methodology accounts for uncertainty in modeling results and is designed to perform at the sub-watershed as well as the patch scale using two different simulation models, SWMM and RHESSys, in a Cloud-based implementation with a Web interface. As an initial case study, a semi-urbanized watershed, Dead Run 5 in Baltimore County, Maryland, was selected. The objective of the study is to minimize life cycle cost, maximize human preference for well-being, and minimize the difference between pre-development hydrographs (generated from current rainfall events and design storms) and those that result from proposed GI scenarios. Initial results for the Dead Run 5 watershed suggest that placing GI in the proximity of the watershed outlet optimizes life cycle cost, stormwater volume, and peak flow capture. The framework can easily present outcomes of GI design scenarios to both designers and local stakeholders; future plans include receiving feedback from users on candidate designs and interactively updating optimal GI network designs in a crowd-sourced design process. This approach can also be helpful in deriving design guidelines that better meet stakeholder needs.
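A multi-objective search like the one described above returns a set of trade-off (Pareto-optimal) designs rather than a single best one. A minimal dominance filter, with hypothetical (life-cycle cost, hydrograph error) pairs where both objectives are minimized:

```python
# Pareto dominance filter: keep the designs not dominated by any other.
# The (cost, hydrograph-error) pairs are hypothetical GI design scores.
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

designs = [(100, 8.0), (120, 5.0), (150, 5.5), (90, 9.5), (200, 4.0)]
print(sorted(pareto_front(designs)))
```

A genetic algorithm such as the one in the study evolves a population toward this front; the dominance test above is the core comparison it applies between candidate GI networks.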
Challenges and opportunities in the manufacture and expansion of cells for therapy.
Maartens, Joachim H; De-Juan-Pardo, Elena; Wunner, Felix M; Simula, Antonio; Voelcker, Nicolas H; Barry, Simon C; Hutmacher, Dietmar W
2017-10-01
Laboratory-based ex vivo cell culture methods are largely manual in their manufacturing processes. This makes it extremely difficult to meet regulatory requirements for process validation, quality control and reproducibility. Cell culture concepts with a translational focus need to embrace a more automated approach where cell yields are able to meet the quantitative production demands, the correct cell lineage and phenotype is readily confirmed and reagent usage has been optimized. Areas covered: This article discusses the obstacles inherent in classical laboratory-based methods, their concomitant impact on cost of goods, and the technology step change required to facilitate translation from bench-to-bedside. Expert opinion: While traditional bioreactors have demonstrated limited success where adherent cells are used in combination with microcarriers, further process optimization will be required to find solutions for commercial-scale therapies. New cell culture technologies based on 3D-printed cell culture lattices with favourable surface-to-volume ratios have the potential to change the paradigm in industry. An integrated Quality-by-Design/systems engineering approach will be essential to facilitate the scaled-up translation from proof-of-principle to clinical validation.
Ratcliffe, Elizabeth; Hourd, Paul; Guijarro-Leach, Juan; Rayment, Erin; Williams, David J; Thomas, Robert J
2013-01-01
Commercial regenerative medicine will require large quantities of clinical-specification human cells. The cost and quality of manufacture is notoriously difficult to control due to highly complex processes with poorly defined tolerances. As a step to overcome this, we aimed to demonstrate the use of 'quality-by-design' tools to define the operating space for economic passage of a scalable human embryonic stem cell production method with minimal cell loss. Design of experiments response surface methodology was applied to generate empirical models to predict optimal operating conditions for a unit of manufacture of a previously developed automatable and scalable human embryonic stem cell production method. Two models were defined to predict cell yield and cell recovery rate postpassage, in terms of the predictor variables of media volume, cell seeding density, media exchange and length of passage. Predicted operating conditions for maximized productivity were successfully validated. Such 'quality-by-design' type approaches to process design and optimization will be essential to reduce the risk of product failure and patient harm, and to build regulatory confidence in cell therapy manufacturing processes.
Optimizing the recovery of copper from electroplating rinse bath solution by hollow fiber membrane.
Oskay, Kürşad Oğuz; Kul, Mehmet
2015-01-01
This study aimed to recover and remove copper from an industrial model wastewater solution by non-dispersive solvent extraction (NDSX). Two mathematical models were developed to simulate the performance of an integrated extraction-stripping process based on hollow fiber contactors, using the response surface method. The models allow one to predict the time-dependent efficiencies of the two phases involved in the individual extraction or stripping processes. The optimal recovery efficiency parameters were determined by central composite design (CCD) as 227 g/L H2SO4 concentration, a 1.22 feed/strip ratio, a 450 mL/min flow rate (115.9 cm/min flow velocity) and 15 volume % LIX 84-I concentration in 270 min. At these optimum conditions, the experimental value of recovery efficiency was 95.88%, which was in close agreement with the 97.75% efficiency value predicted by the model. At the end of the process, almost all the copper in the model wastewater solution was removed and recovered as CuSO4·5H2O salt, which can be reused in the copper electroplating industry.
System for sensing droplet formation time delay in a flow cytometer
Van den Engh, Ger; Esposito, Richard J.
1997-01-01
A droplet flow cytometer system, which includes a system to optimize the droplet formation time delay based on conditions actually experienced, includes an automatic droplet sampler that rapidly moves a plurality of containers stepwise through the droplet stream while simultaneously adjusting the droplet time delay. Through this system, sampling of the actual substance to be processed can be used to minimize the effect of the substance's variations on the determination of which time delay is optimal. Analysis such as cell counting and the like may be conducted manually or automatically and input to a time delay adjustment, which may then act with analysis equipment to revise the time delay estimate actually applied during processing. The automatic sampler can be controlled through a microprocessor and appropriate programming to bracket an initial droplet formation time delay estimate. When a maximum count, determined through volume, weight, or other types of analysis, exists in one of the containers, the increment may then be reduced for a more accurate ultimate setting. This may be accomplished while actually processing the sample, without interruption.
Design of a novel automated methanol feed system for pilot-scale fermentation of Pichia pastoris.
Hamaker, Kent H; Johnson, Daniel C; Bellucci, Joseph J; Apgar, Kristie R; Soslow, Sherry; Gercke, John C; Menzo, Darrin J; Ton, Christopher
2011-01-01
Large-scale fermentation of Pichia pastoris requires a large volume of methanol feed during the induction phase. However, a large volume of methanol feed is difficult to use in the processing suite because of the inconvenience of constant monitoring, manual manipulation steps, and fire and explosion hazards. To optimize and improve safety of the methanol feed process, a novel automated methanol feed system has been designed and implemented for industrial fermentation of P. pastoris. Details of the design of the methanol feed system are described. The main goals of the design were to automate the methanol feed process and to minimize the hazardous risks associated with storing and handling large quantities of methanol in the processing area. The methanol feed system is composed of two main components: a bulk feed (BF) system and up to three portable process feed (PF) systems. The BF system automatically delivers methanol from a central location to the portable PF system. The PF system provides precise flow control of linear, step, or exponential feed of methanol to the fermenter. Pilot-scale fermentations with linear and exponential methanol feeds were conducted using two Mut(+) (methanol utilization plus) strains, one expressing a recombinant therapeutic protein and the other a monoclonal antibody. Results show that the methanol feed system is accurate, safe, and efficient. The feed rates for both linear and exponential feed methods were within ± 5% of the set points, and the total amount of methanol fed was within 1% of the targeted volume. Copyright © 2011 American Institute of Chemical Engineers (AIChE).
NASA Astrophysics Data System (ADS)
Vivek, Tiwary; Arunkumar, P.; Deshpande, A. S.; Vinayak, Malik; Kulkarni, R. M.; Asif, Angadi
2018-04-01
Conventional investment casting is one of the oldest and most economical manufacturing techniques for producing intricate and complex part geometries. However, investment casting is considered economical only if the volume of production is large. Design iterations and design optimisations in this technique prove to be very costly due to the time and tooling cost of making dies for producing wax patterns. With the advent of additive manufacturing technology, however, plastic patterns show very good potential to replace wax patterns. This approach can be very useful for low volume production and lab requirements, since the cost and time required to incorporate changes in the design are very low. This paper discusses the steps involved in developing polymer nanocomposite filaments and checking their suitability for investment casting. The process parameters of the 3D printer are also optimized using the DOE technique to obtain mechanically stronger plastic patterns. The study develops a framework for rapid investment casting for lab as well as industrial requirements.
Zhang, S F; Zhang, L L; Luo, K; Sun, Z X; Mei, X X
2014-04-01
The separation properties of the aluminium-plastic laminates in postconsumer Tetra Pak structure were studied in the present work. An organic solvent blend of benzene-ethyl alcohol-water was used as the separation reagent. Triangle coordinate figure analysis was then used to optimize the volume proportions of the components in the separating agent and the separation process, and the separation temperature of the aluminium-plastic laminates was determined from the separation time, efficiency, and total mass loss of products. The results show that cost-efficient separations perform best with low usage of solvents at certain temperatures, for certain times, and within a certain range of volume proportions of the three components in the solvent agent. It is also found that similar solubility parameters of the solvents and the polyethylene adhesives (range 26.06-34.85) are a key factor for the separation of the aluminium-plastic laminates. Such multisolvent processes based on the combined-system concept will be vital to applications in the recycling industry.
NASA Astrophysics Data System (ADS)
Qi, Y. L.; Xu, B. Y.; Cai, S. L.
2006-12-01
To control fuel injection, optimize combustion and reduce emissions for LPG (liquefied petroleum gas) engines, it is necessary and important to understand the characteristics of LPG sprays. The present work investigates the geometry of LPG sprays, including spray tip penetration, spray angle, projected spray area and spray volume, by using schlieren photography and digital image processing techniques. Two types of single nozzle injectors were studied, with the same nozzle diameter, but one with and one without a double-hole flow-split head. A code developed to analyse the results directly from the digitized images is shown to be more accurate and efficient than manual measurement and analysis. Test results show that a higher injection pressure produces a longer spray tip penetration, a larger projected spray area and spray volume, but a smaller spray cone angle. The injector with the double-hole split-head nozzle produces better atomization and shorter tip penetration at medium and late injection times, but longer tip penetration in the early stage.
2010-05-01
alternative fuel from halophyte (Salicornia oil from sea plants) was also produced by the Syntroleum Corporation and termed R-8X. Syntroleum processed...these bio-oils without catalyst change-out or processing optimization. Only a portion of the fit for purpose and characterization testing was...jet fuel, up to 50 volume %, just as F-T SPK is allowed to be used in MIL-DTL-83133F. b) The R-8 feedstock of fats, oils, and grease (FOG) was
Engberg, Lovisa; Forsgren, Anders; Eriksson, Kjell; Hårdemark, Björn
2017-06-01
To formulate convex planning objectives for treatment plan multicriteria optimization with explicit relationships to the dose-volume histogram (DVH) statistics used in plan quality evaluation. Conventional planning objectives are designed to minimize the violation of DVH statistics thresholds using penalty functions. Although successful in guiding the DVH curve towards these thresholds, conventional planning objectives offer limited control of the individual points on the DVH curve (doses-at-volume) used to evaluate plan quality. In this study, we abandon the usual penalty-function framework and propose planning objectives that more closely relate to DVH statistics. The proposed planning objectives are based on mean-tail-dose, resulting in convex optimization. We also demonstrate how to adapt a standard optimization method to the proposed formulation in order to obtain a substantial reduction in computational cost. We investigated the potential of the proposed planning objectives as tools for optimizing DVH statistics through juxtaposition with the conventional planning objectives on two patient cases. Sets of treatment plans with differently balanced planning objectives were generated using either the proposed or the conventional approach. Dominance, in the sense of better distributed doses-at-volume, was observed in plans optimized within the proposed framework. The initial computational study indicates that the DVH statistics are better optimized and more efficiently balanced using the proposed planning objectives than using the conventional approach. © 2017 American Association of Physicists in Medicine.
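The mean-tail-dose objectives described above are closely related to conditional value-at-risk: the upper (lower) mean-tail-dose at volume v is the average dose over the hottest (coldest) fraction v of the structure, and it bounds the corresponding dose-at-volume from above (below). A minimal sketch on a discretized voxel dose vector (the function names and toy dose values are illustrative, not from the paper):

```python
import numpy as np

def upper_mean_tail_dose(dose, v):
    """Mean dose of the hottest fraction v of voxels (convex in the dose vector).

    Penalizing this statistic bounds the dose-at-volume D_v from above.
    """
    dose = np.sort(np.asarray(dose, dtype=float))[::-1]  # hottest first
    k = max(1, int(round(v * dose.size)))                # tail size in voxels
    return dose[:k].mean()

def lower_mean_tail_dose(dose, v):
    """Mean dose of the coldest fraction v of voxels (concave in the dose vector)."""
    dose = np.sort(np.asarray(dose, dtype=float))        # coldest first
    k = max(1, int(round(v * dose.size)))
    return dose[:k].mean()

doses = np.array([60.0, 62.0, 64.0, 66.0, 68.0])
print(upper_mean_tail_dose(doses, 0.4))  # mean of the two hottest voxels -> 67.0
print(lower_mean_tail_dose(doses, 0.4))  # mean of the two coldest voxels -> 61.0
```

Because both functions are convex (respectively concave) in the voxel doses, constraints and objectives built from them keep the planning problem convex, unlike dose-at-volume itself.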
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernandez, M; Fontenot, J; Heins, D
2016-06-15
Purpose: To evaluate two dose optimization strategies for maintaining target volume coverage of inversely-planned post mastectomy radiotherapy (PMRT) plans during patient motion. Methods: Five patients previously treated with VMAT for PMRT at our clinic were randomly selected for this study. For each patient, two plan optimization strategies were compared. Plan 1 was optimized to a volume that included the physician's planning target volume (PTV) plus an expansion up to 0.3 cm from the bolus surface. Plan 2 was optimized to the PTV plus an expansion up to 0.3 cm from the patient surface (i.e., not extending into the bolus). VMAT plans were optimized to deliver 95% of the prescription to 95% of the PTV while sparing organs at risk based on clinical dose limits. PTV coverage was then evaluated following the simulation of patient shifts by 1.0 cm in the anterior and posterior directions using the treatment planning system. Results: Posterior patient shifts produced a difference in D95% of around 11% in both planning approaches from the non-shifted dose distributions. Coverage of the medial and lateral borders of the evaluation volume was reduced in both the posteriorly shifted plans (Plan 1 and Plan 2). Anterior patient shifts affected Plan 2 more than Plan 1, with a difference in D95% of 1% for Plan 1 versus 6% for Plan 2 from the non-shifted dose distributions. The least variation in PTV dose homogeneity for both shifts was obtained with Plan 1. However, all posteriorly shifted plans failed to deliver 95% of the prescription to 95% of the PTV, whereas only a few anteriorly shifted plans failed this criterion. Conclusion: The results of this study suggest both planning volume methods are sensitive to patient motion, but that a PTV extended into a bolus volume is slightly more robust for anterior patient shifts.
Volume versus value maximization illustrated for Douglas-fir with thinning
Kurt H. Riitters; J. Douglas Brodie; Chiang Kao
1982-01-01
Economic and physical criteria for selecting even-aged rotation lengths are reviewed with examples of their optimizations. To demonstrate the trade-off between physical volume, economic return, and stand diameter, examples of thinning regimes for maximizing volume, forest rent, and soil expectation are compared with an example of maximizing volume without thinning. The...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, J.R.; Netrologic, Inc., San Diego, CA)
1988-01-01
Topics presented include integrating neural networks and expert systems, neural networks and signal processing, machine learning, cognition and avionics applications, artificial intelligence and man-machine interface issues, real-time expert systems, artificial intelligence, and engineering applications. Also considered are advanced problem solving techniques, combinatorial optimization for scheduling and resource control, data fusion/sensor fusion, back propagation with momentum, shared weights and recurrency, automatic target recognition, cybernetics, and optical neural networks.
One- and two-dimensional search of an equation of state using a newly released 2DRoptimize package
NASA Astrophysics Data System (ADS)
Jamal, M.; Reshak, A. H.
2018-05-01
A new package called 2DRoptimize has been released for performing two-dimensional searches of the equation of state (EOS) for rhombohedral, tetragonal, and hexagonal compounds. The package is compatible with, and distributed alongside, the WIEN2k package. 2DRoptimize performs a convenient combined volume and c/a structure optimization. First, the package finds the best value of c/a and the associated energy for each volume. In the second step, it calculates the EOS. The package then fits the c/a ratio vs. volume relation to calculate the c/a ratio at the optimized volume. In the last stage, using the optimized volume and c/a ratio, 2DRoptimize calculates the a and c lattice constants for tetragonal and hexagonal compounds, as well as the a lattice constant and the α angle for rhombohedral compounds. We tested the new package on several hexagonal, tetragonal, and rhombohedral structures, and the 2D search results for the EOS showed that this method is more accurate than a 1D search. Our results agreed very well with the experimental data and improved on previous theoretical calculations.
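The two-step search described above can be sketched as follows. The toy analytic energy surface stands in for the DFT total energies that WIEN2k would supply, and the polynomial E(V) fit is a stand-in for a proper Birch-Murnaghan EOS fit; all numbers are illustrative assumptions:

```python
import numpy as np

# Hypothetical E(V, c/a) samples on a grid (in place of DFT total energies).
volumes = np.array([95.0, 100.0, 105.0, 110.0])
ca_grid = np.linspace(1.55, 1.65, 5)

def energy(V, ca):  # toy energy surface with its minimum at V=102, c/a=1.60
    return 0.01 * (V - 102.0) ** 2 + 50.0 * (ca - 1.60) ** 2

# Step 1: for each volume, fit a parabola in c/a and take its minimum.
best_ca, best_E = [], []
for V in volumes:
    E = np.array([energy(V, ca) for ca in ca_grid])
    c2, c1, c0 = np.polyfit(ca_grid, E, 2)
    ca_min = -c1 / (2.0 * c2)              # vertex of the fitted parabola
    best_ca.append(ca_min)
    best_E.append(np.polyval([c2, c1, c0], ca_min))

# Step 2: fit E(V) through the per-volume minima (a polynomial stands in
# for a Birch-Murnaghan EOS) and locate the equilibrium volume.
e2, e1, e0 = np.polyfit(volumes, best_E, 2)
V_opt = -e1 / (2.0 * e2)

# Step 3: fit c/a vs volume and evaluate it at the optimized volume.
ca_opt = np.polyval(np.polyfit(volumes, best_ca, 1), V_opt)
print(round(V_opt, 2), round(ca_opt, 3))
```

From (V_opt, ca_opt) the a and c lattice constants follow from the cell geometry of the given lattice type, which is the last stage the abstract describes.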
Maximum Work of Free-Piston Stirling Engine Generators
NASA Astrophysics Data System (ADS)
Kojima, Shinji
2017-04-01
Using the method of adjoint equations described in Ref. [1], we have calculated the maximum thermal efficiencies that are theoretically attainable by free-piston Stirling and Carnot engine generators by considering the work loss due to friction and Joule heat. The net work done by the Carnot cycle is negative even when the duration of heat addition is optimized to give the maximum amount of heat addition, which is the same situation for the Brayton cycle described in our previous paper. For the Stirling cycle, the net work done is positive, and the thermal efficiency is greater than that of the Otto cycle described in our previous paper by a factor of about 2.7-1.4 for compression ratios of 5-30. The Stirling cycle is much better than the Otto, Brayton, and Carnot cycles. We have found that the optimized piston trajectories of the isothermal, isobaric, and adiabatic processes are the same when the compression ratio and the maximum volume of the same working fluid of the three processes are the same, which has facilitated the present analysis because the optimized piston trajectories of the Carnot and Stirling cycles are the same as those of the Brayton and Otto cycles, respectively.
Batch and Continuous Ultrasound Assisted Extraction of Boldo Leaves (Peumus boldus Mol.).
Petigny, Loïc; Périno-Issartier, Sandrine; Wajsman, Joël; Chemat, Farid
2013-03-12
Vegetal extracts are widely used as primary ingredients for various products, from creams to perfumes, in the pharmaceutical, nutraceutical and cosmetic industries. Having a concentrated and active extract is essential, as the process must extract as much soluble material as possible in a minimum time, using the least possible volume of solvent. Boldo leaf extract is of great interest to industry as it has high anti-oxidant activity due to high levels of flavonoids and alkaloids such as boldine. Ultrasound Assisted Extraction (UAE) has been used to improve the efficiency of the plant extraction, reducing extraction time and increasing the concentration of the extract with the same amount of solvent and plant material. After a preliminary study, a response surface method was used to optimize the extraction of soluble material from the plant. The statistical analysis revealed that the optimized conditions were: sonication power 23 W/cm2 for 40 min at a temperature of 36 °C. The optimized UAE parameters provide a better extraction compared to a conventional maceration in terms of process time (30 min instead of 120 min), higher yield, energy saving, cleanliness, safety and product quality.
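The response-surface step used here (and in several other studies in this collection) can be sketched generically: fit a full second-order model to coded factor levels by least squares, then solve for the stationary point of the fitted surface. The factor coding and yield values below are made up for illustration; they are not the boldo data:

```python
import numpy as np

# Hypothetical coded factor levels (e.g. sonication power, temperature)
# in a central-composite-style layout, with made-up extraction yields.
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [0, 0], [1.41, 0], [-1.41, 0], [0, 1.41], [0, -1.41]])
y = np.array([20.0, 24.0, 22.0, 25.0, 30.0, 26.0, 21.0, 27.0, 23.0])

# Design matrix for the full second-order model:
# yield ~ b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
x1, x2 = X[:, 0], X[:, 1]
A = np.column_stack([np.ones(len(y)), x1, x2, x1**2, x2**2, x1 * x2])
b, *_ = np.linalg.lstsq(A, y, rcond=None)

# Stationary point of the fitted quadratic surface: solve grad = 0.
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
g = -np.array([b[1], b[2]])
x_star = np.linalg.solve(H, g)
print(np.round(x_star, 2))  # coded levels of the predicted optimum
```

The coded optimum is then mapped back to physical units (power, time, temperature) via the coding used in the design, and verified experimentally, as the abstract reports.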
Cure Cycle Optimization of Rapidly Cured Out-Of-Autoclave Composites.
Dong, Anqi; Zhao, Yan; Zhao, Xinqing; Yu, Qiyong
2018-03-13
Out-of-autoclave prepreg typically needs a long cure cycle to guarantee good properties as a result of the low processing pressure applied. It is essential to reduce the manufacturing time, achieve real cost reduction, and take full advantage of the out-of-autoclave process. The focus of this paper is to reduce the cure cycle time and production cost while maintaining high laminate quality. A rapidly cured out-of-autoclave resin and the corresponding prepreg were independently developed. To determine a suitable rapid cure procedure for the developed prepreg, the effects of heating rate, initial cure temperature, dwelling time, and post-cure time on the final laminate quality were evaluated and the factors were then optimized. As a result, a rapid cure procedure was determined. The results showed that the resin infiltration could be completed by the end of the initial cure stage, at which point no obvious void could be seen in the laminate. The laminate could achieve good internal quality using the optimized cure procedure. The mechanical test results showed that the laminates had a fiber volume fraction of 59-60%, a final glass transition temperature of 205 °C, and excellent mechanical strength, especially in the flexural properties.
Failure-probability driven dose painting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vogelius, Ivan R.; Håkansson, Katrin; Due, Anne K.
Purpose: To demonstrate a data-driven dose-painting strategy based on the spatial distribution of recurrences in previously treated patients. The result is a quantitative way to define a dose prescription function, optimizing the predicted local control at constant treatment intensity. A dose planning study using the optimized dose prescription in 20 patients is performed. Methods: Patients treated at our center have five tumor subvolumes delineated, from the center of the tumor (PET-positive volume) outward. The spatial distribution of 48 failures in patients with complete clinical response after (chemo)radiation is used to derive a model for tumor control probability (TCP). The total TCP is fixed to the clinically observed 70% actuarial TCP at five years. Additionally, the authors match the distribution of failures between the five subvolumes to the observed distribution. The steepness of the dose-response is extracted from the literature, and the authors assume 30% and 20% risk of subclinical involvement in the elective volumes. The result is a five-compartment dose response model matching the observed distribution of failures. The model is used to optimize the distribution of dose in individual patients, while keeping the treatment intensity constant and the maximum prescribed dose below 85 Gy. Results: The vast majority of failures occur centrally despite the small volumes of the central regions. Thus, optimizing the dose prescription yields higher doses to the central target volumes and lower doses to the elective volumes. The dose planning study shows that the modified prescription is clinically feasible. The optimized TCP is 89% (range: 82%-91%) as compared to the observed TCP of 70%. Conclusions: The observed distribution of locoregional failures was used to derive an objective, data-driven dose prescription function.
The optimized dose is predicted to result in a substantial increase in local control without increasing the predicted risk of toxicity.
Cold flow simulation of an internal combustion engine with vertical valves using layering approach
NASA Astrophysics Data System (ADS)
Martinas, G.; Cupsa, O. S.; Stan, L. C.; Arsenie, A.
2015-11-01
Complying with emission requirements and improving fuel consumption efficiency are the points that drive any development of internal combustion engines. Refinement of the processes of combustion and mixture formation, together with in-cylinder flow refinement, is a requirement; optimization of the valve, piston bowl, and intake/exhaust port design is essential. Computational Fluid Dynamics (CFD) is used to reduce the time of the design optimization cycle. Because carrying out experiments with flow-bench testing is time-consuming and highly costly, such methods are becoming less utilized. Air motion inside the intake manifold is one of the important factors that govern the engine performance and emissions of multi-cylinder diesel engines. Any cold flow study on an IC engine targets the process of identifying and improving the fluid flow inside the ports and the combustion chamber. This is only the basis for an optimization process aiming to increase the volume of air accessing the combustion space and to increase the turbulence of the air at the end of the compression stage. One of the first conclusions is that the valve diameter is a fine tradeoff between the need for a bigger diameter, involving a greater mass of air filling the cylinder, and the need for a smaller diameter in order to reduce the blind zone; here there is room for optimization studies. The relative pressure indicates a suction effect coming from the moving piston. The smoother the shape of the inlet port and the bigger the diameter of the piston, the smaller the aerodynamic resistance of the geometry, so that the difference between the inlet port pressure and the pressure near the piston face will be smaller. Here again there is room for further optimization studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sugano, Yasutaka; Mizuta, Masahiro; Takao, Seishin
Purpose: Radiotherapy of solid tumors has been performed with various fractionation regimens such as multi- and hypofractionation. However, the ability to optimize the fractionation regimen considering the physical dose distribution remains insufficient. This study aims to optimize the fractionation regimen, for which the authors propose a graphical method for selecting the optimal number of fractions (n) and dose per fraction (d) based on dose–volume histograms for the tumor and normal tissues of organs around the tumor. Methods: Modified linear-quadratic models were employed to estimate the radiation effects on the tumor and an organ at risk (OAR), where the repopulation of the tumor cells and the linearity of the dose-response curve in the high dose range of the surviving fraction were considered. The minimization problem for the damage effect on the OAR was solved by a graphical method under the constraint that the radiation effect on the tumor is fixed. Here, the damage effect on the OAR was estimated based on the dose–volume histogram. Results: It was found that optimization of the fractionation scheme incorporating the dose–volume histogram is possible by employing appropriate cell survival models. The graphical method, considering the repopulation of tumor cells and a rectilinear response in the high dose range, enables one to derive the optimal number of fractions and dose per fraction. For example, in the treatment of prostate cancer, the optimal fractionation was suggested to lie in the range of 8-32 fractions with a daily dose of 2.2-6.3 Gy. Conclusions: It is possible to optimize the number of fractions and dose per fraction based on the physical dose distribution (i.e., the dose–volume histogram) by the graphical method considering the effects on the tumor and OARs around the tumor. This method may stipulate a new guideline to optimize the fractionation regimen for physics-guided fractionation.
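The trade-off the graphical method explores can be illustrated with a plain linear-quadratic sketch, omitting the authors' repopulation and high-dose-linearity corrections. The α/β values, OAR sparing factor, and fixed tumor effect below are all assumptions for illustration, not the paper's parameters:

```python
import numpy as np

# Simplified LQ sketch: fix the tumor biological effect and compare the
# OAR effect across fractionation regimens (all parameter values assumed).
alpha_beta_tumor, alpha_beta_oar = 10.0, 3.0   # Gy
sparing = 0.7             # OAR receives 70% of the tumor dose per fraction
target_effect = 72.0 * (1 + 2.0 / alpha_beta_tumor)  # fixed tumor effect

def dose_per_fraction(n):
    """Solve n*d*(1 + d/ab_tumor) = target_effect for d > 0 (quadratic root)."""
    a, b, c = n / alpha_beta_tumor, n, -target_effect
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)

def oar_effect(n):
    d = sparing * dose_per_fraction(n)
    return n * d * (1 + d / alpha_beta_oar)

fractions = np.arange(5, 41)
best_n = fractions[np.argmin([oar_effect(n) for n in fractions])]
print(best_n, round(dose_per_fraction(best_n), 2))
```

With these assumed parameters (tumor α/β above OAR α/β and moderate sparing), the OAR effect decreases monotonically with n, so the search returns the largest allowed fraction number; the paper's modified model, which adds tumor repopulation, is what produces a finite interior optimum such as the 8-32 fraction range quoted above.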
Skin-electrode circuit model for use in optimizing energy transfer in volume conduction systems.
Hackworth, Steven A; Sun, Mingui; Sclabassi, Robert J
2009-01-01
The X-Delta model for through-skin volume conduction systems is introduced and analyzed. This new model has advantages over our previous X model in that it explicitly represents current pathways in the skin. A vector network analyzer is used to take measurements on pig skin to obtain data for finding the model's impedance parameters. An optimization method for obtaining this more complex model's parameters is described. Results show the model to accurately represent the impedance behavior of the skin system, with errors generally less than one percent. Uses for the model include optimizing energy transfer across the skin in a volume conduction system under appropriate current exposure constraints, and exploring non-linear behavior of the electrode-skin system at moderate voltages (below ten volts) and frequencies (kilohertz to megahertz).
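The parameter-identification step can be sketched in much-reduced form: fit a lumped electrode-skin circuit to impedance-magnitude data by minimizing squared error. A single series-R plus parallel-RC branch and synthetic data stand in for the full X-Delta network and the pig-skin measurements; all component values are made up:

```python
import numpy as np

# Toy circuit: series resistance plus a parallel RC branch.
def z_model(f, rs, rp, c):
    zc = 1.0 / (1j * 2 * np.pi * f * c)   # capacitor impedance
    return rs + (rp * zc) / (rp + zc)     # series R + parallel RC

freqs = np.logspace(3, 6, 30)             # 1 kHz .. 1 MHz sweep
z_true = np.abs(z_model(freqs, 120.0, 2200.0, 22e-9))  # synthetic "measurement"

# Coarse grid search for the parameter set minimizing squared error
# (a stand-in for the optimization method described in the abstract).
best = None
for rs in np.linspace(50, 200, 16):
    for rp in np.linspace(1000, 4000, 31):
        for c in np.linspace(5e-9, 50e-9, 46):
            err = np.sum((np.abs(z_model(freqs, rs, rp, c)) - z_true) ** 2)
            if best is None or err < best[0]:
                best = (err, rs, rp, c)
print(best[1], best[2], best[3])
```

A real fit would use complex impedance (magnitude and phase) and a gradient-based or least-squares solver rather than a grid, but the structure, a circuit model evaluated over the measured frequency sweep inside an error-minimization loop, is the same.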
Mo, Kyung; Lee, Wonbae; Kim, Moonil
2017-02-01
A modified anaerobic digestion elutriated phased treatment (MADEPT) process was developed for investigating anaerobic co-digestion of sewage sludge and food wastewater. The anaerobic digestion elutriated phased treatment (ADEPT) process is similar to a two-phase system, except that the effluent from the methanogenic reactor is recycled into the acidogenic reactor to elutriate mainly dissolved organics. Although ADEPT can reduce reactor volume significantly, the unsolubilized solids must be wasted from the system. The MADEPT process combines thermo-alkali solubilization with ADEPT to improve anaerobic performance and to minimize sludge disposal. The optimal volume mixing ratio of sewage sludge to food wastewater was determined to be 4:1 for the anaerobic co-digestion. The removal efficiencies of total chemical oxygen demand, volatile solids, and volatile suspended solids in the MADEPT process were 73%, 70%, and 64%, respectively, whereas those in the ADEPT process were only 48%, 37%, and 40%, respectively, at the same hydraulic retention time (HRT) of 7 days. The gas production of MADEPT was two times higher than that of ADEPT. The thermo-alkali solubilization increased the concentration of dissolved organics so that they could be effectively degraded at a short HRT, implying that MADEPT could improve the performance of ADEPT in anaerobic co-digestion.
Topology Optimization - Engineering Contribution to Architectural Design
NASA Astrophysics Data System (ADS)
Tajs-Zielińska, Katarzyna; Bochenek, Bogdan
2017-10-01
The idea of topology optimization is to find, within a considered design domain, the distribution of material that is optimal in some sense. During the optimization process, material is redistributed, and parts that are unnecessary from the standpoint of the objective are removed. The result is a solid/void structure for which an objective function is minimized. This paper presents an application of topology optimization to multi-material structures. The design domain, defined by the shape of the structure, is divided into sub-regions, each assigned a different material. During the design process material is relocated, but only within its selected region. The proposed idea has been inspired by architectural designs such as multi-material building facades. The effectiveness of topology optimization is determined by the proper choice of numerical optimization algorithm. This paper utilises a very efficient heuristic method called Cellular Automata. Cellular Automata are mathematical, discrete idealizations of physical systems. Engineering implementation of Cellular Automata requires decomposition of the design domain into a uniform lattice of cells. It is assumed that interaction takes place only between neighbouring cells and is governed by simple, local update rules based on heuristics or physical laws. The numerical studies show that this method can be an attractive alternative to traditional gradient-based algorithms. The proposed approach is evaluated on selected numerical examples of multi-material bridge structures, for which various material configurations are examined. The numerical studies demonstrate a significant influence of the location of the material sub-regions on the final topologies. The influence of the assumed volume fraction on the final topologies of multi-material structures is also observed and discussed. The results of numerical calculations show that this approach produces different results compared with classical one-material problems.
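The local-update structure of a Cellular Automaton optimizer can be sketched as follows. This is a minimal illustration of the idea, not the authors' implementation: the grid size, step size, synthetic sensitivity field, and volume-fraction rescaling are all illustrative assumptions.

```python
import numpy as np

def ca_update(density, sensitivity, step=0.05, vol_frac=0.4):
    """One Cellular Automaton sweep: each cell moves its material density
    up or down depending on whether the averaged sensitivity over its
    von Neumann neighbourhood lies above or below the global mean, then
    the field is rescaled to respect the target volume fraction."""
    # Average sensitivity over the cell and its 4 neighbours (edge-padded).
    p = np.pad(sensitivity, 1, mode='edge')
    local = (p[1:-1, 1:-1] + p[:-2, 1:-1] + p[2:, 1:-1]
             + p[1:-1, :-2] + p[1:-1, 2:]) / 5.0
    # Local rule: add material where the neighbourhood is "working hard".
    new = np.clip(density + step * np.sign(local - local.mean()), 0.001, 1.0)
    # Enforce the prescribed volume fraction by uniform rescaling.
    new *= vol_frac * new.size / new.sum()
    return np.clip(new, 0.001, 1.0)

# Toy demonstration on an 8x8 domain with a synthetic sensitivity field.
rng = np.random.default_rng(0)
rho = np.full((8, 8), 0.4)
sens = rng.random((8, 8))
for _ in range(20):
    rho = ca_update(rho, sens)
print(round(float(rho.mean()), 2))  # stays near the 0.4 volume fraction
```

The update needs only the neighbouring cells' values, which is what makes the scheme inherently local and easy to parallelize, in contrast to gradient-based methods that require a global sensitivity analysis.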
Experimental investigation of the structural behavior of equine urethra.
Natali, Arturo Nicola; Carniel, Emanuele Luigi; Frigo, Alessandro; Fontanella, Chiara Giulia; Rubini, Alessandro; Avital, Yochai; De Benedictis, Giulia Maria
2017-04-01
An integrated experimental and computational investigation was developed to provide a methodology for characterizing the structural response of the urethral duct. The investigation provides information suitable for a proper understanding of the mechanical functionality of the lower urinary tract and for the optimal design of prosthetic devices. The experimental activity entailed inflation tests performed on segments of horse penile urethras from both proximal and distal regions. Inflation tests were performed at different imposed volumes, each according to a two-step procedure: the tubular segment was inflated almost instantaneously during the first step, while volume was held constant for about 300 s during the second step to allow relaxation processes to develop. Tests performed on the same specimen were separated by 600 s of rest to allow recovery of the specimen's mechanical condition. Results from the experimental activities were statistically analyzed and processed by means of a specific mechanical model, developed to interpret the general pressure-volume-time response of biological tubular structures. The model includes parameters describing the elastic and viscous behavior of hollow structures, directly correlated with the experimental results. Post-processing of the experimental data provided information about the nonlinear elastic and time-dependent behavior of the urethral duct. In detail, statistically representative pressure-volume and pressure-relaxation curves were identified and summarized by structural parameters. Considering elastic properties, initial stiffness ranged from 0.677 ± 0.026 kPa to 0.262 ± 0.006 kPa moving from the proximal to the distal region of the penile urethra. Viscous parameters showed values typical of soft biological tissues: τ1 = 0.153 ± 0.018 s, τ2 = 17.458 ± 1.644 s for the proximal region, and τ1 = 0.201 ± 0.085 s, τ2 = 8.514 ± 1.379 s for the distal region. A general procedure for the mechanical characterization of the urethral duct has been provided. The proposed methodology allows identification of mechanical parameters that properly express the mechanical behavior of the biological tube, and is especially suitable for evaluating the influence of degenerative phenomena on the mechanical functionality of the lower urinary tract. This information is essential for the optimal design of potential surgical procedures and devices. Copyright © 2017 Elsevier B.V. All rights reserved.
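The two reported time constants suggest a two-term exponential (Prony-type) relaxation response. The sketch below evaluates such a model using the reported proximal time constants; the amplitudes `a1`, `a2` and the equilibrium pressure `p_inf` are illustrative placeholders, not values from the study.

```python
import math

def pressure_relaxation(t, p_inf, a1, tau1, a2, tau2):
    """Two-term exponential (Prony-type) relaxation: pressure decays from
    p_inf + a1 + a2 at t = 0 toward the equilibrium value p_inf."""
    return p_inf + a1 * math.exp(-t / tau1) + a2 * math.exp(-t / tau2)

# Reported proximal time constants; amplitudes are illustrative only.
tau1, tau2 = 0.153, 17.458      # seconds
p0 = pressure_relaxation(0.0, 1.0, 0.5, tau1, 0.5, tau2)
p_late = pressure_relaxation(300.0, 1.0, 0.5, tau1, 0.5, tau2)
print(p0, round(p_late, 3))  # 2.0 at t = 0, ~1.0 after the 300 s hold
```

With τ2 ≈ 17.5 s, a 300 s constant-volume hold spans more than 17 time constants of the slower process, consistent with the protocol's aim of letting relaxation fully develop.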
Optimal strategy analysis based on robust predictive control for inventory system with random demand
NASA Astrophysics Data System (ADS)
Saputra, Aditya; Widowati, Sutrisno
2017-12-01
In this paper, the optimal strategy for a single-product, single-supplier inventory system with random demand is analyzed using robust predictive control with an additive random parameter. We formulate the dynamics of this system as a linear state-space model with an additive random parameter. To determine and analyze the optimal strategy for the given inventory system, we use a robust predictive control approach, which yields the optimal strategy, i.e., the optimal product volume that should be purchased from the supplier in each time period so that the expected cost is minimal. A numerical simulation is performed with generated random inventory data. The simulation, implemented in MATLAB, controls the inventory level as closely as possible to a chosen set point. The results show that the robust predictive control model provides the optimal strategy, i.e., the optimal product volume to purchase, and that the inventory level tracked the given set point.
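The paper's robust predictive controller is not specified in detail in the abstract. As a rough illustration of the setting only, the sketch below implements a simple one-step receding-horizon rule for the linear inventory dynamics x_{k+1} = x_k + u_k − d_k with random demand; all numbers (set point, demand statistics, purchase limit) are invented for the example.

```python
import random

def order_quantity(inventory, set_point, mean_demand, u_max):
    """One-step receding-horizon rule: order just enough so the *expected*
    next inventory level equals the set point, within purchase limits."""
    u = set_point - inventory + mean_demand
    return max(0.0, min(u, u_max))

random.seed(1)
set_point, mean_demand = 100.0, 20.0
x = 40.0                               # initial inventory level
levels = []
for _ in range(50):
    u = order_quantity(x, set_point, mean_demand, u_max=60.0)
    d = random.gauss(mean_demand, 5.0)  # random demand d_k
    x = x + u - d                       # x_{k+1} = x_k + u_k - d_k
    levels.append(x)
avg = sum(levels[10:]) / len(levels[10:])
print(round(avg, 1))  # settles near the set point of 100
```

A full robust predictive controller would optimize over a longer horizon and hedge against the worst-case realization of the random parameter; the one-step rule above only shows why the expected inventory level tracks the set point.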
Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging.
Tremsin, Anton S; Perrodin, Didier; Losko, Adrian S; Vogel, Sven C; Bourke, Mark A M; Bizarri, Gregory A; Bourret, Edith D
2017-04-20
Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of "blind" processes. This technique is demonstrated for the Bridgman-type crystal growth enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between liquid and solid phases are monitored in real-time, concurrently with the measurement of elemental distribution within the growth volume and with the identification of structural features with a ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes.
Photovoltaic module encapsulation design and materials selection, volume 1
NASA Technical Reports Server (NTRS)
Cuddihy, E.; Carroll, W.; Coulbert, C.; Gupta, A.; Liang, R. H.
1982-01-01
Encapsulation material system requirements, material selection criteria, and the status and properties of available encapsulation materials and processes are presented. Technical and economic goals established for photovoltaic modules and encapsulation systems, and their status, are described. Available encapsulation technology and data are presented to facilitate design and material selection for silicon flat-plate photovoltaic modules, using the best materials available and processes optimized for specific power applications and geographic sites. The operational and environmental loads that define encapsulation-system functional requirements are described, together with the candidate design concepts and materials identified as having the best potential to meet the cost and performance goals of the flat-plate solar array project. Available data on encapsulant material properties, fabrication processing, and module life and durability characteristics are presented.
Liu, Jun-Guo; Xing, Jian-Min; Chang, Tian-Shi; Liu, Hui-Zhou
2006-03-01
Nattokinase is a novel fibrinolytic enzyme considered to be a promising agent for thrombosis therapy. In this study, reverse micelle extraction was applied to purify and concentrate nattokinase from fermentation broth. The effects of temperature and phase volume ratio on the forward and backward extraction steps were examined. The optimal temperatures for forward and backward extraction were 25 degrees C and 35 degrees C, respectively. Nattokinase became more thermosensitive during reverse micelle extraction, and it could be enriched eight-fold in the stripping phase during backward extraction. It was found that nattokinase could be purified by AOT reverse micelles with up to 80% activity recovery and a purification factor of 3.9.
Development of Solvent Extraction Approach to Recycle Enriched Molybdenum Material
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tkac, Peter; Brown, M. Alex; Sen, Sujat
2016-06-01
Argonne National Laboratory, in cooperation with Oak Ridge National Laboratory and NorthStar Medical Technologies, LLC, is developing a recycling process for a solution containing valuable Mo-100 or Mo-98 enriched material. Previously, Argonne had developed a recycling process using a precipitation technique. However, this process is labor intensive and can produce large volumes of highly corrosive waste. This report discusses an alternative process to recover enriched Mo in the form of ammonium heptamolybdate by solvent extraction. Small-scale experiments determined the optimal conditions for effective extraction of high Mo concentrations. Methods were developed for removal of ammonium chloride from the molybdenum product of the solvent extraction process. In large-scale experiments, very good purification from potassium and other elements was observed, with very high recovery yields (~98%).
Gjoka, Xhorxhi; Gantier, Rene; Schofield, Mark
2017-01-20
The goal of this study was to adapt a batch mAb purification chromatography platform for continuous operation. The experiments and rationale used to convert from batch to continuous operation are described. Experimental data were used to design chromatography methods for continuous operation that would exceed the thresholds for critical quality attributes and minimize the consumables required compared with batch operation. Four unit operations, comprising Protein A capture, viral inactivation, flow-through anion exchange (AEX), and mixed-mode cation exchange chromatography (MMCEX), were integrated across two Cadence BioSMB PD multi-column chromatography systems in order to process a 25 L volume of harvested cell culture fluid (HCCF) in less than 12 h. Transfer from batch to continuous operation increased the productivity of the Protein A step from 13 to 50 g/L/h and of the MMCEX step from 10 to 60 g/L/h, with no impact on purification performance in terms of contaminant removal (4.5 log reduction of host cell proteins, 50% reduction in soluble product aggregates) and an overall chromatography recovery yield of 75%. The increase in productivity, combined with continuous operation, reduced the resin volume required for Protein A and MMCEX chromatography by more than 95% compared with batch. The volume of AEX membrane required for flow-through operation was reduced by 74%. Moreover, the continuous process required 44% less buffer than an equivalent batch process. This significant reduction in consumables enables cost-effective, disposable, single-use manufacturing. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Software For Nearly Optimal Packing Of Cargo
NASA Technical Reports Server (NTRS)
Fennel, Theron R.; Daughtrey, Rodney S.; Schwaab, Doug G.
1994-01-01
The PACKMAN computer program finds nearly optimal arrangements of cargo items in storage containers, subject to multiple packing objectives such as utilization of container volume, utilization of containers up to their weight limits, and other considerations. The automatic packing algorithm attempts to find the best positioning of cargo items in a container such that both the volume and the weight capacity of the container are utilized to the maximum extent possible. Written in Common LISP.
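PACKMAN's actual algorithm is not described in detail here. As a rough illustration of packing under joint volume and weight limits, the following is a first-fit-decreasing sketch; the item data and container capacities are invented for the example.

```python
def pack(items, vol_cap, wt_cap):
    """First-fit-decreasing sketch: sort items by volume and place each in
    the first open container with room left in both volume and weight."""
    bins = []  # each bin: [used_volume, used_weight, item_list]
    for vol, wt in sorted(items, reverse=True):
        for b in bins:
            if b[0] + vol <= vol_cap and b[1] + wt <= wt_cap:
                b[0] += vol; b[1] += wt; b[2].append((vol, wt))
                break
        else:  # no existing container fits: open a new one
            bins.append([vol, wt, [(vol, wt)]])
    return bins

items = [(4, 2), (3, 5), (2, 1), (5, 3), (1, 1)]  # (volume, weight) pairs
bins = pack(items, vol_cap=7, wt_cap=6)
print(len(bins))  # -> 3 containers for this toy item set
```

Greedy heuristics like this give near-optimal, not provably optimal, packings, which matches the "nearly optimal arrangements" wording; a production packer would also track 3-D item positions and orientations rather than aggregate volumes.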
Optimal and fast rotational alignment of volumes with missing data in Fourier space.
Shatsky, Maxim; Arbelaez, Pablo; Glaeser, Robert M; Brenner, Steven E
2013-11-01
Electron tomography of intact cells has the potential to reveal the entire cellular content at a resolution corresponding to individual macromolecular complexes. Characterization of macromolecular complexes in tomograms is nevertheless an extremely challenging task due to the high level of noise, and due to the limited tilt angle that results in missing data in Fourier space. By identifying particles of the same type and averaging their 3D volumes, it is possible to obtain a structure at a more useful resolution for biological interpretation. Currently, classification and averaging of sub-tomograms is limited by the speed of computational methods that optimize alignment between two sub-tomographic volumes. The alignment optimization is hampered by the fact that the missing data in Fourier space has to be taken into account during the rotational search. A similar problem appears in single particle electron microscopy where the random conical tilt procedure may require averaging of volumes with a missing cone in Fourier space. We present a fast implementation of a method guaranteed to find an optimal rotational alignment that maximizes the constrained cross-correlation function (cCCF) computed over the actual overlap of data in Fourier space. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
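A minimal sketch of a cross-correlation constrained to the overlap of the observed Fourier regions might look like the following; the rotational search itself is omitted, and the volumes and masks are synthetic stand-ins for real sub-tomograms with missing wedges.

```python
import numpy as np

def constrained_cc(vol_a, vol_b, mask_a, mask_b):
    """Constrained cross-correlation of two volumes, computed only over
    the region of Fourier space where both datasets were actually
    observed (the intersection of their missing-data masks)."""
    overlap = mask_a & mask_b
    fa = np.fft.fftn(vol_a)[overlap]
    fb = np.fft.fftn(vol_b)[overlap]
    # Normalise each restricted spectrum so the score lies in [-1, 1].
    fa = fa - fa.mean()
    fb = fb - fb.mean()
    num = np.real(np.sum(fa * np.conj(fb)))
    den = np.sqrt(np.sum(np.abs(fa) ** 2) * np.sum(np.abs(fb) ** 2))
    return num / den

rng = np.random.default_rng(0)
v = rng.standard_normal((8, 8, 8))
full = np.ones((8, 8, 8), dtype=bool)   # no missing data in this toy case
score = constrained_cc(v, v, full, full)
print(round(score, 3))  # identical volumes -> 1.0
```

In an actual alignment search this score would be evaluated for each candidate rotation, with the masks rotated along with the volumes so the overlap region is recomputed per orientation; the paper's contribution is doing that search both fast and with a guarantee of optimality.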
Salahinejad, Maryam; Aflaki, Fereydoon
2011-06-01
Dispersive liquid-liquid microextraction followed by inductively coupled plasma-optical emission spectrometry has been investigated for determination of Cd(II) ions in water samples. Ammonium pyrrolidine dithiocarbamate was used as chelating agent. Several factors influencing the microextraction efficiency of Cd(II) ions, such as extracting and dispersing solvent type and their volumes, pH, sample volume, and salting effect, were optimized. The optimization was performed both via one-variable-at-a-time and central composite design methods, and the optimum conditions were selected. Both optimization methods showed nearly the same results: sample size 5 mL; dispersive solvent ethanol; dispersive solvent volume 2 mL; extracting solvent chloroform; extracting solvent volume 200 μL; pH and salt amount do not significantly affect the microextraction efficiency. The limits of detection and quantification were 0.8 and 2.5 ng/L, respectively. The relative standard deviation for five replicate measurements of 0.50 mg/L of Cd(II) was 4.4%. The recoveries for spiked real samples from tap, mineral, river, dam, and sea waters ranged from 92.2% to 104.5%.
A Nonlinear Model for Fuel Atomization in Spray Combustion
NASA Technical Reports Server (NTRS)
Liu, Nan-Suey (Technical Monitor); Ibrahim, Essam A.; Sree, Dave
2003-01-01
Most gas turbine combustion codes rely on ad hoc statistical assumptions regarding the outcome of fuel atomization processes. The modeling effort proposed in this project is aimed at developing a realistic model that produces accurate predictions of fuel atomization parameters. The model applies nonlinear stability theory to analyze the instability and subsequent disintegration of the liquid fuel sheet produced by fuel injection nozzles in gas turbine combustors. The fuel sheet is atomized into a multiplicity of small drops with a large surface-area-to-volume ratio to enhance the evaporation rate and combustion performance. The proposed model will provide predictions of fuel sheet atomization parameters such as drop size, velocity, and orientation, as well as sheet penetration depth, breakup time, and thickness. These parameters are essential for combustion simulation codes to perform a controlled and optimized design of gas turbine fuel injectors. Optimizing fuel injection processes is crucial to improving combustion efficiency and hence reducing fuel consumption and pollutant emissions.
NASA Astrophysics Data System (ADS)
Sun, Dongya; Gao, Yifan; Hou, Dianxun; Zuo, Kuichang; Chen, Xi; Liang, Peng; Zhang, Xiaoyuan; Ren, Zhiyong Jason; Huang, Xia
2018-04-01
Recovery of nutrient resources from wastewater is now an essential strategy to maintain the supply of both nutrients and water for a growing population. However, the intensive energy consumption of conventional nutrient recovery technologies remains the bottleneck to sustainable nutrient recycling. This study proposes an enlarged microbial nutrient recovery cell (EMNRC) that is powered by the energy contained in wastewater and achieves multi-cycle nutrient recovery integrated with in situ wastewater treatment. With the optimal recovery solution of 3 g/L NaCl and the optimal wastewater-to-recovery-solution volume ratio of 10:1, >89% of phosphorus and >62% of ammonium nitrogen were recovered as struvite. An extremely low water input ratio of <1% was required to obtain the recovered fertilizer and the purified water. The EMNRC system proved to be a promising technology that utilizes the chemical energy contained in wastewater itself and recovers nutrients in an energy-neutral manner during continuous wastewater purification.
Ethnic and Gender Considerations in the Use of Facial Injectables: Asian Patients.
Liew, Steven
2015-11-01
Asians have distinct facial characteristics due to underlying skeletal and morphological features that differ greatly from those of whites. This, together with a higher sun protection factor and differences in the quality of the skin and soft tissue, has a profound effect on the aging process. Understanding these differences and their effects on aging in Asians is crucial for effective utilization and placement of injectable products to ensure optimal aesthetic outcomes. For younger Asian women, the main treatment goal is to address inherent structural deficits through reshaping and the provision of facial support. Facial injectables are used to provide anterior projection, reduce facial width, and lengthen facial height. In the older group, the aim is rejuvenation and also to address the underlying structural issues that have been compounded by age-related volume loss. Asian women requesting cosmetic procedures do not want to be Westernized; rather, they seek to enhance and optimize their Asian ethnic features.
Microfluidic Separation of Ethylene and Ethane Using Frustrated Lewis Pairs.
Voicu, Dan; Stephan, Douglas W; Kumacheva, Eugenia
2015-12-21
Separation of gaseous olefins and paraffins is one of the most important separation processes in industry. Development of new cost-effective technologies aims at reducing the high energy consumption of the separation process. Here, we took advantage of the reaction of frustrated Lewis pairs (FLPs) with ethylene to achieve reactive extraction of ethylene from ethylene-ethane mixtures. The extraction was studied using a microfluidic platform, which enabled rapid, high-throughput assessment of reaction conditions to optimize gas separation efficiency. A separation factor of 7.3 was achieved for ethylene from a 1:1 (by volume) mixture of ethylene and ethane, corresponding to an extracted ethylene purity of 88%. The results obtained in the microfluidic studies were validated using infrared spectroscopy. This work paves the way for further development of the FLPs and optimization of reaction conditions, thereby maximizing the separation efficiency of olefins from their mixtures with paraffins. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
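The quoted purity follows directly from the separation factor: with α defined as (y/(1−y))/(x/(1−x)) and a 1:1 feed (x = 0.5), α = 7.3 gives y = 7.3/8.3 ≈ 88%. A few lines of arithmetic verify this:

```python
def extract_purity(alpha, feed_frac):
    """Purity of the extracted component given separation factor alpha,
    defined as alpha = (y/(1-y)) / (x/(1-x)), solved for y."""
    odds = alpha * feed_frac / (1.0 - feed_frac)
    return odds / (1.0 + odds)

purity = 100 * extract_purity(7.3, 0.5)
print(round(purity))  # -> 88 (% ethylene purity from a 1:1 mixture)
```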
Coupling of RF antennas to large volume helicon plasma
NASA Astrophysics Data System (ADS)
Chang, Lei; Hu, Xinyue; Gao, Lei; Chen, Wei; Wu, Xianming; Sun, Xinfeng; Hu, Ning; Huang, Chongxiang
2018-04-01
Large volume helicon plasma sources are of particular interest for large-scale semiconductor processing, high-power plasma propulsion, and, recently, plasma-material interaction under fusion conditions. This work studies the coupling of four typical RF antennas to a helicon plasma of infinite length and 0.5 m diameter, and explores its frequency dependence in the range 13.56-70 MHz for coupling optimization. It is found that the loop antenna is more efficient than the half-helix, Boswell, and Nagoya III antennas for power absorption; a radially parabolic density profile outperforms a Gaussian density profile in terms of antenna coupling for low-density plasma, but the superiority reverses for high-density plasma. Increasing the driving frequency shifts power absorption toward the plasma edge, but the overall power absorption increases with frequency. Perpendicular stream plots of the wave magnetic field, wave electric field, and perturbed current are also presented. This work can serve as an important reference for the experimental design of large volume helicon plasma sources with high RF power.
Rapid Airplane Parametric Input Design(RAPID)
NASA Technical Reports Server (NTRS)
Smith, Robert E.; Bloor, Malcolm I. G.; Wilson, Michael J.; Thomas, Almuttil M.
2004-01-01
An efficient methodology is presented for defining a class of airplane configurations. Inclusive in this definition are surface grids, volume grids, and grid sensitivity. A small set of design parameters and grid control parameters govern the process. The general airplane configuration has wing, fuselage, vertical tail, horizontal tail, and canard components. The wing, tail, and canard components are manifested by solving a fourth-order partial differential equation subject to Dirichlet and Neumann boundary conditions. The design variables are incorporated into the boundary conditions, and the solution is expressed as a Fourier series. The fuselage has circular cross section, and the radius is an algebraic function of four design parameters and an independent computational variable. Volume grids are obtained through an application of the Control Point Form method. Grid sensitivity is obtained by applying the automatic differentiation precompiler ADIFOR to software for the grid generation. The computed surface grids, volume grids, and sensitivity derivatives are suitable for a wide range of Computational Fluid Dynamics simulation and configuration optimizations.
Optimal Observations for Variational Data Assimilation
NASA Technical Reports Server (NTRS)
Koehl, Armin; Stammer, Detlef
2003-01-01
An important aspect of ocean state estimation is the design of an observing system that allows the efficient study of climate aspects of the ocean. A solution of the design problem is presented here in terms of optimal observations that emerge as nondimensionalized singular vectors of the modified data resolution matrix. The actual computation is feasible only for scalar quantities in the limit of large observational errors. In the framework of a low-resolution North Atlantic primitive equation model it is demonstrated that such optimal observations, when applied to determining the strength of the volume and heat transport across the Greenland-Scotland ridge, perform significantly better than traditional section data. On seasonal to interannual time scales, optimal observations are located primarily along the continental shelf, and information about heat transport, wind stress, and stratification is communicated via boundary waves and advective processes. On time scales of about a month, sea surface height observations appear to be more efficient in reconstructing the cross-ridge heat transport than hydrographic observations. Optimal observations also provide a tool for understanding how the ocean state is affected by anomalies of integral quantities such as meridional heat transport.
Optimization of uncatalyzed steam explosion pretreatment of rapeseed straw for biofuel production.
López-Linares, Juan C; Ballesteros, Ignacio; Tourán, Josefina; Cara, Cristóbal; Castro, Eulogio; Ballesteros, Mercedes; Romero, Inmaculada
2015-08-01
Rapeseed straw constitutes an agricultural residue with great potential as a feedstock for ethanol production. In this work, uncatalyzed steam explosion was used as a pretreatment to increase the enzymatic digestibility of rapeseed straw. Experimental statistical design and response surface methodology were used to evaluate the influence of temperature (185-215°C) and process time (2.5-7.5 min). According to the rotatable central composite design applied, 215°C and 7.5 min were confirmed as the optimal conditions, with maximization of the enzymatic hydrolysis yield as the optimization criterion. These conditions led to a maximum yield of 72.3%, equivalent to 81% of the potential glucose in the pretreated solid. Different configurations for bioethanol production from steam-exploded rapeseed straw were investigated using the pretreated solid obtained under optimal conditions as a substrate. Notably, ethanol concentrations as high as 43.6 g/L (5.5% by volume) were obtained using a 20% (w/v) solid loading, equivalent to 12.4 g ethanol/100 g biomass. Copyright © 2015 Elsevier Ltd. All rights reserved.
Nalichowski, Adrian; Burmeister, Jay
2013-07-01
To compare optimization characteristics, plan quality, and treatment delivery efficiency between total marrow irradiation (TMI) plans using the new TomoTherapy graphics processing unit (GPU) based dose engine and the CPU/cluster based dose engine. Five TMI plans created on an anthropomorphic phantom were optimized and calculated with both dose engines. The planning target volume (PTV) included all bones from head to mid-femur except the upper extremities. Evaluated organs at risk (OARs) consisted of the lungs, liver, heart, kidneys, and brain. The following treatment parameters were used to generate the TMI plans: field widths of 2.5 and 5 cm, modulation factors of 2 and 2.5, and pitch of either 0.287 or 0.43. The optimization parameters were chosen based on the PTV and OAR priorities, and the plans were optimized with a fixed number of iterations. The PTV constraint was selected to ensure that at least 95% of the PTV received the prescription dose. The plans were evaluated based on D80 and D50 (dose to 80% and 50% of the OAR volume, respectively) and hotspot volumes within the PTVs. Gamma indices (Γ) were also used to compare planar dose distributions between the two modalities. The optimization, dose calculation, and treatment delivery times were compared between the two systems. The results showed very good dosimetric agreement between the GPU- and CPU-calculated plans for all evaluated planning parameters, indicating that both systems converge on nearly identical plans. All D80 and D50 parameters varied by less than 3% of the prescription dose, with an average difference of 0.8%. A gamma analysis with 3%/3 mm criteria showed that over 90% of calculated voxels in each GPU plan satisfied the Γ < 1 criterion relative to the baseline CPU plan; the average fraction of voxels meeting this criterion across all plans was 97%. In terms of optimization and dose calculation efficiency, there was a 20-fold reduction in planning time with the new GPU system: the average optimization/dose calculation time was 579 min with the traditional CPU/cluster based system versus 26.8 min with the GPU based system. There was no difference in the calculated treatment delivery time per fraction. Beam-on time varied with field width and pitch and ranged between 15 and 28 min. The TomoTherapy GPU based dose engine is capable of calculating TMI treatment plans with quality nearly identical to plans calculated using the traditional CPU/cluster based system, while significantly reducing the time required for optimization and dose calculation.
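The gamma comparison used above can be illustrated with a simplified one-dimensional version of the gamma index; the clinical analysis is performed in 2D/3D with interpolation, and the dose profile and tolerances below are purely illustrative.

```python
import numpy as np

def gamma_1d(ref, eval_dose, dx, dose_tol, dist_tol):
    """Simplified 1-D gamma index: for each reference point, the minimum
    over evaluated points of the combined dose-difference/distance metric.
    Gamma <= 1 means the point passes the (dose_tol, dist_tol) test."""
    xs = np.arange(len(ref)) * dx
    gammas = np.empty(len(ref))
    for i, (xr, dr) in enumerate(zip(xs, ref)):
        dd = (eval_dose - dr) / dose_tol    # normalized dose differences
        dist = (xs - xr) / dist_tol         # normalized spatial distances
        gammas[i] = np.sqrt(dd ** 2 + dist ** 2).min()
    return gammas

ref = np.array([1.0, 2.0, 3.0, 2.0, 1.0])   # toy reference dose profile
ev = ref + 0.01                              # small uniform dose offset
g = gamma_1d(ref, ev, dx=1.0, dose_tol=0.03, dist_tol=3.0)
rate = float((g <= 1.0).mean())
print(rate)  # pass rate 1.0: all points within 3% / 3 mm
```

The 3%/3 mm criterion quoted in the abstract corresponds to `dose_tol` being 3% of a reference dose and `dist_tol = 3` mm; a pass rate above 90%, as reported, is the conventional acceptance threshold.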
Li, Nan; Zarepisheh, Masoud; Uribe-Sanchez, Andres; Moore, Kevin; Tian, Zhen; Zhen, Xin; Graves, Yan Jiang; Gautier, Quentin; Mell, Loren; Zhou, Linghong; Jia, Xun; Jiang, Steve
2013-12-21
Adaptive radiation therapy (ART) can reduce normal tissue toxicity and/or improve tumor control through treatment adaptations based on the current patient anatomy. Developing an efficient and effective re-planning algorithm is an important step toward the clinical realization of ART. In the re-planning process, the manual trial-and-error approach to fine-tuning planning parameters is time-consuming and is usually considered impractical, especially for online ART. It is desirable to automate this step to yield a plan of acceptable quality with minimal intervention. In ART, prior information from the original plan, such as the dose-volume histogram (DVH), is available and can be employed to facilitate the automatic re-planning process. The goal of this work is to develop an automatic re-planning algorithm to generate a plan with similar, or possibly better, DVH curves compared with the clinically delivered original plan. Specifically, our algorithm iterates the following two loops. The inner loop is a traditional fluence map optimization, in which we minimize a quadratic objective function penalizing the deviation of the dose received by each voxel from its prescribed or threshold dose, with a set of fixed voxel weighting factors. In the outer loop, the voxel weighting factors in the objective function are adjusted according to the deviation of the current DVH curves from those in the original plan. The process is repeated until the DVH curves are acceptable or the maximum number of iterations is reached. The whole algorithm is implemented on GPU for high efficiency. The feasibility of our algorithm has been demonstrated with three head-and-neck cancer IMRT cases, each having an initial planning CT scan and another treatment CT scan acquired in the middle of the treatment course. Compared with the DVH curves in the original plan, the DVH curves in the resulting plan using our algorithm with 30 iterations are better for almost all structures. The re-optimization process takes about 30 s using our in-house optimization engine.
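The two-loop structure can be sketched in a toy form as follows. This is not the authors' GPU implementation: the dose model, the weight-update rule (a simple boost of underdosed voxels rather than a true DVH comparison), and all dimensions are simplified placeholders.

```python
import numpy as np

def replan(A, prescription, reference_dose, outer=10, inner=50, lr=0.02):
    """Toy two-loop re-planning. Inner loop: projected gradient descent on
    the weighted quadratic objective 0.5*sum_v w_v*(dose_v - p_v)^2 over a
    nonnegative fluence map x, with dose = A @ x. Outer loop: voxel weights
    are raised where the dose falls short of the reference (prior-plan)
    dose, a crude stand-in for DVH-guided weight adjustment."""
    n_vox, n_beam = A.shape
    x = np.zeros(n_beam)        # fluence map variables
    w = np.ones(n_vox)          # voxel weighting factors
    for _ in range(outer):
        for _ in range(inner):  # inner: fluence map optimization
            dose = A @ x
            grad = A.T @ (w * (dose - prescription))
            x = np.maximum(x - lr * grad, 0.0)  # fluence stays nonnegative
        dose = A @ x
        # Outer: boost weights of voxels lagging behind the reference dose.
        w *= 1.0 + np.clip((reference_dose - dose) / prescription, 0.0, 1.0)
        w /= w.mean()           # keep the overall weight scale fixed
    return x, A @ x

rng = np.random.default_rng(0)
A = rng.random((8, 20))          # toy dose matrix: 8 voxels, 20 beamlets
prescription = np.full(8, 2.0)
x, dose = replan(A, prescription, reference_dose=prescription)
print(round(float(np.abs(dose - prescription).max()), 3))
```

In the real algorithm the outer update compares whole DVH curves per structure rather than per-voxel doses, and both loops run on GPU, which is what brings the re-optimization down to about 30 s.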
Zhu, Feifei; Zhang, Qinglin; Qiu, Jiang
2013-01-01
Creativity can be defined as the capacity of an individual to produce something original and useful. An important measurable component of creativity is divergent thinking. Despite existing studies on the cerebral structural basis of creativity, no study has used a large sample to investigate the relationship between individual verbal creativity and regional gray matter volumes (GMVs) and white matter volumes (WMVs). In the present work, optimal voxel-based morphometry (VBM) was employed to identify the brain structures that correlate with verbal creativity (measured by the verbal form of the Torrance Tests of Creative Thinking) in young healthy subjects. Verbal creativity was found to be significantly positively correlated with regional GMV in the left inferior frontal gyrus (IFG), which is believed to be responsible for language production and comprehension, new semantic representation, and memory retrieval, and in the right IFG, which may be involved in inhibitory control and attention switching. A relationship between verbal creativity and regional WMV in the left and right IFG was also observed. Overall, a highly verbally creative individual with superior verbal skills may demonstrate greater computational efficiency in the brain areas involved in high-level cognitive processes, including language production, semantic representation, and cognitive control. PMID:24223921
Razmi, Rasoul; Shahpari, Behrouz; Pourbasheer, Eslam; Boustanifar, Mohammad Hasan; Azari, Zhila; Ebadi, Amin
2016-11-01
A rapid and simple method for the extraction and preconcentration of ceftazidime in aqueous samples has been developed using dispersive liquid-liquid microextraction followed by high-performance liquid chromatography analysis. The extraction parameters, such as the volume of extraction solvent and disperser solvent, salt effect, sample volume, centrifuge rate, centrifuge time, extraction time, and temperature in the dispersive liquid-liquid microextraction process, were studied and optimized with experimental design methods. First, the Taguchi design was used for preliminary screening of the parameters, and the fractional factorial design was then used to optimize the significant factors. At the optimum conditions, the calibration curves for ceftazidime showed good linearity over the range of 0.001-10 μg/mL with correlation coefficients higher than 0.98, and the limits of detection were 0.13 and 0.17 ng/mL for water and urine samples, respectively. The proposed method was successfully employed to determine ceftazidime in water and urine samples, and good agreement between the experimental data and predicted values was achieved. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
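As a sketch of the linearity check reported above, the snippet below fits an ordinary least-squares calibration line and derives a detection limit with the common 3.3·σ/slope rule. The peak-area data and the choice of LOD formula are illustrative assumptions, not values from the study.

```python
import numpy as np

# Hypothetical calibration data for illustration only: spiked ceftazidime
# concentrations (ug/mL) vs. chromatographic peak area (arbitrary units).
conc = np.array([0.001, 0.01, 0.1, 0.5, 1.0, 5.0, 10.0])
area = np.array([0.12, 1.1, 10.3, 51.0, 99.0, 505.0, 998.0])

slope, intercept = np.polyfit(conc, area, 1)   # ordinary least squares
pred = slope * conc + intercept
r = np.corrcoef(conc, area)[0, 1]              # linearity check

# One common estimate (ICH-style; the paper does not state its formula):
# LOD = 3.3 * sigma / slope, sigma = residual standard deviation of the fit.
sigma = np.std(area - pred, ddof=2)
lod = 3.3 * sigma / slope
```

A correlation coefficient above 0.98, as in the abstract, would pass this linearity check.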
NASA Astrophysics Data System (ADS)
Gutiérrez, J. M.; Natxiondo, A.; Nieves, J.; Zabala, A.; Sertucha, J.
2017-04-01
The study of shrinkage incidence variations in nodular cast irons is an important aspect of manufacturing processes. These variations change the feeding requirements on castings, and the optimization of risers' size is consequently affected when avoiding the formation of shrinkage defects. The effect of a number of processing variables on the shrinkage size has been studied using a layout specifically designed for this purpose. The β parameter has been defined as the relative volume reduction from the pouring temperature down to room temperature. It is observed that shrinkage size and β decrease as effective carbon content increases and when inoculant is added in the pouring stream. A similar effect is found when the parameters selected from cooling curves show high graphite nucleation during solidification of cast irons for a given inoculation level. Pearson statistical analysis has been used to analyze the correlations among all involved variables, and a group of Bayesian networks has been subsequently built to obtain the most accurate model for predicting β as a function of the input processing variables. The developed models can be used in foundry plants to study the shrinkage incidence variations in the manufacturing process and to optimize the related costs.
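The Pearson screening step can be illustrated as below; the variables, their ranges, and the assumed linear effects on β are synthetic stand-ins for the foundry data, intended only to show how inputs might be ranked before building the Bayesian networks.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50  # hypothetical melt batches

# Synthetic processing variables (illustrative, not foundry data).
carbon_eq = rng.normal(4.3, 0.1, n)          # effective carbon content
inoculant = rng.uniform(0.0, 0.4, n)         # inoculant addition (%)
pour_temp = rng.normal(1400.0, 15.0, n)      # pouring temperature (C)

# Assume beta (relative volume reduction) falls as carbon content and
# inoculation increase, as reported in the abstract.
beta = (0.05 - 0.02 * (carbon_eq - 4.3) - 0.03 * inoculant
        + rng.normal(0.0, 0.002, n))

X = np.column_stack([carbon_eq, inoculant, pour_temp, beta])
names = ["C_eq", "inoculant", "T_pour", "beta"]
corr = np.corrcoef(X, rowvar=False)          # Pearson correlation matrix

# Rank inputs by |r| against beta to pre-select Bayesian-network candidates.
ranking = sorted(zip(names[:-1], corr[:-1, -1]), key=lambda t: -abs(t[1]))
```

In this synthetic setup the inoculant addition correlates negatively with β, consistent with the reported trend.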
Optimization of HPV DNA detection in urine by improving collection, storage, and extraction.
Vorsters, A; Van den Bergh, J; Micalessi, I; Biesmans, S; Bogers, J; Hens, A; De Coster, I; Ieven, M; Van Damme, P
2014-11-01
The benefits of using urine for the detection of human papillomavirus (HPV) DNA have been evaluated in disease surveillance, epidemiological studies, and screening for cervical cancers in specific subgroups. HPV DNA testing in urine is being considered for important purposes, notably the monitoring of HPV vaccination in adolescent girls and young women who do not wish to have a vaginal examination. The need to optimize and standardize sampling, storage, and processing has been reported. In this paper, we examined the impact of a DNA-conservation buffer, the extraction method, and urine sampling on the detection of HPV DNA and human DNA in urine provided by 44 women with a cytologically normal but HPV DNA-positive cervical sample. Ten women provided first-void and midstream urine samples. DNA analysis was performed using real-time PCR to allow quantification of HPV and human DNA. The results showed that an optimized method for HPV DNA detection in urine should (a) prevent DNA degradation during extraction and storage, (b) recover cell-free HPV DNA in addition to cell-associated DNA, (c) process a sufficient volume of urine, and (d) use a first-void sample. In addition, we found that detectable human DNA in urine may not be a good internal control for sample validity. HPV prevalence data that are based on urine samples collected, stored, and/or processed under suboptimal conditions may underestimate infection rates.
Gong, Chenhao; Zhang, Zhongguo; Li, Haitao; Li, Duo; Wu, Baichun; Sun, Yuwei; Cheng, Yanjun
2014-06-15
The electrocoagulation (EC) process was used to pretreat wastewater from the manufacture of wet-spun acrylic fibers, and the effects of varying the operating parameters, including the electrode area/wastewater volume (A/V) ratio, current density, interelectrode distance and pH, on the EC treatment process were investigated. About 44% of the total organic carbon was removed using the optimal conditions in a 100 min procedure. The optimal conditions were a current density of 35.7 mA cm(-2), an A/V ratio of 0.28 cm(-1), a pH of 5, and an interelectrode distance of 0.8 cm. The biodegradability of the contaminants in the treated water was improved by the EC treatment (using the optimal conditions), increasing the five-day biological oxygen demand/chemical oxygen demand ratio to 0.35, which could improve the effectiveness of subsequent biological treatments. The improvement in the biodegradability of the contaminants in the wastewater was attributed to the removal and degradation of aromatic organic compounds, straight-chain paraffins, and other organic compounds, which we identified using gas chromatography-mass spectrometry and Fourier transform infrared spectroscopy. The EC process was proven to be an effective alternative pretreatment for wastewater from the manufacture of wet-spun acrylic fibers, prior to biological treatments. Copyright © 2014 Elsevier B.V. All rights reserved.
Using price-volume agreements to manage pharmaceutical leakage and off-label promotion.
Zhang, Hui; Zaric, Gregory S
2015-09-01
Unapproved or "off-label" uses of prescription drugs are quite common. The extent of this use may be influenced by the promotional efforts of manufacturers. This paper investigates how a manufacturer makes promotional decisions in the presence of a price-volume agreement. We developed an optimization model in which the manufacturer maximizes its expected profit by choosing the level of marketing effort to promote uses for different indications. We considered several ways a volume threshold is determined. We also compared models in which off-label uses are reimbursed and those in which they are forbidden to illustrate the impact of off-label promotion on the optimal decisions and on the decision maker's performance. We found that the payer chooses a threshold which may be the same as the manufacturer's optimal decision. We also found that the manufacturer not only considers the promotional cost in promoting off-label uses but also considers the health benefit of off-label uses. In some situations, using a price-volume agreement to control leakage may be a better idea than simply preventing leakage without using the agreement, from a social welfare perspective.
NASA Astrophysics Data System (ADS)
Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua
2015-08-01
The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. 
The average passing rates using the reoptimized beam model increased substantially from 92.1% to 99.3% with 3%/3 mm and from 79.2% to 95.2% with 2%/2 mm when compared with the CC13 beam model. These results show the effectiveness of the proposed method. Less inter-user variability can be expected of the final beam model. It is also found that the method can be easily integrated into model-based TPS.
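The core idea above, matching convolved calculated profiles to chamber-measured profiles rather than deconvolving the measurement, can be sketched as follows. The profile shape and the uniform 6 mm detector kernel are illustrative assumptions (a real CC13 response is closer to a 2D/3D cavity average):

```python
import numpy as np

# Idealized "real" cross-beam profile on a 0.5 mm grid: a 60 mm field with
# sigmoid penumbrae (hypothetical, for illustration only).
step = 0.5
x = np.arange(-50.0, 50.0, step)                     # off-axis position, mm
real = 1.0 / (1.0 + np.exp((np.abs(x) - 30.0) / 1.5))

# Detector response modeled (as an assumption) as a uniform kernel whose
# width matches a ~6 mm chamber cavity.
kernel = np.ones(int(6.0 / step))
kernel /= kernel.sum()
measured = np.convolve(real, kernel, mode="same")    # what the chamber sees

def penumbra_80_20(profile, x):
    """80%-20% width of the rising (left) edge."""
    mask = x < 0
    edge, xi = profile[mask], x[mask]                # monotonically rising
    return abs(np.interp(0.8, edge, xi) - np.interp(0.2, edge, xi))

# The chamber reports a broadened penumbra; optimizing the beam model so its
# *convolved* profile matches `measured` leaves the underlying model at the
# real width, which is the point of the proposed method.
broadening = penumbra_80_20(measured, x) - penumbra_80_20(real, x)
```

The positive `broadening` value is exactly the volume averaging effect that conventional approaches try to remove from the measurement.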
Design of transportation and distribution Oil Palm Trunk of (OPT) in Indonesia
NASA Astrophysics Data System (ADS)
Norita, Defi; Arkeman, Yandra
2018-03-01
This research was motivated by Indonesia's 13 million hectares of oil palm plantations, which yield an abundance of oil palm trunks when plantations are regenerated. If 4 percent of the area is replanted every year, almost 100 million cubic feet of oil palm trunk will become waste. Oil palm trunks can instead be processed into biomass pellets, which are then distributed back to the palm oil processing areas. The transportation cost of the ships and trucks used was defined as a parameter, so the objective function determined the type and number of ship and truck trips that provide the minimum transportation cost. To optimize the logistics transportation network in a regional port cluster, combining a hub-and-spoke transportation system among regional ports with consolidation and dispersal transportation systems between ports and their hinterlands, a nonlinear optimization model for a two-stage logistics system in a regional port cluster was introduced to simultaneously determine the following factors: the hinterlands serviced by individual ports and the transportation capacity operated between each port and its hinterland; the cargo transportation volume and corresponding transportation capacity allocated via a hub port from an origin port to a destination port; and the cargo transportation volume and corresponding transportation capacity allocated directly from an origin port to a destination port. Finally, a numerical example is given to demonstrate the application of the proposed model. It is shown that the solution to the proposed nonlinear model can be obtained by transforming it into linear programming models.
D'Elia, Marta; Perego, Mauro; Bochev, Pavel B.; ...
2015-12-21
We develop and analyze an optimization-based method for the coupling of nonlocal and local diffusion problems with mixed volume constraints and boundary conditions. The approach formulates the coupling as a control problem where the states are the solutions of the nonlocal and local equations, the objective is to minimize their mismatch on the overlap of the nonlocal and local domains, and the controls are virtual volume constraints and boundary conditions. When some assumptions on the kernel functions hold, we prove that the resulting optimization problem is well-posed and discuss its implementation using Sandia's agile software components toolkit. As a result, the latter provides the groundwork for the development of engineering analysis tools, while numerical results for nonlocal diffusion in three dimensions illustrate key properties of the optimization-based coupling method.
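The control formulation can be illustrated with a much-simplified, purely local 1D analogue (the paper couples a nonlocal operator to a local one; here both subdomain problems are local Poisson solves, which keeps the sketch short). The virtual boundary values are the controls, and minimizing the overlap mismatch recovers the global solution:

```python
import numpy as np

# Global problem: -u'' = 1 on (0,1), u(0)=u(1)=0; exact solution u=x(1-x)/2.
# Split into overlapping subdomains A=(0,0.6) and B=(0.4,1). The "controls"
# are the virtual boundary values cA=u_A(0.6) and cB=u_B(0.4); we choose them
# to minimize the mismatch of u_A and u_B on the overlap, mirroring the
# paper's control formulation (both subproblems local, purely for brevity).
h = 0.01

def solve(a, b, left, right):
    """Second-order FD solve of -u'' = 1 on (a,b) with Dirichlet data."""
    n = round((b - a) / h) - 1            # number of interior nodes
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
    rhs = h * h * np.ones(n)
    rhs[0] += left                        # boundary terms moved to the RHS
    rhs[-1] += right
    u = np.linalg.solve(A, rhs)
    return np.concatenate([[left], u, [right]])

def mismatch(cA, cB):
    uA = solve(0.0, 0.6, 0.0, cA)
    uB = solve(0.4, 1.0, cB, 0.0)
    return uA[40:61] - uB[0:21]           # overlap nodes x = 0.40 .. 0.60

# The state depends affinely on the controls, so the mismatch is linear in
# (cA, cB) and the optimal controls solve a 2-variable least-squares problem.
m0 = mismatch(0.0, 0.0)
gA = mismatch(1.0, 0.0) - m0              # sensitivity to cA
gB = mismatch(0.0, 1.0) - m0              # sensitivity to cB
G = np.column_stack([gA, gB])
cA, cB = np.linalg.lstsq(G, -m0, rcond=None)[0]
```

Because the discrete scheme is exact for the quadratic solution, both optimal controls equal the exact interface values u(0.4) = u(0.6) = 0.12 and the overlap mismatch vanishes.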
NASA Astrophysics Data System (ADS)
Han, Maeum; Keon Kim, Jae; Kong, Seong Ho; Kang, Shin-Won; Jung, Daewoong
2018-06-01
This paper reports a micro-electro-mechanical-system (MEMS)-based tilt sensor using an air medium. Since the working mechanism of the sensor is thermal convection in a sealed chamber, structural parameters that can affect thermal convection must be considered to optimize the performance of the sensor. This paper presents experimental results obtained by optimizing several parameters such as the heater geometry, input power and cavity volume. We observed that an increase in the heating power and cavity volume can improve the sensitivity, and that heater geometry plays an important role in the performance of the sensor.
Program to Optimize Simulated Trajectories (POST). Volume 2: Utilization manual
NASA Technical Reports Server (NTRS)
Bauer, G. L.; Cornick, D. E.; Habeger, A. R.; Petersen, F. M.; Stevenson, R.
1975-01-01
Information pertinent to users of the program to optimize simulated trajectories (POST) is presented. The input required and output available is described for each of the trajectory and targeting/optimization options. A sample input listing and resulting output are given.
Minimizing finite-volume discretization errors on polyhedral meshes
NASA Astrophysics Data System (ADS)
Mouly, Quentin; Evrard, Fabien; van Wachem, Berend; Denner, Fabian
2017-11-01
Tetrahedral meshes are widely used in CFD to simulate flows in and around complex geometries, as automatic generation tools now allow tetrahedral meshes to represent arbitrary domains in a relatively accessible manner. Polyhedral meshes, however, are an increasingly popular alternative. While tetrahedra have at most four neighbours, the higher number of neighbours per polyhedral cell leads to a more accurate evaluation of gradients, essential for the numerical resolution of PDEs. The use of polyhedral meshes, nonetheless, introduces discretization errors for finite-volume methods: skewness and non-orthogonality, which occur with all sorts of unstructured meshes, as well as errors due to non-planar faces, specific to polygonal faces with more than three vertices. Indeed, polyhedral mesh generation algorithms cannot, in general, guarantee to produce meshes free of non-planar faces. The presented work focuses on the quantification and optimization of discretization errors on polyhedral meshes in the context of finite-volume methods. A quasi-Newton method is employed to optimize the relevant mesh quality measures. Various meshes are optimized and CFD results of cases with known solutions are presented to assess the improvements the optimization approach can provide.
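One face-planarity measure of the kind discussed above can be computed directly: fit a least-squares plane to a face's vertices and take the maximum orthogonal deviation. This is a generic sketch; the specific quality measures optimized in the work may differ.

```python
import numpy as np

def non_planarity(verts):
    """Max distance of a polygonal face's vertices from its best-fit plane.

    verts: (n, 3) array of vertex coordinates, n >= 3. The best-fit plane
    passes through the centroid, with normal given by the singular vector
    of least variance of the centered vertex cloud.
    """
    v = np.asarray(verts, dtype=float)
    centered = v - v.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    normal = vt[-1]                      # direction of least variance
    return float(np.abs(centered @ normal).max())

# A planar quad has zero non-planarity; lifting one vertex introduces some.
flat = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
warped = [(0, 0, 0), (1, 0, 0), (1, 1, 0.2), (0, 1, 0)]
```

Triangular faces are always planar by this measure, which is why the error is specific to polygonal faces with more than three vertices.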
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, W; Shen, J; Stoker, J
2015-06-15
Purpose: To compare the impact of interplay effect on 3D and 4D robustly optimized intensity-modulated proton therapy (IMPT) plans to treat lung cancer. Methods: Two IMPT plans were created for 11 non-small-cell-lung-cancer cases with 6–14 mm spots. 3D robust optimization generated plans on average CTs with the internal gross tumor volume density overridden to deliver 66 CGyE in 33 fractions to the internal target volume (ITV). 4D robust optimization generated plans on 4D CTs with the delivery of prescribed dose to the clinical target volume (CTV). In 4D optimization, the CTV of individual 4D CT phases received non-uniform doses to achieve a uniform cumulative dose. Dose evaluation software was developed to model time-dependent spot delivery to incorporate interplay effect with randomized starting phases of each field per fraction. Patient anatomy voxels were mapped from phase to phase via deformable image registration to score doses. Indices from dose-volume histograms were used to compare target coverage, dose homogeneity, and normal-tissue sparing. DVH indices were compared using Wilcoxon test. Results: Given the presence of interplay effect, 4D robust optimization produced IMPT plans with better target coverage and homogeneity, but slightly worse normal tissue sparing compared to 3D robust optimization (unit: Gy) [D95% ITV: 63.5 vs 62.0 (p=0.014), D5% - D95% ITV: 6.2 vs 7.3 (p=0.37), D1% spinal cord: 29.0 vs 29.5 (p=0.52), Dmean total lung: 14.8 vs 14.5 (p=0.12), D33% esophagus: 33.6 vs 33.1 (p=0.28)]. The improvement of target coverage (D95%,4D – D95%,3D) was related to the ratio RMA^3/(TV×10^-4), with RMA and TV being respiratory motion amplitude (RMA) and tumor volume (TV), respectively. Peak benefit was observed at ratios between 2 and 10. This corresponds to 125 – 625 cm3 TV with 0.5-cm RMA. Conclusion: 4D optimization produced more interplay-effect-resistant plans compared to 3D optimization.
It is most effective when respiratory motion is modest compared to TV. NIH/NCI K25CA168984; Eagles Cancer Research Career Development; The Lawrence W. and Marilyn W. Matteson Fund for Cancer Research; Mayo ASU Seed Grant; The Kemper Marley Foundation.
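The reported predictor of 4D-optimization benefit is a simple ratio; the sketch below encodes the peak-benefit window quoted in the abstract.

```python
def interplay_benefit_ratio(rma_cm, tv_cm3):
    """Ratio RMA^3 / (TV * 1e-4) from the abstract, with respiratory motion
    amplitude (RMA) in cm and tumor volume (TV) in cm^3. Peak target-coverage
    benefit of 4D over 3D robust optimization was reported for ratios
    between 2 and 10."""
    return rma_cm ** 3 / (tv_cm3 * 1e-4)

# With 0.5 cm respiratory motion, the peak-benefit window of 2..10 maps to
# tumor volumes between 125 and 625 cm^3, matching the quoted range.
r_small = interplay_benefit_ratio(0.5, 625.0)   # lower end of the window
r_large = interplay_benefit_ratio(0.5, 125.0)   # upper end of the window
```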
Efficient Encoding and Rendering of Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu; Smith, Diann; Shih, Ming-Yun; Shen, Han-Wei
1998-01-01
Visualization of time-varying volumetric data sets, which may be obtained from numerical simulations or sensing instruments, provides scientists with insights into the detailed dynamics of the phenomenon under study. This paper describes a coherent solution based on quantization, coupled with octree and difference encoding, for visualizing time-varying volumetric data. Quantization is used to attain voxel-level compression and may have a significant influence on the performance of the subsequent encoding and visualization steps. Octree encoding is used for spatial-domain compression, and difference encoding for temporal-domain compression. In essence, neighboring voxels may be fused into macro voxels if they have similar values, and subtrees at consecutive time steps may be merged if they are identical. The software rendering process is tailored according to the tree structures and the volume visualization process. With the tree representation, selective rendering may be performed very efficiently. Additionally, the I/O costs are reduced. With these combined savings, a higher level of user interactivity is achieved. We have studied a variety of time-varying volume datasets, performed encoding based on data statistics, and optimized the rendering calculations wherever possible. Preliminary tests on workstations have shown, in many cases, reductions as high as 90% in both storage space and inter-frame delay.
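The quantization and temporal difference encoding can be sketched with a flat block grid standing in for the paper's octree (subtree merging is replaced by per-block change detection; block size and data are illustrative assumptions):

```python
import numpy as np

def quantize(vol, levels=256):
    """Uniform scalar quantization of a float volume to 8-bit indices."""
    lo, hi = vol.min(), vol.max()
    scale = (levels - 1) / (hi - lo) if hi > lo else 0.0
    return np.round((vol - lo) * scale).astype(np.uint8)

def diff_encode(frames, block=4):
    """Temporal difference encoding over fixed blocks: a block is re-stored
    only when it differs from the previous time step (a flat stand-in for
    merging identical octree subtrees across time)."""
    coded = [("key", frames[0].copy())]
    for prev, cur in zip(frames, frames[1:]):
        changed = {}
        for i in range(0, cur.shape[0], block):
            for j in range(0, cur.shape[1], block):
                for k in range(0, cur.shape[2], block):
                    cb = cur[i:i+block, j:j+block, k:k+block]
                    pb = prev[i:i+block, j:j+block, k:k+block]
                    if not np.array_equal(cb, pb):
                        changed[(i, j, k)] = cb.copy()
        coded.append(("delta", changed))
    return coded

def decode(coded, block=4):
    """Reconstruct every time step by applying deltas to the previous one."""
    vols = [coded[0][1].copy()]
    for kind, payload in coded[1:]:
        v = vols[-1].copy()
        for (i, j, k), cb in payload.items():
            v[i:i+block, j:j+block, k:k+block] = cb
        vols.append(v)
    return vols

rng = np.random.default_rng(0)
f0 = quantize(rng.random((8, 8, 8)))
f1 = f0.copy()
f1[0:4, 0:4, 0:4] += 1          # only one 4x4x4 block changes
coded = diff_encode([f0, f1])
restored = decode(coded)
```

When most of the volume is static between frames, only the few changed blocks are stored, which is the source of the storage and inter-frame-delay savings.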
Tailoring rice flour structure by rubbery milling for improved gluten-free baked goods.
Brütsch, Linda; Tribolet, Liliane; Isabettini, Stéphane; Soltermann, Patrick; Baumann, Andreas; Windhab, Erich J
2018-05-10
Ever-growing demand for gluten-free products calls for the development of novel food processing techniques to widen the range of existing baked goods. Extensive research has been targeted towards recipe optimization, widely neglecting the tailoring potential of process-induced structuring of gluten-free raw materials. Herein, we address this shortcoming by demonstrating the potential of rubbery milling for the generation of structure and techno-functionality in breads obtained from a variety of rice flour types. Moisture and temperature induced state transitions during milling were exploited to tailor the physicochemical properties of the flour. Moisture addition during conditioning of the different rice varieties and milling in the rubbery state considerably decreased starch damage due to more gentle disintegration. The degree of starch damage dictated the water absorption capacity of the rice flour types. Flour types with reduced starch damage upon milling offered lower dough densities, yielding bread loaves with a higher volume and better appearance. The choice of rice variety enables fine-tuning of the final product quality by influencing the dough viscoelasticity, which defines the final loaf volume. Whole grain rice flour dramatically increased the loaf volume, whilst simultaneously offering nutritional benefits. Combining the proposed functionalised flour types with current and future advances in product recipes paves the way towards optimised gluten-free goods.
Design of a superconducting volume coil for magnetic resonance microscopy of the mouse brain
NASA Astrophysics Data System (ADS)
Nouls, John C.; Izenson, Michael G.; Greeley, Harold P.; Johnson, G. Allan
2008-04-01
We present the design process of a superconducting volume coil for magnetic resonance microscopy of the mouse brain at 9.4 T. The yttrium barium copper oxide coil has been designed through an iterative process of three-dimensional finite-element simulations and validation against room temperature copper coils. Compared to previous designs, the Helmholtz pair provides substantially higher B1 homogeneity over an extended volume of interest sufficiently large to image biologically relevant specimens. A custom-built cryogenic cooling system maintains the superconducting probe at 60 ± 0.1 K. Specimen loading and probe retuning can be carried out interactively with the coil at operating temperature, enabling much higher through-put. The operation of the probe is a routine, consistent procedure. Signal-to-noise ratio in a mouse brain increased by a factor ranging from 1.1 to 2.9 as compared to a room-temperature solenoid coil optimized for mouse brain microscopy. We demonstrate images encoded at 10 × 10 × 20 μm for an entire mouse brain specimen with signal-to-noise ratio of 18 and a total acquisition time of 16.5 h, revealing neuroanatomy unseen at lower resolution. Phantom measurements show an effective spatial resolution better than 20 μm.
Krusong, W; Tantratian, S
2014-11-01
To maximize the acetification rate (ETA) by adsorption of acetic acid bacteria (AAB) on loofa sponge matrices (LSM). AAB were adsorbed on LSM, and the optimal shaking rate was determined for maximized AAB growth and oxygen availability. Results confirm that the 1 Hz reciprocating shaking rate with 40% working volume (liquid volume 24 l, tank volume 60 l) achieved a high oxygen transfer coefficient (kLa). The highest ETA was obtained at 50% (w:v) LSM-AAB:culture medium at 30 ± 2°C (P ≤ 0.05). To test process consistency, nine sequential acetification cycles were run using LSM-AAB and compared with a control without LSM. The highest ETA (1.701-2.401 g l(-1) d(-1)) was obtained with LSM-AAB and was associated with the highest biomass of AAB, confirmed by SEM images. Results confirm that LSM works well as an inert substrate for AAB. High oxygenation was maintained by the reciprocating shaker. Both shaking and LSM were important in increasing ETA. The high cell biomass in LSM-AAB provides good conditions for the higher ETAs of quick acetification under adequate oxygen transfer by a reciprocating shaker. It is a sustainable process for a small-scale vinegar production system requiring minimal set-up cost. © 2014 The Society for Applied Microbiology.
Miura, Hideharu; Ozawa, Shuichi; Nagata, Yasushi
2017-09-01
This study investigated position dependence in planning target volume (PTV)-based and robust optimization plans using full-arc and partial-arc volumetric modulated arc therapy (VMAT). The gantry angles at the periphery, intermediate, and center CTV positions were 181°-180° (full-arc VMAT) and 181°-360° (partial-arc VMAT). A PTV-based optimization plan was defined by 5 mm margin expansion of the CTV to a PTV volume, on which the dose constraints were applied. The robust optimization plan consisted of a directly optimized dose to the CTV under a maximum-uncertainties setup of 5 mm. The prescription dose was normalized to the CTV D99% (the minimum relative dose that covers 99% of the volume of the CTV) as an original plan. The isocenter was rigidly shifted at 1 mm intervals in the anterior-posterior (A-P), superior-inferior (S-I), and right-left (R-L) directions from the original position to the maximum-uncertainties setup of 5 mm in the original plan, yielding recalculated dose distributions. It was found that for the intermediate and center positions, the uncertainties in the D99% doses to the CTV for all directions did not significantly differ when comparing the PTV-based and robust optimization plans (P > 0.05). For the periphery position, uncertainties in the D99% doses to the CTV in the R-L direction for the robust optimization plan were found to be lower than those in the PTV-based optimization plan (P < 0.05). Our study demonstrated that a robust optimization plan's efficacy using partial-arc VMAT depends on the periphery CTV position. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
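The D99% metric used above is a simple dose-volume-histogram query; a minimal sketch follows (the voxel doses are hypothetical):

```python
import numpy as np

def dvh_dose_at_volume(doses, volume_pct):
    """D_v%: the minimum dose received by the hottest v% of the structure,
    i.e. the dose at which the cumulative DVH crosses v% volume."""
    d = np.sort(np.asarray(doses, dtype=float))[::-1]   # hottest voxel first
    n = max(1, int(np.ceil(volume_pct / 100.0 * d.size)))
    return d[n - 1]

# Example: a 100-voxel CTV where 99 voxels receive 60 Gy and one cold voxel
# receives 40 Gy has D99% = 60 Gy but D100% (the minimum dose) = 40 Gy.
ctv = np.concatenate([np.full(99, 60.0), [40.0]])
```

Normalizing a prescription to D99%, as in the abstract, ties the plan to near-minimum CTV coverage while tolerating a tiny cold fraction.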
Optimization of polyphenol removal from kiwifruit juice using a macroporous resin.
Gao, Zhenpeng; Yu, Zhifang; Yue, Tianli; Quek, Siew Young
2017-06-01
The separation of polyphenols from kiwifruit juice is essential for enhancing sensory properties and preventing the browning reaction in juice during processing and storage. The present study investigated the dynamic adsorption and desorption of polyphenols in kiwifruit juice using AB-8 resin. The model obtained could be successfully applied to predict the experimental results of dynamic adsorption capacity (DAC) and dynamic desorption quantity (DDQ). The results showed that dynamic adsorption of polyphenols could be optimised at a juice concentration of 19 °Brix, with a feed flow-rate of 1.3 mL min-1 and a feed volume of 7 bed volumes (BV). The optimum conditions for dynamic desorption of polyphenols from the AB-8 resin were an ethanol concentration of 43% (v/v), an elution flow-rate of 2.2 mL min-1 and an elution volume of 3 BV. The optimized DAC value was 3.16 g of polyphenols kg-1 resin, whereas that for DDQ was 917.5 g kg-1, with both values being consistent with the values predicted by the regression models. The major polyphenols in the dynamic desorption solution comprised seven compounds. The present study could be scaled up using a continuous column system for industrial application, thus contributing to improved flavor and color of kiwifruit juice. © 2016 Society of Chemical Industry.
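A single-factor slice of the response-surface fitting behind such optima can be sketched as follows; the DAC values below are synthetic, constructed only to show how a fitted quadratic yields a stationary point near the reported 19 °Brix:

```python
import numpy as np

# Hypothetical single-factor response data: dynamic adsorption capacity
# (DAC, g polyphenols per kg resin) at several juice concentrations
# (degrees Brix). Values are illustrative, not from the study.
brix = np.array([10.0, 13.0, 16.0, 19.0, 22.0, 25.0])
dac = np.array([2.40, 2.80, 3.05, 3.16, 3.06, 2.78])

# Second-order response-surface fit y = b0 + b1*x + b2*x^2 (least squares).
b2, b1, b0 = np.polyfit(brix, dac, 2)
x_opt = -b1 / (2.0 * b2)              # stationary point of the fitted parabola
y_opt = np.polyval([b2, b1, b0], x_opt)
```

A negative quadratic coefficient confirms the stationary point is a maximum, so `x_opt` is the predicted optimal juice concentration.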
Liu, Han; Wu, Qiuwen
2011-01-01
For prostate cancer patients, online image-guided (IG) radiotherapy has been widely used in the clinic to correct the translational inter-fractional motion at each treatment fraction. For uncertainties that cannot be corrected online, such as rotation and deformation of the target volume, margins are still required to be added to the clinical target volume (CTV) for the treatment planning. Offline adaptive radiotherapy has been implemented to optimize the treatment for each individual patient based on the measurements at early stages of the treatment process. It has been shown that offline adaptive radiotherapy can effectively reduce the required margin. Recently a hybrid strategy of offline adaptive replanning and online IG was proposed and the geometric evaluation was performed. It was found that the planning margins can be further reduced by 1–2 mm compared to the online IG-only strategy. The purpose of this study was to investigate the dosimetric benefits of such a hybrid strategy on the target and organs at risk (OARs). A total of 420 repeated helical computed tomography (HCT) scans from 28 patients were included in the study. Both low-risk patients (LRP, CTV = prostate) and intermediate-risk patients (IRP, CTV = prostate + seminal vesicles, SV) were included in the simulation. Two registration methods, based on center-of-mass (COM) shift of prostate only and prostate plus SV, were performed for IRP. Intensity-modulated radiotherapy (IMRT) was used in the simulation. Criteria on both cumulative dose and fractional doses were evaluated. Furthermore, the geometric evaluation was extended to investigate the optimal number of fractions necessary to construct the internal target volume (ITV) for the hybrid strategy. The dosimetric margin improvement was smaller than its geometric counterpart and was in the range of 0 mm to 1 mm. The optimal number of fractions necessary for the ITV construction is 2 for LRP and 3–4 for IRP in a hypofractionation protocol.
A new cumulative index of target volume (CITV) was proposed for the evaluation of adaptive radiotherapy strategies, and it was found that it had the advantages over other indices in evaluating different adaptive radiotherapy strategies. PMID:21772083
Liu, Han; Wu, Qiuwen
2011-08-07
For prostate cancer patients, online image-guided (IG) radiotherapy has been widely used in clinic to correct the translational inter-fractional motion at each treatment fraction. For uncertainties that cannot be corrected online, such as rotation and deformation of the target volume, margins are still required to be added to the clinical target volume (CTV) for the treatment planning. Offline adaptive radiotherapy has been implemented to optimize the treatment for each individual patient based on the measurements at early stages of treatment process. It has been shown that offline adaptive radiotherapy can effectively reduce the required margin. Recently a hybrid strategy of offline adaptive replanning and online IG was proposed and the geometric evaluation was performed. It was found that the planning margins can further be reduced by 1-2 mm compared to online IG only strategy. The purpose of this study was to investigate the dosimetric benefits of such a hybrid strategy on the target and organs at risk. A total of 420 repeated helical computed tomography scans from 28 patients were included in the study. Both low-risk patients (LRP, CTV = prostate) and intermediate-risk patients (IRP, CTV = prostate + seminal vesicles, SV) were included in the simulation. Two registration methods, based on center-of-mass shift of prostate only and prostate plus SV, were performed for IRP. The intensity-modulated radiotherapy was used in the simulation. Criteria on both cumulative and fractional doses were evaluated. Furthermore, the geometric evaluation was extended to investigate the optimal number of fractions necessary to construct the internal target volume (ITV) for the hybrid strategy. The dosimetric margin improvement was smaller than its geometric counterpart and was in the range of 0-1 mm. The optimal number of fractions necessary for the ITV construction is 2 for LRPs and 3-4 for IRPs in a hypofractionation protocol. 
Choi, J W; Lee, J H; Moon, B S; Kannan, K
2008-08-01
The use of a large volume polyurethane foam (PUF) sampler was validated for rapid extraction of persistent organic pollutants (POPs), such as polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs), in raw water and treated water from drinking water plants. To validate the recovery of target compounds in the sampling process, a (37)Cl-labeled standard was spiked into the 1st PUF plug prior to filtration. An accelerated solvent extraction method, as a pressurized liquid extractor (PLE), was optimized to extract the PUF plug. For sample preparation, tandem column chromatography (TCC) clean-up was used for rapid analysis. The recoveries of labeled compounds in the analytical method were 80-110% (n = 9). The optimized PUF-PLE-TCC method was applied in the analysis of raw water and treated potable water from seven drinking water plants in South Korea. The sample volume was between 18 and 102 L for raw water at a flow rate of 0.4-2 L min(-1), and between 95 and 107 L for treated water at a flow rate of 1.5-2.2 L min(-1). The limit of quantitation (LOQ) was a function of sample volume, decreasing as sample volume increased. The LOQ of PCDD/Fs in raw waters analyzed by this method was 3-11 times lower than that of the large-size disk-type solid-phase extraction (SPE) method. The LOQs of PCDD/F congeners in raw water and treated water were 0.022-3.9 ng L(-1) and 0.018-0.74 ng L(-1), respectively. Octachlorinated dibenzo-p-dioxin (OCDD) was found in some raw water samples, although its concentrations were well below the tentative criterion set by the Japanese Environmental Ministry for drinking water. OCDD was below the LOQ in the treated drinking water.
Khan, Eakalak; Khaodhir, Sutha; Ruangrote, Darin
2009-10-01
Heavy metals are common contaminants in stormwater runoff. One of the devices that can be used to effectively and economically remove heavy metals from runoff is a yard waste compost stormwater filter. The primary goal of composting is to reduce waste volume rather than to produce stormwater filter media. Moisture content (MC) and initial pH, the two important parameters in composting, were studied for their effects on yard waste volume reduction and heavy metal adsorption performances of the compost. The main objective of this investigation was to examine whether the conditions that provided high yard waste volume reduction would also result in compost with good heavy metal removal performances. Manila grass was composted at different initial pHs (5-9) and MCs (30-70%) and the composts were used to adsorb cadmium, copper, lead and zinc from water. Results indicated that MC is more critical than initial pH for both volume reduction and production of compost with high metal adsorption performances. The optimal conditions for the two attributes were not exactly the same, but lower MCs of 30-40% and pH 7 or higher tended to satisfy both high volume reduction and effective metal adsorption.
Optimal Sensor Allocation for Fault Detection and Isolation
NASA Technical Reports Server (NTRS)
Azam, Mohammad; Pattipati, Krishna; Patterson-Hine, Ann
2004-01-01
Automatic fault diagnostic schemes rely on various types of sensors (e.g., temperature, pressure, vibration) to measure the system parameters. The efficacy of a diagnostic scheme is largely dependent on the amount and quality of information available from these sensors. The reliability of sensors, as well as weight, volume, power, and cost constraints, often makes it impractical to monitor a large number of system parameters. An optimized sensor allocation that maximizes fault diagnosability, subject to specified weight, volume, power, and cost constraints, is required. Use of optimal sensor allocation strategies during the design phase can ensure better diagnostics at a reduced cost for a system incorporating a high degree of built-in testing. In this paper, we propose an approach that employs multiple fault diagnosis (MFD) and optimization techniques for optimal sensor placement for fault detection and isolation (FDI) in complex systems. Keywords: sensor allocation, multiple fault diagnosis, Lagrangian relaxation, approximate belief revision, multidimensional knapsack problem.
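The resource-constrained selection named in the keywords (a multidimensional knapsack) can be illustrated with a greedy sketch. All sensor numbers and budgets below are hypothetical, and the paper's actual method (Lagrangian relaxation with approximate belief revision) is more sophisticated than this heuristic:

```python
# Greedy sketch of sensor allocation as a multidimensional knapsack:
# maximize total diagnosability gain under weight/power/cost budgets.
SENSORS = [  # (name, diagnosability gain, weight kg, power W, cost $) - hypothetical
    ("temp", 0.30, 0.2, 1.0, 50),
    ("press", 0.25, 0.5, 2.0, 80),
    ("vib", 0.40, 1.0, 3.0, 200),
    ("flow", 0.20, 0.3, 1.5, 60),
]
BUDGET = {"weight": 1.2, "power": 5.0, "cost": 250}

def greedy_allocation(sensors, budget):
    """Pick sensors in order of gain per normalized resource load."""
    used = {k: 0.0 for k in budget}
    chosen = []

    def density(s):
        _, gain, w, p, c = s
        return gain / (w / budget["weight"] + p / budget["power"] + c / budget["cost"])

    for name, gain, w, p, c in sorted(sensors, key=density, reverse=True):
        need = {"weight": w, "power": p, "cost": c}
        if all(used[k] + need[k] <= budget[k] for k in budget):
            chosen.append(name)
            for k in budget:
                used[k] += need[k]
    return chosen

selected = greedy_allocation(SENSORS, BUDGET)
```

With these numbers the high-gain but heavy vibration sensor is excluded because it would break the weight budget, which is exactly the tension the optimization resolves.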
Evolution of Mobil's methods to evaluate exploration and producing opportunities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaynor, C.B.; Cook, D.M. Jr.
1996-08-01
Over the past decade, Mobil has changed significantly in size, structure and focus to improve profitability. Concurrently, work processes and methodologies have been modified to improve resource utilization and opportunity selection. The key imperative has been recognition of the full range of hydrocarbon volume uncertainty, its risk and value. Exploration has focused on increasing success through improved geotechnical estimates and demonstrating value addition. For Producing, the important tasks were as follows: (1) A centralized Exploration and Producing team was formed to help ensure an integrated, consistent worldwide approach to prospect and field assessments. Monte Carlo simulation was instituted to recognize probability-weighted ranges of possible outcomes for prospects and fields, and hydrocarbon volume category definitions were standardized. (2) Exploration instituted a global Prospect Inventory, tracking wildcat predictions vs. results. Performance analyses led to initiatives to improve the quality and consistency of assessments. Process improvement efforts included the use of multidisciplinary teams and peer reviews. Continued overestimates of hydrocarbon volumes prompted methodology changes such as the use of "reality checks" and log-normal distributions. The communication of value predictions and additions became paramount. (3) Producing now recognizes the need for Exploration's commercial discoveries and new Producing ventures, notwithstanding the associated risk. Multi-disciplinary teams of engineers and geoscientists work on post-discovery assessments to optimize field development and maximize the value of opportunities. Mobil now integrates volume and risk assessment with correlative future capital investment programs to make proactive strategic choices to maximize shareholder value.
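The probability-weighted, log-normal volume assessment described in item (1) can be sketched as a risked Monte Carlo simulation. All distribution parameters below are illustrative assumptions (volumes in arbitrary units), not Mobil's figures:

```python
import math
import random

random.seed(0)

# Risked Monte Carlo volume assessment: a prospect succeeds with some
# chance of success; the success-case recoverable volume is log-normal.
def simulate_prospect(n=10000, median=50.0, sigma=0.6, p_success=0.3):
    mu = math.log(median)
    outcomes = []
    for _ in range(n):
        if random.random() < p_success:
            outcomes.append(random.lognormvariate(mu, sigma))
        else:
            outcomes.append(0.0)   # dry hole
    return outcomes

def percentile(xs, q):
    xs = sorted(xs)
    return xs[int(q * (len(xs) - 1))]

vols = simulate_prospect()
successes = [v for v in vols if v > 0.0]
p50_success = percentile(successes, 0.5)   # median success-case volume
risked_mean = sum(vols) / len(vols)        # probability-weighted expectation
```

The gap between the success-case median and the risked mean is the "reality check" such simulations make explicit: a prospect's expected value is far below its headline volume.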
NASA Astrophysics Data System (ADS)
Castillo, Carlos; Pérez, Rafael
2017-04-01
The assessment of gully erosion volumes is essential for the quantification of soil losses derived from this relevant degradation process. Traditionally, 2D and 3D approaches have been applied for this purpose (Casalí et al., 2006). Although innovative 3D approaches have recently been proposed for gully volume quantification, a renewed interest can be found in the literature regarding the useful information that cross-section analysis still provides in gully erosion research. Moreover, methods based on 2D approaches can be the most cost-effective option in many situations, such as preliminary studies with low accuracy requirements or surveys under time or budget constraints. The main aim of this work is to examine the key factors controlling volume error variability in 2D gully assessment by means of a stochastic experiment involving a Monte Carlo analysis over synthetic gully profiles, in order to 1) contribute to a better understanding of the drivers and magnitude of 2D-survey uncertainty in gully erosion and 2) provide guidelines for optimal survey designs. Owing to the stochastic properties of error generation in 2D volume assessment, a statistical approach was followed to generate a large and significant set of gully reach configurations in order to evaluate quantitatively the influence of the main factors controlling the uncertainty of the volume assessment.
For this purpose, a simulation algorithm in Matlab® code was written, involving the following stages: - Generation of synthetic gully area profiles with different degrees of complexity (characterized by the cross-section variability) - Simulation of field measurements characterized by a survey intensity and the precision of the measurement method - Quantification of the volume error uncertainty as a function of the key factors In this communication we will present the relationships between volume error and the studied factors and propose guidelines for 2D field surveys based on the minimal survey densities required to achieve a certain accuracy given the cross-sectional variability of a gully and the measurement method applied. References Casali, J., Loizu, J., Campo, M.A., De Santisteban, L.M., Alvarez-Mozos, J., 2006. Accuracy of methods for field assessment of rill and ephemeral gully erosion. Catena 67, 128-138. doi:10.1016/j.catena.2006.03.005
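The core of such a 2D survey-error experiment can be sketched in a few lines. This is a deterministic Python analogue of the Matlab workflow; the sinusoidal area profile and survey intensities are illustrative, not the study's configurations:

```python
import math

# Synthetic gully: cross-sectional area varies along a 100 m reach; the
# "2D" volume estimate multiplies the mean of sampled cross-sections by
# reach length, so sparse surveys miss cross-section variability.
LENGTH = 100.0

def area(x):
    """Cross-sectional area (m^2) at chainage x, with two variability scales."""
    return (2.0 + math.sin(2 * math.pi * 3 * x / LENGTH)
            + 0.3 * math.sin(2 * math.pi * 5 * x / LENGTH + 0.7))

def volume_estimate(n_sections):
    """Sample n_sections equally spaced cross-sections (endpoints included)."""
    xs = [i * LENGTH / (n_sections - 1) for i in range(n_sections)]
    return sum(area(x) for x in xs) / n_sections * LENGTH

true_vol = volume_estimate(10001)           # dense sampling ≈ ground truth
err_coarse = abs(volume_estimate(5) - true_vol) / true_vol
err_fine = abs(volume_estimate(50) - true_vol) / true_vol
```

Repeating this over many randomized profiles and survey intensities yields the error-versus-survey-density relationships that motivate the guidelines mentioned above.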
Liu, Wei; Liao, Zhongxing; Schild, Steven E; Liu, Zhong; Li, Heng; Li, Yupeng; Park, Peter C; Li, Xiaoqiang; Stoker, Joshua; Shen, Jiajian; Keole, Sameer; Anand, Aman; Fatyga, Mirek; Dong, Lei; Sahoo, Narayan; Vora, Sujay; Wong, William; Zhu, X Ronald; Bues, Martin; Mohan, Radhe
2015-01-01
We compared conventionally optimized intensity modulated proton therapy (IMPT) treatment plans against worst-case scenario optimized treatment plans for lung cancer. The comparison of the 2 IMPT optimization strategies focused on the resulting plans' ability to retain dose objectives under the influence of patient setup, inherent proton range uncertainty, and dose perturbation caused by respiratory motion. For each of the 9 lung cancer cases, 2 treatment plans were created that accounted for treatment uncertainties in 2 different ways. The first used the conventional method: delivery of prescribed dose to the planning target volume that is geometrically expanded from the internal target volume (ITV). The second used a worst-case scenario optimization scheme that addressed setup and range uncertainties through beamlet optimization. The plan optimality and plan robustness were calculated and compared. Furthermore, the effects on dose distributions of changes in patient anatomy attributable to respiratory motion were investigated for both strategies by comparing the corresponding plan evaluation metrics at the end-inspiration and end-expiration phase and absolute differences between these phases. The mean plan evaluation metrics of the 2 groups were compared with 2-sided paired Student t tests. Without respiratory motion considered, we affirmed that worst-case scenario optimization is superior to planning target volume-based conventional optimization in terms of plan robustness and optimality. With respiratory motion considered, worst-case scenario optimization still achieved more robust dose distributions to respiratory motion for targets and comparable or even better plan optimality (D95% ITV, 96.6% vs 96.1% [P = .26]; D5%- D95% ITV, 10.0% vs 12.3% [P = .082]; D1% spinal cord, 31.8% vs 36.5% [P = .035]). Worst-case scenario optimization led to superior solutions for lung IMPT. 
Despite the fact that worst-case scenario optimization did not explicitly account for respiratory motion, it produced motion-resistant treatment plans. However, further research is needed to incorporate respiratory motion into IMPT robust optimization. Copyright © 2015 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
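The minimax principle behind worst-case scenario optimization can be illustrated with a toy 1D example: choose the smallest margin whose target coverage stays complete in the worst setup-shift scenario. The geometry and shift values are illustrative, not the study's beamlet-level optimization:

```python
# Toy minimax margin selection: a symmetric 1D dose field [-f, f] must
# cover a target of half-width 2.0 cm under several setup-shift scenarios.
def coverage(margin, shift, half=2.0):
    """Fraction of the shifted target covered by the dose field."""
    field = half + margin
    lo = max(-half + shift, -field)
    hi = min(half + shift, field)
    return max(0.0, hi - lo) / (2 * half)

scenarios = [-0.5, 0.0, 0.5, 1.0]  # assumed setup shifts (cm)

def worst_case(margin):
    """Worst-case (minimum) coverage over all scenarios."""
    return min(coverage(margin, s) for s in scenarios)

# smallest margin on a 0.1 cm grid achieving full worst-case coverage
best = next(m for m in (i * 0.1 for i in range(31)) if worst_case(m) >= 1.0)
```

Real robust IMPT optimizes beamlet intensities so that the worst scenario's objective is minimized, rather than expanding a geometric margin, but the min-over-scenarios structure is the same.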
Implementing optimal thinning strategies
Kurt H. Riitters; J. Douglas Brodie
1984-01-01
Optimal thinning regimes for achieving several management objectives were derived from two stand-growth simulators by dynamic programming. Residual mean tree volumes were then plotted against stand density management diagrams. The results supported the use of density management diagrams for comparing, checking, and implementing the results of optimization analyses.
Chapdelaine, Isabelle; Nubé, Menso J; Blankestijn, Peter J; Bots, Michiel L; Konings, Constantijn J A M; Kremer Hovinga, Ton K; Molenaar, Femke M; van der Weerd, Neelke C; Grooteman, Muriel P C
2017-01-01
Background. Available evidence suggests a reduced mortality risk for patients treated with high-volume postdilution hemodiafiltration (HDF) when compared with hemodialysis (HD) patients. As the magnitude of the convection volume depends on treatment-related factors rather than patient-related characteristics, we prospectively investigated whether a high convection volume (defined as ≥22 L/session) is feasible in the majority of patients (>75%). Methods. A multicenter study was performed in adult prevalent dialysis patients. Nonparticipating eligible patients formed the control group. Using a stepwise protocol, treatment time (up to 4 hours), blood flow rate (up to 400 mL/min) and filtration fraction (up to 33%) were optimized as much as possible. The convection volume was determined at the end of this optimization phase and at 4 and 8 weeks thereafter. Results. Baseline characteristics were comparable in participants (n = 86) and controls (n = 58). At the end of the optimization and 8 weeks thereafter, 71/86 (83%) and 66/83 (80%) of the patients achieved high-volume HDF (mean 25.5 ± 3.6 and 26.0 ± 3.4 L/session, respectively). While treatment time remained unaltered, mean blood flow rate increased by 27% and filtration fraction increased by 23%. Patients with <22 L/session had a higher percentage of central venous catheters (CVCs), a shorter treatment time and lower blood flow rate when compared with patients with ≥22 L/session. Conclusions. High-volume HDF is feasible in a clear majority of dialysis patients. Since none of the patients agreed to increase treatment time, these findings indicate that high-volume HDF is feasible just by increasing blood flow rate and filtration fraction. PMID:29225810
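The dependence of convection volume on the three optimized treatment factors follows from a simple product; a sketch using the protocol ceilings from the abstract (the formula is a common approximation that ignores predilution and plasma-water corrections):

```python
# Convection volume ≈ blood flow × filtration fraction × treatment time.
def convection_volume_l(blood_flow_ml_min, filtration_fraction, minutes):
    """Convection volume in liters per session."""
    return blood_flow_ml_min * filtration_fraction * minutes / 1000.0

# protocol ceilings from the abstract: 4 h, 400 mL/min, 33% filtration fraction
v_max = convection_volume_l(400, 0.33, 240)
# a lower-intensity session for comparison (hypothetical settings)
v_low = convection_volume_l(300, 0.25, 240)
high_volume = v_max >= 22.0   # the ≥22 L/session target
```

At the protocol ceilings the achievable volume (about 31.7 L) comfortably exceeds 22 L, which is why raising blood flow and filtration fraction alone sufficed without lengthening treatment time.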
Sereshti, Hassan; Samadi, Soheila; Jalali-Heravi, Mehdi
2013-03-08
Ultrasound assisted extraction (UAE) followed by dispersive liquid-liquid microextraction (DLLME) was used for extraction and preconcentration of volatile constituents of six tea plants. The preconcentrated compounds were analyzed by gas chromatography-mass spectrometry (GC-MS). In total, 42 compounds were identified and caffeine was quantitatively determined. The main parameters (factors) of the extraction process were optimized by using a central composite design (CCD). Methanol and chloroform were selected as the extraction solvent and preconcentration solvent, respectively. The optimal conditions were obtained as 21 min for sonication time, 32°C for temperature, 27 mL for volume of extraction solvent and 7.4% for salt concentration (NaCl/H(2)O). The determination coefficient (R(2)) was 0.9988. The relative standard deviation (RSD%) was 4.8 (n=5), and the enhancement factors (EFs) were 4.0-42.6. Copyright © 2013 Elsevier B.V. All rights reserved.
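The coded design points of a rotatable central composite design like the one used here can be generated generically; a sketch that does not reproduce the study's actual factor ranges:

```python
# Coded points of a rotatable central composite design for k factors:
# 2^k factorial corners, 2k axial (star) points at ±alpha, plus center runs.
import itertools

def central_composite(k, n_center=4):
    alpha = (2 ** k) ** 0.25          # rotatability criterion: alpha = (2^k)^(1/4)
    corners = [list(p) for p in itertools.product([-1.0, 1.0], repeat=k)]
    stars = []
    for i in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s
            stars.append(pt)
    centers = [[0.0] * k for _ in range(n_center)]
    return corners + stars + centers

design = central_composite(3)          # e.g., three coded factors
```

Each coded level is then mapped back to a physical setting (time, temperature, solvent volume, salt), and a quadratic response surface fitted to the results yields the optimal conditions.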
Trapote, Arturo; Jover, Margarita; Cartagena, Pablo; El Kaddouri, Marouane; Prats, Daniel
2014-08-01
This article describes an effective procedure for reducing the water content of excess sludge production from a wastewater treatment plant by increasing its concentration and, as a consequence, minimizing the volume of sludge to be managed. It consists of a pre-dewatering sludge process, which is used as a preliminary step or alternative to thickening. It is made up of two discontinuous sequential stages: the first is resettling and the second, filtration through a porous medium. The process is strictly physical, without any chemical additives or electromechanical equipment intervening. The experiment was carried out in a pilot-scale system, consisting of a column of sedimentation that incorporates a filter medium. Different sludge heights were tested over the filter to verify the influence of hydrostatic pressure on the various final concentrations of each stage. The results show that the initial sludge concentration may increase by more than 570% by the end of the process, with the final volume of sludge being reduced in similar proportions and hydrostatic pressure having a limited effect on this final concentration. Moreover, the value of the hydrostatic pressure at which critical specific cake resistance is reached is established.
Bessemans, Laurent; Jully, Vanessa; de Raikem, Caroline; Albanese, Mathieu; Moniotte, Nicolas; Silversmet, Pascal; Lemoine, Dominique
2016-01-01
High-throughput screening technologies are increasingly integrated into the formulation development process of biopharmaceuticals. The performance of liquid handling systems is dependent on the ability to deliver accurate and precise volumes of specific reagents to ensure process quality. We have developed an automated gravimetric calibration procedure to adjust the accuracy and evaluate the precision of the TECAN Freedom EVO liquid handling system. Volumes from 3 to 900 µL using calibrated syringes and fixed tips were evaluated with various solutions, including aluminum hydroxide and phosphate adjuvants, β-casein, sucrose, sodium chloride, and phosphate-buffered saline. The methodology to set up liquid class pipetting parameters for each solution was to split the process in three steps: (1) screening of predefined liquid class, including different pipetting parameters; (2) adjustment of accuracy parameters based on a calibration curve; and (3) confirmation of the adjustment. The run of appropriate pipetting scripts, data acquisition, and reports until the creation of a new liquid class in EVOware was fully automated. The calibration and confirmation of the robotic system was simple, efficient, and precise and could accelerate data acquisition for a wide range of biopharmaceutical applications. PMID:26905719
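The three-step accuracy adjustment described above (screen, fit a calibration curve, confirm) centers on inverting a linear fit of dispensed versus target volume; a minimal sketch with hypothetical gravimetric data, not TECAN's actual liquid-class parameters:

```python
# Fit a linear calibration curve (dispensed vs. target volume) by least
# squares, then invert it to compute corrected targets.
targets = [10.0, 50.0, 100.0, 500.0, 900.0]   # programmed volumes (µL), hypothetical
weighed = [9.2, 47.5, 96.0, 487.0, 880.0]     # gravimetric results (µL), hypothetical

n = len(targets)
mx = sum(targets) / n
my = sum(weighed) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(targets, weighed))
         / sum((x - mx) ** 2 for x in targets))
intercept = my - slope * mx

def corrected_target(desired):
    """Volume to program so that the dispensed volume hits `desired`."""
    return (desired - intercept) / slope
```

A confirmation run then re-weighs dispenses at the corrected targets; if the residual bias is within tolerance, the new liquid class is accepted.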
Vapor deposition on doublet airfoil substrates: Control of coating thickness and microstructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodgers, Theron M.; Zhao, Hengbei; Wadley, Haydn N. G., E-mail: haydn@virginia.edu
Gas jet assisted vapor deposition processes for depositing coatings are conducted at higher pressures than conventional physical vapor deposition methods, and have shown promise for coating complex shaped substrates, including those with non-line-of-sight (NLS) regions on their surface. These regions typically receive vapor atoms at a lower rate and with a wider incident angular distribution than substrate regions in line-of-sight (LS) of the vapor source. To investigate the coating of such substrates, the thickness and microstructure variation along the inner (curved) surfaces of a model doublet airfoil containing both LS and NLS regions has been investigated. Results from atomistic simulations and experiments confirm that the coating's thickness is thinner in flux-shadowed regions than in other regions for all the coating processes investigated. They also indicate that the coating's columnar microstructure and pore volume fraction vary with surface location through the LS to NLS transition zone. A substrate rotation strategy for optimizing the thickness over the entire doublet airfoil surface was investigated, and led to the identification of a process that resulted in only small variation of coating thickness, columnar growth angle, and pore volume fraction on all doublet airfoil surfaces.
Michalski, M C; Leconte, N; Briard-Bion, V; Fauquant, J; Maubois, J L; Goudédranche, H
2006-10-01
We present an extensive description and analysis of a microfiltration process patented in our laboratory to separate different fractions of the initial milk fat globule population according to the size of the native milk fat globules (MFG). We used nominal membrane pore sizes of 2 to 12 µm and a specially designed pilot rig. Using this process with whole milk [whose MFG have a volume mean diameter (d43) = 4.2 ± 0.2 µm] and appropriate membrane pore size and hydrodynamic conditions, we collected 2 extremes of the initial milk fat globule distribution consisting of 1) a retentate containing large MFG of d43 = 5 to 7.5 µm (with up to 250 g/kg of fat, up to 35% of initial milk fat, and up to 10% of initial milk volume), and 2) a permeate containing small MFG of d43 = 0.9 to 3.3 µm (with up to 16 g/kg of fat, up to 30% of initial milk fat, and up to 83% of initial milk volume and devoid of somatic cells). We checked that the process did not mechanically damage the MFG by measuring their zeta-potential. This new microfiltration process, avoiding milk aging, appears to be more efficient than gravity separation in selecting native MFG of different sizes. As we summarize from previous and new results showing that the physico-chemical and technological properties of native milk fat globules vary according to their size, the use of different fat globule fractions appears to be advantageous regarding the quality of cheeses and can lead to new dairy products with adapted properties (sensory, functional, and perhaps nutritional).
Bhambure, R; Rathore, A S
2013-01-01
This article describes the development of a high-throughput process development (HTPD) platform for developing chromatography steps. An assessment of the platform as a tool for establishing the "characterization space" for an ion exchange chromatography step has been performed by using design of experiments. Case studies involving use of a biotech therapeutic, granulocyte colony-stimulating factor, have been used to demonstrate the performance of the platform. We discuss the various challenges that arise when working at such small volumes along with the solutions that we propose to alleviate these challenges to make the HTPD data suitable for empirical modeling. Further, we have also validated the scalability of this platform by comparing the results from the HTPD platform (2 and 6 μL resin volumes) against those obtained at the traditional laboratory scale (resin volume, 0.5 mL). We find that after integration of the proposed correction factors, the HTPD platform is capable of performing the process optimization studies at 170-fold higher productivity. The platform is capable of providing semi-quantitative assessment of the effects of the various input parameters under consideration. We think that a platform such as the one presented is an excellent tool for examining the "characterization space" and reducing the extensive experimentation at the traditional lab scale that is otherwise required for establishing the "design space." Thus, this platform will specifically aid in successful implementation of quality by design in biotech process development. This is especially significant in view of the constraints with respect to time and resources that the biopharma industry faces today. Copyright © 2013 American Institute of Chemical Engineers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lancaster, V.R.; Modlin, D.N.
1994-12-31
In this study, the authors present a method for design and characterization of flow cells developed for minimum flow volume and optimal dynamic response with a given central observation area. The dynamic response of a circular shaped dual ported flow cell was compared to that obtained from a flow cell whose optimized shape was determined using this method. In the optimized flow cell design, the flow rate at the nominal operating pressure increased by 50% whereas the flow cell volume was reduced by 70%. In addition, the dynamic response of the new flow cell was found to be 200% faster than the circular flow cell. The fluid dynamic analysis included simple graphical techniques utilizing free stream vorticity functions and Hagen-Poiseuille relationships. The flow cell dynamic response was measured using a fluorescence detection system. The fluorescein emission from a 400 µm spot located at the exit port was measured as a function of time after switching the input to the flow cell between fluorescent and non-fluorescent solutions. Analysis of results revealed the system could be reasonably characterized as a first order dynamic system. Although some evidence of second order behavior was also observed, it is reasonable to assume that a first order model will provide adequate predictive capability for many real world applications. Given a set of flow cell requirements, the methods presented in this study can be used to design and characterize flow cells with lower reagent consumption and reduced purging times. These improvements can be readily translated into reduced process times and/or lower usage of high cost reagents.
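A first-order model of the flow cell's step response, as identified in the study, has a single time constant; a sketch in which the time constant value is hypothetical:

```python
import math

# First-order step response: C(t)/C_final = 1 - exp(-t/tau) after a
# switch between fluorescent and non-fluorescent solutions at t = 0.
def first_order_response(t, tau):
    """Normalized outlet concentration at time t."""
    return 1.0 - math.exp(-t / tau)

def settling_time(tau, fraction=0.95):
    """Time to reach `fraction` of the final value (the effective purge time)."""
    return -tau * math.log(1.0 - fraction)

tau = 2.0                     # s, hypothetical flow-cell time constant
t95 = settling_time(tau)      # ≈ 3·tau
```

Since the 95% settling time is about three time constants, shrinking the cell volume (and hence tau) translates directly into the reduced purging times reported.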
Wang, Shen-Ling; Qi, Hong; Ren, Ya-Tao; Chen, Qin; Ruan, Li-Ming
2018-05-01
Thermal therapy is a very promising method for cancer treatment, which can be combined with chemotherapy, radiotherapy and other programs for enhanced cancer treatment. In order to achieve a better effect of thermal therapy in clinical applications, the optimal internal temperature distribution of tissue embedded with gold nanoparticles (GNPs) for enhanced thermal therapy was investigated in the present research. The Monte Carlo method was applied to calculate the heat generation of the tissue embedded with GNPs irradiated by continuous laser. To gain better insight into the physical problem of heat transfer in tissues, the two-energy equation was employed to calculate the temperature distribution of the tissue in the process of GNP-enhanced therapy. The Arrhenius equation was applied to evaluate the degree of permanent thermal damage. A parametric study was performed to investigate the factors influencing the tissue internal temperature distribution, such as incident light intensity, the GNP volume fraction, the periodic heating and cooling times, and the incident light position. It was found that a periodic heating and cooling strategy can effectively avoid overheating of the skin surface and heat damage to healthy tissue. A lower GNP volume fraction is better for the heat source distribution. Furthermore, the ring heating strategy is superior to the central heating strategy in treatment effect. The analysis provides theoretical guidance for optimal temperature control of tissue embedded with GNPs for enhanced thermal therapy. Copyright © 2018 Elsevier Ltd. All rights reserved.
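The Arrhenius damage evaluation can be sketched as a numerical integral over a sampled temperature history; the frequency factor and activation energy below are commonly cited skin-damage values, used here only as assumptions rather than the study's tissue parameters:

```python
import math

# Arrhenius thermal-damage integral: Omega = ∫ A·exp(-Ea/(R·T(t))) dt;
# Omega ≥ 1 is a common threshold for permanent thermal damage.
A = 3.1e98      # frequency factor (1/s), assumed Henriques-type value
EA = 6.28e5     # activation energy (J/mol), assumed
R = 8.314       # gas constant (J/(mol·K))

def damage_integral(temps_k, dt):
    """Accumulate Omega over a sampled temperature history (kelvin)."""
    return sum(A * math.exp(-EA / (R * t)) * dt for t in temps_k)

omega_body = damage_integral([310.15] * 60, 1.0)   # 60 s at 37 °C
omega_hot = damage_integral([333.15] * 60, 1.0)    # 60 s at 60 °C
```

The extreme temperature sensitivity of the integrand is what makes periodic heating and cooling effective: brief cool-down intervals keep healthy tissue's accumulated Omega far below the damage threshold.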
Pilon, Alan Cesar; Carnevale Neto, Fausto; Freire, Rafael Teixeira; Cardoso, Patrícia; Carneiro, Renato Lajarim; Da Silva Bolzani, Vanderlan; Castro-Gamboa, Ian
2016-03-01
A major challenge in metabolomic studies is how to extract and analyze an entire metabolome. So far, no single method has been able to clearly complete this task in an efficient and reproducible way. In this work we proposed a sequential strategy for the extraction and chromatographic separation of metabolites from leaves of Jatropha gossypifolia using a design of experiments and a partial least squares model. The effect of 14 different solvents on the extraction process was evaluated and an optimized separation condition on liquid chromatography was estimated considering mobile phase composition and analysis time. The initial conditions of extraction using methanol and separation in 30 min between 5 and 100% water/methanol (1:1 v/v) with 0.1% of acetic acid, 20 μL sample volume, 3.0 mL min(-1) flow rate and 25°C column temperature led to 107 chromatographic peaks. After the optimization strategy using i-propanol/chloroform (1:1 v/v) for extraction, linear gradient elution of 60 min between 5 and 100% water/(acetonitrile/methanol 68:32 v/v with 0.1% of acetic acid), 30 μL sample volume, 2.0 mL min(-1) flow rate, and 30°C column temperature, we detected 140 chromatographic peaks, 30.84% more than with the initial method. This is a reliable strategy using a limited number of experiments for metabolomics protocols. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Data Processing Factory for the Sloan Digital Sky Survey
NASA Astrophysics Data System (ADS)
Stoughton, Christopher; Adelman, Jennifer; Annis, James T.; Hendry, John; Inkmann, John; Jester, Sebastian; Kent, Steven M.; Kuropatkin, Nickolai; Lee, Brian; Lin, Huan; Peoples, John, Jr.; Sparks, Robert; Tucker, Douglas; Vanden Berk, Dan; Yanny, Brian; Yocum, Dan
2002-12-01
The Sloan Digital Sky Survey (SDSS) data handling presents two challenges: large data volume and timely production of spectroscopic plates from imaging data. A data processing factory, using technologies both old and new, handles this flow. Distribution to end users is via disk farms, to serve corrected images and calibrated spectra, and a database, to efficiently process catalog queries. For distribution of modest amounts of data from Apache Point Observatory to Fermilab, scripts use rsync to update files, while larger data transfers are accomplished by shipping magnetic tapes commercially. All data processing pipelines are wrapped in scripts to address consecutive phases: preparation, submission, checking, and quality control. We constructed the factory by chaining these pipelines together while using an operational database to hold processed imaging catalogs. The science database catalogs all imaging and spectroscopic objects, with pointers to the various external files associated with them. Diverse computing systems address particular processing phases. UNIX computers handle tape reading and writing, as well as calibration steps that require access to a large amount of data with relatively modest computational demands. Commodity CPUs process steps that require access to a limited amount of data with more demanding computational requirements. Disk servers optimized for cost per Gbyte serve terabytes of processed data, while servers optimized for disk read speed run SQLServer software to process queries on the catalogs. This factory produced data for the SDSS Early Data Release in June 2001, and it is currently producing Data Release One, scheduled for January 2003.
Laurin, Nancy; DeMoors, Anick; Frégeau, Chantal
2012-09-01
Direct amplification of STR loci from biological samples collected on FTA cards without prior DNA purification was evaluated using Identifiler Direct and PowerPlex 16 HS in conjunction with the use of a high-throughput Applied Biosystems 3730 DNA Analyzer. In order to reduce the overall sample processing cost, reduced PCR volumes combined with various FTA disk sizes were tested. Optimized STR profiles were obtained using a 0.53 mm disk size in a 10 μL PCR volume for both STR systems. These protocols proved effective in generating high quality profiles on the 3730 DNA Analyzer from both blood and buccal FTA samples. Reproducibility, concordance, robustness, sample stability and profile quality were assessed using a collection of blood and buccal samples on FTA cards from volunteer donors as well as from convicted offenders. The newly developed protocols offer enhanced throughput capability and cost effectiveness without compromising the robustness and quality of the STR profiles obtained. These results support the use of these protocols for processing convicted offender samples submitted to the National DNA Data Bank of Canada. Similar protocols could be applied to the processing of casework reference samples or in paternity or family relationship testing. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Mendes, Gilberto de Oliveira; da Silva, Nina Morena Rêgo Muniz; Anastácio, Thalita Cardoso; Vassilev, Nikolay Bojkov; Ribeiro, José Ivo; da Silva, Ivo Ribeiro; Costa, Maurício Dutra
2015-01-01
A biotechnological strategy for the production of an alternative P fertilizer is described in this work. The fertilizer was produced through rock phosphate (RP) solubilization by Aspergillus niger in a solid-state fermentation (SSF) with sugarcane bagasse as substrate. SSF conditions were optimized by the surface response methodology after an initial screening of factors with significant effect on RP solubilization. The optimized levels of the factors were 865 mg of biochar, 250 mg of RP, 270 mg of sucrose and 6.2 ml of water per gram of bagasse. At this optimal setting, 8.6 mg of water-soluble P per gram of bagasse was achieved, representing an increase of 2.4 times over the non-optimized condition. The optimized SSF product was partially incinerated at 350°C (SB-350) and 500°C (SB-500) to reduce its volume and, consequently, increase P concentration. The post-processed formulations of the SSF product were evaluated in a soil–plant experiment. The formulations SB-350 and SB-500 increased the growth and P uptake of common bean plants (Phaseolus vulgaris L.) when compared with the non-treated RP. Furthermore, these two formulations had a yield relative to triple superphosphate of 60% (on a dry mass basis). Besides increasing P concentration, incineration improved the SSF product performance probably by decreasing microbial immobilization of nutrients during the decomposition of the remaining SSF substrate. The process proposed is a promising alternative for the management of P fertilization since it enables the utilization of low-solubility RPs and relies on the use of inexpensive materials. PMID:26112323
Nominal effective immunoreaction volume of magnetic beads at single bead level.
Wang, Rui; Chen, Yuan; Fan, Kai; Ji, Feng; Wu, Jian; Yu, Yong-Hua
Immunomagnetic bead (IMB)-based enzyme-linked immunosorbent assay (ELISA) is a tool frequently used for protein detection in research and clinical laboratories. For most ELISA reactions, the recommended dosage of IMBs is specified by weight (mg) or mass fraction (w/v) rather than by bead number. Consequently, the processes occurring in the immediate vicinity of individual IMBs are typically ignored and cannot be resolved in detail during the ELISA reaction. In this paper, we established the relationship between the number of IMBs and colorimetric results, and further proposed a new concept, the "nominal effective immunoreaction volume (NEIV)", to characterize a single IMB during the ELISA reaction. Results showed that the NEIV of a single IMB has a constant value, unrelated to the number of beads and the concentration of antigen. Optimal colorimetric ELISA results are achieved when the incubation volume meets each IMB's NEIV; increasing the incubation volume further yields no improvement. Thus, a reliable and relatively precise number of IMBs for ELISA detection can be determined in practical applications. Most importantly, the NEIV concept lays the foundation for future kinetic analysis of IMBs and antigens.
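The NEIV concept implies a simple sizing rule: once the per-bead NEIV is known, the bead count that matches a given incubation volume follows by division. A minimal sketch; the NEIV value used here is hypothetical, not taken from the paper:

```python
def optimal_bead_count(incubation_volume_ul, neiv_ul_per_bead):
    """Number of immunomagnetic beads whose combined NEIV matches the
    incubation volume (illustrative; the NEIV value is an assumption)."""
    if neiv_ul_per_bead <= 0:
        raise ValueError("NEIV must be positive")
    return round(incubation_volume_ul / neiv_ul_per_bead)

# Hypothetical NEIV of 0.02 uL per bead, 100 uL incubation volume:
print(optimal_bead_count(100.0, 0.02))
```

Beyond this count, the abstract suggests extra incubation volume brings no further colorimetric gain.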
Machine learning for fab automated diagnostics
NASA Astrophysics Data System (ADS)
Giollo, Manuel; Lam, Auguste; Gkorou, Dimitra; Liu, Xing Lan; van Haren, Richard
2017-06-01
Process optimization depends largely on field engineers' knowledge and expertise. However, this practice is becoming less sustainable due to fab complexity, which is continuously increasing to support the extreme miniaturization of Integrated Circuits. On the one hand, process optimization and root cause analysis of tools are necessary for smooth fab operation. On the other hand, the growth in the number of wafer processing steps is adding a considerable new source of noise, which may have a significant impact at the nanometer scale. This paper explores the ability of historical process data and Machine Learning to support field engineers in production analysis and monitoring. We implement an automated workflow to analyze a large volume of information and build a predictive model of overlay variation. The proposed workflow addresses significant problems typical of fab production, such as missing measurements, small sample sizes, confounding effects due to heterogeneity of data, and subpopulation effects. We evaluate the proposed workflow on a real use case and show that it is able to predict overlay excursions observed in Integrated Circuit manufacturing. The chosen design focuses on linear and interpretable models of the wafer history, which highlight the process steps that cause defective products. This is a fundamental feature for diagnostics, as it supports process engineers in the continuous improvement of the production line.
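A linear, interpretable overlay model of the kind described can be sketched as a ridge regression over wafer-history features, where large coefficient magnitudes flag influential process steps. This is a generic sketch under synthetic data, not the paper's actual workflow:

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: interpretable linear weights over
    wafer-history features (a sketch, not the paper's pipeline)."""
    n_features = X.shape[1]
    A = X.T @ X + lam * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))          # 40 wafers, 3 process-step features
true_w = np.array([2.0, 0.0, -1.0])   # only steps 0 and 2 drive overlay
y = X @ true_w + 0.01 * rng.normal(size=40)
w = ridge_fit(X, y, lam=0.1)
print(np.round(w, 1))                 # large |w| flags influential steps
```

The regularization term `lam` helps with the small-sample and collinearity problems the abstract mentions.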
Program to Optimize Simulated Trajectories (POST). Volume 1: Formulation manual
NASA Technical Reports Server (NTRS)
Brauer, G. L.; Cornick, D. E.; Habeger, A. R.; Petersen, F. M.; Stevenson, R.
1975-01-01
A general purpose FORTRAN program for simulating and optimizing point mass trajectories (POST) of aerospace vehicles is described. The equations and the numerical techniques used in the program are documented. Topics discussed include: coordinate systems, planet model, trajectory simulation, auxiliary calculations, and targeting and optimization.
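The point-mass trajectory simulation at the core of a program like POST can be illustrated with a minimal numerical integrator. This is a generic sketch (explicit Euler, flat-Earth gravity, no atmosphere or targeting), not the POST formulation itself:

```python
def simulate_point_mass(v0x, v0y, dt=0.01, g=9.81):
    """Integrate a 2D point-mass trajectory with explicit Euler steps
    until the vehicle returns to the ground (y < 0)."""
    x, y, vx, vy = 0.0, 0.0, v0x, v0y
    path = [(x, y)]
    while y >= 0.0:
        x += vx * dt
        y += vy * dt
        vy -= g * dt   # gravity is the only force in this sketch
        path.append((x, y))
    return path

path = simulate_point_mass(10.0, 20.0)
print(round(path[-1][0], 1))  # downrange distance at impact
```

A real trajectory optimizer would wrap such a simulation in a targeting loop that adjusts controls to meet terminal constraints.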
Geometric optimization of thermal systems
NASA Astrophysics Data System (ADS)
Alebrahim, Asad Mansour
2000-10-01
The work in chapter 1 extends to three dimensions and to convective heat transfer the constructal method of minimizing the thermal resistance between a volume and one point. In the first part, the heat flow mechanism is conduction, and the heat generating volume is occupied by low conductivity material (k0) and high conductivity inserts (kp) that are shaped as constant-thickness disks mounted on a common stem of kp material. In the second part the interstitial spaces once occupied by k0 material are bathed by forced convection. The internal and external geometric aspect ratios of the elemental volume and the first assembly are optimized numerically subject to volume constraints. Chapter 2 presents the constrained thermodynamic optimization of a cross-flow heat exchanger with ram air on the cold side, which is used in the environmental control systems of aircraft. Optimized geometric features such as the ratio of channel spacings and flow lengths are reported. It is found that the optimized features are relatively insensitive to changes in other physical parameters of the installation and relatively insensitive to the additional irreversibility due to discharging the ram-air stream into the atmosphere, emphasizing the robustness of the thermodynamic optimum. In chapter 3 the problem of maximizing exergy extraction from a hot stream by distributing streams over a heat transfer surface is studied. In the first part, the cold stream is compressed in an isothermal compressor, expanded in an adiabatic turbine, and discharged into the ambient. In the second part, the cold stream is compressed in an adiabatic compressor. Both designs are optimized with respect to the capacity-rate imbalance of the counter-flow and the pressure ratio maintained by the compressor. This study shows the tradeoff between simplicity and increased performance, and outlines the path for further conceptual work on the extraction of exergy from a hot stream that is being cooled gradually.
The aim of chapter 4 was to optimize the performance of a boot-strap air cycle of an environmental control system (ECS) for aircraft. New in the present study was that the optimization refers to the performance of the entire ECS, not to the performance of an individual component. Also, there were two heat exchangers, not one, and their relative positions and sizes were not specified in advance. This study showed that geometric optima can be identified when the optimization procedure refers to the performance of the entire ECS rather than to an individual component. These optimized features were robust relative to some physical parameters; this robustness may be used to simplify future optimization of similar systems.
Process influences and correction possibilities for high precision injection molded freeform optics
NASA Astrophysics Data System (ADS)
Dick, Lars; Risse, Stefan; Tünnermann, Andreas
2016-08-01
Modern injection molding processes offer a cost-efficient method for manufacturing high precision plastic optics for high volume applications. Besides the form deviation of molded freeform optics, internal material stress is a relevant factor influencing the performance of freeform optics in an optical system. This paper illustrates the dominant influence parameters of an injection molding process with respect to form deviation and internal material stress, based on a freeform demonstrator geometry. Furthermore, a deterministic and efficient approach to 3D mold correction of systematic, asymmetrical shrinkage errors is shown to reach micrometer-range shape accuracy at diameters up to 40 mm. In a second case, a stress-optimized parameter combination using unusual molding conditions was 3D corrected to produce high precision, low stress freeform polymer optics.
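The deterministic mold-correction idea, subtracting the measured systematic shrinkage deviation from the nominal geometry at each surface point, can be sketched in one line per point. The sag values below are hypothetical, purely for illustration:

```python
def corrected_mold(nominal, measured):
    """One-step mold correction: the next mold surface is the nominal
    geometry minus the measured systematic deviation at each point."""
    return [n - (m - n) for n, m in zip(nominal, measured)]

nominal = [10.000, 10.050, 10.100]   # target surface heights, mm (hypothetical)
measured = [10.004, 10.056, 10.103]  # systematically deviated molding
print(corrected_mold(nominal, measured))
```

In practice the deviation map is a dense 3D measurement and the correction may be iterated over several mold revisions.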
Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tremsin, Anton S.; Perrodin, Didier; Losko, Adrian S.
Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of "blind" processes. This technique is demonstrated for the Bridgman-type crystal growth enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between liquid and solid phases are monitored in real-time, concurrently with the measurement of elemental distribution within the growth volume and with the identification of structural features with a ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes.
The numerical modelling of mixing phenomena of nanofluids in passive micromixers
NASA Astrophysics Data System (ADS)
Milotin, R.; Lelea, D.
2018-01-01
The paper deals with rapid mixing phenomena in micro-mixing devices with four tangential injections and a converging tube, considering nanoparticles with water as the base fluid. Several parameters, such as Reynolds number (Re = 6-284) and fluid temperature, are considered in order to optimize the process and obtain fundamental insight into the mixing phenomena. The set of partial differential equations is based on conservation of momentum and species. The commercial software package Ansys Fluent, based on a finite volume method, is used to solve the differential equations. The results reveal that the mixing index and mixing process are strongly dependent on both the Reynolds number and the heat flux. Moreover, above a certain Reynolds number, flow instabilities are generated that intensify the mixing process due to the tangential injections of the fluids.
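A common variance-based definition of the mixing index (one of several in use; the paper may define it differently) can be computed directly from sampled concentrations at an outlet cross-section:

```python
def mixing_index(concentrations, c_mean=0.5):
    """Variance-based mixing index: 1.0 for a perfectly mixed field,
    0.0 for fully segregated streams (assumed definition)."""
    var = sum((c - c_mean) ** 2 for c in concentrations) / len(concentrations)
    var_max = c_mean * (1.0 - c_mean)  # variance of a fully segregated field
    return 1.0 - (var / var_max) ** 0.5

print(mixing_index([0.5] * 8))              # perfectly mixed
print(mixing_index([0.0] * 4 + [1.0] * 4))  # fully segregated
```

In a CFD post-processing step, the `concentrations` list would come from the species field sampled over the outlet cells.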
Efficient 3D porous microstructure reconstruction via Gaussian random field and hybrid optimization.
Jiang, Z; Chen, W; Burkhart, C
2013-11-01
Obtaining an accurate three-dimensional (3D) structure of a porous microstructure is important for assessing the material properties based on finite element analysis. Whereas directly obtaining 3D images of the microstructure is impractical under many circumstances, two sets of methods have been developed in the literature to generate (reconstruct) a 3D microstructure from its 2D images: one characterizes the microstructure based on certain statistical descriptors, typically the two-point correlation function and cluster correlation function, and then performs an optimization process to build a 3D structure that matches those statistical descriptors; the other models the microstructure using stochastic models like a Gaussian random field and generates a 3D structure directly from the function. The former obtains a relatively accurate 3D microstructure, but computationally the optimization process can be very intensive, especially for problems with large image size; the latter generates a 3D microstructure quickly but sacrifices accuracy due to issues in numerical implementation. A hybrid optimization approach to modelling the 3D porous microstructure of random isotropic two-phase materials is proposed in this paper, which combines the two sets of methods and hence maintains the accuracy of the correlation-based method with improved efficiency. The proposed technique is verified for 3D reconstructions based on silica polymer composite images with different volume fractions. A comparison of the reconstructed microstructures and the optimization histories for both the original correlation-based method and our hybrid approach demonstrates the improved efficiency of the approach.
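A core building block of the correlation-based reconstruction is the two-point correlation function S2(r): the probability that two points a distance r apart both lie in the phase of interest. A minimal sketch for a binary image, sampling along one axis only (real implementations average over directions):

```python
import numpy as np

def two_point_correlation(img, max_r):
    """S2(r) along the x-axis of a binary microstructure image:
    probability that two pixels distance r apart are both phase 1."""
    s2 = []
    for r in range(max_r + 1):
        a = img[:, : img.shape[1] - r]
        b = img[:, r:]
        s2.append(float(np.mean(a * b)))
    return s2

rng = np.random.default_rng(1)
img = (rng.random((64, 64)) < 0.3).astype(int)  # ~30% volume fraction
s2 = two_point_correlation(img, 5)
print(round(s2[0], 2))  # S2(0) equals the volume fraction
```

The optimization-based method iteratively flips pixels to drive the reconstructed image's S2(r) toward the target curve measured from the 2D sections.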
NASA Technical Reports Server (NTRS)
Radovcich, N. A.; Dreim, D.; Okeefe, D. A.; Linner, L.; Pathak, S. K.; Reaser, J. S.; Richardson, D.; Sweers, J.; Conner, F.
1985-01-01
Work performed in the design of a transport aircraft wing for maximum fuel efficiency is documented with emphasis on design criteria, design methodology, and three design configurations. The design database includes complete finite element model description, sizing data, geometry data, loads data, and inertial data. A design process which satisfies the economics and practical aspects of a real design is illustrated. The cooperative study relationship between the contractor and NASA during the course of the contract is also discussed.
Speech Optimization at 9600 Bits/Second. Volume 2. Real-Time Software and Hardware.
1980-09-30
When resumed, the task closes the MAP and exits. If a complete, two-MAP system is desired, the process...
Garson, Christopher D; Li, Bing; Acton, Scott T; Hossack, John A
2008-06-01
The active surface technique using gradient vector flow allows semi-automated segmentation of ventricular borders. The accuracy of the algorithm depends on the optimal selection of several key parameters. We investigated the use of conservation of myocardial volume for quantitative assessment of each of these parameters using synthetic and in vivo data. We predicted that for a given set of model parameters, strong conservation of volume would correlate with accurate segmentation. The metric was most useful when applied to the gradient vector field weighting and temporal step-size parameters, but less effective in guiding an optimal choice of the active surface tension and rigidity parameters.
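The conservation-of-volume idea can be turned into a simple scalar metric: for a fixed parameter set, segment every frame and measure how much the total segmented volume varies over the cycle. A sketch with hypothetical volumes; the paper's exact metric may differ:

```python
def volume_conservation_error(volumes):
    """Relative spread of segmented myocardial volume across frames;
    lower values suggest better segmentation parameters (simple sketch)."""
    v_mean = sum(volumes) / len(volumes)
    return (max(volumes) - min(volumes)) / v_mean

# Hypothetical per-frame segmented volumes (mL) over one cardiac cycle:
print(volume_conservation_error([100.0, 101.0, 99.5, 100.5]))
```

A parameter sweep would evaluate this error for each candidate setting of the gradient vector field weighting and step size, preferring the settings with the smallest spread.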
Lima, Jakelyne; Cerdeira, Louise Teixeira; Bol, Erick; Schneider, Maria Paula Cruz; Silva, Artur; Azevedo, Vasco; Abelém, Antônio Jorge Gomes
2012-01-01
Improvements in genome sequencing techniques have resulted in the generation of huge volumes of data. As a consequence of this progress, the genome assembly stage demands even more computational power, since the incoming sequence files contain large amounts of data. To speed up the process, it is often necessary to distribute the workload among a group of machines. However, this requires hardware and software solutions specially configured for this purpose. Grid computing tries to simplify this process of aggregating resources, but does not always offer the best possible performance due to the heterogeneity and decentralized management of its resources. Thus, it is necessary to develop software that takes these peculiarities into account. To achieve this purpose, we developed an algorithm that adapts the de novo assembly software ABySS to operate efficiently in grids. We ran ABySS with and without our algorithm in the grid simulator SimGrid. Tests showed that our algorithm is viable, flexible, and scalable even in a heterogeneous environment, improving genome assembly time in computational grids without changing assembly quality. PMID:22461785
Implementation of an optimized microfluidic mixer in alumina employing femtosecond laser ablation
NASA Astrophysics Data System (ADS)
Juodėnas, M.; Tamulevičius, T.; Ulčinas, O.; Tamulevičius, S.
2018-01-01
Manipulation of liquids at the lowest levels of volume and dimension is at the forefront of materials science, chemistry and medicine, offering important time- and resource-saving applications. However, mixing is troublesome at microliter and smaller scales. One approach to overcoming this problem is to use passive mixers, which exploit structural obstacles within microfluidic channels, or the geometry of the channels themselves, to enforce and enhance fluid mixing. Some applications require the manipulation and mixing of aggressive substances, which makes conventional microfluidic materials, along with their fabrication methods, inappropriate. In this work, the implementation of an optimized, full-scale, three-port microfluidic mixer is presented in a slide of alumina, a material that is very hard to process but possesses extreme chemical and physical resistance. The viability of the selected femtosecond laser fabrication method as an alternative to conventional lithography methods, which are unable to process this material, is demonstrated. For the validation and optimization of the microfluidic mixer, finite element method (FEM) based numerical modeling of the influence of the mixer geometry on its mixing performance is completed. Experimental investigation of the laminar flow geometry demonstrated very good agreement with the numerical simulation results. Such a laser ablation microfabricated passive mixer structure is intended for use in a capillary force assisted nanoparticle assembly setup (CAPA).
Optimizing Polymer Infusion Process for Thin Ply Textile Composites with Novel Matrix System
Bhudolia, Somen K.; Perrotey, Pavel; Joshi, Sunil C.
2017-01-01
For mass production of structural composites, use of different textile patterns, custom preforming, room temperature cure high performance polymers and simplistic manufacturing approaches are desired. Woven fabrics are widely used for infusion processes owing to their high permeability but their localised mechanical performance is affected due to inherent associated crimps. The current investigation deals with manufacturing low-weight textile carbon non-crimp fabrics (NCFs) composites with a room temperature cure epoxy and a novel liquid Methyl methacrylate (MMA) thermoplastic matrix, Elium®. Vacuum assisted resin infusion (VARI) process is chosen as a cost effective manufacturing technique. Process parameters optimisation is required for thin NCFs due to intrinsic resistance it offers to the polymer flow. Cycles of repetitive manufacturing studies were carried out to optimise the NCF-thermoset (TS) and NCF with novel reactive thermoplastic (TP) resin. It was noticed that the controlled and optimised usage of flow mesh, vacuum level and flow speed during the resin infusion plays a significant part in deciding the final quality of the fabricated composites. The material selections, the challenges met during the manufacturing and the methods to overcome these are deliberated in this paper. An optimal three stage vacuum technique developed to manufacture the TP and TS composites with high fibre volume and lower void content is established and presented. PMID:28772654
NASA Astrophysics Data System (ADS)
Thomas, L.; Tremblais, B.; David, L.
2014-03-01
Optimization of the multiplicative algebraic reconstruction technique (MART), simultaneous MART and block iterative MART reconstruction techniques was carried out on synthetic and experimental data. Different criteria were defined to improve the preprocessing of the initial images. Knowledge of how each reconstruction parameter influences the quality of the particle volume reconstruction and the computing time is key in Tomo-PIV. These criteria were applied to a real case, a jet in cross flow, and were validated.
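The basic MART update, each ray multiplicatively rescaling the voxels it crosses by the ratio of measured to projected intensity, can be sketched as follows. This is the unblocked form with a relaxation exponent; the optimized simultaneous and block-iterative variants studied in the paper differ in update ordering and grouping:

```python
import numpy as np

def mart(A, b, n_iter=50, mu=1.0):
    """Multiplicative ART: for each ray i, rescale voxel intensities by
    (b_i / projected_i) raised to mu * a_ij (basic sequential form)."""
    x = np.ones(A.shape[1])  # positive initial guess, as MART requires
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            proj = A[i] @ x
            if proj > 0 and b[i] > 0:
                x *= (b[i] / proj) ** (mu * A[i])
    return x

# Tiny system: 3 rays over 3 voxels, consistent measurements.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
x = mart(A, A @ x_true)
print(np.round(x, 2))
```

The multiplicative form keeps voxel intensities non-negative, which is the property that makes MART attractive for particle volume reconstruction.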
NASA Technical Reports Server (NTRS)
Pan, Ning
1992-01-01
Although the question of the minimum or critical fiber volume fraction, beyond which a composite is strengthened by the addition of fibers, has been dealt with by several investigators for both continuous and short fiber composites, a study of the maximum or optimal fiber volume fraction at which the composite reaches its highest strength has not yet been reported. The present analysis investigates this issue for the short-fiber case based on the well-known shear-lag (elastic stress transfer) theory as a first step. Using the relationships obtained, the minimum spacing between fibers is determined, from which the maximum fiber volume fraction can be calculated, depending on the fiber packing forms within the composite. The effects on this maximum fiber volume fraction of factors such as fiber and matrix properties, fiber aspect ratio and fiber packing forms are discussed. Furthermore, combined with the previous analysis of the minimum fiber volume fraction, this maximum fiber volume fraction can be used to examine the property compatibility of fiber and matrix in forming a composite. This is deemed useful for composite design. Finally, some examples are provided to illustrate the results.
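Given the minimum inter-fiber spacing from the shear-lag analysis, the maximum fiber volume fraction follows from the assumed packing geometry. A sketch using the standard square and hexagonal packing limits; the spacing value below is arbitrary, not derived from the paper's relations:

```python
import math

def max_fiber_volume_fraction(d, s_min, packing="hexagonal"):
    """Upper bound on fiber volume fraction for fiber diameter d and
    minimum surface-to-surface spacing s_min (idealized unit cells)."""
    ratio = (d / (d + s_min)) ** 2   # area shrinkage due to required spacing
    if packing == "hexagonal":
        return math.pi / (2 * math.sqrt(3)) * ratio
    if packing == "square":
        return math.pi / 4 * ratio
    raise ValueError("unknown packing")

# Touching fibers (s_min = 0) recover the classic packing limits:
print(round(max_fiber_volume_fraction(7e-6, 0.0, "hexagonal"), 3))
print(round(max_fiber_volume_fraction(7e-6, 0.0, "square"), 3))
```

Any required spacing s_min > 0 reduces both limits by the factor (d/(d+s_min))², which is where the shear-lag-derived minimum spacing enters.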
Xue, Juan Qin; Liu, Ni Na; Li, Guo Ping; Dang, Long Tao
To solve the disposal problem of cyanide wastewater, the removal of cyanide from wastewater using a water-in-oil emulsion liquid membrane (ELM) was studied in this work. Specifically, the effects of the surfactant Span-80, the carrier trioctylamine (TOA), the NaOH stripping solution and the emulsion-to-external-phase volume ratio on the removal of cyanide were investigated. Removal of total cyanide was determined using the silver nitrate titration method. Regression analysis and optimization of the conditions were conducted using the Design-Expert software and response surface methodology (RSM). The actual cyanide removals and the removals predicted by the RSM analysis were in close agreement, and the optimal conditions were determined to be as follows: volume fraction of Span-80, 4% (v/v); volume fraction of TOA, 4% (v/v); concentration of NaOH, 1% (w/v); and emulsion-to-external-phase volume ratio, 1:7. Under the optimum conditions, the removal of total cyanide was 95.07%, in close agreement with the RSM-predicted removal of 94.90%. The treatment of cyanide wastewater using an ELM is an effective technique for application in industry.
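Response surface methodology fits a low-order polynomial to the measured responses and locates its stationary point. A one-factor sketch of that idea (the multi-factor case fits a full quadratic surface in all factors; the data below are synthetic, not the paper's measurements):

```python
import numpy as np

def quadratic_optimum(x, y):
    """Fit a one-factor quadratic response surface y = a + b*x + c*x**2
    and return its stationary point -b/(2c) (simplified RSM stand-in)."""
    c, b, a = np.polyfit(x, y, 2)  # polyfit returns highest degree first
    return -b / (2 * c)

# Synthetic removal data peaking near a Span-80 fraction of 4% (v/v):
x = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([88.0, 93.0, 95.0, 93.2, 88.5])
print(round(quadratic_optimum(x, y), 1))
```

Software such as Design-Expert performs the same fit over all factors simultaneously and reports the joint optimum with significance statistics.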
SU-E-T-436: Fluence-Based Trajectory Optimization for Non-Coplanar VMAT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smyth, G; Bamber, JC; Bedford, JL
2015-06-15
Purpose: To investigate a fluence-based trajectory optimization technique for non-coplanar VMAT for brain cancer. Methods: Single-arc non-coplanar VMAT trajectories were determined using a heuristic technique for five patients. Organ at risk (OAR) volume intersected during raytracing was minimized for two cases: absolute volume and the sum of relative volumes weighted by OAR importance. These trajectories and coplanar VMAT formed starting points for the fluence-based optimization method. Iterative least squares optimization was performed on control points 24° apart in gantry rotation. Optimization minimized the root-mean-square (RMS) deviation of PTV dose from the prescription (relative importance 100), maximum dose to the brainstem (10), optic chiasm (5), globes (5) and optic nerves (5), plus mean dose to the lenses (5), hippocampi (3), temporal lobes (2), cochleae (1) and brain excluding other regions of interest (1). Control point couch rotations were varied in steps of up to 10° and accepted if the cost function improved. Final treatment plans were optimized with the same objectives in an in-house planning system and evaluated using a composite metric - the sum of optimization metrics weighted by importance. Results: The composite metric decreased with fluence-based optimization in 14 of the 15 plans. In the remaining case its overall value, and the PTV and OAR components, were unchanged but the balance of OAR sparing differed. PTV RMS deviation was improved in 13 cases and unchanged in two. The OAR component was reduced in 13 plans. In one case the OAR component increased but the composite metric decreased - a 4 Gy increase in OAR metrics was balanced by a reduction in PTV RMS deviation from 2.8% to 2.6%. Conclusion: Fluence-based trajectory optimization improved plan quality as defined by the composite metric. While dose differences were case specific, fluence-based optimization improved both PTV and OAR dosimetry in 80% of cases.
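The composite plan-quality metric described above is simply a weighted sum of the individual optimization terms. A sketch with hypothetical structure names and values (only the PTV and two OAR terms are shown; the abstract lists many more):

```python
def composite_metric(metrics, weights):
    """Weighted sum of plan-quality terms; lower is better. Structure
    names and values here are illustrative, not from the study."""
    return sum(weights[k] * metrics[k] for k in metrics)

weights = {"ptv_rms": 100, "brainstem_max": 10, "chiasm_max": 5}
plan_a = {"ptv_rms": 2.8, "brainstem_max": 1.2, "chiasm_max": 0.9}
plan_b = {"ptv_rms": 2.6, "brainstem_max": 1.5, "chiasm_max": 1.0}
print(composite_metric(plan_a, weights))
print(composite_metric(plan_b, weights))
```

This mirrors the trade-off noted in the results: plan_b is worse on the OAR terms but better overall because the heavily weighted PTV term improved.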
Torres, Yadir; Lascano, Sheila; Bris, Jorge; Pavón, Juan; Rodriguez, José A
2014-04-01
One of the most important concerns in long-term prostheses is bone resorption as a result of the stress shielding due to stiffness mismatch between bone and implant. The aim of this study was to obtain porous titanium with stiffness values similar to that exhibited by cortical bone. Porous samples of commercial pure titanium grade-4 were obtained by following both loose-sintering processing and space-holder technique with NaCl between 40 and 70% in volume fraction. Both mechanical properties and porosity morphology were assessed. Young's modulus was measured using uniaxial compression testing, as well as ultrasound methodology. Complete characterization and mechanical testing results allowed us to determine some important findings: (i) optimal parameters for both processing routes; (ii) better mechanical response was obtained by using space-holder technique; (iii) pore geometry of loose sintering samples becomes more regular with increasing sintering temperature; in the case of the space-holder technique that trend was observed for decreasing volume fraction; (iv) most reliable Young's modulus measurements were achieved by ultrasound technique.
NASA Astrophysics Data System (ADS)
Ogila, K. O.; Yang, W.; Shao, M.; Tan, J.
2017-05-01
Unsaturated Polyester Resin is a versatile and cost-efficient thermosetting plastic whose application in rotational molding is currently limited by its relatively high initial viscosity and heat of reaction. These material characteristics result in uneven material distribution, poor surface finish and imperfections in the moldings, especially when large wall thicknesses are required. The current work attempts to remedy these shortcomings through the development of a continuous multi-shot system which adds predetermined loads of unsaturated polyester resin into a rotating mold at various intervals. As part of this system, a laboratory-scale uniaxial rotational molding machine was used to produce Unsaturated Polyester Resin moldings in single and double shots. Optimal processing conditions were determined through visual studies, three-dimensional microscopic studies, thickness distribution analysis and Fourier Transform Infrared spectroscopy. Volume filling fractions of 0.049-0.065, second-shot volumes of 0.5-0.75 times the first-shot volume, rotational speeds of 15-20 rpm and temperatures of 30-50 °C resulted in moldings of suitable quality on both the inner and outer surfaces.
Kołacińska, Kamila; Chajduk, Ewelina; Dudek, Jakub; Samczyński, Zbigniew; Łokas, Edyta; Bojanowska-Czajka, Anna; Trojanowicz, Marek
2017-07-01
⁹⁰Sr is a widely determined radionuclide for environmental purposes and nuclear waste control, and can also be monitored in coolants in nuclear reactor plants. In the developed method, ICP-MS detection was employed together with sample processing in a sequential injection analysis (SIA) setup, equipped with a lab-on-valve with mechanized renewal of the sorbent bed for solid-phase extraction. The optimized conditions of determination included preconcentration of ⁹⁰Sr on a cation-exchange column and removal of different types of interferences using extraction Sr-resin. The limit of detection of the developed procedure depends essentially on the configuration of the employed ICP-MS spectrometer and on the available volume of the sample to be analyzed. For a 1 L initial sample volume, the method detection limit (MDL) was evaluated as 2.9 ppq (14.5 Bq L⁻¹). The developed method was applied to analyze spiked river water samples, water reference materials, and also simulated and real samples of nuclear reactor coolant.
Pore water sampling in acid sulfate soils: a new peeper method.
Johnston, Scott G; Burton, Edward D; Keene, Annabelle F; Bush, Richard T; Sullivan, Leigh A; Isaacson, Lloyd
2009-01-01
This study describes the design, deployment, and application of a modified equilibration dialysis device (peeper) optimized for sampling pore waters in acid sulfate soils (ASS). The modified design overcomes the limitations of traditional-style peepers when sampling firm ASS materials over relatively large depth intervals. The new peeper device uses removable, individual cells of 25 mL volume housed in a 1.5 m long rigid, high-density polyethylene rod. The rigid housing structure allows the device to be inserted directly into relatively firm soils without requiring a supporting frame. The use of removable cells eliminates the need for a large glove-box after peeper retrieval, thus simplifying physical handling. Removable cells are easily maintained in an inert atmosphere during sample processing, and the 25 mL sample volume is sufficient for undertaking multiple analyses. A field evaluation of equilibration times indicates that a deployment of 32 to 38 d was necessary. Overall, the modified method is simple, effective, and well suited to the acquisition and processing of redox-sensitive pore water profiles >1 m deep in acid sulfate soils or other firm wetland soils.
NASA Astrophysics Data System (ADS)
Babier, Aaron; Boutilier, Justin J.; Sharpe, Michael B.; McNiven, Andrea L.; Chan, Timothy C. Y.
2018-05-01
We developed and evaluated a novel inverse optimization (IO) model to estimate objective function weights from clinical dose-volume histograms (DVHs). These weights were used to solve a treatment planning problem to generate ‘inverse plans’ that had similar DVHs to the original clinical DVHs. Our methodology was applied to 217 clinical head and neck cancer treatment plans that were previously delivered at Princess Margaret Cancer Centre in Canada. Inverse plan DVHs were compared to the clinical DVHs using objective function values, dose-volume differences, and frequency of clinical planning criteria satisfaction. Median differences between the clinical and inverse DVHs were within 1.1 Gy. For most structures, the difference in clinical planning criteria satisfaction between the clinical and inverse plans was at most 1.4%. For structures where the two plans differed by more than 1.4% in planning criteria satisfaction, the difference in average criterion violation was less than 0.5 Gy. Overall, the inverse plans were very similar to the clinical plans. Compared with a previous inverse optimization method from the literature, our new inverse plans typically satisfied the same or more clinical criteria, and had consistently lower fluence heterogeneity. Overall, this paper demonstrates that DVHs, which are essentially summary statistics, provide sufficient information to estimate objective function weights that result in high quality treatment plans. However, as with any summary statistic that compresses three-dimensional dose information, care must be taken to avoid generating plans with undesirable features such as hotspots; our computational results suggest that such undesirable spatial features were uncommon. Our IO-based approach can be integrated into the current clinical planning paradigm to better initialize the planning process and improve planning efficiency. 
It could also be embedded in a knowledge-based planning or adaptive radiation therapy framework to automatically generate a new plan given a predicted or updated target DVH, respectively.
Hayn, Dieter; Kreiner, Karl; Ebner, Hubert; Kastner, Peter; Breznik, Nada; Rzepka, Angelika; Hofmann, Axel; Gombotz, Hans; Schreier, Günter
2017-06-14
Blood transfusion is a highly prevalent procedure in hospitalized patients, and in some clinical scenarios it has lifesaving potential. However, in most cases transfusion is administered to hemodynamically stable patients with no benefit, but with increased odds of adverse patient outcomes and substantial direct and indirect costs. Therefore, the concept of Patient Blood Management has increasingly gained importance to pre-empt and reduce transfusion and to identify the optimal transfusion volume for an individual patient when transfusion is indicated. It was our aim to describe how predictive modeling and machine learning tools applied to pre-operative data can be used to predict the amount of red blood cells to be transfused during surgery and to prospectively optimize blood ordering schedules. In addition, the data derived from the predictive models should be used to benchmark different hospitals concerning their blood transfusion patterns. 6,530 case records obtained for elective surgeries from 16 centers taking part in two studies conducted in 2004-2005 and 2009-2010 were analyzed. Transfused red blood cell volume was predicted using random forests. Separate models were trained on the overall data, for each center, and for each of the two studies. Important characteristics of the different models were compared with one another. Our results indicate that predictive modeling applied prior to surgery can predict the transfused volume of red blood cells more accurately (correlation coefficient cc = 0.61) than state-of-the-art algorithms (cc = 0.39). We found significantly different patterns of feature importance (a) in different hospitals and (b) between study 1 and study 2. We conclude that predictive modeling can be used to benchmark the importance of different features in models derived from data from different hospitals. This might help to optimize crucial processes in a specific hospital, even in scenarios beyond Patient Blood Management.
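The prediction step described above can be sketched with scikit-learn standing in for the authors' pipeline: fit a random forest on pre-operative features, then inspect the per-feature importances that the paper compares across hospitals. The feature names, synthetic data, and hyperparameters below are illustrative assumptions, not the study's data.

```python
# Hedged sketch: random-forest prediction of transfused RBC volume from
# (hypothetical) pre-operative predictors, with feature importances as
# the benchmarking quantity. All data here are synthetic assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
hemoglobin = rng.normal(13.5, 1.5, n)       # g/dL (hypothetical predictor)
weight = rng.normal(75.0, 12.0, n)          # kg (hypothetical predictor)
duration = rng.normal(180.0, 40.0, n)       # surgery duration, min
X = np.column_stack([hemoglobin, weight, duration])
# Synthetic target: lower hemoglobin and longer surgery -> more RBC volume
y = np.clip(600.0 - 30.0 * hemoglobin + 1.5 * duration
            + rng.normal(0.0, 40.0, n), 0.0, None)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
importances = dict(zip(["hemoglobin", "weight", "duration"],
                       model.feature_importances_))
print(importances)  # profile of feature importance, comparable across sites
```

Comparing such importance profiles between centers is one way to reproduce the paper's benchmarking idea under these simplified assumptions.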
NASA Astrophysics Data System (ADS)
McIntosh, Chris; Purdie, Thomas G.
2017-01-01
Automating the radiotherapy treatment planning process is a technically challenging problem. The majority of automated approaches have focused on customizing and inferring dose-volume objectives to be used in plan optimization. In this work we outline a multi-patient atlas-based dose prediction approach that learns to predict the dose per voxel for a novel patient directly from the computed tomography planning scan, without the requirement of specifying any objectives. Our method learns to automatically select the most effective atlases for a novel patient, and then maps the dose from those atlases onto the novel patient. We extend our previous work to include a conditional random field for the optimization of a joint distribution prior that matches the complementary goals of an accurate spatial dose distribution while still adhering to the desired dose-volume histograms. The resulting distribution can then be used for inverse planning with a new spatial dose objective, or to create typical dose-volume objectives for the canonical optimization pipeline. We investigated six treatment sites (633 patients for training and 113 patients for testing) and evaluated the mean absolute difference in all DVHs for the clinical and predicted dose distributions. The results are, on average, favorable in comparison to our previous approach (1.91 versus 2.57). Comparing our method with and without atlas selection further validates that atlas selection improved dose prediction on average in whole breast (0.64 versus 1.59), prostate (2.13 versus 4.07), and rectum (1.46 versus 3.29), while it is less important in breast cavity (0.79 versus 0.92) and lung (1.33 versus 1.27), for which there is high conformity and minimal dose shaping. In CNS brain, atlas selection has the potential to be impactful (3.65 versus 5.09), but selecting the ideal atlas is the most challenging.
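The evaluation metric used above, the mean absolute difference between clinical and predicted DVHs, can be computed with a short sketch. The dose arrays, dose range, and binning below are illustrative assumptions, not the study's settings.

```python
# Sketch of the DVH comparison metric: build cumulative dose-volume
# histograms from two voxel dose arrays and average their absolute
# bin-wise difference. Data, dose range, and bin count are assumptions.
import numpy as np

def dvh(dose_voxels, bin_edges):
    """Cumulative DVH: fraction of voxels receiving at least each dose level."""
    dose_voxels = np.asarray(dose_voxels, dtype=float)
    return np.array([(dose_voxels >= d).mean() for d in bin_edges])

def dvh_mad(dose_a, dose_b, max_dose=70.0, n_bins=140):
    """Mean absolute difference between two DVHs on a shared dose grid."""
    bins = np.linspace(0.0, max_dose, n_bins)
    return np.abs(dvh(dose_a, bins) - dvh(dose_b, bins)).mean()

clinical = np.random.default_rng(1).uniform(0.0, 60.0, 10000)
predicted = clinical + np.random.default_rng(2).normal(0.0, 1.0, 10000)
print(dvh_mad(clinical, predicted))  # small value for a close prediction
```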
Quality assessment for VMAT prostate radiotherapy planning based on data envelopment analysis
NASA Astrophysics Data System (ADS)
Lin, Kuan-Min; Simpson, John; Sasso, Giuseppe; Raith, Andrea; Ehrgott, Matthias
2013-08-01
The majority of commercial radiotherapy treatment planning systems require planners to iteratively adjust plan parameters in order to find a satisfactory plan. This iterative, trial-and-error nature of radiotherapy treatment planning results in an inefficient planning process, and to contain this inefficiency, plans may be accepted without achieving the best attainable quality. We propose a quality assessment method based on data envelopment analysis (DEA) to address this inefficiency. This method compares a plan of interest to a set of past delivered plans and searches for evidence of potential further improvement. With the assistance of DEA, planners can make informed decisions on whether further planning is required and ensure that a plan is only accepted when its quality is close to the best attainable. We apply the DEA method to 37 prostate plans using two assessment parameters: rectal generalized equivalent uniform dose (gEUD) as the input and D95 (the minimum dose received by 95% of the volume of a structure) of the planning target volume (PTV) as the output. The percentage volume of rectum overlapping the PTV is used to account for anatomical variations between patients and is included in the model as a non-discretionary output variable. Five plans that are considered of lesser quality by DEA are re-optimized with the goal of further improving rectal sparing. After re-optimization, all five plans improve in rectal gEUD without clinically considerable deterioration of the PTV D95 value. For the five re-optimized plans, the rectal gEUD is reduced by an average of 1.84 Gray (Gy) with only an average reduction of 0.07 Gy in PTV D95. The results demonstrate that DEA can correctly identify plans with potential improvements in terms of the chosen input and outputs.
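The DEA efficiency score at the heart of this approach can be sketched with the standard input-oriented CCR model, solved as a linear program. This is simpler than the paper's model (the non-discretionary output is omitted), and the toy input/output values stand in for rectal gEUD and PTV D95.

```python
# Minimal input-oriented CCR DEA sketch: for each plan (DMU), solve an LP
# for its efficiency score theta; theta < 1 flags potential improvement.
# The two-plan toy data below are assumptions, not the paper's data.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """X: (n_plans, n_inputs), e.g. rectal gEUD; Y: (n_plans, n_outputs), e.g. PTV D95."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1..lambda_n]; minimize theta.
    c = np.concatenate([[1.0], np.zeros(n)])
    # Inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    # Outputs: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    A = np.vstack([A_in, A_out])
    b = np.concatenate([np.zeros(m), -Y[o]])
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (n + 1))
    return res.x[0]

X = np.array([[2.0], [4.0]])   # input to minimize (e.g. rectal gEUD)
Y = np.array([[2.0], [2.0]])   # output to maintain (e.g. PTV D95)
print(dea_efficiency(X, Y, 0), dea_efficiency(X, Y, 1))  # ~1.0 and ~0.5
```

Plan 1 uses twice the input for the same output, so its score of roughly 0.5 signals room for improvement, mirroring how the paper flags plans for re-optimization.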
NASA Astrophysics Data System (ADS)
Lu, Mengqian; Lall, Upmanu; Robertson, Andrew W.; Cook, Edward
2017-03-01
Streamflow forecasts at multiple time scales provide a new opportunity for reservoir management to address competing objectives. Market instruments such as forward contracts with specified reliability are considered as a tool that may help address the perceived risk associated with the use of such forecasts in lieu of traditional operation and allocation strategies. A water allocation process that enables multiple contracts for water supply and hydropower production with different durations, while maintaining a prescribed level of flood risk reduction, is presented. The allocation process is supported by an optimization model that considers multi-time-scale ensemble forecasts of monthly streamflow and flood volume over the upcoming season and year, and the desired reliability and pricing of proposed contracts for hydropower and water supply. It solves for the size of contracts at each reliability level that can be allocated for each future period, while meeting a target end-of-period reservoir storage with a prescribed reliability. The contracts may be insurable, given that their reliability is verified through retrospective modeling. The process can allow reservoir operators to overcome their concerns as to the appropriate skill of probabilistic forecasts, while providing water users with short-term and long-term guarantees as to how much water or energy they may be allocated. An application of the optimization model to the Bhakra Dam, India, provides an illustration of the process. The issues of forecast skill and contract performance are examined. A field engagement of the idea would be useful for developing a real-world perspective and needs a suitable institutional environment.
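The notion of a contract "with specified reliability" can be illustrated with a toy quantile rule: size a firm contract as the ensemble-forecast quantile at which the contracted volume is available with the desired probability. This is far simpler than the paper's full optimization model, and the ensemble values below are synthetic assumptions.

```python
# Toy illustration of reliability-based contract sizing: the contract at
# reliability p is the (1 - p) quantile of the ensemble inflow forecast,
# so it can be honored in roughly a fraction p of ensemble members.
# The synthetic lognormal ensemble is an assumption for the sketch.
import numpy as np

rng = np.random.default_rng(0)
ensemble_inflow = rng.lognormal(mean=3.0, sigma=0.5, size=1000)  # volume units

def contract_size(ensemble, reliability):
    """Largest volume deliverable in at least `reliability` of ensemble members."""
    return np.quantile(ensemble, 1.0 - reliability)

for p in (0.5, 0.9, 0.99):
    print(p, contract_size(ensemble_inflow, p))  # higher p -> smaller contract
```

The monotone trade-off (higher reliability, smaller guaranteed volume) is the quantity the paper's model prices and allocates across contract durations.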
Optimal nonlinear filtering using the finite-volume method
NASA Astrophysics Data System (ADS)
Fox, Colin; Morrison, Malcolm E. K.; Norton, Richard A.; Molteno, Timothy C. A.
2018-01-01
Optimal sequential inference, or filtering, for the state of a deterministic dynamical system requires simulation of the Frobenius-Perron operator, that can be formulated as the solution of a continuity equation. For low-dimensional, smooth systems, the finite-volume numerical method provides a solution that conserves probability and gives estimates that converge to the optimal continuous-time values, while a Courant-Friedrichs-Lewy-type condition assures that intermediate discretized solutions remain positive density functions. This method is demonstrated in an example of nonlinear filtering for the state of a simple pendulum, with comparison to results using the unscented Kalman filter, and for a case where rank-deficient observations lead to multimodal probability distributions.
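A one-dimensional sketch of the conservative upwind finite-volume update for the continuity equation is given below. The constant velocity, periodic boundary, and grid settings are assumptions for the sketch; the paper applies the scheme in the pendulum's two-dimensional phase space. The scheme conserves total probability exactly, and the CFL condition keeps the discrete density nonnegative.

```python
# 1-D sketch of the filter's prediction step: evolve a probability
# density under the continuity equation with a conservative upwind
# finite-volume scheme. Velocity and grid values are assumptions.
import numpy as np

def upwind_step(p, v, dx, dt):
    """One conservative upwind update (v > 0); requires CFL: v*dt/dx <= 1."""
    assert v * dt / dx <= 1.0, "CFL condition violated"
    flux = v * p                       # flux at cell interfaces, upwinded
    # Periodic boundary: flux leaving cell i enters cell i+1.
    return p - (dt / dx) * (flux - np.roll(flux, 1))

n, dx, v = 200, 0.05, 1.0
dt = 0.8 * dx / v                      # safely inside the CFL limit
x = np.arange(n) * dx
p = np.exp(-((x - 2.0) ** 2) / 0.1)
p /= p.sum() * dx                      # normalize to a probability density
for _ in range(100):
    p = upwind_step(p, v, dx, dt)
print(p.sum() * dx)                    # total probability is conserved
```

Because the update is a convex combination of neighboring cell values under the CFL condition, intermediate solutions remain valid (nonnegative, unit-mass) densities, which is the property the abstract emphasizes.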
Equivalent Air Spring Suspension Model for Quarter-Passive Model of Passenger Vehicles.
Abid, Haider J; Chen, Jie; Nassar, Ameen A
2015-01-01
This paper investigates the equivalence of the GENSIS air spring suspension system to a passive suspension system. SIMULINK simulation together with OptiY optimization is used to obtain an air spring suspension model equivalent to the passive suspension system, where the difference between the car body responses of the two systems under the same road profile inputs is used as the objective function for optimization (in the OptiY program). The parameters of the air spring system, such as initial pressure, bag volume, surge pipe length, surge pipe diameter, and reservoir volume, are obtained from the optimization. The simulation results show that the equivalent air spring suspension system can produce responses very close to those of the passive suspension system.
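The matching idea can be sketched with open-source tools standing in for SIMULINK and OptiY: fit one model's parameters so that its body response matches a reference response, with the squared response difference as the objective. The single-mass quarter-car simplification (road velocity neglected) and all numbers below are illustrative assumptions.

```python
# Sketch of response-matching by optimization: recover the spring
# stiffness k and damping c of a simple 1-DOF body model so that its
# step-road response matches a reference response. Parameter values
# and the simplified model are assumptions, not the paper's setup.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

m = 300.0                              # sprung mass, kg (assumed)
road = lambda t: 0.05 * (t > 0.5)      # 5 cm step road input

def body_response(k, c, t_eval):
    def ode(t, y):
        x, xdot = y
        # Spring to road, damper on body velocity (road velocity neglected).
        return [xdot, (k * (road(t) - x) - c * xdot) / m]
    sol = solve_ivp(ode, (0.0, 3.0), [0.0, 0.0], t_eval=t_eval, max_step=0.01)
    return sol.y[0]

t = np.linspace(0.0, 3.0, 200)
reference = body_response(20000.0, 1500.0, t)   # stand-in "air spring" response

def objective(params):
    k, c = params
    return np.sum((body_response(k, c, t) - reference) ** 2)

res = minimize(objective, x0=[15000.0, 1000.0], method="Nelder-Mead")
print(res.x)   # fitted (k, c); should approach (20000, 1500)
```

In the paper the roles are reversed (air spring parameters are fitted to match the passive system), but the objective-function construction is the same.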
NASA Technical Reports Server (NTRS)
1975-01-01
The model is described along with data preparation, determination of model parameters, initialization and optimization of parameters (calibration), selection of control options, and interpretation of results. Some background information is included, and appendices contain a dictionary of variables, a source program listing, and flow charts. The model was operated on an IBM System/360 Model 44, using a Model 2250 keyboard/graphics terminal for interactive operation. The model can be set up and operated in a batch-processing mode on any System/360 or 370 that has sufficient memory capacity. The model requires 210K bytes of core storage, and the optimization program, OPSET (which was used previously but not in this study), requires 240K bytes. The data band for one small watershed requires approximately 32 tracks of disk storage.
Lighting design for globally illuminated volume rendering.
Zhang, Yubo; Ma, Kwan-Liu
2013-12-01
With the evolution of graphics hardware, high-quality global illumination has become available for real-time volume rendering. Compared to local illumination, global illumination can produce realistic shading effects that are closer to real-world scenes, and has proven useful for enhancing volume data visualization to enable better depth and shape perception. However, setting up optimal lighting can be a nontrivial task for average users. Previous lighting design work for volume visualization did not consider global light transport. In this paper, we present a lighting design method for volume visualization employing global illumination. The resulting system takes into account the view- and transfer-function-dependent content of the volume data to automatically generate an optimized three-point lighting environment. Our method fully exploits the back light, which is not used by previous volume visualization systems. By also including global shadows and multiple scattering, our lighting system can effectively enhance the depth and shape perception of volumetric features of interest. In addition, we propose an automatic tone mapping operator which recovers visual details from overexposed areas while maintaining sufficient contrast in the dark areas. We show that our method is effective for visualizing volume datasets with complex structures. The structural information is more clearly and correctly presented under the automatically generated light sources.
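The abstract does not specify the authors' tone mapping operator; as a stand-in, the sketch below uses the well-known Reinhard global operator, which likewise compresses overexposed areas while preserving contrast in dark regions. The key value and synthetic luminance data are assumptions.

```python
# Hedged sketch of a global tone mapping step (Reinhard operator, not
# necessarily the paper's operator): scale luminance by its log-average,
# then compress with L/(1+L) so highlights never clip at 1.
import numpy as np

def reinhard_tonemap(luminance, key=0.18, eps=1e-6):
    """Map HDR luminance into [0, 1) while compressing highlights."""
    log_avg = np.exp(np.mean(np.log(luminance + eps)))  # scene log-average
    L = key * luminance / log_avg                       # exposure scaling
    return L / (1.0 + L)                                # highlight compression

hdr = np.random.default_rng(0).lognormal(mean=0.0, sigma=2.0, size=(64, 64))
ldr = reinhard_tonemap(hdr)
print(ldr.min(), ldr.max())   # everything lands inside [0, 1)
```

The L/(1+L) curve is nearly linear for small L (dark areas keep their contrast) and asymptotes below 1 for large L (overexposed areas retain detail), the same qualitative behavior the abstract describes.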
NASA Astrophysics Data System (ADS)
Hope, Adam T.
Many nuclear reactor components previously constructed with Ni-based alloys containing 20 wt% Cr have been found to be susceptible to stress corrosion cracking. The nuclear power industry now uses high-chromium (~30 wt%) Ni-based filler metals to mitigate stress corrosion cracking. Current alloys are plagued with weldability issues, either solidification cracking or ductility dip cracking (DDC). Solidification cracking is related to the solidification temperature range, and DDC is related to the fraction of eutectic present in the microstructure. It was determined that an optimal alloy should have a solidification temperature range of less than 150 °C and at least 2% volume fraction eutectic. Due to the nature of the Nb-rich eutectic that forms, it is difficult to avoid both cracking types simultaneously. Through computational modeling, the alternative eutectic-forming elements Hf and Ta have been identified as replacements for Nb in such alloys. Compositions have been optimized through a combination of computational and experimental techniques combined with a design-of-experiment methodology. Small buttons were melted from commercially pure materials in a copper hearth to obtain the desired compositions. These buttons were then subjected to a gas tungsten arc spot weld. A type C thermocouple was used to acquire the cooling history during the solidification process. The cooling curves were processed using Single Sensor Differential Thermal Analysis to determine the solidification temperature range, an indicator of solidification cracking susceptibility. Metallography was performed to determine the fraction of eutectic present, an indicator of DDC resistance. The optimal level of Hf to resist cracking was found to be 0.25 wt%. The optimal level of Ta was found to be 4 wt%. Gamma/MC-type eutectics were found to form first in all Nb-, Ta-, and Hf-bearing compositions.
Depending on Fe and Cr content, gamma/Laves eutectic was sometimes found in Nb- and Ta-bearing compositions, while Hf-bearing compositions had gamma/Ni7Hf2 as the final eutectic to solidify. This study found that the extra Cr in the current generation of alloys promotes the gamma/Laves eutectic, which expands the solidification temperature range and promotes solidification cracking. Both Ta-bearing and Hf-bearing eutectics were found to solidify at higher temperatures than Nb-bearing eutectics, leading to narrower solidification temperature ranges. Weldability testing on the optimized Ta-bearing compositions revealed good resistance to both DDC and solidification cracking. Unexpectedly, the optimized Hf-bearing compositions were quite susceptible to solidification cracking. This led to an investigation of the possible wetting effect of eutectics on solidification cracking susceptibility, and a theory has been proposed on how wetting affects solidification cracking susceptibility and the volume fraction of eutectic needed for crack healing. Alloys with eutectics that easily wet the grain boundaries have increased solidification cracking susceptibility at low volume fractions of eutectic but, as the fraction of eutectic is increased, experience crack healing at relatively lower eutectic fractions than alloys with eutectics that do not wet as easily. Hf-rich eutectics were found to wet grain boundaries significantly more than Nb-rich eutectics. Additions of Mo were also found to increase the wetting of eutectics in Nb-bearing alloys.
NASA Astrophysics Data System (ADS)
Oh, Jungsu S.; Kim, Jae Seung; Chae, Sun Young; Oh, Minyoung; Oh, Seung Jun; Cha, Seung Nam; Chang, Ho-Jong; Lee, Chong Sik; Lee, Jae Hong
2017-03-01
We present an optimized voxelwise statistical parametric mapping (SPM) of partial-volume (PV)-corrected positron emission tomography (PET) of 11C Pittsburgh Compound B (PiB), incorporating the anatomical precision of magnetic resonance imaging (MRI) and the amyloid-β (Aβ) burden-specificity of PiB PET. First, we applied region-based partial-volume correction (PVC), termed the geometric transfer matrix (GTM) method, to PiB PET, creating MRI-based lobar parcels filled with mean PiB uptakes. Then, we conducted a voxelwise PVC by multiplying the original PET by the ratio of a GTM-based PV-corrected PET to a 6-mm-smoothed PV-corrected PET. Finally, we conducted spatial normalizations of the PV-corrected PETs onto the study-specific template. As such, we increased the accuracy of the SPM normalization and the tissue specificity of the SPM results. Moreover, lobar smoothing (instead of whole-brain smoothing) was applied to increase the signal-to-noise ratio in the image without degrading the tissue specificity. Thereby, we could optimize a voxelwise group comparison between subjects with high and normal Aβ burdens (from 10 patients with Alzheimer's disease, 30 patients with Lewy body dementia, and 9 normal controls). Our SPM framework outperformed the conventional one in terms of the accuracy of the spatial normalization (85% of maximum likelihood tissue classification volume) and the tissue specificity (larger gray matter and smaller cerebrospinal fluid volume fractions in the SPM results). Our SPM framework optimized the SPM of PV-corrected Aβ PET in terms of anatomical precision, normalization accuracy, and tissue specificity, resulting in better detection and localization of Aβ burdens in patients with Alzheimer's disease and Lewy body dementia.
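The voxelwise correction step described above (original PET multiplied by the ratio of the GTM-based piecewise image to its smoothed version) can be sketched in one dimension. The synthetic single-region "images", the 1 mm voxel size implied by `fwhm_vox`, and the blur widths are simplified assumptions.

```python
# 1-D sketch of the ratio-based voxelwise PVC described in the abstract:
# corrected = PET * (GTM piecewise image / smoothed GTM piecewise image).
# Synthetic data and smoothing widths are assumptions for illustration.
import numpy as np
from scipy.ndimage import gaussian_filter

def voxelwise_pvc(pet, gtm_image, fwhm_vox):
    sigma = fwhm_vox / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    smoothed = gaussian_filter(gtm_image, sigma)
    return pet * gtm_image / np.maximum(smoothed, 1e-6)

# Toy 1-D "brain": one hot region whose PET signal is blurred by the scanner.
truth = np.zeros(100)
truth[40:60] = 2.0
pet = gaussian_filter(truth, 3.0)                    # simulated PV effect
gtm = np.zeros(100)
gtm[40:60] = pet[40:60].mean()                       # region filled with its mean
corrected = voxelwise_pvc(pet, gtm, fwhm_vox=6.0)
print(corrected[45:55].mean(), pet[45:55].mean())    # correction boosts the region
```

Inside the region the ratio exceeds one (smoothing leaks signal outward), so the corrected values move back toward the unblurred truth, which is the intent of the PVC step.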
Optimization Routine for Generating Medical Kits for Spaceflight Using the Integrated Medical Model
NASA Technical Reports Server (NTRS)
Graham, Kimberli; Myers, Jerry; Goodenow, Deb
2017-01-01
The Integrated Medical Model (IMM) is a MATLAB model that provides probabilistic assessment of the medical risk associated with human spaceflight missions. Different simulations or profiles can be run in which input conditions regarding both mission characteristics and crew characteristics may vary. For each simulation, the IMM records the total medical events that occur and “treats” each event with resources drawn from import scripts. IMM outputs include Total Medical Events (TME), Crew Health Index (CHI), probability of evacuation (pEVAC), and probability of loss of crew life (pLOCL). The Crew Health Index is determined by the amount of quality time lost (QTL). Previously, an optimization code was implemented in order to efficiently generate medical kits. The kits were optimized to have the greatest benefit possible given a mass and/or volume constraint. A 6-crew, 14-day lunar mission was chosen for the simulation and run through the IMM for 100,000 trials. A built-in MATLAB solver for mixed-integer linear programming was used for the optimization routine. Kits were generated in 10% increments ranging from 10%-100% of the benefit constraints. Conditions where mass alone was minimized, where volume alone was minimized, and where mass and volume were minimized jointly were tested.
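The kit-generation step can be sketched with scipy's MILP solver standing in for MATLAB's mixed-integer linear programming solver: minimize kit mass subject to reaching a given fraction of the maximum attainable benefit, with binary item inclusion. The item benefits and masses below are illustrative assumptions, not IMM data.

```python
# Sketch of benefit-constrained kit optimization as a 0/1 MILP:
# minimize mass @ x  subject to  benefit @ x >= target,  x in {0, 1}.
# Item values are assumptions; the IMM uses its own resource data.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

benefit = np.array([10.0, 6.0, 4.0, 3.0])   # e.g. modeled risk reduction
mass = np.array([5.0, 4.0, 1.0, 2.0])       # kg per resource

def optimal_kit(target_fraction):
    target = target_fraction * benefit.sum()
    res = milp(c=mass,
               constraints=LinearConstraint(benefit.reshape(1, -1),
                                            lb=target, ub=np.inf),
               integrality=np.ones_like(mass),   # all variables binary
               bounds=Bounds(0, 1))
    return res.x.round().astype(int), res.fun

for frac in (0.5, 0.8, 1.0):
    items, kit_mass = optimal_kit(frac)          # one kit per benefit increment
    print(frac, items, kit_mass)
```

Sweeping `target_fraction` in 10% steps reproduces the family of kits described above; swapping `mass` for a volume vector, or a weighted sum of both, gives the volume-only and joint variants.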
JiTTree: A Just-in-Time Compiled Sparse GPU Volume Data Structure.
Labschütz, Matthias; Bruckner, Stefan; Gröller, M Eduard; Hadwiger, Markus; Rautek, Peter
2016-01-01
Sparse volume data structures enable the efficient representation of large but sparse volumes in GPU memory for computation and visualization. However, the choice of a specific data structure for a given data set depends on several factors, such as the memory budget, the sparsity of the data, and data access patterns. In general, there is no single optimal sparse data structure, but a set of several candidates with individual strengths and drawbacks. One solution to this problem is hybrid data structures, which locally adapt themselves to the sparsity. However, they typically suffer from increased traversal overhead, which limits their utility in many applications. This paper presents JiTTree, a novel sparse hybrid volume data structure that uses just-in-time compilation to overcome these problems. By combining multiple sparse data structures and reducing traversal overhead, we leverage their individual advantages. We demonstrate that hybrid data structures adapt well to a large range of data sets. They are especially superior to other sparse data structures for data sets that locally vary in sparsity. Possible optimization criteria are memory, performance, and a combination thereof. Through just-in-time (JIT) compilation, JiTTree reduces the traversal overhead of the resulting optimal data structure. As a result, our hybrid volume data structure enables efficient computations on the GPU, while being superior in terms of memory usage when compared to non-hybrid data structures.
Rantner, Lukas J; Vadakkumpadan, Fijoy; Spevak, Philip J; Crosson, Jane E; Trayanova, Natalia A
2013-01-01
There is currently no reliable way of predicting the optimal implantable cardioverter-defibrillator (ICD) placement in paediatric and congenital heart defect (CHD) patients. This study aimed to: (1) develop a new image processing pipeline for constructing patient-specific heart–torso models from clinical magnetic resonance images (MRIs); (2) use the pipeline to determine the optimal ICD configuration in a paediatric tricuspid valve atresia patient; (3) establish whether the widely used criterion of shock-induced extracellular potential (Φe) gradients ≥5 V cm−1 in ≥95% of ventricular volume predicts defibrillation success. A biophysically detailed heart–torso model was generated from patient MRIs. Because transvenous access was impossible, three subcutaneous and three epicardial lead placement sites were identified along with five ICD scan locations. Ventricular fibrillation was induced, and defibrillation shocks were applied from 11 ICD configurations to determine defibrillation thresholds (DFTs). Two configurations with epicardial leads resulted in the lowest DFTs overall and were thus considered optimal. Three configurations shared the lowest DFT among subcutaneous lead ICDs. The Φe gradient criterion was an inadequate predictor of defibrillation success, as defibrillation failed in numerous instances even when 100% of the myocardium experienced such gradients. In conclusion, we have developed a new image processing pipeline and applied it to a CHD patient to construct the first active heart–torso model from clinical MRIs. PMID:23798492
Woodford, Katrina; Panettieri, Vanessa; Ruben, Jeremy D; Senthi, Sashendra
2016-05-01
Intensity-modulated radiotherapy (IMRT) is routinely utilized in the treatment of locally advanced non-small cell lung cancer (NSCLC). RTOG 0617 found that overall survival was impacted by increased low (5 Gy) and intermediate (30 Gy) cardiac doses. We evaluated the impact of esophageal-sparing IMRT on cardiac doses, with and without the heart considered in the planning process, and predicted toxicity compared to 3D-conformal radiotherapy (3DCRT). Ten consecutive patients with N2 stage III NSCLC treated to 60 Gy in 30 fractions between February 2012 and September 2014 were evaluated. For each patient, 3DCRT and esophageal-sparing IMRT plans were generated. IMRT plans were then created with and without the heart considered in the optimization process. To compare plans, the dose delivered to 95% and 99% of the target (D95% and D99%) and the doses to the esophagus, lung, and heart were compared by determining the volume receiving a given dose (VXGy), and the normal tissue complication probability (NTCP) was calculated. IMRT reduced the maximum esophagus dose to below 60 Gy in all patients and produced significant reductions in V50Gy, V40Gy, and esophageal NTCP. The cost of this reduction was a statistically and clinically non-significant increase in low-dose (5 Gy) lung exposure that did not worsen lung NTCP. IMRT plans produced significant cardiac sparing, with the amount of improvement correlating with the amount of heart overlapping the target. When included in plan optimization, further sparing of the heart and improvement in heart NTCP was possible for selected patients. Esophageal-sparing IMRT can significantly spare the heart even if it is not considered in the optimization process. Further sparing can be achieved if plan optimization constrains low and intermediate heart doses, without compromising lung doses.
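NTCP values like those referenced above are commonly computed with the Lyman-Kutcher-Burman (LKB) model applied to a generalized equivalent uniform dose (gEUD). The abstract does not state the exact model or parameters used, so the sketch below takes illustrative values for the volume-effect parameter a, TD50, and m.

```python
# Hedged sketch of an LKB-style NTCP calculation from a differential DVH
# (not necessarily the paper's model; a, TD50, and m are assumed values).
import numpy as np
from scipy.stats import norm

def geud(doses, volumes, a):
    """Generalized equivalent uniform dose from a differential DVH."""
    v = np.asarray(volumes, float) / np.sum(volumes)   # fractional volumes
    return (v * np.asarray(doses, float) ** a).sum() ** (1.0 / a)

def lkb_ntcp(doses, volumes, a, td50, m):
    """LKB model: probit of the gEUD's distance from TD50, scaled by m."""
    eud = geud(doses, volumes, a)
    return norm.cdf((eud - td50) / (m * td50))

doses = np.array([10.0, 30.0, 50.0, 62.0])   # Gy, DVH bin doses (assumed)
volumes = np.array([0.4, 0.3, 0.2, 0.1])     # fractional volumes (assumed)
print(lkb_ntcp(doses, volumes, a=10.0, td50=68.0, m=0.36))
```

A large a makes the gEUD track the hottest DVH bins (serial organs such as the esophagus), which is why reducing V50Gy and V40Gy lowers the esophageal NTCP in this kind of model.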
Xi-cam: Flexible High Throughput Data Processing for GISAXS
NASA Astrophysics Data System (ADS)
Pandolfi, Ronald; Kumar, Dinesh; Venkatakrishnan, Singanallur; Sarje, Abinav; Krishnan, Hari; Pellouchoud, Lenson; Ren, Fang; Fournier, Amanda; Jiang, Zhang; Tassone, Christopher; Mehta, Apurva; Sethian, James; Hexemer, Alexander
With increasing capabilities and data demands at GISAXS beamlines, supporting software is under development to handle larger data rates, volumes, and processing needs. We aim to provide a flexible and extensible approach to GISAXS data treatment as a solution to these rising needs. Xi-cam is the CAMERA platform for data management, analysis, and visualization. The core of Xi-cam is an extensible plugin-based GUI platform which provides users an interactive interface to processing algorithms. Plugins are available for SAXS/GISAXS data and data series visualization, as well as forward modeling and simulation through HipGISAXS. With Xi-cam's advanced mode, data processing steps are designed as a graph-based workflow, which can be executed locally or remotely. Remote execution utilizes HPC or delocalized resources, allowing for effective reduction of high-throughput data. Xi-cam is open-source and cross-platform. The processing algorithms in Xi-cam include parallel CPU and GPU processing optimizations, also taking advantage of external processing packages such as pyFAI. Xi-cam is available for download online.
ICALEO '91 - Laser materials processing; Proceedings of the Meeting, San Jose, CA, Nov. 3-8, 1991
NASA Astrophysics Data System (ADS)
Metzbower, Edward A.; Beyer, Eckhard; Matsunawa, Akira
Consideration is given to new developments in LASERCAV technology, modeling of deep penetration laser welding, the theory of radiative transfer in the plasma of the keyhole in penetration laser welding, a synchronized laser-video camera system study of high power laser material interactions, laser process monitoring with dual wavelength optical sensors, new devices for on-line process diagnostics during laser machining, and the process development for a portable Nd:YAG laser materials processing system. Attention is also given to laser welding of alumina-reinforced 6061 aluminum alloy composite, the new trend of laser materials processing, optimization of the laser cutting process for thin section stainless steels, a new nozzle concept for cutting with high power lasers, rapid solidification effects during laser welding, laser surface modification of a low carbon steel with tungsten carbide and carbon, absorptivity of a polarized beam during laser hardening, and laser surface melting of 440 C tool steel. (No individual items are abstracted in this volume)
NASA Astrophysics Data System (ADS)
Winkel, D.; Bol, G. H.; van Asselen, B.; Hes, J.; Scholten, V.; Kerkmeijer, L. G. W.; Raaymakers, B. W.
2016-12-01
To develop an automated radiotherapy treatment planning and optimization workflow to efficiently create patient-specifically optimized, clinical-grade treatment plans for prostate cancer, and to implement it in clinical practice. A two-phase planning and optimization workflow was developed to automatically generate 77 Gy five-field simultaneously integrated boost intensity-modulated radiation therapy (SIB-IMRT) plans for prostate cancer treatment. A retrospective planning study (n = 100) was performed in which automatically and manually generated treatment plans were compared. A clinical pilot (n = 21) was performed to investigate the usability of our method. Operator time for the planning process was reduced to <5 min. The retrospective planning study showed that 98 plans met all clinical constraints. Significant improvements were made in the volume receiving 72 Gy (V72Gy) for the bladder and rectum and in the mean dose to the bladder and the body. A reduced plan variance was observed. During the clinical pilot, 20 automatically generated plans met all constraints and 17 plans were selected for treatment. The automated radiotherapy treatment planning and optimization workflow is capable of efficiently generating patient-specifically optimized and improved clinical-grade plans. It has now been adopted as the standard workflow in our clinic for generating prostate cancer treatment plans.
Optimization of HTS superconducting magnetic energy storage magnet volume
NASA Astrophysics Data System (ADS)
Korpela, Aki; Lehtonen, Jorma; Mikkonen, Risto
2003-08-01
Nonlinear optimization problems in the field of electromagnetics have been successfully solved by means of sequential quadratic programming (SQP) and the finite element method (FEM). For example, the combination of SQP and FEM has proven to be an efficient tool in the optimization of low-temperature superconductor (LTS) superconducting magnetic energy storage (SMES) magnets. The procedure can also be applied to the optimization of HTS magnets. However, due to the strongly anisotropic material and the slanted electric field versus current density characteristic of high-temperature superconductors (HTS), their optimization is quite different from that of LTS magnets. In this paper the volumes of solenoidal conduction-cooled Bi-2223/Ag SMES magnets have been optimized at an operating temperature of 20 K. In addition to the electromagnetic constraints, the stress caused by tape bending has also been taken into account. Several optimization runs with different initial geometries were performed in order to find the best possible solution for a given energy requirement. The optimization constraints describe steady-state operation; thus the presented coil geometries are designed for slow ramping rates. Different energy requirements were investigated in order to find the energy dependence of the design parameters of optimized solenoidal HTS coils. According to the results, these dependences can be described with polynomial expressions.
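The paper's SQP/FEM procedure cannot be reproduced from the abstract, but the underlying pattern, minimizing coil volume subject to a stored-energy requirement, can be illustrated with a toy model. The energy expression, constant `k`, and grid bounds below are invented stand-ins for the FEM evaluation, and a coarse grid search stands in for SQP:

```python
import math

def volume(a, b, h):
    """Winding volume of a solenoid: inner radius a, outer radius b, height h (m)."""
    return math.pi * (b**2 - a**2) * h

def stored_energy(a, b, h, k=2.0e6):
    """Toy stand-in for the FEM stored-energy model (J); k is an assumed constant."""
    return k * (b - a)**2 * a * h

def optimize(E_req, grid):
    """Return the smallest-volume geometry on the grid meeting the energy requirement."""
    best = None
    for a in grid:
        for b in grid:
            if b <= a:
                continue
            for h in grid:
                if stored_energy(a, b, h) >= E_req:
                    v = volume(a, b, h)
                    if best is None or v < best[0]:
                        best = (v, a, b, h)
    return best

grid = [0.1 + 0.05 * i for i in range(19)]  # 0.10 .. 1.00 m
best = optimize(1.0e5, grid)  # (volume, a, b, h) of the best feasible geometry
```

Restarting such a search (or an SQP solver) from several initial geometries, as the paper does, guards against converging to a poor local solution.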
Nesvacil, Nicole; Schmid, Maximilian P; Pötter, Richard; Kronreif, Gernot; Kirisits, Christian
To investigate the feasibility of a treatment planning workflow for three-dimensional image-guided cervix cancer brachytherapy, combining volumetric transrectal ultrasound (TRUS) for target definition with CT for dose optimization to organs at risk (OARs), for settings with no access to MRI. A workflow for TRUS/CT-based volumetric treatment planning was developed, based on a customized system including ultrasound probe, stepper unit, and software for image volume acquisition. A full TRUS/CT-based workflow was simulated in a clinical case and compared with MR- or CT-only delineation. The high-risk clinical target volume was delineated on TRUS, and OARs were delineated on CT. Manually defined tandem/ring applicator positions on TRUS and CT were used as a reference for rigid registration of the image volumes. Treatment plan optimization for the TRUS target and CT organ volumes was performed and compared with MRI and CT target contours. TRUS/CT-based contouring, applicator reconstruction, image fusion, and treatment planning were feasible, and the full workflow could be successfully demonstrated. The TRUS/CT plan fulfilled all clinical planning aims. Dose-volume histogram evaluation of the TRUS/CT-optimized plan (high-risk clinical target volume D90, OAR D2cm³) on different image modalities showed good agreement between dose values reported for TRUS/CT and MRI-only reference contours, and large deviations for CT-only target parameters. A TRUS/CT-based workflow for full three-dimensional image-guided cervix brachytherapy treatment planning seems feasible and may be clinically comparable to MRI-based treatment planning. Further development to solve challenges with applicator definition in the TRUS volume is required before this workflow can be applied systematically. Copyright © 2016 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
Chinea, Felix M; Lyapichev, Kirill; Epstein, Jonathan I; Kwon, Deukwoo; Smith, Paul Taylor; Pollack, Alan; Cote, Richard J; Kryvenko, Oleksandr N
2017-03-28
To address health disparities in the risk stratification of U.S. Hispanic/Latino men by characterizing the influences of prostate weight, body mass index, and race/ethnicity on the correlation of PSA derivatives with Gleason score 6 (Grade Group 1) tumor volume in a diverse cohort. We retrospectively analyzed 589 patients with low-risk prostate cancer at radical prostatectomy. Pre-operative PSA, patient height, body weight, and prostate weight were used to calculate all PSA derivatives. Receiver operating characteristic curves were constructed for each PSA derivative per racial/ethnic group to establish optimal cutoff values predicting tumor volume ≥0.5 cm³. Using published PSA density and PSA mass density cutoff values, men with higher body mass indices and prostate weights were less likely to have a tumor volume <0.5 cm³. Variability across race/ethnicity was found in the univariable analysis for all PSA derivatives when predicting tumor volume. In receiver operating characteristic analysis, area under the curve values for all PSA derivatives varied across race/ethnicity, with lower optimal cutoff values for Hispanic/Latino (PSA=2.79, PSA density=0.06, PSA mass=0.37, PSA mass density=0.011) and Non-Hispanic Black (PSA=3.75, PSA density=0.07, PSA mass=0.46, PSA mass density=0.008) compared with Non-Hispanic White men (PSA=4.20, PSA density=0.11, PSA mass=0.53, PSA mass density=0.014). Increasing prostate weight and body mass index negatively influence PSA derivatives for predicting tumor volume. The ability of PSA derivatives to predict tumor volume varies significantly across race/ethnicity. Hispanic/Latino and Non-Hispanic Black men have lower optimal cutoff values for all PSA derivatives, which may impact risk assessment for prostate cancer.
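Optimal ROC cutoffs of the kind reported above are commonly chosen by maximizing Youden's J statistic (sensitivity + specificity − 1). A minimal sketch on synthetic PSA-density values, not the study's data:

```python
def youden_cutoff(values_pos, values_neg, candidates):
    """Pick the cutoff maximizing Youden's J = sensitivity + specificity - 1.

    values_pos: marker values for patients with tumor volume >= 0.5 cm^3
    values_neg: marker values for patients below that volume.
    """
    best_j, best_c = -1.0, None
    for c in candidates:
        sens = sum(v >= c for v in values_pos) / len(values_pos)
        spec = sum(v < c for v in values_neg) / len(values_neg)
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_c = j, c
    return best_c, best_j

# Synthetic, well-separated PSA-density values (illustrative only).
pos = [0.12, 0.15, 0.11, 0.18, 0.14]
neg = [0.05, 0.04, 0.06, 0.07, 0.03]
cutoff, j = youden_cutoff(pos, neg, [round(0.01 * i, 2) for i in range(1, 20)])
```

Running this per racial/ethnic subgroup, as the study does, yields subgroup-specific cutoffs; on real, overlapping data J is well below 1 and the area under the ROC curve summarizes overall discrimination.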
NASA Astrophysics Data System (ADS)
Mousavi, Seyed Jamshid; Mahdizadeh, Kourosh; Afshar, Abbas
2004-08-01
Application of stochastic dynamic programming (SDP) models to reservoir optimization calls for discretization of the state variables. As an important state variable, the discretization of reservoir storage volume has a pronounced effect on the computational effort. The error caused by storage volume discretization is examined by treating storage as a fuzzy state variable. In this approach, the point-to-point transitions between storage volumes at the beginning and end of each period are replaced by transitions between storage intervals, achieved by using fuzzy arithmetic operations on fuzzy numbers. Instead of aggregating single-valued crisp numbers, the membership functions of fuzzy numbers are combined. Running a simulation model with optimal release policies derived from fuzzy and non-fuzzy SDP models shows that a fuzzy SDP with a coarse discretization scheme performs as well as a classical SDP with a much finer discretized space. This advantage of the fuzzy SDP model is believed to be due to the smooth transitions between storage intervals, which benefit from soft boundaries.
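The fuzzy arithmetic mentioned above can be illustrated with triangular fuzzy numbers, one common representation; the class below and the storage/inflow values (in Mm³) are illustrative, not the paper's formulation:

```python
class TriFuzzy:
    """Triangular fuzzy number (a, m, b): left support, peak, right support."""

    def __init__(self, a, m, b):
        assert a <= m <= b
        self.a, self.m, self.b = a, m, b

    def __add__(self, other):
        # For triangular fuzzy numbers, supports and peaks add component-wise.
        return TriFuzzy(self.a + other.a, self.m + other.m, self.b + other.b)

    def defuzzify(self):
        """Centroid of a triangular membership function."""
        return (self.a + self.m + self.b) / 3.0

# A storage *interval* at the start of a period plus a fuzzy inflow gives
# the end-of-period storage interval (illustrative values, Mm^3).
storage = TriFuzzy(40.0, 50.0, 60.0)
inflow = TriFuzzy(5.0, 10.0, 15.0)
end = storage + inflow  # TriFuzzy(45.0, 60.0, 75.0)
```

Replacing crisp storage points with such intervals is what smooths the transitions between discretization cells and softens their boundaries.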
Dosimetric comparison between VMAT and RC3D techniques: case of prostate treatment
NASA Astrophysics Data System (ADS)
Chemingui, Fatima Zohra; Benrachi, Fatima; Bali, Mohamed Saleh; Ladjal, Hamid
2017-09-01
Prostate cancer, the second most common cancer among men in Algeria, is treated with radiation in about 70% of cases, making radiation therapy a key therapeutic weapon against it. Three-dimensional conformal radiotherapy (RC3D) is the most common technique [1-5]. Conventionally optimized treatment plans were compared with VMAT-optimized plans for prostate cancer. The evaluation of the two optimization strategies focused on the resulting plans' ability to retain dose objectives under the influence of patient setup. The dose-volume histogram of the Planning Target Volume and the dose to the Organs At Risk were used to calculate the conformity index and the evaluation ratio of irradiated volume, which represent the main tools of comparison [6,7]. The situation was analysed systematically. The 14% dose increase in the target leads to a decrease in the dose to adjacent organs, by 39% in the bladder. Therefore, the criteria of better efficacy and less toxicity reveal that VMAT is the better choice.
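As a rough sketch, a coverage-type conformity index and an irradiated-volume ratio reduce to simple volume ratios. The definitions below follow one common convention and the numbers are invented; they are not necessarily the exact indices of references [6,7]:

```python
def conformity_index(v_ptv_covered, v_ptv):
    """Coverage-type conformity index: fraction of the PTV receiving the
    reference dose (other conventions, e.g. Paddick's, combine this with spill)."""
    return v_ptv_covered / v_ptv

def irradiated_volume_ratio(v_ref_total, v_ptv):
    """Ratio of total volume receiving the reference dose to the PTV volume."""
    return v_ref_total / v_ptv

# Illustrative volumes in cm^3 (not the study's data).
ci = conformity_index(95.0, 100.0)            # 0.95: 95% of the PTV is covered
rvi = irradiated_volume_ratio(130.0, 100.0)   # 1.3: 30% of the dose spills outside
```

Comparing these two numbers per plan, together with OAR dose-volume points, is essentially how the VMAT and RC3D plans above are ranked.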
Gruber, Pia; Carvalho, Filipe; Marques, Marco P. C.; O'Sullivan, Brian; Subrizi, Fabiana; Dobrijevic, Dragana; Ward, John; Hailes, Helen C.; Fernandes, Pedro; Wohlgemuth, Roland; Baganz, Frank
2017-01-01
Abstract Rapid biocatalytic process development and intensification continues to be challenging with currently available methods. Chiral amino-alcohols are of particular interest as they represent key industrial synthons for the production of complex molecules and optically pure pharmaceuticals. (2S,3R)-2-amino-1,3,4-butanetriol (ABT), a building block for the synthesis of protease inhibitors and detoxifying agents, can be synthesized from simple, non-chiral starting materials by coupling a transketolase- and a transaminase-catalyzed reaction. However, to date, full conversion has not been shown, and typically long reaction times are reported, making process modification and improvement challenging. In this contribution, we present a novel microreactor-based approach using free enzymes, and we report for the first time full conversion to ABT in a coupled enzyme cascade for both batch and continuous-flow systems. Using the compartmentalization of the reactions afforded by the microreactor cascade, we overcame inhibitory effects, increased the activity per unit volume, and optimized individual reaction conditions. The transketolase-catalyzed reaction was completed in under 10 min with a volumetric activity of 3.25 U ml−1. Following optimization of the transaminase-catalyzed reaction, a volumetric activity of 10.8 U ml−1 was attained, which led to full conversion of the coupled reaction in 2 hr. The presented approach illustrates how continuous-flow microreactors can be applied for the design and optimization of biocatalytic processes. PMID:28986983
Integrated Medical Model (IMM) Optimization Version 4.0 Functional Improvements
NASA Technical Reports Server (NTRS)
Arellano, John; Young, M.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Goodenow, D. A.; Myers, J. G.
2016-01-01
The IMM's ability to assess mission outcome risk levels relative to available resources provides a unique capability to guide optimal operational medical kit and vehicle resources. Post-processing optimization allows the IMM to optimize essential resources to improve a specific model outcome, such as maximization of the Crew Health Index (CHI) or minimization of the probability of evacuation (EVAC) or of loss of crew life (LOCL). Mass and/or volume constrain the optimized resource set. The IMM's probabilistic simulation uses input data on one hundred medical conditions to simulate medical events that may occur in spaceflight, the resources required to treat those events, and the resulting impact to the mission based on specific crew and mission characteristics. Because IMM version 4.0 provides for partial treatment of medical events, IMM Optimization 4.0 scores resources at the individual resource unit increment level, as opposed to the full condition-specific treatment set level used in version 3.0. This allows the inclusion of as many resources as possible when an entire set of resources called out for treatment cannot satisfy the constraints. IMM Optimization version 4.0 adds capabilities that increase efficiency by creating multiple resource sets based on differing constraints and priorities (CHI, EVAC, or LOCL). It also provides sets of resources that improve mission-related IMM v4.0 outputs with improved performance compared to the prior optimization. The new optimization represents much improved fidelity that will increase the utility of IMM 4.0 for decision support.
Han, Jian; Liu, Juan; Yao, Xincheng; Wang, Yongtian
2015-02-09
A compact waveguide display system integrating freeform elements and volume holograms is presented here for the first time. Freeform elements can broaden the field of view, whose narrowness otherwise limits the applications of holographic waveguides. An optimized system can achieve a diagonal field of view of 45° when the planar waveguide is 3 mm thick. The freeform-element in-coupler and the volume-hologram out-coupler were designed in detail in our study, and the influence of grating configurations on diffraction efficiency was analyzed thoroughly. The off-axis aberrations were well compensated by the in-coupler, and the diffraction efficiency of the optimized waveguide display system could reach 87.57%. With the integrated design, stability and reliability of this monochromatic display system were achieved, and the alignment of the system was easily controlled by the recording of the volume holograms, which makes mass production possible.
NASA Astrophysics Data System (ADS)
Lin, Dongguo; Kang, Tae Gon; Han, Jun Sae; Park, Seong Jin; Chung, Seong Taek; Kwon, Young-Sam
2018-02-01
Both experimental and numerical analyses of powder injection molding (PIM) of Ti-6Al-4V alloy were performed to prepare a defect-free, high-performance Ti-6Al-4V part with low carbon/oxygen contents. The prepared feedstock was characterized with specific experiments to identify its viscosity, pressure-volume-temperature behavior, and thermal properties in order to simulate its injection molding process. A finite-element-based numerical scheme was employed to simulate the thermomechanical process during injection molding. In addition, the injection molding, debinding, sintering, and hot isostatic pressing processes were performed in sequence to prepare the PIMed parts. With optimized processing conditions, the PIMed Ti-6Al-4V part exhibits excellent physical and mechanical properties, showing a final density of 99.8%, tensile strength of 973 MPa, and elongation of 16%.
Effect of processing paddy on digestibility of rice starch by in vitro studies.
Chitra, M; Singh, Vasudeva; Ali, S Z
2010-08-01
Paddy (Oryza sativa L) (variety 'IR - 64'), was parboiled, puffed by sand roasting and flaked by edge runner and roller flaker and variations in physical and physicochemical properties were studied. Moisture contents were lower (5.8-10.8%) in processed rice products compared to raw materials (11.8%). Ratio of rice to sand in the case of puffed rice preparation was optimized. The equilibrium moisture content was 27.4% in raw rice while it was much higher (38.9-81.0%) in processed rice. Sedimentation volume was lowest (6.2 ml) in raw rice and highest (18.8 ml) in popped rice. Starch content was 84.8 and 76.5-83% in raw and processed rice, respectively. In vitro starch digestibility was highest in roller flaker flakes and lowest in raw milled rice. Among the ready to eat products, popped rice showed least starch digestibility (∼30%).
NASA Astrophysics Data System (ADS)
Watters, Arianna L.; Palmese, Giuseppe R.
2014-09-01
Uniform dispersion of single-walled carbon nanotubes (SWNTs) in an epoxy was achieved by a streamlined mechano-chemical processing method. SWNT-epoxy composites were synthesized using a room-temperature ionic liquid (IL) with an imidazolium cation and a dicyanamide anion. The novel approach of using an ionic liquid that acts both as a dispersant for SWNTs and as an initiator for epoxy polymerization greatly simplifies nanocomposite synthesis. The material was processed using simple and scalable three-roll milling. The SWNT dispersion of the resultant composite was evaluated by electron microscopy and electrical conductivity measurements in conjunction with percolation theory. Processing conditions were optimized to achieve the lowest possible percolation threshold, a SWNT volume fraction of 4.29 × 10⁻⁵. This percolation threshold is among the best reported in the literature, yet it was obtained using a streamlined method that greatly simplifies processing.
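A percolation threshold like the one reported is typically extracted by fitting the power law σ = σ₀(φ − φc)^t to conductivity-versus-volume-fraction data. A minimal sketch that scans candidate thresholds and fits the exponent by linear least squares in log-log space, on synthetic (not measured) data:

```python
import math

def fit_percolation(phis, sigmas, phi_c_grid):
    """Fit sigma = s0 * (phi - phi_c)^t: scan phi_c, do a log-log linear fit
    for each candidate, and keep the candidate with the smallest residual."""
    best = None
    for pc in phi_c_grid:
        pts = [(math.log(p - pc), math.log(s))
               for p, s in zip(phis, sigmas) if p > pc]
        if len(pts) < 2:
            continue
        n = len(pts)
        sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
        sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
        t = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope = exponent t
        b = (sy - t * sx) / n                          # intercept = ln(s0)
        sse = sum((y - (t * x + b)) ** 2 for x, y in pts)
        if best is None or sse < best[0]:
            best = (sse, pc, t, math.exp(b))
    return best[1], best[2], best[3]

# Synthetic data generated with phi_c = 4e-5 and t = 2 (illustrative).
phis = [5e-5, 1e-4, 2e-4, 5e-4]
sigmas = [1e3 * (p - 4e-5) ** 2.0 for p in phis]
pc, t, s0 = fit_percolation(phis, sigmas, [i * 1e-5 for i in range(1, 5)])
```

On real composite data the fit is noisier and the exponent t itself carries information about the dispersion dimensionality.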
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernandez, Victor, E-mail: vhernandezmasgrau@gmail.com; Arenas, Meritxell; Müller, Katrin
2013-01-01
To assess the advantages of an optimized posterior axillary (AX) boost technique for the irradiation of supraclavicular (SC) and AX lymph nodes. Five techniques for the treatment of SC and levels I, II, and III AX lymph nodes were evaluated for 10 patients selected at random: a direct anterior field (AP); an anterior to posterior parallel pair (AP-PA); an anterior field with a posterior axillary boost (PAB); an anterior field with an anterior axillary boost (AAB); and an optimized PAB technique (OptPAB). The target coverage, hot spots, irradiated volume, and dose to organs at risk were evaluated and a statistical analysis comparison was performed. The AP technique delivered insufficient dose to the deeper AX nodes. The AP-PA technique produced larger irradiated volumes and higher mean lung doses than the other techniques. The PAB and AAB techniques originated excessive hot spots in most of the cases. The OptPAB technique produced moderate hot spots while maintaining a similar planning target volume (PTV) coverage, irradiated volume, and dose to organs at risk. This optimized technique combines the advantages of the PAB and AP-PA techniques, with moderate hot spots, sufficient target coverage, and adequate sparing of normal tissues. The presented technique is simple, fast, and easy to implement in routine clinical practice and is superior to the techniques historically used for the treatment of SC and AX lymph nodes.
McConnel, M B; Galligan, D T
2004-10-01
Optimization programs are currently used to aid in the selection of bulls for herd breeding programs. While these programs offer a systematic approach to the problem of semen selection, they ignore the impact of volume discounts, which vary depending on the number of straws purchased. The dynamic nature of volume discounts means that, to be adequately accounted for, they must be considered within the optimization routine. Failing to do this creates a missed economic opportunity because the potential benefits of optimally selecting and combining breeding company discount opportunities are not captured. To address these issues, an integer program was created that used binary decision variables to incorporate the effects of quantity discounts into the optimization. A consistent set of trait criteria was used to select a group of bulls from 3 sample breeding companies. Three different selection programs were used to select the bulls: 2 traditional methods and the integer method. After the discounts were applied using each method, the integer program resulted in the lowest-cost portfolio of bulls. A sensitivity analysis showed that the integer program also resulted in a low-cost portfolio when the genetic trait goals were changed to be more or less stringent. In the sample application, the net benefit of the new approach over the traditional approaches was a 12.3 to 20.0% savings in semen cost.
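Why tiered discounts must live inside the optimization can be shown with a toy two-bull problem; the prices, tiers, and straw limits below are invented, and brute-force enumeration over the integer variables stands in for the paper's integer program:

```python
from itertools import product

# Tiered price per straw: list of (minimum quantity, unit price), ascending.
# All numbers are illustrative, not the study's data.
PRICE_TIERS = {
    "bull_A": [(0, 20.0), (10, 16.0), (20, 13.0)],
    "bull_B": [(0, 18.0), (10, 15.0)],
}

def unit_price(bull, qty):
    """Unit price at the deepest discount tier the quantity reaches."""
    price = 0.0
    for q, p in PRICE_TIERS[bull]:  # tiers sorted by minimum quantity
        if qty >= q:
            price = p
    return price

def cheapest_plan(total_needed, max_per_bull=30):
    """Enumerate integer straw counts per bull; return the minimum-cost
    combination that meets the total straw requirement."""
    bulls = list(PRICE_TIERS)
    best = None
    for counts in product(range(max_per_bull + 1), repeat=len(bulls)):
        if sum(counts) < total_needed:
            continue
        cost = sum(c * unit_price(b, c) for b, c in zip(bulls, counts))
        if best is None or cost < best[0]:
            best = (cost, dict(zip(bulls, counts)))
    return best
```

Note the discount dynamics the abstract describes: with these tiers, a herd needing 19 straws is served most cheaply by buying 20 from bull_A (20 × 13.0 = 260.0), because crossing the tier boundary costs less than 19 straws at the shallower discount; a post-hoc discount applied after selection cannot find this.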
NASA Astrophysics Data System (ADS)
Bailey, Brent Andrew
Structural designs by humans and by nature are wholly distinct in their approaches. Engineers model components to verify that all mechanical requirements are satisfied before assembling a product. Nature, on the other hand, creates holistically: each part evolves in conjunction with the others. The present work is a synthesis of these two design approaches; namely, spatial models that evolve. Topology optimization determines the amount and distribution of material within a model, which corresponds to the optimal connectedness and shape of a structure. Smooth designs are obtained by using higher-order B-splines in the definition of the material distribution. Higher fidelity is achieved using adaptive meshing techniques at the interface between solid and void. Nature is an exemplary basis for mass minimization, as processing material requires both resources and energy. Topology optimization techniques were originally formulated as the maximization of structural stiffness subject to a volume constraint. This research inverts the optimization problem: the mass is minimized subject to deflection constraints. Active materials allow a structure to interact with its environment in a manner similar to muscles and sensory organs in animals. By specifying the material properties and design requirements, adaptive structures with integrated sensors and actuators can evolve.
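The inverted problem, mass minimization subject to a deflection constraint, has a closed-form solution in the simplest one-variable sizing case, which makes the idea concrete even though the dissertation works with full spatial topology. A sketch for a rectangular cantilever under an end load (all numbers illustrative):

```python
# For a rectangular cantilever of width w and thickness t, I = w*t^3/12,
# so the tip deflection under end load P is
#   delta = P*L^3 / (3*E*I) = 4*P*L^3 / (E*w*t^3).
# Minimizing mass (proportional to t) subject to delta <= delta_max makes
# the deflection constraint active at the optimum, giving t in closed form.
def min_thickness(P, L, E, w, delta_max):
    """Smallest thickness whose tip deflection meets the constraint."""
    return (4.0 * P * L**3 / (E * w * delta_max)) ** (1.0 / 3.0)

def mass(rho, L, w, t):
    return rho * L * w * t

# Illustrative steel beam: P = 1 kN, L = 1 m, E = 200 GPa, w = 50 mm,
# allowable tip deflection 5 mm.
t_star = min_thickness(1e3, 1.0, 200e9, 0.05, 0.005)  # ~0.043 m
m_star = mass(7850.0, 1.0, 0.05, t_star)
```

In full topology optimization the same logic holds in many variables: at the optimum, the deflection constraints are active and every remaining element of material is load-bearing.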
NASA Astrophysics Data System (ADS)
Sung, Hae-Jin; Kim, Gyeong-Hun; Kim, Kwangmin; Park, Minwon; Yu, In-Keun; Kim, Jong-Yul
2013-11-01
Wind turbine concepts can be classified into geared and gearless types. The gearless type is more attractive due to its simplified drive train, increased energy yield, and higher reliability, because the gearbox is omitted; removing the gearbox also eliminates its weight. However, because of low-speed operation, the generator in this type has disadvantages such as large diameter and heavy weight. A superconducting (SC) wind power generator can reduce the weight and volume of a wind power system. The properties of superconducting wires differ considerably between manufacturers. This paper presents the design and comparative analysis of 10 MW class SC wind power generators with different types of SC wires. Superconducting synchronous generators (SCSGs) using YBCO and Bi-2223 wires are optimized by an optimization method. The magnetic characteristics of the SCSGs are investigated using a finite element method program. The optimized specifications of the SCSGs are discussed in detail, and the optimization processes can be used effectively to develop large-scale wind power generation systems.
NASA Astrophysics Data System (ADS)
Xuan, Li; He, Bin; Hu, Li-Fa; Li, Da-Yu; Xu, Huan-Yu; Zhang, Xing-Yun; Wang, Shao-Xin; Wang, Yu-Kun; Yang, Cheng-Liang; Cao, Zhao-Liang; Mu, Quan-Quan; Lu, Xing-Hai
2016-09-01
Multi-conjugate adaptive optics (MCAO) systems have been investigated and used in large-aperture optical telescopes for high-resolution imaging with a large field of view (FOV). Atmospheric tomographic phase reconstruction and the projection of the three-dimensional turbulence volume onto wavefront correctors, such as deformable mirrors (DMs) or liquid crystal wavefront correctors (LCWCs), is a very important step in the data processing of an MCAO controller. In this paper, a method based on the wavefront reconstruction performance of MCAO is presented to evaluate the optimized configuration of multiple laser guide stars (LGSs) and the reasonable conjugation heights of LCWCs. Analytical formulations are derived for the different configurations and are used to generate optimized parameters for MCAO. Several examples are given to demonstrate our LGS configuration optimization method. Compared with traditional methods, our method has the minimum wavefront tomographic error, which will be helpful for obtaining higher imaging resolution over a large FOV in MCAO. Project supported by the National Natural Science Foundation of China (Grant Nos. 11174274, 11174279, 61205021, 11204299, 61475152, and 61405194) and the State Key Laboratory of Applied Optics, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences.
NASA Astrophysics Data System (ADS)
Rongxiao, ZHAI; Mengtong, QIU; Weixi, LUO; Peitian, CONG; Tao, HUANG; Jiahui, YIN; Tianyang, ZHANG
2018-04-01
As one of the most important elements in linear transformer driver (LTD) based systems, gas-pressurized closing switches are required to operate with a very low prefire probability during the DC-charging process to ensure reliable operation and stable output of the whole pulsed power system. The most direct and effective way to control the prefire probability is to select a suitable working coefficient. The study of the development characteristics of the initially generated electrons is useful for optimizing the working coefficient and improving the prefire characteristics of the switches. In this paper an ultraviolet pulsed laser is used to generate initial electrons inside the gap volume. A current measuring system measures the time-dependent current generated by the growth of the initial electrons so as to study their development characteristics under different working coefficients. Experimental results show that the development characteristics of the initial electrons are strongly influenced by the working coefficient: as the working coefficient increases, the degree of electron development increases accordingly. At the same time, there is a threshold working coefficient above which ionization takes effect; this threshold grows slowly with gas pressure but remains close to 65%. When the working coefficient increases further, γ processes begin to occur inside the gap volume. In addition, an optimal working coefficient beneficial for improving the prefire characteristics is identified and further tested.
TH-B-207B-00: Pediatric Image Quality Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
This imaging educational program will focus on solutions to common pediatric image quality optimization challenges. The speakers will present collective knowledge on best practices in pediatric imaging from their experience at dedicated children's hospitals. One of the most commonly encountered pediatric imaging requirements for the non-specialist hospital is pediatric CT in the emergency room setting. Thus, this educational program will begin with optimization of pediatric CT in the emergency department. Though pediatric cardiovascular MRI may be less common in non-specialist hospitals, low pediatric volumes and unique cardiovascular anatomy make optimization of these techniques difficult. Therefore, our second speaker will review best practices in pediatric cardiovascular MRI based on experiences from a children's hospital with a large volume of cardiac patients. Learning Objectives: To learn techniques for optimizing radiation dose and image quality for CT of children in the emergency room setting. To learn solutions for consistently high quality cardiovascular MRI of children.
TH-B-207B-01: Optimizing Pediatric CT in the Emergency Department
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dodge, C.
This imaging educational program will focus on solutions to common pediatric image quality optimization challenges. The speakers will present collective knowledge on best practices in pediatric imaging from their experience at dedicated children's hospitals. One of the most commonly encountered pediatric imaging requirements for the non-specialist hospital is pediatric CT in the emergency room setting. Thus, this educational program will begin with optimization of pediatric CT in the emergency department. Though pediatric cardiovascular MRI may be less common in non-specialist hospitals, low pediatric volumes and unique cardiovascular anatomy make optimization of these techniques difficult. Therefore, our second speaker will review best practices in pediatric cardiovascular MRI based on experiences from a children's hospital with a large volume of cardiac patients. Learning Objectives: To learn techniques for optimizing radiation dose and image quality for CT of children in the emergency room setting. To learn solutions for consistently high quality cardiovascular MRI of children.
Large-scale fabrication of micro-lens array by novel end-fly-cutting-servo diamond machining.
Zhu, Zhiwei; To, Suet; Zhang, Shaojian
2015-08-10
Fast/slow tool servo (FTS/STS) diamond turning is a very promising technique for the generation of micro-lens arrays (MLAs). However, it is still a challenge to process MLAs at large scale due to certain inherent limitations of this technique. In the present study, a novel ultra-precision diamond cutting method, the end-fly-cutting-servo (EFCS) system, is adopted and investigated for large-scale generation of MLAs. After a detailed discussion of its characteristic advantages for processing MLAs, the optimal toolpath generation strategy for the EFCS is developed with consideration of the geometry and installation pose of the diamond tool. A typical aspheric MLA over a large area is experimentally fabricated, and the resulting form accuracy, surface micro-topography, and machining efficiency are critically investigated. The results indicate that an MLA with homogeneous quality over the whole area is obtained. Besides, high machining efficiency, an extremely small volume of control-point data for the toolpath, and optimal usage of the machine tool's system dynamics during cutting can be achieved simultaneously.
The appropriateness of use of percutaneous transluminal coronary angioplasty in Spain.
Aguilar, M D; Fitch, K; Lázaro, P; Bernstein, S J
2001-05-01
The rapid increase in the number of percutaneous transluminal coronary angioplasty (PTCA) procedures performed in Spain in recent years raises questions about how appropriately this procedure is being used. To examine this issue, we studied the appropriateness of use of PTCA in Spanish patients and factors associated with inappropriate use. We applied criteria for the appropriate use of PTCA developed by an expert panel of Spanish cardiologists and cardiovascular surgeons to a random sample of 1913 patients undergoing PTCA in Spain in 1997. The patients were selected through a two-step sampling process, stratifying by hospital type (public/private) and volume of procedures (low/medium/high). We examined the association between inappropriate use of PTCA and different clinical and sociodemographic factors. Overall, 46% of the PTCA procedures were appropriate, 31% were uncertain and 22% were inappropriate. Two factors contributing to inappropriate use were patients' receipt of less than optimal medical therapy and their failure to undergo stress testing. Institutional type and volume of procedures were not significantly related with inappropriate use. One of every five PTCA procedures in Spain is done for inappropriate reasons. Assuring that patients receive optimal medical therapy and undergo stress testing when indicated could contribute to more appropriate use of PTCA.
Determination of solute descriptors by chromatographic methods.
Poole, Colin F; Atapattu, Sanka N; Poole, Salwa K; Bell, Andrea K
2009-10-12
The solvation parameter model is now well established as a useful tool for obtaining quantitative structure-property relationships for chemical, biomedical and environmental processes. The model correlates a free-energy related property of a system to six free-energy derived descriptors describing molecular properties. These molecular descriptors are defined as L (gas-liquid partition coefficient on hexadecane at 298 K), V (McGowan's characteristic volume), E (excess molar refraction), S (dipolarity/polarizability), A (hydrogen-bond acidity), and B (hydrogen-bond basicity). McGowan's characteristic volume is trivially calculated from structure, and the excess molar refraction can be calculated for liquids from their refractive index and easily estimated for solids. The remaining four descriptors are derived by experiment using (largely) two-phase partitioning, chromatography, and solubility measurements. In this article, the use of gas chromatography, reversed-phase liquid chromatography, micellar electrokinetic chromatography, and two-phase partitioning for determining solute descriptors is described. A large database of experimental retention factors and partition coefficients is constructed after first applying selection tools to remove unreliable experimental values, and an optimized collection of varied compounds with descriptor values suitable for calibrating chromatographic systems is presented. These optimized descriptors are demonstrated to be robust and more suitable than other descriptor sets for characterizing the separation properties of chromatographic systems.
Tuning the Shell Number of Multishelled Metal Oxide Hollow Fibers for Optimized Lithium-Ion Storage.
Sun, Jin; Lv, Chunxiao; Lv, Fan; Chen, Shuai; Li, Daohao; Guo, Ziqi; Han, Wei; Yang, Dongjiang; Guo, Shaojun
2017-06-27
Searching for long-life transition-metal oxide (TMO)-based materials for future lithium-ion batteries (LIBs) is still a great challenge because of the mechanical strain resulting from the volume change of TMO anodes during the lithiation/delithiation process. To address this challenging issue, we demonstrate a controlled method for making multishelled TMO hollow microfibers with tunable shell numbers to achieve the optimal void for efficient lithium-ion storage. Such a purposely designed void leads to a short diffusion distance for fast diffusion of Li⁺ ions and also withstands the large volume variation upon cycling, both of which are key for high-performance LIBs. Triple-shelled TMO hollow microfibers are a quite stable anode material for LIBs with high reversible capacities (NiO: 698.1 mA h g⁻¹ at 1 A g⁻¹; Co3O4: 940.2 mA h g⁻¹ at 1 A g⁻¹; Fe2O3: 997.8 mA h g⁻¹ at 1 A g⁻¹), excellent rate capability, and stability. The present work opens a way for the rational design of the void in multiple shells for achieving stable lithium-ion storage through a biomass conversion strategy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jia, J; Tian, Z; Gu, X
Purpose: To investigate the dosimetric benefit of adaptive re-planning for lung stereotactic body radiotherapy (SBRT). Methods: Five lung cancer patients treated with SBRT were retrospectively investigated. Our in-house supercomputing online re-planning environment (SCORE) was used to perform the re-planning process. First, a deformable image registration was carried out to transfer contours from the treatment planning CT to each treatment CBCT. Then an automatic re-planning using fluence-map optimization guided by the original plan DVH was performed to obtain a new plan for the up-to-date patient geometry. We compared the re-optimized plan to the original plan projected onto the up-to-date patient geometry in terms of critical dosimetric parameters, such as PTV coverage and the maximum and volumetric constraint doses for the spinal cord and esophagus. Results: The average volume of PTV covered by the prescription dose for all patients improved by 7.56% after adaptive re-planning. The volume of the spinal cord receiving 14.5 Gy and 23 Gy (V14.5, V23) decreased by 1.48% and 0.68%, respectively. For the esophagus, the volume receiving 19.5 Gy (V19.5) was reduced by 1.37%. Meanwhile, the maximum dose dropped by 2.87% for the spinal cord and 4.80% for the esophagus. Conclusion: Our experimental results demonstrate that adaptive re-planning for lung SBRT has the potential to minimize the dosimetric effect of inter-fraction deformation and thus improve target coverage while reducing the risk of toxicity to nearby normal tissues.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Di; Jin, Chunlian; Balducci, Patrick J.
2013-12-01
This volume presents the battery storage evaluation tool developed at Pacific Northwest National Laboratory (PNNL), which is used to evaluate the benefits of battery storage for multiple grid applications, including energy arbitrage, balancing service, capacity value, distribution system equipment deferral, and outage mitigation. The tool is based on an optimal control strategy that captures multiple services from a single energy storage device. In this control strategy, at each hour a look-ahead optimization is first formulated and solved to determine the battery's base operating point. A minute-by-minute simulation is then performed to simulate the actual battery operation. This volume provides background and a manual for the evaluation tool.
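The hourly look-ahead step described above can be sketched as a small linear program: given a price forecast over the window, choose charge/discharge energy per hour to minimize net energy cost subject to power and state-of-charge limits. This is a hedged illustration of the control concept only, not PNNL's tool; the single-service (arbitrage) objective, parameter names, and efficiency treatment are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def lookahead_dispatch(prices, cap_kwh, power_kw, soc0=0.0, eff=1.0):
    """One look-ahead window: pick hourly charge (c) and discharge (d)
    energies minimizing net energy cost, keeping the state of charge
    between 0 and cap_kwh at the end of every hour."""
    n = len(prices)
    p = np.asarray(prices, dtype=float)
    # decision vector x = [c_0..c_{n-1}, d_0..d_{n-1}] in kWh per hour
    cost = np.concatenate([p, -p])        # pay for charging, earn on discharge
    L = np.tril(np.ones((n, n)))          # cumulative-sum operator
    # soc after hour t = soc0 + sum_{s<=t} (eff*c_s - d_s/eff)
    A_ub = np.vstack([np.hstack([L * eff, -L / eff]),    # soc <= cap_kwh
                      np.hstack([-L * eff, L / eff])])   # soc >= 0
    b_ub = np.concatenate([np.full(n, cap_kwh - soc0), np.full(n, soc0)])
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0.0, power_kw)] * (2 * n), method="highs")
    c, d = res.x[:n], res.x[n:]
    return d - c                          # net discharge schedule per hour
```

For a two-hour window with a cheap hour followed by an expensive one, the schedule charges first and discharges second, which is the base operating point a minute-by-minute simulation would then track.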
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1980-08-01
These proceedings document the presentations given at the Energy Optimization of Water and Wastewater Management for Municipal and Industrial Applications Conference, sponsored by the Department of Energy (DOE). The conference was organized and coordinated by Argonne National Laboratory and focused on energy use and conservation in water and wastewater treatment. The General Session also reflects DOE's commitment to the support and development of water and wastewater systems that are environmentally acceptable. The conference proceedings are divided into two volumes. Volume 1 contains the General Session and Sessions 1 to 5. Volume 2 covers Sessions 6 to 12. Separate abstracts are prepared for each item within the scope of the Energy Data Base.
2013-01-01
Background: Drop drying is a key factor in a wide range of technical applications, including spotted microarrays. The applied nL liquid volume provides specific reaction conditions for the immobilization of probe molecules to a chemically modified surface. Results: We investigated the influence of nL and μL liquid drop volumes on the process of probe immobilization and compared the results to the situation in liquid solution. In our data, we observe a strong relationship between drop drying effects on immobilization and surface chemistry. In this work, we present results on the immobilization of dye-labeled 20mer oligonucleotides, with and without an activating 5′-aminoheptyl linker, onto a 2D epoxysilane and a 3D NHS-activated hydrogel surface. Conclusions: Our experiments identified two basic processes determining immobilization. First, the rate of drop drying, which depends on the drop volume and the ambient relative humidity. Oligonucleotides in a dried spot react unspecifically with the surface, and long reaction times are needed. 3D hydrogel surfaces allow for immobilization in a liquid environment under diffusive conditions. Here, oligonucleotide immobilization is much faster and a specific reaction with the reactive linker group is observed. Second, the effect of increasing probe concentration as a result of drop drying. On a 3D hydrogel, the increasing concentration of probe molecules in nL spotting volumes accelerates immobilization dramatically. In the case of μL volumes, immobilization depends on whether the drop is allowed to dry completely. At non-drying conditions, very limited immobilization is observed due to the low oligonucleotide concentration used in microarray spotting solutions. The results of our study provide a general guideline for microarray assay development. They allow for the initial definition and further optimization of reaction conditions for the immobilization of oligonucleotides and other probe molecule classes to different surfaces, depending on the applied spotting and reaction volume. PMID:23758982
SU-F-BRD-13: Quantum Annealing Applied to IMRT Beamlet Intensity Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nazareth, D; Spaans, J
Purpose: We report on the first application of quantum annealing (QA) to the process of beamlet intensity optimization for IMRT. QA is a new technology, which employs novel hardware and software techniques to address discrete optimization problems in many fields. Methods: We apply the D-Wave Inc. proprietary hardware, which natively exploits quantum mechanical effects for improved optimization. The new QA algorithm, running on this hardware, is most similar to simulated annealing, but relies on natural processes to directly minimize the free energy of a system. A simple quantum system is slowly evolved into a classical system representing the objective function. To apply QA to IMRT-type optimization, two prostate cases were considered. A reduced number of beamlets was employed, due to the current QA hardware limitation of ∼500 binary variables. The beamlet dose matrices were computed using CERR, and an objective function was defined based on typical clinical constraints, including dose-volume objectives. The objective function was discretized, and the QA method was compared to two standard optimization methods, simulated annealing (SA) and Tabu search, run on a conventional computing cluster. Results: Based on several runs, the average final objective function value achieved by the QA was 16.9 for the first patient, compared with 10.0 for Tabu and 6.7 for the SA. For the second patient, the values were 70.7 for the QA, 120.0 for Tabu, and 22.9 for the SA. The QA algorithm required 27-38% of the time required by the other two methods. Conclusion: In terms of objective function value, the QA performance was similar to Tabu but less effective than the SA. However, its speed was 3-4 times faster than that of the other two methods. This initial experiment suggests that QA-based heuristics may offer significant speedup over conventional clinical optimization methods as quantum annealing hardware scales to larger sizes.
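The simulated-annealing baseline used for comparison can be illustrated with a minimal sketch: beamlet weights restricted to a discrete set of levels, a quadratic dose-error objective, and a cooling schedule with a Metropolis acceptance rule. The dose matrix, objective form, and cooling schedule below are illustrative assumptions, not the CERR setup or clinical objective from the abstract.

```python
import math
import random

def anneal_beamlets(dose_mat, presc, levels, iters=20000, t0=5.0, seed=1):
    """Simulated-annealing sketch for discretized beamlet weights.
    dose_mat[v][b]: dose to voxel v per unit weight of beamlet b;
    presc[v]: prescribed voxel dose. Objective: sum of squared errors."""
    random.seed(seed)
    nb = len(dose_mat[0])
    w = [random.choice(levels) for _ in range(nb)]

    def objective(weights):
        err = 0.0
        for row, d in zip(dose_mat, presc):
            dose = sum(a * x for a, x in zip(row, weights))
            err += (dose - d) ** 2
        return err

    f = objective(w)
    best_w, best_f = list(w), f
    for k in range(iters):
        t = t0 * (1.0 - k / iters) + 1e-9     # linear cooling schedule
        b = random.randrange(nb)
        old = w[b]
        w[b] = random.choice(levels)          # propose a new discrete level
        f_new = objective(w)
        if f_new <= f or random.random() < math.exp((f - f_new) / t):
            f = f_new                          # accept (always if downhill)
            if f < best_f:
                best_w, best_f = list(w), f
        else:
            w[b] = old                         # reject the move
    return best_w, best_f
```

On a toy two-beamlet problem with an identity dose matrix, the annealer recovers the exact prescribed weights; clinical cases would add dose-volume terms to the objective.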
Realistic tissue visualization using photoacoustic image
NASA Astrophysics Data System (ADS)
Cho, Seonghee; Managuli, Ravi; Jeon, Seungwan; Kim, Jeesu; Kim, Chulhong
2018-02-01
Visualization methods are very important in biomedical imaging. As a technology for understanding life, biomedical imaging has the unique advantage of providing the most intuitive information in the image, and this advantage can be greatly improved by choosing an appropriate visualization method. The situation is more complicated for volumetric data. Volume data have the advantage of containing 3D spatial information, but the raw data cannot be displayed directly: because images are always displayed in 2D space, visualization is the key step that creates the real value of volume data. However, visualizing 3D data requires complicated algorithms and carries a high computational burden, so specialized algorithms and computational optimization are important issues for volume data. Photoacoustic imaging is a unique imaging modality that can visualize the optical properties of deep tissue. Because the color of an organism is mainly determined by its light-absorbing components, photoacoustic data can provide color information of tissue that is close to its real color. In this research, we developed realistic tissue visualization using acoustic-resolution photoacoustic volume data. To achieve realistic visualization, we designed a specialized color transfer function that depends on the depth of the tissue from the skin. We used a direct ray casting method and processed color while computing the shader parameters. In the rendering results, we succeeded in obtaining realistic texture from the photoacoustic data: surface-reflected rays were visualized in white, and the color reflected from deep tissue was rendered in red, like skin tissue. We also implemented the CUDA algorithm in an OpenGL environment for real-time interactive imaging.
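The depth-dependent color transfer function can be sketched as a simple ramp from white at the surface toward skin-like red at depth, matching the white-surface/red-deep behavior the abstract describes. The ramp shape, the 10 mm depth scale, and the 0.8 color swing are invented for illustration; they are not the paper's actual transfer function.

```python
def depth_color(depth_mm, d_max=10.0):
    """Hypothetical depth-dependent transfer function: returns an (r, g, b)
    tuple in [0, 1]. Zero depth (surface reflection) maps to white; depths
    at or beyond d_max map to a skin-like red."""
    t = min(max(depth_mm / d_max, 0.0), 1.0)    # clamp normalized depth
    return (1.0, 1.0 - 0.8 * t, 1.0 - 0.8 * t)  # white -> red ramp
```

In a ray caster, this color would be looked up per sample from the sample's distance below the detected skin surface and blended along the ray.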
Chvetsov, Alexei V; Dong, Lei; Palta, Jantinder R; Amdur, Robert J
2009-10-01
To develop a fast computational radiobiologic model for quantitative analysis of tumor volume during fractionated radiotherapy. The tumor-volume model can be useful for optimizing image-guidance protocols and four-dimensional treatment simulations in proton therapy that is highly sensitive to physiologic changes. The analysis is performed using two approximations: (1) tumor volume is a linear function of total cell number and (2) tumor-cell population is separated into four subpopulations: oxygenated viable cells, oxygenated lethally damaged cells, hypoxic viable cells, and hypoxic lethally damaged cells. An exponential decay model is used for disintegration and removal of oxygenated lethally damaged cells from the tumor. We tested our model on daily volumetric imaging data available for 14 head-and-neck cancer patients treated with an integrated computed tomography/linear accelerator system. A simulation based on the averaged values of radiobiologic parameters was able to describe eight cases during the entire treatment and four cases partially (50% of treatment time) with a maximum 20% error. The largest discrepancies between the model and clinical data were obtained for small tumors, which may be explained by larger errors in the manual tumor volume delineation procedure. Our results indicate that the change in gross tumor volume for head-and-neck cancer can be adequately described by a relatively simple radiobiologic model. In future research, we propose to study the variation of model parameters by fitting to clinical data for a cohort of patients with head-and-neck cancer and other tumors. The potential impact of other processes, like concurrent chemotherapy, on tumor volume should be evaluated.
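A minimal discrete-time sketch of the four-compartment idea described above: each fraction, the linear-quadratic model moves viable cells into the lethally damaged pools (with an oxygen-enhancement-ratio dose reduction for hypoxic cells), damaged oxygenated cells are cleared by exponential decay, and relative tumor volume is taken proportional to total cell number. All parameter values are illustrative placeholders, not the paper's fitted radiobiologic parameters.

```python
import math

def simulate_volume(n_fractions, d=2.0, alpha=0.3, beta=0.03, oer=2.5,
                    lam=0.2, hypoxic_frac=0.2, n0=1e9):
    """Four compartments: oxygenated/hypoxic x viable/lethally damaged.
    Returns the relative tumor volume after each daily fraction."""
    ox_v = n0 * (1 - hypoxic_frac)   # oxygenated viable cells
    hy_v = n0 * hypoxic_frac         # hypoxic viable cells
    ox_d = hy_d = 0.0                # lethally damaged pools
    volumes = []
    for _ in range(n_fractions):
        sf_ox = math.exp(-alpha * d - beta * d * d)      # LQ survival
        de = d / oer                                     # effective hypoxic dose
        sf_hy = math.exp(-alpha * de - beta * de * de)
        ox_d += ox_v * (1 - sf_ox)   # killed cells join the damaged pools
        hy_d += hy_v * (1 - sf_hy)
        ox_v *= sf_ox
        hy_v *= sf_hy
        ox_d *= math.exp(-lam)       # one day of clearance (oxygenated only)
        volumes.append((ox_v + ox_d + hy_v + hy_d) / n0)  # relative volume
    return volumes
```

With these placeholder values the curve shrinks monotonically, the qualitative behavior the model is fitted against; in the paper the parameters are instead estimated from daily volumetric imaging.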
Westman, Eric; Aguilar, Carlos; Muehlboeck, J-Sebastian; Simmons, Andrew
2013-01-01
Automated structural magnetic resonance imaging (MRI) processing pipelines are gaining popularity for Alzheimer's disease (AD) research. They generate regional volumes, cortical thickness measures and other measures, which can be used as input for multivariate analysis. It is not clear which combination of measures and normalization approach is most useful for AD classification and for predicting mild cognitive impairment (MCI) conversion. The current study includes MRI scans from 699 subjects [AD, MCI and controls (CTL)] from the Alzheimer's Disease Neuroimaging Initiative (ADNI). The FreeSurfer pipeline was used to generate regional volume, cortical thickness, gray matter volume, surface area, mean curvature, Gaussian curvature, folding index and curvature index measures. 259 variables were used for orthogonal partial least squares to latent structures (OPLS) multivariate analysis. Normalization approaches were explored and the optimal combination of measures determined. Results indicate that cortical thickness measures should not be normalized, while volumes should probably be normalized by intracranial volume (ICV). Combining regional cortical thickness measures (not normalized) with cortical and subcortical volumes (normalized by ICV) using OPLS gave a prediction accuracy of 91.5% when distinguishing AD versus CTL. This model prospectively predicted future decline from MCI to AD, with 75.9% of converters correctly classified. Normalization strategy did not have a significant effect on the accuracies of multivariate models containing multiple MRI measures for this large dataset. The appropriate choice of input for multivariate analysis in AD and MCI is of great importance. The results support the use of un-normalized cortical thickness measures and volumes normalized by ICV.
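The recommended preprocessing, leave cortical thickness raw and divide volumes by intracranial volume, amounts to a small feature-preparation step before the multivariate model. The dictionary interface and the "_norm" suffix below are assumptions for illustration, not the study's actual pipeline code.

```python
def prepare_features(thickness_mm, volumes_mm3, icv_mm3):
    """Combine MRI measures for multivariate analysis following the study's
    finding: cortical thickness stays un-normalized, while regional and
    subcortical volumes are divided by intracranial volume (ICV)."""
    features = dict(thickness_mm)                    # thickness kept as-is
    for region, vol in volumes_mm3.items():
        features[region + "_norm"] = vol / icv_mm3   # ICV-normalized volume
    return features
```

The resulting feature dictionary (one per subject) would then be stacked into the matrix fed to OPLS or another multivariate classifier.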
Instrumentation, control, and automation for submerged anaerobic membrane bioreactors.
Robles, Ángel; Durán, Freddy; Ruano, María Victoria; Ribes, Josep; Rosado, Alfredo; Seco, Aurora; Ferrer, José
2015-01-01
A submerged anaerobic membrane bioreactor (AnMBR) demonstration plant with two commercial hollow-fibre ultrafiltration systems (PURON®, Koch Membrane Systems, PUR-PSH31) was designed and operated for urban wastewater treatment. An instrumentation, control, and automation (ICA) system was designed and implemented for proper process performance. Several single-input-single-output (SISO) feedback control loops based on conventional on-off and PID algorithms were implemented to control the following operating variables: flow-rates (influent, permeate, sludge recycling and wasting, and recycled biogas through both reactor and membrane tanks), sludge wasting volume, temperature, transmembrane pressure, and gas sparging. The proposed ICA for AnMBRs for urban wastewater treatment enables the optimization of this new technology to be achieved with a high level of process robustness towards disturbances.
Big Data in the Industry - Overview of Selected Issues
NASA Astrophysics Data System (ADS)
Gierej, Sylwia
2017-12-01
This article reviews selected issues related to the use of Big Data in industry. The aim is to define the potential scope and forms of using large data sets in manufacturing companies. By systematically reviewing the scientific and professional literature, selected issues related to the use of mass data analytics in production were analyzed. A definition of Big Data is presented, detailing its main attributes. The importance of mass data processing technology for the development of the Industry 4.0 concept is highlighted. Subsequently, attention is paid to issues such as production process optimization, decision making and mass-production individualization, and the potential of large data volumes in each is indicated. Finally, conclusions are drawn regarding the potential of using Big Data in industry.
Macroscopic balance model for wave rotors
NASA Technical Reports Server (NTRS)
Welch, Gerard E.
1996-01-01
A mathematical model for multi-port wave rotors is described. The wave processes that effect energy exchange within the rotor passage are modeled using one-dimensional gas dynamics. Macroscopic mass and energy balances relate volume-averaged thermodynamic properties in the rotor passage control volume to the mass, momentum, and energy fluxes at the ports. Loss models account for entropy production in boundary layers and in separating flows caused by blade-blockage, incidence, and gradual opening and closing of rotor passages. The mathematical model provides a basis for predicting design-point wave rotor performance, port timing, and machine size. Model predictions are evaluated through comparisons with CFD calculations and three-port wave rotor experimental data. A four-port wave rotor design example is provided to demonstrate model applicability. The modeling approach is amenable to wave rotor optimization studies and rapid assessment of the trade-offs associated with integrating wave rotors into gas turbine engine systems.
Dynamic 3D measurement of modulated radiotherapy: a scintillator-based approach
NASA Astrophysics Data System (ADS)
Archambault, Louis; Rilling, Madison; Roy-Pomerleau, Xavier; Thibault, Simon
2017-05-01
With the rise of high-conformity dynamic radiotherapy, such as volumetric modulated arc therapy and robotic radiosurgery, the temporal dimension of dose measurement is becoming increasingly important. It must be possible to tell both ‘where’ and ‘when’ a discrepancy occurs between the plan and its delivery. A 3D scintillation-based dosimetry system could be ideal for such a thorough, end-to-end verification; however, the challenge lies in retrieving the volumetric information of the light-emitting volume. This paper discusses the motivation, from an optics point of view, of using the images acquired with a plenoptic camera, or light field imager, of an irradiated plastic scintillator volume to reconstruct the delivered 3D dose distribution. Current work focuses on the optimization of the optical design as well as the data processing that is involved in the ongoing development of a clinically viable, second generation dosimetry system.
Determination of Lubricants on Ball Bearings by FT-IR using an Integrating Sphere
NASA Technical Reports Server (NTRS)
Street, K. W.; Pepper, S. V.; Wright, A.
2003-01-01
The lifetime determination of space lubricants is done at our facility by accelerated testing. Several micrograms of lubricant are deposited on the surface of a ball by syringing tens of microliters of dilute lubricant solution. The solvent evaporates and the mass of lubricant is determined by twenty weighings near the balance reliability limit. This process is time-consuming and does not produce a good correlation between the mass of lubricant and the volume of solution applied, as would be expected. The amount of lubricant deposited on a ball can instead be determined directly by Fourier transform infrared (FT-IR) spectroscopy using an integrating sphere. In this paper, we discuss the reasons for choosing this methodology, the optimization of quantification conditions, and potential applications of the technique. The volume of lubricant solution applied to the ball correlates better with the IR intensity than with the measured weight.
Microfluidics-based integrated airborne pathogen detection systems
NASA Astrophysics Data System (ADS)
Northrup, M. Allen; Alleman-Sposito, Jennifer; Austin, Todd; Devitt, Amy; Fong, Donna; Lin, Phil; Nakao, Brian; Pourahmadi, Farzad; Vinas, Mary; Yuan, Bob
2006-09-01
Microfluidic Systems (MFSI) is focused on building microfluidic platforms that interface front-end mesofluidics, which handle real-world sample volumes for optimal sensitivity, to microfluidic circuitry that processes small liquid volumes for complex reagent metering, mixing, and biochemical analysis, particularly for pathogens. MFSI is the prime contractor on two programs for the US Department of Homeland Security: BAND (Bioagent Autonomous Networked Detector) and IBADS (Instantaneous Bio-Aerosol Detection System). The goal of BAND is to develop an autonomous system for monitoring the air for known biological agents. This consists of air collection, sample lysis, sample purification, detection of DNA, RNA, and toxins, and a networked interface to report the results. For IBADS, MFSI is developing the confirmatory device, which must verify the presence of a pathogen within 5 minutes of an air collector/trigger sounding an alarm. Instrument designs and biological assay results from both BAND and IBADS will be presented.
Thermal response of a 4D carbon/carbon composite with volume ablation: a numerical simulation study
NASA Astrophysics Data System (ADS)
Zhang, Bai; Li, Xudong
2018-02-01
As carbon/carbon composites usually work in high-temperature environments, material ablation inevitably occurs, which in turn affects system stability and safety. In this paper, the thermal response of a thermoprotective four-directional carbon/carbon (4D C/C) composite is studied using a numerical model focusing on volume ablation. The model is based on energy- and mass-conservation principles as well as on the thermal decomposition equation of solid materials. The thermophysical properties of the C/C composite during the ablation process are calculated, and the thermal response during ablation, including temperature distribution, density, decomposition rate, char layer thickness, and mass loss, is quantitatively predicted. The present numerical study provides a fundamental understanding of the ablative mechanisms of a 4D C/C composite, serving as a reference and basis for further designs and optimizations of thermoprotective materials.
Jiang, Songhui; Templeton, Michael R.; He, Gengsheng; Qu, Weidong
2013-01-01
An optimized method is presented using liquid-liquid extraction and derivatization for the extraction of iodoacetic acid (IAA) and other haloacetic acids (HAA9) and direct extraction of iodoform (IF) and other trihalomethanes (THM4) from drinking water, followed by detection by gas chromatography with electron capture detection (GC-ECD). A Doehlert experimental design was performed to determine the optimum conditions for the five most significant factors in the derivatization step: namely, the volume and concentration of acidic methanol (optimized values = 15%, 1 mL), the volume and concentration of Na2SO4 solution (129 g/L, 8.5 mL), and the volume of saturated NaHCO3 solution (1 mL). Also, derivatization time and temperature were optimized by a two-variable Doehlert design, resulting in the following optimized parameters: an extraction time of 11 minutes for IF and THM4 and 14 minutes for IAA and HAA9; mass of anhydrous Na2SO4 of 4 g for IF and THM4 and 16 g for IAA and HAA9; derivatization time of 160 min and temperature at 40°C. Under optimal conditions, the optimized procedure achieves excellent linearity (R2 ranges 0.9990–0.9998), low detection limits (0.0008–0.2 µg/L), low quantification limits (0.008–0.4 µg/L), and good recovery (86.6%–106.3%). Intra- and inter-day precision were less than 8.9% and 8.8%, respectively. The method was validated by applying it to the analysis of raw, flocculated, settled, and finished waters collected from a water treatment plant in China. PMID:23613747
Breast Radiotherapy with Mixed Energy Photons; a Model for Optimal Beam Weighting.
Birgani, Mohammadjavad Tahmasebi; Fatahiasl, Jafar; Hosseini, Seyed Mohammad; Bagheri, Ali; Behrooz, Mohammad Ali; Zabiehzadeh, Mansour; Meskani, Reza; Gomari, Maryam Talaei
2015-01-01
Utilization of high-energy photons (>10 MV) with an optimal weight using a mixed-energy technique is a practical way to generate a homogeneous dose distribution while maintaining adequate target coverage in intact breast radiotherapy. This study presents a model for estimating this optimal weight for day-to-day clinical usage. For this purpose, treatment planning computed tomography scans of thirty-three consecutive early-stage breast cancer patients following breast conservation surgery were analyzed. After delineation of the breast clinical target volume (CTV) and placement of opposed wedged-pair isocentric tangential portals, dosimetric calculations were conducted and dose volume histograms (DVHs) were generated, first with pure 6 MV photons; the calculations were then repeated ten times in each individual patient while incorporating 18 MV photons (a ten-percent increase in weight per step). For each calculation, two indexes were measured from the DVH data: the maximum dose in the breast CTV (Dmax) and the volume of the CTV covered by the 95% isodose line (VCTV,95%IDL); the normalized values were then plotted in a graph. The optimal weight of the 18 MV photons was defined as the intersection point of the Dmax and VCTV,95%IDL graphs. To create a model predicting this optimal weight, multiple linear regression analysis was used based on several breast and tangential-field parameters. The best-fitting model for predicting the optimal 18 MV photon weight in breast radiotherapy using the mixed-energy technique incorporated chest wall separation plus central lung distance (adjusted R2 = 0.776). In conclusion, this study presents a model for estimating the optimal beam weighting in breast radiotherapy using a mixed photon energy technique for routine day-to-day clinical usage.
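The paper's optimal-weight definition, the crossing of the normalized Dmax and VCTV,95%IDL curves as the 18 MV weight increases, can be located numerically from the sampled curves by finding the sign change of their difference and interpolating. The sample curves in the test are invented for illustration; only the intersection procedure itself follows the abstract.

```python
import numpy as np

def optimal_weight(weights, dmax, vctv95):
    """Return the 18 MV weight where the normalized Dmax and VCTV,95%IDL
    curves cross, or None if they never cross on the sampled range."""
    d = np.asarray(dmax, dtype=float)
    v = np.asarray(vctv95, dtype=float)
    d, v = d / d.max(), v / v.max()     # normalize each curve to its maximum
    g = d - v                           # crossing <=> sign change of g
    for i in range(len(g) - 1):
        if g[i] == 0:
            return weights[i]           # exact crossing at a sample point
        if g[i] * g[i + 1] < 0:         # sign change: interpolate linearly
            t = g[i] / (g[i] - g[i + 1])
            return weights[i] + t * (weights[i + 1] - weights[i])
    return None
```

In the study, the weight found this way per patient is then regressed on chest wall separation plus central lung distance to give the predictive model.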
Wrapping with a splash: High-speed encapsulation with ultrathin sheets
NASA Astrophysics Data System (ADS)
Kumar, Deepak; Paulsen, Joseph D.; Russell, Thomas P.; Menon, Narayanan
2018-02-01
Many complex fluids rely on surfactants to contain, protect, or isolate liquid drops in an immiscible continuous phase. Thin elastic sheets can wrap liquid drops in a spontaneous process driven by capillary forces. For encapsulation by sheets to be practically viable, a rapid, continuous, and scalable process is essential. We exploit the fast dynamics of droplet impact to achieve wrapping of oil droplets by ultrathin polymer films in a water phase. Despite the violence of splashing events, the process robustly yields wrappings that are optimally shaped to maximize the enclosed fluid volume and have near-perfect seams. We achieve wrappings of targeted three-dimensional (3D) shapes by tailoring the 2D boundary of the films and show the generality of the technique by producing both oil-in-water and water-in-oil wrappings.
Infrared readout electronics; Proceedings of the Meeting, Orlando, FL, Apr. 21, 22, 1992
NASA Technical Reports Server (NTRS)
Fossum, Eric R. (Editor)
1992-01-01
The present volume on IR readout electronics discusses cryogenic readout using silicon devices, cryogenic readout using III-V and LTS devices, multiplexers for higher temperatures, and focal-plane signal processing electronics. Attention is given to the optimization of cryogenic CMOS processes for sub-10-K applications, cryogenic measurements of Aerojet GaAs n-JFETs, InP-based heterostructure device technology for ultracold readout applications, and a three-terminal semiconductor-superconductor transimpedance amplifier. Topics addressed include unfulfilled needs in IR astronomy focal-plane readout electronics, IR readout integrated circuit technology for tactical missile systems, and a radiation-hardened 10-bit A/D converter for FPA signal processing. Also discussed are the implementation of a noise reduction circuit for spaceflight IR spectrometers, a real-time processor for staring receivers, and a fiber-optic link design for INMOS transputers.
Interplanetary Program to Optimize Simulated Trajectories (IPOST). Volume 3: Programmer's manual
NASA Technical Reports Server (NTRS)
Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.
1992-01-01
The Interplanetary Program to Optimize Simulated Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. This volume, the programmer's manual, provides information on the IPOST code.
Buitrón, G; Moreno-Andrade, I; Linares-García, J A; Pérez, J; Betancur, M J; Moreno, J A
2007-01-01
This work presents the results and discussion of the application of an optimally controlled influent flow-rate strategy to biodegrade, in a discontinuous reactor, a synthetic wastewater containing 4-chlorophenol. An aerobic automated discontinuous reactor system of 1.3 m3, with a working volume of 0.75 m3 and an exchange volume of 60%, was used. As part of the control strategy, influent is fed into the reactor in such a way as to obtain the maximal degradation rate while avoiding inhibition of the microorganisms. This optimal strategy was able to manage increases in 4-chlorophenol concentration in the influent between 250 and 1000 mg/L. It was shown that the optimally controlled influent flow-rate strategy brings savings in reaction time and flexibility in treating high concentrations of an influent with toxic characteristics.
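The control idea, feeding fast enough to maximize degradation without inhibiting the biomass, is commonly formalized with Haldane (substrate-inhibition) kinetics, whose rate peaks at S* = sqrt(Ks·Ki). The sketch below uses invented parameter values and is only a kinetic illustration of why such an optimum exists; it does not reproduce the paper's actual controller or constants.

```python
import math

def haldane_rate(s, mu_max=0.5, ks=25.0, ki=150.0):
    """Haldane specific degradation rate for substrate concentration s
    (mg/L): rises at low s, falls at high s due to substrate inhibition.
    Parameter values here are illustrative, not fitted to 4-chlorophenol."""
    return mu_max * s / (ks + s + s * s / ki)

def optimal_substrate(ks=25.0, ki=150.0):
    """Concentration maximizing the Haldane rate: S* = sqrt(Ks * Ki).
    A controlled influent feed would regulate s near this setpoint."""
    return math.sqrt(ks * ki)
```

Because the rate curve peaks at S*, a flow-rate controller that holds the reactor near this concentration degrades the toxicant fastest regardless of how concentrated the influent is, which is the flexibility the abstract reports.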
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deng, J.
This imaging educational program will focus on solutions to common pediatric image quality optimization challenges. The speakers will present collective knowledge on best practices in pediatric imaging from their experience at dedicated children’s hospitals. One of the most commonly encountered pediatric imaging requirements for the non-specialist hospital is pediatric CT in the emergency room setting. Thus, this educational program will begin with optimization of pediatric CT in the emergency department. Though pediatric cardiovascular MRI may be less common in the non-specialist hospitals, low pediatric volumes and unique cardiovascular anatomy make optimization of these techniques difficult. Therefore, our second speaker will review best practices in pediatric cardiovascular MRI based on experiences from a children’s hospital with a large volume of cardiac patients. Learning Objectives: To learn techniques for optimizing radiation dose and image quality for CT of children in the emergency room setting. To learn solutions for consistently high quality cardiovascular MRI of children.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fournel, B.; Barre, Y.; Lepeytre, C.
2012-07-01
Liquid waste decontamination processes are mainly based on two techniques: bulk processes and so-called cartridge processes. The first technique has been developed for the French nuclear fuel reprocessing industry since the 1960s at Marcoule and La Hague. It is a proven and mature technology which has been successfully and quickly implemented by AREVA at the Fukushima site for the processing of contaminated waters. The second technique, involving cartridge processes, offers new opportunities for the use of innovative adsorbents. The AREVA process developed for Fukushima and some results obtained on site are presented, as well as laboratory-scale results obtained in CEA laboratories. Examples of new adsorbent developments for liquid waste decontamination are also given. A chemical process unit based on the co-precipitation technique has been successfully and quickly implemented by AREVA at the Fukushima site for the processing of contaminated waters. The asset of this technique is its ability to process large volumes in a continuous mode. Several chemical products can be used to address specific radioelements such as Cs, Sr and Ru. Its drawback is the production of sludge (about 1% of the initial liquid volume). CEA developed strategies to model the co-precipitation phenomena in order to, firstly, minimize the quantity of added chemical reactants and, secondly, minimize the size of co-precipitation units. We are on the way to designing compact units that could be mobilized very quickly and efficiently in an accidental situation. Addressing the problem of sludge conditioning, cementation appears to be a very attractive solution.
The Fukushima accident has focused attention on optimizations that should be taken into account in future studies: to better account for non-typical aqueous matrices like seawater; to enlarge the spectrum of radioelements that can be efficiently processed, especially short-lived radioelements that are usually less present in standard effluents resulting from nuclear activities; and to develop reversible solid adsorbents for cartridge-type applications in order to minimize wastes.
Sarro, Karine J.; Silvatti, Amanda P.; Barros, Ricardo M. L.
2008-01-01
This work aimed to verify whether swimmers present better chest wall coordination during breathing than healthy non-athletes by analyzing the correlation between rib motion and the variation of thoracoabdominal volumes. The results of two up-to-date methods based on videogrammetry were correlated in this study. The first measured the volumes of 4 separate compartments of the chest wall (superior thorax, inferior thorax, superior abdomen and inferior abdomen) as a function of time. The second calculated the rotation angle of the 2nd to the 10th ribs around the quasi-transversal axis, also as a function of time. The chest wall was represented by 53 markers attached to the ribs, vertebrae, thorax and abdomen of 15 male swimmers and of 15 non-athletes. A kinematical analysis system equipped with 6 digital video cameras (60 Hz) was used to obtain the 3D coordinates of the markers. Correlating the curves of rib rotation angles with the curves of the separate volumes, swimmers presented higher values than non-athletes when the superior and inferior abdomen were considered, and the highest correlation values were found in swimmers for the inferior thorax. These results suggest better coordination between rib motion and thoracoabdominal volumes in swimmers, indicating the prevalent and coordinated action of the diaphragm and abdominal muscles to inflate and deflate the chest wall. The results further suggest that swimming practice leads to the formation of an optimized breathing pattern and can partially explain the higher lung volumes found in these athletes as reported in the literature.
Key points: The study revealed that swimmers present a higher correlation between rib motion and the variation of abdominal volumes than non-swimmers, suggesting that swimming practice might lead to the formation of an optimized breathing pattern, increasing the coordination between thoracoabdominal volumes and rib motion. No previous work was found in the literature reporting this optimized breathing pattern in swimmers. The higher coordination between thoracoabdominal volumes and rib motion found in swimmers can partially explain the higher lung volumes reported in the literature for these athletes. PMID:24149449
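The study's core quantity, the correlation between a rib rotation-angle curve and a compartment-volume curve over time, is the Pearson coefficient of two sampled signals. A minimal sketch with made-up curves (not the study's data):

```python
def pearson(x, y):
    """Pearson correlation between two equal-length time series,
    e.g. a rib rotation-angle curve and a compartment-volume curve."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Illustrative breath: angle tracks volume closely (coordinated motion)
angle  = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0]   # rib rotation, degrees
volume = [2.1, 2.4, 2.8, 3.1, 2.8, 2.4, 2.1]   # compartment volume, L
print(round(pearson(angle, volume), 3))  # close to 1 for coordinated motion
```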
Funke, Stefanie; Matilainen, Julia; Nalenz, Heiko; Bechtold-Peters, Karoline; Mahler, Hanns-Christian; Friess, Wolfgang
2016-07-01
Biopharmaceutical products are increasingly commercialized as drug/device combinations to enable self-administration. Siliconization of the inner syringe/cartridge glass barrel for adequate functionality is performed either at the supplier or at the drug product manufacturing site. Yet, siliconization processes are often insufficiently investigated. In this study, an optimized bake-on siliconization process for cartridges was developed using a pilot-scale siliconization unit. The following process parameters were investigated: spray quantity, nozzle position, spray pressure, time for pump dosing, and the silicone emulsion concentration. A spray quantity of 4 mg emulsion showed the best, immediate atomization into a fine spray. 16 and 29 mg of emulsion, hence 4-7 times the spray volume, first generated an emulsion jet before atomization was achieved. Poor atomization of higher quantities correlated with increased spray loss and inhomogeneous silicone distribution, e.g., due to runlets forming build-ups at the cartridge lower edge and depositing on the star wheel. A prolonged time for pump dosing of 175 ms led to a more intensive, long-lasting spray compared to 60 ms, as anticipated from a higher air-to-liquid ratio. A higher spray pressure of 2.5 bar did not improve atomization but led to increased spray loss. At a 20 mm nozzle-to-flange distance the spray cone exactly reached the cartridge flange, which was optimal for thicker silicone layers at the flange to ease piston break-loose. Initially, 10 μg silicone was sufficient for adequate extrusion in filled cartridges. However, both maximum break-loose and gliding forces in filled cartridges gradually increased from 5-8 N to 21-22 N over 80 weeks of storage at room temperature. The increase for a 30 μg silicone level, from 3-6 N to 10-12 N, was moderate.
Overall, the study provides comprehensive insight into critical process parameters during the initial spray-on process and the impact of these parameters on the characteristics of the silicone layer, also in the context of long-term product storage. The presented experimental toolbox may be utilized for the development or evaluation of siliconization processes. Copyright © 2016 Elsevier B.V. All rights reserved.
Renal cortex segmentation using optimal surface search with novel graph construction.
Li, Xiuli; Chen, Xinjian; Yao, Jianhua; Zhang, Xing; Tian, Jie
2011-01-01
In this paper, we propose a novel approach to solve the renal cortex segmentation problem, which has rarely been studied. In this study, the renal cortex segmentation problem is handled as a multiple-surfaces extraction problem, which is solved using the optimal surface search method. We propose a novel graph construction scheme in the optimal surface search to better accommodate multiple surfaces. Different surface sub-graphs are constructed according to their properties, and inter-surface relationships are also modeled in the graph. The proposed method was tested on 17 clinical CT datasets. The true positive volume fraction (TPVF) and false positive volume fraction (FPVF) are 74.10% and 0.08%, respectively. The experimental results demonstrate the effectiveness of the proposed method.
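The two reported metrics can be computed directly from voxel sets. A hedged sketch (voxel sets and the scene size are toy assumptions; real evaluations work on 3D label arrays): TPVF is the fraction of true-object voxels captured, and FPVF here is taken relative to the background of a reference scene.

```python
def tpvf_fpvf(seg, truth, scene_size):
    """True/false positive volume fractions for binary segmentations.
    seg, truth: sets of voxel indices; scene_size: total voxel count of
    the reference scene (an assumed simplification of the usual setup)."""
    tp = len(seg & truth)          # correctly segmented voxels
    fp = len(seg - truth)          # spurious voxels
    tpvf = tp / len(truth)
    fpvf = fp / (scene_size - len(truth))
    return tpvf, fpvf

truth = set(range(0, 100))
seg = set(range(10, 105))          # misses 10 voxels, adds 5 spurious ones
tpvf, fpvf = tpvf_fpvf(seg, truth, scene_size=10_000)
print(round(tpvf, 3), round(fpvf, 5))
```

A high TPVF with a near-zero FPVF, as in the paper's 74.10% / 0.08%, reflects that the background is vastly larger than the cortex, so even small absolute false-positive counts stay tiny as a fraction.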
Equivalent Air Spring Suspension Model for Quarter-Passive Model of Passenger Vehicles
Abid, Haider J.; Chen, Jie; Nassar, Ameen A.
2015-01-01
This paper investigates the equivalence of the GENSIS air spring suspension system to a passive suspension system. SIMULINK simulation together with OptiY optimization is used to obtain an air spring suspension model equivalent to the passive suspension system, where the difference between the car body responses of the two systems to the same road profile inputs is used as the objective function for optimization (in the OptiY program). The parameters of the air spring system, such as initial pressure, volume of the bag, length of the surge pipe, diameter of the surge pipe, and volume of the reservoir, are obtained from the optimization. The simulation results show that the equivalent air spring suspension system can produce responses very close to those of the passive suspension system. PMID:27351020
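The equivalence search above boils down to minimizing the squared difference between two simulated body responses. A toy sketch of that idea (not the GENSIS/OptiY model; a one-degree-of-freedom quarter car with assumed mass, damping, and a coarse grid search instead of OptiY):

```python
def simulate(k, c=800.0, m=250.0, dt=0.001, steps=2000):
    """Sprung mass released from a 0.05 m offset; semi-implicit Euler.
    All values are illustrative, not the paper's parameters."""
    x, v = 0.05, 0.0
    out = []
    for _ in range(steps):
        a = -(k * x + c * v) / m
        v += a * dt
        x += v * dt
        out.append(x)
    return out

def objective(candidate_k, reference):
    """Sum of squared body-response differences: the criterion the
    study hands to the optimizer."""
    trial = simulate(candidate_k)
    return sum((a - b) ** 2 for a, b in zip(trial, reference))

reference = simulate(15000.0)      # stand-in for the passive response
best = min(range(10000, 20001, 500),
           key=lambda k: objective(float(k), reference))
print(best)  # → 15000, the grid point matching the reference stiffness
```

In the paper the candidate model has several coupled air-spring parameters rather than one stiffness, and OptiY searches the multidimensional space, but the objective has the same shape.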
NASA Technical Reports Server (NTRS)
Majumdar, Alok; Schallhorn, Paul
1998-01-01
This paper describes a finite volume computational thermo-fluid dynamics method to solve the Navier-Stokes equations in conjunction with the energy equation and the thermodynamic equation of state in an unstructured coordinate system. The system of equations has been solved by a simultaneous Newton-Raphson method and compared with several benchmark solutions. Excellent agreement has been obtained in each case, and the method has been found to be significantly faster than conventional Computational Fluid Dynamics (CFD) methods; it therefore has potential for implementation in multi-disciplinary analysis and design optimization of fluid and thermal systems. The paper also describes a design optimization algorithm based on the Newton-Raphson method which has recently been tested in a turbomachinery application.
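The simultaneous Newton-Raphson idea, linearize all residual equations at once and solve for the full update, can be shown on a toy two-equation system (not the Navier-Stokes set; the system and Jacobian here are assumptions for illustration):

```python
def newton_2x2(f, jac, x0, tol=1e-10, max_iter=50):
    """Simultaneous Newton-Raphson for a 2-equation nonlinear system:
    repeatedly solve J * delta = -F and update both unknowns together."""
    x, y = x0
    for _ in range(max_iter):
        f1, f2 = f(x, y)
        if abs(f1) < tol and abs(f2) < tol:
            break
        a, b, c, d = jac(x, y)           # Jacobian [[a, b], [c, d]]
        det = a * d - b * c
        dx = (-f1 * d + f2 * b) / det    # Cramer's rule for the update
        dy = (-f2 * a + f1 * c) / det
        x, y = x + dx, y + dy
    return x, y

# Toy coupled system: x^2 + y^2 = 4 and x*y = 1
f = lambda x, y: (x * x + y * y - 4.0, x * y - 1.0)
jac = lambda x, y: (2 * x, 2 * y, y, x)
x, y = newton_2x2(f, jac, (2.0, 0.5))
```

Convergence is quadratic near the solution, which is what makes the simultaneous approach competitive with segregated CFD iteration schemes when a good Jacobian is available.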
ISRU System Model Tool: From Excavation to Oxygen Production
NASA Technical Reports Server (NTRS)
Santiago-Maldonado, Edgardo; Linne, Diane L.
2007-01-01
In the late 1980s, conceptual designs for an in situ oxygen production plant were documented in a study by Eagle Engineering [1]. In the "Summary of Findings" of this study, it is clearly pointed out that "reported process mass and power estimates lack a consistent basis to allow comparison." The study goes on to say: "A study to produce a set of process mass, power, and volume requirements on a consistent basis is recommended." Today, approximately twenty years later, as humans plan to return to the moon and venture beyond, the need for flexible, up-to-date models of the oxygen extraction and production process has become even clearer. Multiple processes for the production of oxygen from lunar regolith are being investigated by NASA, academia, and industry. Three processes that have shown technical merit are molten regolith electrolysis, hydrogen reduction, and carbothermal reduction. These processes have been selected by NASA as the basis for the development of the ISRU System Model Tool (ISMT). In working to develop up-to-date system models for these processes, NASA hopes to accomplish the following: (1) help in the evaluation process to select the most cost-effective and efficient process for further prototype development, (2) identify key parameters, (3) optimize the excavation and oxygen production processes, and (4) provide estimates of energy and power requirements, mass and volume of the system, oxygen production rate, mass of regolith required, mass of consumables, and other important parameters. Also, as confidence and high fidelity are achieved with each component's model, new techniques and processes can be introduced and analyzed at a fraction of the cost of traditional hardware development and test approaches. A first-generation ISRU System Model Tool has been used to provide inputs to the Lunar Architecture Team studies.
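The "consistent basis" comparison the tool enables can be illustrated with one of its simplest outputs: the regolith mass an excavator must deliver per year for a target oxygen output, given each process's extractable-oxygen yield. The yields and target below are placeholder assumptions, not NASA or ISMT figures.

```python
def yearly_regolith_mass(o2_rate_kg_per_yr, o2_yield_frac):
    """Regolith (kg/yr) that must be excavated for a target O2 output,
    given the mass fraction of O2 a process can extract from regolith."""
    return o2_rate_kg_per_yr / o2_yield_frac

# Illustrative yields only, for comparing processes on the same basis
processes = {
    "hydrogen reduction":  0.01,
    "carbothermal":        0.10,
    "molten electrolysis": 0.40,
}
target = 1000.0  # kg O2 per year (assumed mission target)
for name, y in sorted(processes.items(), key=lambda kv: -kv[1]):
    print(name, round(yearly_regolith_mass(target, y)), "kg regolith/yr")
```

The point of the sketch is the coupling the abstract describes: the oxygen-production choice drives the excavation requirement, so both must be modeled on the same basis before the processes can be fairly compared.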
Building high-performance system for processing a daily large volume of Chinese satellites imagery
NASA Astrophysics Data System (ADS)
Deng, Huawu; Huang, Shicun; Wang, Qi; Pan, Zhiqiang; Xin, Yubin
2014-10-01
The number of Earth observation satellites from China has increased dramatically in recent years, and those satellites are acquiring a large volume of imagery daily. As the main portal for image processing and distribution from those Chinese satellites, the China Centre for Resources Satellite Data and Application (CRESDA) has been working with PCI Geomatics during the last three years to solve two issues in this regard: processing the large volume of data (about 1,500 scenes or 1 TB per day) in a timely manner and generating geometrically accurate orthorectified products. After three years of research and development, a high-performance system has been built and successfully delivered. The system has a service-oriented architecture and can be deployed to a cluster of computers that may be configured with high-end computing power. The high performance is gained through, first, parallelizing the image processing algorithms using high-performance graphics processing unit (GPU) cards and multiple cores from multiple CPUs, and, second, distributing processing tasks to a cluster of computing nodes. While achieving up to thirty (and even more) times faster performance compared with traditional practice, a particular methodology was developed to improve the geometric accuracy of images acquired from Chinese satellites (including HJ-1 A/B, ZY-1-02C, ZY-3, GF-1, etc.). The methodology consists of fully automatic collection of dense ground control points (GCPs) from various resources and the application of those points to improve the photogrammetric model of the images. The delivered system is up and running at CRESDA for pre-operational production and has been generating a good return on investment by eliminating a great amount of manual labor and increasing daily data throughput more than tenfold with fewer operators.
Future work, such as development of more performance-optimized algorithms, robust image matching methods and application workflows, is identified to improve the system in the coming years.
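The second of the two parallelization layers, fanning a day's scenes out across a pool of workers, can be sketched in a few lines. This is a generic dispatch skeleton under stated assumptions (a stand-in per-scene function and hypothetical scene IDs), not the CRESDA system's actual service API.

```python
from concurrent.futures import ThreadPoolExecutor

def orthorectify(scene_id):
    """Stand-in for per-scene processing; the real system runs
    GPU-accelerated algorithms here. Returns a (scene, status) record."""
    return scene_id, "done"

def process_daily_batch(scene_ids, workers=8):
    """Fan a day's scenes out across a pool of workers, mirroring the
    cluster-of-nodes dispatch described above (a sketch, not the real API)."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(orthorectify, scene_ids))

# A day's load at the reported scale: ~1,500 scenes (IDs are hypothetical)
results = process_daily_batch([f"HJ1A-{i:04d}" for i in range(1500)])
print(len(results))
```

In production the workers would be processes or remote nodes rather than threads, since the per-scene work is compute-bound; the dispatch pattern is the same.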
Asfaram, Arash; Ghaedi, Mehrorang; Purkait, Mihir Kumar
2017-09-01
A sensitive analytical method is investigated to concentrate and determine trace levels of sildenafil citrate (SLC) present in water and urine samples. The method is based on sample treatment using dispersive solid-phase micro-extraction (DSPME) with a laboratory-made Mn@CuS/ZnS nanocomposite loaded on activated carbon (Mn@CuS/ZnS-NCs-AC) as a sorbent for the target analyte. The efficiency was enhanced by ultrasound assistance combined with dispersive nanocomposite solid-phase micro-extraction (UA-DNSPME). Four significant variables affecting SLC recovery, namely pH, eluent volume, sonication time, and adsorbent mass, were selected by Plackett-Burman design (PBD) experiments. These selected factors were optimized by central composite design (CCD) to maximize extraction of SLC. The results showed that the optimum conditions for maximizing extraction of SLC were pH 6.0, 300 μL of eluent (acetonitrile), 10 mg of adsorbent, and 6 min sonication time. Under optimized conditions, good linearity for SLC was obtained from 30 to 4000 ng/mL with an R2 of 0.99. The limit of detection (LOD) was 2.50 ng/mL, and the recoveries at two spiked levels ranged from 97.37 to 103.21% with a relative standard deviation (RSD) of less than 4.50% (n=15). The enhancement factor (EF) was 81.91. The results show that the combination of UAE with DNSPME is a suitable method for the determination of SLC in water and urine samples. Copyright © 2017 Elsevier B.V. All rights reserved.
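The two headline figures of merit follow from standard calibration formulas: LOD = 3·σ(blank)/slope (the usual convention), and the enhancement factor as the ratio of calibration slopes with and without the micro-extraction step. The numeric inputs below are invented for illustration, chosen only so the outputs land near the reported 2.50 ng/mL and 81.91; they are not the paper's raw data.

```python
def limit_of_detection(blank_sd, slope):
    """LOD = 3 * sigma_blank / calibration slope (3-sigma convention)."""
    return 3.0 * blank_sd / slope

def enhancement_factor(slope_preconc, slope_direct):
    """Ratio of calibration slopes with vs. without preconcentration."""
    return slope_preconc / slope_direct

print(round(limit_of_detection(blank_sd=0.005, slope=0.006), 2))  # ng/mL scale
print(round(enhancement_factor(0.491, 0.006), 1))
```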
Intracerebral Cell Implantation: Preparation and Characterization of Cell Suspensions.
Rossetti, Tiziana; Nicholls, Francesca; Modo, Michel
2016-01-01
Intracerebral cell transplantation is increasingly finding clinical translation. However, the number of cells surviving after implantation is low (5-10%) compared to the number of cells injected. Although significant efforts have been made with regard to the investigation of apoptosis of cells after implantation, very little optimization of cell preparation and administration has been undertaken. Moreover, there is a general neglect of the biophysical aspects of cell injection. Cell transplantation can only be an efficient therapeutic approach if an optimal transfer of cells from the dish to the brain can be ensured. We therefore focused on the in vitro aspects of cell preparation of a clinical-grade human neural stem cell (NSC) line for intracerebral cell implantation. NSCs were suspended in five different vehicles: phosphate-buffered saline (PBS), Dulbecco's modified Eagle medium (DMEM), artificial cerebrospinal fluid (aCSF), HypoThermosol, and Pluronic. Suspension accuracy, consistency, and cell settling were determined for different cell volume fractions, in addition to cell viability, cell membrane damage, and clumping. Maintenance of cells in suspension was evaluated while being stored for 8 h on ice, at room temperature, or at physiological normothermia. Significant differences between suspension vehicles and cellular volume fractions were evident. HypoThermosol and Pluronic performed best, with PBS, aCSF, and DMEM exhibiting less consistency, especially in maintaining a suspension and preserving viability under different storage conditions. These results provide the basis to further investigate these preparation parameters during the intracerebral delivery of NSCs to ensure an optimized delivery process for efficient clinical translation.
Tong, Yubing; Udupa, Jayaram K.; Torigian, Drew A.
2014-01-01
Purpose: The quantification of body fat plays an important role in the study of numerous diseases. It is common current practice to use the fat area at a single abdominal computed tomography (CT) slice as a marker of the body fat content in studying various disease processes. This paper sets out to answer three questions related to this issue which have not been addressed in the literature. At what single anatomic slice location do the areas of subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) estimated from the slice correlate maximally with the corresponding fat volume measures? How does one ensure that the slices used for correlation calculation from different subjects are at the same anatomic location? Are there combinations of multiple slices (not necessarily contiguous) whose area sum correlates better with volume than does single slice area with volume? Methods: The authors propose a novel strategy for mapping slice locations to a standardized anatomic space so that same anatomic slice locations are identified in different subjects. The authors then study the volume-to-area correlations and determine where they become maximal. To address the third issue, the authors carry out similar correlation studies by utilizing two and three slices for calculating area sum. Results: Based on 50 abdominal CT data sets, the proposed mapping achieves significantly improved consistency of anatomic localization compared to current practice. Maximum correlations are achieved at different anatomic locations for SAT and VAT which are both different from the L4-L5 junction commonly utilized currently for single slice area estimation as a marker. Conclusions: The maximum area-to-volume correlation achieved is quite high, suggesting that it may be reasonable to estimate body fat by measuring the area of fat from a single anatomic slice at the site of maximum correlation and use this as a marker. 
The site of maximum correlation is not at L4-L5 as commonly assumed, but is more superiorly located at T12-L1 for SAT and at L3-L4 for VAT. Furthermore, the optimal anatomic locations for SAT and VAT estimation are not the same, contrary to common assumption. The proposed standardized space mapping achieves high consistency of anatomic localization by accurately managing nonlinearities in the relationships among landmarks. Multiple slices achieve greater improvement in correlation for VAT than for SAT. The optimal locations in the case of multiple slices are not contiguous. PMID:24877839
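Once slice locations are mapped to a standardized anatomic space, finding the optimal marker slice reduces to computing, at each location, the across-subject correlation between slice area and total fat volume and taking the argmax. A minimal sketch with fabricated data (four subjects, three standardized slice locations; none of this is the paper's data):

```python
def best_slice(areas_by_subject, volumes):
    """areas_by_subject: per-subject lists of fat areas at the same
    standardized slice locations; volumes: per-subject total fat volume.
    Returns the slice index whose areas correlate best with volume."""
    def corr(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        sx = sum((a - mx) ** 2 for a in xs) ** 0.5
        sy = sum((b - my) ** 2 for b in ys) ** 0.5
        return cov / (sx * sy)
    n_slices = len(areas_by_subject[0])
    scores = [corr([s[i] for s in areas_by_subject], volumes)
              for i in range(n_slices)]
    return max(range(n_slices), key=lambda i: scores[i])

areas = [          # fabricated areas at 3 standardized slice locations
    [5.0, 1.0, 9.0],
    [3.0, 2.0, 8.0],
    [6.0, 3.1, 7.0],
    [4.0, 4.0, 5.0],
]
volumes = [10.0, 20.0, 30.0, 40.0]
print(best_slice(areas, volumes))  # slice index 1 tracks volume best
```

The standardized-space mapping is the critical precondition: without it, "slice i" would refer to different anatomy in different subjects and the across-subject correlation would be meaningless.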