Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1997-01-01
A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.
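For reference, the conventional inputs used for comparison are simple alternating-sign multistep maneuvers; the short sketch below generates 3-2-1-1 and doublet time histories. The pulse unit time, amplitude, and sampling interval are illustrative assumptions, not the values flown on the HARV.

```python
import numpy as np

def multistep_input(pattern, dt_unit=0.5, amplitude=2.0, dt=0.01):
    """Build a square-wave multistep maneuver: each entry of `pattern` is the
    duration (in time units) of one pulse, with the sign alternating each pulse."""
    segments = []
    sign = 1.0
    for units in pattern:
        n = int(round(units * dt_unit / dt))
        segments.append(np.full(n, sign * amplitude))
        sign = -sign
    u = np.concatenate(segments)
    t = np.arange(u.size) * dt
    return t, u

# Conventional inputs used for comparison in the flight test:
t_3211, u_3211 = multistep_input([3, 2, 1, 1])   # 3-2-1-1 multistep
t_dblt, u_dblt = multistep_input([1, 1])         # doublet
print("3-2-1-1 duration:", t_3211[-1], "s; doublet duration:", t_dblt[-1], "s")
```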
Liposomal Bupivacaine Injection Technique in Total Knee Arthroplasty.
Meneghini, R Michael; Bagsby, Deren; Ireland, Philip H; Ziemba-Davis, Mary; Lovro, Luke R
2017-01-01
Liposomal bupivacaine has gained popularity for pain control after total knee arthroplasty (TKA), yet its true efficacy remains unproven. We compared the efficacy of two different periarticular injection (PAI) techniques for liposomal bupivacaine with a conventional PAI control group. This retrospective cohort study compared consecutive patients undergoing TKA with a manufacturer-recommended, optimized injection technique for liposomal bupivacaine, a traditional injection technique for liposomal bupivacaine, and a conventional PAI of ropivacaine, morphine, and epinephrine. The optimized technique utilized a smaller gauge needle and more injection sites. Self-reported pain scores, rescue opioids, and side effects were compared. There were 41 patients in the liposomal bupivacaine optimized injection group, 60 in the liposomal bupivacaine traditional injection group, and 184 in the conventional PAI control group. PAI liposomal bupivacaine delivered via manufacturer-recommended technique offered no benefit over PAI ropivacaine, morphine, and epinephrine. Mean pain scores and the proportions reporting no or mild pain, time to first opioid, and amount of opioids consumed were not better with PAI liposomal bupivacaine compared with PAI ropivacaine, morphine, and epinephrine. The use of the manufacturer-recommended technique for PAI of liposomal bupivacaine does not offer benefit over a conventional, less expensive PAI during TKA.
NASA Astrophysics Data System (ADS)
Gen, Mitsuo; Lin, Lin
Many real-world combinatorial optimization problems in industrial engineering and operations research are complex in nature and hard to solve by conventional techniques. Since the 1960s, there has been an increasing interest in imitating living beings to solve such hard combinatorial optimization problems. Simulating the natural evolutionary process of human beings results in stochastic optimization techniques called evolutionary algorithms (EAs), which can often outperform conventional optimization methods when applied to difficult real-world problems. In this survey paper, we provide a comprehensive survey of the current state of the art in the use of EAs in manufacturing and logistics systems. To demonstrate that EAs are powerful and broadly applicable stochastic search and optimization techniques, we deal with the following engineering design problems: transportation planning, layout design, and two-stage logistics models in logistics systems; and job-shop scheduling and resource-constrained project scheduling in manufacturing systems.
New evidence favoring multilevel decomposition and optimization
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Polignone, Debra A.
1990-01-01
The issue of the utility of multilevel decomposition and optimization remains controversial. To date, only the structural optimization community has actively developed and promoted multilevel optimization techniques. However, even this community acknowledges that multilevel optimization is ideally suited for a rather limited set of problems. It is warned that decomposition typically requires eliminating local variables by using global variables and that this in turn causes ill-conditioning of the multilevel optimization by adding equality constraints. The purpose is to suggest a new multilevel optimization technique. This technique uses behavior variables, in addition to design variables and constraints, to decompose the problem. The new technique removes the need for equality constraints, simplifies the decomposition of the design problem, simplifies the programming task, and improves the convergence speed of multilevel optimization compared to conventional optimization.
Performance of Grey Wolf Optimizer on large scale problems
NASA Astrophysics Data System (ADS)
Gupta, Shubham; Deep, Kusum
2017-01-01
Numerous nature-inspired optimization techniques have been proposed in the literature for solving nonlinear continuous optimization problems, and they can be applied to real-life problems where conventional techniques cannot. The Grey Wolf Optimizer is one such technique that has gained popularity over the last two years. The objective of this paper is to investigate the performance of the Grey Wolf Optimization Algorithm on large-scale optimization problems. The algorithm is implemented on five common scalable problems from the literature, namely the Sphere, Rosenbrock, Rastrigin, Ackley and Griewank functions. The dimensions of these problems are varied from 50 to 1000. The results indicate that the Grey Wolf Optimizer is a powerful nature-inspired optimization algorithm for large-scale problems, except on Rosenbrock, which is a unimodal function.
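A minimal sketch of the canonical Grey Wolf Optimizer update applied to the Sphere benchmark is given below, assuming the standard formulation in which the three best wolves (alpha, beta, delta) guide the pack and the coefficient a decays linearly from 2 to 0; the population size, iteration count, and bounds are illustrative and not the settings used in the paper.

```python
import numpy as np

def sphere(x):
    # Sphere benchmark: f(x) = sum(x_i^2), global minimum 0 at the origin
    return np.sum(x ** 2)

def grey_wolf_optimizer(f, dim=50, n_wolves=30, max_iter=500, lb=-100.0, ub=100.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_wolves, dim))           # wolf positions
    fitness = np.apply_along_axis(f, 1, X)

    for t in range(max_iter):
        order = np.argsort(fitness)                          # alpha, beta, delta = 3 best wolves
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2.0 - 2.0 * t / max_iter                         # decays linearly from 2 to 0

        new_X = np.empty_like(X)
        for i in range(n_wolves):
            pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a                           # encircling coefficient
                C = 2 * r2
                D = np.abs(C * leader - X[i])
                pos += leader - A * D                        # candidate guided by this leader
            new_X[i] = np.clip(pos / 3.0, lb, ub)            # average of the three guides
        X = new_X
        fitness = np.apply_along_axis(f, 1, X)

    best = np.argmin(fitness)
    return X[best], fitness[best]

if __name__ == "__main__":
    x_best, f_best = grey_wolf_optimizer(sphere, dim=50)
    print("best Sphere value found:", f_best)
```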
Strategies for Fermentation Medium Optimization: An In-Depth Review
Singh, Vineeta; Haque, Shafiul; Niwas, Ram; Srivastava, Akansha; Pasupuleti, Mukesh; Tripathi, C. K. M.
2017-01-01
Optimization of the production medium is required to maximize metabolite yield. This can be achieved using a wide range of techniques, from the classical "one-factor-at-a-time" approach to modern statistical and mathematical techniques such as artificial neural networks (ANN) and genetic algorithms (GA). Every technique comes with its own advantages and disadvantages, and despite their drawbacks some techniques are still applied to obtain the best results. Using various optimization techniques in combination can also provide desirable results. In this article an attempt has been made to review the media optimization techniques currently applied during fermentation for metabolite production. A comparative analysis of the merits and demerits of conventional as well as modern optimization techniques has been carried out, and a logical basis for selecting the design of the fermentation medium is given. Overall, this review provides a rationale for selecting a suitable optimization technique for media design in fermentation processes for metabolite production. PMID:28111566
Continuous Optimization on Constraint Manifolds
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1988-01-01
This paper demonstrates continuous optimization on the differentiable manifold formed by continuous constraint functions. The first order tensor geodesic differential equation is solved on the manifold in both numerical and closed analytic form for simple nonlinear programs. Advantages and disadvantages with respect to conventional optimization techniques are discussed.
Evolutionary Optimization of Centrifugal Nozzles for Organic Vapours
NASA Astrophysics Data System (ADS)
Persico, Giacomo
2017-03-01
This paper discusses the shape optimization of non-conventional centrifugal turbine nozzles for Organic Rankine Cycle applications. The optimal aerodynamic design is supported by the use of a non-intrusive, gradient-free technique specifically developed for shape optimization of turbomachinery profiles. The method is constructed as a combination of a geometrical parametrization technique based on B-Splines, a high-fidelity and experimentally validated Computational Fluid Dynamic solver, and a surrogate-based evolutionary algorithm. The non-ideal gas behaviour characterizing the flow of organic fluids in the cascades of interest is introduced via a look-up-table approach, which is rigorously applied throughout the whole optimization process. Two transonic centrifugal nozzles are considered, featuring very different loading and radial extension. The application of a systematic and automatic design method to such a non-conventional configuration highlights the character of centrifugal cascades; the blades require a specific and non-trivial definition of the shape, especially in the rear part, to avoid the onset of shock waves. It is shown that the optimization acts in a similar way for the two cascades, identifying an optimal curvature of the blade that provides both a relevant increase in cascade performance and a reduction of downstream gradients.
Bashir, Mustafa R; Weber, Paul W; Husarik, Daniela B; Howle, Laurens E; Nelson, Rendon C
2012-08-01
To assess whether a scan triggering technique based on the slope of the time-attenuation curve combined with table speed optimization may improve arterial enhancement in aortic CT angiography compared to conventional threshold-based triggering techniques. Measurements of arterial enhancement were performed in a physiologic flow phantom over a range of simulated cardiac outputs (2.2-8.1 L/min) using contrast media boluses of 80 and 150 mL injected at 4 mL/s. These measurements were used to construct computer models of aortic attenuation in CT angiography, using cardiac output, aortic diameter, and CT table speed as input parameters. In-plane enhancement was calculated for normal and aneurysmal aortic diameters. Calculated arterial enhancement was poor (<150 HU) along most of the scan length using the threshold-based triggering technique for low cardiac outputs and the aneurysmal aorta model. Implementation of the slope-based triggering technique with table speed optimization improved enhancement in all scenarios and yielded good- (>200 HU; 13/16 scenarios) to excellent-quality (>300 HU; 3/16 scenarios) enhancement in all cases. Slope-based triggering with table speed optimization may improve the technical quality of aortic CT angiography over conventional threshold-based techniques, and may reduce technical failures related to low cardiac output and slow flow through an aneurysmal aorta.
Islam, Md Mainul; Shareef, Hussain; Mohamed, Azah
2017-01-01
The electric vehicle (EV) is considered a premium solution to global warming and various types of pollution. Nonetheless, a key concern is the recharging of EV batteries. Therefore, this study proposes a novel approach that considers the costs of transportation loss, buildup, and substation energy loss and that incorporates harmonic power loss into optimal rapid charging station (RCS) planning. A novel optimization technique, called binary lightning search algorithm (BLSA), is proposed to solve the optimization problem. BLSA is also applied to a conventional RCS planning method. A comprehensive analysis is conducted to assess the performance of the two RCS planning methods by using the IEEE 34-bus test system as the power grid. The comparative studies show that the proposed BLSA is better than other optimization techniques. The daily total cost in RCS planning of the proposed method, including harmonic power loss, decreases by 10% compared with that of the conventional method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, M; Ramaseshan, R
2016-06-15
Purpose: In this project, we compared the conventional tangent pair technique to an IMRT technique by analyzing the dose distribution. We also investigated the effect of respiration on planning target volume (PTV) dose coverage in both techniques. Methods: In order to implement the IMRT technique, a template-based planning protocol, dose constraints and treatment process were developed. Two open fields with optimized field weights were combined with two beamlet-optimization fields in the IMRT plans. We compared the dose distribution between the standard tangential pair and IMRT. The improvement in dose distribution was measured by parameters such as conformity index, homogeneity index and coverage index. Another end point was whether the IMRT technique would reduce the planning time for staff. The effect of the patient's respiration on the dose distribution was also estimated. Four-dimensional computed tomography (4DCT) at different phases of the breathing cycle was used to evaluate the effect of respiration on the IMRT-planned dose distribution. Results: We accumulated 10 patients who underwent 4DCT and were planned with both techniques. Based on the preliminary analysis, the dose distribution with the IMRT technique was better than with the conventional tangent pair technique. Furthermore, the effect of respiration on the IMRT plan was not significant, as evident from the 95% isodose line coverage of the PTV drawn on all phases of the 4DCT. Conclusion: Based on the 4DCT images, the breathing effect on the dose distribution was smaller than expected. We suspect that there are two reasons. First, the PTV movement due to respiration was not significant, possibly because we used a tilted breast board to set up patients. Second, the open fields with optimized field weights in the IMRT technique might reduce the breathing effect on the dose distribution. A further investigation is necessary.
SNR Improvement of QEPAS System by Preamplifier Circuit Optimization and Frequency Locked Technique
NASA Astrophysics Data System (ADS)
Zhang, Qinduan; Chang, Jun; Wang, Zongliang; Wang, Fupeng; Jiang, Fengting; Wang, Mengyao
2018-06-01
Preamplifier circuit noise is of great importance in quartz-enhanced photoacoustic spectroscopy (QEPAS) systems. In this paper, several noise sources are evaluated and discussed in detail. Based on the noise characteristics, corresponding noise reduction methods are proposed. In addition, a frequency-locked technique is introduced to further reduce the QEPAS system noise and improve the signal, achieving better performance than the conventional frequency scan method. As a result, the signal-to-noise ratio (SNR) could be increased 14-fold by utilizing the frequency-locked technique and a numerical averaging technique in the QEPAS system for water vapor detection.
Added Value of Assessing Adnexal Masses with Advanced MRI Techniques
Thomassin-Naggara, I.; Balvay, D.; Rockall, A.; Carette, M. F.; Ballester, M.; Darai, E.; Bazot, M.
2015-01-01
This review will present the added value of perfusion and diffusion MR sequences to characterize adnexal masses. These two functional MR techniques are readily available in routine clinical practice. We will describe the acquisition parameters and a method of analysis to optimize their added value compared with conventional images. We will then propose a model of interpretation that combines the anatomical and morphological information from conventional MRI sequences with the functional information provided by perfusion and diffusion weighted sequences. PMID:26413542
NASA Astrophysics Data System (ADS)
Mahapatra, Prasant Kumar; Sethi, Spardha; Kumar, Amod
2015-10-01
In the conventional tool positioning technique, sensors embedded in the motion stages provide accurate tool position information. In this paper, a machine vision based system and an image processing technique are described for measuring the motion of a lathe tool from two-dimensional sequential images captured using a charge-coupled device camera with a resolution of 250 microns. An algorithm was developed to calculate the observed distance travelled by the tool from the captured images. As expected, error was observed in the value of the distance traversed by the tool calculated from these images. Errors in lathe tool movement due to the machine vision system, calibration, environmental factors, etc. were minimized using two soft computing techniques, namely the artificial immune system (AIS) and particle swarm optimization (PSO). The results show a better capability of AIS over PSO.
Ganesh, Sri; Brar, Sheetal
2017-08-01
To describe a "no dissection" technique of lenticule removal in small incision lenticule extraction (SMILE). After docking and laser delivery, a microforceps is used to grasp and gently peel off the lenticule from the underlying stromal bed, without performing any dissection of the upper and lower planes of the lenticule. Prerequisites are a surgeon experienced in the conventional SMILE technique, optimized laser energy settings, and a minimum peripheral lenticule thickness of 25 to 30 µm. The interface as assessed in postoperative dilated clinical photographs was seen to be clearer with less roughness compared to the conventional dissection technique. This may potentially result in better first postoperative visual acuity and quality of vision due to less corneal tissue trauma and minimal tissue handling, thus potentially resulting in faster visual recovery. No dissection lenticule removal is a feasible and reproducible technique that may result in better immediate visual quality compared to the conventional SMILE technique. [J Refract Surg. 2017;33(8):563-566.]. Copyright 2017, SLACK Incorporated.
A case study on topology optimized design for additive manufacturing
NASA Astrophysics Data System (ADS)
Gebisa, A. W.; Lemu, H. G.
2017-12-01
Topology optimization is an optimization method that employs mathematical tools to optimize the material distribution in a part to be designed. Earlier developments of topology optimization assumed conventional manufacturing techniques, which have limitations in producing complex geometries. This has hindered topology optimization from being fully realized. With the emergence of additive manufacturing (AM) technologies, which build a part layer upon layer directly from three-dimensional (3D) model data, producing complex geometry is no longer an issue. Realization of topology optimization through AM provides full design freedom for design engineers. The article focuses on a topologically optimized design approach for additive manufacturing, with a case study on the lightweight design of a jet engine bracket. The study results show that topology optimization is a powerful design technique for reducing the weight of a product while maintaining the design requirements, provided additive manufacturing is considered.
Evaluation of ultrasonics and optimized radiography for 2219-T87 aluminum weldments
NASA Technical Reports Server (NTRS)
Clotfelter, W. N.; Hoop, J. M.; Duren, P. C.
1975-01-01
Ultrasonic studies are described which are specifically directed toward the quantitative measurement of randomly located defects previously found in aluminum welds with radiography or with dye penetrants. Experimental radiographic studies were also made to optimize techniques for welds of the thickness range to be used in fabricating the External Tank of the Space Shuttle. Conventional and innovative ultrasonic techniques were applied to the flaw size measurement problem. Advantages and disadvantages of each method are discussed. Flaw size data obtained ultrasonically were compared to radiographic data and to real flaw sizes determined by destructive measurements. Considerable success was achieved with pulse echo techniques and with 'pitch and catch' techniques. The radiographic work described demonstrates that careful selection of film exposure parameters for a particular application must be made to obtain optimized flaw detectability. Thus, film exposure techniques can be improved even though radiography is an old weld inspection method.
Topology-optimized metasurfaces: impact of initial geometric layout.
Yang, Jianji; Fan, Jonathan A
2017-08-15
Topology optimization is a powerful iterative inverse design technique in metasurface engineering and can transform an initial layout into a high-performance device. With this method, devices are optimized within a local design phase space, making the identification of suitable initial geometries essential. In this Letter, we examine the impact of initial geometric layout on the performance of large-angle (75 deg) topology-optimized metagrating deflectors. We find that when conventional metasurface designs based on dielectric nanoposts are used as initial layouts for topology optimization, the final devices have efficiencies around 65%. In contrast, when random initial layouts are used, the final devices have ultra-high efficiencies that can reach 94%. Our numerical experiments suggest that device topologies based on conventional metasurface designs may not be suitable to produce ultra-high-efficiency, large-angle metasurfaces. Rather, initial geometric layouts with non-trivial topologies and shapes are required.
NASA Astrophysics Data System (ADS)
Yamaguchi, Hideshi; Soeda, Takeshi
2015-03-01
A practical framework for an electron beam induced current (EBIC) technique has been established for conductive materials based on a numerical optimization approach. Although the conventional EBIC technique is useful for evaluating the distributions of dopants or crystal defects in semiconductor transistors, issues related to the reproducibility and quantitative capability of measurements using this technique persist. For instance, it is difficult to acquire high-quality EBIC images throughout continuous tests due to variation in operator skill or test environment. Recently, owing to the evaluation of EBIC equipment performance and the numerical optimization of equipment items, the consistent acquisition of high-contrast images has become possible, improving reproducibility as well as yield regardless of operator skill or test environment. The technique proposed herein is even more sensitive and quantitative than scanning probe microscopy, an imaging technique that can possibly damage the sample. The new technique is expected to benefit the electrical evaluation of fragile or soft materials along with LSI materials.
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-02-01
Multiresolution analysis techniques including the continuous wavelet transform, empirical mode decomposition, and variational mode decomposition are tested in the context of interest rate next-day variation prediction. In particular, multiresolution analysis techniques are used to decompose the actual interest rate variation, and a feedforward neural network is used for training and prediction. A particle swarm optimization technique is adopted to optimize its initial weights. For comparison purposes, an autoregressive moving average model, a random walk process, and the naive model are used as the main reference models. In order to show the feasibility of the presented hybrid models, which combine multiresolution analysis techniques and a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates, including Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-month, 6-month and 1-year treasury bills, and the effective federal fund rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast interest rate daily variations as they provide good forecasting performance.
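Below is a minimal sketch of how PSO can be used to select the initial weights of a small feedforward network, as the abstract describes. The synthetic series, network size, and PSO constants are illustrative assumptions, not the authors' settings; in practice the PSO-selected weight vector would then serve as the starting point for conventional gradient-based training.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy next-day-variation data standing in for an interest-rate series.
T = 300
series = np.cumsum(rng.normal(scale=0.1, size=T))           # synthetic rate path
X = np.column_stack([series[i:T - 3 + i] for i in range(3)])  # 3 lagged inputs
y = series[3:] - series[2:-1]                                 # next-day variation target

n_in, n_hid = X.shape[1], 8
n_w = n_in * n_hid + n_hid + n_hid + 1                        # total weight count

def unpack(w):
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid]; i += n_hid
    return W1, b1, W2, w[i]

def mse(w):
    W1, b1, W2, b2 = unpack(w)
    pred = np.tanh(X @ W1 + b1) @ W2 + b2                     # one-hidden-layer net
    return np.mean((pred - y) ** 2)

# Standard global-best PSO over the flattened weight vector.
n_particles, iters = 30, 200
pos = rng.normal(scale=0.5, size=(n_particles, n_w))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

w_inertia, c1, c2 = 0.72, 1.49, 1.49                          # common PSO constants
for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("training MSE of the PSO-initialized network:", mse(gbest))
```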
The Inverse Optimal Control Problem for a Three-Loop Missile Autopilot
NASA Astrophysics Data System (ADS)
Hwang, Donghyeok; Tahk, Min-Jea
2018-04-01
The performance characteristics of the autopilot must provide a fast response to intercept a maneuvering target and reasonable robustness for system stability under the effect of un-modeled dynamics and noise. In the conventional approach, the three-loop autopilot design is handled through the time constant, damping factor and open-loop crossover frequency to achieve the desired performance requirements. Note that general optimal control theory can also be used to obtain the same gains as the conventional approach. The key idea of using the optimal control technique for feedback gain design revolves around the appropriate selection and interpretation of the performance index for which the control is optimal. This paper derives an explicit expression which relates the weight parameters appearing in the quadratic performance index to design parameters such as open-loop crossover frequency, phase margin, damping factor, or time constant. Since not every selection of design parameters guarantees the existence of an optimal control law, explicit inequalities, named the optimality criteria for the three-loop autopilot (OC3L), are derived to identify the sets of design parameters for which the control law is optimal. Finally, based on OC3L, an efficient gain selection procedure is developed, in which the time constant is set as the design objective and the open-loop crossover frequency and phase margin serve as design constraints. The effectiveness of the proposed technique is illustrated through numerical simulations.
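As a complement to the abstract, the sketch below illustrates the forward direction that the inverse problem rests on: for a given quadratic performance index, solving the algebraic Riccati equation yields the optimal state-feedback gain, and the resulting closed-loop poles determine the time constant and damping. The second-order plant, the weight values, and the use of SciPy's solve_continuous_are are illustrative assumptions; this is not the paper's three-loop autopilot model or its inverse weight-selection procedure.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative second-order plant (NOT the missile airframe model from the paper):
# states = [attitude error, attitude rate], single control input.
A = np.array([[0.0, 1.0],
              [-4.0, -0.8]])
B = np.array([[0.0],
              [2.5]])

def lqr_gain(Q, R):
    """Solve the continuous algebraic Riccati equation and return K = R^-1 B^T P."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Sweep the state weighting and observe how the closed-loop poles
# (hence time constant and damping) change with the performance index.
for q11 in (1.0, 10.0, 100.0):
    Q = np.diag([q11, 1.0])
    R = np.array([[1.0]])
    K = lqr_gain(Q, R)
    poles = np.linalg.eigvals(A - B @ K)
    print(f"q11={q11:6.1f}  gains K={np.round(K.ravel(), 3)}  closed-loop poles={np.round(poles, 3)}")
```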
Nickl, Stefanie; Nedomansky, Jakob; Radtke, Christine; Haslik, Werner; Schroegendorfer, Klaus F
2018-01-31
The transverse myocutaneous gracilis (TMG) flap is a widely used alternative to abdominal flaps in autologous breast reconstruction. However, secondary procedures for aesthetic refinement are frequently necessary. Herein, we present our experience with an optimized approach in TMG breast reconstruction to enhance aesthetic outcome and to reduce the need for secondary refinements. We retrospectively analyzed 37 immediate or delayed reconstructions with TMG flaps in 34 women, performed between 2009 and 2015. Four patients (5 flaps) constituted the conventional group (non-optimized approach). Thirty patients (32 flaps; modified group) underwent an optimized procedure consisting of modified flap harvesting and shaping techniques and methods utilized to reduce denting after rib resection and to diminish donor site morbidity. Statistically significantly fewer secondary procedures (0.6 ± 0.9 versus 4.8 ± 2.2; P < .001) and fewer trips to the OR (0.4 ± 0.7 versus 2.3 ± 1.0 times; P = .001) for aesthetic refinement were needed in the modified group as compared to the conventional group. In the modified group, 4 patients (13.3%) required refinement of the reconstructed breast, 7 patients (23.3%) underwent mastopexy/mammoplasty or lipofilling of the contralateral breast, and 4 patients (13.3%) required refinement of the contralateral thigh. Total flap loss did not occur in any patient. Revision surgery was needed once. Compared to the conventional group, enhanced aesthetic results with a consecutive reduction of secondary refinements could be achieved when using our modified flap harvesting and shaping techniques, as well as our methods for reducing contour deformities after rib resection and for overcoming donor site morbidities. © 2017 Wiley Periodicals, Inc.
Optimization of ultrahigh-speed multiplex PCR for forensic analysis.
Gibson-Daw, Georgiana; Crenshaw, Karin; McCord, Bruce
2018-01-01
In this paper, we demonstrate the design and optimization of an ultrafast PCR amplification technique, used with a seven-locus multiplex that is compatible with conventional capillary electrophoresis systems as well as newer microfluidic chip devices. The procedure involves the use of a high-speed polymerase and a rapid cycling protocol to permit multiplex PCR amplification of forensic short tandem repeat loci in 6.5 min. We describe the selection and optimization of master mix reagents such as enzyme, buffer, MgCl2, and dNTPs, as well as primer ratios, total volume, and cycle conditions, in order to get the best profile in the shortest time possible. Sensitivity and reproducibility studies are also described. The amplification process utilizes a small high-speed thermocycler and compact laptop, making it portable and potentially useful for rapid, inexpensive on-site genotyping. The seven loci of the multiplex were taken from conventional STR genotyping kits and selected for their size and lack of overlap. Analysis was performed using conventional capillary electrophoresis and microfluidics with fluorescent detection. Overall, this technique provides a more rapid method for sample screening of suspects and victims. Graphical abstract: Rapid amplification of forensic DNA using high speed thermal cycling followed by capillary or microfluidic electrophoresis.
Trellis coding techniques for mobile communications
NASA Technical Reports Server (NTRS)
Divsalar, D.; Simon, M. K.; Jedrey, T.
1988-01-01
A criterion for designing optimum trellis codes to be used over fading channels is given. A technique is shown for reducing certain multiple trellis codes, optimally designed for the fading channel, to conventional (i.e., multiplicity one) trellis codes. The computational cutoff rate R0 is evaluated for MPSK transmitted over fading channels. Examples of trellis codes optimally designed for the Rayleigh fading channel are given and compared with respect to R0. Two types of modulation/demodulation techniques are considered, namely coherent (using pilot tone-aided carrier recovery) and differentially coherent with Doppler frequency correction. Simulation results are given for end-to-end performance of two trellis-coded systems.
Optimization of a multi-well array SERS chip
NASA Astrophysics Data System (ADS)
Abell, J. L.; Driskell, J. D.; Dluhy, R. A.; Tripp, R. A.; Zhao, Y.-P.
2009-05-01
SERS-active substrates are fabricated by oblique angle deposition and patterned by a polymer-molding technique to provide a uniform array for high throughput biosensing and multiplexing. Using a conventional SERS-active molecule, 1,2-Bis(4-pyridyl)ethylene (BPE), we show that this device provides a uniform Raman signal enhancement from well to well. The patterning technique employed in this study demonstrates a flexibility allowing for patterning control and customization, and performance optimization of the substrate. Avian influenza is analyzed to demonstrate the ability of this multi-well patterned SERS substrate for biosensing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guida, K; Qamar, K; Thompson, M
Purpose: The RTOG 1005 trial offered a hypofractionated arm in delivering WBRT+SIB. Traditionally, treatments were planned at our institution using field-in-field (FiF) tangents with a concurrent 3D conformal boost. With the availability of VMAT, it is possible that a hybrid VMAT-3D planning technique could provide another avenue in treating WBRT+SIB. Methods: A retrospective study of nine patients previously treated using RTOG 1005 guidelines was performed to compare FiF+3D plans with the hybrid technique. A combination of static tangents and partial VMAT arcs were used in base-dose optimization. The hybrid plans were optimized to deliver 4005cGy to the breast PTVeval and 4800cGy to the lumpectomy PTVeval over 15 fractions. Plans were optimized to meet the planning goals dictated by RTOG 1005. Results: Hybrid plans yielded similar coverage of breast and lumpectomy PTVs (average D95 of 4013cGy compared to 3990cGy for conventional), while reducing the volume of high dose within the breast; the average D30 and D50 for the hybrid technique were 4517cGy and 4288cGy, compared to 4704cGy and 4377cGy for conventional planning. Hybrid plans increased conformity as well, yielding CI95% values of 1.22 and 1.54 for breast and lumpectomy PTVeval volumes; in contrast, conventional plans averaged 1.49 and 2.27, respectively. The nearby organs at risk (OARs) received more low dose with the hybrid plans due to low dose spray from the partial arcs, but all hybrid plans did meet the acceptable constraints, at a minimum, from the protocol. Treatment planning time was also reduced, as plans were inversely optimized (VMAT) rather than forward optimized. Conclusion: Hybrid-VMAT could be a solution in delivering WB+SIB, as plans yield very conformal treatment plans and maintain clinical standards in OAR sparing. For treating breast cancer patients with a simultaneously-integrated boost, Hybrid-VMAT offers superiority in dosimetric conformity and planning time as compared to FiF techniques.
Latha, Selvanathan; Sivaranjani, Govindhan; Dhanasekaran, Dharumadurai
2017-09-01
Among diverse actinobacteria, Streptomyces is a renowned ongoing source for the production of a large number of secondary metabolites, furnishing immeasurable pharmacological and biological activities. Hence, to meet the demand for new lead compounds for human and animal use, research is constantly targeting the bioprospecting of Streptomyces. Optimization of media components and physicochemical parameters is a plausible approach for the exploration of intensified production of novel as well as existing bioactive metabolites from various microbes, which is usually achieved by a range of classical techniques including one factor at a time (OFAT). However, the major drawbacks of conventional optimization methods have driven the use of statistical optimization approaches in fermentation process development. Response surface methodology (RSM) is one of the empirical techniques extensively used for modeling, optimization and analysis of fermentation processes. To date, several researchers have applied RSM to various bioprocess optimizations for the production of assorted natural substances from Streptomyces, with very promising results. This review summarizes some of the recent RSM-based studies on the enhanced production of antibiotics, enzymes and probiotics using Streptomyces, with the intention of highlighting the significance of Streptomyces as well as RSM to the research community and industry.
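A minimal sketch of the RSM workflow discussed in the review is given below: responses from a small two-factor central composite design are fitted with a full second-order polynomial and the stationary point of the fitted surface is located. The factors, design points, and titre values are invented purely for illustration and do not come from any study cited in the review.

```python
import numpy as np

# Coded levels of two medium factors (e.g., carbon and nitrogen source
# concentration) arranged as a face-centred central composite design; the
# responses are illustrative titres, not data from any cited study.
X = np.array([
    [-1, -1], [1, -1], [-1, 1], [1, 1],        # factorial points
    [-1, 0], [1, 0], [0, -1], [0, 1],          # axial (face-centred) points
    [0, 0], [0, 0], [0, 0],                    # centre replicates
], dtype=float)
y = np.array([55, 62, 58, 60, 63, 66, 61, 64, 70, 71, 69], dtype=float)

x1, x2 = X[:, 0], X[:, 1]
# Full second-order RSM model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
D = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
b0, b1, b2, b12, b11, b22 = np.linalg.lstsq(D, y, rcond=None)[0]

# Stationary point of the fitted surface: set the gradient to zero and solve.
B = np.array([[2 * b11, b12],
              [b12, 2 * b22]])
x_stat = np.linalg.solve(B, -np.array([b1, b2]))
y_stat = (b0 + b1 * x_stat[0] + b2 * x_stat[1] + b12 * x_stat[0] * x_stat[1]
          + b11 * x_stat[0] ** 2 + b22 * x_stat[1] ** 2)

print("stationary point (coded units):", np.round(x_stat, 2))
print("predicted response there:", round(float(y_stat), 1))
```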
Montaser, A.; Huse, G.R.; Wax, R.A.; Chan, S.-K.; Golightly, D.W.; Kane, J.S.; Dorrzapf, A.F.
1984-01-01
An inductively coupled Ar plasma (ICP), generated in a low-flow torch, was investigated by the simplex optimization technique for simultaneous, multielement atomic emission spectrometry (AES). The variables studied included forward power, observation height, gas flow (outer, intermediate, and nebulizer carrier) and sample uptake rate. When the ICP was operated at 720-W forward power with a total gas flow of 5 L/min, the signal-to-background ratios (S/B) of spectral lines from 20 elements were either comparable or inferior, by a factor ranging from 1.5 to 2, to the results obtained from a conventional Ar ICP. Matrix effect studies on the Ca-PO4 system revealed that the plasma generated in the low-flow torch was as free of vaporization-atomization interferences as the conventional ICP, but easily ionizable elements produced a greater level of suppression or enhancement effects, which could be reduced at higher forward powers. Electron number densities, as determined via the series-limit line-merging technique, were lower in the plasma sustained in the low-flow torch as compared with the conventional ICP. © 1984 American Chemical Society.
Meleşcanu Imre, M; Preoteasa, E; Țâncu, AM; Preoteasa, CT
2013-01-01
Rationale. Imaging methods are increasingly used in the clinical workflow of modern dentistry. Since implant-based treatment alternatives are nowadays seen as the standard of care for edentulous patients, these techniques must be integrated into complete denture treatment. Aim. The study presents some evaluation techniques, based on profile teleradiography, for the edentulous patient treated with conventional dentures or with overdentures on mini dental implants (mini SKY Bredent). These offer data useful for optimal positioning of the artificial teeth and the mini dental implants, favoring an esthetic and functional treatment outcome. We also propose a method to conceive a simple surgical guide that allows prosthetically driven implant placement. Material and method. Clinical case reports were made, highlighting the importance of cephalometric evaluation on lateral teleradiographs in completely edentulous patients. A clinical case is presented that reports, step by step, the preparation of the surgical guide (radio-opaque Bredent silicone) in order to place the mini dental implants under the best prosthetic and anatomic conditions. Conclusions. The profile teleradiograph is a useful tool for the practitioner. It allows the optimal site for implant placement to be established, in good relation with the overdenture. The conventional denture can easily and at relatively low cost be transformed into a surgical guide used during implant placement. PMID:23599828
Optimization of segmented thermoelectric generator using Taguchi and ANOVA techniques.
Kishore, Ravi Anant; Sanghadasa, Mohan; Priya, Shashank
2017-12-01
Recent studies have demonstrated that segmented thermoelectric generators (TEGs) can operate over a large thermal gradient and thus provide better performance (reported efficiency up to 11%) as compared to traditional TEGs comprising a single thermoelectric (TE) material. However, segmented TEGs are still in the early stages of development due to the inherent complexity in their design optimization and manufacturability. In this study, we demonstrate physics-based numerical techniques along with analysis of variance (ANOVA) and the Taguchi optimization method for optimizing the performance of segmented TEGs. We have considered a comprehensive set of design parameters, such as the geometrical dimensions of the p-n legs, height of segmentation, hot-side temperature, and load resistance, in order to optimize the output power and efficiency of segmented TEGs. Using state-of-the-art TE material properties and appropriate statistical tools, we provide a near-optimum TEG configuration with only 25 experiments, as compared to the 3125 experiments needed by conventional optimization methods. The effect of environmental factors on the optimization of segmented TEGs is also studied. The Taguchi results are validated against the results obtained using the traditional full factorial optimization technique, and a TEG configuration for simultaneous optimization of power and efficiency is obtained.
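The sketch below illustrates how a 25-run Taguchi study of five factors at five levels can replace the 3125-run full factorial: an L25 orthogonal array is built, each run is scored by a placeholder power model, and a main-effects (signal-to-noise) analysis picks the best level of each factor. The factor names, the placeholder response, and the larger-the-better S/N choice are assumptions for illustration; the paper uses physics-based TEG simulations instead.

```python
import numpy as np

p = 5  # levels per factor

# L25(5^6) orthogonal array from the classical finite-field construction:
# rows indexed by (a, b) in {0..4}^2; columns a, b, a+b, a+2b, a+3b, a+4b (mod 5).
a, b = np.meshgrid(np.arange(p), np.arange(p), indexing="ij")
a, b = a.ravel(), b.ravel()
oa = np.column_stack([a, b] + [(a + k * b) % p for k in range(1, p)])   # 25 x 6
runs = oa[:, :5]   # 5 factors: leg area, leg height, segmentation height,
                   # hot-side temperature, load resistance (all coded 0..4)

def output_power(levels):
    # Placeholder response standing in for the physics-based TEG simulation;
    # any deterministic, positive model of power versus factor levels works here.
    ideal = np.array([3, 2, 4, 4, 1])
    return 10.0 - 0.15 * np.sum((levels - ideal) ** 2)

power = np.array([output_power(r) for r in runs])
sn = 10 * np.log10(power ** 2)   # "larger-the-better" S/N for a positive response

# Taguchi main-effects analysis: average S/N at each level of each factor,
# then take the level with the highest mean S/N as the predicted optimum.
best_levels = [int(np.argmax([sn[runs[:, f] == lv].mean() for lv in range(p)]))
               for f in range(runs.shape[1])]
print("25 runs instead of 5**5 = 3125; predicted best level per factor:", best_levels)
```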
Biodiesel production from low cost and renewable feedstock
NASA Astrophysics Data System (ADS)
Gude, Veera G.; Grant, Georgene E.; Patil, Prafulla D.; Deng, Shuguang
2013-12-01
Sustainable biodiesel production should: a) utilize low-cost renewable feedstock; b) utilize energy-efficient, nonconventional heating and mixing techniques; c) increase the net energy benefit of the process; and d) utilize renewable feedstock/energy sources where possible. In this paper, we discuss the merits of biodiesel production following these criteria, supported by the experimental results obtained from the process optimization studies. Waste cooking oil, non-edible (low-cost) oils (Jatropha curcas and Camelina sativa) and algae were used as feedstock for biodiesel process optimization. A comparison between conventional and non-conventional methods such as microwaves and ultrasound was reported. Finally, net energy scenarios for different biodiesel feedstock options and algae are presented.
NASA Astrophysics Data System (ADS)
Asmar, Joseph Al; Lahoud, Chawki; Brouche, Marwan
2018-05-01
Cogeneration and trigeneration systems can contribute to the reduction of primary energy consumption and greenhouse gas emissions in the residential and tertiary sectors, by reducing fossil fuel demand and grid losses with respect to conventional systems. Cogeneration systems are characterized by very high energy efficiency (80 to 90%) and by lower pollution compared to conventional energy production. The integration of these systems into the energy network must simultaneously take into account their economic and environmental challenges. In this paper, a decision-making strategy is introduced that is divided into two parts: the first is based on a multi-objective optimization tool with data analysis, and the second is based on an optimization algorithm. The power dispatch of the Lebanese electricity grid is then simulated and considered as a case study in order to prove the compatibility of the cogeneration power calculated by our decision-making technique. In addition, the thermal energy produced by the cogeneration systems whose capacity is selected by our technique shows compatibility with the thermal demand for district heating.
Intelligent control for PMSM based on online PSO considering parameters change
NASA Astrophysics Data System (ADS)
Song, Zhengqiang; Yang, Huiling
2018-03-01
A novel online particle swarm optimization method is proposed to design speed and current controllers of vector controlled interior permanent magnet synchronous motor drives considering stator resistance variation. In the proposed drive system, the space vector modulation technique is employed to generate the switching signals for a two-level voltage-source inverter. The nonlinearity of the inverter is also taken into account due to the dead-time, threshold and voltage drop of the switching devices in order to simulate the system in the practical condition. Speed and PI current controller gains are optimized with PSO online, and the fitness function is changed according to the system dynamic and steady states. The proposed optimization algorithm is compared with conventional PI control method in the condition of step speed change and stator resistance variation, showing that the proposed online optimization method has better robustness and dynamic characteristics compared with conventional PI controller design.
Additive manufacturing: Toward holistic design
Jared, Bradley H.; Aguilo, Miguel A.; Beghini, Lauren L.; ...
2017-03-18
Here, additive manufacturing offers unprecedented opportunities to design complex structures optimized for performance envelopes inaccessible under conventional manufacturing constraints. Additive processes also promote realization of engineered materials with microstructures and properties that are impossible via traditional synthesis techniques. Enthused by these capabilities, optimization design tools have experienced a recent revival. The current capabilities of additive processes and optimization tools are summarized briefly, while an emerging opportunity is discussed to achieve a holistic design paradigm whereby computational tools are integrated with stochastic process and material awareness to enable the concurrent optimization of design topologies, material constructs and fabrication processes.
Technique Developed for Optimizing Traveling-Wave Tubes
NASA Technical Reports Server (NTRS)
Wilson, Jeffrey D.
1999-01-01
A traveling-wave tube (TWT) is an electron beam device that is used to amplify electromagnetic communication waves at radio and microwave frequencies. TWTs are critical components in deep-space probes, geosynchronous communication satellites, and high-power radar systems. Power efficiency is of paramount importance for TWTs employed in deep-space probes and communications satellites. Consequently, increasing the power efficiency of TWTs has been the primary goal of the TWT group at the NASA Lewis Research Center over the last 25 years. An in-house effort produced a technique (ref. 1) to design TWTs for optimized power efficiency. This technique is based on simulated annealing, which has an advantage over conventional optimization techniques in that it enables the best possible solution to be obtained (ref. 2). A simulated annealing algorithm was created and integrated into the NASA TWT computer model (ref. 3). The new technique almost doubled the computed conversion power efficiency of a TWT from 7.1 to 13.5 percent (ref. 1).
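Below is a minimal sketch of the simulated-annealing loop on a stand-in objective, assuming the usual Metropolis acceptance rule and a geometric cooling schedule. The two-parameter "efficiency" function, the step size, and the schedule are illustrative assumptions; in the actual work the objective was the conversion efficiency computed by the NASA TWT model over its design variables.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def efficiency(x):
    # Stand-in objective for the TWT conversion efficiency predicted by the
    # NASA TWT model; a smooth multimodal function of two "taper" parameters.
    return 0.14 * math.exp(-((x[0] - 0.6) ** 2 + (x[1] + 0.3) ** 2)) \
         + 0.05 * math.cos(3 * x[0]) * math.cos(2 * x[1])

x = np.array([0.0, 0.0])          # initial design
f = efficiency(x)
best_x, best_f = x.copy(), f

for step in range(5000):
    T = 1.0 * 0.999 ** step       # geometric cooling schedule
    cand = x + rng.normal(scale=0.1, size=2)      # random perturbation of the design
    fc = efficiency(cand)
    # Metropolis rule: always accept improvements; accept worse designs with
    # probability exp(delta/T), which lets the search escape local optima.
    if fc > f or rng.random() < math.exp((fc - f) / T):
        x, f = cand, fc
        if f > best_f:
            best_x, best_f = x.copy(), f

print("best design parameters:", np.round(best_x, 3), "efficiency:", round(best_f, 4))
```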
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gang, G; Stayman, J; Ouadah, S
2015-06-15
Purpose: This work introduces a task-driven imaging framework that utilizes a patient-specific anatomical model, mathematical definition of the imaging task, and a model of the imaging system to prospectively design acquisition and reconstruction techniques that maximize task-based imaging performance. Utility of the framework is demonstrated in the joint optimization of tube current modulation and view-dependent reconstruction kernel in filtered-backprojection reconstruction and non-circular orbit design in model-based reconstruction. Methods: The system model is based on a cascaded systems analysis of cone-beam CT capable of predicting the spatially varying noise and resolution characteristics as a function of the anatomical model and a wide range of imaging parameters. Detectability index for a non-prewhitening observer model is used as the objective function in a task-driven optimization. The combination of tube current and reconstruction kernel modulation profiles was identified through an alternating optimization algorithm where tube current was updated analytically, followed by a gradient-based optimization of the reconstruction kernel. The non-circular orbit is first parameterized as a linear combination of basis functions, and the coefficients were then optimized using an evolutionary algorithm. The task-driven strategy was compared with conventional acquisitions without modulation, using automatic exposure control, and in a circular orbit. Results: The task-driven strategy outperformed conventional techniques in all tasks investigated, improving the detectability of a spherical lesion detection task by an average of 50% in the interior of a pelvis phantom. The non-circular orbit design successfully mitigated photon starvation effects arising from a dense embolization coil in a head phantom, improving the conspicuity of an intracranial hemorrhage proximal to the coil. Conclusion: The task-driven imaging framework leverages a knowledge of the imaging task within a patient-specific anatomical model to optimize image acquisition and reconstruction techniques, thereby improving imaging performance beyond that achievable with conventional approaches. 2R01-CA-112163; R01-EB-017226; U01-EB-018758; Siemens Healthcare (Forcheim, Germany)
Abdelkarim, Noha; Mohamed, Amr E; El-Garhy, Ahmed M; Dorrah, Hassen T
2016-01-01
The two-coupled distillation column process is a physically complicated system in many aspects. Specifically, the nested interrelationship between system inputs and outputs constitutes one of the significant challenges in system control design. Mostly, such a process is to be decoupled into several input/output pairings (loops), so that a single controller can be assigned for each loop. In the frame of this research, the Brain Emotional Learning Based Intelligent Controller (BELBIC) forms the control structure for each decoupled loop. The paper's main objective is to develop a parameterization technique for decoupling and control schemes, which ensures robust control behavior. In this regard, the novel optimization technique Bacterial Swarm Optimization (BSO) is utilized for the minimization of summation of the integral time-weighted squared errors (ITSEs) for all control loops. This optimization technique constitutes a hybrid between two techniques, which are the Particle Swarm and Bacterial Foraging algorithms. According to the simulation results, this hybridized technique ensures low mathematical burdens and high decoupling and control accuracy. Moreover, the behavior analysis of the proposed BELBIC shows a remarkable improvement in the time domain behavior and robustness over the conventional PID controller.
Application of optimization techniques to vehicle design: A review
NASA Technical Reports Server (NTRS)
Prasad, B.; Magee, C. L.
1984-01-01
The work that has been done in the last decade or so in the application of optimization techniques to vehicle design is discussed. Much of the work reviewed deals with the design of body or suspension (chassis) components for reduced weight. Also reviewed are studies dealing with system optimization problems for improved functional performance, such as ride or handling. In reviewing the work on the use of optimization techniques, one notes the transition from the rare mention of the methods in the 70's to an increased effort in the early 80's. Efficient and convenient optimization and analysis tools still need to be developed so that they can be regularly applied in the early design stage of the vehicle development cycle to be most effective. Based on the reported applications, an attempt is made to assess the potential for automotive application of optimization techniques. The major issue involved remains the creation of quantifiable means of analysis to be used in vehicle design. The conventional process of vehicle design still contains much experience-based input because it has not yet proven possible to quantify all important constraints. This restraint on the part of the analysis will continue to be a major limiting factor in application of optimization to vehicle design.
Quantum optimization for training support vector machines.
Anguita, Davide; Ridella, Sandro; Rivieccio, Fabio; Zunino, Rodolfo
2003-01-01
Refined concepts, such as Rademacher estimates of model complexity and nonlinear criteria for weighting empirical classification errors, represent recent and promising approaches to characterize the generalization ability of Support Vector Machines (SVMs). The advantages of those techniques lie in both improving the SVM representation ability and yielding tighter generalization bounds. On the other hand, they often make Quadratic-Programming algorithms no longer applicable, and SVM training cannot benefit from efficient, specialized optimization techniques. The paper considers the application of Quantum Computing to solve the problem of effective SVM training, especially in the case of digital implementations. The presented research compares the behavioral aspects of conventional and enhanced SVMs; experiments on both synthetic and real-world problems support the theoretical analysis. At the same time, the related differences between Quadratic-Programming and Quantum-based optimization techniques are considered.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morishita, Hiroyuki, E-mail: hmorif@koto.kpu-m.ac.jp, E-mail: mori-h33@xa2.so-net.ne.jp; Takeuchi, Yoshito, E-mail: yotake62@qg8.so-net.ne.jp; Ito, Takaaki, E-mail: takaaki@koto.kpu-m.ac.jp
2016-06-15
Purpose: The purpose of the study was to retrospectively evaluate the efficacy and safety of the balloon blocking technique (BBT). Materials and Methods: The BBT was performed in six patients (all males, mean age 73.5 years) in whom superselective catheterization for transcatheter arterial embolization by conventional microcatheter techniques had failed due to anatomical difficulty, including targeted arteries originating steeply or hooked from parent arteries. All BBT procedures were performed using Seldinger's transfemoral method. Occlusive balloons were deployed and inflated at the distal side of the target artery branching site in the parent artery via transfemoral access. A microcatheter was delivered from a 5-F catheter via another femoral access and was advanced over the microguidewire into the target artery, under balloon blockage of advancement of the microguidewire into non-target branches. After the balloon catheter was deflated and withdrawn, optimal interventions were performed through the microcatheter. Results: After successful access to the targeted artery with the BBT, optimal interventions were accomplished in all patients with no complications other than vasovagal hypotension, which responded to nominal therapy. Conclusion: The BBT may be useful in superselective catheterization of arteries that are inaccessible due to anatomical difficulties.
Secure positioning technique based on the encrypted visible light map
NASA Astrophysics Data System (ADS)
Lee, Y. U.; Jung, G.
2017-01-01
To overcome the performance degradation of conventional visible light (VL) positioning systems, caused by co-channel interference from adjacent lights and by the irregularity of the VL reception position in the three-dimensional (3-D) VL channel, a secure positioning technique based on a two-dimensional (2-D) encrypted VL map is proposed, implemented as a prototype on a specific embedded positioning system, and verified by performance tests in this paper. The test results show that the proposed technique achieves a performance enhancement of more than 21.7% over the conventional one in a real positioning environment, and that the well-known PN code is the optimal stream-encryption key for good VL positioning.
NASA Technical Reports Server (NTRS)
Boland, J. S., III
1973-01-01
The conventional six-engine reaction control jet relay attitude control law with deadband is shown to be a good linear approximation to a weighted time-fuel optimal control law. Techniques for evaluating the relative weighting between time and fuel for a particular relay control law are studied, along with techniques to interrelate other parameters of the two control laws. Vehicle attitude control laws employing control moment gyros are then investigated. Steering laws obtained from the expression for the reaction torque of the gyro configuration are compared to a total optimal attitude control law that is derived from optimal linear regulator theory. This total optimal attitude control law has computational disadvantages in the solving of the matrix Riccati equation. Several computational algorithms for solving the matrix Riccati equation are investigated with respect to accuracy, computational storage requirements, and computational speed.
Gradient stationary phase optimized selectivity liquid chromatography with conventional columns.
Chen, Kai; Lynen, Frédéric; Szucs, Roman; Hanna-Brown, Melissa; Sandra, Pat
2013-05-21
Stationary phase optimized selectivity liquid chromatography (SOSLC) is a promising technique for optimizing the selectivity of a given separation. By combining different stationary phases, SOSLC offers excellent possibilities for method development under both isocratic and gradient conditions. The commercial SOSLC protocol available so far utilizes dedicated column cartridges and corresponding cartridge holders to build up the combined column of different stationary phases. The present work aims at developing and extending the gradient SOSLC approach towards the coupling of conventional columns. Generic tubing was used to connect short, commercially available LC columns. Fast baseline separation of a mixture of 12 compounds comprising phenones, benzoic acids and hydroxybenzoates under both isocratic and linear gradient conditions was selected to demonstrate the potential of SOSLC. The influence of the connecting tubing on the deviation of the predictions is also discussed.
A variable-gain output feedback control design approach
NASA Technical Reports Server (NTRS)
Haylo, Nesim
1989-01-01
A multi-model design technique to find a variable-gain control law defined over the whole operating range is proposed. The design is formulated as an optimal control problem that minimizes a cost function weighing the performance at many operating points. The solution is obtained by embedding the problem into the Multi-Configuration Control (MCC) problem, a multi-model robust control design technique. In contrast to conventional gain scheduling, which uses a curve fit of single-model designs, the optimal variable-gain control law stabilizes the plant at every operating point included in the design. An iterative algorithm to compute the optimal control gains is presented. The methodology has been successfully applied to reconfigurable aircraft flight control and to nonlinear flight control systems.
Optimal Design of MPPT Controllers for Grid Connected Photovoltaic Array System
NASA Astrophysics Data System (ADS)
Ebrahim, M. A.; AbdelHadi, H. A.; Mahmoud, H. M.; Saied, E. M.; Salama, M. M.
2016-10-01
Integrating photovoltaic (PV) plants into the electric power system poses challenges to power system dynamic performance. These challenges stem primarily from the natural characteristics of PV plants, which differ in some respects from those of conventional plants. The most significant challenge is how to extract and regulate the maximum power from the sun. This paper presents the optimal design of the most commonly used Maximum Power Point Tracking (MPPT) techniques based on a Proportional-Integral controller tuned by Particle Swarm Optimization (PI-PSO). The techniques considered are (1) incremental conductance, (2) perturb and observe, (3) fractional short-circuit current and (4) fractional open-circuit voltage. This work provides a comprehensive comparative study of the energy availability ratio from the photovoltaic panels. The simulation results show that the proposed controllers have an impressive tracking response and that the system dynamic performance improves greatly with the proposed controllers.
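As an illustration of the second of the four MPPT strategies listed above, the sketch below implements a minimal perturb-and-observe hill climb against a toy PV power curve. The PV model, step size and starting voltage are illustrative assumptions, not the paper's PSO-tuned PI implementation.

```python
import numpy as np

def pv_power(v, v_oc=40.0, i_sc=8.0):
    """Toy PV curve: current falls off exponentially near the open-circuit voltage."""
    i = i_sc * (1.0 - np.exp((v - v_oc) / 3.0))
    return max(v * i, 0.0)

def perturb_and_observe(v0=20.0, dv=0.5, steps=200):
    """Classic P&O hill climbing: keep stepping in the direction that raised power."""
    v, p_prev, direction = v0, pv_power(v0), 1.0
    for _ in range(steps):
        v += direction * dv
        p = pv_power(v)
        if p < p_prev:          # power dropped -> reverse the perturbation
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"estimated MPP: V = {v_mpp:.1f} V, P = {p_mpp:.1f} W")
```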
NASA Astrophysics Data System (ADS)
Venkata, Santhosh Krishnan; Roy, Binoy Krishna
2016-03-01
The design of an intelligent flow measurement technique using a venturi flow meter is reported in this paper. The objectives of the present work are: (1) to extend the linearity range of measurement to 100% of the full-scale input range, (2) to make the measurement technique adaptive to variations in discharge coefficient, diameter ratio of venturi nozzle and pipe (β), liquid density, and liquid temperature, and (3) to achieve objectives (1) and (2) using an optimized neural network. The output of the venturi flow meter is a differential pressure, which is converted to a voltage by a suitable data conversion unit. A suitably optimized artificial neural network (ANN) is added in place of the conventional calibration circuit. The ANN is trained and tested with simulated data covering variations in discharge coefficient, diameter ratio between venturi nozzle and pipe, liquid density, and liquid temperature. The proposed technique is then validated against practical data. Results show that the proposed technique fulfils the objectives.
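A minimal sketch of the idea of replacing the conventional calibration circuit with a trained network: a small regressor maps differential pressure, density and temperature to flow rate using simulated data. The toy venturi relation, the use of scikit-learn's MLPRegressor, and all numbers are assumptions for illustration, not the authors' optimized ANN.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Simulated training data: a toy venturi relation dP = k * rho * Q**2, with the effective
# coefficient k drifting with temperature to mimic discharge-coefficient variation.
n = 2000
q = rng.uniform(0.0, 1.0, n)                 # true flow, normalised to full scale
rho = rng.uniform(950.0, 1050.0, n)          # liquid density, kg/m^3
temp = rng.uniform(10.0, 60.0, n)            # liquid temperature, deg C
k = 1.0e-3 * (1.0 + 0.002 * (temp - 25.0))   # temperature-dependent coefficient (toy)
dp = k * rho * q**2                          # differential pressure, arbitrary units

X = np.column_stack([dp, rho, temp])
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000, random_state=0))
ann.fit(X, q)                                # the trained network replaces the calibration circuit

test = np.array([[1.0e-3 * 1000.0 * 0.5**2, 1000.0, 25.0]])
print("predicted flow:", ann.predict(test)[0])   # expected to be near 0.5
```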
NASA Astrophysics Data System (ADS)
Asoodeh, Mojtaba; Bagheripour, Parisa; Gholami, Amin
2015-06-01
Free fluid porosity and rock permeability, undoubtedly the most critical parameters of a hydrocarbon reservoir, can be obtained by processing the nuclear magnetic resonance (NMR) log. Unlike conventional well logs (CWLs), NMR logging is very expensive and time-consuming. Therefore, the idea of synthesizing the NMR log from CWLs holds great appeal for reservoir engineers. For this purpose, three optimization strategies are followed. Firstly, an artificial neural network (ANN) is optimized by virtue of a hybrid genetic algorithm-pattern search (GA-PS) technique, then fuzzy logic (FL) is optimized by means of GA-PS, and eventually an alternating conditional expectation (ACE) model is constructed using the concept of a committee machine to combine the outputs of the optimized and non-optimized FL and ANN models. Results indicated that optimization of the traditional ANN and FL models using the GA-PS technique significantly enhances their performance. Furthermore, the ACE committee of the aforementioned models produces more accurate and reliable results than a single model performing alone.
Weak-value amplification and optimal parameter estimation in the presence of correlated noise
NASA Astrophysics Data System (ADS)
Sinclair, Josiah; Hallaji, Matin; Steinberg, Aephraim M.; Tollaksen, Jeff; Jordan, Andrew N.
2017-11-01
We analytically and numerically investigate the performance of weak-value amplification (WVA) and related parameter estimation methods in the presence of temporally correlated noise. WVA is a special instance of a general measurement strategy that involves sorting data into separate subsets based on the outcome of a second "partitioning" measurement. Using a simplified correlated noise model that can be analyzed exactly together with optimal statistical estimators, we compare WVA to a conventional measurement method. We find that WVA indeed yields a much lower variance of the parameter of interest than the conventional technique does, optimized in the absence of any partitioning measurements. In contrast, a statistically optimal analysis that employs partitioning measurements, incorporating all partitioned results and their known correlations, is found to yield an improvement—typically slight—over the noise reduction achieved by WVA. This result occurs because the simple WVA technique is not tailored to any specific noise environment and therefore does not make use of correlations between the different partitions. We also compare WVA to traditional background subtraction, a familiar technique where measurement outcomes are partitioned to eliminate unknown offsets or errors in calibration. Surprisingly, for the cases we consider, background subtraction turns out to be a special case of the optimal partitioning approach, possessing a similar typically slight advantage over WVA. These results give deeper insight into the role of partitioning measurements (with or without postselection) in enhancing measurement precision, which some have found puzzling. They also resolve previously made conflicting claims about the usefulness of weak-value amplification to precision measurement in the presence of correlated noise. We finish by presenting numerical results to model a more realistic laboratory situation of time-decaying correlations, showing that our conclusions hold for a wide range of statistical models.
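The benefit of partitioning under correlated noise can be seen in a toy simulation: interleaving signal-on and signal-off acquisitions and subtracting the two subset means (background subtraction) cancels a slowly drifting noise component that a conventional single-subset mean cannot. The AR(1) noise model and all numbers below are illustrative assumptions, not the paper's analytical model.

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1_noise(n, phi=0.99, sigma=1.0):
    """Strongly correlated (slowly drifting) AR(1) noise."""
    x = np.zeros(n)
    for k in range(1, n):
        x[k] = phi * x[k - 1] + sigma * rng.standard_normal()
    return x

def one_trial(d=0.1, n=2000):
    noise = ar1_noise(n)
    on = np.arange(n) % 2 == 0                        # interleaved "partitioning" measurement
    data = noise + d * on                             # small shift d present only in the ON subset
    conventional = data[on].mean()                    # ON data alone: drift is not cancelled
    subtracted = data[on].mean() - data[~on].mean()   # background subtraction
    return conventional, subtracted

trials = np.array([one_trial() for _ in range(300)])
print("conventional estimator std:", trials[:, 0].std())
print("background-subtracted std :", trials[:, 1].std())
```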
Raman Hyperspectral Imaging for Detection of Watermelon Seeds Infected with Acidovorax citrulli.
Lee, Hoonsoo; Kim, Moon S; Qin, Jianwei; Park, Eunsoo; Song, Yu-Rim; Oh, Chang-Sik; Cho, Byoung-Kwan
2017-09-23
The bacterial infection of seeds is one of the most important quality factors affecting yield. Conventional detection methods for bacteria-infected seeds, such as biological, serological, and molecular tests, are not feasible since they require expensive equipment and the testing processes are time-consuming. In this study, we use the Raman hyperspectral imaging technique to distinguish bacteria-infected seeds from healthy seeds as a rapid, accurate, and non-destructive detection tool. We utilize Raman hyperspectral imaging data in the spectral range of 400–1800 cm−1 to determine the optimal band ratio for the discrimination of watermelon seeds infected by the bacterium Acidovorax citrulli using ANOVA. Two bands at 1076.8 cm−1 and 437 cm−1 are selected as the optimal Raman peaks for the detection of bacteria-infected seeds. The results demonstrate that the Raman hyperspectral imaging technique has good potential for the detection of bacteria-infected watermelon seeds and that it could form a suitable alternative to conventional methods.
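A sketch of the band-ratio selection step on synthetic spectra: compute the intensity ratio of two candidate Raman bands per seed and score the separation between infected and healthy groups with a one-way ANOVA F statistic. The Gaussian peak shapes, noise levels and class difference below are synthetic, for illustration only; the band positions follow those reported above.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
wavenumbers = np.arange(400, 1801, 2.0)            # 400-1800 cm^-1 grid (illustrative)

def synth_spectrum(infected):
    """Toy spectra: infected seeds get a stronger band near 1077 cm^-1."""
    s = rng.normal(1.0, 0.05, wavenumbers.size)
    s += (1.5 if infected else 0.5) * np.exp(-0.5 * ((wavenumbers - 1077) / 8.0) ** 2)
    s += 0.8 * np.exp(-0.5 * ((wavenumbers - 437) / 8.0) ** 2)
    return s

healthy = np.array([synth_spectrum(False) for _ in range(30)])
infected = np.array([synth_spectrum(True) for _ in range(30)])

def band_ratio(spectra, wn_a, wn_b):
    ia = np.argmin(np.abs(wavenumbers - wn_a))
    ib = np.argmin(np.abs(wavenumbers - wn_b))
    return spectra[:, ia] / spectra[:, ib]

# One-way ANOVA on the band ratio between the two seed classes.
f_stat, p_val = f_oneway(band_ratio(healthy, 1077, 437), band_ratio(infected, 1077, 437))
print(f"F = {f_stat:.1f}, p = {p_val:.2e}")
```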
Mendenhall, Jeffrey; Meiler, Jens
2016-02-01
Dropout is an Artificial Neural Network (ANN) training technique that has been shown to improve ANN performance across canonical machine learning (ML) datasets. Quantitative Structure Activity Relationship (QSAR) datasets used to relate chemical structure to biological activity in Ligand-Based Computer-Aided Drug Discovery pose unique challenges for ML techniques, such as heavily biased dataset composition, and relatively large number of descriptors relative to the number of actives. To test the hypothesis that dropout also improves QSAR ANNs, we conduct a benchmark on nine large QSAR datasets. Use of dropout improved both enrichment false positive rate and log-scaled area under the receiver-operating characteristic curve (logAUC) by 22-46 % over conventional ANN implementations. Optimal dropout rates are found to be a function of the signal-to-noise ratio of the descriptor set, and relatively independent of the dataset. Dropout ANNs with 2D and 3D autocorrelation descriptors outperform conventional ANNs as well as optimized fingerprint similarity search methods.
Mendenhall, Jeffrey; Meiler, Jens
2016-01-01
Dropout is an Artificial Neural Network (ANN) training technique that has been shown to improve ANN performance across canonical machine learning (ML) datasets. Quantitative Structure Activity Relationship (QSAR) datasets used to relate chemical structure to biological activity in Ligand-Based Computer-Aided Drug Discovery (LB-CADD) pose unique challenges for ML techniques, such as heavily biased dataset composition, and relatively large number of descriptors relative to the number of actives. To test the hypothesis that dropout also improves QSAR ANNs, we conduct a benchmark on nine large QSAR datasets. Use of dropout improved both Enrichment false positive rate (FPR) and log-scaled area under the receiver-operating characteristic curve (logAUC) by 22–46% over conventional ANN implementations. Optimal dropout rates are found to be a function of the signal-to-noise ratio of the descriptor set, and relatively independent of the dataset. Dropout ANNs with 2D and 3D autocorrelation descriptors outperform conventional ANNs as well as optimized fingerprint similarity search methods. PMID:26830599
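A minimal numpy illustration of (inverted) dropout applied to one hidden layer during training, the mechanism benchmarked in the two records above; the layer sizes and dropout rate are illustrative, not the optimal rates reported.

```python
import numpy as np

rng = np.random.default_rng(3)

def hidden_layer(x, w, b, dropout_rate=0.25, training=True):
    """One ReLU hidden layer with inverted dropout on its activations."""
    h = np.maximum(0.0, x @ w + b)
    if training and dropout_rate > 0.0:
        keep = 1.0 - dropout_rate
        mask = rng.random(h.shape) < keep        # randomly silence a fraction of neurons
        h = h * mask / keep                      # rescale so the expected activation is unchanged
    return h

x = rng.normal(size=(4, 128))                    # 4 molecules, 128 descriptors (toy sizes)
w = rng.normal(scale=0.1, size=(128, 32))
b = np.zeros(32)
print(hidden_layer(x, w, b).shape)               # (4, 32)
```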
Raman Hyperspectral Imaging for Detection of Watermelon Seeds Infected with Acidovorax citrulli
Lee, Hoonsoo; Kim, Moon S.; Qin, Jianwei; Park, Eunsoo; Song, Yu-Rim; Oh, Chang-Sik
2017-01-01
The bacterial infection of seeds is one of the most important quality factors affecting yield. Conventional detection methods for bacteria-infected seeds, such as biological, serological, and molecular tests, are not feasible since they require expensive equipment, and furthermore, the testing processes are also time-consuming. In this study, we use the Raman hyperspectral imaging technique to distinguish bacteria-infected seeds from healthy seeds as a rapid, accurate, and non-destructive detection tool. We utilize Raman hyperspectral imaging data in the spectral range of 400–1800 cm−1 to determine the optimal band-ratio for the discrimination of watermelon seeds infected by the bacteria Acidovorax citrulli using ANOVA. Two bands at 1076.8 cm−1 and 437 cm−1 are selected as the optimal Raman peaks for the detection of bacteria-infected seeds. The results demonstrate that the Raman hyperspectral imaging technique has a good potential for the detection of bacteria-infected watermelon seeds and that it could form a suitable alternative to conventional methods. PMID:28946608
Taguchi optimization of bismuth-telluride based thermoelectric cooler
NASA Astrophysics Data System (ADS)
Anant Kishore, Ravi; Kumar, Prashant; Sanghadasa, Mohan; Priya, Shashank
2017-07-01
In the last few decades, considerable effort has been made to enhance the figure of merit (ZT) of thermoelectric (TE) materials. However, the performance of commercial TE devices remains low because the module figure of merit depends not only on the material ZT but also on the operating conditions and configuration of the TE modules. This study takes into account a comprehensive set of parameters to conduct a numerical performance analysis of a thermoelectric cooler (TEC) using the Taguchi optimization method. The Taguchi method is a statistical tool that predicts the optimal performance with far fewer experimental runs than conventional experimental techniques. The Taguchi results are also compared with the optimized parameters obtained by a full factorial optimization method, which reveals that the Taguchi method provides an optimum or near-optimum TEC configuration using only 25 experiments against the 3125 experiments needed by the conventional optimization method. This study also shows that environmental factors such as ambient temperature and cooling coefficient do not significantly affect the optimum geometry and optimum operating temperature of TECs. The optimum TEC configuration for simultaneous optimization of cooling capacity and coefficient of performance is also provided.
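A toy main-effects analysis in the Taguchi style: evaluate a synthetic response over a standard L9(3^4) orthogonal array (far fewer runs than the corresponding full factorial), convert to a larger-is-better signal-to-noise ratio, and pick the best level of each factor. The factor names and the response function are stand-ins, not the paper's TEC model or its 25-run design.

```python
import numpy as np

# Standard L9(3^4) orthogonal array (levels 0..2); only three columns are used here,
# for three illustrative TEC design factors.
L9 = np.array([
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1],
])
levels = {
    "leg_length_mm": [1.0, 1.5, 2.0],
    "leg_area_mm2":  [1.0, 2.0, 3.0],
    "current_A":     [1.0, 2.0, 3.0],
}

def cooling_capacity(length, area, current):
    """Toy response standing in for a TEC simulation (not a physical model); stays positive."""
    return current * area / length - 0.05 * current**2 * length / area

names = list(levels)
responses = np.array([
    cooling_capacity(*(levels[fac][lvl] for fac, lvl in zip(names, run))) for run in L9
])
sn = 10.0 * np.log10(responses**2)     # larger-is-better S/N ratio (single run per setting)

for j, name in enumerate(names):
    effect = [sn[L9[:, j] == lvl].mean() for lvl in range(3)]
    best = int(np.argmax(effect))
    print(f"{name}: best level = {best} (value {levels[name][best]})")
```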
Computer-oriented synthesis of wide-band non-uniform negative resistance amplifiers
NASA Technical Reports Server (NTRS)
Branner, G. R.; Chan, S.-P.
1975-01-01
This paper presents a synthesis procedure which provides design values for broad-band amplifiers using non-uniform negative resistance devices. Employing a weighted least squares optimization scheme, the technique, based on an extension of procedures for uniform negative resistance devices, is capable of providing designs for a variety of matching network topologies. It also provides, for the first time, quantitative results for predicting the effects of parameter element variations on overall amplifier performance. The technique is also unique in that it employs exact partial derivatives for optimization and sensitivity computation. In comparison with conventional procedures, significantly improved broad-band designs are shown to result.
A prospective randomised trial of PIN versus conventional stripping in varicose vein surgery.
Durkin, M. T.; Turton, E. P.; Scott, D. J.; Berridge, D. C.
1999-01-01
A prospective, randomised trial was carried out to examine the efficacy of perforate invagination (PIN, Credenhill Ltd, Derbyshire, UK) stripping of the long saphenous vein (LSV) in comparison to conventional stripping (Astratech AB, Sweden) in the surgical management of primary varicose veins. Eighty patients with primary varicosities secondary to sapheno-femoral junction (SFJ) incompetence and LSV reflux were recruited. Patients were randomised to PIN or conventional stripping with all other operative techniques remaining constant. Follow-up was performed at 1 and 6 weeks postoperatively. There were no statistically significant differences between the two techniques in terms of time taken to strip the vein, percentage of vein stripped or the area of bruising at 1 week. The size of the exit site was significantly smaller with the PIN device (P < or = 0.01). Optimal use of the conventional stripper provides results comparable to the PIN device. Choice of stripping device remains the surgeon's, bearing in mind that the PIN stripper achieves slightly better cosmesis. PMID:10364948
On advanced configuration enhance adaptive system optimization
NASA Astrophysics Data System (ADS)
Liu, Hua; Ding, Quanxin; Wang, Helong; Guo, Chunjie; Chen, Hongliang; Zhou, Liwei
2017-10-01
The aim of this work is to find an effective method for structuring and enhancing adaptive systems with complex functions, and to establish a broadly applicable solution for prototyping and optimization. The wavefront corrector, the most critical component of an adaptive system, is constrained by conventional techniques and components, suffering from polarization dependence and a narrow working waveband. An advanced configuration based on a polarizing beam splitter and an optimized energy-splitting method is used to overcome these problems effectively. With a global optimization algorithm, the working bandwidth is broadened by more than five times compared with that of traditional designs. Simulation results show that the system meets the application requirements for MTF and other related criteria, and that its volume and weight are significantly reduced compared with the conventional design. The determining factors are therefore the prototype selection and the system configuration; the results demonstrate their effectiveness.
Teng, Chaoyi; Demers, Hendrix; Brodusch, Nicolas; Waters, Kristian; Gauvin, Raynald
2018-06-04
A number of techniques for the characterization of rare earth minerals (REM) have been developed and are widely applied in the mining industry. However, most of them are limited to a global analysis because of their low spatial resolution. In this work, phase map analyses were performed on REM with an annular silicon drift detector (aSDD) attached to a field emission scanning electron microscope. The optimal conditions for the aSDD were explored, and the high-resolution phase maps generated at a low accelerating voltage identify phases at the micron scale. Comparisons between the annular and a conventional SDD show that the aSDD, operated under optimized conditions, makes phase mapping a practical solution for choosing an appropriate grinding size, judging the efficiency of different separation processes, and optimizing a REM beneficiation flowsheet.
Accelerated wavefront determination technique for optical imaging through scattering medium
NASA Astrophysics Data System (ADS)
He, Hexiang; Wong, Kam Sing
2016-03-01
Wavefront shaping applied on scattering light is a promising optical imaging method in biological systems. Normally, optimized modulation can be obtained by a Liquid-Crystal Spatial Light Modulator (LC-SLM) and CCD hardware iteration. Here we introduce an improved method for this optimization process. The core of the proposed method is to firstly detect the disturbed wavefront, and then to calculate the modulation phase pattern by computer simulation. In particular, phase retrieval method together with phase conjugation is most effective. In this way, the LC-SLM based system can complete the wavefront optimization and imaging restoration within several seconds which is two orders of magnitude faster than the conventional technique. The experimental results show good imaging quality and may contribute to real time imaging recovery in scattering medium.
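The record above mentions phase retrieval with phase conjugation but does not spell out the algorithm; the sketch below uses a Gerchberg-Saxton-style alternating projection, one common choice, to recover a phase pattern from a known source amplitude and a measured far-field intensity. The grid size, iteration count and use of FFT propagation are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Known source amplitude (SLM plane) and measured far-field intensity (camera plane).
n = 64
source_amp = np.ones((n, n))
true_phase = rng.uniform(-np.pi, np.pi, (n, n))          # unknown scattering phase
target_amp = np.abs(np.fft.fft2(source_amp * np.exp(1j * true_phase)))

# Gerchberg-Saxton: alternate between planes, enforcing the known amplitude in each.
field = source_amp * np.exp(1j * rng.uniform(-np.pi, np.pi, (n, n)))
for _ in range(200):
    far = np.fft.fft2(field)
    far = target_amp * np.exp(1j * np.angle(far))         # impose the measured amplitude
    field = np.fft.ifft2(far)
    field = source_amp * np.exp(1j * np.angle(field))     # impose the known source amplitude

err = np.mean((np.abs(np.fft.fft2(field)) - target_amp) ** 2) / np.mean(target_amp**2)
print(f"relative amplitude error after retrieval: {err:.2e}")
```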
Suzuki, Kimichi; Morokuma, Keiji; Maeda, Satoshi
2017-10-05
We propose a multistructural microiteration (MSM) method for geometry optimization and reaction path calculation in large systems. MSM is a simple extension of the geometrical microiteration technique. In conventional microiteration, the structure of the non-reaction-center (surrounding) part is optimized by fixing atoms in the reaction-center part before displacements of the reaction-center atoms. In this method, the surrounding part is described as the weighted sum of multiple surrounding structures that are independently optimized. Then, geometric displacements of the reaction-center atoms are performed in the mean field generated by the weighted sum of the surrounding parts. MSM was combined with the QM/MM-ONIOM method and applied to chemical reactions in aqueous solution or enzyme. In all three cases, MSM gave lower reaction energy profiles than the QM/MM-ONIOM-microiteration method over the entire reaction paths with comparable computational costs. © 2017 Wiley Periodicals, Inc.
Task-driven imaging in cone-beam computed tomography.
Gang, G J; Stayman, J W; Ouadah, S; Ehtiati, T; Siewerdsen, J H
Conventional workflow in interventional imaging often ignores a wealth of prior information about the patient anatomy and the imaging task. This work introduces a task-driven imaging framework that utilizes such information to prospectively design acquisition and reconstruction techniques for cone-beam CT (CBCT) in a manner that maximizes task-based performance in subsequent imaging procedures. The framework is employed in jointly optimizing tube current modulation, orbital tilt, and reconstruction parameters in filtered backprojection reconstruction for interventional imaging. Theoretical predictors of noise and resolution relate acquisition and reconstruction parameters to task-based detectability. Given a patient-specific prior image and a specification of the imaging task, an optimization algorithm prospectively identifies the combination of imaging parameters that maximizes task-based detectability. Initial investigations were performed for a variety of imaging tasks in an elliptical phantom and an anthropomorphic head phantom. Optimization of tube current modulation and view-dependent reconstruction kernel was shown to have the greatest benefit for a directional task (e.g., identification of device or tissue orientation). The task-driven approach yielded techniques in which the dose and sharp kernels were concentrated in the views contributing the most to the signal power associated with the imaging task. For example, detectability in a line-pair detection task was improved by at least threefold compared to conventional approaches. For radially symmetric tasks, the task-driven strategy yielded results similar to a minimum-variance strategy in the absence of kernel modulation. Optimization of the orbital tilt successfully avoided highly attenuating structures that can confound the imaging task by introducing noise correlations masquerading at spatial frequencies of interest. This work demonstrated the potential of a task-driven imaging framework to improve image quality and reduce dose beyond that achievable with conventional imaging approaches.
Cao, F; Ramaseshan, R; Corns, R; Harrop, S; Nuraney, N; Steiner, P; Aldridge, S; Liu, M; Carolan, H; Agranovich, A; Karva, A
2012-07-01
Craniospinal irradiation has traditionally treated the central nervous system using two or three adjacent field sets. An intensity-modulated radiotherapy (IMRT) plan (Jagged-Junction IMRT), which overcomes problems associated with field junctions and beam-edge matching and improves planning and treatment setup efficiency while giving a homogeneous target dose distribution, was developed. Jagged-Junction IMRT was retrospectively planned on three patients with a prescription of 36 Gy in 20 fractions and compared to conventional treatment plans. The planning target volume (PTV) included the whole brain and the spinal canal to the S3 vertebral level. The plan employed three field sets, each with a unique isocentre. One field set with seven fields treated the cranium. Two field sets treated the spine, each set using three fields. Fields from adjacent sets were overlapped, and the optimization process smoothly integrated the dose inside the overlapped junction. For the Jagged-Junction IMRT plans vs the conventional technique, the average homogeneity index equaled 0.08±0.01 vs 0.12±0.02, and the conformity number equaled 0.79±0.01 vs 0.47±0.12. The 95% isodose surface covered (99.5±0.3)% of the PTV vs (98.1±2.0)%. Both the Jagged-Junction IMRT plans and the conventional plans had good sparing of the organs at risk. Jagged-Junction IMRT planning provided good dose homogeneity and conformity to the target while maintaining a low dose to the organs at risk. Jagged-Junction IMRT optimization smoothly distributed dose in the junction between field sets. Since there is no beam matching, this treatment technique is less likely to produce hot or cold spots at the junction, in contrast to conventional techniques. © 2012 American Association of Physicists in Medicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zarepisheh, M; Li, R; Xing, L
Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet there does not exist any optimization algorithm to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even non-isocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques named column generation, gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. Then we apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles toward the gradient. The algorithm continues with a pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean dose, brainstem max dose, spinal cord max dose, and mandible mean dose are reduced by 10%, 7%, 24% and 12%, respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: The combined use of column generation, gradient search and pattern search algorithms provides an effective way to simultaneously optimize the large collection of station parameters and significantly improves the quality of the resultant treatment plans compared with conventional VMAT or IMRT treatments.
Li, Bai; Lin, Mu; Liu, Qiao; Li, Ya; Zhou, Changjun
2015-10-01
Protein folding is a fundamental topic in molecular biology. Conventional experimental techniques for protein structure identification or protein folding recognition require strict laboratory requirements and heavy operating burdens, which have largely limited their applications. Alternatively, computer-aided techniques have been developed to optimize protein structures or to predict the protein folding process. In this paper, we utilize a 3D off-lattice model to describe the original protein folding scheme as a simplified energy-optimal numerical problem, where all types of amino acid residues are binarized into hydrophobic and hydrophilic ones. We apply a balance-evolution artificial bee colony (BE-ABC) algorithm as the minimization solver, which is featured by the adaptive adjustment of search intensity to cater for the varying needs during the entire optimization process. In this work, we establish a benchmark case set with 13 real protein sequences from the Protein Data Bank database and evaluate the convergence performance of BE-ABC algorithm through strict comparisons with several state-of-the-art ABC variants in short-term numerical experiments. Besides that, our obtained best-so-far protein structures are compared to the ones in comprehensive previous literature. This study also provides preliminary insights into how artificial intelligence techniques can be applied to reveal the dynamics of protein folding. Graphical Abstract Protein folding optimization using 3D off-lattice model and advanced optimization techniques.
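A sketch of the kind of energy function minimised in such studies: one common variant of the AB off-lattice model in 3D, with a backbone bending term and a species-dependent Lennard-Jones-like term (attraction coefficient 1 for A-A, 0.5 for B-B, -0.5 for mixed pairs). A general-purpose scipy minimiser stands in for the BE-ABC algorithm here, and the toy sequence and soft bond-length penalty are assumptions, not the paper's benchmark set.

```python
import numpy as np
from scipy.optimize import minimize

SEQ = "ABBABBABABBAB"               # toy binarised sequence: A = hydrophobic, B = hydrophilic
N = len(SEQ)

def ab_energy(flat_xyz):
    """One common variant of the AB off-lattice energy for 3D coordinates."""
    xyz = flat_xyz.reshape(N, 3)
    bonds = xyz[1:] - xyz[:-1]
    blen = np.linalg.norm(bonds, axis=1)
    e = 100.0 * np.sum((blen - 1.0) ** 2)                    # keep bond lengths near 1
    cosang = np.sum(bonds[:-1] * bonds[1:], axis=1) / (blen[:-1] * blen[1:])
    e += 0.25 * np.sum(1.0 - cosang)                         # backbone bending term
    for i in range(N - 2):                                   # species-dependent LJ-like term
        for j in range(i + 2, N):
            r = np.linalg.norm(xyz[i] - xyz[j])
            if SEQ[i] == "A" and SEQ[j] == "A":
                c = 1.0
            elif SEQ[i] == "B" and SEQ[j] == "B":
                c = 0.5
            else:
                c = -0.5
            e += 4.0 * (r ** -12 - c * r ** -6)
    return e

rng = np.random.default_rng(5)
x0 = np.cumsum(rng.normal(scale=0.3, size=(N, 3)) + [1.0, 0.0, 0.0], axis=0)
res = minimize(ab_energy, x0.ravel(), method="L-BFGS-B")     # stand-in for BE-ABC
print("minimised AB energy:", res.fun)
```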
NASA Astrophysics Data System (ADS)
Yang, Weizhu; Yue, Zhufeng; Li, Lei; Wang, Peiyan
2016-01-01
An optimization procedure combining an automated finite element modelling (AFEM) technique with a ground structure approach (GSA) is proposed for structural layout and sizing design of aircraft wings. The AFEM technique, based on CATIA VBA scripting and PCL programming, is used to generate models automatically considering the arrangement of inner systems. GSA is used for local structural topology optimization. The design procedure is applied to a high-aspect-ratio wing. The arrangement of the integral fuel tank, landing gear and control surfaces is considered. For the landing gear region, a non-conventional initial structural layout is adopted. The positions of components, the number of ribs and local topology in the wing box and landing gear region are optimized to obtain a minimum structural weight. Constraints include tank volume, strength, buckling and aeroelastic parameters. The results show that the combined approach leads to a greater weight saving, i.e. 26.5%, compared with three additional optimizations based on individual design approaches.
Daud, Muhamad Zalani; Mohamed, Azah; Hannan, M. A.
2014-01-01
This paper presents an evaluation of an optimal DC bus voltage regulation strategy for grid-connected photovoltaic (PV) system with battery energy storage (BES). The BES is connected to the PV system DC bus using a DC/DC buck-boost converter. The converter facilitates the BES power charge/discharge to compensate for the DC bus voltage deviation during severe disturbance conditions. In this way, the regulation of DC bus voltage of the PV/BES system can be enhanced as compared to the conventional regulation that is solely based on the voltage-sourced converter (VSC). For the grid side VSC (G-VSC), two control methods, namely, the voltage-mode and current-mode controls, are applied. For control parameter optimization, the simplex optimization technique is applied for the G-VSC voltage- and current-mode controls, including the BES DC/DC buck-boost converter controllers. A new set of optimized parameters are obtained for each of the power converters for comparison purposes. The PSCAD/EMTDC-based simulation case studies are presented to evaluate the performance of the proposed optimized control scheme in comparison to the conventional methods. PMID:24883374
Daud, Muhamad Zalani; Mohamed, Azah; Hannan, M A
2014-01-01
This paper presents an evaluation of an optimal DC bus voltage regulation strategy for grid-connected photovoltaic (PV) system with battery energy storage (BES). The BES is connected to the PV system DC bus using a DC/DC buck-boost converter. The converter facilitates the BES power charge/discharge to compensate for the DC bus voltage deviation during severe disturbance conditions. In this way, the regulation of DC bus voltage of the PV/BES system can be enhanced as compared to the conventional regulation that is solely based on the voltage-sourced converter (VSC). For the grid side VSC (G-VSC), two control methods, namely, the voltage-mode and current-mode controls, are applied. For control parameter optimization, the simplex optimization technique is applied for the G-VSC voltage- and current-mode controls, including the BES DC/DC buck-boost converter controllers. A new set of optimized parameters are obtained for each of the power converters for comparison purposes. The PSCAD/EMTDC-based simulation case studies are presented to evaluate the performance of the proposed optimized control scheme in comparison to the conventional methods.
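A minimal sketch of the simplex optimization step described above: Nelder-Mead tuning of PI controller gains on a toy first-order voltage-regulation loop with a time-weighted absolute error (ITAE) cost. The plant, time constant and cost function are illustrative assumptions, not the PSCAD/EMTDC model of the PV/BES system.

```python
import numpy as np
from scipy.optimize import minimize

def dc_bus_itae(gains, v_ref=1.0, dt=1e-3, t_end=2.0):
    """Toy first-order DC-bus model with a PI regulator; returns the ITAE cost."""
    kp, ki = gains
    v, integ, cost = 0.0, 0.0, 0.0
    for k in range(int(t_end / dt)):
        err = v_ref - v
        integ += err * dt
        u = kp * err + ki * integ                 # PI control effort
        v += dt * (-v + u) / 0.05                 # plant: tau * dv/dt = -v + u, tau = 50 ms
        cost += (k * dt) * abs(err) * dt          # time-weighted absolute error (ITAE)
    return cost

res = minimize(dc_bus_itae, x0=[1.0, 1.0], method="Nelder-Mead")
print("optimised (Kp, Ki):", res.x, "ITAE:", res.fun)
```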
Lu, Chunxia; Wang, Hongxin; Lv, Wenping; Ma, Chaoyang; Lou, Zaixiang; Xie, Jun; Liu, Bo
2012-01-01
An ionic liquid was used as the extraction solvent for tannins from Galla chinensis in a simultaneous ultrasonic- and microwave-assisted extraction (UMAE) technique. Several parameters of UMAE were optimised, and the results were compared with those of conventional extraction techniques. Under optimal conditions, the content of tannins was 630.2 ± 12.1 mg g⁻¹. Compared with conventional heat-reflux extraction, maceration extraction, and regular ultrasound- and microwave-assisted extraction, the proposed approach exhibited higher efficiency (enhanced by 11.7-22.0%) and a shorter extraction time (from 6 h down to 1 min). The tannins were then identified by ultraperformance liquid chromatography tandem mass spectrometry. This study suggests that ionic liquid-based UMAE is an efficient, rapid, simple and green sample preparation technique.
NASA Astrophysics Data System (ADS)
Lynch, John A.; Zaim, Souhil; Zhao, Jenny; Stork, Alexander; Peterfy, Charles G.; Genant, Harry K.
2000-06-01
A technique for segmentation of articular cartilage from 3D MRI scans of the knee has been developed. It overcomes the limitations of the conventionally used region-growing techniques, which are prone to inter- and intra-observer variability and can require much manual intervention. We describe a hybrid segmentation method combining expert knowledge with directionally oriented Canny filters, cost functions and cubic splines. After manual initialization, the technique utilizes three cost functions that aid automated detection of cartilage and its boundaries. Using the sign of the edge strength and the local direction of the boundary, this technique is more reliable than conventional 'snakes', and the user needs little control over the smoothness of boundaries. This means that the automatically detected boundary can conform to the true shape of the real boundary, also allowing reliable detection of subtle local lesions on the normally smooth cartilage surface. Manual corrections, with possible re-optimization, were sometimes needed. Compared to the conventionally used region-growing techniques, this new technique measured local cartilage volume with three times better reproducibility and involved two-thirds less human interaction. Combined with the use of 3D image registration, the new technique should also permit unbiased segmentation of follow-up scans by automated initialization from a baseline segmentation of an earlier scan of the same patient.
Darzi, Soodabeh; Kiong, Tiong Sieh; Islam, Mohammad Tariqul; Ismail, Mahamod; Kibria, Salehin; Salem, Balasem
2014-01-01
Linear constraint minimum variance (LCMV) is one of the adaptive beamforming techniques that is commonly applied to cancel interfering signals and steer or produce a strong beam to the desired signal through its computed weight vectors. However, weights computed by LCMV usually are not able to form the radiation beam towards the target user precisely and not good enough to reduce the interference by placing null at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through conventional empirical approach. To provide a solution to this problem, artificial intelligence (AI) technique is explored in order to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve the weights of LCMV. The simulation result demonstrates that received signal to interference and noise ratio (SINR) of target user can be significantly improved by the integration of PSO, DM-AIS, and GSA in LCMV through the suppression of interference in undesired direction. Furthermore, the proposed GSA can be applied as a more effective technique in LCMV beamforming optimization as compared to the PSO technique. The algorithms were implemented using Matlab program.
Sieh Kiong, Tiong; Tariqul Islam, Mohammad; Ismail, Mahamod; Salem, Balasem
2014-01-01
Linear constraint minimum variance (LCMV) is one of the adaptive beamforming techniques that is commonly applied to cancel interfering signals and steer or produce a strong beam to the desired signal through its computed weight vectors. However, weights computed by LCMV usually are not able to form the radiation beam towards the target user precisely and not good enough to reduce the interference by placing null at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through conventional empirical approach. To provide a solution to this problem, artificial intelligence (AI) technique is explored in order to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve the weights of LCMV. The simulation result demonstrates that received signal to interference and noise ratio (SINR) of target user can be significantly improved by the integration of PSO, DM-AIS, and GSA in LCMV through the suppression of interference in undesired direction. Furthermore, the proposed GSA can be applied as a more effective technique in LCMV beamforming optimization as compared to the PSO technique. The algorithms were implemented using Matlab program. PMID:25147859
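For reference, the closed-form LCMV weights that the PSO, DM-AIS and GSA stages then refine can be written as w = R⁻¹C(CᴴR⁻¹C)⁻¹f. The sketch below evaluates this for an 8-element uniform linear array with a single distortionless constraint (the MVDR special case, f = 1); the array geometry, arrival angles and noise levels are illustrative assumptions.

```python
import numpy as np

def steering(theta_deg, n_elems=8, d=0.5):
    """ULA steering vector; element spacing d is in wavelengths."""
    k = np.arange(n_elems)
    return np.exp(-2j * np.pi * d * k * np.sin(np.radians(theta_deg)))

n = 8
desired, interferers = 10.0, [-30.0, 45.0]

# Sample covariance from interference-plus-noise snapshots.
rng = np.random.default_rng(6)
snaps = 500
x = sum(steering(t, n)[:, None] * (rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps))
        for t in interferers)
x += 0.1 * (rng.standard_normal((n, snaps)) + 1j * rng.standard_normal((n, snaps)))
R = x @ x.conj().T / snaps

# LCMV with one constraint: w = R^-1 C (C^H R^-1 C)^-1 f, with C = a(desired) and f = 1.
C = steering(desired, n)[:, None]
Rinv_C = np.linalg.solve(R, C)
w = Rinv_C @ np.linalg.inv(C.conj().T @ Rinv_C)

for t in [desired] + interferers:
    gain = abs(w.conj().T @ steering(t, n)[:, None])[0, 0]
    print(f"theta = {t:6.1f} deg  |w^H a| = {gain:.3f}")   # ~1 at the target, small at interferers
```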
Cooperative combinatorial optimization: evolutionary computation case study.
Burgin, Mark; Eberbach, Eugene
2008-01-01
This paper presents a formalization of the notion of cooperation and competition of multiple systems that work toward a common optimization goal of the population using evolutionary computation techniques. It is proved that evolutionary algorithms are more expressive than conventional recursive algorithms, such as Turing machines. Three classes of evolutionary computations are introduced and studied: bounded finite, unbounded finite, and infinite computations. Universal evolutionary algorithms are constructed. Such properties of evolutionary algorithms as completeness, optimality, and search decidability are examined. A natural extension of evolutionary Turing machine (ETM) model is proposed to properly reflect phenomena of cooperation and competition in the whole population.
Algorithmic Perspectives on Problem Formulations in MDO
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.; Lewis, Robert Michael
2000-01-01
This work is concerned with an approach to formulating the multidisciplinary optimization (MDO) problem that reflects an algorithmic perspective on MDO problem solution. The algorithmic perspective focuses on formulating the problem in light of the abilities and inabilities of optimization algorithms, so that the resulting nonlinear programming problem can be solved reliably and efficiently by conventional optimization techniques. We propose a modular approach to formulating MDO problems that takes advantage of the problem structure, maximizes the autonomy of implementation, and allows for multiple easily interchangeable problem statements to be used depending on the available resources and the characteristics of the application problem.
Ali, S. J.; Kraus, R. G.; Fratanduono, D. E.; ...
2017-05-18
Here, we developed an iterative forward analysis (IFA) technique with the ability to use hydrocode simulations as a fitting function for analysis of dynamic compression experiments. The IFA method optimizes over parameterized quantities in the hydrocode simulations, breaking the degeneracy of contributions to the measured material response. Velocity profiles from synthetic data generated using a hydrocode simulation are analyzed as a first-order validation of the technique. We also analyze multiple magnetically driven ramp compression experiments on copper and compare with more conventional techniques. Excellent agreement is obtained in both cases.
Simultaneous beam sampling and aperture shape optimization for SPORT.
Zarepisheh, Masoud; Li, Ruijiang; Ye, Yinyu; Xing, Lei
2015-02-01
Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and neck and a prostate case. It significantly improved the target conformality and at the same time critical structure sparing compared with conventional intensity modulated radiation therapy (IMRT). In the head and neck case, for example, the average PTV coverage D99% for two PTVs, cord and brainstem max doses, and right parotid gland mean dose were improved, respectively, by about 7%, 37%, 12%, and 16%. The proposed method automatically determines the number of stations required to generate a satisfactory plan and simultaneously optimizes the involved station parameters, leading to improved quality of the resultant treatment plans as compared with the conventional IMRT plans.
Simultaneous beam sampling and aperture shape optimization for SPORT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei, E-mail: Lei@stanford.edu
Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and neck and a prostate case. It significantly improved the target conformality and at the same time critical structure sparing compared with conventional intensity modulated radiation therapy (IMRT). In the head and neck case, for example, the average PTV coverage D99% for two PTVs, cord and brainstem max doses, and right parotid gland mean dose were improved, respectively, by about 7%, 37%, 12%, and 16%. Conclusions: The proposed method automatically determines the number of stations required to generate a satisfactory plan and simultaneously optimizes the involved station parameters, leading to improved quality of the resultant treatment plans as compared with the conventional IMRT plans.
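A deliberately small caricature of the station-by-station strategy described in the two records above: a pricing step adds the candidate aperture whose dose column best matches the current residual (the column-generation idea), and a non-negative least-squares refit of the selected weights stands in for the subgradient/pattern-search polishing. The dose matrix and prescription below are random toy data, not output from a treatment-planning system.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)

n_vox, n_candidates = 200, 60
D = rng.random((n_vox, n_candidates))          # dose per unit weight for each candidate aperture
d_rx = rng.uniform(0.8, 1.2, n_vox)            # prescription dose per voxel

selected = []
residual = d_rx.copy()
for _ in range(10):                            # add stations until the improvement saturates
    scores = D.T @ residual                    # pricing step: most beneficial new column
    scores[selected] = -np.inf                 # do not re-select an existing station
    j = int(np.argmax(scores))
    selected.append(j)
    # Local refinement of the selected stations' weights; non-negative least squares
    # stands in here for the subgradient / pattern-search polishing of the full method.
    weights, _ = nnls(D[:, selected], d_rx)
    residual = d_rx - D[:, selected] @ weights
    print(f"{len(selected):2d} stations, residual norm = {np.linalg.norm(residual):.3f}")
```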
Wang, Ya-Qi; Wu, Zhen-Feng; Ke, Gang; Yang, Ming
2014-12-31
An effective vacuum assisted extraction (VAE) technique was proposed for the first time and applied to extract bioactive components from Andrographis paniculata. The process was carefully optimized by response surface methodology (RSM). Under the optimized experimental conditions, the best results were obtained using a boiling temperature of 65 °C, 50% ethanol concentration, 16 min of extraction time, one extraction cycle and a 12:1 liquid-solid ratio. Compared with conventional ultrasonic assisted extraction and heat reflux extraction, the VAE technique gave shorter extraction times and remarkably higher extraction efficiency, which indicated that a certain degree of vacuum gave the solvent better penetration into the pores and between the matrix particles, and enhanced the process of mass transfer. The present results demonstrated that VAE is an efficient, simple and fast method for extracting bioactive components from A. paniculata, which shows great potential for becoming an alternative technique for industrial scale-up applications.
Khogli, Ahmed Eltigani; Cauwels, Rita; Vercruysse, Chris; Verbeeck, Ronald; Martens, Luc
2013-01-01
Optimal pit and fissure sealing is determined by surface preparation techniques and choice of materials. This study aimed (i) to compare the microleakage and penetration depth of a hydrophilic sealant and a conventional resin-based sealant using one of the following preparation techniques: acid etching (AE) only, a diamond bur + AE, or an Er:YAG laser combined with AE, and (ii) to evaluate the microleakage and penetration depth of the hydrophilic pit and fissure sealant on different surface conditions. Eighty recently extracted third molars were randomly assigned to eight groups of ten teeth according to the material, preparation technique, and surface condition. For saliva contamination, 0.1 mL of fresh whole human saliva was used. All samples were submitted to 1000 thermal cycles and immersed in 2% methylene blue dye for 4 h. Sections were examined with a light microscope and analysed using image analysis software (Sigmascan(®)). The combination of Er:YAG + AE + conventional sealant showed the least microleakage. The sealing ability of the hydrophilic sealant was influenced by the surface condition. Er:YAG ablation significantly decreased the microleakage at the tooth-sealant interface compared to the non-invasive technique. The hydrophilic sealant applied on different surface conditions showed results comparable to the conventional resin-based sealant. © 2012 The Authors. International Journal of Paediatric Dentistry © 2012 BSPD, IAPD and Blackwell Publishing Ltd.
Optimization of white matter tractography for pre-surgical planning and image-guided surgery.
Arfanakis, Konstantinos; Gui, Minzhi; Lazar, Mariana
2006-01-01
Accurate localization of white matter fiber tracts in relation to brain tumors is a goal of critical importance to the neurosurgical community. White matter fiber tractography by means of diffusion tensor magnetic resonance imaging (DTI) is the only non-invasive method that can provide estimates of brain connectivity. However, conventional tractography methods are based on data acquisition techniques that suffer from image distortions and artifacts. Thus, a large percentage of white matter fiber bundles are distorted, and/or terminated early, while others are completely undetected. This severely limits the potential of fiber tractography in pre-surgical planning and image-guided surgery. In contrast, Turboprop-DTI is a technique that provides images with significantly fewer distortions and artifacts than conventional DTI data acquisition methods. The purpose of this study was to evaluate fiber tracking results obtained from Turboprop-DTI data. It was demonstrated that Turboprop may be a more appropriate DTI data acquisition technique for tracing white matter fibers than conventional DTI methods, especially in applications such as pre-surgical planning and image-guided surgery.
Proposal of Evolutionary Simplex Method for Global Optimization Problem
NASA Astrophysics Data System (ADS)
Shimizu, Yoshiaki
To support agile and rational decision making under diversified customer demand, the role of optimization engineering has become increasingly important. With this point of view, in this paper we propose a new evolutionary method serving as an optimization technique in the paradigm of optimization engineering. The developed method is promising for globally solving the various complicated problems appearing in real-world applications. It evolves from the conventional method known as Nelder and Mead's Simplex method by virtue of ideas borrowed from recent meta-heuristic methods such as PSO. Presenting an algorithm to handle linear inequality constraints effectively, we validate the effectiveness of the proposed method through comparison with other methods using several benchmark problems.
NASA Astrophysics Data System (ADS)
Chung, Brandon W.; Erler, Robert G.; Teslich, Nick E.
2016-05-01
Nuclear forensics requires accurate quantification of discriminating microstructural characteristics of the bulk nuclear material to identify its process history and provenance. Conventional metallographic preparation techniques for bulk plutonium (Pu) and uranium (U) metals are limited to providing information in two dimensions (2D) and do not allow a depth profile of the material to be obtained. In this contribution, the use of dual-beam focused ion-beam/scanning electron microscopy (FIB-SEM) to investigate the internal microstructure of bulk Pu and U metals is demonstrated. Our results demonstrate that the dual-beam methodology optimally elucidates microstructural features without preparation artifacts, and that the three-dimensional (3D) characterization of inner microstructures can reveal salient microstructural features that cannot be observed with conventional metallographic techniques. Examples are shown to demonstrate the benefit of FIB-SEM in improving microstructural characterization of microscopic inclusions, particularly with respect to nuclear forensics.
Chung, Brandon W.; Erler, Robert G.; Teslich, Nick E.
2016-03-03
Nuclear forensics requires accurate quantification of discriminating microstructural characteristics of the bulk nuclear material to identify its process history and provenance. Conventional metallographic preparation techniques for bulk plutonium (Pu) and uranium (U) metals are limited to providing information in two dimensions (2D) and do not allow a depth profile of the material to be obtained. In this contribution, the use of dual-beam focused ion-beam/scanning electron microscopy (FIB-SEM) to investigate the internal microstructure of bulk Pu and U metals is demonstrated. Our results demonstrate that the dual-beam methodology optimally elucidates microstructural features without preparation artifacts, and that the three-dimensional (3D) characterization of inner microstructures can reveal salient microstructural features that cannot be observed with conventional metallographic techniques. As a result, examples are shown to demonstrate the benefit of FIB-SEM in improving microstructural characterization of microscopic inclusions, particularly with respect to nuclear forensics.
Optimization of combined electron and photon beams for breast cancer
NASA Astrophysics Data System (ADS)
Xiong, W.; Li, J.; Chen, L.; Price, R. A.; Freedman, G.; Ding, M.; Qin, L.; Yang, J.; Ma, C.-M.
2004-05-01
Recently, intensity-modulated radiation therapy and modulated electron radiotherapy have gathered a growing interest for the treatment of breast and head and neck tumours. In this work, we carried out a study to combine electron and photon beams to achieve differential dose distributions for multiple target volumes simultaneously. A Monte Carlo based treatment planning system was investigated, which consists of a set of software tools to perform accurate dose calculation, treatment optimization, leaf sequencing and plan analysis. We compared breast treatment plans generated using this home-grown optimization and dose calculation software for different treatment techniques. Five different planning techniques have been developed for this study based on a standard photon beam whole breast treatment and an electron beam tumour bed cone down. Technique 1 includes two 6 MV tangential wedged photon beams followed by an anterior boost electron field. Technique 2 includes two 6 MV tangential intensity-modulated photon beams and the same boost electron field. Technique 3 optimizes two intensity-modulated photon beams based on a boost electron field. Technique 4 optimizes two intensity-modulated photon beams and the weight of the boost electron field. Technique 5 combines two intensity-modulated photon beams with an intensity-modulated electron field. Our results show that technique 2 can reduce hot spots both in the breast and the tumour bed compared to technique 1 (dose inhomogeneity is reduced from 34% to 28% for the target). Techniques 3, 4 and 5 can deliver a more homogeneous dose distribution to the target (with dose inhomogeneities for the target of 22%, 20% and 9%, respectively). In many cases techniques 3, 4 and 5 can reduce the dose to the lung and heart. It is concluded that combined photon and electron beam therapy may be advantageous for treating breast cancer compared to conventional treatment techniques using tangential wedged photon beams followed by a boost electron field.
Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay
2012-01-01
An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.
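A minimal sketch of the steady-state covariance computation that underlies such a tuner-selection criterion: solve the discrete algebraic Riccati equation for the a priori covariance, form the Kalman gain, and report the trace of the a posteriori error covariance as the mean squared estimation error. The two-state system below is purely illustrative, not an engine model or the paper's formulation.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative 2-state, 1-sensor system (not an engine model).
A = np.array([[0.95, 0.10],
              [0.00, 0.90]])
C = np.array([[1.0, 0.0]])
Q = np.diag([1e-3, 1e-3])          # process noise covariance
R = np.array([[1e-2]])             # measurement noise covariance

P_prior = solve_discrete_are(A.T, C.T, Q, R)          # steady-state a priori covariance
K = P_prior @ C.T @ np.linalg.inv(C @ P_prior @ C.T + R)
P_post = (np.eye(2) - K @ C) @ P_prior                # a posteriori (estimation) covariance

print("steady-state Kalman gain:\n", K)
print("mean squared estimation error (trace of P):", np.trace(P_post))
```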
[Optimization of radiological scoliosis assessment].
Enríquez, Goya; Piqueras, Joaquim; Catalá, Ana; Oliva, Glòria; Ruiz, Agustí; Ribas, Montserrat; Duran, Carmina; Rodrigo, Carlos; Rodríguez, Eugenia; Garriga, Victoria; Maristany, Teresa; García-Fontecha, César; Baños, Joan; Muchart, Jordi; Alava, Fernando
2014-07-01
Most cases of scoliosis are idiopathic (80%) and occur more frequently in adolescent girls. Plain radiography is the imaging method of choice, both for the initial study and for follow-up, but has the disadvantage of using ionizing radiation, and the breasts are exposed to x-rays during these repeated examinations. The authors present a range of recommendations to optimize the radiographic technique, for both conventional and digital x-ray settings, in order to prevent unnecessary patient radiation exposure and to reduce the risk of breast cancer in patients with scoliosis. With analogue systems, leaded breast protectors should always be used, and with any radiographic equipment, analog or digital, the examination should be performed in postero-anterior projection using optimized low-dose techniques. The ALARA (as low as reasonably achievable) rule should always be followed to achieve diagnostic-quality images with the lowest feasible dose. Copyright © 2014. Published by Elsevier Espana.
Boukroufa, Meryem; Boutekedjiret, Chahrazed; Petigny, Loïc; Rakotomanomana, Njara; Chemat, Farid
2015-05-01
In this study, extraction of essential oil, polyphenols and pectin from orange peel has been optimized using microwave and ultrasound technology without adding any solvent but only "in situ" water, which was recycled and used as solvent. The essential oil extraction performed by Microwave Hydrodiffusion and Gravity (MHG) was optimized and compared to steam distillation extraction (SD). No significant changes in yield were noticed: 4.22 ± 0.03% and 4.16 ± 0.05% for MHG and SD, respectively. After extraction of essential oil, the residual plant water obtained after MHG extraction was used as solvent for polyphenol and pectin extraction from the MHG residues. Polyphenol extraction was performed by ultrasound-assisted extraction (UAE) and conventional extraction (CE). A response surface methodology (RSM) approach using a central composite design (CCD) was applied to investigate the influence of process variables on the ultrasound-assisted extraction (UAE). The statistical analysis revealed that the optimized conditions of ultrasound power and temperature were 0.956 W/cm² and 59.83 °C, giving a polyphenol yield of 50.02 mgGA/100 g dm. Compared with the conventional extraction (CE), the UAE gave an increase of 30% in TPC yield. Pectin was extracted by conventional and microwave-assisted extraction; the latter gives a maximal yield of 24.2% for a microwave power of 500 W in only 3 min, whereas conventional extraction gives 18.32% in 120 min. Combination of microwave, ultrasound and the recycled "in situ" water of citrus peels allows us to obtain high added-value compounds in a shorter time and to close the loop using only natural resources provided by the plant, which makes the whole process intensified in terms of time and energy saving, cleanliness and reduced waste water. Copyright © 2014 Elsevier B.V. All rights reserved.
A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)]
NASA Technical Reports Server (NTRS)
Straeter, T. A.; Markos, A. T.
1975-01-01
A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions ensuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.
Optimal Draft requirement for vibratory tillage equipment using Genetic Algorithm Technique
NASA Astrophysics Data System (ADS)
Rao, Gowripathi; Chaudhary, Himanshu; Singh, Prem
2018-03-01
Agriculture is an important sector of the Indian economy. Primary and secondary tillage operations are required for any land preparation process. Conventionally, different tractor-drawn implements such as the mouldboard plough, disc plough, subsoiler, cultivator and disc harrow are used for primary and secondary manipulation of soils. Among them, oscillatory tillage equipment is one type which uses vibratory motion for tillage. Several investigators have reported that the draft requirement of conventional primary tillage implements is higher than that of oscillating ones because the former are always in contact with the soil. Therefore, in this paper an attempt is made to find the optimal parameters, from the experimental data available in the literature, that give minimum draft consumption through a genetic algorithm technique.
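As a sketch of the approach described above (not the authors' implementation), a simple real-coded genetic algorithm can search for the oscillation frequency, amplitude and forward speed that minimize a draft model; the draft() function and its coefficients below are illustrative placeholders, not values fitted to the cited experimental data.

    import numpy as np
    rng = np.random.default_rng(0)

    # Hypothetical draft model: draft (kN) as a function of oscillation frequency f (Hz),
    # amplitude a (mm) and forward speed v (km/h). Coefficients are placeholders.
    def draft(x):
        f, a, v = x
        return 2.0 + 0.8 * v - 0.15 * f * a / (1.0 + 0.02 * (f * a) ** 2) + 0.01 * f ** 2

    bounds = np.array([[2.0, 12.0], [2.0, 10.0], [1.0, 5.0]])   # [f, a, v] ranges

    def genetic_minimize(func, bounds, pop=40, gens=100, cx=0.8, mut=0.1):
        dim = len(bounds)
        P = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop, dim))
        for _ in range(gens):
            fit = np.apply_along_axis(func, 1, P)
            # tournament selection of parents
            idx = rng.integers(0, pop, size=(pop, 2))
            parents = P[np.where(fit[idx[:, 0]] < fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
            # arithmetic crossover between pairs of selected parents
            w = rng.random((pop, 1))
            children = np.where(rng.random((pop, 1)) < cx,
                                w * parents + (1 - w) * parents[::-1], parents)
            # Gaussian mutation, clipped to the search bounds
            children += rng.normal(0, mut, children.shape) * (bounds[:, 1] - bounds[:, 0])
            P = np.clip(children, bounds[:, 0], bounds[:, 1])
        fit = np.apply_along_axis(func, 1, P)
        return P[fit.argmin()], fit.min()

    best_x, best_draft = genetic_minimize(draft, bounds)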
Super-polishing of Zerodur aspheres by means of conventional polishing technology
NASA Astrophysics Data System (ADS)
Polak, Jaroslav; Klepetková, Eva; Pošmourný, Josef; Šulc, Miroslav; Procháska, František; Tomka, David; Matoušek, Ondřej; Poláková, Ivana; Šubert, Eduard
2015-01-01
This paper describes a quest to find a simple technique to super-polish a Zerodur asphere (55 μm departure from best-fit sphere) that could be employed on an old-fashioned single-eccentric optical polishing machine. The work focuses on the selection of polishing technology, the study of different polishing slurries and the optimization of the polishing setup. It is demonstrated that either by use of a fine colloidal CeO2 slurry or by use of a bowl-feed polishing setup with CeO2-charged pitch we could reach 0.4 nm RMS roughness while removing <30 nm of surface layer. This technique, although not optimized, was successfully used to improve surface roughness on already pre-polished Zerodur aspheres without the need to involve sophisticated super-polishing technology and highly trained manpower.
NASA Astrophysics Data System (ADS)
Shojaeefard, Mohammad Hassan; Khalkhali, Abolfazl; Faghihian, Hamed; Dahmardeh, Masoud
2018-03-01
Unlike conventional approaches where optimization is performed on a unique component of a specific product, optimum design of a set of components for use in a product family can yield significant cost reductions. Increasing commonality and performance of the product platform simultaneously is a multi-objective optimization problem (MOP). Several optimization methods have been reported to solve such MOPs. However, what is less discussed is how to find the trade-off points among the obtained non-dominated optimum points. This article investigates the optimal design of a product family using the non-dominated sorting genetic algorithm II (NSGA-II) and proposes the employment of the technique for order of preference by similarity to ideal solution (TOPSIS) method to find the trade-off points among the obtained non-dominated results while balancing all objective functions together. A case study for a family of suspension systems is presented, considering performance and commonality. The results indicate the effectiveness of the proposed method to obtain the trade-off points with the best possible performance while maximizing the common parts.
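For readers unfamiliar with the ranking step, a generic TOPSIS implementation over an NSGA-II Pareto set could look like the sketch below; the equal weights and the benefit/cost labelling of the objectives are assumptions for illustration, not the weighting used in the article.

    import numpy as np

    def topsis(F, weights, benefit):
        """Rank alternatives (rows of F) by closeness to the ideal solution.

        F       : (n_alternatives, n_objectives) decision matrix
        weights : objective weights, summing to 1
        benefit : boolean array, True where larger values are better
        """
        N = F / np.linalg.norm(F, axis=0)          # vector normalization
        V = N * weights                            # weighted normalized matrix
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_plus  = np.linalg.norm(V - ideal, axis=1)
        d_minus = np.linalg.norm(V - anti, axis=1)
        return d_minus / (d_plus + d_minus)        # largest value = trade-off choice

    # Example usage (hypothetical Pareto front with performance to maximize and
    # commonality deviation to minimize):
    # scores = topsis(pareto, weights=np.array([0.5, 0.5]), benefit=np.array([True, False]))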
Improving Simulated Annealing by Replacing Its Variables with Game-Theoretic Utility Maximizers
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Bandari, Esfandiar; Tumer, Kagan
2001-01-01
The game-theory field of Collective INtelligence (COIN) concerns the design of computer-based players engaged in a non-cooperative game so that as those players pursue their self-interests, a pre-specified global goal for the collective computational system is achieved as a side-effect. Previous implementations of COIN algorithms have outperformed conventional techniques by up to several orders of magnitude, on domains ranging from telecommunications control to optimization in congestion problems. Recent mathematical developments have revealed that these previously developed algorithms were based on only two of the three factors determining performance. Consideration of only the third factor would instead lead to conventional optimization techniques like simulated annealing that have little to do with non-cooperative games. In this paper we present an algorithm based on all three terms at once. This algorithm can be viewed as a way to modify simulated annealing by recasting it as a non-cooperative game, with each variable replaced by a player. This recasting allows us to leverage the intelligent behavior of the individual players to substantially improve the exploration step of the simulated annealing. Experiments are presented demonstrating that this recasting significantly improves simulated annealing for a model of an economic process run over an underlying small-worlds topology. Furthermore, these experiments reveal novel small-worlds phenomena, and highlight the shortcomings of conventional mechanism design in bounded rationality domains.
DAWN (Design Assistant Workstation) for advanced physical-chemical life support systems
NASA Technical Reports Server (NTRS)
Rudokas, Mary R.; Cantwell, Elizabeth R.; Robinson, Peter I.; Shenk, Timothy W.
1989-01-01
This paper reports the results of a project supported by the National Aeronautics and Space Administration, Office of Aeronautics and Space Technology (NASA-OAST) under the Advanced Life Support Development Program. It is an initial attempt to integrate artificial intelligence techniques (via expert systems) with conventional quantitative modeling tools for advanced physical-chemical life support systems. The addition of artificial intelligence techniques will assist the designer in the definition and simulation of loosely/well-defined life support processes/problems as well as assist in the capture of design knowledge, both quantitative and qualitative. Expert system and conventional modeling tools are integrated to provide a design workstation that assists the engineer/scientist in creating, evaluating, documenting and optimizing physical-chemical life support systems for short-term and extended duration missions.
NASA Astrophysics Data System (ADS)
Shirata, Kento; Inden, Yuki; Kasai, Seiya; Oya, Takahide; Hagiwara, Yosuke; Kaeriyama, Shunichi; Nakamura, Hideyuki
2016-04-01
We investigated the robust detection of surface electromyogram (EMG) signals based on the stochastic resonance (SR) phenomenon, in which the response to weak signals is optimized by adding noise, combined with multiple surface electrodes. Flexible carbon nanotube composite paper (CNT-cp) was applied to the surface electrode and showed performance comparable to that of conventional Ag/AgCl electrodes. The SR-based EMG signal system integrating an 8-Schmitt-trigger network and the multiple-CNT-cp-electrode array successfully detected weak EMG signals even when the subject’s body was in motion, which was difficult to achieve using the conventional technique. The feasibility of the SR-based EMG detection technique was confirmed by demonstrating its applicability to robot hand control.
Design Tool Using a New Optimization Method Based on a Stochastic Process
NASA Astrophysics Data System (ADS)
Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio
Conventional optimization methods are based on a deterministic approach since their purpose is to find an exact solution. However, such methods suffer from initial-condition dependence and the risk of falling into local solutions. In this paper, we propose a new optimization method based on the concept of path integrals used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require experience-based tuning. We applied the new optimization method to a hang glider design. In this problem, both the hang glider design and its flight trajectory were optimized. The numerical calculation results prove that the performance of the method is sufficient for practical use.
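To make the "solution as an expected value" idea concrete, the toy sketch below forms a Boltzmann-weighted average of random samples of the objective; this is only a schematic stand-in for the authors' path-integral formulation, and the test function, bounds and beta value are arbitrary.

    import numpy as np
    rng = np.random.default_rng(1)

    def expected_value_minimizer(func, bounds, n_samples=20000, beta=20.0):
        """Estimate the minimizer of func as a weighted expectation of random samples.

        Weights w_i are proportional to exp(-beta * f(x_i)); a larger beta concentrates
        the average near the global minimum. Purely illustrative of the stochastic-average idea.
        """
        X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_samples, len(bounds)))
        f = np.apply_along_axis(func, 1, X)
        w = np.exp(-beta * (f - f.min()))          # subtract the minimum for numerical stability
        w /= w.sum()
        return (w[:, None] * X).sum(axis=0)

    # e.g. a two-dimensional test function with several local minima
    test = lambda x: np.sin(3 * x[0]) * np.cos(3 * x[1]) + 0.1 * (x ** 2).sum()
    x_est = expected_value_minimizer(test, bounds=np.array([[-2.0, 2.0], [-2.0, 2.0]]))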
MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, G; Pan, X; Stayman, J
2014-06-15
Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical applications. Learning Objectives: Learn the general methodologies associated with model-based 3D image reconstruction. Learn the potential advantages in image quality and dose associated with model-based image reconstruction. Learn the challenges associated with computational load and image quality assessment for such reconstruction methods. Learn how imaging task can be incorporated as a means to drive optimal image acquisition and reconstruction techniques. Learn how model-based reconstruction methods can incorporate prior information to improve image quality, ease sampling requirements, and reduce dose.
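Most of the reconstruction methods discussed in this session can be cast as a regularized statistical estimate; a common penalized weighted least-squares form (a generic textbook formulation, not any single speaker's algorithm) is

    \hat{\mu} = \arg\min_{\mu \ge 0} \; (y - A\mu)^{\mathsf T} W (y - A\mu) + \beta R(\mu),

where y denotes the measured projection data, A the forward-projection model, W a diagonal weighting reflecting the photon statistics, R(\mu) a roughness or prior-image penalty, and \beta the regularization strength.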
Training Scalable Restricted Boltzmann Machines Using a Quantum Annealer
NASA Astrophysics Data System (ADS)
Kumar, V.; Bass, G.; Dulny, J., III
2016-12-01
Machine learning and the optimization involved therein is of critical importance for commercial and military applications. Due to the computational complexity of many-variable optimization, the conventional approach is to employ meta-heuristic techniques to find suboptimal solutions. Quantum Annealing (QA) hardware offers a completely novel approach with the potential to obtain significantly better solutions with large speed-ups compared to traditional computing. In this presentation, we describe our development of new machine learning algorithms tailored for QA hardware. We are training restricted Boltzmann machines (RBMs) using QA hardware on large, high-dimensional commercial datasets. Traditional optimization heuristics such as contrastive divergence and other closely related techniques are slow to converge, especially on large datasets. Recent studies have indicated that QA hardware when used as a sampler provides better training performance compared to conventional approaches. Most of these studies have been limited to moderately-sized datasets due to the hardware restrictions imposed by existing QA devices, which make it difficult to solve real-world problems at scale. In this work we develop novel strategies to circumvent this issue. We discuss scale-up techniques such as enhanced embedding and partitioned RBMs which allow large commercial datasets to be learned using QA hardware. We present our initial results obtained by training an RBM as an autoencoder on an image dataset. The results obtained so far indicate that the convergence rates can be improved significantly by increasing RBM network connectivity. These ideas can be readily applied to generalized Boltzmann machines and we are currently investigating this in an ongoing project.
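As background for the comparison above, a single contrastive-divergence (CD-1) update for a binary RBM is sketched below; a quantum annealer used as a sampler would replace the one-step Gibbs reconstruction with hardware samples from the model distribution. Array shapes and the learning rate are illustrative assumptions.

    import numpy as np
    rng = np.random.default_rng(2)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    def cd1_update(W, b_v, b_h, v0, lr=0.01):
        """One contrastive-divergence (CD-1) update for a binary RBM.

        v0 : (batch, n_visible) training batch; W : (n_visible, n_hidden) weights.
        A QA sampler would replace the one-step Gibbs reconstruction (vk, phk).
        """
        ph0 = sigmoid(v0 @ W + b_h)                       # P(h = 1 | v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        pvk = sigmoid(h0 @ W.T + b_v)                     # reconstruction P(v = 1 | h0)
        vk = (rng.random(pvk.shape) < pvk).astype(float)
        phk = sigmoid(vk @ W + b_h)
        W   += lr * (v0.T @ ph0 - vk.T @ phk) / len(v0)   # positive minus negative phase
        b_v += lr * (v0 - vk).mean(axis=0)
        b_h += lr * (ph0 - phk).mean(axis=0)
        return W, b_v, b_h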
Joint optimization of source, mask, and pupil in optical lithography
NASA Astrophysics Data System (ADS)
Li, Jia; Lam, Edmund Y.
2014-03-01
Mask topography effects need to be taken into consideration for more advanced resolution enhancement techniques in optical lithography. However, a rigorous 3D mask model achieves high accuracy only at a large computational cost. This work develops a combined source, mask and pupil optimization (SMPO) approach by taking advantage of the fact that pupil phase manipulation is capable of partially compensating for mask topography effects. We first design the pupil wavefront function by incorporating primary and secondary spherical aberration through the coefficients of the Zernike polynomials, and achieve an optimal source-mask pair under the condition of an aberrated pupil. Evaluations against conventional source mask optimization (SMO) without incorporating pupil aberrations show that SMPO provides improved performance in terms of pattern fidelity and process window sizes.
Benwadih, M; Coppard, R; Bonrad, K; Klyszcz, A; Vuillaume, D
2016-12-21
Amorphous, sol-gel processed, indium gallium zinc oxide (IGZO) transistors on plastic substrate with a printable gate dielectric and an electron mobility of 4.5 cm²/(V s), as well as a mobility of 7 cm²/(V s) on solid substrate (Si/SiO2), are reported. These performances are obtained using a low-temperature pulsed light annealing technique. The ultraviolet (UV) pulsed light system is an innovative alternative to conventional (furnace or hot-plate) annealing processes, which we successfully implemented on sol-gel IGZO thin-film transistors (TFTs) made on plastic substrate. The photonic annealing treatment has been optimized to obtain IGZO TFTs with significant electrical properties. The organic gate dielectric layers deposited on these pulsed-UV-light-annealed films have also been optimized. This technique is very promising for the development of amorphous IGZO TFTs on plastic substrates.
Recent developments in fast kurtosis imaging
NASA Astrophysics Data System (ADS)
Hansen, Brian; Jespersen, Sune N.
2017-09-01
Diffusion kurtosis imaging (DKI) is an extension of the popular diffusion tensor imaging (DTI) technique. DKI takes into account leading deviations from Gaussian diffusion stemming from a number of effects related to the microarchitecture and compartmentalization in biological tissues. DKI therefore offers increased sensitivity to subtle microstructural alterations over conventional diffusion imaging such as DTI, as has been demonstrated in numerous reports. For this reason, interest in routine clinical application of DKI is growing rapidly. In an effort to facilitate more widespread use of DKI, recent work by our group has focused on developing experimentally fast and robust estimates of DKI metrics. A significant increase in speed is made possible by a reduction in data demand achieved through rigorous analysis of the relation between the DKI signal and the kurtosis tensor based metrics. The fast DKI methods therefore need only 13 or 19 images for DKI parameter estimation compared to more than 60 for the most modest DKI protocols applied today. Closed form solutions also ensure rapid calculation of most DKI metrics. Some parameters can even be reconstructed in real time, which may be valuable in the clinic. The fast techniques are based on conventional diffusion sequences and are therefore easily implemented on almost any clinical system, in contrast to a range of other recently proposed advanced diffusion techniques. In addition to its general applicability, this also ensures that any acceleration achieved in conventional DKI through sequence or hardware optimization will also translate directly to fast DKI acquisitions. In this review, we recapitulate the theoretical basis for the fast kurtosis techniques and their relation to conventional DKI. We then discuss the currently available variants of the fast DKI methods, their strengths and weaknesses, as well as their respective realms of application. These range from whole body applications to methods mostly suited for spinal cord or peripheral nerve, and analysis specific to brain white matter. Having covered these technical aspects, we proceed to review the fast kurtosis literature including validation studies, organ specific optimization studies and results from clinical applications.
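For reference, the quantity these protocols estimate enters through the standard single-direction DKI signal model (the usual cumulant expansion, quoted here for context rather than taken from this review):

    \ln S(b) = \ln S_0 - b\, D_{\mathrm{app}} + \tfrac{1}{6}\, b^{2} D_{\mathrm{app}}^{2} K_{\mathrm{app}},

where D_app and K_app are the apparent diffusivity and kurtosis along the encoding direction; the fast 13- and 19-image schemes reduce the number of directions and b-values needed to recover rotationally invariant summaries of K_app.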
Optimization of auxiliary basis sets for the LEDO expansion and a projection technique for LEDO-DFT.
Götz, Andreas W; Kollmar, Christian; Hess, Bernd A
2005-09-01
We present a systematic procedure for the optimization of the expansion basis for the limited expansion of diatomic overlap density functional theory (LEDO-DFT) and report on optimized auxiliary orbitals for the Ahlrichs split valence plus polarization basis set (SVP) for the elements H, Li–F, and Na–Cl. A new method to deal with near-linear dependences in the LEDO expansion basis is introduced, which greatly reduces the computational effort of LEDO-DFT calculations. Numerical results for a test set of small molecules demonstrate the accuracy of electronic energies, structural parameters, dipole moments, and harmonic frequencies. For larger molecular systems the numerical errors introduced by the LEDO approximation can lead to an uncontrollable behavior of the self-consistent field (SCF) process. A projection technique suggested by Löwdin is presented in the framework of LEDO-DFT, which guarantees SCF convergence. Numerical results on some critical test molecules suggest the general applicability of the auxiliary orbitals presented in combination with this projection technique. Timing results indicate that LEDO-DFT is competitive with conventional density fitting methods. (c) 2005 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heyman, Heino M.; Zhang, Xing; Tang, Keqi
2016-02-16
Metabolomics is the quantitative analysis of all metabolites in a given sample. Due to the chemical complexity of the metabolome, optimal separations are required for comprehensive identification and quantification of sample constituents. This chapter provides an overview of both conventional and advanced separation methods in practice for reducing the complexity of metabolite extracts delivered to the mass spectrometer detector, and covers gas chromatography (GC), liquid chromatography (LC), capillary electrophoresis (CE), supercritical fluid chromatography (SFC) and ion mobility spectrometry (IMS) separation techniques coupled with mass spectrometry (MS), in both uni-dimensional and multi-dimensional approaches.
NASA Astrophysics Data System (ADS)
Doi, Masafumi; Tokutomi, Tsukasa; Hachiya, Shogo; Kobayashi, Atsuro; Tanakamaru, Shuhei; Ning, Sheyang; Ogura Iwasaki, Tomoko; Takeuchi, Ken
2016-08-01
NAND flash memory’s reliability degrades with increasing endurance, retention-time and/or temperature. After a comprehensive evaluation of 1X nm triple-level cell (TLC) NAND flash, two highly reliable techniques are proposed. The first proposal, quick low-density parity check (Quick-LDPC), requires only one cell read in order to accurately estimate a bit-error rate (BER) that includes the effects of temperature, write and erase (W/E) cycles and retention-time. As a result, 83% read latency reduction is achieved compared to conventional AEP-LDPC. Also, W/E cycling is extended by 100% compared with conventional Bose-Chaudhuri-Hocquenghem (BCH) error-correcting code (ECC). The second proposal, dynamic threshold voltage optimization (DVO), has two parts, adaptive VRef shift (AVS) and VTH space control (VSC). AVS reduces read error and latency by adaptively optimizing the reference voltage (VRef) based on temperature, W/E cycles and retention-time. AVS stores the optimal VRef’s in a table in order to enable one cell read. VSC further improves AVS by optimizing the voltage margins between VTH states. DVO reduces BER by 80%.
Throughput of Coded Optical CDMA Systems with AND Detectors
NASA Astrophysics Data System (ADS)
Memon, Kehkashan A.; Umrani, Fahim A.; Umrani, A. W.; Umrani, Naveed A.
2012-09-01
Conventional detection techniques used in optical code-division multiple access (OCDMA) systems are not optimal and result in poor bit error rate performance. This paper analyzes the coded performance of optical CDMA systems with AND detectors for enhanced throughput efficiency and improved error rate performance. The results show that the use of AND detectors significantly improves the performance of an optical channel.
NASA Astrophysics Data System (ADS)
Christen, Hans M.; Ohkubo, Isao; Rouleau, Christopher M.; Jellison, Gerald E., Jr.; Puretzky, Alex A.; Geohegan, David B.; Lowndes, Douglas H.
2005-01-01
Parallel (multi-sample) approaches, such as discrete combinatorial synthesis or continuous compositional-spread (CCS), can significantly increase the rate of materials discovery and process optimization. Here we review our generalized CCS method, based on pulsed-laser deposition, in which the synchronization between laser firing and substrate translation (behind a fixed slit aperture) yields the desired variations of composition and thickness. In situ alloying makes this approach applicable to the non-equilibrium synthesis of metastable phases. Deposition on a heater plate with a controlled spatial temperature variation can additionally be used for growth-temperature-dependence studies. Composition and temperature variations are controlled on length scales large enough to yield sample sizes sufficient for conventional characterization techniques (such as temperature-dependent measurements of resistivity or magnetic properties). This technique has been applied to various experimental studies, and we present here the results for the growth of electro-optic materials (SrxBa1-xNb2O6) and magnetic perovskites (Sr1-xCaxRuO3), and discuss the application to the understanding and optimization of catalysts used in the synthesis of dense forests of carbon nanotubes.
Landcover Based Optimal Deconvolution of PALS L-band Microwave Brightness Temperature
NASA Technical Reports Server (NTRS)
Limaye, Ashutosh S.; Crosson, William L.; Laymon, Charles A.; Njoku, Eni G.
2004-01-01
An optimal de-convolution (ODC) technique has been developed to estimate microwave brightness temperatures of agricultural fields using microwave radiometer observations. The technique is applied to airborne measurements taken by the Passive and Active L and S band (PALS) sensor in Iowa during Soil Moisture Experiments in 2002 (SMEX02). Agricultural fields in the study area were predominantly soybeans and corn. The brightness temperatures of corn and soybeans were observed to be significantly different because of large differences in vegetation biomass. PALS observations have significant over-sampling; observations were made about 100 m apart and the sensor footprint extends to about 400 m. Conventionally, observations of this type are averaged to produce smooth spatial data fields of brightness temperatures. However, the conventional approach is in contrast to reality in which the brightness temperatures are in fact strongly dependent on landcover, which is characterized by sharp boundaries. In this study, we mathematically de-convolve the observations into brightness temperature at the field scale (500-800m) using the sensor antenna response function. The result is more accurate spatial representation of field-scale brightness temperatures, which may in turn lead to more accurate soil moisture retrieval.
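In outline, de-convolution of this kind treats each oversampled observation as an antenna-weighted mixture of the field-scale brightness temperatures; given a response matrix relating fields to footprints, the field-scale values follow from a least-squares solve. The sketch below is a generic illustration with hypothetical inputs, not the PALS/SMEX02 processing code.

    import numpy as np

    def deconvolve_fields(Tb_obs, G, sigma=1.0):
        """Estimate field-scale brightness temperatures from oversampled observations.

        Tb_obs : (n_obs,) observed antenna temperatures
        G      : (n_obs, n_fields) antenna response weight of each field in each footprint
                 (rows normalized so each observation's weights sum to 1)
        Solves the weighted least-squares problem min || Tb_obs - G T_field ||^2.
        """
        T_field, *_ = np.linalg.lstsq(G / sigma, Tb_obs / sigma, rcond=None)
        return T_field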
Optimization of GPS water vapor tomography technique with radiosonde and COSMIC historical data
NASA Astrophysics Data System (ADS)
Ye, Shirong; Xia, Pengfei; Cai, Changsheng
2016-09-01
The near-real-time, high-spatial-resolution distribution of atmospheric water vapor is vital in numerical weather prediction. The GPS tomography technique has proved effective for three-dimensional water vapor reconstruction. In this study, the tomography processing is optimized in a few aspects with the aid of radiosonde and COSMIC historical data. Firstly, regional tropospheric zenith hydrostatic delay (ZHD) models are improved and thus the zenith wet delay (ZWD) can be obtained at a higher accuracy. Secondly, the regional conversion factor for converting the ZWD to the precipitable water vapor (PWV) is refined. Next, we develop a new method for dividing the tomography grid with an uneven voxel height and a varied water vapor layer top. Finally, we propose a Gaussian exponential vertical interpolation method which can better reflect the vertical variation characteristics of water vapor. GPS datasets collected in Hong Kong in February 2014 are employed to evaluate the optimized tomographic method by contrast with the conventional method. The radiosonde-derived and COSMIC-derived water vapor densities are utilized as references to evaluate the tomographic results. Using radiosonde products as references, the test results obtained from our optimized method indicate that the water vapor density accuracy is improved by 15 and 12 % compared to those derived from the conventional method below the height of 3.75 km and above the height of 3.75 km, respectively. Using the COSMIC products as references, the results indicate that the water vapor density accuracy is improved by 15 and 19 % below 3.75 km and above 3.75 km, respectively.
NASA Astrophysics Data System (ADS)
Ma, Jinlei; Zhou, Zhiqiang; Wang, Bo; Zong, Hua
2017-05-01
The goal of infrared (IR) and visible image fusion is to produce a more informative image for human observation or some other computer vision tasks. In this paper, we propose a novel multi-scale fusion method based on visual saliency map (VSM) and weighted least square (WLS) optimization, aiming to overcome some common deficiencies of conventional methods. Firstly, we introduce a multi-scale decomposition (MSD) using the rolling guidance filter (RGF) and Gaussian filter to decompose input images into base and detail layers. Compared with conventional MSDs, this MSD can achieve the unique property of preserving the information of specific scales and reducing halos near edges. Secondly, we argue that the base layers obtained by most MSDs would contain a certain amount of residual low-frequency information, which is important for controlling the contrast and overall visual appearance of the fused image, and the conventional "averaging" fusion scheme is unable to achieve desired effects. To address this problem, an improved VSM-based technique is proposed to fuse the base layers. Lastly, a novel WLS optimization scheme is proposed to fuse the detail layers. This optimization aims to transfer more visual details and less irrelevant IR details or noise into the fused image. As a result, the fused image details would appear more naturally and be suitable for human visual perception. Experimental results demonstrate that our method can achieve a superior performance compared with other fusion methods in both subjective and objective assessments.
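A much-simplified sketch of the base-layer step is given below: a Gaussian filter stands in for the rolling guidance filter and a crude contrast measure stands in for the full visual saliency map, so it only illustrates the structure of the method rather than reproducing it.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fuse_base_layers(ir, vis, sigma=5.0):
        """Saliency-weighted fusion of the base (low-frequency) layers of two images."""
        base_ir, base_vis = gaussian_filter(ir, sigma), gaussian_filter(vis, sigma)
        # crude saliency proxy: local deviation from the image mean
        sal_ir  = np.abs(ir - ir.mean())
        sal_vis = np.abs(vis - vis.mean())
        w = sal_ir / (sal_ir + sal_vis + 1e-6)        # per-pixel weight for the IR base layer
        fused_base = w * base_ir + (1.0 - w) * base_vis
        detail_ir, detail_vis = ir - base_ir, vis - base_vis
        return fused_base, detail_ir, detail_vis      # detail layers would go to the WLS fusion step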
Liu, A.; Bross, A.; Neuffer, D.
2015-05-28
This paper describes the strategy for optimizing the magnetic horn for the neutrinos from STORed Muons (nuSTORM) facility. The nuSTORM magnetic horn is the primary collection device for the secondary particles generated by bombarding a solid target with 120 GeV protons. As a consequence of the non-conventional beamline designed for nuSTORM, the requirements on the horn are different from those for a conventional neutrino beamline. At nuSTORM, muons decay while circulating in the storage ring, and the detectors are placed downstream of the production straight so as to be exposed to the neutrinos from muon decay. nuSTORM aims at precisely measuring the neutrino cross sections, and providing a definitive statement about the existence of sterile neutrinos. The nuSTORM horn aims at focusing the pions into a certain phase space so that more muons from pion decay can be accepted by the decay ring. The paper demonstrates a numerical method that was developed to optimize the horn design to gain higher neutrino flux from the circulating muons. A Genetic Algorithm (GA) was applied to the simultaneous optimization of the two objectives in this study. In conclusion, the application of the technique discussed in this paper is not limited to either the nuSTORM facility or muon based facilities, but can be used for other neutrino facilities that use magnetic horns as collection devices.
Improved specimen reconstruction by Hilbert phase contrast tomography.
Barton, Bastian; Joos, Friederike; Schröder, Rasmus R
2008-11-01
The low signal-to-noise ratio (SNR) in images of unstained specimens recorded with conventional defocus phase contrast makes it difficult to interpret 3D volumes obtained by electron tomography (ET). The high defocus applied for conventional tilt series generates some phase contrast but leads to an incomplete transfer of object information. For tomography of biological weak-phase objects, optimal image contrast and subsequently an optimized SNR are essential for the reconstruction of details such as macromolecular assemblies at molecular resolution. The problem of low contrast can be partially solved by applying a Hilbert phase plate positioned in the back focal plane (BFP) of the objective lens while recording images in Gaussian focus. Images recorded with the Hilbert phase plate provide optimized positive phase contrast at low spatial frequencies, and the contrast transfer in principle extends to the information limit of the microscope. The antisymmetric Hilbert phase contrast (HPC) can be numerically converted into isotropic contrast, which is equivalent to the contrast obtained by a Zernike phase plate. Thus, in-focus HPC provides optimal structure factor information without limiting effects of the transfer function. In this article, we present the first electron tomograms of biological specimens reconstructed from Hilbert phase plate image series. We outline the technical implementation of the phase plate and demonstrate that the technique is routinely applicable for tomography. A comparison between conventional defocus tomograms and in-focus HPC volumes shows an enhanced SNR and an improved specimen visibility for in-focus Hilbert tomography.
Decentralized Optimal Dispatch of Photovoltaic Inverters in Residential Distribution Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Dhople, Sairaj V.; Johnson, Brian B.
Summary form only given. Decentralized methods for computing optimal real and reactive power setpoints for residential photovoltaic (PV) inverters are developed in this paper. It is known that conventional PV inverter controllers, which are designed to extract maximum power at unity power factor, cannot address secondary performance objectives such as voltage regulation and network loss minimization. Optimal power flow techniques can be utilized to select which inverters will provide ancillary services, and to compute their optimal real and reactive power setpoints according to well-defined performance criteria and economic objectives. Leveraging advances in sparsity-promoting regularization techniques and semidefinite relaxation, this paper shows how such problems can be solved with reduced computational burden and optimality guarantees. To enable large-scale implementation, a novel algorithmic framework is introduced - based on the so-called alternating direction method of multipliers - by which optimal power flow-type problems in this setting can be systematically decomposed into sub-problems that can be solved in a decentralized fashion by the utility and customer-owned PV systems with limited exchanges of information. Since the computational burden is shared among multiple devices and the requirement of all-to-all communication can be circumvented, the proposed optimization approach scales favorably to large distribution networks.
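For reference, the alternating direction method of multipliers mentioned above splits a problem of the form min f(x) + g(z) subject to Ax + Bz = c into alternating subproblem solves; in the standard scaled form (textbook notation, not the paper's specific decomposition) the iteration is

    x^{k+1} = \arg\min_x \; f(x) + \tfrac{\rho}{2}\,\|Ax + Bz^{k} - c + u^{k}\|_2^2
    z^{k+1} = \arg\min_z \; g(z) + \tfrac{\rho}{2}\,\|Ax^{k+1} + Bz - c + u^{k}\|_2^2
    u^{k+1} = u^{k} + Ax^{k+1} + Bz^{k+1} - c

so the utility and the customer-owned PV systems can each solve their own subproblem and exchange only the coupling variables and the scaled multipliers u.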
Differential tracking data types for accurate and efficient Mars planetary navigation
NASA Technical Reports Server (NTRS)
Edwards, C. D., Jr.; Kahn, R. D.; Folkner, W. M.; Border, J. S.
1991-01-01
Ways in which high-accuracy differential observations of two or more deep space vehicles can dramatically extend the power of earth-based tracking over conventional range and Doppler tracking are discussed. Two techniques - spacecraft-spacecraft differential very long baseline interferometry (S/C-S/C Delta(VLBI)) and same-beam interferometry (SBI) - are discussed. The tracking and navigation capabilities of conventional range, Doppler, and quasar-relative Delta(VLBI) are reviewed, and the S/C-S/C Delta(VLBI) and SBI data types are introduced. For each data type, the formation of the observable is discussed, an error budget describing how physical error sources manifest themselves in the observable is presented, and potential applications of the technique for Space Exploration Initiative scenarios are examined. Requirements for spacecraft and ground systems needed to enable and optimize these types of observations are discussed.
Optimal Pitch Thrust-Vector Angle and Benefits for all Flight Regimes
NASA Technical Reports Server (NTRS)
Gilyard, Glenn B.; Bolonkin, Alexander
2000-01-01
The NASA Dryden Flight Research Center is exploring the optimum thrust-vector angle on aircraft. Simple aerodynamic performance models for various phases of aircraft flight are developed and optimization equations and algorithms are presented in this report. Results of optimal angles of thrust vectors and associated benefits for various flight regimes of aircraft (takeoff, climb, cruise, descent, final approach, and landing) are given. Results for a typical wide-body transport aircraft are also given. The benefits accruable for this class of aircraft are small, but the technique can be applied to other conventionally configured aircraft. The lower L/D aerodynamic characteristics of fighters generally would produce larger benefits than those produced for transport aircraft.
Plasma Enhanced Growth of Carbon Nanotubes For Ultrasensitive Biosensors
NASA Technical Reports Server (NTRS)
Cassell, Alan M.; Meyyappan, M.
2004-01-01
The multitude of considerations facing nanostructure growth and integration lends itself to combinatorial optimization approaches. Rapid optimization becomes even more important with wafer-scale growth and integration processes. Here we discuss methodology for developing plasma enhanced CVD growth techniques for achieving individual, vertically aligned carbon nanostructures that show excellent properties as ultrasensitive electrodes for nucleic acid detection. We utilize high throughput strategies for optimizing the upstream and downstream processing and integration of carbon nanotube electrodes as functional elements in various device types. An overview of ultrasensitive carbon nanotube based sensor arrays for electrochemical bio-sensing applications and the high throughput methodology utilized to combine novel electrode technology with conventional MEMS processing will be presented.
Plasma Enhanced Growth of Carbon Nanotubes For Ultrasensitive Biosensors
NASA Technical Reports Server (NTRS)
Cassell, Alan M.; Li, J.; Ye, Q.; Koehne, J.; Chen, H.; Meyyappan, M.
2004-01-01
The multitude of considerations facing nanostructure growth and integration lends itself to combinatorial optimization approaches. Rapid optimization becomes even more important with wafer-scale growth and integration processes. Here we discuss methodology for developing plasma enhanced CVD growth techniques for achieving individual, vertically aligned carbon nanostructures that show excellent properties as ultrasensitive electrodes for nucleic acid detection. We utilize high throughput strategies for optimizing the upstream and downstream processing and integration of carbon nanotube electrodes as functional elements in various device types. An overview of ultrasensitive carbon nanotube based sensor arrays for electrochemical biosensing applications and the high throughput methodology utilized to combine novel electrode technology with conventional MEMS processing will be presented.
Vagadia, Brinda Harish; Raghavan, Vijaya
2018-01-01
Soymilk is lower in calories than cow’s milk, since it is derived from a plant source (no cholesterol), and it is an excellent source of protein. Despite these beneficial factors, soymilk is considered one of the most controversial foods in the world. It contains serine protease inhibitors which lower its nutritional value and digestibility. Processing techniques for the elimination of trypsin inhibitors and lipoxygenase, with shorter processing times and lower production costs, are required for the large-scale manufacturing of soymilk. In this study, the suitable conditions of time and temperature during microwave processing are optimized to obtain soymilk with maximum digestibility and inactivation of trypsin inhibitors, in comparison to the conventional thermal treatment. Microwave processing conditions at a frequency of 2.45 GHz and temperatures of 70 °C, 85 °C and 100 °C for 2, 5 and 8 min were investigated and compared to conventional thermal treatments at the same temperatures for 10, 20 and 30 min. Response surface methodology is used to design and optimize the experimental conditions. Thermal processing was able to increase digestibility by 7% (microwave) and 11% (conventional) compared to the control, while trypsin inhibitor activity was reduced to 1% in microwave processing and 3% in conventional thermal treatment, compared to 10% in raw soybean. PMID:29316679
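Response surface methodology of this kind typically fits a second-order polynomial in the coded factors (here processing time and temperature) and locates its stationary point. The generic two-factor sketch below assumes the response vector y holds the measured digestibility or inhibitor-activity values; it is not the study's actual model.

    import numpy as np

    def fit_quadratic_surface(X, y):
        """Fit y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2 by least squares.

        X : (n_runs, 2) coded factor settings (e.g. time, temperature)
        y : (n_runs,) measured responses (assumed available from the experiment)
        """
        x1, x2 = X[:, 0], X[:, 1]
        D = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
        beta, *_ = np.linalg.lstsq(D, y, rcond=None)
        return beta

    def stationary_point(beta):
        """Solve grad = 0 for the fitted surface (candidate optimum in coded units)."""
        b0, b1, b2, b12, b11, b22 = beta
        A = np.array([[2 * b11, b12], [b12, 2 * b22]])
        return np.linalg.solve(A, -np.array([b1, b2]))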
NASA Astrophysics Data System (ADS)
Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio
Conventional optimization methods are based on a deterministic approach, since their purpose is to find an exact solution. However, these methods suffer from initial-condition dependence and the risk of falling into local solutions. In this paper, we propose a new optimization method based on the concept of the path integral method used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require experience-based tuning. We applied the new optimization method to the design of a hang glider. In this problem, not only the hang glider design but also its flight trajectory were optimized. The numerical calculation results showed that the method has sufficient performance for practical use.
IMRT for Image-Guided Single Vocal Cord Irradiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osman, Sarah O.S., E-mail: s.osman@erasmusmc.nl; Astreinidou, Eleftheria; Boer, Hans C.J. de
2012-02-01
Purpose: We have been developing an image-guided single vocal cord irradiation technique to treat patients with stage T1a glottic carcinoma. In the present study, we compared the dose coverage to the affected vocal cord and the dose delivered to the organs at risk using conventional, intensity-modulated radiotherapy (IMRT) coplanar, and IMRT non-coplanar techniques. Methods and Materials: For 10 patients, conventional treatment plans using two laterally opposed wedged 6-MV photon beams were calculated in XiO (Elekta-CMS treatment planning system). An in-house IMRT/beam angle optimization algorithm was used to obtain the coplanar and non-coplanar optimized beam angles. Using these angles, the IMRT plans were generated in Monaco (IMRT treatment planning system, Elekta-CMS) with the implemented Monte Carlo dose calculation algorithm. The organs at risk included the contralateral vocal cord, arytenoids, swallowing muscles, carotid arteries, and spinal cord. The prescription dose was 66 Gy in 33 fractions. Results: For the conventional plans and coplanar and non-coplanar IMRT plans, the population-averaged mean dose ± standard deviation to the planning target volume was 67 ± 1 Gy. The contralateral vocal cord dose was reduced from 66 ± 1 Gy in the conventional plans to 39 ± 8 Gy and 36 ± 6 Gy in the coplanar and non-coplanar IMRT plans, respectively. IMRT consistently reduced the doses to the other organs at risk. Conclusions: Single vocal cord irradiation with IMRT resulted in good target coverage and provided significant sparing of the critical structures. This has the potential to improve the quality-of-life outcomes after RT and maintain the same local control rates.
NASA Astrophysics Data System (ADS)
Akmaev, R. a.
1999-04-01
In Part 1 of this work (Akmaev, 1999), an overview of the theory of optimal interpolation (OI) (Gandin, 1963) and related techniques of data assimilation based on linear optimal estimation (Liebelt, 1967; Catlin, 1989; Mendel, 1995) is presented. The approach implies the use in data analysis of additional statistical information in the form of statistical moments, e.g., the mean and covariance (correlation). The a priori statistical characteristics, if available, make it possible to constrain expected errors and obtain optimal in some sense estimates of the true state from a set of observations in a given domain in space and/or time. The primary objective of OI is to provide estimates away from the observations, i.e., to fill in data voids in the domain under consideration. Additionally, OI performs smoothing suppressing the noise, i.e., the spectral components that are presumably not present in the true signal. Usually, the criterion of optimality is minimum variance of the expected errors and the whole approach may be considered constrained least squares or least squares with a priori information. Obviously, data assimilation techniques capable of incorporating any additional information are potentially superior to techniques that have no access to such information as, for example, the conventional least squares (e.g., Liebelt, 1967; Weisberg, 1985; Press et al., 1992; Mendel, 1995).
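In its usual linear form (the standard minimum-variance result, quoted here because it is consistent with the description above rather than taken from this paper), the OI analysis at the grid points is

    x_a = x_b + B H^{\mathsf T} \left( H B H^{\mathsf T} + R \right)^{-1} \left( y - H x_b \right),

where x_b is the background (a priori) field, y the vector of observations, H the observation operator mapping the grid to the observation locations, and B and R the background- and observation-error covariances; in the limit of uninformative background statistics the estimate reduces to a conventional (weighted) least-squares fit to the observations alone.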
Paswan, Suresh K; Saini, T R
2017-12-01
Emulsifiers at relatively high levels are used in the preparation of drug-loaded polymeric nanoparticles prepared by the emulsification solvent evaporation method. This poses a serious problem for the formulator because of the toxicity of these surfactants when the product is administered by the parenteral route. The final product must therefore be freed of the residual surfactants by purification, which is a cumbersome job with conventional techniques. A solvent-resistant stirred cell ultrafiltration unit (Millipore) was used in this study with a polyethersulfone ultrafiltration membrane (Biomax®) having a pore size of NMWL 300 KDa as the membrane filter. The purification efficiency of this technique was compared with the conventional centrifugation technique. The flow rate of ultrafiltration was optimized for removal of surfactant (polyvinyl alcohol) impurities to acceptable levels within 1-3.5 h from the nanoparticle dispersion of tamoxifen prepared by the emulsification solvent evaporation method. The present investigations demonstrate the application of the solvent-resistant stirred cell ultrafiltration technique for the removal of toxic surfactant (PVA) impurities from polymeric drug nanoparticles (tamoxifen) prepared by the emulsification solvent evaporation method. This technique offers the added benefit of producing a more concentrated nanoparticle dispersion without causing significant particle size growth, which is observed with other purification techniques, e.g., centrifugation and ultracentrifugation.
Poojary, Mahesha M; Passamonti, Paolo
2016-12-09
This paper reports on improved conventional thermal silylation (CTS) and microwave-assisted silylation (MAS) methods for the simultaneous determination of tocopherols and sterols by gas chromatography. Reaction parameters in each of the methods developed were systematically optimized using a full factorial design followed by a central composite design. Initially, experimental conditions for CTS were optimized using a block heater. Then, a rapid MAS method was developed and optimized. To understand microwave heating mechanisms, MAS was optimized by two distinct modes of microwave heating: temperature-controlled MAS and power-controlled MAS, using dedicated instruments where the reaction temperature and microwave power level were controlled and monitored online. The developed methods were compared with routine overnight derivatization. On a comprehensive level, while both CTS and MAS were found to be efficient derivatization techniques, MAS significantly reduced the reaction time. The optimal derivatization temperature and time for CTS were found to be 55 °C and 54 min, while they were 87 °C and 1.2 min for temperature-controlled MAS. Further, a microwave power of 300 W and a derivatization time of 0.5 min were found to be optimal for power-controlled MAS. The use of an appropriate derivatization solvent, such as pyridine, was found to be critical for successful determination. Catalysts, such as potassium acetate and 4-dimethylaminopyridine, enhanced the efficiency slightly. The developed methods showed excellent analytical performance in terms of linearity, accuracy and precision. Copyright © 2016 Elsevier B.V. All rights reserved.
Yang, Lei; Sun, Xiaowei; Yang, Fengjian; Zhao, Chunjian; Zhang, Lin; Zu, Yuangang
2012-01-01
Ionic liquid based, microwave-assisted extraction (ILMAE) was successfully applied to the extraction of proanthocyanidins from Larix gmelini bark. In this work, in order to evaluate the performance of ionic liquids in the microwave-assisted extraction process, a series of 1-alkyl-3-methylimidazolium ionic liquids with different cations and anions were evaluated for extraction yield, and 1-butyl-3-methylimidazolium bromide was selected as the optimal solvent. In addition, the ILMAE procedure for the proanthocyanidins was optimized and compared with other conventional extraction techniques. Under the optimized conditions, a satisfactory extraction yield of the proanthocyanidins was obtained. Relative to other methods, the proposed approach provided a higher extraction yield and lower energy consumption. The Larix gmelini bark samples before and after extraction were analyzed by thermal gravimetric analysis and Fourier-transform infrared spectroscopy and characterized by scanning electron microscopy. The results showed that the ILMAE method is a simple and efficient technique for sample preparation. PMID:22606036
A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2009-01-01
A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations, and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented, and compared to those from an alternative sensor selection strategy.
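One simple way to construct the "linear combination" tuners mentioned above (illustrative of the idea, though not necessarily the optimality criterion used in the paper) is to take the leading right-singular vectors of the health-parameter-to-output sensitivity matrix, so that the tuner dimension matches the number of sensors. The matrix H_influence below is a hypothetical placeholder for the engine model sensitivities.

    import numpy as np

    def select_tuners(H_influence, n_sensors):
        """Choose tuning parameters as linear combinations of health parameters.

        H_influence : (n_outputs, n_health) sensitivity of model outputs to health
                      parameters (hypothetical placeholder for the engine model).
        Returns V with shape (n_health, n_sensors); tuners q then satisfy h ~ V q.
        """
        _, _, Vt = np.linalg.svd(H_influence, full_matrices=False)
        return Vt[:n_sensors].T   # health-parameter directions best seen by the sensors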
Pozzi, P; Wilding, D; Soloviev, O; Verstraete, H; Bliek, L; Vdovin, G; Verhaegen, M
2017-01-23
The quality of fluorescence microscopy images is often impaired by the presence of sample-induced optical aberrations. Adaptive optical elements such as deformable mirrors or spatial light modulators can be used to correct aberrations. However, previously reported techniques either require special sample preparation, or time-consuming optimization procedures for the correction of static aberrations. This paper reports a technique for optical sectioning fluorescence microscopy capable of correcting dynamic aberrations in any fluorescent sample during the acquisition. This is achieved by implementing adaptive optics in a non-conventional confocal microscopy setup, with multiple programmable confocal apertures, in which out-of-focus light can be separately detected and used to optimize the correction performance with a sampling frequency an order of magnitude faster than the imaging rate of the system. The paper reports results comparing the correction performance to traditional image optimization algorithms, and demonstrates how the system can compensate for dynamic changes in the aberrations, such as those introduced during a focal stack acquisition through a thick sample.
Dosimetric comparison between VMAT and RC3D techniques: case of prostate treatment
NASA Astrophysics Data System (ADS)
Chemingui, Fatima Zohra; Benrachi, Fatima; Bali, Mohamed Saleh; Ladjal, Hamid
2017-09-01
Considered the second most common cancer among men in Algeria, prostate cancer is treated with radiation in 70% of cases, which makes radiation therapy a key therapeutic weapon for this disease. Three-dimensional conformal radiotherapy (RC3D) is the most common technique [1-5]. Conventionally optimized treatment plans were compared with VMAT-optimized treatment plans for a prostate cancer case. The evaluation of the two optimization strategies focused on the resulting plans' ability to retain dose objectives under the influence of patient set-up. The Dose Volume Histogram of the Planning Target Volume and the dose to the Organs At Risk were used to calculate the conformity index and the evaluation ratio of irradiated volume, which represent the main tools of comparison [6,7]. The situation was analysed systematically. The 14% dose increase in the target leads to a decrease in the dose to adjacent organs, with 39% in the bladder. Therefore, the criteria of better efficacy and less toxicity reveal that VMAT is the best choice.
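Conformity indices can be defined in more than one way; one widely used definition (the Paddick index, quoted here for reference because the abstract does not state which form was applied) is

    CI = \frac{TV_{PIV}^{2}}{TV \times PIV},

where TV is the target volume, PIV the volume enclosed by the prescription isodose, and TV_{PIV} their intersection; CI = 1 indicates perfect conformity.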
NASA Technical Reports Server (NTRS)
Massie, N. A.; Oster, Yale; Poe, Greg; Seppala, Lynn; Shao, Mike
1992-01-01
Telescopes that are designed for the unconventional imaging of near-earth satellites must follow unique design rules. The costs must be reduced substantially over those of the conventional telescope designs, and the design must accommodate a technique to circumvent atmospheric distortion of the image. Apertures of 12 m and more along with altitude-altitude mounts that provide high tracking rates are required. A novel design for such a telescope, optimized for speckle imaging, has been generated. Its mount closely resembles a radar mount, and it does not use the conventional dome. Costs for this design are projected to be considerably lower than those for the conventional designs. Results of a design study are presented with details of the electro-optical and optical designs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorman, A; Seabrook, G; Brakken, A
Purpose: Small surgical devices and needles are used in many surgical procedures. Conventionally, an x-ray film is taken to identify missing devices/needles if the post-procedure count is incorrect. There are no data to indicate the smallest surgical devices/needles that can be identified with digital radiography (DR), or its optimized acquisition technique. Methods: In this study, the DR equipment used is a Canon RadPro mobile with a CXDI-70c wireless DR plate, and the same DR plate on a fixed Siemens Multix unit. Small surgical devices and needles tested include Rubber Shod, Bulldog, Fogarty Hydrogrip, and needles with sizes 3-0 C-T1 through 8-0 BV175-6. They are imaged with PMMA block phantoms with thickness of 2–8 inch, and an abdomen phantom. Various DR techniques are used. Images are reviewed on the portable x-ray acquisition display, a clinical workstation, and a diagnostic workstation. Results: All small surgical devices and needles are visible in portable DR images with 2–8 inch of PMMA. However, when they are imaged with the abdomen phantom plus 2 inch of PMMA, needles smaller than 9.3 mm in length cannot be visualized at the optimized technique of 81 kV and 16 mAs. There is no significant difference in visualization with various techniques, or between the mobile and fixed radiography units. However, there is a noticeable difference in visualizing the smallest needle on a diagnostic reading workstation compared to the acquisition display on a portable x-ray unit. Conclusion: DR images should be reviewed on a diagnostic reading workstation. Using optimized DR techniques, the smallest needle that can be identified in all phantom studies is 9.3 mm. Sample DR images of various small surgical devices/needles available on a diagnostic workstation for comparison may improve their identification. Further in vivo study is needed to confirm the optimized digital radiography technique for identification of lost small surgical devices and needles.
NASA Astrophysics Data System (ADS)
Budiyono, T.; Budi, W. S.; Hidayanto, E.
2016-03-01
Radiation therapy for brain malignancy is done by giving a dose of radiation to the whole volume of the brain (WBRT), followed by a boost at the primary tumor with more advanced techniques. Two external radiation fields are given from the right and left sides. Because of the shape of the head, there will be unavoidable hotspots with radiation doses greater than 107%. This study aims to optimize the planning of radiation therapy using a field-in-field multi-leaf collimator (MLC) technique. A study of 15 WBRT samples with CT slices was done by adding some segments of radiation in each radiation field and delivering appropriate dose weighting using a Precise Plan (Elekta) R 2.15 TPS. Results showed that this optimization produced more homogeneous radiation in the CTV target volume, a lower dose in healthy tissue, and reduced hotspots in the CTV target volume. Comparison results of the field-in-field multi-segmented MLC technique with the standard conventional technique for WBRT are: higher average minimum dose (77.25% ± 0.47% vs 60% ± 3.35%); lower average maximum dose (110.27% ± 0.26% vs 114.53% ± 1.56%); lower hotspot volume (5.71% vs 27.43%); and lower dose to the eye lenses (right eye: 9.52% vs 18.20%; left eye: 8.60% vs 16.53%).
SU-E-T-255: Optimized Supine Craniospinal Irradiation with Image-Guided and Field Matched Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Z; Holupka, E; Naughton, J
2014-06-01
Purpose: Conventional craniospinal irradiation (CSI) challenges include dose inhomogeneity at field junctions and positional uncertainty due to field divergence, particularly for the two spinal fields. Here we outline a new supine CSI technique to address these difficulties. Methods: The patient was simulated in the supine position. The cranial fields had their isocenter at the C2/C3 vertebral level and were matched with the 1st spinal field. Their inferior border was chosen to avoid the shoulder, as well as the chin, from the 1st spinal field. Their collimator angles were dependent on the asymmetric jaw setting of the 1st spinal field. With couch rotation, the spinal field gantry angles were adjusted to ensure the inferior border of the 1st and the superior border of the 2nd spinal fields were perpendicular to the table top. The radio-opaque wire position for the spinal junction was located initially by the light field from an anterior setup beam, and was finalized by portal imaging of the 1st spinal field. With reference to the spinal junction wire, the fields were matched by positioning the isocenter of the 2nd spinal field. A formula was derived to optimize supine CSI treatment planning by utilizing the relationship among the Y-jaw setting, the spinal field gantry angles, the cranial field collimator angles, and the spinal field isocenter locations. The plan was delivered with portal imaging alignment for both the cranial and spinal junctions. Results: Utilizing this technique with matching beams, and conventional techniques such as feathering and forward planning, a homogeneous dose distribution was achieved throughout the entire CSI treatment volume including the spinal junction. Placing the spinal junction wire so that it is visualized in both spinal portals allows precise determination and verification of the appropriate match line of the spinal fields. Conclusion: This optimized supine CSI technique achieved homogeneous dose distributions and patient localization accuracy with image-guided and matched beams.
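The abstract refers to a derived relationship among the Y-jaw setting, gantry angles, collimator angles, and isocenter locations without stating it; as a hedged illustration, the textbook divergence-matching geometry for craniospinal fields can be sketched as follows (an assumption-based approximation, not the authors' formula).

```python
import math

def cranial_collimator_angle_deg(spinal_field_length_cm, ssd_cm=100.0):
    """Collimator rotation of the lateral cranial fields needed to match the
    divergence of the superior edge of the adjacent spinal field
    (textbook geometry: half the spinal field length over the SSD)."""
    return math.degrees(math.atan(0.5 * spinal_field_length_cm / ssd_cm))

def couch_or_gantry_kick_deg(cranial_field_length_cm, sad_cm=100.0):
    """Rotation needed so the inferior edge of the cranial fields does not
    diverge into the spinal field (half the cranial field length over SAD)."""
    return math.degrees(math.atan(0.5 * cranial_field_length_cm / sad_cm))

print(cranial_collimator_angle_deg(36.0))  # about 10.2 degrees for a 36 cm spinal field
print(couch_or_gantry_kick_deg(20.0))      # about 5.7 degrees for a 20 cm cranial field
```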
Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter.
Choi, Jihoon; Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il
2017-09-13
This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected.
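A minimal numerical sketch of generalised cross-correlation with a regularised prefilter is given below; the SCOT/ML-style weighting and the uniform-pipe location formula are standard textbook forms with an assumed regularisation constant, not necessarily the modified ML prefilter or the multi-pipe formula derived in the paper.

```python
import numpy as np

def gcc_time_delay(x, y, fs, eps=1e-3):
    """Estimate the time difference of arrival between two vibration signals
    using generalised cross-correlation with a regularised SCOT/ML-style
    prefilter (illustrative weighting, not the paper's exact prefilter)."""
    n = len(x) + len(y)
    X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)
    Sxy = X * np.conj(Y)                        # cross-spectral density
    Sxx, Syy = np.abs(X) ** 2, np.abs(Y) ** 2   # power spectral densities
    w = 1.0 / (np.sqrt(Sxx * Syy) + eps)        # regularised prefilter
    cc = np.fft.irfft(w * Sxy, n)               # weighted cross-correlation
    lag = int(np.argmax(np.abs(cc)))
    if lag > n // 2:                            # wrap circular lags to negative values
        lag -= n
    return lag / fs

def leak_position_uniform_pipe(tau, sensor_spacing_m, wave_speed_mps):
    """Leak distance from sensor 1 for a uniform pipe: d1 = (D - c*tau) / 2."""
    return 0.5 * (sensor_spacing_m - wave_speed_mps * tau)
```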
Theory, simulation and experiments for precise deflection control of radiotherapy electron beams.
Figueroa, R; Leiva, J; Moncada, R; Rojas, L; Santibáñez, M; Valente, M; Velásquez, J; Young, H; Zelada, G; Yáñez, R; Guillen, Y
2018-03-08
Conventional radiotherapy is mainly delivered by linear accelerators. Although linear accelerators provide dual (electron/photon) radiation beam modalities, both are intrinsically produced by a megavoltage electron current. Modern radiotherapy treatment techniques are based on suitable devices inserted into or attached to conventional linear accelerators. Thus, precise control of the delivered beam becomes a key issue. This work presents an integral description of electron beam deflection control as required for a novel radiotherapy technique based on convergent photon beam production. Theoretical and Monte Carlo approaches were initially used for designing and optimizing the device's components. Then, dedicated instrumentation was developed for experimental verification of electron beam deflection due to the designed magnets. Both Monte Carlo simulations and experimental results support the reliability of the electrodynamic models used to predict megavoltage electron beam control. Copyright © 2018 Elsevier Ltd. All rights reserved.
Ahmed, Ashik; Al-Amin, Rasheduzzaman; Amin, Ruhul
2014-01-01
This paper proposes the design of a Static Synchronous Series Compensator (SSSC)-based damping controller to enhance the stability of a Single Machine Infinite Bus (SMIB) system by means of the Invasive Weed Optimization (IWO) technique. A conventional PI controller is used as the SSSC damping controller, taking rotor speed deviation as the input. The damping controller parameters are tuned with IWO based on a cost function defined as the time integral of absolute error (ITAE). The performance of the IWO-based controller is compared with that of a Particle Swarm Optimization (PSO)-based controller. Time-domain simulation results are presented, and the performance of the controllers under different loading conditions and fault scenarios is studied in order to illustrate the effectiveness of the IWO-based design approach.
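A population-based tuner such as IWO or PSO only needs a scalar cost per candidate gain set; the sketch below evaluates an ITAE cost on a toy underdamped plant (the plant, gains, and signal names are illustrative assumptions, not the SMIB/SSSC model of the paper).

```python
import numpy as np

def itae_cost(kp, ki, t_end=10.0, dt=1e-3):
    """ITAE cost J = sum(t * |error|) * dt for a PI controller driving a toy
    underdamped second-order plant toward zero speed deviation."""
    wn, zeta = 2.0, 0.05           # illustrative natural frequency and damping
    x, v, integ = 1.0, 0.0, 0.0    # initial deviation, its derivative, error integral
    cost, t = 0.0, 0.0
    while t < t_end:
        u = -(kp * x + ki * integ)               # PI control action
        a = -2 * zeta * wn * v - wn**2 * x + u   # plant acceleration
        v += a * dt
        x += v * dt
        integ += x * dt
        cost += t * abs(x) * dt
        t += dt
    return cost

# A population-based optimizer (IWO, PSO, ...) would minimize itae_cost(kp, ki).
print(itae_cost(kp=5.0, ki=2.0))
```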
NASA Technical Reports Server (NTRS)
Wang, Nanbor; Parameswaran, Kirthika; Kircher, Michael; Schmidt, Douglas
2003-01-01
Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.
NASA Technical Reports Server (NTRS)
Wang, Nanbor; Kircher, Michael; Schmidt, Douglas C.
2000-01-01
Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively: (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.
Different types of maximum power point tracking techniques for renewable energy systems: A survey
NASA Astrophysics Data System (ADS)
Khan, Mohammad Junaid; Shukla, Praveen; Mustafa, Rashid; Chatterji, S.; Mathew, Lini
2016-03-01
Global demand for electricity is increasing while production of energy from fossil fuels is declining, and therefore the obvious choice of a clean energy source that is abundant and could provide energy security for future development is energy from the sun. The power-voltage characteristic of a photovoltaic generator is nonlinear and exhibits multiple peaks, including many local peaks and one global peak, under non-uniform irradiance. To track the global peak, maximum power point tracking (MPPT) is an essential component of photovoltaic systems. Many review articles discuss conventional techniques such as perturb and observe (P&O), incremental conductance, and ripple correlation control, but very few address intelligent MPPT techniques. This paper also discusses algorithms based on fuzzy logic, ant colony optimization, genetic algorithms, artificial neural networks, particle swarm optimization, the firefly algorithm, extremum-seeking control, and hybrid methods applied to maximum power point tracking in photovoltaic systems under changing irradiance conditions.
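As a reference point for the conventional hill-climbing methods the survey mentions, a minimal perturb-and-observe (P&O) loop is sketched below; the PV power-voltage curve is a toy single-peak model and all numbers are illustrative assumptions, so the sketch shows only the local hill-climbing logic, not global-peak tracking under partial shading.

```python
def pv_power(v):
    """Toy single-peak PV power-voltage curve (illustrative only)."""
    return max(0.0, v * (8.0 - 0.25 * v))   # peaks near 16 V

def perturb_and_observe(v0=10.0, step=0.2, iters=100):
    """Classic P&O hill climbing: keep perturbing in the direction that
    increased power, and reverse the perturbation otherwise."""
    v, p_prev, direction = v0, pv_power(v0), +1.0
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:            # power dropped, so reverse the perturbation
            direction = -direction
        p_prev = p
    return v, p_prev

print(perturb_and_observe())      # oscillates around the 16 V peak
```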
Ohman, A; Kull, L; Andersson, J; Flygare, L
2008-12-01
To measure organ doses and calculate effective doses for pre-operative radiographic examination of lower third molars with CT and conventional radiography (CR). Measurements of organ doses were made on an anthropomorphic head phantom with lithium fluoride thermoluminescent dosemeters. The dosemeters were placed in regions corresponding to the parotid and submandibular glands, mandibular bone, thyroid gland, skin, eye lenses and brain. The organ doses were used for the calculation of effective doses according to the proposed International Commission on Radiological Protection 2005 guidelines. For the CT examination, a Siemens Somatom Plus 4 Volume Zoom was used and exposure factors were set to 120 kV and 100 mAs. For conventional radiographs, a Scanora unit was used and panoramic, posteroanterior, stereographic (scanogram) and conventional spiral tomographic views were exposed. The effective doses were 0.25 mSv, 0.060 mSv and 0.093 mSv for CT, CR without conventional tomography and CR with conventional spiral tomography, respectively. The effective dose is low when a CT examination with exposure factors optimized for the examination of bone structures is performed. However, the dose is still about four times as high as for CR without tomography. CT should therefore not be a standard method for the examination of lower third molars. In cases where there is a close relationship between the tooth and the inferior alveolar nerve, the advantages of true sectional imaging, such as CT, outweigh the higher effective dose, and CT is recommended. Further reduction in the dose is feasible with further optimization of examination protocols and the development of newer techniques.
Hiremath, Mallayya C; Srivastava, Pooja
2016-01-01
The purpose of this in vitro study was to compare four methods of root canal obturation in primary teeth using conventional radiography. A total of 96 root canals of primary molars were prepared and obturated with zinc oxide eugenol. The obturation methods compared were the endodontic pressure syringe, insulin syringe, jiffy tube, and local anesthetic syringe. The root canal obturations were evaluated by conventional radiography for the length of obturation and the presence of voids. The obtained data were analyzed using the Chi-square test. The results showed significant differences between the four groups for the length of obturation (P < 0.05). The endodontic pressure syringe showed the best results (98.5% optimal fillings) and the jiffy tube the poorest results (37.5% optimal fillings) for the length of obturation. The insulin syringe (79.2% optimal fillings) and local anesthetic syringe (66.7% optimal fillings) showed acceptable results for the length of root canal obturation. However, minor voids were present in all four techniques used. The endodontic pressure syringe produced the best results in terms of length of obturation and control of paste extrusion from the apical foramen. However, the insulin syringe and local anesthetic syringe can be used as effective alternative methods.
TiO2-coated mesoporous carbon: conventional vs. microwave-annealing process.
Coromelci-Pastravanu, Cristina; Ignat, Maria; Popovici, Evelini; Harabagiu, Valeria
2014-08-15
The study of coating mesoporous carbon materials with titanium oxide nanoparticles is now becoming a promising and challenging area of research. To optimize the use of carbon materials in various applications, it is necessary to attach functional groups or other nanostructures to their surface. The combination of the distinctive properties of mesoporous carbon materials and titanium oxide is expected to find application in field emission displays, nanoelectronic devices, novel catalysts, and polymer or ceramic reinforcement. However, their synthesis is still largely based on conventional techniques, such as wet impregnation followed by chemical reduction of the metal nanoparticle precursors, which is time-consuming and costly. Thermal-heating-based techniques are slow and often lack control of particle size and morphology. Hence, given the growing interest in microwave technology, an alternative way of putting power into chemical reactions through dielectric heating is the use of microwaves. This work focuses on the advantages of microwave-assisted synthesis of TiO2-coated mesoporous carbon over the conventional thermal heating method. The reviewed studies showed that microwave-assisted synthesis of such composites allows processes to be completed within a shorter reaction time, yielding nanoparticles with properties superior to those obtained by the conventional method. Copyright © 2014 Elsevier B.V. All rights reserved.
Electrospinning bioactive supramolecular polymers from water.
Tayi, Alok S; Pashuck, E Thomas; Newcomb, Christina J; McClendon, Mark T; Stupp, Samuel I
2014-04-14
Electrospinning is a high-throughput, low-cost technique for manufacturing long fibers from solution. Conventionally, this technique is used with covalent polymers with large molecular weights. We report here the electrospinning of functional peptide-based supramolecular polymers from water at very low concentrations (<4 wt %). Molecules with low molecular weights (<1 kDa) could be electrospun because they self-assembled into one-dimensional supramolecular polymers upon solvation and the critical parameters of viscosity, solution conductivity, and surface tension were optimized for this technique. The supramolecular structure of the electrospun fibers could ensure that certain residues, like bioepitopes, are displayed on the surface even after processing. This system provides an opportunity to electrospin bioactive supramolecular materials from water for biomedical applications.
Ramrakhyani, A K; Mirabbasi, S; Mu Chiao
2011-02-01
Resonance-based wireless power delivery is an efficient technique to transfer power over a relatively long distance. This technique typically uses four coils as opposed to the two coils used in conventional inductive links. In the four-coil system, the adverse effects of a low coupling coefficient between the primary and secondary coils are compensated by using high-quality-factor (Q) coils, and the efficiency of the system is improved. Unlike its two-coil counterpart, the efficiency profile of the power transfer is not a monotonically decreasing function of the operating distance and is less sensitive to changes in the distance between the primary and secondary coils. A four-coil energy transfer system can be optimized to provide maximum efficiency at a given operating distance. We have analyzed four-coil energy transfer systems and outlined the effect of design parameters on power-transfer efficiency. Design steps to obtain an efficient power-transfer system are presented and a design example is provided. A proof-of-concept prototype system is implemented and confirms the validity of the proposed analysis and design techniques. In the prototype system, for a power-link frequency of 700 kHz and a coil distance range of 10 to 20 mm, using a 22-mm diameter implantable coil, the resonance-based system shows a power-transfer efficiency of more than 80% with an enhanced operating range, compared to the ~40% efficiency achieved by a conventional two-coil system.
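For context on the two-coil baseline cited above, the widely used maximum-efficiency expression for an inductive link, governed by the figure of merit k²Q₁Q₂, is sketched below; it is a textbook benchmark under the stated assumptions, not the paper's four-coil analysis.

```python
import math

def two_coil_max_efficiency(k, q1, q2):
    """Textbook maximum link efficiency of a two-coil inductive link:
    eta = F / (1 + sqrt(1 + F))^2, with figure of merit F = k^2 * Q1 * Q2."""
    f = k**2 * q1 * q2
    return f / (1.0 + math.sqrt(1.0 + f)) ** 2

# The same weak coupling with higher-Q coils yields a much better link,
# which is the effect the four-coil architecture exploits.
print(two_coil_max_efficiency(k=0.05, q1=50, q2=50))     # about 0.46
print(two_coil_max_efficiency(k=0.05, q1=300, q2=300))   # about 0.88
```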
NASA Technical Reports Server (NTRS)
Schilling, D. L.
1974-01-01
Digital multiplication of two waveforms using delta modulation (DM) is discussed. It is shown that while conventional multiplication of two N-bit words requires N² complexity, multiplication using DM requires complexity which increases linearly with N. Bounds on the signal-to-quantization noise ratio (SNR) resulting from this multiplication are determined and compared with the SNR obtained using standard multiplication techniques. The phase locked loop (PLL) system, consisting of a phase detector, voltage controlled oscillator, and a linear loop filter, is discussed in terms of its design and system advantages. Areas requiring further research are identified.
Hybrid Techniques for Optimizing Complex Systems
2009-12-01
These vectors are randomly generated, and conventional functional simulation propagates signatures to the internal and output nodes. For instance, if two internal nodes x and y satisfy the property (y = 1) ⇒ (x = 1), where ⇒ denotes "implies", then y gives information about x whenever y = 1.
Improving Simulated Annealing by Recasting it as a Non-Cooperative Game
NASA Technical Reports Server (NTRS)
Wolpert, David; Bandari, Esfandiar; Tumer, Kagan
2001-01-01
The game-theoretic field of COllective INtelligence (COIN) concerns the design of computer-based players engaged in a non-cooperative game so that as those players pursue their self-interests, a pre-specified global goal for the collective computational system is achieved "as a side-effect". Previous implementations of COIN algorithms have outperformed conventional techniques by up to several orders of magnitude, on domains ranging from telecommunications control to optimization in congestion problems. Recent mathematical developments have revealed that these previously developed game-theory-motivated algorithms were based on only two of the three factors determining performance. Consideration of only the third factor would instead lead to conventional optimization techniques like simulated annealing that have little to do with non-cooperative games. In this paper we present an algorithm based on all three terms at once. This algorithm can be viewed as a way to modify simulated annealing by recasting it as a non-cooperative game, with each variable replaced by a player. This recasting allows us to leverage the intelligent behavior of the individual players to substantially improve the exploration step of the simulated annealing. Experiments are presented demonstrating that this recasting improves simulated annealing by several orders of magnitude for spin glass relaxation and bin-packing.
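For reference, the conventional simulated annealing baseline that the game-theoretic recasting modifies can be sketched in a few lines; the toy objective, cooling schedule, and step size below are arbitrary assumptions, not the spin-glass or bin-packing setups of the paper.

```python
import math
import random

def simulated_annealing(f, x0, t0=1.0, alpha=0.99, iters=5000, step=0.1):
    """Conventional simulated annealing: accept worse moves with probability
    exp(-delta / T) under a geometric cooling schedule."""
    x, fx, t = x0, f(x0), t0
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        fc = f(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
        t *= alpha
    return x, fx

# Toy objective; the COIN-style recasting instead treats each variable as a
# self-interested player in a non-cooperative game.
print(simulated_annealing(lambda x: (x - 3.0) ** 2, x0=0.0))
```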
Noninvasive detection of cardiovascular pulsations by optical Doppler techniques
NASA Astrophysics Data System (ADS)
Hong, HyunDae; Fox, Martin D.
1997-10-01
A system has been developed based on the measurement of skin surface vibration that can be used to detect the underlying vascular wall motion of superficial arteries and the chest wall. Data obtained from tissue phantoms suggested that the detected signals were related to intravascular pressure, an important clinical and physiological parameter. Unlike the conventional optical Doppler techniques that have been used to measure blood perfusion in skin layers and blood flow within superficial arteries, the present system was optimized to pick up skin vibrations. An optical interferometer with a 633-nm He-Ne laser was utilized to detect micrometer displacements of the skin surface. Motion velocity profiles of the skin surface near each superficial artery and auscultation points on a chest for the two heart valve sounds exhibited distinctive profiles. The theoretical and experimental results demonstrated that the system detected the velocity of skin movement, which is related to the time derivative of the pressure. The system also reduces the loading effect on the pulsation signals and heart sounds produced by the conventional piezoelectric vibration sensors. The system's sensitivity, which could be optimized further, was 366.2 μm/s for the present research. Overall, optical cardiovascular vibrometry has the potential to become a simple noninvasive approach to cardiovascular screening.
Optimized Hyper Beamforming of Linear Antenna Arrays Using Collective Animal Behaviour
Ram, Gopi; Mandal, Durbadal; Kar, Rajib; Ghoshal, Sakti Prasad
2013-01-01
A novel optimization technique developed by mimicking collective animal behaviour (CAB) is applied to the optimal design of hyper beamforming of linear antenna arrays. Hyper beamforming is based on sum and difference beam patterns of the array, each raised to the power of a hyperbeam exponent parameter. The optimized hyperbeam is achieved by optimization of the current excitation weights and the uniform interelement spacing. Compared with conventional hyper beamforming of a linear antenna array, real coded genetic algorithm (RGA), particle swarm optimization (PSO), and differential evolution (DE) applied to the hyper beam of the same array can achieve reduction in sidelobe level (SLL) and the same or smaller first null beam width (FNBW), keeping the same value of the hyperbeam exponent. Further reductions of sidelobe level (SLL) and first null beam width (FNBW) have been achieved by the proposed collective animal behaviour (CAB) algorithm. CAB finds a near-global optimal solution, unlike RGA, PSO, and DE, in the present problem. The above comparative optimization is illustrated through 10-, 14-, and 20-element linear antenna arrays to establish the optimization efficacy of CAB. PMID:23970843
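One common formulation of the hyper beam splits the array into two halves, forms sum and difference patterns, and combines them through the exponent u; the sketch below follows that formulation with illustrative spacing and uniform weights, and is not the exact beamformer or CAB optimizer of the paper.

```python
import numpy as np

def hyperbeam_pattern(n=10, d=0.5, u=0.5, weights=None, theta=None):
    """Hyper beam of a uniform linear array: sum and difference beams of the
    two half-arrays combined through a hyperbeam exponent u
    (one common definition; spacing d is in wavelengths, weights illustrative)."""
    theta = np.linspace(-np.pi / 2, np.pi / 2, 721) if theta is None else theta
    w = np.ones(n) if weights is None else np.asarray(weights, float)
    k = 2 * np.pi * d * np.arange(n)                  # element phase factors
    phase = np.exp(1j * np.outer(np.sin(theta), k))
    left = phase[:, : n // 2] @ w[: n // 2]           # left half-array factor
    right = phase[:, n // 2:] @ w[n // 2:]            # right half-array factor
    r_sum, r_diff = np.abs(left + right), np.abs(left - right)
    return np.abs(r_sum ** u - r_diff ** u) ** (1.0 / u)

pattern = hyperbeam_pattern()
print(20 * np.log10(pattern.max() / np.median(pattern)))  # rough peak-to-sidelobe margin in dB
```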
Comparison of weighting techniques for acoustic full waveform inversion
NASA Astrophysics Data System (ADS)
Jeong, Gangwon; Hwang, Jongha; Min, Dong-Joo
2017-12-01
To reconstruct long-wavelength structures in full waveform inversion (FWI), wavefield-damping and weighting techniques have been used to synthesize and emphasize low-frequency data components in frequency-domain FWI. However, these methods have some weak points: the application of the wavefield-damping method to filtered data fails to synthesize reliable low-frequency data, and the optimization formula obtained by introducing the weighting technique is not theoretically complete, because it is not directly derived from the objective function. In this study, we address these weak points and show how to overcome them. We demonstrate that source estimation in FWI using damped wavefields fails when the data used in the FWI process do not satisfy the causality condition. This phenomenon occurs when a non-causal filter is applied to the data. We overcome this limitation by designing a causal filter. We also modify the conventional weighting technique so that its optimization formula is directly derived from the objective function, retaining its original characteristic of emphasizing the low-frequency data components. Numerical results show that the newly designed causal filter makes it possible to recover long-wavelength structures using low-frequency data components synthesized by damping wavefields in frequency-domain FWI, and that the proposed weighting technique enhances the inversion results.
Priya, Anshu; Hait, Subrata
2017-03-01
Waste electrical and electronic equipment (WEEE), or electronic waste (e-waste), is one of the fastest growing waste streams in the urban environment worldwide. The core component of the printed circuit board (PCB) in e-waste contains a complex array of metals in rich quantity, some of which are toxic to the environment and all of which are valuable resources. Therefore, the recycling of e-waste is important not only from the point of view of waste treatment but also for the recovery of metals for economic growth. Conventional approaches for recovery of metals from e-waste, viz. pyrometallurgical and hydrometallurgical techniques, are rapid and efficient, but cause secondary pollution and are economically unviable. Limitations of the conventional techniques have led to a shift towards biometallurgical techniques involving microbiological leaching of metals from e-waste in an eco-friendly manner. However, optimization of certain biotic and abiotic factors such as microbial species, pH, temperature, nutrients, and aeration rate affects the bioleaching process and can lead to profitable recovery of metals from e-waste. The present review provides a comprehensive assessment of metallurgical techniques for recovery of metals from e-waste, with special emphasis on the bioleaching process and the associated factors.
Andriani, Dian; Wresta, Arini; Atmaja, Tinton Dwi; Saepudin, Aep
2014-02-01
Biogas from anaerobic digestion of organic materials is a renewable energy resource that consists mainly of CH4 and CO2. Trace components that are often present in biogas are water vapor, hydrogen sulfide, siloxanes, hydrocarbons, ammonia, oxygen, carbon monoxide, and nitrogen. Considering that biogas is a clean and renewable form of energy that could well substitute for conventional energy sources (fossil fuels), the optimization of this type of energy becomes important. Various optimization techniques for the biogas production process have been developed, including pretreatment, biotechnological approaches, co-digestion, and the use of serial digesters. For some applications, a certain degree of biogas purity is needed. The presence of CO2 and other trace components in biogas can affect engine performance adversely. Reducing the CO2 content significantly upgrades the quality of biogas and enhances its calorific value. Upgrading is generally performed in order to meet the standards for use as vehicle fuel or for injection into the natural gas grid. Different methods for biogas upgrading are used; they differ in functioning, the necessary quality conditions of the incoming gas, and efficiency. Biogas can be purified of CO2 using pressure swing adsorption, membrane separation, or physical or chemical CO2 absorption. This paper reviews the various techniques which could be used to optimize biogas production as well as to upgrade biogas quality.
Digital mammography: physical principles and future applications.
Gambaccini, Mauro; Baldelli, Paola
2003-01-01
Mammography is currently considered the best tool for the detection of breast cancer, a pathology whose incidence is constantly increasing. To produce the radiological picture, a screen-film combination is conventionally used. One of the inherent limitations of the screen-film combination is the fact that the detection, display and storage processes are one and the same, making it impossible to separately optimize each stage. These limitations can be overcome with digital systems. In this work we evaluate the main characteristics of the digital detectors available on the market and compare the performance of digital and conventional systems. Digital mammography, thanks to the possibility of processing images, offers many potential advantages, among them the possibility of introducing the dual-energy technique, which combines two digital images obtained at two different energies to enhance the inherent contrast of pathologies by removing the uniform background. This technique was previously tested using a synchrotron monochromatic beam and a digital detector, and then the Senographe 2000D full-field digital system manufactured by GE Medical Systems. In this work we present preliminary results and the future applications of this technique.
Fundamentals and techniques of nonimaging optics research
NASA Astrophysics Data System (ADS)
Winston, R.; Ogallagher, J.
1987-07-01
Nonimaging Optics differs from conventional approaches in its relaxation of unnecessary constraints on energy transport imposed by the traditional methods for optimizing image formation and its use of more broadly based analytical techniques such as phase space representations of energy flow, radiative transfer analysis, thermodynamic arguments, etc. Based on these means, techniques for designing optical elements which approach and in some cases attain the maximum concentration permitted by the Second Law of Thermodynamics were developed. The most widely known of these devices are the family of Compound Parabolic Concentrators (CPC's) and their variants and the so called Flow-Line or trumpet concentrator derived from the geometric vector flux formalism developed under this program. Applications of these and other such ideal or near-ideal devices permits increases of typically a factor of four (though in some cases as much as an order of magnitude) in the concentration above that possible with conventional means. Present efforts can be classed into two main areas: (1) classical geometrical nonimaging optics, and (2) logical extensions of nonimaging concepts to the physical optics domain.
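For reference, the Second Law (étendue) bound on concentration that these designs approach can be stated compactly; assuming an acceptance half-angle $\theta_a$ and an exit medium of refractive index $n$, the standard 2-D (trough) and 3-D limits are:

```latex
C_{\max}^{\,2D} \;=\; \frac{n}{\sin\theta_a},
\qquad
C_{\max}^{\,3D} \;=\; \frac{n^{2}}{\sin^{2}\theta_a}.
```

Compound Parabolic Concentrators and the flow-line (trumpet) designs mentioned above are notable precisely because they approach these bounds, whereas imaging concentrators typically fall well short of them.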
Fundamentals and techniques of nonimaging optics research at the University of Chicago
NASA Astrophysics Data System (ADS)
Winston, R.; Ogallagher, J.
1986-11-01
Nonimaging Optics differs from conventional approaches in its relaxation of unnecessary constraints on energy transport imposed by the traditional methods for optimizing image formation and its use of more broadly based analytical techniques such as phase space representations of energy flow, radiative transfer analysis, thermodynamic arguments, etc. Based on these means, techniques for designing optical elements which approach and in some cases attain the maximum concentration permitted by the Second Law of Thermodynamics were developed. The most widely known of these devices are the family of Compound Parabolic Concentrators (CPC's) and their variants and the so called Flow-Line concentrator derived from the geometric vector flux formalism developed under this program. Applications of these and other such ideal or near-ideal devices permits increases of typically a factor of four (though in some cases as much as an order of magnitude) in the concentration above that possible with conventional means. In the most recent phase, our efforts can be classed into two main areas: (a) "classical" geometrical nonimaging optics; and (b) logical extensions of nonimaging concepts to the physical optics domain.
High Dynamic Velocity Range Particle Image Velocimetry Using Multiple Pulse Separation Imaging
Persoons, Tim; O’Donovan, Tadhg S.
2011-01-01
The dynamic velocity range of particle image velocimetry (PIV) is determined by the maximum and minimum resolvable particle displacement. Various techniques have extended the dynamic range; however, flows with a wide velocity range (e.g., impinging jets) still challenge PIV algorithms. A new technique is presented to increase the dynamic velocity range by over an order of magnitude. The multiple pulse separation (MPS) technique (i) records series of double-frame exposures with different pulse separations, (ii) processes the fields using conventional multi-grid algorithms, and (iii) yields a composite velocity field with a locally optimized pulse separation. A robust criterion determines the local optimum pulse separation, accounting for correlation strength and measurement uncertainty. Validation experiments are performed in an impinging jet flow, using laser-Doppler velocimetry as the reference measurement. The precision of mean flow and turbulence quantities is significantly improved compared to conventional PIV, due to the increase in dynamic range. In a wide range of applications, MPS PIV is a robust approach to increase the dynamic velocity range without restricting the vector evaluation methods. PMID:22346564
Structural design using equilibrium programming formulations
NASA Technical Reports Server (NTRS)
Scotti, Stephen J.
1995-01-01
Solutions to increasingly larger structural optimization problems are desired. However, computational resources are strained to meet this need. New methods will be required to solve increasingly larger problems. The present approaches to solving large-scale problems involve approximations for the constraints of structural optimization problems and/or decomposition of the problem into multiple subproblems that can be solved in parallel. An area of game theory, equilibrium programming (also known as noncooperative game theory), can be used to unify these existing approaches from a theoretical point of view (considering the existence and optimality of solutions), and be used as a framework for the development of new methods for solving large-scale optimization problems. Equilibrium programming theory is described, and existing design techniques such as fully stressed design and constraint approximations are shown to fit within its framework. Two new structural design formulations are also derived. The first new formulation is another approximation technique which is a general updating scheme for the sensitivity derivatives of design constraints. The second new formulation uses a substructure-based decomposition of the structure for analysis and sensitivity calculations. Significant computational benefits of the new formulations compared with a conventional method are demonstrated.
NASA Astrophysics Data System (ADS)
Pandiyan, Vimal Prabhu; Khare, Kedar; John, Renu
2017-09-01
A constrained optimization approach with faster convergence is proposed to recover the complex object field from near on-axis digital holography (DH). We subtract the DC from the hologram after recording the object beam and reference beam intensities separately. The DC-subtracted hologram is used to recover the complex object information using a constrained optimization approach with faster convergence. The recovered complex object field is back-propagated to the image plane using the Fresnel back-propagation method. The results reported with this approach provide high-resolution images compared with the conventional Fourier filtering approach, and the method is 25% faster than the previously reported constrained optimization approach due to the subtraction of two DC terms in the cost function. We demonstrate this approach in DH and digital holographic microscopy using the U.S. Air Force resolution target as the object to retrieve the high-resolution image without DC and twin-image interference. We also demonstrate the high potential of this technique on a transparent microelectrode patterned on indium tin oxide-coated glass, by reconstructing a high-resolution quantitative phase microscope image. We also demonstrate this technique by imaging yeast cells.
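A minimal sketch of the Fresnel back-propagation step mentioned above, using the common transfer-function implementation, is given below; the wavelength, pixel pitch, distance, and sign convention are illustrative assumptions, and the constrained-optimization recovery itself is not reproduced.

```python
import numpy as np

def fresnel_propagate(field, wavelength, pixel_pitch, z):
    """Propagate a complex field over distance z (negative z back-propagates)
    with the Fresnel transfer function H = exp(-1j*pi*lambda*z*(fx^2 + fy^2));
    the constant exp(1j*k*z) phase factor is dropped."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Illustrative use: back-propagate a recovered hologram field to the image plane.
recovered = np.ones((512, 512), dtype=complex)       # placeholder object field
image_plane = fresnel_propagate(recovered, 633e-9, 3.45e-6, z=-0.05)
```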
Heleno, Sandrina A; Diz, Patrícia; Prieto, M A; Barros, Lillian; Rodrigues, Alírio; Barreiro, Maria Filomena; Ferreira, Isabel C F R
2016-04-15
Ergosterol, a molecule with high commercial value, is the most abundant mycosterol in Agaricus bisporus L. To replace common conventional extraction techniques (e.g. Soxhlet), the present study reports the optimal ultrasound-assisted extraction conditions for ergosterol. After preliminary tests, the results showed that solvent, time and ultrasound power altered the extraction efficiency. Using response surface methodology, models were developed to investigate the favourable experimental conditions that maximize the extraction efficiency. All statistical criteria demonstrated the validity of the proposed models. Overall, ultrasound-assisted extraction with ethanol at 375 W for 15 min proved to be as efficient as Soxhlet extraction, yielding 671.5 ± 0.5 mg ergosterol/100 g dw. However, with n-hexane, extracts with higher purity (mg ergosterol/g extract) were obtained. Finally, removal of the saponification step was proposed, which simplifies the extraction process and makes it more feasible for industrial transfer. Copyright © 2015 Elsevier Ltd. All rights reserved.
Wagner, James M; Alper, Hal S
2016-04-01
Coupling the tools of synthetic biology with traditional molecular genetic techniques can enable the rapid prototyping and optimization of yeast strains. While the era of yeast synthetic biology began in the well-characterized model organism Saccharomyces cerevisiae, it is swiftly expanding to include non-conventional yeast production systems such as Hansenula polymorpha, Kluyveromyces lactis, Pichia pastoris, and Yarrowia lipolytica. These yeasts already have roles in the manufacture of vaccines, therapeutic proteins, food additives, and biorenewable chemicals, but recent synthetic biology advances have the potential to greatly expand and diversify their impact on biotechnology. In this review, we summarize the development of synthetic biological tools (including promoters and terminators) and enabling molecular genetics approaches that have been applied in these four promising alternative biomanufacturing platforms. An emphasis is placed on synthetic parts and genome editing tools. Finally, we discuss examples of synthetic tools developed in other organisms that can be adapted or optimized for these hosts in the near future. Copyright © 2015 Elsevier Inc. All rights reserved.
Fractional Programming for Communication Systems—Part I: Power Control and Beamforming
NASA Astrophysics Data System (ADS)
Shen, Kaiming; Yu, Wei
2018-05-01
This two-part paper explores the use of FP in the design and optimization of communication systems. Part I of this paper focuses on FP theory and on solving continuous problems. The main theoretical contribution is a novel quadratic transform technique for tackling the multiple-ratio concave-convex FP problem--in contrast to conventional FP techniques that mostly can only deal with the single-ratio or the max-min-ratio case. Multiple-ratio FP problems are important for the optimization of communication networks, because system-level design often involves multiple signal-to-interference-plus-noise ratio terms. This paper considers the applications of FP to solving continuous problems in communication system design, particularly for power control, beamforming, and energy efficiency maximization. These application cases illustrate that the proposed quadratic transform can greatly facilitate the optimization involving ratios by recasting the original nonconvex problem as a sequence of convex problems. This FP-based problem reformulation gives rise to an efficient iterative optimization algorithm with provable convergence to a stationary point. The paper further demonstrates close connections between the proposed FP approach and other well-known algorithms in the literature, such as the fixed-point iteration and the weighted minimum mean-square-error beamforming. The optimization of discrete problems is discussed in Part II of this paper.
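The single-ratio case of the quadratic transform described above can be stated compactly (the multiple-ratio case applies the same substitution to each ratio); for concave $A(x)\ge 0$ and convex $B(x)>0$:

```latex
\max_{x}\ \frac{A(x)}{B(x)}
\quad\Longleftrightarrow\quad
\max_{x,\;y}\ 2y\sqrt{A(x)} \;-\; y^{2}B(x),
\qquad
y^{\star}=\frac{\sqrt{A(x)}}{B(x)} .
```

Holding $y$ fixed makes the problem in $x$ concave, so the method alternates between the closed-form update of $y$ and a convex subproblem in $x$; this alternation is the iterative scheme with provable convergence to a stationary point referred to in the abstract.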
An optimal algorithm for reconstructing images from binary measurements
NASA Astrophysics Data System (ADS)
Yang, Feng; Lu, Yue M.; Sbaiz, Luciano; Vetterli, Martin
2010-01-01
We have studied a camera with a very large number of binary pixels, referred to as the gigavision camera [1] or the gigapixel digital film camera [2, 3]. Potential advantages of this new camera design include improved dynamic range, thanks to its logarithmic sensor response curve, and reduced exposure time in low light conditions, due to its highly sensitive photon detection mechanism. We use a maximum likelihood estimator (MLE) to reconstruct a high quality conventional image from the binary sensor measurements of the gigavision camera. We prove that when the threshold T is "1", the negative log-likelihood function is a convex function. Therefore, the optimal solution can be achieved using convex optimization. Based on filter bank techniques, fast algorithms are given for computing the gradient and the multiplication of a vector and the Hessian matrix of the negative log-likelihood function. We show that with a minor change, our algorithm also works for estimating conventional images from multiple binary images. Numerical experiments with synthetic 1-D signals and images verify the effectiveness and quality of the proposed algorithm. Experimental results also show that estimation performance can be improved by increasing the oversampling factor or the number of binary images.
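The convexity result for threshold T = 1 mentioned above has a simple scalar illustration; the sketch below shows the per-pixel statistics under Poisson photon arrivals (closed-form MLE and convex negative log-likelihood), not the full spatial reconstruction algorithm of the paper.

```python
import numpy as np

def binary_pixel_mle(bits):
    """MLE of the mean photon count lambda at one binary pixel (threshold T=1):
    P(bit = 0) = exp(-lambda), so lambda_hat = -ln(1 - k/K) for k ones out of K."""
    bits = np.asarray(bits, dtype=float)
    k, K = bits.sum(), bits.size
    if k == K:                      # all ones: the MLE diverges; clip for illustration
        k = K - 0.5
    return -np.log(1.0 - k / K)

def neg_log_likelihood(lam, bits):
    """Convex negative log-likelihood in lambda for threshold T = 1."""
    bits = np.asarray(bits, dtype=float)
    return np.sum(bits * -np.log1p(-np.exp(-lam)) + (1 - bits) * lam)

rng = np.random.default_rng(0)
true_lam = 0.7
bits = rng.poisson(true_lam, size=10_000) >= 1
print(binary_pixel_mle(bits))       # close to 0.7
print(neg_log_likelihood(0.7, bits))
```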
A modified active appearance model based on an adaptive artificial bee colony.
Abdulameer, Mohammed Hasan; Sheikh Abdullah, Siti Norul Huda; Othman, Zulaiha Ali
2014-01-01
Active appearance model (AAM) is one of the most popular model-based approaches that have been extensively used to extract features by highly accurate modeling of human faces under various physical and environmental circumstances. However, in such an active appearance model, fitting the model to the original image is a challenging task. The state of the art shows that optimization methods can be applied to resolve this problem, although applying optimization brings its own difficulties. Hence, in this paper we propose an AAM-based face recognition technique which resolves the fitting problem of AAM by introducing a new adaptive artificial bee colony (ABC) algorithm. The adaptation increases the efficiency of fitting compared with the conventional ABC algorithm. We used three datasets in our experiments: the CASIA dataset, the property 2.5D face dataset, and the UBIRIS v1 images dataset. The results reveal that the proposed face recognition technique performs effectively in terms of accuracy of face recognition.
NASA Astrophysics Data System (ADS)
Bai, Wei-wei; Ren, Jun-sheng; Li, Tie-shan
2018-06-01
This paper explores a highly accurate identification modeling approach for ship maneuvering motion using full-scale trial data. A multi-innovation gradient iterative (MIGI) approach is proposed to optimize the distance metric of locally weighted learning (LWL), and a novel non-parametric modeling technique is developed for a nonlinear ship maneuvering system. The proposed method's advantages are as follows: first, it avoids the unmodeled dynamics and multicollinearity inherent to conventional parametric models; second, it eliminates over-learning or under-learning and obtains the optimal distance metric; and third, MIGI is not sensitive to the initial parameter values and requires less time during the training phase. These advantages result in a highly accurate mathematical modeling technique that can be conveniently implemented in applications. To verify the characteristics of this mathematical model, two examples are used as model platforms to study ship maneuvering.
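A minimal locally weighted learning predictor of the kind whose distance metric is tuned here can be sketched as follows; the Gaussian kernel bandwidth stands in for the metric optimized by the MIGI step, and the data are synthetic, so nothing below reproduces the paper's ship model.

```python
import numpy as np

def lwl_predict(X, y, x_query, h=1.0):
    """Locally weighted linear regression: Gaussian weights around the query
    point, then a weighted least-squares fit. The bandwidth h plays the role
    of the distance metric tuned by the MIGI step."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    d2 = np.sum((X - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * h**2))
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])       # add intercept column
    sw = np.sqrt(w)
    beta = np.linalg.lstsq(Xa * sw[:, None], y * sw, rcond=None)[0]
    return np.append(x_query, 1.0) @ beta

# Synthetic surrogate for a maneuvering response such as r' = f(r, delta)
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = 0.5 * X[:, 0] - 1.2 * X[:, 1] ** 3 + 0.01 * rng.normal(size=200)
print(lwl_predict(X, y, np.array([0.2, -0.3]), h=0.3))
```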
Saridakis, Emmanuel; Chayen, Naomi E.
2003-01-01
A systematic approach for improving protein crystals by growing them in the metastable zone using the vapor diffusion technique is described. This is a simple technique for optimization of crystallization conditions. Screening around known conditions is performed to establish a working phase diagram for the crystallization of the protein. Dilutions of the crystallization drops across the supersolubility curve into the metastable zone are then carried out as follows: the coverslips holding the hanging drops are transferred, after being incubated for some time at conditions normally giving many small crystals, over reservoirs at concentrations which normally yield clear drops. Fewer, much larger crystals are obtained when the incubation times are optimized, compared with conventional crystallization at similar conditions. This systematic approach has led to the structure determination of the light-harvesting protein C-phycocyanin to the highest-ever resolution of 1.45 Å. PMID:12547801
TU-AB-303-01: A Feasibility Study for Dynamic Adaptive Therapy of Non-Small Cell Lung Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, M; Phillips, M
2015-06-15
Purpose: To compare plans for NSCLC optimized using Dynamic Adaptive Therapy (DAT) with conventional IMRT optimization. DAT adapts plans based on changes in the target volume by using dynamic programming techniques to build expected changes into the optimization process. Information gathered during treatment, e.g. from CBCT, is incorporated into the optimization. Methods and materials: DAT is formulated using a stochastic control formalism, which minimizes the total expected number of tumor cells at the end of a treatment course subject to uncertainty inherent in the tumor response and organs-at-risk (OAR) dose constraints. This formulation allows for a non-stationary dose distribution as well as a non-stationary fractional dose as needed to achieve a series of optimal plans that are conformal to the tumor over time. Sixteen phantom cases with various sizes and locations of tumors, and OAR geometries, were generated. Each case was planned with DAT and conventional IMRT (60 Gy/30 fx). Tumor volume change over time was obtained using a daily MVCT-based, two-level cell population model. Monte Carlo simulations were performed for each treatment course to account for uncertainty in tumor response. The same OAR dose constraints were applied for both methods. The frequency of plan modification was varied among 1, 2, 5 (weekly), and 29 (daily). The final average tumor dose and OAR doses were compared to quantify the potential benefit of DAT. Results: The average tumor max, min, mean, and D95 resulting from DAT were 124.0–125.2%, 102.1–114.7%, 113.7–123.4%, and 102.0–115.9% (ranges depend on the frequency of plan modification) of those from conventional IMRT. Cord max, esophagus max, lung mean, heart mean, and unspecified tissue D05 resulting from DAT were 84–102.4%, 99.8–106.9%, 66.9–85.6%, 58.2–78.8%, and 85.2–94.0% of those from conventional IMRT. Conclusions: Significant tumor dose increase and OAR dose reduction, especially for parallel OARs with mean or dose-volume constraints, can be achieved using DAT.
NASA Astrophysics Data System (ADS)
Lin, Pei-Chun; Yu, Chun-Chang; Chen, Charlie Chung-Ping
2015-01-01
As one of the critical stages of a very large scale integration fabrication process, postexposure bake (PEB) plays a crucial role in determining the final three-dimensional (3-D) profiles and lessening the standing wave effects. However, the full 3-D chemically amplified resist simulation is not widely adopted during the postlayout optimization due to the long run-time and huge memory usage. An efficient simulation method is proposed to simulate the PEB while considering standing wave effects and resolution enhancement techniques, such as source mask optimization and subresolution assist features based on the Sylvester equation and Abbe-principal component analysis method. Simulation results show that our algorithm is 20× faster than the conventional Gaussian convolution method.
NASA Astrophysics Data System (ADS)
Giri Prasad, M. J.; Abhishek Raaj, A. S.; Rishi Kumar, R.; Gladson, Frank; M, Gautham
2016-09-01
The present study is concerned with resolving problems pertaining to conventional cutting fluids. Two samples of nano cutting fluids were prepared by dispersing 0.01 vol% of MWCNTs, and a mixture of 0.01 vol% of MWCNTs and 0.01 vol% of nano ZnO, in soluble oil. The thermophysical properties, such as kinematic viscosity, density and flash point, and the tribological properties of the prepared nano cutting fluid samples were experimentally investigated and compared with those of plain soluble oil. In addition, a milling process was carried out by varying the process parameters and applying the different samples of cutting fluids, and an attempt was made to determine the optimal cutting condition using the Taguchi optimization technique.
A variable-gain output feedback control design methodology
NASA Technical Reports Server (NTRS)
Halyo, Nesim; Moerder, Daniel D.; Broussard, John R.; Taylor, Deborah B.
1989-01-01
A digital control system design technique is developed in which the control system gain matrix varies with the plant operating point parameters. The design technique is obtained by formulating the problem as an optimal stochastic output feedback control law with variable gains. This approach provides a control theory framework within which the operating range of a control law can be significantly extended. Furthermore, the approach avoids the major shortcomings of the conventional gain-scheduling techniques. The optimal variable gain output feedback control problem is solved by embedding the Multi-Configuration Control (MCC) problem, previously solved at ICS. An algorithm to compute the optimal variable gain output feedback control gain matrices is developed. The algorithm is a modified version of the MCC algorithm improved so as to handle the large dimensionality which arises particularly in variable-gain control problems. The design methodology developed is applied to a reconfigurable aircraft control problem. A variable-gain output feedback control problem was formulated to design a flight control law for an AFTI F-16 aircraft which can automatically reconfigure its control strategy to accommodate failures in the horizontal tail control surface. Simulations of the closed-loop reconfigurable system show that the approach produces a control design which can accommodate such failures with relative ease. The technique can be applied to many other problems including sensor failure accommodation, mode switching control laws and super agility.
Electromagnetic interference modeling and suppression techniques in variable-frequency drive systems
NASA Astrophysics Data System (ADS)
Yang, Le; Wang, Shuo; Feng, Jianghua
2017-11-01
Electromagnetic interference (EMI) causes electromechanical damage to the motors and degrades the reliability of variable-frequency drive (VFD) systems. Unlike fundamental-frequency components in motor drive systems, high-frequency EMI noise, coupled with the parasitic parameters of the whole system, is difficult to analyze and reduce. In this article, EMI modeling techniques for different functional units in a VFD system, including induction motors, motor bearings, and rectifier-inverters, are reviewed and evaluated in terms of applied frequency range, model parameterization, and model accuracy. The EMI models for the motors are categorized based on modeling techniques and model topologies. Motor bearing and shaft models are also reviewed, and techniques that are used to eliminate bearing current are evaluated. Modeling techniques for conventional rectifier-inverter systems are also summarized. EMI noise suppression techniques, including passive filters, Wheatstone bridge balance, active filters, and optimized modulation, are reviewed and compared based on the VFD system models.
Supercritical tests of a self-optimizing, variable-Camber wind tunnel model
NASA Technical Reports Server (NTRS)
Levinsky, E. S.; Palko, R. L.
1979-01-01
A testing procedure was used in a 16-foot Transonic Propulsion Wind Tunnel which leads to optimum wing airfoil sections without stopping the tunnel for model changes. Being experimental, the optimum shapes obtained incorporate various three-dimensional and nonlinear viscous and transonic effects not included in analytical optimization methods. The method is a closed-loop, computer-controlled, interactive procedure and employs a Self-Optimizing Flexible Technology wing semispan model that conformally adapts the airfoil section at two spanwise control stations to maximize or minimize various prescribed merit functions subject to both equality and inequality constraints. The model, which employed twelve independent hydraulic actuator systems and flexible skins, was also used for conventional testing. Although six of seven optimizations attempted were at least partially convergent, further improvements in model skin smoothness and hydraulic reliability are required to make the technique fully operational.
A rapid method for optimization of the rocket propulsion system for single-stage-to-orbit vehicles
NASA Technical Reports Server (NTRS)
Eldred, C. H.; Gordon, S. V.
1976-01-01
A rapid analytical method for the optimization of rocket propulsion systems is presented for a vertical take-off, horizontal landing, single-stage-to-orbit launch vehicle. This method utilizes trade-offs between propulsion characteristics affecting flight performance and engine system mass. The performance results from a point-mass trajectory optimization program are combined with a linearized sizing program to establish vehicle sizing trends caused by propulsion system variations. The linearized sizing technique was developed for the class of vehicle systems studied herein. The specific examples treated are the optimization of nozzle expansion ratio and lift-off thrust-to-weight ratio to achieve either minimum gross mass or minimum dry mass. Assumed propulsion system characteristics are high chamber pressure, liquid oxygen and liquid hydrogen propellants, conventional bell nozzles, and the same fixed nozzle expansion ratio for all engines on a vehicle.
NASA Astrophysics Data System (ADS)
Crane, D. T.
2011-05-01
High-power-density, segmented, thermoelectric (TE) elements have been intimately integrated into heat exchangers, eliminating many of the loss mechanisms of conventional TE assemblies, including the ceramic electrical isolation layer. Numerical models comprising simultaneously solved, nonlinear, energy balance equations have been created to simulate these novel architectures. Both steady-state and transient models have been created in a MATLAB/Simulink environment. The models predict data from experiments in various configurations and applications over a broad range of temperature, flow, and current conditions for power produced, efficiency, and a variety of other important outputs. Using the validated models, devices and systems are optimized using advanced multiparameter optimization techniques. Devices optimized for particular steady-state operating conditions can then be dynamically simulated in a transient operating model. The transient model can simulate a variety of operating conditions including automotive and truck drive cycles.
On processing development for fabrication of fiber reinforced composite, part 2
NASA Technical Reports Server (NTRS)
Hou, Tan-Hung; Hou, Gene J. W.; Sheen, Jeen S.
1989-01-01
Fiber-reinforced composite laminates are used in many aerospace and automobile applications. The magnitudes and durations of the cure temperature and the cure pressure applied during the curing process have significant consequences for the performance of the finished product. The objective of this study is to exploit the potential of applying the optimization technique to the cure cycle design. Using the compression molding of a filled polyester sheet molding compound (SMC) as an example, a unified Computer Aided Design (CAD) methodology, consisting of three uncoupled modules (i.e., optimization, analysis and sensitivity calculations), is developed to systematically generate optimal cure cycle designs. Various optimization formulations for the cure cycle design are investigated. The uniformities in the distributions of the temperature and the degree of cure are compared with those resulting from conventional isothermal processing conditions with pre-warmed platens. Recommendations with regard to further research in the computerization of the cure cycle design are also addressed.
Adaptive optical microscope for brain imaging in vivo
NASA Astrophysics Data System (ADS)
Wang, Kai
2017-04-01
The optical heterogeneity of biological tissue imposes a major limitation on acquiring detailed structural and functional information deep in biological specimens using conventional microscopes. To restore optimal imaging performance, we developed an adaptive optical microscope based on a direct wavefront sensing technique. This microscope can reliably measure and correct aberrations induced by biological samples. We demonstrated its performance and application in structural and functional brain imaging in various animal models, including fruit fly, zebrafish and mouse.
Application of dynamic programming to control Khuzestan water resources system
Jamshidi, M.; Heidari, M.
1977-01-01
An approximate optimization technique based on discrete dynamic programming, called discrete differential dynamic programming (DDDP), is employed to obtain the near-optimal operation policies of a water resources system in the Khuzestan Province of Iran. The technique makes use of an initial nominal state trajectory for each state variable, and forms corridors around the trajectories. These corridors represent a set of subdomains of the entire feasible domain. Starting with such a set of nominal state trajectories, improvements in the objective function are sought within the corridors formed around them. This leads to a set of new nominal trajectories upon which more improvements may be sought. Since optimization is confined to a set of subdomains, considerable savings in memory and computer time are achieved over those of conventional dynamic programming. The Khuzestan water resources system considered in this study is located in southwest Iran, and consists of two rivers, three reservoirs, three hydropower plants, and three irrigable areas. Data and cost-benefit functions for the analysis were obtained either from the historical records or from similar studies. © 1977.
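The corridor search described above can be illustrated with a minimal Python sketch (not taken from the paper; the single-reservoir model, bounds, inflows, and benefit function below are invented placeholders, and the corridor is searched by brute force rather than the stagewise DP recursion): a nominal storage trajectory is perturbed only within a small corridor, the best trajectory found inside the corridor becomes the new nominal, and the process repeats until no further improvement is found.

```python
import itertools

# Toy single-reservoir problem (all numbers are illustrative placeholders).
T = 6                        # number of stages
inflow = [4, 5, 3, 6, 2, 4]  # inflow at each stage
s_min, s_max = 0.0, 20.0     # storage bounds
s0, sT = 10.0, 10.0          # fixed initial and final storage

def benefit(release):
    """Concave benefit of releasing water (placeholder objective)."""
    return release ** 0.5 if release >= 0 else -1e9

def corridor_value(traj):
    """Total benefit of a storage trajectory, heavily penalized if infeasible."""
    total = 0.0
    for t in range(T):
        release = traj[t] + inflow[t] - traj[t + 1]
        if release < 0 or not (s_min <= traj[t + 1] <= s_max):
            return -1e9
        total += benefit(release)
    return total

def dddp(nominal, delta=1.0, tol=1e-6, max_iter=50):
    """Improve a nominal trajectory by searching a +/- delta corridor around it."""
    best = list(nominal)
    best_val = corridor_value(best)
    for _ in range(max_iter):
        base = list(best)
        improved = False
        # Enumerate all trajectories inside the corridor around the current nominal;
        # real DDDP does this with a stagewise DP recursion, brute force is used here for clarity.
        for combo in itertools.product((-delta, 0.0, delta), repeat=T - 1):
            cand = [s0] + [base[t + 1] + combo[t] for t in range(T - 1)] + [sT]
            val = corridor_value(cand)
            if val > best_val + tol:
                best, best_val, improved = cand, val, True
        if not improved:
            break
    return best, best_val

nominal = [s0] * (T + 1)             # flat initial nominal trajectory
traj, val = dddp(nominal)
print("optimized storages:", [round(s, 2) for s in traj], "benefit:", round(val, 3))
```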
Linden, Ariel; Yarnold, Paul R
2016-12-01
Program evaluations often utilize various matching approaches to emulate the randomization process for group assignment in experimental studies. Typically, the matching strategy is implemented, and then covariate balance is assessed before estimating treatment effects. This paper introduces a novel analytic framework utilizing a machine learning algorithm called optimal discriminant analysis (ODA) for assessing covariate balance and estimating treatment effects, once the matching strategy has been implemented. This framework holds several key advantages over the conventional approach: application to any variable metric and number of groups; insensitivity to skewed data or outliers; and use of accuracy measures applicable to all prognostic analyses. Moreover, ODA accepts analytic weights, thereby extending the methodology to any study design where weights are used for covariate adjustment or more precise (differential) outcome measurement. One-to-one matching on the propensity score was used as the matching strategy. Covariate balance was assessed using standardized difference in means (conventional approach) and measures of classification accuracy (ODA). Treatment effects were estimated using ordinary least squares regression and ODA. Using empirical data, ODA produced results highly consistent with those obtained via the conventional methodology for assessing covariate balance and estimating treatment effects. When ODA is combined with matching techniques within a treatment effects framework, the results are consistent with conventional approaches. However, given that it provides additional dimensions and robustness to the analysis versus what can currently be achieved using conventional approaches, ODA offers an appealing alternative. © 2016 John Wiley & Sons, Ltd.
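For context, a minimal sketch of the conventional balance check mentioned above, the standardized difference in means on one matched covariate (the data, the covariate name, and the 0.1 threshold are illustrative assumptions, not values from the study):

```python
import numpy as np

def standardized_mean_difference(x_treated, x_control):
    """Standardized difference in means, the conventional covariate-balance metric."""
    m_t, m_c = np.mean(x_treated), np.mean(x_control)
    # pooled standard deviation of the two groups
    s_pool = np.sqrt((np.var(x_treated, ddof=1) + np.var(x_control, ddof=1)) / 2.0)
    return (m_t - m_c) / s_pool if s_pool > 0 else 0.0

# Illustrative matched samples for one covariate (e.g., age)
rng = np.random.default_rng(0)
age_treated = rng.normal(52, 10, 200)
age_control = rng.normal(51, 10, 200)

smd = standardized_mean_difference(age_treated, age_control)
print(f"SMD = {smd:.3f} -> {'balanced' if abs(smd) < 0.1 else 'imbalanced'} (0.1 rule of thumb)")
```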
Venegas-Vega, Carlos A.; Zepeda, Luis M.; Garduño-Zarazúa, Luz M.; Berumen, Jaime; Kofman, Susana; Cervantes, Alicia
2013-01-01
The use of conventional cytogenetic techniques in combination with fluorescent in situ hybridization (FISH) and single-nucleotide polymorphism (SNP) microarrays is necessary for the identification of cryptic rearrangements in the diagnosis of chromosomal syndromes. We report two siblings, a boy of 9 years and 9 months of age and his 7-year and 5-month-old sister, with the classic Wolf-Hirschhorn syndrome (WHS) phenotype. Using high-resolution GTG- and NOR-banding karyotypes, as well as FISH analysis, we characterized a pure 4p deletion in both sibs and a balanced rearrangement in their father, consisting of an insertion of 4p material within a nucleolar organizing region of chromosome 15. Copy number variant (CNV) analysis using SNP arrays showed that both siblings have a similar size of 4p deletion (~6.5 Mb). Our results strongly support the need for conventional cytogenetic and FISH analysis, as well as high-density microarray mapping, for the optimal characterization of the genetic imbalance in patients with WHS; parents must always be studied to identify cryptic balanced chromosomal rearrangements and provide adequate genetic counseling. PMID:23484094
NASA Astrophysics Data System (ADS)
Sharma, K.; Abdul Khudus, M. I. M.; Alam, S. U.; Bhattacharya, S.; Venkitesh, D.; Brambilla, G.
2018-01-01
Relative performance and detection limit of conventional, amplified, and gain-clamped cavity ring-down techniques (CRDT) in all-fiber configurations are compared experimentally for the first time. Refractive index measurement using evanescent field in tapered fibers is used as a benchmark for the comparison. The systematic optimization of a nested-loop configuration in gain-clamped CRDT is also discussed, which is crucial for achieving a constant gain in a CRDT experiment. It is found that even though conventional CRDT has the lowest standard error in ring-down time (Δτ), the value of ring-down time (τ) is very small, thus leading to poor detection limit. Amplified CRDT provides an improvement in τ, albeit with two orders of magnitude higher Δτ due to amplifier noise. The nested-loop configuration in gain-clamped CRDT helps in reducing Δτ by an order of magnitude as compared to amplified CRDT whilst retaining the improvement in τ. A detection limit of 1.03 × 10-4 RIU at refractive index of 1.322 with a 3 mm long and 4.5 μm diameter tapered fiber is demonstrated with the gain-clamped CRDT.
NASA Astrophysics Data System (ADS)
Nishikata, Daisuke; Ali, Mohammad Alimudin Bin Mohd; Hosoda, Kento; Matsumoto, Hiroshi; Nakamura, Kazuyuki
2018-04-01
A 36-bit × 32-entry fully digital ternary content addressable memory (TCAM) using the ratioless static random access memory (RL-SRAM) technology and fully complementary hierarchical-AND matching comparators (HAMCs) was developed. Since its fully complementary and digital operation enables the effect of device variabilities to be avoided, it can operate with a quite low supply voltage. A test chip incorporating conventional TCAM cells and the proposed 24-transistor ratioless TCAM (RL-TCAM) cells and HAMCs was developed using a 0.18 µm CMOS process. The minimum operating voltage of 0.25 V of the developed RL-TCAM, which is less than half of that of the conventional TCAM, was measured via the conventional CMOS push-pull output buffers with the level-shifting and flipping technique using optimized pull-up voltage and resistors.
Tölle, Pia; Köhler, Christof; Marschall, Roland; Sharifi, Monir; Wark, Michael; Frauenheim, Thomas
2012-08-07
The conventional polymer electrolyte membrane (PEM) materials for fuel cell applications strongly rely on temperature and pressure conditions for optimal performance. In order to expand the range of operating conditions of these conventional PEM materials, mesoporous functionalised SiO2 additives are developed. It has been demonstrated that these additives themselves achieve proton conductivities approaching those of conventional materials. However, the proton conduction mechanisms, and especially the factors influencing charge carrier mobility under different hydration conditions, are not well known and difficult to separate from concentration effects in experiments. This tutorial review highlights contributions of atomistic computer simulations to the basic understanding and eventual design of these materials. Some basic introduction to the theoretical and computational framework is provided to introduce the reader to the field; the techniques are in principle applicable to a wide range of other situations as well. Simulation results are directly compared to experimental data as far as possible.
NASA Technical Reports Server (NTRS)
Winfree, William P.; Zalameda, Joseph N.; Pergantis, Charles; Flanagan, David; Deschepper, Daniel
2009-01-01
The application of a noncontact air coupled acoustic heating technique is investigated for the inspection of advanced honeycomb composite structures. A weakness in the out-of-plane stiffness of the structure, caused by a delamination or core damage, allows for the coupling of acoustic energy, so this area will have a higher temperature than the surrounding area. Air coupled acoustic thermography (ACAT) measurements were made on composite sandwich structures with damage and were compared to conventional flash thermography. A vibrating plate model is presented to predict the optimal acoustic source frequency. Improvements to the measurement technique are also discussed.
Cyclical Annealing Technique To Enhance Reliability of Amorphous Metal Oxide Thin Film Transistors.
Chen, Hong-Chih; Chang, Ting-Chang; Lai, Wei-Chih; Chen, Guan-Fu; Chen, Bo-Wei; Hung, Yu-Ju; Chang, Kuo-Jui; Cheng, Kai-Chung; Huang, Chen-Shuo; Chen, Kuo-Kuang; Lu, Hsueh-Hsing; Lin, Yu-Hsin
2018-02-26
This study introduces a cyclical annealing technique that enhances the reliability of amorphous indium-gallium-zinc-oxide (a-IGZO) via-type structure thin film transistors (TFTs). By utilizing this treatment, negative gate-bias illumination stress (NBIS)-induced instabilities can be effectively alleviated. The cyclical annealing provides several cooling steps, which are exothermic processes that can form stronger ionic bonds. An additional advantage is that the total annealing time is much shorter than when using conventional long-term annealing. With the use of cyclical annealing, the reliability of the a-IGZO can be effectively optimized, and the shorter process time can increase fabrication efficiency.
Park, Jong Kang; Rowlands, Christopher J; So, Peter T C
2017-01-01
Temporal focusing multiphoton microscopy is a technique for performing highly parallelized multiphoton microscopy while still maintaining depth discrimination. While the conventional wide-field configuration for temporal focusing suffers from sub-optimal axial resolution, line scanning temporal focusing, implemented here using a digital micromirror device (DMD), can provide substantial improvement. The DMD-based line scanning temporal focusing technique dynamically trades off the degree of parallelization, and hence imaging speed, for axial resolution, allowing performance parameters to be adapted to the experimental requirements. We demonstrate this new instrument in calibration specimens and in biological specimens, including a mouse kidney slice.
Optimizing transformations of stencil operations for parallel cache-based architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bassetti, F.; Davis, K.
This paper describes a new technique for optimizing serial and parallel stencil- and stencil-like operations for cache-based architectures. The technique takes advantage of the semantic knowledge implicit in stencil-like computations. It is implemented as a source-to-source program transformation; because of its specificity it could not be expected of a conventional compiler. Empirical results demonstrate a uniform factor-of-two speedup. The experiments clearly show the benefits of this technique to be a consequence, as intended, of the reduction in cache misses. The test codes are based on a 5-point stencil obtained by the discretization of the Poisson equation and applied to a two-dimensional uniform grid using the Jacobi method as an iterative solver. Results are presented for a 1-D tiling for a single processor, and in parallel using a 1-D data partition. For the parallel case both blocking and non-blocking communication are tested. The same scheme of experiments has been performed for the 2-D tiling case; however, for the parallel case the 2-D partitioning is not discussed here, so the parallel case handled for 2-D is 2-D tiling with 1-D data partitioning.
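A minimal single-threaded sketch of the 1-D tiling idea for the 5-point Jacobi stencil described above (grid size, tile size, and right-hand side are illustrative; the paper's source-to-source transformation and parallel data partitioning are not reproduced, and the sketch only shows the blocking pattern, not the cache-level performance effect):

```python
import numpy as np

def jacobi_tiled(u, f, h, iters, tile=64):
    """5-point Jacobi sweeps for the Poisson equation; the interior rows are processed
    in 1-D tiles of `tile` rows, the blocking pattern exploited for cache reuse
    (the cache effect itself is not visible at the numpy level)."""
    n = u.shape[0]
    for _ in range(iters):
        new = u.copy()
        for i0 in range(1, n - 1, tile):          # 1-D tiling over rows
            i1 = min(i0 + tile, n - 1)
            new[i0:i1, 1:-1] = 0.25 * (
                u[i0 - 1:i1 - 1, 1:-1] + u[i0 + 1:i1 + 1, 1:-1] +
                u[i0:i1, :-2] + u[i0:i1, 2:] - h * h * f[i0:i1, 1:-1]
            )
        u = new
    return u

n, h = 256, 1.0 / 255
f = np.ones((n, n))                # right-hand side of the Poisson equation
u = np.zeros((n, n))               # boundary values fixed at zero
u = jacobi_tiled(u, f, h, iters=100)
print("center value after 100 sweeps:", float(u[n // 2, n // 2]))
```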
Backwards compatible high dynamic range video compression
NASA Astrophysics Data System (ADS)
Dolzhenko, Vladimir; Chesnokov, Vyacheslav; Edirisinghe, Eran A.
2014-02-01
This paper presents a two-layer CODEC architecture for high dynamic range video compression. The base layer contains the tone mapped video stream encoded with 8 bits per component, which can be decoded using conventional equipment. The base layer content is optimized for rendering on low dynamic range displays. The enhancement layer contains the image difference, in a perceptually uniform color space, between the result of the inverse tone mapped base layer content and the original video stream. Prediction of the high dynamic range content reduces the redundancy in the transmitted data while still preserving highlights and out-of-gamut colors. The perceptually uniform color space enables the use of standard rate-distortion optimization algorithms. We present techniques for efficient implementation and encoding of non-uniform tone mapping operators with low overhead in terms of bitstream size and number of operations. The transform representation is based on a human visual system model and is suitable for global and local tone mapping operators. The compression techniques include predicting the transform parameters from previously decoded frames and from already decoded data for the current frame. Different video compression techniques are compared: backwards compatible and non-backwards compatible using AVC and HEVC codecs.
Harris, C; Alcock, A; Trefan, L; Nuttall, D; Evans, S T; Maguire, S; Kemp, A M
2018-02-01
Bruising is a common abusive injury in children, and it is standard practice to image and measure bruises, yet there is no current standard for measuring bruise size consistently. We aim to identify the optimal method of measuring photographic images of bruises, including computerised measurement techniques. 24 children aged <11 years (mean age of 6.9, range 2.5-10 years) with a bruise were recruited from the community. Demographics and bruise details were recorded. Each bruise was measured in vivo using a paper measuring tape. Standardised conventional and cross polarized digital images were obtained. The diameters of the bruise images were measured by three computer aided measurement techniques: Image J (segmentation with Simple Interactive Object Extraction, maximum Feret diameter), the 'Circular Selection Tool' (Circle diameter), and the Photoshop 'ruler' software (Photoshop diameter). Inter and intra-observer effects were determined by two individuals repeating 11 electronic measurements, and relevant Intraclass Correlation Coefficients (ICCs) were used to establish reliability. Spearman's rank correlation was used to compare in vivo with computerised measurements; a comparison of measurement techniques across imaging modalities was conducted using Kolmogorov-Smirnov tests. Significance was set at p < 0.05 for all tests. Images were available for 38 bruises in vivo, with 48 bruises visible on cross polarized imaging and 46 on conventional imaging (some bruises interpreted as being single in vivo appeared to be multiple in digital images). Correlation coefficients were >0.5 for all techniques, with maximum Feret diameter and maximum Photoshop diameter on conventional images having the strongest correlation with in vivo measurements. There were significant differences between in vivo and computer-aided measurements, but none between different computer-aided measurement techniques. Overall, computer aided measurements appeared larger than in vivo. Inter- and intra-observer agreement was high for all maximum diameter measurements (ICCs > 0.7). Whilst there are minimal differences between measurements of images obtained, the most consistent results were obtained when conventional images, segmented by Image J software, were measured with a Feret diameter. This is therefore proposed as a standard for future research and forensic practice, with the proviso that all computer aided measurements appear larger than in vivo. Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
The application of artificial intelligence in the optimal design of mechanical systems
NASA Astrophysics Data System (ADS)
Poteralski, A.; Szczepanik, M.
2016-11-01
The paper is devoted to new computational techniques in mechanical optimization where one tries to study, model, analyze and optimize very complex phenomena, for which the more precise scientific tools of the past were incapable of giving low-cost and complete solutions. Soft computing methods differ from conventional (hard) computing in that, unlike hard computing, they are tolerant of imprecision, uncertainty, partial truth and approximation. The paper deals with an application of bio-inspired methods, like evolutionary algorithms (EA), artificial immune systems (AIS) and particle swarm optimizers (PSO), to optimization problems. Structures considered in this work are analyzed by the finite element method (FEM), the boundary element method (BEM) and by the method of fundamental solutions (MFS). The bio-inspired methods are applied to optimize the shape, topology and material properties of 2D, 3D and coupled 2D/3D structures, to optimize thermomechanical structures, to optimize parameters of composite structures modeled by the FEM, to optimize elastic vibrating systems, to identify the material constants of piezoelectric materials modeled by the BEM, and to identify parameters in an acoustics problem modeled by the MFS.
Mitigation of time-varying distortions in Nyquist-WDM systems using machine learning
NASA Astrophysics Data System (ADS)
Granada Torres, Jhon J.; Varughese, Siddharth; Thomas, Varghese A.; Chiuchiarelli, Andrea; Ralph, Stephen E.; Cárdenas Soto, Ana M.; Guerrero González, Neil
2017-11-01
We propose a machine learning-based nonsymmetrical demodulation technique relying on clustering to mitigate time-varying distortions derived from several impairments such as IQ imbalance, bias drift, phase noise and interchannel interference. Experimental results show that those impairments cause centroid movements in the received constellations seen in time-windows of 10k symbols in controlled scenarios. In our demodulation technique, the k-means algorithm iteratively identifies the cluster centroids in the constellation of the received symbols in short time windows by means of the optimization of decision thresholds for a minimum BER. We experimentally verified the effectiveness of this computationally efficient technique in multicarrier 16QAM Nyquist-WDM systems over 270 km links. Our nonsymmetrical demodulation technique outperforms the conventional QAM demodulation technique, reducing the OSNR requirement up to ∼0.8 dB at a BER of 1 × 10-2 for signals affected by interchannel interference.
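A minimal sketch of the clustering step described above (window length, noise level, and the simulated drift are illustrative assumptions; the Nyquist-WDM transmission chain and carrier recovery are not modeled): within each short window, k-means re-estimates the 16 constellation centroids, and symbols are decided by the nearest drifted centroid instead of fixed, symmetric grid thresholds.

```python
import numpy as np

def kmeans_centroids(symbols, init, iters=10):
    """Plain k-means on complex constellation points, seeded with the ideal centroids."""
    centroids = init.copy()
    labels = np.zeros(symbols.size, dtype=int)
    for _ in range(iters):
        # assign each received symbol to its nearest centroid
        d = np.abs(symbols[:, None] - centroids[None, :])
        labels = np.argmin(d, axis=1)
        # move each centroid to the mean of its cluster (keep the old one if empty)
        for k in range(len(centroids)):
            members = symbols[labels == k]
            if members.size:
                centroids[k] = members.mean()
    return centroids, labels

# Ideal 16QAM grid used as the initial centroids
levels = np.array([-3, -1, 1, 3], dtype=float)
ideal = np.array([i + 1j * q for i in levels for q in levels])

# Illustrative received window: random symbols + a slow rotation/offset drift + noise
rng = np.random.default_rng(1)
tx = ideal[rng.integers(0, 16, 10_000)]
noise = rng.normal(0, 0.3, 10_000) * np.exp(1j * rng.uniform(0, 2 * np.pi, 10_000))
rx = tx * np.exp(1j * 0.05) + (0.2 + 0.1j) + noise

centroids, labels = kmeans_centroids(rx, ideal.astype(complex))
decided = centroids[labels]   # decision regions follow the drifted centroids, not the fixed grid
print("first few recovered centroids:", np.round(centroids[:4], 2))
```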
2018-01-01
Sodium dodecyl sulfate electrophoresis (SDS) is a protein separation technique widely used, for example, prior to immunoblotting. Samples are usually prepared in a buffer containing both high concentrations of reducers and high concentrations of SDS. This conjunction renders the samples incompatible with common protein assays. By chelating the SDS, cyclodextrins make the use of simple, dye-based colorimetric assays possible. In this paper, we describe the optimization of the assay, focussing on the cyclodextrin/SDS ratio and the use of commercial assay reagents. The adaptation of the assay to a microplate format and using other detergent-containing conventional extraction buffers is also described. PMID:29641569
Experiences with Probabilistic Analysis Applied to Controlled Systems
NASA Technical Reports Server (NTRS)
Kenny, Sean P.; Giesy, Daniel P.
2004-01-01
This paper presents a semi-analytic method for computing frequency dependent means, variances, and failure probabilities for arbitrarily large-order closed-loop dynamical systems possessing a single uncertain parameter or with multiple highly correlated uncertain parameters. The approach will be shown to not suffer from the same computational challenges associated with computing failure probabilities using conventional FORM/SORM techniques. The approach is demonstrated by computing the probabilistic frequency domain performance of an optimal feed-forward disturbance rejection scheme.
Sathishkumar, Thiyagarajan; Baskar, Ramakrishnan; Aravind, Mohan; Tilak, Suryanarayanan; Deepthi, Sri; Bharathikumar, Vellalore Maruthachalam
2013-01-01
Flavonoids are exploited as antioxidants, antimicrobial, antithrombogenic, antiviral, and antihypercholesterolemic agents. Normally, conventional extraction techniques like Soxhlet or shake flask methods provide low yields of flavonoids with structural loss, and thereby these techniques may be considered inefficient. In this regard, an attempt was made to optimize the flavonoid extraction using an orthogonal design of experiment and subsequent structural elucidation by high-performance liquid chromatography-diode array detector-electron spray ionization/mass spectrometry (HPLC-DAD-ESI/MS) techniques. The shake flask method of flavonoid extraction was observed to provide a yield of 1.2 ± 0.13 (mg/g tissue). Of the two different solvents, namely ethanol and ethyl acetate, tried for the extraction optimization of flavonoids, ethanol (80.1 mg/g tissue) proved better than ethyl acetate (20.5 mg/g tissue). The optimal conditions of the extraction of flavonoids were found to be 85°C, 3 hours with a material ratio of 1 : 20, 75% ethanol, and 1 cycle of extraction. Seven phenolics, namely robinin, quercetin, rutin, sinapoyl-hexoside, dicaffeic acid, and two unknown compounds, were identified for the first time in the flowers of T. heyneana. The study also concluded that the L16 orthogonal design of experiment is a more effective method for the extraction of flavonoids than the shake flask method. PMID:25969771
Clinical Ion Beam Applications: Basic Properties, Application, Quality Control, Planning
NASA Astrophysics Data System (ADS)
Kraft, Gerhard
2009-03-01
Heavy-ion therapy using beam scanning and biological dose optimization is a novel technique of high-precision external radiotherapy. It yields a better perspective for tumor cure of radio-resistant tumors. However, heavy-ion therapy is not a general solution for all types of tumors. As compared to conventional radiotherapy, heavy-ion radiotherapy has the advantages of higher tumor dose, improved sparing of normal tissue in the entrance channel, a more precise concentration of the dose in the target volume with steeper gradients to the normal tissue, and a higher radiobiological effectiveness for tumors which are radio-resistant in conventional therapy. These properties make it possible to treat radio-resistant tumors with great success, including those in close vicinity to critical organs.
Wang, Junlong; Zhang, Ji; Wang, Xiaofang; Zhao, Baotang; Wu, Yiqian; Yao, Jian
2009-12-01
The conventional extraction methods for polysaccharides were time-consuming, laborious and energy-consuming. A microwave-assisted extraction (MAE) technique was employed for the extraction of Artemisia sphaerocephala polysaccharides (ASP), which is a traditional Chinese food. The extraction parameters were optimized by a Box-Behnken design. In the microwave heating process, a decrease in molecular weight (M(w)) was detected in SEC-LLS measurement. A d(f) value of 2.85 indicated that ASP obtained using MAE exhibited a spherical conformation of branched clusters in aqueous solution. Furthermore, it showed stronger antioxidant activities compared with hot water extraction. The data obtained showed that the molecular weights played a more important role in antioxidant activities.
Optimal tracking and second order sliding power control of the DFIG wind turbine
NASA Astrophysics Data System (ADS)
Abdeddaim, S.; Betka, A.; Charrouf, O.
2017-02-01
In the present paper, an optimal operation of a grid-connected variable speed wind turbine equipped with a Doubly Fed Induction Generator (DFIG) is presented. The proposed cascaded nonlinear controller is designed to perform two main objectives. In the outer loop, a maximum power point tracking (MPPT) algorithm based on fuzzy logic theory is designed to permanently extract the optimal aerodynamic energy, whereas in the inner loop, a second order sliding mode control (2-SM) is applied to achieve smooth regulation of both stator active and reactive power quantities. The obtained simulation results show permanent tracking of the MPP regardless of the turbine power-speed slope; moreover, the proposed sliding mode control strategy presents attractive features such as chattering-free operation, compared to the conventional first order sliding technique (1-SM).
NASA Astrophysics Data System (ADS)
Ebrahimi, Mehdi; Jahangirian, Alireza
2017-12-01
An efficient strategy is presented for global shape optimization of wing sections with a parallel genetic algorithm. Several computational techniques are applied to increase the convergence rate and the efficiency of the method. A variable fidelity computational evaluation method is applied in which the expensive Navier-Stokes flow solver is complemented by an inexpensive multi-layer perceptron neural network for the objective function evaluations. A population dispersion method that consists of two phases, exploration and refinement, is developed to improve the convergence rate and the robustness of the genetic algorithm. Owing to the nature of the optimization problem, a parallel framework based on the master/slave approach is used. The outcomes indicate that the method is able to find the global optimum with significantly lower computational time in comparison to the conventional genetic algorithm.
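A minimal sketch of the variable-fidelity idea described above (the objective, the surrogate, and the genetic operators below are toy placeholders, not the paper's Navier-Stokes solver, perceptron network, or population dispersion method): the cheap surrogate screens every candidate, and only the most promising fraction receives an expensive evaluation each generation.

```python
import numpy as np

rng = np.random.default_rng(2)

def expensive_eval(x):
    """Stand-in for the costly high-fidelity objective (cheap here, expensive in practice)."""
    return np.sum((x - 0.3) ** 2) + 0.1 * np.sin(10 * x).sum()

def surrogate_eval(x):
    """Stand-in for the inexpensive surrogate: fast but only approximate."""
    return np.sum((x - 0.3) ** 2)

def ga_variable_fidelity(dim=8, pop_size=40, gens=30, elite_frac=0.25):
    pop = rng.uniform(0.0, 1.0, (pop_size, dim))       # design variables in [0, 1]
    best_x, best_f = None, np.inf
    for _ in range(gens):
        # 1) screen the whole population with the cheap surrogate
        surr = np.array([surrogate_eval(x) for x in pop])
        order = np.argsort(surr)
        # 2) spend expensive evaluations only on the most promising fraction
        n_exact = max(2, int(elite_frac * pop_size))
        exact_idx = order[:n_exact]
        for i in exact_idx:
            f = expensive_eval(pop[i])
            if f < best_f:
                best_x, best_f = pop[i].copy(), f
        # 3) random parent selection among the elites, blend crossover, Gaussian mutation
        parents = pop[exact_idx]
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(0, n_exact, 2)]
            w = rng.uniform(0, 1, dim)
            child = w * a + (1 - w) * b + rng.normal(0, 0.05, dim)
            children.append(np.clip(child, 0.0, 1.0))
        pop = np.array(children)
    return best_x, best_f

x_best, f_best = ga_variable_fidelity()
print("best objective found:", round(f_best, 4))
```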
Joda, Tim; Lenherr, Patrik; Dedem, Philipp; Kovaltschuk, Irina; Bragger, Urs; Zitzmann, Nicola U
2017-10-01
The aim of this randomized controlled trial was to analyze implant impression techniques applying intraoral scanning (IOS) and the conventional method according to time efficiency, difficulty, and operator's preference. One hundred participants (n = 100) with diverse levels of dental experience were included and randomly assigned to Group A performing digital scanning (TRIOS Pod) first or Group B conducting conventional impression (open tray with elastomer) first, while the second method was performed consecutively. A customized maxillary model with a bone-level-type implant in the right canine position (FDI-position 13) was mounted on a phantom training unit realizing a standardized situation for all participants. The outcome parameter was time efficiency, and the potential influence of clinical experience, operator's perception of level of difficulty, applicability of each method, and subjective preferences were analyzed with Wilcoxon-Mann-Whitney and Kruskal-Wallis tests. Mean total work time varied between 5.01 ± 1.56 min (students) and 4.53 ± 1.34 min (dentists) for IOS, and between 12.03 ± 2.00 min (students) and 10.09 ± 1.15 min (dentists) for conventional impressions, with significant differences between the two methods. Neither assignment to Group A or B, nor gender, nor the number of impression-taking procedures influenced working time. Difficulty and applicability of IOS were perceived more favorably compared to conventional impressions, and the effectiveness of IOS was rated better by the majority of students (88%) and dentists (64%). While 76% of the students preferred IOS, 48% of the dentists favored conventional impressions, with 26% each preferring IOS or either technique. For single-implant sites, the quadrant-like intraoral scanning (IOS) was more time efficient than the conventional full-arch impression technique in a phantom head simulating standardized optimal conditions. A high level of acceptance for IOS was observed among students and dentists. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Application of Layered Perforation Profile Control Technique to Low Permeable Reservoir
NASA Astrophysics Data System (ADS)
Wei, Sun
2018-01-01
It is difficult to satisfy the demand of profile control of complex well sections and multi-layer reservoirs by adopting the conventional profile control technology; therefore, research is conducted on adjusting the injection-production profile with layered perforating parameter optimization, i.e., in the case of coproduction from multiple layers, the water absorption of each layer is adjusted by adjusting the perforating parameters, thus balancing the injection-production profile of the whole well section and ultimately enhancing the oil displacement efficiency of water flooding. By applying the relationship between oil-water phase percolation theory/perforating damage and capacity, a mathematical model for adjusting the injection-production profile with layered perforating parameter optimization is established, and perforating parameter optimization software is programmed. Different types of optimization design work are carried out according to different geological conditions and construction purposes by using the perforating optimization design software; furthermore, an application test is done for a low permeable reservoir, and the water injection profile tends to be balanced significantly after perforation with optimized parameters, thereby achieving a good application effect on site.
Trajectory Optimization of Electric Aircraft Subject to Subsystem Thermal Constraints
NASA Technical Reports Server (NTRS)
Falck, Robert D.; Chin, Jeffrey C.; Schnulo, Sydney L.; Burt, Jonathan M.; Gray, Justin S.
2017-01-01
Electric aircraft pose a unique design challenge in that they lack a simple way to reject waste heat from the power train. While conventional aircraft reject most of their excess heat in the exhaust stream, for electric aircraft this is not an option. To examine the implications of this challenge on electric aircraft design and performance, we developed a model of the electric subsystems for the NASA X-57 electric testbed aircraft. We then coupled this model with a model of simple 2D aircraft dynamics and used a Legendre-Gauss-Lobatto collocation optimal control approach to find optimal trajectories for the aircraft with and without thermal constraints. The results show that the X-57 heat rejection systems are well designed for maximum-range and maximum-efficiency flight, without the need to deviate from an optimal trajectory. Stressing the thermal constraints by reducing the cooling capacity or requiring faster flight has a minimal impact on performance, as the trajectory optimization technique is able to find flight paths which honor the thermal constraints with relatively minor deviations from the nominal optimal trajectory.
NASA Astrophysics Data System (ADS)
Zhang, Chunxi; Zhang, Zuchen; Song, Jingming; Wu, Chunxiao; Song, Ningfang
2015-03-01
A splicing parameter optimization method to increase the tensile strength of splicing joint between photonic crystal fiber (PCF) and conventional fiber is demonstrated. Based on the splicing recipes provided by splicer or fiber manufacturers, the optimal values of some major splicing parameters are obtained in sequence, and a conspicuous improvement in the mechanical strength of splicing joints between PCFs and conventional fibers is validated through experiments.
Demirel, Serdar; Attigah, Nicolas; Bruijnen, Hans; Ringleb, Peter; Eckstein, Hans-Henning; Fraedrich, Gustav; Böckler, Dittmar
2012-07-01
Carotid endarterectomy (CEA) is beneficial in patients with symptomatic carotid artery stenosis. However, randomized trials have not provided evidence concerning the optimal CEA technique, conventional or eversion. The outcome of 563 patients within the surgical randomization arm of the Stent-Protected Angioplasty versus Carotid Endarterectomy in Symptomatic Patients (SPACE-1) trial was analyzed by surgical technique subgroups: eversion endarterectomy versus conventional endarterectomy with patch angioplasty. The primary end point was ipsilateral stroke or death within 30 days after surgery. Secondary outcome events included perioperative adverse events and the 2-year risk of restenosis, stroke, and death. Both groups were similar in terms of demographic and other baseline clinical variables. Shunt frequency was higher in the conventional CEA group (65% versus 17%; P<0.0001). The risk of ipsilateral stroke or death within 30 days after surgery was significantly greater with eversion CEA (9% versus 3%; P=0.005). There were no statistically significant differences in the rate of perioperative secondary outcome events with the exception of a significantly higher risk of intraoperative ipsilateral stroke rate in the eversion CEA group (4% versus 0.3%; P=0.0035). The 2-year risk of ipsilateral stroke occurring after 30 days was significantly higher in the conventional CEA group (2.9% versus 0%; P=0.017). In patients with symptomatic carotid artery stenosis, conventional CEA appears to be associated with better periprocedural neurological outcome than eversion CEA. Eversion CEA, however, may be more effective for long-term prevention of ipsilateral stroke. These findings should be interpreted with caution noting the limitations of the post hoc, nonrandomized nature of the analysis.
A gEUD-based inverse planning technique for HDR prostate brachytherapy: Feasibility study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giantsoudi, D.; Department of Radiation Oncology, Francis H. Burr Proton Therapy Center, Boston, Massachusetts 02114; Baltas, D.
2013-04-15
Purpose: The purpose of this work was to study the feasibility of a new inverse planning technique based on the generalized equivalent uniform dose for image-guided high dose rate (HDR) prostate cancer brachytherapy in comparison to conventional dose-volume based optimization. Methods: The quality of 12 clinical HDR brachytherapy implants for prostate utilizing HIPO (Hybrid Inverse Planning Optimization) is compared with alternative plans, which were produced through inverse planning using the generalized equivalent uniform dose (gEUD). All the common dose-volume indices for the prostate and the organs at risk were considered together with radiobiological measures. The clinical effectiveness of the different dose distributions was investigated by comparing dose volume histogram and gEUD evaluators. Results: Our results demonstrate the feasibility of gEUD-based inverse planning in HDR brachytherapy implants for prostate. A statistically significant decrease in D10 and/or final gEUD values for the organs at risk (urethra, bladder, and rectum) was found while improving dose homogeneity or dose conformity of the target volume. Conclusions: Following the promising results of gEUD-based optimization in intensity modulated radiation therapy treatment optimization, as reported in the literature, the implementation of a similar model in HDR brachytherapy treatment plan optimization is suggested by this study. The potential of improved sparing of organs at risk was shown for various gEUD-based optimization parameter protocols, which indicates the ability of this method to adapt to the user's preferences.
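For reference, the generalized equivalent uniform dose used as the planning objective above has the standard closed form gEUD = (1/N * sum_i d_i^a)^(1/a); the short sketch below evaluates it for arbitrary voxel dose arrays (the dose values and the a parameters are illustrative, not the study's protocols).

```python
import numpy as np

def geud(dose, a):
    """Generalized equivalent uniform dose of a voxel dose array.
    Large positive a emphasizes hot spots (serial organs at risk);
    a = 1 gives the mean dose; large negative a emphasizes cold spots (targets)."""
    dose = np.asarray(dose, dtype=float)
    return (np.mean(dose ** a)) ** (1.0 / a)

# Illustrative voxel doses (Gy) for a target and an organ at risk
target_dose = np.array([9.5, 10.0, 10.2, 9.8, 10.1])
oar_dose = np.array([2.0, 3.5, 6.0, 1.0, 4.5])

print("target gEUD (a = -10):", round(geud(target_dose, -10), 2))  # penalizes cold spots
print("OAR gEUD    (a = +8): ", round(geud(oar_dose, 8), 2))       # penalizes hot spots
```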
A Carrier Estimation Method Based on MLE and KF for Weak GNSS Signals.
Zhang, Hongyang; Xu, Luping; Yan, Bo; Zhang, Hua; Luo, Liyan
2017-06-22
Maximum likelihood estimation (MLE) has been researched for some acquisition and tracking applications of global navigation satellite system (GNSS) receivers and shows high performance. However, all current methods are derived and operated based on the sampling data, which results in a large computation burden. This paper proposes a low-complexity MLE carrier tracking loop for weak GNSS signals that processes the coherent integration results instead of the sampling data. First, the cost function of the MLE of signal parameters such as signal amplitude, carrier phase, and Doppler frequency is used to derive an MLE discriminator function. The optimal value of the cost function is searched iteratively by an efficient Levenberg-Marquardt (LM) method. Its performance, including the Cramér-Rao bound (CRB), dynamic characteristics and computation burden, is analyzed by numerical techniques. Second, an adaptive Kalman filter is designed for the MLE discriminator to obtain smooth estimates of carrier phase and frequency. The performance of the proposed loop, in terms of sensitivity, accuracy and bit error rate, is compared with conventional methods by Monte Carlo (MC) simulations both in pedestrian-level and vehicle-level dynamic circumstances. Finally, an optimal loop which combines the proposed method and the conventional method is designed to achieve the optimal performance both in weak and strong signal circumstances.
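A minimal sketch of the estimation step described above under a simplified signal model (the correlator model, window length, noise level, and the use of SciPy's Levenberg-Marquardt routine are assumptions for illustration; the paper's discriminator derivation, CRB analysis, and Kalman filter are not reproduced): amplitude, carrier phase, and Doppler are fitted to a window of coherent integration outputs by nonlinear least squares.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative coherent-integration (prompt correlator) outputs over one window.
# Model assumed here: z_k = A * exp(j*(phi + 2*pi*f*t_k)) + noise, a simplification
# of a full GNSS signal model; sample spacing, window length, and SNR are placeholders.
T_coh = 1e-3                                   # 1 ms coherent integration
t = np.arange(20) * T_coh
A_true, phi_true, f_true = 1.0, 0.4, 35.0      # amplitude, phase (rad), Doppler (Hz)
rng = np.random.default_rng(3)
z = A_true * np.exp(1j * (phi_true + 2 * np.pi * f_true * t))
z += rng.normal(0, 0.2, t.size) + 1j * rng.normal(0, 0.2, t.size)

def residuals(p):
    """Stacked real/imag residuals; Levenberg-Marquardt needs a real-valued vector."""
    A, phi, f = p
    model = A * np.exp(1j * (phi + 2 * np.pi * f * t))
    r = z - model
    return np.concatenate([r.real, r.imag])

# Levenberg-Marquardt search for the ML estimates, seeded with a coarse acquisition guess
sol = least_squares(residuals, x0=[0.5, 0.0, 30.0], method="lm")
A_hat, phi_hat, f_hat = sol.x
print(f"A = {A_hat:.3f}, phase = {phi_hat:.3f} rad, Doppler = {f_hat:.2f} Hz")
```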
INTERNAL LIMITING MEMBRANE PEELING IN MACULAR HOLE SURGERY; WHY, WHEN, AND HOW?
Chatziralli, Irini P; Theodossiadis, Panagiotis G; Steel, David H W
2018-05-01
To review the current rationale for internal limiting membrane (ILM) peeling in macular hole (MH) surgery and to discuss the evidence base behind why, when, and how surgeons peel the ILM. Review of the current literature. Pars plana vitrectomy is an effective treatment for idiopathic MH, and peeling of the ILM has been shown to improve closure rates and to prevent postoperative reopening. However, some authors argue against ILM peeling because it results in a number of changes in retinal structure and function and may not be necessary in all cases. Furthermore, the extent of ILM peeling optimally performed and the most favorable techniques to remove the ILM are uncertain. Several technique variations including ILM flaps, ILM scraping, and foveal sparing ILM peeling have been described as alternatives to conventional peeling in specific clinical scenarios. Internal limiting membrane peeling improves MH closure rates but can have several consequences on retinal structure and function. Adjuvants to aid peeling, instrumentation, technique, and experience may all alter the outcome. Hole size and other variables are important in assessing the requirement for peeling and potentially its extent. A variety of evolving alternatives to conventional peeling may improve outcomes and need further study.
Introduction of pre-etch deposition techniques in EUV patterning
NASA Astrophysics Data System (ADS)
Xiang, Xun; Beique, Genevieve; Sun, Lei; Labonte, Andre; Labelle, Catherine; Nagabhirava, Bhaskar; Friddle, Phil; Schmitz, Stefan; Goss, Michael; Metzler, Dominik; Arnold, John
2018-04-01
The thin nature of EUV (Extreme Ultraviolet) resist has posed significant challenges for etch processes. In particular, EUV patterning combined with conventional etch approaches suffers from loss of pattern fidelity in the form of line breaks. A typical conventional etch approach prevents the etch process from having sufficient resist margin to control the trench CD (Critical Dimension), minimize the LWR (Line Width Roughness), LER (Line Edge Roughness) and reduce the T2T (Tip-to-Tip). Pre-etch deposition increases the resist budget by adding additional material to the resist layer, thus enabling the etch process to explore a wider set of process parameters to achieve better pattern fidelity. Preliminary tests with pre-etch deposition resulted in blocked isolated trenches. In order to mitigate these effects, a cyclic deposition and etch technique is proposed. With optimization of deposition and etch cycle time as well as total number of cycles, it is possible to open the underlying layers with a beneficial over etch and simultaneously keep the isolated trenches open. This study compares the impact of no pre-etch deposition, one time deposition and cyclic deposition/etch techniques on 4 aspects: resist budget, isolated trench open, LWR/LER and T2T.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, B; Southern Medical University, Guangzhou, Guangdong; Tian, Z
Purpose: While compressed sensing-based cone-beam CT (CBCT) iterative reconstruction techniques have demonstrated tremendous capability of reconstructing high-quality images from undersampled noisy data, their long computation time still hinders wide application in routine clinic. The purpose of this study is to develop a reconstruction framework that employs modern consensus optimization techniques to achieve CBCT reconstruction on a multi-GPU platform for improved computational efficiency. Methods: Total projection data were evenly distributed to multiple GPUs. Each GPU performed reconstruction using its own projection data with a conventional total variation regularization approach to ensure image quality. In addition, the solutions from the GPUs were subject to a consistency constraint that they should be identical. We solved the optimization problem with all the constraints considered rigorously using an alternating direction method of multipliers (ADMM) algorithm. The reconstruction framework was implemented using OpenCL on a platform with two Nvidia GTX590 GPU cards, each with two GPUs. We studied the performance of our method and demonstrated its advantages through a simulation case with a NCAT phantom and an experimental case with a Catphan phantom. Results: Compared with the CBCT images reconstructed using the conventional FDK method with full projection datasets, our proposed method achieved comparable image quality with about one third of the projections. The computation time on the multi-GPU platform was ∼55 s and ∼35 s in the two cases, respectively, achieving a speedup factor of ∼3.0 compared with single-GPU reconstruction. Conclusion: We have developed a consensus ADMM-based CBCT reconstruction method which enables performing reconstruction on a multi-GPU platform. The achieved efficiency makes this method clinically attractive.
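A minimal sketch of the consensus structure described above, reduced to a toy least-squares problem on a single CPU (the data blocks, penalty parameter, and plain quadratic subproblems are placeholders; the TV-regularized CBCT subproblems, OpenCL kernels, and GPU distribution are not reproduced): each block keeps a local solution, and ADMM alternates local updates, a global averaging step, and dual updates until the local solutions agree.

```python
import numpy as np

# Toy consensus problem: each "GPU" b holds its own data block (A_b, y_b) and keeps a
# local image x_b; ADMM forces all x_b to agree with a global image z.
rng = np.random.default_rng(4)
n, blocks = 50, 4
x_true = rng.normal(0, 1, n)
A = [rng.normal(0, 1, (80, n)) for _ in range(blocks)]
y = [Ab @ x_true + rng.normal(0, 0.01, 80) for Ab in A]

rho = 1.0
x = [np.zeros(n) for _ in range(blocks)]       # local solutions (one per block/"GPU")
u = [np.zeros(n) for _ in range(blocks)]       # scaled dual variables
z = np.zeros(n)                                # global consensus image

for it in range(50):
    # local update: each block solves its own regularized least-squares subproblem
    for b in range(blocks):
        lhs = A[b].T @ A[b] + rho * np.eye(n)
        rhs = A[b].T @ y[b] + rho * (z - u[b])
        x[b] = np.linalg.solve(lhs, rhs)
    # consensus update: average of the local solutions plus duals
    z = np.mean([x[b] + u[b] for b in range(blocks)], axis=0)
    # dual update: accumulate the remaining consensus violation
    for b in range(blocks):
        u[b] += x[b] - z

print("consensus error vs. ground truth:", round(float(np.linalg.norm(z - x_true)), 4))
```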
Optimization and Validation of Rotating Current Excitation with GMR Array Sensors for Riveted
2016-09-16
… distribution. Simulation results, using both an optimized coil and a conventional coil, are generated using the finite element method (FEM) model. The signal magnitude for an optimized coil is seen to be … 4. Model Based Performance Analysis: A 3D finite element model (FEM) is used to analyze the performance of the optimized coil and …
Evaluation of phase-diversity techniques for solar-image restoration
NASA Technical Reports Server (NTRS)
Paxman, Richard G.; Seldin, John H.; Lofdahl, Mats G.; Scharmer, Goran B.; Keller, Christoph U.
1995-01-01
Phase-diversity techniques provide a novel observational method for overcoming the effects of turbulence and instrument-induced aberrations in ground-based astronomy. Two implementations of phase-diversity techniques that differ with regard to noise model, estimator, optimization algorithm, method of regularization, and treatment of edge effects are described. Reconstructions of solar granulation derived by applying these two implementations to common data sets are shown to yield nearly identical images. For both implementations, reconstructions from phase-diverse speckle data (involving multiple realizations of turbulence) are shown to be superior to those derived from conventional phase-diversity data (involving a single realization). Phase-diverse speckle reconstructions are shown to achieve near diffraction-limited resolution and are validated by internal and external consistency tests, including a comparison with a reconstruction using a well-accepted speckle-imaging method.
Zarejousheghani, Mashaalah; Fiedler, Petra; Möder, Monika; Borsdorf, Helko
2014-11-01
A novel approach for the selective extraction of organic target compounds from water samples has been developed using a mixed-bed solid phase extraction (mixed-bed SPE) technique. The molecularly imprinted polymer (MIP) particles are embedded in a network of silica gel to form a stable uniform porous bed. The capabilities of this method are demonstrated using atrazine as a model compound. In comparison to conventional molecularly imprinted solid phase extraction (MISPE), the proposed mixed-bed MISPE method in combination with gas chromatography-mass spectrometry (GC-MS) analysis enables more reproducible and efficient extraction performance. After optimization of operational parameters (polymerization conditions, bed matrix ingredients, polymer to silica gel ratio, pH of the sample solution, breakthrough volume plus washing and elution conditions), improved LODs (1.34 µg L-1 in comparison to 2.25 µg L-1 obtained using MISPE) and limits of quantification (4.5 µg L-1 for mixed-bed MISPE and 7.5 µg L-1 for MISPE) were observed for the analysis of atrazine. Furthermore, the relative standard deviations (RSDs) for atrazine at concentrations between 5 and 200 µg L-1 ranged between 1.8% and 6.3% compared to MISPE (3.5-12.1%). Additionally, the column-to-column reproducibility for the mixed-bed MISPE was significantly improved to 16.1%, compared with the 53% observed for MISPE. Due to the reduced bed-mass sorbent and at optimized conditions, the total amount of organic solvents required for conditioning, washing and elution steps was reduced from more than 25 mL for conventional MISPE to less than 2 mL for mixed-bed MISPE. Besides reduced organic solvent consumption, the total sample preparation time of the mixed-bed MISPE method relative to the conventional MISPE was reduced from more than 20 min to less than 10 min. The amount of organic solvent required for complete elution diminished from 3 mL (conventional MISPE) to less than 0.4 mL with the mixed-bed technique, which shows its inherent potential for online operation with an analytical instrument. In order to evaluate the selectivity and matrix effects of the developed mixed-bed MISPE method, it was applied as an extraction technique for atrazine from environmental wastewater and river water samples. Copyright © 2014 Elsevier B.V. All rights reserved.
Shielded cables with optimal braided shields
NASA Astrophysics Data System (ADS)
Homann, E.
1991-01-01
Extensive tests were done in order to determine what factors govern the design of braids with good shielding effectiveness. The results are purely empirical and relate to the geometrical relationships between the braid parameters. The influence of various parameters on the shape of the transfer impedance versus frequency curve were investigated step by step. It was found that the optical coverage had been overestimated in the past. Good shielding effectiveness results not from high optical coverage as such, but from the proper type of coverage, which is a function of the braid angle and the element width. These dependences were measured for the ordinary range of braid angles (20 to 40 degrees). They apply to all plaiting machines and all gages of braid wire. The design rules are largely the same for bright, tinned, silver-plated and even lacquered copper wires. A new type of braid, which has marked advantages over the conventional design, was proposed. With the 'mixed-element' technique, an optimal braid design can be specified on any plaiting machine, for any possible cable diameter, and for any desired angle. This is not possible for the conventional type of braid.
New evaluation parameter for wearable thermoelectric generators
NASA Astrophysics Data System (ADS)
Wijethunge, Dimuthu; Kim, Woochul
2018-04-01
Wearable devices constitute a key application area for thermoelectric devices. However, owing to new constraints in wearable applications, a few conventional device optimization techniques are not appropriate, and material evaluation parameters such as the figure of merit (zT) and power factor (PF) tend to be inadequate. We illustrated the incompleteness of zT and PF by performing simulations and considering different thermoelectric materials. The results indicate a weak correlation between device performance and zT and PF. In this study, we propose a new evaluation parameter, zTwearable, which is better suited for wearable applications compared to the conventional zT. Owing to size restrictions, gap-filler-based device optimization is extremely critical in wearable devices. For cases in which gap fillers are used, expressions for power, effective thermal conductivity (keff), and optimum load electrical ratio (mopt) are derived. According to the new parameter, the thermal conductivity of the material becomes much more critical. The proposed new evaluation parameter, namely zTwearable, is extremely useful in the selection of an appropriate thermoelectric material among various candidates prior to the commencement of the actual design process.
Maximizing the potential of direct aperture optimization through collimator rotation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milette, Marie-Pierre; Otto, Karl; Medical Physics, BC Cancer Agency-Vancouver Centre, Vancouver, British Columbia
Intensity-modulated radiation therapy (IMRT) treatment plans are conventionally produced by the optimization of fluence maps followed by a leaf sequencing step. An alternative to fluence based inverse planning is to optimize directly the leaf positions and field weights of multileaf collimator (MLC) apertures. This approach is typically referred to as direct aperture optimization (DAO). It has been shown that equivalent dose distributions may be generated that have substantially fewer monitor units (MU) and number of apertures compared to fluence based optimization techniques. Here we introduce a DAO technique with rotated apertures that we call rotating aperture optimization (RAO). The advantages of collimator rotation in IMRT have been shown previously and include higher fluence spatial resolution, increased flexibility in the generation of aperture shapes and less interleaf effects. We have tested our RAO algorithm on a complex C-shaped target, seven nasopharynx cancer recurrences, and one multitarget nasopharynx carcinoma patient. A study was performed in order to assess the capabilities of RAO as compared to fixed collimator angle DAO. The accuracy of fixed and rotated collimator aperture delivery was also verified. An analysis of the optimized treatment plans indicates that plans generated with RAO are as good as or better than DAO while maintaining a smaller number of apertures and MU than fluence based IMRT. Delivery verification results show that RAO is less sensitive to tongue and groove effects than DAO. Delivery time is currently increased due to the collimator rotation speed although this is a mechanical limitation that can be eliminated in the future.
A hot-cracking mitigation technique for welding high-strength aluminum alloy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Y.P.; Dong, P.; Zhang, J.
2000-01-01
A hot-cracking mitigation technique for gas tungsten arc welding (GTAW) of high-strength aluminum alloy 2024 is presented. The proposed welding technique incorporates a trailing heat sink (an intense cooling source) with respect to the welding torch. The development of the mitigation technique was based on both detailed welding process simulation using advanced finite element techniques and systematic laboratory experiments. The finite element methods were used to investigate the detailed thermomechanical behavior of the weld metal that undergoes the brittle temperature range (BTR) during welding. As expected, a tensile deformation zone within the material BTR region was identified behind the weld pool under conventional GTA welding process conditions for the aluminum alloy studied. To mitigate hot cracking, the tensile zone behind the weld pool must be eliminated or reduced to a satisfactory level if the weld metal hot ductility cannot be further improved. With detailed computational modeling, it was found that by the introduction of a trailing heat sink at some distance behind the welding arc, the tensile strain rate with respect to temperature in the zone encompassing the BTR region can be significantly reduced. A series of parametric studies were also conducted to derive optimal process parameters for the trailing heat sink. The experimental results confirmed the effectiveness of the trailing heat sink technique. With a proper implementation of the trailing heat sink method, hot cracking can be completely eliminated in welding aluminum alloy 2024 (AA 2024).
Aspergillopeptidase B, an alkaline protease in A. oryzae extracts, was obtained in highly purified form by conventional fractionation techniques. The enzyme is a compact protein of 17,900 m.w. with a neutral isoelectric point. It contains no S-containing amino acids, phosphorus or metal ions. It is composed of a single polypeptide chain with N-terminal glycine and C-terminal alanine residues. The protease activity toward casein is optimal at pH
Dynamic single sideband modulation for realizing parametric loudspeaker
NASA Astrophysics Data System (ADS)
Sakai, Shinichi; Kamakura, Tomoo
2008-06-01
A parametric loudspeaker, which presents remarkably narrow directivity compared with a conventional loudspeaker, was newly produced and examined. To operate the loudspeaker optimally, we digitally prototyped a single-sideband modulator based on the Weaver method and appropriate signal processing. The processing techniques are to change the carrier amplitude dynamically depending on the envelope of the audio signal, and then to apply the square root or fourth root to the carrier amplitude to improve input-output acoustic linearity. The usefulness of the present modulation scheme has been verified experimentally.
Saito, Masatoshi
2009-08-01
Dual-energy computed tomography (DECT) has the potential for measuring electron density distribution in a human body to predict the range of particle beams for treatment planning in proton or heavy-ion radiotherapy. However, thus far, a practical dual-energy method that can be used to precisely determine electron density for treatment planning in particle radiotherapy has not been developed. In this article, another DECT technique involving a balanced filter method using a conventional x-ray tube is described. For the spectral optimization of DECT using balanced filters, the author calculates beam-hardening error and air kerma required to achieve a desired noise level in electron density and effective atomic number images of a cylindrical water phantom with 50 cm diameter. The calculation enables the selection of beam parameters such as tube voltage, balanced filter material, and its thickness. The optimized parameters were applied to cases with different phantom diameters ranging from 5 to 50 cm for the calculations. The author predicts that the optimal combination of tube voltages would be 80 and 140 kV with Tb/Hf and Bi/Mo filter pairs for the 50-cm-diameter water phantom. When a single phantom calibration at a diameter of 25 cm was employed to cover all phantom sizes, maximum absolute beam-hardening errors were 0.3% and 0.03% for electron density and effective atomic number, respectively, over a range of diameters of the water phantom. The beam-hardening errors were 1/10 or less as compared to those obtained by conventional DECT, although the dose was twice that of the conventional DECT case. From the viewpoint of beam hardening and the tube-loading efficiency, the present DECT using balanced filters would be significantly more effective in measuring the electron density than the conventional DECT. Nevertheless, further developments of low-exposure imaging technology should be necessary as well as x-ray tubes with higher outputs to apply DECT coupled with the balanced filter method for clinical use.
Ihme, Matthias; Marsden, Alison L; Pitsch, Heinz
2008-02-01
A pattern search optimization method is applied to the generation of optimal artificial neural networks (ANNs). Optimization is performed using a mixed variable extension to the generalized pattern search method. This method offers the advantage that categorical variables, such as neural transfer functions and nodal connectivities, can be used as parameters in optimization. When used together with a surrogate, the resulting algorithm is highly efficient for expensive objective functions. Results demonstrate the effectiveness of this method in optimizing an ANN for the number of neurons, the type of transfer function, and the connectivity among neurons. The optimization method is applied to a chemistry approximation of practical relevance. In this application, temperature and a chemical source term are approximated as functions of two independent parameters using optimal ANNs. Comparison of the performance of optimal ANNs with conventional tabulation methods demonstrates equivalent accuracy with considerable savings in memory storage. The architecture of the optimal ANN for the approximation of the chemical source term consists of a fully connected feedforward network having four nonlinear hidden layers and 117 synaptic weights. An equivalent representation of the chemical source term using tabulation techniques would require a 500 x 500 grid point discretization of the parameter space.
A Modified Active Appearance Model Based on an Adaptive Artificial Bee Colony
Othman, Zulaiha Ali
2014-01-01
Active appearance model (AAM) is one of the most popular model-based approaches and has been extensively used to extract features by highly accurate modeling of human faces under various physical and environmental circumstances. However, in such an active appearance model, fitting the model to the original image is a challenging task. The state of the art shows that optimization methods can be applied to resolve this problem, but applying the optimization efficiently remains a common problem. Hence, in this paper we propose an AAM-based face recognition technique that resolves the fitting problem of AAM by introducing a new adaptive ABC algorithm. The adaptation increases the efficiency of fitting compared with the conventional ABC algorithm. We used three datasets in our experiments: the CASIA dataset, a property 2.5D face dataset, and the UBIRIS v1 image dataset. The results reveal that the proposed face recognition technique performs effectively in terms of face recognition accuracy. PMID:25165748
Integrator Windup Protection-Techniques and a STOVL Aircraft Engine Controller Application
NASA Technical Reports Server (NTRS)
KrishnaKumar, K.; Narayanaswamy, S.
1997-01-01
Integrators are included in the feedback loop of a control system to eliminate steady state errors in the commanded variables. The integrator windup problem arises if the control actuators encounter operational limits before the steady state errors are driven to zero by the integrator. The typical effects of windup are large system oscillations, high steady state error, and a delayed system response following the windup. In this study, methods to prevent integrator windup are examined to provide Integrator Windup Protection (IWP) for an engine controller of a Short Take-Off and Vertical Landing (STOVL) aircraft. A unified performance index is defined to optimize the performance of the Conventional Anti-Windup (CAW) and the Modified Anti-Windup (MAW) methods. A modified Genetic Algorithm search procedure with stochastic parameter encoding is implemented to obtain the optimal parameters of the CAW scheme. The advantages and drawbacks of the CAW and MAW techniques are discussed and recommendations are made for the choice of the IWP scheme, given some characteristics of the system.
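For readers unfamiliar with anti-windup, the sketch below shows one common conventional anti-windup form: a discrete PI loop whose integrator is frozen while the actuator is saturated. It is a generic illustration under assumed gains, limits, and plant dynamics, not the STOVL engine controller or the CAW/MAW schemes evaluated in the study.

```python
# Clamping-style anti-windup PI controller driving a toy first-order plant.
def pi_with_antiwindup(error, integ, kp=1.0, ki=0.5, dt=0.01, u_min=-1.0, u_max=1.0):
    integ_new = integ + ki * error * dt          # tentative integrator update
    u_unsat = kp * error + integ_new
    u = min(max(u_unsat, u_min), u_max)          # actuator limits
    if u != u_unsat and error * u_unsat > 0.0:
        integ_new = integ                        # freeze integrator while pushing further into saturation
    return u, integ_new

integ, y, setpoint = 0.0, 0.0, 1.0
for _ in range(500):
    u, integ = pi_with_antiwindup(setpoint - y, integ)
    y += 0.01 * (-y + 2.0 * u)                   # simple first-order plant step (assumed dynamics)
print(round(y, 3))
```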
Cache Energy Optimization Techniques For Modern Processors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mittal, Sparsh
2013-01-01
Modern multicore processors are employing large last-level caches, for example Intel's E7-8800 processor uses 24MB L3 cache. Further, with each CMOS technology generation, leakage energy has been dramatically increasing and hence, leakage energy is expected to become a major source of energy dissipation, especially in last-level caches (LLCs). The conventional schemes of cache energy saving either aim at saving dynamic energy or are based on properties specific to first-level caches, and thus these schemes have limited utility for last-level caches. Further, several other techniques require offline profiling or per-application tuning and hence are not suitable for product systems. In this book, we present novel cache leakage energy saving schemes for single-core and multicore systems; desktop, QoS, real-time and server systems. Also, we present cache energy saving techniques for caches designed with both conventional SRAM devices and emerging non-volatile devices such as STT-RAM (spin-torque transfer RAM). We present software-controlled, hardware-assisted techniques which use dynamic cache reconfiguration to configure the cache to the most energy efficient configuration while keeping the performance loss bounded. To profile and test a large number of potential configurations, we utilize low-overhead, micro-architecture components, which can be easily integrated into modern processor chips. We adopt a system-wide approach to save energy to ensure that cache reconfiguration does not increase energy consumption of other components of the processor. We have compared our techniques with state-of-the-art techniques and have found that our techniques outperform them in terms of energy efficiency and other relevant metrics. The techniques presented in this book have important applications in improving energy-efficiency of higher-end embedded, desktop, QoS, real-time, server processors and multitasking systems. This book is intended to be a valuable guide for both newcomers and veterans in the field of cache power management. It will help graduate students, CAD tool developers and designers in understanding the need of energy efficiency in modern computing systems. Further, it will be useful for researchers in gaining insights into algorithms and techniques for micro-architectural and system-level energy optimization using dynamic cache reconfiguration. We sincerely believe that the "food for thought" presented in this book will inspire the readers to develop even better ideas for designing "green" processors of tomorrow.
NASA Technical Reports Server (NTRS)
Hedges, L. M. (Editor)
1973-01-01
Detailed data are presented on phenolic-glass and phenolic-asbestos compounds, comparing the effect of compression molding without degassing to the effects of four variations of compression molding. These variations were designed to improve the elimination of entrapped volatiles and of the volatile products of the condensation reaction associated with the cure of phenolic resins. The variations involve conventional degassing methods as well as degassing by vacuum and directional heat flow methods. Detailed data are also presented on these same compounds, comparing the effect of changes in post-bake time and post-bake temperature for the five molding techniques.
Secure positioning technique based on encrypted visible light map for smart indoor service
NASA Astrophysics Data System (ADS)
Lee, Yong Up; Jung, Gillyoung
2018-03-01
Indoor visible light (VL) positioning systems for smart indoor services are negatively affected by both cochannel interference from adjacent light sources and VL reception position irregularity in the three-dimensional (3-D) VL channel. A secure positioning methodology based on a two-dimensional (2-D) encrypted VL map is proposed, implemented in prototypes of the specific positioning system, and analyzed based on performance tests. The proposed positioning technique enhances the positioning performance by more than 21.7% compared to the conventional method in real VL positioning tests. Further, the pseudonoise code is found to be the optimal encryption key for secure VL positioning for this smart indoor service.
Design and manufacturing of the CFRP lightweight telescope structure
NASA Astrophysics Data System (ADS)
Stoeffler, Guenter; Kaindl, Rainer
2000-06-01
Design of earthbound telescopes is normally based on conventional steel constructions. Several years ago, thermostable CFRP telescope and reflector structures were developed and manufactured for harsh terrestrial environments. Beyond thermostability, the airborne SOFIA telescope assembly (TA) requires an exceptionally high stiffness-to-mass ratio so that the structure meets performance requirements without exceeding the mass limitations imposed by the Boeing 747 SP aircraft. Integration into the aircraft additionally drives the design of the structure subassemblies. The thickness of the CFRP laminates, whether filament wound or prepreg manufactured, needs special attention and techniques to achieve high material quality according to aerospace requirements. Sequential shop assembly of the structure subassemblies minimizes the risk in assembling the TA. Design goals, optimization of layout and manufacturing techniques, and results are presented.
SLAM examination of solar cells and solar cell welds. [Scanning Laser Acoustic Microscope
NASA Technical Reports Server (NTRS)
Stella, P. M.; Vorres, C. L.; Yuhas, D. E.
1981-01-01
The scanning laser acoustic microscope (SLAM) has been evaluated for non-destructive examination of solar cells and interconnector bonds. Using this technique, it is possible to view through materials in order to reveal regions of discontinuity such as microcracks and voids. Of particular interest is the ability to evaluate, in a unique manner, the bonds produced by parallel gap welding. It is possible to not only determine the area and geometry of the bond between the tab and cell, but also to reveal any microcracks incurred during the welding. By correlating the SLAM results with conventional techniques of weld evaluation a more confident weld parameter optimization can be obtained.
Ötvös, Sándor B; Mándity, István M; Fülöp, Ferenc
2011-08-01
A simple and efficient flow-based technique is reported for the catalytic deuteration of several model nitrogen-containing heterocyclic compounds which are important building blocks of pharmacologically active materials. A continuous flow reactor was used in combination with on-demand pressure-controlled electrolytic D2 production. The D2 source was D2O, the consumption of which was very low. The experimental set-up allows the fine-tuning of pressure, temperature, and flow rate so as to determine the optimal conditions for the deuteration reactions. The described procedure lacks most of the drawbacks of the conventional batch deuteration techniques, and additionally is highly selective and reproducible.
Ruiz, J E; Paciornik, S; Pinto, L D; Ptak, F; Pires, M P; Souza, P L
2018-01-01
An optimized method of digital image processing to interpret quantum dot height measurements obtained by atomic force microscopy is presented. The method was developed by combining well-known digital image processing techniques and particle recognition algorithms. The properties of quantum dot structures strongly depend on dot height, among other features. Determination of their height is sensitive to small variations in the digital image processing parameters, which can generate misleading results. Comparing the results obtained with two image processing techniques - a conventional method and the new method proposed herein - with the data obtained by determining the height of quantum dots one by one within a fixed area showed that the optimized method leads to more accurate results. Moreover, the log-normal distribution, which is often used to represent natural processes, shows a better fit to the quantum dot height histogram obtained with the proposed method. Finally, the quantum dot heights obtained were used to calculate the predicted photoluminescence peak energies, which were compared with the experimental data. Again, a better match was observed when using the proposed method to evaluate the quantum dot heights. Copyright © 2017 Elsevier B.V. All rights reserved.
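The sketch below illustrates the general flavor of such a per-dot height measurement: threshold an AFM height map, label connected regions as dots, and record each dot's maximum height. The synthetic data, threshold value, and dot shapes are placeholders and do not reproduce the optimized processing chain of the paper.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
height_map = rng.normal(0.2, 0.05, (256, 256))            # synthetic AFM background (nm)
yy, xx = np.ogrid[-5:6, -5:6]
bump = 3.0 * np.exp(-(yy ** 2 + xx ** 2) / 8.0)           # idealized dot profile
for _ in range(50):                                        # scatter synthetic dots
    r, c = rng.integers(10, 246, size=2)
    height_map[r - 5:r + 6, c - 5:c + 6] += bump

mask = height_map > 1.0                                    # simple global threshold (assumed)
labels, n_dots = ndimage.label(mask)                       # particle recognition by connected components
dot_heights = ndimage.maximum(height_map, labels, index=range(1, n_dots + 1))
print(n_dots, float(np.mean(dot_heights)))
```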
Stalder, Aurelien F; Schmidt, Michaela; Quick, Harald H; Schlamann, Marc; Maderwald, Stefan; Schmitt, Peter; Wang, Qiu; Nadar, Mariappan S; Zenge, Michael O
2015-12-01
To integrate, optimize, and evaluate a three-dimensional (3D) contrast-enhanced sparse MRA technique with iterative reconstruction on a standard clinical MR system. Data were acquired using a highly undersampled Cartesian spiral phyllotaxis sampling pattern and reconstructed directly on the MR system with an iterative SENSE technique. Undersampling, regularization, and number of iterations of the reconstruction were optimized and validated based on phantom experiments and patient data. Sparse MRA of the whole head (field of view: 265 × 232 × 179 mm³) was investigated in 10 patient examinations. High-quality images with 30-fold undersampling, resulting in 0.7 mm isotropic resolution within 10 s acquisition, were obtained. After optimization of the regularization factor and of the number of iterations of the reconstruction, it was possible to reconstruct images with excellent quality within six minutes per 3D volume. Initial results of sparse contrast-enhanced MRA (CEMRA) in 10 patients demonstrated high-quality whole-head first-pass MRA for both the arterial and venous contrast phases. While sparse MRI techniques have not yet reached clinical routine, this study demonstrates the technical feasibility of high-quality sparse CEMRA of the whole head in a clinical setting. Sparse CEMRA has the potential to become a viable alternative where conventional CEMRA is too slow or does not provide sufficient spatial resolution. © 2014 Wiley Periodicals, Inc.
Optimal Design of an Automotive Exhaust Thermoelectric Generator
NASA Astrophysics Data System (ADS)
Fagehi, Hassan; Attar, Alaa; Lee, Hosung
2018-07-01
The consumption of energy continues to increase at an exponential rate, especially in conventional automobiles. Approximately 40% of the fuel supplied to a vehicle is lost as waste heat exhausted to the environment. The desire for improved fuel efficiency by recovering the exhaust waste heat in automobiles has therefore become an important subject. A thermoelectric generator (TEG) has the potential to convert exhaust waste heat into electricity and thereby improve fuel economy. The remarkable amount of research being conducted on TEGs indicates that this technology will have a bright future in terms of power generation. The current study discusses the optimal design of the automotive exhaust TEG. An experimental study was conducted to verify the model that used the ideal (standard) equations along with effective material properties. The model is reasonably well verified by the experimental work, mainly due to the utilization of the effective material properties. The thermoelectric module used in the experiment was then optimized using a developed optimal design theory (dimensionless analysis technique).
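As a toy illustration of the dimensionless-sweep idea mentioned above, the sketch below evaluates the output power of an idealized thermoelectric module over a range of load-to-internal resistance ratios and picks the maximizing ratio. All property values and temperatures are placeholders, and the simple resistive model is not the paper's full optimal design theory.

```python
import numpy as np

alpha = 0.05                    # effective Seebeck coefficient, V/K (placeholder)
R_int = 2.0                     # effective internal resistance, ohm (placeholder)
T_hot, T_cold = 500.0, 350.0    # junction temperatures, K (assumed)

m = np.linspace(0.1, 5.0, 500)                              # dimensionless load ratio R_load / R_int
power = (alpha * (T_hot - T_cold)) ** 2 * m / (R_int * (1.0 + m) ** 2)
print(m[np.argmax(power)])                                  # ~1.0: the classic maximum-power condition
```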
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lan Pengfei; Takahashi, Eiji J.; Midorikawa, Katsumi
2010-11-15
We present the optimization of the two-color synthesis method for generating an intense isolated attosecond pulse (IAP) in the multicycle regime. By mixing an infrared assistant pulse with a Ti:sapphire main pulse, we show that an IAP can be produced using a multicycle two-color pulse with a duration longer than 30 fs. We also discuss the influence of the carrier-envelope phase (CEP) and the relative intensity on the generation of IAPs. By optimizing the wavelength of the assistant field, IAP generation becomes insensitive to the CEP slip. Therefore, the optimized two-color method enables us to relax the requirements of pulse duration and easily produce the IAP with a conventional multicycle laser pulse. In addition, it enables us to markedly suppress the ionization of the harmonic medium. This is a major advantage for efficiently generating intense IAPs from a neutral medium by applying the appropriate phase-matching and energy-scaling techniques.
NASA Astrophysics Data System (ADS)
Darvishvand, Leila; Kamkari, Babak; Kowsary, Farshad
2018-03-01
In this article, a new hybrid method based on the combination of the genetic algorithm (GA) and an artificial neural network (ANN) is developed to optimize the design of three-dimensional (3-D) radiant furnaces. A 3-D irregularly shaped design body (DB) heated inside a 3-D radiant furnace is considered as a case study. Uniform thermal conditions on the DB surfaces are obtained by minimizing an objective function. An ANN, trained with data produced by applying the Monte Carlo method, is developed to predict the objective function value. The trained ANN is used in conjunction with the GA to find the optimal design variables. The results show that the computational time of the GA-ANN approach is significantly less than that of the conventional method. It is concluded that the integration of the ANN with the GA is an efficient technique for optimization of radiant furnaces.
SNR-optimized phase-sensitive dual-acquisition turbo spin echo imaging: a fast alternative to FLAIR.
Lee, Hyunyeol; Park, Jaeseok
2013-07-01
Phase-sensitive dual-acquisition single-slab three-dimensional turbo spin echo imaging was recently introduced, producing high-resolution isotropic cerebrospinal fluid attenuated brain images without long inversion recovery preparation. Despite the advantages, the weighted-averaging-based technique suffers from noise amplification resulting from different levels of cerebrospinal fluid signal modulations over the two acquisitions. The purpose of this work is to develop a signal-to-noise ratio-optimized version of the phase-sensitive dual-acquisition single-slab three-dimensional turbo spin echo. Variable refocusing flip angles in the first acquisition are calculated using a three-step prescribed signal evolution while those in the second acquisition are calculated using a two-step pseudo-steady state signal transition with a high flip-angle pseudo-steady state at a later portion of the echo train, balancing the levels of cerebrospinal fluid signals in both the acquisitions. Low spatial frequency signals are sampled during the high flip-angle pseudo-steady state to further suppress noise. Numerical simulations of the Bloch equations were performed to evaluate signal evolutions of brain tissues along the echo train and optimize imaging parameters. In vivo studies demonstrate that compared with conventional phase-sensitive dual-acquisition single-slab three-dimensional turbo spin echo, the proposed optimization yields 74% increase in apparent signal-to-noise ratio for gray matter and 32% decrease in imaging time. The proposed method can be a potential alternative to conventional fluid-attenuated imaging. Copyright © 2012 Wiley Periodicals, Inc.
Zhang, Guozhu; Xie, Changsheng; Zhang, Shunping; Zhao, Jianwei; Lei, Tao; Zeng, Dawen
2014-09-08
A combinatorial high-throughput temperature-programmed method to obtain the optimal operating temperature (OOT) of gas sensor materials is demonstrated here for the first time. A material library consisting of SnO2, ZnO, WO3, and In2O3 sensor films was fabricated by screen printing. Temperature-dependent conductivity curves were obtained by scanning this gas sensor library from 300 to 700 K in different atmospheres (dry air, formaldehyde, carbon monoxide, nitrogen dioxide, toluene and ammonia), giving the OOT of each sensor formulation as a function of the carrier and analyte gases. A comparative study of the temperature-programmed method and a conventional method showed good agreement in measured OOT.
Design of Life Extending Controls Using Nonlinear Parameter Optimization
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Holmes, Michael S.; Ray, Asok
1998-01-01
This report presents the conceptual development of a life extending control system where the objective is to achieve high performance and structural durability of the plant. A life extending controller is designed for a reusable rocket engine via damage mitigation in both the fuel and oxidizer turbines while achieving high performance for transient responses of the combustion chamber pressure and the O2/H2 mixture ratio. This design approach makes use of a combination of linear and nonlinear controller synthesis techniques and also allows adaptation of the life extending controller module to augment a conventional performance controller of a rocket engine. The nonlinear aspect of the design is achieved using nonlinear parameter optimization of a prescribed control structure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palazzi, Mauro; Orlandi, Ester; Bossi, Paolo
2009-07-01
Purpose: To report the outcome of a consecutive series of patients with nonmetastatic nasopharyngeal carcinoma (NPC), focusing on the impact of treatment-related factors. Methods and Materials: Between 2000 and 2006, 87 patients with NPC were treated with either conventional (two- or three-dimensional) radiotherapy (RT) or with intensity-modulated RT (IMRT). Of these patients, 81 (93%) received either concomitant chemotherapy (CHT) (24%) or both induction and concomitant CHT (69%). Stage was III in 36% and IV in 39% of patients. Outcomes in this study population were compared with those in the previous series of 171 patients treated during 1990 to 1999. Results: With a median follow-up of 46 months, actuarial rates at 3 years were the following: local control, 96%; local-regional control, 93%; distant control (DC), 90%; disease-free survival (DFS), 82%; overall survival, 90%. In Stage III to IV patients, distant control at 3 years was 56% in patients treated with concomitant CHT only and 92% in patients treated with both induction and concomitant CHT (p = 0.014). At multivariate analysis, histology, N-stage, RT technique, and total RT dose had the strongest independent impact on DFS (p < 0.05). Induction CHT had a borderline effect on DC (p = 0.07). Most dosimetric statistics were improved in the group of patients treated with IMRT compared with the conventional 3D technique. All outcome endpoints were substantially better in the study population compared with those in the previous series. Conclusions: Outcome of NPC has further improved in the study period compared with the previous decade, with a significant effect of RT technique optimization. The impact of induction CHT remains to be demonstrated in controlled trials.
Lyubimov, Artem Y; Uervirojnangkoorn, Monarin; Zeldin, Oliver B; Brewster, Aaron S; Murray, Thomas D; Sauter, Nicholas K; Berger, James M; Weis, William I; Brunger, Axel T
2016-06-01
Serial femtosecond crystallography (SFX) uses an X-ray free-electron laser to extract diffraction data from crystals not amenable to conventional X-ray light sources owing to their small size or radiation sensitivity. However, a limitation of SFX is the high variability of the diffraction images that are obtained. As a result, it is often difficult to determine optimal indexing and integration parameters for the individual diffraction images. Presented here is a software package, called IOTA, which uses a grid-search technique to determine optimal spot-finding parameters that can in turn affect the success of indexing and the quality of integration on an image-by-image basis. Integration results can be filtered using a priori information about the Bravais lattice and unit-cell dimensions and analyzed for unit-cell isomorphism, facilitating an improvement in subsequent data-processing steps.
An optimal general type-2 fuzzy controller for Urban Traffic Network.
Khooban, Mohammad Hassan; Vafamand, Navid; Liaghat, Alireza; Dragicevic, Tomislav
2017-01-01
The urban traffic network model is illustrated by state charts and an object diagram. However, these have limitations in showing the behavioral perspective of the traffic information flow. Consequently, a state space model is used to calculate the half-value waiting time of vehicles. In this study, a combination of general type-2 fuzzy logic sets and the Modified Backtracking Search Algorithm (MBSA) is used to control the traffic signal scheduling and phase succession so as to guarantee a smooth flow of traffic with the least waiting times and average queue length. The parameters of the input and output membership functions are optimized simultaneously by the novel heuristic algorithm MBSA. A comparison is made between the achieved results and those of optimal and conventional type-1 fuzzy logic controllers. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Engberg, Lovisa; Forsgren, Anders; Eriksson, Kjell; Hårdemark, Björn
2017-06-01
To formulate convex planning objectives of treatment plan multicriteria optimization with explicit relationships to the dose-volume histogram (DVH) statistics used in plan quality evaluation. Conventional planning objectives are designed to minimize the violation of DVH statistics thresholds using penalty functions. Although successful in guiding the DVH curve towards these thresholds, conventional planning objectives offer limited control of the individual points on the DVH curve (doses-at-volume) used to evaluate plan quality. In this study, we abandon the usual penalty-function framework and propose planning objectives that more closely relate to DVH statistics. The proposed planning objectives are based on mean-tail-dose, resulting in convex optimization. We also demonstrate how to adapt a standard optimization method to the proposed formulation in order to obtain a substantial reduction in computational cost. We investigated the potential of the proposed planning objectives as tools for optimizing DVH statistics through juxtaposition with the conventional planning objectives on two patient cases. Sets of treatment plans with differently balanced planning objectives were generated using either the proposed or the conventional approach. Dominance in the sense of better distributed doses-at-volume was observed in plans optimized within the proposed framework. The initial computational study indicates that the DVH statistics are better optimized and more efficiently balanced using the proposed planning objectives than using the conventional approach. © 2017 American Association of Physicists in Medicine.
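To make the central quantity concrete, the sketch below computes an upper mean-tail-dose: the average dose over the hottest v-fraction of a structure's voxels, which is convex in the dose vector. This is only an illustrative reading of the concept; the exact planning-objective formulation and the adapted optimization method of the paper are not reproduced.

```python
import numpy as np

def mean_upper_tail_dose(dose, v):
    """Average dose over the hottest fraction v (0 < v <= 1) of voxels."""
    d = np.sort(np.asarray(dose, dtype=float))[::-1]
    k = max(1, int(np.ceil(v * d.size)))
    return d[:k].mean()

dose = np.random.default_rng(1).normal(60.0, 3.0, 10_000)   # toy dose values, Gy
print(mean_upper_tail_dose(dose, 0.05))                      # relates to the near-maximum dose D5%
```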
Reddy, S Srikanth; Revathi, Kakkirala; Reddy, S Kranthikumar
2013-01-01
Conventional casting technique is time consuming when compared to accelerated casting technique. In this study, the marginal accuracy of castings fabricated using accelerated and conventional casting techniques was compared. Twenty wax patterns were fabricated, and the marginal discrepancy between the die and patterns was measured using an optical stereomicroscope. Ten wax patterns were used for conventional casting and the rest for accelerated casting. A nickel-chromium alloy was used for the casting. The castings were measured for marginal discrepancies and compared. Castings fabricated using the conventional casting technique showed less vertical marginal discrepancy than the castings fabricated by the accelerated casting technique, and the difference was statistically highly significant. Conventional casting technique produced better marginal accuracy when compared to accelerated casting. The vertical marginal discrepancy produced by the accelerated casting technique was well within the maximum clinical tolerance limits. Accelerated casting technique can be used to save lab time to fabricate clinical crowns with acceptable vertical marginal discrepancy.
Linear discriminant analysis based on L1-norm maximization.
Zhong, Fujin; Zhang, Jiashu
2013-08-01
Linear discriminant analysis (LDA) is a well-known dimensionality reduction technique, which is widely used for many purposes. However, conventional LDA is sensitive to outliers because its objective function is based on the distance criterion using L2-norm. This paper proposes a simple but effective robust LDA version based on L1-norm maximization, which learns a set of local optimal projection vectors by maximizing the ratio of the L1-norm-based between-class dispersion and the L1-norm-based within-class dispersion. The proposed method is theoretically proved to be feasible and robust to outliers while overcoming the singular problem of the within-class scatter matrix for conventional LDA. Experiments on artificial datasets, standard classification datasets and three popular image databases demonstrate the efficacy of the proposed method.
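As a hedged illustration of the criterion involved, the sketch below evaluates the ratio of L1 between-class dispersion to L1 within-class dispersion for a candidate projection vector and maximizes it with a generic derivative-free optimizer. The authors' dedicated iterative maximization procedure is not reproduced here; this only shows the objective being optimized, on synthetic data.

```python
import numpy as np
from scipy.optimize import minimize

def neg_l1_dispersion_ratio(w, X, y):
    """Negative ratio of L1 between-class to L1 within-class dispersion along w."""
    w = w / (np.linalg.norm(w) + 1e-12)
    mu = X.mean(axis=0)
    between, within = 0.0, 0.0
    for c in np.unique(y):
        Xc = X[y == c]
        between += Xc.shape[0] * abs((Xc.mean(axis=0) - mu) @ w)
        within += np.abs((Xc - Xc.mean(axis=0)) @ w).sum()
    return -between / (within + 1e-12)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 3)), rng.normal(2.0, 1.0, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
res = minimize(neg_l1_dispersion_ratio, x0=np.ones(3), args=(X, y), method="Nelder-Mead")
print(res.x / np.linalg.norm(res.x))     # local optimal projection direction
```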
Efficient Execution Methods of Pivoting for Bulk Extraction of Entity-Attribute-Value-Modeled Data
Luo, Gang; Frey, Lewis J.
2017-01-01
Entity-attribute-value (EAV) tables are widely used to store data in electronic medical records and clinical study data management systems. Before they can be used by various analytical (e.g., data mining and machine learning) programs, EAV-modeled data usually must be transformed into conventional relational table format through pivot operations. This time-consuming and resource-intensive process is often performed repeatedly on a regular basis, e.g., to provide a daily refresh of the content in a clinical data warehouse. Thus, it would be beneficial to make pivot operations as efficient as possible. In this paper, we present three techniques for improving the efficiency of pivot operations: 1) filtering out EAV tuples related to unneeded clinical parameters early on; 2) supporting pivoting across multiple EAV tables; and 3) conducting multi-query optimization. We demonstrate the effectiveness of our techniques through implementation. We show that our optimized execution method of pivoting using these techniques significantly outperforms the current basic execution method of pivoting. Our techniques can be used to build a data extraction tool to simplify the specification of and improve the efficiency of extracting data from the EAV tables in electronic medical records and clinical study data management systems. PMID:25608318
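A minimal sketch of the pivot operation the paper optimizes is shown below: EAV tuples are first filtered to the clinical parameters actually needed (the paper's first technique) and then pivoted into a conventional one-row-per-entity table. The column names and the use of pandas are illustrative assumptions, not the paper's implementation.

```python
import pandas as pd

eav = pd.DataFrame({
    "patient_id": [1, 1, 2, 2, 2],
    "attribute":  ["heart_rate", "temp_c", "heart_rate", "temp_c", "weight_kg"],
    "value":      [72, 36.8, 80, 37.2, 65.0],
})

needed = {"heart_rate", "temp_c"}                  # filter out unneeded clinical parameters early
wide = (eav[eav["attribute"].isin(needed)]
        .pivot(index="patient_id", columns="attribute", values="value")
        .reset_index())
print(wide)                                        # conventional relational (wide) layout
```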
Multilayer mounting enables long-term imaging of zebrafish development in a light sheet microscope.
Kaufmann, Anna; Mickoleit, Michaela; Weber, Michael; Huisken, Jan
2012-09-01
Light sheet microscopy techniques, such as selective plane illumination microscopy (SPIM), are ideally suited for time-lapse imaging of developmental processes lasting several hours to a few days. The success of this promising technology has mainly been limited by the lack of suitable techniques for mounting fragile samples. Embedding zebrafish embryos in agarose, which is common in conventional confocal microscopy, has resulted in severe growth defects and unreliable results. In this study, we systematically quantified the viability and mobility of zebrafish embryos mounted under more suitable conditions. We found that tubes made of fluorinated ethylene propylene (FEP) filled with low concentrations of agarose or methylcellulose provided an optimal balance between sufficient confinement of the living embryo in a physiological environment over 3 days and optical clarity suitable for fluorescence imaging. We also compared the effect of different concentrations of Tricaine on the development of zebrafish and provide guidelines for its optimal use depending on the application. Our results will make light sheet microscopy techniques applicable to more fields of developmental biology, in particular the multiview long-term imaging of zebrafish embryos and other small organisms. Furthermore, the refinement of sample preparation for in toto and in vivo imaging will promote other emerging optical imaging techniques, such as optical projection tomography (OPT).
NASA Astrophysics Data System (ADS)
Alabdulkarem, Abdullah
Liquefied natural gas (LNG) plants are energy intensive. As a result, the power plants operating these LNG plants emit high amounts of CO2. To mitigate global warming caused by the increase in atmospheric CO2, CO2 capture and sequestration (CCS) using amine absorption is proposed. However, the major challenge of implementing this CCS system is the associated power requirement, which increases power consumption by about 15-25%. Therefore, the main scope of this work is to tackle this challenge by minimizing CCS power consumption as well as that of the entire LNG plant through system integration and rigorous optimization. The power consumption of the LNG plant was reduced through improving the liquefaction process itself. In this work, a genetic algorithm (GA) was used to optimize a propane pre-cooled mixed-refrigerant (C3-MR) LNG plant modeled using HYSYS software. An optimization platform coupling Matlab with HYSYS was developed. New refrigerant mixtures were found, with savings in power consumption as high as 13%. LNG plant optimization with variable natural gas feed compositions was addressed, and the solution was obtained by applying robust optimization techniques, resulting in a robust refrigerant which can liquefy a range of natural gas feeds. The second approach for reducing the power consumption is process integration and waste heat utilization in the integrated CCS system. Four waste heat sources and six potential uses were uncovered and evaluated using HYSYS software. The developed models were verified against experimental data from the literature with good agreement. Net available power enhancement in one of the proposed CCS configurations is 16% more than in the conventional CCS configuration. To reduce the power needed to pressurize CO2 for injection into a well for enhanced oil recovery (EOR) applications, five CO2 pressurization methods were explored. New CO2 liquefaction cycles were developed and modeled using HYSYS software. One of the developed liquefaction cycles, using NH3 as a refrigerant, resulted in 5% less power consumption than the conventional multi-stage compression cycle. Finally, a new concept of providing the CO2 regeneration heat is proposed: using a heat pump to provide the regeneration heat as well as process heat and CO2 liquefaction heat. Seven configurations of heat pumps integrated with CCS were developed. One of the heat pumps consumes 24% less power than the conventional system, or 59% less total equivalent power demand than the conventional system with steam extraction and CO2 compression.
Hodgson, Jenny A; Kunin, William E; Thomas, Chris D; Benton, Tim G; Gabriel, Doreen
2010-11-01
Organic farming aims to be wildlife-friendly, but it may not benefit wildlife overall if much greater areas are needed to produce a given quantity of food. We measured the density and species richness of butterflies on organic farms, conventional farms and grassland nature reserves in 16 landscapes. Organic farms supported a higher density of butterflies than conventional farms, but a lower density than reserves. Using our data, we predict the optimal land-use strategy to maintain yield whilst maximizing butterfly abundance under different scenarios. Farming conventionally and sparing land as nature reserves is better for butterflies when the organic yield per hectare falls below 87% of conventional yield. However, if the spared land is simply extra field margins, organic farming is optimal whenever organic yields are over 35% of conventional yields. The optimal balance of land sparing and wildlife-friendly farming to maintain production and biodiversity will differ between landscapes. © 2010 Blackwell Publishing Ltd/CNRS.
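The sketch below mirrors the structure of the comparison described above with made-up numbers: for a fixed food target, it totals butterfly abundance when the land is farmed entirely organically versus farmed conventionally with the surplus land spared as reserve. Only the form of the calculation follows the abstract; all densities are hypothetical placeholders, not the study's measurements.

```python
def butterfly_totals(organic_relative_yield, d_organic, d_conventional, d_reserve):
    """Total butterflies for two ways of producing one 'conventional unit' of food."""
    area_if_organic = 1.0 / organic_relative_yield        # organic needs more land for the same yield
    farm_all_organic = d_organic * area_if_organic
    # Farm one unit of land conventionally and spare the remaining land as nature reserve.
    farm_and_spare = d_conventional * 1.0 + d_reserve * (area_if_organic - 1.0)
    return farm_all_organic, farm_and_spare

# Hypothetical butterfly densities per unit area for the three land uses.
print(butterfly_totals(0.80, d_organic=5.0, d_conventional=2.0, d_reserve=12.0))
```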
A simple blind placement of the left-sided double-lumen tubes.
Zong, Zhi Jun; Shen, Qi Ying; Lu, Yao; Li, Yuan Hai
2016-11-01
One-lung ventilation (OLV) has been commonly provided by using a double-lumen tube (DLT). Previous reports have indicated a high incidence of inappropriate DLT positioning with conventional maneuvers. After obtaining approval from the medical ethics committee of the First Affiliated Hospital of Anhui Medical University and written consent from patients, 88 adult patients of American Society of Anesthesiologists (ASA) physical status grade I or II, undergoing elective thoracic surgery requiring a left-sided DLT for OLV, were enrolled in this prospective, single-blind, randomized controlled study. Patients were randomly allocated to 1 of 2 groups: the simple maneuver group or the conventional maneuver group. The simple maneuver relies on partially inflating the bronchial balloon, recreating the effect of a carinal hook on the DLT to give an idea of orientation and depth. After the induction of anesthesia, the patients were intubated with a left-sided Robertshaw DLT using one of the 2 intubation techniques. After intubation of each DLT, an anesthesiologist used flexible bronchoscopy to evaluate the patient while the patient lay in a supine position. The number of optimal positions and the time required to place the DLT in the correct position were recorded. Time for the intubation of the DLT took 100 ± 16.2 seconds (mean ± SD) in the simple maneuver group and 95.1 ± 20.8 seconds in the conventional maneuver group. The difference was not statistically significant (P = 0.221). Time for fiberoptic bronchoscopy (FOB) took 22 ± 4.8 seconds in the simple maneuver group and was statistically faster than that in the conventional maneuver group (43.6 ± 23.7 seconds, P < 0.001). Nearly 98% of the 44 intubations in the simple maneuver group were considered to be in optimal position, while only 52% of the 44 intubations in the conventional maneuver group were in optimal position, and the difference was statistically significant (P < 0.001). This simple maneuver is more rapid and more accurate for positioning left-sided DLTs, and it may be substituted for FOB during positioning of a left-sided DLT when FOB is unavailable or inapplicable.
Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan
2009-02-01
The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search the optimal parameters of the MMSE-TRA-NR algorithms. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm that is well suited for problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of subjective tests were processed by using analysis of variance to justify the statistic significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise difference between the NR algorithms.
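The sketch below gives a rough feel for the two pieces involved: a two-parameter time-recursive averaging noise estimate and a simulated-annealing search over those parameters. The asymmetric smoothing rule and the mean-squared-error objective are toy stand-ins, not the regression-model objective or the exact MMSE-TRA-NR recursion optimized in the paper.

```python
import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(0)
noisy_power = rng.normal(0.0, 1.0, 4000) ** 2 + 0.5 * np.sin(np.arange(4000) / 50.0) ** 2
true_noise_power = 1.0                                   # E[x^2] of the unit-variance noise component

def tra_noise_estimate(power, alpha, beta):
    est, out = power[0], np.empty_like(power)
    for i, p in enumerate(power):
        a = alpha if p > est else beta                   # the two recursion parameters
        est = a * est + (1.0 - a) * p                    # time-recursive averaging update
        out[i] = est
    return out

def objective(params):
    est = tra_noise_estimate(noisy_power, *params)
    return float(np.mean((est - true_noise_power) ** 2))

result = dual_annealing(objective, bounds=[(0.80, 0.999), (0.80, 0.999)], maxiter=100, seed=1)
print(result.x)                                          # optimized recursion parameters
```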
Sulzinski, Michael A; Wasilewski, Melissa A; Farrell, James C; Glick, David L
2009-07-01
It is an extraordinary challenge to offer an undergraduate laboratory course in virology that teaches hands-on, relevant molecular biology techniques using nonpathogenic models of human virus detection. To our knowledge, no inexpensive kits or reagent sets exist that are appropriate for demonstrating real-time PCR (RT-PCR) in an undergraduate laboratory course in virology. Here we describe simple procedures for student exercises that demonstrate the PCR detection of an HIV target nucleic acid. Our procedures combine a commercially available kit for conventional PCR with a modification for RT-PCR using the same reagents in the kit, making it possible for an instructor with access to a LightCycler® instrument to implement a relevant student exercise on RT-PCR detection of HIV nucleic acid targets. This combination of techniques is useful for demonstrating and comparing conventional PCR amplification and detection by agarose gel electrophoresis with real-time PCR over a series of three laboratory periods. The series of laboratory periods is also used to provide the foundation for teaching the concept of PCR primer design, optimization of PCR detection systems, and introduction to nucleic acid queries using NCBI-BLAST to find and identify primers, amplicons, and other potential amplification targets within the HIV viral genome. The techniques were successfully implemented in the Biology 364 undergraduate virology course at the University of Scranton during the Fall 2008 semester. They are particularly targeted to students who intend to pursue either postgraduate technical employment or graduate studies in the molecular life sciences. Copyright © 2009 International Union of Biochemistry and Molecular Biology, Inc.
The Value Estimation of an HFGW Frequency Time Standard for Telecommunications Network Optimization
NASA Astrophysics Data System (ADS)
Harper, Colby; Stephenson, Gary
2007-01-01
The emerging technology of gravitational wave control is used to augment a communication system using a development roadmap suggested in Stephenson (2003) for applications emphasized in Baker (2005). In the present paper, consideration is given to the value of a High Frequency Gravitational Wave (HFGW) channel purely as a method of frequency and time reference distribution for use within conventional Radio Frequency (RF) telecommunications networks. Specifically, the native value of conventional telecommunications networks may be optimized by using an unperturbed frequency time standard (FTS) to (1) improve terminal navigation and Doppler estimation performance via improved time difference of arrival (TDOA) from a universal time reference, and (2) improve acquisition speed, coding efficiency, and dynamic bandwidth efficiency through the use of a universal frequency reference. A model utilizing a discounted cash flow technique provides an estimate of the additional value that HFGW FTS technology could bring to a mixed-technology HFGW/RF network. By applying a simple net present value analysis with supporting reference valuations to such a network, it is demonstrated that an HFGW FTS could create a sizable improvement within an otherwise conventional RF telecommunications network. Our conservative model establishes a low-side value estimate of approximately 50B USD Net Present Value for an HFGW FTS service, with reasonable potential high-side values at significant multiples of this low-side floor.
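For readers unfamiliar with the valuation machinery, the snippet below is a minimal discounted-cash-flow helper of the kind such an estimate rests on. The cash-flow stream and discount rate are invented placeholders, not the reference valuations used in the paper.

```python
def npv(rate, cash_flows):
    """Net present value of cash_flows[t] received at the end of year t+1."""
    return sum(cf / (1.0 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

# e.g. a placeholder stream of 5B USD/year of incremental network value over 20 years
print(npv(0.10, [5.0e9] * 20))
```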
Optimizing gene transfer to conventional outflow cells in living mouse eyes
Li, G; Gonzalez, P; Camras, LJ; Navarro, I; Qiu, J; Challa, P; Stamer, WD
2013-01-01
The mouse eye has physiological and genetic advantages for studying conventional outflow function. However, its small size and shallow anterior chamber present technical challenges to efficient intracameral delivery of genetic material to conventional outflow cells. The goal of this study was to optimize methods to overcome this technical hurdle without damaging ocular structures or compromising outflow function. Gene targeting was monitored by immunofluorescence microscopy after transduction of adenovirus encoding green fluorescent protein driven by a CMV promoter. Guided by a micromanipulator and stereomicroscope, virus was delivered intracamerally to anesthetized mice by bolus injection using a 33 gauge needle attached to a Hamilton syringe or by infusion with a glass micropipette connected to a syringe pump. The total number of particles introduced remained constant, while the volume of injected virus solution (3–10 µl) was varied for each method and infusion times of 3–40 min were tested. Outflow facility and intraocular pressure were monitored invasively using established techniques. Unlike bolus injections or slow infusions, introduction of virus intracamerally during rapid infusions (3 min) at any volume tested preferentially targeted trabecular meshwork and Schlemm's canal cells, with minimal transduction of neighboring cells. While infusions resulted in transient intraocular pressure spikes (commensurate with volume infused, Δ40–70 mmHg), eyes typically recovered within 60 minutes. Transduced eyes displayed normal outflow facility and tissue morphology 3–6 days after infusions. Taken together, fast infusion of virus solution in small volumes intracamerally is a novel and effective method to selectively deliver agents to conventional outflow cells in living mice. PMID:23337742
Optimal GENCO bidding strategy
NASA Astrophysics Data System (ADS)
Gao, Feng
Electricity industries worldwide are undergoing a period of profound upheaval. The conventional vertically integrated mechanism is being replaced by a competitive market environment. Generation companies have incentives to apply novel technologies to lower production costs, for example, Combined Cycle units. Economic dispatch with Combined Cycle units becomes a non-convex optimization problem, which is difficult if not impossible to solve by conventional methods. Several techniques are proposed here: Mixed Integer Linear Programming, a hybrid method, as well as Evolutionary Algorithms. Evolutionary Algorithms share a common mechanism, stochastic searching per generation. The stochastic property makes evolutionary algorithms robust and adaptive enough to solve a non-convex optimization problem. This research implements GA, EP, and PS algorithms for economic dispatch with Combined Cycle units, and makes a comparison with classical Mixed Integer Linear Programming. The electricity market equilibrium model not only helps the Independent System Operator/Regulator analyze market performance and market power, but also provides Market Participants the ability to build optimal bidding strategies based on microeconomic analysis. Supply Function Equilibrium (SFE) is attractive compared to traditional models. This research identifies a proper SFE model, which can be applied to a multiple period situation. The equilibrium condition using discrete time optimal control is then developed for fuel resource constraints. Finally, the research discusses the issues of multiple equilibria and mixed strategies, which are caused by the transmission network. Additionally, an advantage of the proposed model for merchant transmission planning is discussed. A market simulator is a valuable training and evaluation tool to assist sellers, buyers, and regulators to understand market performance and make better decisions. A traditional optimization model may not be enough to consider the distributed, large-scale, and complex energy market. This research compares the performance and searching paths of different artificial life techniques such as Genetic Algorithm (GA), Evolutionary Programming (EP), and Particle Swarm (PS), and looks for a proper method to emulate Generation Companies' (GENCOs) bidding strategies. After deregulation, GENCOs face risk and uncertainty associated with the fast-changing market environment. A profit-based bidding decision support system is critical for GENCOs to keep a competitive position in the new environment. Most past research does not pay special attention to the piecewise staircase characteristic of generator offer curves. This research proposes an optimal bidding strategy based on Parametric Linear Programming. The proposed algorithm is able to handle actual piecewise staircase energy offer curves. The proposed method is then extended to incorporate incomplete information based on Decision Analysis. Finally, the author develops an optimal bidding tool (GenBidding) and applies it to the RTS96 test system.
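To make the staircase-offer idea concrete, the sketch below clears a toy single-period market by dispatching piecewise-constant offer blocks against a fixed demand with an ordinary LP solver. All prices and block sizes are invented, and this is only background illustration, not the parametric linear programming bidding strategy developed in the dissertation.

```python
import numpy as np
from scipy.optimize import linprog

# Offer blocks as (price $/MWh, block size MW); two generators, two blocks each (hypothetical).
blocks = [(20.0, 50.0), (35.0, 50.0),     # generator 1
          (25.0, 60.0), (40.0, 40.0)]     # generator 2
prices = np.array([p for p, _ in blocks])
sizes = [s for _, s in blocks]
demand = 130.0

res = linprog(c=prices,                                   # minimize total as-bid cost
              A_eq=np.ones((1, len(blocks))), b_eq=[demand],
              bounds=[(0.0, s) for s in sizes],
              method="highs")
print(res.x, res.fun)                                     # cleared block quantities and total cost
```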
NASA Technical Reports Server (NTRS)
Chau, Savio; Vatan, Farrokh; Randolph, Vincent; Baroth, Edmund C.
2006-01-01
Future in-space propulsion systems for exploration programs will invariably require data collection from a large number of sensors. Consider the sensors needed to monitor the state of health of several vehicle systems, including the collection of structural health data, over a large area. This would include the fuel tanks, habitat structure, and science containment of systems required for Lunar, Mars, or deep space exploration. Such a system would consist of several hundred or even thousands of sensors. Conventional avionics system design will require these sensors to be connected to a few Remote Health Units (RHU), which are connected to robust micro flight computers through a serial bus. This results in a large mass of cabling and unacceptable weight. This paper first gives a survey of several techniques that may reduce the cabling mass for sensors. These techniques can be categorized into four classes: power line communication, serial sensor buses, compound serial buses, and wireless networks. The power line communication approach uses the power line to carry both power and data, so that the conventional data lines can be eliminated. The serial sensor bus approach reduces most of the cabling by connecting all the sensors with a single (or redundant) serial bus. Many standard buses for industrial control and sensor applications can support several hundred nodes but have not been space qualified. Conventional avionics serial buses such as the Mil-Std-1553B bus and IEEE 1394a are space qualified but can support only a limited number of nodes. The third approach is to combine avionics buses to increase their addressability. For wireless networks, the reliability, EMI/EMC, and flight qualification issues have to be addressed. Several wireless networks such as IEEE 802.11 and Ultra Wide Band are surveyed in this paper. The placement of sensors can also affect cable mass: excessive sensors increase the number of cables unnecessarily, while an insufficient number of sensors may not provide adequate coverage of the system. This paper also discusses an optimal technique to place and validate sensors.
Advances in Neutron Radiography: Application to Additive Manufacturing Inconel 718
Bilheux, Hassina Z; Song, Gian; An, Ke; ...
2016-01-01
Reactor-based neutron radiography is a non-destructive, non-invasive characterization technique that has been extensively used for engineering materials, such as inspection of components, evaluation of porosity, and in-operando observations of engineering parts. Neutron radiography has flourished at reactor facilities for more than four decades and is relatively new to accelerator-based neutron sources. Recent advances in neutron source and detector technologies, such as the Spallation Neutron Source (SNS) at the Oak Ridge National Laboratory (ORNL) in Oak Ridge, TN, and the microchannel plate (MCP) detector, respectively, enable new contrast mechanisms using the neutron scattering Bragg features for crystalline information such as average lattice strain, crystalline plane orientation, and identification of phases in a neutron radiograph. Additive manufacturing (AM) processes, or 3D printing, have recently become very popular and have a significant potential to revolutionize the manufacturing of materials by enabling new designs with complex geometries that are not feasible using conventional manufacturing processes. However, the technique lacks standards for process optimization and control compared to conventional processes. Residual stresses are a common occurrence in materials that are machined, rolled, heat treated, welded, etc., and have a significant impact on a component's mechanical behavior and durability. They may also arise during the 3D printing process, and defects such as internal cracks can propagate over time as the component relaxes after being removed from its build plate (the base plate utilized to print materials on). Moreover, since access to the AM material is possible only after the component has been fully manufactured, it is difficult to characterize the material for defects a priori to minimize expensive re-runs. Currently, validation of the AM process and materials is mainly through expensive trial-and-error experiments at the component level, whereas in conventional processes the level of confidence in predictive computational modeling is high enough to allow process and materials optimization through computational approaches. Thus, there is a clear need for non-destructive characterization techniques and for the establishment of processing-microstructure databases that can be used for developing and validating predictive modeling tools for AM.
Kellogg, Joshua J.; Wallace, Emily D.; Graf, Tyler N.; Oberlies, Nicholas H.; Cech, Nadja B.
2018-01-01
Metabolomics has emerged as an important analytical technique for multiple applications. The value of information obtained from metabolomics analysis depends on the degree to which the entire metabolome is present and the reliability of sample treatment to ensure reproducibility across the study. The purpose of this study was to compare methods of preparing complex botanical extract samples prior to metabolomics profiling. Two extraction methodologies, accelerated solvent extraction and a conventional solvent maceration, were compared using commercial green tea [Camellia sinensis (L.) Kuntze (Theaceae)] products as a test case. The accelerated solvent protocol was first evaluated to ascertain critical factors influencing extraction using a D-optimal experimental design study. The accelerated solvent and conventional extraction methods yielded similar metabolite profiles for the green tea samples studied. The accelerated solvent extraction yielded higher total amounts of extracted catechins, was more reproducible, and required less active bench time to prepare the samples. This study demonstrates the effectiveness of accelerated solvent as an efficient methodology for metabolomics studies. PMID:28787673
Aerodynamic shape optimization using control theory
NASA Technical Reports Server (NTRS)
Reuther, James
1996-01-01
Aerodynamic shape design has long persisted as a difficult scientific challenge due to its highly nonlinear flow physics and daunting geometric complexity. However, with the emergence of Computational Fluid Dynamics (CFD), it has become possible to make accurate predictions of flows which are not dominated by viscous effects. It is thus worthwhile to explore the extension of CFD methods for flow analysis to the treatment of aerodynamic shape design. Two new aerodynamic shape design methods are developed which combine existing CFD technology, optimal control theory, and numerical optimization techniques. Flow analysis methods for the potential flow equation and the Euler equations form the basis of the two respective design methods. In each case, optimal control theory is used to derive the adjoint differential equations, the solution of which provides the necessary gradient information to a numerical optimization method much more efficiently than by conventional finite differencing. Each technique uses a quasi-Newton numerical optimization algorithm to drive an aerodynamic objective function toward a minimum. An analytic grid perturbation method is developed to modify body-fitted meshes to accommodate shape changes during the design process. Both Hicks-Henne perturbation functions and B-spline control points are explored as suitable design variables. The new methods prove to be computationally efficient and robust, and can be used for practical airfoil design including geometric and aerodynamic constraints. Objective functions are chosen to allow both inverse design to a target pressure distribution and wave drag minimization. Several design cases are presented for each method illustrating its practicality and efficiency. These include non-lifting and lifting airfoils operating at both subsonic and transonic conditions.
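The sketch below illustrates, on a toy discrete system, why the adjoint route mentioned above is cheaper than finite differencing: one extra linear solve yields the gradient of the objective with respect to all design variables at once. The linear state equation and quadratic objective are assumptions made purely for the illustration and are unrelated to the potential-flow or Euler formulations in the thesis.

```python
import numpy as np

n = 5
rng = np.random.default_rng(0)
A0 = 2.0 * np.eye(n)
dA = [rng.normal(0.0, 0.1, (n, n)) for _ in range(3)]   # dA/da_i for each design variable (assumed)
b = np.ones(n)
u_target = np.linspace(0.3, 0.7, n)

a = np.array([0.1, -0.2, 0.05])                          # current design point
A = A0 + sum(ai * dAi for ai, dAi in zip(a, dA))
u = np.linalg.solve(A, b)                                # state solve: A(a) u = b

# Objective J = 0.5 * ||u - u_target||^2; a single adjoint solve gives the whole gradient.
lam = np.linalg.solve(A.T, u - u_target)                 # adjoint solve: A^T lam = dJ/du
grad = np.array([-lam @ (dAi @ u) for dAi in dA])        # dJ/da_i = -lam^T (dA/da_i) u
print(grad)
```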
Daxini, S D; Prajapati, J M
2014-01-01
Meshfree methods are viewed as next-generation computational techniques. Given the evident limitations of conventional grid-based methods, such as FEM, in dealing with problems of fracture mechanics, large deformation, and simulation of manufacturing processes, meshfree methods have gained much attention from researchers. A number of meshfree methods have been proposed to date for analyzing complex problems in various fields of engineering. The present work reviews recent developments and some earlier applications of well-known meshfree methods such as EFG and MLPG to various structural mechanics and fracture mechanics applications, including bending, buckling, free vibration analysis, sensitivity analysis and topology optimization, single and mixed mode crack problems, fatigue crack growth, and dynamic crack analysis, as well as some typical applications such as vibration of cracked structures, thermoelastic crack problems, and failure transition in impact problems. Due to the complex nature of meshfree shape functions and the evaluation of domain integrals, meshless methods are computationally expensive compared to conventional mesh-based methods. Some improved versions of the original meshfree methods and other techniques suggested by researchers to improve the computational efficiency of meshfree methods are also reviewed here.
Photoacoustic spectroscopic studies of polycyclic aromatic hydrocarbons
NASA Astrophysics Data System (ADS)
Zaidi, Zahid H.; Kumar, Pardeep; Garg, R. K.
1999-02-01
Polycyclic aromatic hydrocarbons (PAHs) are important because of their role as environmental pollutants, their carcinogenic activity, their use in plastics, pharmaceuticals, and the synthesis of some laser dyes, and their presence in interstellar space. As their structure and properties can be varied systematically, they form a beautiful class of molecules for experimental and quantum chemical investigations. These molecules have been studied for the last several years using conventional spectroscopy. In recent years, photoacoustic (PA) spectroscopy has emerged as a non-destructive technique with unique capability and sensitivity. The PA effect is the generation of acoustic waves in a sample resulting from the absorption of photons. This technique not only reveals non-radiative transitions but also provides information about forbidden singlet-triplet transitions which are not normally observed by conventional spectroscopy. The present paper deals with the spectroscopic study of some PAH molecules by PA spectroscopy in the region 250-400 nm. The CNDO/S-CI method is used to calculate the electronic transitions with the optimized geometries. Good agreement is found between the experimental and calculated results.
NASA Astrophysics Data System (ADS)
Tan, Jun; Song, Peng; Li, Jinshan; Wang, Lei; Zhong, Mengxuan; Zhang, Xiaobo
2017-06-01
The surface-related multiple elimination (SRME) method is based on the feedback formulation and has become one of the most widely used multiple suppression methods. However, differences are apparent between the predicted multiples and those in the source seismic records, which may leave conventional adaptive multiple subtraction methods barely able to suppress multiples effectively in actual production. This paper introduces a combined adaptive multiple attenuation method based on an optimized event tracing technique and extended Wiener filtering. The method first uses multiple records predicted by SRME to generate a multiple velocity spectrum, then separates the original record into an approximate primary record and an approximate multiple record by applying the optimized event tracing method and short-time-window FK filtering. After applying the extended Wiener filtering method, residual multiples in the approximate primary record can be eliminated and the damaged primary can be restored from the approximate multiple record. This method combines the advantages of multiple elimination based on the optimized event tracing method and the extended Wiener filtering technique. It is well suited to suppressing typical hyperbolic and other types of multiples, with the advantage of minimizing damage to the primary. Synthetic and field data tests show that this method produces better multiple elimination results than the traditional multi-channel Wiener filter method and is more suitable for multiple elimination in complicated geological areas.
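To make the adaptive-subtraction step concrete, the sketch below (Python/NumPy; not the authors' code, and the trace, filter length, and synthetic test are illustrative assumptions) estimates a short least-squares Wiener-type shaping filter that matches a predicted multiple trace to the recorded trace and subtracts the shaped result; the extended Wiener filtering and event-tracing separation described above build on this single-channel idea.

import numpy as np

def matched_filter_subtract(d, m, nf=11):
    # Least-squares (Wiener-type) adaptive subtraction of a predicted multiple
    # trace m from a recorded trace d, using a short shaping filter of nf lags.
    ns = len(d)
    M = np.zeros((ns, nf))
    for k in range(nf):
        M[k:, k] = m[:ns - k]                        # convolution matrix of the predicted multiple
    f, *_ = np.linalg.lstsq(M, d, rcond=None)        # filter minimizing ||d - M f||
    return d - M @ f, f                              # estimated primary, shaping filter

# Tiny synthetic check: data = weak noise-like primary + distorted version of the multiple.
rng = np.random.default_rng(0)
m = np.sin(np.linspace(0.0, 20.0, 200))
d = 0.1 * rng.standard_normal(200) + np.convolve(m, [0.8, 0.3])[:200]
primary, f = matched_filter_subtract(d, m)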
Active model-based balancing strategy for self-reconfigurable batteries
NASA Astrophysics Data System (ADS)
Bouchhima, Nejmeddine; Schnierle, Marc; Schulte, Sascha; Birke, Kai Peter
2016-08-01
This paper describes a novel balancing strategy for self-reconfigurable batteries, where the discharge and charge rates of each cell can be controlled. While much effort has been focused on improving the hardware architecture of self-reconfigurable batteries, energy equalization algorithms have not been systematically optimized in terms of maximizing the efficiency of the balancing system. Our approach draws on optimal control theory to close this gap. We develop a balancing strategy for optimal control of the discharge rate of battery cells. We first formulate cell balancing as a nonlinear optimal control problem, which is then modeled as a network program. Using dynamic programming techniques and MATLAB's vectorization feature, we solve the optimal control problem by generating the optimal battery operation policy for a given drive cycle. The simulation results show that the proposed strategy efficiently balances the cells over the life of the battery, an advantage absent from conventional approaches. Our algorithm proves robust when tested against different influencing parameters varying over a wide spectrum on different drive cycles. Furthermore, owing to its short computation time and demonstrated low sensitivity to inaccurate power predictions, our strategy can be integrated into a real-time system.
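As a concrete illustration of the dynamic-programming idea (a simplified sketch, not the authors' model: the state, loss model, horizon, and drive cycle here are made-up placeholders, and the real strategy controls continuous discharge rates for many cells), the Python fragment below chooses, step by step, which of three cells should carry a fixed load so that resistive losses are minimized and the cells end the horizon balanced.

from functools import lru_cache

N_STEPS = 6
LOAD = 5                       # state-of-charge units drawn from the chosen cell per step
R = (1.0, 1.2, 0.9)            # relative internal resistances (illustrative values)

@lru_cache(maxsize=None)
def best_cost(step, soc):
    # Minimum cost from 'step' onward given the tuple of cell states of charge.
    if step == N_STEPS:
        return max(soc) - min(soc), None        # terminal penalty: remaining SoC spread
    best = (float("inf"), None)
    for i, s in enumerate(soc):
        if s < LOAD:
            continue                            # this cell cannot carry the whole load
        loss = R[i] * LOAD ** 2 * 1e-3          # simple I^2 R-style step loss
        nxt = list(soc); nxt[i] -= LOAD
        future, _ = best_cost(step + 1, tuple(nxt))
        if loss + future < best[0]:
            best = (loss + future, i)
    return best

def optimal_policy(soc0):
    # Unroll the memoized recursion to read off which cell discharges at each step.
    soc, plan = list(soc0), []
    for step in range(N_STEPS):
        _, i = best_cost(step, tuple(soc))
        plan.append(i)
        soc[i] -= LOAD
    return plan

print(optimal_policy((40, 35, 30)))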
Power and Efficiency Optimized in Traveling-Wave Tubes Over a Broad Frequency Bandwidth
NASA Technical Reports Server (NTRS)
Wilson, Jeffrey D.
2001-01-01
A traveling-wave tube (TWT) is an electron beam device that is used to amplify electromagnetic communication waves at radio and microwave frequencies. TWT's are critical components in deep space probes, communication satellites, and high-power radar systems. Power conversion efficiency is of paramount importance for TWT's employed in deep space probes and communication satellites. A previous effort was very successful in increasing efficiency and power at a single frequency (ref. 1). Such an algorithm is sufficient for narrow bandwidth designs, but for optimal designs in applications that require high radiofrequency power over a wide bandwidth, such as high-density communications or high-resolution radar, the variation of the circuit response with respect to frequency must be considered. This work at the NASA Glenn Research Center is the first to develop techniques for optimizing TWT efficiency and output power over a broad frequency bandwidth (ref. 2). The techniques are based on simulated annealing, which has the advantage over conventional optimization techniques in that it enables the best possible solution to be obtained (ref. 3). Two new broadband simulated annealing algorithms were developed that optimize (1) minimum saturated power efficiency over a frequency bandwidth and (2) simultaneous bandwidth and minimum power efficiency over the frequency band with constant input power. The algorithms were incorporated into the NASA coupled-cavity TWT computer model (ref. 4) and used to design optimal phase velocity tapers using the 59- to 64-GHz Hughes 961HA coupled-cavity TWT as a baseline model. In comparison to the baseline design, the computational results of the first broad-band design algorithm show an improvement of 73.9 percent in minimum saturated efficiency (see the top graph). The second broadband design algorithm (see the bottom graph) improves minimum radiofrequency efficiency with constant input power drive by a factor of 2.7 at the high band edge (64 GHz) and increases simultaneous bandwidth by 500 MHz.
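For readers unfamiliar with the approach, the skeleton of a minimum-over-band simulated-annealing search fits in a few lines. The Python sketch below is a generic illustration, not the Glenn algorithm or the coupled-cavity TWT model: eff_model is a made-up surrogate for the circuit simulation, and the move size, cooling schedule, and band samples are arbitrary assumptions.

import math, random

FREQS = [59.0, 60.0, 61.0, 62.0, 63.0, 64.0]         # sample frequencies across the band

def eff_model(taper, f):
    # Hypothetical stand-in for the TWT simulation: efficiency as a smooth
    # function of the phase-velocity taper parameters and frequency.
    return 10.0 - sum((t - 0.02 * f) ** 2 for t in taper)

def objective(taper):
    return min(eff_model(taper, f) for f in FREQS)    # worst-case efficiency over the band

def anneal(n_params=5, steps=5000, t0=1.0, cooling=0.999):
    random.seed(1)
    x = [1.0] * n_params
    cur = best = objective(x)
    best_x, t = list(x), t0
    for _ in range(steps):
        cand = [xi + random.gauss(0.0, 0.02) for xi in x]
        val = objective(cand)
        # Metropolis rule: always accept improvements, sometimes accept worse moves.
        if val > cur or random.random() < math.exp((val - cur) / t):
            x, cur = cand, val
            if cur > best:
                best, best_x = cur, list(x)
        t *= cooling
    return best_x, best

taper, worst_band_efficiency = anneal()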
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, P; Xing, L; Ma, L
Purpose: Radiosurgery of multiple (n>4) brain metastasis lesions requires 3-4 noncoplanar VMAT arcs with excessively high monitor units and long delivery time. We investigated whether an improved optimization technique would decrease the number of arcs needed and increase delivery efficiency, while improving or maintaining plan quality. Methods: The proposed 4pi arc space optimization algorithm consists of two steps: automatic couch angle selection followed by aperture generation for each arc with optimized control point distribution. We use a greedy algorithm to select the couch angles. Starting from a single coplanar arc plan, we search through the candidate noncoplanar arcs to pick the single noncoplanar arc that will bring the best plan quality when added to the existing treatment plan. Each time, only one additional noncoplanar arc is considered, making the calculation time tractable. This process repeats until the desired number of arcs is reached. The technique was first evaluated with coplanar arc delivery on test cases and then applied to noncoplanar treatment of a case with 12 brain metastasis lesions. Results: Clinically acceptable plans are created within minutes. For the coplanar test cases, the algorithm yields single-arc plans with better dose distributions than those of two-arc VMAT, simultaneously with a 12-17% reduction in delivery time and a 14-21% reduction in MUs. For the treatment of 12 brain metastases, while the Paddick conformity indexes of the two plans were comparable, the SCG-optimized plan with 2 arcs (1 noncoplanar and 1 coplanar) significantly improved on the conventional VMAT plan with 3 arcs (2 noncoplanar and 1 coplanar). Specifically, V16, V10, and V5 of the brain were reduced by 11%, 11%, and 12%, respectively. The beam delivery time was shortened by approximately 30%. Conclusion: The proposed 4pi arc space optimization technique promises to significantly reduce brain toxicity while greatly improving treatment efficiency.
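The greedy couch-angle selection can be summarized in a short routine. The following Python sketch is illustrative only: plan_score stands in for the expensive step of re-optimizing apertures and scoring the candidate plan, and the candidate angle list and arc budget are assumptions rather than values from the abstract.

def greedy_arc_selection(candidate_arcs, n_arcs, plan_score):
    # Start from a single coplanar arc, then repeatedly add the one candidate
    # noncoplanar arc that most improves the plan-quality score (higher = better).
    selected = ["coplanar_0"]
    remaining = list(candidate_arcs)
    while len(selected) < n_arcs and remaining:
        best_score, best_arc = max((plan_score(selected + [a]), a) for a in remaining)
        selected.append(best_arc)
        remaining.remove(best_arc)
    return selected

# Example call with a placeholder scoring function.
couch_angles = [f"couch_{ang}" for ang in range(10, 360, 20)]
plan = greedy_arc_selection(couch_angles, n_arcs=3, plan_score=lambda arcs: len(set(arcs)))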
Bowman, Wesley A; Robar, James L; Sattarivand, Mike
2017-03-01
Stereoscopic x-ray image guided radiotherapy for lung tumors is often hindered by bone overlap and limited soft-tissue contrast. This study aims to evaluate the feasibility of dual-energy imaging techniques and to optimize parameters of the ExacTrac stereoscopic imaging system to enhance soft-tissue imaging for application to lung stereotactic body radiation therapy. Simulated spectra and a physical lung phantom were used to optimize filter material, filter thickness, tube potentials, and weighting factors to obtain bone-subtracted dual-energy images. Spektr simulations were used to identify a material in the atomic number range 3-83 based on a metric defined to separate the high- and low-energy spectra. Both energies used the same filter because of the time constraints of imaging in the presence of respiratory motion. The lung phantom contained bone, soft tissue, and tumor mimicking materials, and it was imaged with filter thicknesses of 0-0.7 mm and tube potentials of 60-80 kVp for the low energy and 120 or 140 kVp for the high energy. Optimal dual-energy weighting factors were obtained when the bone to soft-tissue contrast-to-noise ratio (CNR) was minimized. Optimal filter thickness and tube potential were achieved by maximizing the tumor-to-background CNR. Using the optimized parameters, dual-energy images of an anthropomorphic Rando phantom with a spherical tumor mimicking material inserted in its lung were acquired and evaluated for bone subtraction and tumor contrast. Imaging dose was measured using the dual-energy technique with and without beam filtration and matched to that of a clinical conventional single-energy technique. Tin was the material of choice for beam filtering, providing the best energy separation, non-toxicity, and non-reactiveness. The best soft-tissue-weighted image in the lung phantom was obtained using 0.2 mm of tin and the (140, 60) kVp pair. Dual-energy images of the Rando phantom with the tin filter showed noticeable improvement in bone elimination, tumor contrast, and noise content when compared to dual-energy imaging with no filtration. The surface dose was 0.52 mGy per stereoscopic view for both the clinical single-energy technique and the dual-energy technique, with and without the tin filter. Dual-energy soft-tissue imaging is feasible without additional imaging dose using the ExacTrac stereoscopic imaging system with optimized acquisition parameters and no beam filtration. Addition of a single tin filter for both the high and low energies brings noticeable improvements to dual-energy imaging with optimized parameters. Clinical implementation of a dual-energy technique on ExacTrac stereoscopic imaging could improve lung tumor visibility. © 2017 American Association of Physicists in Medicine.
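A minimal version of the weighted log-subtraction and weight search reads as follows; this Python sketch uses generic image arrays and region-of-interest masks as assumptions and is not the ExacTrac processing chain.

import numpy as np

def dual_energy_image(high, low, w):
    # Weighted log subtraction of the low-energy image from the high-energy image.
    return np.log(high) - w * np.log(low)

def cnr(img, roi_a, roi_b):
    a, b = img[roi_a], img[roi_b]
    return abs(a.mean() - b.mean()) / np.sqrt(0.5 * (a.var() + b.var()))

def best_bone_weight(high, low, bone_roi, soft_roi):
    # Bone is best cancelled where the bone/soft-tissue CNR of the combined image is minimal.
    weights = np.linspace(0.1, 1.5, 141)
    return min(weights, key=lambda w: cnr(dual_energy_image(high, low, w), bone_roi, soft_roi))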
Intensity modulated neutron radiotherapy optimization by photon proxy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snyder, Michael; Hammoud, Ahmad; Bossenberger, Todd
2012-08-15
Purpose: Introducing intensity modulation into neutron radiotherapy (IMNRT) planning has the potential to mitigate some normal tissue complications seen in past neutron trials. While the hardware to deliver IMNRT plans has been in use for several years, until recently the IMNRT planning process has been cumbersome and of lower fidelity than conventional photon plans. Our in-house planning system used to calculate neutron therapy plans allows beam weight optimization of forward planned segments, but does not provide inverse optimization capabilities. Commercial treatment planning systems provide inverse optimization capabilities, but currently cannot model our neutron beam. Methods: We have developed a methodology and software suite to make use of the robust optimization in our commercial planning system while still using our in-house planning system to calculate final neutron dose distributions. Optimized multileaf collimator (MLC) leaf positions for segments designed in the commercial system using a 4 MV photon proxy beam are translated into static neutron ports that can be represented within our in-house treatment planning system. The true neutron dose distribution is calculated in the in-house system and then exported back through the MATLAB software into the commercial treatment planning system for evaluation. Results: The planning process produces optimized IMNRT plans that reduce dose to normal tissue structures as compared to 3D conformal plans using static MLC apertures. The process involves standard planning techniques using a commercially available treatment planning system, and is not significantly more complex than conventional IMRT planning. Using a photon proxy in a commercial optimization algorithm produces IMNRT plans that are more conformal than those previously designed at our center and take much less time to create. Conclusions: The planning process presented here allows for the optimization of IMNRT plans by a commercial treatment planning optimization algorithm, potentially allowing IMNRT to achieve similar conformality in treatment as photon IMRT. The only remaining requirements for the delivery of very highly modulated neutron treatments are incremental improvements upon already implemented hardware systems that should be readily achievable.
Miyazawa, Arata; Hong, Young-Joo; Makita, Shuichi; Kasaragod, Deepa; Yasuno, Yoshiaki
2017-01-01
Jones matrix-based polarization sensitive optical coherence tomography (JM-OCT) simultaneously measures optical intensity, birefringence, degree of polarization uniformity, and OCT angiography. The statistics of the optical features in a local region, such as the local mean of the OCT intensity, are frequently used for image processing and the quantitative analysis of JM-OCT. Conventionally, local statistics have been computed with fixed-size rectangular kernels; however, this results in a trade-off between image sharpness and statistical accuracy. We introduce a superpixel method to JM-OCT for generating flexible kernels for local statistics. A superpixel is a cluster of image pixels formed from the pixels' spatial and signal-value proximities. An algorithm for superpixel generation specialized for JM-OCT and its optimization methods are presented in this paper. The spatial proximity is in two-dimensional cross-sectional space and the signal values are the four optical features; hence, the superpixel method is a six-dimensional clustering technique for JM-OCT pixels. The performance of the JM-OCT superpixels and the optimization methods is evaluated in detail using JM-OCT datasets of posterior eyes. The superpixels were found to preserve tissue structures well, such as layer structures, sclera, vessels, and retinal pigment epithelium, and hence are more suitable as local statistics kernels than conventional uniform rectangular kernels. PMID:29082073
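A SLIC-style variant of the clustering step can be written compactly. The Python sketch below is an illustrative assumption about how the spatial and four-channel signal proximities might be combined; the authors' normalization, weighting, and update rules differ in detail.

import numpy as np

def superpixel_distance(px, center, grid_spacing, compactness=10.0):
    # px and center are dicts with 'xy' (2-vector position) and 'sig' (4 optical features).
    d_sig = np.linalg.norm(px["sig"] - center["sig"])             # signal-value proximity
    d_xy = np.linalg.norm(px["xy"] - center["xy"])                # spatial proximity
    return np.hypot(d_sig, (d_xy / grid_spacing) * compactness)   # combined 6-D distance

def assign_pixels(pixels, centers, grid_spacing):
    # One assignment sweep of a k-means-like superpixel iteration.
    return [int(np.argmin([superpixel_distance(p, c, grid_spacing) for c in centers]))
            for p in pixels]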
Adedeji, A. J.; Abdu, P. A.; Luka, P. D.; Owoade, A. A.; Joannis, T. M.
2017-01-01
Aim: This study was designed to optimize and apply the use of loop-mediated isothermal amplification (LAMP) as an alternative to conventional polymerase chain reaction (PCR) for the detection of herpesvirus of turkeys (HVT) (FC 126 strain) in vaccinated and non-vaccinated poultry in Nigeria. Materials and Methods: HVT positive control (vaccine) was used for optimization of LAMP using six primers that target the HVT070 gene sequence of the virus. These primers can differentiate HVT, a Marek’s disease virus (MDV) serotype 3 from MDV serotypes 1 and 2. Samples were collected from clinical cases of Marek’s disease (MD) in chickens, processed and subjected to LAMP and PCR. Results: LAMP assay for HVT was optimized. HVT was detected in 60% (3/5) and 100% (5/5) of the samples analyzed by PCR and LAMP, respectively. HVT was detected in the feathers, liver, skin, and spleen with average DNA purity of 3.05-4.52 μg DNA/mg (A260/A280) using LAMP. Conventional PCR detected HVT in two vaccinated and one unvaccinated chicken samples, while LAMP detected HVT in two vaccinated and three unvaccinated corresponding chicken samples. However, LAMP was a faster and simpler technique to carry out than PCR. Conclusion: LAMP assay for the detection of HVT was optimized. LAMP and PCR detected HVT in clinical samples collected. LAMP assay can be a very good alternative to PCR for detection of HVT and other viruses. This is the first report of the use of LAMP for the detection of viruses of veterinary importance in Nigeria. LAMP should be optimized as a diagnostic and research tool for investigation of poultry diseases such as MD in Nigeria. PMID:29263603
SU-F-BRD-13: Quantum Annealing Applied to IMRT Beamlet Intensity Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nazareth, D; Spaans, J
Purpose: We report on the first application of quantum annealing (QA) to the process of beamlet intensity optimization for IMRT. QA is a new technology, which employs novel hardware and software techniques to address various discrete optimization problems in many fields. Methods: We apply the D-Wave Inc. proprietary hardware, which natively exploits quantum mechanical effects for improved optimization. The new QA algorithm, running on this hardware, is most similar to simulated annealing, but relies on natural processes to directly minimize the free energy of a system. A simple quantum system is slowly evolved into a classical system representing the objective function. To apply QA to IMRT-type optimization, two prostate cases were considered. A reduced number of beamlets were employed, due to the current QA hardware limitation of ∼500 binary variables. The beamlet dose matrices were computed using CERR, and an objective function was defined based on typical clinical constraints, including dose-volume objectives. The objective function was discretized, and the QA method was compared to two standard optimization methods, simulated annealing (SA) and Tabu search, run on a conventional computing cluster. Results: Based on several runs, the average final objective function value achieved by the QA was 16.9 for the first patient, compared with 10.0 for Tabu and 6.7 for the SA. For the second patient, the values were 70.7 for the QA, 120.0 for Tabu, and 22.9 for the SA. The QA algorithm required 27-38% of the time required by the other two methods. Conclusion: In terms of objective function value, the QA performance was similar to Tabu but less effective than the SA. However, its speed was 3-4 times faster than the other two methods. This initial experiment suggests that QA-based heuristics may offer significant speedup over conventional clinical optimization methods, as quantum annealing hardware scales to larger sizes.
Murtaza, Ghulam; Mehmood, Shahid; Rasul, Shahid; Murtaza, Imran; Khan, Ehsan Ullah
2018-01-01
The aim of this study was to evaluate the dosimetric effect of collimator rotation on VMAT plan quality when using the limited-aperture multileaf collimator of the Elekta Beam Modulator™, which provides a maximum aperture of 21 cm × 16 cm. The increased use of the VMAT technique to deliver IMRT, from conventional to very specialized treatments, presents a challenge in plan optimization. In this study, VMAT plans were optimized for prostate and head-and-neck cancers using the Elekta Beam Modulator™, whereas previous studies were reported for conventional linac apertures. VMAT plans for nine prostate and nine head-and-neck cancer patients were produced using the 6 MV photon beam of an Elekta Synergy S® linac and the Pinnacle³ treatment planning system. Single-arc, dual-arc, and two combined independent single arcs were optimized for collimator angles (C) of 0°, 90°, and 0°-90° (i.e., the first arc was assigned C0° and the second arc C90°). A treatment plan comparison was performed among C0°, C90°, and C(0°-90°) for the single-arc, dual-arc, and two-independent-single-arc VMAT techniques to evaluate the influence of the extreme collimator rotations (C0° and C90°) on VMAT plan quality. Plan evaluation criteria included target coverage, conformity index, homogeneity index, and doses to organs at risk. A two-sided Student's t-test (p ≤ 0.05) was used to determine whether there was a significant difference in the dose-volume indices of the plans. For both prostate and head-and-neck cases, plan quality at collimator angles C0° and C(0°-90°) was clinically acceptable for all VMAT techniques, except the single-arc technique for head-and-neck. Poorer target coverage, higher normal tissue doses, and significant p-values were observed for a collimator angle of 90° when compared with C0° and C(0°-90°). A collimator rotation of 0° provided significantly better target coverage and sparing of organs at risk than a collimator rotation of 90° for all VMAT techniques.
Linear energy transfer incorporated intensity modulated proton therapy optimization
NASA Astrophysics Data System (ADS)
Cao, Wenhua; Khabazian, Azin; Yepes, Pablo P.; Lim, Gino; Poenisch, Falk; Grosshans, David R.; Mohan, Radhe
2018-01-01
The purpose of this study was to investigate the feasibility of incorporating linear energy transfer (LET) into the optimization of intensity modulated proton therapy (IMPT) plans. Because increased LET correlates with increased biological effectiveness of protons, high LET in target volumes and low LET in critical structures and normal tissues are preferred in an IMPT plan. However, if not explicitly incorporated into the optimization criteria, different IMPT plans may yield similar physical dose distributions but greatly different LET, specifically dose-averaged LET, distributions. Conventionally, the IMPT optimization criteria (or cost function) include only dose-based objectives, in which the relative biological effectiveness (RBE) is assumed to have a constant value of 1.1. In this study, we added LET-based objectives for maximizing LET in target volumes and minimizing LET in critical structures and normal tissues. Due to the fractional programming nature of the resulting model, we used a variable reformulation approach so that the optimization process is computationally equivalent to conventional IMPT optimization. In this study, five brain tumor patients who had been treated with proton therapy at our institution were selected. Two plans were created for each patient based on the proposed LET-incorporated optimization (LETOpt) and the conventional dose-based optimization (DoseOpt). The optimized plans were compared in terms of both dose (assuming a constant RBE of 1.1 as adopted in clinical practice) and LET. Both optimization approaches were able to generate comparable dose distributions. The LET-incorporated optimization achieved not only a pronounced reduction of LET values in critical organs, such as the brainstem and optic chiasm, but also increased LET in target volumes, compared to the conventional dose-based optimization. However, on occasion there was a need to trade off the acceptability of the dose and LET distributions. Our conclusion is that the inclusion of LET-dependent criteria in IMPT optimization can lead to dose distributions similar to those of conventional optimization but superior LET distributions in target volumes and normal tissues. This may have substantial advantages in improving tumor control and reducing normal tissue toxicities.
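One common way to pose such a composite criterion is a weighted sum of dose and LET penalties, minimized over non-negative spot weights. The Python function below is only a schematic illustration under assumed penalty forms, weights, and thresholds; in particular it treats the LET terms as precomputed linear surrogates, whereas the paper handles the fractional dose-averaged LET through a variable reformulation.

import numpy as np

def letopt_objective(x, D_target, D_oar, L_target, L_oar,
                     d_presc, let_min, let_max, w=(1.0, 1.0, 0.1, 0.1)):
    d_t, d_o = D_target @ x, D_oar @ x        # per-voxel dose in target and OAR
    l_t, l_o = L_target @ x, L_oar @ x        # per-voxel LET surrogate in target and OAR
    return (w[0] * np.mean((d_t - d_presc) ** 2)                         # dose fidelity in target
            + w[1] * np.mean(np.maximum(d_o - 0.5 * d_presc, 0) ** 2)    # OAR overdose penalty
            + w[2] * np.mean(np.maximum(let_min - l_t, 0) ** 2)          # reward high LET in target
            + w[3] * np.mean(np.maximum(l_o - let_max, 0) ** 2))         # penalize high LET in OAR

A gradient-based solver with non-negativity bounds on the spot weights (for example, scipy.optimize.minimize with method='L-BFGS-B') could then drive such an objective; the weights and thresholds above are placeholders, not clinical values.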
Planning: supporting and optimizing clinical guidelines execution.
Anselma, Luca; Montani, Stefania
2008-01-01
A crucial feature of computerized clinical guidelines (CGs) lies in the fact that they may be used not only as conventional documents (as if they were just free text) describing general procedures that users have to follow. In fact, thanks to a description of their actions and control flow in a semiformal representation language, CGs can also take advantage of Computer Science methods and Information Technology infrastructures and techniques to become executable documents, in the sense that they may support clinical decision making and clinical procedure execution. In order to reach this goal, some advanced planning techniques, originally developed within the Artificial Intelligence (AI) community, may be (at least partially) resorted to, after a proper adaptation to the specific CG needs has been carried out.
Digital Versus Conventional Impressions in Fixed Prosthodontics: A Review.
Ahlholm, Pekka; Sipilä, Kirsi; Vallittu, Pekka; Jakonen, Minna; Kotiranta, Ulla
2018-01-01
To conduct a systematic review to evaluate the evidence of possible benefits and accuracy of digital impression techniques vs. conventional impression techniques. Reports of digital impression techniques versus conventional impression techniques were systematically searched for in the following databases: Cochrane Central Register of Controlled Trials, PubMed, and Web of Science. A combination of controlled vocabulary, free-text words, and well-defined inclusion and exclusion criteria guided the search. Digital impression accuracy is at the same level as conventional impression methods in fabrication of crowns and short fixed dental prostheses (FDPs). For fabrication of implant-supported crowns and FDPs, digital impression accuracy is clinically acceptable. In full-arch impressions, conventional impression methods resulted in better accuracy compared to digital impressions. Digital impression techniques are a clinically acceptable alternative to conventional impression methods in fabrication of crowns and short FDPs. For fabrication of implant-supported crowns and FDPs, digital impression systems also result in clinically acceptable fit. Digital impression techniques are faster and can shorten the operation time. Based on this study, the conventional impression technique is still recommended for full-arch impressions. © 2016 by the American College of Prosthodontists.
Machine learning for medical images analysis.
Criminisi, A
2016-10-01
This article discusses the application of machine learning for the analysis of medical images. Specifically: (i) We show how a special type of learning models can be thought of as automatically optimized, hierarchically-structured, rule-based algorithms, and (ii) We discuss how the issue of collecting large labelled datasets applies to both conventional algorithms as well as machine learning techniques. The size of the training database is a function of model complexity rather than a characteristic of machine learning methods. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
Optimization of Surfactant Mixtures and Their Interfacial Behavior for Advanced Oil Recovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Somasundaran, Prof. P.
2002-03-04
The objective of this project was to develop a knowledge base that is helpful for the design of improved processes for mobilizing and producing oil left untapped using conventional techniques. The main goal was to develop and evaluate mixtures of new or modified surfactants for improved oil recovery. In this regard, interfacial properties of novel biodegradable n-alkyl pyrrolidones and sugar-based surfactants have been studied systematically. Emphasis was on designing cost-effective processes compatible with existing conditions and operations in addition to ensuring minimal reagent loss.
Analysis of signal to noise enhancement using a highly selective modulation tracking filter
NASA Technical Reports Server (NTRS)
Haden, C. R.; Alworth, C. W.
1972-01-01
Experiments are reported which utilize photodielectric effects in semiconductor loaded superconducting resonant circuits for suppressing noise in RF communication systems. The superconducting tunable cavity acts as a narrow band tracking filter for detecting conventional RF signals. Analytical techniques were developed which lead to prediction of signal-to-noise improvements. Progress is reported in optimization of the experimental variables. These include improved Q, new semiconductors, improved optics, and simplification of the electronics. Information bearing signals were passed through the system, and noise was introduced into the computer model.
NASA Astrophysics Data System (ADS)
Ezerskaia, A.; Pereira, S. F.; Urbach, H. P.; Varghese, B.
2017-02-01
Skin barrier function relies on a well-balanced water and lipid system of the stratum corneum. Optimal hydration and oiliness levels are indicators of skin health and integrity. We demonstrate accurate and sensitive depth profiling of stratum corneum sebum and hydration levels using short-wave infrared spectroscopy in the spectral range around 1720 nm. We demonstrate that the short-wave infrared spectroscopic technique combined with tape stripping can provide more quantitative and more reliable skin barrier function information in the low hydration regime, compared to conventional biophysical methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu, J; Zheng, X; Liu, H
Purpose: This study evaluates the feasibility of a simultaneous integrated boost (SIB) to the hypoxic subvolume (HTV) in nasopharyngeal carcinomas under the guidance of 18F-fluoromisonidazole (FMISO) PET/CT using a novel non-uniform volumetric modulated arc therapy (VMAT) technique. Methods: Eight nasopharyngeal carcinoma patients treated with conventional uniform VMAT were retrospectively analyzed. For each treatment, an actual conventional uniform VMAT plan with two or more arcs (2-2.5 arcs, total rotation angle < 1000°) was designed with a dose boost to the hypoxic subvolume (total dose, 84 Gy) in the gross tumor volume (GTV) under the guidance of 18F-FMISO PET/CT. Based on the same dataset, experimental single-arc non-uniform VMAT plans were generated with the same dose prescription using customized software tools. Dosimetric parameters, quality assurance results, and the efficiency of treatment delivery were compared between the uniform and non-uniform VMAT plans. Results: To develop the non-uniform VMAT technique, a specific optimization model was successfully established. Both techniques generate high-quality plans with pass rates (>98%) under the 3 mm, 3% criterion. The HTV received doses of 84.1±0.75 Gy and 84.1±1.2 Gy from the uniform and non-uniform VMAT plans, respectively. In terms of target coverage and dose homogeneity, there was no statistically significant difference between the actual and experimental plans for each case. However, for critical organs at risk (OAR), including the parotids, oral cavity, and larynx, the dosimetric difference was significant, with better dose sparing from the experimental plans. Regarding plan implementation efficiency, the average machine time was 3.5 minutes for the actual VMAT plans and 3.7 minutes for the experimental non-uniform VMAT plans (p>0.050). Conclusion: Compared to the conventional VMAT technique, the proposed non-uniform VMAT technique has the potential to produce efficient and safe treatment plans, especially in cases with complicated anatomical structures and demanding dose boosts to subvolumes.
Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.
Li, Shuang; Liu, Bing; Zhang, Chen
2016-01-01
Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. However, such an assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.
Screen printing technology applied to silicon solar cell fabrication
NASA Technical Reports Server (NTRS)
Thornhill, J. W.; Sipperly, W. E.
1980-01-01
The process for producing space qualified solar cells in both the conventional and wraparound configuration using screen printing techniques was investigated. Process modifications were chosen that could be easily automated or mechanized. Work was accomplished to optimize the tradeoffs associated with gridline spacing, gridline definition and junction depth. An extensive search for possible front contact metallization was completed. The back surface field structures along with the screen printed back contacts were optimized to produce open circuit voltages of at least an average of 600 millivolts. After all intended modifications on the process sequence were accomplished, the cells were exhaustively tested. Electrical tests at AMO and 28 C were made before and after boiling water immersion, thermal shock, and storage under conditions of high temperature and high humidity.
Mathematical Analysis and Optimization of Infiltration Processes
NASA Technical Reports Server (NTRS)
Chang, H.-C.; Gottlieb, D.; Marion, M.; Sheldon, B. W.
1997-01-01
A variety of infiltration techniques can be used to fabricate solid materials, particularly composites. In general these processes can be described with at least one time dependent partial differential equation describing the evolution of the solid phase, coupled to one or more partial differential equations describing mass transport through a porous structure. This paper presents a detailed mathematical analysis of a relatively simple set of equations which is used to describe chemical vapor infiltration. The results demonstrate that the process is controlled by only two parameters, alpha and beta. The optimization problem associated with minimizing the infiltration time is also considered. Allowing alpha and beta to vary with time leads to significant reductions in the infiltration time, compared with the conventional case where alpha and beta are treated as constants.
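As a rough illustration of the kind of coupled model involved (a toy sketch, not the paper's equations: the dimensionless groups, the analytic quasi-steady concentration profile, and the missing feedback of densification on diffusivity are all simplifying assumptions), a one-dimensional chemical vapor infiltration calculation might look like the following Python fragment, where the reaction/diffusion group plays a role analogous to one of the controlling parameters.

import numpy as np

def infiltrate(nx=51, nt=400, dt=5e-3, thiele=2.0):
    # Quasi-steady precursor diffusion into a porous slab with first-order
    # deposition on pore walls; the solid fraction grows where pores remain open.
    x = np.linspace(0.0, 1.0, nx)
    solid = np.zeros(nx)
    for _ in range(nt):
        c = np.cosh(thiele * (1.0 - x)) / np.cosh(thiele)    # concentration profile
        solid += dt * c * (solid < 1.0)                       # deposit where not yet dense
        if solid[0] >= 1.0:                                   # surface seals: infiltration stops
            break
    return x, solid

x, solid = infiltrate()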
Robust L1-norm two-dimensional linear discriminant analysis.
Li, Chun-Na; Shao, Yuan-Hai; Deng, Nai-Yang
2015-05-01
In this paper, we propose an L1-norm two-dimensional linear discriminant analysis (L1-2DLDA) with robust performance. Different from the conventional two-dimensional linear discriminant analysis with L2-norm (L2-2DLDA), where the optimization problem is transformed into a generalized eigenvalue problem, the optimization problem in our L1-2DLDA is solved by a simple, justifiable iterative technique, and its convergence is guaranteed. Compared with L2-2DLDA, our L1-2DLDA is more robust to outliers and noise since the L1-norm is used. This is supported by our preliminary experiments on a toy example and face datasets, which show the improvement of our L1-2DLDA over L2-2DLDA. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Bodechtel, J.; Nithack, J.; Dibernardo, G.; Hiller, K.; Jaskolla, F.; Smolka, A.
1975-01-01
Utilizing LANDSAT and Skylab multispectral imagery of 1972 and 1973, a land use map of the mountainous regions of Italy was evaluated at a scale of 1:250,000. Seven level I categories were identified by conventional methods of photointerpretation. Images of multispectral scanner (MSS) bands 5 and 7, or equivalents were mainly used. Areas of less than 200 by 200 m were classified and standard procedures were established for interpretation of multispectral satellite imagery. Land use maps were produced for central and southern Europe indicating that the existing land use maps could be updated and optimized. The complexity of European land use patterns, the intensive morphology of young mountain ranges, and time-cost calculations are the reasons that the applied conventional techniques are superior to automatic evaluation.
NASA Technical Reports Server (NTRS)
1993-01-01
Johnson Space Flight Center's device to test astronauts' heart function in microgravity has led to the MultiWire Gamma Camera, which images heart conditions six times faster than conventional devices. Dr. Jeffrey Lacy, who developed the technology as a NASA researcher, later formed Proportional Technologies, Inc. to develop a commercially viable process that would enable use of Tantalum-178 (Ta-178), a radio-pharmaceutical. His company supplies the generator for the radioactive Ta-178 to Xenos Medical Systems, which markets the camera. Ta-178 can only be optimally imaged with the camera. Because the body is subjected to it for only nine minutes, the radiation dose is significantly reduced and the technique can be used more frequently. Ta-178 also enables the camera to be used on pediatric patients who are rarely studied with conventional isotopes because of the high radiation dosage.
Tannamala, Pavan Kumar; Azhagarasan, Nagarasampatti Sivaprakasam; Shankar, K Chitra
2013-01-01
Conventional casting techniques following the manufacturers' recommendations are time consuming. Accelerated casting techniques have been reported, but their accuracy with base metal alloys has not been adequately studied. In this study, we measured the vertical marginal gap of nickel-chromium copings made by conventional and accelerated casting techniques and determined the clinical acceptability of the cast copings. Experimental design, in vitro study, lab settings. Ten copings each were cast by the conventional and accelerated casting techniques. All copings were identical; only their mold preparation schedules differed. Microscopic measurements were recorded at ×80 magnification perpendicular to the axial wall at four predetermined sites. The marginal gap values were evaluated by a paired t test. The mean marginal gap with the conventional technique (34.02 μm) is approximately 10 μm less than that with the accelerated casting technique (44.62 μm). As the P value is less than 0.0001, there is a highly significant difference between the two techniques with regard to vertical marginal gap. The accelerated casting technique is time saving, and the marginal gap measured was within clinically acceptable limits; it could be an alternative to the time-consuming conventional technique.
NASA Astrophysics Data System (ADS)
Xu, Zhicheng; Yuan, Bo; Zhang, Fuqiang
2018-06-01
In this paper, a power supply optimization model is proposed. The model takes minimum fossil energy consumption as its objective, considering the output characteristics of conventional and renewable power supplies. The optimal wind-solar capacity ratio in the power supply under various constraints is calculated, and the interrelation between conventional power sources and renewable energy is analyzed for a system with a high proportion of integrated renewable energy. Using the model, we can provide scientific guidance for the coordinated and orderly development of renewable energy and conventional power sources.
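A toy version of the capacity-ratio calculation illustrates the objective. In the Python sketch below the hourly load, wind, and solar profiles, the fixed total renewable capacity, and the grid search are all illustrative assumptions standing in for the paper's model and constraints; fossil generation is simply the residual load that renewables cannot cover.

import numpy as np

hours = np.arange(24)
load = 1.0 + 0.3 * np.sin((hours - 18) / 24 * 2 * np.pi)       # evening-peaking demand
solar = np.clip(np.sin((hours - 6) / 12 * np.pi), 0, None)     # daytime-only output
wind = 0.6 + 0.4 * np.cos(hours / 24 * 2 * np.pi)              # stronger at night
TOTAL_CAP = 1.2                                                # fixed total renewable capacity

def fossil_energy(wind_share):
    renewable = TOTAL_CAP * (wind_share * wind + (1.0 - wind_share) * solar)
    return np.maximum(load - renewable, 0.0).sum()             # unmet load -> conventional units

best = min(np.linspace(0.0, 1.0, 101), key=fossil_energy)
print(f"optimal wind share ~ {best:.2f}, fossil energy ~ {fossil_energy(best):.2f}")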
Optimal 3D culture of primary articular chondrocytes for use in the rotating wall vessel bioreactor.
Mellor, Liliana F; Baker, Travis L; Brown, Raquel J; Catlin, Lindsey W; Oxford, Julia Thom
2014-08-01
Reliable culturing methods for primary articular chondrocytes are essential to study the effects of loading and unloading on joint tissue at the cellular level. Due to the limited proliferation capacity of primary chondrocytes and their tendency to dedifferentiate in conventional culture conditions, long-term culturing conditions of primary chondrocytes can be challenging. The goal of this study was to develop a suspension culturing technique that not only would retain the cellular morphology, but also maintain the gene expression characteristics of primary articular chondrocytes. Three-dimensional culturing methods were compared and optimized for primary articular chondrocytes in the rotating wall vessel bioreactor, which changes the mechanical culture conditions to provide a form of suspension culture optimized for low shear and turbulence. We performed gene expression analysis and morphological characterization of cells cultured in alginate beads, Cytopore-2 microcarriers, primary monolayer culture, and passaged monolayer cultures using reverse transcription-PCR and laser scanning confocal microscopy. Primary chondrocytes grown on Cytopore-2 microcarriers maintained the phenotypical morphology and gene expression pattern observed in primary bovine articular chondrocytes, and retained these characteristics for up to 9 d. Our results provide a novel and alternative culturing technique for primary chondrocytes suitable for studies that require suspension such as those using the rotating wall vessel bioreactor. In addition, we provide an alternative culturing technique for primary chondrocytes that can impact future mechanistic studies of osteoarthritis progression, treatments for cartilage damage and repair, and cartilage tissue engineering.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, L; Eldib, A; Li, J
Purpose: Uneven nose surfaces, the air cavities underneath, and the use of bolus introduce complexity and dose uncertainty when a single electron energy beam is used to plan treatment of the nose skin with a pencil beam-based planning system. This work demonstrates more accurate dose calculation and more optimal planning using energy- and intensity-modulated electron radiotherapy (MERT) delivered with a pMLC. Methods: An in-house developed Monte Carlo (MC)-based dose calculation/optimization planning system was employed for treatment planning. Phase space data (6, 9, 12, and 15 MeV) were used as the input source for MC dose calculations for the linac. To reduce the scatter-caused penumbra, a short SSD (61 cm) was used. Our previous work demonstrates good agreement in percentage depth dose and off-axis dose between calculations and film measurements for various field sizes. A MERT plan was generated for treating the nose skin using a patient geometry, and a dose volume histogram (DVH) was obtained. The work also compares 2D dose distributions between a clinically used conventional single-electron-energy plan and the MERT plan. Results: The MERT plan resulted in improved target dose coverage compared to the conventional plan, which demonstrated a target dose deficit at the field edge. The conventional plan showed higher normal tissue dose underneath the nose skin, while the MERT plan resulted in improved conformity and thus reduced normal tissue dose. Conclusion: This preliminary work illustrates that MC-based MERT planning is a promising technique for treating the nose skin, not only providing more accurate dose calculation, but also offering improved target dose coverage and conformity. In addition, this technique may eliminate the need for bolus, which often introduces dose delivery uncertainty due to the air gaps that may exist between the bolus and the skin.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koivisto, H., E-mail: hannu.koivisto@phys.jyu.fi; Tarvainen, O.; Toivanen, V.
2014-02-15
Radioactive ion beams play an increasingly important role in several European research facility programs such as SPES, SPIRAL1 Upgrade, and SPIRAL2, and even more so for facilities such as EURISOL. Although remarkable advances in ECRIS charge breeders (CBs) have been achieved, further studies are needed to gain insight into the physics of the charge breeding process. The fundamental plasma processes of charge breeders are studied within the framework of the European collaboration project EMILIE with the aim of optimizing charge breeding. Important information on charge breeding can be obtained by conducting similar experiments using the gas mixing and two-frequency heating techniques with a conventional JYFL 14 GHz ECRIS and the LPSC-PHOENIX charge breeder. The first experiments were carried out with noble gases and revealed, for example, that the effects of gas mixing and two-frequency heating on the production of high charge states appear to be additive for the conventional ECRIS. The results also indicate that, at least in the case of noble gases, the differences between the conventional ECRIS and the charge breeder have only a minor impact on the production efficiency of ion beams.
Optimization of immunostaining on flat-mounted human corneas.
Forest, Fabien; Thuret, Gilles; Gain, Philippe; Dumollard, Jean-Marc; Peoc'h, Michel; Perrache, Chantal; He, Zhiguo
2015-01-01
In the literature, immunohistochemistry on cross sections is the main technique used to study protein expression in corneal endothelial cells (ECs), even though this method allows visualization of few ECs, without clear subcellular localization, and is subject to the staining artifacts frequently encountered at tissue borders. We previously proposed several protocols, using fixation in 0.5% paraformaldehyde (PFA) or in methanol, allowing immunostaining on flat-mounted corneas for proteins of different cell compartments. In the present study, we further refined the technique by systematically assessing the effect of fixative temperature. Finally, we used the optimized protocols to further demonstrate the considerable advantages of immunostaining on flat-mounted intact corneas: detection of rare cells in large fields of thousands of ECs and epithelial cells, and accurate subcellular localization of given proteins. The staining of four ubiquitous proteins, ZO-1, hnRNP L, actin, and histone H3, with clearly different subcellular localizations, was analyzed in ECs of organ-cultured corneas. Whole intact human corneas were fixed for 30 min in 0.5% paraformaldehyde or pure methanol at four temperatures (4 °C for PFA, -20 °C for methanol, and 23, 37, and 50 °C for both). Experiments were performed in duplicate and repeated on three corneas. Standardized pictures were analyzed independently by two experts. Second, the optimized immunostaining protocols were applied to fresh corneas for three applications: identification of rare cells that express KI67 in the endothelium of specimens with Fuchs endothelial corneal dystrophy (FECD); precise localization of neural cell adhesion molecules (NCAMs) in normal ECs and of the cytokeratin pair K3/12 and CD44 in normal epithelial cells; and identification of cells that express S100b in the normal epithelium. Temperature strongly influenced immunostaining quality. There was no universally applicable protocol; nevertheless, room temperature may be recommended as the first-line temperature during fixation, instead of the conventional -20 °C for methanol and 4 °C for PFA. Further optimization may be required for certain target proteins. The optimized protocols allowed description of two previously unknown findings: the presence of a few proliferating ECs in FECD specimens, suggesting ineffective compensatory mechanisms against premature EC death, and the localization of NCAMs exclusively in the lateral membranes of ECs, showing hexagonal organization at the apical pole and an irregular shape with increasing complexity toward the basal pole. The optimized protocols were also effective for the epithelium, allowing clear localization of cytokeratin 3/12 and CD44 in superficial and basal epithelial cells, respectively. Finally, S100b allowed identification of clusters of epithelial Langerhans cells near the limbus and more centrally. Fixative temperature is a crucial parameter in optimizing immunostaining on flat-mounted intact corneas. Whole-tissue overview and precise subcellular staining are significant advantages over conventional immunohistochemistry (IHC) on cross sections. This technique, initially developed for the corneal endothelium, proved equally suitable for the corneal epithelium and could be used for other superficial mono- and multilayered epithelia.
The trade-off between morphology and control in the co-optimized design of robots.
Rosendo, Andre; von Atzigen, Marco; Iida, Fumiya
2017-01-01
Conventionally, robot morphologies are developed through simulations and calculations, and different control methods are applied afterwards. Assuming that simulations and predictions are simplified representations of our reality, how sure can roboticists be that the chosen morphology is the most adequate for the possible control choices in the real-world? Here we study the influence of the design parameters in the creation of a robot with a Bayesian morphology-control (MC) co-optimization process. A robot autonomously creates child robots from a set of possible design parameters and uses Bayesian Optimization (BO) to infer the best locomotion behavior from real world experiments. Then, we systematically change from an MC co-optimization to a control-only (C) optimization, which better represents the traditional way that robots are developed, to explore the trade-off between these two methods. We show that although C processes can greatly improve the behavior of poor morphologies, such agents are still outperformed by MC co-optimization results with as few as 25 iterations. Our findings, on one hand, suggest that BO should be used in the design process of robots for both morphological and control parameters to reach optimal performance, and on the other hand, point to the downfall of current design methods in face of new search techniques.
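A minimal Bayesian-optimization loop of the kind used for such real-world gait evaluations can be sketched as follows; evaluate_gait is a hypothetical stand-in for building and running a child robot, and the two-parameter design space, Gaussian-process defaults, and budget of 25 evaluations are assumptions chosen only to mirror the abstract loosely.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def evaluate_gait(params):
    return -np.sum((params - 0.6) ** 2)           # placeholder reward: peak at params = 0.6

def expected_improvement(gp, X_cand, y_best):
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - y_best) / sigma
    return (mu - y_best) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(3, 2))                # three random initial designs, two parameters
y = np.array([evaluate_gait(x) for x in X])
for _ in range(22):                               # 25 real-world evaluations in total
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    cand = rng.uniform(0, 1, size=(500, 2))
    nxt = cand[np.argmax(expected_improvement(gp, cand, y.max()))]
    X = np.vstack([X, nxt])
    y = np.append(y, evaluate_gait(nxt))
print("best parameters:", X[np.argmax(y)], "best reward:", y.max())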
Comparative Analysis Between Computed and Conventional Inferior Alveolar Nerve Block Techniques.
Araújo, Gabriela Madeira; Barbalho, Jimmy Charles Melo; Dias, Tasiana Guedes de Souza; Santos, Thiago de Santana; Vasconcellos, Ricardo José de Holanda; de Morais, Hécio Henrique Araújo
2015-11-01
The aim of this randomized, double-blind, controlled trial was to compare the computed and conventional inferior alveolar nerve block techniques in symmetrically positioned inferior third molars. Both computed and conventional anesthetic techniques were performed in 29 healthy patients (58 surgeries) aged between 18 and 40 years. The anesthetic of choice was 2% lidocaine with 1:200,000 epinephrine. The Visual Analogue Scale assessed the pain variable after anesthetic infiltration. Patient satisfaction was evaluated using the Likert Scale. Heart and respiratory rates, mean time to perform the technique, and the need for additional anesthesia were also evaluated. Pain variable means were higher for the conventional technique than for the computed technique, 3.45 ± 2.73 and 2.86 ± 1.96, respectively, but no statistically significant difference was found (P > 0.05). Patient satisfaction showed no statistically significant differences. The mean times to perform the computed and conventional techniques were 3.85 and 1.61 minutes, respectively, a statistically significant difference (P < 0.001). The computed anesthetic technique showed lower mean pain perception, but did not show a statistically significant difference when compared with the conventional technique.
Liu, Wei; Li, Yupeng; Li, Xiaoqiang; Cao, Wenhua; Zhang, Xiaodong
2012-01-01
Purpose: The distal edge tracking (DET) technique in intensity-modulated proton therapy (IMPT) allows for high energy efficiency, fast and simple delivery, and simple inverse treatment planning; however, it is highly sensitive to uncertainties. In this study, the authors explored the application of DET in IMPT (IMPT-DET) and conducted robust optimization of IMPT-DET to see whether the planning technique's sensitivity to uncertainties was reduced. They also compared conventional and robust optimization of IMPT-DET with three-dimensional IMPT (IMPT-3D) to gain understanding of how plan robustness is achieved. Methods: They compared the robustness of IMPT-DET and IMPT-3D plans to uncertainties by analyzing plans created for a typical prostate cancer case and a base of skull (BOS) cancer case (using data for patients who had undergone proton therapy at our institution). Spots with the highest and second highest energy layers were chosen so that the Bragg peak would be at the distal edge of the targets in IMPT-DET using 36 equally spaced beam angles; in IMPT-3D, 3 beams with angles chosen by a beam angle optimization algorithm were planned. Dose contributions for a number of range and setup uncertainties were calculated, and a worst-case robust optimization was performed. A robust quantification technique was used to evaluate the plans' sensitivity to uncertainties. Results: When uncertainties are not accounted for in the optimization, the DET method is less robust to uncertainties than the 3D method but offers better normal tissue protection. Robust optimization accounting for range and setup uncertainties can improve the robustness of IMPT plans; however, our findings show that the extent of improvement varies. Conclusions: IMPT's sensitivity to uncertainties can be reduced by using robust optimization. They found two possible mechanisms that made improvements possible: (1) a localized single-field uniform dose (LSFUD) mechanism, in which the optimization algorithm attempts to produce a single-field uniform dose distribution while minimizing the patching field as much as possible; and (2) a perturbed dose distribution that follows the change in anatomical geometry. Multiple-instance optimization has more knowledge of the influence matrices; this greater knowledge improves IMPT plans' ability to retain robustness despite the presence of uncertainties. PMID:22755694
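The worst-case ("minimax") objective used in such robust optimization can be stated compactly: for every voxel, the coldest dose over all scenarios enters the target term and the hottest dose enters the normal-tissue term. The Python sketch below uses random placeholder influence matrices and simple quadratic penalties as assumptions; in practice one precomputed dose-influence matrix per range/setup scenario would be used.

import numpy as np

def worst_case_objective(x, target_matrices, oar_matrices, d_presc, d_max):
    target_doses = np.stack([D @ x for D in target_matrices])    # scenarios x target voxels
    oar_doses = np.stack([D @ x for D in oar_matrices])          # scenarios x OAR voxels
    worst_target = target_doses.min(axis=0)     # worst case for target = coldest voxel dose
    worst_oar = oar_doses.max(axis=0)           # worst case for OAR = hottest voxel dose
    return (np.mean((worst_target - d_presc) ** 2)
            + np.mean(np.maximum(worst_oar - d_max, 0.0) ** 2))

rng = np.random.default_rng(1)
T = [rng.random((50, 20)) for _ in range(9)]    # nine scenarios, 50 target voxels, 20 spots
O = [rng.random((80, 20)) * 0.5 for _ in range(9)]
print(worst_case_objective(np.ones(20), T, O, d_presc=60.0, d_max=20.0))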
Ozhinsky, Eugene; Vigneron, Daniel B; Nelson, Sarah J
2011-04-01
To develop a technique for optimizing coverage of brain 3D ¹H magnetic resonance spectroscopic imaging (MRSI) by automatic placement of outer-volume suppression (OVS) saturation bands (sat bands) and to compare the performance for point-resolved spectroscopic sequence (PRESS) MRSI protocols with manual and automatic placement of sat bands. The automated OVS procedure includes the acquisition of anatomic images from the head, obtaining brain and lipid tissue maps, calculating optimal sat band placement, and then using those optimized parameters during the MRSI acquisition. The data were analyzed to quantify brain coverage volume and data quality. 3D PRESS MRSI data were acquired from three healthy volunteers and 29 patients using protocols that included either manual or automatic sat band placement. On average, the automatic sat band placement allowed the acquisition of PRESS MRSI data from 2.7 times larger brain volumes than the conventional method while maintaining data quality. The technique developed helps solve two of the most significant problems with brain PRESS MRSI acquisitions: limited brain coverage and difficulty in prescription. This new method will facilitate routine clinical brain 3D MRSI exams and will be important for performing serial evaluation of response to therapy in patients with brain tumors and other neurological diseases. Copyright © 2011 Wiley-Liss, Inc.
Development efforts to improve curved-channel microchannel plates
NASA Technical Reports Server (NTRS)
Corbett, M. B.; Feller, W. B.; Laprade, B. N.; Cochran, R.; Bybee, R.; Danks, A.; Joseph, C.
1993-01-01
Curved-channel microchannel plate (C-plate) improvements resulting from an ongoing NASA STIS microchannel plate (MCP) development program are described. Performance limitations of previous C-plates led to a development program in support of the STIS MAMA UV photon counter, a second generation instrument on the Hubble Space Telescope. C-plate gain, quantum detection efficiency, dark noise, and imaging distortion, which are influenced by channel curvature non-uniformities, have all been improved through use of a new centrifuge fabrication technique. This technique will be described, along with efforts to improve older, more conventional shearing methods. Process optimization methods used to attain targeted C-plate performance goals will be briefly characterized. Newly developed diagnostic measurement techniques to study image distortion, gain uniformity, input bias angle, channel curvature, and ion feedback, will be described. Performance characteristics and initial test results of the improved C-plates will be reported. Future work and applications will also be discussed.
Development and fabrication of patient-specific knee implant using additive manufacturing techniques
NASA Astrophysics Data System (ADS)
Zammit, Robert; Rochman, Arif
2017-10-01
Total knee replacement is the most effective treatment to relieve pain and restore normal function in a diseased knee joint. The aim of this research was to develop a patient-specific knee implant which can be fabricated using additive manufacturing techniques and has reduced wear rates using highly wear-resistant materials. The proposed design was chosen based on implant requirements, such as reduction in wear rates as well as strong fixation. The patient-specific knee implant improves on conventional knee implants by modifying the articulating surfaces and bone-implant interfaces. Moreover, tribological tests of different polymeric wear couples were carried out to determine the optimal materials to use for the articulating surfaces. Finite element analysis was utilized to evaluate the stresses sustained by the proposed design. Finally, the patient-specific knee implant was successfully built using additive manufacturing techniques.
Analysis of FIB-induced damage by electron channelling contrast imaging in the SEM.
Gutierrez-Urrutia, Ivan
2017-01-01
We have investigated the Ga+ ion-damage effect induced by focused ion beam (FIB) milling in a [001] single crystal of 316L stainless steel by the electron channelling contrast imaging (ECCI) technique. The influence of FIB milling on the characteristic electron channelling contrast of surface dislocations was analysed. The ECCI approach provides a sound estimation of the damage depth produced by FIB milling. For comparison purposes, we have also studied the same milled surface by a conventional electron backscatter diffraction (EBSD) approach. We observe that the ECCI approach provides deeper insight into the Ga+ ion-damage phenomenon than the EBSD technique by direct imaging of FIB artefacts in the scanning electron microscope. We envisage that the ECCI technique may be a convenient tool to optimize the FIB milling settings in applications where the surface crystal defect content is relevant. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
Stochastic Optical Reconstruction Microscopy (STORM).
Xu, Jianquan; Ma, Hongqiang; Liu, Yang
2017-07-05
Super-resolution (SR) fluorescence microscopy, a class of optical microscopy techniques at a spatial resolution below the diffraction limit, has revolutionized the way we study biology, as recognized by the Nobel Prize in Chemistry in 2014. Stochastic optical reconstruction microscopy (STORM), a widely used SR technique, is based on the principle of single molecule localization. STORM routinely achieves a spatial resolution of 20 to 30 nm, a ten-fold improvement compared to conventional optical microscopy. Among all SR techniques, STORM offers a high spatial resolution with simple optical instrumentation and standard organic fluorescent dyes, but it is also prone to image artifacts and degraded image resolution due to improper sample preparation or imaging conditions. It requires careful optimization of all three aspects (sample preparation, image acquisition, and image reconstruction) to ensure a high-quality STORM image, which will be extensively discussed in this unit. Copyright © 2017 John Wiley & Sons, Inc.
Ding, Shiming; Wang, Yan; Xu, Di; Zhu, Chungang; Zhang, Chaosheng
2013-07-16
We report a highly promising technique for the high-resolution imaging of labile phosphorus (P) in sediments and soils in combination with the diffusive gradients in thin films (DGT) technique. This technique was based on the surface coloration of the Zr-oxide binding gel using the conventional molybdenum blue method following the DGT uptake of P to this gel. The accumulated mass of P in the gel was then measured according to the grayscale intensity on the gel surface using computer-imaging densitometry. A pretreatment of the gel in hot water (85 °C) for 5 days was required to immobilize the phosphate and the formed blue complex in the gel during the color development. The optimal time required for a complete color development was determined to be 45 min. The appropriate volume of the coloring reagent added was 200 times that of the gel. A calibration equation was established under the optimized conditions, based on which a quantitative measurement of P was obtained when the concentration of P in solutions ranged from 0.04 mg L⁻¹ to 4.1 mg L⁻¹ for a 24 h deployment of typical DGT devices at 25 °C. The suitability of the coloration technique was well demonstrated by the observation of small, discrete spots with elevated P concentrations in a sediment profile.
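The calibration step described above can be sketched as a simple linear fit of grayscale intensity to accumulated P mass, followed by conversion of mass to a DGT-averaged concentration using the standard DGT relation C = M·Δg/(D·A·t). The calibration points and the diffusion-layer parameters below are placeholders, not the values determined in the study.

```python
import numpy as np

# Hypothetical calibration data: grayscale intensity vs. accumulated P mass (ug).
intensity = np.array([0.05, 0.12, 0.24, 0.47, 0.95])   # computer-imaging densitometry
mass_ug   = np.array([0.02, 0.05, 0.10, 0.20, 0.40])    # known P loadings

# Fit a linear calibration curve (mass as a function of grayscale intensity).
slope, intercept = np.polyfit(intensity, mass_ug, 1)

def dgt_concentration(measured_intensity, deploy_time_s, delta_g_cm=0.094,
                      diff_coeff_cm2_s=6.05e-6, area_cm2=3.14):
    """Convert a measured grayscale intensity to a DGT-averaged P concentration.

    Uses the standard DGT relation C = M * dg / (D * A * t); the diffusion-layer
    thickness, diffusion coefficient and window area here are placeholders.
    """
    mass = slope * measured_intensity + intercept           # ug accumulated in the gel
    conc = mass * delta_g_cm / (diff_coeff_cm2_s * area_cm2 * deploy_time_s)
    return conc  # ug / cm^3, numerically equal to mg / L

print(dgt_concentration(0.30, deploy_time_s=24 * 3600))
```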
NASA Astrophysics Data System (ADS)
Srivastava, Y.; Srivastava, S.; Boriwal, L.
2016-09-01
Mechanical alloying is a novel solid-state process that has received considerable attention due to its many advantages over other conventional processes. In the present work, Co2FeAl Heusler alloy powder was prepared successfully from premixed elemental powders of cobalt (Co), iron (Fe) and aluminum (Al) in the stoichiometric proportion 60Co-26Fe-14Al (weight %) by a novel mechano-chemical route. Magnetic properties of the mechanically alloyed powders were characterized by vibrating sample magnetometer (VSM). A two-factor, five-level design matrix was applied to the experimental process, and the experimental results were used for response surface methodology. The interaction between the input process parameters and the response was established with the help of regression analysis. Further, the analysis of variance technique was applied to check the adequacy of the developed model and the significance of the process parameters. A test case study was performed with parameters that were not selected for the main experimentation but lay within the same range. Within the response surface methodology, the process parameters were optimized to obtain improved magnetic properties, and the optimum process parameters were identified using numerical and graphical optimization techniques.
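As a rough illustration of the response surface methodology step, the sketch below fits a second-order polynomial model to a hypothetical two-factor, five-level data set and locates the stationary point of the fitted surface. The factor levels and response values are fabricated; they stand in for the milling parameters and magnetic properties of the study.

```python
import numpy as np

# Hypothetical 2-factor, 5-level design (coded levels) and a measured magnetic response.
x1, x2 = np.meshgrid(np.linspace(-2, 2, 5), np.linspace(-2, 2, 5))
x1, x2 = x1.ravel(), x2.ravel()
response = (50 + 4*x1 + 2*x2 - 3*x1**2 - 2*x2**2 + 1.5*x1*x2
            + np.random.default_rng(1).normal(0, 0.5, x1.size))

# Second-order response-surface model:
# y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
coef, *_ = np.linalg.lstsq(X, response, rcond=None)
b0, b1, b2, b11, b22, b12 = coef

# Stationary point of the fitted quadratic surface (candidate optimum in coded units).
A = np.array([[2*b11, b12], [b12, 2*b22]])
optimum = np.linalg.solve(A, -np.array([b1, b2]))
print("fitted coefficients:", np.round(coef, 2))
print("stationary point (coded factors):", np.round(optimum, 2))
```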
Optimizing spectral CT parameters for material classification tasks
NASA Astrophysics Data System (ADS)
Rigie, D. S.; La Rivière, P. J.
2016-06-01
In this work, we propose a framework for optimizing spectral CT imaging parameters and hardware design with regard to material classification tasks. Compared with conventional CT, many more parameters must be considered when designing spectral CT systems and protocols. These choices will impact material classification performance in a non-obvious, task-dependent way with direct implications for radiation dose reduction. In light of this, we adapt Hotelling Observer formalisms typically applied to signal detection tasks to the spectral CT, material-classification problem. The result is a rapidly computable metric that makes it possible to sweep out many system configurations, generating parameter optimization curves (POCs) that can be used to select optimal settings. The proposed model avoids restrictive assumptions about the basis-material decomposition (e.g. linearity) and incorporates signal uncertainty with a stochastic object model. This technique is demonstrated on dual-kVp and photon-counting systems for two different, clinically motivated material classification tasks (kidney stone classification and plaque removal). We show that the POCs predicted with the proposed analytic model agree well with those derived from computationally intensive numerical simulation studies.
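A minimal sketch of the Hotelling-observer detectability index underlying the proposed metric is given below, computed directly from class means and an average within-class covariance. The two "material classes" and their statistics are invented toy data, not the kidney-stone or plaque tasks of the paper.

```python
import numpy as np

def hotelling_dprime(signals_a, signals_b):
    """Hotelling-observer detectability index for discriminating two material classes.

    signals_a, signals_b : (n_samples, n_channels) measurements (e.g. dual-kVp features)
    for each class.  d'^2 = (mu_a - mu_b)^T K^-1 (mu_a - mu_b), with K the average
    within-class covariance.
    """
    mu_diff = signals_a.mean(axis=0) - signals_b.mean(axis=0)
    cov = 0.5 * (np.cov(signals_a, rowvar=False) + np.cov(signals_b, rowvar=False))
    return float(np.sqrt(mu_diff @ np.linalg.solve(cov, mu_diff)))

# Toy example with two spectral channels (e.g. low/high kVp measurements).
rng = np.random.default_rng(2)
stone_type_1 = rng.multivariate_normal([10.0, 6.0], [[1.0, 0.3], [0.3, 0.8]], size=500)
stone_type_2 = rng.multivariate_normal([10.8, 5.2], [[1.0, 0.3], [0.3, 0.8]], size=500)
print("d' =", round(hotelling_dprime(stone_type_1, stone_type_2), 2))
```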
Dual-energy in mammography: feasibility study
NASA Astrophysics Data System (ADS)
Jafroudi, Hamid; Lo, Shih-Chung B.; Li, Huai; Steller Artz, Dorothy E.; Freedman, Matthew T.; Mun, Seong K.
1996-04-01
The purpose of this work is to examine the feasibility of dual-energy techniques to enhance the detection of microcalcifications in digital mammography. The digital mammography system used in this study consists of two different mammography systems; one is the conventional mammography system with molybdenum target and Mo filtration and the other is the clinical version of a low dose x-ray system with tungsten target and aluminum filtration. The low dose system is optimized for screen-film mammography with a highly efficient scatter rejection device built by Fischer Imaging Systems for evaluation at NIH. The system was designed by the University of Southern California based on multiparameter optimization techniques. Prototypes of this system have been constructed and evaluated at the Center for Devices and Radiological Health. The digital radiography system is based on the Fuji 9000 computed radiography (CR) system which uses a storage phosphor imaging plate as the receptor. High resolution plates (HR-V) are used in this study. Dual-energy is one technique to reduce the structured noise associated with the complexity of the background of normal anatomy surrounding a lesion. This can be done by taking advantage of the x-ray attenuation characteristics of two different structures such as soft tissue and bone in chest radiography. We have applied this technique to the detection of microcalcifications in mammography. The overall system performance based on this technique is evaluated. Results presented are based on the evaluation of phantom images.
NASA Astrophysics Data System (ADS)
Gang, Grace J.; Siewerdsen, Jeffrey H.; Webster Stayman, J.
2017-06-01
Tube current modulation (TCM) is routinely adopted on diagnostic CT scanners for dose reduction. Conventional TCM strategies are generally designed for filtered-backprojection (FBP) reconstruction to satisfy simple image quality requirements based on noise. This work investigates TCM designs for model-based iterative reconstruction (MBIR) to achieve optimal imaging performance as determined by a task-based image quality metric. Additionally, regularization is an important aspect of MBIR that is jointly optimized with TCM, and includes both the regularization strength that controls overall smoothness as well as directional weights that permit control of the isotropy/anisotropy of the local noise and resolution properties. Initial investigations focus on a known imaging task at a single location in the image volume. The framework adopts Fourier and analytical approximations for fast estimation of the local noise power spectrum (NPS) and modulation transfer function (MTF)—each carrying dependencies on TCM and regularization. For the single location optimization, the local detectability index (d') of the specific task was directly adopted as the objective function. A covariance matrix adaptation evolution strategy (CMA-ES) algorithm was employed to identify the optimal combination of imaging parameters. Evaluations of both conventional and task-driven approaches were performed in an abdomen phantom for a mid-frequency discrimination task in the kidney. Among the conventional strategies, the TCM pattern optimal for FBP using a minimum variance criterion yielded a worse task-based performance compared to an unmodulated strategy when applied to MBIR. Moreover, task-driven TCM designs for MBIR were found to have the opposite behavior from conventional designs for FBP, with greater fluence assigned to the less attenuating views of the abdomen and less fluence to the more attenuating lateral views. Such TCM patterns exaggerate the intrinsic anisotropy of the MTF and NPS as a result of the data weighting in MBIR. Directional penalty design was found to reinforce the same trend. The task-driven approaches outperform conventional approaches, with the maximum improvement in d' of 13% given by the joint optimization of TCM and regularization. This work demonstrates that the TCM optimal for MBIR is distinct from conventional strategies proposed for FBP reconstruction, and that strategies optimal for FBP are suboptimal and may even reduce performance when applied to MBIR. The task-driven imaging framework offers a promising approach for optimizing acquisition and reconstruction for MBIR that can improve imaging performance and/or dose utilization beyond conventional imaging strategies.
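The CMA-ES search over acquisition parameters can be sketched with the third-party cma package (ask/tell interface). The objective below is a crude, hypothetical surrogate for the local detectability index under a fixed total-fluence budget, not the Fourier-based MTF/NPS model of the paper; the TCM parameterization (three cosine coefficients) is likewise invented.

```python
import numpy as np
import cma  # third-party package: pip install cma

n_views = 90
angles = np.linspace(0, np.pi, n_views, endpoint=False)

def tcm_pattern(params):
    """Map low-dimensional parameters to a per-view fluence modulation (all hypothetical)."""
    a0, a1, a2 = params
    fluence = a0 + a1 * np.cos(2 * angles) + a2 * np.cos(4 * angles)
    return np.clip(fluence, 0.05, None)  # keep fluence positive

def neg_detectability(params, total_budget=float(n_views)):
    """Stand-in for the task-based objective: -d' under a fixed total-fluence budget."""
    fluence = tcm_pattern(params)
    fluence *= total_budget / fluence.sum()            # enforce constant total dose
    # Toy surrogate: lateral views see longer path lengths, hence more quantum noise,
    # so detectability benefits from spending fluence where attenuation is low.
    attenuation = 2.0 + 1.5 * np.abs(np.sin(angles))   # hypothetical path lengths
    noise_var = np.exp(attenuation) / fluence          # noise variance per view
    dprime = 1.0 / np.sqrt(np.mean(noise_var))
    return -dprime

es = cma.CMAEvolutionStrategy([1.0, 0.0, 0.0], 0.3, {'maxiter': 60, 'verbose': -9})
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [neg_detectability(p) for p in candidates])
print("optimized TCM coefficients:", np.round(es.result.xbest, 3))
```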
Evaluation of the optimal strategy for ex situ bioremediation of diesel oil-contaminated soil.
Lin, Ta-Chen; Pan, Po-Tsen; Young, Chiu-Chung; Chang, Jo-Shu; Chang, Tsung-Chung; Cheng, Sheng-Shung
2011-11-01
Bioaugmentation and biostimulation have been widely applied in the remediation of oil contamination. However, ambiguous results have been reported. It is important to reveal the controlling factors in the field for optimal selection of a remediation strategy. In this study, an integrated field landfarming technique was carried out to assess the relative effectiveness of five biological approaches on diesel degradation. The limiting factors during the degradation process were discussed. A total of five treatments were tested, including conventional landfarming, nutrient enhancement (NE), biosurfactant addition (BS), bioaugmentation (BA), and combination of bioaugmentation and biosurfactant addition (BAS). The consortium consisted of four diesel-degrading bacteria strains. Rhamnolipid was used as the biosurfactant. The diesel concentration, bacterial population, evolution of CO₂, and bacterial community in the soil were periodically measured. The best overall degradation efficiency was achieved by BAS treatment (90 ± 2%), followed by BA (86 ± 2%), NE (84 ± 3%), BS (78 ± 3%), and conventional landfarming (68 ± 3%). In the early stage, the total petroleum hydrocarbon was degraded 10 times faster than the degradation rates measured during the period from day 30 to 100. At the later stage, the degradation rates were similar among treatments. In the conventional landfarming, contaminated soil contained bacteria ready for diesel degradation. The availability of hydrocarbon was likely the limiting factor in the beginning of the degradation process. At the later stage, the degradation was likely limited by desorption and mass transfer of hydrocarbon in the soil matrix.
Kim, Yongbok; Modrick, Joseph M.; Pennington, Edward C.
2016-01-01
The objective of this work is to present commissioning procedures to clinically implement a three‐dimensional (3D), image‐based, treatment‐planning system (TPS) for high‐dose‐rate (HDR) brachytherapy (BT) for gynecological (GYN) cancer. The physical dimensions of the GYN applicators and their values in the virtual applicator library were varied by 0.4 mm of their nominal values. Reconstruction uncertainties of the titanium tandem and ovoids (T&O) were less than 0.4 mm on CT phantom studies and on average between 0.8‐1.0 mm on MRI when compared with X‐rays. In‐house software, HDRCalculator, was developed to check HDR plan parameters such as independently verifying active tandem or cylinder probe length and ovoid or cylinder size, source calibration and treatment date, and differences between average Point A dose and prescription dose. Dose‐volume histograms were validated using another independent TPS. Comprehensive procedures to commission volume optimization algorithms and process in 3D image‐based planning were presented. For the difference between line and volume optimizations, the average absolute differences as a percentage were 1.4% for total reference air KERMA (TRAK) and 1.1% for Point A dose. Volume optimization consistency tests between versions resulted in average absolute differences in 0.2% for TRAK and 0.9 s (0.2%) for total treatment time. The data revealed that the optimizer should run for at least 1 min in order to avoid more than 0.6% dwell time changes. For clinical GYN T&O cases, three different volume optimization techniques (graphical optimization, pure inverse planning, and hybrid inverse optimization) were investigated by comparing them against a conventional Point A technique. End‐to‐end testing was performed using a T&O phantom to ensure no errors or inconsistencies occurred from imaging through to planning and delivery. The proposed commissioning procedures provide a clinically safe implementation technique for 3D image‐based TPS for HDR BT for GYN cancer. PACS number(s): 87.55.D‐ PMID:27074463
Lens-free microscopy of cerebrospinal fluid for the laboratory diagnosis of meningitis
NASA Astrophysics Data System (ADS)
Delacroix, Robin; Morel, Sophie Nhu An; Hervé, Lionel; Bordy, Thomas; Blandin, Pierre; Dinten, Jean-Marc; Drancourt, Michel; Allier, Cédric
2018-02-01
The cytology of the cerebrospinal fluid is traditionally performed by an operator (physician, biologist) by means of a conventional light microscope. The operator visually counts the leukocytes (white blood cells) present in a sample of cerebrospinal fluid (10 μl). It is a tedious job and the result is operator-dependent. Here, in order to circumvent the limitations of manual counting, we approach the question of numeration of erythrocytes and leukocytes for the cytological diagnosis of meningitis by means of lens-free microscopy. In a first step, a prospective count of leukocytes was performed by five different operators using conventional optical microscopy. The visual counting yielded an overall 16.7% misclassification of 72 cerebrospinal fluid specimens in meningitis/non-meningitis categories using a 10 leukocyte/μL cut-off. In a second step, the lens-free microscopy algorithm was adapted step-by-step for counting cerebrospinal fluid cells and discriminating leukocytes from erythrocytes. The optimization of the automatic lens-free counting was based on the prospective analysis of 215 cerebrospinal fluid specimens. The optimized algorithm yielded a 100% sensitivity and an 86% specificity compared to confirmed diagnostics. In a third step, a blind lens-free microscopic analysis of 116 cerebrospinal fluid specimens, including six cases of microbiology-confirmed infectious meningitis, yielded a 100% sensitivity and a 79% specificity. Adapted lens-free microscopy is thus emerging as an operator-independent technique for the rapid numeration of leukocytes and erythrocytes in cerebrospinal fluid. In particular, this technique is well suited to the rapid diagnosis of meningitis at point-of-care laboratories.
Li, Nailu; Mu, Anle; Yang, Xiyun; Magar, Kaman T; Liu, Chao
2018-05-01
The optimal tuning of an adaptive flap controller can improve adaptive flap control performance in uncertain operating environments, but the optimization process is usually time-consuming and it is difficult to design a proper optimal tuning strategy for the flap control system (FCS). To solve this problem, a novel adaptive flap controller is designed based on a highly efficient differential evolution (DE) identification technique and a composite adaptive internal model control (CAIMC) strategy. The optimal tuning can be easily obtained by the DE-identified inverse of the FCS via the CAIMC structure. To achieve fast tuning, a highly efficient modified adaptive DE algorithm is proposed with a new mutant operator and a varying-range adaptive mechanism for the FCS identification. A tradeoff between optimized adaptive flap control and low computation cost is successfully achieved by the proposed controller. Simulation results show the robustness of the proposed method and its superiority to the conventional adaptive IMC (AIMC) flap controller and the CAIMC flap controllers using other DE algorithms under various uncertain operating conditions. The high computation efficiency of the proposed controller is also verified based on the computation time for those operating cases. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
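For readers unfamiliar with DE-based identification, the sketch below uses a classic DE/rand/1/bin loop (not the modified adaptive DE of the paper) to identify the parameters of a toy first-order flap model from simulated input-output data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical flap dynamics: y[k] = a*y[k-1] + b*u[k-1]; we identify (a, b) from data.
true_a, true_b = 0.85, 0.4
u = rng.normal(size=300)
y = np.zeros_like(u)
for k in range(1, len(u)):
    y[k] = true_a * y[k - 1] + true_b * u[k - 1] + 0.01 * rng.normal()

def identification_error(theta):
    """Mean-squared simulation error of the candidate model (a, b)."""
    a, b = theta
    y_hat = np.zeros_like(u)
    for k in range(1, len(u)):
        y_hat[k] = a * y_hat[k - 1] + b * u[k - 1]
    return np.mean((y - y_hat) ** 2)

def differential_evolution(cost, bounds, pop_size=20, F=0.6, CR=0.9, generations=80):
    """Classic DE/rand/1/bin; the paper's modified adaptive DE adds its own operators."""
    lo, hi = np.array(bounds).T
    pop = lo + rng.random((pop_size, len(bounds))) * (hi - lo)
    fitness = np.array([cost(p) for p in pop])
    for _ in range(generations):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            cross = rng.random(len(bounds)) < CR
            trial = np.where(cross, mutant, pop[i])
            f_trial = cost(trial)
            if f_trial < fitness[i]:
                pop[i], fitness[i] = trial, f_trial
    best = np.argmin(fitness)
    return pop[best], fitness[best]

theta, err = differential_evolution(identification_error, bounds=[(0, 1), (0, 1)])
print("identified (a, b):", np.round(theta, 3), "mse:", err)
```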
Computer Based Porosity Design by Multi Phase Topology Optimization
NASA Astrophysics Data System (ADS)
Burblies, Andreas; Busse, Matthias
2008-02-01
A numerical simulation technique called Multi Phase Topology Optimization (MPTO), based on the finite element method, has been developed and refined by Fraunhofer IFAM during the last five years. MPTO is able to determine the optimum distribution of two or more different materials in components under thermal and mechanical loads. The objective of the optimization is to minimize the component's elastic energy. Conventional topology optimization methods that simulate adaptive bone mineralization have the disadvantage that mass changes continuously through the growth process. MPTO keeps all initial material concentrations and uses methods adapted from molecular dynamics to find the energy minimum. Applying MPTO to mechanically loaded components with a high number of different material densities, the optimization results show graded and sometimes anisotropic porosity distributions which are very similar to natural bone structures. It is now possible to design the macro- and microstructure of a mechanical component in one step. Computer-based porosity design structures can be manufactured by new rapid prototyping technologies. Fraunhofer IFAM has successfully applied 3D printing and selective laser sintering methods in order to produce very stiff, lightweight components with graded porosities calculated by MPTO.
Tapper, Anna-Maija; Hannola, Mikko; Zeitlin, Rainer; Isojärvi, Jaana; Sintonen, Harri; Ikonen, Tuija S
2014-06-01
In order to assess the effectiveness and costs of robot-assisted hysterectomy compared with conventional techniques, we reviewed the literature separately for benign and malignant conditions, and conducted a cost analysis for different techniques of hysterectomy from a hospital economic database. An unlimited systematic literature search of Medline, Cochrane and CRD databases produced only two randomized trials, both for benign conditions. For the outcome assessment, data from two HTA reports, one systematic review, and 16 original articles were extracted and analyzed. Furthermore, one cost modelling and 13 original cost studies were analyzed. In malignant conditions, less blood loss, fewer complications and a shorter hospital stay were considered as the main advantages of robot-assisted surgery, as with any minimally invasive technique compared with open surgery. There were no significant differences between the techniques regarding oncological outcomes. When compared to laparoscopic hysterectomy, the main benefit of robot-assistance was a shorter learning curve associated with fewer conversions, but the length of the robotic operation was often longer. In benign conditions, no clinically significant differences were reported and vaginal hysterectomy was considered the optimal choice when feasible. According to Finnish data, the costs of robot-assisted hysterectomies were 1.5-3 times higher than the costs of conventional techniques. In benign conditions the difference in cost was highest. Because of expensive disposable supplies, unit costs were high regardless of the annual number of robotic operations. Hence, under the current cost structure, economic effectiveness cannot be markedly improved by increasing the volume of robotic surgery. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Stöggl, Thomas; Müller, Erich; Lindinger, Stefan
2008-09-01
The aims of the study were to: (1) adapt the "double-push" technique from inline skating to cross-country skiing; (2) compare this new skiing technique with the conventional skate skiing cross-country technique; and (3) test the hypothesis that the double-push technique improves skiing speed in a short sprint. 13 elite skiers performed maximum-speed sprints over 100 m using the double-push skate skiing technique and using the conventional "V2" skate skiing technique. Pole and plantar forces, knee angle, cycle characteristics, and electromyography of nine lower body muscles were analysed. We found that the double-push technique could be successfully transferred to cross-country skiing, and that this new technique is faster than the conventional skate skiing technique. The double-push technique was 2.9 +/- 2.2% faster (P < 0.001), which corresponds to a time advantage of 0.41 +/- 0.31 s over 100 m. The double-push technique had a longer cycle length and a lower cycle rate, and it was characterized by higher muscle activity, higher knee extension amplitudes and velocities, and higher peak foot forces, especially in the first phase of the push-off. Also, the foot was more loaded laterally in the double-push technique than in the conventional skate skiing technique.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lines, L.; Burton, A.; Lu, H.X.
Accurate velocity models are a necessity for reliable migration results. Velocity analysis generally involves the use of methods such as normal moveout analysis (NMO), seismic traveltime tomography, or iterative prestack migration. These techniques can be effective, and each has its own advantage or disadvantage. Conventional NMO methods are relatively inexpensive but basically require simplifying assumptions about geology. Tomography is a more general method but requires traveltime interpretation of prestack data. Iterative prestack depth migration is very general but is computationally expensive. In some cases, there is the opportunity to estimate vertical velocities by use of well information. The well information can be used to optimize poststack migrations, thereby eliminating some of the time and expense of iterative prestack migration. The optimized poststack migration procedure defined here computes the velocity model which minimizes the depth differences between seismic images and formation depths at the well by using a least squares inversion method. The optimization methods described in this paper will hopefully produce "migrations without migraines."
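The least-squares idea is simple enough to sketch: horizon depths are linear in the interval velocities, so the misties between image depths and well formation tops can be minimized by (damped) linear least squares. Traveltimes, depths, and the damping weight below are invented for illustration.

```python
import numpy as np

# Two-way traveltimes (s) from the stacked section to three horizons at the well,
# and the corresponding formation depths (m) from the well log (values invented).
t_two_way = np.array([0.8, 1.4, 2.1])
z_well = np.array([1180.0, 2310.0, 3640.0])

# Depth of horizon n is the sum over the layers above it of v_i * dt_i / 2,
# so the mistie is linear in the interval velocities v_i.
dt = np.diff(np.concatenate(([0.0], t_two_way)))         # two-way time spent in each layer
A = np.tril(np.ones((len(dt), len(dt)))) * (dt / 2.0)     # depth = A @ v

v_initial = np.array([2800.0, 3200.0, 3600.0])            # starting (e.g. stacking) velocities

# Damped least squares: fit the well depths while staying close to the initial model.
damping = 0.1
lhs = np.vstack([A, damping * np.eye(len(dt))])
rhs = np.concatenate([z_well, damping * v_initial])
v_opt, *_ = np.linalg.lstsq(lhs, rhs, rcond=None)

print("updated interval velocities (m/s):", np.round(v_opt))
print("predicted depths (m):", np.round(A @ v_opt))
```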
Kurihara, Miki; Ikeda, Koji; Izawa, Yoshinori; Deguchi, Yoshihiro; Tarui, Hitoshi
2003-10-20
A laser-induced breakdown spectroscopy (LIBS) technique has been applied for detection of unburned carbon in fly ash, and an automated LIBS unit has been developed and applied in a 1000-MW pulverized-coal-fired power plant for real-time measurement, specifically of unburned carbon in fly ash. Good agreement was found between measurement results from the LIBS method and those from the conventional method (Japanese Industrial Standard 8815), with a standard deviation of 0.27%. This result confirms that the measurement of unburned carbon in fly ash by use of LIBS is sufficiently accurate for boiler control. Measurements taken by this apparatus were also integrated into a boiler-control system with the objective of achieving optimal and stable combustion. By control of the rotating speed of a mill rotary separator relative to measured unburned-carbon content, it has been demonstrated that boiler control is possible in an optimized manner by use of the value of the unburned-carbon content of fly ash.
An update on coating/manufacturing techniques of microneedles.
Tarbox, Tamara N; Watts, Alan B; Cui, Zhengrong; Williams, Robert O
2017-12-29
Recently, results have been published for the first successful phase I human clinical trial investigating the use of dissolving polymeric microneedles… Even so, further clinical development represents an important hurdle that remains in the translation of microneedle technology to approved products. Specifically, the potential for accumulation of polymer within the skin upon repeated application of dissolving and coated microneedles, combined with a lack of safety data in humans, indicates a need for further clinical investigation. Polymers are an important consideration for microneedle technology, from both manufacturing and drug delivery perspectives. The use of polymers enables a tunable delivery strategy, but the scalability of conventional manufacturing techniques could arguably benefit from further optimization. Micromolding has been suggested in the literature as a commercially viable means to mass production of both dissolving and swellable microneedles. However, the reliance on master molds, which are commonly manufactured using resource-intensive microelectronics industry-derived processes, imparts notable material and design limitations. Further, the inherently multi-step filling and handling processes associated with micromolding are typically batch processes, which can be challenging to scale up. Similarly, conventional microneedle coating processes often follow step-wise batch processing. Recent developments in microneedle coating and manufacturing techniques are highlighted, including micromilling, atomized spraying, inkjet printing, drawing lithography, droplet-born air blowing, electro-drawing, continuous liquid interface production, 3D printing, and polyelectrolyte multilayer coating. This review provides an analysis of papers reporting on potentially scalable production techniques for the coating and manufacturing of microneedles.
[Virtual CT-pneumocystoscopy: indications, advantages and limitations. Our experience].
Regine, Giovanni; Atzori, Maurizio; Buffa, Vitaliano; Miele, Vittorio; Ialongo, Pasquale; Adami, Loredana
2003-09-01
The use of CT volume-rendering techniques allows the evaluation of visceral organs without the need for endoscopy. Conventional endoscopic evaluation of the bladder is limited by the invasiveness of the technique and the difficulty exploring the entire bladder. Virtual evaluation of the bladder by three-dimensional CT reconstruction offers potential advantages and can be used in place of endoscopy. This study investigates the sensitivity of virtual CT in assessing lesions of the bladder wall to compare it with that of conventional endoscopy, and outlines the indications, advantages and disadvantages of virtual CT-pneumocystography. Between September 2001 and May 2002, 21 patients with haematuria and positive cystoscopic findings were studied. After an initial assessment by ultrasound, the patients underwent pelvic CT as a single volumetric scan after preliminary air distension of the bladder by means of a 12 F Foley catheter. The images were processed on an independent workstation (Advantage 3.0 GE) running dedicated software for endoluminal navigation. The lesions detected by endoscopy were classified as sessile or pedunculated, and according to size (more or less than 5 mm). Finally, the results obtained at virtual cystoscopy were evaluated by two radiologists blinded to the conventional cystoscopy results. Thirty lesions (24 pedunculated, 6 sessile) were detected at conventional cystoscopy in 16 patients (multiple polyposis in 3 cases). Virtual cystoscopy identified 23 lesions (19 pedunculated and 4 sessile). The undetected lesions were pedunculated <5 mm (5 cases) and sessile (2 cases). One correctly identified pedunculated lesion was associated with a bladder stone. Good quality virtual images were obtained in all of the patients. In only one patient with multiple polyposis the quality of the virtual endoscopic evaluation was limited by the patient's intolerance to bladder distension, although identification of the lesions was not compromised. The overall sensitivity was 77%; this was higher for pedunculated lesions (79%) than for sessile lesions (50%). The virtual technique is less invasive and tends to be associated with fewer complications than is conventional cystoscopy. It also demonstrated a good sensitivity for evaluating pedunculated lesions, allowing evaluation of the bladder base and anterior wall, sites that are commonly poorly accessible at conventional cystoscopy. Further advantages of the virtual technique include the possibility of accurately measuring the extent of the lesion and obtaining virtual images even in patients with severe urethral obstruction and active bleeding. The limitations include the inability to obtain tissue for histologic examination or to perform endoscopic resection of pedunculated lesions. The technique is less sensitive than conventional cystoscopy in the detection of sessile lesions or very small polyps (<5 mm). Furthermore, diffuse wall thickening reduces bladder distension, thereby preventing optimal evaluation. The most valuable indication appears to be the follow-up of treated wall lesions. Virtual CT-pneumocystoscopy can replace conventional cystoscopy in cases with pedunculated lesions when there is no need for biopsy, when the lesions are located at the bladder base or when cystoscopic instrumentation cannot be introduced into the bladder due to stenosis. Virtual pneumocystoscopy can also be used in the follow-up of treated polypoid lesions in association with pelvic CT-angiography.
Gang, G J; Siewerdsen, J H; Stayman, J W
2017-02-11
This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d') across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β): the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.
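A toy version of the maxi-min FFM objective is sketched below: the fluence pattern is built from a few Gaussian bumps over view angle (the paper uses 2D basis functions), a crude surrogate replaces the detectability model, and SciPy's differential evolution stands in for CMA-ES. Everything numerical here is hypothetical.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Source positions (view angles) and a few sample locations at which detectability
# must be guaranteed (geometry and physics are toy values).
angles = np.linspace(0, 2 * np.pi, 60, endpoint=False)
sample_locations = np.array([[0.0, 0.0], [6.0, 0.0], [0.0, 9.0]])   # cm, in-plane

def ffm_pattern(coeffs):
    """Fluence field modulation built from three Gaussian bumps over view angle."""
    centers = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
    pattern = 0.1 + sum(c * np.exp(-0.5 * ((angles - mu) / 0.5) ** 2)
                        for c, mu in zip(coeffs, centers))
    return pattern * (len(angles) / pattern.sum())        # fixed total fluence budget

def local_dprime(fluence, loc):
    """Toy surrogate for location-dependent detectability (not the paper's model)."""
    path = 15.0 + 0.5 * (loc[0] * np.cos(angles) + loc[1] * np.sin(angles))
    noise = np.exp(0.2 * path) / fluence                  # quantum noise per view
    return 1.0 / np.sqrt(np.mean(noise))

def neg_maximin(coeffs):
    fluence = ffm_pattern(coeffs)
    return -min(local_dprime(fluence, loc) for loc in sample_locations)

result = differential_evolution(neg_maximin, bounds=[(0.0, 5.0)] * 3,
                                maxiter=50, seed=4, polish=False)
print("basis coefficients:", np.round(result.x, 2),
      "maxi-min d':", round(-result.fun, 3))
```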
NASA Astrophysics Data System (ADS)
Trujillo Bueno, J.; Fabiani Bendicho, P.
1995-12-01
Iterative schemes based on Gauss-Seidel (G-S) and optimal successive over-relaxation (SOR) iteration are shown to provide a dramatic increase in the speed with which non-LTE radiation transfer (RT) problems can be solved. The convergence rates of these new RT methods are identical to those of upper triangular nonlocal approximate operator splitting techniques, but the computing time per iteration and the memory requirements are similar to those of a local operator splitting method. In addition to these properties, both methods are particularly suitable for multidimensional geometry, since they neither require the actual construction of nonlocal approximate operators nor the application of any matrix inversion procedure. Compared with the currently used Jacobi technique, which is based on the optimal local approximate operator (see Olson, Auer, & Buchler 1986), the G-S method presented here is faster by a factor of 2. It gives excellent smoothing of the high-frequency error components, which makes it the iterative scheme of choice for multigrid radiative transfer. This G-S method can also be suitably combined with standard acceleration techniques to achieve even higher performance. Although the convergence rate of the optimal SOR scheme developed here for solving non-LTE RT problems is much higher than G-S, the computing time per iteration is also minimal, i.e., virtually identical to that of a local operator splitting method. While the conventional optimal local operator scheme provides the converged solution after a total CPU time (measured in arbitrary units) approximately equal to the number n of points per decade of optical depth, the time needed by this new method based on the optimal SOR iterations is only √n/(2√2). This method is competitive with those that result from combining the above-mentioned Jacobi and G-S schemes with the best acceleration techniques. Contrary to what happens with the local operator splitting strategy currently in use, these novel methods remain effective even under extreme non-LTE conditions in very fine grids.
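The Gauss-Seidel versus SOR comparison can be illustrated on a generic diagonally dominant linear system; this is not the non-LTE transfer problem itself (which updates the source function during the formal solution sweep), only the underlying iteration with an over-relaxation parameter ω.

```python
import numpy as np

def sor_solve(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
    """Successive over-relaxation; omega = 1 reduces to Gauss-Seidel."""
    x = np.zeros_like(b)
    for iteration in range(max_iter):
        x_old = x.copy()
        for i in range(len(b)):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            return x, iteration + 1
    return x, max_iter

# Toy diagonally dominant tridiagonal system standing in for the discretized equations.
n = 50
A = (np.diag(4.0 * np.ones(n))
     + np.diag(-1.0 * np.ones(n - 1), 1)
     + np.diag(-1.0 * np.ones(n - 1), -1))
b = np.ones(n)

for omega in (1.0, 1.5):   # 1.0 = Gauss-Seidel, 1.5 = over-relaxed
    _, iters = sor_solve(A, b, omega)
    print(f"omega = {omega}: converged in {iters} iterations")
```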
Channel modeling, signal processing and coding for perpendicular magnetic recording
NASA Astrophysics Data System (ADS)
Wu, Zheng
With the increasing areal density in magnetic recording systems, perpendicular recording has replaced longitudinal recording to overcome the superparamagnetic limit. Studies on perpendicular recording channels including aspects of channel modeling, signal processing and coding techniques are presented in this dissertation. To optimize a high density perpendicular magnetic recording system, one needs to know the tradeoffs between various components of the system including the read/write transducers, the magnetic medium, and the read channel. We extend the work by Chaichanavong on the parameter optimization for systems via design curves. Different signal processing and coding techniques are studied. Information-theoretic tools are utilized to determine the acceptable region for the channel parameters when optimal detection and linear coding techniques are used. Our results show that a considerable gain can be achieved by the optimal detection and coding techniques. The read-write process in perpendicular magnetic recording channels includes a number of nonlinear effects. Nonlinear transition shift (NLTS) is one of them. The signal distortion induced by NLTS can be reduced by write precompensation during data recording. We numerically evaluate the effect of NLTS on the read-back signal and examine the effectiveness of several write precompensation schemes in combating NLTS in a channel characterized by both transition jitter noise and additive white Gaussian electronics noise. We also present an analytical method to estimate the bit-error-rate and use it to help determine the optimal write precompensation values in multi-level precompensation schemes. We propose a mean-adjusted pattern-dependent noise predictive (PDNP) detection algorithm for use on the channel with NLTS. We show that this detector can offer significant improvements in bit-error-rate (BER) compared to conventional Viterbi and PDNP detectors. Moreover, the system performance can be further improved by combining the new detector with a simple write precompensation scheme. Soft-decision decoding for algebraic codes can improve performance for magnetic recording systems. In this dissertation, we propose two soft-decision decoding methods for tensor-product parity codes. We also present a list decoding algorithm for generalized error locating codes.
Kutkut, Ahmad; Abu-Hammad, Osama; Frazer, Robert
2016-01-01
Impression techniques for implant restorations can be implant level or abutment level impressions with open tray or closed tray techniques. Conventional implant-abutment level impression techniques are predictable for maximizing esthetic outcomes. Restoration of the implant traditionally requires the use of the metal or plastic impression copings, analogs, and laboratory components. Simplifying the dental implant restoration by reducing armamentarium through incorporating conventional techniques used daily for crowns and bridges will allow more general dentists to restore implants in their practices. The demonstrated technique is useful when modifications to implant abutments are required to correct the angulation of malpositioned implants. This technique utilizes conventional crown and bridge impression techniques. As an added benefit, it reduces costs by utilizing techniques used daily for crowns and bridges. The aim of this report is to describe a simplified conventional impression technique for custom abutments and modified prefabricated solid abutments for definitive restorations. PMID:29563457
Fast wavefront optimization for focusing through biological tissue (Conference Presentation)
NASA Astrophysics Data System (ADS)
Blochet, Baptiste; Bourdieu, Laurent; Gigan, Sylvain
2017-02-01
The propagation of light in biological tissues is rapidly dominated by multiple scattering: ballistic light is exponentially attenuated, which limits the penetration depth of conventional microscopy techniques. For coherent light, the recombination of the different scattered paths creates a complex interference: speckle. Recently, different wavefront shaping techniques have been developed to coherently manipulate the speckle. This opens the possibility of focusing light through complex media and ultimately of imaging in them, provided however that the medium can be considered stationary. We have studied the possibility of focusing in and through time-varying biological tissues. Their intrinsic temporal dynamics creates a fast decorrelation of the speckle pattern. Therefore, focusing through biological tissues requires fast wavefront shaping devices, sensors and algorithms. We have investigated the use of a MEMS-based spatial light modulator (SLM) and a fast photodetector, combined with FPGA electronics to implement a closed-loop optimization. Our optimization process is limited only by the temporal dynamics of the SLM (200 µs) and the computation time (45 µs), corresponding to a rate of 4 kHz. To our knowledge, this is the fastest closed-loop optimization using phase modulators. We have studied focusing through colloidal solutions of TiO2 particles in glycerol, allowing tunable temporal stability, and scattering properties similar to biological tissues. We have shown that our set-up fulfills the required characteristics (speed, enhancement) to focus through biological tissues. We are currently investigating focusing through acute rat brain slices and the memory effect in dynamic scattering media.
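A closed-loop, segment-by-segment phase optimization of the kind used for focusing through scattering media can be sketched with a random transmission vector standing in for the medium. The hardware loop (MEMS SLM, photodetector, FPGA) is replaced here by a pure simulation with invented sizes.

```python
import numpy as np

rng = np.random.default_rng(5)
n_segments = 64                      # SLM segments (modes) being controlled
# One row of a random transmission matrix: field at the target focus per unit input.
t = (rng.normal(size=n_segments) + 1j * rng.normal(size=n_segments)) / np.sqrt(2)

def focus_intensity(phases):
    """Detector signal at the chosen focus for a given SLM phase pattern."""
    field = np.sum(np.exp(1j * phases) * t)
    return np.abs(field) ** 2

phases = np.zeros(n_segments)
test_phases = np.linspace(0, 2 * np.pi, 8, endpoint=False)
baseline = focus_intensity(phases)

# Stepwise sequential optimization: test a handful of phases on one segment at a time
# and keep the value that maximizes the detected intensity.
for seg in range(n_segments):
    intensities = []
    for phi in test_phases:
        trial = phases.copy()
        trial[seg] = phi
        intensities.append(focus_intensity(trial))
    phases[seg] = test_phases[int(np.argmax(intensities))]

print("intensity enhancement:", focus_intensity(phases) / baseline)
```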
The influence of multispectral scanner spatial resolution on forest feature classification
NASA Technical Reports Server (NTRS)
Sadowski, F. G.; Malila, W. A.; Sarno, J. E.; Nalepka, R. F.
1977-01-01
Inappropriate spatial resolution and corresponding data processing techniques may be major causes for non-optimal forest classification results frequently achieved from multispectral scanner (MSS) data. Procedures and results of empirical investigations are studied to determine the influence of MSS spatial resolution on the classification of forest features into levels of detail or hierarchies of information that might be appropriate for nationwide forest surveys and detailed in-place inventories. Two somewhat different, but related studies are presented. The first consisted of establishing classification accuracies for several hierarchies of features as spatial resolution was progressively coarsened from (2 meters)² to (64 meters)². The second investigated the capabilities for specialized processing techniques to improve upon the results of conventional processing procedures for both coarse and fine resolution data.
NASA Astrophysics Data System (ADS)
Divecha, Mia S.; Derby, Jeffrey J.
2017-06-01
We employ finite-element modeling to assess the effects of the accelerated crucible rotation technique (ACRT) on cadmium zinc telluride (CZT) crystals grown from a gradient freeze system. Via consideration of tellurium segregation and transport, we show, for the first time, that steady growth from a tellurium-rich melt produces persistent undercooling in front of the growth interface, likely leading to morphological instability. The application of ACRT rearranges melt flows and tellurium transport but, in contrast to conventional wisdom, does not altogether eliminate undercooling of the melt. Rather, a much more complicated picture arises, where spatio-temporal realignment of undercooled melt may act to locally suppress instability. A better understanding of these mechanisms and quantification of their overall effects will allow for future growth optimization.
Kellogg, Joshua J; Wallace, Emily D; Graf, Tyler N; Oberlies, Nicholas H; Cech, Nadja B
2017-10-25
Metabolomics has emerged as an important analytical technique for multiple applications. The value of information obtained from metabolomics analysis depends on the degree to which the entire metabolome is present and the reliability of sample treatment to ensure reproducibility across the study. The purpose of this study was to compare methods of preparing complex botanical extract samples prior to metabolomics profiling. Two extraction methodologies, accelerated solvent extraction and a conventional solvent maceration, were compared using commercial green tea [Camellia sinensis (L.) Kuntze (Theaceae)] products as a test case. The accelerated solvent protocol was first evaluated to ascertain critical factors influencing extraction using a D-optimal experimental design study. The accelerated solvent and conventional extraction methods yielded similar metabolite profiles for the green tea samples studied. The accelerated solvent extraction yielded higher total amounts of extracted catechins, was more reproducible, and required less active bench time to prepare the samples. This study demonstrates the effectiveness of accelerated solvent as an efficient methodology for metabolomics studies. Copyright © 2017. Published by Elsevier B.V.
Advanced Protection & Service Restoration for FREEDM Systems
NASA Astrophysics Data System (ADS)
Singh, Urvir
A smart electric power distribution system (FREEDM system) that incorporates DERs (Distributed Energy Resources), SSTs (Solid State Transformers - that can limit the fault current to two times the rated current) & RSC (Reliable & Secure Communication) capabilities has been studied in this work in order to develop appropriate protection & service restoration techniques for it. First, a solution is proposed that enables conventional protective devices to provide effective protection for FREEDM systems. Results show that although this scheme can provide the required protection, it can be quite slow. Using the FREEDM system's communication capabilities, a communication-assisted overcurrent (O/C) protection scheme is proposed, and results show that by using communication (blocking signals) very fast operating times are achieved, thereby mitigating the problem of the conventional O/C scheme. Using the FREEDM System's DGI (Distributed Grid Intelligence) capability, an automated FLISR (Fault Location, Isolation & Service Restoration) scheme is proposed that is based on the concept of 'software agents' & uses less data than conventional centralized approaches. Test results illustrated that this scheme is able to provide a globally optimal system reconfiguration for service restoration.
On Maximizing the Lifetime of Wireless Sensor Networks by Optimally Assigning Energy Supplies
Asorey-Cacheda, Rafael; García-Sánchez, Antonio Javier; García-Sánchez, Felipe; García-Haro, Joan; Gonzalez-Castaño, Francisco Javier
2013-01-01
The extension of the network lifetime of Wireless Sensor Networks (WSN) is an important issue that has not been appropriately solved yet. This paper addresses this concern and proposes some techniques to plan an arbitrary WSN. To this end, we suggest a hierarchical network architecture, similar to realistic scenarios, where nodes with renewable energy sources (denoted as primary nodes) carry out most message delivery tasks, and nodes equipped with conventional chemical batteries (denoted as secondary nodes) are those with less communication demands. The key design issue of this network architecture is the development of a new optimization framework to calculate the optimal assignment of renewable energy supplies (primary node assignment) to maximize network lifetime, obtaining the minimum number of energy supplies and their node assignment. We also conduct a second optimization step to additionally minimize the number of packet hops between the source and the sink. In this work, we present an algorithm that approaches the results of the optimization framework, but with much faster execution speed, which is a good alternative for large-scale WSN networks. Finally, the network model, the optimization process and the designed algorithm are further evaluated and validated by means of computer simulation under realistic conditions. The results obtained are discussed comparatively. PMID:23939582
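A toy version of the primary-node assignment problem is sketched below as a greedy heuristic on a small data-gathering tree: renewable supplies go to the nodes with the largest relay loads, and lifetime is taken as the time until the first battery-powered node is exhausted. This is only in the spirit of the optimization framework described; the paper formulates and solves the exact problem.

```python
# Toy data-gathering tree: parent of each node (node 0 is the sink).
parent = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2, 6: 2, 7: 5}
battery_joules = 200.0
energy_per_packet = 0.05          # J to receive and forward one packet
packets_per_hour = 10             # generated by every node

# Relay load of a node = packets from itself plus all of its descendants.
# (In this toy tree every child has a larger id than its parent.)
load = {n: packets_per_hour for n in parent}
for node in sorted(parent, reverse=True):
    if parent[node] in load:
        load[parent[node]] += load[node]

def lifetime_hours(primary_nodes):
    """Lifetime = time until the first secondary (battery) node exhausts its energy."""
    secondary = [n for n in parent if n not in primary_nodes]
    drain = [load[n] * energy_per_packet for n in secondary]   # J per hour
    return battery_joules / max(drain) if drain else float("inf")

# Greedy assignment: give the k renewable supplies to the most heavily loaded nodes.
k = 2
primary = set(sorted(parent, key=lambda n: load[n], reverse=True)[:k])
print("primary (renewable) nodes:", primary)
print("estimated lifetime (h):", round(lifetime_hours(primary), 1))
print("lifetime with no primaries (h):", round(lifetime_hours(set()), 1))
```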
Multivariable optimization of liquid rocket engines using particle swarm algorithms
NASA Astrophysics Data System (ADS)
Jones, Daniel Ray
Liquid rocket engines are highly reliable, controllable, and efficient compared to other conventional forms of rocket propulsion. As such, they have seen wide use in the space industry and have become the standard propulsion system for launch vehicles, orbit insertion, and orbital maneuvering. Though these systems are well understood, historical optimization techniques are often inadequate due to the highly non-linear nature of the engine performance problem. In this thesis, a Particle Swarm Optimization (PSO) variant was applied to maximize the specific impulse of a finite-area combustion chamber (FAC) equilibrium flow rocket performance model by controlling the engine's oxidizer-to-fuel ratio and de Laval nozzle expansion and contraction ratios. In addition to the PSO-controlled parameters, engine performance was calculated based on propellant chemistry, combustion chamber pressure, and ambient pressure, which are provided as inputs to the program. The performance code was validated by comparison with NASA's Chemical Equilibrium with Applications (CEA) and the commercially available Rocket Propulsion Analysis (RPA) tool. Similarly, the PSO algorithm was validated by comparison with brute-force optimization, which calculates all possible solutions and subsequently determines which is the optimum. Particle Swarm Optimization was shown to be an effective optimizer capable of quick and reliable convergence for complex functions of multiple non-linear variables.
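A compact, generic PSO loop is sketched below. The objective is a smooth, made-up surrogate for specific impulse as a function of O/F ratio and expansion ratio; the actual work evaluates a finite-area combustion-chamber equilibrium model in its place.

```python
import numpy as np

rng = np.random.default_rng(6)

def surrogate_isp(x):
    """Dummy stand-in for the FAC equilibrium performance model (not real chemistry)."""
    of_ratio, expansion_ratio = x
    return (380.0
            - 6.0 * (of_ratio - 5.8) ** 2                  # peak near a made-up optimum O/F
            + 25.0 * np.log(expansion_ratio) / (1 + 0.02 * expansion_ratio))

bounds = np.array([[3.0, 9.0],      # oxidizer-to-fuel mass ratio
                   [5.0, 120.0]])   # nozzle expansion (area) ratio

n_particles, n_iters = 30, 100
w, c1, c2 = 0.7, 1.5, 1.5           # inertia, cognitive and social coefficients

pos = bounds[:, 0] + rng.random((n_particles, 2)) * (bounds[:, 1] - bounds[:, 0])
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([surrogate_isp(p) for p in pos])
gbest = pbest[np.argmax(pbest_val)]

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, bounds[:, 0], bounds[:, 1])
    values = np.array([surrogate_isp(p) for p in pos])
    improved = values > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], values[improved]
    gbest = pbest[np.argmax(pbest_val)]

print("best O/F and expansion ratio:", np.round(gbest, 2),
      "surrogate Isp:", round(surrogate_isp(gbest), 1))
```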
Nonlinear programming extensions to rational function approximations of unsteady aerodynamics
NASA Technical Reports Server (NTRS)
Tiffany, Sherwood H.; Adams, William M., Jr.
1987-01-01
This paper deals with approximating unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft. Two methods of formulating these approximations are extended to include both the same flexibility in constraining them and the same methodology in optimizing nonlinear parameters as another currently used 'extended least-squares' method. Optimal selection of 'nonlinear' parameters is made in each of the three methods by use of the same nonlinear (nongradient) optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is of lower order than that required when no optimization of the nonlinear terms is performed. The free 'linear' parameters are determined using least-squares matrix techniques on a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from the different approaches are described, and results are presented which show comparative evaluations from application of each of the extended methods to a numerical example. The results obtained for the example problem show a significant (up to 63 percent) reduction in the number of differential equations used to represent the unsteady aerodynamic forces in linear time-invariant equations of motion as compared to a conventional method in which nonlinear terms are not optimized.
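The two-level structure described (linear least squares for the coefficients, a derivative-free search for the nonlinear lag parameters) can be sketched in scalar form. The "aerodynamic data" below are synthetic, and a coarse grid search stands in for the nongradient optimizer used in the paper.

```python
import numpy as np
from itertools import product

k = np.linspace(0.01, 2.0, 25)              # reduced frequencies
s = 1j * k

# Synthetic "tabulated" unsteady aerodynamic data (a stand-in for computed forces).
Q_data = 0.5 - 0.1 * s + 0.02 * s**2 + 0.8 * s / (s + 0.3) - 0.4 * s / (s + 1.1)

def fit_linear_coeffs(lags):
    """For fixed lag roots, the A-coefficients of the rational approximation
    Q(s) ~ A0 + A1*s + A2*s^2 + sum_j A_j * s/(s + b_j) are linear in the data."""
    cols = [np.ones_like(s), s, s**2] + [s / (s + b) for b in lags]
    M = np.column_stack(cols)
    # Stack real and imaginary parts so the unknown coefficients stay real.
    M_ri = np.vstack([M.real, M.imag])
    Q_ri = np.concatenate([Q_data.real, Q_data.imag])
    coeffs, *_ = np.linalg.lstsq(M_ri, Q_ri, rcond=None)
    resid = np.linalg.norm(M_ri @ coeffs - Q_ri)
    return coeffs, resid

# Outer, derivative-free search over the nonlinear lag parameters (coarse grid here).
grid = np.linspace(0.1, 2.0, 20)
best = min(product(grid, grid), key=lambda lags: fit_linear_coeffs(lags)[1])
coeffs, resid = fit_linear_coeffs(best)
print("best lag roots:", np.round(best, 2), "fit residual:", round(resid, 4))
```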
Optimal time points sampling in pathway modelling.
Hu, Shiyan
2004-01-01
Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as the related parameter estimation. However, few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating model parameters from only a few sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the selection of time points in an optimal way so as to minimize the variance of the parameter estimates. We first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. Unlike conventional numerical optimization techniques, the new algorithm does not depend on good initial values and is less prone to becoming trapped in local optima. The simulation results indicate the soundness of the new method.
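The core idea, choosing sampling times that minimize the variance of the parameter estimates, can be illustrated with a small D-optimal design sketch: candidate time points are scored by the determinant of the Fisher information built from model sensitivities. This is a simplified stand-in (exhaustive search on a toy exponential model), not the paper's quantum-inspired evolutionary algorithm; the model, noise assumptions, and candidate grid are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def sensitivities(t, a=1.0, k=0.5):
    """Toy decay model y(t) = a * exp(-k t); returns [dy/da, dy/dk] at time t."""
    return np.array([np.exp(-k * t), -a * t * np.exp(-k * t)])

def log_det_fisher(times):
    """log-determinant of the Fisher information for unit-variance measurement noise."""
    S = np.array([sensitivities(t) for t in times])
    sign, logdet = np.linalg.slogdet(S.T @ S)
    return logdet if sign > 0 else -np.inf

candidates = np.linspace(0.1, 10.0, 25)
best_times = max(combinations(candidates, 4), key=log_det_fisher)   # choose 4 of 25 points
print("D-optimal sampling times:", np.round(best_times, 2))
```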
NASA Astrophysics Data System (ADS)
Scalabrin, G.; Marchi, P.; Finezzo, F.
2006-11-01
The application of an optimization technique to the available experimental data has led to the development of a new multiparameter equation λ = λ(T, ρ) for the representation of the thermal conductivity of 1,1-difluoroethane (R152a). The region of validity of the proposed equation covers the temperature range from 220 to 460 K and pressures up to 55 MPa, including the near-critical region. The average absolute deviation of the equation with respect to the selected 939 primary data points is 1.32%. The proposed equation therefore represents a significant improvement over the conventional equation available in the literature. The density value required by the equation is calculated at the chosen temperature and pressure conditions using a high-accuracy equation of state for the fluid.
Comparative Risk Analysis for Metropolitan Solid Waste Management Systems
NASA Astrophysics Data System (ADS)
Chang, Ni-Bin; Wang, S. F.
1996-01-01
Conventional solid waste management planning usually focuses on economic optimization, in which the related environmental impacts or risks are rarely considered. The purpose of this paper is to illustrate the methodology of how optimization concepts and techniques can be applied to structure and solve risk management problems such that the impacts of air pollution, leachate, traffic congestion, and noise increments can be regulated in the long-term planning of metropolitan solid waste management systems. Management alternatives are sequentially evaluated by adding several environmental risk control constraints stepwise in an attempt to improve the management strategies and reduce the risk impacts in the long run. Statistics associated with those risk control mechanisms are presented as well. Siting, routing, and financial decision making in such solid waste management systems can also be achieved with respect to various resource limitations and disposal requirements.
Information Gain Based Dimensionality Selection for Classifying Text Documents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumidu Wijayasekara; Milos Manic; Miles McQueen
2013-06-01
Selecting the optimal dimensions for various knowledge extraction applications is an essential component of data mining. Dimensionality selection techniques are utilized in classification applications to increase the classification accuracy and reduce the computational complexity. In text classification, where the dimensionality of the dataset is extremely high, dimensionality selection is even more important. This paper presents a novel, genetic algorithm based methodology for dimensionality selection in text mining applications that utilizes information gain. The presented methodology uses the information gain of each dimension to change the mutation probability of chromosomes dynamically. Since the information gain is calculated a priori, the computational complexity is not affected. The presented method was tested on a specific text classification problem and compared with conventional genetic algorithm based dimensionality selection. The results show an improvement of 3% in the true positives and 1.6% in the true negatives over conventional dimensionality selection methods.
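A minimal Python sketch of the idea of biasing mutation by precomputed information gain is given below; the encoding (binary feature masks), the selection and crossover operators, and the scaling from information gain to mutation probability are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def info_gain_ga(fitness, info_gain, n_dims, pop_size=40, n_gen=100,
                 base_rate=0.02, seed=0):
    """GA over binary feature-selection masks in which each bit's mutation
    probability is biased by its (precomputed) information gain."""
    rng = np.random.default_rng(seed)
    p_mut = base_rate * (0.5 + info_gain / info_gain.max())   # per-dimension mutation rates
    pop = rng.integers(0, 2, size=(pop_size, n_dims))
    for _ in range(n_gen):
        fit = np.array([fitness(ind) for ind in pop])
        # Binary tournament selection.
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        winners = np.where(fit[idx[:, 0]] >= fit[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # Uniform crossover between parent i and parent (pop_size - 1 - i).
        mask = rng.random((pop_size, n_dims)) < 0.5
        children = np.where(mask, parents, parents[::-1])
        # Information-gain-weighted bit-flip mutation.
        flips = rng.random((pop_size, n_dims)) < p_mut
        pop = np.where(flips, 1 - children, children)
    fit = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(fit)]

# Toy usage: reward selecting the 5 highest-gain dimensions while penalizing mask size.
gain = np.linspace(1.0, 0.1, 20)
best_mask = info_gain_ga(lambda m: m[:5].sum() - 0.1 * m.sum(), gain, n_dims=20)
print(best_mask)
```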
Discriminant locality preserving projections based on L1-norm maximization.
Zhong, Fujin; Zhang, Jiashu; Li, Defang
2014-11-01
Conventional discriminant locality preserving projection (DLPP) is a dimensionality reduction technique based on manifold learning, which has demonstrated good performance in pattern recognition. However, because its objective function is based on the distance criterion using L2-norm, conventional DLPP is not robust to outliers which are present in many applications. This paper proposes an effective and robust DLPP version based on L1-norm maximization, which learns a set of local optimal projection vectors by maximizing the ratio of the L1-norm-based locality preserving between-class dispersion and the L1-norm-based locality preserving within-class dispersion. The proposed method is proven to be feasible and also robust to outliers while overcoming the small sample size problem. The experimental results on artificial datasets, Binary Alphadigits dataset, FERET face dataset and PolyU palmprint dataset have demonstrated the effectiveness of the proposed method.
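One plausible rendering of the ratio criterion described above, in standard DLPP notation, is sketched below; here w is the projection vector, x̄_i denotes a class mean, and b_ij and w_pq are locality-preserving between-class and within-class affinity weights. The exact weight definitions are assumptions based on the abstract, not the paper's equations.

```latex
\mathbf{w}^{\ast} \;=\; \arg\max_{\mathbf{w}}\;
\frac{\sum_{i,j} b_{ij}\,\bigl|\mathbf{w}^{\top}(\bar{\mathbf{x}}_i-\bar{\mathbf{x}}_j)\bigr|}
     {\sum_{p,q} w_{pq}\,\bigl|\mathbf{w}^{\top}(\mathbf{x}_p-\mathbf{x}_q)\bigr|}
```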
Immunological multimetal deposition for rapid visualization of sweat fingerprints.
He, Yayun; Xu, Linru; Zhu, Yu; Wei, Qianhui; Zhang, Meiqin; Su, Bin
2014-11-10
A simple method termed immunological multimetal deposition (iMMD) was developed for rapid visualization of sweat fingerprints with the naked eye, by combining conventional MMD with the immunoassay technique. In this approach, antibody-conjugated gold nanoparticles (AuNPs) were used to specifically interact with the corresponding antigens in the fingerprint residue. The AuNPs serve as nucleation sites for autometallographic deposition of silver particles from the silver staining solution, generating a dark ridge pattern for visual detection. Using fingerprints inked with human immunoglobulin G (hIgG), we obtained the optimal formulation of iMMD, which was then successfully applied to visualize sweat fingerprints through the detection of two secreted polypeptides, epidermal growth factor and lysozyme. In comparison with conventional MMD, iMMD is faster and provides additional information beyond identification alone. Moreover, iMMD is facile and does not require expensive instruments. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Moura, Renata Vasconcellos; Kojima, Alberto Noriyuki; Saraceni, Cintia Helena Coury; Bassolli, Lucas; Balducci, Ivan; Özcan, Mutlu; Mesquita, Alfredo Mikail Melo
2018-05-01
The increased use of CAD systems can generate doubt about the accuracy of digital impressions for angulated implants. The aim of this study was to evaluate the accuracy of different impression techniques, two conventional and one digital, for implants with and without angulation. We used a polyurethane cast that simulates the human maxilla according to ASTM F1839, and 6 tapered implants were installed with external hexagonal connections to simulate tooth positions 17, 15, 12, 23, 25, and 27. Implants 17 and 23 were placed with 15° of mesial angulation and distal angulation, respectively. Mini cone abutments were installed on these implants with a metal strap 1 mm in height. Conventional and digital impression procedures were performed on the maxillary master cast, and the implants were separated into 6 groups based on the technique used and measurement type: G1 - control, G2 - digital impression, G3 - conventional impression with an open tray, G4 - conventional impression with a closed tray, G5 - conventional impression with an open tray and a digital impression, and G6 - conventional impression with a closed tray and a digital impression. A statistical analysis was performed using two-way repeated measures ANOVA to compare the groups, and a Kruskal-Wallis test was conducted to analyze the accuracy of the techniques. No significant difference in the accuracy of the techniques was observed between the groups. Therefore, no differences were found between the conventional impressions and the combinations of conventional and digital impressions, and the angulation of the implants did not affect the accuracy of the techniques. All of the techniques exhibited acceptable trueness and precision. The variation of the angle of the implants did not affect the accuracy of the techniques. © 2018 by the American College of Prosthodontists.
Stochastic noise characteristics in matrix inversion tomosynthesis (MITS).
Godfrey, Devon J; McAdams, H P; Dobbins, James T Third
2009-05-01
Matrix inversion tomosynthesis (MITS) uses known imaging geometry and linear systems theory to deterministically separate in-plane detail from residual tomographic blur in a set of conventional tomosynthesis ("shift-and-add") planes. A previous investigation explored the effect of scan angle (ANG), number of projections (N), and number of reconstructed planes (NP) on the MITS impulse response and modulation transfer function characteristics, and concluded that ANG = 20 degrees, N = 71, and NP = 69 is the optimal MITS imaging technique for chest imaging on our prototype tomosynthesis system. This article examines the effect of ANG, N, and NP on the MITS exposure-normalized noise power spectra (ENNPS) and seeks to confirm that the imaging parameters selected previously by an analysis of the MITS impulse response also yield reasonable stochastic properties in MITS reconstructed planes. ENNPS curves were generated for experimentally acquired mean-subtracted projection images, conventional tomosynthesis planes, and MITS planes with varying combinations of the parameters ANG, N, and NP. Image data were collected using a prototype tomosynthesis system, with 11.4 cm acrylic placed near the image receptor to produce lung-equivalent beam hardening and scattered radiation. Ten identically acquired tomosynthesis data sets (realizations) were collected for each selected technique and used to generate ensemble mean images that were subtracted from individual image realizations prior to noise power spectra (NPS) estimation. NPS curves were normalized to account for differences in entrance exposure (as measured with an ion chamber), yielding estimates of the ENNPS for each technique. Results suggest that mid- and high-frequency noise in MITS planes is fairly equivalent in magnitude to noise in conventional tomosynthesis planes, but low-frequency noise is amplified in the most anterior and posterior reconstruction planes. Selecting the largest available number of projections (N = 71) does not incur any appreciable additive electronic noise penalty compared to using fewer projections for roughly equivalent cumulative exposure. Stochastic noise is minimized by maximizing N and NP but increases with increasing ANG. The noise trend results for NP and ANG are contrary to what would be predicted by simply considering the MITS matrix conditioning and likely result from the interplay between noise correlation and the polarity of the MITS filters. From this study, the authors conclude that the previously determined optimal MITS imaging strategy based on impulse response considerations produces somewhat suboptimal stochastic noise characteristics, but is probably still the best technique for MITS imaging of the chest.
Zhou, Zhengwei; Bi, Xiaoming; Wei, Janet; Yang, Hsin-Jung; Dharmakumar, Rohan; Arsanjani, Reza; Bairey Merz, C Noel; Li, Debiao; Sharif, Behzad
2017-02-01
The presence of subendocardial dark-rim artifact (DRA) remains an ongoing challenge in first-pass perfusion (FPP) cardiac magnetic resonance imaging (MRI). We propose a free-breathing FPP imaging scheme with Cartesian sampling that is optimized to minimize the DRA and readily enables near-instantaneous image reconstruction. The proposed FPP method suppresses Gibbs ringing effects, a major underlying factor for the DRA, by "shaping" the underlying point spread function through a two-step process: 1) an undersampled Cartesian sampling scheme that widens the k-space coverage compared to the conventional scheme; and 2) a modified parallel-imaging scheme that incorporates optimized apodization (k-space data filtering) to suppress Gibbs ringing effects. Healthy volunteer studies (n = 10) were performed to compare the proposed method against the conventional Cartesian technique, both using a saturation-recovery gradient-echo sequence at 3T. Furthermore, FPP imaging studies using the proposed method were performed in infarcted canines (n = 3), and in two symptomatic patients with suspected coronary microvascular dysfunction for assessment of myocardial hypoperfusion. The width of the DRA and the number of DRA-affected myocardial segments were significantly reduced with the proposed method compared to the conventional approach (width: 1.3 vs. 2.9 mm, P < 0.001; number of segments: 2.6 vs. 8.7; P < 0.0001). The number of slices with severe DRA was markedly lower for the proposed method (by 10-fold). The reader-assigned image quality scores were similar (P = 0.2), although the quantified myocardial signal-to-noise ratio was lower for the proposed method (P < 0.05). Animal studies showed that the proposed method can detect subendocardial perfusion defects, and patient results were consistent with the gold-standard invasive test. The proposed free-breathing Cartesian FPP imaging method significantly reduces the prevalence of severe DRAs compared to the conventional approach while maintaining similar resolution and image quality. 2 J. Magn. Reson. Imaging 2017;45:542-555. © 2016 International Society for Magnetic Resonance in Medicine.
Parametric study of a canard-configured transport using conceptual design optimization
NASA Technical Reports Server (NTRS)
Arbuckle, P. D.; Sliwa, S. M.
1985-01-01
Constrained-parameter optimization is used to perform optimal conceptual design of both canard and conventional configurations of a medium-range transport. A number of design constants and design constraints are systematically varied to compare the sensitivities of canard and conventional configurations to a variety of technology assumptions. Main-landing-gear location and canard surface high-lift performance are identified as critical design parameters for a statically stable, subsonic, canard-configured transport.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakhai, B.
A new method for solving radiation transport problems is presented. The heart of the technique is a new cross section processing procedure for the calculation of group-to-point and point-to-group cross section sets. The method is ideally suited for problems which involve media with highly fluctuating cross sections, where the results of the traditional multigroup calculations are beclouded by the group averaging procedures employed. Extensive computational efforts, which would be required to evaluate double integrals in the multigroup treatment numerically, prohibit iteration to optimize the energy boundaries. On the other hand, use of point-to-point techniques (as in the stochastic technique) is often prohibitively expensive due to the large computer storage requirement. The pseudo-point code is a hybrid of the two aforementioned methods (group-to-group and point-to-point) - hence the name pseudo-point - that reduces the computational efforts of the former and the large core requirements of the latter. The pseudo-point code generates the group-to-point or the point-to-group transfer matrices, and can be coupled with existing transport codes to calculate pointwise energy-dependent fluxes. This approach yields much more detail than is available from conventional energy-group treatments. Due to the speed of this code, several iterations could be performed (in affordable computing effort) to optimize the energy boundaries and the weighting functions. The pseudo-point technique is demonstrated by solving six problems, each depicting a certain aspect of the technique. The results are presented as flux vs energy at various spatial intervals. The sensitivity of the technique to the energy grid and the savings in computational effort are clearly demonstrated.
Carlson, Matthew L; Leng, Shuai; Diehn, Felix E; Witte, Robert J; Krecke, Karl N; Grimes, Josh; Koeller, Kelly K; Bruesewitz, Michael R; McCollough, Cynthia H; Lane, John I
2017-08-01
A new generation 192-slice multi-detector computed tomography (MDCT) clinical scanner provides enhanced image quality and superior electrode localization over conventional MDCT. Currently, accurate and reliable cochlear implant electrode localization using conventional MDCT scanners remains elusive. Eight fresh-frozen cadaveric temporal bones were implanted with full-length cochlear implant electrodes. Specimens were subsequently scanned with conventional 64-slice and new generation 192-slice MDCT scanners utilizing ultra-high resolution modes. Additionally, all specimens were scanned with micro-CT to provide a reference criterion for electrode position. Images were reconstructed according to routine temporal bone clinical protocols. Three neuroradiologists, blinded to scanner type, reviewed images independently to assess resolution of individual electrodes, scalar localization, and severity of image artifact. Serving as the reference standard, micro-CT identified scalar crossover in one specimen; imaging of all remaining cochleae demonstrated complete scala tympani insertions. The 192-slice MDCT scanner exhibited improved resolution of individual electrodes (p < 0.01), superior scalar localization (p < 0.01), and reduced blooming artifact (p < 0.05), compared with conventional 64-slice MDCT. There was no significant difference between platforms when comparing streak or ring artifact. The new generation 192-slice MDCT scanner offers several notable advantages for cochlear implant imaging compared with conventional MDCT. This technology provides important feedback regarding electrode position and course, which may help in future optimization of surgical technique and electrode design.
Speeding up local correlation methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kats, Daniel
2014-12-28
We present two techniques that can substantially speed up the local correlation methods. The first one allows one to avoid the expensive transformation of the electron-repulsion integrals from atomic orbitals to virtual space. The second one introduces an algorithm for the residual equations in the local perturbative treatment that, in contrast to the standard scheme, does not require holding the amplitudes or residuals in memory. It is shown that even an interpreter-based implementation of the proposed algorithm in the context of local MP2 method is faster and requires less memory than the highly optimized variants of conventional algorithms.
Laparoscopy-like operative vaginoscopy: a new approach to manage mesh erosions.
Billone, Valentina; Amorim-Costa, Célia; Campos, Sara; Rabischong, Benoĭt; Bourdel, Nicolas; Canis, Michel; Botchorishvili, Revaz
2015-01-01
Mesh erosion through the vagina is the most common complication of synthetic mesh used for pelvic organ prolapse repair. However, conventional transvaginal mesh excision has many technical limitations. We aimed at creating and describing a new surgical technique for transvaginal removal of exposed mesh that would enable better exposure and access, thus facilitating optimal treatment. A step-by-step video showing the technique. A university tertiary care hospital. Five patients previously submitted to pelvic organ prolapse repair using synthetic mesh, presenting mesh erosion through the vagina. Mesh excision using a laparoscopy-like operative vaginoscopy in which standard laparoscopic instruments are used through a single-incision laparoscopic surgery port device placed in the vagina. In all cases, a very good exposure of the mesh was achieved, minimal tissue traction was required, and the procedures were performed in a very ergonomic way. All the patients were discharged on the same day of the surgery and had a painless postoperative course. So far, there have been no cases of relapse. This appears to be a simple, inexpensive, and valuable minimally invasive technique with many advantages over the conventional approach. More cases and longer follow-up are necessary to assess its long-term efficacy. It may possibly be used for the management of other conditions. Copyright © 2015 AAGL. Published by Elsevier Inc. All rights reserved.
Effects of mechanical loading on human mesenchymal stem cells for cartilage tissue engineering.
Choi, Jane Ru; Yong, Kar Wey; Choi, Jean Yu
2018-03-01
Today, articular cartilage damage is a major health problem, affecting people of all ages. The existing conventional articular cartilage repair techniques, such as autologous chondrocyte implantation (ACI), microfracture, and mosaicplasty, have many shortcomings which negatively affect their clinical outcomes. Therefore, it is essential to develop an alternative and efficient articular repair technique that can address those shortcomings. Cartilage tissue engineering, which aims to create a tissue-engineered cartilage derived from human mesenchymal stem cells (MSCs), shows great promise for improving articular cartilage defect therapy. However, the use of tissue-engineered cartilage for the clinical therapy of articular cartilage defects still remains challenging. Although the importance of mechanical loading in creating a functional cartilage has been well demonstrated, the specific type of mechanical loading and its optimal loading regime are still under investigation. This review summarizes the most recent advances in the effects of mechanical loading on human MSCs. First, the existing conventional articular repair techniques and their shortcomings are highlighted. The important parameters for the evaluation of the tissue-engineered cartilage, including chondrogenic and hypertrophic differentiation of human MSCs, are briefly discussed. The influence of mechanical loading on human MSCs is subsequently reviewed, and the possible mechanotransduction signaling is highlighted. The development of non-hypertrophic chondrogenesis in response to the changing mechanical microenvironment will aid in the establishment of a tissue-engineered cartilage for efficient articular cartilage repair. © 2017 Wiley Periodicals, Inc.
In vitro validation of a shape-optimized fiber-reinforced dental bridge.
Chen, YungChung; Li, Haiyan; Fok, Alex
2011-12-01
To improve its mechanical performance, structural optimization had been used in a previous study to obtain an alternative design for a 3-unit inlay-retained fiber-reinforced composite (FRC) dental bridge. In that study, an optimized layout of the FRC substructure had been proposed to minimize stresses in the veneering composite and interfacial stresses between the composite and substructure. The current work aimed to validate in vitro the improved fracture resistance of the optimized design. All samples for the 3-unit inlay-retained FRC dental bridge were made with glass-fibers (FibreKor) as the substructure, surrounded by a veneering composite (GC Gradia). Two different FRC substructure designs were prepared: a conventional (n=20) and an optimized design (n=21). The conventional design was a straight beam linking one proximal box to the other, while the optimized design was a curved beam following the lower outline of the pontic. All samples were loaded to 400N on a universal test machine (MTS 810) with a loading speed of 0.2mm/min. During loading, the force and displacement were recorded. Meanwhile, a two-channel acoustic emission (AE) system was used to monitor the development of cracks during loading. The load-displacement curves of the two groups displayed significant differences. For the conventional design, there were numerous drops in load corresponding to local damage of the sample. For the optimized design, the load curves were much smoother. Cracks were clearly visible on the surface of the conventional group only, and the directions of those cracks were perpendicular to those of the most tensile stresses. Results from the more sensitive AE measurement also showed that the optimized design had, on average, fewer cracking events: 38 versus 2969 in the conventional design. The much lower number of AE events and smoother load-displacement curves indicated that the optimized FRC bridge design had a higher fracture resistance. It is expected that the optimized design will significantly improve the clinical performance of FRC bridges. Copyright © 2011 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Memon, Sarfaraz
2014-12-01
A stable centric occlusal position that shows no evidence of occlusal disease should not be altered. Confirmative restorative dentistry deals with making restorations that are in harmony with existing jaw relations. Conventional techniques for construction have been unsuccessful in producing a prosthesis that can be inserted without minor intraoral occlusal adjustment. This study was conducted to evaluate the benefits of the double casting technique with a functionally generated path (FGP) over the conventional casting technique. Ten patients with root canal-treated maxillary molars were selected for the fabrication of metal crowns. Two techniques, one involving conventional fabrication and the other using a functionally generated path with double casting, were used to fabricate the prostheses. A comparison based on various parameters was made between the two techniques. The change in the height of castings for the double casting group was less than that for the conventional group, and the difference was highly statistically significant (P < 0.001). The time taken for occlusal correction was significantly lower in the double casting group than in the conventional group (P < 0.001). Patient satisfaction (before occlusal correction) was better for the double casting group compared with the conventional group (P < 0.01). The functionally generated path with double casting technique resulted in castings with better dimensional accuracy, less occlusal correction and better patient satisfaction compared with the conventional castings.
Robust Optimization Design Algorithm for High-Frequency TWTs
NASA Technical Reports Server (NTRS)
Wilson, Jeffrey D.; Chevalier, Christine T.
2010-01-01
Traveling-wave tubes (TWTs), such as the Ka-band (26-GHz) model recently developed for the Lunar Reconnaissance Orbiter, are essential as communication amplifiers in spacecraft for virtually all near- and deep-space missions. This innovation is a computational design algorithm that, for the first time, optimizes the efficiency and output power of a TWT while taking into account the effects of dimensional tolerance variations. Because they are primary power consumers and power generation is very expensive in space, much effort has been exerted over the last 30 years to increase the power efficiency of TWTs. However, at frequencies higher than about 60 GHz, efficiencies of TWTs are still quite low. A major reason is that at higher frequencies, dimensional tolerance variations from conventional micromachining techniques become relatively large with respect to the circuit dimensions. When this is the case, conventional design-optimization procedures, which ignore dimensional variations, provide inaccurate designs for which the actual amplifier performance substantially under-performs that of the design. Thus, this new, robust TWT optimization design algorithm was created to take account of and ameliorate the deleterious effects of dimensional variations and to increase efficiency, power, and yield of high-frequency TWTs. This design algorithm can help extend the use of TWTs into the terahertz frequency regime of 300-3000 GHz. Currently, these frequencies are under-utilized because of the lack of efficient amplifiers, thus this regime is known as the "terahertz gap." The development of an efficient terahertz TWT amplifier could enable breakthrough applications in space science molecular spectroscopy, remote sensing, nondestructive testing, high-resolution "through-the-wall" imaging, biomedical imaging, and detection of explosives and toxic biochemical agents.
Mittal, Vineet; Nanda, Arun
2017-12-01
Marrubium vulgare Linn (Lamiaceae) is generally extracted by conventional methods with a low yield of marrubiin, and these processes are not considered environmentally friendly. In this study, the whole plant of M. vulgare was extracted by microwave-assisted extraction (MAE), and the effect of various extraction parameters on the marrubiin yield was optimized using a Central Composite Design (CCD). The selected medicinal plant was extracted using ethanol:water (1:1) as the solvent by MAE. The plant material was also extracted using a Soxhlet apparatus, and the various extracts were analyzed by HPTLC to quantify the marrubiin concentration. The optimized conditions for the microwave-assisted extraction were a microwave power of 539 W, an irradiation time of 373 s, and a solvent-to-drug ratio of 32 mL per g of drug. The marrubiin concentration obtained by MAE almost doubled relative to the traditional method (from 0.69 ± 0.08 to 1.35 ± 0.04%). The IC50 for DPPH was reduced to 66.28 ± 0.6 μg/mL as compared to the conventional extract (84.14 ± 0.7 μg/mL). The scanning electron micrographs of the treated and untreated drug samples further support these results. The CCD can be successfully applied to optimize the MAE extraction parameters for M. vulgare. Moreover, in terms of environmental impact, the MAE technique could be considered a 'green approach', as the MAE extraction of the plant released only 92.3 g of CO2 compared with 3207.6 g of CO2 using the Soxhlet method of extraction.
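As an illustration of how a central composite design over the three MAE factors (microwave power, irradiation time, solvent-to-drug ratio) can be generated and fitted with a quadratic response surface, a Python sketch is given below. The factor ranges and the placeholder yield values are assumptions; the study's actual design points and responses are not reproduced.

```python
import numpy as np
from itertools import product

# Face-centred central composite design in coded units (-1, 0, +1) for three factors:
# microwave power (W), irradiation time (s), solvent-to-drug ratio (mL/g).
factorial = np.array(list(product([-1.0, 1.0], repeat=3)))
axial = np.array([[a if i == j else 0.0 for j in range(3)]
                  for i in range(3) for a in (-1.0, 1.0)])
center = np.zeros((3, 3))
design = np.vstack([factorial, axial, center])            # 17 runs in coded units

lows = np.array([300.0, 120.0, 10.0])                     # assumed factor ranges
highs = np.array([700.0, 600.0, 40.0])
real_levels = lows + (design + 1.0) / 2.0 * (highs - lows)

def quad_terms(X):
    """Model matrix for a full quadratic response surface in three factors."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

# Placeholder marrubiin yields (one per run); real responses would come from HPTLC assays.
y = np.random.default_rng(1).normal(1.0, 0.05, size=len(design))
coefficients, *_ = np.linalg.lstsq(quad_terms(design), y, rcond=None)
print(real_levels)
print(coefficients)
```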
Zhu, Chengcheng; Tian, Bing; Chen, Luguang; Eisenmenger, Laura; Raithel, Esther; Forman, Christoph; Ahn, Sinyeob; Laub, Gerhard; Liu, Qi; Lu, Jianping; Liu, Jing; Hess, Christopher; Saloner, David
2018-06-01
Develop and optimize an accelerated, high-resolution (0.5 mm isotropic) 3D black blood MRI technique to reduce scan time for whole-brain intracranial vessel wall imaging. A 3D accelerated T1-weighted fast-spin-echo prototype sequence using compressed sensing (CS-SPACE) was developed at 3T. Both the acquisition [echo train length (ETL), under-sampling factor] and reconstruction parameters (regularization parameter, number of iterations) were first optimized in 5 healthy volunteers. Ten patients with a variety of intracranial vascular disease presentations (aneurysm, atherosclerosis, dissection, vasculitis) were imaged with SPACE and optimized CS-SPACE, pre and post Gd contrast. Lumen/wall area, wall-to-lumen contrast ratio (CR), enhancement ratio (ER), sharpness, and qualitative scores (1-4) by two radiologists were recorded. The optimized CS-SPACE protocol has ETL 60, 20% k-space under-sampling, and a 0.002 regularization factor with 20 iterations. In patient studies, CS-SPACE and conventional SPACE had comparable image scores both pre- (3.35 ± 0.85 vs. 3.54 ± 0.65, p = 0.13) and post-contrast (3.72 ± 0.58 vs. 3.53 ± 0.57, p = 0.15), but the CS-SPACE acquisition was 37% faster (6:48 vs. 10:50). CS-SPACE agreed with SPACE for lumen/wall area, ER measurements and sharpness, but marginally reduced the CR. In the evaluation of intracranial vascular disease, CS-SPACE provides a substantial reduction in scan time compared to conventional T1-weighted SPACE while maintaining good image quality.
Yuzbasioglu, Emir; Kurt, Hanefi; Turunc, Rana; Bilir, Halenur
2014-01-30
The purpose of this study was to compare two impression techniques from the perspective of patient preferences and treatment comfort. Twenty-four subjects (12 male, 12 female) who had no previous experience with either conventional or digital impressions participated in this study. Conventional impressions of the maxillary and mandibular dental arches were taken with a polyether impression material (Impregum, 3M ESPE), and bite registrations were made with a polysiloxane bite registration material (Futar D, Kettenbach). Two weeks later, digital impressions and bite scans were performed using an intra-oral scanner (CEREC Omnicam, Sirona). Immediately after the impressions were made, the subjects' attitudes, preferences and perceptions towards the impression techniques were evaluated using a standardized questionnaire. The perceived source of stress was evaluated using the State-Trait Anxiety Scale. Processing steps of the impression techniques (tray selection, working time, etc.) were recorded in seconds. Statistical analyses were performed with the Wilcoxon rank test, and p < 0.05 was considered significant. There were significant differences among the groups (p < 0.05) in terms of total working time and processing steps. Patients stated that digital impressions were more comfortable than conventional techniques. Digital impressions were more time-efficient than conventional impressions, and patients preferred the digital impression technique over conventional techniques.
Abdominal fat volume estimation by stereology on CT: a comparison with manual planimetry.
Manios, G E; Mazonakis, M; Voulgaris, C; Karantanas, A; Damilakis, J
2016-03-01
To deploy and evaluate a stereological point-counting technique on abdominal CT for the estimation of visceral (VAF) and subcutaneous abdominal fat (SAF) volumes. Stereological volume estimations based on point counting and systematic sampling were performed on images from 14 consecutive patients who had undergone abdominal CT. For the optimization of the method, five sampling intensities in combination with 100 and 200 points were tested. The optimum stereological measurements were compared with VAF and SAF volumes derived by the standard technique of manual planimetry on the same scans. Optimization analysis showed that the selection of 200 points along with the sampling intensity 1/8 provided efficient volume estimations in less than 4 min for VAF and SAF together. The optimized stereology showed strong correlation with planimetry (VAF: r = 0.98; SAF: r = 0.98). No statistical differences were found between the two methods (VAF: P = 0.81; SAF: P = 0.83). The 95% limits of agreement were also acceptable (VAF: -16.5%, 16.1%; SAF: -10.8%, 10.7%) and the repeatability of stereology was good (VAF: CV = 4.5%, SAF: CV = 3.2%). Stereology may be successfully applied to CT images for the efficient estimation of abdominal fat volume and may constitute a good alternative to the conventional planimetric technique. Abdominal obesity is associated with increased risk of disease and mortality. Stereology may quantify visceral and subcutaneous abdominal fat accurately and consistently. The application of stereology to estimating abdominal volume fat reduces processing time. Stereology is an efficient alternative method for estimating abdominal fat volume.
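The point-counting estimate itself follows the Cavalieri principle: the fat volume is the number of test points falling on the tissue, times the area represented by each grid point, times the distance between sampled slices. A minimal sketch, with assumed grid spacing and sampling intensity, is shown below.

```python
def stereology_volume(points_per_sampled_slice, grid_spacing_mm, slice_increment_mm):
    """Cavalieri / point-counting estimate: V = (area per grid point) x
    (distance between sampled slices) x (total points hitting the tissue)."""
    area_per_point = grid_spacing_mm ** 2
    return area_per_point * slice_increment_mm * sum(points_per_sampled_slice)

# Hypothetical counts of visceral-fat points on every 8th 5-mm slice with a 10-mm grid.
vaf_counts = [14, 22, 31, 28, 17]
volume_mm3 = stereology_volume(vaf_counts, grid_spacing_mm=10.0,
                               slice_increment_mm=8 * 5.0)
print(volume_mm3 / 1000.0, "cm^3")
```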
Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography
Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.
2017-01-01
Purpose This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d′) across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM. PMID:28626290
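A highly simplified sketch of the optimization loop described above is given below, assuming the pycma package for CMA-ES: the fluence pattern is parameterized by Gaussian basis coefficients, a toy detectability model maps fluence to d′ at sample locations under a fixed total-exposure budget, and the maxi-min objective returns the negative minimum d′. The basis, the detectability model, and the exposure normalization are illustrative assumptions, not the paper's task-driven imaging model.

```python
import numpy as np
import cma  # pycma package, assumed available

grid = np.linspace(0.0, 1.0, 64)            # sample locations (normalized)
centers = np.linspace(0.0, 1.0, 8)          # Gaussian basis centers
width = 0.1
total_fluence = 64.0                         # fixed exposure budget (arbitrary units)

def fluence_from_coeffs(coeffs):
    basis = np.exp(-(grid[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))
    fluence = np.clip(basis @ coeffs, 0.0, None) + 1e-9
    return fluence / fluence.sum() * total_fluence    # enforce the exposure budget

def detectability(fluence):
    # Toy model: d' grows with sqrt(local fluence), modulated by an "anatomy" term.
    return np.sqrt(fluence) * (1.0 + 0.3 * np.sin(4 * np.pi * grid))

def neg_min_dprime(coeffs):
    # Maxi-min objective: CMA-ES minimizes, so return the negative worst-case d'.
    return -detectability(fluence_from_coeffs(np.asarray(coeffs))).min()

es = cma.CMAEvolutionStrategy(8 * [1.0], 0.3)
es.optimize(neg_min_dprime, iterations=50)
print("optimized basis coefficients:", es.result.xbest)
```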
Clinical knowledge-based inverse treatment planning
NASA Astrophysics Data System (ADS)
Yang, Yong; Xing, Lei
2004-11-01
Clinical IMRT treatment plans are currently made using dose-based optimization algorithms, which do not consider the nonlinear dose-volume effects for tumours and normal structures. The choice of structure specific importance factors represents an additional degree of freedom of the system and makes rigorous optimization intractable. The purpose of this work is to circumvent the two problems by developing a biologically more sensible yet clinically practical inverse planning framework. To implement this, the dose-volume status of a structure was characterized by using the effective volume in the voxel domain. A new objective function was constructed with the incorporation of the volumetric information of the system so that the figure of merit of a given IMRT plan depends not only on the dose deviation from the desired distribution but also the dose-volume status of the involved organs. The conventional importance factor of an organ was written into a product of two components: (i) a generic importance that parametrizes the relative importance of the organs in the ideal situation when the goals for all the organs are met; (ii) a dose-dependent factor that quantifies our level of clinical/dosimetric satisfaction for a given plan. The generic importance can be determined a priori, and in most circumstances, does not need adjustment, whereas the second one, which is responsible for the intractable behaviour of the trade-off seen in conventional inverse planning, was determined automatically. An inverse planning module based on the proposed formalism was implemented and applied to a prostate case and a head-neck case. A comparison with the conventional inverse planning technique indicated that, for the same target dose coverage, the critical structure sparing was substantially improved for both cases. The incorporation of clinical knowledge allows us to obtain better IMRT plans and makes it possible to auto-select the importance factors, greatly facilitating the inverse planning process. The new formalism proposed also reveals the relationship between different inverse planning schemes and gives important insight into the problem of therapeutic plan optimization. In particular, we show that the EUD-based optimization is a special case of the general inverse planning formalism described in this paper.
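One plausible way to write the objective sketched above, with each organ's importance split into a generic part and a dose-dependent satisfaction part, is given below; r_σ is the generic importance of structure σ, s_σ its dose-dependent factor, d_i the dose to voxel i and d_i^p the desired dose. The exact functional forms are assumptions consistent with the description, not the paper's equations.

```latex
F(\{d_i\}) \;=\; \sum_{\sigma}\; r_{\sigma}\, s_{\sigma}(\{d_i\})\;
\frac{1}{N_{\sigma}} \sum_{i \in \sigma} \bigl(d_i - d_i^{\,p}\bigr)^{2}
```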
NASA Astrophysics Data System (ADS)
Hu, Dong; Lu, Renfu; Ying, Yibin
2018-03-01
This research was aimed at optimizing the inverse algorithm for estimating the optical absorption (μa) and reduced scattering (μs′) coefficients from spatial frequency domain diffuse reflectance. Studies were first conducted to determine the optimal frequency resolution and start and end frequencies in terms of the reciprocal of the mean free path (1/mfp′). The results showed that the optimal frequency resolution increased with μs′ and remained stable when μs′ was larger than 2 mm-1. The optimal end frequency decreased from 0.3/mfp′ to 0.16/mfp′ as μs′ ranged from 0.4 mm-1 to 3 mm-1, while the optimal start frequency remained at 0 mm-1. A two-step parameter estimation method was proposed based on the optimized frequency parameters, which improved estimation accuracies by 37.5% and 9.8% for μa and μs′, respectively, compared with the conventional one-step method. Experimental validations with seven liquid optical phantoms showed that the optimized algorithm resulted in mean absolute errors of 15.4%, 7.6%, and 5.0% for μa and 16.4%, 18.0%, and 18.3% for μs′ at wavelengths of 675 nm, 700 nm, and 715 nm, respectively. Hence, implementation of the optimized parameter estimation method should be considered in order to improve the measurement of optical properties of biological materials when using the spatial frequency domain imaging technique.
NASA Astrophysics Data System (ADS)
Holmes, Timothy W.
2001-01-01
A detailed tomotherapy inverse treatment planning method is described which incorporates leakage and head scatter corrections during each iteration of the optimization process, allowing these effects to be directly accounted for in the optimized dose distribution. It is shown that the conventional inverse planning method for optimizing incident intensity can be extended to include a 'concurrent' leaf sequencing operation from which the leakage and head scatter corrections are determined. The method is demonstrated using the steepest-descent optimization technique with constant step size and a least-squared-error objective. The method was implemented using the MATLAB scientific programming environment and its feasibility demonstrated for 2D test cases simulating treatment delivery using a single coplanar rotation. The results indicate that this modification does not significantly affect convergence of the intensity optimization method when exposure times of individual leaves are stratified to a large number of levels (>100) during leaf sequencing. In general, the addition of aperture-dependent corrections, especially 'head scatter', reduces incident fluence in local regions of the modulated fan beam, resulting in increased exposure times for individual collimator leaves. These local variations can result in 5% or greater local variation in the optimized dose distribution compared to the uncorrected case. The overall efficiency of the modified intensity optimization algorithm is comparable to that of the original unmodified case.
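A compact Python sketch of the kind of scheme described above (the paper used MATLAB) is shown below: steepest descent with a constant step on a least-squared-error dose objective, with a crude leakage term recomputed each iteration. The dose-deposition matrix, the fixed-fraction leakage model, and the step size are illustrative assumptions standing in for the paper's leaf-sequencing-derived leakage and head-scatter corrections.

```python
import numpy as np

def optimize_intensity(D, d_presc, leakage_fraction=0.02, step=0.01, n_iter=500):
    """Steepest descent with a constant step on the least-squared-error objective
    ||D w + leakage - d_presc||^2. The leakage term is a crude stand-in: a fixed
    fraction of the open-field dose, recomputed each iteration."""
    n_beamlets = D.shape[1]
    w = np.zeros(n_beamlets)
    open_field = D @ np.ones(n_beamlets)
    for _ in range(n_iter):
        leakage = leakage_fraction * open_field
        residual = D @ w + leakage - d_presc
        grad = 2.0 * D.T @ residual
        w = np.clip(w - step * grad, 0.0, None)   # beamlet exposure times are nonnegative
    return w

# Toy geometry: 200 voxels, 50 beamlets, random dose-deposition matrix, uniform prescription.
rng = np.random.default_rng(0)
D = rng.random((200, 50)) * 0.05
w_opt = optimize_intensity(D, d_presc=np.full(200, 1.0))
print(w_opt[:10])
```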
Vaidya, Sharad; Parkash, Hari; Bhargava, Akshay; Gupta, Sharad
2014-01-01
Abundant resources and techniques have been used for complete-coverage crown fabrication. Conventional investing and casting procedures for phosphate-bonded investments require a 2- to 4-h procedure before completion. Accelerated casting techniques have been used, but may not result in castings with matching marginal accuracy. This study measured the marginal gap and determined the clinical acceptability of single cast copings invested in a phosphate-bonded investment with the use of conventional and accelerated methods. One hundred and twenty cast coping samples were fabricated using conventional and accelerated methods, with three finish lines: chamfer, shoulder, and shoulder with bevel. Sixty copings were prepared with each technique. Each coping was examined with a stereomicroscope at four predetermined sites, and measurements of marginal gaps were documented for each. A master chart was prepared for all the data, which were analyzed using the Statistical Package for the Social Sciences. Evidence of marginal gap was then evaluated by t-test. Analysis of variance and post-hoc analysis were used to compare the two groups as well as to make comparisons between the three subgroups. The measurements recorded showed no statistically significant difference between the conventional and accelerated groups. Among the three marginal designs studied, shoulder with bevel showed the best marginal fit with conventional as well as accelerated casting techniques. The accelerated casting technique could be a viable alternative to the time-consuming conventional casting technique. The marginal fit between the two casting techniques showed no statistical difference.
TOPICAL REVIEW: Digital x-ray tomosynthesis: current state of the art and clinical potential
NASA Astrophysics Data System (ADS)
Dobbins, James T., III; Godfrey, Devon J.
2003-10-01
Digital x-ray tomosynthesis is a technique for producing slice images using conventional x-ray systems. It is a refinement of conventional geometric tomography, which has been known since the 1930s. In conventional geometric tomography, the x-ray tube and image receptor move in synchrony on opposite sides of the patient to produce a plane of structures in sharp focus at the plane containing the fulcrum of the motion; all other structures above and below the fulcrum plane are blurred and thus less visible in the resulting image. Tomosynthesis improves upon conventional geometric tomography in that it allows an arbitrary number of in-focus planes to be generated retrospectively from a sequence of projection radiographs that are acquired during a single motion of the x-ray tube. By shifting and adding these projection radiographs, specific planes may be reconstructed. This topical review describes the various reconstruction algorithms used to produce tomosynthesis images, as well as approaches used to minimize the residual blur from out-of-plane structures. Historical background and mathematical details are given for the various approaches described. Approaches for optimizing the tomosynthesis image are given. Applications of tomosynthesis to various clinical tasks, including angiography, chest imaging, mammography, dental imaging and orthopaedic imaging, are also described.
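The shift-and-add reconstruction mentioned above can be sketched in a few lines: each projection is shifted by the parallax of the desired plane and the results are averaged, so that structures in that plane reinforce while structures above and below it blur. The linear tube-motion geometry, dimensions, and random projections below are illustrative assumptions.

```python
import numpy as np

def shift_and_add(projections, tube_shifts_mm, pixel_mm, plane_height_mm, sid_mm):
    """Reconstruct a single tomosynthesis plane by shifting each projection by the
    parallax of that plane and averaging. Simplified linear tube-motion geometry:
    shift (pixels) ~ tube_shift * plane_height / (SID - plane_height) / pixel size."""
    recon = np.zeros_like(projections[0], dtype=float)
    z = plane_height_mm
    for proj, s in zip(projections, tube_shifts_mm):
        shift_px = int(round(s * z / (sid_mm - z) / pixel_mm))
        recon += np.roll(proj, shift_px, axis=1)          # shift along the tube-motion axis
    return recon / len(projections)

# Toy example: 11 random "projections" of a 64 x 64 detector, tube sweep of +/- 100 mm.
rng = np.random.default_rng(0)
projections = [rng.random((64, 64)) for _ in range(11)]
shifts = np.linspace(-100.0, 100.0, 11)
plane = shift_and_add(projections, shifts, pixel_mm=0.2,
                      plane_height_mm=50.0, sid_mm=1000.0)
print(plane.shape)
```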
Jadhav, Vivek Dattatray; Motwani, Bhagwan K; Shinde, Jitendra; Adhapure, Prasad
2017-01-01
The aim of this study was to evaluate the marginal fit and surface roughness of complete cast crowns made by a conventional and an accelerated casting technique. This study was divided into three parts. In Part I, the marginal fit of full metal crowns made by both casting techniques was checked in the vertical direction; in Part II, the fit of sectional metal crowns made by both casting techniques was checked in the horizontal direction; and in Part III, the surface roughness of disc-shaped metal plate specimens made by both casting techniques was checked. A conventional technique was compared with an accelerated technique. In Part I of the study, the marginal fit of the full metal crowns and, in Part II, the horizontal fit of the sectional metal crowns made by both casting techniques were determined, and in Part III, the surface roughness of castings made with the same techniques was compared. The results of the t-test and independent sample test do not indicate statistically significant differences in the marginal discrepancy detected between the two casting techniques. For the marginal discrepancy and surface roughness, crowns fabricated with the accelerated technique were significantly different from those fabricated with the conventional technique. The accelerated casting technique showed quite satisfactory results, but the conventional technique was superior in terms of marginal fit and surface roughness.
Hellerhoff, K
2010-11-01
In recent years, digital full-field mammography has increasingly replaced conventional film mammography. High-quality imaging is guaranteed by high quantum efficiency and very good contrast resolution with optimized dosing, even for women with dense glandular tissue. However, digital mammography remains a projection procedure in which overlapping tissue limits the detectability of subtle alterations. Tomosynthesis is a procedure developed from digital mammography for slice examination of the breast which eliminates the effects of overlapping tissue and allows 3D imaging of the breast. A curved movement of the X-ray tube during scanning allows the acquisition of many 2D images from different angles. Subsequently, reconstruction algorithms employing a shift-and-add method improve the recognition of details at a defined level and at the same time eliminate smear artefacts due to overlapping structures. The total dose corresponds to that of conventional mammography imaging. The technical procedure, including the number of levels, suitable anode/filter combinations, angular range of acquisition and selection of reconstruction algorithms, is presently undergoing optimization. Previous studies on the clinical value of tomosynthesis have examined screening parameters, such as recall rate and detection rate, as well as information on tumor extent for histologically proven breast tumors. More advanced techniques, such as contrast medium-enhanced tomosynthesis, are presently under development, and dual-energy imaging is of particular importance.
Yao, T-T; Wang, L-K; Cheng, J-L; Hu, Y-Z; Zhao, J-H; Zhu, G-N
2015-03-01
A new approach employing a combination of pyrethroid and repellent is proposed to improve the protective efficacy of conventional pyrethroid-treated fabrics against mosquito vectors. In this context, the insecticidal and repellent efficacies of commonly used pyrethroids and repellents were evaluated by cone tests and arm-in-cage tests against Stegomyia albopicta (=Aedes albopictus) (Diptera: Culicidae). At concentrations of LD50 (estimated for pyrethroid) or ED50 (estimated for repellent), respectively, the knock-down effects of the pyrethroids or repellents were further compared. The results obtained indicated that deltamethrin and DEET were relatively more effective and thus these were selected for further study. Synergistic interaction was observed between deltamethrin and DEET at the ratios of 5 : 1, 2 : 1, 1 : 1 and 1 : 2 (but not 1 : 5). An optimal mixing ratio of 7 : 5 was then microencapsulated and adhered to fabrics using a fixing agent. Fabrics impregnated by microencapsulated mixtures gained extended washing durability compared with those treated with a conventional dipping method. Results indicated that this approach represents a promising method for the future impregnation of bednet, curtain and combat uniform materials. © 2014 The Royal Entomological Society.
Douglas, Ivor S
2017-05-01
Diagnosis of pulmonary infection, including hospital-acquired pneumonia (HAP) and ventilator-associated pneumonia (VAP), in the critically ill patient remains a common and therapeutically challenging diagnosis with significant attributable morbidity, mortality, and cost. Current clinical approaches to surveillance, early detection and conventional culture-based microbiology are inadequate for optimal targeted antibiotic treatment and stewardship. Efforts to enhance diagnosis of HAP and VAP and the impact of these novel approaches on rational antimicrobial selection and stewardship are the focus of the recent studies reviewed here. Recent consensus guidelines for diagnosis and management of HAP and VAP are relatively silent on the potential role of novel rapid microbiological techniques and rely heavily on conventional culture strategies of noninvasively obtained samples (including endotracheal aspirates). Novel rapid microbiological diagnostics, including nucleic acid amplification, mass spectrometry, and fluorescence microscopy-based technologies, are promising approaches for the future. Exhaled breath biomarkers, including measurement of volatile organic compounds, represent another future approach. Further validation of novel diagnostic technology platforms will be required to evaluate their utility for enhancing diagnosis and guiding treatment of pulmonary infections in the critically ill. However, the integration of novel diagnostics for rapid microbial identification, resistance phenotyping, and antibiotic sensitivity testing into usual care practice could significantly transform the care of patients and potentially inform improved targeted antimicrobial selection, de-escalation, and stewardship.
Kremen, Arie; Tsompanakis, Yiannis
2010-04-01
The slope-stability of a proposed vertical extension of a balefill was investigated in the present study, in an attempt to determine a geotechnically conservative design, compliant with New Jersey Department of Environmental Protection regulations, to maximize the utilization of unclaimed disposal capacity. Conventional geotechnical analytical methods are generally limited to well-defined failure modes, which may not occur in landfills or balefills due to the presence of preferential slip surfaces. In addition, these models assume an a priori stress distribution to solve essentially indeterminate problems. In this work, a different approach has been applied, which avoids several of the drawbacks of conventional methods. Specifically, the analysis was performed in a two-stage process: (a) calculation of stress distribution, and (b) application of an optimization technique to identify the most probable failure surface. The stress analysis was performed using a finite element formulation and the location of the failure surface was located by dynamic programming optimization method. A sensitivity analysis was performed to evaluate the effect of the various waste strength parameters of the underlying mathematical model on the results, namely the factor of safety of the landfill. Although this study focuses on the stability investigation of an expanded balefill, the methodology presented can easily be applied to general geotechnical investigations.
Methodology and method and apparatus for signaling with capacity optimized constellations
NASA Technical Reports Server (NTRS)
Barsoum, Maged F. (Inventor); Jones, Christopher R. (Inventor)
2012-01-01
Communication systems are described that use geometrically shaped constellations that have increased capacity compared to conventional constellations operating within a similar SNR band. In several embodiments, the geometrically shaped constellation is optimized based upon a capacity measure such as parallel decoding capacity or joint capacity. In many embodiments, a capacity optimized geometrically shaped constellation can be used to replace a conventional constellation as part of a firmware upgrade to transmitters and receivers within a communication system. In a number of embodiments, the geometrically shaped constellation is optimized for an Additive White Gaussian Noise channel or a fading channel.
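The joint-capacity measure mentioned above (the mutual information of an equiprobable constellation over an AWGN channel) can be estimated by Monte Carlo as sketched below; the normalization convention and the 16-QAM example are assumptions for illustration, and the patent's parallel decoding capacity and shaping procedure are not reproduced.

```python
import numpy as np

def joint_capacity_awgn(constellation, snr_db, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the mutual information I(X;Y), in bits/symbol, of an
    equiprobable complex constellation over an AWGN channel at the given Es/N0."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(constellation, dtype=complex)
    pts = pts / np.sqrt(np.mean(np.abs(pts) ** 2))        # normalize to unit average energy
    n0 = 10 ** (-snr_db / 10)                             # noise variance for Es = 1
    x = rng.choice(pts, size=n_samples)
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(n_samples)
                               + 1j * rng.standard_normal(n_samples))
    y = x + noise
    d2_all = np.abs(y[:, None] - pts[None, :]) ** 2       # distance to every constellation point
    d2_tx = np.abs(y - x) ** 2                            # distance to the transmitted point
    # I(X;Y) = log2(M) - E[ log2( sum_j exp(-(|y - x_j|^2 - |y - x|^2) / N0) ) ]
    ratio = np.sum(np.exp(-(d2_all - d2_tx[:, None]) / n0), axis=1)
    return np.log2(len(pts)) - np.mean(np.log2(ratio))

# Example: conventional 16-QAM at 10 dB SNR (a shaped constellation would be scored the same way).
qam16 = np.array([a + 1j * b for a in (-3, -1, 1, 3) for b in (-3, -1, 1, 3)])
print("16-QAM joint capacity ~", round(joint_capacity_awgn(qam16, snr_db=10.0), 3), "bits/symbol")
```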
Methodology and Method and Apparatus for Signaling with Capacity Optimized Constellations
NASA Technical Reports Server (NTRS)
Barsoum, Maged F. (Inventor); Jones, Christopher R. (Inventor)
2017-01-01
Communication systems are described that use geometrically shaped constellations that have increased capacity compared to conventional constellations operating within a similar SNR band. In several embodiments, the geometrically shaped constellation is optimized based upon a capacity measure such as parallel decoding capacity or joint capacity. In many embodiments, a capacity optimized geometrically shaped constellation can be used to replace a conventional constellation as part of a firmware upgrade to transmitters and receivers within a communication system. In a number of embodiments, the geometrically shaped constellation is optimized for an Additive White Gaussian Noise channel or a fading channel.
Energy management and cooperation in microgrids
NASA Astrophysics Data System (ADS)
Rahbar, Katayoun
Microgrids are key components of future smart power grids, which integrate distributed renewable energy generators to efficiently serve the load demand locally. However, the random and intermittent characteristics of renewable energy generation may hinder the reliable operation of microgrids. This thesis is thus devoted to investigating new strategies for microgrids to optimally manage their energy consumption, energy storage system (ESS) and cooperation in real time to achieve reliable and cost-effective operation. This thesis starts with a single microgrid system. The optimal energy scheduling and ESS management policy is derived to minimize the energy cost of the microgrid resulting from drawing conventional energy from the main grid under both the off-line and online setups, where the renewable energy generation/load demand are assumed to be non-causally known and causally known at the microgrid, respectively. The proposed online algorithm is designed based on the optimal off-line solution and works under arbitrary (even unknown) realizations of future renewable energy generation/load demand. Therefore, it is more practically applicable as compared to solutions based on conventional techniques such as dynamic programming and stochastic programming that require prior knowledge of renewable energy generation and load demand realizations/distributions. Next, for a group of microgrids that cooperate in energy management, we study efficient methods for sharing energy among them for both fully and partially cooperative scenarios, where microgrids are of common interest and self-interested, respectively. For the fully cooperative energy management, the off-line optimization problem is first formulated and optimally solved, where a distributed algorithm is proposed to minimize the total (sum) energy cost of microgrids. Inspired by the results obtained from the off-line optimization, efficient online algorithms are proposed for the real-time energy management, which are of low complexity and work given arbitrary realizations of renewable energy generation/load demand. On the other hand, for self-interested microgrids, the partially cooperative energy management is formulated and a distributed algorithm is proposed to optimize the energy cooperation such that the energy costs of individual microgrids are simultaneously reduced relative to the case without energy cooperation, while limited information is shared among the microgrids and the central controller.
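As a rough illustration of the off-line setup described above, the following sketch poses single-microgrid energy scheduling as a linear program: grid imports are paid at a time-of-use tariff, and a simple ESS is constrained by power and energy limits, with renewable generation and load assumed perfectly (non-causally) known. The horizon, tariff, load and solar profiles, and battery parameters are invented for illustration, charging losses are ignored, and this is not the thesis's exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative 24-h horizon (all numbers are made up for the sketch).
T, dt = 24, 1.0                                              # hours, step length
price = np.array([0.10] * 7 + [0.25] * 12 + [0.10] * 5)      # $/kWh grid tariff
load  = np.array([3.0] * 7 + [6.0] * 12 + [4.0] * 5)         # kW demand
solar = np.clip(5.0 * np.sin(np.pi * (np.arange(T) - 6) / 12), 0, None)  # kW PV

E_max, P_max, E0 = 10.0, 3.0, 5.0        # kWh capacity, kW power limit, initial energy

# Decision variables x = [grid_0..grid_{T-1}, batt_0..batt_{T-1}] (batt > 0: discharge).
c = np.concatenate([price * dt, np.zeros(T)])                # pay only for grid energy
bounds = [(0, None)] * T + [(-P_max, P_max)] * T

# Power balance: grid + solar + batt >= load (excess renewable may be curtailed).
A_bal = np.hstack([-np.eye(T), -np.eye(T)])
b_bal = solar - load

# Battery energy limits: 0 <= E0 - dt * cumsum(batt) <= E_max (losses ignored here).
L = np.tril(np.ones((T, T))) * dt
A_soc = np.vstack([np.hstack([np.zeros((T, T)),  L]),        #  dt*cumsum(b) <= E0
                   np.hstack([np.zeros((T, T)), -L])])       # -dt*cumsum(b) <= E_max - E0
b_soc = np.concatenate([np.full(T, E0), np.full(T, E_max - E0)])

res = linprog(c, A_ub=np.vstack([A_bal, A_soc]), b_ub=np.concatenate([b_bal, b_soc]),
              bounds=bounds, method="highs")
grid, batt = res.x[:T], res.x[T:]
print(f"off-line energy cost: ${res.fun:.2f}, total grid import: {grid.sum():.1f} kWh")
```

An online policy of the kind described in the thesis would replace the perfect-foresight profiles with causal estimates while keeping the same cost and storage structure.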
Liang, Xinshu; Gao, Yinan; Zhang, Xiaoying; Tian, Yongqiang; Zhang, Zhenxian; Gao, Lihong
2014-01-01
Inappropriate and excessive irrigation and fertilization have led to declines in crop yields and in water and fertilizer use efficiency in intensive vegetable production systems in China. For many vegetables, fertigation can be applied daily according to the actual water and nutrient requirements of crops. A greenhouse study was therefore conducted to investigate the effect of daily fertigation on the migration of water and salt in soil, and on root growth and fruit yield of cucumber. The treatments included conventional interval fertigation, optimal interval fertigation and optimal daily fertigation. Generally, although soil under optimal interval fertigation received much less fertilizer than soil under conventional interval fertigation, optimal interval fertigation did not statistically decrease the economic yield or fruit nutritional quality of cucumber when compared to conventional interval fertigation. In addition, optimal interval fertigation effectively avoided inorganic nitrogen accumulation in soil and significantly (P<0.05) increased the partial factor productivity of applied nitrogen by 88% and 209% in the early-spring and autumn-winter seasons, respectively, when compared to conventional interval fertigation. Although soils under optimal interval fertigation and optimal daily fertigation received the same amount of fertilizer, optimal daily fertigation maintained relatively stable water, electrical conductivity and mineral nitrogen levels in surface soils, promoted fine root (<1.5 mm diameter) growth of cucumber, and eventually increased cucumber economic yield by 6.2% and 8.3% and partial factor productivity of applied nitrogen by 55% and 75% in the early-spring and autumn-winter seasons, respectively, when compared to optimal interval fertigation. These results suggest that optimal daily fertigation is a beneficial practice for improving crop yield and water and fertilizer use efficiency in solar greenhouses. PMID:24475204
NASA Astrophysics Data System (ADS)
Tamiminia, Haifa; Homayouni, Saeid; McNairn, Heather; Safari, Abdoreza
2017-06-01
Polarimetric Synthetic Aperture Radar (PolSAR) data, thanks to their specific characteristics such as high resolution and weather and daylight independence, have become a valuable source of information for environmental monitoring and management. The discrimination capability of observations acquired by these sensors can be used for land cover classification and mapping. The aim of this paper is to propose an optimized kernel-based C-means clustering algorithm for agricultural crop mapping from multi-temporal PolSAR data. Firstly, several polarimetric features are extracted from preprocessed data. These features are the linear polarization intensities and several statistical and physical decompositions such as the Cloude-Pottier, Freeman-Durden and Yamaguchi techniques. Then, kernelized versions of the hard and fuzzy C-means clustering algorithms are applied to these polarimetric features in order to identify crop types. Unlike conventional partitioning clustering algorithms, the kernel function maps non-spherical, non-linearly separable data structures into a space in which they can be clustered more easily. In addition, in order to enhance the results, a Particle Swarm Optimization (PSO) algorithm is used to tune the kernel parameters and cluster centers and to optimize feature selection. The efficiency of this method was evaluated using multi-temporal UAVSAR L-band images acquired over an agricultural area near Winnipeg, Manitoba, Canada, during June and July of 2012. The results demonstrate more accurate crop maps using the proposed method when compared to the classical approaches (approximately a 12% improvement in general). In addition, when the optimization technique is used, greater improvement is observed in crop classification, about 5% in overall accuracy. Furthermore, a strong relationship is observed between the Freeman-Durden volume scattering component, which is related to canopy structure, and the phenological growth stages.
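The clustering step described above can be sketched compactly: a kernelized fuzzy C-means in which cluster centres live implicitly in the kernel-induced feature space, so distances and membership updates are computed entirely from the kernel matrix. The sketch below uses an RBF kernel on synthetic 2-D data rather than PolSAR features, omits the PSO tuning of kernel parameters and feature selection, and all parameter values are illustrative.

```python
import numpy as np

def rbf_kernel(X, gamma):
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

def kernel_fuzzy_cmeans(X, n_clusters=3, m=2.0, gamma=1.0, n_iter=100, tol=1e-6, seed=0):
    """Kernelized fuzzy C-means with implicit (feature-space) cluster centres."""
    rng = np.random.default_rng(seed)
    n = len(X)
    K = rbf_kernel(X, gamma)
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)              # fuzzy membership matrix
    for _ in range(n_iter):
        W = U ** m                                 # fuzzified memberships
        s = W.sum(axis=0)                          # total weight per cluster
        # squared feature-space distance from every sample to every implicit centre
        d2 = (np.diag(K)[:, None]
              - 2.0 * (K @ W) / s
              + np.einsum("jc,jl,lc->c", W, K, W) / s ** 2)
        d2 = np.maximum(d2, 1e-12)
        U_new = 1.0 / np.sum((d2[:, :, None] / d2[:, None, :]) ** (1.0 / (m - 1.0)), axis=2)
        if np.max(np.abs(U_new - U)) < tol:
            U = U_new
            break
        U = U_new
    return U, U.argmax(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(c, 0.3, size=(60, 2)) for c in ((0, 0), (2, 2), (0, 2))])
    U, labels = kernel_fuzzy_cmeans(X, n_clusters=3, gamma=1.0)
    print("cluster sizes:", np.bincount(labels))
```

The hard (kernel K-means) variant is obtained in the limit m → 1, where memberships become 0/1 assignments to the nearest implicit centre.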
MULTI-SCALE MODELING AND APPROXIMATION ASSISTED OPTIMIZATION OF BARE TUBE HEAT EXCHANGERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bacellar, Daniel; Ling, Jiazhen; Aute, Vikrant
2014-01-01
Air-to-refrigerant heat exchangers are very common in air-conditioning, heat pump and refrigeration applications. In these heat exchangers, there is a great benefit in terms of size, weight, refrigerant charge and heat transfer coefficient in moving from conventional channel sizes (~ 9mm) to smaller channel sizes (< 5mm). This work investigates new designs for air-to-refrigerant heat exchangers with tube outer diameter ranging from 0.5 to 2.0mm. The goal of this research is to develop and optimize the design of these heat exchangers and compare their performance with existing state of the art designs. The air-side performance of various tube bundle configurations is analyzed using a Parallel Parameterized CFD (PPCFD) technique. PPCFD allows for fast parametric CFD analyses of various geometries with topology change. Approximation techniques drastically reduce the number of CFD evaluations required during optimization. The Maximum Entropy Design method is used for sampling and the Kriging method is used for metamodeling. Metamodels are developed for the air-side heat transfer coefficients and pressure drop as a function of tube-bundle dimensions and air velocity. The metamodels are then integrated with an air-to-refrigerant heat exchanger design code. This integration allows a multi-scale analysis of the air-side performance of heat exchangers, including air-to-refrigerant heat transfer and phase change. Overall optimization is carried out using a multi-objective genetic algorithm. The optimal designs found can exhibit 50 percent size reduction, 75 percent decrease in air-side pressure drop and doubled air heat transfer coefficients compared to a high performance compact microchannel heat exchanger with the same capacity and flow rates.
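In outline, the approximation-assisted loop is: sample the design space, run the expensive simulation at those samples, fit a Kriging metamodel, and optimize on the metamodel instead of the simulator. A minimal sketch follows, with several stand-ins: a made-up analytic function replaces the CFD pressure-drop evaluation, a Latin hypercube replaces Maximum Entropy Design sampling, scikit-learn's Gaussian process regressor plays the role of Kriging, and single-objective differential evolution stands in for the multi-objective genetic algorithm.

```python
import numpy as np
from scipy.stats import qmc
from scipy.optimize import differential_evolution
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Design variables: tube diameter [mm], transverse pitch ratio, air velocity [m/s].
bounds = np.array([[0.5, 2.0], [1.2, 3.0], [1.0, 4.0]])

def expensive_cfd(x):
    """Stand-in for a CFD evaluation of air-side pressure drop (made-up analytic form)."""
    d, pitch, v = x
    return (v ** 1.8) / (d ** 0.5 * (pitch - 1.0) ** 0.3)

# 1) Sample the design space.
sampler = qmc.LatinHypercube(d=3, seed=0)
X = qmc.scale(sampler.random(40), bounds[:, 0], bounds[:, 1])
y = np.array([expensive_cfd(x) for x in X])

# 2) Fit the Kriging-style metamodel.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)

# 3) Optimize on the cheap surrogate instead of the CFD model.
res = differential_evolution(lambda x: float(gp.predict(x.reshape(1, -1))[0]),
                             bounds=list(map(tuple, bounds)), seed=0)
print("surrogate optimum:", np.round(res.x, 3), "predicted pressure drop:", round(res.fun, 3))
```

In the multi-objective setting, one metamodel per response (heat transfer coefficient and pressure drop) would feed a Pareto-based genetic algorithm rather than a scalar optimizer.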
Trajectory optimization for dynamic couch rotation during volumetric modulated arc radiotherapy
NASA Astrophysics Data System (ADS)
Smyth, Gregory; Bamber, Jeffrey C.; Evans, Philip M.; Bedford, James L.
2013-11-01
Non-coplanar radiation beams are often used in three-dimensional conformal and intensity modulated radiotherapy to reduce dose to organs at risk (OAR) by geometric avoidance. In volumetric modulated arc radiotherapy (VMAT) non-coplanar geometries are generally achieved by applying patient couch rotations to single or multiple full or partial arcs. This paper presents a trajectory optimization method for a non-coplanar technique, dynamic couch rotation during VMAT (DCR-VMAT), which combines ray tracing with a graph search algorithm. Four clinical test cases (partial breast, brain, prostate only, and prostate and pelvic nodes) were used to evaluate the potential OAR sparing for trajectory-optimized DCR-VMAT plans, compared with standard coplanar VMAT. In each case, ray tracing was performed and a cost map reflecting the number of OAR voxels intersected for each potential source position was generated. The least-cost path through the cost map, corresponding to an optimal DCR-VMAT trajectory, was determined using Dijkstra’s algorithm. Results show that trajectory optimization can reduce dose to specified OARs for plans otherwise comparable to conventional coplanar VMAT techniques. For the partial breast case, the mean heart dose was reduced by 53%. In the brain case, the maximum lens doses were reduced by 61% (left) and 77% (right) and the globes by 37% (left) and 40% (right). Bowel mean dose was reduced by 15% in the prostate only case. For the prostate and pelvic nodes case, the bowel V50 Gy and V60 Gy were reduced by 9% and 45% respectively. Future work will involve further development of the algorithm and assessment of its performance over a larger number of cases in site-specific cohorts.
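A minimal sketch of the graph-search step is given below: the (gantry angle, couch angle) cost map is treated as a layered graph in which the trajectory advances one gantry index per step while the couch index changes by at most one, and Dijkstra's algorithm returns the least-cost trajectory. The cost map here is random placeholder data rather than ray-traced OAR voxel counts, and the connectivity rule is an assumption for illustration.

```python
import heapq
import numpy as np

def optimal_trajectory(cost_map, max_couch_step=1):
    """Least-cost path through a (gantry x couch) cost map using Dijkstra's algorithm.
    The path visits every gantry index once; the couch index may change by at most
    max_couch_step between neighbouring gantry angles."""
    n_gantry, n_couch = cost_map.shape
    dist = {(0, c): cost_map[0, c] for c in range(n_couch)}
    prev = {}
    heap = [(d, node) for node, d in dist.items()]
    heapq.heapify(heap)
    while heap:
        d, (g, c) = heapq.heappop(heap)
        if d > dist.get((g, c), np.inf):
            continue                      # stale queue entry
        if g == n_gantry - 1:
            break                         # first final-layer node popped is optimal
        for dc in range(-max_couch_step, max_couch_step + 1):
            nc = c + dc
            if 0 <= nc < n_couch:
                nd = d + cost_map[g + 1, nc]
                if nd < dist.get((g + 1, nc), np.inf):
                    dist[(g + 1, nc)] = nd
                    prev[(g + 1, nc)] = (g, c)
                    heapq.heappush(heap, (nd, (g + 1, nc)))
    end = min((dist.get((n_gantry - 1, c), np.inf), c) for c in range(n_couch))[1]
    path, node = [], (n_gantry - 1, end)
    while node in prev:
        path.append(node)
        node = prev[node]
    return [node] + path[::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Placeholder cost map: number of OAR voxels hit per (gantry, couch) source position.
    cost = rng.integers(0, 100, size=(36, 19)).astype(float)
    print("first trajectory nodes:", optimal_trajectory(cost)[:5])
```

Collision-forbidden couch/gantry combinations would simply be assigned infinite cost so the search routes around them.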
Modeling and optimization of a hybrid solar combined cycle (HYCS)
NASA Astrophysics Data System (ADS)
Eter, Ahmad Adel
2011-12-01
The main objective of this thesis is to investigate the feasibility of integrating concentrated solar power (CSP) technology with conventional combined cycle technology for electricity generation in Saudi Arabia. The generated electricity can be used locally to meet the annually increasing demand. Specifically, it can be utilized to meet the demand during the hours of 10 am-3 pm and prevent blackout hours for some industrial sectors. The proposed CSP design provides operational flexibility, since the plant works as a conventional combined cycle during nighttime and switches to hybrid solar combined cycle operation during daytime. The first objective of the thesis is to develop a thermo-economic mathematical model that can simulate the performance of a hybrid solar-fossil fuel combined cycle. The second objective is to develop a computer simulation code that can solve the thermo-economic mathematical model using available software such as E.E.S. The developed simulation code is used to analyze the thermo-economic performance of different configurations of integrating CSP with the conventional fossil fuel combined cycle in order to identify the optimal integration configuration. This optimal integration configuration has been investigated further to achieve the optimal design of the solar field that gives the optimal solar share. Thermo-economic performance metrics available in the literature have been used in the present work to assess the thermo-economic performance of the investigated configurations. The economic and environmental impacts of integrating CSP with the conventional fossil fuel combined cycle are estimated and discussed. Finally, the optimal integration configuration is found to be solarization of the steam side of the conventional combined cycle with a solar multiple of 0.38, which requires 29 hectares and gives an LEC for the HYCS of 63.17 $/MWh under Dhahran weather conditions.
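For context on the LEC figure quoted above, a levelized electricity cost of the kind used to rank such configurations can be computed as annualized capital plus yearly O&M and fuel costs divided by annual generation. The sketch below uses a standard capital recovery factor; every number in the example is invented and unrelated to the thesis's cost data.

```python
def levelized_electricity_cost(capital_cost, om_cost, fuel_cost, annual_energy_mwh,
                               interest=0.08, lifetime_years=25):
    """Simple levelized electricity cost (LEC) in $/MWh.
    capital_cost in $; om_cost and fuel_cost in $/year; all figures illustrative."""
    crf = interest * (1 + interest) ** lifetime_years / ((1 + interest) ** lifetime_years - 1)
    return (crf * capital_cost + om_cost + fuel_cost) / annual_energy_mwh

# Example: hypothetical hybrid solar combined cycle block (made-up figures).
print(round(levelized_electricity_cost(capital_cost=250e6, om_cost=6e6,
                                       fuel_cost=30e6, annual_energy_mwh=900_000), 2), "$/MWh")
```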
Lin, Zhichao; Guo, Zexiong; Qiu, Lin; Yang, Wanyoug; Lin, Mingxia
2016-12-01
Background To extend the time window for thrombolysis, reducing the time for diagnosis and detection of acute cerebral infarction seems to be warranted. Purpose To evaluate the feasibility of implementing an array spatial sensitivity technique (ASSET)-echo-planar imaging (EPI)-fluid attenuated inversion recovery (FLAIR) (AE-FLAIR) sequence into an acute cerebral infarction magnetic resonance (MR) evaluation protocol, and to assess the diagnostic value of AE-FLAIR combined with three-dimensional time-of-flight MR angiography (3D TOF MRA). Material and Methods A total of 100 patients (68 men, 32 women; age range, 44-82 years) with acute cerebral infarction, including 50 consecutive uncooperative and 50 cooperative patients, were evaluated with T1-weighted (T1W) imaging, T2-weighted (T2W) imaging, FLAIR, diffusion-weighted imaging (DWI), 3D TOF, EPI-FLAIR, and AE-FLAIR. Conventional FLAIR, EPI-FLAIR, and AE-FLAIR were assessed by two observers independently for image quality. The optimized group (AE-FLAIR and 3D TOF) and the control group (T1W imaging, T2W imaging, conventional FLAIR, DWI, and 3D TOF) were compared for evaluation time and diagnostic accuracy. Results One hundred and twenty-five lesions were detected; adequate diagnostic image quality was achieved in 73% of conventional FLAIR, 62% of EPI-FLAIR, and 89% of AE-FLAIR images. The detection time was 12 ± 1 min with 76% accuracy and 4 ± 0.5 min with 100% accuracy in the control and the optimized groups, respectively. Inter-observer agreement was κ = 0.78 for the optimized group and κ = 0.81 for the control group. Conclusion With reduced acquisition time and better image quality, AE-FLAIR combined with 3D TOF may be used as a rapid diagnosis tool in patients with acute cerebral infarction, especially in uncooperative patients.
NASA Astrophysics Data System (ADS)
Yaroslavsky, Ilya; Boutoussov, Dmitri; Vybornov, Alexander; Perchuk, Igor; Meleshkevich, Val; Altshuler, Gregory
2018-02-01
Until recently, laser diodes (LDs) have been limited in their ability to deliver high peak power levels, which, in turn, limited their clinical capabilities. New technological developments have made possible the advent of "super pulse" LDs (SPLDs). Moreover, advanced means of smart thermal feedback enable precise control of laser power, thus ensuring safe and optimally efficacious application. In this work, we have evaluated a prototype SPLD system ex vivo. The device provided up to 25 W average and up to 150 W pulse power at 940 nm wavelength. The laser was operated in a thermal feedback-controlled mode, where the laser power was varied automatically as a function of real-time thermal feedback to maintain a constant tip temperature. The system was also equipped with a fiber tip initiated with an advanced TiO2/tungsten technique. Evaluation methods were designed to assess: 1) speed and depth of cutting; 2) dimensions of the coagulative margin. The SPLD system was compared with industry-leading conventional diode and CO2 devices. The results indicate that the SPLD system increases the speed of controlled cutting by a factor of >2 in comparison with the conventional diode laser, approaching that of the CO2 device. The ratio of cut depth to thermal damage margin was significantly higher than that of conventional diodes and close to that of the CO2 system, suggesting optimal hemostasis conditions. SPLD technology with real-time temperature control has the potential to create a new standard of care in the field of precision soft tissue surgery.
Potential for Imaging Engineered Tissues with X-Ray Phase Contrast
Appel, Alyssa; Anastasio, Mark A.
2011-01-01
As the field of tissue engineering advances, it is crucial to develop imaging methods capable of providing detailed three-dimensional information on tissue structure. X-ray imaging techniques based on phase-contrast (PC) have great potential for a number of biomedical applications due to their ability to provide information about soft tissue structure without exogenous contrast agents. X-ray PC techniques retain the excellent spatial resolution, tissue penetration, and calcified tissue contrast of conventional X-ray techniques while providing drastically improved imaging of soft tissue and biomaterials. This suggests that X-ray PC techniques are very promising for evaluation of engineered tissues. In this review, four different implementations of X-ray PC imaging are described and applications to tissues of relevance to tissue engineering reviewed. In addition, recent applications of X-ray PC to the evaluation of biomaterial scaffolds and engineered tissues are presented and areas for further development and application of these techniques are discussed. Imaging techniques based on X-ray PC have significant potential for improving our ability to image and characterize engineered tissues, and their continued development and optimization could have significant impact on the field of tissue engineering. PMID:21682604
3D imaging of optically cleared tissue using a simplified CLARITY method and on-chip microscopy
Zhang, Yibo; Shin, Yoonjung; Sung, Kevin; Yang, Sam; Chen, Harrison; Wang, Hongda; Teng, Da; Rivenson, Yair; Kulkarni, Rajan P.; Ozcan, Aydogan
2017-01-01
High-throughput sectioning and optical imaging of tissue samples using traditional immunohistochemical techniques can be costly and inaccessible in resource-limited areas. We demonstrate three-dimensional (3D) imaging and phenotyping in optically transparent tissue using lens-free holographic on-chip microscopy as a low-cost, simple, and high-throughput alternative to conventional approaches. The tissue sample is passively cleared using a simplified CLARITY method and stained using 3,3′-diaminobenzidine to target cells of interest, enabling bright-field optical imaging and 3D sectioning of thick samples. The lens-free computational microscope uses pixel super-resolution and multi-height phase recovery algorithms to digitally refocus throughout the cleared tissue and obtain a 3D stack of complex-valued images of the sample, containing both phase and amplitude information. We optimized the tissue-clearing and imaging system by finding the optimal illumination wavelength, tissue thickness, sample preparation parameters, and the number of heights of the lens-free image acquisition and implemented a sparsity-based denoising algorithm to maximize the imaging volume and minimize the amount of the acquired data while also preserving the contrast-to-noise ratio of the reconstructed images. As a proof of concept, we achieved 3D imaging of neurons in a 200-μm-thick cleared mouse brain tissue over a wide field of view of 20.5 mm2. The lens-free microscope also achieved more than an order-of-magnitude reduction in raw data compared to a conventional scanning optical microscope imaging the same sample volume. Being low cost, simple, high-throughput, and data-efficient, we believe that this CLARITY-enabled computational tissue imaging technique could find numerous applications in biomedical diagnosis and research in low-resource settings. PMID:28819645
DOE Office of Scientific and Technical Information (OSTI.GOV)
Depauw, N; Patel, S; MacDonald, S
Purpose: Deep inspiration breath-hold techniques (DIBH) have been shown to carry significant dosimetric advantages in conventional radiotherapy of left-sided breast cancer. The purpose of this study is to evaluate the use of DIBH techniques for post-mastectomy radiation therapy (PMRT) using proton pencil beam scanning (PBS). Method: Ten PMRT patients, with or without breast implant, underwent two helical CT scans: one with free breathing and the other with deep inspiration breath-hold. A prescription of 50.4 Gy(RBE) to the whole chest wall and lymphatics (axillary, supraclavicular, and intramammary nodes) was considered. PBS plans were generated for each patient’s CT scan using Astroid, an in-house treatment planning system, with the institution's conventional clinical PMRT parameters; that is, using a single en-face field with a spot size varying from 8 mm to 14 mm as a function of energy. Similar optimization parameters were used in both plans in order to ensure appropriate comparison. Results: Regardless of the technique (free breathing or DIBH), the generated plans were well within clinical acceptability. DIBH allowed for higher target coverage with better sparing of the cardiac structures. The lung doses were also slightly improved. While the use of DIBH techniques might be of interest, it is technically challenging as it would require a fast PBS delivery, as well as the synchronization of the beam delivery with a gating system, both of which are not currently available at the institution. Conclusion: DIBH techniques display some dosimetric advantages over free breathing treatment for PBS PMRT patients, which warrants further investigation. Plans will also be generated with smaller spot sizes (2.5 mm to 5.5 mm and 5 mm to 9 mm), corresponding to new generation machines, in order to further quantify the dosimetric advantages of DIBH as a function of spot size.
Fahimian, Benjamin; Yu, Victoria; Horst, Kathleen; Xing, Lei; Hristov, Dimitre
2013-12-01
External beam radiation therapy (EBRT) provides a non-invasive treatment alternative for accelerated partial breast irradiation (APBI), however, limitations in achievable dose conformity of current EBRT techniques have been correlated to reported toxicity. To enhance the conformity of EBRT APBI, a technique for conventional LINACs is developed, which through combined motion of the couch, intensity modulated delivery, and a prone breast setup, enables wide-angular coronal arc irradiation of the ipsilateral breast without irradiating through the thorax and contralateral breast. A couch trajectory optimization technique was developed to determine the trajectories that concurrently avoid collision with the LINAC and maintain the target within the MLC apertures. Inverse treatment planning was performed along the derived trajectory. The technique was experimentally implemented by programming the Varian TrueBeam™ STx in Developer Mode. The dosimetric accuracy of the delivery was evaluated by ion chamber and film measurements in phantom. The resulting optimized trajectory was shown to be necessarily non-isocentric, and contain both translation and rotations of the couch. Film measurements resulted in 93% of the points in the measured two-dimensional dose maps passing the 3%/3mm Gamma criterion. Preliminary treatment plan comparison to 5-field 3D-conformal, IMRT, and VMAT demonstrated enhancement in conformity, and reduction of the normal tissue V50% and V100% parameters that have been correlated with EBRT toxicity. The feasibility of wide-angular intensity modulated partial breast irradiation using motion of the couch has been demonstrated experimentally on a standard LINAC for the first time. For patients eligible for a prone setup, the technique may enable improvement of dose conformity and associated dose-volume parameters correlated with toxicity. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
2014-01-01
Background The purpose of this study was to compare two impression techniques from the perspective of patient preferences and treatment comfort. Methods Twenty-four (12 male, 12 female) subjects who had no previous experience with either conventional or digital impression participated in this study. Conventional impressions of maxillary and mandibular dental arches were taken with a polyether impression material (Impregum, 3 M ESPE), and bite registrations were made with polysiloxane bite registration material (Futar D, Kettenbach). Two weeks later, digital impressions and bite scans were performed using an intra-oral scanner (CEREC Omnicam, Sirona). Immediately after the impressions were made, the subjects’ attitudes, preferences and perceptions towards impression techniques were evaluated using a standardized questionnaire. The perceived source of stress was evaluated using the State-Trait Anxiety Scale. Processing steps of the impression techniques (tray selection, working time etc.) were recorded in seconds. Statistical analyses were performed with the Wilcoxon Rank test, and p < 0.05 was considered significant. Results There were significant differences among the groups (p < 0.05) in terms of total working time and processing steps. Patients stated that digital impressions were more comfortable than conventional techniques. Conclusions Digital impressions resulted in a more time-efficient technique than conventional impressions. Patients preferred the digital impression technique rather than conventional techniques. PMID:24479892
NASA Astrophysics Data System (ADS)
Amjad, M.; Salam, Z.; Ishaque, K.
2014-04-01
In order to design an efficient resonant power supply for an ozone gas generator, it is necessary to accurately determine the parameters of the ozone chamber. In the conventional method, information from the Lissajous plot is used to estimate the values of these parameters. However, the experimental setup for this purpose can only predict the parameters at one operating frequency, and there is no guarantee that this results in the highest ozone gas yield. This paper proposes a new approach to determine the parameters using a search and optimization technique known as Differential Evolution (DE). The objective function of the DE is set from the resonance condition, and the chamber parameter values can be searched regardless of experimental constraints. The chamber parameters obtained from the DE technique are validated by experiment.
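A hedged sketch of the parameter-search idea follows: the ozone chamber is represented here by an assumed two-capacitance model (dielectric capacitance in series with the gas-gap capacitance) driven through a known inductance, and SciPy's differential evolution searches for capacitances that reproduce a measured resonant frequency and an assumed dielectric-to-gap voltage-division ratio. The circuit model, measurement values, and bounds are illustrative stand-ins, not the paper's model.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Assumed simplified chamber model: dielectric capacitance Cd in series with the
# gas-gap capacitance Cg, driven through the transformer leakage inductance L.
L = 2.4e-3           # H, assumed known from the transformer datasheet (illustrative)
f_meas = 98.4e3      # Hz, measured resonant frequency (illustrative)
ratio_meas = 0.375   # assumed measured Cg/Cd voltage-division ratio (illustrative)

def model(params):
    cd, cg = params
    c_eq = cd * cg / (cd + cg)                            # series combination
    f_res = 1.0 / (2.0 * np.pi * np.sqrt(L * c_eq))       # resonance condition
    return f_res, cg / cd

def objective(params):
    f_res, ratio = model(params)
    # squared relative errors against the resonance condition and the division ratio
    return ((f_res - f_meas) / f_meas) ** 2 + ((ratio - ratio_meas) / ratio_meas) ** 2

bounds = [(0.5e-9, 10e-9), (0.5e-9, 10e-9)]               # search ranges for Cd, Cg (F)
result = differential_evolution(objective, bounds, seed=1, tol=1e-12)
print("Cd, Cg (nF):", np.round(result.x * 1e9, 3), "objective:", result.fun)
```

The same pattern extends directly to additional measured quantities (e.g., power or phase data at several operating points) by adding terms to the objective.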
Interferometric at-wavelength flare characterization of EUV optical systems
Naulleau, Patrick P.; Goldberg, Kenneth Alan
2001-01-01
The extreme ultraviolet (EUV) phase-shifting point diffraction interferometer (PS/PDI) provides the high-accuracy wavefront characterization critical to the development of EUV lithography systems. Enhancing the implementation of the PS/PDI can significantly extend its spatial-frequency measurement bandwidth. The enhanced PS/PDI is capable of simultaneously characterizing both wavefront and flare. The enhanced technique employs a hybrid spatial/temporal-domain point diffraction interferometer (referred to as the dual-domain PS/PDI) that is capable of suppressing the scattered-reference-light noise that hinders the conventional PS/PDI. Using the dual-domain technique in combination with a flare-measurement-optimized mask and an iterative calculation process for removing flare contribution caused by higher order grating diffraction terms, the enhanced PS/PDI can be used to simultaneously measure both figure and flare in optical systems.
An evaluation of student and clinician perception of digital and conventional implant impressions.
Lee, Sang J; Macarthur, Robert X; Gallucci, German O
2013-11-01
The accuracy and efficiency of digital implant impressions should match conventional impressions. Comparisons should be made with clinically relevant data. The purpose of this study was to evaluate the difficulty level and operator's perception between dental students and experienced clinicians when making digital and conventional implant impressions. Thirty experienced dental professionals and 30 second-year dental students made conventional and digital impressions of a single implant model. A visual analog scale (VAS) and multiple-choice questionnaires were used to assess the participant's perception of difficulty, preference, and effectiveness. Wilcoxon signed-rank test within the groups and Wilcoxon rank-sum test between the groups were used for statistical analysis (α=.05). On a 0 to 100 VAS, the student group scored a mean difficulty level of 43.1 (±18.5) for the conventional impression technique and 30.6 (±17.6) for the digital impression technique (P=.006). The clinician group scored a mean (standard deviation) difficulty level of 30.9 (±19.6) for conventional impressions and 36.5 (±20.6) for digital impressions (P=.280). Comparison between groups showed a mean difficulty level with the conventional impression technique significantly higher in the student group (P=.030). The digital impression was not significantly different between the groups (P=.228). Sixty percent of the students preferred the digital impression and 7% the conventional impression; 33% expressed no preference. In the clinician group, 33% preferred the digital impression and 37% the conventional impression; 30% had no preference. Seventy-seven percent of the student group felt most effective with digital impressions, 10% with conventional impressions, and 13% with either technique, whereas 40% of the clinician group chose the digital impression as the most effective technique, 53% the conventional impression, and 7% either technique. The conventional impression was more difficult to perform for the student group than the clinician group; however, the difficulty level of the digital impression was the same in both groups. It was also determined that the student group preferred the digital impression as the most efficient impression technique, and the clinician group had an even distribution in the choice of preferred and efficient impression techniques. Copyright © 2013 Editorial Council for the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
Lee, Seung Hyun; Lee, Young Han; Hahn, Seok; Yang, Jaemoon; Song, Ho-Taek; Suh, Jin-Suck
2017-01-01
Background Synthetic magnetic resonance imaging (MRI) allows reformatting of various synthetic images by adjustment of scanning parameters such as repetition time (TR) and echo time (TE). Optimized MR images can be reformatted from T1, T2, and proton density (PD) values to achieve maximum tissue contrast between joint fluid and adjacent soft tissue. Purpose To demonstrate the method for optimization of TR and TE by synthetic MRI and to validate the optimized images by comparison with conventional shoulder MR arthrography (MRA) images. Material and Methods Thirty-seven shoulder MRA images acquired by synthetic MRI were retrospectively evaluated for PD, T1, and T2 values at the joint fluid and glenoid labrum. Differences in signal intensity between the fluid and labrum were observed between TR of 500-6000 ms and TE of 80-300 ms in T2-weighted (T2W) images. Conventional T2W and synthetic images were analyzed for diagnostic agreement of supraspinatus tendon abnormalities (kappa statistics) and image quality scores (one-way analysis of variance with post-hoc analysis). Results Optimized mean values of TR and TE were 2724.7 ± 1634.7 and 80.1 ± 0.4, respectively. Diagnostic agreement for supraspinatus tendon abnormalities between conventional and synthetic MR images was excellent (κ = 0.882). The mean image quality score of the joint space in optimized synthetic images was significantly higher compared with those in conventional and synthetic images (2.861 ± 0.351 vs. 2.556 ± 0.607 vs. 2.750 ± 0.439; P < 0.05). Conclusion Synthetic MRI with optimized TR and TE for shoulder MRA enables optimization of soft-tissue contrast.
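The TR/TE optimization can be illustrated with the standard spin-echo signal model S ≈ PD·(1 − e^(−TR/T1))·e^(−TE/T2): synthesize the T2-weighted signal of fluid and labrum from their quantified PD, T1, and T2 values and grid-search TR and TE for maximum contrast. The relaxation values below are rough, literature-style guesses rather than the study's measurements; the TR/TE search ranges follow the abstract.

```python
import numpy as np

def spin_echo_signal(pd, t1, t2, tr, te):
    """Synthetic spin-echo signal from quantified PD, T1, T2 values (arbitrary units)."""
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

# Illustrative relaxation times (ms) and proton densities -- not the study's data.
fluid = dict(pd=1.0, t1=3000.0, t2=800.0)
labrum = dict(pd=0.7, t1=1000.0, t2=40.0)

tr_grid = np.arange(500.0, 6001.0, 50.0)       # ms, range reported in the abstract
te_grid = np.arange(80.0, 301.0, 5.0)          # ms
TR, TE = np.meshgrid(tr_grid, te_grid, indexing="ij")

contrast = (spin_echo_signal(**fluid, tr=TR, te=TE)
            - spin_echo_signal(**labrum, tr=TR, te=TE))
i, j = np.unravel_index(np.argmax(contrast), contrast.shape)
print(f"optimal TR = {tr_grid[i]:.0f} ms, TE = {te_grid[j]:.0f} ms, "
      f"fluid-labrum contrast = {contrast[i, j]:.3f}")
```

In practice the same search would be run on the voxelwise PD/T1/T2 maps measured for each patient rather than on single representative tissue values.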
Preparation of Highly Conductive Yarns by an Optimized Impregnation Process
NASA Astrophysics Data System (ADS)
Amba Sankar, K. N.; Mohanta, Kallol
2017-12-01
We report the development of the electrical conductivity in textile yarns through impregnation and post-treatment of poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS). The conductive polymer is deposited on fibers, which fills the gap space within the hierarchical structure of the yarns. Organic nonpolar solvents act as reducing agent to increase the density of PEDOT moieties on the yarns, galvanizing increment in conductivity values. Post-treatment by ethylene glycol transforms the resonance configuration of the conductive moieties of conjugated polymer, which helps in further enhancement of electrical conductivity of the yarns. We have optimized the method in terms of loading and conformal change of the polymer to have a lesser resistance of the coated conductive yarns. The minimum resistance achieved has a value of 77 Ωcm-1. This technique of developing conductivity in conventional yarns enables retaining the flexibility of yarns and feeling of softness which would find suitable applications for wearable electronics.
The future of human DNA vaccines
Li, Lei; Saade, Fadi; Petrovsky, Nikolai
2012-01-01
DNA vaccines have evolved greatly over the last 20 years since their invention, but have yet to become a competitive alternative to conventional protein or carbohydrate based human vaccines. Whilst safety concerns were an initial barrier, the Achilles heel of DNA vaccines remains their poor immunogenicity when compared to protein vaccines. A wide variety of strategies have been developed to optimize DNA vaccine immunogenicity, including codon optimization, genetic adjuvants, electroporation and sophisticated prime-boost regimens, with each of these methods having its advantages and limitations. Whilst each of these methods has contributed to incremental improvements in DNA vaccine efficacy, more is still needed if human DNA vaccines are to succeed commercially. This review foresees that a final breakthrough in human DNA vaccines will come from application of the latest cutting-edge technologies, including “epigenetics” and “omics” approaches, alongside traditional techniques to improve immunogenicity such as adjuvants and electroporation, thereby overcoming the current limitations of DNA vaccines in humans. PMID:22981627
Reliable Adaptive Video Streaming Driven by Perceptual Semantics for Situational Awareness
Pimentel-Niño, M. A.; Saxena, Paresh; Vazquez-Castro, M. A.
2015-01-01
A novel cross-layer optimized video adaptation driven by perceptual semantics is presented. The design target is streamed live video to enhance situational awareness in challenging communications conditions. Conventional solutions for recreational applications are inadequate, so a novel quality of experience (QoE) framework is proposed that allows fully controlled adaptation and enables perceptual semantic feedback. The framework relies on temporal/spatial abstraction for video applications serving beyond recreational purposes. An underlying cross-layer optimization technique takes into account feedback on network congestion (time) and erasures (space) to best distribute the available (scarce) bandwidth. Systematic random linear network coding (SRNC) adds reliability while preserving perceptual semantics. Objective metrics of the perceptual features in QoE show consistently high performance when using the proposed scheme. Finally, the proposed scheme is in line with content-aware trends, complying with the information-centric networking philosophy and architecture. PMID:26247057
Ghost artifact cancellation using phased array processing.
Kellman, P; McVeigh, E R
2001-08-01
In this article, a method for phased array combining is formulated which may be used to cancel ghosts caused by a variety of distortion mechanisms, including space variant distortions such as local flow or off-resonance. This method is based on a constrained optimization, which optimizes SNR subject to the constraint of nulling ghost artifacts at known locations. The resultant technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The method is applied to multishot EPI with noninterleaved phase encode acquisition. A number of benefits, as compared to the conventional interleaved approach, are reduced distortion due to off-resonance, in-plane flow, and EPI delay misalignment, as well as eliminating the need for echo-shifting. Experimental results demonstrate the cancellation for both phantom as well as cardiac imaging examples.
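A compact sketch of the constrained combiner described above: given coil response vectors at the true pixel location and at its ghost location, the weights w = R⁻¹S(SᴴR⁻¹S)⁻¹e give unit gain at the pixel, a null at the ghost, and minimum noise for coil noise covariance R. The sensitivities and noise covariance below are synthetic, for illustration only, not the article's data.

```python
import numpy as np

def ghost_nulling_weights(S, R, unit_index=0):
    """Constrained-optimization combiner weights: unit gain at the desired location
    (column `unit_index` of S) and nulls at the ghost locations (remaining columns),
    minimizing noise power for coil noise covariance R.
    w = R^-1 S (S^H R^-1 S)^-1 e"""
    n_loc = S.shape[1]
    e = np.zeros(n_loc)
    e[unit_index] = 1.0
    Rinv_S = np.linalg.solve(R, S)
    return Rinv_S @ np.linalg.solve(S.conj().T @ Rinv_S, e)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_coils = 8
    # Synthetic coil responses at the true pixel (column 0) and its N/2 ghost (column 1).
    S = rng.standard_normal((n_coils, 2)) + 1j * rng.standard_normal((n_coils, 2))
    R = np.eye(n_coils)                         # assume white, uncorrelated coil noise
    w = ghost_nulling_weights(S, R)
    print("gain at pixel :", np.abs(w.conj() @ S[:, 0]))   # ~1
    print("gain at ghost :", np.abs(w.conj() @ S[:, 1]))   # ~0
```

The same construction, applied with the pixel and ghost columns swapped, recovers the ghost-location signal, which is how the full-FOV image pair is separated in a SENSE-like fashion.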
Ghost Artifact Cancellation Using Phased Array Processing
Kellman, Peter; McVeigh, Elliot R.
2007-01-01
In this article, a method for phased array combining is formulated which may be used to cancel ghosts caused by a variety of distortion mechanisms, including space variant distortions such as local flow or off-resonance. This method is based on a constrained optimization, which optimizes SNR subject to the constraint of nulling ghost artifacts at known locations. The resultant technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The method is applied to multishot EPI with noninterleaved phase encode acquisition. A number of benefits, as compared to the conventional interleaved approach, are reduced distortion due to off-resonance, in-plane flow, and EPI delay misalignment, as well as eliminating the need for echo-shifting. Experimental results demonstrate the cancellation for both phantom as well as cardiac imaging examples. PMID:11477638
Preparation of Highly Conductive Yarns by an Optimized Impregnation Process
NASA Astrophysics Data System (ADS)
Amba Sankar, K. N.; Mohanta, Kallol
2018-03-01
We report the development of the electrical conductivity in textile yarns through impregnation and post-treatment of poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS). The conductive polymer is deposited on fibers, which fills the gap space within the hierarchical structure of the yarns. Organic nonpolar solvents act as reducing agent to increase the density of PEDOT moieties on the yarns, galvanizing increment in conductivity values. Post-treatment by ethylene glycol transforms the resonance configuration of the conductive moieties of conjugated polymer, which helps in further enhancement of electrical conductivity of the yarns. We have optimized the method in terms of loading and conformal change of the polymer to have a lesser resistance of the coated conductive yarns. The minimum resistance achieved has a value of 77 Ωcm-1. This technique of developing conductivity in conventional yarns enables retaining the flexibility of yarns and feeling of softness which would find suitable applications for wearable electronics.
Thermosonication and optimization of stingless bee honey processing.
Chong, K Y; Chin, N L; Yusof, Y A
2017-10-01
The effects of thermosonication on the quality of a stingless bee honey, the Kelulut, were studied using processing temperature from 45 to 90 ℃ and processing time from 30 to 120 minutes. Physicochemical properties including water activity, moisture content, color intensity, viscosity, hydroxymethylfurfural content, total phenolic content, and radical scavenging activity were determined. Thermosonication reduced the water activity and moisture content by 7.9% and 16.6%, respectively, compared to 3.5% and 6.9% for conventional heating. For thermosonicated honey, color intensity increased by 68.2%, viscosity increased by 275.0%, total phenolic content increased by 58.1%, and radical scavenging activity increased by 63.0% when compared to its raw form. The increase of hydroxymethylfurfural to 62.46 mg/kg was still within the limits of international standards. Optimized thermosonication conditions using response surface methodology were predicted at 90 ℃ for 111 minutes. Thermosonication was revealed as an effective alternative technique for honey processing.
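The response-surface step mentioned above can be sketched as fitting a second-order polynomial in temperature and time to the measured responses and locating its maximum inside the experimental region. The design points and response values below are fabricated purely to show the mechanics; they are not the study's data, which predicted an optimum near 90 ℃ and 111 minutes.

```python
import numpy as np

# Hypothetical design points and responses (illustrative numbers only).
temp = np.array([45, 45, 67.5, 67.5, 67.5, 90, 90, 67.5, 45, 90], dtype=float)   # deg C
time = np.array([30, 120, 75, 30, 120, 30, 120, 75, 75, 75], dtype=float)        # min
resp = np.array([60, 72, 78, 65, 83, 70, 88, 79, 68, 85], dtype=float)           # e.g. TPC

# Second-order response surface: y = b0 + b1*T + b2*t + b3*T^2 + b4*t^2 + b5*T*t
X = np.column_stack([np.ones_like(temp), temp, time, temp ** 2, time ** 2, temp * time])
beta, *_ = np.linalg.lstsq(X, resp, rcond=None)

# Locate the predicted optimum on a grid inside the experimental region.
tg, tm = np.meshgrid(np.linspace(45, 90, 181), np.linspace(30, 120, 181), indexing="ij")
pred = (beta[0] + beta[1] * tg + beta[2] * tm
        + beta[3] * tg ** 2 + beta[4] * tm ** 2 + beta[5] * tg * tm)
k = np.unravel_index(np.argmax(pred), pred.shape)
print(f"predicted optimum: {tg[k]:.1f} deg C, {tm[k]:.0f} min")
```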
Alien Genetic Algorithm for Exploration of Search Space
NASA Astrophysics Data System (ADS)
Patel, Narendra; Padhiyar, Nitin
2010-10-01
The Genetic Algorithm (GA) is a widely accepted population-based stochastic optimization technique used for single- and multi-objective optimization problems. Various modifications of GA have been proposed in the last three decades, mainly addressing two issues, namely increasing the convergence rate and increasing the probability of finding the global optimum. These two goals conflict: emphasizing the first, GA tends to converge prematurely to a local optimum, while addressing the second requires large computational effort. Thus, to reduce the contradictory effects of these two aspects, we propose a modification of GA in which an alien member is added to the population at every generation. The addition of an alien member to the current population at every generation increases the probability of obtaining the global optimum while maintaining a higher convergence rate. With two test cases, we demonstrate the efficacy of the proposed GA by comparison with the conventional GA.
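A minimal real-coded sketch of the idea is given below: a standard GA with tournament selection, blend crossover, and Gaussian mutation, in which one randomly generated "alien" member is inserted into the population at every generation (alongside a single elite). The operators, rates, and the Rastrigin test function are illustrative choices and not necessarily those used in the paper.

```python
import numpy as np

def rastrigin(x):
    return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def alien_ga(obj, dim=5, pop_size=40, gens=300, bounds=(-5.12, 5.12),
             mutation_sigma=0.3, seed=0):
    """Real-coded GA that injects one random 'alien' member every generation."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([obj(p) for p in pop])
    for _ in range(gens):
        new_pop = [pop[np.argmin(fit)].copy()]               # elitism: keep the best
        new_pop.append(rng.uniform(lo, hi, dim))             # alien member, every generation
        while len(new_pop) < pop_size:
            i1, i2, j1, j2 = rng.integers(0, pop_size, 4)
            p1 = pop[i1] if fit[i1] < fit[i2] else pop[i2]    # binary tournaments
            p2 = pop[j1] if fit[j1] < fit[j2] else pop[j2]
            alpha = rng.random(dim)
            child = alpha * p1 + (1 - alpha) * p2             # blend crossover
            child += rng.normal(0.0, mutation_sigma, dim)     # Gaussian mutation
            new_pop.append(np.clip(child, lo, hi))
        pop = np.array(new_pop)
        fit = np.array([obj(p) for p in pop])
    best = np.argmin(fit)
    return pop[best], fit[best]

if __name__ == "__main__":
    x_best, f_best = alien_ga(rastrigin)
    print("best fitness:", round(f_best, 4))
```

Removing the alien-injection line recovers a conventional elitist GA, which makes side-by-side comparison on the same test functions straightforward.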
Rapid and Facile Microwave-Assisted Surface Chemistry for Functionalized Microarray Slides
Lee, Jeong Heon; Hyun, Hoon; Cross, Conor J.; Henary, Maged; Nasr, Khaled A.; Oketokoun, Rafiou; Choi, Hak Soo; Frangioni, John V.
2011-01-01
We describe a rapid and facile method for surface functionalization and ligand patterning of glass slides based on microwave-assisted synthesis and a microarraying robot. Our optimized reaction enables surface modification 42-times faster than conventional techniques and includes a carboxylated self-assembled monolayer, polyethylene glycol linkers of varying length, and stable amide bonds to small molecule, peptide, or protein ligands to be screened for binding to living cells. We also describe customized slide racks that permit functionalization of 100 slides at a time to produce a cost-efficient, highly reproducible batch process. Ligand spots can be positioned on the glass slides precisely using a microarraying robot, and spot size adjusted for any desired application. Using this system, we demonstrate live cell binding to a variety of ligands and optimize PEG linker length. Taken together, the technology we describe should enable high-throughput screening of disease-specific ligands that bind to living cells. PMID:23467787
Effect of Process Parameter on Barium Titanate Stannate (BTS) Materials Sintered at Low Sintering Temperature
NASA Astrophysics Data System (ADS)
Shukla, Alok; Bajpai, P. K.
2011-11-01
Ba(Ti1-xSnx)O3 solid solutions with x = 0.15, 0.20, 0.30 and 0.40 are synthesized using the conventional solid state reaction method. Formation of solid solutions in the range 0 ≤ x ≤ 0.40 is confirmed using the X-ray diffraction technique. Single phase solid solutions with homogeneous grain distribution are observed at relatively low sintering temperature by controlling process parameters, viz. sintering time. Compositions sintered at the optimized temperature (1150 °C) for varying sintering times stabilize in the cubic perovskite phase. The experimental density (%) increases with increasing sintering time rather than with increasing sintering temperature. The lattice parameter increases with increasing tin content in the material. This demonstrates that process parameter optimization can lead to a single phase at relatively lower sintering temperatures, a major advantage for materials used as capacitor elements in MLCCs.
Effective 2D-3D medical image registration using Support Vector Machine.
Qi, Wenyuan; Gu, Lixu; Zhao, Qiang
2008-01-01
Registration of a pre-operative 3D volume dataset with intra-operative 2D images is gradually becoming an important technique to assist radiologists in diagnosing complicated diseases easily and quickly. In this paper, we propose a novel 2D/3D registration framework based on the Support Vector Machine (SVM) to avoid the cost of generating a large number of digitally reconstructed radiograph (DRR) images during the intra-operative stage. An estimated similarity-metric distribution is built from the relationship between the transform parameters and sparse, precomputed target metric values by means of support vector regression (SVR). Based on this surrogate, globally optimal transform parameters are then searched by an optimizer in order to align the 3D volume dataset with the intra-operative 2D image. Experiments reveal that our proposed registration method improves performance compared to the conventional registration method and provides precise registration results efficiently.
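A hedged sketch of the surrogate idea: train support vector regression on a sparse set of (transform parameters, similarity metric) pairs computed off-line, then let a global optimizer search the cheap surrogate instead of rendering a DRR per evaluation. The "true" metric below is a made-up analytic stand-in, and scikit-learn's SVR plus SciPy's differential evolution are substitutes for the paper's specific formulation.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from scipy.optimize import differential_evolution

def true_similarity(params):
    """Stand-in for the expensive DRR-based similarity metric (lower is better).
    In practice each call would render a DRR and compare it with the 2D image."""
    target = np.array([5.0, -3.0, 10.0])        # hypothetical true (tx, ty, rotation)
    return np.sum((params - target) ** 2) + 0.5 * np.sin(params).sum()

rng = np.random.default_rng(0)
bounds = np.array([[-20, 20], [-20, 20], [-30, 30]], dtype=float)

# Sparse set of precomputed metric values (the expensive, off-line stage).
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(80, 3))
y = np.array([true_similarity(x) for x in X])

# SVR surrogate of the similarity-metric distribution.
surrogate = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.1)).fit(X, y)

# Global search on the cheap surrogate instead of the expensive metric.
res = differential_evolution(lambda p: float(surrogate.predict(p.reshape(1, -1))[0]),
                             bounds=list(map(tuple, bounds)), seed=0)
print("estimated transform parameters:", np.round(res.x, 2))
```

A refinement step with a handful of true metric evaluations around the surrogate optimum is a natural follow-up when higher accuracy is needed.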
Lee, Chang Jun
2015-01-01
In research on plant layout optimization, the main goal is to minimize the costs of pipelines and pumping between connected equipment under various constraints. However, what has been lacking in previous studies is the transformation of various heuristics and safety regulations into mathematical equations. For example, proper safety distances between equipment items must be maintained to prevent dangerous accidents in a complex plant. Moreover, most studies have handled single-floor plants, whereas many multi-floor plants have been constructed over the last decade; a suitable algorithm handling various regulations and multi-floor layouts therefore needs to be developed. In this study, a Mixed Integer Non-Linear Programming (MINLP) problem including safety distances, maintenance spaces, etc. is formulated based on mathematical equations. The objective function is a summation of pipeline and pumping costs, and various safety and maintenance issues are transformed into inequality or equality constraints. This problem is very hard to solve due to its complex nonlinear constraints, which makes conventional derivative-based MINLP solvers impractical. In this study, the Particle Swarm Optimization (PSO) technique is therefore employed. An ethylene oxide plant is used as an illustration to verify the efficacy of this study.
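The PSO approach can be sketched on a toy multi-floor instance: each particle encodes (x, y, floor) for every equipment item, the objective sums rectilinear piping runs plus a vertical pumping proxy, and same-floor safety-distance violations are handled as penalty terms, precisely because derivative-based solvers cannot cope with such constraints. The connectivity, costs, floor height, site size, and penalty weight below are invented for illustration and are not the paper's case-study data.

```python
import numpy as np

# Illustrative problem: 4 equipment items, connections with piping cost weights.
n_items, n_floors, floor_h = 4, 2, 5.0
connections = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.5), (0, 3, 0.8)]   # (i, j, $/m proxy)
safety_dist, site = 3.0, 20.0

def layout_cost(x):
    """Piping + pumping proxy with a large penalty for safety-distance violations."""
    pos = x.reshape(n_items, 3)                        # columns: x, y, floor (continuous)
    floors = np.round(np.clip(pos[:, 2], 0, n_floors - 1))
    cost = 0.0
    for i, j, w in connections:
        horiz = np.abs(pos[i, :2] - pos[j, :2]).sum()  # rectilinear pipe run
        vert = floor_h * abs(floors[i] - floors[j])    # vertical run / pumping head proxy
        cost += w * (horiz + 2.0 * vert)
    penalty = 0.0
    for i in range(n_items):
        for j in range(i + 1, n_items):
            if floors[i] == floors[j]:
                d = np.linalg.norm(pos[i, :2] - pos[j, :2])
                penalty += max(0.0, safety_dist - d) * 1e3
    return cost + penalty

def pso(obj, dim, n_particles=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo = np.tile([0.0, 0.0, 0.0], n_items)
    hi = np.tile([site, site, n_floors - 1], n_items)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([obj(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([obj(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

best, best_cost = pso(layout_cost, dim=3 * n_items)
print("best layout cost:", round(best_cost, 2))
```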
Optimized Hypernetted-Chain Solutions for Helium-4 Surfaces and Metal Surfaces
NASA Astrophysics Data System (ADS)
Qian, Guo-Xin
This thesis is a study of inhomogeneous Bose systems such as liquid 4He slabs and inhomogeneous Fermi systems such as the electron gas in metal films, at zero temperature. Using a Jastrow-type many-body wavefunction, the ground state energy is expressed by means of Bogoliubov-Born-Green-Kirkwood-Yvon and Hypernetted-Chain techniques. For Bose systems, Euler-Lagrange equations are derived for the one- and two-body functions and systematic approximation methods are physically motivated. It is shown that the optimized variational method includes a self-consistent summation of ladder- and ring-diagrams of conventional many-body theory. For Fermi systems, a linear potential model is adopted to generate the optimized Hartree-Fock basis. Euler-Lagrange equations are derived for the two-body correlations which serve to screen the strong bare Coulomb interaction. The optimization of the pair correlation leads to an expression of the correlation energy in which the state-averaged RPA part is separated. Numerical applications are presented for the density profile and pair distribution function for both 4He surfaces and metal surfaces. Both the bulk and surface energies are calculated in good agreement with experiments.
Optimal control of malaria: combining vector interventions and drug therapies.
Khamis, Doran; El Mouden, Claire; Kura, Klodeta; Bonsall, Michael B
2018-04-24
The sterile insect technique and transgenic equivalents are considered promising tools for controlling vector-borne disease in an age of increasing insecticide and drug-resistance. Combining vector interventions with artemisinin-based therapies may achieve the twin goals of suppressing malaria endemicity while managing artemisinin resistance. While the cost-effectiveness of these controls has been investigated independently, their combined usage has not been dynamically optimized in response to ecological and epidemiological processes. An optimal control framework based on coupled models of mosquito population dynamics and malaria epidemiology is used to investigate the cost-effectiveness of combining vector control with drug therapies in homogeneous environments with and without vector migration. The costs of endemic malaria are weighed against the costs of administering artemisinin therapies and releasing modified mosquitoes using various cost structures. Larval density dependence is shown to reduce the cost-effectiveness of conventional sterile insect releases compared with transgenic mosquitoes with a late-acting lethal gene. Using drug treatments can reduce the critical vector control release ratio necessary to cause disease fadeout. Combining vector control and drug therapies is the most effective and efficient use of resources, and using optimized implementation strategies can substantially reduce costs.
The performance of matched-field track-before-detect methods using shallow-water Pacific data.
Tantum, Stacy L; Nolte, Loren W; Krolik, Jeffrey L; Harmanci, Kerem
2002-07-01
Matched-field track-before-detect processing, which extends the concept of matched-field processing to include modeling of the source dynamics, has recently emerged as a promising approach for maintaining the track of a moving source. In this paper, optimal Bayesian and minimum variance beamforming track-before-detect algorithms which incorporate a priori knowledge of the source dynamics in addition to the underlying uncertainties in the ocean environment are presented. A Markov model is utilized for the source motion as a means of capturing the stochastic nature of the source dynamics without assuming uniform motion. In addition, the relationship between optimal Bayesian track-before-detect processing and minimum variance track-before-detect beamforming is examined, revealing how an optimal tracking philosophy may be used to guide the modification of existing beamforming techniques to incorporate track-before-detect capabilities. Further, the benefits of implementing an optimal approach over conventional methods are illustrated through application of these methods to shallow-water Pacific data collected as part of the SWellEX-1 experiment. The results show that incorporating Markovian dynamics for the source motion provides marked improvement in the ability to maintain target track without the use of a uniform velocity hypothesis.
A Survey on Multimedia-Based Cross-Layer Optimization in Visual Sensor Networks
Costa, Daniel G.; Guedes, Luiz Affonso
2011-01-01
Visual sensor networks (VSNs) comprised of battery-operated electronic devices endowed with low-resolution cameras have expanded the applicability of a series of monitoring applications. Those types of sensors are interconnected by ad hoc error-prone wireless links, imposing stringent restrictions on available bandwidth, end-to-end delay and packet error rates. In such context, multimedia coding is required for data compression and error-resilience, also ensuring energy preservation over the path(s) toward the sink and improving the end-to-end perceptual quality of the received media. Cross-layer optimization may enhance the expected efficiency of VSNs applications, disrupting the conventional information flow of the protocol layers. When the inner characteristics of the multimedia coding techniques are exploited by cross-layer protocols and architectures, higher efficiency may be obtained in visual sensor networks. This paper surveys recent research on multimedia-based cross-layer optimization, presenting the proposed strategies and mechanisms for transmission rate adjustment, congestion control, multipath selection, energy preservation and error recovery. We note that many multimedia-based cross-layer optimization solutions have been proposed in recent years, each one bringing a wealth of contributions to visual sensor networks. PMID:22163908
Flexible scintillator autoradiography for tumor margin inspection using 18F-FDG
NASA Astrophysics Data System (ADS)
Vyas, K. N.; Grootendorst, M.; Mertzanidou, T.; Macholl, S.; Stoyanov, D.; Arridge, S. R.; Tuch, D. S.
2018-03-01
Autoradiography potentially offers high molecular sensitivity and spatial resolution for tumor margin estimation. However, conventional autoradiography requires sectioning the sample, which is destructive and labor-intensive. Here we describe a novel autoradiography technique that uses a flexible ultra-thin scintillator which conforms to the sample surface. Imaging with the flexible scintillator enables direct, high-resolution and high-sensitivity imaging of beta particle emissions from targeted radiotracers. The technique has the potential to identify positive tumor margins in fresh unsectioned samples during surgery, eliminating the processing time demands of conventional autoradiography. We demonstrate the feasibility of the flexible autoradiography approach to directly image the beta emissions from radiopharmaceuticals using lab experiments and GEANT-4 simulations to determine i) the specificity for 18F compared to 99mTc-labeled tracers, and ii) the sensitivity to detect signal from various depths within the tissue. We found that an image resolution of 1.5 mm was achievable with a scattering background and we estimate a minimum detectable activity concentration of 0.9 kBq/ml for 18F. We show that the flexible autoradiography approach has high potential as a technique for molecular imaging of tumor margins using 18F-FDG in a tumor xenograft mouse model imaged with a radiation-shielded EMCCD camera. Due to the advantage of conforming to the specimen, the flexible scintillator showed significantly better image quality in terms of tumor signal to whole-body background noise compared to rigid and optimally thick CaF2:Eu and BC400. The sensitivity of the technique means it is suitable for clinical translation.
Innovative model-based flow rate optimization for vanadium redox flow batteries
NASA Astrophysics Data System (ADS)
König, S.; Suriyah, M. R.; Leibfried, T.
2016-11-01
In this paper, an innovative approach is presented to optimize the flow rate of a 6-kW vanadium redox flow battery with realistic stack dimensions. Efficiency is derived using a multi-physics battery model and a newly proposed instantaneous efficiency determination technique. An optimization algorithm is applied to identify optimal flow rates for operation points defined by state-of-charge (SoC) and current. The proposed method is evaluated against the conventional approach of applying Faraday's first law of electrolysis, scaled by the so-called flow factor. To make a fair comparison, the flow factor is also optimized by simulating cycles with different charging/discharging currents. The results show that the efficiency is increased by up to 1.2 percentage points; in addition, discharge capacity is also increased by up to 1.0 kWh or 5.4%. Detailed loss analysis is carried out for the cycles with maximum and minimum charging/discharging currents. It is shown that the proposed method minimizes the sum of losses caused by concentration over-potential, pumping and diffusion. Furthermore, for the deployed Nafion 115 membrane, it is observed that diffusion losses increase with stack SoC. Therefore, to decrease stack SoC and lower diffusion losses, a higher flow rate during charging than during discharging is reasonable.
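For reference, a minimal sketch of the conventional flow-rate rule that the proposed method is compared against: Faraday's first law of electrolysis scaled by an empirical flow factor. The stack parameters, SoC handling and flow factor value below are illustrative assumptions, not data from the 6-kW stack.

    # Conventional flow-factor rule for a vanadium redox flow battery (illustrative sketch).
    F = 96485.0          # Faraday constant, C/mol

    def stoichiometric_flow(current_a, n_cells, c_vanadium_mol_per_l, soc, charging=True):
        """Minimum electrolyte flow (L/s) that just supplies the reacting vanadium ions.

        Faraday's first law: moles converted per second = I * N / (z * F), with z = 1
        for the vanadium couples.  Only the fraction of active species still available
        (1 - SoC when charging, SoC when discharging) can react, hence the division.
        """
        available = (1.0 - soc) if charging else soc
        mol_per_s = current_a * n_cells / F           # z = 1
        return mol_per_s / (c_vanadium_mol_per_l * max(available, 1e-3))

    def flow_factor_rule(current_a, n_cells, c_v, soc, flow_factor=7.0, charging=True):
        """Conventional rule: stoichiometric flow scaled by an empirical flow factor."""
        return flow_factor * stoichiometric_flow(current_a, n_cells, c_v, soc, charging)

    if __name__ == "__main__":
        # Illustrative numbers only (not the 6-kW stack of the paper).
        for soc in (0.2, 0.5, 0.8):
            q = flow_factor_rule(current_a=50.0, n_cells=40, c_v=1.6, soc=soc)
            print(f"SoC={soc:.1f}: flow is about {q*60:.2f} L/min")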
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wasser, M.N.; Schultze Kool, L.J.; Roos, A. de
Our goal was to assess the value of MRA for detecting stenoses in the celiac (CA) and superior mesenteric (SMA) arteries in patients suspected of having chronic mesenteric ischemia, using an optimized systolically gated 3D phase contrast technique. In an initial study in 24 patients who underwent conventional angiography of the abdominal vessels for different clinical indications, a 3D phase contrast MRA technique (3D-PCA) was evaluated and optimized to image the CAs and SMAs. Subsequently, a prospective study was performed to assess the value of systolically gated 3D-PCA in evaluation of the mesenteric arteries in 10 patients with signs and symptoms of chronic mesenteric ischemia. Intraarterial digital subtraction angiography and surgical findings were used as the reference standard. In the initial study, systolic gating appeared to be essential in imaging the SMA on 3D-PCA. In 10 patients suspected of mesenteric ischemia, systolically gated 3D-PCA identified significant proximal disease in the two mesenteric vessels in 4 patients. These patients underwent successful reconstruction of their stenotic vessels. Cardiac-gated MRA may become a useful tool in selection of patients suspected of having mesenteric ischemia who may benefit from surgery. 16 refs., 6 figs., 4 tabs.
Experimental study on the healing process following laser welding of the cornea.
Rossi, Francesca; Pini, Roberto; Menabuoni, Luca; Mencucci, Rita; Menchini, Ugo; Ambrosini, Stefano; Vannelli, Gabriella
2005-01-01
An experimental study evaluating the application of laser welding of the cornea and the subsequent healing process is presented. The welding of corneal wounds is achieved after staining the cut walls with a solution of the chromophore indocyanine green, and irradiating them with a diode laser (810 nm) operating at low power (60 to 90 mW). The result is a localized heating of the cut, inducing controlled welding of the stromal collagen. In order to optimize this technique and to study the healing process, experimental tests, simulating cataract surgery and penetrating keratoplasty, were performed on rabbits: conventional and laser-induced suturing of corneal wounds were thus compared. A follow-up study 7 to 90 days after surgery was carried out by means of objective and histological examinations, in order to optimize the welding technique and to investigate the subsequent healing process. The analyses of the laser-welded corneas evidenced a faster and more effective restoration of the architecture of the stroma. No thermal damage of the welded stroma was detected, nor were there foreign body reactions or other inflammatory processes. Copyright 2005 Society of Photo-Optical Instrumentation Engineers.
Jenny, J Y; Boeri, C
2001-01-01
A navigation system should improve the quality of total knee prosthesis implantation in comparison to the classical, surgeon-controlled operative technique. The authors have implanted 40 total knee prostheses with an optical infrared navigation system (Orthopilot AESCULAP, Tuttlingen--group A). The quality of implantation was studied on postoperative long leg AP and lateral X-rays, and compared to a control group of 40 computer-paired total knee prostheses of the same model (Search Prosthesis, AESCULAP, Tuttlingen) implanted with a classical, surgeon-controlled technique (group B). An optimal mechanical femorotibial angle (3 degrees valgus to 3 degrees varus) was obtained in 33 cases in group A and 31 cases in group B (p > 0.05). Better results were seen for the coronal and sagittal orientation of both tibial and femoral components in group A. Globally, 26 cases in group A and 12 cases in group B were implanted in an optimal manner for all studied criteria (p < 0.01). The navigation system used allows a significant improvement in the quality of implantation of a total knee prosthesis in comparison to classical, surgeon-controlled instrumentation. Long-term outcome could consequently be improved.
NASA Astrophysics Data System (ADS)
Galanis, George; Famelis, Ioannis; Kalogeri, Christina
2014-10-01
In recent years a new and highly demanding framework has been set for environmental sciences and applied mathematics as a result of the needs posed by issues that are of interest not only to the scientific community but to today's society in general: global warming, renewable energy resources and natural hazards can be listed among them. The research community follows two main directions today in order to address these problems: the utilization of environmental observations obtained from in situ or remote sensing sources, and meteorological-oceanographic simulations based on physical-mathematical models. In particular, in trying to reach credible local forecasts, the two previous data sources are combined by algorithms that are essentially based on optimization processes. The conventional approaches in this framework usually neglect the topological-geometrical properties of the space of the data under study by adopting least squares methods based on classical Euclidean geometry tools. In the present work new optimization techniques are discussed that make use of methodologies from a rapidly advancing branch of applied mathematics, Information Geometry. The latter proves that the distributions of data sets are elements of non-Euclidean structures in which the underlying geometry may differ significantly from the classical one. Geometrical entities like Riemannian metrics, distances, curvature and affine connections are utilized in order to define the optimum distributions fitting the environmental data at specific areas and to form differential systems that describe the optimization procedures. The proposed methodology is clarified by an application to wind speed forecasts on the island of Kefalonia, Greece.
Computer-based planning of optimal donor sites for autologous osseous grafts
NASA Astrophysics Data System (ADS)
Krol, Zdzislaw; Chlebiej, Michal; Zerfass, Peter; Zeilhofer, Hans-Florian U.; Sader, Robert; Mikolajczak, Pawel; Keeve, Erwin
2002-05-01
Bone graft surgery is often necessary for reconstruction of craniofacial defects after trauma, tumor, infection or congenital malformation. In this operative technique the removed or missing bone segment is filled with a bone graft. The mainstay of craniofacial reconstruction rests with the replacement of the defective bone by autogenous bone grafts. To achieve sufficient incorporation of the autograft into the host bone, precise planning and simulation of the surgical intervention is required. The major problem is to determine as accurately as possible the donor site where the graft should be dissected from and to define the shape of the desired transplant. A computer-aided method for semi-automatic selection of optimal donor sites for autografts in craniofacial reconstructive surgery has been developed. The non-automatic step of graft design and constraint setting is followed by a fully automatic procedure to find the best fitting position. In extension to preceding work, a new optimization approach based on the Levenberg-Marquardt method has been implemented and embedded into our computer-based surgical planning system. Once the pre-processing step has been performed, this new technique enables selection of the optimal donor site in less than one minute. The method has been applied during the surgical planning step in more than 20 cases. The postoperative observations have shown that functional results, such as speech and chewing ability as well as restoration of bony continuity, were clearly better compared to conventionally planned operations. Moreover, in most cases the duration of the surgical interventions has been distinctly reduced.
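A minimal sketch of a Levenberg-Marquardt fitting step of the kind embedded in such a planning system, here fitting a two-dimensional rigid transform that aligns a candidate graft outline to a defect outline; the point sets and the residual definition are toy assumptions rather than the actual surface-matching cost used in the planning software.

    import numpy as np
    from scipy.optimize import least_squares

    # Toy "defect" outline and a rotated/shifted "graft" outline (illustrative data only).
    rng = np.random.default_rng(0)
    defect = rng.uniform(-10, 10, size=(30, 2))
    angle_true, shift_true = 0.3, np.array([4.0, -2.0])
    rot_true = np.array([[np.cos(angle_true), -np.sin(angle_true)],
                         [np.sin(angle_true),  np.cos(angle_true)]])
    graft = defect @ rot_true.T + shift_true + rng.normal(0, 0.05, size=defect.shape)

    def residuals(params):
        """Stacked x/y distances between the back-transformed graft and the defect points."""
        theta, tx, ty = params
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        aligned = (graft - [tx, ty]) @ rot            # inverse transform of the graft
        return (aligned - defect).ravel()

    fit = least_squares(residuals, x0=[0.0, 0.0, 0.0], method="lm")   # Levenberg-Marquardt
    print("estimated angle and shift:", fit.x)        # close to [0.3, 4.0, -2.0]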
Jadhav, Vivek Dattatray; Motwani, Bhagwan K.; Shinde, Jitendra; Adhapure, Prasad
2017-01-01
Aims: The aim of this study was to evaluate the marginal fit and surface roughness of complete cast crowns made by a conventional and an accelerated casting technique. Settings and Design: This study was divided into three parts. In Part I, the marginal fit of full metal crowns made by both casting techniques was checked in the vertical direction; in Part II, the fit of sectional metal crowns made by both casting techniques was checked in the horizontal direction; and in Part III, the surface roughness of disc-shaped metal plate specimens made by both casting techniques was checked. Materials and Methods: A conventional technique was compared with an accelerated technique. In Part I of the study the marginal fit of the full metal crowns and in Part II the horizontal fit of sectional metal crowns made by both casting techniques were determined, and in Part III the surface roughness of castings made with the same techniques was compared. Statistical Analysis Used: The results of the t-test and independent sample test do not indicate statistically significant differences in the marginal discrepancy detected between the two casting techniques. Results: For the marginal discrepancy and surface roughness, crowns fabricated with the accelerated technique were significantly different from those fabricated with the conventional technique. Conclusions: The accelerated casting technique showed quite satisfactory results, but the conventional technique was superior in terms of marginal fit and surface roughness. PMID:29042726
Jeon, Young-Chan; Jeong, Chang-Mo
2017-01-01
PURPOSE The purpose of this study was to compare the fit of cast gold crowns fabricated from the conventional and the digital impression technique. MATERIALS AND METHODS An artificial tooth in a master model and abutment teeth in ten patients were restored with cast gold crowns fabricated from the digital and the conventional impression technique. The forty silicone replicas were cut in three sections; each section was evaluated at nine points. The measurement was carried out by using a measuring microscope and I-Solution. Data from the silicone replicas were analyzed and all tests were performed with an α-level of 0.05. RESULTS 1. The average gaps of cast gold crowns fabricated from the digital impression technique were significantly larger than those of the conventional impression technique. 2. In the marginal and internal axial gaps of cast gold crowns, no statistical differences were found between the two impression techniques. 3. The internal occlusal gaps of cast gold crowns fabricated from the digital impression technique were significantly larger than those of the conventional impression technique. CONCLUSION Both prostheses presented clinically acceptable results when comparing the fit. The prostheses fabricated from the digital impression technique showed larger gaps with respect to the occlusal surface. PMID:28243386
Kamali, Hossein; Aminimoghadamfarouj, Noushin; Golmakani, Ebrahim; Nematollahi, Alireza
2015-01-01
Aim: The aim of this study was to examine and evaluate crucial variables in the essential oil extraction process from Lavandula hybrida through static-dynamic and semi-continuous techniques using the response surface method. Materials and Methods: Essential oil components were extracted from Lavandula hybrida (Lavandin) flowers using supercritical carbon dioxide via a static-dynamic steps (SDS) procedure and a semi-continuous (SC) technique. Results: Using the response surface method, the optimum extraction yield (4.768%) was obtained via SDS at 108.7 bar, 48.5°C, 120 min static (8×15 min) and 24 min dynamic (8×3 min), in contrast to the 4.620% extraction yield for SC at 111.6 bar, 49.2°C, 14 min (static), 121.1 min (dynamic). Conclusion: The results indicated that a substantial reduction (81.56%) in solvent usage (kg CO2/g oil) is observed in the SDS method versus the conventional SC method. PMID:25598636
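A minimal sketch of the response-surface step: fit a second-order polynomial model to (pressure, temperature) versus yield data and locate its stationary point. The design points and yields below are invented for illustration and do not reproduce the Lavandin data.

    import numpy as np

    # Made-up central-composite-style design: columns are pressure (bar) and temperature (°C).
    X = np.array([[90, 40], [90, 55], [120, 40], [120, 55],
                  [85, 47.5], [125, 47.5], [105, 37], [105, 58], [105, 47.5]])
    y = np.array([3.9, 4.1, 4.2, 4.0, 3.8, 4.1, 3.7, 3.9, 4.6])   # extraction yield, %

    def quad_features(X):
        """Second-order model terms: 1, p, t, p*t, p^2, t^2."""
        p, t = X[:, 0].astype(float), X[:, 1].astype(float)
        return np.column_stack([np.ones_like(p), p, t, p * t, p ** 2, t ** 2])

    beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
    b0, b1, b2, b12, b11, b22 = beta

    # Stationary point of the fitted surface: solve grad f = 0.
    # (For a maximum, H should be negative definite; check its eigenvalues in practice.)
    H = np.array([[2 * b11, b12], [b12, 2 * b22]])
    opt = np.linalg.solve(H, -np.array([b1, b2]))
    print("fitted optimum (pressure, temperature):", opt)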
NASA Astrophysics Data System (ADS)
Vivek, Tiwary; Arunkumar, P.; Deshpande, A. S.; Vinayak, Malik; Kulkarni, R. M.; Asif, Angadi
2018-04-01
Conventional investment casting is one of the oldest and most economical manufacturing techniques for producing intricate and complex part geometries. However, investment casting is considered economical only if the volume of production is large. Design iterations and design optimisations in this technique prove to be very costly due to the time and tooling cost of making dies for producing wax patterns. With the advent of additive manufacturing technology, however, plastic patterns show very good potential to replace wax patterns. This approach can be very useful for low-volume production and laboratory requirements, since the cost and time required to incorporate changes in the design are very low. This paper discusses the steps involved in developing polymer nanocomposite filaments and checking their suitability for investment casting. The process parameters of the 3D printer are also optimized using the DOE technique to obtain mechanically stronger plastic patterns. The study develops a framework for rapid investment casting for laboratory as well as industrial requirements.
Baujat, Bertrand; Thariat, Juliette; Baglin, Anne Catherine; Costes, Valérie; Testelin, Sylvie; Reyt, Emile; Janot, François
2014-05-01
Malignant tumors of the upper aerodigestive tract may be rare by their histology (sarcoma, variants of conventional squamous cell carcinomas) and/or location (sinuses, salivary glands, ear, of various histologies themselves). They represent less than 10% of head and neck neoplasms. Confirming their diagnosis often requires medical expertise and sometimes biomolecular techniques complementary to classical histology and immunohistochemistry. Due to their location, their treatment often requires a specific surgical technique. Radiation therapy is indicated based on histoclinical characteristics common to other head and neck neoplasms but also incorporates grade. Further, the technique must often be adapted to take into account the proximity of organs at risk. For most histologies, chemotherapy is relatively inefficient, but current molecular advances may allow pharmaceutical developments to be considered in the coming years. The REFCOR, the French network for head and neck cancers, aims to organize and promote the optimal management of these rare and heterogeneous diseases, and to promote research and clinical trials.
High-speed transport-of-intensity phase microscopy with an electrically tunable lens.
Zuo, Chao; Chen, Qian; Qu, Weijuan; Asundi, Anand
2013-10-07
We present a high-speed transport-of-intensity equation (TIE) quantitative phase microscopy technique, named TL-TIE, that combines an electrically tunable lens with a conventional transmission microscope. This permits the specimen to be imaged at different focus positions in rapid succession, with constant magnification and no physically moving parts. The simplified image stack collection significantly reduces the acquisition time and allows diffraction-limited through-focus intensity stacks to be collected at 15 frames per second, making dynamic TIE phase imaging possible. The technique is demonstrated by profiling a microlens array using an optimal frequency selection scheme, and by time-lapse imaging of live breast cancer cells, inverting the defocused phase optical transfer function to correct the phase blurring of traditional TIE. Experimental results illustrate the outstanding capability of the technique for quantitative phase imaging through a simple, non-interferometric, high-speed, high-resolution, and unwrapping-free approach with promising applications in micro-optics, life sciences and bio-photonics.
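A minimal sketch of the standard FFT-based solution of the transport-of-intensity equation under a near-uniform-intensity assumption; the regularization constant is an arbitrary choice, and the optimal frequency selection and phase-transfer-function inversion of the paper are not reproduced here.

    import numpy as np

    def tie_phase(i_minus, i_plus, dz, wavelength, pixel, eps=1e-3):
        """FFT (Poisson-solver) solution of the TIE assuming nearly uniform intensity.

        Solves  laplacian(phi) = -(k / I0) * dI/dz  with  dI/dz ~ (I+ - I-) / (2 dz).
        eps regularizes the division at zero spatial frequency.
        """
        k = 2 * np.pi / wavelength
        i0 = 0.5 * (i_plus + i_minus)
        didz = (i_plus - i_minus) / (2 * dz)
        rhs = -k * didz / np.clip(i0, 1e-9, None)

        ny, nx = rhs.shape
        fy = np.fft.fftfreq(ny, d=pixel)
        fx = np.fft.fftfreq(nx, d=pixel)
        fxx, fyy = np.meshgrid(fx, fy)
        lap = -4 * np.pi ** 2 * (fxx ** 2 + fyy ** 2)   # Fourier symbol of the Laplacian
        phi = np.real(np.fft.ifft2(np.fft.fft2(rhs) / (lap - eps)))
        return phi - phi.mean()                          # phase is defined up to a constant

With a measured pair of slightly under- and over-focused images and the tunable-lens defocus step as dz, the returned array is the quantitative phase map up to an additive constant.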
NASA Astrophysics Data System (ADS)
Pal, Siddharth; Basak, Aniruddha; Das, Swagatam
In many manufacturing areas the detection of surface defects is one of the most important processes in quality control. Currently, in order to detect small scratches on solid surfaces, most material-manufacturing industries rely primarily on visual inspection. In this article we propose a hybrid computational intelligence technique to automatically detect a linear scratch on a solid surface and simultaneously estimate its length (in pixel units). The approach is based on a swarm intelligence algorithm called Ant Colony Optimization (ACO) and image preprocessing with Wiener and Sobel filters as well as the Canny edge detector. The ACO algorithm is mostly used to compensate for the broken parts of the scratch. Our experimental results confirm that the proposed technique can be used for detecting scratches in noisy and degraded images, even when it is very difficult for conventional image processing to distinguish the scratch area from its background.
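A minimal sketch of the preprocessing front end named above (Wiener filter, Sobel gradient, Canny edges); the ACO gap-bridging stage is indicated only by a placeholder comment, and the input file name is hypothetical.

    import numpy as np
    from scipy.signal import wiener
    from skimage import io, filters, feature

    # 'surface.png' is a hypothetical image of the inspected solid surface.
    img = io.imread("surface.png", as_gray=True).astype(float)

    denoised = wiener(img, mysize=5)              # Wiener filter suppresses sensor noise
    gradient = filters.sobel(denoised)            # Sobel emphasizes linear intensity edges
    edges = feature.canny(denoised, sigma=2.0)    # Canny gives a thin binary edge map

    # A subsequent ACO stage would walk along high-gradient pixels to bridge the broken
    # segments of the scratch; its length in pixels then follows from the distance
    # between the recovered end points.
    candidate_pixels = np.column_stack(np.nonzero(edges & (gradient > gradient.mean())))
    print("candidate scratch pixels:", len(candidate_pixels))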
Improved key-rate bounds for practical decoy-state quantum-key-distribution systems
NASA Astrophysics Data System (ADS)
Zhang, Zhen; Zhao, Qi; Razavi, Mohsen; Ma, Xiongfeng
2017-01-01
The decoy-state scheme is the most widely implemented quantum-key-distribution protocol in practice. In order to account for the finite-size key effects on the achievable secret key generation rate, a rigorous statistical fluctuation analysis is required. Originally, a heuristic Gaussian-approximation technique was used for this purpose, which, despite its analytical convenience, was not sufficiently rigorous. The fluctuation analysis has recently been made rigorous by using the Chernoff bound. There is a considerable gap, however, between the key-rate bounds obtained from these techniques and that obtained from the Gaussian assumption. Here we develop a tighter bound for the decoy-state method, which yields a smaller failure probability. This improvement results in a higher key rate and increases the maximum distance over which secure key exchange is possible. By optimizing the system parameters, our simulation results show that our method almost closes the gap between the two previously proposed techniques and achieves a performance similar to that of conventional Gaussian approximations.
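A small numerical comparison in the spirit of the fluctuation analysis discussed above: for a binomial count with N trials it contrasts the deviation allowed by a Gaussian approximation with a Hoeffding (Chernoff-type) tail bound at the same failure probability. The numbers are illustrative and do not reproduce the key-rate formulas of the paper.

    import numpy as np
    from scipy.stats import norm

    N = 1e6        # number of signals in the relevant basis (illustrative)
    p = 0.02       # underlying rate, e.g. an error rate (illustrative)
    eps = 1e-10    # allowed failure probability

    # Gaussian approximation: two-sided deviation at confidence 1 - eps.
    dev_gauss = norm.isf(eps / 2) * np.sqrt(p * (1 - p) / N)

    # Hoeffding (Chernoff-type) tail bound: P(|p_hat - p| >= t) <= 2 exp(-2 N t^2).
    dev_hoeffding = np.sqrt(np.log(2 / eps) / (2 * N))

    print(f"Gaussian deviation : {dev_gauss:.2e}")
    print(f"Hoeffding deviation: {dev_hoeffding:.2e}")
    # The rigorous bound is noticeably looser; tightening this gap is exactly what
    # improved finite-key analyses such as the one above aim for.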
[The future of radiology: What can we expect within the next 10 years?].
Nensa, F; Forsting, M; Wetter, A
2016-03-01
More than other medical disciplines, radiology is marked by technical innovation and continuous development, as well as the optimization of the underlying physical principles. In this respect, several trends that will crucially change and develop radiology over the next decade can be observed. Ever faster computed tomography, with ever-decreasing radiation exposure, will give this "workhorse" of radiology an even greater place and further displace conventional X-ray techniques. In addition, hybrid imaging, which is based on a combination of nuclear medicine and radiological techniques (keywords: PET/CT, PET/MRI), will become much more established and, in particular, will further improve oncological imaging, allowing increasingly individualized imaging with specific tracers and functional magnetic resonance imaging techniques for a particular tumour. Future radiology will be strongly characterized by innovations in the software and Internet industry, which will enable new image viewing and processing methods and open up new possibilities in the organization of radiological work.
Learning directed acyclic graphs from large-scale genomics data.
Nikolay, Fabio; Pesavento, Marius; Kritikos, George; Typas, Nassos
2017-09-20
In this paper, we consider the problem of learning the genetic interaction map, i.e., the topology of a directed acyclic graph (DAG) of genetic interactions from noisy double-knockout (DK) data. Based on a set of well-established biological interaction models, we detect and classify the interactions between genes. We propose a novel linear integer optimization program called the Genetic-Interactions-Detector (GENIE) to identify the complex biological dependencies among genes and to compute the DAG topology that matches the DK measurements best. Furthermore, we extend the GENIE program by incorporating genetic interaction profile (GI-profile) data to further enhance the detection performance. In addition, we propose a sequential scalability technique for large sets of genes under study, in order to provide statistically significant results for real measurement data. Finally, we show via numeric simulations that the GENIE program and the GI-profile data extended GENIE (GI-GENIE) program clearly outperform the conventional techniques and present real data results for our proposed sequential scalability technique.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geisz, John F.; France, Ryan M.; Steiner, Myles A.
Quantitative electroluminescence (EL) and luminescent coupling (LC) analysis, along with more conventional characterization techniques, are combined to completely characterize the subcell JV curves within a four-junction (4J) inverted metamorphic solar cell (IMM). The 4J performance under arbitrary spectral conditions can be predicted from these subcell JV curves. The internal radiative efficiency (IRE) of each junction has been determined as a function of current density from the external radiative efficiency using optical modeling, but this required the accurate determination of the individual junction current densities during the EL measurement as affected by LC. These measurement and analysis techniques can be applied to any multijunction solar cell. The 4J IMM solar cell used to illustrate these techniques showed excellent junction quality as exhibited by high IRE and a one-sun AM1.5D efficiency of 36.3%. This device operates up to 1000 suns without limitations due to any of the three tunnel junctions.
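A minimal sketch of how subcell JV curves, once extracted, can be combined to predict the series-connected multijunction curve (the same current flows through every junction and the voltages add); the ideal-diode subcells below are synthetic stand-ins for the EL-derived curves, not the measured 4J data.

    import numpy as np

    def subcell_voltage(j, jsc, j0, n=1.0):
        """Ideal-diode junction voltage (V) at current density j (mA/cm^2), valid for j < jsc."""
        return n * 0.02585 * np.log((jsc - j) / j0 + 1.0)   # 0.02585 V is kT/q at room temperature

    # Synthetic subcells with different photocurrents and saturation current densities.
    subcells = [dict(jsc=14.0, j0=1e-19), dict(jsc=13.5, j0=1e-16),
                dict(jsc=13.8, j0=1e-13), dict(jsc=14.2, j0=1e-10)]

    j = np.linspace(0, 13.4, 500)                       # series current common to all junctions
    v_total = sum(subcell_voltage(j, **c) for c in subcells)   # voltages add in series
    p = j * v_total
    best = np.argmax(p)
    print(f"predicted max power about {p[best]:.1f} mW/cm^2 at J = {j[best]:.2f} mA/cm^2")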
Synchrotron x-ray imaging of acoustic cavitation bubbles induced by acoustic excitation
NASA Astrophysics Data System (ADS)
Jung, Sung Yong; Park, Han Wook; Park, Sung Ho; Lee, Sang Joon
2017-04-01
The cavitation induced by acoustic excitation has been widely applied in various biomedical applications because cavitation bubbles can enhance the exchange of mass and energy. In order to minimize the hazardous effects of the induced cavitation, it is essential to understand the spatial distribution of cavitation bubbles. The spatial distribution of cavitation bubbles visualized by the synchrotron x-ray imaging technique is compared to that obtained with a conventional x-ray tube. Cavitation bubbles with high density in the region close to the tip of the probe are visualized using the synchrotron x-ray imaging technique; however, the spatial distribution of cavitation bubbles in the whole ultrasound field is not detected. In this study, the effects of the ultrasound power of the acoustic excitation and the working medium on the shape and density of the induced cavitation bubbles are examined. As a result, the synchrotron x-ray imaging technique is useful for visualizing spatial distributions of cavitation bubbles, and it could be used for optimizing the operating conditions of acoustic cavitation.
Review of vitreous islet cryopreservation
Baicu, Simona
2009-01-01
Transplantation of pancreatic islets for the treatment of diabetes mellitus is widely anticipated to eventually provide a cure once a means for preventing rejection is found without reliance upon global immunosuppression. Long-term storage of islets is crucial for the organization of transplantation, islet banking, tissue matching, organ sharing, immuno-manipulation and multiple donor transplantation. Existing methods of cryopreservation involving freezing are known to be suboptimal providing only about 50% survival. The development of techniques for ice-free cryopreservation of mammalian tissues using both natural and synthetic ice blocking molecules, and the process of vitrification (formation of a glass as opposed to crystalline ice) has been a focus of research during recent years. These approaches have established in other tissues that vitrification can markedly improve survival by circumventing ice-induced injury. Here we review some of the underlying issues that impact the vitrification approach to islet cryopreservation and describe some initial studies to apply these new technologies to the long-term storage of pancreatic islets. These studies were designed to optimize both the pre-vitrification hypothermic exposure conditions using newly developed media and to compare new techniques for ice-free cryopreservation with conventional freezing protocols. Some practical constraints and feasible resolutions are discussed. Eventually the optimized techniques will be applied to clinical allografts and xenografts or genetically-modified islets designed to overcome immune responses in the diabetic host. PMID:20046679
NASA Technical Reports Server (NTRS)
Ardalan, Sasan H.
1992-01-01
Two narrow-band radar systems are developed for high resolution target range estimation in inhomogeneous media. They are reformulations of two existing systems such that high resolution target range estimates may be achieved despite the use of narrow bandwidth radar pulses. A double sideband suppressed carrier radar technique, originally derived in 1962 and later abandoned due to its inability to accurately measure target range in the presence of an interfering reflection, is rederived to incorporate the presence of an interfering reflection. The new derivation shows that the interfering reflection causes a periodic perturbation in the measured phase response. A high resolution spectral estimation technique is used to extract the period of this perturbation, leading to accurate target range estimates independent of the signal-to-interference ratio. A non-linear optimal signal processing algorithm is derived for a frequency-stepped continuous wave radar system. The resolution enhancement offered by optimal signal processing of the data over the conventional Fourier transform technique is clearly demonstrated using measured radar data. A method for modeling plane wave propagation in inhomogeneous media based on transmission line theory is derived and studied. Several simulation results, including measurement of the non-uniform electron plasma densities that develop near the heat tiles of a space re-entry vehicle, are presented which verify the validity of the model.
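A minimal sketch of the frequency-stepped continuous-wave principle: the received phase varies linearly with the stepped frequency at a rate set by the round-trip delay, so a Fourier transform of the complex frequency response peaks at the target ranges. The optimal nonlinear processing developed in the work is not reproduced here, and all parameters are illustrative.

    import numpy as np

    c = 3e8
    freqs = 1e9 + np.arange(64) * 5e6            # 64 steps of 5 MHz starting at 1 GHz
    ranges_true = [12.0, 13.1]                   # two reflectors (m), illustrative
    amps = [1.0, 0.4]

    # Complex frequency response measured by a stepped-frequency CW radar.
    H = sum(a * np.exp(-1j * 4 * np.pi * freqs * r / c) for a, r in zip(amps, ranges_true))

    # Conventional processing: inverse Fourier transform to a range profile.
    n_fft = 4096
    profile = np.abs(np.fft.ifft(H, n_fft))
    df = freqs[1] - freqs[0]
    range_axis = np.arange(n_fft) * c / (2 * df * n_fft)   # range per profile bin
    peak = range_axis[np.argmax(profile)]
    print(f"bandwidth-limited resolution about {c / (2 * df * len(freqs)):.2f} m, "
          f"strongest return near {peak:.2f} m")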
Socially optimal replacement of conventional with electric vehicles for the US household fleet
Kontou, Eleftheria; Yin, Yafeng; Lin, Zhenhong; ...
2017-04-05
In this study, a framework is proposed for minimizing the societal cost of replacing gas-powered household passenger cars with battery electric ones (BEVs). The societal cost consists of operational costs of heterogeneous driving patterns' cars, the government investments for charging deployment, and monetized environmental externalities. The optimization framework determines the timeframe needed for conventional vehicles to be replaced with BEVs. It also determines the BEVs driving range during the planning timeframe, as well as the density of public chargers deployed on a linear transportation network over time. We leverage datasets that represent U.S. household driving patterns, as well as the automobile and the energy markets, to apply the model. Results indicate that it takes 8 years for 80% of our conventional vehicle sample to be replaced with electric vehicles, under the base case scenario. The socially optimal all-electric driving range is 204 miles, with chargers placed every 172 miles on a linear corridor. All of the public chargers should be deployed at the beginning of the planning horizon to achieve greater savings over the years. Sensitivity analysis reveals that the timeframe for the socially optimal conversion of 80% of the sample varies from 6 to 12 years. The optimal decision variables are sensitive to battery pack and vehicle body cost, gasoline cost, the discount rate, and conventional vehicles' fuel economy. In conclusion, faster conventional vehicle replacement is achieved when the gasoline cost increases, electricity cost decreases, and battery packs become cheaper over the years.
Design, realization and structural testing of a compliant adaptable wing
NASA Astrophysics Data System (ADS)
Molinari, G.; Quack, M.; Arrieta, A. F.; Morari, M.; Ermanni, P.
2015-10-01
This paper presents the design, optimization, realization and testing of a novel wing morphing concept, based on distributed compliance structures, and actuated by piezoelectric elements. The adaptive wing features ribs with a selectively compliant inner structure, numerically optimized to achieve aerodynamically efficient shape changes while simultaneously withstanding aeroelastic loads. The static and dynamic aeroelastic behavior of the wing, and the effect of activating the actuators, is assessed by means of coupled 3D aerodynamic and structural simulations. To demonstrate the capabilities of the proposed morphing concept and optimization procedure, the wings of a model airplane are designed and manufactured according to the presented approach. The goal is to replace conventional ailerons, thus achieving controllability in roll purely by morphing. The mechanical properties of the manufactured components are characterized experimentally, and used to create a refined and correlated finite element model. The overall stiffness, strength, and actuation capabilities are experimentally tested and successfully compared with the numerical prediction. To counteract the nonlinear hysteretic behavior of the piezoelectric actuators, a closed-loop controller is implemented, and its capability of accurately achieving the desired shape adaptation is evaluated experimentally. Using the correlated finite element model, the aeroelastic behavior of the manufactured wing is simulated, showing that the morphing concept can provide sufficient roll authority to allow controllability of the flight. The additional degrees of freedom offered by morphing can also be used to vary the plane lift coefficient, similarly to conventional flaps. The efficiency improvements offered by this technique are evaluated numerically, and compared to the performance of a rigid wing.
Study on transfer optimization of urban rail transit and conventional public transport
NASA Astrophysics Data System (ADS)
Wang, Jie; Sun, Quan Xin; Mao, Bao Hua
2018-04-01
This paper studies the timetable optimization of feeder connections between urban rail transit and conventional buses at a shopping center. To connect with rail transit effectively and optimize the coordination between the two modes, the departure intervals must be optimized, passenger transfer times shortened, and the service level of public transit improved. An optimization model for bus departure times is established with the objective of minimizing the total passenger waiting time and the number of dispatched bus runs, subject to constraints such as transfer time, load factor, and the spacing of the public transport network. The model is solved using a genetic algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eaton, Craig; Brahlek, Matthew; Engel-Herbert, Roman, E-mail: rue2@psu.edu
The authors report the growth of stoichiometric SrVO3 thin films on (LaAlO3)0.3(Sr2AlTaO6)0.7 (001) substrates using hybrid molecular beam epitaxy. This growth approach employs a conventional effusion cell to supply elemental A-site Sr and the metalorganic precursor vanadium oxytriisopropoxide (VTIP) to supply vanadium. Oxygen is supplied in its molecular form through a gas inlet. An optimal VTIP:Sr flux ratio has been identified using reflection high-energy electron-diffraction, x-ray diffraction, atomic force microscopy, and scanning transmission electron microscopy, demonstrating stoichiometric SrVO3 films with atomically flat surface morphology. Away from the optimal VTIP:Sr flux, characteristic changes in the crystalline structure and surface morphology of the films were found, enabling identification of the type of nonstoichiometry. For optimal VTIP:Sr flux ratios, high quality SrVO3 thin films were obtained with the smallest deviation of the lattice parameter from the ideal value and with atomically smooth surfaces, indicative of the good cation stoichiometry achieved by this growth technique.
SiC-VJFETs power switching devices: an improved model and parameter optimization technique
NASA Astrophysics Data System (ADS)
Ben Salah, T.; Lahbib, Y.; Morel, H.
2009-12-01
Silicon carbide junction field effect transistors (SiC-JFETs) are mature power switches newly applied in several industrial applications. SiC-JFETs are often simulated with a Spice model in order to predict their electrical behaviour. Although such a model provides sufficient accuracy for some applications, this paper shows that it presents serious shortcomings, among them the neglect of the body diode and other issues in the circuit model topology. Simulation correction is then mandatory and a new model should be proposed. Moreover, this paper gives an enhanced model based on experimental dc and ac data. New devices are added to the conventional circuit model, giving accurate static and dynamic behaviour, an effect not accounted for in the Spice model. The improved model is implemented in the VHDL-AMS language, and steady-state, dynamic and transient responses are simulated for many SiC-VJFET samples. A very simple and reliable optimization algorithm based on minimizing a cost function is proposed to extract the JFET model parameters. The obtained parameters are verified by comparing errors between simulation results and experimental data.
Optimal Divergence-Free Hatch Filter for GNSS Single-Frequency Measurement.
Park, Byungwoon; Lim, Cheolsoon; Yun, Youngsun; Kim, Euiho; Kee, Changdon
2017-02-24
The Hatch filter is a code-smoothing technique that uses the variation of the carrier phase. It can effectively reduce the noise of a pseudo-range with a very simple filter construction, but it occasionally causes an ionosphere-induced error for low-lying satellites. Herein, we propose an optimal single-frequency (SF) divergence-free Hatch filter that uses a satellite-based augmentation system (SBAS) message to reduce the ionospheric divergence and applies the optimal smoothing constant for its smoothing window width. According to the data-processing results, the overall performance of the proposed filter is comparable to that of the dual frequency (DF) divergence-free Hatch filter. Moreover, it can reduce the horizontal error from 57 cm to 37 cm and improve the vertical accuracy of the conventional Hatch filter by 25%. Considering that SF receivers dominate the global navigation satellite system (GNSS) market and that most of these receivers include the SBAS function, the filter suggested in this paper is of great value in that it can make the differential GPS (DGPS) performance of the low-cost SF receivers comparable to that of DF receivers.
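A minimal sketch of the classic single-frequency Hatch filter recursion on which the divergence-free variant builds; the ionospheric correction derived from SBAS messages is indicated only by a comment, and the window length and synthetic observables are illustrative.

    import numpy as np

    def hatch_filter(pseudorange, carrier_phase, window=100):
        """Classic carrier-smoothed code (Hatch) filter.

        pseudorange, carrier_phase: arrays in metres, one sample per epoch.
        window: smoothing window length N; the raw code carries weight 1/N.
        A divergence-free variant would first correct carrier_phase for the
        ionospheric delay (e.g. from SBAS messages) before differencing.
        """
        smoothed = np.empty_like(pseudorange, dtype=float)
        smoothed[0] = pseudorange[0]
        for k in range(1, len(pseudorange)):
            n = min(k + 1, window)
            predicted = smoothed[k - 1] + (carrier_phase[k] - carrier_phase[k - 1])
            smoothed[k] = pseudorange[k] / n + (n - 1) / n * predicted
        return smoothed

    # Synthetic single-satellite example: true range plus noisy code, near-noiseless carrier.
    rng = np.random.default_rng(1)
    true_range = 2.0e7 + 50.0 * np.arange(600)            # metres, illustrative geometry
    code = true_range + rng.normal(0, 3.0, true_range.size)
    phase = true_range + rng.normal(0, 0.01, true_range.size)
    print("raw code RMS error:", np.std(code - true_range))
    print("smoothed RMS error:", np.std(hatch_filter(code, phase) - true_range))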
NASA Astrophysics Data System (ADS)
Rajora, M.; Zou, P.; Xu, W.; Jin, L.; Chen, W.; Liang, S. Y.
2017-12-01
With the rapidly changing demands of the manufacturing market, intelligent techniques are being used to solve engineering problems due to their ability to handle nonlinear complex problems. For example, the conventional production of stator cores relies upon experienced engineers to make an initial plan on the number of compensation sheets to be added to achieve uniform pressure distribution throughout the laminations. Additionally, these engineers must use their experience to revise the initial plans based upon the measurements made during the production of the stator core. However, this method yields inconsistent results, as humans are incapable of storing and analysing large amounts of data. In this article, first, a Neural Network (NN), trained using a hybrid Levenberg-Marquardt (LM)-Genetic Algorithm (GA), is developed to assist the engineers with the decision-making process. Next, the trained NN is used as a fitness function in an optimization algorithm to find the optimal values of the initial compensation sheet plan with the aim of minimizing the required revisions during the production of the stator core.
Hybrid PSO-ASVR-based method for data fitting in the calibration of infrared radiometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Sen; Li, Chengwei, E-mail: heikuanghit@163.com
2016-06-15
The present paper describes a hybrid particle swarm optimization-adaptive support vector regression (PSO-ASVR)-based method for data fitting in the calibration of an infrared radiometer. The proposed hybrid PSO-ASVR-based method is based on PSO in combination with Adaptive Processing and Support Vector Regression (SVR). The optimization technique involves setting parameters in the ASVR fitting procedure, which significantly improves the fitting accuracy. However, its use in the calibration of infrared radiometers has not yet been widely explored. Bearing this in mind, the PSO-ASVR-based method, which is based on statistical learning theory, is successfully used here to get the relationship between the radiation of a standard source and the response of an infrared radiometer. The main advantages of this method are the flexible adjustment mechanism in data processing and the optimization mechanism in the kernel parameter setting of SVR. Numerical examples and applications to the calibration of an infrared radiometer are performed to verify the performance of the PSO-ASVR-based method compared to conventional data fitting methods.
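A minimal sketch of the underlying idea of using particle swarm optimization to tune SVR hyperparameters for a calibration-style curve fit; the tiny PSO loop, the synthetic radiometer data and the search ranges are illustrative assumptions rather than the adaptive scheme of the paper.

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    # Synthetic calibration data: radiometer response vs. source radiance (illustrative).
    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 120)[:, None]
    y = 0.8 * x.ravel() ** 1.7 + 0.05 * np.sin(12 * x.ravel()) + rng.normal(0, 0.01, 120)

    def fitness(params):
        """Negative cross-validated MSE of an SVR with log-scaled (C, gamma, epsilon)."""
        c, gamma, eps = np.exp(params)
        model = SVR(C=c, gamma=gamma, epsilon=eps)
        return cross_val_score(model, x, y, cv=5, scoring="neg_mean_squared_error").mean()

    # A very small particle swarm over log-hyperparameter space.
    n_particles, n_iter = 12, 30
    lo, hi = np.log([1e-2, 1e-2, 1e-4]), np.log([1e3, 1e2, 1e-1])
    pos = rng.uniform(lo, hi, (n_particles, 3))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmax(pbest_val)]

    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([fitness(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)]

    print("best (C, gamma, epsilon):", np.exp(gbest), " CV MSE:", -pbest_val.max())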
Hierarchical image segmentation via recursive superpixel with adaptive regularity
NASA Astrophysics Data System (ADS)
Nakamura, Kensuke; Hong, Byung-Woo
2017-11-01
A fast and accurate hierarchical segmentation algorithm based on a recursive superpixel technique is presented. We propose a superpixel energy formulation in which the trade-off between data fidelity and regularization is dynamically determined based on the local residual in the energy optimization procedure. We also present an energy optimization algorithm that allows a pixel to be shared by multiple regions in order to improve the accuracy and to yield an appropriate number of segments. The qualitative and quantitative evaluations demonstrate that our algorithm, combining the proposed energy and optimization, outperforms the conventional k-means algorithm by up to 29.10% in F-measure. We also perform comparative analysis with state-of-the-art algorithms in hierarchical segmentation. Our algorithm yields smooth regions throughout the hierarchy as opposed to the others, which include insignificant details. Our algorithm also outperforms the other algorithms in terms of the balance between accuracy and computational time. Specifically, our method runs 36.48% faster than the region-merging approach, which is the fastest of the compared algorithms, while achieving comparable accuracy.
Taamalli, Amani; Arráez-Román, David; Ibañez, Elena; Zarrouk, Mokhtar; Segura-Carretero, Antonio; Fernández-Gutiérrez, Alberto
2012-01-25
In the present work, a simple and rapid method for the extraction of phenolic compounds from olive leaves, using microwave-assisted extraction (MAE) technique, has been developed. The experimental variables that affect the MAE process, such as the solvent type and composition, microwave temperature, and extraction time, were optimized using a univariate method. The obtained extracts were analyzed by using high-performance liquid chromatography (HPLC) coupled to electrospray time-of-flight mass spectrometry (ESI-TOF-MS) and electrospray ion trap tandem mass spectrometry (ESI-IT-MS(2)) to prove the MAE extraction efficiency. The optimal MAE conditions were methanol:water (80:20, v/v) as extracting solvent, at a temperature equal to 80 °C for 6 min. Under these conditions, several phenolic compounds could be characterized by HPLC-ESI-MS/MS(2). As compared to the conventional method, MAE can be used as an alternative extraction method for the characterization of phenolic compounds from olive leaves due to its efficiency and speed.
Dry etching technologies for reflective multilayer
NASA Astrophysics Data System (ADS)
Iino, Yoshinori; Karyu, Makoto; Ita, Hirotsugu; Kase, Yoshihisa; Yoshimori, Tomoaki; Muto, Makoto; Nonaka, Mikio; Iwami, Munenori
2012-11-01
We have developed a highly integrated methodology for patterning the Extreme Ultraviolet (EUV) mask, which has been highlighted as the lithography technique for the 14 nm half-pitch generation and beyond. The EUV mask is a reflective-type mask, completely different from the conventional transmissive photomask. It requires not only patterning of the absorber layer without damaging the underlying reflective multilayer (40 Si/Mo layers) but also etching of the reflective multilayer itself. In this case, the dry etch process has generally faced technical challenges such as difficulties in CD control, etch damage to the quartz substrate and low selectivity to the mask resist. The Shibaura Mechatronics ARES™ mask etch system and its optimized etch process have already achieved maximal etch performance in patterning the two-layered absorber. In this study, our process technologies for the reflective multilayer are evaluated by means of an optimal combination of process gases and an optimized plasma produced by appropriate source and bias powers. When ARES™ is used for multilayer etching, the user can choose to etch the absorber layer at the same time or to etch only the multilayer.
Cerebellar and Brainstem Malformations.
Poretti, Andrea; Boltshauser, Eugen; Huisman, Thierry A G M
2016-08-01
The frequency and importance of the evaluation of the posterior fossa have increased significantly over the past 20 years owing to advances in neuroimaging. Conventional and advanced neuroimaging techniques allow detailed evaluation of the complex anatomic structures within the posterior fossa. A wide spectrum of cerebellar and brainstem malformations has been shown. Familiarity with the spectrum of cerebellar and brainstem malformations and their well-defined diagnostic criteria is crucial for optimal therapy, an accurate prognosis, and correct genetic counseling. This article discusses cerebellar and brainstem malformations, with emphasis on neuroimaging findings (including diagnostic criteria), neurologic presentation, systemic involvement, prognosis, and recurrence. Copyright © 2016 Elsevier Inc. All rights reserved.
Predictive momentum management for the Space Station
NASA Technical Reports Server (NTRS)
Hatis, P. D.
1986-01-01
Space station control moment gyro momentum management is addressed by posing a deterministic optimization problem with a performance index that includes station external torque loading, gyro control torque demand, and excursions from desired reference attitudes. It is shown that a simple analytic desired attitude solution exists for all axes with pitch prescription decoupled, but roll and yaw coupled. Continuous gyro desaturation is shown to fit neatly into the scheme. Example results for pitch axis control of the NASA power tower Space Station are shown based on predictive attitude prescription. Control effector loading is shown to be reduced by this method when compared to more conventional momentum management techniques.
Aerodynamic and structural studies of joined-wing aircraft
NASA Technical Reports Server (NTRS)
Kroo, Ilan; Smith, Stephen; Gallman, John
1991-01-01
A method for rapidly evaluating the structural and aerodynamic characteristics of joined-wing aircraft was developed and used to study the fundamental advantages attributed to this concept. The technique involves a rapid turnaround aerodynamic analysis method for computing minimum trimmed drag combined with a simple structural optimization. A variety of joined-wing designs are compared on the basis of trimmed drag, structural weight, and, finally, trimmed drag with fixed structural weight. The range of joined-wing design parameters resulting in best cruise performance is identified. Structural weight savings and net drag reductions are predicted for certain joined-wing configurations compared with conventional cantilever-wing configurations.
Simulation of a Novel Single-column Cryogenic Air Separation Process Using LNG Cold Energy
NASA Astrophysics Data System (ADS)
Jieyu, Zheng; Yanzhong, Li; Guangpeng, Li; Biao, Si
In this paper, a novel single-column air separation process is proposed with the implementation of a heat pump technique and the introduction of LNG cold energy. The proposed process is verified and optimized through simulation on the Aspen Hysys® platform. Simulation results reveal that the power consumption per unit mass of liquid product is around 0.218 kWh/kg, and the total exergy efficiency of the system is 0.575. According to the latest literature, an energy saving of 39.1% is achieved compared with processes using conventional double-column air separation units. The introduction of LNG cold energy is an effective way to increase the system efficiency.
A simple water-immersion condenser for imaging living brain slices on an inverted microscope.
Prusky, G T
1997-09-05
Due to some physical limitations of conventional condensers, inverted compound microscopes are not optimally suited for imaging living brain slices with transmitted light. Herein is described a simple device that converts an inverted microscope into an effective tool for this application by utilizing an objective as a condenser. The device is mounted on a microscope in place of the condenser, is threaded to accept a water immersion objective, and has a slot for a differential interference contrast (DIC) slider. When combined with infrared video techniques, this device allows an inverted microscope to effectively image living cells within thick brain slices in an open perfusion chamber.
Modern digital flight control system design for VTOL aircraft
NASA Technical Reports Server (NTRS)
Broussard, J. R.; Berry, P. W.; Stengel, R. F.
1979-01-01
Methods for and results from the design and evaluation of a digital flight control system (DFCS) for a CH-47B helicopter are presented. The DFCS employed proportional-integral control logic to provide rapid, precise response to automatic or manual guidance commands while following conventional or spiral-descent approach paths. It contained altitude- and velocity-command modes, and it adapted to varying flight conditions through gain scheduling. Extensive use was made of linear systems analysis techniques. The DFCS was designed, using linear-optimal estimation and control theory, and the effects of gain scheduling are assessed by examination of closed-loop eigenvalues and time responses.
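A minimal sketch of the linear-optimal gain computation that underlies such designs, using a continuous-time LQR solved from the algebraic Riccati equation; the two-state model and the weights are placeholders, not the CH-47B dynamics.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Placeholder linearized model (2 states, 1 control input); not the CH-47B model.
    A = np.array([[0.0, 1.0],
                  [-0.5, -0.8]])
    B = np.array([[0.0],
                  [1.2]])
    Q = np.diag([10.0, 1.0])     # state weighting
    R = np.array([[0.1]])        # control weighting

    # Continuous-time algebraic Riccati equation and the optimal state-feedback gain.
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)
    print("LQR gain K:", K)
    print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))

Gain scheduling, as used in the DFCS, would repeat this computation at several flight conditions and interpolate the resulting gains between them.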
[Enzymatic analysis of the quality of foodstuffs].
Kolesnov, A Iu
1997-01-01
Enzymatic analysis is an independent and separate branch of enzymology and analytical chemistry. It has become one of the most important methodologies used in food analysis. Enzymatic analysis allows the quick, reliable determination of many food ingredients. Often these contents cannot be determined by conventional methods, or if methods are available, they are determined only with limited accuracy. Today, methods of enzymatic analysis are being increasingly used in the investigation of foodstuffs. Enzymatic measurement techniques are used in industry, scientific and food inspection laboratories for quality analysis. This article describes the requirements of an optimal analytical method: specificity, sample preparation, assay performance, precision, sensitivity, time requirement, analysis cost, safety of reagents.
Beneito-Brotons, Rut; Peñarrocha-Oltra, David; Ata-Ali, Javier; Peñarrocha, María
2012-05-01
To compare a computerized intraosseous anesthesia system with the conventional oral anesthesia techniques, and analyze the latency and duration of the anesthetic effect and patient preference. A simple-blind prospective study was made between March 2007 and May 2008. Each patient was subjected to two anesthetic techniques: conventional and intraosseous using the Quicksleeper® system (DHT, Cholet, France). A split-mouth design was adopted in which each patient underwent treatment of a tooth with one of the techniques, and treatment of the homologous contralateral tooth with the other technique. The treatments consisted of restorations, endodontic procedures and simple extractions. The study series comprised 12 females and 18 males with a mean age of 36.8 years. The 30 subjects underwent a total of 60 anesthetic procedures. Intraosseous and conventional oral anesthesia caused discomfort during administration in 46.3% and 32.1% of the patients, respectively. The latency was 7.1±2.23 minutes for the conventional technique and 0.48±0.32 for intraosseous anesthesia--the difference being statistically significant. The depth of the anesthetic effect was sufficient to allow the patients to tolerate the dental treatments. The duration of the anesthetic effect in soft tissues was 199.3 minutes with the conventional technique versus only 1.6 minutes with intraosseous anesthesia--the difference between the two techniques being statistically significant. Most of the patients (69.7%) preferred intraosseous anesthesia. The described intraosseous anesthetic system is effective, with a much shorter latency than the conventional technique, sufficient duration of anesthesia to perform the required dental treatments, and with a much lesser soft tissue anesthetic effect. Most of the patients preferred intraosseous anesthesia.
Seruya, Mitchel; Fisher, Mark; Rodriguez, Eduardo D
2013-11-01
There has been rising interest in computer-aided design/computer-aided manufacturing for preoperative planning and execution of osseous free flap reconstruction. The purpose of this study was to compare outcomes between computer-assisted and conventional fibula free flap techniques for craniofacial reconstruction. A two-center, retrospective review was carried out on patients who underwent fibula free flap surgery for craniofacial reconstruction from 2003 to 2012. Patients were categorized by the type of reconstructive technique: conventional (between 2003 and 2009) or computer-aided design/computer-aided manufacturing (from 2010 to 2012). Demographics, surgical factors, and perioperative and long-term outcomes were compared. A total of 68 patients underwent microsurgical craniofacial reconstruction: 58 conventional and 10 computer-aided design and manufacturing fibula free flaps. By demographics, patients undergoing the computer-aided design/computer-aided manufacturing method were significantly older and had a higher rate of radiotherapy exposure compared with conventional patients. Intraoperatively, the median number of osteotomies was significantly higher (2.0 versus 1.0, p=0.002) and the median ischemia time was significantly shorter (120 minutes versus 170 minutes, p=0.004) for the computer-aided design/computer-aided manufacturing technique compared with conventional techniques; operative times were shorter for patients undergoing the computer-aided design/computer-aided manufacturing technique, although this did not reach statistical significance. Perioperative and long-term outcomes were equivalent for the two groups, notably, hospital length of stay, recipient-site infection, partial and total flap loss, and rate of soft-tissue and bony tissue revisions. Microsurgical craniofacial reconstruction using a computer-assisted fibula flap technique yielded significantly shorter ischemia times amidst a higher number of osteotomies compared with conventional techniques. Therapeutic, III.
Erdemci, Zeynep Yalçınkaya; Cehreli, S Burçak; Tirali, R Ebru
2014-01-01
This study's purpose was to investigate microleakage and marginal discrepancies in stainless steel crowns (SSCs) placed using conventional and Hall techniques and cemented with three different luting agents. Seventy-eight human primary maxillary second molars were randomly assigned to two groups (N=39), and SSCs were applied either with the Hall or conventional technique. These two groups were further subgrouped according to the material used for crown cementation (N=13 per group). Two specimens in each group were processed for scanning electron microscopy investigation. The extent of microleakage and marginal fit was quantified in millimeters on digitally photographed sections using image analysis software. The data were compared with a two-way independent and a two-way mixed analysis of variance (P=.05). The scores in the Hall group were significantly worse than those in the conventional technique group (P<.05). In both groups, resin cement displayed the lowest extent of microleakage, followed by glass ionomer and polycarboxylate cements (P<.05). Stainless steel crowns applied using the Hall technique displayed higher microleakage scores than those applied using the conventional technique, regardless of the cementation material. When the interaction of the material and technique was assessed, resin cement presented as the best choice for minimizing microleakage in both techniques.
Investigation of the efficacy of ultrafast laser in large bowel excision
NASA Astrophysics Data System (ADS)
Mohanan, Syam Mohan P. C.; Beck, Rainer J.; Góra, Wojciech S.; Perry, Sarah L.; Shires, Mike; Jayne, David; Hand, Duncan P.; Shephard, Jonathan D.
2017-02-01
Local resection of early stage tumors in the large bowel via colonoscopy has been a widely accepted surgical modality for colon neoplasm treatment. The conventional electrocautery techniques used for the resection of neoplasia in the mucosal or submucosal layer of colon tissue have been shown to create obvious thermal necrosis in adjacent healthy tissues and lack accuracy in resection. Ultrafast picosecond (ps) laser ablation using a wavelength of 1030 or 515 nm is a promising surgical tool to overcome the limitations seen with conventional surgical techniques. The purpose of this initial study is to analyze the depth of ablation or the extent of coagulation deployed by the laser as a function of pulse energy and fluence in an ex-vivo porcine model. Precise control of the depth of tissue removal is of paramount importance for bowel surgery, where bowel perforation can lead to morbidity or mortality. Thus, we investigate the regimes that are optimal for tissue resection and coagulation through plasma-mediated ablation of healthy colon tissue. The ablated tissue samples were analyzed by standard histologic methods and a three-dimensional optical profilometry technique. We demonstrate that ultrafast laser resection of colonic tissue can minimize the region of collateral thermal damage (<50 μm) with a controlled ablation depth. This surgical modality allows potentially easier removal of early stage lesions and has the capability to provide more control to the surgeon in comparison with a mechanical or electrocautery device.
Unsupervised Fault Diagnosis of a Gear Transmission Chain Using a Deep Belief Network
He, Jun; Yang, Shixi; Gan, Chunbiao
2017-01-01
Artificial intelligence (AI) techniques, which can effectively analyze massive amounts of fault data and automatically provide accurate diagnosis results, have been widely applied to fault diagnosis of rotating machinery. Conventional AI methods are applied using features selected by a human operator, which are manually extracted based on diagnostic techniques and field expertise. However, developing robust features for each diagnostic purpose is often labour-intensive and time-consuming, and the features extracted for one specific task may be unsuitable for others. In this paper, a novel AI method based on a deep belief network (DBN) is proposed for the unsupervised fault diagnosis of a gear transmission chain, and the genetic algorithm is used to optimize the structural parameters of the network. Compared to conventional AI methods, the proposed method can adaptively exploit robust features related to the faults by unsupervised feature learning, and thus requires less prior knowledge about signal processing techniques and diagnostic expertise. Besides, it is more powerful at modelling complex structured data. The effectiveness of the proposed method is validated using datasets from rolling bearings and a gearbox. To show the superiority of the proposed method, its performance is compared with two well-known classifiers, i.e., back propagation neural network (BPNN) and support vector machine (SVM). The fault classification accuracies are 99.26% for the rolling bearings and 100% for the gearbox when using the proposed method, which are much higher than those of the other two methods. PMID:28677638
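As a rough illustration of the structure-optimization step described above, the sketch below pairs scikit-learn's BernoulliRBM (standing in for the stacked RBMs of a DBN) with a toy genetic algorithm that searches over two hidden-layer sizes; the fitness function, operators, and parameter ranges are illustrative assumptions rather than the paper's actual configuration, and inputs X are assumed scaled to [0, 1].

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

def dbn_fitness(hidden_sizes, X, y):
    """Score one candidate structure: stacked RBMs for unsupervised feature
    learning, then a simple classifier used only to evaluate the features."""
    steps = [(f"rbm{i}", BernoulliRBM(n_components=int(h), learning_rate=0.05,
                                      n_iter=20, random_state=0))
             for i, h in enumerate(hidden_sizes)]
    steps.append(("clf", LogisticRegression(max_iter=1000)))
    return cross_val_score(Pipeline(steps), X, y, cv=3).mean()

def ga_optimize_structure(X, y, pop_size=8, generations=5, seed=0):
    """Toy genetic algorithm over two hidden-layer sizes."""
    rng = np.random.default_rng(seed)
    pop = rng.integers(16, 257, size=(pop_size, 2))               # initial population
    for _ in range(generations):
        scores = np.array([dbn_fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # selection
        children = parents.copy()
        children[:, 1] = rng.permutation(children[:, 1])          # crude crossover
        children += rng.integers(-16, 17, size=children.shape)    # mutation
        pop = np.vstack([parents, np.clip(children, 8, 512)])
    return pop[np.argmax([dbn_fitness(ind, X, y) for ind in pop])]
```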
Split-spectrum processing technique for SNR enhancement of ultrasonic guided wave.
Pedram, Seyed Kamran; Fateri, Sina; Gan, Lu; Haig, Alex; Thornicroft, Keith
2018-02-01
Ultrasonic guided wave (UGW) systems are broadly used in several branches of industry where structural integrity is of concern. In those systems, signal interpretation can often be challenging due to the multi-modal and dispersive propagation of UGWs. This results in degradation of the signals in terms of signal-to-noise ratio (SNR) and spatial resolution. This paper employs the split-spectrum processing (SSP) technique in order to enhance the SNR and spatial resolution of UGW signals using optimized filter bank parameters in a real-time scenario for pipe inspection. The SSP technique has already been developed for other applications, such as SNR enhancement in conventional ultrasonic testing. In this work, an investigation is provided to clarify the sensitivity of SSP performance to the filter bank parameter values for UGWs, such as processing bandwidth, filter bandwidth, filter separation and the number of filters. As a result, the optimum values are estimated to significantly improve the SNR and spatial resolution of UGWs. The proposed method is synthetically and experimentally compared with conventional approaches employing different SSP recombination algorithms. The Polarity Thresholding (PT) and PT with Minimization (PTM) algorithms were found to be the best recombination algorithms. They substantially improved the SNR by up to 36.9 dB and 38.9 dB, respectively. The outcome of the work presented in this paper paves the way to enhance the reliability of UGW inspections. Copyright © 2017 Elsevier B.V. All rights reserved.
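A minimal sketch of split-spectrum processing with the PT/PTM recombination named above follows; the Gaussian filter bank, the number of filters, and the bandwidths are illustrative placeholders rather than the optimized values reported here. The underlying idea is that coherent echoes keep the same polarity across sub-bands while noise does not, so samples with disagreeing polarity are suppressed.

```python
import numpy as np

def split_spectrum(signal, fs, f_lo, f_hi, n_filters=8, rel_bw=0.25, minimization=True):
    """Split-spectrum processing of a 1-D numpy array `signal` sampled at fs.
    PT: keep the original sample where all sub-bands agree in polarity;
    PTM: keep the minimum-magnitude sub-band sample instead."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectrum = np.fft.rfft(signal)
    centres = np.linspace(f_lo, f_hi, n_filters)
    sigma = rel_bw * (f_hi - f_lo) / n_filters         # Gaussian filter width (assumption)

    sub_bands = []
    for fc in centres:
        gauss = np.exp(-0.5 * ((freqs - fc) / sigma) ** 2)
        sub = np.fft.irfft(spectrum * gauss, n)
        sub /= np.max(np.abs(sub)) + 1e-12             # normalise each sub-band
        sub_bands.append(sub)
    sub_bands = np.array(sub_bands)

    agree = np.all(sub_bands > 0, axis=0) | np.all(sub_bands < 0, axis=0)
    if minimization:                                    # PTM recombination
        idx = np.argmin(np.abs(sub_bands), axis=0)
        kept = sub_bands[idx, np.arange(n)]
    else:                                               # PT recombination
        kept = signal
    return np.where(agree, kept, 0.0)
```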
Vielreicher, M.; Schürmann, S.; Detsch, R.; Schmidt, M. A.; Buttgereit, A.; Boccaccini, A.; Friedrich, O.
2013-01-01
This review focuses on modern nonlinear optical microscopy (NLOM) methods that are increasingly being used in the field of tissue engineering (TE) to image tissue non-invasively and without labelling in depths unreached by conventional microscopy techniques. With NLOM techniques, biomaterial matrices, cultured cells and their produced extracellular matrix may be visualized with high resolution. After introducing classical imaging methodologies such as µCT, MRI, optical coherence tomography, electron microscopy and conventional microscopy two-photon fluorescence (2-PF) and second harmonic generation (SHG) imaging are described in detail (principle, power, limitations) together with their most widely used TE applications. Besides our own cell encapsulation, cell printing and collagen scaffolding systems and their NLOM imaging the most current research articles will be reviewed. These cover imaging of autofluorescence and fluorescence-labelled tissue and biomaterial structures, SHG-based quantitative morphometry of collagen I and other proteins, imaging of vascularization and online monitoring techniques in TE. Finally, some insight is given into state-of-the-art three-photon-based imaging methods (e.g. coherent anti-Stokes Raman scattering, third harmonic generation). This review provides an overview of the powerful and constantly evolving field of multiphoton microscopy, which is a powerful and indispensable tool for the development of artificial tissues in regenerative medicine and which is likely to gain importance also as a means for general diagnostic medical imaging. PMID:23864499
Electromagnetic navigation system for CT-guided biopsy of small lesions.
Appelbaum, Liat; Sosna, Jacob; Nissenbaum, Yizhak; Benshtein, Alexander; Goldberg, S Nahum
2011-05-01
The purpose of this study was to evaluate an electromagnetic navigation system for CT-guided biopsy of small lesions. Standardized CT anthropomorphic phantoms were biopsied by two attending radiologists. CT scans of the phantom and surface electromagnetic fiducial markers were imported into the memory of the 3D electromagnetic navigation system. Each radiologist assessed the accuracy of biopsy using electromagnetic navigation alone by targeting sets of nine lesions (size range, 8-14 mm; skin to target distance, 5.7-12.8 cm) under eight different conditions of detector field strength and orientation (n = 117). As a control, each radiologist also biopsied two sets of five targets using conventional CT-guided technique. Biopsy accuracy, number of needle passes, procedure time, and radiation dose were compared. Under optimal conditions (phantom perpendicular to the electromagnetic receiver at highest possible field strength), phantom accuracy to the center of the lesion was 2.6 ± 1.1 mm. This translated into hitting 84.4% (38/45) of targets in a single pass (1.1 ± 0.4 CT confirmations), which was significantly fewer than the 3.6 ± 1.3 CT checks required for conventional technique (p < 0.001). The mean targeting time was 38.8 ± 18.2 seconds per lesion. Including procedural planning (∼5.5 minutes) and final CT confirmation of placement (∼3.5 minutes), the full electromagnetic tracking procedure required significantly less time (551.6 ± 87.4 seconds [∼9 minutes]) than conventional CT (833.3 ± 283.8 seconds [∼14 minutes]) for successful targeting (p < 0.001). Less favorable conditions, including a nonperpendicular orientation relative to the electromagnetic receiver and weaker field strength, resulted in statistically significantly lower accuracy (3.7 ± 1 mm, p < 0.001). Nevertheless, first-pass biopsy accuracy was 58.3% (21/36) and second-pass accuracy (35/36) was 97.2%. Lesions farther from the skin than 20-25 cm were out of range for successful electromagnetic tracking. Virtual electromagnetic tracking appears to have high accuracy in needle placement, potentially reducing time and radiation exposure compared with those of conventional CT techniques in the biopsy of small lesions.
Ultrasound assisted synthesis of iron doped TiO2 catalyst.
Ambati, Rohini; Gogate, Parag R
2018-01-01
The present work deals with the synthesis of Fe(III) doped TiO2 catalyst using the ultrasound assisted approach and the conventional sol-gel approach, with the objective of establishing the process intensification benefits. The effect of operating parameters such as Fe doping, type of solvent, solvent to precursor ratio and initial temperature has been investigated to get the best catalyst with minimum particle size. Comparison of the catalysts obtained using the conventional and ultrasound assisted approaches under the optimized conditions has been performed using characterization techniques such as DLS, XRD, BET, SEM, EDS, TEM, FTIR and UV-Vis band gap analysis. It was established that the catalyst synthesized by the ultrasound assisted approach under optimized conditions of 0.4 mol% doping, irradiation time of 60 min, propan-2-ol as the solvent with a solvent to precursor ratio of 10 and initial temperature of 30°C was the best one, with a minimum particle size of 99 nm and surface area of 49.41 m2/g. SEM analysis, XRD analysis as well as TEM analysis also confirmed the superiority of the catalyst obtained using the ultrasound assisted approach as compared to the conventional approach. EDS analysis also confirmed the presence of 4.05 mol% of Fe in the sample of 0.4 mol% iron doped TiO2. UV-Vis band gap results showed a reduction in band gap from 3.2 eV to 2.9 eV. Photocatalytic experiments performed to check the activity also confirmed that the ultrasonically synthesized Fe doped TiO2 catalyst resulted in a higher degradation of Acid Blue 80 of 38%, while the conventionally synthesized catalyst resulted in a degradation of 31.1%. Overall, the work has clearly established the importance of ultrasound in giving better catalyst characteristics as well as activity for degradation of the Acid Blue 80 dye. Copyright © 2017 Elsevier B.V. All rights reserved.
Social Image Tag Ranking by Two-View Learning
NASA Astrophysics Data System (ADS)
Zhuang, Jinfeng; Hoi, Steven C. H.
Tags play a central role in text-based social image retrieval and browsing. However, the tags annotated by web users could be noisy, irrelevant, and often incomplete for describing the image contents, which may severely deteriorate the performance of text-based image retrieval models. In order to solve this problem, researchers have proposed techniques to rank the annotated tags of a social image according to their relevance to the visual content of the image. In this paper, we aim to overcome the challenge of social image tag ranking for a corpus of social images with rich user-generated tags by proposing a novel two-view learning approach. It can effectively exploit both textual and visual contents of social images to discover the complicated relationship between tags and images. Unlike the conventional learning approaches that usually assume some parametric models, our method is completely data-driven and makes no assumption about the underlying models, making the proposed solution practically more effective. We formulate our method as an optimization task and present an efficient algorithm to solve it. To evaluate the efficacy of our method, we conducted an extensive set of experiments by applying our technique to both text-based social image retrieval and automatic image annotation tasks. Our empirical results showed that the proposed method can be more effective than the conventional approaches.
Yang, Yue; Cheow, Wean Sin; Hadinoto, Kunn
2012-09-15
Lipid-polymer hybrid nanoparticles have emerged as promising nanoscale carriers of therapeutics as they combine the attractive characteristics of liposomes and polymers. Herein we develop a dry powder inhaler (DPI) formulation of hybrid nanoparticles composed of poly(lactic-co-glycolic acid) and soybean lecithin as the polymer and lipid constituents, respectively. The hybrid nanoparticles are transformed into inhalable microscale nanocomposite structures by a novel technique based on electrostatically-driven adsorption of nanoparticles onto polysaccharide carrier particles, which eliminates the drawbacks of conventional techniques based on controlled drying (e.g. nanoparticle-specific formulation, low yield). First, we engineer polysaccharide carrier particles made up of chitosan cross-linked with tripolyphosphate and dextran sulphate to exhibit the desired aerosolization characteristics and physical robustness. Second, we investigate the effects of nanoparticle to carrier mass ratio and salt inclusion on the adsorption efficiency, in terms of the nanoparticle loading and yield, from which the optimal formulation is determined. Desorption of the nanoparticles from the carrier particles in phosphate buffered saline is also examined. Lastly, we characterize the aerosolization efficiency of the nanocomposite product in vitro, where the emitted dose and respirable fraction are found to be comparable to the values of conventional DPI formulations. Copyright © 2012 Elsevier B.V. All rights reserved.
ATOMIC RESOLUTION CRYO ELECTRON MICROSCOPY OF MACROMOLECULAR COMPLEXES
ZHOU, Z. HONG
2013-01-01
Single-particle cryo electron microscopy (cryoEM) is a technique for determining three-dimensional (3D) structures from projection images of molecular complexes preserved in their “native,” noncrystalline state. Recently, atomic or near-atomic resolution structures of several viruses and protein assemblies have been determined by single-particle cryoEM, allowing ab initio atomic model building by following the amino acid side chains or nucleic acid bases identifiable in their cryoEM density maps. In particular, these cryoEM structures have revealed extended arms contributing to molecular interactions that are otherwise not resolved by the conventional structural method of X-ray crystallography at similar resolutions. High-resolution cryoEM requires careful consideration of a number of factors, including proper sample preparation to ensure structural homogeneity, optimal configuration of electron imaging conditions to record high-resolution cryoEM images, accurate determination of image parameters to correct image distortions, efficient refinement and computation to reconstruct a 3D density map, and finally appropriate choice of modeling tools to construct atomic models for functional interpretation. This progress illustrates the power of cryoEM and ushers it into the arsenal of structural biology, alongside conventional techniques of X-ray crystallography and NMR, as a major tool (and sometimes the preferred one) for the studies of molecular interactions in supramolecular assemblies or machines. PMID:21501817
Cast and 3D printed ion exchange membranes for monolithic microbial fuel cell fabrication
NASA Astrophysics Data System (ADS)
Philamore, Hemma; Rossiter, Jonathan; Walters, Peter; Winfield, Jonathan; Ieropoulos, Ioannis
2015-09-01
We present novel solutions to a key challenge in microbial fuel cell (MFC) technology: greater power density through increased relative surface area of the ion exchange membrane that separates the anode and cathode electrodes. The first use of a 3D printed polymer and a cast latex membrane are compared to a conventionally used cation exchange membrane. These new techniques significantly expand the geometric versatility available to ion exchange membranes in MFCs, which may be instrumental in answering challenges in the design of MFCs including miniaturisation, cost and ease of fabrication. Under electrical load conditions selected for optimal power transfer, peak power production (mean of 10 batch feeds) was 11.39 μW (CEM), 10.51 μW (latex) and 0.92 μW (Tangoplus). Changes in conductivity and pH of the anolyte were correlated with MFC power production. Digital and environmental scanning electron microscopy show structural changes to, and biological precipitation on, membrane materials following long-term use in an MFC. The cost of the novel membranes was lower than the conventional CEM. The efficacy of the two novel membranes for ion exchange indicates that further characterisation of these materials and their fabrication techniques shows great potential to significantly increase the range and type of MFCs that can be produced.
A motion compensation technique using sliced blocks and its application to hybrid video coding
NASA Astrophysics Data System (ADS)
Kondo, Satoshi; Sasai, Hisao
2005-07-01
This paper proposes a new motion compensation method using "sliced blocks" in DCT-based hybrid video coding. In H.264/MPEG-4 Advanced Video Coding, a brand-new international video coding standard, motion compensation can be performed by splitting macroblocks into multiple square or rectangular regions. In the proposed method, on the other hand, macroblocks or sub-macroblocks are divided into two regions (sliced blocks) by an arbitrary line segment. The result is that the shapes of the segmented regions are not limited to squares or rectangles, allowing them to better match the boundaries between moving objects. Thus, the proposed method can improve the performance of the motion compensation. In addition, adaptive prediction of the shape according to the region shape of the surrounding macroblocks can reduce the overhead needed to describe shape information in the bitstream. The proposed method also has the advantage that conventional coding techniques such as mode decision using rate-distortion optimization can be utilized, since coding processes such as frequency transform and quantization are performed on a macroblock basis, similar to the conventional coding methods. The proposed method is implemented in an H.264-based P-picture codec and an improvement in bit rate of 5% is confirmed in comparison with H.264.
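To make the partitioning step concrete, here is a small sketch of splitting a square block into two 'sliced block' regions by an arbitrary line segment; only the mask generation is shown, and the block size and segment endpoints are illustrative rather than part of the codec described above.

```python
import numpy as np

def sliced_block_masks(block_size, p0, p1):
    """Partition a block into two regions by the line through p0 and p1
    (pixel coordinates given as (x, y)); each region would then receive
    its own motion vector."""
    ys, xs = np.mgrid[0:block_size, 0:block_size]
    # The sign of the 2D cross product tells which side of the line a pixel is on.
    side = (p1[0] - p0[0]) * (ys - p0[1]) - (p1[1] - p0[1]) * (xs - p0[0])
    region_a = side >= 0
    return region_a, ~region_a

mask_a, mask_b = sliced_block_masks(16, (0, 3), (15, 12))   # a 16x16 macroblock
```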
Prasad, Rahul; Al-Keraif, Abdulaziz Abdullah; Kathuria, Nidhi; Gandhi, P V; Bhide, S V
2014-02-01
The purpose of this study was to determine whether the ringless casting and accelerated wax-elimination techniques can be combined to offer a cost-effective, clinically acceptable, and time-saving alternative for fabricating single-unit castings in fixed prosthodontics. Sixty standardized wax copings were fabricated on a type IV stone replica of a stainless steel die. The wax patterns were divided into four groups. The first group was cast using the ringless investment technique and conventional wax-elimination method; the second group was cast using the ringless investment technique and accelerated wax-elimination method; the third group was cast using the conventional metal ring investment technique and conventional wax-elimination method; the fourth group was cast using the metal ring investment technique and accelerated wax-elimination method. The vertical marginal gap was measured at four sites per specimen, using a digital optical microscope at 100× magnification. The results were analyzed using two-way ANOVA to determine statistical significance. The vertical marginal gaps of castings fabricated using the ringless technique (76.98 ± 7.59 μm) were significantly less (p < 0.05) than those of castings fabricated using the conventional metal ring technique (138.44 ± 28.59 μm); however, the difference between the vertical marginal gaps of the conventional (102.63 ± 36.12 μm) and accelerated wax-elimination (112.79 ± 38.34 μm) castings was not statistically significant (p > 0.05). The ringless investment technique can produce castings with higher accuracy and can be favorably combined with the accelerated wax-elimination method as a viable alternative to the time-consuming conventional technique of casting restorations in fixed prosthodontics. © 2013 by the American College of Prosthodontists.
Yang, Jieping; Liu, Wei; Gao, Qinghong
2013-08-01
To evaluate the anesthetic effects and safety of the Gow-Gates technique of inferior alveolar nerve block in impacted mandibular third molar extraction. A split-mouth study was designed. The bilateral impacted mandibular third molars of 32 participants were randomly assigned to the Gow-Gates technique of inferior alveolar nerve block (Gow-Gates group) or the conventional technique of inferior alveolar nerve block (conventional group), and the third molars were extracted. The anesthetic effects and adverse events were recorded. All the participants completed the research. The anesthetic success rate was 96.9% in the Gow-Gates group and 90.6% in the conventional group, with no statistical difference (P = 0.317); but when comparing the anesthesia grade, the Gow-Gates group had 96.9% grade A and B anesthesia, versus 78.1% in the conventional group (P = 0.034). The Gow-Gates group also had a much lower rate of bleeding on withdrawal than the conventional group (P = 0.025). Neither group had hematoma. The Gow-Gates technique had reliable anesthetic effects and safety in impacted mandibular third molar extraction and could be chosen as an alternative to the conventional inferior alveolar nerve block.
Piezosurgery versus Rotatory Osteotomy in Mandibular Impacted Third Molar Extraction.
Bhati, Bharat; Kukreja, Pankaj; Kumar, Sanjeev; Rathi, Vidhi C; Singh, Kanika; Bansal, Shipra
2017-01-01
The aim of this study is to compare piezoelectric surgery versus the rotatory osteotomy technique in removal of impacted mandibular third molars. The sample comprised 30 patients (18 males, 12 females) with a mean age of 27.43 ± 5.27 years. Bilateral extractions were required in all patients. All the patients were randomly allocated to two groups: in one group, namely the control group, surgical extraction of the mandibular third molar was done using conventional rotatory osteotomy, and in the other group, namely the test group, extraction of the lower third molar was done using a Piezotome. Parameters assessed in this study were mouth opening (interincisal opening), pain (visual analog scale VAS score), swelling, incidence of dry socket, paresthesia, and duration of surgery in both groups at baseline and on the 1st, 3rd, and 7th postoperative days. Comparing pain scores between the two groups, a statistically significant difference was found (P < 0.05). Mean surgical time was longer for the piezosurgery group (51.40 ± 17.9 minutes) compared to the conventional rotatory group (37.33 ± 15.5 minutes), a statistically significant difference (P = 0.002). The main advantages of piezosurgery include soft tissue protection, optimal visibility in the surgical field, decreased blood loss, less vibration and noise, increased comfort for the patient, and protection of tooth structures. Therefore, the piezoelectric device was efficient in decreasing the short-term outcomes of pain and swelling; although it takes longer than the conventional rotatory technique, it significantly reduces the postoperative sequelae associated with third molar surgery. PMID:28713729
Lataoui, Mohammed; Seffen, Mongi; Aliakbarian, Bahar; Casazza, Alessandro Alberto; Converti, Attilio; Perego, Patrizia
2014-01-01
To optimise the recovery of phenolics from Vitex agnus-castus Linn., a non-conventional high-pressure (2-24 bar) and high-temperature (100-180°C) extraction method was used under nitrogen atmosphere with methanol as the solvent. The optimal temperature was between 100 and 140°C, and the optimal extraction time was about one half that of conventional solid/liquid extraction at room temperature. Final extraction yields of total polyphenols, total flavonoids, o-diphenols and anthocyanins were 2.0-, 3.0-, 2.5- and 11-fold those obtained by conventional extraction.
Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation
NASA Astrophysics Data System (ADS)
Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah
2018-04-01
The CNC machine is controlled by manipulating cutting parameters that can directly influence the process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for the desired performance function. Nonetheless, the industry still uses the traditional technique to obtain those values. Lack of knowledge of optimization techniques is the main reason this issue has persisted. Therefore, the simple yet easy to implement Optimal Cutting Parameters Selection System is introduced to help the manufacturer easily understand and determine the best optimal parameters for their turning operation. This new system consists of two stages: modelling and optimization. In modelling of input-output and in-process parameters, a hybrid of the Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, the Particle Swarm Optimization is again used to get the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between the academic world and the industry by introducing a simple yet easy to implement optimization technique. This novel optimization technique can give accurate results besides being the fastest technique.
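As an illustration of the optimization stage, a minimal particle swarm optimizer is sketched below; the surrogate cost function, parameter bounds, and PSO coefficients are invented placeholders standing in for the ELM-based performance model described above.

```python
import numpy as np

def pso_minimize(cost, bounds, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain particle swarm optimization over box-constrained parameters.
    bounds: list of (low, high) pairs, one per cutting parameter."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))    # positions
    v = np.zeros_like(x)                                        # velocities
    pbest = x.copy()
    pbest_f = np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, pbest_f.min()

# Toy surrogate for, e.g., surface roughness vs (speed, feed, depth of cut):
roughness = lambda p: (p[0] - 150.0)**2 / 1e4 + (p[1] - 0.2)**2 * 50 + (p[2] - 1.0)**2
best_params, best_value = pso_minimize(roughness, [(50, 300), (0.05, 0.5), (0.2, 2.0)])
```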
Inverse Regional Modeling with Adjoint-Free Technique
NASA Astrophysics Data System (ADS)
Yaremchuk, M.; Martin, P.; Panteleev, G.; Beattie, C.
2016-02-01
The ongoing parallelization trend in computer technologies facilitates the use of ensemble methods in geophysical data assimilation. Of particular interest are ensemble techniques which do not require the development of tangent linear numerical models and their adjoints for optimization. These "adjoint-free" methods minimize the cost function within a sequence of subspaces spanned by carefully chosen sets of perturbations of the control variables. In this presentation, an adjoint-free variational technique (a4dVar) is demonstrated in an application estimating initial conditions of two numerical models: the Navy Coastal Ocean Model (NCOM) and the surface wave model (WAM). With the NCOM, performance of both adjoint and adjoint-free 4dVar data assimilation techniques is compared in application to the hydrographic surveys and velocity observations collected in the Adriatic Sea in 2006. Numerical experiments have shown that a4dVar is capable of providing forecast skill similar to that of conventional 4dVar at comparable computational expense while being less susceptible to excitation of ageostrophic modes that are not supported by observations. The adjoint-free technique constrained by the WAM model is tested in a series of data assimilation experiments with synthetic observations in the southern Chukchi Sea. The types of observations considered are directional spectra estimated from point measurements by stationary buoys, significant wave height (SWH) observations by coastal high-frequency radars, and along-track SWH observations by satellite altimeters. The a4dVar forecast skill is shown to be 30-40% better than the skill of the sequential assimilation method based on optimal interpolation which is currently used in operations. Prospects of further development of the a4dVar methods in regional applications are discussed.
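A very rough sketch of the subspace-minimization idea behind adjoint-free variational methods follows: the cost function is sampled along each perturbation direction and a one-dimensional quadratic fit replaces the gradient information an adjoint would provide. This is only a caricature with a generic cost function; the operational a4dVar builds its subspaces from model-propagated ensembles and handles the minimization more carefully.

```python
import numpy as np

def adjoint_free_step(x0, directions, cost, step=1.0):
    """One sweep of derivative-free minimization of `cost` within the
    subspace spanned by `directions` (rows). Along each direction a
    parabola is fitted through three cost evaluations and the control
    vector is moved to its minimum when the fit is convex."""
    x = np.array(x0, dtype=float)
    for d in directions:
        f_minus, f_zero, f_plus = cost(x - step * d), cost(x), cost(x + step * d)
        curvature = f_plus - 2.0 * f_zero + f_minus
        if curvature > 1e-12:                          # parabola opens upward
            alpha = 0.5 * step * (f_minus - f_plus) / curvature
            x = x + alpha * d
    return x
```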
Atherosclerosis imaging using 3D black blood TSE SPACE vs 2D TSE
Wong, Stephanie K; Mobolaji-Iawal, Motunrayo; Arama, Leron; Cambe, Joy; Biso, Sylvia; Alie, Nadia; Fayad, Zahi A; Mani, Venkatesh
2014-01-01
AIM: To compare 3D Black Blood turbo spin echo (TSE) sampling perfection with application-optimized contrast using different flip angle evolution (SPACE) vs 2D TSE in evaluating atherosclerotic plaques in multiple vascular territories. METHODS: The carotid, aortic, and femoral arterial walls of 16 patients at risk for cardiovascular or atherosclerotic disease were studied using both 3D black blood magnetic resonance imaging SPACE and conventional 2D multi-contrast TSE sequences using a consolidated imaging approach in the same imaging session. Qualitative and quantitative analyses were performed on the images. Agreement of morphometric measurements between the two imaging sequences was assessed using a two-sample t-test, calculation of the intra-class correlation coefficient, and by the method of linear regression and Bland-Altman analyses. RESULTS: No statistically significant qualitative differences were found between the 3D SPACE and 2D TSE techniques for images of the carotids and aorta. For images of the femoral arteries, however, there were statistically significant differences in all four qualitative scores between the two techniques. Using the current approach, 3D SPACE is suboptimal for femoral imaging. However, this may be due to coils not being optimized for femoral imaging. Quantitatively, in our study, higher mean total vessel area measurements were observed for the 3D SPACE technique across all three vascular beds. No significant differences in lumen area for both the right and left carotids were observed between the two techniques. Overall, a significant correlation existed between measures obtained with the two approaches. CONCLUSION: Qualitative and quantitative measurements between 3D SPACE and 2D TSE techniques are comparable. 3D SPACE may be a feasible approach in the evaluation of cardiovascular patients. PMID:24876923
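For reference, the Bland-Altman analysis named above reduces to a mean difference (bias) and 95% limits of agreement between paired measurements; a minimal sketch follows, with illustrative variable names (e.g. vessel areas from the 3D SPACE and 2D TSE readings of the same patients).

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement for paired measurements from
    two techniques (illustrative sketch of the standard computation)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    diff = a - b                       # per-subject differences
    bias = diff.mean()                 # mean difference between techniques
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)
```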
Planning hybrid intensity modulated radiation therapy for whole-breast irradiation.
Farace, Paolo; Zucca, Sergio; Solla, Ignazio; Fadda, Giuseppina; Durzu, Silvia; Porru, Sergio; Meleddu, Gianfranco; Deidda, Maria Assunta; Possanzini, Marco; Orrù, Sivia; Lay, Giancarlo
2012-09-01
To test tangential and non-tangential hybrid intensity modulated radiation therapy (IMRT) for whole-breast irradiation. Seventy-eight (36 right-, 42 left-) breast patients were randomly selected. Hybrid IMRT was performed by direct aperture optimization. A semiautomated method for planning hybrid IMRT was implemented using Pinnacle scripts. A plan optimization volume (POV), defined as the portion of the planning target volume covered by the open beams, was used as the target objective during inverse planning. Treatment goals were to prescribe a minimum dose of 47.5 Gy to greater than 90% of the POV and to minimize the POV and/or normal tissue receiving a dose greater than 107%. When treatment goals were not achieved by using a 4-field technique (2 conventional open plus 2 IMRT tangents), a 6-field technique was applied, adding 2 non-tangential (anterior-oblique) IMRT beams. Using scripts, manual procedures were minimized (choice of optimal beam angle, setting monitor units for open tangentials, and POV definition). Treatment goals were achieved by using the 4-field technique in 61 of 78 (78%) patients. The 6-field technique was applied in the remaining 17 of 78 (22%) patients, allowing for significantly better achievement of goals, at the expense of an increase of low-dose (∼5 Gy) distribution in the contralateral tissue, heart, and lungs but with no significant increase of higher doses (∼20 Gy) in heart and lungs. The mean monitor unit contribution to IMRT beams was significantly greater (18.7% vs 9.9%) in the group of patients who required the 6-field procedure. Because hybrid IMRT can be performed semiautomatically, it can be planned for a large number of patients with little impact on human or departmental resources, promoting it as the standard practice for whole-breast irradiation. Copyright © 2012 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Minsun, E-mail: mk688@uw.edu; Stewart, Robert D.; Phillips, Mark H.
2015-11-15
Purpose: To investigate the impact of using spatiotemporal optimization, i.e., intensity-modulated spatial optimization followed by fractionation schedule optimization, to select the patient-specific fractionation schedule that maximizes the tumor biologically equivalent dose (BED) under dose constraints for multiple organs-at-risk (OARs). Methods: Spatiotemporal optimization was applied to a variety of lung tumors in a phantom geometry using a range of tumor sizes and locations. The optimal fractionation schedule for a patient using the linear-quadratic cell survival model depends on the tumor and OAR sensitivity to fraction size (α/β), the effective tumor doubling time (Td), and the size and location of the tumor target relative to one or more OARs (dose distribution). The authors used a spatiotemporal optimization method to identify the optimal number of fractions N that maximizes the 3D tumor BED distribution for 16 lung phantom cases. The selection of the optimal fractionation schedule used equivalent (30-fraction) OAR constraints for the heart (Dmean ≤ 45 Gy), lungs (Dmean ≤ 20 Gy), cord (Dmax ≤ 45 Gy), esophagus (Dmax ≤ 63 Gy), and unspecified tissues (D05 ≤ 60 Gy). To assess plan quality, the authors compared the minimum, mean, maximum, and D95 of tumor BED, as well as the equivalent uniform dose (EUD), for optimized plans to conventional intensity-modulated radiation therapy plans prescribing 60 Gy in 30 fractions. A sensitivity analysis was performed to assess the effects of Td (3–100 days), tumor lag-time (Tk = 0–10 days), and the size of tumors on the optimal fractionation schedule. Results: Using an α/β ratio of 10 Gy, the average values of tumor max, min, mean BED, and D95 were up to 19%, 21%, 20%, and 19% larger than those from the conventional prescription, depending on the Td and Tk used. Tumor EUD was up to 17% larger than the conventional prescription. For fast proliferating tumors with Td less than 10 days, there was no significant increase in tumor BED but the treatment course could be shortened without a loss in tumor BED. The improvement in the tumor mean BED was more pronounced with smaller tumors (p-value = 0.08). Conclusions: Spatiotemporal optimization of patient plans has the potential to significantly improve local tumor control (larger BED/EUD) of patients with a favorable geometry, such as smaller tumors with larger distances between the tumor target and nearby OARs. In patients with a less favorable geometry and for fast growing tumors, plans optimized using spatiotemporal optimization and conventional (spatial-only) optimization are equivalent (negligible differences in tumor BED/EUD). However, spatiotemporal optimization yields shorter treatment courses than conventional spatial-only optimization. Personalized, spatiotemporal optimization of treatment schedules can increase patient convenience and help with the efficient allocation of clinical resources. Spatiotemporal optimization can also help identify a subset of patients that might benefit from nonconventional (large dose per fraction) treatments that are ineligible for the current practice of stereotactic body radiation therapy.
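For orientation, a standard fraction-size- and repopulation-corrected form of the biologically effective dose under the linear-quadratic model is reproduced below; the paper's exact objective may differ in detail. Here N is the number of fractions of size d, T the overall treatment time, Tk the repopulation lag time, and Td the effective doubling time:

\[
\mathrm{BED}(N, d) = N d \left( 1 + \frac{d}{\alpha/\beta} \right) - \frac{\ln 2}{\alpha} \cdot \frac{\max(0,\, T - T_k)}{T_d}
\]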
Leonhartsberger, S; Lafferty, R M; Korneti, L
1993-09-01
Optimal conditions for both biomass formation and penicillin synthesis by a strain of Penicillium chrysogenum were determined when using a collagen-derived nitrogen source. Preliminary investigations were carried out in shaken flask cultures employing a planned experimental program termed the Graeco-Latin square technique (Auden et al., 1967). It was initially determined that up to 30% of a conventional complex nitrogen source such as cottonseed meal could be replaced by the collagen-derived nitrogen source without decreasing the productivity with respect to the penicillin yield. In pilot scale experiments using a 30 l stirred tank type of bioreactor, higher penicillin yields were obtained when 70% of the conventional complex nitrogen source in the form of cottonseed meal was replaced by the collagen hydrolysate. Furthermore, the maximum rate of penicillin synthesis continued over a longer period when using collagen hydrolysate as a complex nitrogen source. Penicillin synthesis rates were determined using linear regression.
Hsieh, Sheng-Hsun; Li, Yung-Hui; Tien, Chung-Hao; Chang, Chin-Chen
2016-12-01
Iris recognition has gained increasing popularity over the last few decades; however, the stand-off distance in a conventional iris recognition system is too short, which limits its application. In this paper, we propose a novel hardware-software hybrid method to increase the stand-off distance in an iris recognition system. When designing the system hardware, we use an optimized wavefront coding technique to extend the depth of field. To compensate for the blurring of the image caused by wavefront coding, on the software side, the proposed system uses a local patch-based super-resolution method to restore the blurred image to its clear version. The collaborative effect of the new hardware design and software post-processing showed great potential in our experiment. The experimental results showed that such improvement cannot be achieved by using a hardware- or software-only design. The proposed system can increase the capture volume of a conventional iris recognition system by three times and maintain the system's high recognition rate.
Simulation of time-control procedures for terminal area flow management
NASA Technical Reports Server (NTRS)
Alcabin, M.; Erzberger, H.; Tobias, L.; Obrien, P. J.
1985-01-01
Simulations of a terminal-area traffic-management system incorporating automated scheduling and time-control (four-dimensional) techniques, conducted at NASA Ames Research Center jointly with the Federal Aviation Administration, have shown that efficient procedures can be developed for handling a mix of 4D-equipped and conventionally equipped aircraft. A crucial role in this system is played by an ATC host computer algorithm, referred to as a speed advisory, that allows controllers to maintain accurate time schedules of the conventionally equipped aircraft in the traffic mix. Results are presented from the most recent simulations, in which two important special cases were investigated: first, the effects of a speed advisory on touchdown time scheduling are examined when unequipped aircraft are constrained to follow fuel-optimized profiles in the near-terminal area; second, rescheduling procedures are developed to handle missed approaches of 4D-equipped aircraft. Various performance measures, including controller opinion, are used to evaluate the effectiveness of the procedures.
Shokrani, Mohammad Reza; Khoddam, Mojtaba; Hamidon, Mohd Nizar B; Kamsani, Noor Ain; Rokhani, Fakhrul Zaman; Shafie, Suhaidi Bin
2014-01-01
This paper presents a new type of diode-connected MOS transistor to improve the performance of conventional CMOS rectifiers in RF energy harvester systems for wireless sensor networks; the circuits are designed in 0.18 μm TSMC CMOS technology. The proposed diode-connected MOS transistor uses a new bulk connection which leads to a reduction in the threshold voltage and leakage current; it therefore increases the rectifier's output voltage, output current, and efficiency, which is particularly important in conventional CMOS rectifiers. The design technique for the rectifiers is explained, and a matching network has been proposed to increase the sensitivity of the proposed rectifier. A five-stage rectifier with a matching network is proposed based on the optimization. The simulation results show an 18.2% improvement in the efficiency of the rectifier circuit and an increase in the sensitivity of the RF energy harvester circuit. All circuits are designed in 0.18 μm TSMC CMOS technology. PMID:24782680
On the Optimization of Aerospace Plane Ascent Trajectory
NASA Astrophysics Data System (ADS)
Al-Garni, Ahmed; Kassem, Ayman Hamdy
A hybrid heuristic optimization technique based on genetic algorithms and particle swarm optimization has been developed and tested for trajectory optimization problems with multiple constraints and a multi-objective cost function. The technique is used to calculate control settings for two types of ascent trajectories (constant dynamic pressure and minimum-fuel-minimum-heat) for a two-dimensional model of an aerospace plane. A thorough statistical analysis is done on the hybrid technique to make comparisons with both basic genetic algorithm and particle swarm optimization techniques with respect to convergence and execution time. Genetic algorithm optimization showed better execution-time performance, while particle swarm optimization showed better convergence performance. The hybrid optimization technique, benefiting from both techniques, showed superior, robust performance, balancing convergence trends and execution time.
Performance analysis of a finite radon transform in OFDM system under different channel models
NASA Astrophysics Data System (ADS)
Dawood, Sameer A.; Malek, F.; Anuar, M. S.; Fayadh, Rashid A.; Abdullah, Farrah Salwani
2015-05-01
In this paper, a class of discrete Radon transforms, namely the Finite Radon Transform (FRAT), was proposed as a modulation technique in the realization of Orthogonal Frequency Division Multiplexing (OFDM). The proposed FRAT operates as a data mapper in the OFDM transceiver instead of the conventional phase shift mapping and quadrature amplitude mapping that are usually used with standard OFDM based on the Fast Fourier Transform (FFT), in a way that ensures increased orthogonality of the system. The Fourier domain approach was found here to be the more suitable way for obtaining the forward and inverse FRAT. This structure resulted in a more suitable realization of conventional FFT-OFDM. It was shown that this application increases the orthogonality significantly, due to the use of the Inverse Fast Fourier Transform (IFFT) twice, namely in the data mapping and in the sub-carrier modulation, and also due to the use of an efficient algorithm, called the optimal ordering method, in determining the FRAT coefficients. The proposed approach was tested and compared with conventional OFDM for an additive white Gaussian noise (AWGN) channel, a flat fading channel, and a multi-path frequency-selective fading channel. The obtained results showed that the proposed system improves the bit error rate (BER) performance by reducing inter-symbol interference (ISI) and inter-carrier interference (ICI), compared with the conventional OFDM system.
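For context, the conventional FFT-based OFDM modulation stage that the FRAT mapper would augment can be sketched as follows; the constellation mapping, sub-carrier count, and cyclic-prefix length are illustrative assumptions, and the FRAT data-mapping step itself is not implemented here.

```python
import numpy as np

def ofdm_tx(symbols, n_sub=64, cp_len=16):
    """Conventional OFDM transmitter core: IFFT per block plus cyclic prefix.
    symbols: complex vector of already-mapped data (length a multiple of n_sub)."""
    blocks = symbols.reshape(-1, n_sub)
    time_blocks = np.fft.ifft(blocks, axis=1)                   # sub-carrier modulation
    return np.hstack([time_blocks[:, -cp_len:], time_blocks]).ravel()

def ofdm_rx(signal, n_sub=64, cp_len=16):
    """Matching receiver: strip the cyclic prefix and apply the FFT."""
    blocks = signal.reshape(-1, n_sub + cp_len)[:, cp_len:]
    return np.fft.fft(blocks, axis=1).ravel()
```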
Spatiotemporal Interpolation for Environmental Modelling
Susanto, Ferry; de Souza, Paulo; He, Jing
2016-01-01
A variation of the reduction-based approach to spatiotemporal interpolation (STI), in which time is treated independently from the spatial dimensions, is proposed in this paper. We reviewed and compared three widely-used spatial interpolation techniques: ordinary kriging, inverse distance weighting and the triangular irregular network. We also proposed a new distribution-based distance weighting (DDW) spatial interpolation method. In this study, we utilised one year of data from Tasmania's South Esk hydrology model developed by CSIRO. Root mean squared error statistics were used for performance evaluation. Our results show that the proposed reduction approach is superior to the extension approach to STI. However, the proposed DDW provides little benefit compared to the conventional inverse distance weighting (IDW) method. We suggest that the improved IDW technique, with the reduction approach used for the temporal dimension, is the optimal combination for large-scale spatiotemporal interpolation within environmental modelling applications. PMID:27509497
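A minimal sketch of the reduction approach with inverse distance weighting follows: time is treated independently, and the spatial interpolator is applied separately at each time step. The power parameter and data layout are illustrative assumptions, not the study's configuration.

```python
import numpy as np

def idw(points, values, query, power=2.0, eps=1e-12):
    """Inverse distance weighting at a single query location.
    points: (n, 2) station coordinates, values: (n,) observations."""
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d < eps):                        # query coincides with a station
        return values[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * values) / np.sum(w)

def reduction_sti(points, series, query):
    """Reduction-based spatiotemporal interpolation: interpolate in
    space only, one time step at a time. series: (n_points, n_times)."""
    return np.array([idw(points, series[:, t], query)
                     for t in range(series.shape[1])])
```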
Multiphoton imaging with high peak power VECSELs
NASA Astrophysics Data System (ADS)
Mirkhanov, Shamil; Quarterman, Adrian H.; Swift, Samuel; Praveen, Bavishna B.; Smyth, Conor J. C.; Wilcox, Keith G.
2016-03-01
Multiphoton imaging (MPI) has become one of the key non-invasive light microscopy techniques. This technique allows deep tissue imaging with high resolution and less photo-damage than conventional confocal microscopy. MPI is a type of laser-scanning microscopy that employs localized nonlinear excitation, so that fluorescence is excited only within the scanned focal volume. For many years, Ti:sapphire femtosecond lasers have been the leading light sources for MPI applications. However, recent developments in laser sources and new types of fluorophores indicate that longer wavelength excitation could be a good alternative for these applications. Mode-locked VECSELs have the potential to be low cost, compact light sources for MPI systems, with the additional advantage of broad wavelength coverage through use of different semiconductor material systems. Here, we use a femtosecond fiber laser to investigate the effect that average power and repetition rate have on MPI image quality, to allow us to optimize our mode-locked VECSELs for MPI.
NASA Astrophysics Data System (ADS)
Gómez-Galán, J. A.; Sánchez-Rodríguez, T.; Sánchez-Raya, M.; Martel, I.; López-Martín, A.; Carvajal, R. G.; Ramírez-Angulo, J.
2014-06-01
This paper evaluates the design of front-end electronics in modern technologies to be used in a new generation of heavy ion detectors, HYDE (FAIR, Germany), proposing novel architectures to achieve high gain in a low voltage environment. As conventional topologies of operational amplifiers in modern CMOS processes show limitations in terms of gain, novel approaches are required. The work addresses the design using transistors with channel lengths of no more than double the feature size and a supply voltage as low as 1.2 V. A front-end system has been fabricated in a 90 nm process including gain-boosting techniques based on regulated cascode circuits. The analog channel has been optimized to match a detector capacitance of 5 pF and exhibits good performance in terms of gain, speed, linearity and power consumption.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Srivastava, V.; Fannin, K.F.; Biljetina, R.
1986-07-01
The Institute of Gas Technology (IGT) conducted a comprehensive laboratory-scale research program to develop and optimize the anaerobic digestion process for producing methane from water hyacinth and sludge blends. This study focused on digester design and operating techniques, which gave improved methane yields and production rates over those observed using conventional digesters. The final digester concept and the operating experience were utilized to design and operate a large-scale experimental test unit (ETU) at Walt Disney World, Florida. This paper describes the novel digester design, operating techniques, and the results obtained in the laboratory. The paper also discusses a kinetic model which predicts methane yield, methane production rate, and digester effluent solids as a function of retention time. This model was successfully utilized to predict the performance of the ETU. 15 refs., 6 figs., 6 tabs.
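The abstract does not give the model's functional form; purely for illustration, a commonly used retention-time model for anaerobic digestion (the Chen-Hashimoto relation) is sketched below with placeholder parameter values, and it may differ from the IGT model referenced above.

```python
def chen_hashimoto(hrt_days, b0=0.3, s0=50.0, mu_m=0.3, k=0.8):
    """Illustrative Chen-Hashimoto kinetics (not necessarily the IGT model).
    hrt_days: hydraulic retention time (d); b0: ultimate methane yield
    (L CH4 per g volatile solids fed); s0: influent volatile solids (g/L);
    mu_m: maximum specific growth rate (1/d); k: dimensionless kinetic
    parameter. Valid for mu_m * hrt_days - 1 + k > 0."""
    yield_fraction = 1.0 - k / (mu_m * hrt_days - 1.0 + k)
    methane_yield = b0 * yield_fraction                  # L CH4 / g VS fed
    volumetric_rate = methane_yield * s0 / hrt_days      # L CH4 / L digester / d
    return methane_yield, volumetric_rate
```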
Radiation therapy for breast cancer: Literature review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balaji, Karunakaran, E-mail: karthik.balaji85@gmail.com; School of Advanced Sciences, VIT University, Vellore; Subramanian, Balaji
The concave shape and variable size of the target volume make treatment planning for the breast/chest wall a challenge. Conventional techniques used for breast/chest wall cancer treatment provide better sparing of organs at risk (OARs), with poor conformity and uniformity of dose to the target volume. Advanced technologies such as intensity modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT) improve the target coverage at the cost of higher low-dose volumes to OARs. Novel hybrid techniques present promising results in breast/chest wall irradiation in terms of target coverage as well as OAR sparing. Several published studies have compared these technologies for the breast/chest wall with or without nodal volumes. The aim of this article is to review the relevant data and identify the scope for further research in developing optimal treatment plans for breast/chest wall cancer treatment.
Antenna coupled photonic wire lasers
Kao, Tsung-Kao; Cai, Xiaowei; Lee, Alan W. M.; ...
2015-06-22
Slope efficiency (SE) is an important performance metric for lasers. In conventional semiconductor lasers, SE can be optimized by careful design of the facet (or the modulation for DFB lasers) dimension and surface. However, photonic wire lasers intrinsically suffer low SE due to their deep sub-wavelength emitting facets. Inspired by microwave engineering techniques, we show a novel method to extract power from wire lasers using monolithically integrated antennas. These integrated antennas significantly increase the effective radiation area, and consequently enhance the power extraction efficiency. When applied to wire lasers at THz frequency, we achieved the highest single-side slope efficiency (~450 mW/A) in pulsed mode for DFB lasers at 4 THz and a ~4x increase in output power at 3 THz compared with a similar structure without antennas. This work demonstrates the versatility of incorporating microwave engineering techniques into laser designs, enabling significant performance enhancements.
Zhao, Yongxi; Kong, Yu; Wang, Bo; Wu, Yayan; Wu, Hong
2007-03-30
A simple and rapid micellar electrokinetic chromatography (MEKC) method with UV detection was developed for the simultaneous separation and determination of all-trans- and 13-cis-retinoic acids in rabbit serum using an on-line sweeping concentration technique. The serum sample was simply deproteinized and centrifuged. Various parameters affecting sample enrichment and separation were systematically investigated. Under optimal conditions, the analytes could be well separated within 17 min, and the relative standard deviations (RSD) of migration times and peak areas were less than 3.4%. Compared with the conventional MEKC injection method, 18- and 19-fold improvements in sensitivity were achieved for the two analytes, respectively. The proposed method has been successfully applied to the determination of all-trans- and 13-cis-retinoic acids in serum samples from rabbits and could be feasible for further pharmacokinetic studies of all-trans-retinoic acid.
Predicting ozone profile shape from satellite UV spectra
NASA Astrophysics Data System (ADS)
Xu, Jian; Loyola, Diego; Romahn, Fabian; Doicu, Adrian
2017-04-01
Identifying ozone profile shape is a critical yet challenging job for the accurate reconstruction of vertical distributions of atmospheric ozone that is relevant to climate change and air quality. Motivated by the need to develop an approach to reliably and efficiently estimate vertical information of ozone and inspired by the success of machine learning techniques, this work proposes a new algorithm for deriving ozone profile shapes from ultraviolet (UV) absorption spectra that are recorded by satellite instruments, e.g. GOME series and the future Sentinel missions. The proposed algorithm formulates this particular inverse problem in a classification framework rather than a conventional inversion one and places an emphasis on effectively characterizing various profile shapes based on machine learning techniques. Furthermore, a comparison of the ozone profiles from real GOME-2 data estimated by our algorithm and the classical retrieval algorithm (Optimal Estimation Method) is performed.
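A minimal sketch of the classification framing described above, not the operational GOME-2 retrieval: UV radiance spectra are mapped to a small set of ozone profile-shape classes with an off-the-shelf classifier. All arrays, sizes and labels below are hypothetical placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples, n_channels, n_shapes = 2000, 300, 6            # assumed sizes
spectra = rng.normal(size=(n_samples, n_channels))         # simulated UV radiances
shape_class = rng.integers(0, n_shapes, size=n_samples)    # profile-shape labels

X_train, X_test, y_train, y_test = train_test_split(
    spectra, shape_class, test_size=0.25, random_state=0)

# classification rather than inversion: predict a discrete shape class per spectrum
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("shape-class accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```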
Nanoscale surface characterization using laser interference microscopy
NASA Astrophysics Data System (ADS)
Ignatyev, Pavel S.; Skrynnik, Andrey A.; Melnik, Yury A.
2018-03-01
Nanoscale surface characterization is one of the most significant parts of modern materials development and application. Modern microscopes are expensive and complicated tools, and their use for industrial tasks is limited due to laborious sample preparation, measurement procedures, and low operation speed. The laser modulation interference microscopy (MIM) method for real-time quantitative and qualitative analysis of glass, metals, ceramics, and various coatings has a spatial resolution of 0.1 nm vertically and up to 100 nm laterally. It is proposed as an alternative to traditional scanning electron microscopy (SEM) and atomic force microscopy (AFM) methods. It is demonstrated that in the case of roughness metrology for super-smooth (Ra < 1 nm) surfaces, the laser interference microscopy technique is better suited than conventional SEM and AFM. A comparison of semiconductor test structures for lateral dimension measurements obtained with SEM, AFM and a white light interferometer also demonstrates the advantages of the MIM technique.
Methodology and Method and Apparatus for Signaling with Capacity Optimized Constellations
NASA Technical Reports Server (NTRS)
Barsoum, Maged F. (Inventor); Jones, Christopher R. (Inventor)
2016-01-01
Communication systems are described that use geometrically shaped PSK constellations that have increased capacity compared to conventional PSK constellations operating within a similar SNR band. The geometrically shaped PSK constellation is optimized based upon parallel decoding capacity. In many embodiments, a capacity optimized geometrically shaped constellation can be used to replace a conventional constellation as part of a firmware upgrade to transmitters and receivers within a communication system. In a number of embodiments, the geometrically shaped constellation is optimized for an Additive White Gaussian Noise channel or a fading channel. In numerous embodiments, the communication uses adaptive rate encoding and the location of points within the geometrically shaped constellation changes as the code rate changes.
NASA Astrophysics Data System (ADS)
Mutrikah, N.; Winarno, H.; Amalia, T.; Djakaria, M.
2017-08-01
The objective of this study was to compare conventional and conformal techniques of external beam radiotherapy (EBRT) in terms of dose distribution, tumor response, and side effects in the treatment of locally advanced cervical cancer patients. A retrospective cohort study was conducted on cervical cancer patients who underwent EBRT before brachytherapy in the Radiotherapy Department of Cipto Mangunkusumo Hospital. The prescribed dose distribution, tumor response, and acute side effects of EBRT using conventional and conformal techniques were investigated. In total, 51 patients who underwent EBRT using conventional techniques (25 cases using Cobalt-60 and 26 cases using a linear accelerator (LINAC)) and 29 patients who underwent EBRT using conformal techniques were included in the study. The distribution of the prescribed dose in the target had an impact on the patient's final response to EBRT. The complete response rate was significantly greater with conformal techniques (58%) than with conventional techniques (42%). No severe acute local side effects (Radiation Therapy Oncology Group (RTOG) grades 3-4) were seen in any of the patients. The dose and volume delivered to the gastrointestinal tract and urinary bladder affected the proportion of mild acute side effects (RTOG grades 1-2), which was significantly greater with conventional techniques (Cobalt-60/LINAC) than with conformal techniques, at 72% and 78% compared to 28% and 22%, respectively. The use of conformal techniques in pelvic radiation therapy is suggested for radiotherapy centers with CT simulators and 3D Radiotherapy Treatment Planning Systems (RTPSs) to decrease some uncertainties in radiotherapy planning. The use of AP/PA pelvic radiation techniques with Cobalt-60 should be limited to body thicknesses equal to or less than 18 cm. When using conformal techniques, the small bowel should be delineated, as it is considered a critical organ according to RTOG consensus guidelines.
NASA Astrophysics Data System (ADS)
Grujicic, M.; Arakere, G.; Pandurangan, B.; Hariharan, A.; Yen, C.-F.; Cheeseman, B. A.
2011-02-01
To respond to the advent of more lethal threats, recently designed aluminum-armor-based military-vehicle systems have resorted to an increasing use of higher strength aluminum alloys (with superior ballistic resistance against armor piercing (AP) threats and with high vehicle weight-saving potential). Unfortunately, these alloys are not very amenable to conventional fusion-based welding technologies and, in order to obtain high-quality welds, solid-state joining technologies such as friction stir welding (FSW) have to be employed. However, since FSW is a relatively new and fairly complex joining technology, its introduction into advanced military vehicle structures is not straightforward and entails a comprehensive multi-step approach. One such (three-step) approach is developed in the present work. Within the first step, experimental and computational techniques are utilized to determine the optimal tool design and the optimal FSW process parameters which result in maximal productivity of the joining process and the highest quality of the weld. Within the second step, techniques are developed for the identification and qualification of the optimal weld joint designs in different sections of a prototypical military vehicle structure. In the third step, problems associated with the fabrication of a sub-scale military vehicle test structure and the blast survivability of the structure are assessed. The results obtained and the lessons learned are used to judge the potential of the current approach in shortening the development time and in enhancing the reliability and blast survivability of military vehicle structures.
Ullattuthodi, Sujana; Cherian, Kandathil Phillip; Anandkumar, R; Nambiar, M Sreedevi
2017-01-01
This in vitro study seeks to evaluate and compare the marginal and internal fit of cobalt-chromium copings fabricated using the conventional and direct metal laser sintering (DMLS) techniques. A master model of a prepared molar tooth was made using cobalt-chromium alloy. A silicone impression of the master model was made and thirty standardized working models were then produced: twenty working models for the conventional lost-wax technique and ten working models for the DMLS technique. A total of twenty metal copings were fabricated using two different production techniques, conventional lost-wax and DMLS, with ten samples in each group. The conventional and DMLS copings were cemented to the working models using glass ionomer cement. The marginal gap of the copings was measured at four predetermined points. The dies with the cemented copings were sectioned in a standardized manner using a heavy-duty lathe. Each sectioned sample was then analyzed for the internal gap between the die and the metal coping using a metallurgical microscope. Digital photographs were taken at ×50 magnification and analyzed using measurement software. Statistical analysis was done by unpaired t-test and analysis of variance (ANOVA). The results of this study reveal that no significant difference was present in the marginal gap of conventional and DMLS copings (P > 0.05) by ANOVA. The mean internal gap of DMLS copings was significantly greater than that of conventional copings (P < 0.05). Within the limitations of this in vitro study, it was concluded that the internal fit of conventional copings was superior to that of the DMLS copings. The marginal fit of the copings fabricated by the two different techniques had no significant difference.
Zeković, Zoran; Vladić, Jelena; Vidović, Senka; Adamović, Dušan; Pavlić, Branimir
2016-10-01
Microwave-assisted extraction (MAE) of polyphenols from coriander seeds was optimized by simultaneous maximization of total phenolic (TP) and total flavonoid (TF) yields, as well as maximized antioxidant activity determined by 1,1-diphenyl-2-picrylhydrazyl and reducing power assays. A Box-Behnken experimental design with response surface methodology (RSM) was used for optimization of MAE. Extraction time (X1, 15-35 min), ethanol concentration (X2, 50-90% w/w) and irradiation power (X3, 400-800 W) were investigated as independent variables. Experimentally obtained values of the investigated responses were fitted to a second-order polynomial model, and multiple regression analysis and analysis of variance were used to determine fitness of the model and optimal conditions. The optimal MAE conditions for simultaneous maximization of polyphenol yield and increased antioxidant activity were an extraction time of 19 min, an ethanol concentration of 63% and an irradiation power of 570 W, while predicted values of TP, TF, IC50 and EC50 at optimal MAE conditions were 311.23 mg gallic acid equivalent per 100 g dry weight (DW), 213.66 mg catechin equivalent per 100 g DW, 0.0315 mg mL(-1) and 0.1311 mg mL(-1) respectively. RSM was successfully used for multi-response optimization of coriander seed polyphenols. Comparison of optimized MAE with conventional extraction techniques confirmed that MAE provides significantly higher polyphenol yields and extracts with increased antioxidant activity. © 2016 Society of Chemical Industry.
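A hedged sketch of the RSM step described above: a second-order polynomial in extraction time (X1), ethanol concentration (X2) and irradiation power (X3) is fitted to a measured response and the optimum is located numerically within the design bounds. The design points and response values below are illustrative placeholders, not the study's data.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

# Illustrative Box-Behnken-style design (time min, ethanol %, power W) + 3 center runs
X = np.array([[15, 50, 600], [35, 50, 600], [15, 90, 600], [35, 90, 600],
              [15, 70, 400], [35, 70, 400], [15, 70, 800], [35, 70, 800],
              [25, 50, 400], [25, 90, 400], [25, 50, 800], [25, 90, 800],
              [25, 70, 600], [25, 70, 600], [25, 70, 600]], float)
y = np.array([250, 270, 240, 260, 255, 275, 265, 280, 245, 250, 290, 285,
              300, 302, 298], float)       # hypothetical TP responses (mg GAE/100 g DW)

poly = PolynomialFeatures(degree=2, include_bias=False)   # quadratic + interaction terms
model = LinearRegression().fit(poly.fit_transform(X), y)

predict = lambda v: model.predict(poly.transform(v.reshape(1, -1)))[0]
bounds = [(15, 35), (50, 90), (400, 800)]
res = minimize(lambda v: -predict(v), x0=np.array([25.0, 70.0, 600.0]), bounds=bounds)
print("predicted optimum (time, ethanol, power):", res.x, "TP:", -res.fun)
```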
Epstein, F H; Mugler, J P; Brookeman, J R
1994-02-01
A number of pulse sequence techniques, including magnetization-prepared gradient echo (MP-GRE), segmented GRE, and hybrid RARE, employ a relatively large number of variable pulse sequence parameters and acquire the image data during a transient signal evolution. These sequences have recently been proposed and/or used for clinical applications in the brain, spine, liver, and coronary arteries. Thus, the need for a method of deriving optimal pulse sequence parameter values for this class of sequences now exists. Due to the complexity of these sequences, conventional optimization approaches, such as applying differential calculus to signal difference equations, are inadequate. We have developed a general framework for adapting the simulated annealing algorithm to pulse sequence parameter value optimization, and applied this framework to the specific case of optimizing the white matter-gray matter signal difference for a T1-weighted variable flip angle 3D MP-RAGE sequence. Using our algorithm, the values of 35 sequence parameters, including the magnetization-preparation RF pulse flip angle and delay time, 32 flip angles in the variable flip angle gradient-echo acquisition sequence, and the magnetization recovery time, were derived. Optimized 3D MP-RAGE achieved up to a 130% increase in white matter-gray matter signal difference compared with optimized 3D RF-spoiled FLASH with the same total acquisition time. The simulated annealing approach was effective at deriving optimal parameter values for a specific 3D MP-RAGE imaging objective, and may be useful for other imaging objectives and sequences in this general class.
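A minimal simulated-annealing sketch in the spirit of the parameter-optimization framework described above. The objective function here is a smooth placeholder standing in for the white matter-gray matter signal-difference simulation; it is not the MP-RAGE signal model used in the actual work.

```python
import numpy as np

rng = np.random.default_rng(1)

def signal_difference(params):
    # placeholder objective over 35 "sequence parameters" (maximized at 0.3 each)
    return -np.sum((params - 0.3) ** 2)

n_params = 35
current = rng.uniform(0.0, 1.0, n_params)
best = current.copy()
T = 1.0                                    # initial "temperature"
for step in range(20000):
    candidate = np.clip(current + rng.normal(scale=0.05, size=n_params), 0.0, 1.0)
    delta = signal_difference(candidate) - signal_difference(current)
    # accept improvements always, worse moves with Boltzmann probability
    if delta > 0 or rng.random() < np.exp(delta / T):
        current = candidate
        if signal_difference(current) > signal_difference(best):
            best = current.copy()
    T *= 0.9995                            # geometric cooling schedule
print("best objective:", signal_difference(best))
```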
Cat Swarm Optimization algorithm for optimal linear phase FIR filter design.
Saha, Suman Kumar; Ghoshal, Sakti Prasad; Kar, Rajib; Mandal, Durbadal
2013-11-01
In this paper a new meta-heuristic search method, called the Cat Swarm Optimization (CSO) algorithm, is applied to determine the best optimal impulse response coefficients of FIR low pass, high pass, band pass and band stop filters, trying to meet the respective ideal frequency response characteristics. CSO is generated by observing the behaviour of cats and is composed of two sub-models. In CSO, one can decide how many cats are used in the iteration. Every cat has its own position composed of M dimensions, velocities for each dimension, a fitness value which represents the accommodation of the cat to the fitness function, and a flag to identify whether the cat is in seeking mode or tracing mode. The final solution would be the best position of one of the cats. CSO keeps the best solution until it reaches the end of the iteration. The results of the proposed CSO-based approach have been compared to those of other well-known optimization methods such as the Real Coded Genetic Algorithm (RGA), standard Particle Swarm Optimization (PSO) and Differential Evolution (DE). The CSO-based results confirm the superiority of the proposed CSO for solving FIR filter design problems. The performances of the CSO-designed FIR filters have proven to be superior to those obtained by RGA, conventional PSO and DE. The simulation results also demonstrate that CSO is the best optimizer among the other relevant techniques, not only in convergence speed but also in the optimal performance of the designed filters. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
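A simplified sketch of the seeking/tracing structure described above, applied to a low-pass FIR design. Swarm size, mixture ratio, filter length and the ideal response are illustrative choices and not the paper's settings; this is a toy CSO, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 21                                            # filter length
w = np.linspace(0, np.pi, 256)
ideal = (w <= 0.4 * np.pi).astype(float)          # ideal low-pass magnitude
E = np.exp(-1j * np.outer(w, np.arange(N)))       # DTFT matrix, precomputed

def response_error(h):
    return np.mean((np.abs(E @ h) - ideal) ** 2)  # squared deviation from ideal

n_cats, iters, mr = 20, 200, 0.3                  # mr = fraction of cats in tracing mode
cats = rng.uniform(-0.5, 0.5, (n_cats, N))
vel = np.zeros((n_cats, N))
fitness = np.array([response_error(c) for c in cats])
gbest = cats[np.argmin(fitness)].copy()

for _ in range(iters):
    tracing = rng.random(n_cats) < mr
    for i in range(n_cats):
        if tracing[i]:
            # tracing mode: velocity pulled toward the global best position
            vel[i] += rng.random() * 2.0 * (gbest - cats[i])
            cats[i] += vel[i]
        else:
            # seeking mode: evaluate perturbed copies and keep the best one
            copies = cats[i] + rng.normal(scale=0.02, size=(5, N))
            cats[i] = copies[np.argmin([response_error(c) for c in copies])]
        fitness[i] = response_error(cats[i])
    gbest = cats[np.argmin(fitness)].copy()

print("final response error:", response_error(gbest))
```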
Nanoparticle-based photodynamic therapy on non-melanoma skin cancer
NASA Astrophysics Data System (ADS)
Fanjul-Vélez, F.; Arce-Diego, J. L.
2018-02-01
There are several advantages of Photodynamic Therapy (PDT) for non-melanoma skin cancer treatment compared to conventional treatment techniques such as surgery, radiotherapy or chemotherapy. Among these advantages are its noninvasive nature, the use of non-ionizing radiation and its high selectivity. Despite these advantages, the therapeutic efficacy of the current clinical protocol is not complete in all patients and depends on the type of pathology. Adequate dosimetry is needed in order to personalize the protocol. There are strategies that try to overcome the current PDT shortcomings, such as improvement of photosensitizer accumulation in the target tissue, optimization of the optical radiation distribution or maximization of the photochemical reactions. These strategies can be further complemented by the use of nanostructures with conventional PDT. Customized dosimetry for nanoparticle-based PDT requires models that adjust parameters of different natures to achieve optimal tumor removal. In this work, a predictive model of nanoparticle-based PDT is proposed and analyzed. Dosimetry in nanoparticle-based PDT is influenced by the photosensitizer-nanoparticle distribution in the malignant tissue, its influence on the optical radiation distribution and the subsequent photochemical reactions. Nanoparticles are considered as photosensitizer carriers on several types of non-melanoma skin cancer. Shielding effects are taken into account. The results allow the estimated treatment outcome with and without nanoparticles to be compared.
NASA Astrophysics Data System (ADS)
Dao, Thanh Hai
2018-01-01
Network coding techniques are seen as a new dimension for improving network performance thanks to their capability of utilizing network resources more efficiently. Indeed, the application of network coding to the realm of failure recovery in optical networks has marked a major departure from traditional protection schemes, as it can potentially achieve both rapid recovery and capacity improvement, challenging the prevailing wisdom of trading capacity efficiency for recovery speed and vice versa. In this context, the maturing of all-optical XOR technologies appears as a good match for the need for more efficient protection in transparent optical networks. In addressing this opportunity, we propose to use practical all-optical XOR network coding to leverage conventional 1 + 1 optical path protection in transparent WDM optical networks. The network coding-assisted protection solution combines the protection flows of two demands sharing the same destination node in supportive conditions, paving the way for reducing the backup capacity. A novel mathematical model taking into account the operation of the new protection scheme for optimal network design is formulated as an integer linear program. Numerical results based on extensive simulations on realistic topologies, the COST239 and NSFNET networks, are presented to highlight the benefits of our proposal compared to the conventional approach in terms of wavelength resource efficiency and network throughput.
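A toy illustration of the XOR-based protection idea described above: the protection copies of two demands sharing a destination are XOR-combined onto one backup flow; if one working path fails, the destination recovers the lost data from the coded stream and the other demand's surviving working copy. The payloads are arbitrary example bytes, and the bitwise XOR here only mimics what the all-optical XOR gate would do on the signal level.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    # bitwise XOR of two equal-length byte strings
    return bytes(x ^ y for x, y in zip(a, b))

payload_1 = b"demand-1 data"
payload_2 = b"demand-2 data"

coded_backup = xor_bytes(payload_1, payload_2)   # single shared protection flow

# Suppose the working path of demand 1 fails; demand 2 still arrives intact.
recovered_1 = xor_bytes(coded_backup, payload_2)
assert recovered_1 == payload_1
print("recovered demand 1:", recovered_1)
```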
MR Imaging of Knee Arthroplasty Implants
Fritz, Jan; Lurie, Brett
2015-01-01
Primary total knee arthroplasty is a highly effective treatment that relieves pain and improves joint function in a large percentage of patients. Despite an initially satisfactory surgical outcome, pain, dysfunction, and implant failure can occur over time. Identifying the etiology of complications is vital for appropriate management and proper timing of revision. Due to the increasing number of knee arthroplasties performed and decreasing patient age at implantation, there is a demand for accurate diagnosis to determine appropriate treatment of symptomatic joints following knee arthroplasty, and for monitoring of patients at risk. Magnetic resonance (MR) imaging allows for comprehensive imaging evaluation of the tissues surrounding knee arthroplasty implants with metallic components, including the polyethylene components. Optimized conventional and advanced pulse sequences can result in substantial metallic artifact reduction and afford improved visualization of bone, implant-tissue interfaces, and periprosthetic soft tissue for the diagnosis of arthroplasty-related complications. In this review article, we discuss strategies for MR imaging around knee arthroplasty implants and illustrate the imaging appearances of common modes of failure, including aseptic loosening, polyethylene wear–induced synovitis and osteolysis, periprosthetic joint infections, fracture, patellar clunk syndrome, recurrent hemarthrosis, arthrofibrosis, component malalignment, extensor mechanism injury, and instability. A systematic approach is provided for evaluation of MR imaging of knee implants. MR imaging with optimized conventional pulse sequences and advanced metal artifact reduction techniques can contribute important information for diagnosis, prognosis, risk stratification, and surgical planning. ©RSNA, 2015 PMID:26295591
A New Finite Difference Q-compensated RTM Algorithm in Tilted Transverse Isotropic (TTI) Media
NASA Astrophysics Data System (ADS)
Zhou, T.; Hu, W.; Ning, J.
2017-12-01
Attenuating anisotropic geological bodies are difficult to image with conventional migration methods. In such scenarios, recorded seismic data suffer greatly from both amplitude decay and phase distortion, resulting in degraded resolution, poor illumination and incorrect migration depth in imaging results. To efficiently obtain high-quality images, we propose a novel TTI QRTM algorithm based on the Generalized Standard Linear Solid model combined with a unique multi-stage optimization technique to simultaneously correct the decayed amplitude and the distorted phase velocity. Numerical tests (shown in the figure) demonstrate that our TTI QRTM algorithm effectively corrects migration depth, significantly improves illumination, and enhances resolution within and below low-Q regions. The result of our new method is very close to the reference RTM image, while QRTM without TTI cannot produce a correct image. Compared to the conventional QRTM method based on a pseudo-spectral operator for fractional Laplacian evaluation, our method is more computationally efficient for large-scale applications and more suitable for GPU acceleration. With the current multi-stage dispersion optimization scheme, this TTI QRTM method performs best in the frequency range 10-70 Hz, and could be used over a wider frequency range. Furthermore, as this method can also handle frequency-dependent Q, it has potential to be applied in imaging deep structures where low Q exists, such as subduction zones, volcanic zones or fault zones with passive source observations.
Malla, Javed Ahmed; Chakravarti, Soumendu; Gupta, Vikas; Chander, Vishal; Sharma, Gaurav Kumar; Qureshi, Salauddin; Mishra, Adhiraj; Gupta, Vivek Kumar; Nandi, Sukdeb
2018-02-20
Bovine herpesvirus-1 (BHV-1) is a major viral pathogen affecting bovines, leading to various clinical manifestations and causing a significant economic impediment in modern livestock production systems. Rapid, accurate and sensitive detection of BHV-1 infection at frozen semen stations or at dairy herds remains a priority for control of BHV-1 spread to the susceptible population. Polymerase Spiral Reaction (PSR), a novel addition to the gamut of isothermal techniques, has been successfully implemented in an initial optimization for detection of BHV-1 genomic DNA and further validated in clinical samples. The developed PSR assay has been validated for detection of BHV-1 from bovine semen (n=99), a major source of transmission of BHV-1 from breeding bulls to susceptible dams in artificial insemination programs. The technique has also been used for screening of BHV-1 DNA from suspected aborted fetal tissues (n=25). The developed PSR technique is 100-fold more sensitive than conventional PCR and comparable to real-time PCR. The PSR technique detected 13 samples positive for BHV-1 DNA in bovine semen, 4 samples more than conventional PCR. The aborted fetal tissues were negative for the presence of BHV-1 DNA. The presence of BHV-1 in bovine semen samples raises a pertinent concern for extensive screening of semen from breeding bulls before being used in the artificial insemination process. PSR has all the attributes to become a method of choice for rapid, accurate and sensitive detection of BHV-1 DNA at frozen semen stations or at dairy herds in resource-constrained settings. Copyright © 2017 Elsevier B.V. All rights reserved.
Retention of denture bases fabricated by three different processing techniques – An in vivo study
Chalapathi Kumar, V. H.; Surapaneni, Hemchand; Ravikiran, V.; Chandra, B. Sarat; Balusu, Srilatha; Reddy, V. Naveen
2016-01-01
Aim: Distortion due to polymerization shrinkage compromises retention. To evaluate the amount of retention of denture bases fabricated by conventional, anchorized, and injection molding polymerization techniques. Materials and Methods: Ten completely edentulous patients were selected, impressions were made, and the master cast obtained was duplicated to fabricate denture bases by the three polymerization techniques. A loop was attached to the finished denture bases to estimate the force required to dislodge them using a retention apparatus. Readings were subjected to nonparametric Friedman two-way analysis of variance followed by Bonferroni correction methods and the Wilcoxon matched-pairs signed-ranks test. Results: Denture bases fabricated by injection molding (3740 g) and anchorized techniques (2913 g) recorded greater retention values than the conventional technique (2468 g). A significant difference was seen between these techniques. Conclusions: Denture bases obtained by the injection molding polymerization technique exhibited maximum retention, followed by the anchorized technique, and the least retention was seen with the conventional molding technique. PMID:27382542
Das, Anup Kumar; Mandal, Vivekananda; Mandal, Subhash C
2013-01-01
Triterpenoids are a group of important phytocomponents from Ficus racemosa (syn. Ficus glomerata Roxb.) that are known to possess diverse pharmacological activities and which have prompted the development of various extraction techniques and strategies for its better utilisation. To develop an effective, rapid and ecofriendly microwave-assisted extraction (MAE) strategy to optimise the extraction of a potent bioactive triterpenoid compound, lupeol, from young leaves of Ficus racemosa using response surface methodology (RSM) for industrial scale-up. Initially a Plackett-Burman design matrix was applied to identify the most significant extraction variables amongst microwave power, irradiation time, particle size, solvent:sample ratio loading, varying solvent strength and pre-leaching time on lupeol extraction. Among the six variables tested, microwave power, irradiation time and solvent-sample/loading ratio were found to have a significant effect (P < 0.05) on lupeol extraction and were fitted to a Box-Behnken-design-generated quadratic polynomial equation to predict optimal extraction conditions as well as to locate operability regions with maximum yield. The optimal conditions were microwave power of 65.67% of 700 W, extraction time of 4.27 min and solvent-sample ratio loading of 21.33 mL/g. Confirmation trials under the optimal conditions gave an experimental yield (18.52 µg/g of dry leaves) close to the RSM predicted value of 18.71 µg/g. Under the optimal conditions the mathematical model was found to be well fitted with the experimental data. The MAE was found to be a more rapid, convenient and appropriate extraction method, with a higher yield and lower solvent consumption when compared with conventional extraction techniques. Copyright © 2012 John Wiley & Sons, Ltd.
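A hedged sketch of the Plackett-Burman screening step mentioned above: the standard 12-run two-level design is built, its first six columns are assigned to the six candidate factors, and main effects are estimated from (hypothetical) lupeol yields. The yields and factor names are illustrative placeholders, not the study's measurements.

```python
import numpy as np

gen = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])  # standard PB-12 generator row
rows = [np.roll(gen, k) for k in range(11)] + [-np.ones(11, int)]
design = np.array(rows)[:, :6]          # six factors: power, time, particle size,
                                        # solvent ratio, solvent strength, pre-leach time
y = np.array([14.2, 15.1, 12.8, 17.0, 16.4, 15.8,
              11.9, 12.3, 13.0, 16.8, 12.5, 11.5])  # illustrative lupeol yields (ug/g)

effects = design.T @ y / (len(y) / 2)   # main-effect estimate per factor
for name, e in zip(["power", "time", "particle", "ratio", "strength", "preleach"], effects):
    print(f"{name:9s} effect = {e:+.2f}")
```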
Dwell time algorithm based on the optimization theory for magnetorheological finishing
NASA Astrophysics Data System (ADS)
Zhang, Yunfei; Wang, Yang; Wang, Yajun; He, Jianguo; Ji, Fang; Huang, Wen
2010-10-01
Magnetorheological finishing (MRF) is an advanced polishing technique capable of rapidly converging to the required surface figure. This process can deterministically control the amount of material removed by varying the time to dwell at each particular position on the workpiece surface. The dwell time algorithm is one of the key techniques of MRF. A dwell time algorithm based on the matrix equation and optimization theory is presented in this paper. The conventional mathematical model of the dwell time was transferred to a matrix equation containing the initial surface error, the removal function and the dwell time function. The dwell time to be calculated is simply the solution to this large, sparse matrix equation. A new mathematical model of the dwell time based on optimization theory was established, which aims to minimize the 2-norm or ∞-norm of the residual surface error. The solution meets almost all the requirements of precise computer numerical control (CNC) without any need for extra data processing, because this optimization model takes some polishing conditions as constraints. Practical approaches to finding a minimal least-squares solution and a minimal maximum solution are also discussed in this paper. Simulations have shown that the proposed algorithm is numerically robust and reliable. With this algorithm an experiment was performed on the MRF machine developed in-house. After 4.7 minutes of polishing, the figure error of a flat workpiece with a 50 mm diameter was improved in PV from 0.191λ (λ = 632.8 nm) to 0.087λ and in RMS from 0.041λ to 0.010λ. This algorithm can be used to polish workpieces of all shapes including flats, spheres, aspheres, and prisms, and it is capable of improving the polishing figures dramatically.
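A minimal sketch of the matrix formulation described above: the initial surface error e is modeled as a removal matrix A (the removal-function footprint at each dwell position) times the dwell-time vector t, and a non-negative least-squares solve stands in for the constrained optimization. The 1-D error profile and Gaussian removal function are illustrative stand-ins, not the MRF machine's measured removal function.

```python
import numpy as np
from scipy.optimize import nnls

n = 200                                             # surface samples / dwell positions
x = np.linspace(-1.0, 1.0, n)
error = 0.2 * (1 + np.cos(3 * np.pi * x))           # initial figure error (waves)

# removal function: Gaussian footprint, removal per unit dwell time
sigma = 0.05
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * sigma ** 2))
A *= 0.01 / A.max()                                 # peak removal rate (waves per second)

t, residual_norm = nnls(A, error)                   # dwell times, all non-negative
print("residual RMS:", np.sqrt(np.mean((error - A @ t) ** 2)))
```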
NASA Astrophysics Data System (ADS)
Esmaeili, Mostafa; Motagh, Mahdi
2016-07-01
Time-series analysis of Synthetic Aperture Radar (SAR) data using the two techniques of Small BAseline Subset (SBAS) and Persistent Scatterer Interferometric SAR (PSInSAR) extends the capability of the conventional interferometry technique for deformation monitoring and mitigates many of its limitations. Using dual/quad-polarized data provides an additional source of information to further improve the capability of InSAR time-series analysis. In this paper we use dual-polarized data and combine Amplitude Dispersion Index (ADI) optimization of pixels with a phase stability criterion for PSInSAR analysis. ADI optimization is performed using a Simulated Annealing algorithm to increase the number of Persistent Scatterer Candidates (PSCs). The phase stability of the PSCs is then measured using their temporal coherence to select the final set of pixels for deformation analysis. We evaluate the method on a dataset comprising 17 dual-polarization (HH/VV) TerraSAR-X acquisitions from July 2013 to January 2014 over a subsidence area in Iran and compare the effectiveness of the method for both agricultural and urban regions. The results reveal that using the optimum scattering mechanism decreases the ADI values in urban and non-urban regions. Compared to single-pol data, the use of optimized polarization initially increases the number of PSCs by about three times and improves the final PS density by about 50%, in particular in regions with a high rate of deformation which suffer from loss of phase stability over time. The classification of PS pixels based on their optimum scattering mechanism revealed that the dominant scattering mechanism of the PS pixels in the urban area is double-bounce, while for the non-urban regions (ground surfaces and farmlands) it is mostly the single-bounce mechanism.
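A hedged sketch of the ADI-based pixel selection described above: the Amplitude Dispersion Index (standard deviation over mean of amplitude through time) is computed per pixel, and a simple scan over a projection angle combining the HH and VV channels stands in for the Simulated Annealing search over scattering mechanisms. The amplitude stacks and the 0.25 threshold are simulated placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
n_scenes, n_pixels = 17, 5000
amp_hh = rng.gamma(shape=20, scale=1.0, size=(n_scenes, n_pixels))  # simulated HH amplitudes
amp_vv = rng.gamma(shape=20, scale=1.0, size=(n_scenes, n_pixels))  # simulated VV amplitudes

def adi(amp):
    return amp.std(axis=0) / amp.mean(axis=0)        # amplitude dispersion per pixel

angles = np.linspace(0, np.pi / 2, 19)               # projection between HH and VV
adi_stack = np.stack([adi(np.cos(a) * amp_hh + np.sin(a) * amp_vv) for a in angles])
adi_opt = adi_stack.min(axis=0)                       # best ADI per pixel over the scan

ps_candidates = np.flatnonzero(adi_opt < 0.25)        # threshold used for PSC selection
print("PS candidates:", ps_candidates.size, "of", n_pixels)
```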
Zhang, Chu; Feng, Xuping; Wang, Jian; Liu, Fei; He, Yong; Zhou, Weijun
2017-01-01
Detection of plant diseases in a fast and simple way is crucial for timely disease control. Conventionally, plant diseases are accurately identified by DNA-, RNA- or serology-based methods which are time consuming, complex and expensive. Mid-infrared spectroscopy is a promising technique that simplifies the detection procedure for disease. Mid-infrared spectroscopy was used to identify the spectral differences between healthy and infected oilseed rape leaves. Two different sample sets from two experiments were used to explore and validate the feasibility of using mid-infrared spectroscopy in detecting Sclerotinia stem rot (SSR) on oilseed rape leaves. The average mid-infrared spectra showed differences between healthy and infected leaves, and the differences varied among the sample sets. Optimal wavenumbers for the two sample sets selected by the second-derivative spectra were similar, indicating the efficacy of selecting optimal wavenumbers. Chemometric methods were further used to quantitatively detect the oilseed rape leaves infected by SSR, including partial least squares-discriminant analysis, support vector machine and extreme learning machine. The discriminant models using the full spectra and the optimal wavenumbers of the two sample sets were effective, with classification accuracies over 80%. The discriminant results for the two sample sets varied due to variations in the samples. The use of two sample sets proved and validated the feasibility of using mid-infrared spectroscopy and chemometric methods for detecting SSR on oilseed rape leaves. The similarities among the selected optimal wavenumbers in different sample sets made it feasible to simplify the models and build practical models. Mid-infrared spectroscopy is a reliable and promising technique for SSR control. This study helps in developing practical applications of mid-infrared spectroscopy combined with chemometrics to detect plant disease.
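An illustrative sketch of the chemometric classification step described above: second-derivative preprocessing of mid-infrared spectra followed by a support vector machine separating healthy and SSR-infected leaves. The spectra, labels and synthetic band difference are simulated placeholders, not the study's measurements.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(4)
n_leaves, n_wavenumbers = 240, 600
labels = rng.integers(0, 2, n_leaves)                 # 0 = healthy, 1 = infected
spectra = rng.normal(size=(n_leaves, n_wavenumbers))
spectra[labels == 1, 200:220] += 0.8                  # synthetic spectral band difference

# Savitzky-Golay second-derivative spectra, as in the wavenumber-selection step
deriv2 = savgol_filter(spectra, window_length=11, polyorder=3, deriv=2, axis=1)

X_train, X_test, y_train, y_test = train_test_split(
    deriv2, labels, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10.0).fit(X_train, y_train)
print("classification accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```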
Least-squares model-based halftoning
NASA Astrophysics Data System (ADS)
Pappas, Thrasyvoulos N.; Neuhoff, David L.
1992-08-01
A least-squares model-based approach to digital halftoning is proposed. It exploits both a printer model and a model for visual perception. It attempts to produce an 'optimal' halftoned reproduction by minimizing the squared error between the response of the cascade of the printer and visual models to the binary image and the response of the visual model to the original gray-scale image. Conventional methods, such as clustered ordered dither, use the properties of the eye only implicitly, and resist printer distortions at the expense of spatial and gray-scale resolution. In previous work we showed that our printer model can be used to modify error diffusion to account for printer distortions. The modified error diffusion algorithm has better spatial and gray-scale resolution than conventional techniques, but produces some well-known artifacts and asymmetries because it does not make use of an explicit eye model. Least-squares model-based halftoning uses explicit eye models and relies on printer models that predict distortions and exploit them to increase, rather than decrease, both spatial and gray-scale resolution. We have shown that the one-dimensional least-squares problem, in which each row or column of the image is halftoned independently, can be implemented with the Viterbi algorithm. Unfortunately, no closed-form solution can be found in two dimensions. The two-dimensional least-squares solution is obtained by iterative techniques. Experiments show that least-squares model-based halftoning produces more gray levels and better spatial resolution than conventional techniques. We also show that the least-squares approach eliminates the problems associated with error diffusion. Model-based halftoning can be especially useful in transmission of high-quality documents using high-fidelity gray-scale image encoders. As we have shown, in such cases halftoning can be performed at the receiver, just before printing. Apart from coding efficiency, this approach permits the halftoner to be tuned to the individual printer, whose characteristics may vary considerably from those of other printers, for example, write-black vs. write-white laser printers.
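A small-scale sketch of the iterative two-dimensional least-squares idea described above: a binary image is refined by greedy pixel flips so that, after a Gaussian "visual model" filter, it best matches the filtered gray-scale original. The printer dot-overlap model of the paper is omitted here for brevity, and the image size, filter width and number of passes are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(5)
size, sigma = 32, 1.2
gray = np.tile(np.linspace(0.1, 0.9, size), (size, 1))       # test ramp image in [0, 1]
target = gaussian_filter(gray, sigma)                         # eye-model response to original

halftone = (rng.random((size, size)) < gray).astype(float)   # initial random-threshold halftone

def perceived_error(b):
    # squared error between the filtered binary image and the filtered original
    return np.sum((gaussian_filter(b, sigma) - target) ** 2)

for _ in range(3):                                            # a few greedy passes
    for i in range(size):
        for j in range(size):
            flipped = halftone.copy()
            flipped[i, j] = 1.0 - flipped[i, j]
            if perceived_error(flipped) < perceived_error(halftone):
                halftone = flipped
    print("perceived squared error:", perceived_error(halftone))
```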
Side effects and complications of intraosseous anesthesia and conventional oral anesthesia.
Peñarrocha-Oltra, David; Ata-Ali, Javier; Oltra-Moscardó, María-José; Peñarrocha-Diago, María; Peñarrocha, Miguel
2012-05-01
To analyze the side effects and complications following intraosseous anesthesia (IA), comparing them with those of conventional oral anesthesia techniques. A single-blind, prospective clinical study was carried out. Each patient underwent two anesthetic techniques: conventional (local infiltration and locoregional anesthetic block) and intraosseous, for respective dental operations. In order to allow comparison of IA versus conventional anesthesia, the two operations were similar and affected the same two teeth in opposite quadrants. Heart rate was recorded in all cases before injection of the anesthetic solution and again 30 seconds after injection. The complications observed after anesthetic administration were recorded. A total of 200 oral anesthetic procedures were carried out in 100 patients. Both IA and conventional anesthesia resulted in a significant increase in heart rate, though the increase was greater with the latter technique. Incidents were infrequent with either anesthetic technique, with no significant differences between them. Regarding the complications, there were significant differences in pain at the injection site, with more intense pain in the case of IA (χ2=3.532, p=0.030, Φ2=0.02), while limitation of oral aperture was more pronounced with conventional anesthesia (χ2=5.128, p<0.05, Φ2=0.014). Post-anesthetic biting showed no significant differences (χ2=4.082, p=0.121, Φ2=0.009). Both anesthetic techniques significantly increased heart rate, and IA caused comparatively more pain at the injection site, while limited oral aperture was more frequent with conventional anesthesia. Post-anesthetic biting showed no significant differences between the two techniques.
Mustapha, Ibrahim; Ali, Borhanuddin Mohd; Rasid, Mohd Fadlee A.; Sali, Aduwati; Mohamad, Hafizal
2015-01-01
It is well-known that clustering partitions network into logical groups of nodes in order to achieve energy efficiency and to enhance dynamic channel access in cognitive radio through cooperative sensing. While the topic of energy efficiency has been well investigated in conventional wireless sensor networks, the latter has not been extensively explored. In this paper, we propose a reinforcement learning-based spectrum-aware clustering algorithm that allows a member node to learn the energy and cooperative sensing costs for neighboring clusters to achieve an optimal solution. Each member node selects an optimal cluster that satisfies pairwise constraints, minimizes network energy consumption and enhances channel sensing performance through an exploration technique. We first model the network energy consumption and then determine the optimal number of clusters for the network. The problem of selecting an optimal cluster is formulated as a Markov Decision Process (MDP) in the algorithm and the obtained simulation results show convergence, learning and adaptability of the algorithm to dynamic environment towards achieving an optimal solution. Performance comparisons of our algorithm with the Groupwise Spectrum Aware (GWSA)-based algorithm in terms of Sum of Square Error (SSE), complexity, network energy consumption and probability of detection indicate improved performance from the proposed approach. The results further reveal that an energy savings of 9% and a significant Primary User (PU) detection improvement can be achieved with the proposed approach. PMID:26287191
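A minimal sketch of the reinforcement-learning cluster selection described above: a member node learns, via epsilon-greedy Q-updates over a stateless MDP, which neighboring cluster minimizes its combined energy and cooperative-sensing cost. The cost values and learning constants are illustrative, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(6)
n_clusters = 4
true_cost = np.array([2.0, 1.2, 1.8, 2.5])   # hidden mean energy + sensing cost per cluster

Q = np.zeros(n_clusters)
alpha, epsilon = 0.1, 0.1

for episode in range(5000):
    if rng.random() < epsilon:                 # exploration: try a random cluster
        a = rng.integers(n_clusters)
    else:                                      # exploitation: pick the best-known cluster
        a = int(np.argmax(Q))
    reward = -(true_cost[a] + rng.normal(scale=0.2))  # lower cost -> higher reward
    Q[a] += alpha * (reward - Q[a])            # one-step Q-update

print("learned Q-values:", np.round(Q, 2), "-> chosen cluster:", int(np.argmax(Q)))
```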
2017-01-01
The present study was done to optimize power ultrasound processing for maximizing diastase activity and minimizing hydroxymethylfurfural (HMF) content in honey using response surface methodology. An experimental design with treatment time (1-15 min), amplitude (20-100%) and volume (40-80 mL) as independent variables under controlled temperature conditions was studied, and it was concluded that a treatment time of 8 min, amplitude of 60% and volume of 60 mL give optimal diastase activity and HMF content, i.e. 32.07 Schade units and 30.14 mg/kg, respectively. Further thermal profile analyses were done with initial heating temperatures of 65, 75, 85 and 95 ºC until the temperature of the honey reached 65 ºC, followed by a holding time of 25 min at 65 ºC, and the results were compared with the thermal profile of honey treated with optimized power ultrasound. Quality characteristics such as moisture, pH, diastase activity, HMF content, colour parameters and total colour difference were least affected by the optimized power ultrasound treatment. Microbiological analysis also showed lower counts of aerobic mesophilic bacteria in ultrasonically treated honey than in thermally processed honey samples, as well as complete destruction of coliforms, yeasts and moulds. Thus, it was concluded that power ultrasound under the suggested operating conditions is an alternative nonthermal processing technique for honey. PMID:29540991