Fitting Prony Series To Data On Viscoelastic Materials
NASA Technical Reports Server (NTRS)
Hill, S. A.
1995-01-01
Improved method of fitting Prony series to data on viscoelastic materials involves use of least-squares optimization techniques. Method based on optimization techniques yields closer correlation with data than traditional method. Involves no assumptions regarding the gamma'(sub i) constants and higher-order terms, and provides for as many Prony terms as needed to represent higher-order subtleties in data. Curve-fitting problem treated as design-optimization problem and solved by use of partially-constrained-optimization techniques.
Optimization of segmented thermoelectric generator using Taguchi and ANOVA techniques.
Kishore, Ravi Anant; Sanghadasa, Mohan; Priya, Shashank
2017-12-01
Recent studies have demonstrated that segmented thermoelectric generators (TEGs) can operate over a large thermal gradient and thus provide better performance (reported efficiency up to 11%) than traditional TEGs comprising a single thermoelectric (TE) material. However, segmented TEGs are still in the early stages of development due to the inherent complexity in their design optimization and manufacturability. In this study, we demonstrate physics-based numerical techniques along with analysis of variance (ANOVA) and the Taguchi optimization method for optimizing the performance of segmented TEGs. We have considered a comprehensive set of design parameters, such as geometrical dimensions of the p-n legs, height of segmentation, hot-side temperature, and load resistance, in order to optimize the output power and efficiency of segmented TEGs. Using state-of-the-art TE material properties and appropriate statistical tools, we provide a near-optimum TEG configuration with only 25 experiments, as compared to the 3125 experiments needed by conventional full factorial optimization. The effect of environmental factors on the optimization of segmented TEGs is also studied. Taguchi results are validated against the results obtained using the traditional full factorial optimization technique, and a TEG configuration for simultaneous optimization of power and efficiency is obtained.
Acceleration techniques in the univariate Lipschitz global optimization
NASA Astrophysics Data System (ADS)
Sergeyev, Yaroslav D.; Kvasov, Dmitri E.; Mukhametzhanov, Marat S.; De Franco, Angela
2016-10-01
Univariate box-constrained Lipschitz global optimization problems are considered in this contribution. Geometric and information statistical approaches are presented. Novel, powerful local tuning and local improvement techniques are described, together with traditional ways to estimate the Lipschitz constant. The advantages of the presented local tuning and local improvement techniques are demonstrated using the operational characteristics approach for comparing deterministic global optimization algorithms on a class of 100 widely used test functions.
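For readers unfamiliar with geometric Lipschitz methods, the sketch below outlines a basic Piyavskii-Shubert-type iteration on an interval. It assumes the Lipschitz constant L is supplied a priori and omits the local tuning and local improvement accelerations that are the subject of the contribution; the test function is an arbitrary example.

```python
# Minimal sketch of a geometric (Piyavskii-Shubert-type) Lipschitz method for
# min f(x) on [a, b], assuming the Lipschitz constant L is known a priori.
import numpy as np

def lipschitz_minimize(f, a, b, L, n_iter=50):
    xs = [a, b]
    fs = [f(a), f(b)]
    for _ in range(n_iter):
        order = np.argsort(xs)
        xs = [xs[i] for i in order]
        fs = [fs[i] for i in order]
        best_lb, best_x = np.inf, None
        for i in range(len(xs) - 1):
            x1, x2, f1, f2 = xs[i], xs[i + 1], fs[i], fs[i + 1]
            x_hat = 0.5 * (x1 + x2) + (f1 - f2) / (2.0 * L)   # saw-tooth minimizer
            lb = 0.5 * (f1 + f2) - 0.5 * L * (x2 - x1)        # interval lower bound
            if lb < best_lb:
                best_lb, best_x = lb, x_hat
        xs.append(best_x)          # sample where the lower bound is smallest
        fs.append(f(best_x))
    k = int(np.argmin(fs))
    return xs[k], fs[k]

x_star, f_star = lipschitz_minimize(lambda x: np.sin(x) + 0.1 * x, 0.0, 10.0, L=1.2)
print(x_star, f_star)
```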
[Modified Misgav-Ladach technique at a tertiary hospital].
Martínez Ceccopieri, David Alejandro; Barrios Prieto, Ernesto; Martínez Ríos, David
2012-08-01
According to several studies from around the globe, the modified Misgav Ladach technique simplifies the surgical procedure for cesarean section, reduces operation time, costs, and complications, and optimizes obstetric and perinatal outcomes. The objective was to compare obstetric outcomes between patients operated on using the traditional cesarean section technique and those operated on using the modified Misgav Ladach technique. The study included 49 patients operated on using the traditional cesarean section technique and 47 patients operated on using the modified Misgav Ladach technique, and the outcomes of both surgical techniques were compared. The modified Misgav Ladach technique was associated with more benefits than the traditional technique: less surgical bleeding, shorter operation time, lower total analgesic doses, fewer rescue analgesic doses, and less need for more than one analgesic drug. The modified Misgav Ladach surgical technique was associated with better obstetric results than the traditional surgical technique; this concurs with the results reported by other national and international studies.
NASA Astrophysics Data System (ADS)
Varun, Sajja; Reddy, Kalakada Bhargav Bal; Vardhan Reddy, R. R. Vishnu
2016-09-01
In this research work, a multi-response optimization technique has been developed using traditional desirability analysis and the non-traditional particle swarm optimization technique (for different customers' priorities) in wire electrical discharge machining (WEDM). Monel 400 was selected as the work material for experimentation. The effects of key process parameters such as pulse-on time (TON), pulse-off time (TOFF), peak current (IP), and wire feed (WF) on material removal rate (MRR) and surface roughness (SR) in the WEDM operation were investigated. Further, the responses MRR and SR were modelled empirically through regression analysis. The developed models can be used by machinists to predict MRR and SR over a wide range of input parameters. The optimization of multiple responses has been carried out to satisfy the priorities of multiple users by using the Taguchi-desirability function method and the particle swarm optimization technique. Analysis of variance (ANOVA) is also applied to investigate the effect of the influential parameters. Finally, confirmation experiments were conducted for the optimal set of machining parameters, and the improvement was verified.
Mirapeix, J; Cobo, A; González, D A; López-Higuera, J M
2007-02-19
A new plasma spectroscopy analysis technique based on the generation of synthetic spectra by means of optimization processes is presented in this paper. The technique has been developed for its application in arc-welding quality assurance. The new approach has been checked through several experimental tests, yielding results in reasonably good agreement with the ones offered by the traditional spectroscopic analysis technique.
Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation
NASA Astrophysics Data System (ADS)
Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah
2018-04-01
The CNC machine is controlled by manipulating cutting parameters that directly influence the process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function. Nonetheless, industry still uses the traditional technique to obtain those values, largely because of a lack of knowledge of optimization techniques. Therefore, a simple yet easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers understand and determine the best optimal parameters for their turning operation. This new system consists of two stages: modelling and optimization. In modelling the relationship between input-output and in-process parameters, a hybrid of Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between the academic world and industry by introducing a simple yet easy-to-implement optimization technique. This novel optimization technique gives accurate results besides being the fastest technique.
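The following sketch illustrates the optimization stage only: a plain particle swarm search over normalized cutting parameters against a surrogate objective. The quadratic surrogate stands in for the trained Extreme Learning Machine model, and all parameter names, bounds, and PSO settings are illustrative assumptions rather than values from the study.

```python
# Minimal particle swarm optimization sketch for picking cutting parameters
# that minimize a surrogate objective (e.g., predicted surface roughness).
import numpy as np

rng = np.random.default_rng(0)

def surrogate_sr(x):
    # Hypothetical fitted response surface over (T_on, T_off, I_p, W_f), scaled to [0, 1].
    return np.sum((x - np.array([0.3, 0.7, 0.5, 0.2])) ** 2, axis=-1)

def pso(obj, dim, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    x = rng.random((n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), obj(x)
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, 0.0, 1.0)           # respect parameter bounds
        f = obj(x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, obj(g)

best_x, best_f = pso(surrogate_sr, dim=4)
print(best_x, best_f)
```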
Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang
2012-10-21
A new heuristic algorithm based on the so-called geometric distance sorting technique is proposed for solving the fluence map optimization with dose-volume constraints, which is one of the most essential tasks for inverse planning in IMRT. The framework of the proposed method is basically an iterative process that begins with a simple linearly constrained quadratic optimization model without any dose-volume constraints; dose constraints for the voxels violating the dose-volume constraints are then gradually added into the quadratic optimization model, step by step, until all the dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve the new linearly constrained quadratic program. To choose proper candidate voxels for the current round of constraint adding, a so-called geometric distance, defined in the transformed standard quadratic form of the fluence map optimization model, is used to guide the selection of voxels. The new geometric distance sorting technique largely reduces the unexpected increase of the objective function value that is inevitably caused by constraint adding, and can be regarded as an upgrade of the traditional dose sorting technique. A geometric explanation for the proposed method is given, and a proposition is proved to support the heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure stable iteration convergence. The new algorithm is tested on four cases, including head-and-neck, prostate, lung, and oropharyngeal cases, and compared with the algorithm based on the traditional dose sorting technique. Experimental results showed that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes, and is to some extent a more efficient optimization technique for choosing constraints. By integrating a smart constraint adding/deleting scheme within the iteration framework, the new technique builds up an improved algorithm for solving the fluence map optimization with dose-volume constraints.
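A schematic of the iterative constraint-adding loop described above is sketched below on a toy problem. It substitutes SciPy's SLSQP solver for the interior point method and ranks violating voxels by largest violation rather than by the paper's geometric distance, so it should be read as a structural outline only; the matrices and limits are invented.

```python
# Schematic sketch of iterative constraint adding on a toy fluence-map problem
# min ||D x - d_presc||^2, x >= 0, where D maps beamlet weights to voxel doses.
# Voxels violating a dose-volume limit are gradually promoted to hard constraints.
import numpy as np
from scipy.optimize import minimize, LinearConstraint

rng = np.random.default_rng(1)
n_voxels, n_beamlets = 40, 10
D = rng.random((n_voxels, n_beamlets))
d_presc = np.full(n_voxels, 1.0)
d_max, frac_allowed = 1.2, 0.25        # "no more than 25% of voxels above 1.2"

def solve(active):
    obj = lambda x: np.sum((D @ x - d_presc) ** 2)
    cons = [LinearConstraint(D[list(active)], -np.inf, d_max)] if active else []
    res = minimize(obj, np.zeros(n_beamlets), bounds=[(0, None)] * n_beamlets,
                   constraints=cons, method="SLSQP")
    return res.x

active = set()
x = solve(active)
for _ in range(n_voxels):                       # safety cap on iterations
    dose = D @ x
    over = np.flatnonzero(dose > d_max + 1e-6)
    if len(over) <= frac_allowed * n_voxels:
        break                                   # dose-volume constraint satisfied
    worst = over[np.argmax(dose[over])]         # add the most-violating voxel
    if worst in active:
        break                                   # within solver tolerance
    active.add(worst)
    x = solve(active)
print(len(active), "hard constraints added")
```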
Madu, C N; Quint, D J; Normolle, D P; Marsh, R B; Wang, E Y; Pierce, L J
2001-11-01
To delineate with computed tomography (CT) the anatomic regions containing the supraclavicular (SCV) and infraclavicular (IFV) nodal groups, to define the course of the brachial plexus, to estimate the actual radiation dose received by these regions in a series of patients treated in the traditional manner, and to compare these doses to those received with an optimized dosimetric technique. Twenty patients underwent contrast material-enhanced CT for the purpose of radiation therapy planning. The CT scans were used to study the location of the SCV and IFV nodal regions by outlining readily identifiable anatomic structures that define the nodal groups. The brachial plexus was also outlined using similar methods. Radiation therapy doses to the SCV and IFV were then estimated using traditional dose calculations and optimized planning. A repeated-measures analysis of covariance was used to compare the SCV and IFV depths and to compare the doses achieved with the traditional and optimized methods. Coverage by the 90% isodose surface was significantly decreased with traditional planning versus conformal planning as the depth to the SCV nodes increased (P < .001). Significantly decreased coverage by the 90% isodose surface was also demonstrated for traditional planning versus conformal planning with increasing IFV depth (P = .015). A linear correlation was found between brachial plexus depth and SCV depth up to 7 cm. Conformal optimized planning provided improved dosimetric coverage compared with standard techniques.
Liposomal Bupivacaine Injection Technique in Total Knee Arthroplasty.
Meneghini, R Michael; Bagsby, Deren; Ireland, Philip H; Ziemba-Davis, Mary; Lovro, Luke R
2017-01-01
Liposomal bupivacaine has gained popularity for pain control after total knee arthroplasty (TKA), yet its true efficacy remains unproven. We compared the efficacy of two different periarticular injection (PAI) techniques for liposomal bupivacaine with a conventional PAI control group. This retrospective cohort study compared consecutive patients undergoing TKA with a manufacturer-recommended, optimized injection technique for liposomal bupivacaine, a traditional injection technique for liposomal bupivacaine, and a conventional PAI of ropivacaine, morphine, and epinephrine. The optimized technique utilized a smaller gauge needle and more injection sites. Self-reported pain scores, rescue opioids, and side effects were compared. There were 41 patients in the liposomal bupivacaine optimized injection group, 60 in the liposomal bupivacaine traditional injection group, and 184 in the conventional PAI control group. PAI liposomal bupivacaine delivered via manufacturer-recommended technique offered no benefit over PAI ropivacaine, morphine, and epinephrine. Mean pain scores and the proportions reporting no or mild pain, time to first opioid, and amount of opioids consumed were not better with PAI liposomal bupivacaine compared with PAI ropivacaine, morphine, and epinephrine. The use of the manufacturer-recommended technique for PAI of liposomal bupivacaine does not offer benefit over a conventional, less expensive PAI during TKA.
Hernandez, Wilmar
2007-01-01
In this paper a survey of recent applications of optimal signal processing techniques to improve the performance of mechanical sensors is presented. A comparison between classical filters and optimal filters for automotive sensors is made, and the current state of the art in applying robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today's cars need is presented through several experimental results, which show that the fusion of intelligent sensors and optimal signal processing techniques is the clear way to go. However, the switch between the traditional methods of designing automotive sensors and the new ones cannot be done overnight because there are open research issues that have to be solved. This paper draws attention to one of these open research issues and tries to arouse researchers' interest in the fusion of intelligent sensors and optimal signal processing techniques.
Ahirwal, M K; Kumar, Anil; Singh, G K
2013-01-01
This paper explores the migration of adaptive filtering with swarm intelligence/evolutionary techniques in the field of electroencephalogram (EEG)/event-related potential (ERP) noise cancellation and extraction. A new approach is proposed in the form of a controlled search space to stabilize the randomness of swarm intelligence techniques, especially for the EEG signal. Swarm-based algorithms such as Particle Swarm Optimization, Artificial Bee Colony, and the Cuckoo Optimization Algorithm, with their variants, are implemented to design an optimized adaptive noise canceler. The proposed controlled-search-space technique is tested on each of the swarm intelligence techniques and is found to be more accurate and powerful. Adaptive noise cancelers with traditional algorithms such as the least-mean-square, normalized least-mean-square, and recursive least-squares algorithms are also implemented to compare the results. ERP signals such as simulated visual evoked potential, real visual evoked potential, and real sensorimotor evoked potential are used because of their physiological importance in various EEG studies. The average computational time and shape measure of the evolutionary techniques are observed to be 8.21E-01 s and 1.73E-01, respectively. Traditional algorithms take negligible time but are unable to offer good shape preservation of the ERP, with an average computational time of 1.41E-02 s and a shape measure of 2.60E+00.
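For context on what the swarm-based designs replace, here is a minimal least-mean-square (LMS) adaptive noise canceller, the simplest of the traditional baselines mentioned; the toy evoked potential and noise model are invented for the example, and a swarm method such as PSO would instead search the same filter-weight space globally.

```python
# Minimal LMS adaptive noise canceller sketch (the "traditional" baseline).
import numpy as np

def lms_anc(primary, reference, order=8, mu=0.01):
    """primary = signal + noise; reference = correlated noise only."""
    w = np.zeros(order)
    out = np.zeros_like(primary)
    for n in range(order, len(primary)):
        x = reference[n - order:n][::-1]     # most recent samples first
        noise_est = w @ x
        e = primary[n] - noise_est           # error = cleaned signal estimate
        w = w + 2.0 * mu * e * x             # LMS weight update
        out[n] = e
    return out

rng = np.random.default_rng(0)
t = np.arange(2000)
erp = np.exp(-((t - 1000) / 80.0) ** 2)      # toy "evoked potential"
noise_src = rng.standard_normal(t.size)
primary = erp + np.convolve(noise_src, [0.6, 0.3, 0.1], mode="same")
cleaned = lms_anc(primary, noise_src)
```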
ERIC Educational Resources Information Center
Schmitt, M. A.; And Others
1994-01-01
Compares traditional manure application planning techniques calculated to meet agronomic nutrient needs on a field-by-field basis with plans developed using computer-assisted linear programming optimization methods. Linear programming provided the most economical and environmentally sound manure application strategy. (Contains 15 references.) (MDH)
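A toy version of the linear-programming idea, using SciPy's linprog, is sketched below; all costs, nutrient rates, and field requirements are made-up numbers, and a real plan would add per-field spreading limits and multiple nutrients.

```python
# Toy sketch: allocate manure (tons) among fields to minimize hauling cost
# while meeting each field's agronomic nitrogen need within the total supply.
from scipy.optimize import linprog

cost_per_ton = [2.0, 3.5, 5.0]        # hauling cost to fields A, B, C
n_per_ton = 10.0                      # lb plant-available N per ton
n_need = [800.0, 600.0, 400.0]        # lb N needed by each field
supply = 200.0                        # tons of manure available

# Variables: tons applied to each field. Constraints written in <= form:
#   -n_per_ton * x_i <= -n_need_i   (meet agronomic need)
#   x_A + x_B + x_C  <= supply
A_ub = [[-n_per_ton, 0, 0], [0, -n_per_ton, 0], [0, 0, -n_per_ton], [1, 1, 1]]
b_ub = [-n_need[0], -n_need[1], -n_need[2], supply]
res = linprog(cost_per_ton, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x, res.fun)                 # tons per field and total cost
```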
Least squares polynomial chaos expansion: A review of sampling strategies
NASA Astrophysics Data System (ADS)
Hadigol, Mohammad; Doostan, Alireza
2018-04-01
As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for least-squares-based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), and Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures, are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison of the empirical performance of the selected sampling methods applied to three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for the problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms other sampling methods, especially when high-order ODE are employed and/or the oversampling ratio is low.
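The core least-squares PCE step the review is concerned with can be sketched in a few lines; the example below assumes a one-dimensional probabilists' Hermite basis, a plain Monte Carlo design, and a stand-in model, none of which are taken from the paper.

```python
# Bare-bones least-squares PCE: fit a 1-D Hermite chaos of order p to a model
# y(xi), xi ~ N(0, 1), by solving the overdetermined system in least squares.
import numpy as np
from numpy.polynomial.hermite_e import hermevander

model = lambda xi: np.exp(0.3 * xi) + 0.1 * xi**2   # stand-in for the true model
p, oversampling = 4, 3
n_samples = oversampling * (p + 1)

rng = np.random.default_rng(0)
xi = rng.standard_normal(n_samples)                 # Monte Carlo design
Psi = hermevander(xi, p)                            # probabilists' Hermite basis
coeffs, *_ = np.linalg.lstsq(Psi, model(xi), rcond=None)
print(coeffs)           # PCE coefficients; coeffs[0] approximates the mean
```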
Optimizing spacecraft design - optimization engine development : progress and plans
NASA Technical Reports Server (NTRS)
Cornford, Steven L.; Feather, Martin S.; Dunphy, Julia R; Salcedo, Jose; Menzies, Tim
2003-01-01
At JPL and NASA, a process has been developed to perform life cycle risk management. This process requires users to identify: goals and objectives to be achieved (and their relative priorities), the various risks to achieving those goals and objectives, and options for risk mitigation (prevention, detection ahead of time, and alleviation). Risks are broadly defined to include the risk of failing to design a system with adequate performance, compatibility and robustness in addition to more traditional implementation and operational risks. The options for mitigating these different kinds of risks can include architectural and design choices, technology plans and technology back-up options, test-bed and simulation options, engineering models and hardware/software development techniques and other more traditional risk reduction techniques.
A Data-Driven Solution for Performance Improvement
NASA Technical Reports Server (NTRS)
2002-01-01
Marketed as the "Software of the Future," Optimal Engineering Systems' P.I. EXPERT(TM) technology offers statistical process control and optimization techniques that are critical to businesses looking to restructure or accelerate operations in order to gain a competitive edge. Kennedy Space Center granted Optimal Engineering Systems the funding and aid necessary to develop a prototype of the process monitoring and improvement software. Completion of this prototype demonstrated that it was possible to integrate traditional statistical quality assurance tools with robust optimization techniques in a user-friendly format that is visually compelling. Using an expert system knowledge base, the software allows the user to determine objectives, capture constraints and out-of-control processes, predict results, and compute optimal process settings.
Wilson Dslash Kernel From Lattice QCD Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joo, Balint; Smelyanskiy, Mikhail; Kalamkar, Dhiraj D.
2015-07-01
Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in Theoretical Nuclear and High Energy Physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal to illustrate several optimization techniques. In this chapter we will detail our work in optimizing the Wilson-Dslash kernels for Intel Xeon Phi, however, as we will show the technique gives excellent performance on regular Xeon Architecture as well.
The analytical representation of viscoelastic material properties using optimization techniques
NASA Technical Reports Server (NTRS)
Hill, S. A.
1993-01-01
This report presents a technique to model viscoelastic material properties with a function of the form of the Prony series. Generally, the method employed to determine the function constants requires assuming values for the exponential constants of the function and then resolving the remaining constants through linear least-squares techniques. The technique presented here allows all the constants to be determined analytically through optimization techniques. This technique is employed in a computer program named PRONY and makes use of a commercially available optimization tool developed by VMA Engineering, Inc. The PRONY program was used to compare the technique against previously determined models for the solid rocket motor TP-H1148 propellant and V747-75 Viton fluoroelastomer. In both cases, the optimization technique generated functions that modeled the test data with at least an order of magnitude better correlation. This technique has demonstrated the capability to use small or large data sets and to use data sets that have uniformly or nonuniformly spaced data pairs. The reduction of experimental data to accurate mathematical models is a vital part of most scientific and engineering research. This technique of regression through optimization can be applied to other mathematical models that are difficult to fit to experimental data through traditional regression techniques.
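As an illustration of the fitting approach this report describes (not the PRONY program itself), the following minimal Python sketch lets a least-squares optimizer determine all Prony constants, including the exponential time constants, simultaneously; the synthetic relaxation data, the two-term series, and the use of SciPy's least_squares are assumptions made for the example.

```python
# Illustrative sketch: fit a Prony series
#   G(t) = g_inf + sum_i g_i * exp(-t / tau_i)
# to relaxation data, letting the optimizer determine *all* constants,
# including the exponential time constants tau_i.
import numpy as np
from scipy.optimize import least_squares

def prony(params, t, n_terms):
    g_inf = params[0]
    g = params[1:1 + n_terms]
    tau = params[1 + n_terms:]
    return g_inf + np.sum(g[:, None] * np.exp(-t[None, :] / tau[:, None]), axis=0)

def residuals(params, t, data, n_terms):
    return prony(params, t, n_terms) - data

# Synthetic "measured" relaxation modulus (placeholder for real test data).
t = np.logspace(-2, 3, 60)
data = 1.0 + 5.0 * np.exp(-t / 0.1) + 2.0 * np.exp(-t / 50.0)

n_terms = 2
x0 = np.concatenate(([data.min()], np.ones(n_terms), np.logspace(-1, 2, n_terms)))
fit = least_squares(residuals, x0, bounds=(1e-8, np.inf), args=(t, data, n_terms))
print(fit.x)  # [g_inf, g_1..g_n, tau_1..tau_n]
```

A real application would replace the synthetic data with measured relaxation moduli and choose the number of terms from the data.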
NASA Technical Reports Server (NTRS)
Duong, T. A.
2004-01-01
In this paper, we present a new, simple, and optimized hardware-architecture sequential learning technique for adaptive Principal Component Analysis (PCA), which will help optimize the hardware implementation in VLSI and overcome the difficulties of traditional gradient descent in learning convergence and hardware implementation.
OpenMP Parallelization and Optimization of Graph-Based Machine Learning Algorithms
Meng, Zhaoyi; Koniges, Alice; He, Yun Helen; ...
2016-09-21
In this paper, we investigate the OpenMP parallelization and optimization of two novel data classification algorithms. The new algorithms are based on graph and PDE solution techniques and provide significant accuracy and performance advantages over traditional data classification algorithms in serial mode. The methods leverage the Nystrom extension to calculate eigenvalue/eigenvectors of the graph Laplacian and this is a self-contained module that can be used in conjunction with other graph-Laplacian based methods such as spectral clustering. We use performance tools to collect the hotspots and memory access of the serial codes and use OpenMP as the parallelization language to parallelize the most time-consuming parts. Where possible, we also use library routines. We then optimize the OpenMP implementations and detail the performance on traditional supercomputer nodes (in our case a Cray XC30), and test the optimization steps on emerging testbed systems based on Intel's Knights Corner and Landing processors. We show both performance improvement and strong scaling behavior. Finally, a large number of optimization techniques and analyses are necessary before the algorithm reaches almost ideal scaling.
Simultaneous analysis and design
NASA Technical Reports Server (NTRS)
Haftka, R. T.
1984-01-01
Optimization techniques are increasingly being used for performing nonlinear structural analysis. The development of element by element (EBE) preconditioned conjugate gradient (CG) techniques is expected to extend this trend to linear analysis. Under these circumstances the structural design problem can be viewed as a nested optimization problem. There are computational benefits to treating this nested problem as a large single optimization problem. The response variables (such as displacements) and the structural parameters are all treated as design variables in a unified formulation which performs simultaneously the design and analysis. Two examples are used for demonstration. A seventy-two bar truss is optimized subject to linear stress constraints and a wing box structure is optimized subject to nonlinear collapse constraints. Both examples show substantial computational savings with the unified approach as compared to the traditional nested approach.
Truss Optimization for a Manned Nuclear Electric Space Vehicle using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Benford, Andrew; Tinker, Michael L.
2004-01-01
The purpose of this paper is to utilize the genetic algorithm (GA) optimization method for structural design of a nuclear propulsion vehicle. Genetic algorithms provide a guided, random search technique that mirrors biological adaptation. To verify the GA capabilities, other traditional optimization methods were used to generate results for comparison to the GA results, first for simple two-dimensional structures, and then for full-scale three-dimensional truss designs.
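A bare-bones genetic-algorithm sizing loop of the kind being compared here is sketched below; the member forces are random placeholders for a finite element analysis, and the GA operators and penalty weight are generic choices, not those used in the paper.

```python
# Tiny genetic-algorithm sketch for sizing optimization: minimize total member
# mass with a penalty when a (placeholder) stress check fails.
import numpy as np

rng = np.random.default_rng(0)
n_members, pop_size, n_gen = 10, 40, 80
a_min, a_max, sigma_allow = 0.1, 10.0, 25.0
lengths = rng.uniform(1.0, 3.0, n_members)
member_force = rng.uniform(5.0, 50.0, n_members)     # stand-in for FEA results

def fitness(areas):
    mass = np.sum(lengths * areas)
    stress = member_force / areas
    penalty = 1e3 * np.sum(np.maximum(stress / sigma_allow - 1.0, 0.0))
    return mass + penalty

pop = rng.uniform(a_min, a_max, (pop_size, n_members))
for _ in range(n_gen):
    f = np.array([fitness(ind) for ind in pop])
    # Tournament selection, uniform crossover, Gaussian mutation.
    idx = rng.integers(pop_size, size=(pop_size, 2))
    parents = pop[np.where(f[idx[:, 0]] < f[idx[:, 1]], idx[:, 0], idx[:, 1])]
    mates = parents[rng.permutation(pop_size)]
    mask = rng.random(pop.shape) < 0.5
    children = np.where(mask, parents, mates)
    children += rng.normal(0.0, 0.2, pop.shape) * (rng.random(pop.shape) < 0.1)
    pop = np.clip(children, a_min, a_max)
best = pop[np.argmin([fitness(ind) for ind in pop])]
print(best)
```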
EIT image regularization by a new Multi-Objective Simulated Annealing algorithm.
Castro Martins, Thiago; Sales Guerra Tsuzuki, Marcos
2015-01-01
Multi-Objective Optimization can be used to produce regularized Electrical Impedance Tomography (EIT) images where the weight of the regularization term is not known a priori. This paper proposes a novel Multi-Objective Optimization algorithm based on Simulated Annealing tailored for EIT image reconstruction. Images are reconstructed from experimental data and compared with images from other Multi and Single Objective optimization methods. A significant performance enhancement from traditional techniques can be inferred from the results.
Nuclear Electric Vehicle Optimization Toolset (NEVOT)
NASA Technical Reports Server (NTRS)
Tinker, Michael L.; Steincamp, James W.; Stewart, Eric T.; Patton, Bruce W.; Pannell, William P.; Newby, Ronald L.; Coffman, Mark E.; Kos, Larry D.; Qualls, A. Lou; Greene, Sherrell
2004-01-01
The Nuclear Electric Vehicle Optimization Toolset (NEVOT) optimizes the design of all major nuclear electric propulsion (NEP) vehicle subsystems for a defined mission within constraints and optimization parameters chosen by a user. The tool uses a genetic algorithm (GA) search technique to combine subsystem designs and evaluate the fitness of the integrated design to fulfill a mission. The fitness of an individual is used within the GA to determine its probability of survival through successive generations in which the designs with low fitness are eliminated and replaced with combinations or mutations of designs with higher fitness. The program can find optimal solutions for different sets of fitness metrics without modification and can create and evaluate vehicle designs that might never be considered through traditional design techniques. It is anticipated that the flexible optimization methodology will expand present knowledge of the design trade-offs inherent in designing nuclear powered space vehicles and lead to improved NEP designs.
Direct aperture optimization: a turnkey solution for step-and-shoot IMRT.
Shepard, D M; Earl, M A; Li, X A; Naqvi, S; Yu, C
2002-06-01
IMRT treatment plans for step-and-shoot delivery have traditionally been produced through the optimization of intensity distributions (or maps) for each beam angle. The optimization step is followed by the application of a leaf-sequencing algorithm that translates each intensity map into a set of deliverable aperture shapes. In this article, we introduce an automated planning system in which we bypass the traditional intensity optimization, and instead directly optimize the shapes and the weights of the apertures. We call this approach "direct aperture optimization." This technique allows the user to specify the maximum number of apertures per beam direction, and hence provides significant control over the complexity of the treatment delivery. This is possible because the machine dependent delivery constraints imposed by the MLC are enforced within the aperture optimization algorithm rather than in a separate leaf-sequencing step. The leaf settings and the aperture intensities are optimized simultaneously using a simulated annealing algorithm. We have tested direct aperture optimization on a variety of patient cases using the EGS4/BEAM Monte Carlo package for our dose calculation engine. The results demonstrate that direct aperture optimization can produce highly conformal step-and-shoot treatment plans using only three to five apertures per beam direction. As compared with traditional optimization strategies, our studies demonstrate that direct aperture optimization can result in a significant reduction in both the number of beam segments and the number of monitor units. Direct aperture optimization therefore produces highly efficient treatment deliveries that maintain the full dosimetric benefits of IMRT.
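To make the idea concrete, the sketch below runs a simulated-annealing search over aperture weights against a toy quadratic dose objective; unlike the actual direct aperture optimization, it keeps the aperture shapes fixed and ignores MLC delivery constraints, and the matrices are random placeholders.

```python
# Schematic simulated-annealing sketch in the spirit of direct aperture
# optimization: optimize the weights of a fixed, small set of apertures.
# The real method also perturbs MLC leaf positions within machine limits.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_apertures = 50, 5
A = rng.random((n_voxels, n_apertures))      # dose per unit weight of each aperture
d_presc = np.full(n_voxels, 1.0)

def cost(w):
    return np.sum((A @ w - d_presc) ** 2)

w = np.full(n_apertures, 0.2)
best_w, best_c = w.copy(), cost(w)
T = 1.0
for step in range(5000):
    trial = np.maximum(w + rng.normal(0.0, 0.05, n_apertures), 0.0)  # weights >= 0
    dc = cost(trial) - cost(w)
    if dc < 0 or rng.random() < np.exp(-dc / T):
        w = trial
        if cost(w) < best_c:
            best_w, best_c = w.copy(), cost(w)
    T *= 0.999                                # geometric cooling schedule
print(best_w, best_c)
```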
Additive manufacturing: Toward holistic design
Jared, Bradley H.; Aguilo, Miguel A.; Beghini, Lauren L.; ...
2017-03-18
Here, additive manufacturing offers unprecedented opportunities to design complex structures optimized for performance envelopes inaccessible under conventional manufacturing constraints. Additive processes also promote realization of engineered materials with microstructures and properties that are impossible via traditional synthesis techniques. Enthused by these capabilities, optimization design tools have experienced a recent revival. The current capabilities of additive processes and optimization tools are summarized briefly, while an emerging opportunity is discussed to achieve a holistic design paradigm whereby computational tools are integrated with stochastic process and material awareness to enable the concurrent optimization of design topologies, material constructs and fabrication processes.
Gálvez, Akemi; Iglesias, Andrés
2013-01-01
Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.
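The linear subproblem referred to above (solving for the spline coefficients once the parameterization and knots are fixed) can be illustrated with SciPy's make_lsq_spline; the chord-length parameterization and the single interior knot below are simple stand-ins for the firefly-optimized parameterization and the refined knot vector described in the paper.

```python
# Sketch of the linear least-squares step: fixed parameters and knots in,
# B-spline control points out.
import numpy as np
from scipy.interpolate import make_lsq_spline

pts = np.array([[0, 0], [1, 2], [2, 3], [4, 3.5], [5, 2], [6, 0], [7, -1]], float)
k = 3                                                   # cubic B-spline
u = np.concatenate(([0.0], np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))))
u /= u[-1]                                              # chord-length parameters in [0, 1]
interior = np.array([0.5])                              # one interior knot (coarse choice)
t = np.concatenate((np.zeros(k + 1), interior, np.ones(k + 1)))
spline = make_lsq_spline(u, pts, t, k=k)                # least-squares fit
print(spline.c)                                         # (n_ctrl, 2) control points
```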
Evolutionary Optimization of a Geometrically Refined Truss
NASA Technical Reports Server (NTRS)
Hull, P. V.; Tinker, M. L.; Dozier, G. V.
2007-01-01
Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem. Predominantly traditional optimization theory is applied to this problem. The cross-sectional area of each member is optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has been previously applied to compliant mechanism design. This technique demonstrates a method that combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation: genetic algorithms and differential evolution to successfully optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.
Comparison of Structural Optimization Techniques for a Nuclear Electric Space Vehicle
NASA Technical Reports Server (NTRS)
Benford, Andrew
2003-01-01
The purpose of this paper is to utilize the optimization method of genetic algorithms (GA) for truss design on a nuclear propulsion vehicle. Genetic Algorithms are a guided, random search that mirrors Darwin's theory of natural selection and survival of the fittest. To verify the GA's capabilities, other traditional optimization methods were used to compare the results obtained by the GAs, first on simple 2-D structures, and eventually on full-scale 3-D truss designs.
Automated optimization techniques for aircraft synthesis
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1976-01-01
Application of numerical optimization techniques to automated conceptual aircraft design is examined. These methods are shown to be a general and efficient way to obtain quantitative information for evaluating alternative new vehicle projects. Fully automated design is compared with traditional point design methods and time and resource requirements for automated design are given. The NASA Ames Research Center aircraft synthesis program (ACSYNT) is described with special attention to calculation of the weight of a vehicle to fly a specified mission. The ACSYNT procedures for automatically obtaining sensitivity of the design (aircraft weight, performance and cost) to various vehicle, mission, and material technology parameters are presented. Examples are used to demonstrate the efficient application of these techniques.
Optimal Wastewater Loading under Conflicting Goals and Technology Limitations in a Riverine System.
Rafiee, Mojtaba; Lyon, Steve W; Zahraie, Banafsheh; Destouni, Georgia; Jaafarzadeh, Nemat
2017-03-01
This paper investigates a novel simulation-optimization (S-O) framework for identifying optimal treatment levels and treatment processes for multiple wastewater dischargers to rivers. A commonly used water quality simulation model, Qual2K, was linked to a Genetic Algorithm optimization model for exploration of relevant fuzzy objective-function formulations for addressing imprecision and conflicting goals of pollution control agencies and various dischargers. Results showed a dynamic flow dependence of optimal wastewater loading with good convergence to near global optimum. Explicit considerations of real-world technological limitations, which were developed here in a new S-O framework, led to better compromise solutions between conflicting goals than those identified within traditional S-O frameworks. The newly developed framework, in addition to being more technologically realistic, is also less complicated and converges on solutions more rapidly than traditional frameworks. This technique marks a significant step forward for development of holistic, riverscape-based approaches that balance the conflicting needs of the stakeholders.
Trajectory Optimization for Missions to Small Bodies with a Focus on Scientific Merit.
Englander, Jacob A; Vavrina, Matthew A; Lim, Lucy F; McFadden, Lucy A; Rhoden, Alyssa R; Noll, Keith S
2017-01-01
Trajectory design for missions to small bodies is tightly coupled both with the selection of targets for a mission and with the choice of spacecraft power, propulsion, and other hardware. Traditional methods of trajectory optimization have focused on finding the optimal trajectory for an a priori selection of destinations and spacecraft parameters. Recent research has expanded the field of trajectory optimization to multidisciplinary systems optimization that includes spacecraft parameters. The logical next step is to extend the optimization process to include target selection based not only on engineering figures of merit but also scientific value. This paper presents a new technique to solve the multidisciplinary mission optimization problem for small-bodies missions, including classical trajectory design, the choice of spacecraft power and propulsion systems, and also the scientific value of the targets. This technique, when combined with modern parallel computers, enables a holistic view of the small body mission design process that previously required iteration among several different design processes.
Ship Trim Optimization: Assessment of Influence of Trim on Resistance of MOERI Container Ship
Duan, Wenyang
2014-01-01
Environmental issues and rising fuel prices necessitate better energy efficiency in all sectors. The shipping industry is a stakeholder in environmental issues, being responsible for approximately 3% of global CO2 emissions, 14-15% of global NOX emissions, and 16% of global SOX emissions. Ship trim optimization has gained enormous momentum in recent years as an effective operational measure for better energy efficiency and reduced emissions. Ship trim optimization analysis has traditionally been done through tow-tank testing for a specific hullform. Computational techniques are increasingly popular in ship hydrodynamics applications. The purpose of this study is to present MOERI container ship (KCS) hull trim optimization by employing computational methods. The computed KCS hull total resistance, trim, and sinkage values in the even keel condition are compared with experimental values and found to be in reasonable agreement. The agreement validates that the mesh, boundary conditions, and solution techniques are correct. The same mesh, boundary conditions, and solution techniques are used to obtain resistance values in different trim conditions at Fn = 0.2274. Based on the attained results, an optimum trim is suggested. This research serves as a foundation for employing computational techniques for ship trim optimization.
NASA Astrophysics Data System (ADS)
Cheng, Jie; Qian, Zhaogang; Irani, Keki B.; Etemad, Hossein; Elta, Michael E.
1991-03-01
To meet the ever-increasing demand of the rapidly growing semiconductor manufacturing industry, it is critical to have a comprehensive methodology integrating techniques for process optimization, real-time monitoring, and adaptive process control. To this end, we have developed an integrated knowledge-based approach combining the latest expert system technology, machine learning methods, and traditional statistical process control (SPC) techniques. This knowledge-based approach is advantageous in that it makes it possible for the task of process optimization and adaptive control to be performed consistently and predictably. Furthermore, this approach can be used to construct high-level and qualitative descriptions of processes and thus make the process behavior easy to monitor, predict, and control. Two software packages, RIST (Rule Induction and Statistical Testing) and KARSM (Knowledge Acquisition from Response Surface Methodology), have been developed and incorporated with two commercially available packages, G2 (a real-time expert system) and ULTRAMAX (a tool for sequential process optimization).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sitaraman, Hariswaran; Grout, Ray W
This work investigates novel algorithm designs and optimization techniques for restructuring chemistry integrators in zero and multidimensional combustion solvers, which can then be effectively used on the emerging generation of Intel's Many Integrated Core/Xeon Phi processors. These processors offer increased computing performance via a large number of lightweight cores at relatively lower clock speeds compared to traditional processors (e.g. Intel Sandybridge/Ivybridge) used in current supercomputers. This style of processor can be productively used for chemistry integrators that form a costly part of computational combustion codes, in spite of their relatively lower clock speeds. Performance commensurate with traditional processors is achieved here through the combination of careful memory layout, exposing multiple levels of fine grain parallelism and through extensive use of vendor supported libraries (Cilk Plus and Math Kernel Libraries). Important optimization techniques for efficient memory usage and vectorization have been identified and quantified. These optimizations resulted in a factor of ~ 3 speed-up using Intel 2013 compiler and ~ 1.5 using Intel 2017 compiler for large chemical mechanisms compared to the unoptimized version on the Intel Xeon Phi. The strategies, especially with respect to memory usage and vectorization, should also be beneficial for general purpose computational fluid dynamics codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rian, D.T.; Hage, A.
1994-12-31
A numerical simulator is often used as a reservoir management tool. One of its main purposes is to aid in the evaluation of the number of wells, well locations, and start times for wells. Traditionally, the optimization of a field development is done by a manual trial-and-error process. In this paper, an example of an automated technique is given. The core of the automation process is the reservoir simulator Frontline. Frontline is based on front tracking techniques, which makes it fast and accurate compared to traditional finite difference simulators. Due to its CPU efficiency, the simulator has been coupled with an optimization module, which enables automatic optimization of the location of wells, the number of wells, and start-up times. The simulator was used as an alternative method in the evaluation of waterflooding in a North Sea fractured chalk reservoir. Since Frontline, in principle, is 2D, Buckley-Leverett pseudo functions were used to represent the third dimension. The areal full field simulation model was run with up to 25 wells for 20 years in less than one minute of Vax 9000 CPU time. The automatic Frontline evaluation indicated that a peripheral waterflood could double incremental recovery compared to a central pattern drive.
NASA Astrophysics Data System (ADS)
Nawi, Nazri Mohd.; Khan, Abdullah; Rehman, M. Z.
2015-05-01
Nature-inspired metaheuristic techniques provide derivative-free solutions to complex problems. One of the latest additions to this group of nature-inspired optimization procedures is the Cuckoo Search (CS) algorithm. Artificial Neural Network (ANN) training is an optimization task, since it is desired to find the optimal weight set of a neural network in the training process. Traditional training algorithms have some limitations, such as getting trapped in local minima and a slow convergence rate. This study proposes a new technique, CSLM, which combines the best features of two known algorithms, back-propagation (BP) and the Levenberg-Marquardt (LM) algorithm, to improve the convergence speed of ANN training and avoid the local minima problem. Some selected benchmark classification datasets are used for simulation. The experimental results show that the proposed cuckoo search with Levenberg-Marquardt algorithm performs better than the other algorithms used in this study.
NASA Astrophysics Data System (ADS)
Asoodeh, Mojtaba; Bagheripour, Parisa; Gholami, Amin
2015-06-01
Free fluid porosity and rock permeability, undoubtedly the most critical parameters of a hydrocarbon reservoir, can be obtained by processing the nuclear magnetic resonance (NMR) log. Unlike conventional well logs (CWLs), NMR logging is very expensive and time-consuming. Therefore, the idea of synthesizing the NMR log from CWLs holds great appeal for reservoir engineers. For this purpose, three optimization strategies are followed. Firstly, an artificial neural network (ANN) is optimized by virtue of a hybrid genetic algorithm-pattern search (GA-PS) technique; then a fuzzy logic (FL) model is optimized by means of GA-PS; and eventually an alternating conditional expectation (ACE) model is constructed using the concept of a committee machine to combine the outputs of the optimized and non-optimized FL and ANN models. Results indicated that optimization of the traditional ANN and FL models using the GA-PS technique significantly enhances their performance. Furthermore, the ACE committee of the aforementioned models produces more accurate and reliable results compared with any single model performing alone.
NASA Technical Reports Server (NTRS)
Tinker, Michael L.; Steincamp, James W.; Stewart, Eric T.; Patton, Bruce W.; Pannell, William P.; Newby, Ronald L.; Coffman, Mark E.; Qualls, A. L.; Bancroft, S.; Molvik, Greg
2003-01-01
The Nuclear Electric Vehicle Optimization Toolset (NEVOT) optimizes the design of all major Nuclear Electric Propulsion (NEP) vehicle subsystems for a defined mission within constraints and optimization parameters chosen by a user. The tool uses a Genetic Algorithm (GA) search technique to combine subsystem designs and evaluate the fitness of the integrated design to fulfill a mission. The fitness of an individual is used within the GA to determine its probability of survival through successive generations in which the designs with low fitness are eliminated and replaced with combinations or mutations of designs with higher fitness. The program can find optimal solutions for different sets of fitness metrics without modification and can create and evaluate vehicle designs that might never be conceived of through traditional design techniques. It is anticipated that the flexible optimization methodology will expand present knowledge of the design trade-offs inherent in designing nuclear powered space vehicles and lead to improved NEP designs.
Optimal design of solidification processes
NASA Technical Reports Server (NTRS)
Dantzig, Jonathan A.; Tortorelli, Daniel A.
1991-01-01
An optimal design algorithm is presented for the analysis of general solidification processes, and is demonstrated for the growth of GaAs crystals in a Bridgman furnace. The system is optimal in the sense that the prespecified temperature distribution in the solidifying materials is obtained to maximize product quality. The optimization uses traditional numerical programming techniques which require the evaluation of cost and constraint functions and their sensitivities. The finite element method is incorporated to analyze the crystal solidification problem, evaluate the cost and constraint functions, and compute the sensitivities. These techniques are demonstrated in the crystal growth application by determining an optimal furnace wall temperature distribution to obtain the desired temperature profile in the crystal, and hence to maximize the crystal's quality. Several numerical optimization algorithms are studied to determine the proper convergence criteria, effective 1-D search strategies, appropriate forms of the cost and constraint functions, etc. In particular, we incorporate the conjugate gradient and quasi-Newton methods for unconstrained problems. The efficiency and effectiveness of each algorithm is presented in the example problem.
A Framework for the Optimization of Discrete-Event Simulation Models
NASA Technical Reports Server (NTRS)
Joshi, B. D.; Unal, R.; White, N. H.; Morris, W. D.
1996-01-01
With the growing use of computer modeling and simulation in all aspects of engineering, the scope of traditional optimization has to be extended to include simulation models. Some unique aspects have to be addressed while optimizing via stochastic simulation models. The optimization procedure has to explicitly account for the randomness inherent in the stochastic measures predicted by the model. This paper outlines a general-purpose framework for optimization of terminating discrete-event simulation models. The methodology combines a chance-constraint approach for problem formulation with standard statistical estimation and analysis techniques. The applicability of the optimization framework is illustrated by minimizing the operation and support resources of a launch vehicle through a simulation model.
Byron, Kelly; Bluvshtein, Vlad; Lucke, Lori
2013-01-01
Transcutaneous energy transmission systems (TETS) wirelessly transmit power through the skin. TETS is particularly desirable for ventricular assist devices (VAD), which currently require cables through the skin to power the implanted pump. Optimizing the inductive link of the TET system is a multi-parameter problem. Most current techniques to optimize the design simplify the problem by combining parameters leading to sub-optimal solutions. In this paper we present an optimization method using a genetic algorithm to handle a larger set of parameters, which leads to a more optimal design. Using this approach, we were able to increase efficiency while also reducing power variability in a prototype, compared to a traditional manual design method.
NASA Technical Reports Server (NTRS)
Olds, John Robert; Walberg, Gerald D.
1993-01-01
Multidisciplinary design optimization (MDO) is an emerging discipline within aerospace engineering. Its goal is to bring structure and efficiency to the complex design process associated with advanced aerospace launch vehicles. Aerospace vehicles generally require input from a variety of traditional aerospace disciplines - aerodynamics, structures, performance, etc. As such, traditional optimization methods cannot always be applied. Several multidisciplinary techniques and methods were proposed as potentially applicable to this class of design problem. Among the candidate options are calculus-based (or gradient-based) optimization schemes and parametric schemes based on design of experiments theory. A brief overview of several applicable multidisciplinary design optimization methods is included. Methods from the calculus-based class and the parametric class are reviewed, but the research application reported focuses on methods from the parametric class. A vehicle of current interest was chosen as a test application for this research. The rocket-based combined-cycle (RBCC) single-stage-to-orbit (SSTO) launch vehicle combines elements of rocket and airbreathing propulsion in an attempt to produce an attractive option for launching medium sized payloads into low earth orbit. The RBCC SSTO presents a particularly difficult problem for traditional one-variable-at-a-time optimization methods because of the lack of an adequate experience base and the highly coupled nature of the design variables. MDO, however, with its structured approach to design, is well suited to this problem. The results of the application of Taguchi methods, central composite designs, and response surface methods to the design optimization of the RBCC SSTO are presented. Attention is given to the aspect of Taguchi methods that attempts to locate a 'robust' design - that is, a design that is least sensitive to uncontrollable influences on the design. Near-optimum minimum dry weight solutions are determined for the vehicle. A summary and evaluation of the various parametric MDO methods employed in the research are included. Recommendations for additional research are provided.
Cui, Xiao-Yan; Huo, Zhong-Gang; Xin, Zhong-Hua; Tian, Xiao; Zhang, Xiao-Dong
2013-07-01
Three-dimensional (3D) copying of artificial ears and pistol printing are pushing the laser three-dimensional copying technique to a new stage. Laser three-dimensional scanning is a fresh field in laser application and plays an irreplaceable part in three-dimensional copying. Its accuracy is the highest among all present copying techniques. Reproducibility degree marks the agreement of the copied object with the original object in geometry, and is the most important index property in the laser three-dimensional copying technique. In the present paper, the error of laser three-dimensional copying was analyzed. The conclusion is that the data processing of the point cloud from laser scanning is the key technique for reducing the error and increasing the reproducibility degree. The main innovation of this paper is as follows. On the basis of traditional ant colony optimization, the rational ant colony optimization algorithm proposed by the authors was applied to laser three-dimensional copying as a new algorithm and was put into practice. Compared with the customary algorithm, the rational ant colony optimization algorithm shows distinct advantages in the data processing of laser three-dimensional copying, reducing the error and increasing the reproducibility degree of the copy.
NASA Astrophysics Data System (ADS)
Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.
2011-08-01
This paper proposes a novel optimization approach for the least cost design of looped water distribution systems (WDSs). Three distinct steps are involved in the proposed optimization approach. In the first step, the shortest-distance tree within the looped network is identified using the Dijkstra graph theory algorithm, for which an extension is proposed to find the shortest-distance tree for multisource WDSs. In the second step, a nonlinear programming (NLP) solver is employed to optimize the pipe diameters for the shortest-distance tree (chords of the shortest-distance tree are allocated the minimum allowable pipe sizes). Finally, in the third step, the original looped water network is optimized using a differential evolution (DE) algorithm seeded with diameters in the proximity of the continuous pipe sizes obtained in step two. As such, the proposed optimization approach combines the traditional deterministic optimization technique of NLP with the emerging evolutionary algorithm DE via the proposed network decomposition. The proposed methodology has been tested on four looped WDSs with the number of decision variables ranging from 21 to 454. Results obtained show the proposed approach is able to find optimal solutions with significantly less computational effort than other optimization techniques.
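Step one of the approach, identifying the shortest-distance tree, can be illustrated with SciPy's Dijkstra routine; the six-node network and pipe lengths below are invented for the example, and the remaining steps (NLP sizing of the tree and the seeded differential evolution) are not shown.

```python
# Step-one sketch: find the shortest-distance tree of a looped network from
# its source node with Dijkstra's algorithm, keeping each node's predecessor
# link; the remaining pipes are the chords given minimum allowable sizes.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

# Symmetric pipe-length matrix of a small looped network (0 = no pipe).
L = np.array([
    [0, 100, 0, 120, 0, 0],
    [100, 0, 90, 0, 150, 0],
    [0, 90, 0, 0, 0, 80],
    [120, 0, 0, 0, 110, 0],
    [0, 150, 0, 110, 0, 95],
    [0, 0, 80, 0, 95, 0]], float)
dist, pred = dijkstra(csr_matrix(L), directed=False, indices=0,
                      return_predecessors=True)
tree_links = [(int(pred[j]), j) for j in range(L.shape[0]) if pred[j] >= 0]
print(tree_links)     # pipes in the shortest-distance tree; the rest are chords
```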
Multidimensional optimal droop control for wind resources in DC microgrids
NASA Astrophysics Data System (ADS)
Bunker, Kaitlyn J.
Two important and upcoming technologies, microgrids and electricity generation from wind resources, are increasingly being combined. Various control strategies can be implemented, and droop control provides a simple option without requiring communication between microgrid components. Eliminating the single source of potential failure around the communication system is especially important in remote, islanded microgrids, which are considered in this work. However, traditional droop control does not allow the microgrid to utilize much of the power available from the wind. This dissertation presents a novel droop control strategy, which implements a droop surface in higher dimension than the traditional strategy. The droop control relationship then depends on two variables: the dc microgrid bus voltage, and the wind speed at the current time. An approach for optimizing this droop control surface in order to meet a given objective, for example utilizing all of the power available from a wind resource, is proposed and demonstrated. Various cases are used to test the proposed optimal high dimension droop control method, and demonstrate its function. First, the use of linear multidimensional droop control without optimization is demonstrated through simulation. Next, an optimal high dimension droop control surface is implemented with a simple dc microgrid containing two sources and one load. Various cases for changing load and wind speed are investigated using simulation and hardware-in-the-loop techniques. Optimal multidimensional droop control is demonstrated with a wind resource in a full dc microgrid example, containing an energy storage device as well as multiple sources and loads. Finally, the optimal high dimension droop control method is applied with a solar resource, and using a load model developed for a military patrol base application. The operation of the proposed control is again investigated using simulation and hardware-in-the-loop techniques.
Is Peer Interaction Necessary for Optimal Active Learning?
ERIC Educational Resources Information Center
Linton, Debra L.; Farmer, Jan Keith; Peterson, Ernie
2014-01-01
Meta-analyses of active-learning research consistently show that active-learning techniques result in greater student performance than traditional lecture-based courses. However, some individual studies show no effect of active-learning interventions. This may be due to inexperienced implementation of active learning. To minimize the effect of…
Enabling Incremental Query Re-Optimization.
Liu, Mengmeng; Ives, Zachary G; Loo, Boon Thau
2016-01-01
As declarative query processing techniques expand to the Web, data streams, network routers, and cloud platforms, there is an increasing need to re-plan execution in the presence of unanticipated performance changes. New runtime information may affect which query plan we prefer to run. Adaptive techniques require innovation both in terms of the algorithms used to estimate costs, and in terms of the search algorithm that finds the best plan. We investigate how to build a cost-based optimizer that recomputes the optimal plan incrementally given new cost information, much as a stream engine constantly updates its outputs given new data. Our implementation especially shows benefits for stream processing workloads. It lays the foundations upon which a variety of novel adaptive optimization algorithms can be built. We start by leveraging the recently proposed approach of formulating query plan enumeration as a set of recursive datalog queries; we develop a variety of novel optimization approaches to ensure effective pruning in both static and incremental cases. We further show that the lessons learned in the declarative implementation can be equally applied to more traditional optimizer implementations.
Enabling Incremental Query Re-Optimization
Liu, Mengmeng; Ives, Zachary G.; Loo, Boon Thau
2017-01-01
As declarative query processing techniques expand to the Web, data streams, network routers, and cloud platforms, there is an increasing need to re-plan execution in the presence of unanticipated performance changes. New runtime information may affect which query plan we prefer to run. Adaptive techniques require innovation both in terms of the algorithms used to estimate costs, and in terms of the search algorithm that finds the best plan. We investigate how to build a cost-based optimizer that recomputes the optimal plan incrementally given new cost information, much as a stream engine constantly updates its outputs given new data. Our implementation especially shows benefits for stream processing workloads. It lays the foundations upon which a variety of novel adaptive optimization algorithms can be built. We start by leveraging the recently proposed approach of formulating query plan enumeration as a set of recursive datalog queries; we develop a variety of novel optimization approaches to ensure effective pruning in both static and incremental cases. We further show that the lessons learned in the declarative implementation can be equally applied to more traditional optimizer implementations. PMID:28659658
Modified Fully Utilized Design (MFUD) Method for Stress and Displacement Constraints
NASA Technical Reports Server (NTRS)
Patnaik, Surya; Gendy, Atef; Berke, Laszlo; Hopkins, Dale
1997-01-01
The traditional fully stressed method performs satisfactorily for stress-limited structural design. When this method is extended to include displacement limitations in addition to stress constraints, it is known as the fully utilized design (FUD). Typically, the FUD produces an overdesign, which is the primary limitation of this otherwise elegant method. We have modified FUD in an attempt to alleviate the limitation. This new method, called the modified fully utilized design (MFUD) method, has been tested successfully on a number of designs that were subjected to multiple loads and had both stress and displacement constraints. The solutions obtained with MFUD compare favorably with the optimum results that can be generated by using nonlinear mathematical programming techniques. The MFUD method appears to have alleviated the overdesign condition and offers the simplicity of a direct, fully stressed type of design method that is distinctly different from optimization and optimality criteria formulations. The MFUD method is being developed for practicing engineers who favor traditional design methods rather than methods based on advanced calculus and nonlinear mathematical programming techniques. The Integrated Force Method (IFM) was found to be the appropriate analysis tool in the development of the MFUD method. In this paper, the MFUD method and its optimality are presented along with a number of illustrative examples.
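For readers unfamiliar with the fully stressed starting point that FUD and MFUD build on, the sketch below shows the classic stress-ratio resizing loop on an abstract member-stress function. The analysis routine, allowable stress, and loading are placeholders, and the sketch does not reproduce the MFUD displacement-constraint treatment described in the paper.

```python
# Sketch of the classic fully stressed design (stress-ratio) resizing loop.
# `member_stresses` stands in for a structural analysis (e.g. IFM or a
# displacement method); MFUD's displacement handling is not shown here.
import numpy as np

def fully_stressed_design(areas, member_stresses, sigma_allow,
                          a_min=1e-4, iters=20):
    a = np.asarray(areas, dtype=float)
    for _ in range(iters):
        sigma = np.abs(member_stresses(a))              # stresses for current sizes
        a = np.maximum(a * sigma / sigma_allow, a_min)  # stress-ratio update
    return a

# Toy two-bar example: stress = force / area for fixed member forces.
forces = np.array([1000.0, 250.0])                      # N, assumed loading
sizes = fully_stressed_design(np.array([1.0, 1.0]),
                              lambda a: forces / a, sigma_allow=200.0)
print(sizes)   # areas sized so each member reaches the allowable stress
```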
Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.
Li, Shuang; Liu, Bing; Zhang, Chen
2016-01-01
Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But such an assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.
NASA Astrophysics Data System (ADS)
Rauscher, Bernard J.; Arendt, Richard G.; Fixsen, D. J.; Greenhouse, Matthew A.; Lander, Matthew; Lindler, Don; Loose, Markus; Moseley, S. H.; Mott, D. Brent; Wen, Yiting; Wilson, Donna V.; Xenophontos, Christos
2017-10-01
Near-infrared array detectors, like the Teledyne H2RGs used by the James Webb Space Telescope (JWST) NIRSpec, often provide reference pixels and a reference output. These are used to remove correlated noise. Improved reference sampling and subtraction (IRS2) is a statistical technique for using this reference information optimally in a least-squares sense. Compared with the traditional H2RG readout, IRS2 uses a different clocking pattern to interleave many more reference pixels into the data than is otherwise possible. Compared with standard reference correction techniques, IRS2 subtracts the reference pixels and reference output using a statistically optimized set of frequency-dependent weights. The benefits include somewhat lower noise variance and much less obvious correlated noise. NIRSpec's IRS2 images are cosmetically clean, with less 1/f banding than in traditional data from the same system. This article describes the IRS2 clocking pattern and presents the equations needed to use IRS2 in systems other than NIRSpec. For NIRSpec, applying these equations is already an option in the calibration pipeline. As an aid to instrument builders, we provide our prototype IRS2 calibration software and sample JWST NIRSpec data. The same techniques are applicable to other detector systems, including those based on Teledyne's H4RG arrays. The H4RG's interleaved reference pixel readout mode is effectively one IRS2 pattern.
NASA Astrophysics Data System (ADS)
Milani, Gabriele; Milani, Federico
2012-12-01
The main problem in the industrial production of thick EPM/EPDM elements arises from the different temperatures experienced by the internal (cooler) and external regions. Indeed, while internal layers remain essentially under-vulcanized, the external coating is always over-vulcanized, resulting in an overall average tensile strength that is insufficient for several applications where a certain level of performance is required. Possible ways to improve the mechanical properties of the rubber output include careful calibration of exposure time and curing temperature in traditional heating, or vulcanization through innovative techniques such as microwaves. In the present paper, a comprehensive numerical model able to give predictions on the optimized final mechanical properties of vulcanized 2D and 3D thick rubber items is presented and applied to a meaningful example of engineering interest. A detailed comparative numerical study is finally presented in order to establish pros and cons of traditional vulcanization versus microwave curing.
USDA-ARS?s Scientific Manuscript database
Campylobacter jejuni (C. jejuni) is one of the most common causes of gastroenteritis in the world. Given the potential risks to human, animal and environmental health the development and optimization of methods to quantify this important pathogen in environmental samples is essential. Two of the mos...
IESIP - AN IMPROVED EXPLORATORY SEARCH TECHNIQUE FOR PURE INTEGER LINEAR PROGRAMMING PROBLEMS
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1994-01-01
IESIP, an Improved Exploratory Search Technique for Pure Integer Linear Programming Problems, addresses the problem of optimizing an objective function of one or more variables subject to a set of confining functions or constraints by a method called discrete optimization or integer programming. Integer programming is based on a specific form of the general linear programming problem in which all variables in the objective function and all variables in the constraints are integers. While more difficult, integer programming is required for accuracy when modeling systems with small numbers of components such as the distribution of goods, machine scheduling, and production scheduling. IESIP establishes a new methodology for solving pure integer programming problems by utilizing a modified version of the univariate exploratory move developed by Robert Hooke and T.A. Jeeves. IESIP also takes some of its technique from the greedy procedure and the idea of unit neighborhoods. A rounding scheme uses the continuous solution found by traditional methods (simplex or other suitable technique) and creates a feasible integer starting point. The Hooke and Jeeves exploratory search is modified to accommodate integers and constraints and is then employed to determine an optimal integer solution from the feasible starting solution. The user-friendly IESIP allows for rapid solution of problems up to 10 variables in size (limited by DOS allocation). Sample problems compare IESIP solutions with the traditional branch-and-bound approach. IESIP is written in Borland's TURBO Pascal for IBM PC series computers and compatibles running DOS. Source code and an executable are provided. The main memory requirement for execution is 25K. This program is available on a 5.25 inch 360K MS DOS format diskette. IESIP was developed in 1990. IBM is a trademark of International Business Machines. TURBO Pascal is registered by Borland International.
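The sketch below illustrates the flavor of a Hooke and Jeeves style exploratory move adapted to integer variables, as the abstract describes. The objective, the feasibility test, and the rounded starting point are simplified placeholders rather than the actual IESIP implementation.

```python
# Sketch: integer exploratory move in the spirit of Hooke and Jeeves, started
# from a rounded continuous solution. Objective and constraints are placeholders.
import numpy as np

def exploratory_move(x, objective, feasible):
    """Try +/-1 on each variable in turn, keeping any feasible improvement."""
    x = x.copy()
    best = objective(x)
    for i in range(len(x)):
        for step in (+1, -1):
            trial = x.copy()
            trial[i] += step
            if feasible(trial) and objective(trial) > best:
                x, best = trial, objective(trial)
                break
    return x, best

# Toy problem: maximize 3x + 2y subject to 2x + y <= 10, x, y >= 0 (integers).
obj = lambda v: 3 * v[0] + 2 * v[1]
feas = lambda v: 2 * v[0] + v[1] <= 10 and np.all(v >= 0)

x = np.array([3, 3])              # e.g. a rounded continuous (LP) solution
for _ in range(10):               # repeat exploratory moves until no change
    x_new, val = exploratory_move(x, obj, feas)
    if np.array_equal(x_new, x):
        break
    x = x_new
print(x, obj(x))                  # a feasible, locally improved integer point
```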
Murdoch, B E; Pitt, G; Theodoros, D G; Ward, E C
1999-01-01
The efficacy of traditional and physiological biofeedback methods for modifying abnormal speech breathing patterns was investigated in a child with persistent dysarthria following severe traumatic brain injury (TBI). An A-B-A-B single-subject experimental research design was utilized to provide the subject with two exclusive periods of therapy for speech breathing, based on traditional therapy techniques and physiological biofeedback methods, respectively. Traditional therapy techniques included establishing optimal posture for speech breathing, explanation of the movement of the respiratory muscles, and a hierarchy of non-speech and speech tasks focusing on establishing an appropriate level of sub-glottal air pressure, and improving the subject's control of inhalation and exhalation. The biofeedback phase of therapy utilized variable inductance plethysmography (or Respitrace) to provide real-time, continuous visual biofeedback of ribcage circumference during breathing. As in traditional therapy, a hierarchy of non-speech and speech tasks was devised to improve the subject's control of his respiratory pattern. Throughout the project, the subject's respiratory support for speech was assessed both instrumentally and perceptually. Instrumental assessment included kinematic and spirometric measures, and perceptual assessment included the Frenchay Dysarthria Assessment, Assessment of Intelligibility of Dysarthric Speech, and analysis of a speech sample. The results of the study demonstrated that real-time continuous visual biofeedback techniques for modifying speech breathing patterns were not only effective, but superior to the traditional therapy techniques for modifying abnormal speech breathing patterns in a child with persistent dysarthria following severe TBI. These results show that physiological biofeedback techniques are potentially useful clinical tools for the remediation of speech breathing impairment in the paediatric dysarthric population.
Inherent secure communications using lattice based waveform design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pugh, Matthew Owen
2013-12-01
The wireless communications channel is innately insecure due to the broadcast nature of the electromagnetic medium. Many techniques have been developed and implemented in order to combat insecurities and ensure the privacy of transmitted messages. Traditional methods include encrypting the data via cryptographic methods, hiding the data in the noise floor as in wideband communications, or nulling the signal in the spatial direction of the adversary using array processing techniques. This work analyzes the design of signaling constellations, i.e. modulation formats, to prevent eavesdroppers from correctly decoding transmitted messages. It has been shown that in certain channel models the ability of an adversary to decode the transmitted messages can be degraded by a clever signaling constellation based on lattice theory. This work attempts to optimize certain lattice parameters in order to maximize the security of the data transmission. These techniques are of interest because they are orthogonal to, and can be used in conjunction with, traditional security techniques to create a more secure communication channel.
NASA Astrophysics Data System (ADS)
Villanueva Perez, Carlos Hernan
Computational design optimization provides designers with automated techniques to develop novel and non-intuitive optimal designs. Topology optimization is a design optimization technique that allows for the evolution of a broad variety of geometries in the optimization process. Traditional density-based topology optimization methods often lack a sufficient resolution of the geometry and physical response, which prevents direct use of the optimized design in manufacturing and the accurate modeling of the physical response of boundary conditions. The goal of this thesis is to introduce a unified topology optimization framework that uses the Level Set Method (LSM) to describe the design geometry and the eXtended Finite Element Method (XFEM) to solve the governing equations and measure the performance of the design. The methodology is presented as an alternative to density-based optimization approaches, and is able to accommodate a broad range of engineering design problems. The framework presents state-of-the-art methods for immersed boundary techniques to stabilize the systems of equations and enforce the boundary conditions, and is studied with applications in 2D and 3D linear elastic structures, incompressible flow, and energy and species transport problems to test the robustness and the characteristics of the method. A comparison of the framework against density-based topology optimization approaches is studied with regard to convergence, performance, and the capability to manufacture the designs. Furthermore, the ability to control the shape of the design to operate within manufacturing constraints is developed and studied. The analysis capability of the framework is validated quantitatively through comparison against previous benchmark studies, and qualitatively through its application to topology optimization problems. The design optimization problems converge to intuitive designs that resemble well the results from previous 2D or density-based studies.
Improved mine blast algorithm for optimal cost design of water distribution systems
NASA Astrophysics Data System (ADS)
Sadollah, Ali; Guen Yoo, Do; Kim, Joong Hoon
2015-12-01
The design of water distribution systems is a large class of combinatorial, nonlinear optimization problems with complex constraints such as conservation of mass and energy equations. Since feasible solutions are often extremely complex, traditional optimization techniques are insufficient. Recently, metaheuristic algorithms have been applied to this class of problems because they are highly efficient. In this article, a recently developed optimizer called the mine blast algorithm (MBA) is considered. The MBA is improved and coupled with the hydraulic simulator EPANET to find the optimal cost design for water distribution systems. The performance of the improved mine blast algorithm (IMBA) is demonstrated using the well-known Hanoi, New York tunnels and Balerma benchmark networks. Optimization results obtained using IMBA are compared to those using MBA and other optimizers in terms of their minimum construction costs and convergence rates. For the complex Balerma network, IMBA offers the cheapest network design compared to other optimization algorithms.
Performance comparison of optical interference cancellation system architectures.
Lu, Maddie; Chang, Matt; Deng, Yanhua; Prucnal, Paul R
2013-04-10
The performance of three optics-based interference cancellation systems is compared and contrasted with each other, and with traditional electronic techniques for interference cancellation. The comparison is based on a set of common performance metrics that we have developed for this purpose. It is shown that thorough evaluation of our optical approaches takes into account the traditional notions of depth of cancellation and dynamic range, along with notions of link loss and uniformity of cancellation. Our evaluation shows that our use of optical components affords performance that surpasses traditional electronic approaches, and that the optimal choice for an optical interference canceller requires taking into account the performance metrics discussed in this paper.
Metamodels for Computer-Based Engineering Design: Survey and Recommendations
NASA Technical Reports Server (NTRS)
Simpson, Timothy W.; Peplinski, Jesse; Koch, Patrick N.; Allen, Janet K.
1997-01-01
The use of statistical techniques to build approximations of expensive computer analysis codes pervades much of today's engineering design. These statistical approximations, or metamodels, are used to replace the actual expensive computer analyses, facilitating multidisciplinary, multiobjective optimization and concept exploration. In this paper we review several of these techniques including design of experiments, response surface methodology, Taguchi methods, neural networks, inductive learning, and kriging. We survey their existing application in engineering design and then address the dangers of applying traditional statistical techniques to approximate deterministic computer analysis codes. We conclude with recommendations for the appropriate use of statistical approximation techniques in given situations and how common pitfalls can be avoided.
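As a concrete illustration of one of the surveyed techniques, the sketch below fits a quadratic response surface to a handful of sampled analysis results by least squares. The sample points and the underlying "expensive analysis" are invented stand-ins for a real design code, not examples from the survey.

```python
# Sketch: least-squares fit of a quadratic response surface (metamodel) to a
# few samples of an "expensive" analysis. The analysis function is a stand-in.
import numpy as np

def expensive_analysis(x1, x2):          # placeholder for a real design code
    return (x1 - 0.3) ** 2 + 2.0 * (x2 + 0.1) ** 2 + 0.5 * x1 * x2

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(15, 2))          # simple design of experiments
y = expensive_analysis(X[:, 0], X[:, 1])

# Full quadratic basis: 1, x1, x2, x1^2, x2^2, x1*x2
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

def surrogate(x1, x2):                   # cheap metamodel used during optimization
    return coeffs @ np.array([1.0, x1, x2, x1 ** 2, x2 ** 2, x1 * x2])

print(surrogate(0.2, -0.2), expensive_analysis(0.2, -0.2))
```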
Niamul Islam, Naz; Hannan, M A; Mohamed, Azah; Shareef, Hussain
2016-01-01
Power system oscillation is a serious threat to the stability of multimachine power systems. The coordinated control of power system stabilizers (PSS) and thyristor-controlled series compensation (TCSC) damping controllers is a commonly used technique to provide the required damping over different modes of growing oscillations. However, their coordinated design is a complex multimodal optimization problem that is very hard to solve using traditional tuning techniques. In addition, several limitations of traditionally used techniques prevent the optimum design of coordinated controllers. In this paper, an alternate technique for robust damping over oscillation is presented using backtracking search algorithm (BSA). A 5-area 16-machine benchmark power system is considered to evaluate the design efficiency. The complete design process is conducted in a linear time-invariant (LTI) model of a power system. It includes the design formulation into a multi-objective function from the system eigenvalues. Later on, nonlinear time-domain simulations are used to compare the damping performances for different local and inter-area modes of power system oscillations. The performance of the BSA technique is compared against that of the popular particle swarm optimization (PSO) for coordinated design efficiency. Damping performances using different design techniques are compared in terms of settling time and overshoot of oscillations. The results obtained verify that the BSA-based design improves the system stability significantly. The stability of the multimachine power system is improved by up to 74.47% and 79.93% for an inter-area mode and a local mode of oscillation, respectively. Thus, the proposed technique for coordinated design has great potential to improve power system stability and to maintain its secure operation.
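To make the eigenvalue-based design formulation concrete, the sketch below shows one common way of turning a linearized closed-loop state matrix into a damping-oriented objective. The state matrix is a random placeholder and the objective form is a generic choice, not the exact multi-objective function used in the paper.

```python
# Sketch: damping-ratio objective computed from the eigenvalues of a linearized,
# closed-loop power-system state matrix. The matrix here is a random stand-in.
import numpy as np

def damping_objective(A, zeta_target=0.1):
    """Penalize oscillatory modes whose damping ratio falls below zeta_target."""
    eig = np.linalg.eigvals(A)
    osc = eig[np.abs(eig.imag) > 1e-6]           # oscillatory modes only
    zeta = -osc.real / np.abs(osc)               # damping ratio of each mode
    return np.sum(np.maximum(zeta_target - zeta, 0.0) ** 2)

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8)) - 2.0 * np.eye(8)   # placeholder state matrix
print(damping_objective(A))   # a tuner (BSA, PSO, ...) would minimize this value
```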
Alternative Constraint Handling Technique for Four-Bar Linkage Path Generation
NASA Astrophysics Data System (ADS)
Sleesongsom, S.; Bureerat, S.
2018-03-01
This paper proposes an extension of a new concept for path generation from our previous work by adding a new constraint handling technique. The proposed technique was initially designed for problems without prescribed timing by avoiding the timing constraint, while the remaining constraints are handled with a new constraint handling technique, which is a kind of penalty technique. In the comparative study, path generation optimisation problems are solved using self-adaptive population size teaching-learning based optimization (SAP-TLBO) and the original TLBO. In this study, two traditional path generation test problems are used to test the proposed technique. The results show that the new technique can be applied to path generation problems without prescribed timing and gives better results than the previous technique. Furthermore, SAP-TLBO outperforms the original one.
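A minimal sketch of a penalty-type constraint handling scheme of the kind referred to here is shown below. The objective and constraint are toy placeholders; the actual technique and the TLBO machinery of the paper are not reproduced.

```python
# Sketch: quadratic exterior penalty, one simple form of constraint handling
# usable inside a metaheuristic such as TLBO. Problem data are placeholders.
import numpy as np

def penalized(x, objective, constraints, r=1e3):
    """Add r * sum(max(0, g_i(x))^2) for inequality constraints g_i(x) <= 0."""
    g = np.array([c(x) for c in constraints])
    return objective(x) + r * np.sum(np.maximum(g, 0.0) ** 2)

# Toy problem: minimize (x-2)^2 + (y-1)^2 subject to x + y <= 2.
f = lambda v: (v[0] - 2.0) ** 2 + (v[1] - 1.0) ** 2
g1 = lambda v: v[0] + v[1] - 2.0

print(penalized(np.array([1.5, 0.5]), f, [g1]))   # feasible: no penalty added
print(penalized(np.array([2.0, 1.0]), f, [g1]))   # infeasible: large penalty added
```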
Structural optimization with approximate sensitivities
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Hopkins, D. A.; Coroneos, R.
1994-01-01
Computational efficiency in structural optimization can be enhanced if the intensive computations associated with the calculation of the sensitivities, that is, gradients of the behavior constraints, are reduced. An approximation to the gradients of the behavior constraints that can be generated with a small amount of numerical calculation is proposed. Structural optimization with these approximate sensitivities produced the correct optimum solution. The approximate gradients performed well for different nonlinear programming methods, such as the sequence of unconstrained minimization technique, method of feasible directions, sequence of quadratic programming, and sequence of linear programming. Structural optimization with approximate gradients can reduce by one third the CPU time that would otherwise be required to solve the problem with explicit closed-form gradients. The proposed gradient approximation shows potential to reduce the intensive computation that has been associated with traditional structural optimization.
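For orientation, the sketch below shows a generic forward-difference approximation of a constraint gradient. This is not the specific closed-form approximation proposed in the report; it only illustrates the kind of gradient information a nonlinear programming method consumes, with a made-up constraint function.

```python
# Sketch: generic forward-difference approximation of a constraint gradient.
# The constraint function is an invented placeholder, not from the report.
import numpy as np

def approx_gradient(g, x, h=1e-6):
    """Forward-difference gradient of a scalar constraint function g at x."""
    x = np.asarray(x, dtype=float)
    g0 = g(x)
    grad = np.zeros_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += h
        grad[i] = (g(xp) - g0) / h
    return grad

# Example: a displacement-like constraint g(x) = 1/x1 + 1/x2 - 0.5 <= 0.
g = lambda x: 1.0 / x[0] + 1.0 / x[1] - 0.5
print(approx_gradient(g, [4.0, 5.0]))   # roughly [-1/16, -1/25]
```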
A Mathematical Model for Allocation of School Resources to Optimize a Selected Output.
ERIC Educational Resources Information Center
McAfee, Jackson K.
The methodology of costing an education program by identifying the resources it utilizes places all costs within the framework of staff, equipment, materials, facilities, and services. This paper suggests that this methodology is much stronger than the more traditional budgetary and cost per pupil approach. The techniques of data collection are…
ERIC Educational Resources Information Center
Mahavier, W. Ted
2002-01-01
Describes a two-semester numerical methods course that serves as a research experience for undergraduate students without requiring external funding or the modification of current curriculum. Uses an engineering problem to introduce students to constrained optimization via a variation of the traditional isoperimetric problem of finding the curve…
Training Scalable Restricted Boltzmann Machines Using a Quantum Annealer
NASA Astrophysics Data System (ADS)
Kumar, V.; Bass, G.; Dulny, J., III
2016-12-01
Machine learning and the optimization involved therein is of critical importance for commercial and military applications. Due to the computational complexity of many-variable optimization, the conventional approach is to employ meta-heuristic techniques to find suboptimal solutions. Quantum Annealing (QA) hardware offers a completely novel approach with the potential to obtain significantly better solutions with large speed-ups compared to traditional computing. In this presentation, we describe our development of new machine learning algorithms tailored for QA hardware. We are training restricted Boltzmann machines (RBMs) using QA hardware on large, high-dimensional commercial datasets. Traditional optimization heuristics such as contrastive divergence and other closely related techniques are slow to converge, especially on large datasets. Recent studies have indicated that QA hardware when used as a sampler provides better training performance compared to conventional approaches. Most of these studies have been limited to moderately-sized datasets due to the hardware restrictions imposed by existing QA devices, which make it difficult to solve real-world problems at scale. In this work we develop novel strategies to circumvent this issue. We discuss scale-up techniques such as enhanced embedding and partitioned RBMs which allow large commercial datasets to be learned using QA hardware. We present our initial results obtained by training an RBM as an autoencoder on an image dataset. The results obtained so far indicate that the convergence rates can be improved significantly by increasing RBM network connectivity. These ideas can be readily applied to generalized Boltzmann machines and we are currently investigating this in an ongoing project.
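For context on the classical baseline being compared against, the sketch below performs one contrastive-divergence (CD-1) update for a small binary RBM in NumPy. The data, layer sizes, and learning rate are arbitrary assumptions, and the quantum-annealing sampling step that this work substitutes for Gibbs sampling is not shown.

```python
# Sketch: one contrastive-divergence (CD-1) update for a small binary RBM,
# the classical training step that QA-based sampling aims to improve upon.
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid, lr = 6, 4, 0.1
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

v0 = rng.integers(0, 2, size=(10, n_vis)).astype(float)   # toy mini-batch

# Positive phase
p_h0 = sigmoid(v0 @ W + b_h)
h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
# Negative phase (one Gibbs step)
p_v1 = sigmoid(h0 @ W.T + b_v)
v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
p_h1 = sigmoid(v1 @ W + b_h)

# Parameter updates
W += lr * (v0.T @ p_h0 - v1.T @ p_h1) / len(v0)
b_v += lr * (v0 - v1).mean(axis=0)
b_h += lr * (p_h0 - p_h1).mean(axis=0)
print(np.abs(W).mean())   # weights move away from their tiny initial values
```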
Tang, Haijing; Wang, Siye; Zhang, Yanjun
2013-01-01
Clustering has become a common trend in very long instruction words (VLIW) architecture to solve the problem of area, energy consumption, and design complexity. Register-file-connected clustered (RFCC) VLIW architecture uses the mechanism of global register file to accomplish the inter-cluster data communications, thus eliminating the performance and energy consumption penalty caused by explicit inter-cluster data move operations in traditional bus-connected clustered (BCC) VLIW architecture. However, the limited number of access ports to the global register file has become an issue which must be addressed well; otherwise the performance and energy consumption would be harmed. In this paper, we presented compiler optimization techniques for an RFCC VLIW architecture called Lily, which is designed for encryption systems. These techniques aim at optimizing performance and energy consumption for Lily architecture, through appropriate manipulation of the code generation process to maintain a better management of the accesses to the global register file. All the techniques have been implemented and evaluated. The result shows that our techniques can significantly reduce the penalty of performance and energy consumption due to access port limitation of global register file. PMID:23970841
NASA Astrophysics Data System (ADS)
Sica, R. J.; Haefele, A.; Jalali, A.; Gamage, S.; Farhani, G.
2018-04-01
The optimal estimation method (OEM) has a long history of use in passive remote sensing, but has only recently been applied to active instruments like lidar. The OEM's advantage over traditional techniques includes obtaining a full systematic and random uncertainty budget plus the ability to work with the raw measurements without first applying instrument corrections. In our meeting presentation we will show you how to use the OEM for temperature and composition retrievals for Rayleigh-scatter, Raman-scatter and DIAL lidars.
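For readers unfamiliar with the OEM, the sketch below shows the standard linear (single-step) retrieval update familiar from Rodgers-style formulations. The forward model, covariances, and measurement vector are illustrative placeholders, not a lidar model from this presentation.

```python
# Sketch: standard linear optimal estimation (OEM) update,
#   x_hat = x_a + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (y - K x_a),
# with placeholder Jacobian, covariances, and measurement.
import numpy as np

rng = np.random.default_rng(3)
n, m = 5, 12                      # state and measurement dimensions (toy)
K = rng.standard_normal((m, n))   # Jacobian of the forward model
x_true = rng.standard_normal(n)
x_a = np.zeros(n)                 # a priori state
S_a = np.eye(n) * 4.0             # a priori covariance
S_e = np.eye(m) * 0.01            # measurement-noise covariance
y = K @ x_true + rng.multivariate_normal(np.zeros(m), S_e)

Se_inv, Sa_inv = np.linalg.inv(S_e), np.linalg.inv(S_a)
S_hat = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv)   # retrieval covariance
x_hat = x_a + S_hat @ K.T @ Se_inv @ (y - K @ x_a)

print(np.round(x_hat - x_true, 2))   # small residuals; S_hat gives the uncertainty
```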
Liu, Qingshan; Dang, Chuangyin; Huang, Tingwen
2013-02-01
This paper presents a decision-making model described by a recurrent neural network for dynamic portfolio optimization. The portfolio-optimization problem is first converted into a constrained fractional programming problem. Since the objective function in the programming problem is not convex, the traditional optimization techniques are no longer applicable for solving this problem. Fortunately, the objective function in the fractional programming is pseudoconvex on the feasible region. It leads to a one-layer recurrent neural network modeled by means of a discontinuous dynamic system. To ensure the optimal solutions for portfolio optimization, the convergence of the proposed neural network is analyzed and proved. In fact, the neural network guarantees to get the optimal solutions for portfolio-investment advice if some mild conditions are satisfied. A numerical example with simulation results substantiates the effectiveness and illustrates the characteristics of the proposed neural network.
NASA Astrophysics Data System (ADS)
Rama Subbanna, S.; Suryakalavathi, M., Dr.
2017-08-01
This paper presents a performance analysis of different control techniques for spike reduction applied to a medium-frequency-transformer-based DC spot welding system. Spike reduction is an important factor to consider in spot welding systems. During normal RSWS operation, the welding transformer's magnetic core can become saturated due to the unbalanced resistances of the two transformer secondary windings and the different characteristics of the output rectifier diodes, which causes current spikes and over-current protection switch-off of the entire system. The current control technique is a piecewise linear control technique, inspired by DC-DC converter control algorithms, that provides a novel spike reduction method for MFDC spot welding applications. Two controllers were used for the spike reduction portion of the overall application: the traditional PI controller and an optimized PI controller. Care is taken so that the current control technique maintains reduced spikes in the primary current of the transformer while reducing the Total Harmonic Distortion. The performance parameters involved in the spike reduction technique are the THD and the percentage of current spike reduction for both techniques. MATLAB/Simulink-based simulation is carried out for the MFDC RSWS with KW and results are tabulated for the PI and optimized PI controllers, and a tradeoff analysis is carried out.
Large Scale Multi-area Static/Dynamic Economic Dispatch using Nature Inspired Optimization
NASA Astrophysics Data System (ADS)
Pandit, Manjaree; Jain, Kalpana; Dubey, Hari Mohan; Singh, Rameshwar
2017-04-01
Economic dispatch (ED) ensures that the generation allocation to the power units is carried out such that the total fuel cost is minimized and all the operating equality/inequality constraints are satisfied. Classical ED does not take transmission constraints into consideration, but in the present restructured power systems the tie-line limits play a very important role in deciding operational policies. ED is a dynamic problem which is performed on-line in the central load dispatch centre with changing load scenarios. The dynamic multi-area ED (MAED) problem is more complex due to the additional tie-line, ramp-rate and area-wise power balance constraints. Nature inspired (NI) heuristic optimization methods are gaining popularity over the traditional methods for complex problems. This work presents modified particle swarm optimization (PSO) based techniques where parameter automation is effectively used for improving the search efficiency by avoiding stagnation at a sub-optimal result. This work validates the performance of the PSO variants with the traditional solver GAMS for single as well as multi-area economic dispatch (MAED) on three test cases of a large 140-unit standard test system having complex constraints.
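A minimal sketch of the kind of parameter automation mentioned here (a linearly decreasing inertia weight in PSO) is given below on a toy cost function. It does not reproduce the specific PSO variants, the dispatch model, or the tie-line and ramp-rate constraints of the study.

```python
# Sketch: PSO with a linearly decreasing inertia weight (a simple form of
# parameter automation). The quadratic cost is a stand-in for a dispatch model.
import numpy as np

def pso(cost, dim, n_particles=30, iters=200, w_max=0.9, w_min=0.4,
        c1=2.0, c2=2.0, bounds=(-10.0, 10.0), seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(*bounds, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters          # inertia "automation"
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, *bounds)
        vals = np.array([cost(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy cost: squared deviation from a target generation profile.
target = np.array([2.0, -1.0, 0.5])
print(pso(lambda g: np.sum((g - target) ** 2), dim=3))
```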
NASA Astrophysics Data System (ADS)
Wang, Liwei; Liu, Xinggao; Zhang, Zeyin
2017-02-01
An efficient primal-dual interior-point algorithm using a new non-monotone line search filter method is presented for nonlinear constrained programming, which is widely applied in engineering optimization. The new non-monotone line search technique is introduced to lead to relaxed step acceptance conditions and improved convergence performance. It can also avoid the choice of the upper bound on the memory, which brings obvious disadvantages to traditional techniques. Under mild assumptions, the global convergence of the new non-monotone line search filter method is analysed, and fast local convergence is ensured by second order corrections. The proposed algorithm is applied to the classical alkylation process optimization problem and the results illustrate its effectiveness. Some comprehensive comparisons to existing methods are also presented.
Neoliberal Optimism: Applying Market Techniques to Global Health.
Mei, Yuyang
2017-01-01
Global health and neoliberalism are becoming increasingly intertwined as organizations utilize markets and profit motives to solve the traditional problems of poverty and population health. I use field work conducted over 14 months in a global health technology company to explore how the promise of neoliberalism re-envisions humanitarian efforts. In this company's vaccine refrigerator project, staff members expect their investors and their market to allow them to achieve scale and develop accountability to their users in developing countries. However, the translation of neoliberal techniques to the global health sphere falls short of the ideal, as profits are meager and purchasing power remains with donor organizations. The continued optimism in market principles amidst such a non-ideal market reveals the tenacious ideological commitment to neoliberalism in these global health projects.
On Improving Efficiency of Differential Evolution for Aerodynamic Shape Optimization Applications
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.
2004-01-01
Differential Evolution (DE) is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Although DE offers several advantages over traditional optimization approaches, its use in applications such as aerodynamic shape optimization where the objective function evaluations are computationally expensive is limited by the large number of function evaluations often required. In this paper various approaches for improving the efficiency of DE are reviewed and discussed. Several approaches that have proven effective for other evolutionary algorithms are modified and implemented in a DE-based aerodynamic shape optimization method that uses a Navier-Stokes solver for the objective function evaluations. Parallelization techniques on distributed computers are used to reduce turnaround times. Results are presented for standard test optimization problems and for the inverse design of a turbine airfoil. The efficiency improvements achieved by the different approaches are evaluated and compared.
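The sketch below shows a basic DE/rand/1/bin loop on a cheap analytic test function. It is only meant to illustrate the DE strategy itself; the expensive Navier-Stokes objective, the efficiency improvements, and the parallelization discussed in the paper are not represented.

```python
# Sketch: basic DE/rand/1/bin on an analytic test function. The CFD-based
# objective of the paper is replaced by a cheap stand-in (sphere function).
import numpy as np

def differential_evolution(cost, dim, n_pop=20, iters=300, F=0.8, CR=0.9,
                           bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(*bounds, size=(n_pop, dim))
    fit = np.array([cost(p) for p in pop])
    for _ in range(iters):
        for i in range(n_pop):
            a, b, c = pop[rng.choice([j for j in range(n_pop) if j != i],
                                     size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), *bounds)          # mutation
            cross = rng.random(dim) < CR                        # binomial crossover
            cross[rng.integers(dim)] = True                     # keep one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = cost(trial)
            if f_trial <= fit[i]:                               # greedy selection
                pop[i], fit[i] = trial, f_trial
    return pop[fit.argmin()], fit.min()

sphere = lambda x: float(np.sum(x ** 2))
print(differential_evolution(sphere, dim=5))                    # near the origin
```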
Guided particle swarm optimization method to solve general nonlinear optimization problems
NASA Astrophysics Data System (ADS)
Abdelhalim, Alyaa; Nakata, Kazuhide; El-Alem, Mahmoud; Eltawil, Amr
2018-04-01
The development of hybrid algorithms is becoming an important topic in the global optimization research area. This article proposes a new technique in hybridizing the particle swarm optimization (PSO) algorithm and the Nelder-Mead (NM) simplex search algorithm to solve general nonlinear unconstrained optimization problems. Unlike traditional hybrid methods, the proposed method hybridizes the NM algorithm inside the PSO to improve the velocities and positions of the particles iteratively. The new hybridization considers the PSO algorithm and NM algorithm as one heuristic, not in a sequential or hierarchical manner. The NM algorithm is applied to improve the initial random solution of the PSO algorithm and iteratively in every step to improve the overall performance of the method. The performance of the proposed method was tested over 20 optimization test functions with varying dimensions. Comprehensive comparisons with other methods in the literature indicate that the proposed solution method is promising and competitive.
Data-driven in computational plasticity
NASA Astrophysics Data System (ADS)
Ibáñez, R.; Abisset-Chavanne, E.; Cueto, E.; Chinesta, F.
2018-05-01
Computational mechanics is taking on enormous importance in industry nowadays. On one hand, numerical simulations can be seen as a tool that allows the industry to perform fewer experiments, reducing costs. On the other hand, the physical processes that are intended to be simulated are becoming more complex, requiring new constitutive relationships to capture such behaviors. Therefore, when a new material is intended to be classified, an open question still remains: which constitutive equation should be calibrated. In the present work, the use of model order reduction techniques is exploited to identify the plastic behavior of a material, opening an alternative route with respect to traditional calibration methods. Indeed, the main objective is to provide a plastic yield function such that the mismatch between experiments and simulations is minimized. Therefore, once the experimental results, as well as the parameterization of the plastic yield function, are provided, finding the optimal plastic yield function can be seen either as a traditional optimization problem or as an interpolation problem. It is important to highlight that the dimensionality of the problem is equal to the number of dimensions related to the parameterization of the yield function. Thus, the use of sparse interpolation techniques seems almost compulsory.
AP-Cloud: Adaptive particle-in-cloud method for optimal solutions to Vlasov–Poisson equation
Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; ...
2016-04-19
We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Here, simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.
Palmieri, Roberta; Bonifazi, Giuseppe; Serranti, Silvia
2014-11-01
This study characterizes the composition of plastic frames and printed circuit boards from end-of-life mobile phones. This knowledge may help define an optimal processing strategy for using these items as potential raw materials. Correct handling of such waste is essential for its further "sustainable" recovery, especially to maximize the extraction of base, rare and precious metals, minimizing the environmental impact of the entire process chain. A combination of electronic and chemical imaging techniques was thus examined, applied and critically evaluated in order to optimize the processing, through the identification and the topological assessment of the materials of interest and their quantitative distribution. To reach this goal, end-of-life mobile phone derived wastes have been systematically characterized adopting both "traditional" (e.g. scanning electronic microscopy combined with microanalysis and Raman spectroscopy) and innovative (e.g. hyperspectral imaging in short wave infrared field) techniques, with reference to frames and printed circuit boards. Results showed that the combination of both approaches (i.e., traditional and innovative) could dramatically improve the setup of recycling strategies, as well as final product recovery. Copyright © 2014 Elsevier Ltd. All rights reserved.
AP-Cloud: Adaptive Particle-in-Cloud method for optimal solutions to Vlasov–Poisson equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Xingyu; Samulyak, Roman, E-mail: roman.samulyak@stonybrook.edu; Computational Science Initiative, Brookhaven National Laboratory, Upton, NY 11973
We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.
Optimized random phase only holograms.
Zea, Alejandro Velez; Barrera Ramirez, John Fredy; Torroba, Roberto
2018-02-15
We propose a simple and efficient technique capable of generating Fourier phase only holograms with a reconstruction quality similar to the results obtained with the Gerchberg-Saxton (G-S) algorithm. Our proposal is to use the traditional G-S algorithm to optimize a random phase pattern for the resolution, pixel size, and target size of the general optical system without any specific amplitude data. This produces an optimized random phase (ORAP), which is used for fast generation of phase only holograms of arbitrary amplitude targets. This ORAP needs to be generated only once for a given optical system, avoiding the need for costly iterative algorithms for each new target. We show numerical and experimental results confirming the validity of the proposal.
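The sketch below shows the core G-S iteration for a Fourier phase-only hologram in NumPy. The target image, resolution, and iteration count are arbitrary assumptions, and the ORAP reuse step proposed in the paper is only indicated in the comments rather than implemented.

```python
# Sketch: Gerchberg-Saxton iteration for a Fourier phase-only hologram.
# The resulting phase could then be reused as an ORAP-style starting pattern.
import numpy as np

rng = np.random.default_rng(0)
N = 64
target = np.zeros((N, N))
target[24:40, 24:40] = 1.0                      # arbitrary target amplitude

phase = rng.uniform(0.0, 2.0 * np.pi, (N, N))   # random initial hologram phase
for _ in range(50):
    field = np.fft.ifft2(np.exp(1j * phase))            # hologram -> image plane
    field = target * np.exp(1j * np.angle(field))       # impose target amplitude
    phase = np.angle(np.fft.fft2(field))                # keep phase only (unit amp)

recon = np.abs(np.fft.ifft2(np.exp(1j * phase)))
print(float(np.corrcoef(recon.ravel(), target.ravel())[0, 1]))  # quality metric
```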
Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou
2015-01-01
Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations.
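As a reference point for what the metaheuristic is maximizing, the sketch below computes Otsu's between-class variance for a candidate set of thresholds from an image histogram. The image is synthetic and the search over thresholds is left to whatever optimizer is used; the flower pollination algorithm itself is not reproduced.

```python
# Sketch: Otsu's between-class variance for a candidate set of thresholds,
# i.e. the objective a metaheuristic (flower pollination, etc.) would maximize.
import numpy as np

def otsu_between_class_variance(hist, thresholds):
    p = hist / hist.sum()                      # gray-level probabilities
    levels = np.arange(len(hist))
    mu_total = np.sum(levels * p)
    edges = [0, *sorted(thresholds), len(hist)]
    var_b = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):  # one class per threshold interval
        w = p[lo:hi].sum()
        if w > 0:
            mu = np.sum(levels[lo:hi] * p[lo:hi]) / w
            var_b += w * (mu - mu_total) ** 2
    return var_b

# Synthetic 8-bit histogram with three gray-level populations.
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(60, 10, 5000), rng.normal(130, 12, 5000),
                         rng.normal(200, 8, 5000)]).clip(0, 255)
hist, _ = np.histogram(pixels, bins=256, range=(0, 256))
print(otsu_between_class_variance(hist, [95, 165]))   # higher is better
```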
On Improving Efficiency of Differential Evolution for Aerodynamic Shape Optimization Applications
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.
2004-01-01
Differential Evolution (DE) is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Although DE offers several advantages over traditional optimization approaches, its use in applications such as aerodynamic shape optimization where the objective function evaluations are computationally expensive is limited by the large number of function evaluations often required. In this paper various approaches for improving the efficiency of DE are reviewed and discussed. These approaches are implemented in a DE-based aerodynamic shape optimization method that uses a Navier-Stokes solver for the objective function evaluations. Parallelization techniques on distributed computers are used to reduce turnaround times. Results are presented for the inverse design of a turbine airfoil. The efficiency improvements achieved by the different approaches are evaluated and compared.
NASA Astrophysics Data System (ADS)
Kenway, Gaetan K. W.
This thesis presents new tools and techniques developed to address the challenging problem of high-fidelity aerostructural optimization with respect to large numbers of design variables. A new mesh-movement scheme is developed that is both computationally efficient and sufficiently robust to accommodate large geometric design changes and aerostructural deformations. A fully coupled Newton-Krylov method is presented that accelerates the convergence of aerostructural systems and provides a 20% performance improvement over the traditional nonlinear block Gauss-Seidel approach and can handle more flexible structures. A coupled adjoint method is used that efficiently computes derivatives for a gradient-based optimization algorithm. The implementation uses only machine accurate derivative techniques and is verified to yield fully consistent derivatives by comparing against the complex step method. The fully-coupled large-scale coupled adjoint solution method is shown to have 30% better performance than the segregated approach. The parallel scalability of the coupled adjoint technique is demonstrated on an Euler Computational Fluid Dynamics (CFD) model with more than 80 million state variables coupled to a detailed structural finite-element model of the wing with more than 1 million degrees of freedom. Multi-point high-fidelity aerostructural optimizations of a long-range wide-body, transonic transport aircraft configuration are performed using the developed techniques. The aerostructural analysis employs Euler CFD with a 2 million cell mesh and a structural finite element model with 300 000 DOF. Two design optimization problems are solved: one where takeoff gross weight is minimized, and another where fuel burn is minimized. Each optimization uses a multi-point formulation with 5 cruise conditions and 2 maneuver conditions. The optimization problems have 476 design variables, and optimal results are obtained within 36 hours of wall time using 435 processors. The TOGW minimization results in a 4.2% reduction in TOGW with a 6.6% fuel burn reduction, while the fuel burn optimization results in an 11.2% fuel burn reduction with no change to the takeoff gross weight.
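The complex-step check mentioned above can be illustrated in a few lines. The function below is an analytic stand-in for a coupled aerostructural response; verifying an adjoint gradient against it would follow the same comparison pattern.

```python
# Sketch: complex-step derivative, the machine-accurate reference often used to
# verify adjoint gradients. The response function is an analytic stand-in.
import numpy as np

def f(x):                      # placeholder for an aerostructural response
    return np.sin(x) * np.exp(-0.5 * x ** 2)

x0, h = 0.7, 1e-30
d_complex = np.imag(f(x0 + 1j * h)) / h                       # complex-step derivative
d_exact = np.cos(x0) * np.exp(-0.5 * x0 ** 2) - x0 * f(x0)    # analytic derivative
print(d_complex - d_exact)     # agrees to machine precision (no subtractive error)
```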
NASA Astrophysics Data System (ADS)
Kumar, Vijay M.; Murthy, ANN; Chandrashekara, K.
2012-05-01
The production planning problem of a flexible manufacturing system (FMS) concerns decisions that have to be made before an FMS begins to produce parts according to a given production plan during an upcoming planning horizon. The main aspect of production planning deals with the machine loading problem, in which a subset of jobs to be manufactured is selected and their operations are assigned to the relevant machines. Such problems are not only combinatorial optimization problems, but also happen to be non-deterministic polynomial-time-hard, making it difficult to obtain satisfactory solutions using traditional optimization techniques. In this paper, an attempt has been made to address the machine loading problem with the objectives of minimization of system unbalance and maximization of throughput simultaneously, while satisfying the system constraints related to available machining time and tool slots, by designing and using a meta-hybrid heuristic technique based on genetic algorithm and particle swarm optimization. The results reported in this paper demonstrate the model efficiency and examine the performance of the system with respect to measures such as throughput and system utilization.
Real Time Optimal Control of Supercapacitor Operation for Frequency Response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Yusheng; Panwar, Mayank; Mohanpurkar, Manish
2016-07-01
Supercapacitors are gaining wider applications in power systems due to fast dynamic response. Utilizing supercapacitors by means of power electronics interfaces for power compensation is a proven effective technique for applications such as frequency restoration, provided that the cost of supercapacitor maintenance as well as the energy loss in the power electronics interfaces are addressed. It is infeasible to use traditional optimization control methods to mitigate the impacts of frequent cycling. This paper proposes a Front End Controller (FEC) using Generalized Predictive Control featuring real time receding optimization. The optimization constraints are based on cost and thermal management to enhance the utilization efficiency of supercapacitors. A rigorous mathematical derivation is conducted and test results acquired from Digital Real Time Simulator are provided to demonstrate effectiveness.
Steps to achieve quantitative measurements of microRNA using two step droplet digital PCR.
Stein, Erica V; Duewer, David L; Farkas, Natalia; Romsos, Erica L; Wang, Lili; Cole, Kenneth D
2017-01-01
Droplet digital PCR (ddPCR) is being advocated as a reference method to measure rare genomic targets. It has consistently been proven to be more sensitive and direct at discerning copy numbers of DNA than other quantitative methods. However, one of the largest obstacles to measuring microRNA (miRNA) using ddPCR is that reverse transcription efficiency depends upon the target, meaning small RNA nucleotide composition directly affects primer specificity in a manner that prevents traditional quantitation optimization strategies. Additionally, the use of reagents that are optimized for miRNA measurements using quantitative real-time PCR (qRT-PCR) appear to either cause false positive or false negative detection of certain targets when used with traditional ddPCR quantification methods. False readings are often related to using inadequate enzymes, primers and probes. Given that two-step miRNA quantification using ddPCR relies solely on reverse transcription and uses proprietary reagents previously optimized only for qRT-PCR, these barriers are substantial. Therefore, here we outline essential controls, optimization techniques, and an efficacy model to improve the quality of ddPCR miRNA measurements. We have applied two-step principles used for miRNA qRT-PCR measurements and leveraged the use of synthetic miRNA targets to evaluate ddPCR following cDNA synthesis with four different commercial kits. We have identified inefficiencies and limitations as well as proposed ways to circumvent identified obstacles. Lastly, we show that we can apply these criteria to a model system to confidently quantify miRNA copy number. Our measurement technique is a novel way to quantify specific miRNA copy number in a single sample, without using standard curves for individual experiments. Our methodology can be used for validation and control measurements, as well as a diagnostic technique that allows scientists, technicians, clinicians, and regulators to base miRNA measures on a single unit of measurement rather than a ratio of values.
Steps to achieve quantitative measurements of microRNA using two step droplet digital PCR
Duewer, David L.; Farkas, Natalia; Romsos, Erica L.; Wang, Lili; Cole, Kenneth D.
2017-01-01
Droplet digital PCR (ddPCR) is being advocated as a reference method to measure rare genomic targets. It has consistently been proven to be more sensitive and direct at discerning copy numbers of DNA than other quantitative methods. However, one of the largest obstacles to measuring microRNA (miRNA) using ddPCR is that reverse transcription efficiency depends upon the target, meaning small RNA nucleotide composition directly affects primer specificity in a manner that prevents traditional quantitation optimization strategies. Additionally, the use of reagents that are optimized for miRNA measurements using quantitative real-time PCR (qRT-PCR) appear to either cause false positive or false negative detection of certain targets when used with traditional ddPCR quantification methods. False readings are often related to using inadequate enzymes, primers and probes. Given that two-step miRNA quantification using ddPCR relies solely on reverse transcription and uses proprietary reagents previously optimized only for qRT-PCR, these barriers are substantial. Therefore, here we outline essential controls, optimization techniques, and an efficacy model to improve the quality of ddPCR miRNA measurements. We have applied two-step principles used for miRNA qRT-PCR measurements and leveraged the use of synthetic miRNA targets to evaluate ddPCR following cDNA synthesis with four different commercial kits. We have identified inefficiencies and limitations as well as proposed ways to circumvent identified obstacles. Lastly, we show that we can apply these criteria to a model system to confidently quantify miRNA copy number. Our measurement technique is a novel way to quantify specific miRNA copy number in a single sample, without using standard curves for individual experiments. Our methodology can be used for validation and control measurements, as well as a diagnostic technique that allows scientists, technicians, clinicians, and regulators to base miRNA measures on a single unit of measurement rather than a ratio of values. PMID:29145448
Bardhan, Jaydeep P; Altman, Michael D; Tidor, B; White, Jacob K
2009-01-01
We present a partial-differential-equation (PDE)-constrained approach for optimizing a molecule's electrostatic interactions with a target molecule. The approach, which we call reverse-Schur co-optimization, can be more than two orders of magnitude faster than the traditional approach to electrostatic optimization. The efficiency of the co-optimization approach may enhance the value of electrostatic optimization for ligand-design efforts-in such projects, it is often desirable to screen many candidate ligands for their viability, and the optimization of electrostatic interactions can improve ligand binding affinity and specificity. The theoretical basis for electrostatic optimization derives from linear-response theory, most commonly continuum models, and simple assumptions about molecular binding processes. Although the theory has been used successfully to study a wide variety of molecular binding events, its implications have not yet been fully explored, in part due to the computational expense associated with the optimization. The co-optimization algorithm achieves improved performance by solving the optimization and electrostatic simulation problems simultaneously, and is applicable to both unconstrained and constrained optimization problems. Reverse-Schur co-optimization resembles other well-known techniques for solving optimization problems with PDE constraints. Model problems as well as realistic examples validate the reverse-Schur method, and demonstrate that our technique and alternative PDE-constrained methods scale very favorably compared to the standard approach. Regularization, which ordinarily requires an explicit representation of the objective function, can be included using an approximate Hessian calculated using the new BIBEE/P (boundary-integral-based electrostatics estimation by preconditioning) method.
Universal field matching in craniospinal irradiation by a background-dose gradient-optimized method.
Traneus, Erik; Bizzocchi, Nicola; Fellin, Francesco; Rombi, Barbara; Farace, Paolo
2018-01-01
The gradient-optimized methods are overcoming the traditional feathering methods to plan field junctions in craniospinal irradiation. In this note, a new gradient-optimized technique, based on the use of a background dose, is described. Treatment planning was performed by RayStation (RaySearch Laboratories, Stockholm, Sweden) on the CT scans of a pediatric patient. Both proton (by pencil beam scanning) and photon (by volumetric modulated arc therapy) treatments were planned with three isocenters. An 'in silico' ideal background dose was created first to cover the upper-spinal target and to produce a perfect dose gradient along the upper and lower junction regions. Using it as background, the cranial and the lower-spinal beams were planned by inverse optimization to obtain dose coverage of their relevant targets and of the junction volumes. Finally, the upper-spinal beam was inversely planned after removal of the background dose and with the previously optimized beams switched on. In both proton and photon plans, the optimized cranial and lower-spinal beams produced a perfect linear gradient in the junction regions, complementary to that produced by the optimized upper-spinal beam. The final dose distributions showed a homogeneous coverage of the targets. Our simple technique allowed us to obtain high-quality gradients in the junction region. This technique works universally for photons as well as protons and could be applied with any TPS that allows a background dose to be managed. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
Multi-objective approach for energy-aware workflow scheduling in cloud computing environments.
Yassa, Sonia; Chelouah, Rachid; Kadima, Hubert; Granado, Bertrand
2013-01-01
We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and to present a hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate at different voltage supply levels by sacrificing clock frequencies. The use of multiple voltage levels involves a compromise between the quality of schedules and energy consumption. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach. PMID:24319361
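The DVFS compromise described above can be made concrete with the usual first-order CMOS energy model, in which dynamic power scales with V²f while execution time stretches as frequency drops. The snippet below is only an illustrative toy model with invented parameter values, not the paper's scheduler or simulator.

```python
def task_energy_and_time(cycles, voltage, frequency, capacitance=1e-9):
    """First-order DVFS model: dynamic power ~ C * V^2 * f, time = cycles / f."""
    exec_time = cycles / frequency                  # seconds
    power = capacitance * voltage**2 * frequency    # watts (dynamic power only)
    return power * exec_time, exec_time             # joules, seconds

# Same task at two operating points: lowering voltage/frequency saves energy
# but lengthens the schedule, which is the trade-off the scheduler balances.
for v, f in [(1.2, 2.0e9), (0.9, 1.0e9)]:
    energy, t = task_energy_and_time(4e9, v, f)
    print(f"V={v} V, f={f/1e9:.1f} GHz -> {energy:.2f} J in {t:.1f} s")
```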
Pozzi, P; Wilding, D; Soloviev, O; Verstraete, H; Bliek, L; Vdovin, G; Verhaegen, M
2017-01-23
The quality of fluorescence microscopy images is often impaired by the presence of sample-induced optical aberrations. Adaptive optical elements such as deformable mirrors or spatial light modulators can be used to correct aberrations. However, previously reported techniques either require special sample preparation, or time-consuming optimization procedures for the correction of static aberrations. This paper reports a technique for optical sectioning fluorescence microscopy capable of correcting dynamic aberrations in any fluorescent sample during the acquisition. This is achieved by implementing adaptive optics in a non-conventional confocal microscopy setup, with multiple programmable confocal apertures, in which out-of-focus light can be separately detected and used to optimize the correction performance with a sampling frequency an order of magnitude faster than the imaging rate of the system. The paper reports results comparing the correction performance to traditional image optimization algorithms, and demonstrates how the system can compensate for dynamic changes in the aberrations, such as those introduced during a focal stack acquisition through a thick sample.
NASA Astrophysics Data System (ADS)
Faria, Paula
2010-09-01
For the past few years, the potential of transcranial direct current stimulation (tDCS) for the treatment of several pathologies has been investigated. Knowledge of the current density distribution is an important factor in optimizing such applications of tDCS. For this goal, we used the finite element method to solve the Laplace equation in a spherical head model in order to investigate the three-dimensional distribution of the current density and the variation of its intensity with depth using different electrode montages: the traditional montage with two sponge electrodes, and new montages combining sponge and EEG electrodes or using only EEG electrodes in varying numbers. The simulation results confirm the effectiveness of the mixed system, which may allow the use of tDCS and EEG recording concomitantly and may help to optimize this neuronal stimulation technique. The numerical results were used in a promising application of tDCS in epilepsy.
Artificial Intelligence based technique for BTS placement
NASA Astrophysics Data System (ADS)
Alenoghena, C. O.; Emagbetere, J. O.; Aibinu, A. M.
2013-12-01
The increase of base transceiver stations (BTS) in most urban areas can be traced to the drive by network providers to meet demand for coverage and capacity. In traditional network planning, the final decision on BTS placement is taken by a team of radio planners, and this decision is not foolproof against regulatory requirements. In this paper, an intelligence-based algorithm for optimal BTS site placement is proposed. The proposed technique objectively takes neighbour and regulatory constraints into consideration while determining the cell site. Its application will lead to a quantitatively evaluated, unbiased decision-making process in BTS placement. Experimental data for a 2 km by 3 km territory were simulated for testing the new algorithm; the results obtained show 100% performance of the neighbour-constrained algorithm in BTS placement optimization. Results on the application of the GA with the neighbourhood constraint indicate that the choice of location can be unbiased and that optimization of facility placement for network design can be carried out.
Experimental validation of structural optimization methods
NASA Technical Reports Server (NTRS)
Adelman, Howard M.
1992-01-01
The topic of validating structural optimization methods by use of experimental results is addressed. The need for validating the methods as a way of effecting a greater and an accelerated acceptance of formal optimization methods by practicing engineering designers is described. The range of validation strategies is defined which includes comparison of optimization results with more traditional design approaches, establishing the accuracy of analyses used, and finally experimental validation of the optimization results. Examples of the use of experimental results to validate optimization techniques are described. The examples include experimental validation of the following: optimum design of a trussed beam; combined control-structure design of a cable-supported beam simulating an actively controlled space structure; minimum weight design of a beam with frequency constraints; minimization of the vibration response of helicopter rotor blade; minimum weight design of a turbine blade disk; aeroelastic optimization of an aircraft vertical fin; airfoil shape optimization for drag minimization; optimization of the shape of a hole in a plate for stress minimization; optimization to minimize beam dynamic response; and structural optimization of a low vibration helicopter rotor.
Fuel management optimization using genetic algorithms and code independence
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeChaine, M.D.; Feltus, M.A.
1994-12-31
Fuel management optimization is a hard problem for traditional optimization techniques. Loading pattern optimization is a large combinatorial problem without analytical derivative information. Therefore, methods designed for continuous functions, such as linear programming, do not always work well. Genetic algorithms (GAs) address these problems and, therefore, appear ideal for fuel management optimization. They do not require derivative information and work well with combinatorial functions. The GAs are a stochastic method based on concepts from biological genetics. They take a group of candidate solutions, called the population, and use selection, crossover, and mutation operators to create the next generation of better solutions. The selection operator is a "survival-of-the-fittest" operation and chooses the solutions for the next generation. The crossover operator is analogous to biological mating, where children inherit a mixture of traits from their parents, and the mutation operator makes small random changes to the solutions.
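As a concrete illustration of the selection/crossover/mutation loop described above, the sketch below evolves bit-string candidates for a generic combinatorial objective. It is a minimal, self-contained toy (the fitness function and parameters are invented stand-ins), not the fuel-management code referenced in the abstract.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=40, generations=100,
                      crossover_rate=0.9, mutation_rate=0.02):
    """Minimal GA: tournament selection, one-point crossover, bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        # Selection: "survival of the fittest" via two-way tournaments
        def pick():
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = pick(), pick()
            if random.random() < crossover_rate:            # crossover: mix parent traits
                cut = random.randint(1, n_bits - 1)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for c in (c1, c2):                               # mutation: small random changes
                children.append([b ^ 1 if random.random() < mutation_rate else b for b in c])
        pop = children[:pop_size]
        best = max(pop + [best], key=fitness)
    return best

# Toy objective standing in for a loading-pattern score: maximize the number of ones.
solution = genetic_algorithm(fitness=sum)
print(sum(solution), solution)
```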
Data-driven sensor placement from coherent fluid structures
NASA Astrophysics Data System (ADS)
Manohar, Krithika; Kaiser, Eurika; Brunton, Bingni W.; Kutz, J. Nathan; Brunton, Steven L.
2017-11-01
Optimal sensor placement is a central challenge in the prediction, estimation and control of fluid flows. We reinterpret sensor placement as optimizing discrete samples of coherent fluid structures for full state reconstruction. This permits a drastic reduction in the number of sensors required for faithful reconstruction, since complex fluid interactions can often be described by a small number of coherent structures. Our work optimizes point sensors using the pivoted matrix QR factorization to sample coherent structures directly computed from flow data. We apply this sampling technique in conjunction with various data-driven modal identification methods, including the proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD). In contrast to POD-based sensors, DMD demonstrably enables the optimization of sensors for prediction in systems exhibiting multiple scales of dynamics. Finally, reconstruction accuracy from pivot sensors is shown to be competitive with sensors obtained using traditional computationally prohibitive optimization methods.
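The core of the sampling step, a pivoted QR factorization applied to a basis of coherent structures (e.g., POD modes), takes only a few lines. The sketch below uses random data in place of flow snapshots and is meant as a generic illustration of the idea, not a reproduction of the authors' code.

```python
import numpy as np
from scipy.linalg import qr, svd, lstsq

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 200))   # snapshot matrix: 500 grid points x 200 snapshots

# Coherent structures: leading r POD modes (left singular vectors of the data)
r = 10
U, _, _ = svd(X, full_matrices=False)
Psi = U[:, :r]                         # (n_points, r) modal basis

# Pivoted QR of Psi^T selects r rows (sensor locations) that condition reconstruction well
_, _, piv = qr(Psi.T, pivoting=True)
sensors = piv[:r]

# Reconstruct a state from its r point measurements via least squares in the modal basis
x = X[:, 0]
coeffs, *_ = lstsq(Psi[sensors, :], x[sensors])
x_hat = Psi @ coeffs
print("sensor indices:", sensors)
print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```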
The new era of cardiac surgery: hybrid therapy for cardiovascular disease.
Solenkova, Natalia V; Umakanthan, Ramanan; Leacche, Marzia; Zhao, David X; Byrne, John G
2010-11-01
Surgical therapy for cardiovascular disease carries excellent long-term outcomes but it is relatively invasive. With the development of new devices and techniques, modern cardiovascular surgery is trending toward less invasive approaches, especially for patients at high risk for traditional open heart surgery. A hybrid strategy combines traditional surgical treatments performed in the operating room with treatments traditionally available only in the catheterization laboratory with the goal of offering patients the best available therapy for any set of cardiovascular diseases. Examples of hybrid procedures include hybrid coronary artery bypass grafting, hybrid valve surgery and percutaneous coronary intervention, hybrid endocardial and epicardial atrial fibrillation procedures, and hybrid coronary artery bypass grafting/carotid artery stenting. This multidisciplinary approach requires strong collaboration between cardiac surgeons, vascular surgeons, and interventional cardiologists to obtain optimal patient outcomes.
Van Dun, Bram; Wouters, Jan; Moonen, Marc
2009-07-01
Auditory steady-state responses (ASSRs) are used for hearing threshold estimation at audiometric frequencies. Hearing impaired newborns, in particular, benefit from this technique as it allows for a more precise diagnosis than traditional techniques, and a hearing aid can be better fitted at an early age. However, the measurement duration of current single-channel techniques is still too long for widespread clinical use. This paper evaluates the practical performance of a multi-channel electroencephalogram (EEG) processing strategy based on a detection theory approach. A minimum electrode set is determined for ASSRs with frequencies between 80 and 110 Hz using eight-channel EEG measurements of ten normal-hearing adults. This set provides a near-optimal hearing threshold estimate for all subjects and improves response detection significantly for EEG data with numerous artifacts. Multi-channel processing does not significantly improve response detection for EEG data with few artifacts. In this case, the best response detection is obtained when noise-weighted averaging is applied to single-channel data. The same test setup (eight channels, ten normal-hearing subjects) is also used to determine a minimum electrode setup for 10-Hz ASSRs. This configuration allows near-optimal signal-to-noise ratios to be recorded for 80% of subjects.
NASA Astrophysics Data System (ADS)
Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun
2016-01-01
Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous and virtually established objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs, rather than for the least residual between the model-calculated and measured arrivals. The results of numerical examples and in-site blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
Liu, Xing; Hou, Kun Mean; de Vaulx, Christophe; Xu, Jun; Yang, Jianfeng; Zhou, Haiying; Shi, Hongling; Zhou, Peng
2015-01-01
Memory and energy optimization strategies are essential for the resource-constrained wireless sensor network (WSN) nodes. In this article, a new memory-optimized and energy-optimized multithreaded WSN operating system (OS) LiveOS is designed and implemented. The memory cost of LiveOS is optimized by using the stack-shifting hybrid scheduling approach. Different from the traditional multithreaded OS in which thread stacks are allocated statically by pre-reservation, thread stacks in LiveOS are allocated dynamically by using the stack-shifting technique. As a result, memory waste problems caused by static pre-reservation can be avoided. In addition to the stack-shifting dynamic allocation approach, the hybrid scheduling mechanism, which can decrease both the thread scheduling overhead and the thread stack number, is also implemented in LiveOS. With these mechanisms, the stack memory cost of LiveOS can be reduced by more than 50% compared to that of a traditional multithreaded OS. Not only is the memory cost optimized, but the energy cost is also optimized in LiveOS, and this is achieved by using the multi-core “context aware” and multi-core “power-off/wakeup” energy conservation approaches. By using these approaches, the energy cost of LiveOS can be reduced by more than 30% when compared to the single-core WSN system. Memory and energy optimization strategies in LiveOS not only prolong the lifetime of WSN nodes, but also make the multithreaded OS feasible to run on the memory-constrained WSN nodes. PMID:25545264
NASA Technical Reports Server (NTRS)
Schredder, J. M.
1988-01-01
A comparative analysis was performed, using both the Geometrical Theory of Diffraction (GTD) and traditional pathlength error analysis techniques, for predicting RF antenna gain performance and pointing corrections. The NASA/JPL 70 meter antenna with its shaped surface was analyzed for gravity loading over the range of elevation angles. Also analyzed were the effects of lateral and axial displacements of the subreflector. Significant differences were noted between the predictions of the two methods, in the effect of subreflector displacements, and in the optimal subreflector positions to focus a gravity-deformed main reflector. The results are of relevance to future design procedure.
NASA Technical Reports Server (NTRS)
Addona, Brad; Eddleman, David
2015-01-01
A developmental Main Oxidizer Valve (MOV) was designed by NASA-MSFC using additive manufacturing processes. The MOV is a pneumatically actuated poppet valve to control the flow of liquid oxygen to an engine's injector. A compression spring is used to return the valve to the closed state when pneumatic pressure is removed from the valve. The valve internal parts are cylindrical in shape, which lends itself to traditional lathe and milling operations. However, the valve body represents a complicated shape and contains the majority of the mass of the valve. Additive manufacturing techniques were used to produce a part that optimized mass and allowed for design features not practical with traditional machining processes.
Jacobsen, G; Elli, F; Horgan, S
2004-08-01
Minimally invasive surgical techniques have revolutionized the field of surgery. Telesurgical manipulators (robots) and new information technologies strive to improve upon currently available minimally invasive techniques and create new possibilities. A retrospective review of all robotic cases at a single academic medical center from August 2000 until November 2002 was conducted. A comprehensive literature evaluation on robotic surgical technology was also performed. Robotic technology is safely and effectively being applied at our institution. Robotic and information technologies have improved upon minimally invasive surgical techniques and created new opportunities not attainable in open surgery. Robotic technology offers many benefits over traditional minimal access techniques and has been proven safe and effective. Further research is needed to better define the optimal application of this technology. Credentialing and educational requirements also need to be delineated.
De Filippis, Luigi Alberto Ciro; Serio, Livia Maria; Palumbo, Davide; De Finis, Rosa; Galietti, Umberto
2017-10-11
Friction Stir Welding (FSW) is a solid-state welding process, based on frictional and stirring phenomena, that offers many advantages with respect to traditional welding methods. However, several parameters can affect the quality of the produced joints. In this work, an experimental approach has been used for studying and optimizing the FSW process, applied to 5754-H111 aluminum plates. In particular, the thermal behavior of the material during the process has been investigated, and two thermal indexes, the maximum temperature and the heating rate of the material, correlated to the frictional power input, were investigated for different configurations of the process parameters (the travel and rotation tool speeds). Moreover, other techniques (micrographs, macrographs and destructive tensile tests) were carried out to support in a quantitative way the analysis of the quality of the welded joints. The potential of the thermographic technique has been demonstrated both for monitoring the FSW process and for predicting the quality of joints in terms of tensile strength.
150-nm DR contact holes die-to-database inspection
NASA Astrophysics Data System (ADS)
Kuo, Shen C.; Wu, Clare; Eran, Yair; Staud, Wolfgang; Hemar, Shirley; Lindman, Ofer
2000-07-01
A failure-analysis-driven yield enhancement concept, based on optimization of the mask manufacturing process and UV reticle inspection, is studied and shown to improve contact layer quality. This is achieved by relating various manufacturing processes to very finely tuned contact defect detection. In this way, selecting an optimized manufacturing process with a fine-tuned inspection setup is achieved in a controlled manner. This paper presents a study, performed on a specially designed test reticle, which simulates production contact layers of 250 nm, 180 nm and 150 nm design rules. The paper focuses on the use of advanced UV reticle inspection techniques as part of the process optimization cycle. Current inspection equipment uses traditional and insufficient methods of small contact-hole inspection and review.
Optimal pulse design for communication-oriented slow-light pulse detection.
Stenner, Michael D; Neifeld, Mark A
2008-01-21
We present techniques for designing pulses for linear slow-light delay systems which are optimal in the sense that they maximize the signal-to-noise ratio (SNR) and signal-to-noise-plus-interference ratio (SNIR) of the detected pulse energy. Given a communication model in which input pulses are created in a finite temporal window and output pulse energy is measured in a temporally offset output window, the SNIR-optimal pulses achieve typical improvements of 10 dB compared to traditional pulse shapes for a given output window offset. Alternatively, for fixed SNR or SNIR, the window offset (detection delay) can be increased by 0.3 times the window width. This approach also invites a communication-based model for delay and signal fidelity.
Weibl, Peter; Klingler, Hans-Christoph; Klatte, Tobias; Remzi, Mesut
2010-01-01
Laparo-Endoscopic Single-Site surgery (LESS) for kidney diseases is quickly evolving and has a tendency to expand the urological armory of surgical techniques. However, we should not be overwhelmed by surgical skill alone, but should weigh it against basic clinical and oncological principles when compared to standard laparoscopy. The initial goal is to define the ideal candidates and ideal centers for LESS in the future. Modification of basic instruments in laparoscopy presumably cannot result in better functional and oncological outcomes, especially when the optimal working space is limited with the same arm movements. Single port surgery is considered minimally invasive laparoscopy; on the other hand, when using additional ports, it is no longer single port, but hybrid traditional laparoscopy. Whether LESS is a superior or equivalent technique compared to traditional laparoscopy has to be proven by future prospective randomized trials. PMID:20169054
Generation of structural topologies using efficient technique based on sorted compliances
NASA Astrophysics Data System (ADS)
Mazur, Monika; Tajs-Zielińska, Katarzyna; Bochenek, Bogdan
2018-01-01
Topology optimization, although well recognized, is still being widely developed. It has recently gained more attention since large computational capability has become available to designers. This process is stimulated simultaneously by a variety of emerging, innovative optimization methods. It is observed that traditional gradient-based mathematical programming algorithms are, in many cases, replaced by novel and efficient heuristic methods inspired by biological, chemical or physical phenomena. These methods have become useful tools for structural optimization because of their versatility and easy numerical implementation. In this paper an engineering implementation of a novel heuristic algorithm for minimum compliance topology optimization is discussed. The performance of the topology generator is based on the implementation of a special function utilizing information on the compliance distribution within the design space. With a view to coping with engineering problems, the algorithm has been combined with the structural analysis system Ansys.
Constant-Envelope Waveform Design for Optimal Target-Detection and Autocorrelation Performances
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Satyabrata
2013-01-01
We propose an algorithm to directly synthesize in the time domain a constant-envelope transmit waveform that achieves the optimal performance in detecting an extended target in the presence of signal-dependent interference. This approach is in contrast to the traditional indirect methods that synthesize the transmit signal following the computation of the optimal energy spectral density. Additionally, we aim to maintain a good autocorrelation property of the designed signal. Therefore, our waveform design technique solves a bi-objective optimization problem in order to simultaneously improve the detection and autocorrelation performances, which are in general conflicting in nature. We demonstrate these compromising characteristics of the detection and autocorrelation performances with numerical examples. Furthermore, in the absence of the autocorrelation criterion, our designed signal is shown to achieve a near-optimum detection performance.
[An object-oriented intelligent engineering design approach for lake pollution control].
Zou, Rui; Zhou, Jing; Liu, Yong; Zhu, Xiang; Zhao, Lei; Yang, Ping-Jian; Guo, Huai-Cheng
2013-03-01
Regarding the shortcomings and deficiencies of traditional lake pollution control engineering techniques, a new lake pollution control engineering approach was proposed in this study, based on object-oriented intelligent design (OOID) from the perspective of intelligence. It can provide a new methodology and framework for effectively controlling lake pollution and improving water quality. The differences between the traditional engineering techniques and the OOID approach were compared. The key points of OOID were described as the object perspective, a cause-and-effect foundation, set points into surface, and temporal and spatial optimization. Blue algae control in a lake was taken as an example in this study. The effects of algae control and water quality improvement were analyzed in detail from the perspective of object-oriented intelligent design based on two engineering techniques (a vertical hydrodynamic mixer and pumped algaecide recharge). The modeling results showed that the traditional engineering design paradigm cannot provide scientific and effective guidance for engineering design and decision-making regarding lake pollution. The intelligent design approach is based on the object perspective and quantitative causal analysis in this case. This approach identified that the efficiency of mixers was much higher than that of pumps in achieving the goal of low to moderate water quality improvement. However, when the water quality objective exceeded a certain value (such as a peak Chla concentration control objective exceeding 100 microg x L(-1) in this experimental water), the mixer could not achieve the goal. The pump technique can achieve the goal but at higher cost. The efficiency of combining the two techniques was higher than that of using either technique alone. Moreover, the quantitative scale control of the two engineering techniques has a significant impact on the actual project benefits and costs.
Development of Mid-infrared GeSn Light Emitting Diodes on a Silicon Substrate
2015-04-22
Keywords: materials, heterostructure semiconductors, light emitting devices, molecular beam epitaxy. [Abstract fragment:] Optimization of traditional and hetero P-i-N light emitting diode (LED) structures designed and grown on Ge-buffered Si (001) wafers using molecular beam epitaxy (MBE) with the low-temperature growth technique.
Meta-heuristic algorithms as tools for hydrological science
NASA Astrophysics Data System (ADS)
Yoo, Do Guen; Kim, Joong Hoon
2014-12-01
In this paper, meta-heuristic optimization techniques and their applications to water resources engineering, particularly in hydrological science, are introduced. In recent years, meta-heuristic optimization techniques have been introduced that can overcome the problems inherent in iterative simulations. These methods are able to find good solutions and require limited computation time and memory use without requiring complex derivatives. Simulation-based meta-heuristic methods such as Genetic Algorithms (GAs) and Harmony Search (HS) have powerful searching abilities, which can occasionally overcome several drawbacks of traditional mathematical methods. For example, HS algorithms can be conceptualized from a musical performance process and used to achieve better harmony; such optimization algorithms seek a near-global optimum determined by the value of an objective function, providing a more robust determination of musical performance than can be achieved through typical aesthetic estimation. In this paper, meta-heuristic algorithms and their applications (with a focus on GAs and HS) in hydrological science are discussed by subject, including a review of existing literature in the field. Then, recent trends in optimization are presented and a relatively new technique, the Smallest Small World Cellular Harmony Search (SSWCHS), is briefly introduced, with a summary of promising results obtained in previous studies. As a result, previous studies have demonstrated that meta-heuristic algorithms are effective tools for the development of hydrological models and the management of water resources.
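As one concrete example of the meta-heuristics discussed above, a bare-bones Harmony Search loop is sketched below for a generic continuous objective. The parameter names and values (harmony memory size, HMCR, PAR, bandwidth) are standard textbook choices and the objective is a toy stand-in, not the settings of any particular hydrological study.

```python
import random

def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3,
                   bandwidth=0.05, iterations=2000):
    """Minimal Harmony Search minimizing `objective` over the box `bounds`."""
    dim = len(bounds)
    memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iterations):
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:                 # pick a value from harmony memory
                value = random.choice(memory)[j]
                if random.random() < par:              # pitch adjustment
                    value += random.uniform(-1, 1) * bandwidth * (hi - lo)
            else:                                      # random improvisation
                value = random.uniform(lo, hi)
            new.append(min(max(value, lo), hi))
        worst = max(range(hms), key=lambda i: scores[i])
        if objective(new) < scores[worst]:             # replace the worst harmony
            memory[worst], scores[worst] = new, objective(new)
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Toy objective standing in for a calibration error (sphere function)
best_x, best_f = harmony_search(lambda x: sum(v * v for v in x), bounds=[(-5, 5)] * 4)
print(best_x, best_f)
```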
Pain in children--are we accomplishing the optimal pain treatment?
Lundeberg, Stefan
2015-01-01
Morphine, paracetamol and local anesthetics have traditionally been the foremost analgesics used in pediatric patients, but they are not always sufficiently effective and are associated with side effects. The purpose of this article is to propose alternative approaches in pain management, not always supported by substantial scientific work but drawn from a combination of science and clinical experience in the field. The scientific literature has been reviewed in part regarding different aspects of pain assessment and the analgesics used for treatment of diverse pain conditions, with a focus on procedural and acute pain. Clinical experience has been added to form the suggested improvements for accomplishing better pain management in pediatric patients. The aim of pain management in children should be a tailored analgesic medication with an individually acceptable pain level and an optimal degree of mobilization with as few side effects as possible. Simple techniques of pain control are as effective as complex techniques in pediatrics, and the technique used is not of the highest importance in achieving good pain management. Increased interest and improved education of the doctors prescribing analgesics are important in accomplishing better pain management. The optimal treatment with analgesics depends on the analysis of pain origin, and the analgesics used should be adjusted accordingly. A multimodal treatment regime is advocated for optimal analgesic effect. © 2014 John Wiley & Sons Ltd.
A Study of Penalty Function Methods for Constraint Handling with Genetic Algorithm
NASA Technical Reports Server (NTRS)
Ortiz, Francisco
2004-01-01
COMETBOARDS (Comparative Evaluation Testbed of Optimization and Analysis Routines for Design of Structures) is a design optimization test bed that can evaluate the performance of several different optimization algorithms. A few of these optimization algorithms are the sequence of unconstrained minimization techniques (SUMT), sequential linear programming (SLP) and the sequential quadratic programming techniques (SQP). A genetic algorithm (GA) is a search technique that is based on the principles of natural selection or "survival of the fittest". Instead of using gradient information, the GA uses the objective function directly in the search. The GA searches the solution space by maintaining a population of potential solutions. Then, using evolving operations such as recombination, mutation and selection, the GA creates successive generations of solutions that will evolve and take on the positive characteristics of their parents and thus gradually approach optimal or near-optimal solutions. By using the objective function directly in the search, genetic algorithms can be effectively applied to non-convex, highly nonlinear, complex problems. The genetic algorithm is not guaranteed to find the global optimum, but it is less likely to get trapped at a local optimum than traditional gradient-based search methods when the objective function is not smooth and generally well behaved. The purpose of this research is to assist in the integration of a genetic algorithm (GA) into COMETBOARDS. COMETBOARDS casts the design of structures as a constrained nonlinear optimization problem. One method used to solve a constrained optimization problem with a GA is to convert it into an unconstrained optimization problem by developing a penalty function that penalizes infeasible solutions. There have been several suggested penalty functions in the literature, each with their own strengths and weaknesses. A statistical analysis of some suggested penalty functions is performed in this study. Also, a response surface approach to robust design is used to develop a new penalty function approach. This new penalty function approach is then compared with the other existing penalty functions.
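To make the penalty-function idea concrete, the sketch below shows a generic static quadratic penalty wrapping a constrained objective so that an unconstrained search (a GA or any other optimizer) can be applied. The test problem and penalty weight are illustrative choices, not the formulations studied in COMETBOARDS.

```python
def penalized_objective(objective, inequality_constraints, weight=1e3):
    """Convert a constrained minimization problem into an unconstrained one.

    Each constraint is written g(x) <= 0; any violation is squared and added
    to the objective, so infeasible designs look artificially expensive.
    """
    def wrapped(x):
        penalty = sum(max(0.0, g(x)) ** 2 for g in inequality_constraints)
        return objective(x) + weight * penalty
    return wrapped

# Toy design problem: minimize x0^2 + x1^2 subject to x0 + x1 >= 1 (i.e. 1 - x0 - x1 <= 0)
f = penalized_objective(lambda x: x[0] ** 2 + x[1] ** 2,
                        [lambda x: 1.0 - x[0] - x[1]])
print(f([0.5, 0.5]))   # feasible point: no penalty, value 0.5
print(f([0.0, 0.0]))   # infeasible point: objective 0 plus a large penalty
```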
Development of materials for the rapid manufacture of die cast tooling
NASA Astrophysics Data System (ADS)
Hardro, Peter Jason
The focus of this research is to develop a material composition that can be processed by rapid prototyping (RP) in order to produce tooling for the die casting process. These rapidly produced tools are intended to be superior to traditional tooling production methods by offering one or more of the following advantages: reduced tooling cost, shortened tooling creation time, reduced man-hours for tool creation, increased tool life, and shortened die casting cycle time. By utilizing RP's additive build process and vast material selection, there was a prospect that die cast tooling might be produced more quickly and with superior material properties. To this end, the material properties that influence die life and cycle time were determined, and a list of materials that fulfill these "optimal" properties was highlighted. Physical testing was conducted in order to grade the processability of each of the material systems and to optimize the manufacturing process for the downselected material system. Sample specimens were produced and microscopy techniques were utilized to determine a number of physical properties of the material system. Additionally, a benchmark geometry was selected and die casting dies were produced from traditional tool materials (H13 steel) and techniques (machining) and from the newly developed materials and RP techniques (selective laser sintering (SLS) and laser engineered net shaping (LENS)). Once the tools were created, a die cast alloy was selected and a preset number of parts were shot into each tool. During tool creation, the manufacturing time and cost were closely monitored and an economic model was developed to compare traditional tooling to RP tooling. This model allows one to determine, in the early design stages, when it is advantageous to implement RP tooling and when traditional tooling would be best. The results of the physical testing and economic analysis have shown that RP tooling is able to achieve a number of the research objectives, namely, to reduce tooling cost, shorten tooling creation time, and reduce the man-hours needed for tool creation. Identifying the appropriate time to use RP tooling appears to be the most important aspect in achieving successful implementation.
NASA Astrophysics Data System (ADS)
Nietubyć, Robert; Lorkiewicz, Jerzy; Sekutowicz, Jacek; Smedley, John; Kosińska, Anna
2018-05-01
Superconducting photoinjectors have a potential to be the optimal solution for moderate and high current cw operating free electron lasers. For this application, a superconducting lead (Pb) cathode has been proposed to simplify the cathode integration into a 1.3 GHz, TESLA-type, 1.6-cell long purely superconducting gun cavity. In the proposed design, a lead film several micrometres thick is deposited onto a niobium plug attached to the cavity back wall. Traditional lead deposition techniques usually produce very non-uniform emission surfaces and often result in a poor adhesion of the layer. A pulsed plasma melting procedure reducing the non-uniformity of the lead photocathodes is presented. In order to determine the parameters optimal for this procedure, heat transfer from plasma to the film was first modelled to evaluate melting front penetration range and liquid state duration. The obtained results were verified by surface inspection of witness samples. The optimal procedure was used to prepare a photocathode plug, which was then tested in an electron gun. The quantum efficiency and the value of cavity quality factor have been found to satisfy the requirements for an injector of the European-XFEL facility.
Multigrid Strategies for Viscous Flow Solvers on Anisotropic Unstructured Meshes
NASA Technical Reports Server (NTRS)
Movriplis, Dimitri J.
1998-01-01
Unstructured multigrid techniques for relieving the stiffness associated with high-Reynolds number viscous flow simulations on extremely stretched grids are investigated. One approach consists of employing a semi-coarsening or directional-coarsening technique, based on the directions of strong coupling within the mesh, in order to construct more optimal coarse grid levels. An alternate approach is developed which employs directional implicit smoothing with regular fully coarsened multigrid levels. The directional implicit smoothing is obtained by constructing implicit lines in the unstructured mesh based on the directions of strong coupling. Both approaches yield large increases in convergence rates over the traditional explicit full-coarsening multigrid algorithm. However, maximum benefits are achieved by combining the two approaches in a coupled manner into a single algorithm. An order of magnitude increase in convergence rate over the traditional explicit full-coarsening algorithm is demonstrated, and convergence rates for high-Reynolds number viscous flows which are independent of the grid aspect ratio are obtained. Further acceleration is provided by incorporating low-Mach-number preconditioning techniques, and a Newton-GMRES strategy which employs the multigrid scheme as a preconditioner. The compounding effects of these various techniques on speed of convergence is documented through several example test cases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanchez-Parcerisa, D; Carabe-Fernandez, A
2014-06-01
Purpose. Intensity-modulated proton therapy is usually implemented with multi-field optimization of pencil-beam scanning (PBS) proton fields. However, in view of the experience with photon IMRT, proton facilities equipped with double-scattering (DS) delivery and multi-leaf collimation (MLC) could produce highly conformal dose distributions (and possibly eliminate the need for patient-specific compensators) with a clever use of their MLC field shaping, provided that an optimal inverse TPS is developed. Methods. A prototype TPS was developed in MATLAB. The dose calculation process was based on a fluence-dose algorithm on an adaptive divergent grid. A database of dose kernels was precalculated in order to allow for fast variations of the field range and modulation during optimization. The inverse planning process was based on the adaptive simulated annealing approach, with direct aperture optimization of the MLC leaves. A dosimetry study was performed on a phantom formed by three concentric semicylinders separated by 5 mm, of which the inner-most and outer-most were regarded as organs at risk (OARs), and the middle one as the PTV. We chose a concave target (which is not treatable with conventional DS fields) to show the potential of our technique. The optimizer was configured to minimize the mean dose to the OARs while keeping good coverage of the target. Results. The plan produced by the prototype TPS achieved a conformity index of 1.34, with the mean doses to the OARs below 78% of the prescribed dose. This result is hardly achievable with the traditional conformal DS technique with compensators, and it compares to what can be obtained with PBS. Conclusion. It is certainly feasible to produce IMPT fields with MLC passive scattering fields. With a fully developed treatment planning system, the produced plans can be superior to traditional DS plans in terms of plan conformity and dose to organs at risk.
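The adaptive simulated annealing driver mentioned above can be illustrated in its plainest form with the generic accept/reject loop below. This is a toy continuous example (the objective, cooling schedule and step size are arbitrary stand-ins), not the MATLAB aperture optimizer described in the abstract.

```python
import math
import random

def simulated_annealing(objective, x0, step=0.5, t0=1.0, cooling=0.995, iterations=5000):
    """Plain simulated annealing: always accept improvements, sometimes accept
    worse moves with probability exp(-delta / temperature)."""
    x, fx, temp = list(x0), objective(x0), t0
    best_x, best_f = list(x), fx
    for _ in range(iterations):
        candidate = [xi + random.uniform(-step, step) for xi in x]
        fc = objective(candidate)
        if fc < fx or random.random() < math.exp(-(fc - fx) / temp):
            x, fx = candidate, fc
            if fx < best_f:
                best_x, best_f = list(x), fx
        temp *= cooling
    return best_x, best_f

# Toy stand-in for a dose objective: distance of "leaf settings" from a target configuration
target = [1.0, -2.0, 0.5]
print(simulated_annealing(lambda x: sum((a - b) ** 2 for a, b in zip(x, target)),
                          [0.0, 0.0, 0.0]))
```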
Dodd, C; Watts, R G
2012-07-01
Prophylactic infusion of clotting factor concentrates is a developing standard of care for individuals with haemophilia. The ideal schedule and techniques of prophylactic infusions remain incompletely defined. Our aim was to determine the optimal techniques and schedules for factor prophylaxis in paediatric patients. A retrospective electronic medical record review of all children treated with prophylactic factor infusions in a single Haemophilia Treatment Center was conducted. Comparison of traditional vs. Canadian dosing regimens and primary vs. secondary prophylaxis was made. Failure of prophylaxis was defined as the first serious bleed. A total of 58 children were identified for review. Five cases were excluded (four due to high titre inhibitors and one due to repeated non-compliance), thus there were 53 total cases: 46 with severe haemophilia, 2 with moderate haemophilia, 5 with mild haemophilia, 44 with haemophilia A and 9 with haemophilia B; 32 Traditional dosing and 21 Canadian dosing regimens. Patients on primary prophylaxis had a decreased failure rate (25%) compared to children treated with secondary prophylaxis (67%) regardless of technique of prophylaxis. When compared to a 'Traditional' factor prophylaxis schedule, the 'Canadian' tailored prophylaxis protocol was comparable with the exception of a decreased use of implanted venous devices in the 'Canadian' group. Ongoing bleeding (primarily joint bleeds) occurs with all prophylactic regimens. The lowest incidence of treatment failure was noted in children who began primary prophylaxis at a young age and before initial joint bleeds. Primary prophylaxis is superior to secondary prophylaxis regardless of dosing regimen. Traditional and Canadian dosing regimens were equivalent in outcome when measured over several years of follow-up. © 2012 Blackwell Publishing Ltd.
Modeling corneal surfaces with rational functions for high-speed videokeratoscopy data compression.
Schneider, Martin; Iskander, D Robert; Collins, Michael J
2009-02-01
High-speed videokeratoscopy is an emerging technique that enables study of the corneal surface and tear-film dynamics. Unlike its static predecessor, this new technique results in a very large amount of digital data for which storage needs become significant. We aimed to design a compression technique that would use mathematical functions to parsimoniously fit corneal surface data with a minimum number of coefficients. Since the Zernike polynomial functions that have been traditionally used for modeling corneal surfaces may not necessarily correctly represent given corneal surface data in terms of its optical performance, we introduced the concept of Zernike polynomial-based rational functions. Modeling optimality criteria were employed in terms of both the rms surface error as well as the point spread function cross-correlation. The parameters of approximations were estimated using a nonlinear least-squares procedure based on the Levenberg-Marquardt algorithm. A large number of retrospective videokeratoscopic measurements were used to evaluate the performance of the proposed rational-function-based modeling approach. The results indicate that the rational functions almost always outperform the traditional Zernike polynomial approximations with the same number of coefficients.
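The fitting step can be sketched with a generic rational model p(x)/(1 + q(x)) estimated by nonlinear least squares; SciPy's `least_squares` with `method='lm'` is a Levenberg-Marquardt implementation. The one-dimensional polynomial basis and synthetic data below are only stand-ins for the Zernike basis and real corneal surfaces used in the study.

```python
import numpy as np
from scipy.optimize import least_squares

def rational_model(params, x, num_deg=3, den_deg=2):
    """Rational function: polynomial numerator over (1 + polynomial denominator)."""
    p = params[:num_deg + 1]
    q = params[num_deg + 1:]
    num = sum(c * x**k for k, c in enumerate(p))
    den = 1.0 + sum(c * x**(k + 1) for k, c in enumerate(q))
    return num / den

def residuals(params, x, y):
    return rational_model(params, x) - y

# Synthetic "surface" samples along one meridian (placeholder for videokeratoscopy data)
rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 200)
y = (0.5 + 0.2 * x - 0.8 * x**2) / (1.0 + 0.3 * x**2) + 0.01 * rng.standard_normal(x.size)

fit = least_squares(residuals, x0=np.zeros(3 + 1 + 2), args=(x, y), method="lm")
print("fitted coefficients:", np.round(fit.x, 3))
print("rms residual:", np.sqrt(np.mean(fit.fun**2)))
```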
A survey of techniques for architecting and managing GPU register file
Mittal, Sparsh
2016-04-07
To support their massively-multithreaded architecture, GPUs use a very large register file (RF), which has a capacity higher than even the L1 and L2 caches. In total contrast, traditional CPUs use a tiny RF and much larger caches to optimize latency. Due to these differences, along with the crucial impact of the RF in determining GPU performance, novel and intelligent techniques are required for managing the GPU RF. In this paper, we survey the techniques for designing and managing the GPU RF. We discuss techniques related to performance, energy and reliability aspects of the RF. To emphasize the similarities and differences between the techniques, we classify them along several parameters. Lastly, the aim of this paper is to synthesize the state-of-the-art developments in RF management and also stimulate further research in this area.
Bayer image parallel decoding based on GPU
NASA Astrophysics Data System (ADS)
Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua
2012-11-01
In photoelectrical tracking systems, Bayer images have traditionally been decompressed with a CPU-based method. However, this is too slow when the images become large, for example, 2K×2K×16bit. In order to accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA's Graphics Processing Unit (GPU), which supports the CUDA architecture. The decoding procedure can be divided into three parts: the first is a serial part, the second is a task-parallelism part, and the last is a data-parallelism part including inverse quantization, the inverse discrete wavelet transform (IDWT) as well as an image post-processing part. To reduce the execution time, the task-parallelism part is optimized by OpenMP techniques. The data-parallelism part can improve its efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared memory access optimization, coalesced memory access optimization and texture memory optimization. In particular, the IDWT can be significantly sped up by rewriting the 2D (two-dimensional) serial IDWT as a 1D parallel IDWT. In experiments with a 1K×1K×16bit Bayer image, the data-parallelism part is more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed. The experimental results show that it can achieve a 3 to 5 times speed increase compared to the CPU serial method.
Optimization of a tensegrity wing for biomimetic applications
NASA Astrophysics Data System (ADS)
Moored, Keith W., III; Taylor, Stuart A.; Bart-Smith, Hilary
2006-03-01
Current attempts to build fast, efficient, and maneuverable underwater vehicles have looked to nature for inspiration. However, they have all been based on traditional propulsive techniques, i.e. rotary motors. In the current study a promising and potentially revolutionary approach is taken that overcomes the limitations of these traditional methods-morphing structure concepts with integrated actuation and sensing. Inspiration for this work comes from the manta ray (Manta birostris) and other batoid fish. These creatures are highly maneuverable but are also able to cruise at high speeds over long distances. In this paper, the structural foundation for the biomimetic morphing wing is a tensegrity structure. A preliminary procedure is presented for developing morphing tensegrity structures that include actuating elements. A shape optimization method is used that determines actuator placement and actuation amount necessary to achieve the measured biological displacement field of a ray. Lastly, an experimental manta ray wing is presented that measures the static and dynamic pressure field acting on the ray's wings during a normal flapping cycle.
NASA Astrophysics Data System (ADS)
Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin
2015-10-01
The performance of kernel-based techniques depends on the selection of kernel parameters. Therefore, suitable parameter selection is an important problem for many kernel-based techniques. This article presents a novel technique to learn the kernel parameters in a kernel Fukunaga-Koontz Transform based (KFKT) classifier. The proposed approach determines the appropriate values of the kernel parameters by optimizing an objective function constructed from the discrimination ability of KFKT. For this purpose we have utilized the differential evolution algorithm (DEA). The new technique overcomes some disadvantages of the traditional cross-validation method, such as its high time consumption, and it can be utilized with any type of data. Experiments on target detection applications with hyperspectral images verify the effectiveness of the proposed method.
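The parameter-learning idea, letting differential evolution choose a kernel width that maximizes a separability criterion, can be sketched generically as below. The Gaussian kernel, the simple within/between-class criterion and the synthetic data are illustrative stand-ins for the KFKT-specific objective used in the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
X_pos = rng.normal(loc=1.0, scale=0.8, size=(60, 5))    # target-class samples
X_neg = rng.normal(loc=-1.0, scale=0.8, size=(60, 5))   # background samples

def rbf_kernel(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def negative_separability(params):
    """Criterion to minimize: minus the gap between within-class and
    between-class mean kernel values (a simple discrimination proxy)."""
    gamma = params[0]
    within = 0.5 * (rbf_kernel(X_pos, X_pos, gamma).mean()
                    + rbf_kernel(X_neg, X_neg, gamma).mean())
    between = rbf_kernel(X_pos, X_neg, gamma).mean()
    return -(within - between)

result = differential_evolution(negative_separability, bounds=[(1e-3, 10.0)], seed=0)
print("selected gamma:", result.x[0], "criterion value:", -result.fun)
```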
Reliability based design optimization: Formulations and methodologies
NASA Astrophysics Data System (ADS)
Agarwal, Harish
Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed. A trust region managed sequential approximate optimization methodology is employed for this purpose. Results from numerical test studies indicate that the methodology can be used for performing design optimization under severe uncertainty.
C-learning: A new classification framework to estimate optimal dynamic treatment regimes.
Zhang, Baqun; Zhang, Min
2017-12-11
A dynamic treatment regime is a sequence of decision rules, each corresponding to a decision point, that determines the next treatment based on each individual's available characteristics and treatment history up to that point. We show that identifying the optimal dynamic treatment regime can be recast as a sequential optimization problem and propose a direct sequential optimization method to estimate the optimal treatment regimes. In particular, at each decision point the optimization is equivalent to sequentially minimizing a weighted expected misclassification error. Based on this classification perspective, we propose a powerful and flexible C-learning algorithm to learn the optimal dynamic treatment regimes backward sequentially from the last stage to the first stage. C-learning is a direct optimization method that targets the decision rules themselves by exploiting powerful optimization/classification techniques, and it allows incorporation of patients' characteristics and treatment history to improve performance, hence enjoying the advantages of both the traditional outcome regression-based methods (Q- and A-learning) and the more recent direct optimization methods. The superior performance and flexibility of the proposed methods are illustrated through extensive simulation studies. © 2017, The International Biometric Society.
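The classification perspective can be sketched for a single decision point as follows: estimate a treatment contrast by outcome regression, then learn the rule with a weighted classifier whose sample weights are the contrast magnitudes. This is a hedged toy illustration, not the authors' C-learning implementation; the data, the regression models, and the classifier are placeholders.

```python
# Illustrative single-stage sketch of weighted-classification rule learning.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 2))
A = rng.integers(0, 2, size=n)                              # randomized treatment
Y = X[:, 0] * (2 * A - 1) + rng.normal(scale=0.5, size=n)   # outcome

# Q-functions for each treatment arm via simple outcome regression
q1 = LinearRegression().fit(X[A == 1], Y[A == 1])
q0 = LinearRegression().fit(X[A == 0], Y[A == 0])
contrast = q1.predict(X) - q0.predict(X)

# Weighted classification: label = preferred treatment, weight = |contrast|
labels = (contrast > 0).astype(int)
rule = DecisionTreeClassifier(max_depth=2).fit(X, labels, sample_weight=np.abs(contrast))
print("estimated rule for x=(1,0):", rule.predict([[1.0, 0.0]])[0])
```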
DOE Office of Scientific and Technical Information (OSTI.GOV)
Womersley, J.; DiGiacomo, N.; Killian, K.
1990-04-01
Detailed detector design has traditionally been divided between engineering optimization for structural integrity and subsequent physicist evaluation. The availability of CAD systems for engineering design enables the tasks to be integrated by providing tools for particle simulation within the CAD system. We believe this will speed up detector design and avoid problems due to the late discovery of shortcomings in the detector. This could occur because of the slowness of traditional verification techniques (such as detailed simulation with GEANT). One such new particle simulation tool is described. It is being used with the I-DEAS CAD package for SSC detector design at Martin-Marietta Astronautics and is to be released through the SSC Laboratory.
Poster — Thur Eve — 61: A new framework for MPERT plan optimization using MC-DAO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, M; Lloyd, S AM; Townson, R
2014-08-15
This work combines the inverse planning technique known as Direct Aperture Optimization (DAO) with Intensity Modulated Radiation Therapy (IMRT) and combined electron and photon therapy plans. In particular, determining the conditions under which Modulated Photon/Electron Radiation Therapy (MPERT) produces better dose conformality and sparing of organs at risk than traditional IMRT plans is central to the project. Presented here are the materials and methods used to generate and manipulate the DAO procedure. Included is the introduction of a powerful Java-based toolkit, the Aperture-based Monte Carlo (MC) MPERT Optimizer (AMMO), that serves as a framework for optimization and provides streamlined access to underlying particle transport packages. Comparison of the toolkit's dose calculations with those produced by the Eclipse TPS and the demonstration of a preliminary optimization are presented as first benchmarks. Excellent agreement is illustrated between the Eclipse TPS and AMMO for a 6 MV photon field. The results of a simple optimization show the functioning of the optimization framework, while significant research remains to characterize appropriate constraints.
NASA Astrophysics Data System (ADS)
Kabir, Salman; Smith, Craig; Armstrong, Frank; Barnard, Gerrit; Schneider, Alex; Guidash, Michael; Vogelsang, Thomas; Endsley, Jay
2018-03-01
Differential binary pixel technology is a threshold-based timing, readout, and image reconstruction method that utilizes the subframe partial charge transfer technique in a standard four-transistor (4T) pixel CMOS image sensor to achieve a high dynamic range video with stop motion. This technology improves low light signal-to-noise ratio (SNR) by up to 21 dB. The method is verified in silicon using a Taiwan Semiconductor Manufacturing Company's 65 nm 1.1 μm pixel technology 1 megapixel test chip array and is compared with a traditional 4 × oversampling technique using full charge transfer to show low light SNR superiority of the presented technology.
Hu, Ting; Guo, Yan-Yun; Zhou, Qin-Fan; Zhong, Xian-Ke; Zhu, Liang; Piao, Jin-Hua; Chen, Jian; Jiang, Jian-Guo
2012-09-01
Eclipta prostrata L. is a traditional Chinese medicine herb that is rich in saponins and has strong antiviral and antitumor activities. An ultrasonic-assisted extraction (UAE) technique was developed for the fast extraction of saponins from E. prostrata. The content of total saponins in E. prostrata was determined using UV/vis spectrophotometric methods. Several influential parameters, such as ethanol concentration, extraction time, temperature, and liquid/solid ratio, were investigated for the optimization of the extraction using single-factor and Box-Behnken experimental designs. Extraction conditions were optimized for maximum yield of total saponins in E. prostrata using response surface methodology (RSM) with 4 independent variables at 3 levels each. Results showed that the optimal conditions for saponin extraction were: ethanol concentration 70%, extraction time 3 h, temperature 70 °C, and liquid/solid ratio 14:1. The corresponding saponin content was 2.096%. The mathematical model developed was found to fit the experimental data well. Practical Application: Although there are wide applications of Eclipta prostrata L. as a functional food or traditional medicine due to its various bioactivities, these properties are limited by its crude extracts. Total saponins are the main active ingredient of E. prostrata. This research optimized the extraction conditions of total saponins from E. prostrata, which will provide useful reference information for further studies and offer related industries helpful guidance in practice. © 2012 Institute of Food Technologists®
Improved Ant Algorithms for Software Testing Cases Generation
Yang, Shunkun; Xu, Jiaqi
2014-01-01
Ant colony optimization (ACO) for software test case generation is a very popular topic in software testing engineering. However, traditional ACO has flaws: pheromone is relatively scarce in the early search, search efficiency is low, the search model is too simple, and the positive-feedback mechanism easily produces stagnation and premature convergence. This paper introduces improved ACO variants for software test case generation: an improved local pheromone update strategy for ant colony optimization, an improved pheromone volatilization coefficient for ant colony optimization (IPVACO), and an improved global path pheromone update strategy for ant colony optimization (IGPACO). Finally, we put forward a comprehensive improved ant colony optimization (ACIACO) based on all three methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved method can effectively improve search efficiency, restrain premature convergence, promote case coverage, and reduce the number of iterations. PMID:24883391
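A generic ant-colony loop, shown below, helps make the pheromone-update terminology concrete; it is a bare-bones skeleton rather than the paper's ACIACO, and the coverage function is a stand-in for executing the software under test and measuring covered branches.

```python
# A minimal ACO skeleton (illustrative only): ants build test cases by sampling
# one value per input variable from pheromone-weighted probabilities.
import numpy as np

rng = np.random.default_rng(2)
n_vars, n_values = 4, 8               # each test case picks one value per variable
target = rng.integers(0, n_values, n_vars)

def coverage(case):                   # placeholder fitness: fraction of "branches" matched
    return np.mean(case == target)

pheromone = np.ones((n_vars, n_values))
rho, Q = 0.1, 1.0                     # evaporation rate and deposit constant

best_case, best_cov = None, -1.0
for iteration in range(50):
    for ant in range(10):
        probs = pheromone / pheromone.sum(axis=1, keepdims=True)
        case = np.array([rng.choice(n_values, p=probs[v]) for v in range(n_vars)])
        cov = coverage(case)
        if cov > best_cov:
            best_case, best_cov = case, cov
    pheromone *= (1.0 - rho)                                   # evaporation
    pheromone[np.arange(n_vars), best_case] += Q * best_cov    # reinforce best path
print("best coverage:", best_cov, "case:", best_case)
```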
Optimal focal-plane restoration
NASA Technical Reports Server (NTRS)
Reichenbach, Stephen E.; Park, Stephen K.
1989-01-01
Image restoration can be implemented efficiently by calculating the convolution of the digital image and a small kernel during image acquisition. Processing the image in the focal-plane in this way requires less computation than traditional Fourier-transform-based techniques such as the Wiener filter and constrained least-squares filter. Here, the values of the convolution kernel that yield the restoration with minimum expected mean-square error are determined using a frequency analysis of the end-to-end imaging system. This development accounts for constraints on the size and shape of the spatial kernel and all the components of the imaging system. Simulation results indicate the technique is effective and efficient.
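The core idea, restoration by convolving with a small spatial kernel rather than filtering in the Fourier domain, can be illustrated as below; the 3x3 kernel is a generic sharpening example, not the minimum-MSE kernel derived in the paper, and the blur model is an assumption for demonstration.

```python
# Illustration only: restore a blurred, noisy image with a small spatial kernel.
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

rng = np.random.default_rng(10)
scene = rng.random((128, 128))
blurred = gaussian_filter(scene, sigma=1.0) + 0.01 * rng.normal(size=scene.shape)

kernel = np.array([[ 0.0, -0.25,  0.0],
                   [-0.25,  2.0, -0.25],
                   [ 0.0, -0.25,  0.0]])      # small kernel applied during readout
restored = convolve(blurred, kernel, mode="nearest")
print("RMS error before:", np.sqrt(np.mean((blurred - scene) ** 2)))
print("RMS error after: ", np.sqrt(np.mean((restored - scene) ** 2)))
```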
Castejón, Natalia; Luna, Pilar; Señoráns, Francisco J
2018-04-01
The edible oil processing industry involves large losses of organic solvent into the atmosphere and long extraction times. In this work, fast and environmentally friendly alternatives for the production of echium oil using green solvents are proposed. Advanced extraction techniques such as Pressurized Liquid Extraction (PLE), Microwave Assisted Extraction (MAE) and Ultrasound Assisted Extraction (UAE) were evaluated to efficiently extract omega-3 rich oil from Echium plantagineum seeds. Extractions were performed with ethyl acetate, ethanol, water and ethanol:water to develop a hexane-free processing method. Optimal PLE conditions with ethanol at 150 °C for 10 min produced an oil yield (31.2%) very similar to that of Soxhlet extraction with hexane for 8 h (31.3%). The optimized UAE method with ethanol under mild conditions (55 °C) produced a high oil yield (29.1%). Consequently, the advanced extraction techniques showed good lipid yields, and the produced echium oil had the same omega-3 fatty acid composition as traditionally extracted oil. Copyright © 2017 Elsevier Ltd. All rights reserved.
De Filippis, Luigi Alberto Ciro; Serio, Livia Maria; Galietti, Umberto
2017-01-01
Friction Stir Welding (FSW) is a solid-state welding process, based on frictional and stirring phenomena, that offers many advantages over traditional welding methods. However, several parameters can affect the quality of the produced joints. In this work, an experimental approach has been used for studying and optimizing the FSW process applied to 5754-H111 aluminum plates. In particular, the thermal behavior of the material during the process was investigated, and two thermal indexes correlated with the frictional power input, the maximum temperature and the heating rate of the material, were evaluated for different configurations of the process parameters (tool travel and rotation speeds). Moreover, other techniques (micrographs, macrographs and destructive tensile tests) were used to quantitatively support the analysis of the quality of the welded joints. The potential of the thermographic technique has been demonstrated both for monitoring the FSW process and for predicting the quality of joints in terms of tensile strength. PMID:29019948
Design optimization of highly asymmetrical layouts by 2D contour metrology
NASA Astrophysics Data System (ADS)
Hu, C. M.; Lo, Fred; Yang, Elvis; Yang, T. H.; Chen, K. C.
2018-03-01
As design pitch shrinks to the resolution limit of up-to-date optical lithography technology, the Critical Dimension (CD) variation tolerance has decreased dramatically in order to ensure device functionality. One of the critical challenges associated with the narrower CD tolerance over the whole chip area is proximity-effect control in asymmetrical layout environments. To achieve tight CD control of complex features, Critical Dimension Scanning Electron Microscope (CD-SEM) based measurements for qualifying the process window and establishing the Optical Proximity Correction (OPC) model become insufficient; 2D contour extraction techniques [1-5] have therefore become an increasingly important approach for complementing traditional CD measurement algorithms. To alleviate the long cycle time and high cost penalties of product verification, manufacturing requirements are best handled at the design stage to improve the quality and yield of ICs. In this work, an in-house 2D contour extraction platform was established for layout design optimization of a 39nm half-pitch Self-Aligned Double Patterning (SADP) process layer. Combined with the adoption of the Process Variation Band Index (PVBI), the contour extraction platform speeds up layout optimization compared with traditional methods. The platform's ability to identify and handle lithography hotspots in complex layout environments allows process-window-aware layout optimization to meet the manufacturing requirements.
NASA Astrophysics Data System (ADS)
Cao, Lu; Qiao, Dong; Xu, Jingwen
2018-02-01
Sub-Optimal Artificial Potential Function Sliding Mode Control (SOAPF-SMC) is proposed for the guidance and control of spacecraft rendezvous with obstacle avoidance; it is derived from the theories of the artificial potential function (APF), sliding mode control (SMC) and the state-dependent Riccati equation (SDRE) technique. The methodology designs a new, improved APF to describe the potential field and guarantees that the value of the potential function converges to zero at the desired state. Moreover, a nonlinear terminal sliding mode is introduced to design the sliding surface from the potential gradient of the APF, which offers a wide variety of controller design alternatives with fast, finite-time convergence. On top of this design, optimal control theory (SDRE) is employed to optimize the shape parameter of the APF, adding a degree of optimality that reduces energy consumption. The methodology is applied to the spacecraft rendezvous problem with obstacle avoidance and simulated against the traditional artificial potential function sliding mode control (APF-SMC) and SDRE to evaluate energy consumption and control precision. It is demonstrated that the presented method can avoid dynamic obstacles while satisfying the requirements of autonomous rendezvous. In addition, it saves more energy than the traditional APF-SMC and has better control accuracy than SDRE.
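A toy sketch of the artificial-potential-function idea underlying this guidance law is given below: an attractive quadratic well at the goal plus a repulsive term near each obstacle, with motion following the negative potential gradient. The gains, influence radius and geometry are illustrative assumptions, not the paper's SOAPF-SMC design.

```python
# Planar APF sketch: attractive well at the goal, repulsion inside an obstacle's
# influence radius, simple gradient descent on the combined potential.
import numpy as np

goal = np.array([10.0, 0.0])
obstacles = [np.array([5.0, 0.3])]
k_att, k_rep, d0 = 1.0, 2.0, 1.5          # attraction gain, repulsion gain, influence radius

def grad_potential(p):
    g = k_att * (p - goal)                 # gradient of 0.5*k_att*||p - goal||^2
    for obs in obstacles:
        d = np.linalg.norm(p - obs)
        if d < d0:                         # gradient of 0.5*k_rep*(1/d - 1/d0)^2
            g += k_rep * (1.0 / d0 - 1.0 / d) * (p - obs) / d ** 3
    return g

p = np.array([0.0, 0.0])
for step in range(2000):
    p = p - 0.01 * grad_potential(p)       # follow the negative potential gradient
print("final position:", p)
```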
Heyde, Brecht; Bottenus, Nick; D'hooge, Jan; Trahey, Gregg E
2017-02-01
The transverse oscillation (TO) technique can improve the estimation of tissue motion perpendicular to the ultrasound beam direction. TOs can be introduced using plane wave (PW) insonification and bilobed Gaussian apodization (BA) on receive (abbreviated as PWTO). Furthermore, the TO frequency of PWTO can be doubled after a heterodyning demodulation process is performed (abbreviated as PWTO*). This paper is concerned with identifying the limitations of the PWTO technique in the specific context of myocardial deformation imaging with phased arrays and investigating the conditions in which it remains advantageous over traditional focused (FOC) beamforming. For this purpose, several tissue phantoms were simulated using Field II, undergoing a wide range of displacement magnitudes and modes (lateral, axial, and rotational motions). The Cramer-Rao lower bound was used to optimize TO beamforming parameters and theoretically predict the fundamental tracking performance limits associated with the FOC, PWTO, and PWTO* beamforming scenarios. This framework was extended to also predict the performance for BA functions that are windowed by the physical aperture of the transducer, leading to higher lateral oscillations. It was found that windowed BA functions resulted in lower jitter errors compared with traditional BA functions. PWTO* outperformed FOC at all investigated signal-to-noise ratio (SNR) levels but only up to a certain displacement, with the advantage rapidly decreasing when the SNR increased. These results suggest that PWTO* improves lateral tracking performance, but only when interframe displacements remain relatively low. This paper concludes by translating these findings into a clinical environment by suggesting optimal scanner settings.
An Optimal Strategy for Accurate Bulge-to-disk Decomposition of Disk Galaxies
NASA Astrophysics Data System (ADS)
Gao, Hua; Ho, Luis C.
2017-08-01
The development of two-dimensional (2D) bulge-to-disk decomposition techniques has shown their advantages over traditional one-dimensional (1D) techniques, especially for galaxies with non-axisymmetric features. However, the full potential of 2D techniques has yet to be fully exploited. Secondary morphological features in nearby disk galaxies, such as bars, lenses, rings, disk breaks, and spiral arms, are seldom accounted for in 2D image decompositions, even though some image-fitting codes, such as GALFIT, are capable of handling them. We present detailed, 2D multi-model and multi-component decomposition of high-quality R-band images of a representative sample of nearby disk galaxies selected from the Carnegie-Irvine Galaxy Survey, using the latest version of GALFIT. The sample consists of five barred and five unbarred galaxies, spanning Hubble types from S0 to Sc. Traditional 1D decomposition is also presented for comparison. In detailed case studies of the 10 galaxies, we successfully model the secondary morphological features. Through a comparison of best-fit parameters obtained from different input surface brightness models, we identify morphological features that significantly impact bulge measurements. We show that nuclear and inner lenses/rings and disk breaks must be properly taken into account to obtain accurate bulge parameters, whereas outer lenses/rings and spiral arms have a negligible effect. We provide an optimal strategy to measure bulge parameters of typical disk galaxies, as well as prescriptions to estimate realistic uncertainties of them, which will benefit subsequent decomposition of a larger galaxy sample.
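For readers unfamiliar with decomposition, the 1D analogue can be sketched as a least-squares fit of a Sersic bulge plus an exponential disk to a surface-brightness profile; the paper's analysis uses full 2D GALFIT models with secondary components, so the snippet below is only a simplified illustration on synthetic data.

```python
# Toy 1D bulge-plus-disk decomposition by nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def sersic(r, Ie, Re, n):
    bn = 2.0 * n - 1.0 / 3.0                       # common approximation for b_n
    return Ie * np.exp(-bn * ((r / Re) ** (1.0 / n) - 1.0))

def bulge_disk(r, Ie, Re, n, I0, h):
    return sersic(r, Ie, Re, n) + I0 * np.exp(-r / h)

r = np.linspace(0.1, 30.0, 200)
truth = bulge_disk(r, 50.0, 2.0, 3.0, 20.0, 8.0)
data = truth * (1.0 + 0.02 * np.random.default_rng(12).normal(size=r.size))

popt, _ = curve_fit(bulge_disk, r, data, p0=[40.0, 3.0, 2.0, 15.0, 10.0],
                    bounds=([0, 0.1, 0.5, 0, 1], [1e3, 20, 8, 1e3, 50]))
print("fitted [Ie, Re, n, I0, h]:", np.round(popt, 2))
```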
NASA Astrophysics Data System (ADS)
Manzanares-Filho, N.; Albuquerque, R. B. F.; Sousa, B. S.; Santos, L. G. C.
2018-06-01
This article presents a comparative study of some versions of the controlled random search algorithm (CRSA) in global optimization problems. The basic CRSA, originally proposed by Price in 1977 and improved by Ali et al. in 1997, is taken as a starting point. Then, some new modifications are proposed to improve the efficiency and reliability of this global optimization technique. The performance of the algorithms is assessed using traditional benchmark test problems commonly invoked in the literature. This comparative study points out the key features of the modified algorithm. Finally, a comparison is also made in a practical engineering application, namely the inverse aerofoil shape design.
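The basic controlled random search loop can be sketched compactly: keep a population of points and repeatedly replace the worst point with a reflection of a randomly chosen point through the centroid of other randomly chosen points. The snippet below follows this scheme on the Rosenbrock function as an illustrative test problem and omits the improvements discussed in the article.

```python
# Compact sketch of a basic controlled random search loop (Price-style).
import numpy as np

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

rng = np.random.default_rng(3)
dim, lo, hi = 2, -2.0, 2.0
N = 25 * dim                                  # population size
pop = rng.uniform(lo, hi, size=(N, dim))
vals = np.array([rosenbrock(p) for p in pop])

for it in range(20000):
    worst = np.argmax(vals)
    idx = rng.choice(N, size=dim + 1, replace=False)
    centroid = pop[idx[:-1]].mean(axis=0)
    trial = 2.0 * centroid - pop[idx[-1]]     # reflect last point through the centroid
    if np.all(trial >= lo) and np.all(trial <= hi):
        f = rosenbrock(trial)
        if f < vals[worst]:
            pop[worst], vals[worst] = trial, f
print("best value:", vals.min(), "at", pop[np.argmin(vals)])
```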
Load Balancing in Multi Cloud Computing Environment with Genetic Algorithm
NASA Astrophysics Data System (ADS)
Vhansure, Fularani; Deshmukh, Apurva; Sumathy, S.
2017-11-01
Cloud computing is a pool of resources available on a pay-per-use model. It provides services to users, and the number of requests is increasing rapidly. Load balancing is an issue because the system cannot handle so many requests at a time; it is also known to be an NP-complete problem. In traditional systems, the functions consist of various parameter values to be maximised in order to achieve the best individual solutions. One challenge arises when there are many parameters of solutions in the system space; another challenge is to optimize functions that are much more complex. In this paper, various techniques to handle load balancing both virtually (VMs) and physically (nodes) using a genetic algorithm are discussed.
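A minimal genetic-algorithm sketch for the task-to-VM mapping problem is shown below; the encoding (one VM index per task), the makespan fitness, and the synthetic task lengths and VM speeds are assumptions made purely for illustration.

```python
# Toy GA for load balancing: minimize the makespan of a task-to-VM assignment.
import numpy as np

rng = np.random.default_rng(4)
n_tasks, n_vms = 40, 5
lengths = rng.uniform(1, 10, n_tasks)
speeds = rng.uniform(1, 3, n_vms)

def makespan(assign):
    loads = np.zeros(n_vms)
    np.add.at(loads, assign, lengths)            # accumulate task lengths per VM
    return np.max(loads / speeds)

pop = rng.integers(0, n_vms, size=(60, n_tasks))
for gen in range(200):
    fitness = np.array([makespan(ind) for ind in pop])
    pop = pop[np.argsort(fitness)]               # sort population, best first
    children = []
    while len(children) < len(pop) // 2:
        p1, p2 = pop[rng.integers(0, 20)], pop[rng.integers(0, 20)]
        cut = rng.integers(1, n_tasks)
        child = np.concatenate([p1[:cut], p2[cut:]])       # one-point crossover
        mutate = rng.random(n_tasks) < 0.05                # random reassignment mutation
        child[mutate] = rng.integers(0, n_vms, mutate.sum())
        children.append(child)
    pop[len(pop) // 2:] = np.array(children)               # replace the worst half
print("best makespan:", makespan(pop[0]))
```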
Yang, Liu; Lu, Yinzhi; Zhong, Yuanchang; Wu, Xuegang; Yang, Simon X
2015-12-26
Energy resource limitation is a severe problem in traditional wireless sensor networks (WSNs) because it restricts the lifetime of the network. Recently, the emergence of energy harvesting techniques has brought with it the expectation of overcoming this problem. In particular, it is possible for a sensor node with energy harvesting abilities to work perpetually in an energy neutral state. In this paper, a Multi-hop Energy Neutral Clustering (MENC) algorithm is proposed to construct the optimal multi-hop clustering architecture in energy harvesting WSNs, with the goal of achieving perpetual network operation. All cluster heads (CHs) in the network act as routers to transmit data to the base station (BS) cooperatively by a multi-hop communication method. In addition, by analyzing the energy consumption of intra- and inter-cluster data transmission, we give the energy neutrality constraints. Under these constraints, every sensor node can work in an energy neutral state, which in turn provides perpetual network operation. Furthermore, the minimum network data transmission cycle is mathematically derived using convex optimization techniques while the network information gathering is maximal. Simulation results show that our protocol can achieve perpetual network operation, so that consistent data delivery is guaranteed. In addition, substantial improvements in network throughput are also achieved compared with the well-known traditional clustering protocol LEACH and recent energy-harvesting-aware clustering protocols.
Cheng, Xu-Dong; Feng, Liang; Gu, Jun-Fei; Zhang, Ming-Hua; Jia, Xiao-Bin
2014-11-01
Chinese medicine prescriptions are the outcome of traditional Chinese medicine (TCM) clinical treatment decisions based on the differentiation of symptoms and signs, and they are also the basis of the secondary development of TCM. The study of prescriptions helps to elucidate the material basis of their efficacy and their pharmacological mechanisms, which is an important guarantee for the modernization of traditional Chinese medicine. Currently, there is no systematic treatment of the methods and technology for basic research on Chinese medicine prescriptions. This paper focuses on how to build an effective technology system for prescription research. Based on the "component structure" theory, a technology system comprising a four-step method ("prescription analysis, material basis screening, material basis analysis and optimization, and verification") is proposed. The technology system analyzes the material basis at three levels (Chinese medicine pieces, constituents, and compounds), which together reflect the overall efficacy of Chinese medicine. Ideas of prescription optimization and remodeling are introduced into the system. The technology system combines existing research with new techniques and methods, and it is used to explore research approaches suitable for material-basis research and prescription remodeling. The system provides a reference for the secondary development of traditional Chinese medicine and for industrial upgrading.
Dc microgrid stabilization through fuzzy control of interleaved, heterogeneous storage elements
NASA Astrophysics Data System (ADS)
Smith, Robert David
As microgrid power systems gain prevalence and renewable energy comprises greater and greater portions of distributed generation, energy storage becomes important to offset the higher variance of renewable energy sources and maximize their usefulness. One of the emerging techniques is to utilize a combination of lead-acid batteries and ultracapacitors to provide both short and long-term stabilization to microgrid systems. The different energy and power characteristics of batteries and ultracapacitors imply that they ought to be utilized in different ways. Traditional linear controls can use these energy storage systems to stabilize a power grid, but cannot effect more complex interactions. This research explores a fuzzy logic approach to microgrid stabilization. The ability of a fuzzy logic controller to regulate a dc bus in the presence of source and load fluctuations, in a manner comparable to traditional linear control systems, is explored and demonstrated. Furthermore, the expanded capabilities (such as storage balancing, self-protection, and battery optimization) of a fuzzy logic system over a traditional linear control system are shown. System simulation results are presented and validated through hardware-based experiments. These experiments confirm the capabilities of the fuzzy logic control system to regulate bus voltage, balance storage elements, optimize battery usage, and effect self-protection.
Mapping 3D genome architecture through in situ DNase Hi-C.
Ramani, Vijay; Cusanovich, Darren A; Hause, Ronald J; Ma, Wenxiu; Qiu, Ruolan; Deng, Xinxian; Blau, C Anthony; Disteche, Christine M; Noble, William S; Shendure, Jay; Duan, Zhijun
2016-11-01
With the advent of massively parallel sequencing, considerable work has gone into adapting chromosome conformation capture (3C) techniques to study chromosomal architecture at a genome-wide scale. We recently demonstrated that the inactive murine X chromosome adopts a bipartite structure using a novel 3C protocol, termed in situ DNase Hi-C. Like traditional Hi-C protocols, in situ DNase Hi-C requires that chromatin be chemically cross-linked, digested, end-repaired, and proximity-ligated with a biotinylated bridge adaptor. The resulting ligation products are optionally sheared, affinity-purified via streptavidin bead immobilization, and subjected to traditional next-generation library preparation for Illumina paired-end sequencing. Importantly, in situ DNase Hi-C obviates the dependence on a restriction enzyme to digest chromatin, instead relying on the endonuclease DNase I. Libraries generated by in situ DNase Hi-C have a higher effective resolution than traditional Hi-C libraries, which makes them valuable in cases in which high sequencing depth is allowed for, or when hybrid capture technologies are expected to be used. The protocol described here, which involves ∼4 d of bench work, is optimized for the study of mammalian cells, but it can be broadly applicable to any cell or tissue of interest, given experimental parameter optimization.
Generating compact classifier systems using a simple artificial immune system.
Leung, Kevin; Cheong, France; Cheong, Christopher
2007-10-01
Current artificial immune system (AIS) classifiers have two major problems: 1) their populations of B-cells can grow to huge proportions, and 2) optimizing one B-cell (part of the classifier) at a time does not necessarily guarantee that the B-cell pool (the whole classifier) will be optimized. In this paper, the design of a new AIS algorithm and classifier system called simple AIS is described. It is different from traditional AIS classifiers in that it takes only one B-cell, instead of a B-cell pool, to represent the classifier. This approach ensures global optimization of the whole system, and in addition, no population control mechanism is needed. The classifier was tested on seven benchmark data sets using different classification techniques and was found to be very competitive when compared to other classifiers.
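The single-B-cell idea can be illustrated with a toy clonal-selection loop in which the whole classifier is one set of class prototypes and a mutated clone is kept only if whole-classifier accuracy improves; this is a hedged sketch, not the authors' simple AIS, and the data and mutation scheme are placeholders.

```python
# Toy "single B-cell" classifier: one prototype per class, refined by
# clone-and-mutate steps judged on global (whole-classifier) accuracy.
import numpy as np

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def accuracy(prototypes):
    d = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
    return np.mean(np.argmin(d, axis=1) == y)

bcell = np.array([X[y == c].mean(axis=0) for c in (0, 1)])   # initial prototypes
best = accuracy(bcell)
for step in range(200):
    clone = bcell + rng.normal(scale=0.2, size=bcell.shape)  # hypermutation
    acc = accuracy(clone)
    if acc >= best:                                          # keep only globally better clones
        bcell, best = clone, acc
print("training accuracy:", best)
```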
Nietubyc, Robert; Lorkiewicz, Jerzy; Sekutowicz, Jacek; ...
2018-02-14
Superconducting photoinjectors have a potential to be the optimal solution for moderate and high current cw operating free electron lasers. For this application, a superconducting lead (Pb) cathode has been proposed to simplify the cathode integration into a 1.3 GHz, TESLA-type, 1.6-cell long purely superconducting gun cavity. In the proposed design, a lead film several micrometres thick is deposited onto a niobium plug attached to the cavity back wall. Traditional lead deposition techniques usually produce very non-uniform emission surfaces and often result in a poor adhesion of the layer. A pulsed plasma melting procedure reducing the non-uniformity of the lead photocathodes is presented. In order to determine the parameters optimal for this procedure, heat transfer from plasma to the film was first modelled to evaluate melting front penetration range and liquid state duration. The obtained results were verified by surface inspection of witness samples. The optimal procedure was used to prepare a photocathode plug, which was then tested in an electron gun. In conclusion, the quantum efficiency and the value of cavity quality factor have been found to satisfy the requirements for an injector of the European-XFEL facility.
NASA Astrophysics Data System (ADS)
Wu, Dongjun
Network industries have technologies characterized by a spatial hierarchy, the "network," with capital-intensive interconnections and time-dependent, capacity-limited flows of products and services through the network to customers. This dissertation studies service pricing, investment and business operating strategies for the electric power network. First-best solutions for a variety of pricing and investment problems have been studied. Genetic algorithms (GAs, methods based on the idea of natural evolution) have been evaluated as a primary means of solving complicated network problems, with respect to pricing as well as investment and other operating decisions. New constraint-handling techniques in GAs have been studied and tested. The application of such constraint-handling techniques to practical non-linear optimization problems has been tested on several complex network design problems with encouraging initial results. Genetic algorithms provide solutions that are feasible and close to optimal when the optimal solution is known; in some instances, the near-optimal solutions obtained for small problems by the proposed GA approach can only be verified by pushing the limits of currently available non-linear optimization software. The performance is far better than that of several commercially available GA programs, which are generally inadequate for solving any of the problems studied in this dissertation, primarily because of their poor handling of constraints. Genetic algorithms, if carefully designed, seem very promising in solving difficult problems which are intractable by traditional analytic methods.
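One common constraint-handling idea evaluated in work of this kind is a penalty added to the raw objective in proportion to the constraint violation; the snippet below sketches this on a small illustrative problem (minimize x0 + x1 subject to x0*x1 >= 1) and does not reproduce the dissertation's specific techniques.

```python
# Penalty-based constraint handling inside a bare-bones genetic algorithm.
import numpy as np

rng = np.random.default_rng(6)

def penalized(x, mu=100.0):
    violation = max(0.0, 1.0 - x[0] * x[1])      # constraint g(x) = 1 - x0*x1 <= 0
    return x[0] + x[1] + mu * violation ** 2

pop = rng.uniform(0.0, 5.0, size=(40, 2))
for gen in range(300):
    fit = np.array([penalized(ind) for ind in pop])
    parents = pop[np.argsort(fit)[:20]]          # truncation selection
    children = parents[rng.integers(0, 20, 20)] + rng.normal(scale=0.1, size=(20, 2))
    children = np.clip(children, 0.0, 5.0)
    pop = np.vstack([parents, children])
best = pop[np.argmin([penalized(ind) for ind in pop])]
print("best point:", best)   # should approach x0 = x1 = 1
```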
NASA Astrophysics Data System (ADS)
George, Paul; Kemeny, Andras; Colombet, Florent; Merienne, Frédéric; Chardonnet, Jean-Rémy; Thouvenin, Indira Mouttapa
2014-02-01
Immersive digital project reviews consist in using virtual reality (VR) as a tool for discussion between the various stakeholders of a project. In the automotive industry, the digital car prototype model is the common thread that binds them. It is used during immersive digital project reviews between designers, engineers, ergonomists, etc. The digital mockup is also used to assess future car architecture, habitability and perceived-quality requirements, with the aim of reducing the use of physical mockups to optimize cost, delay and quality. Among the difficulties identified by users, handling the mockup is a major one. Inspired by current uses of nomad devices (multi-touch gestures, the iPhone UI look and feel, and AR applications), we designed a navigation technique taking advantage of these popular input devices: space scrolling allows moving around the mockup. In this paper, we present the results of a study on the usability and acceptability of the proposed smartphone-based interaction metaphor compared with the traditional technique, and we provide indications of the most efficient choices for different use cases. The study was carried out in a traditional 4-sided CAVE, and its purpose is to assess a chosen set of interaction techniques to be implemented in Renault's new 5-sided 4K x 4K wall high-performance CAVE. The proposed metaphor using nomad devices is well accepted by novice VR users, and future implementation should allow efficient industrial use. Nomad devices provide an easy and user-friendly alternative to existing traditional control devices such as a joystick.
NASA Astrophysics Data System (ADS)
Kuramochi, Kazuki; Akiyama, Kazunori; Ikeda, Shiro; Tazaki, Fumie; Fish, Vincent L.; Pu, Hung-Yi; Asada, Keiichi; Honma, Mareki
2018-05-01
We propose a new imaging technique for interferometry using sparse modeling, utilizing two regularization terms: the ℓ1-norm and a new function named total squared variation (TSV) of the brightness distribution. First, we demonstrate that our technique may achieve a superresolution of ∼30% compared with the traditional CLEAN beam size using synthetic observations of two point sources. Second, we present simulated observations of three physically motivated static models of Sgr A* with the Event Horizon Telescope (EHT) to show the performance of proposed techniques in greater detail. Remarkably, in both the image and gradient domains, the optimal beam size minimizing root-mean-squared errors is ≲10% of the traditional CLEAN beam size for ℓ1+TSV regularization, and non-convolved reconstructed images have smaller errors than beam-convolved reconstructed images. This indicates that TSV is well matched to the expected physical properties of the astronomical images and the traditional post-processing technique of Gaussian convolution in interferometric imaging may not be required. We also propose a feature-extraction method to detect circular features from the image of a black hole shadow and use it to evaluate the performance of the image reconstruction. With this method and reconstructed images, the EHT can constrain the radius of the black hole shadow with an accuracy of ∼10%–20% in present simulations for Sgr A*, suggesting that the EHT would be able to provide useful independent measurements of the mass of the supermassive black holes in Sgr A* and also another primary target, M87.
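The flavor of the regularized objective can be conveyed with a toy 1D problem: a chi-squared data term plus an ℓ1 penalty and a total-squared-variation penalty, minimized by a basic proximal-gradient (ISTA-style) loop. The random measurement matrix, regularization weights and step size below are illustrative assumptions, not the EHT imaging pipeline.

```python
# Toy L1 + TSV regularized reconstruction of a 1D "image" via proximal gradient.
import numpy as np

rng = np.random.default_rng(7)
n, m = 64, 40
x_true = np.zeros(n); x_true[20:30] = 1.0
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.normal(size=m)

lam_l1, lam_tsv = 0.01, 0.1

def smooth_grad(x):
    grad = 2.0 * A.T @ (A @ x - y)         # gradient of ||y - Ax||^2
    diff = np.diff(x)                      # TSV(x) = sum of (x[i+1] - x[i])^2
    grad_tsv = np.zeros_like(x)
    grad_tsv[:-1] -= 2.0 * diff
    grad_tsv[1:] += 2.0 * diff
    return grad + lam_tsv * grad_tsv

x = np.zeros(n)
step = 0.05
for it in range(2000):
    z = x - step * smooth_grad(x)
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam_l1, 0.0)   # soft threshold (L1 prox)
print("reconstruction error:", np.linalg.norm(x - x_true))
```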
Dynamic positioning configuration and its first-order optimization
NASA Astrophysics Data System (ADS)
Xue, Shuqiang; Yang, Yuanxi; Dang, Yamin; Chen, Wu
2014-02-01
Traditional geodetic network optimization deals with static and discrete control points. The modern space geodetic network is, on the other hand, composed of moving control points in space (satellites) and on the Earth (ground stations). The network configuration composed of these facilities is essentially dynamic and continuous. Moreover, besides the position parameter which needs to be estimated, other geophysical information or signals can also be extracted from the continuous observations. The dynamic (continuous) configuration of the space network determines whether a particular frequency of signals can be identified by this system. In this paper, we employ the functional analysis and graph theory to study the dynamic configuration of space geodetic networks, and mainly focus on the optimal estimation of the position and clock-offset parameters. The principle of the D-optimization is introduced in the Hilbert space after the concept of the traditional discrete configuration is generalized from the finite space to the infinite space. It shows that the D-optimization developed in the discrete optimization is still valid in the dynamic configuration optimization, and this is attributed to the natural generalization of least squares from the Euclidean space to the Hilbert space. Then, we introduce the principle of D-optimality invariance under the combination operation and rotation operation, and propose some D-optimal simplex dynamic configurations: (1) (Semi) circular configuration in 2-dimensional space; (2) the D-optimal cone configuration and D-optimal helical configuration which is close to the GPS constellation in 3-dimensional space. The initial design of GPS constellation can be approximately treated as a combination of 24 D-optimal helixes by properly adjusting the ascending node of different satellites to realize a so-called Walker constellation. In the case of estimating the receiver clock-offset parameter, we show that the circular configuration, the symmetrical cone configuration and helical curve configuration are still D-optimal. It shows that the given total observation time determines the optimal frequency (repeatability) of moving known points and vice versa, and one way to improve the repeatability is to increase the rotational speed. Under the Newton's law of motion, the frequency of satellite motion determines the orbital altitude. Furthermore, we study three kinds of complex dynamic configurations, one of which is the combination of D-optimal cone configurations and a so-called Walker constellation composed of D-optimal helical configuration, the other is the nested cone configuration composed of n cones, and the last is the nested helical configuration composed of n orbital planes. It shows that an effective way to achieve high coverage is to employ the configuration composed of a certain number of moving known points instead of the simplex configuration (such as D-optimal helical configuration), and one can use the D-optimal simplex solutions or D-optimal complex configurations in any combination to achieve powerful configurations with flexile coverage and flexile repeatability. Alternately, how to optimally generate and assess the discrete configurations sampled from the continuous one is discussed. 
The proposed configuration optimization framework takes into account the well-known regular polygons (such as the equilateral triangle and the square) in two-dimensional space and the regular polyhedrons (regular tetrahedron, cube, regular octahedron, regular icosahedron, or regular dodecahedron). It shows that the conclusions reached by the proposed technique are more general and no longer limited by different sampling schemes. Using the conditional equation of the D-optimal nested helical configuration, the relevant issues of GNSS constellation optimization are solved, and some examples based on the GPS constellation are presented to verify the validity of the newly proposed optimization technique. The proposed technique is potentially helpful in the maintenance and quadratic optimization of a single GNSS whose orbital inclination and orbital altitude change under precession, as well as in optimally nesting GNSSs to achieve globally homogeneous coverage of the Earth.
Iodosodalite Waste Forms from Low-Temperature Aqueous Process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nam, Junghune; Chong, Saehwa; Riley, Brian J.
Nuclear energy is one option to meet rising electricity demands, although one concern of this technology is the proper capture and storage of radioisotopes produced during fission processes. One of the more difficult radioisotopes is 129I due to its volatility and poor solubility in traditional waste forms such as borosilicate glass. Iodosodalite has been previously proposed as a viable candidate to immobilize iodine due to high iodine loading and good chemical durability. Iodosodalite was traditionally synthesized using solid state and hydrothermal techniques, but this paper discusses an aqueous synthesis approach to optimize and maximize the iodosodalite yield. Products were pressed into pellets and fired with glass binders. Chemical durability and iodine retention results are included.
Vesapogu, Joshi Manohar; Peddakotla, Sujatha; Kuppa, Seetha Rama Anjaneyulu
2013-01-01
With the advancements in semiconductor technology, high power medium voltage (MV) drives are extensively used in numerous industrial applications. A challenging technical requirement of MV drives is to control the multilevel inverter (MLI) with low total harmonic distortion (%THD), satisfying the IEEE Standard 519-1992 harmonic guidelines, and with low switching losses. Among all modulation control strategies for MLIs, the Selective Harmonic Elimination (SHE) technique is one of the traditionally preferred modulation control techniques at the fundamental switching frequency, with a better harmonic profile. On the other hand, the equations formed by the SHE technique are highly non-linear in nature and may have multiple solutions, a single solution, or even no solution at a particular modulation index (MI). However, in some MV drive applications, it is required to operate over a range of MI. Providing analytical solutions of the SHE equations over the whole range of MI from 0 to 1 has been a challenging task for researchers. In this paper, an attempt is made to solve the SHE equations using deterministic and stochastic optimization methods, and a comparative harmonic analysis has been carried out. An effective algorithm which minimizes %THD with the least computational effort among all the optimization algorithms is presented. To validate the effectiveness of the proposed MPSO technique, an experiment was carried out on a low-power prototype of a three-phase CHB 11-level inverter using an FPGA-based Xilinx Spartan-3A DSP controller. The experimental results proved that the MPSO technique successfully solves the SHE equations over the whole range of MI from 0 to 1, and the %THD obtained over the major range of MI also satisfies the IEEE 519-1992 harmonic guidelines.
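The structure of the SHE equations for a five-angle (11-level) cascaded H-bridge can be sketched as a least-squares residual handed to a stochastic optimizer; the snippet below uses SciPy's differential evolution in place of the paper's MPSO, and the modulation-index convention is an assumption.

```python
# Pose SHE as a residual: match the fundamental and null the 5th, 7th, 11th,
# and 13th harmonics for five switching angles, then minimize numerically.
import numpy as np
from scipy.optimize import differential_evolution

M = 0.8            # modulation index (per-unit fundamental), assumed convention
harmonics = [5, 7, 11, 13]

def residual(theta):
    theta = np.sort(theta)                       # enforce theta1 < ... < theta5
    r = [np.sum(np.cos(theta)) - 5.0 * M]        # fundamental equation
    r += [np.sum(np.cos(h * theta)) for h in harmonics]   # harmonics to eliminate
    return float(np.sum(np.square(r)))

bounds = [(0.0, np.pi / 2)] * 5
sol = differential_evolution(residual, bounds, seed=0, tol=1e-12, maxiter=2000)
print("angles (rad):", np.sort(sol.x), "residual:", sol.fun)
```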
Novel near-infrared sampling apparatus for single kernel analysis of oil content in maize.
Janni, James; Weinstock, B André; Hagen, Lisa; Wright, Steve
2008-04-01
A method of rapid, nondestructive chemical and physical analysis of individual maize (Zea mays L.) kernels is needed for the development of high value food, feed, and fuel traits. Near-infrared (NIR) spectroscopy offers a robust nondestructive method of trait determination. However, traditional NIR bulk sampling techniques cannot be applied successfully to individual kernels. Obtaining optimized single kernel NIR spectra for applied chemometric predictive analysis requires a novel sampling technique that can account for the heterogeneous forms, morphologies, and opacities exhibited in individual maize kernels. In this study such a novel technique is described and compared to less effective means of single kernel NIR analysis. Results of the application of a partial least squares (PLS) derived model for predictive determination of percent oil content per individual kernel are shown.
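The chemometric step mentioned above can be sketched with scikit-learn's PLS regression mapping spectra to per-kernel oil content; the synthetic spectra and reference values below are placeholders for real single-kernel NIR measurements.

```python
# PLS regression from (synthetic) single-kernel NIR spectra to % oil content.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
n_kernels, n_wavelengths = 200, 300
loadings = rng.normal(size=n_wavelengths)
oil = rng.uniform(2.0, 8.0, n_kernels)                    # % oil per kernel (reference values)
spectra = np.outer(oil, loadings) + rng.normal(scale=0.5, size=(n_kernels, n_wavelengths))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, oil, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
print("R^2 on held-out kernels:", pls.score(X_te, y_te))
```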
Weak-value amplification and optimal parameter estimation in the presence of correlated noise
NASA Astrophysics Data System (ADS)
Sinclair, Josiah; Hallaji, Matin; Steinberg, Aephraim M.; Tollaksen, Jeff; Jordan, Andrew N.
2017-11-01
We analytically and numerically investigate the performance of weak-value amplification (WVA) and related parameter estimation methods in the presence of temporally correlated noise. WVA is a special instance of a general measurement strategy that involves sorting data into separate subsets based on the outcome of a second "partitioning" measurement. Using a simplified correlated noise model that can be analyzed exactly together with optimal statistical estimators, we compare WVA to a conventional measurement method. We find that WVA indeed yields a much lower variance of the parameter of interest than the conventional technique does, optimized in the absence of any partitioning measurements. In contrast, a statistically optimal analysis that employs partitioning measurements, incorporating all partitioned results and their known correlations, is found to yield an improvement—typically slight—over the noise reduction achieved by WVA. This result occurs because the simple WVA technique is not tailored to any specific noise environment and therefore does not make use of correlations between the different partitions. We also compare WVA to traditional background subtraction, a familiar technique where measurement outcomes are partitioned to eliminate unknown offsets or errors in calibration. Surprisingly, for the cases we consider, background subtraction turns out to be a special case of the optimal partitioning approach, possessing a similar typically slight advantage over WVA. These results give deeper insight into the role of partitioning measurements (with or without postselection) in enhancing measurement precision, which some have found puzzling. They also resolve previously made conflicting claims about the usefulness of weak-value amplification to precision measurement in the presence of correlated noise. We finish by presenting numerical results to model a more realistic laboratory situation of time-decaying correlations, showing that our conclusions hold for a wide range of statistical models.
Brusati, R; Giannì, A B
2005-12-01
The authors describe a surgical technique that is an alternative to traditional pre-surgical orthodontics for increasing the apical base in mandibular retrusion (class II, division I). This subapical osteotomy optimizes the lower incisal axis without dental extractions or long orthodontic treatment and, combined with genioplasty, makes it possible to obtain an ideal labio-dento-mental morphology. This procedure avoids, in some cases, the need for mandibular advancement and, when advancement is still necessary, it reduces its extent, with obvious advantages.
Numerical realization of the variational method for generating self-trapped beams
NASA Astrophysics Data System (ADS)
Duque, Erick I.; Lopez-Aguayo, Servando; Malomed, Boris A.
2018-03-01
We introduce a numerical variational method based on the Rayleigh-Ritz optimization principle for predicting two-dimensional self-trapped beams in nonlinear media. This technique overcomes the limitation of the traditional variational approximation in performing analytical Lagrangian integration and differentiation. Approximate soliton solutions of a generalized nonlinear Schrödinger equation are obtained, demonstrating robustness of the beams of various types (fundamental, vortices, multipoles, azimuthons) in the course of their propagation. The algorithm offers possibilities to produce more sophisticated soliton profiles in general nonlinear models.
Retail video analytics: an overview and survey
NASA Astrophysics Data System (ADS)
Connell, Jonathan; Fan, Quanfu; Gabbur, Prasad; Haas, Norman; Pankanti, Sharath; Trinh, Hoang
2013-03-01
Today retail video analytics has gone beyond the traditional domain of security and loss prevention by providing retailers insightful business intelligence such as store traffic statistics and queue data. Such information allows for enhanced customer experience, optimized store performance, reduced operational costs, and ultimately higher profitability. This paper gives an overview of various camera-based applications in retail as well as the state-of-the-art computer vision techniques behind them. It also presents some of the promising technical directions for exploration in retail video analytics.
Instrumentation for studying binder burnout in an immobilized plutonium ceramic wasteform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, M; Pugh, D; Herman, C
The Plutonium Immobilization Program produces a ceramic wasteform that utilizes organic binders. Several techniques and instruments were developed to study binder burnout on full size ceramic samples in a production environment. This approach provides a method for developing process parameters on production scale to optimize throughput, product quality, offgas behavior, and plant emissions. These instruments allow for offgas analysis, large-scale TGA, product quality observation, and thermal modeling. Using these tools, results from lab-scale techniques such as laser dilatometry studies and traditional TGA/DTA analysis can be integrated. Often, the sintering step of a ceramification process is the limiting process step that controls the production throughput. Therefore, optimization of sintering behavior is important for overall process success. Furthermore, the capabilities of this instrumentation allow better understanding of plant emissions of key gases: volatile organic compounds (VOCs), volatile inorganics including some halide compounds, NOx, SOx, carbon dioxide, and carbon monoxide.
Optimized Beam Sculpting with Generalized Fringe-rate Filters
NASA Astrophysics Data System (ADS)
Parsons, Aaron R.; Liu, Adrian; Ali, Zaki S.; Cheng, Carina
2016-03-01
We generalize the technique of fringe-rate filtering, whereby visibilities measured by a radio interferometer are re-weighted according to their temporal variation. As the Earth rotates, radio sources traverse through an interferometer’s fringe pattern at rates that depend on their position on the sky. Capitalizing on this geometric interpretation of fringe rates, we employ time-domain convolution kernels to enact fringe-rate filters that sculpt the effective primary beam of antennas in an interferometer. As we show, beam sculpting through fringe-rate filtering can be used to optimize measurements for a variety of applications, including mapmaking, minimizing polarization leakage, suppressing instrumental systematics, and enhancing the sensitivity of power-spectrum measurements. We show that fringe-rate filtering arises naturally in minimum variance treatments of many of these problems, enabling optimal visibility-based approaches to analyses of interferometric data that avoid systematics potentially introduced by traditional approaches such as imaging. Our techniques have recently been demonstrated in Ali et al., where new upper limits were placed on the 21 cm power spectrum from reionization, showcasing the ability of fringe-rate filtering to successfully boost sensitivity and reduce the impact of systematics in deep observations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Camingue, Pamela; Christian, Rochelle; Ng, Davin
The purpose of this study was to compare 4 different external beam radiation therapy treatment techniques for the treatment of T1-2, N0, M0 glottic cancers: traditional lateral beams with wedges (3D), 5-field intensity-modulated radiation therapy (IMRT), volumetric modulated arc therapy (VMAT), and proton therapy. Treatment plans in each technique were created for 10 patients using consistent planning parameters. The photon treatment plans were optimized using Philips Pinnacle3 v.9, and the IMRT and VMAT plans used the Direct Machine Parameter Optimization algorithm. The proton treatment plans were optimized using Varian Eclipse Proton v.8.9. The prescription used for each plan was 63 Gy in 28 fractions. The contours for spinal cord, right carotid artery, left carotid artery, and normal tissue were created with respect to the patient's bony anatomy so that proper comparisons of doses could be made with respect to volume. An example of the different isodose distributions will be shown. The data collection for comparison purposes includes: clinical treatment volume coverage, dose to spinal cord, dose to carotid arteries, and dose to normal tissue. Data comparisons will be displayed graphically showing the maximum, mean, median, and ranges of doses.
NASA Astrophysics Data System (ADS)
Chen, Xiaoguang; Liang, Lin; Liu, Fei; Xu, Guanghua; Luo, Ailing; Zhang, Sicong
2012-05-01
Nowadays, Motor Current Signature Analysis (MCSA) is widely used in the fault diagnosis and condition monitoring of machine tools. However, because the current signal has a low SNR (signal-to-noise ratio), it is difficult to identify the feature frequencies of machine tools from a complex current spectrum, in which the feature frequencies are often dense and overlapping, using traditional signal processing methods such as the FFT. In studying MCSA, it was found that entropy, which is associated with the probability distribution of any random variable, is important for frequency identification and therefore plays an important role in signal processing. To solve the problem that the feature frequencies are difficult to identify, an entropy optimization technique based on the motor current signal is presented in this paper for extracting the typical feature frequencies of machine tools while effectively suppressing disturbances. Simulated current signals were generated in MATLAB, and a measured current signal was obtained from a complex gearbox at an ironworks in Luxembourg. In the diagnosis, MCSA is combined with entropy optimization. Both simulated and experimental results show that this technique is efficient, accurate and reliable enough to extract the feature frequencies of the current signal, which provides a new strategy for the fault diagnosis and condition monitoring of machine tools.
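One simple entropy measure in this spirit is the Shannon entropy of the normalized current spectrum, which decreases as discrete feature frequencies emerge from broadband noise; the snippet below is a generic illustration with synthetic signals, not the paper's specific optimization procedure.

```python
# Spectral (Shannon) entropy of a current-like signal: lower when tones dominate.
import numpy as np

fs, n = 5000, 4096
t = np.arange(n) / fs
noise_only = np.random.default_rng(11).normal(size=n)
with_tones = noise_only + 2.0 * np.sin(2 * np.pi * 50 * t) + 0.8 * np.sin(2 * np.pi * 137 * t)

def spectral_entropy(x):
    p = np.abs(np.fft.rfft(x)) ** 2
    p = p / p.sum()                       # treat the spectrum as a probability distribution
    return -np.sum(p * np.log(p + 1e-16))

print("entropy, noise only :", spectral_entropy(noise_only))
print("entropy, with tones :", spectral_entropy(with_tones))
```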
New developments for the detection and treatment of cardiac vasculopathy.
Clerkin, Kevin J; Ali, Ziad A; Mancini, Donna M
2017-02-15
Cardiac allograft vasculopathy (CAV) is a major limitation to long-term survival after heart transplantation. Innovative new techniques to diagnose CAV have been applied to detect disease. This review will examine the current diagnostic and treatment options available to clinicians for CAV. Diagnostic modalities addressing the pathophysiology underlying CAV (arterial wall thickening and decreased coronary blood flow) improve diagnostic sensitivity when compared to traditional (angiography and dobutamine stress echocardiography) techniques. Limited options are available to prevent and treat CAV; however, progress has been made in making an earlier and more accurate diagnosis. Future research is needed to identify the optimal time to modify immunosuppression and investigate novel treatments for CAV.
Mahan, Angel F; McEvoy, Matthew D; Gravenstein, Nikolaus
2016-04-01
In modern practice, real-time ultrasound guidance is commonly employed for the placement of internal jugular vein catheters. With a new tool, such as ultrasound, comes the opportunity to refine and further optimize the ultrasound view during jugular vein catheterization. We describe jugular vein access techniques and use the long-axis view as an alternative to the commonly employed short-axis cross-section view for internal jugular vein access and cannulation. The long-axis ultrasound-guided internal jugular vein approach for internal jugular vein cannulation is a useful alternative technique that can provide better needle tip and guidewire visualization than the more traditional short-axis ultrasound view.
NASA Technical Reports Server (NTRS)
Kaneko, Hideaki; Bey, Kim S.; Hou, Gene J. W.
2004-01-01
A recent paper is generalized to a case where the spatial region is taken in R^3. The region is assumed to be a thin body, such as a panel on the wing or fuselage of an aerospace vehicle. The traditional h- as well as hp-finite element methods are applied to the surface defined in the x - y variables, while, through the thickness, the technique of the p-element is employed. A time and spatial discretization scheme, based upon an assumption of a certain weak singularity of ||u_t||_2, is used to derive an optimal a priori error estimate for the current method.
Automatic threshold optimization in nonlinear energy operator based spike detection.
Malik, Muhammad H; Saeed, Maryam; Kamboh, Awais M
2016-08-01
In neural spike sorting systems, the performance of the spike detector has to be maximized because it affects the performance of all subsequent blocks. The non-linear energy operator (NEO) is a popular spike detector due to its detection accuracy and its hardware-friendly architecture. However, it involves a thresholding stage, whose value is usually approximated and is thus not optimal. This approximation deteriorates the performance in real-time systems where signal-to-noise ratio (SNR) estimation is a challenge, especially at lower SNRs. In this paper, we propose an automatic and robust threshold calculation method using an empirical gradient technique. The method is tested on two different datasets. The results show that our optimized threshold improves the detection accuracy in both high-SNR and low-SNR signals. Boxplots are presented that provide a statistical analysis of improvements in accuracy; for instance, the 75th percentile was at 98.7% and 93.5% for the optimized NEO threshold and traditional NEO threshold, respectively.
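For context, a minimal Python sketch of NEO-based detection is given below. It uses the conventional scaled-mean threshold that the paper identifies as suboptimal; the empirical-gradient threshold proposed by the authors is not reproduced here, and the trace and scaling constant are assumed.

```python
import numpy as np

def neo(x):
    """Nonlinear energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_spikes(x, c=8.0):
    """Flag samples whose NEO output exceeds the traditional threshold c * mean(psi)."""
    psi = neo(x)
    threshold = c * psi.mean()      # c is usually tuned empirically, hence the approximation
    return np.flatnonzero(psi > threshold), threshold

# Toy trace: noise with a few injected spike waveforms.
rng = np.random.default_rng(0)
signal = 0.1 * rng.standard_normal(10_000)
for idx in (1200, 4500, 7800):
    signal[idx:idx + 5] += np.array([0.4, 1.0, 1.5, 0.8, 0.3])

spikes, thr = detect_spikes(signal)
print(f"threshold = {thr:.4f}, samples above threshold: {spikes.size}")
```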
Reirradiation of head and neck cancer using modern highly conformal techniques.
Ho, Jennifer C; Phan, Jack
2018-04-23
Locoregional disease recurrence or development of a second primary cancer after definitive radiotherapy for head and neck cancers remains a treatment challenge. Reirradiation utilizing traditional techniques has been limited by concern for serious toxicity. With the advent of newer, more precise radiotherapy techniques, such as intensity-modulated radiotherapy (IMRT), proton radiotherapy, and stereotactic body radiotherapy (SBRT), there has been renewed interest in curative-intent head and neck reirradiation. However, as most studies were retrospective, single-institutional experiences, the optimal modality is not clear. We provide a comprehensive review of the outcomes of relevant studies using these 3 head and neck reirradiation techniques, followed by an analysis and comparison of the toxicity, tumor control, concurrent systemic therapy, and prognostic factors. Overall, there is evidence that IMRT, proton therapy, and SBRT reirradiation are feasible treatment options that offer a chance for durable local control and survival. Prospective studies, particularly randomized trials, are needed. © 2018 Wiley Periodicals, Inc.
Water supply pipe dimensioning using hydraulic power dissipation
NASA Astrophysics Data System (ADS)
Sreemathy, J. R.; Rashmi, G.; Suribabu, C. R.
2017-07-01
Proper sizing of the pipes in water distribution networks plays an important role in the overall design of any water supply system. Several approaches have been applied to the design of networks from an economical point of view. Traditional optimization techniques and population-based stochastic algorithms are widely used to optimize networks, but the use of these approaches is mostly limited to the research level because they are difficult for practicing engineers, design engineers, and consulting firms to apply. Moreover, the lack of commercial software for the optimal design of water distribution systems forces practicing engineers to adopt either trial-and-error or experience-based design. This paper presents a simple approach that uses the power dissipated in each pipeline as a parameter to design the network economically, though not to the level of the global minimum cost.
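The abstract does not give the full sizing rule, but the quantity it builds on is straightforward to compute. A hedged Python sketch follows, evaluating the hydraulic power dissipated by friction in each pipe, P = rho * g * Q * h_f, with Hazen-Williams head loss; the flows, lengths, candidate diameters, and roughness coefficient are illustrative assumptions, not values from the paper.

```python
import numpy as np

RHO, G = 1000.0, 9.81  # water density (kg/m^3), gravitational acceleration (m/s^2)

def hazen_williams_headloss(q, length, diameter, c=130.0):
    """Head loss (m) for flow q (m^3/s) in a pipe of given length and diameter (m), SI form."""
    return 10.67 * length * q ** 1.852 / (c ** 1.852 * diameter ** 4.87)

def power_dissipated(q, length, diameter, c=130.0):
    """Hydraulic power (W) dissipated by friction: P = rho * g * Q * h_f."""
    return RHO * G * q * hazen_williams_headloss(q, length, diameter, c)

# Illustrative pipes (assumed flows, lengths, and candidate diameters).
pipes = [  # (flow m^3/s, length m, candidate diameters m)
    (0.050, 800.0, [0.15, 0.20, 0.25, 0.30]),
    (0.020, 500.0, [0.10, 0.15, 0.20]),
]
for q, length, diameters in pipes:
    for d in diameters:
        p = power_dissipated(q, length, d)
        print(f"Q={q:.3f} m3/s  L={length:.0f} m  D={d:.2f} m  P_dissipated={p:8.1f} W")
```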
NASA Technical Reports Server (NTRS)
Drusano, George L.
1991-01-01
The optimal sampling theory is evaluated in applications to studies related to the distribution and elimination of several drugs (including ceftazidime, piperacillin, and ciprofloxacin), using the SAMPLE module of the ADAPT II package of programs developed by D'Argenio and Schumitzky (1979, 1988) and comparing the pharmacokinetic parameter values with results obtained by traditional ten-sample design. The impact of the use of optimal sampling was demonstrated in conjunction with NONMEM (Sheiner et al., 1977) approach, in which the population is taken as the unit of analysis, allowing even fragmentary patient data sets to contribute to population parameter estimates. It is shown that this technique is applicable in both the single-dose and the multiple-dose environments. The ability to study real patients made it possible to show that there was a bimodal distribution in ciprofloxacin nonrenal clearance.
On advanced configuration enhance adaptive system optimization
NASA Astrophysics Data System (ADS)
Liu, Hua; Ding, Quanxin; Wang, Helong; Guo, Chunjie; Chen, Hongliang; Zhou, Liwei
2017-10-01
The aim is to find an effective method for structuring adaptive systems with complex functions and to establish a universally applicable solution for prototyping and optimization. As the most attractive component in an adaptive system, the wavefront corrector is constrained by conventional techniques and components, such as polarization dependence and a narrow working waveband. An advanced configuration based on a polarizing beam splitter with an optimized energy-splitting method is used to overcome these problems effectively. With the global algorithm, the bandwidth is amplified by more than five times compared with that of traditional designs. Simulation results show that the system can meet the application requirements in MTF and other related criteria. Compared with the conventional design, the system is significantly reduced in volume and weight. The determining factors are therefore the prototype selection and the system configuration; results show their effectiveness.
Transmission Scheduling and Routing Algorithms for Delay Tolerant Networks
NASA Technical Reports Server (NTRS)
Dudukovich, Rachel; Raible, Daniel E.
2016-01-01
The challenges of data processing, transmission scheduling and routing within a space network present a multi-criteria optimization problem. Long delays, intermittent connectivity, asymmetric data rates and potentially high error rates make traditional networking approaches unsuitable. The delay tolerant networking architecture and protocols attempt to mitigate many of these issues, yet transmission scheduling is largely manually configured and routes are determined by a static contact routing graph. A high level of variability exists among the requirements and environmental characteristics of different missions, some of which may allow for the use of more opportunistic routing methods. In all cases, resource allocation and constraints must be balanced with the optimization of data throughput and quality of service. Much work has been done researching routing techniques for terrestrial-based challenged networks in an attempt to optimize contact opportunities and resource usage. This paper examines several popular methods to determine their potential applicability to space networks.
Haverkort, J J Mark; Leenen, Luke P H
2017-10-01
Presently used evaluation techniques rely on 3 traditional dimensions: reports from observers, registration system data, and observational cameras. Some of these techniques are observer-dependent and are not reproducible for a second review. This proof-of-concept study aimed to test the feasibility of extending evaluation to a fourth dimension, the patient's perspective. Footage was obtained during a large, full-scale hospital trauma drill. Two mock victims were equipped with point-of-view cameras filming from the patient's head. Based on the Major Incident Hospital's first experience during the drill, a protocol was developed for a prospective, standardized method to evaluate a hospital's major incident response from the patient's perspective. The protocol was then tested in a second drill for its feasibility. New insights were gained after review of the footage. The traditional observer missed some of the evaluation points, which were seen on the point-of-view cameras. The information gained from the patient's perspective proved to be implementable into the designed protocol. Use of point-of-view camera recordings from a mock patient's perspective is a valuable addition to traditional evaluation of trauma drills and trauma care. Protocols should be designed to optimize and objectify judgement of such footage. (Disaster Med Public Health Preparedness. 2017;11:594-599).
Optimization of lattice surgery is NP-hard
NASA Astrophysics Data System (ADS)
Herr, Daniel; Nori, Franco; Devitt, Simon J.
2017-09-01
The traditional method for computation in either the surface code or in the Raussendorf model is the creation of holes or "defects" within the encoded lattice of qubits that are manipulated via topological braiding to enact logic gates. However, this is not the only way to achieve universal, fault-tolerant computation. In this work, we focus on the lattice surgery representation, which realizes transversal logic operations without destroying the intrinsic 2D nearest-neighbor properties of the braid-based surface code and achieves universality without defects and braid-based logic. For both techniques there are open questions regarding the compilation and resource optimization of quantum circuits. Optimization in braid-based logic is proving to be difficult and the classical complexity associated with this problem has yet to be determined. In the context of lattice-surgery-based logic, we can introduce an optimality condition, which corresponds to a circuit with the lowest resource requirements in terms of physical qubits and computational time, and prove that the complexity of optimizing a quantum circuit in the lattice surgery model is NP-hard.
Optimal GENCO bidding strategy
NASA Astrophysics Data System (ADS)
Gao, Feng
Electricity industries worldwide are undergoing a period of profound upheaval. The conventional vertically integrated mechanism is being replaced by a competitive market environment. Generation companies have incentives to apply novel technologies to lower production costs, for example: Combined Cycle units. Economic dispatch with Combined Cycle units becomes a non-convex optimization problem, which is difficult if not impossible to solve by conventional methods. Several techniques are proposed here: Mixed Integer Linear Programming, a hybrid method, as well as Evolutionary Algorithms. Evolutionary Algorithms share a common mechanism, stochastic searching per generation. The stochastic property makes evolutionary algorithms robust and adaptive enough to solve a non-convex optimization problem. This research implements GA, EP, and PS algorithms for economic dispatch with Combined Cycle units, and makes a comparison with classical Mixed Integer Linear Programming. The electricity market equilibrium model not only helps the Independent System Operator/Regulator analyze market performance and market power, but also provides Market Participants the ability to build optimal bidding strategies based on microeconomic analysis. Supply Function Equilibrium (SFE) is attractive compared to traditional models. This research identifies a proper SFE model, which can be applied to a multiple-period situation. The equilibrium condition using discrete-time optimal control is then developed for fuel resource constraints. Finally, the research discusses the issues of multiple equilibria and mixed strategies, which are caused by the transmission network. Additionally, an advantage of the proposed model for merchant transmission planning is discussed. A market simulator is a valuable training and evaluation tool to assist sellers, buyers, and regulators to understand market performance and make better decisions. A traditional optimization model may not be enough to consider the distributed, large-scale, and complex energy market. This research compares the performance and searching paths of different artificial life techniques such as Genetic Algorithm (GA), Evolutionary Programming (EP), and Particle Swarm (PS), and looks for a proper method to emulate Generation Companies' (GENCOs) bidding strategies. After deregulation, GENCOs face risk and uncertainty associated with the fast-changing market environment. A profit-based bidding decision support system is critical for GENCOs to keep a competitive position in the new environment. Most past research does not pay special attention to the piecewise staircase characteristic of generator offer curves. This research proposes an optimal bidding strategy based on Parametric Linear Programming. The proposed algorithm is able to handle actual piecewise staircase energy offer curves. The proposed method is then extended to incorporate incomplete information based on Decision Analysis. Finally, the author develops an optimal bidding tool (GenBidding) and applies it to the RTS96 test system.
NASA Astrophysics Data System (ADS)
Tan, Jun; Song, Peng; Li, Jinshan; Wang, Lei; Zhong, Mengxuan; Zhang, Xiaobo
2017-06-01
The surface-related multiple elimination (SRME) method is based on feedback formulation and has become one of the most preferred multiple suppression methods used. However, some differences are apparent between the predicted multiples and those in the source seismic records, which may result in conventional adaptive multiple subtraction methods being barely able to effectively suppress multiples in actual production. This paper introduces a combined adaptive multiple attenuation method based on the optimized event tracing technique and extended Wiener filtering. The method firstly uses multiple records predicted by SRME to generate a multiple velocity spectrum, then separates the original record to an approximate primary record and an approximate multiple record by applying the optimized event tracing method and short-time window FK filtering method. After applying the extended Wiener filtering method, residual multiples in the approximate primary record can then be eliminated and the damaged primary can be restored from the approximate multiple record. This method combines the advantages of multiple elimination based on the optimized event tracing method and the extended Wiener filtering technique. It is an ideal method for suppressing typical hyperbolic and other types of multiples, with the advantage of minimizing damage of the primary. Synthetic and field data tests show that this method produces better multiple elimination results than the traditional multi-channel Wiener filter method and is more suitable for multiple elimination in complicated geological areas.
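As a simplified stand-in for the adaptive subtraction step (the paper's combined event-tracing and extended Wiener scheme is more elaborate), the Python sketch below estimates a single-channel least-squares (Wiener) matching filter that shapes an SRME-style predicted multiple to the recorded trace before subtraction; the traces, filter length, and damping are assumed.

```python
import numpy as np
from scipy.linalg import toeplitz

def matching_filter(data, predicted, nfilt=21, eps=1e-3):
    """Least-squares (Wiener) filter f minimizing ||data - f * predicted||^2 (convolution)."""
    # Convolution matrix of the predicted multiples.
    col = np.concatenate([predicted, np.zeros(nfilt - 1)])
    row = np.zeros(nfilt)
    row[0] = predicted[0]
    conv = toeplitz(col, row)                      # shape (n + nfilt - 1, nfilt)
    d = np.concatenate([data, np.zeros(nfilt - 1)])
    lhs = conv.T @ conv + eps * np.eye(nfilt)      # damped normal equations
    f = np.linalg.solve(lhs, conv.T @ d)
    matched = conv @ f
    return f, data - matched[:data.size]           # filter and primary estimate

# Toy example: primary + multiple; the "predicted" multiple has an amplitude/time mismatch.
n = 400
rng = np.random.default_rng(1)
primary = np.zeros(n); primary[100] = 1.0
true_multiple = np.zeros(n); true_multiple[250] = -0.6
data = primary + true_multiple + 0.01 * rng.standard_normal(n)
predicted = np.roll(true_multiple, 2) * 0.8        # imperfect SRME-style prediction

f, primary_est = matching_filter(data, predicted)
print("residual multiple energy:", np.sum(primary_est[240:260] ** 2))
```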
Nano-Al Based Energetics: Rapid Heating Studies and a New Preparation Technique
NASA Astrophysics Data System (ADS)
Sullivan, Kyle; Kuntz, Josh; Gash, Alex; Zachariah, Michael
2011-06-01
Nano-Al based thermites have become an attractive alternative to traditional energetic formulations due to their increased energy density and high reactivity. Understanding the intrinsic reaction mechanism has been a difficult task, largely due to the lack of experimental techniques capable of rapidly and uniformly heating a sample (~10^4-10^8 K/s). The current work presents several studies on nano-Al based thermites, using rapid heating techniques. A new mechanism termed a Reactive Sintering Mechanism is proposed for nano-Al based thermites. In addition, new experimental techniques for nanocomposite thermite deposition onto thin Pt electrodes will be discussed. This combined technique will offer more precise control of the deposition, and will serve to further our understanding of the intrinsic reaction mechanism of rapidly heated energetic systems. An improved mechanistic understanding will lead to the development of optimized formulations and architectures. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
The Spin Move: A Reliable and Cost-Effective Gowning Technique for the 21st Century.
Ochiai, Derek H; Adib, Farshad
2015-04-01
Operating room efficiency (ORE) and utilization are considered one of the most crucial components of quality improvement in every hospital. We introduced a new gowning technique that could optimize ORE. The Spin Move quickly and efficiently wraps a surgical gown around the surgeon's body. This saves the operative time expended through the traditional gowning techniques. In the Spin Move, while the surgeon is approaching the scrub nurse, he or she uses the left heel as the fulcrum. The torque, which is generated by twisting the right leg around the left leg, helps the surgeon to close the gown as quickly and safely as possible. From 2003 to 2012, the Spin Move was performed in 1,725 consecutive procedures with no complication. The estimated average time was 5.3 and 7.8 seconds for the Spin Move and traditional gowning, respectively. The estimated time saving for the senior author during this period was 71.875 minutes. Approximately 20,000 orthopaedic surgeons practice in the United States. If this technique had been used, 23,958 hours could have been saved. The money saving could have been $14,374,800.00 (23,958 hours × $600/operating room hour) during the past 10 years. The Spin Move is easy to perform and reproducible. It saves operating room time and increases ORE.
Prakash, Jaya; Yalavarthy, Phaneendra K
2013-03-01
The aim was to develop a computationally efficient automated method for the optimal choice of the regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. It is deployed here within an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods, such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. The results indicate that the performance of the proposed LSQR-type and MRM-based methods is similar in terms of reconstructed image quality and superior to that of the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an optimal technique. The LSQR-type method overcomes the inherent limitation of the computationally expensive MRM-based automated approach to finding the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment.
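A hedged sketch of the general idea, not the authors' implementation: wrap a damped LSQR solve in a Nelder-Mead (simplex) search over the regularization parameter, scoring each candidate with a simple L-curve-like criterion on a toy linear inverse problem. The Jacobian, data, and scoring criterion are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
J = rng.standard_normal((200, 400))            # stand-in for a DOT Jacobian
x_true = np.zeros(400); x_true[180:220] = 1.0
y = J @ x_true + 0.05 * rng.standard_normal(200)

def objective(log_lambda):
    """Cost of the damped LSQR solution for one candidate regularization value."""
    lam = 10.0 ** float(log_lambda[0])
    x = lsqr(J, y, damp=lam, iter_lim=200)[0]
    # Crude L-curve-like tradeoff between data misfit and solution norm (a proxy criterion).
    return np.log(np.linalg.norm(J @ x - y)) + np.log(np.linalg.norm(x))

result = minimize(objective, x0=[-2.0], method="Nelder-Mead")
best_lambda = 10.0 ** result.x[0]
print(f"selected regularization parameter: {best_lambda:.3e}")
```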
Song, Hong-Tao; Zhang, Qian; Jiang, Peng; Guo, Tao; Chen, Da-Wei; He, Zhong-Gui
2006-09-01
The aim was to prepare a sustained-release formulation of a traditional Chinese medicine compound recipe by adopting time-controlled release techniques. Shuxiong tablets were chosen as the model drug. The prescription and preparation technique of the core tablets were formulated using the disintegration time and swelling volume of the core tablets in water as indices. The time-controlled release tablets were prepared by press-coating, using PEG6000, HCO and EVA as coating materials. The influences of composition, preparation process and in vitro dissolution conditions on the lag time (T(lag)) of drug release were investigated. The composition of the core tablets was as follows: 30% drug, 50% MCC and 20% CMS-Na. The T(lag) of the time-controlled release tablets was markedly altered by the PEG6000 content of the outer layer, the amount of outer-layer material and the tablet hardness. The viscosity of the dissolution medium and the basket rotation speed had less influence on T(lag) but more on the rate of drug release. The core tablets pressed with the optimized composition had favorable swelling and disintegrating properties. The Shuxiong sustained-release formulation, which contained a core tablet and two kinds of time-controlled release tablets with T(lag) of 3 h and 6 h, could release drug successively at 0 h, 3 h and 6 h in vitro. The technique makes it possible for components with markedly different physicochemical properties to be released synchronously from these preparations.
Schulz, Alexandra; Daali, Samira; Javed, Mehreen; Fuchs, Paul Christian; Brockmann, Michael; Igressa, Alhadi; Charalampaki, Patra
2016-12-01
At present, no ideal diagnostic tools exist on the market for excising cancer tissue with the required safety margins and achieving optimal aesthetic results using tissue-conserving techniques. In this prospective study, confocal laser endomicroscopy (CLE) and the traditional gold standard of magnifying glasses (MG) were compared regarding the boundaries of in vivo basal cell carcinoma and squamous cell carcinoma. Tumour diameters defined by both methods were measured and compared with those determined by histopathological examination. Nineteen patients were included in the study. The CLE technique was found to be superior to MG for defining excisional margins. Re-excision was required in 68% of the cases following excision based on MG evaluation, but only in 27% of the cases for whom excision margins were based on CLE. Our results are promising regarding the distinction between tumour and healthy surrounding tissue, and indicate that presurgical mapping of basal cell carcinoma and squamous cell carcinoma is possible. The tool itself should be developed further with special attention to early detection of skin cancer.
Sun, Lei; Jia, Yun-xian; Cai, Li-ying; Lin, Guo-yu; Zhao, Jin-song
2013-09-01
Spectrometric oil analysis (SOA) is an important technique for machine state monitoring, fault diagnosis and prognosis, and SOA-based remaining useful life (RUL) prediction has the advantage of identifying the optimal maintenance strategy for a machine system. Because of the complexity of machine systems, their health-state degradation processes cannot be simply characterized by linear models, whereas particle filtering (PF) possesses obvious advantages over traditional Kalman filtering for dealing with nonlinear and non-Gaussian systems. The PF approach was therefore applied to state forecasting based on SOA, and an RUL prediction technique based on SOA and the PF algorithm is proposed. In the prediction model, the prior probability distribution is obtained according to the estimate of the system's posterior probability, and a multi-step-ahead prediction model based on the PF algorithm is established. Finally, practical SOA data from an engine were analyzed and forecasted by the above method, and the forecasting result was compared with that of the traditional Kalman filtering method. The result fully shows the superiority and effectiveness of the proposed method.
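The degradation model and SOA data are not specified in the abstract, so the following Python sketch only illustrates the mechanics under assumed dynamics: a bootstrap particle filter tracks a nonlinear degradation state from noisy readings, then propagates the particles forward until an assumed failure threshold to obtain a remaining-useful-life distribution.

```python
import numpy as np

rng = np.random.default_rng(2)
n_particles, steps, threshold = 2000, 40, 20.0

def propagate(x):
    """Assumed nonlinear degradation dynamics (exponential-like growth plus process noise)."""
    return x * 1.05 + 0.02 + 0.05 * rng.standard_normal(x.shape)

# Simulate a "true" degradation path and noisy spectrometric readings.
true_x = np.empty(steps); true_x[0] = 1.0
for k in range(1, steps):
    true_x[k] = true_x[k - 1] * 1.05 + 0.02
obs = true_x + 0.3 * rng.standard_normal(steps)

# Bootstrap particle filter: propagate, weight by the likelihood of the reading, resample.
particles = 1.0 + 0.1 * rng.standard_normal(n_particles)
for k in range(1, steps):
    particles = propagate(particles)
    weights = np.exp(-0.5 * ((obs[k] - particles) / 0.3) ** 2) + 1e-12
    weights /= weights.sum()
    particles = particles[rng.choice(n_particles, size=n_particles, p=weights)]

# Multi-step-ahead prediction: propagate particles until the failure threshold is crossed.
horizon = np.zeros(n_particles, dtype=int)
future = particles.copy()
for h in range(1, 200):
    future = propagate(future)
    newly_failed = (horizon == 0) & (future >= threshold)
    horizon[newly_failed] = h
print(f"median predicted RUL: {np.median(horizon[horizon > 0]):.0f} steps")
```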
Residential roof condition assessment system using deep learning
NASA Astrophysics Data System (ADS)
Wang, Fan; Kerekes, John P.; Xu, Zhuoyi; Wang, Yandong
2018-01-01
The emergence of high resolution (HR) and ultra high resolution (UHR) airborne remote sensing imagery is enabling humans to move beyond traditional land cover analysis applications to the detailed characterization of surface objects. A residential roof condition assessment method using techniques from deep learning is presented. The proposed method operates on individual roofs and divides the task into two stages: (1) roof segmentation, followed by (2) condition classification of the segmented roof regions. As the first step in this process, a self-tuning method is proposed to segment the images into small homogeneous areas. The segmentation is initialized with simple linear iterative clustering followed by deep learned feature extraction and region merging, with the optimal result selected by an unsupervised index, Q. After the segmentation, a pretrained residual network is fine-tuned on the augmented roof segments using a proposed k-pixel extension technique for classification. The effectiveness of the proposed algorithm was demonstrated on both HR and UHR imagery collected by EagleView over different study sites. The proposed algorithm has yielded promising results and has outperformed traditional machine learning methods using hand-crafted features.
Martin, Simon S; Wichmann, Julian L; Weyer, Hendrik; Albrecht, Moritz H; D'Angelo, Tommaso; Leithner, Doris; Lenga, Lukas; Booz, Christian; Scholtz, Jan-Erik; Bodelle, Boris; Vogl, Thomas J; Hammerstingl, Renate
2017-10-01
The aim of this study was to investigate the impact of noise-optimized virtual monoenergetic imaging (VMI+) reconstructions on quantitative and qualitative image parameters in patients with cutaneous malignant melanoma at thoracoabdominal dual-energy computed tomography (DECT). Seventy-six patients (48 men; 66.6±13.8years) with metastatic cutaneous malignant melanoma underwent DECT of the thorax and abdomen. Images were post-processed with standard linear blending (M_0.6), traditional virtual monoenergetic (VMI), and VMI+ technique. VMI and VMI+ images were reconstructed in 10-keV intervals from 40 to 100keV. Attenuation measurements were performed in cutaneous melanoma lesions, as well as in regional lymph node, subcutaneous and in-transit metastases to calculate objective signal-to-noise (SNR) and contrast-to-noise (CNR) ratios. Five-point scales were used to evaluate overall image quality and lesion delineation by three radiologists with different levels of experience. Objective indices SNR and CNR were highest at 40-keV VMI+ series (5.6±2.6 and 12.4±3.4), significantly superior to all other reconstructions (all P<0.001). Qualitative image parameters showed highest values for 50-keV and 60-keV VMI+ reconstructions (median 5, respectively; P≤0.019) regarding overall image quality. Moreover, qualitative assessment of lesion delineation peaked in 40-keV VMI+ (median 5) and 50-keV VMI+ (median 4; P=0.055), significantly superior to all other reconstructions (all P<0.001). Low-keV noise-optimized VMI+ reconstructions substantially increase quantitative and qualitative image parameters, as well as subjective lesion delineation compared to standard image reconstruction and traditional VMI in patients with cutaneous malignant melanoma at thoracoabdominal DECT. Copyright © 2017 Elsevier B.V. All rights reserved.
A Swarm Optimization approach for clinical knowledge mining.
Christopher, J Jabez; Nehemiah, H Khanna; Kannan, A
2015-10-01
Rule-based classification is a typical data mining task that is being used in several medical diagnosis and decision support systems. The rules stored in the rule base have an impact on classification efficiency. Rule sets that are extracted with data mining tools and techniques are optimized using heuristic or meta-heuristic approaches in order to improve the quality of the rule base. In this work, a meta-heuristic approach called Wind-driven Swarm Optimization (WSO) is used. The uniqueness of this work lies in the biological inspiration that underlies the algorithm. WSO uses Jval, a new metric, to evaluate the efficiency of a rule-based classifier. Rules are extracted from decision trees. WSO is used to obtain different permutations and combinations of rules whereby the optimal ruleset that satisfies the requirement of the developer is used for predicting the test data. The performance of various extensions of decision trees, namely, RIPPER, PART, FURIA and Decision Tables, is analyzed. The efficiency of WSO is also compared with the traditional Particle Swarm Optimization. Experiments were carried out with six benchmark medical datasets. The traditional C4.5 algorithm yields 62.89% accuracy with 43 rules for the liver disorders dataset, whereas WSO yields 64.60% with 19 rules. For the heart disease dataset, C4.5 is 68.64% accurate with 98 rules, whereas WSO is 77.8% accurate with 34 rules. The normalized standard deviations for accuracy of PSO and WSO are 0.5921 and 0.5846, respectively. WSO provides accurate and concise rulesets. PSO yields results similar to those of WSO, but the novelty of WSO lies in its biological motivation and its customization for rule base optimization. The trade-off between the prediction accuracy and the size of the rule base is optimized during the design and development of a rule-based clinical decision support system. The efficiency of a decision support system relies on the content of the rule base and classification accuracy. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Design and optimization of color lookup tables on a simplex topology.
Monga, Vishal; Bala, Raja; Mo, Xuan
2012-04-01
An important computational problem in color imaging is the design of color transforms that map color between devices or from a device-dependent space (e.g., RGB/CMYK) to a device-independent space (e.g., CIELAB) and vice versa. Real-time processing constraints entail that such nonlinear color transforms be implemented using multidimensional lookup tables (LUTs). Furthermore, relatively sparse LUTs (with efficient interpolation) are employed in practice because of storage and memory constraints. This paper presents a principled design methodology rooted in constrained convex optimization to design color LUTs on a simplex topology. The use of n simplexes, i.e., simplexes in n dimensions, as opposed to traditional lattices, recently has been of great interest in color LUT design for simplex topologies that allow both more analytically tractable formulations and greater efficiency in the LUT. In this framework of n-simplex interpolation, our central contribution is to develop an elegant iterative algorithm that jointly optimizes the placement of nodes of the color LUT and the output values at those nodes to minimize interpolation error in an expected sense. This is in contrast to existing work, which exclusively designs either node locations or the output values. We also develop new analytical results for the problem of node location optimization, which reduces to constrained optimization of a large but sparse interpolation matrix in our framework. We evaluate our n -simplex color LUTs against the state-of-the-art lattice (e.g., International Color Consortium profiles) and simplex-based techniques for approximating two representative multidimensional color transforms that characterize a CMYK xerographic printer and an RGB scanner, respectively. The results show that color LUTs designed on simplexes offer very significant benefits over traditional lattice-based alternatives in improving color transform accuracy even with a much smaller number of nodes.
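The joint node/output optimization itself is not reproduced here, but the interpolation model it assumes is simple to state. The Python sketch below evaluates an n-simplex LUT entry by barycentric weights inside a tetrahedron (a 3-simplex); the RGB nodes and the approximate CIELAB outputs stored at them are illustrative values only.

```python
import numpy as np

def barycentric_weights(simplex_vertices, point):
    """Barycentric coordinates of `point` with respect to an n-simplex in R^n."""
    v0 = simplex_vertices[0]
    T = (simplex_vertices[1:] - v0).T          # n x n matrix of edge vectors
    w_rest = np.linalg.solve(T, point - v0)    # weights of vertices 1..n
    return np.concatenate([[1.0 - w_rest.sum()], w_rest])

def simplex_interpolate(simplex_vertices, node_outputs, point):
    """Interpolate LUT outputs stored at simplex vertices (linear within the simplex)."""
    w = barycentric_weights(simplex_vertices, point)
    return w @ node_outputs

# Toy 3-simplex (tetrahedron) in RGB with approximate CIELAB outputs at its nodes.
rgb_nodes = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0]])
lab_nodes = np.array([[0.0, 0.0, 0.0],
                      [53.2, 80.1, 67.2],
                      [87.7, -86.2, 83.2],
                      [32.3, 79.2, -107.9]])
print(simplex_interpolate(rgb_nodes, lab_nodes, np.array([0.2, 0.3, 0.1])))
```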
Wongvibulsin, Shannon; Lee, Suzie Seoyang; Hui, Ka-Kit
2012-01-01
Eastern and Western approaches to nutrition are unique and possess both strengths and weaknesses. Blending the best of both techniques will allow for the development of an integrative nutrition system that is more effective than either tradition on its own. The Western view to nutrition is already adopting certain attributes of the Eastern medicine philosophy as exemplified by the progression towards individualized nutrition through methods such as nutrigenetics. Nevertheless, many differences still remain between Eastern and Western nutritional concepts. Becoming fluent in both Western and Eastern methodologies can ensure the extraction of the best from both techniques for the development of a comprehensive, systematic, and holistic nutritional approach to achieve optimal health.
Liu, Rui; Milkie, Daniel E; Kerlin, Aaron; MacLennan, Bryan; Ji, Na
2014-01-27
In traditional zonal wavefront sensing for adaptive optics, after local wavefront gradients are obtained, the entire wavefront can be calculated by assuming that the wavefront is a continuous surface. Such an approach will lead to sub-optimal performance in reconstructing wavefronts which are either discontinuous or undersampled by the zonal wavefront sensor. Here, we report a new method to reconstruct the wavefront by directly measuring local wavefront phases in parallel using the multidither coherent optical adaptive technique. This method determines the relative phases of each pupil segment independently, and thus produces an accurate wavefront even for discontinuous wavefronts. We implemented this method in adaptive optical two-photon fluorescence microscopy and demonstrated its superior performance in correcting large or discontinuous aberrations.
A Comparison of FPGA and GPGPU Designs for Bayesian Occupancy Filters.
Medina, Luis; Diez-Ochoa, Miguel; Correal, Raul; Cuenca-Asensi, Sergio; Serrano, Alejandro; Godoy, Jorge; Martínez-Álvarez, Antonio; Villagra, Jorge
2017-11-11
Grid-based perception techniques in the automotive sector, based on fusing information from different sensors to produce robust perceptions of the environment, are proliferating in the industry. However, one of the main drawbacks of these techniques is the prohibitively high computing performance they have traditionally required, which is difficult to provide in embedded automotive systems. In this work, the capabilities of new computing architectures that embed these algorithms are assessed in a real car. The paper compares two ad hoc optimized designs of the Bayesian Occupancy Filter; one for General Purpose Graphics Processing Unit (GPGPU) and the other for Field-Programmable Gate Array (FPGA). The resulting implementations are compared in terms of development effort, accuracy and performance, using datasets from a realistic simulator and from a real automated vehicle.
Determining Training Device Requirements in Army Aviation Systems
NASA Technical Reports Server (NTRS)
Poumade, M. L.
1984-01-01
A decision making methodology which applies the systems approach to the training problem is discussed. Training is viewed as a total system instead of a collection of individual devices and unrelated techniques. The core of the methodology is the use of optimization techniques such as the transportation algorithm and multiobjective goal programming with training task and training device specific data. The role of computers, especially automated data bases and computer simulation models, in the development of training programs is also discussed. The approach can provide significant training enhancement and cost savings over the more traditional, intuitive form of training development and device requirements process. While given from an aviation perspective, the methodology is equally applicable to other training development efforts.
Using Physical Organic Chemistry To Shape the Course of Electrochemical Reactions.
Moeller, Kevin D
2018-05-09
While organic electrochemistry can look quite different to a chemist not familiar with the technique, the reactions are at their core organic reactions. As such, they are developed and optimized using the same physical organic chemistry principles employed during the development of any other organic reaction. Certainly, the electron transfer that triggers the reactions can require a consideration of new "wrinkles" to those principles, but those considerations are typically minimal relative to the more traditional approaches needed to manipulate the pathways available to the reactive intermediates formed downstream of that electron transfer. In this review, three very different synthetic challenges-the generation and trapping of radical cations, the development of site-selective reactions on microelectrode arrays, and the optimization of current in a paired electrolysis-are used to illustrate this point.
Microbial bioinformatics for food safety and production
Alkema, Wynand; Boekhorst, Jos; Wels, Michiel
2016-01-01
In the production of fermented foods, microbes play an important role. Optimization of fermentation processes or starter culture production traditionally was a trial-and-error approach inspired by expert knowledge of the fermentation process. Current developments in high-throughput ‘omics’ technologies allow developing more rational approaches to improve fermentation processes both from the food functionality as well as from the food safety perspective. Here, the authors thematically review typical bioinformatics techniques and approaches to improve various aspects of the microbial production of fermented food products and food safety. PMID:26082168
Molecular Diagnostic Testing for Aspergillus
Powers-Fletcher, Margaret V.
2016-01-01
The direct detection of Aspergillus nucleic acid in clinical specimens has the potential to improve the diagnosis of aspergillosis by offering more rapid and sensitive identification of invasive infections than is possible with traditional techniques, such as culture or histopathology. Molecular tests for Aspergillus have been limited historically by lack of standardization and variable sensitivities and specificities. Recent efforts have been directed at addressing these limitations and optimizing assay performance using a variety of specimen types. This review provides a summary of standardization efforts and outlines the complexities of molecular testing for Aspergillus in clinical mycology. PMID:27487954
A current review of core decompression in the treatment of osteonecrosis of the femoral head.
Pierce, Todd P; Jauregui, Julio J; Elmallah, Randa K; Lavernia, Carlos J; Mont, Michael A; Nace, James
2015-09-01
The review describes the following: (1) how traditional core decompression is performed, (2) adjunctive treatments, (3) the multiple percutaneous drilling technique, and (4) the overall outcomes of these procedures. Core decompression has optimal outcomes when used in the earliest, precollapse disease stages. More recent studies have reported excellent outcomes with percutaneous drilling. Furthermore, adjunct treatment methods combining core decompression with growth factors, bone morphogenic proteins, stem cells, and bone grafting have demonstrated positive results; however, larger randomized trials are needed to evaluate their overall efficacy.
Modern Computational Techniques for the HMMER Sequence Analysis
2013-01-01
This paper focuses on the latest research and critical reviews on modern computing architectures, software and hardware accelerated algorithms for bioinformatics data analysis with an emphasis on one of the most important sequence analysis applications—hidden Markov models (HMM). We show the detailed performance comparison of sequence analysis tools on various computing platforms recently developed in the bioinformatics society. The characteristics of the sequence analysis, such as data and compute-intensive natures, make it very attractive to optimize and parallelize by using both traditional software approach and innovated hardware acceleration technologies. PMID:25937944
Numerical realization of the variational method for generating self-trapped beams.
Duque, Erick I; Lopez-Aguayo, Servando; Malomed, Boris A
2018-03-19
We introduce a numerical variational method based on the Rayleigh-Ritz optimization principle for predicting two-dimensional self-trapped beams in nonlinear media. This technique overcomes the limitation of the traditional variational approximation in performing analytical Lagrangian integration and differentiation. Approximate soliton solutions of a generalized nonlinear Schrödinger equation are obtained, demonstrating robustness of the beams of various types (fundamental, vortices, multipoles, azimuthons) in the course of their propagation. The algorithm offers possibilities to produce more sophisticated soliton profiles in general nonlinear models.
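A minimal numerical Rayleigh-Ritz sketch in the same spirit, though for the plain 2D cubic nonlinear Schrödinger equation rather than the generalized model of the paper: the reduced action of a Gaussian trial beam is evaluated by quadrature on a grid (so no analytical Lagrangian integration is needed), and its stationary point over amplitude and width is located with a root finder. The propagation constant, grid, and ansatz are assumptions; for beta = 1 the Gaussian-ansatz stationary point is A = 2, w = 1, which the numerical result should approach.

```python
import numpy as np
from scipy.optimize import root

beta = 1.0                         # propagation constant (assumed value)
L, N = 12.0, 256                   # grid half-width and resolution
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
R2 = X ** 2 + Y ** 2

def reduced_action(params):
    """Action of the stationary 2D cubic NLS, evaluated numerically on a Gaussian ansatz."""
    A, w = params
    u = A * np.exp(-R2 / (2.0 * w ** 2))
    ux, uy = np.gradient(u, dx, dx)
    integrand = ux ** 2 + uy ** 2 + beta * u ** 2 - 0.5 * u ** 4
    return integrand.sum() * dx * dx

def action_gradient(params, h=1e-4):
    """Central-difference gradient of the reduced action (no analytic integration needed)."""
    g = np.zeros(2)
    for i in range(2):
        p_plus, p_minus = np.array(params, float), np.array(params, float)
        p_plus[i] += h; p_minus[i] -= h
        g[i] = (reduced_action(p_plus) - reduced_action(p_minus)) / (2.0 * h)
    return g

# Start away from the trivial zero-amplitude root; poor guesses may converge to A = 0.
sol = root(action_gradient, x0=[1.5, 1.5])
print("amplitude, width =", sol.x)
```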
Gdowski, Andrew; Johnson, Kaitlyn; Shah, Sunil; Gryczynski, Ignacy; Vishwanatha, Jamboor; Ranjan, Amalendu
2018-02-12
The process of optimization and fabrication of nanoparticle synthesis for preclinical studies can be challenging and time consuming. Traditional small scale laboratory synthesis techniques suffer from batch to batch variability. Additionally, the parameters used in the original formulation must be re-optimized due to differences in fabrication techniques for clinical production. Several low flow microfluidic synthesis processes have been reported in recent years for developing nanoparticles that are a hybrid between polymeric nanoparticles and liposomes. However, use of high flow microfluidic synthetic techniques has not been described for this type of nanoparticle system, which we will term as nanolipomer. In this manuscript, we describe the successful optimization and functional assessment of nanolipomers fabricated using a microfluidic synthesis method under high flow parameters. The optimal total flow rate for synthesis of these nanolipomers was found to be 12 ml/min and flow rate ratio 1:1 (organic phase: aqueous phase). The PLGA polymer concentration of 10 mg/ml and a DSPE-PEG lipid concentration of 10% w/v provided optimal size, PDI and stability. Drug loading and encapsulation of a representative hydrophobic small molecule drug, curcumin, was optimized and found that high encapsulation efficiency of 58.8% and drug loading of 4.4% was achieved at 7.5% w/w initial concentration of curcumin/PLGA polymer. The final size and polydispersity index of the optimized nanolipomer was 102.11 nm and 0.126, respectively. Functional assessment of uptake of the nanolipomers in C4-2B prostate cancer cells showed uptake at 1 h and increased uptake at 24 h. The nanolipomer was more effective in the cell viability assay compared to free drug. Finally, assessment of in vivo retention in mice of these nanolipomers revealed retention for up to 2 h and were completely cleared at 24 h. In this study, we have demonstrated that a nanolipomer formulation can be successfully synthesized and easily scaled up through a high flow microfluidic system with optimal characteristics. The process of developing nanolipomers using this methodology is significant as the same optimized parameters used for small batches could be translated into manufacturing large scale batches for clinical trials through parallel flow systems.
Decomposition-Based Decision Making for Aerospace Vehicle Design
NASA Technical Reports Server (NTRS)
Borer, Nicholas K.; Mavris, DImitri N.
2005-01-01
Most practical engineering systems design problems have multiple and conflicting objectives. Furthermore, the satisfactory attainment level for each objective ( requirement ) is likely uncertain early in the design process. Systems with long design cycle times will exhibit more of this uncertainty throughout the design process. This is further complicated if the system is expected to perform for a relatively long period of time, as now it will need to grow as new requirements are identified and new technologies are introduced. These points identify a need for a systems design technique that enables decision making amongst multiple objectives in the presence of uncertainty. Traditional design techniques deal with a single objective or a small number of objectives that are often aggregates of the overarching goals sought through the generation of a new system. Other requirements, although uncertain, are viewed as static constraints to this single or multiple objective optimization problem. With either of these formulations, enabling tradeoffs between the requirements, objectives, or combinations thereof is a slow, serial process that becomes increasingly complex as more criteria are added. This research proposal outlines a technique that attempts to address these and other idiosyncrasies associated with modern aerospace systems design. The proposed formulation first recasts systems design into a multiple criteria decision making problem. The now multiple objectives are decomposed to discover the critical characteristics of the objective space. Tradeoffs between the objectives are considered amongst these critical characteristics by comparison to a probabilistic ideal tradeoff solution. The proposed formulation represents a radical departure from traditional methods. A pitfall of this technique is in the validation of the solution: in a multi-objective sense, how can a decision maker justify a choice between non-dominated alternatives? A series of examples help the reader to observe how this technique can be applied to aerospace systems design and compare the results of this so-called Decomposition-Based Decision Making to more traditional design approaches.
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.
2010-01-01
Structural designs generated by the traditional method, the optimization method, and the stochastic design concept are compared. In the traditional method, the constraints are manipulated to obtain the design and the weight is back-calculated. In design optimization, the weight of a structure becomes the merit function, constraints are imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and the solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions were produced by all three methods. The variation in the weight calculated by the methods was modest. Some variation was noticed in the designs calculated by the methods, which may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to its fabrication. The traditional design method can be improved when the simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure. Weight can be reduced to a small value for the most failure-prone design. Probabilistic modeling of load and material properties remained a challenge.
The trade-off between morphology and control in the co-optimized design of robots.
Rosendo, Andre; von Atzigen, Marco; Iida, Fumiya
2017-01-01
Conventionally, robot morphologies are developed through simulations and calculations, and different control methods are applied afterwards. Assuming that simulations and predictions are simplified representations of our reality, how sure can roboticists be that the chosen morphology is the most adequate for the possible control choices in the real-world? Here we study the influence of the design parameters in the creation of a robot with a Bayesian morphology-control (MC) co-optimization process. A robot autonomously creates child robots from a set of possible design parameters and uses Bayesian Optimization (BO) to infer the best locomotion behavior from real world experiments. Then, we systematically change from an MC co-optimization to a control-only (C) optimization, which better represents the traditional way that robots are developed, to explore the trade-off between these two methods. We show that although C processes can greatly improve the behavior of poor morphologies, such agents are still outperformed by MC co-optimization results with as few as 25 iterations. Our findings, on one hand, suggest that BO should be used in the design process of robots for both morphological and control parameters to reach optimal performance, and on the other hand, point to the downfall of current design methods in face of new search techniques.
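To make the MC co-optimization loop concrete, here is a hedged Python sketch using scikit-optimize's gp_minimize (assumed to be installed) with the 25-trial budget mentioned above; the locomotion cost function and the leg_length and gait_frequency parameters are hypothetical stand-ins for real-world robot trials, not the authors' setup.

```python
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

def locomotion_cost(params):
    """Stand-in for one real-world trial: returns negative walking speed (to be minimized)."""
    leg_length, gait_frequency = params            # hypothetical morphology/control parameters
    speed = (np.exp(-(leg_length - 0.12) ** 2 / 0.002)
             * np.exp(-(gait_frequency - 2.0) ** 2 / 0.5)
             + 0.02 * np.random.randn())           # measurement noise
    return -speed

search_space = [Real(0.05, 0.25, name="leg_length"),      # morphology (m)
                Real(0.5, 4.0, name="gait_frequency")]    # control (Hz)

result = gp_minimize(locomotion_cost, search_space, n_calls=25, random_state=0)
print("best parameters:", result.x, "best speed:", -result.fun)
```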
Optimal Synthesis of Compliant Mechanisms using Subdivision and Commercial FEA (DETC2004-57497)
NASA Technical Reports Server (NTRS)
Hull, Patrick V.; Canfield, Stephen
2004-01-01
The field of distributed-compliance mechanisms has seen significant work in developing suitable topology optimization tools for their design. These optimal design tools have grown out of the techniques of structural optimization. This paper will build on the previous work in topology optimization and compliant mechanism design by proposing an alternative design space parameterization through control points and adding another step to the process, that of subdivision. The control points allow a specific design to be represented as a solid model during the optimization process. The process of subdivision creates an additional number of control points that help smooth the surface (for example, a C^2 continuous surface, depending on the method of subdivision chosen) creating a manufacturable design free of some traditional numerical instabilities. Note that these additional control points do not add to the number of design parameters. This alternative parameterization and description as a solid model effectively and completely separates the design variables from the analysis variables during the optimization procedure. The motivation behind this work is to create an automated design tool from task definition to functional prototype created on a CNC or rapid-prototype machine. This paper will describe the proposed compliant mechanism design process and will demonstrate the procedure on several examples common in the literature.
Selecting a restoration technique to minimize OCR error.
Cannon, M; Fugate, M; Hush, D R; Scovel, C
2003-01-01
This paper introduces a learning problem related to the task of converting printed documents to ASCII text files. The goal of the learning procedure is to produce a function that maps documents to restoration techniques in such a way that on average the restored documents have minimum optical character recognition error. We derive a general form for the optimal function and use it to motivate the development of a nonparametric method based on nearest neighbors. We also develop a direct method of solution based on empirical error minimization for which we prove a finite sample bound on estimation error that is independent of distribution. We show that this empirical error minimization problem is an extension of the empirical optimization problem for traditional M-class classification with general loss function and prove computational hardness for this problem. We then derive a simple iterative algorithm called generalized multiclass ratchet (GMR) and prove that it produces an optimal function asymptotically (with probability 1). To obtain the GMR algorithm we introduce a new data map that extends Kesler's construction for the multiclass problem and then apply an algorithm called Ratchet to this mapped data, where Ratchet is a modification of the Pocket algorithm . Finally, we apply these methods to a collection of documents and report on the experimental results.
CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila
2015-03-10
We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using a MCMC exploration. Both methods find the same maximum likelihood region, however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.
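A minimal global-best PSO loop in Python, calibrating two parameters of a toy model against mock "observed" data, is sketched below to illustrate the mechanics; the SAG model, observables, and likelihood are not reproduced, and all numerical values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Mock "observable": a toy stellar-mass-function-like curve with known true parameters.
masses = np.linspace(9.0, 12.0, 30)               # log10(M*/Msun) bins
def model(params):
    norm, slope = params
    return norm * 10.0 ** (-slope * (masses - 9.0))
observed = model([1e-2, 0.8]) * (1 + 0.05 * rng.standard_normal(masses.size))

def neg_log_likelihood(params):
    resid = (model(params) - observed) / (0.05 * observed + 1e-12)
    return 0.5 * np.sum(resid ** 2)

# Basic global-best PSO with inertia plus cognitive and social pulls.
n_particles, n_iter = 30, 100
lo, hi = np.array([1e-4, 0.1]), np.array([1e-1, 2.0])
pos = lo + (hi - lo) * rng.random((n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([neg_log_likelihood(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([neg_log_likelihood(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("calibrated parameters:", gbest)
```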
A Darwinian approach to control-structure design
NASA Technical Reports Server (NTRS)
Zimmerman, David C.
1993-01-01
Genetic algorithms (GA's), as introduced by Holland (1975), are one form of directed random search. The form of direction is based on Darwin's 'survival of the fittest' theories. GA's are radically different from the more traditional design optimization techniques. GA's work with a coding of the design variables, as opposed to working with the design variables directly. The search is conducted from a population of designs (i.e., from a large number of points in the design space), unlike the traditional algorithms which search from a single design point. The GA requires only objective function information, as opposed to gradient or other auxiliary information. Finally, the GA is based on probabilistic transition rules, as opposed to deterministic rules. These features allow the GA to attack problems with local-global minima, discontinuous design spaces and mixed variable problems, all in a single, consistent framework.
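The features listed above (a coded population of designs, objective-only evaluation, and probabilistic selection, crossover, and mutation) can be illustrated with a small genetic algorithm; the bit-string encoding and the toy objective below are illustrative assumptions, not a specific design problem from the report.

```python
import numpy as np

rng = np.random.default_rng(2)
n_bits, pop_size, n_gens, p_mut = 16, 40, 100, 0.02

def decode(bits):
    """Map a bit string to a real design variable in [-2, 2]."""
    value = int("".join(map(str, bits)), 2) / (2 ** n_bits - 1)
    return -2.0 + 4.0 * value

def fitness(bits):
    """Objective-only information; no gradients are required."""
    x = decode(bits)
    return -(x ** 4 - 3 * x ** 2 + x)            # maximize

pop = rng.integers(0, 2, (pop_size, n_bits))
for _ in range(n_gens):
    fit = np.array([fitness(ind) for ind in pop])
    # tournament selection: compare pairs of randomly chosen individuals
    idx = rng.integers(0, pop_size, (pop_size, 2))
    parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # one-point crossover between consecutive parents
    children = parents.copy()
    for i in range(0, pop_size - 1, 2):
        cut = rng.integers(1, n_bits)
        children[i, cut:], children[i + 1, cut:] = (
            parents[i + 1, cut:].copy(), parents[i, cut:].copy())
    # bit-flip mutation with a small probability per bit
    flip = rng.random(children.shape) < p_mut
    pop = np.where(flip, 1 - children, children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best design variable:", decode(best))
```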
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, B; Liu, B; Li, Y
2016-06-15
Purpose: Treatment plan optimization in multi-Co60 source focused radiotherapy with multiple isocenters is challenging, because the dose distribution is normalized to the maximum dose during optimization and evaluation. The objective functions are traditionally defined based on the relative dosimetric distribution. This study presents an alternative absolute dose-volume constraint (ADC) based deterministic optimization framework (ADC-DOF). Methods: The initial isocenters are placed on the eroded target surface. Collimator size is chosen based on the area of the 2D contour on the corresponding axial slice. The isocenter spacing is determined by adjacent collimator sizes. The weights are optimized by minimizing the deviation from the ADCs using the steepest descent technique. An iterative procedure is developed to reduce the number of isocenters, where the isocenter with the lowest weight is removed without affecting plan quality. The ADC-DOF is compared with the genetic algorithm (GA) using the same arbitrary shaped target (254cc), with a 15mm margin ring structure representing normal tissues. Results: For ADC-DOF, the ADCs imposed on the target are D100 > 10 Gy, D50 < 12 Gy, D10 < 15 Gy and D0 < 20 Gy, and on the ring D40 < 10 Gy. The resulting D100, D50, D10 and D0 for the target are 9.9 Gy, 12.0 Gy, 14.1 Gy and 16.2 Gy, and the resulting D40 for the ring is 10.2 Gy. The objectives of the GA are to maximize the 50% isodose target coverage (TC) while minimizing the dose delivered to the ring structure, which results in 97% TC and 47.2% average dose in the ring structure. For the ADC-DOF (GA) techniques, 20 out of 38 (10 out of 12) initial isocenters are used in the final plan, and the computation time is 8.7s (412.2s) on an i5 computer. Conclusion: We have developed a new optimization technique using ADCs and deterministic optimization. Compared with GA, ADC-DOF uses more isocenters but is faster and more robust, and achieves better conformity. For future work, we will focus on developing a more effective mechanism for initial isocenter determination.
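The weight-optimization step can be illustrated with a toy steepest-descent loop that penalizes violations of absolute dose constraints; the dose-deposition matrices, constraint values, and step size below are made-up placeholders, and the full dose-volume constraints of the ADC-DOF are simplified here to minimum-target-dose and maximum-ring-dose penalties.

```python
import numpy as np

rng = np.random.default_rng(3)
n_target, n_ring, n_iso = 500, 300, 12

# Hypothetical dose-deposition matrices: dose per unit weight of each isocenter.
D_target = rng.uniform(0.2, 1.0, (n_target, n_iso))
D_ring = rng.uniform(0.0, 0.4, (n_ring, n_iso))

d_min_target, d_max_ring = 10.0, 10.0        # simplified absolute dose constraints (Gy)
w = np.full(n_iso, 5.0)                      # isocenter weights to be optimized
step = 1e-4

for _ in range(2000):
    dose_t = D_target @ w
    dose_r = D_ring @ w
    # quadratic penalties on constraint violations (a surrogate for the ADCs)
    under = np.minimum(dose_t - d_min_target, 0.0)
    over = np.maximum(dose_r - d_max_ring, 0.0)
    grad = 2 * (D_target.T @ under) + 2 * (D_ring.T @ over)
    w = np.maximum(w - step * grad, 0.0)     # steepest descent, weights kept nonnegative

# mimic the iterative isocenter-reduction step by flagging the lowest-weight isocenter
print("penalty value:", np.sum(under ** 2) + np.sum(over ** 2))
print("candidate isocenter to remove:", np.argmin(w))
```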
Dufield, Dawn R; Radabaugh, Melissa R
2012-02-01
There is an increased emphasis on hyphenated techniques such as immunoaffinity LC/MS/MS (IA-LC/MS/MS) or IA-LC/MRM. These techniques offer competitive advantages with respect to sensitivity and selectivity over traditional LC/MS and are complementary to ligand binding assays (LBAs) or ELISAs. However, these techniques are not entirely straightforward, and there are several tips and tricks to routine sample analysis. We describe here our methods and procedures for performing online IA-LC/MS/MS, including a detailed protocol for the preparation of antibody (Ab) enrichment columns. We have included sample trapping and Ab methods. Furthermore, we highlight tips, tricks, and both minimal and optimal approaches. This technology has been shown to be viable for several applications, species and fluids, from small molecules to proteins and from biomarkers to PK assays. Copyright © 2011 Elsevier Inc. All rights reserved.
Moller, Arlen C.; Merchant, Gina; Conroy, David E.; West, Robert; Hekler, Eric B.; Kugler, Kari C.; Michie, Susan
2017-01-01
As more behavioral health interventions move from traditional to digital platforms, the application of evidence-based theories and techniques may be doubly advantageous. First, it can expedite digital health intervention development, improving efficacy, and increasing reach. Second, moving behavioral health interventions to digital platforms presents researchers with novel (potentially paradigm shifting) opportunities for advancing theories and techniques. In particular, the potential for technology to revolutionize theory refinement is made possible by leveraging the proliferation of “real-time” objective measurement and “big data” commonly generated and stored by digital platforms. Much more could be done to realize this potential. This paper offers proposals for better leveraging the potential advantages of digital health platforms, and reviews three of the cutting edge methods for doing so: optimization designs, dynamic systems modeling, and social network analysis. PMID:28058516
Machine-Learning Techniques Applied to Antibacterial Drug Discovery
Durrant, Jacob D.; Amaro, Rommie E.
2014-01-01
The emergence of drug-resistant bacteria threatens to catapult humanity back to the pre-antibiotic era. Even now, multi-drug-resistant bacterial infections annually result in millions of hospital days, billions in healthcare costs, and, most importantly, tens of thousands of lives lost. As many pharmaceutical companies have abandoned antibiotic development in search of more lucrative therapeutics, academic researchers are uniquely positioned to fill the resulting vacuum. Traditional high-throughput screens and lead-optimization efforts are expensive and labor intensive. Computer-aided drug discovery techniques, which are cheaper and faster, can accelerate the identification of novel antibiotics in an academic setting, leading to improved hit rates and faster transitions to pre-clinical and clinical testing. The current review describes two machine-learning techniques, neural networks and decision trees, that have been used to identify experimentally validated antibiotics. We conclude by describing the future directions of this exciting field. PMID:25521642
Fabrication of Fe1.1Se0.5Te0.5 bulk by a high energy ball milling technique
NASA Astrophysics Data System (ADS)
Liu, Jixing; Li, Chengshan; Zhang, Shengnan; Feng, Jianqing; Zhang, Pingxiang; Zhou, Lian
2017-11-01
Fe1.1Se0.5Te0.5 superconducting bulks were successfully synthesized by a high energy ball milling (HEBM) aided sintering technique. Two advantages of this new technique over the traditional solid-state sintering method have been revealed. The first is a greatly increased density of the sintered bulks, because the precursor powders containing β-Fe(Se, Te) and δ-Fe(Se, Te) are obtained directly by the HEBM process without the formation of liquid Se (and Te), which avoids the large volume expansion. The second is an obvious decrease in sintering temperature and dwell time owing to the effectively shortened diffusion paths. The superconducting critical temperature Tc of 14.2 K in our sample is comparable with those in previous reports, and further optimization of the chemical composition is under way.
Rayan, Anwar; Raiyn, Jamal
2017-01-01
Cancer is considered one of the primary diseases that cause morbidity and mortality in millions of people worldwide and due to its prevalence, there is undoubtedly an unmet need to discover novel anticancer drugs. However, the traditional process of drug discovery and development is lengthy and expensive, so the application of in silico techniques and optimization algorithms in drug discovery projects can provide a solution, saving time and costs. A set of 617 approved anticancer drugs, constituting the active domain, and a set of 2,892 natural products, constituting the inactive domain, were employed to build predictive models and to index natural products for their anticancer bioactivity. Using the iterative stochastic elimination optimization technique, we obtained a highly discriminative and robust model, with an area under the curve of 0.95. Twelve natural products that scored highly as potential anticancer drug candidates are disclosed. Searching the scientific literature revealed that few of those molecules (Neoechinulin, Colchicine, and Piperolactam) have already been experimentally screened for their anticancer activity and found active. The other phytochemicals await evaluation for their anticancerous activity in wet lab. PMID:29121120
Rayan, Anwar; Raiyn, Jamal; Falah, Mizied
2017-01-01
Cancer is considered one of the primary diseases that cause morbidity and mortality in millions of people worldwide and due to its prevalence, there is undoubtedly an unmet need to discover novel anticancer drugs. However, the traditional process of drug discovery and development is lengthy and expensive, so the application of in silico techniques and optimization algorithms in drug discovery projects can provide a solution, saving time and costs. A set of 617 approved anticancer drugs, constituting the active domain, and a set of 2,892 natural products, constituting the inactive domain, were employed to build predictive models and to index natural products for their anticancer bioactivity. Using the iterative stochastic elimination optimization technique, we obtained a highly discriminative and robust model, with an area under the curve of 0.95. Twelve natural products that scored highly as potential anticancer drug candidates are disclosed. Searching the scientific literature revealed that few of those molecules (Neoechinulin, Colchicine, and Piperolactam) have already been experimentally screened for their anticancer activity and found active. The other phytochemicals await evaluation for their anticancerous activity in wet lab.
NASA Astrophysics Data System (ADS)
Abate, A.; Pressello, M. C.; Benassi, M.; Strigari, L.
2009-12-01
The aim of this study was to evaluate the effectiveness and efficiency in inverse IMRT planning of one-step optimization with the step-and-shoot (SS) technique as compared to traditional two-step optimization using the sliding windows (SW) technique. The Pinnacle IMRT TPS allows both one-step and two-step approaches. The same beam setup for five head-and-neck tumor patients and dose-volume constraints were applied for all optimization methods. Two-step plans were produced converting the ideal fluence with or without a smoothing filter into the SW sequence. One-step plans, based on direct machine parameter optimization (DMPO), had the maximum number of segments per beam set at 8, 10, 12, producing a directly deliverable sequence. Moreover, the plans were generated whether a split-beam was used or not. Total monitor units (MUs), overall treatment time, cost function and dose-volume histograms (DVHs) were estimated for each plan. PTV conformality and homogeneity indexes and normal tissue complication probability (NTCP) that are the basis for improving therapeutic gain, as well as non-tumor integral dose (NTID), were evaluated. A two-sided t-test was used to compare quantitative variables. All plans showed similar target coverage. Compared to two-step SW optimization, the DMPO-SS plans resulted in lower MUs (20%), NTID (4%) as well as NTCP values. Differences of about 15-20% in the treatment delivery time were registered. DMPO generates less complex plans with identical PTV coverage, providing lower NTCP and NTID, which is expected to reduce the risk of secondary cancer. It is an effective and efficient method and, if available, it should be favored over the two-step IMRT planning.
Optimizing product life cycle processes in design phase
NASA Astrophysics Data System (ADS)
Faneye, Ola. B.; Anderl, Reiner
2002-02-01
Life cycle concepts not only serve as a basis for helping product developers understand the dependencies between products and their life cycles, they also help in identifying potential opportunities for improvement in products. Common traditional concepts focus mainly on energy and material flow across life phases, necessitating the availability of metrics derived from a reference product. Knowledge of life cycle processes gained from an existing product is directly reused in its redesign. Nevertheless, depending on sales volume, the environmental impact before product optimization can be substantial. With modern information technologies, computer-aided life cycle methodologies can be applied well before product use. On the basis of a virtual prototype, life cycle processes are analyzed and optimized using simulation techniques. This preventive approach not only helps to minimize (or even eliminate) environmental burdens caused by the product; costs incurred due to changes in the real product can also be avoided. The paper highlights the relationship between product and life cycle and presents a computer-based methodology for optimizing the product life cycle during design, as developed by SFB 392: Design for Environment - Methods and Tools at Technical University, Darmstadt.
Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu
2015-11-11
Aiming at addressing the problem of high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method as a numerical approach needs no precise-loss transformation/approximation of system modules and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.
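For reference, a plain Kalman filter predict/update cycle of the kind being optimized is sketched below; the paper's point is that when F, H, and Q contain many structural zeros, the matrix products here can be derived offline in block form and parallelized, which this generic NumPy version does not attempt. The toy constant-velocity system is an illustrative assumption.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a standard Kalman filter."""
    # predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# toy 4-state / 2-measurement system with sparse F and H
F = np.eye(4); F[0, 2] = F[1, 3] = 1.0         # constant-velocity model
H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0  # position measurements only
Q = 0.01 * np.eye(4); R = 0.1 * np.eye(2)

x, P = np.zeros(4), np.eye(4)
for z in ([1.0, 0.5], [2.1, 1.1], [3.0, 1.4]):
    x, P = kalman_step(x, P, np.array(z), F, H, Q, R)
print("state estimate:", x)
```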
Topology Synthesis of Structures Using Parameter Relaxation and Geometric Refinement
NASA Technical Reports Server (NTRS)
Hull, P. V.; Tinker, M. L.
2007-01-01
Typically, structural topology optimization problems undergo relaxation of certain design parameters to allow the existence of intermediate variable optimum topologies. Relaxation permits the use of a variety of gradient-based search techniques and has been shown to guarantee the existence of optimal solutions and eliminate mesh dependencies. This Technical Publication (TP) will demonstrate the application of relaxation to a control point discretization of the design workspace for the structural topology optimization process. The control point parameterization with subdivision has been offered as an alternative to the traditional method of discretized finite element design domain. The principle of relaxation demonstrates the increased utility of the control point parameterization. One of the significant results of the relaxation process offered in this TP is that direct manufacturability of the optimized design will be maintained without the need for designer intervention or translation. In addition, it will be shown that relaxation of certain parameters may extend the range of problems that can be addressed; e.g., in permitting limited out-of-plane motion to be included in a path generation problem.
da Rosa, Hemerson S; Koetz, Mariana; Santos, Marí Castro; Jandrey, Elisa Helena Farias; Folmer, Vanderlei; Henriques, Amélia Teresinha; Mendez, Andreas Sebastian Loureiro
2018-04-01
Sida tuberculata (ST) is a Malvaceae species widely distributed in Southern Brazil. In traditional medicine, ST has been employed as a hypoglycemic, hypocholesterolemic, anti-inflammatory and antimicrobial agent. Additionally, this species is chemically characterized mainly by flavonoids, alkaloids and phytoecdysteroids. The present work aimed to optimize the extractive technique and to validate a UHPLC method for the determination of 20-hydroxyecdysone (20HE) in ST leaves. A Box-Behnken Design (BBD) was used in the method optimization. The extractive methods tested were static and dynamic maceration, ultrasound, ultra-turrax and reflux. In the Box-Behnken design, three parameters were evaluated at three levels (-1, 0, +1): particle size, time and plant:solvent ratio. In the method validation, the parameters of selectivity, specificity, linearity, limits of detection and quantification (LOD, LOQ), precision, accuracy and robustness were evaluated. The results indicate static maceration as the better technique to obtain the 20HE peak area in the ST extract. The optimal extraction from response surface methodology was achieved with a granulometry of 710 nm, 9 days of maceration and a plant:solvent ratio of 1:54 (w/v). The developed UHPLC-PDA analytical method showed full viability of performance, proving to be selective, linear, precise, accurate and robust for 20HE detection in ST leaves. The average content of 20HE was 0.56% per dry extract. Thus, the optimization of the extractive method for ST leaves increased the concentration of 20HE in the crude extract, and a reliable method was successfully developed according to validation requirements and in agreement with current legislation. Copyright © 2018 Elsevier Inc. All rights reserved.
Adjoint Sensitivity Method to Determine Optimal Set of Stations for Tsunami Source Inversion
NASA Astrophysics Data System (ADS)
Gusman, A. R.; Hossen, M. J.; Cummins, P. R.; Satake, K.
2017-12-01
We applied the adjoint sensitivity technique in tsunami science for the first time to determine an optimal set of stations for a tsunami source inversion. The adjoint sensitivity (AS) method has been used in numerical weather prediction to find optimal locations for adaptive observations. We implemented this technique in Green's Function based Time Reverse Imaging (GFTRI), which has recently been used in tsunami source inversion to reconstruct the initial sea surface displacement, known as the tsunami source model. This method has the same source representation as the traditional least squares (LSQ) source inversion method, where a tsunami source is represented by dividing the source region into a regular grid of "point" sources. For each of these, a Green's function (GF) is computed using a basis function for the initial sea surface displacement whose amplitude is concentrated near the grid point. We applied the AS method to the 2009 Samoa earthquake tsunami that occurred on 29 September 2009 in the southwest Pacific, near the Tonga trench. Many studies show that this earthquake is a doublet associated with both normal faulting in the outer-rise region and thrust faulting on the subduction interface. To estimate the tsunami source model for this complex event, we initially considered 11 observations consisting of 5 tide gauges and 6 DART buoys. After implementing the AS method, we found an optimal set of 8 stations. Inversion with this optimal set provides a better result in terms of waveform fitting and a source model that shows both sub-events associated with normal and thrust faulting.
Hou, Yu-Lan; Wu, Shuang; Wang, Hua; Zhao, Yong; Liao, Peng; Tian, Qing-Qing; Sun, Wen-Jian; Chen, Bo
2013-01-01
A novel rapid method for detection of illicit beta2-agonist additives in health foods and traditional Chinese patent medicines was developed with the desorption corona beam ionization mass spectrometry (DCBI-MS) technique. The DCBI conditions, including temperature and sample volume, were optimized according to the resulting mass spectral intensity. The matrix effect on the 9 beta2-agonist additives was not significant in the proposed rapid determination procedure. All 9 target molecules were detected within 1 min. Quantification was achieved based on the typical fragment ion in the MS2 spectrum of each analyte. The method showed good linear coefficients in the range of 1-100 mg x L(-1) for all analytes. The relative deviation values were between 14.29% and 25.13%. Ten claimed antitussive and antiasthmatic health foods and traditional Chinese patent medicines from local pharmacies were analyzed. All of them were negative with the proposed DCBI-MS method. Without tedious sample pretreatment, the developed DCBI-MS method is simple, rapid and sensitive for qualification and semi-quantification of illicit beta2-agonist additives in health foods and traditional Chinese patent medicines.
Short-Term Load Forecasting Based Automatic Distribution Network Reconfiguration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Huaiguang; Ding, Fei; Zhang, Yingchen
In a traditional dynamic network reconfiguration study, the optimal topology is determined at every scheduled time point by using the real load data measured at that time. The development of the load forecasting technique can provide an accurate prediction of the load power that will happen in a future time and provide more information about load changes. With the inclusion of load forecasting, the optimal topology can be determined based on the predicted load conditions during a longer time period instead of using a snapshot of the load at the time when the reconfiguration happens; thus, the distribution system operator can use this information to better operate the system reconfiguration and achieve optimal solutions. This paper proposes a short-term load forecasting approach to automatically reconfigure distribution systems in a dynamic and pre-event manner. Specifically, a short-term and high-resolution distribution system load forecasting approach is proposed with a forecaster based on support vector regression and parallel parameters optimization. The network reconfiguration problem is solved by using the forecasted load continuously to determine the optimal network topology with the minimum amount of loss at the future time. The simulation results validate and evaluate the proposed approach.
Short-Term Load Forecasting Based Automatic Distribution Network Reconfiguration: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Huaiguang; Ding, Fei; Zhang, Yingchen
In the traditional dynamic network reconfiguration study, the optimal topology is determined at every scheduled time point by using the real load data measured at that time. The development of load forecasting technique can provide accurate prediction of load power that will happen in future time and provide more information about load changes. With the inclusion of load forecasting, the optimal topology can be determined based on the predicted load conditions during the longer time period instead of using the snapshot of load at the time when the reconfiguration happens, and thus it can provide information to the distribution system operator (DSO) to better operate the system reconfiguration to achieve optimal solutions. Thus, this paper proposes a short-term load forecasting based approach for automatically reconfiguring distribution systems in a dynamic and pre-event manner. Specifically, a short-term and high-resolution distribution system load forecasting approach is proposed with support vector regression (SVR) based forecaster and parallel parameters optimization. And the network reconfiguration problem is solved by using the forecasted load continuously to determine the optimal network topology with the minimum loss at the future time. The simulation results validate and evaluate the proposed approach.
Short-Term Load Forecasting-Based Automatic Distribution Network Reconfiguration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Huaiguang; Ding, Fei; Zhang, Yingchen
In a traditional dynamic network reconfiguration study, the optimal topology is determined at every scheduled time point by using the real load data measured at that time. The development of the load forecasting technique can provide an accurate prediction of the load power that will happen in a future time and provide more information about load changes. With the inclusion of load forecasting, the optimal topology can be determined based on the predicted load conditions during a longer time period instead of using a snapshot of the load at the time when the reconfiguration happens; thus, the distribution system operator can use this information to better operate the system reconfiguration and achieve optimal solutions. This paper proposes a short-term load forecasting approach to automatically reconfigure distribution systems in a dynamic and pre-event manner. Specifically, a short-term and high-resolution distribution system load forecasting approach is proposed with a forecaster based on support vector regression and parallel parameters optimization. The network reconfiguration problem is solved by using the forecasted load continuously to determine the optimal network topology with the minimum amount of loss at the future time. The simulation results validate and evaluate the proposed approach.
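A minimal version of the forecasting step, assuming scikit-learn is available; the synthetic load profile, lagged-load features, hyper-parameter grid, and the `n_jobs`-based parallel search below stand in for the paper's SVR forecaster and parallel parameter optimization.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(4)
# synthetic high-resolution load profile (one sample per 15 minutes, 30 days)
t = np.arange(4 * 24 * 30)
load = 50 + 10 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 1.0, t.size)

# lagged-load features: predict the next point from the previous 8 points
lags = 8
X = np.array([load[i:i + lags] for i in range(len(load) - lags)])
y = load[lags:]
X_train, y_train = X[:-96], y[:-96]
X_test, y_test = X[-96:], y[-96:]

# parallel grid search over SVR hyper-parameters
grid = GridSearchCV(
    SVR(kernel="rbf"),
    {"C": [1, 10, 100], "gamma": [0.01, 0.1], "epsilon": [0.1, 0.5]},
    n_jobs=-1, cv=3)
grid.fit(X_train, y_train)

forecast = grid.predict(X_test)              # next-day forecast
print("mean absolute error:", np.mean(np.abs(forecast - y_test)))
# the forecasted profile would then drive the reconfiguration optimization
```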
Morgan, Kevin; Touitou, Jamal; Choi, Jae -Soon; ...
2016-01-15
The development and optimization of catalysts and catalytic processes requires knowledge of reaction kinetics and mechanisms. In traditional catalyst kinetic characterization, the gas composition is known at the inlet, and the exit flow is measured to determine changes in concentration. As such, the progression of the chemistry within the catalyst is not known. Technological advances in electromagnetic and physical probes have made visualizing the evolution of the chemistry within catalyst samples a reality, as part of a methodology commonly known as spatial resolution. Herein, we discuss and evaluate the development of spatially resolved techniques, including the evolutions and achievements of this growing area of catalytic research. The impact of such techniques is discussed in terms of the invasiveness of physical probes on catalytic systems, as well as how experimentally obtained spatial profiles can be used in conjunction with kinetic modeling. Moreover, some aims and aspirations for further evolution of spatially resolved techniques are considered.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guida, K; Qamar, K; Thompson, M
Purpose: The RTOG 1005 trial offered a hypofractionated arm in delivering WBRT+SIB. Traditionally, treatments were planned at our institution using field-in-field (FiF) tangents with a concurrent 3D conformal boost. With the availability of VMAT, it is possible that a hybrid VMAT-3D planning technique could provide another avenue in treating WBRT+SIB. Methods: A retrospective study of nine patients previously treated using RTOG 1005 guidelines was performed to compare FiF+3D plans with the hybrid technique. A combination of static tangents and partial VMAT arcs were used in base-dose optimization. The hybrid plans were optimized to deliver 4005cGy to the breast PTVeval and 4800cGy to the lumpectomy PTVeval over 15 fractions. Plans were optimized to meet the planning goals dictated by RTOG 1005. Results: Hybrid plans yielded similar coverage of breast and lumpectomy PTVs (average D95 of 4013cGy compared to 3990cGy for conventional), while reducing the volume of high dose within the breast; the average D30 and D50 for the hybrid technique were 4517cGy and 4288cGy, compared to 4704cGy and 4377cGy for conventional planning. Hybrid plans increased conformity as well, yielding CI95% values of 1.22 and 1.54 for breast and lumpectomy PTVeval volumes; in contrast, conventional plans averaged 1.49 and 2.27, respectively. The nearby organs at risk (OARs) received more low dose with the hybrid plans due to low dose spray from the partial arcs, but all hybrid plans did meet the acceptable constraints, at a minimum, from the protocol. Treatment planning time was also reduced, as plans were inversely optimized (VMAT) rather than forward optimized. Conclusion: Hybrid-VMAT could be a solution in delivering WBRT+SIB, as plans yield very conformal treatment plans and maintain clinical standards in OAR sparing. For treating breast cancer patients with a simultaneously-integrated boost, Hybrid-VMAT offers superiority in dosimetric conformity and planning time as compared to FiF techniques.
A new optimal seam method for seamless image stitching
NASA Astrophysics Data System (ADS)
Xue, Jiale; Chen, Shengyong; Cheng, Xu; Han, Ying; Zhao, Meng
2017-07-01
A novel optimal seam method which aims to stitch images with an overlapping area more seamlessly has been proposed. Because the traditional gradient-domain optimal seam method gives poor color-difference measurement and the fusion algorithm is time-consuming, the input images are converted to HSV space and a new energy function is designed to seek the optimal stitching path. To smooth the optimal stitching path, a simplified pixel correction and a weighted average method are utilized individually. The proposed method eliminates the stitching seam more effectively than the traditional gradient optimal seam and is more efficient than the multi-band blending algorithm.
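The search for a minimum-energy stitching path can be sketched as a dynamic-programming seam through the overlap region; the HSV-based energy below (summed per-channel absolute difference after conversion) is a plausible reading of the described energy function, not the authors' exact formulation.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def optimal_seam(overlap_a, overlap_b):
    """Return the seam column index for each row of the overlap region."""
    # energy: per-pixel difference between the two images in HSV space
    energy = np.abs(rgb_to_hsv(overlap_a) - rgb_to_hsv(overlap_b)).sum(axis=2)
    h, w = energy.shape
    cost = energy.copy()
    for i in range(1, h):                      # accumulate the minimal path cost downward
        left = np.roll(cost[i - 1], 1);  left[0] = np.inf
        right = np.roll(cost[i - 1], -1); right[-1] = np.inf
        cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
    # backtrack the cheapest path from bottom to top
    seam = np.empty(h, dtype=int)
    seam[-1] = np.argmin(cost[-1])
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + np.argmin(cost[i, lo:hi])
    return seam

# toy overlap regions (float RGB in [0, 1])
rng = np.random.default_rng(5)
a = rng.random((64, 32, 3))
b = np.clip(a + rng.normal(0, 0.05, a.shape), 0, 1)
print(optimal_seam(a, b)[:10])
```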
Targeted Structural Optimization with Additive Manufacturing of Metals
NASA Technical Reports Server (NTRS)
Burt, Adam; Hull, Patrick
2015-01-01
The recent advances in additive manufacturing (AM) of metals have now improved the state-of-the-art such that traditionally non-producible parts can be readily produced in a cost-effective way. Because of these advances in manufacturing technology, structural optimization techniques are well positioned to supplement and advance this new technology. The goal of this project is to develop a structural design, analysis, and optimization framework combined with AM to significantly light-weight the interior of metallic structures while maintaining the selected structural properties of the original solid. This is a new state-of-the-art capability to significantly reduce mass, while maintaining the structural integrity of the original design, something that can only be done with AM. In addition, this framework will couple the design, analysis, and fabrication process, meaning that what has been designed directly represents the produced part, thus closing the loop on the design cycle and removing human iteration between design and fabrication. This fundamental concept has applications from light-weighting launch vehicle components to in situ resource fabrication.
Wang, Yan; Xi, Chengyu; Zhang, Shuai; Zhang, Wenyu; Yu, Dejian
2015-01-01
As E-government continues to develop with ever-increasing speed, the requirement to enhance traditional government systems and affairs with electronic methods that are more effective and efficient is becoming critical. As a new product of information technology, E-tendering is becoming an inevitable reality owing to its efficiency, fairness, transparency, and accountability. Thus, developing and promoting government E-tendering (GeT) is imperative. This paper presents a hybrid approach combining genetic algorithm (GA) and Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) to enable GeT to search for the optimal tenderer efficiently and fairly under circumstances where the attributes of the tenderers are expressed as fuzzy number intuitionistic fuzzy sets (FNIFSs). GA is applied to obtain the optimal weights of evaluation criteria of tenderers automatically. TOPSIS is employed to search for the optimal tenderer. A prototype system is built and validated with an illustrative example from GeT to verify the feasibility and availability of the proposed approach.
Zhang, Wenyu; Yu, Dejian
2015-01-01
As E-government continues to develop with ever-increasing speed, the requirement to enhance traditional government systems and affairs with electronic methods that are more effective and efficient is becoming critical. As a new product of information technology, E-tendering is becoming an inevitable reality owing to its efficiency, fairness, transparency, and accountability. Thus, developing and promoting government E-tendering (GeT) is imperative. This paper presents a hybrid approach combining genetic algorithm (GA) and Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) to enable GeT to search for the optimal tenderer efficiently and fairly under circumstances where the attributes of the tenderers are expressed as fuzzy number intuitionistic fuzzy sets (FNIFSs). GA is applied to obtain the optimal weights of evaluation criteria of tenderers automatically. TOPSIS is employed to search for the optimal tenderer. A prototype system is built and validated with an illustrative example from GeT to verify the feasibility and availability of the proposed approach. PMID:26147468
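The TOPSIS ranking step of the hybrid approach can be sketched as follows for crisp scores; the fuzzy number intuitionistic fuzzy sets and the GA-derived criterion weights from the paper are replaced here by plain numbers and a fixed weight vector purely for illustration.

```python
import numpy as np

def topsis(scores, weights, benefit):
    """Rank alternatives (rows) on criteria (columns) by closeness to the ideal solution."""
    norm = scores / np.linalg.norm(scores, axis=0)      # vector normalization
    weighted = norm * weights
    ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
    anti = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))
    d_pos = np.linalg.norm(weighted - ideal, axis=1)
    d_neg = np.linalg.norm(weighted - anti, axis=1)
    return d_neg / (d_pos + d_neg)                       # closeness coefficient

# four hypothetical tenderers scored on price, experience, delivery time
scores = np.array([[420.0, 8, 12],
                   [390.0, 6, 10],
                   [455.0, 9, 14],
                   [400.0, 7, 11]])
weights = np.array([0.5, 0.3, 0.2])          # would come from the GA in the paper
benefit = np.array([False, True, False])     # price and delivery time are cost criteria
closeness = topsis(scores, weights, benefit)
print("optimal tenderer:", np.argmax(closeness), closeness)
```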
Mulder, Samuel A; Wunsch, Donald C
2003-01-01
The Traveling Salesman Problem (TSP) is a very hard optimization problem in the field of operations research. It has been shown to be NP-complete, and is an often-used benchmark for new optimization techniques. One of the main challenges with this problem is that standard, non-AI heuristic approaches such as the Lin-Kernighan algorithm (LK) and the chained LK variant are currently very effective and in wide use for the common fully connected, Euclidean variant that is considered here. This paper presents an algorithm that uses adaptive resonance theory (ART) in combination with a variation of the Lin-Kernighan local optimization algorithm to solve very large instances of the TSP. The primary advantage of this algorithm over traditional LK and chained-LK approaches is the increased scalability and parallelism allowed by the divide-and-conquer clustering paradigm. Tours obtained by the algorithm are lower quality, but scaling is much better and there is a high potential for increasing performance using parallel hardware.
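The divide-and-conquer idea can be illustrated by clustering the cities, building a cheap sub-tour per cluster, and chaining the sub-tours; the clustering here uses k-means rather than adaptive resonance theory, and the Lin-Kernighan refinement of each sub-tour is omitted for brevity.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def nearest_neighbour_tour(points):
    """Greedy tour over one set of cities, starting from the first point."""
    unvisited = list(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda j: np.linalg.norm(points[j] - last))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

rng = np.random.default_rng(6)
cities = rng.random((300, 2))
k = 6
centroids, labels = kmeans2(cities, k, minit="++")

# order the clusters themselves with a greedy pass over centroids,
# then chain the per-cluster sub-tours into one global tour
cluster_order = nearest_neighbour_tour(centroids)
global_tour = []
for c in cluster_order:
    idx = np.where(labels == c)[0]
    if len(idx) == 0:
        continue
    sub = nearest_neighbour_tour(cities[idx])
    global_tour.extend(idx[sub].tolist())

length = sum(np.linalg.norm(cities[global_tour[i]] - cities[global_tour[i - 1]])
             for i in range(len(global_tour)))
print("cities:", len(global_tour), "tour length:", round(length, 2))
```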
Mini-batch optimized full waveform inversion with geological constrained gradient filtering
NASA Astrophysics Data System (ADS)
Yang, Hui; Jia, Junxiong; Wu, Bangyu; Gao, Jinghuai
2018-05-01
High computation cost and solutions without geological sense have hindered the wide application of Full Waveform Inversion (FWI). The source encoding technique is a way to dramatically reduce the cost of FWI, but it is subject to a fixed-spread acquisition requirement and converges slowly because cross-talk must be suppressed. Traditionally, gradient regularization or preconditioning is applied to mitigate the ill-posedness. An isotropic smoothing filter applied to the gradients generally gives non-geological inversion results and can also introduce artifacts. In this work, we propose to address both the efficiency and the ill-posedness of FWI with a geologically constrained mini-batch gradient optimization method. Mini-batch gradient descent is adopted to reduce the computation time by choosing a subset of the shots for each iteration. By jointly applying structure-oriented smoothing to the mini-batch gradient, the inversion converges faster and gives results with more geological meaning. The stylized Marmousi model is used to show the performance of the proposed method on a realistic synthetic model.
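A schematic of the mini-batch update with a smoothed gradient is sketched below; the per-shot gradient is a synthetic placeholder (a real FWI would back-propagate data residuals through a wave solver), and the horizontal moving average stands in for the paper's structure-oriented filter.

```python
import numpy as np

rng = np.random.default_rng(8)
nz, nx, n_shots = 60, 120, 40
model = np.full((nz, nx), 2000.0)            # starting velocity model (m/s)
true_anomaly = np.zeros((nz, nx)); true_anomaly[30:40, 50:70] = 300.0

def shot_gradient(m, shot):
    """Hypothetical per-shot gradient; a real FWI would back-propagate residuals."""
    noise = rng.normal(0, 5.0, m.shape)
    return (m - (2000.0 + true_anomaly)) / n_shots + noise

def structure_smooth(g, half_width=3):
    """Moving average along the horizontal (structure) direction."""
    kernel = np.ones(2 * half_width + 1) / (2 * half_width + 1)
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, g)

batch_size, step = 8, 0.5
for it in range(100):
    batch = rng.choice(n_shots, batch_size, replace=False)   # random subset of shots
    grad = sum(shot_gradient(model, s) for s in batch) / batch_size
    model -= step * structure_smooth(grad)                    # smoothed descent step

print("recovered anomaly mean (m/s):", round(model[30:40, 50:70].mean() - 2000.0, 1))
```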
A strategy for reducing turnaround time in design optimization using a distributed computer system
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Padula, Sharon L.; Rogers, James L.
1988-01-01
There is a need to explore methods for reducing lengthy computer turnaround or clock time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means for reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with a minimal cost in terms of hardware and software.
Design of optimized piezoelectric HDD-sliders
NASA Astrophysics Data System (ADS)
Nakasone, Paulo H.; Yoo, Jeonghoon; Silva, Emilio C. N.
2010-04-01
As storage data density in hard-disk drives (HDDs) increases for constant or miniaturizing sizes, precision positioning of HDD heads becomes a more relevant issue in ensuring that enormous amounts of data are properly written and read. Since the traditional single-stage voice coil motor (VCM) cannot satisfy the positioning requirement of high-density tracks per inch (TPI) HDDs, dual-stage servo systems have been proposed to overcome this problem, using VCMs to coarsely move the HDD head while piezoelectric actuators provide fine and fast positioning. Thus, the aim of this work is to apply the topology optimization method (TOM) to design novel piezoelectric HDD heads, by finding the optimal placement of base-plate and piezoelectric material for high-precision positioning of HDD heads. The topology optimization method is a structural optimization technique that combines the finite element method (FEM) with optimization algorithms. The laminated finite element employs the MITC (mixed interpolation of tensorial components) formulation to provide accurate and reliable results. The topology optimization uses a rational approximation of material properties to vary the material properties between 'void' and 'filled' portions. The design problem consists in generating optimal structures that provide maximal displacements, appropriate structural stiffness and avoidance of resonance phenomena. The requirements are achieved by applying formulations that maximize displacements, minimize structural compliance and maximize resonance frequencies. This paper presents the implementation of the algorithms and shows results that confirm the feasibility of this approach.
Mathematical calibration procedure of a capacitive sensor-based indexed metrology platform
NASA Astrophysics Data System (ADS)
Brau-Avila, A.; Santolaria, J.; Acero, R.; Valenzuela-Galvan, M.; Herrera-Jimenez, V. M.; Aguilar, J. J.
2017-03-01
The demand for faster and more reliable measuring tasks for the control and quality assurance of modern production systems has created new challenges for the field of coordinate metrology. Thus, the search for new solutions in coordinate metrology systems and the need for the development of existing ones still persists. One example of such a system is the portable coordinate measuring machine (PCMM), the use of which in industry has considerably increased in recent years, mostly due to its flexibility for accomplishing in-line measuring tasks as well as its reduced cost and operational advantages compared to traditional coordinate measuring machines. Nevertheless, PCMMs have a significant drawback derived from the techniques applied in the verification and optimization procedures of their kinematic parameters. These techniques are based on the capture of data with the measuring instrument from a calibrated gauge object, fixed successively in various positions so that most of the instrument measuring volume is covered, which results in time-consuming, tedious and expensive verification and optimization procedures. In this work the mathematical calibration procedure of a capacitive sensor-based indexed metrology platform (IMP) is presented. This calibration procedure is based on the readings and geometric features of six capacitive sensors and their targets with nanometer resolution. The final goal of the IMP calibration procedure is to optimize the geometric features of the capacitive sensors and their targets in order to use the optimized data in the verification procedures of PCMMs.
Aeroelastic optimization methodology for viscous and turbulent flows
NASA Astrophysics Data System (ADS)
Barcelos Junior, Manuel Nascimento Dias
2007-12-01
In recent years, the development of faster computers and parallel processing allowed the application of high-fidelity analysis methods to the aeroelastic design of aircraft. However, these methods are restricted to the final design verification, mainly due to the computational cost involved in iterative design processes. Therefore, this work is concerned with the creation of a robust and efficient aeroelastic optimization methodology for inviscid, viscous and turbulent flows by using high-fidelity analysis and sensitivity analysis techniques. Most of the research in aeroelastic optimization, for practical reasons, treats the aeroelastic system as a quasi-static inviscid problem. In this work, as a first step toward the creation of a more complete aeroelastic optimization methodology for realistic problems, an analytical sensitivity computation technique was developed and tested for quasi-static aeroelastic viscous and turbulent flow configurations. Viscous and turbulent effects are included by using an averaged discretization of the Navier-Stokes equations, coupled with an eddy viscosity turbulence model. For quasi-static aeroelastic problems, the traditional staggered solution strategy has unsatisfactory performance when applied to cases where there is a strong fluid-structure coupling. Consequently, this work also proposes a solution methodology for aeroelastic and sensitivity analyses of quasi-static problems, which is based on the fixed point of an iterative nonlinear block Gauss-Seidel scheme. The methodology can also be interpreted as the solution of the Schur complement of the aeroelastic and sensitivity analyses linearized systems of equations. The methodologies developed in this work are tested and verified by using realistic aeroelastic systems.
Structural Optimization in automotive design
NASA Technical Reports Server (NTRS)
Bennett, J. A.; Botkin, M. E.
1984-01-01
Although mathematical structural optimization has been an active research area for twenty years, there has been relatively little penetration into the design process. Experience indicates that often this is due to the traditional layout-analysis design process. In many cases, optimization efforts have been outgrowths of analysis groups which are themselves appendages to the traditional design process. As a result, optimization is often introduced into the design process too late to have a significant effect because many potential design variables have already been fixed. A series of examples are given to indicate how structural optimization has been effectively integrated into the design process.
Hu, Rui; Liu, Shutian; Li, Quhao
2017-05-20
For the development of a large-aperture space telescope, one of the key techniques is the method for designing the flexures for mounting the primary mirror, as the flexures are the key components. In this paper, a topology-optimization-based method for designing flexures is presented. The structural performances of the mirror system under multiple load conditions, including static gravity and thermal loads, as well as the dynamic vibration, are considered. The mirror surface shape error caused by gravity and the thermal effect is treated as the objective function, and the first-order natural frequency of the mirror structural system is taken as the constraint. The pattern repetition constraint is added, which can ensure symmetrical material distribution. The topology optimization model for flexure design is established. The substructuring method is also used to condense the degrees of freedom (DOF) of all the nodes of the mirror system, except for the nodes that are linked to the mounting flexures, to reduce the computation effort during the optimization iteration process. A potential optimized configuration is achieved by solving the optimization model and post-processing. A detailed shape optimization is subsequently conducted to optimize its dimension parameters. Our optimization method deduces new mounting structures that significantly enhance the optical performance of the mirror system compared to the traditional methods, which only focus on the parameters of existing structures. Design results demonstrate the effectiveness of the proposed optimization method.
Magnetic resonance imaging with an optical atomic magnetometer
Xu, Shoujun; Yashchuk, Valeriy V.; Donaldson, Marcus H.; Rochester, Simon M.; Budker, Dmitry; Pines, Alexander
2006-01-01
We report an approach for the detection of magnetic resonance imaging without superconducting magnets and cryogenics: optical atomic magnetometry. This technique possesses a high sensitivity independent of the strength of the static magnetic field, extending the applicability of magnetic resonance imaging to low magnetic fields and eliminating imaging artifacts associated with high fields. By coupling with a remote-detection scheme, thereby improving the filling factor of the sample, we obtained time-resolved flow images of water with a temporal resolution of 0.1 s and spatial resolutions of 1.6 mm perpendicular to the flow and 4.5 mm along the flow. Potentially inexpensive, compact, and mobile, our technique provides a viable alternative for MRI detection with substantially enhanced sensitivity and time resolution for various situations where traditional MRI is not optimal. PMID:16885210
A Comparison of FPGA and GPGPU Designs for Bayesian Occupancy Filters
Medina, Luis; Diez-Ochoa, Miguel; Correal, Raul; Cuenca-Asensi, Sergio; Godoy, Jorge; Martínez-Álvarez, Antonio
2017-01-01
Grid-based perception techniques in the automotive sector based on fusing information from different sensors and their robust perceptions of the environment are proliferating in the industry. However, one of the main drawbacks of these techniques is the traditionally prohibitive, high computing performance that is required for embedded automotive systems. In this work, the capabilities of new computing architectures that embed these algorithms are assessed in a real car. The paper compares two ad hoc optimized designs of the Bayesian Occupancy Filter; one for General Purpose Graphics Processing Unit (GPGPU) and the other for Field-Programmable Gate Array (FPGA). The resulting implementations are compared in terms of development effort, accuracy and performance, using datasets from a realistic simulator and from a real automated vehicle. PMID:29137137
Montaux-Lambert, Antoine; Mercère, Pascal; Primot, Jérôme
2015-11-02
An interferogram conditioning procedure, for subsequent phase retrieval by Fourier demodulation, is presented here as a fast iterative approach aiming at fulfilling the classical boundary conditions imposed by Fourier transform techniques. Interference fringe patterns with typical edge discontinuities were simulated in order to reveal the edge artifacts that classically appear in traditional Fourier analysis, and were consecutively used to demonstrate the correction efficiency of the proposed conditioning technique. Optimization of the algorithm parameters is also presented and discussed. Finally, the procedure was applied to grating-based interferometric measurements performed in the hard X-ray regime. The proposed algorithm enables nearly edge-artifact-free retrieval of the phase derivatives. A similar enhancement of the retrieved absorption and fringe visibility images is also achieved.
Vector-model-supported approach in prostate plan optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Eva Sau Fan; Department of Health Technology and Informatics, The Hong Kong Polytechnic University; Wu, Vincent Wing Cheung
Lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model base, retrieving similar radiotherapy cases, was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach in the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, which consisted of 30 cases of S&S IMRT, 30 cases of 1-arc VMAT, and 30 cases of 2-arc VMAT plans including first optimization and final optimization with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction in the planning time and iteration with vector-model-supported optimization by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with the vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shortened planning time and iteration number without compromising the plan quality.
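The case-retrieval idea can be sketched with a similarity search over feature vectors extracted from the DICOM structures; the feature set, the reference database, and the cosine-similarity measure below are made up for illustration (in practice the features would be normalized and the retrieved plan's parameters would seed the optimizer).

```python
import numpy as np

# hypothetical features per case: prostate volume (cc), rectum overlap (%),
# bladder overlap (%), PTV-to-skin distance (cm)
reference_features = np.array([
    [55.0, 12.0, 18.0, 6.5],
    [72.0,  9.0, 22.0, 7.1],
    [48.0, 15.0, 14.0, 5.9],
])
reference_parameters = ["plan_A_params", "plan_B_params", "plan_C_params"]

def retrieve_similar(test_features, database):
    """Return the index of the most similar reference case (cosine similarity)."""
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    q = test_features / np.linalg.norm(test_features)
    return int(np.argmax(db @ q))

test_case = np.array([52.0, 13.0, 16.0, 6.2])
best = retrieve_similar(test_case, reference_features)
print("start optimization from:", reference_parameters[best])
```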
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakhai, B.
A new method for solving radiation transport problems is presented. The heart of the technique is a new cross section processing procedure for the calculation of group-to-point and point-to-group cross sections sets. The method is ideally suited for problems which involve media with highly fluctuating cross sections, where the results of the traditional multigroup calculations are beclouded by the group averaging procedures employed. Extensive computational efforts, which would be required to evaluate double integrals in the multigroup treatment numerically, prohibit iteration to optimize the energy boundaries. On the other hand, use of point-to-point techniques (as in the stochastic technique) is often prohibitively expensive due to the large computer storage requirement. The pseudo-point code is a hybrid of the two aforementioned methods (group-to-group and point-to-point) - hence the name pseudo-point - that reduces the computational efforts of the former and the large core requirements of the latter. The pseudo-point code generates the group-to-point or the point-to-group transfer matrices, and can be coupled with the existing transport codes to calculate pointwise energy-dependent fluxes. This approach yields much more detail than is available from the conventional energy-group treatments. Due to the speed of this code, several iterations could be performed (in affordable computing efforts) to optimize the energy boundaries and the weighting functions. The pseudo-point technique is demonstrated by solving six problems, each depicting a certain aspect of the technique. The results are presented as flux vs energy at various spatial intervals. The sensitivity of the technique to the energy grid and the savings in computational effort are clearly demonstrated.
Traversari, Roberto; Goedhart, Rien; Schraagen, Jan Maarten
2013-01-01
The objective is evaluation of a traditionally designed operating room using simulation of various surgical workflows. A literature search showed that there is no evidence for an optimal operating room layout regarding the position and size of an ultraclean ventilation (UCV) canopy with a separate preparation room for laying out instruments and in which patients are induced in the operating room itself. Neither was literature found reporting on process simulation being used for this application. Many technical guidelines and designs have mainly evolved over time, and there is no evidence on whether the proposed measures are also effective for the optimization of the layout for workflows. The study was conducted by applying observational techniques to simulated typical surgical procedures. Process simulations which included complete surgical teams and equipment required for the intervention were carried out for four typical interventions. Four observers used a form to record conflicts with the clean area boundaries and the height of the supply bridge. Preferences for particular layouts were discussed with the surgical team after each simulated procedure. We established that a clean area measuring 3 × 3 m and a supply bridge height of 2.05 m was satisfactory for most situations, provided a movable operation table is used. The only cases in which conflicts with the supply bridge were observed were during the use of a surgical robot (Da Vinci) and a surgical microscope. During multiple trauma interventions, bottlenecks regarding the dimensions of the clean area will probably arise. The process simulation of four typical interventions has led to significantly different operating room layouts than were arrived at through the traditional design process. Keywords: evidence-based design, human factors, work environment, operating room, traditional design, process simulation, surgical workflows. Preferred citation: Traversari, R., Goedhart, R., & Schraagen, J. M. (2013). Process simulation during the design process makes the difference: Process simulations applied to a traditional design. Health Environments Research & Design Journal, 6(2), 58-76.
Optimal Sharpening of Compensated Comb Decimation Filters: Analysis and Design
Troncoso Romero, David Ernesto
2014-01-01
Comb filters are a class of low-complexity filters especially useful for multistage decimation processes. However, the magnitude response of comb filters presents a droop in the passband region and low stopband attenuation, which is undesirable in many applications. In this work, it is shown that, for stringent magnitude specifications, sharpening compensated comb filters requires a lower-degree sharpening polynomial compared to sharpening comb filters without compensation, resulting in a solution with lower computational complexity. Using a simple three-addition compensator and an optimization-based derivation of sharpening polynomials, we introduce an effective low-complexity filtering scheme. Design examples are presented in order to show the performance improvement in terms of passband distortion and selectivity compared to other methods based on the traditional Kaiser-Hamming sharpening and the Chebyshev sharpening techniques recently introduced in the literature. PMID:24578674
Optimal sharpening of compensated comb decimation filters: analysis and design.
Troncoso Romero, David Ernesto; Laddomada, Massimiliano; Jovanovic Dolecek, Gordana
2014-01-01
Comb filters are a class of low-complexity filters especially useful for multistage decimation processes. However, the magnitude response of comb filters presents a droop in the passband region and low stopband attenuation, which is undesirable in many applications. In this work, it is shown that, for stringent magnitude specifications, sharpening compensated comb filters requires a lower-degree sharpening polynomial compared to sharpening comb filters without compensation, resulting in a solution with lower computational complexity. Using a simple three-addition compensator and an optimization-based derivation of sharpening polynomials, we introduce an effective low-complexity filtering scheme. Design examples are presented in order to show the performance improvement in terms of passband distortion and selectivity compared to other methods based on the traditional Kaiser-Hamming sharpening and the Chebyshev sharpening techniques recently introduced in the literature.
Designing Industrial Networks Using Ecological Food Web Metrics.
Layton, Astrid; Bras, Bert; Weissburg, Marc
2016-10-18
Biologically Inspired Design (biomimicry) and Industrial Ecology both look to natural systems to enhance the sustainability and performance of engineered products, systems and industries. Bioinspired design (BID) has traditionally focused on the unit-operation and single-product level. In contrast, this paper describes how principles of network organization derived from analysis of ecosystem properties can be applied to industrial system networks. Specifically, this paper examines the applicability of particular food web matrix properties as design rules for economically and biologically sustainable industrial networks, using an optimization model developed for a carpet recycling network. Carpet recycling network designs based on traditional cost- and emissions-based optimization are compared to designs obtained using optimizations based solely on ecological food web metrics. The analysis suggests that networks optimized using food web metrics were also superior from a traditional cost and emissions perspective; correlations between optimization using ecological metrics and traditional optimization generally ranged from 0.70 to 0.96, with flow-based metrics being superior to structural parameters. Four structural food web parameters provided correlations nearly the same as those obtained using all structural parameters, but individual structural parameters provided much less satisfactory correlations. The analysis indicates that bioinspired design principles from ecosystems can lead to both environmentally and economically sustainable industrial resource networks, and represent guidelines for designing sustainable industry networks.
Liquifying PLDLLA Anchor Fixation in Achilles Reconstruction for Insertional Tendinopathy.
Boden, Stephanie A; Boden, Allison L; Mignemi, Danielle; Bariteau, Jason T
2018-04-01
Insertional Achilles tendinopathy (IAT) is a frequent cause of posterior heel pain and is often associated with Haglund's deformity. Surgical correction for refractory cases of IAT has been well studied; however, the method of tendon fixation to bone in these procedures remains controversial, and to date, no standard technique has been identified for tendon fixation in these surgeries. Often, after Haglund's resection, there is a large exposed cancellous surface for Achilles reattachment, which may require unique fixation to optimize outcomes. Previous studies have consistently demonstrated improved patient outcomes after Achilles tendon reconstruction with early rehabilitation and protected weight bearing, evidencing the need for strong and stable anchoring of the Achilles tendon that allows early weight bearing without tendon morbidity. In this report, we highlight the design, biomechanics, and surgical technique of Achilles tendon reconstruction with Haglund's deformity using a novel technique that utilizes ultrasonic energy to liquefy the suture anchor, allowing it to incorporate into the surrounding bone. Biomechanical studies have demonstrated superior strength of the suture anchor utilizing this novel technique as compared with prior techniques. However, future research is needed to ensure that outcomes of this technique are favorable when compared with outcomes using traditional suture anchoring methods. Level V: Operative technique.
A Rapid Convergent Low Complexity Interference Alignment Algorithm for Wireless Sensor Networks.
Jiang, Lihui; Wu, Zhilu; Ren, Guanghui; Wang, Gangyi; Zhao, Nan
2015-07-29
Interference alignment (IA) is a novel technique that can effectively eliminate the interference and approach the sum capacity of wireless sensor networks (WSNs) when the signal-to-noise ratio (SNR) is high, by casting the desired signal and interference into different signal subspaces. The traditional alternating minimization interference leakage (AMIL) algorithm for IA shows good performance in high SNR regimes; however, the complexity of the AMIL algorithm increases dramatically as the number of users and antennas increases, limiting its application in practical systems. In this paper, a novel IA algorithm, called the directional quartic optimal (DQO) algorithm, is proposed to minimize the interference leakage with rapid convergence and low complexity. The properties of the AMIL algorithm are investigated, and it is discovered that the difference between two consecutive iteration results of the AMIL algorithm approximately points to the convergence solution when the precoding and decoding matrices obtained from the intermediate iterations are sufficiently close to their convergence values. Based on this important property, the proposed DQO algorithm employs a line search procedure so that it can converge to the destination directly. In addition, the optimal step size can be determined analytically by optimizing a quartic function. Numerical results show that the proposed DQO algorithm can suppress the interference leakage more rapidly than the traditional AMIL algorithm, and can achieve the same level of sum rate as the AMIL algorithm with far fewer iterations and less execution time.
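The step-size selection described above amounts to minimizing a quartic polynomial in the step length. A minimal illustrative sketch of that sub-step follows (the function name and the assumption that the quartic coefficients have already been derived from the current iterate and search direction are ours, not the authors'):

    import numpy as np

    def optimal_step(c4, c3, c2, c1, c0):
        """Minimize f(t) = c4*t^4 + c3*t^3 + c2*t^2 + c1*t + c0 over real t (c4 > 0 assumed).

        The stationary points are the real roots of the cubic derivative;
        the analytic minimizer is the real root with the smallest objective value.
        """
        roots = np.roots([4 * c4, 3 * c3, 2 * c2, c1])
        real_roots = roots[np.abs(roots.imag) < 1e-6].real
        f = lambda t: c4 * t**4 + c3 * t**3 + c2 * t**2 + c1 * t + c0
        return min(real_roots, key=f)

    # Example: the minimizers of t^4 - 2*t^2 are at t = +/-1.
    print(optimal_step(1.0, 0.0, -2.0, 0.0, 0.0))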
Bottom-up construction of artificial molecules for superconducting quantum processors
NASA Astrophysics Data System (ADS)
Poletto, Stefano; Rigetti, Chad; Gambetta, Jay M.; Merkel, Seth; Chow, Jerry M.; Corcoles, Antonio D.; Smolin, John A.; Rozen, Jim R.; Keefe, George A.; Rothwell, Mary B.; Ketchen, Mark B.; Steffen, Matthias
2012-02-01
Recent experiments on transmon qubits capacitively coupled to superconducting 3-dimensional cavities have shown coherence times much longer than those of transmons coupled to more traditional planar resonators. For the implementation of a quantum processor this approach has clear advantages over traditional techniques, but it poses the challenge of scalability. We are currently implementing multi-qubit experiments based on a bottom-up scaling approach. First, transmon qubits are fabricated on individual chips and are independently characterized. Second, an artificial molecule is assembled by selecting a particular set of previously characterized single-transmon chips. We present recent data on a two-qubit artificial molecule constructed in this way. The two qubits are chosen to generate a strong Z-Z interaction by matching the 0-1 transition energy of one qubit with the 1-2 transition of the other. Single-qubit manipulations and state tomography cannot be done with "traditional" single-tone microwave pulses; instead, specifically shaped pulses have to be applied simultaneously on both qubits. Coherence times, coupling strength, and optimal pulses for decoupling the two qubits and performing state tomography are presented.
Ultrasound image edge detection based on a novel multiplicative gradient and Canny operator.
Zheng, Yinfei; Zhou, Yali; Zhou, Hao; Gong, Xiaohong
2015-07-01
To achieve fast and accurate segmentation of ultrasound images, a novel edge detection method for speckle-noised ultrasound images was proposed, based on the traditional Canny operator and a novel multiplicative gradient operator. The proposed technique combines a new multiplicative gradient operator of non-Newtonian type with the traditional Canny operator to generate the initial edge map, which is subsequently optimized by the following edge tracing step. To verify the proposed method, we compared it with several other edge detection methods that have good robustness to noise, with experiments on simulated and in vivo medical ultrasound images. Experimental results showed that the proposed algorithm has higher speed for real-time processing, and the edge detection accuracy could reach 75% or more. Thus, the proposed method is well suited for fast and accurate edge detection of medical ultrasound images.
Research progress on the brewing techniques of new-type rice wine.
Jiao, Aiquan; Xu, Xueming; Jin, Zhengyu
2017-01-15
As a traditional alcoholic beverage, Chinese rice wine (CRW), with high nutritional value and unique flavor, has been popular in China for thousands of years. Although traditional production methods had been used without change for centuries, numerous technological innovations in recent decades have greatly affected the CRW industry. However, reviews of the technological research progress in this field are relatively few. This article aims at providing a brief summary of the recent developments in new brewing technologies for making CRW. Based on a comparison between the conventional methods and the innovative technologies of CRW brewing, three principal aspects were summarized and sorted: the innovation of raw material pretreatment, the optimization of fermentation, and the reform of sterilization technology. Furthermore, by comparing the advantages and disadvantages of these methods, various issues are addressed related to the prospects of the CRW industry.
A review of the use of simulation in dental education.
Perry, Suzanne; Bridges, Susan Margaret; Burrow, Michael Francis
2015-02-01
In line with advances in technology and communication, medical simulations are being developed to support the acquisition of requisite psychomotor skills before real-life clinical application. This review article aimed to give a general overview of simulation in a cognate field, clinical dental education. Simulations in dentistry are not a new phenomenon; however, recent developments in virtual-reality technology using computer-generated simulations of 3-dimensional images or environments are providing more optimal practice conditions to smooth the transition from the traditional model-based simulation laboratory to the clinic. Reported positive aspects of virtual reality include increased effectiveness in comparison with traditional simulation teaching techniques, more efficient learning, objective and reproducible feedback, unlimited training hours, and enhanced cost-effectiveness for teaching establishments. Reported negative aspects include initial setup costs, faculty training, and the limited variety of content in current educational simulation programs.
Breast Cancer Nodes Detection Using Ultrasonic Microscale Subarrayed MIMO RADAR
Siwamogsatham, Siwaruk; Pomalaza-Ráez, Carlos
2014-01-01
This paper proposes the use of ultrasonic microscale subarrayed MIMO RADARs to estimate the position of breast cancer nodes. The transmit and receive antenna arrays are divided into subarrays. In order to increase the signal diversity, each subarray is assigned a different waveform from an orthogonal set. High-frequency ultrasonic transducers are used since the breast is considered to be a superficial structure. Closed-form expressions for the optimal Neyman-Pearson detector are derived. The combination of the waveform diversity present in the subarrayed deployment and traditional phased-array RADAR techniques provides promising results.
Optical and mechanical tolerances in hybrid concentrated thermal-PV solar trough.
Diaz, Liliana Ruiz; Cocilovo, Byron; Miles, Alexander; Pan, Wei; Blanche, Pierre-Alexandre; Norwood, Robert A
2018-05-14
Hybrid thermal-PV solar trough collectors combine concentrated photovoltaics and concentrated solar power technology to harvest and store solar energy. In this work, the optical and mechanical requirements for optimal efficiency are analyzed using non-sequential ray tracing techniques. The results are used to generate opto-mechanical tolerances that can be compared to those of traditional solar collectors. We also explore ideas on how to relieve tracking tolerances for single-axis solar collectors. The objective is to establish a basis for tolerances required for the fabrication and manufacturing of hybrid solar trough collectors.
Management of Cleft Maxillary Hypoplasia with Anterior Maxillary Distraction: Our Experience.
Chacko, Tojan; Vinod, Sankar; Mani, Varghese; George, Arun; Sivaprasad, K K
2014-12-01
Maxillary hypoplasia is a common developmental problem in cleft lip and palate deformities. Since the 1970s, these deformities have traditionally been corrected by means of orthognathic surgery. Management of skeletal deformities in the maxillofacial region has been an important challenge for maxillofacial surgeons and orthodontists. Distraction osteogenesis is a surgical technique that uses the body's own repair mechanisms for optimal reconstruction of the tissues. We present four cases of anterior maxillary distraction osteogenesis with a tooth-borne distraction device (Hyrax), which were analyzed retrospectively for the efficacy of the tooth-borne device and the skeletal stability of the distracted anterior maxillary segment.
Heart Sound Biometric System Based on Marginal Spectrum Analysis
Zhao, Zhidong; Shen, Qinqin; Ren, Fangqin
2013-01-01
This work presents a heart sound biometric system based on marginal spectrum analysis, which is a new feature extraction technique for identification purposes. The heart sound identification system comprises signal acquisition, pre-processing, feature extraction, training, and identification. Experiments on the selection of the optimal values for the system parameters are conducted. The results indicate that the new spectrum coefficients yield a significantly higher recognition rate of 94.40%, compared with 84.32% for the traditional Fourier spectrum, based on a database of 280 heart sounds from 40 participants.
Design Oriented Structural Modeling for Airplane Conceptual Design Optimization
NASA Technical Reports Server (NTRS)
Livne, Eli
1999-01-01
The main goal of the research conducted with the support of this grant was to develop design-oriented structural optimization methods for the conceptual design of airplanes. Traditionally, in conceptual design, airframe weight is estimated from statistical equations developed over years of fitting airplane weight data in databases of similar existing airplanes. Utilization of such regression equations for the design of new airplanes can be justified only if the new airplanes use structural technology similar to the technology of the airplanes in those weight databases. If any new structural technology is to be pursued or any new unconventional configurations designed, the statistical weight equations cannot be used. In such cases any structural weight estimation must be based on rigorous "physics based" structural analysis and optimization of the airframes under consideration. Work under this grant progressed to explore airframe design-oriented structural optimization techniques along two lines of research: methods based on "fast" design-oriented finite element technology, and methods based on equivalent plate / equivalent shell models of airframes, in which the vehicle is modelled as an assembly of plate and shell components, each simulating a lifting surface or nacelle / fuselage pieces. Since responses to changes in geometry are essential in the conceptual design of airplanes, as well as the capability to optimize the shape itself, research supported by this grant sought to develop efficient techniques for parametrization of airplane shape and sensitivity analysis with respect to shape design variables. Towards the end of the grant period a prototype automated structural analysis code designed to work with the NASA Aircraft Synthesis conceptual design code (ACSYNT) was delivered to NASA Ames.
Automatically Finding the Control Variables for Complex System Behavior
NASA Technical Reports Server (NTRS)
Gay, Gregory; Menzies, Tim; Davies, Misty; Gundy-Burlet, Karen
2010-01-01
Testing large-scale systems is expensive in terms of both time and money. Running simulations early in the process is a proven method of finding the design faults likely to lead to critical system failures, but determining the exact cause of those errors is still time-consuming and requires access to a limited number of domain experts. It is desirable to find an automated method that explores the large number of combinations and is able to isolate likely fault points. Treatment learning is a subset of minimal contrast-set learning that, rather than classifying data into distinct categories, focuses on finding the unique factors that lead to a particular classification. That is, it finds the smallest change to the data that causes the largest change in the class distribution. These treatments, when imposed, are able to identify the factors most likely to cause a mission-critical failure. The goal of this research is to comparatively assess treatment learning against state-of-the-art numerical optimization techniques. To achieve this, this paper benchmarks the TAR3 and TAR4.1 treatment learners against optimization techniques across three complex systems, including two projects from the Robust Software Engineering (RSE) group within the National Aeronautics and Space Administration (NASA) Ames Research Center. The results clearly show that treatment learning is both faster and more accurate than traditional optimization methods.
Space Reclamation for Uncoordinated Checkpointing in Message-Passing Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Wang, Yi-Min
1993-01-01
Checkpointing and rollback recovery are techniques that can provide efficient recovery from transient process failures. In a message-passing system, the rollback of a message sender may cause the rollback of the corresponding receiver, and the system needs to roll back to a consistent set of checkpoints called recovery line. If the processes are allowed to take uncoordinated checkpoints, the above rollback propagation may result in the domino effect which prevents recovery line progression. Traditionally, only obsolete checkpoints before the global recovery line can be discarded, and the necessary and sufficient condition for identifying all garbage checkpoints has remained an open problem. A necessary and sufficient condition for achieving optimal garbage collection is derived and it is proved that the number of useful checkpoints is bounded by N(N+1)/2, where N is the number of processes. The approach is based on the maximum-sized antichain model of consistent global checkpoints and the technique of recovery line transformation and decomposition. It is also shown that, for systems requiring message logging to record in-transit messages, the same approach can be used to achieve optimal message log reclamation. As a final topic, a unifying framework is described by considering checkpoint coordination and exploiting piecewise determinism as mechanisms for bounding rollback propagation, and the applicability of the optimal garbage collection algorithm to domino-free recovery protocols is demonstrated.
Heat transfer comparison of nanofluid filled transformer and traditional oil-immersed transformer
NASA Astrophysics Data System (ADS)
Zhang, Yunpeng; Ho, Siu-lau; Fu, Weinong
2018-05-01
Dispersing nanoparticles with high thermal conductivity into transformer oil is an innovative approach to improving the thermal performance of traditional oil-immersed transformers. This mixture, also known as a nanofluid, has shown potential for practical application through experimental measurements. This paper presents comparisons of a nanofluid-filled transformer and a traditional oil-immersed transformer in terms of their computational fluid dynamics (CFD) solutions from the perspective of optimal design. The thermal performance of transformers with the same parameters except for the coolant is compared. A further comparison of heat transfer is then made after minimizing the oil volume and maximum temperature rise of the two transformers. An adaptive multi-objective optimization method is employed to tackle this optimization problem.
Westenberger, Benjamin J; Ellison, Christopher D; Fussner, Andrew S; Jenney, Susan; Kolinski, Richard E; Lipe, Terra G; Lyon, Robbe C; Moore, Terry W; Revelle, Larry K; Smith, Anjanette P; Spencer, John A; Story, Kimberly D; Toler, Duckhee Y; Wokovich, Anna M; Buhse, Lucinda F
2005-12-08
This work investigated the use of non-traditional analytical methods to evaluate the quality of a variety of pharmaceutical products purchased via internet sites from foreign sources and compared the results with those obtained from conventional quality assurance methods. Traditional analytical techniques employing HPLC for potency, content uniformity, chromatographic purity and drug release profiles were used to evaluate the quality of five selected drug products (fluoxetine hydrochloride, levothyroxine sodium, metformin hydrochloride, phenytoin sodium, and warfarin sodium). Non-traditional techniques, such as near infrared spectroscopy (NIR), NIR imaging and thermogravimetric analysis (TGA), were employed to verify the results and investigate their potential as alternative testing methods. Two of 20 samples failed USP monographs for quality attributes. The additional analytical methods found 11 of 20 samples had different formulations when compared to the U.S. product. Seven of the 20 samples arrived in questionable containers, and 19 of 20 had incomplete labeling. Only 1 of the 20 samples had final packaging similar to the U.S. products. The non-traditional techniques complemented the traditional techniques used and highlighted additional quality issues for the products tested. For example, these methods detected suspect manufacturing issues (such as blending), which were not evident from traditional testing alone.
NASA Astrophysics Data System (ADS)
Zheng, Y.; Chen, J.
2017-09-01
A modified multi-objective particle swarm optimization method is proposed for obtaining Pareto-optimal solutions effectively. Different from traditional multi-objective particle swarm optimization methods, Kriging meta-models and the trapezoid index are introduced and integrated with the traditional method. Kriging meta-models are built to approximate expensive or black-box functions. By applying Kriging meta-models, the number of function evaluations is decreased and the boundary Pareto-optimal solutions are identified rapidly. For bi-objective optimization problems, the trapezoid index is calculated as the sum of the areas of the trapezoids formed by the Pareto-optimal solutions and one objective axis. It can serve as a measure of whether the Pareto-optimal solutions have converged to the Pareto front. Illustrative examples indicate that, to obtain Pareto-optimal solutions, the proposed method needs fewer function evaluations than the traditional multi-objective particle swarm optimization method and the non-dominated sorting genetic algorithm II (NSGA-II), and both the accuracy and the computational efficiency are improved. The proposed method is also applied to the design of a deepwater composite riser, in which the structural performances are calculated by numerical analysis. The design aim was to enhance the tensile strength and minimize the cost. Under the buckling constraint, the optimal trade-off between tensile strength and material volume is obtained. The results demonstrate that the proposed method can effectively deal with multi-objective optimizations involving black-box functions.
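For a bi-objective problem, the trapezoid index described above can be computed directly from the current non-dominated set. A small sketch under that reading (names are illustrative; the paper may normalize or orient the axes differently):

    def trapezoid_index(pareto_points):
        """Sum of trapezoid areas formed by consecutive Pareto-optimal points
        and the first-objective axis, for a bi-objective minimization problem.

        pareto_points: iterable of (f1, f2) tuples.
        """
        pts = sorted(pareto_points)                 # sort by the first objective
        area = 0.0
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            area += 0.5 * (y0 + y1) * (x1 - x0)     # trapezoid against the f1 axis
        return area

    # Example: as the front converges, the index stabilizes.
    print(trapezoid_index([(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]))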
Wen, Tingxi; Zhang, Zhongnan; Wong, Kelvin K L
2016-01-01
Unmanned aerial vehicles (UAVs) have been widely used in many industries. In the medical environment, especially in some emergency situations, UAVs play an important role, such as supplying medicines and blood with speed and efficiency. In this paper, we study the problem of multi-objective blood supply by UAVs in such emergency situations. This is a complex problem that includes maintenance of a model of the supplied blood's temperature during transportation, the scheduling of the UAVs and the planning of their routes when multiple sites request blood, and limited carrying capacity. Most importantly, we need to study the blood's temperature change due to the external environment, the heating agent (or refrigerant), and the time factor during transportation, and propose an optimal method for calculating the mixing proportion of blood and appendage under different circumstances and delivery conditions. Then, by introducing the idea of a transportation appendage into the traditional Capacitated Vehicle Routing Problem (CVRP), a new problem is formulated according to the factors of distance and weight. Algorithmically, we use a combination of a decomposition-based multi-objective evolutionary algorithm and a local search method to perform a series of experiments on the CVRP public dataset. Compared with traditional techniques, our algorithm obtains better optimization results and time performance.
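The exact decomposition used is not given in the abstract, but algorithms of the family it refers to (such as MOEA/D) typically turn the multi-objective problem into a set of scalar subproblems, one per weight vector; a common choice is the Tchebycheff function. A hedged sketch (the objective names, such as route length and a temperature-deviation penalty, are our assumptions, not the paper's formulation):

    def tchebycheff(objectives, weights, ideal):
        """Tchebycheff scalarization commonly used in decomposition-based
        multi-objective evolutionary algorithms: each weight vector defines
        one single-objective subproblem to be minimized.

        objectives, weights, ideal: sequences of equal length, e.g. total
        route length and a blood-temperature-deviation penalty for one
        candidate set of UAV routes.
        """
        return max(w * abs(f - z) for f, w, z in zip(objectives, weights, ideal))

    # A candidate that improves either route length or temperature control
    # lowers the value of at least one subproblem.
    print(tchebycheff((120.0, 3.5), (0.7, 0.3), (80.0, 0.0)))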
Gehre, Matthias; Renpenning, Julian; Geilmann, Heike; Qi, Haiping; Coplen, Tyler B; Kümmel, Steffen; Ivdra, Natalija; Brand, Willi A; Schimmelmann, Arndt
2017-03-30
Accurate hydrogen isotopic analysis of halogen- and sulfur-bearing organics has not been possible with traditional high-temperature conversion (HTC) because the formation of hydrogen-bearing reaction products other than molecular hydrogen (H 2 ) is responsible for non-quantitative H 2 yields and possible hydrogen isotopic fractionation. Our previously introduced, new chromium-based EA-Cr/HTC-IRMS (Elemental Analyzer-Chromium/High-Temperature Conversion Isotope Ratio Mass Spectrometry) technique focused primarily on nitrogen-bearing compounds. Several technical and analytical issues concerning halogen- and sulfur-bearing samples, however, remained unresolved and required further refinement of the reactor systems. The EA-Cr/HTC reactor was substantially modified for the conversion of halogen- and sulfur-bearing samples. The performance of the novel conversion setup for solid and liquid samples was monitored and optimized using a simultaneously operating dual-detection system of IRMS and ion trap MS. The method with several variants in the reactor, including the addition of manganese metal chips, was evaluated in three laboratories using EA-Cr/HTC-IRMS (on-line method) and compared with traditional uranium-reduction-based conversion combined with manual dual-inlet IRMS analysis (off-line method) in one laboratory. The modified EA-Cr/HTC reactor setup showed an overall H 2 -recovery of more than 96% for all halogen- and sulfur-bearing organic compounds. All results were successfully normalized via two-point calibration with VSMOW-SLAP reference waters. Precise and accurate hydrogen isotopic analysis was achieved for a variety of organics containing F-, Cl-, Br-, I-, and S-bearing heteroelements. The robust nature of the on-line EA-Cr/HTC technique was demonstrated by a series of 196 consecutive measurements with a single reactor filling. The optimized EA-Cr/HTC reactor design can be implemented in existing analytical equipment using commercially available material and is universally applicable for both heteroelement-bearing and heteroelement-free organic-compound classes. The sensitivity and simplicity of the on-line EA-Cr/HTC-IRMS technique provide a much needed tool for routine hydrogen-isotope source tracing of organic contaminants in the environment. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
Xiao, Jin; Yuan, Jie; Tian, Zhongliang; Yang, Kai; Yao, Zhen; Yu, Bailie; Zhang, Liuyun
2018-01-01
The spent cathode carbon (SCC) from aluminum electrolysis was subjected to caustic leaching to investigate the different effects of ultrasound-assisted and traditional methods on the elemental fluorine (F) leaching rate and the carbon content of the leaching residue. Sodium hydroxide (NaOH) dissolved in deionized water was used as the reaction system. Through single-factor experiments and a comparison of the two leaching techniques, the optimum F leaching rate and residue carbon content for the ultrasound-assisted leaching process were obtained at a temperature of 70°C, a residence time of 40 min, an initial mass ratio of alkali to SCC (initial alkali-to-material ratio) of 0.6, a liquid-to-solid ratio of 10 mL/g, and an ultrasonic power of 400 W. Under the optimal conditions, the leaching residue carbon content was 94.72%, 2.19% higher than the carbon content of the traditional leaching residue. The leaching wastewater was treated with calcium chloride (CaCl2) and bleaching powder, and the treated wastewater was recycled as caustic solution. All in all, benefiting from the ultrasonication effects, ultrasound-assisted caustic leaching of spent cathode carbon had a 55.6% shorter residence time than the traditional process, with a higher impurity removal rate.
Finite Energy and Bounded Actuator Attacks on Cyber-Physical Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Djouadi, Seddik M; Melin, Alexander M; Ferragut, Erik M
As control system networks are being connected to enterprise-level networks for remote monitoring, operation, and system-wide performance optimization, these same connections are providing vulnerabilities that can be exploited by malicious actors for attack, financial gain, and theft of intellectual property. Much effort in cyber-physical system (CPS) protection has focused on protecting the borders of the system through traditional information security techniques. Less effort has been applied to the protection of cyber-physical systems from intelligent attacks launched after an attacker has defeated the information security protections to gain access to the control system. In this paper, attacks on actuator signals are analyzed from a system-theoretic context. The threat surface is classified into finite energy and bounded attacks. These two broad classes encompass a large range of potential attacks. The effects of these attacks on a linear quadratic (LQ) controller are analyzed, and the optimal actuator attacks for both finite- and infinite-horizon LQ control are derived; therefore, the worst-case attack signals are obtained. The closed-loop system under the optimal attack signals is given, and a numerical example illustrating the effect of an optimal bounded attack is provided.
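For orientation only, the quantity such an attacker degrades can be written as a generic finite-horizon LQ cost; the paper's exact weighting matrices and attack model are not reproduced here, and the notation is ours:

    J = x^{\mathsf{T}}(T)\,S\,x(T) + \int_{0}^{T}\!\left[\,x^{\mathsf{T}}(t)\,Q\,x(t) + u^{\mathsf{T}}(t)\,R\,u(t)\,\right]dt,
    \qquad \dot{x}(t) = A\,x(t) + B\bigl(u(t) + a(t)\bigr),

where a(t) is the additive actuator attack, constrained either in energy, \int_{0}^{T}\|a(t)\|^{2}\,dt \le \varepsilon (the finite-energy class), or in magnitude, \|a(t)\| \le \bar{a} (the bounded class).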
Optimization of Supersonic Transport Trajectories
NASA Technical Reports Server (NTRS)
Ardema, Mark D.; Windhorst, Robert; Phillips, James
1998-01-01
This paper develops a near-optimal guidance law for generating minimum-fuel, minimum-time, or minimum-cost fixed-range trajectories for supersonic transport aircraft. The approach uses a choice of new state variables along with singular perturbation techniques to time-scale decouple the dynamic equations into multiple equations of single order (second order for the fast dynamics). Application of the maximum principle to each of the decoupled equations, as opposed to application to the original coupled equations, avoids the two-point boundary value problem and transforms the problem from a functional optimization into multiple function optimizations. It is shown that such an approach reproduces well-known aircraft performance results, such as minimizing the Breguet factor for minimum fuel consumption and the energy climb path. Furthermore, the new state variables produce a consistent calculation of flight path angle along the trajectory, eliminating one of the deficiencies of the traditional energy state approximation. In addition, jumps in the energy climb path are smoothed out by integration of the original dynamic equations at constant load factor. Numerical results for a supersonic transport design show that a pushover dive followed by a pullout at nominal load factors is sufficient to smooth the jump.
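For reference, the Breguet factor mentioned above can be read as the fuel burned per unit range in cruise; under the usual quasi-steady assumptions (our notation, not reproduced from the paper),

    \frac{dW_{\mathrm{fuel}}}{dR} = \frac{c\,W}{V\,(L/D)},
    \qquad
    R = \frac{V}{c}\,\frac{L}{D}\,\ln\frac{W_{i}}{W_{f}},

where V is the true airspeed, c the thrust-specific fuel consumption, L/D the lift-to-drag ratio, and W the instantaneous weight. Minimizing the left-hand quantity at each weight is equivalent to maximizing V(L/D)/c, and integrating it at that optimum yields the classical Breguet range equation on the right.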
NASA Astrophysics Data System (ADS)
Hueneke, Tilman; Grossmann, Katja; Knecht, Matthias; Raecke, Rasmus; Stutz, Jochen; Werner, Bodo; Pfeilsticker, Klaus
2016-04-01
Changing atmospheric conditions during DOAS measurements from fast-moving aircraft platforms pose a challenge for trace gas retrievals. Traditional inversion techniques to retrieve trace gas concentrations from limb-scattered UV/vis spectroscopy, like optimal estimation, require a priori information on Mie extinction (e.g., aerosol concentration and cloud cover) and albedo, which determine the atmospheric radiative transfer. In contrast to satellite applications, cloud filters cannot be applied because they would strongly reduce the usable amount of expensively gathered measurement data. In contrast to ground-based MAX-DOAS applications, an aerosol retrieval based on O4 is not able to constrain the radiative transfer in airborne applications due to the rapidly decreasing amount of O4 with altitude. Furthermore, the assumption of a constant cloud cover is not valid for fast-moving aircraft, thus requiring 2D or even 3D treatment of the radiative transfer. Therefore, traditional techniques are not applicable to most of the data gathered by fast-moving aircraft platforms. In order to circumvent these limitations, we have been developing the so-called X-gas scaling method. By utilising a proxy gas X (e.g. O3, O4, …), whose concentration is either known a priori or simultaneously measured in situ as well as remotely, an effective absorption length for the target gas is inferred. In this presentation, we discuss the strengths and weaknesses of the novel approach along with some sample cases. A particular strength of the X-gas scaling method is its insensitivity towards the aerosol abundance and cloud cover as well as wavelength-dependent effects, whereas its sensitivity towards the profiles of both gases requires a priori information on their shapes.
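In compact form, the scaling idea can be summarized as follows (our notation, reduced to a single effective light path; the actual retrieval also treats wavelength dependence and profile shapes):

    L_{\mathrm{eff}} \approx \frac{\mathrm{dSCD}_{X}}{[X]},
    \qquad
    [T] \approx \frac{\mathrm{dSCD}_{T}}{L_{\mathrm{eff}}} = \frac{\mathrm{dSCD}_{T}}{\mathrm{dSCD}_{X}}\,[X],

where dSCD denotes a measured differential slant column density, X is the proxy gas with known or in-situ measured concentration [X], and T is the target gas.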
NASA Astrophysics Data System (ADS)
Wang, Ziyang; Fiorini, Paolo; Leonov, Vladimir; Van Hoof, Chris
2009-09-01
This paper presents the material characterization methods, characterization results and the optimization scheme for polycrystalline Si70%Ge30% (poly-SiGe) from the perspective of its application in a surface micromachined thermopile. Due to its comparative advantages, such as lower thermal conductivity and ease of processing, over other materials, poly-SiGe is chosen to fabricate a surface micromachined thermopile and eventually a wearable thermoelectric generator (TEG) to be used on a human body. To enable optimal design of advanced thermocouple microstructures, poly-SiGe sample materials prepared by two different techniques, namely low-pressure chemical vapor deposition (LPCVD) with in situ doping and rapid thermal chemical vapor deposition (RTCVD) with ion implantation, have been characterized. Relevant material properties, including electrical resistivity, Seebeck coefficient, thermal conductivity and specific contact resistance, have been reported. For the determination of thermal conductivity, a novel surface-micromachined test structure based on the Seebeck effect is designed, fabricated and measured. Compared to the traditional test structures, it is more advantageous for sample materials with a relatively large Seebeck coefficient, such as poly-SiGe. Based on the characterization results, a further optimization scheme is suggested to allow independent respective optimization of the figure of merit and the specific contact resistance.
A stochastic optimal feedforward and feedback control methodology for superagility
NASA Technical Reports Server (NTRS)
Halyo, Nesim; Direskeneli, Haldun; Taylor, Deborah B.
1992-01-01
A new control design methodology is developed: Stochastic Optimal Feedforward and Feedback Technology (SOFFT). Traditional design techniques optimize a single cost function (which expresses the design objectives) to obtain both the feedforward and feedback control laws. This approach places conflicting demands on the control law, such as fast tracking versus noise attenuation/disturbance rejection. In the SOFFT approach, two cost functions are defined. The feedforward control law is designed to optimize one cost function, and the feedback control law optimizes the other. By separating the design objectives and decoupling the feedforward and feedback design processes, both objectives can be achieved fully. A new measure of command tracking performance, Z-plots, is also developed. By analyzing these plots at off-nominal conditions, the sensitivity or robustness of the system in tracking commands can be predicted. Z-plots provide an important tool for designing robust control systems. The Variable-Gain SOFFT methodology was used to design a flight control system for the F/A-18 aircraft. It is shown that SOFFT can be used to expand the operating regime and provide greater performance (flying/handling qualities) throughout the extended flight regime. This work was performed under the NASA SBIR program. ICS plans to market the software developed as a new module in its commercial CACSD software package: ACET.
Unprotected Left Main Disease: Indications and Optimal Strategies for Percutaneous Intervention.
Li, Jun; Patel, Sandeep M; Parikh, Manish A; Parikh, Sahil A
2016-03-01
Although the incidence of left main (LM) coronary artery disease is relatively low in patients undergoing routine angiography, it is a common presentation in patients with acute coronary syndromes. With the current interventional tools and techniques, percutaneous intervention for LM disease has become a viable alternative to traditional coronary artery bypass grafting. Factors that contribute to the success and appropriateness of percutaneous intervention for LM disease include coronary anatomy and patient-specific factors such as left ventricular function. Multiple considerations should be taken into account prior to intervention, including hemodynamic support if necessary, intravascular imaging to guide therapy, and stent technique. This review provides an overview of the current body of literature supporting the use of percutaneous intervention in LM disease and serves as a guideline for the interventionalist approaching LM revascularization.
Optimal CCD readout by digital correlated double sampling
NASA Astrophysics Data System (ADS)
Alessandri, C.; Abusleme, A.; Guzman, D.; Passalacqua, I.; Alvarez-Fontecilla, E.; Guarini, M.
2016-01-01
Digital correlated double sampling (DCDS), a readout technique for charge-coupled devices (CCD), is gaining popularity in astronomical applications. By using an oversampling ADC and a digital filter, a DCDS system can achieve a better performance than traditional analogue readout techniques at the expense of a more complex system analysis. Several attempts to analyse and optimize a DCDS system have been reported, but most of the work presented in the literature has been experimental. Some approximate analytical tools have been presented for independent parameters of the system, but the overall performance and trade-offs have not been yet modelled. Furthermore, there is disagreement among experimental results that cannot be explained by the analytical tools available. In this work, a theoretical analysis of a generic DCDS readout system is presented, including key aspects such as the signal conditioning stage, the ADC resolution, the sampling frequency and the digital filter implementation. By using a time-domain noise model, the effect of the digital filter is properly modelled as a discrete-time process, thus avoiding the imprecision of continuous-time approximations that have been used so far. As a result, an accurate, closed-form expression for the signal-to-noise ratio at the output of the readout system is reached. This expression can be easily optimized in order to meet a set of specifications for a given CCD, thus providing a systematic design methodology for an optimal readout system. Simulated results are presented to validate the theory, obtained with both time- and frequency-domain noise generation models for completeness.
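In its simplest form, the digital CDS estimate is the difference between weighted averages of the oversampled reset and signal levels of each pixel. A minimal sketch with a plain boxcar (equal-weight) filter follows; the sample counts and values are illustrative, and this is not the optimized filter derived in the paper:

    import numpy as np

    def dcds_pixel(reset_samples, signal_samples):
        """Digital correlated double sampling for one pixel: subtract the
        averaged reset level from the averaged signal level. Equal (boxcar)
        weights are used here; an optimized filter would weight the samples."""
        return np.mean(signal_samples) - np.mean(reset_samples)

    # Example with synthetic data: 16 ADC samples per level.
    rng = np.random.default_rng(0)
    reset = 1000.0 + rng.normal(0.0, 5.0, 16)
    signal = 1250.0 + rng.normal(0.0, 5.0, 16)
    print(dcds_pixel(reset, signal))   # approximately the 250 ADU pixel value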
The future of human DNA vaccines
Li, Lei; Saade, Fadi; Petrovsky, Nikolai
2012-01-01
DNA vaccines have evolved greatly over the last 20 years since their invention, but have yet to become a competitive alternative to conventional protein- or carbohydrate-based human vaccines. Whilst safety concerns were an initial barrier, the Achilles heel of DNA vaccines remains their poor immunogenicity when compared to protein vaccines. A wide variety of strategies have been developed to optimize DNA vaccine immunogenicity, including codon optimization, genetic adjuvants, electroporation and sophisticated prime-boost regimens, with each of these methods having its advantages and limitations. Whilst each of these methods has contributed to incremental improvements in DNA vaccine efficacy, more is still needed if human DNA vaccines are to succeed commercially. This review foresees that a final breakthrough in human DNA vaccines will come from application of the latest cutting-edge technologies, including “epigenetics” and “omics” approaches, alongside traditional techniques to improve immunogenicity such as adjuvants and electroporation, thereby overcoming the current limitations of DNA vaccines in humans.
Alesso, Magdalena; Escudero, Luis A; Talio, María Carolina; Fernández, Liliana P
2016-11-01
A new, simple methodology is proposed for the quantification of chlorsulfuron (CS) traces based upon enhancement of the rhodamine B (RhB) fluorescence signal. Experimental variables that influence the fluorimetric sensitivity have been studied and optimized. The zeroth-order regression calibration was linear from 0.866 to 35.800 µg L⁻¹ CS, with a correlation coefficient of 0.99. At optimal experimental conditions, a limit of detection of 0.259 µg L⁻¹ and a limit of quantification of 0.866 µg L⁻¹ were obtained. The method showed good sensitivity and adequate selectivity and was applied to the determination of trace amounts of CS in plasma, serum and water samples, with satisfactory results as analyzed by an ANOVA test. The proposed methodology represents an alternative to traditional chromatographic techniques for CS monitoring in complex samples, using an instrument readily available in control laboratories.
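The abstract does not state how the detection and quantification limits were derived; one routine convention for a linear fluorescence calibration (3σ and 10σ of the blank divided by the slope) can be sketched as follows, with purely hypothetical data:

    import numpy as np

    def calibration_limits(conc, signal, blank_sd):
        """Least-squares calibration line plus the common 3*sigma/slope and
        10*sigma/slope estimates of LOD and LOQ (one convention among several)."""
        slope, intercept = np.polyfit(conc, signal, 1)
        r = np.corrcoef(conc, signal)[0, 1]
        lod = 3.0 * blank_sd / slope
        loq = 10.0 * blank_sd / slope
        return slope, intercept, r, lod, loq

    # Hypothetical calibration points in micrograms per litre.
    conc = np.array([1.0, 5.0, 10.0, 20.0, 35.0])
    signal = np.array([2.1, 10.4, 20.6, 41.0, 71.2])
    print(calibration_limits(conc, signal, blank_sd=0.18))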
Learning Weight Uncertainty with Stochastic Gradient MCMC for Shape Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Chunyuan; Stevens, Andrew J.; Chen, Changyou
2016-08-10
Learning the representation of shape cues in 2D & 3D objects for recognition is a fundamental task in computer vision. Deep neural networks (DNNs) have shown promising performance on this task. Due to the large variability of shapes, accurate recognition relies on good estimates of model uncertainty, ignored in traditional training of DNNs, typically learned via stochastic optimization. This paper leverages recent advances in stochastic gradient Markov Chain Monte Carlo (SG-MCMC) to learn weight uncertainty in DNNs. It yields principled Bayesian interpretations for the commonly used Dropout/DropConnect techniques and incorporates them into the SG-MCMC framework. Extensive experiments on 2D & 3D shape datasets and various DNN models demonstrate the superiority of the proposed approach over stochastic optimization. Our approach yields higher recognition accuracy when used in conjunction with Dropout and Batch-Normalization.
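A representative member of the SG-MCMC family is stochastic gradient Langevin dynamics (SGLD), in which the usual gradient step is perturbed with Gaussian noise scaled to the step size so that the iterates sample an approximate posterior over the weights rather than collapsing to a point estimate. A minimal sketch (the toy target and all names are ours, not the paper's setup):

    import numpy as np

    def sgld_step(theta, grad_log_post, step_size, rng):
        """One stochastic gradient Langevin dynamics update:
        theta <- theta + (step/2) * grad log p(theta | minibatch) + N(0, step)."""
        noise = rng.normal(0.0, np.sqrt(step_size), size=theta.shape)
        return theta + 0.5 * step_size * grad_log_post(theta) + noise

    # Toy example: sample from a 1-D Gaussian posterior N(2, 1).
    rng = np.random.default_rng(0)
    theta = np.zeros(1)
    samples = []
    for _ in range(5000):
        theta = sgld_step(theta, lambda t: -(t - 2.0), 1e-2, rng)
        samples.append(theta[0])
    print(np.mean(samples[1000:]))   # close to 2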
NASA Technical Reports Server (NTRS)
Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)
2003-01-01
A method and system for design optimization that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The present invention employs a unique strategy called parameter-based partitioning of the given design space. In the design procedure, a sequence of composite response surfaces based on both neural networks and polynomial fits is used to traverse the design space to identify an optimal solution. The composite response surface has both the power of neural networks and the economy of low-order polynomials (in terms of the number of simulations needed and the network training requirements). The present invention handles design problems with many more parameters than would be possible using neural networks alone and permits a designer to rapidly perform a variety of trade-off studies before arriving at the final design.
NASA Technical Reports Server (NTRS)
Brunstrom, Anna; Leutenegger, Scott T.; Simha, Rahul
1995-01-01
Traditionally, allocation of data in distributed database management systems has been determined by off-line analysis and optimization. This technique works well for static database access patterns, but is often inadequate for frequently changing workloads. In this paper we address how to dynamically reallocate data for partitionable distributed databases with changing access patterns. Rather than complicated and expensive optimization algorithms, a simple heuristic is presented and shown, via an implementation study, to improve system throughput by 30 percent in a local area network based system. Based on artificial wide area network delays, we show that dynamic reallocation can improve system throughput by a factor of two and a half for wide area networks. We also show that individual site load must be taken into consideration when reallocating data, and provide a simple policy that incorporates load in the reallocation decision.
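The paper's heuristic is not reproduced in the abstract; purely to illustrate the kind of access-pattern- and load-driven rule it describes, a toy sketch (all names, thresholds, and the tie-breaking policy are hypothetical) might be:

    def reallocate(partition_access, site_load, load_cap):
        """Toy reallocation rule: move each partition to the site that accesses
        it most, unless that site is already above a load threshold.

        partition_access: {partition: {site: access_count}}
        site_load:        {site: current_load}, updated as partitions are placed
        load_cap:         maximum load a site may take on
        Returns a {partition: site} placement.
        """
        placement = {}
        for part, counts in partition_access.items():
            for site in sorted(counts, key=counts.get, reverse=True):
                if site_load.get(site, 0) <= load_cap:
                    placement[part] = site
                    site_load[site] = site_load.get(site, 0) + 1
                    break
        return placement

    print(reallocate({"orders": {"A": 90, "B": 10}}, {"A": 2, "B": 0}, load_cap=5))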
Genetic Algorithm for Traveling Salesman Problem with Modified Cycle Crossover Operator.
Hussain, Abid; Muhammad, Yousaf Shad; Nauman Sajid, M; Hussain, Ijaz; Mohamd Shoukry, Alaa; Gani, Showkat
2017-01-01
Genetic algorithms are evolutionary techniques used for optimization purposes according to the survival-of-the-fittest idea. These methods do not guarantee optimal solutions; however, they usually give good approximations in reasonable time. Genetic algorithms are useful for NP-hard problems, especially the traveling salesman problem. A genetic algorithm depends on its selection criteria, crossover, and mutation operators. To tackle the traveling salesman problem using genetic algorithms, there are various representations such as binary, path, adjacency, ordinal, and matrix representations. In this article, we propose a new crossover operator for the traveling salesman problem to minimize the total distance. This approach is coupled with the path representation, which is the most natural way to represent a legal tour. Computational results on several benchmark TSPLIB instances are reported for the new cycle crossover operator alongside traditional path-representation operators such as partially mapped and order crossover, and improvements are found.
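The modified operator itself is not specified in the abstract, but the classical cycle crossover it builds on can be sketched in a few lines; the paper's modified version differs from this baseline:

    def cycle_crossover(p1, p2):
        """Classical cycle crossover (CX) for permutation-encoded tours:
        cities are copied from alternating parents, one cycle at a time,
        so every city keeps a position it held in one of the parents."""
        n = len(p1)
        pos = {city: i for i, city in enumerate(p1)}
        child1, child2 = [None] * n, [None] * n
        visited, cycle = [False] * n, 0
        for start in range(n):
            if visited[start]:
                continue
            i, indices = start, []
            while not visited[i]:
                visited[i] = True
                indices.append(i)
                i = pos[p2[i]]          # follow the cycle through both parents
            for i in indices:
                child1[i] = p1[i] if cycle % 2 == 0 else p2[i]
                child2[i] = p2[i] if cycle % 2 == 0 else p1[i]
            cycle += 1
        return child1, child2

    print(cycle_crossover([1, 2, 3, 4, 5, 6, 7, 8], [8, 5, 2, 1, 3, 6, 4, 7]))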
Advances in the Knowledge about Kidney Decellularization and Repopulation
Destefani, Afrânio Côgo; Sirtoli, Gabriela Modenesi; Nogueira, Breno Valentim
2017-01-01
End-stage renal disease (ESRD) is characterized by the progressive deterioration of renal function that may compromise different tissues and organs. The major treatment indicated for patients with ESRD is kidney transplantation. However, the shortage of available organs, as well as the high rate of organ rejection, supports the need for new therapies. Thus, the application of tissue bioengineering to organ regeneration has emerged as an alternative to traditional organ transplantation. Decellularization of organs with chemical, physical, and/or biological agents generates natural scaffolds, which can serve as a basis for tissue reconstruction. The recellularization of these scaffolds with different cell sources, such as stem cells or adult differentiated cells, can provide an organ with functionality and no immune response after in vivo transplantation in the host. Several studies have focused on improving these techniques, but until now, no optimal decellularization method for the kidney has been available. Herein, an overview of the current literature on kidney decellularization and whole-organ recellularization is presented, addressing the pros and cons of the techniques already developed, the methods adopted to evaluate the efficacy of the procedures, and the challenges to be overcome in order to achieve an optimal protocol.
NASA Astrophysics Data System (ADS)
Alhalaili, Badriyah; Dryden, Daniel M.; Vidu, Ruxandra; Ghandiparsi, Soroush; Cansizoglu, Hilal; Gao, Yang; Saif Islam, M.
2018-03-01
Photo-electrochemical (PEC) etching can produce high-aspect ratio features, such as pillars and holes, with high anisotropy and selectivity, while avoiding the surface and sidewall damage caused by traditional deep reactive ion etching (DRIE) or inductively coupled plasma (ICP) RIE. Plasma-based techniques lead to the formation of dangling bonds, surface traps, carrier leakage paths, and recombination centers. In pursuit of effective PEC etching, we demonstrate an optical system using long wavelength (λ = 975 nm) infra-red (IR) illumination from a high-power laser (1-10 W) to control the PEC etching process in n-type silicon. The silicon wafer surface was patterned with notches through a lithography process and KOH etching. Then, PEC etching was introduced by illuminating the backside of the silicon wafer to enhance depth, resulting in high-aspect ratio structures. The effect of the PEC etching process was optimized by varying light intensities and electrolyte concentrations. This work was focused on determining and optimizing this PEC etching technique on silicon, with the goal of expanding the method to a variety of materials including GaN and SiC that are used in designing optoelectronic and electronic devices, sensors and energy harvesting devices.
KNMI DataLab experiences in serving data-driven innovations
NASA Astrophysics Data System (ADS)
Noteboom, Jan Willem; Sluiter, Raymond
2016-04-01
Climate change research and innovations in weather forecasting rely more and more on (Big) data. Besides increasing data volumes from traditional sources (such as observation networks, radars and satellites), the use of open data, crowd-sourced data and the Internet of Things (IoT) is emerging. To deploy these sources of data optimally in our services and products, KNMI has established a DataLab to serve data-driven innovations in collaboration with public and private sector partners. Big data management, data integration, data analytics including machine learning, and data visualization techniques play an important role in the DataLab. Cross-domain data-driven innovations that arise from public-private collaborative projects and research programmes can be explored, experimented with and/or piloted by the KNMI DataLab. Furthermore, advice can be requested on (Big) data techniques and data sources. In support of collaborative (Big) data science activities, scalable environments are offered with facilities for data integration, data analysis and visualization. In addition, Data Science expertise is provided directly or from a pool of internal and external experts. At the EGU conference, the experiences gained and best practices in operating the KNMI DataLab to serve data-driven innovations for weather and climate applications are presented.
A study of optimization techniques in HDR brachytherapy for the prostate
NASA Astrophysics Data System (ADS)
Pokharel, Ghana Shyam
Several studies carried out thus far favor dose escalation to the prostate gland for better local control of the disease. However, the optimal way to deliver higher doses of radiation therapy to the prostate without harming neighboring critical structures is still debatable. In this study, we propose that real-time high dose rate (HDR) brachytherapy with highly efficient and effective optimization could be an alternative means of precise delivery of such higher doses. This approach to delivery eliminates critical issues such as the treatment setup uncertainties and target localization of external beam radiation therapy. Likewise, dosimetry in HDR brachytherapy is not influenced by organ edema and potential source migration as in permanent interstitial implants. Moreover, recently reported radiobiological parameters further strengthen the argument for using hypofractionated HDR brachytherapy for the management of prostate cancer. Firstly, we studied the essential features and requirements of a real-time HDR brachytherapy treatment planning system. Automated catheter reconstruction with fast editing tools, a fast yet accurate dose engine, and a robust, fast optimization and evaluation engine are among the essential requirements for such procedures. Moreover, in most of the cases we performed, treatment plan optimization took a significant share of the overall procedure time. Thus, making treatment plan optimization automatic or semi-automatic with sufficient speed and accuracy was the goal of the remaining part of the project. Secondly, we studied the role of the optimization function and constraints in the overall quality of the optimized plan. We studied a gradient-based deterministic algorithm with dose volume histogram (DVH) based and more conventional variance-based objective functions. In this optimization strategy, the relative weight of a particular objective in the aggregate objective function signifies its importance with respect to the other objectives. Based on our study, the DVH-based objective function performed better than the traditional variance-based objective function in creating a clinically acceptable plan when executed under identical conditions. Thirdly, we studied a multiobjective optimization strategy using both DVH- and variance-based objective functions. The optimization strategy was to create several Pareto-optimal solutions by scanning the clinically relevant part of the Pareto front. This strategy was adopted to decouple optimization from decision making, so that the user could select the final solution from a pool of alternative solutions based on his or her clinical goals. The overall quality of the treatment plan improved using this approach compared to the traditional class solution approach. In fact, the final optimized plan selected using the decision engine with the DVH-based objective was comparable to a typical clinical plan created by an experienced physicist. Next, we studied a hybrid technique comprising both stochastic and deterministic algorithms to optimize both dwell positions and dwell times. A simulated annealing algorithm was used to find the optimal catheter distribution, and the DVH-based algorithm was used to optimize the 3D dose distribution for a given catheter distribution. This unique treatment planning and optimization tool was capable of producing clinically acceptable, highly reproducible treatment plans in clinically reasonable time.
As this algorithm was able to create clinically acceptable plans automatically within a clinically reasonable time, it is very appealing for real-time procedures. Next, we studied the feasibility of multiobjective optimization using an evolutionary algorithm for real-time HDR brachytherapy of the prostate. With properly tuned algorithm-specific parameters, the algorithm was able to create clinically acceptable plans within a clinically reasonable time. However, the algorithm was allowed to run for only a limited number of generations, fewer than is generally considered optimal for such algorithms, in order to keep the time window suitable for real-time procedures. Therefore, further study under improved conditions is required to realize the full potential of the algorithm.
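To make the dwell-time optimization step concrete, the following is a minimal sketch (not the author's treatment planning engine) of gradient-projection optimization of dwell times under a quadratic overdose/underdose penalty; the dose-rate matrix, dose window, step rule, and iteration count are hypothetical illustration values.

```python
# Minimal sketch of gradient-based dwell-time optimization for HDR brachytherapy,
# using a quadratic overdose/underdose penalty as a stand-in for the DVH/variance
# objectives discussed above. A, d_min, d_max are hypothetical, not clinical data.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_dwells = 200, 30
A = rng.uniform(0.0, 1.0, size=(n_voxels, n_dwells))   # dose per unit dwell time
d_min, d_max = 8.0, 10.0                                # prescribed dose window (Gy)

def penalty_and_grad(t):
    d = A @ t
    under = np.minimum(d - d_min, 0.0)                  # cold spots
    over = np.maximum(d - d_max, 0.0)                   # hot spots
    f = np.sum(under**2) + np.sum(over**2)
    g = 2.0 * A.T @ (under + over)
    return f, g

step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)          # safe step for this quadratic
t = np.full(n_dwells, 0.5)                              # initial dwell times (s)
for _ in range(3000):
    f, g = penalty_and_grad(t)
    t = np.maximum(t - step * g, 0.0)                   # project onto t >= 0

print(f"final penalty {f:.3f}, mean voxel dose {(A @ t).mean():.2f} Gy")
```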
Wu, Changzheng; Zhang, Feng; Li, Lijun; Jiang, Zhedong; Ni, Hui; Xiao, Anfeng
2018-01-01
High amounts of insoluble substrates exist in the traditional solid-state fermentation (SSF) system. The presence of these substrates complicates the determination of microbial biomass. Thus, enzyme activity is used as the sole index for the optimization of the traditional SSF system, and the relationship between microbial growth and enzyme synthesis is always ignored. This study was conducted to address this deficiency. All soluble nutrients from tea stalk were extracted using water. The aqueous extract was then mixed with polyurethane sponge to establish a modified SSF system, which was then used to conduct tannase production. With this system, biomass, enzyme activity, and enzyme productivity could be measured rationally and accurately. Thus, the association between biomass and enzyme activity could be easily identified, and the shortcomings of traditional SSF could be addressed. Different carbon and nitrogen sources exerted different effects on microbial growth and enzyme production. Single-factor experiments showed that glucose and yeast extract greatly improved microbial biomass accumulation and that tannin and (NH4)2SO4 efficiently promoted enzyme productivity. Then, these four factors were optimized through response surface methodology. Tannase activity reached 19.22 U/gds when the added amounts of tannin, glucose, (NH4)2SO4, and yeast extract were 7.49, 8.11, 9.26, and 2.25%, respectively. Tannase activity under the optimized process conditions was 6.36 times higher than that under the initial process conditions. The optimized parameters were directly applied to the traditional tea stalk SSF system. Tannase activity reached 245 U/gds, which is 2.9 times higher than our previously reported value. In this study, a modified SSF system was established to address the shortcomings of the traditional SSF system. Analysis revealed that enzymatic activity and microbial biomass are closely related, and different carbon and nitrogen sources have different effects on microbial growth and enzyme production. The maximal tannase activity was obtained under the optimal combination of nutrient sources that enhances cell growth and tannase accumulation. Moreover, tannase production through the traditional tea stalk SSF was markedly improved when the optimized parameters were applied. This work provides an innovative approach to bioproduction research through SSF.
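As an illustration of the response surface methodology step, the sketch below fits a second-order polynomial to designed-experiment data and locates the factor settings that maximize the predicted response; the two coded factors and the synthetic data are placeholders, not the tannase measurements reported above.

```python
# Minimal response-surface sketch: fit a second-order polynomial to
# design-of-experiment data, then maximize the predicted response.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(25, 2))       # 2 coded factors, 25 runs (synthetic)
y = 10 - 2*(X[:, 0] - 0.3)**2 - 3*(X[:, 1] + 0.2)**2 + rng.normal(0, 0.1, 25)

def design_matrix(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1*x2, x1**2, x2**2])

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)   # quadratic model fit

def predicted(x):
    return design_matrix(np.atleast_2d(x)) @ beta

res = minimize(lambda x: -predicted(x)[0], x0=[0.0, 0.0],
               bounds=[(-1, 1), (-1, 1)])
print("optimal coded factor settings:", res.x, "predicted response:", -res.fun)
```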
Asymmetric Dual-Band Tracking Technique for Optimal Joint Processing of BDS B1I and B1C Signals
Wang, Chuhan; Cui, Xiaowei; Ma, Tianyi; Lu, Mingquan
2017-01-01
Along with the rapid development of the Global Navigation Satellite System (GNSS), satellite navigation signals have become more diversified, complex, and agile in adapting to increasing market demands. Various techniques have been developed for processing multiple navigation signals to achieve better performance in terms of accuracy, sensitivity, and robustness. This paper focuses on a technique for processing two signals with separate but adjacent center frequencies, such as B1I and B1C signals in the BeiDou global system. The two signals may differ in modulation scheme, power, and initial phase relation and can be processed independently by user receivers; however, the propagation delays of the two signals from a satellite are nearly identical as they are modulated on adjacent frequencies, share the same reference clock, and undergo nearly identical propagation paths to the receiver, resulting in strong coherence between the two signals. Joint processing of these signals can achieve optimal measurement performance due to the increased Gabor bandwidth and power. In this paper, we propose a universal scheme of asymmetric dual-band tracking (ASYM-DBT) to take advantage of the strong coherence, the increased Gabor bandwidth, and power of the two signals in achieving much-reduced thermal noise and more accurate ranging results when compared with the traditional single-band algorithm. PMID:29035350
Pargett, Michael; Umulis, David M
2013-07-15
Mathematical modeling of transcription factor and signaling networks is widely used to understand if and how a mechanism works, and to infer regulatory interactions that produce a model consistent with the observed data. Both of these approaches to modeling are informed by experimental data; however, much of the data available or even acquirable are not quantitative. Data that are not strictly quantitative cannot be used by classical, quantitative, model-based analyses that measure a difference between the measured observation and the model prediction for that observation. To bridge the model-to-data gap, a variety of techniques have been developed to measure model "fitness" and provide numerical values that can subsequently be used in model optimization or model inference studies. Here, we discuss a selection of traditional and novel techniques to transform data of varied quality and enable quantitative comparison with mathematical models. This review is intended both to inform the use of these model analysis methods, focused on parameter estimation, and to help guide the choice of method to use for a given study based on the type of data available. Applying techniques such as normalization or optimal scaling may significantly improve the utility of current biological data in model-based study and allow greater integration between disparate types of data. Copyright © 2013 Elsevier Inc. All rights reserved.
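A minimal sketch of the optimal scaling idea mentioned above: when data are only relative (for example, normalized fluorescence), the least-squares scale factor that maps model predictions onto the data has a closed form and can be applied before computing a fitness value. The example values are illustrative.

```python
# Minimal sketch of optimal scaling: relative (non-absolute) data are compared
# to a model by first solving for the least-squares scale factor in closed form.
import numpy as np

def scaled_sse(model_pred, data):
    """Sum of squared errors after optimally scaling the model onto the data."""
    s = np.dot(model_pred, data) / np.dot(model_pred, model_pred)  # argmin_s ||s*m - d||^2
    return s, np.sum((s * model_pred - data) ** 2)

model_pred = np.array([0.2, 0.5, 1.0, 1.4])   # model output in arbitrary units
data = np.array([0.9, 2.1, 4.2, 5.8])         # measurements in different units
s, sse = scaled_sse(model_pred, data)
print(f"optimal scale {s:.3f}, scaled SSE {sse:.4f}")
```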
Nguyen, Hung X; Kirkton, Robert D; Bursac, Nenad
2018-05-01
We describe a two-stage protocol to generate electrically excitable and actively conducting cell networks with stable and customizable electrophysiological phenotypes. Using this method, we have engineered monoclonally derived excitable tissues as a robust and reproducible platform to investigate how specific ion channels and mutations affect action potential (AP) shape and conduction. In the first stage of the protocol, we combine computational modeling, site-directed mutagenesis, and electrophysiological techniques to derive optimal sets of mammalian and/or prokaryotic ion channels that produce specific AP shape and conduction characteristics. In the second stage of the protocol, selected ion channels are stably expressed in unexcitable human cells by means of viral or nonviral delivery, followed by flow cytometry or antibiotic selection to purify the desired phenotype. This protocol can be used with traditional heterologous expression systems or primary excitable cells, and application of this method to primary fibroblasts may enable an alternative approach to cardiac cell therapy. Compared with existing methods, this protocol generates a well-defined, relatively homogeneous electrophysiological phenotype of excitable cells that facilitates experimental and computational studies of AP conduction and can decrease arrhythmogenic risk upon cell transplantation. Although basic cell culture and molecular biology techniques are sufficient to generate excitable tissues using the described protocol, experience with patch-clamp techniques is required to characterize and optimize derived cell populations.
Continuous welding of unidirectional fiber reinforced thermoplastic tape material
NASA Astrophysics Data System (ADS)
Schledjewski, Ralf
2017-10-01
Continuous welding techniques like thermoplastic tape placement with in situ consolidation offer several advantages over traditional manufacturing processes like autoclave consolidation, thermoforming, etc. However, there is still a need to solve several important processing issues before it becomes an economically viable process. Intensive process analysis and optimization has been carried out in the past through experimental investigation, model definition and simulation development. Today, process simulation is capable of predicting the resulting consolidation quality, and the effects of material imperfections or process parameter variations are well known. But using this knowledge to control the process, based on online process monitoring and corresponding adaptation of the process parameters, is still challenging. Solving inverse problems and using methods for automated code generation that allow fast implementation of algorithms on target hardware are required. The paper explains the placement technique in general. Process-material-property relationships and typical material imperfections are described. Furthermore, online monitoring techniques and how to use them for a model-based process control system are presented.
High-speed transport-of-intensity phase microscopy with an electrically tunable lens.
Zuo, Chao; Chen, Qian; Qu, Weijuan; Asundi, Anand
2013-10-07
We present a high-speed transport-of-intensity equation (TIE) quantitative phase microscopy technique, named TL-TIE, created by combining an electrically tunable lens with a conventional transmission microscope. This permits the specimen to be imaged at different focus positions in rapid succession, with constant magnification and no physically moving parts. The simplified image stack collection significantly reduces the acquisition time and allows diffraction-limited through-focus intensity stacks to be collected at 15 frames per second, making dynamic TIE phase imaging possible. The technique is demonstrated by profiling a microlens array using an optimal frequency selection scheme, and by time-lapse imaging of live breast cancer cells via inversion of the defocused phase optical transfer function to correct the phase blurring in traditional TIE. Experimental results illustrate the outstanding capability of the technique for quantitative phase imaging, through a simple, non-interferometric, high-speed, high-resolution, and unwrapping-free approach with promising applications in micro-optics, life sciences and bio-photonics.
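For readers interested in how the TIE is typically inverted, the sketch below solves the equation under a uniform-intensity approximation using a regularized FFT-based Poisson solver; it omits the optimal frequency selection scheme used in the paper, and the inputs are synthetic placeholders.

```python
# Minimal sketch of FFT-based TIE phase retrieval under a uniform-intensity
# approximation: solve laplacian(phi) = -(k/I0) * dI/dz in Fourier space.
import numpy as np

def tie_phase(dI_dz, I0, wavelength, pixel_size, reg=1e-9):
    """Recover phase from the axial intensity derivative via an FFT Poisson solve."""
    k = 2 * np.pi / wavelength
    ny, nx = dI_dz.shape
    fy = np.fft.fftfreq(ny, d=pixel_size)
    fx = np.fft.fftfreq(nx, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    lap = 4 * np.pi**2 * (FX**2 + FY**2)          # Fourier symbol of -laplacian
    rhs = -(k / I0) * dI_dz                       # laplacian(phi) = rhs
    phi_hat = -np.fft.fft2(rhs) / (lap + reg)     # regularized inverse Laplacian
    phi_hat[0, 0] = 0.0                           # remove the arbitrary constant
    return np.real(np.fft.ifft2(phi_hat))

# synthetic axial derivative of intensity (placeholder, not measured data)
yy, xx = np.mgrid[0:256, 0:256]
dI_dz = 1e3 * np.exp(-((xx - 128)**2 + (yy - 128)**2) / (2 * 30**2))
phi = tie_phase(dI_dz, I0=1.0, wavelength=633e-9, pixel_size=0.2e-6)
print(phi.shape, float(phi.max()))
```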
NASA Astrophysics Data System (ADS)
Ciaramello, Francis M.; Hemami, Sheila S.
2007-02-01
For members of the Deaf Community in the United States, current communication tools include TTY/TTD services, video relay services, and text-based communication. With the growth of cellular technology, mobile sign language conversations are becoming a possibility. Proper coding techniques must be employed to compress American Sign Language (ASL) video for low-rate transmission while maintaining the quality of the conversation. In order to evaluate these techniques, an appropriate quality metric is needed. This paper demonstrates that traditional video quality metrics, such as PSNR, fail to predict subjective intelligibility scores. By considering the unique structure of ASL video, an appropriate objective metric is developed. Face and hand segmentation is performed using skin-color detection techniques. The distortions in the face and hand regions are optimally weighted and pooled across all frames to create an objective intelligibility score for a distorted sequence. The objective intelligibility metric performs significantly better than PSNR in terms of correlation with subjective responses.
A Taguchi approach on optimal process control parameters for HDPE pipe extrusion process
NASA Astrophysics Data System (ADS)
Sharma, G. V. S. S.; Rao, R. Umamaheswara; Rao, P. Srinivasa
2017-06-01
High-density polyethylene (HDPE) pipes find versatile applicability for transportation of water, sewage and slurry from one place to another. Hence, these pipes must withstand considerable pressure from the fluid they carry. The present work entails the optimization of the withstanding pressure of HDPE pipes using the Taguchi technique. The traditional heuristic methodology stresses a trial-and-error approach and relies heavily upon the accumulated experience of the process engineers for determining the optimal process control parameters. This results in the setting of less-than-optimal values. Hence, there arises a need to determine optimal process control parameters for the pipe extrusion process that can ensure robust pipe quality and process reliability. In the proposed optimization strategy, design of experiments (DoE) is conducted wherein different control parameter combinations are analyzed by considering multiple setting levels of each control parameter. The concept of the signal-to-noise (S/N) ratio is applied, and the optimum values of the process control parameters are ultimately obtained as: a pushing zone temperature of 166 °C, a dimmer speed of 8 rpm, and a die head temperature of 192 °C. A confirmation experimental run is also conducted to verify the analysis; the results proved to be consistent with the main experimental findings, and the withstanding pressure showed a significant improvement from 0.60 to 1.004 MPa.
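As an illustration of the Taguchi analysis step, the sketch below computes larger-the-better S/N ratios and ranks factor levels by their mean S/N; the orthogonal-array runs and pressure values are hypothetical, not the HDPE experiments reported above.

```python
# Minimal Taguchi sketch: larger-the-better signal-to-noise ratios and the mean
# S/N per factor level, using a hypothetical L9-style array with two replicates.
import numpy as np

levels = np.array([[1,1,1],[1,2,2],[1,3,3],
                   [2,1,2],[2,2,3],[2,3,1],
                   [3,1,3],[3,2,1],[3,3,2]])          # 3 factors, 3 levels
y = np.array([[0.60,0.62],[0.71,0.69],[0.75,0.78],
              [0.80,0.82],[0.90,0.88],[0.70,0.72],
              [0.95,0.97],[0.78,0.80],[0.85,0.86]])   # response (e.g. pressure, MPa)

sn = -10 * np.log10(np.mean(1.0 / y**2, axis=1))      # larger-the-better S/N

for factor in range(levels.shape[1]):
    means = [sn[levels[:, factor] == lv].mean() for lv in (1, 2, 3)]
    best = int(np.argmax(means)) + 1
    print(f"factor {factor+1}: mean S/N per level {np.round(means, 2)}, best level {best}")
```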
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik
1996-01-01
For a space mission to be successful it is vitally important to have a good control strategy. For example, with the Space Shuttle it is necessary to guarantee the success and smoothness of docking, the smoothness and fuel efficiency of trajectory control, etc. For an automated planetary mission it is important to control the spacecraft's trajectory, and after that, to control the planetary rover so that it would be operable for the longest possible period of time. In many complicated control situations, traditional methods of control theory are difficult or even impossible to apply. In general, in uncertain situations, where no routine methods are directly applicable, we must rely on the creativity and skill of the human operators. In order to simulate these experts, an intelligent control methodology must be developed. The research objectives of this project were: to analyze existing control techniques; to find out which of these techniques is the best with respect to the basic optimality criteria (stability, smoothness, robustness); and, if for some problems, none of the existing techniques is satisfactory, to design new, better intelligent control techniques.
On the Optimization of Aerospace Plane Ascent Trajectory
NASA Astrophysics Data System (ADS)
Al-Garni, Ahmed; Kassem, Ayman Hamdy
A hybrid heuristic optimization technique based on genetic algorithms and particle swarm optimization has been developed and tested for trajectory optimization problems with multiple constraints and a multi-objective cost function. The technique is used to calculate control settings for two types of ascent trajectories (constant dynamic pressure and minimum-fuel-minimum-heat) for a two-dimensional model of an aerospace plane. A thorough statistical analysis is done on the hybrid technique to make comparisons with both basic genetic algorithms and particle swarm optimization techniques with respect to convergence and execution time. Genetic algorithm optimization showed better execution time performance while particle swarm optimization showed better convergence performance. The hybrid optimization technique, benefiting from both techniques, showed superior, robust performance, balancing convergence behavior and execution time.
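The following is a minimal sketch of one plausible GA/PSO hybridization on a toy objective, not the authors' exact scheme: a standard PSO velocity update combined with GA-style crossover and mutation applied to the worst particles each generation.

```python
# Minimal hybrid PSO/GA sketch on a toy multimodal cost function (Rastrigin).
import numpy as np

rng = np.random.default_rng(2)
def cost(x):
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

dim, n, iters = 5, 30, 200
lo, hi = -5.12, 5.12
x = rng.uniform(lo, hi, (n, dim)); v = np.zeros((n, dim))
pbest = x.copy(); pbest_f = np.array([cost(xi) for xi in x])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = 0.7*v + 1.5*r1*(pbest - x) + 1.5*r2*(gbest - x)        # PSO velocity update
    x = np.clip(x + v, lo, hi)
    f = np.array([cost(xi) for xi in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()
    worst = np.argsort(f)[-5:]                                  # GA step on the worst 5
    for i in worst:
        a, b = pbest[rng.integers(n)], pbest[rng.integers(n)]
        child = np.where(rng.random(dim) < 0.5, a, b)           # uniform crossover
        child = child + rng.normal(0, 0.1, dim) * (rng.random(dim) < 0.2)  # mutation
        x[i] = np.clip(child, lo, hi)

print("best cost:", cost(gbest))
```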
Concentrating phenolic acids from Lonicera japonica by nanofiltration technology
NASA Astrophysics Data System (ADS)
Li, Cunyu; Ma, Yun; Li, Hongyang; Peng, Guoping
2017-03-01
Response surface methodology was used to optimize the concentration process of phenolic acids from Lonicera japonica by nanofiltration. On the basis of the influences of pressure, temperature and circulating volume, the retention rates of neochlorogenic acid, chlorogenic acid and 4-dicaffeoylquinic acid were selected as indices, while the molecular weight cut-off of the nanofiltration membrane, solute concentration and pH were selected as influencing factors in the concentration process. The experimental mathematical model was arranged according to a Box-Behnken central composite experimental design. The optimal concentration conditions were as follows: nanofiltration molecular weight cut-off, 150 Da; solute concentration, 18.34 µg/mL; pH, 4.26. The predicted value of the retention rate was 97.99% under the optimum conditions, and the experimental value was 98.03±0.24%, in accordance with the predicted value. These results demonstrate that the combination of Box-Behnken design and response surface analysis can effectively optimize the nanofiltration concentration of the Lonicera japonica water extract, and they provide a basis for nanofiltration concentration of heat-sensitive traditional Chinese medicines.
Van Audenhaege, Karen; Van Holen, Roel; Vandenberghe, Stefaan; Vanhove, Christian; Metzler, Scott D.; Moore, Stephen C.
2015-01-01
In single photon emission computed tomography, the choice of the collimator has a major impact on the sensitivity and resolution of the system. Traditional parallel-hole and fan-beam collimators used in clinical practice, for example, have a relatively poor sensitivity and subcentimeter spatial resolution, while in small-animal imaging, pinhole collimators are used to obtain submillimeter resolution and multiple pinholes are often combined to increase sensitivity. This paper reviews methods for production, sensitivity maximization, and task-based optimization of collimation for both clinical and preclinical imaging applications. New opportunities for improved collimation are now arising primarily because of (i) new collimator-production techniques and (ii) detectors with improved intrinsic spatial resolution that have recently become available. These new technologies are expected to impact the design of collimators in the future. The authors also discuss concepts like septal penetration, high-resolution applications, multiplexing, sampling completeness, and adaptive systems, and the authors conclude with an example of an optimization study for a parallel-hole, fan-beam, cone-beam, and multiple-pinhole collimator for different applications. PMID:26233207
Optimization-Based Sensor Fusion of GNSS and IMU Using a Moving Horizon Approach
Girrbach, Fabian; Hol, Jeroen D.; Bellusci, Giovanni; Diehl, Moritz
2017-01-01
The rise of autonomous systems operating close to humans imposes new challenges in terms of robustness and precision on the estimation and control algorithms. Approaches based on nonlinear optimization, such as moving horizon estimation, have been shown to improve the accuracy of the estimated solution compared to traditional filter techniques. This paper introduces an optimization-based framework for multi-sensor fusion following a moving horizon scheme. The framework is applied to the often occurring estimation problem of motion tracking by fusing measurements of a global navigation satellite system receiver and an inertial measurement unit. The resulting algorithm is used to estimate position, velocity, and orientation of a maneuvering airplane and is evaluated against an accurate reference trajectory. A detailed study of the influence of the horizon length on the quality of the solution is presented and evaluated against filter-like and batch solutions of the problem. The versatile configuration possibilities of the framework are finally used to analyze the estimated solutions at different evaluation times exposing a nearly linear behavior of the sensor fusion problem. PMID:28534857
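A minimal moving-horizon sketch on a 1-D toy analogue of the GNSS/IMU problem: accelerometer samples drive the motion model, position fixes correct it, and the state trajectory over a sliding window is estimated by nonlinear least squares. The noise levels and residual weights are illustrative assumptions, not the paper's configuration.

```python
# Minimal moving-horizon estimation sketch for 1-D position/velocity, fusing a
# noisy accelerometer (motion model) with noisy position fixes over a sliding window.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
dt, T, H = 0.1, 100, 20                               # step (s), samples, horizon length
t = np.arange(T) * dt
true_acc = 0.5 * np.sin(0.5 * t)
true_vel = np.cumsum(true_acc) * dt
true_pos = np.cumsum(true_vel) * dt
acc_meas = true_acc + rng.normal(0, 0.05, T)          # "IMU"
gnss_pos = true_pos + rng.normal(0, 0.5, T)           # "GNSS"

def residuals(z, acc, gnss):
    pos, vel = z[:H], z[H:]
    r = []
    for k in range(H - 1):                            # dynamics residuals
        r.append((pos[k+1] - (pos[k] + vel[k] * dt)) / 0.01)
        r.append((vel[k+1] - (vel[k] + acc[k] * dt)) / 0.05)
    r.extend((pos - gnss) / 0.5)                      # measurement residuals
    return np.array(r)

estimates = []
for k in range(H, T):                                 # slide the horizon
    z0 = np.concatenate([gnss_pos[k-H:k], np.zeros(H)])
    sol = least_squares(residuals, z0, args=(acc_meas[k-H:k], gnss_pos[k-H:k]))
    estimates.append(sol.x[H - 1])                    # newest position estimate

rmse = np.sqrt(np.mean((np.array(estimates) - true_pos[H-1:T-1])**2))
print(f"horizon RMSE: {rmse:.3f} m")
```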
2014-01-01
Background Heterologous gene expression is an important tool for synthetic biology that enables metabolic engineering and the production of non-natural biologics in a variety of host organisms. The translational efficiency of heterologous genes can often be improved by optimizing synonymous codon usage to better match the host organism. However, traditional approaches for optimization neglect to take into account many factors known to influence synonymous codon distributions. Results Here we define an alternative approach for codon optimization that utilizes systems level information and codon context for the condition under which heterologous genes are being expressed. Furthermore, we utilize a probabilistic algorithm to generate multiple variants of a given gene. We demonstrate improved translational efficiency using this condition-specific codon optimization approach with two heterologous genes, the fluorescent protein-encoding eGFP and the catechol 1,2-dioxygenase gene CatA, expressed in S. cerevisiae. For the latter case, optimization for stationary phase production resulted in nearly 2.9-fold improvements over commercial gene optimization algorithms. Conclusions Codon optimization is now often a standard tool for protein expression, and while a variety of tools and approaches have been developed, they do not guarantee improved performance for all hosts or applications. Here, we suggest an alternative method for condition-specific codon optimization and demonstrate its utility in Saccharomyces cerevisiae as a proof of concept. However, this technique should be applicable to any organism for which gene expression data can be generated and is thus of potential interest for a variety of applications in metabolic and cellular engineering. PMID:24636000
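A minimal sketch of the probabilistic variant-generation idea: each amino acid is encoded by sampling a synonymous codon in proportion to a host usage table, so repeated runs yield different sequence variants. The usage values below are placeholders, not a measured condition-specific S. cerevisiae table.

```python
# Minimal sketch of probabilistic codon optimization: sample synonymous codons
# in proportion to a (hypothetical) host usage table to generate gene variants.
import numpy as np

rng = np.random.default_rng(4)
codon_usage = {                     # hypothetical relative usage per amino acid
    "M": {"ATG": 1.0},
    "K": {"AAA": 0.58, "AAG": 0.42},
    "L": {"TTG": 0.29, "TTA": 0.28, "CTA": 0.14, "CTT": 0.13, "CTG": 0.10, "CTC": 0.06},
    "*": {"TAA": 0.47, "TGA": 0.30, "TAG": 0.23},
}

def optimize(protein):
    seq = []
    for aa in protein:
        codons = list(codon_usage[aa])
        p = np.array([codon_usage[aa][c] for c in codons])
        seq.append(rng.choice(codons, p=p / p.sum()))
    return "".join(seq)

print(optimize("MKLL*"))   # repeated calls generally give different variants
print(optimize("MKLL*"))
```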
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Ping; Wang, Chenyu; Li, Mingjie
In general, the modeling errors of a dynamic system model are a set of random variables. Traditional modeling performance indices such as the mean square error (MSE) and root mean square error (RMSE) cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. Based on this, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using a data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by a gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can eventually make the modeling error PDF track the target PDF. A simulation example and an application to a blast furnace ironmaking process show that the proposed method has higher modeling precision and better generalization ability compared with conventional WNN modeling based on the MSE criterion. Moreover, the proposed method yields a more desirable estimate of the modeling error PDF, which approximates a high and narrow Gaussian distribution.
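A simplified 1-D stand-in for the PDF-shaping criterion described above: the modeling-error PDF is estimated by kernel density estimation and scored by its integrated squared deviation from a narrow Gaussian target, alongside the traditional MSE; the residuals here are synthetic.

```python
# Minimal 1-D sketch of PDF shaping as a performance index: KDE of the modeling
# errors compared against a "high and narrow" Gaussian target.
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(5)
errors = rng.standard_t(df=3, size=500) * 0.3           # synthetic modeling errors

grid = np.linspace(-3, 3, 400)
pdf_est = gaussian_kde(errors)(grid)                     # data-driven KDE of the errors
pdf_target = norm.pdf(grid, loc=0.0, scale=0.1)          # narrow Gaussian target

dx = grid[1] - grid[0]
index = np.sum((pdf_est - pdf_target) ** 2) * dx         # PDF-shaping criterion
mse = np.mean(errors ** 2)                               # traditional criterion
print(f"PDF-shaping index: {index:.3f}, MSE: {mse:.3f}")
```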
Alarcón, J A; Immink, M D; Méndez, L F
1989-12-01
The present study was conducted as part of an evaluation of the economic and nutritional effects of a crop diversification program for small-scale farmers in the Western highlands of Guatemala. Linear programming models are employed in order to obtain optimal combinations of traditional and non-traditional food crops under different ecological conditions that: a) provide minimum-cost diets for auto-consumption, and b) maximize net income and market availability of dietary energy. The data used were generated by means of an agroeconomic survey conducted in 1983 among 726 farming households. Food prices were obtained from the Institute of Agrarian Marketing; data on production costs, from the National Bank of Agricultural Development in Guatemala. The gestation periods for each crop were obtained from three different sources, and then averaged. The results indicated that the optimal cropping pattern for the minimum-cost diets for auto-consumption includes traditional foods (corn, beans, broad bean, wheat, potato), non-traditional foods (carrots, broccoli, beets) and foods of animal origin (milk, eggs). A significant number of farmers included in the sample did not have sufficient land availability to produce all foods included in the minimum-cost diet. Cropping patterns which maximize net income include only non-traditional foods: onions, carrots, broccoli and beets for farmers in the low highland areas, and radish, broccoli, cauliflower and carrots for farmers in the higher parts. Optimal cropping patterns which maximize market availability of dietary energy include traditional and non-traditional foods; for farmers in the lower areas: wheat, corn, beets, carrots and onions; for farmers in the higher areas: potato, wheat, radish, carrots and cabbage.
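A minimal least-cost diet linear program in the spirit of the study, using scipy; the crops, prices, and nutrient contents are hypothetical placeholders rather than the Guatemalan survey data.

```python
# Minimal least-cost diet LP: choose crop quantities that meet nutrient floors
# at minimum cost. All numbers are hypothetical illustration values.
import numpy as np
from scipy.optimize import linprog

crops = ["corn", "beans", "potato", "carrot", "broccoli"]
cost = np.array([0.30, 0.80, 0.40, 0.50, 0.90])           # cost per kg
# rows: energy (kcal/kg), protein (g/kg); columns follow `crops`
nutrients = np.array([[3650, 3470, 770, 410, 340],
                      [  94,  214,  20,   9,  28]])
requirements = np.array([2200 * 7, 50 * 7])               # weekly per-person floors

res = linprog(c=cost,
              A_ub=-nutrients, b_ub=-requirements,         # nutrients >= requirements
              bounds=[(0, None)] * len(crops), method="highs")
for name, qty in zip(crops, res.x):
    print(f"{name:9s} {qty:6.2f} kg/week")
print(f"total cost: {res.fun:.2f}")
```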
Mai, Lan-Yin; Li, Yi-Xuan; Chen, Yong; Xie, Zhen; Li, Jie; Zhong, Ming-Yu
2014-05-01
The compatibility of traditional Chinese medicine (TCM) formulae, which contain enormous amounts of information, constitutes a complex component system. Applying mathematical statistics methods to research on the compatibility of TCM formulae has great significance for promoting the modernization of traditional Chinese medicines and for improving the clinical efficacy and optimization of formulae. As a tool for quantitative analysis, data inference and exploring the inherent rules of substances, mathematical statistics methods can be used to reveal the working mechanisms of the compatibility of TCM formulae both qualitatively and quantitatively. By reviewing studies on the application of mathematical statistics methods from the perspectives of dosage optimization, efficacy, changes of chemical components, and the rules of incompatibility and contraindication of formulae, this paper provides references for further studying and revealing the working mechanisms and connotations of traditional Chinese medicines.
Mudge, Joseph F; Penny, Faith M; Houlahan, Jeff E
2012-12-01
Setting optimal significance levels that minimize Type I and Type II errors allows for more transparent and well-considered statistical decision making compared to the traditional α = 0.05 significance level. We use the optimal α approach to re-assess conclusions reached by three recently published tests of the pace-of-life syndrome hypothesis, which attempts to unify occurrences of different physiological, behavioral, and life history characteristics under one theory, over different scales of biological organization. While some of the conclusions reached using optimal α were consistent with those previously reported using the traditional α = 0.05 threshold, opposing conclusions were also frequently reached. The optimal α approach reduced probabilities of Type I and Type II errors, and ensured statistical significance was associated with biological relevance. Biologists should seriously consider their choice of α when conducting null hypothesis significance tests, as there are serious disadvantages with consistent reliance on the traditional but arbitrary α = 0.05 significance level. Copyright © 2012 WILEY Periodicals, Inc.
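A minimal sketch of the optimal α idea for a one-sided two-sample t-test: given an assumed effect size and per-group sample size, the significance level is chosen to minimize the average of the Type I and Type II error probabilities. The effect size and n below are illustrative assumptions.

```python
# Minimal "optimal alpha" sketch: pick the alpha that minimizes the average of
# the Type I and Type II error rates for a one-sided two-sample t-test.
import numpy as np
from scipy import stats

n, d = 20, 0.6                                   # per-group n, Cohen's d (assumed)
df = 2 * n - 2
ncp = d * np.sqrt(n / 2)                         # noncentrality parameter

alphas = np.linspace(1e-4, 0.25, 500)
t_crit = stats.t.ppf(1 - alphas, df)             # one-sided critical values
beta = stats.nct.cdf(t_crit, df, ncp)            # Type II error at each alpha
avg_err = (alphas + beta) / 2

best = np.argmin(avg_err)
print(f"optimal alpha = {alphas[best]:.3f}  (beta = {beta[best]:.3f}, "
      f"average error = {avg_err[best]:.3f})")
```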
NASA Astrophysics Data System (ADS)
Khusainov, T. A.; Shalashov, A. G.; Gospodchikov, E. D.
2018-05-01
The field structure of quasi-optical wave beams tunneled through the evanescence region in the vicinity of the plasma cutoff in a nonuniform magnetoactive plasma is analyzed. This problem is traditionally associated with the process of linear transformation of ordinary and extraordinary waves. An approximate analytical solution is constructed for a rather general magnetic configuration applicable to spherical tokamaks, optimized stellarators, and other magnetic confinement systems with a constant plasma density on magnetic surfaces. A general technique for calculating the transformation coefficient of a finite-aperture wave beam is proposed, and the physical conditions required for the most efficient transformation are analyzed.
Block clustering based on difference of convex functions (DC) programming and DC algorithms.
Le, Hoai Minh; Le Thi, Hoai An; Dinh, Tao Pham; Huynh, Van Ngai
2013-10-01
We investigate difference of convex functions (DC) programming and the DC algorithm (DCA) to solve the block clustering problem in the continuous framework, which traditionally requires solving a hard combinatorial optimization problem. DC reformulation techniques and exact penalty in DC programming are developed to build an appropriate equivalent DC program of the block clustering problem. They lead to an elegant and explicit DCA scheme for the resulting DC program. Computational experiments show the robustness and efficiency of the proposed algorithm and its superiority over standard algorithms such as two-mode K-means, two-mode fuzzy clustering, and block classification EM.
Surface modification of cellulose using silane coupling agent.
Thakur, Manju Kumari; Gupta, Raju Kumar; Thakur, Vijay Kumar
2014-10-13
Recently there has been a growing interest in substituting traditional synthetic polymers with natural polymers for different applications. However, natural polymers such as cellulose suffer from a few drawbacks. To become viable potential alternatives to synthetic polymers, cellulosic polymers must have physico-chemical properties comparable to those of synthetic polymers. So, in the present work, the cellulose polymer has been modified by a series of mercerization and silane functionalization steps to optimize the reaction conditions. Structural, thermal and morphological characterization of the cellulose has been done using FTIR, TGA and SEM techniques. The surface-modified cellulose polymers were further subjected to evaluation of their properties such as swelling and chemical resistance behavior. Published by Elsevier Ltd.
Optimization under uncertainty of parallel nonlinear energy sinks
NASA Astrophysics Data System (ADS)
Boroson, Ethan; Missoum, Samy; Mattei, Pierre-Olivier; Vergez, Christophe
2017-04-01
Nonlinear Energy Sinks (NESs) are a promising technique for passively reducing the amplitude of vibrations. Through nonlinear stiffness properties, a NES is able to passively and irreversibly absorb energy. Unlike the traditional Tuned Mass Damper (TMD), NESs do not require a specific tuning and absorb energy over a wider range of frequencies. Nevertheless, they are still only efficient over a limited range of excitations. In order to mitigate this limitation and maximize the efficiency range, this work investigates the optimization of multiple NESs configured in parallel. It is well known that the efficiency of a NES is extremely sensitive to small perturbations in loading conditions or design parameters. In fact, the efficiency of a NES has been shown to be nearly discontinuous in the neighborhood of its activation threshold. For this reason, uncertainties must be taken into account in the design optimization of NESs. In addition, the discontinuities require a specific treatment during the optimization process. In this work, the objective of the optimization is to maximize the expected value of the efficiency of NESs in parallel. The optimization algorithm is able to tackle design variables with uncertainty (e.g., nonlinear stiffness coefficients) as well as aleatory variables such as the initial velocity of the main system. The optimal design of several parallel NES configurations for maximum mean efficiency is investigated. Specifically, NES nonlinear stiffness properties, considered random design variables, are optimized for cases with 1, 2, 3, 4, 5, and 10 NESs in parallel. The distributions of efficiency for the optimal parallel configurations are compared to distributions of efficiencies of non-optimized NESs. It is observed that the optimization enables a sharp increase in the mean value of efficiency while reducing the corresponding variance, thus leading to more robust NES designs.
Ritchie, Marylyn D; White, Bill C; Parker, Joel S; Hahn, Lance W; Moore, Jason H
2003-01-01
Background Appropriate definition of neural network architecture prior to data analysis is crucial for successful data mining. This can be challenging when the underlying model of the data is unknown. The goal of this study was to determine whether optimizing neural network architecture using genetic programming as a machine learning strategy would improve the ability of neural networks to model and detect nonlinear interactions among genes in studies of common human diseases. Results Using simulated data, we show that a genetic programming optimized neural network approach is able to model gene-gene interactions as well as a traditional back propagation neural network. Furthermore, the genetic programming optimized neural network is better than the traditional back propagation neural network approach in terms of predictive ability and power to detect gene-gene interactions when non-functional polymorphisms are present. Conclusion This study suggests that a machine learning strategy for optimizing neural network architecture may be preferable to traditional trial-and-error approaches for the identification and characterization of gene-gene interactions in common, complex human diseases. PMID:12846935
Using Mouse Mammary Tumor Cells to Teach Core Biology Concepts: A Simple Lab Module.
McIlrath, Victoria; Trye, Alice; Aguanno, Ann
2015-06-18
Undergraduate biology students are required to learn, understand and apply a variety of cellular and molecular biology concepts and techniques in preparation for biomedical, graduate and professional programs or careers in science. To address this, a simple laboratory module was devised to teach the concepts of cell division, cellular communication and cancer through the application of animal cell culture techniques. Here the mouse mammary tumor (MMT) cell line is used as a model for breast cancer. Students learn to grow and characterize these animal cells in culture and test the effects of traditional and non-traditional chemotherapy agents on cell proliferation. Specifically, students determine the optimal cell concentration for plating and growing cells, learn how to prepare and dilute drug solutions, identify the best dosage and treatment time course of the antiproliferative agents, and ascertain the rate of cell death in response to various treatments. The module employs both a standard cell counting technique using a hemocytometer and a novel cell counting method using microscopy software. The experimental procedure lends itself to open-ended inquiry as students can modify critical steps of the protocol, including testing homeopathic agents and over-the-counter drugs. In short, this lab module requires students to use the scientific process to apply their knowledge of the cell cycle, cellular signaling pathways, cancer and modes of treatment, all while developing an array of laboratory skills including cell culture and analysis of experimental data not routinely taught in the undergraduate classroom.
El Kadi, Youssef Ait; Moudden, Ali; Faiz, Bouazza; Maze, Gerard; Decultot, Dominique
2013-01-01
Fish quality is traditionally controlled by chemical and microbiological analysis. Non-destructive control is of great professional interest thanks to the technical contribution and precision of the analysis to which it leads. This paper presents results from a characterisation of the fish thawing process by an ultrasonic technique, with monitoring of the thermal processing from the frozen to the defrosted state. The study was carried out on red drum and salmon cut into fillets of 15 mm thickness. After being frozen at -20°C, the sample is enclosed in a plexiglas vessel with parallel walls at an ambient temperature of 30°C and excited at perpendicular incidence at 0.5 MHz by a Sofranel 5052PR ultrasonic pulser-receiver. The measurement technique consists of studying the signals reflected by the fish during its thawing; specific signal processing techniques are implemented to deduce information characterizing the state of the fish and its thawing process by examining the evolution of the positions of the echoes reflected by the sample and the viscoelastic parameters of the fish during thawing. The results show a relationship between the thermal state of the fish and its acoustic properties, which allowed deduction of the optimal time of the first thawing in order to restrict the growth of microbial flora. For salmon, the results show a 36% decrease in the time of the second thawing and a 10.88% increase in the phase velocity, with a 65.5% decrease in the peak-to-peak voltage of the reflected signal, and thus a decrease in the acoustic impedance. This study shows an optimal time and an evolution rate of thawing specific to each type of fish, and a correlation between the acoustic behavior of fish and its thermal state, which confirms that this ultrasonic monitoring technique can substitute for destructive chemical analysis in order to monitor the thawing process and to determine whether a fish has undergone accidental thawing.
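A minimal sketch of the echo-tracking idea: the back-wall echo is located by cross-correlation with the front-wall echo, and the time of flight is converted into an average sound speed through the fillet. The waveform is synthetic; only the 15 mm thickness is taken from the abstract, and the other values are illustrative.

```python
# Minimal ultrasonic echo-tracking sketch: find the front/back-wall echo delay by
# cross-correlation and convert it to an average sound speed in the sample.
import numpy as np

fs = 50e6                                        # sampling rate (Hz)
t = np.arange(0, 40e-6, 1 / fs)
def burst(t0):                                   # 0.5 MHz Gaussian tone burst
    return np.exp(-((t - t0) / 1.5e-6) ** 2) * np.sin(2 * np.pi * 0.5e6 * (t - t0))

front, back_delay = 5e-6, 19.0e-6                # synthetic echo timing
signal = burst(front) + 0.4 * burst(front + back_delay)

ref = burst(front)                               # front-wall echo as reference
xcorr = np.correlate(signal, ref, mode="full")
lags = (np.arange(len(xcorr)) - (len(ref) - 1)) / fs
mask = lags > 5e-6                               # skip the zero-lag (front-wall) peak
tof = lags[mask][np.argmax(xcorr[mask])]         # front-to-back echo delay

thickness = 15e-3                                # m, from the abstract
velocity = 2 * thickness / tof                   # round trip through the fillet
print(f"time of flight {tof*1e6:.2f} us, estimated velocity {velocity:.0f} m/s")
```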
On the Impact of Execution Models: A Case Study in Computational Chemistry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavarría-Miranda, Daniel; Halappanavar, Mahantesh; Krishnamoorthy, Sriram
2015-05-25
Efficient utilization of high-performance computing (HPC) platforms is an important and complex problem. Execution models, abstract descriptions of the dynamic runtime behavior of the execution stack, have significant impact on the utilization of HPC systems. Using a computational chemistry kernel as a case study and a wide variety of execution models combined with load balancing techniques, we explore the impact of execution models on the utilization of an HPC system. We demonstrate a 50 percent improvement in performance by using work stealing relative to a more traditional static scheduling approach. We also use a novel semi-matching technique for load balancing that has comparable performance to a traditional hypergraph-based partitioning implementation, which is computationally expensive. Using this study, we found that execution model design choices and assumptions can limit critical optimizations such as global, dynamic load balancing and finding the correct balance between available work units and different system and runtime overheads. With the emergence of multi- and many-core architectures and the consequent growth in the complexity of HPC platforms, we believe that these lessons will be beneficial to researchers tuning diverse applications on modern HPC platforms, especially on emerging dynamic platforms with energy-induced performance variability.
Study of supersonic plasma technology jets
NASA Astrophysics Data System (ADS)
Selezneva, Svetlana; Gravelle, Denis; Boulos, Maher; van de Sanden, Richard; Schram, D. C.
2001-10-01
Recently, some new techniques using remote thermal plasma for thin-film deposition and plasma chemistry processes have been developed. These techniques include PECVD of diamond, diamond-like and polymer films, as well as a-C:H and a-Si:H films. The latter are of special interest because of their applications in the solar cell production industry. In remote plasma deposition, a thermal plasma is formed by means of one of the traditional plasma sources. The chamber pressure is reduced with the help of continuous pumping, and in that way the flow is accelerated to supersonic speed. The plasma expansion is controlled using a specific torch nozzle design. To optimize the deposition process, detailed knowledge of the gas-dynamic structure of the jet and the chemical kinetics mechanisms is required. In this paper, we show how the flow pattern and the character of the deviations from local thermodynamic equilibrium differ in plasmas generated by different plasma sources, such as the induction plasma torch, the traditional direct-current arc and the cascaded arc. We study the effects of the chamber pressure, nozzle design and carrier gas on the resulting plasma properties. The analysis is performed by means of numerical modeling using the commercially available FLUENT program with user-defined subroutines incorporated for a two-temperature model. The results of the continuum mechanics approach are compared with those of the kinetic Monte Carlo method and with the experimental data.
Explosive component acceptance tester using laser interferometer technology
NASA Technical Reports Server (NTRS)
Wickstrom, Richard D.; Tarbell, William W.
1993-01-01
Acceptance testing of explosive components requires a reliable and simple-to-use testing method that can discern less-than-optimal performance. For hot-wire detonators, traditional techniques use dent blocks or photographic diagnostic methods. More complicated approaches are avoided because of their inherent problems with setup and maintenance. A recently developed tester is based on using a laser interferometer to measure the velocity of flying plates accelerated by explosively actuated detonators. Unlike ordinary interferometers that monitor displacement of the test article, this device measures velocity directly and is commonly used with non-specular surfaces. Most often referred to as the VISAR technique (Velocity Interferometer System for Any Reflecting Surface), it has become the most widely accepted choice for accurate measurement of velocity in the range greater than 1 mm/µs. Traditional VISAR devices require extensive setup and adjustment and therefore are unacceptable in a production-testing environment. This paper describes a new VISAR approach which requires virtually no adjustments, yet provides data with accuracy comparable to the more complicated systems. The device, termed the Fixed-Cavity VISAR, is currently being developed to serve as a product verification tool for hot-wire detonators and slappers. An extensive data acquisition and analysis computer code was also created to automate the manipulation of raw data into final results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kisner, R.; Melin, A.; Burress, T.
The objective of this project is to demonstrate improved reliability and increased performance made possible by deeply embedding instrumentation and controls (I&C) in nuclear power plant (NPP) components and systems. The project is employing a highly instrumented canned rotor, magnetic bearing, fluoride salt pump as its I&C technology demonstration platform. I&C is intimately part of the basic millisecond-by-millisecond functioning of the system; treating I&C as an integral part of the system design is innovative and will allow significant improvement in capabilities and performance. As systems become more complex and greater performance is required, traditional I&C design techniques become inadequate and more advanced I&C needs to be applied. New I&C techniques enable optimal and reliable performance and tolerance of noise and uncertainties in the system rather than merely monitoring quasistable performance. Traditionally, I&C has been incorporated in NPP components after the design is nearly complete; adequate performance was obtained through over-design. By incorporating I&C at the beginning of the design phase, the control system can provide superior performance and reliability and enable designs that are otherwise impossible. This report describes the progress and status of the project and provides a conceptual design overview for the platform to demonstrate the performance and reliability improvements enabled by advanced embedded I&C.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mossine, Andrew V.; Brooks, Allen F.; Ichiishi, Naoko
In a relatively short period of time, transition metal-mediated radiofluorination reactions have changed the PET radiochemistry landscape. These reactions have enabled the radiofluorination of a wide range of substrates, facilitating access to radiopharmaceuticals that were challenging to synthesize using traditional fluorine-18 radiochemistry. However, the process of adapting these new reactions for automated radiopharmaceutical production has revealed limitations in fitting them into the confines of traditional radiochemistry systems. In particular, the presence of bases (e.g. K2CO3) and/or phase transfer catalysts (PTC) (e.g. kryptofix 2.2.2) associated with fluorine-18 preparation has been found to be detrimental to reaction yields. We hypothesized that these limitations could be addressed through the development of alternate techniques for preparing [18F]fluoride. This approach also opens the possibility that an eluent can be individually tailored to meet the specific needs of a metal-catalyzed reaction of interest. In this communication, we demonstrate that various solutions of copper salts, bases, and ancillary ligands can be utilized to elute [18F]fluoride from ion exchange cartridges. The new procedures we present here are effective for fluorine-18 radiochemistry and, as proof of concept, have been used to optimize an otherwise base-sensitive copper-mediated radiofluorination reaction.
Ambient versus traditional environment in pediatric emergency department.
Robinson, Patricia S; Green, Jeanette
2015-01-01
We sought to examine the effect of exposure to an ambient environment in a pediatric emergency department. We hypothesized that passive distraction from ambient lighting in an emergency department would lead to reduction in patient pain and anxiety and increased caregiver satisfaction with services. Passive distraction has been associated with lower anxiety and pain in patients and affects perception of wait time. A pediatric ED was designed that optimized passive distraction techniques using colorful ambient lighting. Participants were nonrandomly assigned to either an ambient ED environment or a traditional ED environment. Entry and exit questionnaires assessed caregiver expectations and experiences. Pain ratings were obtained with age-appropriate scales, and wait times were recorded. A total of 70 participants were assessed across conditions, that is, 40 in the ambient ED group and 30 in the traditional ED group. Caregivers in the traditional ED group expected a longer wait, had higher anxiety pretreatment, and felt more scared than those in the ambient ED group. Caregivers in the ambient ED group felt more included in the care of their child and rated quality of care higher than caregivers in the traditional ED group. Pain ratings and administrations of pain medication were lower in the ambient ED group. Mean scores for the ambient ED group were in the expected direction on several items measuring satisfaction with ED experiences. Results were suggestive of less stress in caregivers, less pain in patients, and higher satisfaction levels in the ambient ED group. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Li, Xiang
2016-10-01
Blood glucose monitoring is of great importance for controlling the course of diabetes and preventing its complications. At present, clinical blood glucose concentration measurement is invasive and could be replaced by noninvasive spectroscopic analytical techniques. Among the various parameters of the optical fiber probe used in spectral measurement, the measurement distance is the key one. The Monte Carlo technique is a flexible method for simulating light propagation in tissue. The simulation is based on the random walks that photons make as they travel through tissue, which are chosen by statistically sampling the probability distributions for step size and angular deflection per scattering event. The traditional method for determining the optimal distance between the transmitting fiber and the detector is to use Monte Carlo simulation to find the point where most photons exit. But there is a problem: in the epidermal layer there are no arteries, veins or capillary vessels. Thus, when photons propagate and interact with tissue in the epidermal layer, no information is imparted to the photons. A new criterion is proposed to determine the optimal distance, which is named the effective path length in this paper. The path length of each photon travelling in the dermis is recorded when running the Monte Carlo simulation; this is the effective path length defined above. The sum of the effective path lengths of all photons at each point is calculated, and the detector should be placed at the point with the greatest effective path length. In this way, the optimal measurement distance between the transmitting fiber and the detector is determined.
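A minimal Monte Carlo sketch of the effective-path-length criterion: photons are launched into a two-layer medium, the path length travelled in the dermis is accumulated, binned by radial exit distance, and the source-detector separation with the largest summed dermal path length is selected. Isotropic scattering and the optical coefficients are simplifying assumptions, not tissue data from the paper.

```python
# Minimal photon random-walk sketch recording the "effective path length"
# (distance travelled below the epidermis) per radial exit-distance bin.
import numpy as np

rng = np.random.default_rng(6)
mu_s, mu_a = 20.0, 0.1               # scattering/absorption coefficients (1/mm), assumed
epi_depth = 0.3                      # epidermis thickness (mm), assumed
bin_width, bins = 0.1, np.zeros(30)  # summed dermal path length per exit-radius bin

for _ in range(5000):
    pos = np.zeros(3)                            # photon enters at the origin
    direc = np.array([0.0, 0.0, 1.0])            # heading into the skin (+z)
    weight, dermal_path = 1.0, 0.0
    for _ in range(300):
        step = -np.log(rng.random()) / mu_s      # exponential free path
        new_pos = pos + direc * step
        z0, z1 = sorted((pos[2], new_pos[2]))
        if z1 - z0 > 1e-12:                      # fraction of the step below the epidermis
            dermal_path += step * max(0.0, z1 - max(z0, epi_depth)) / (z1 - z0)
        elif pos[2] > epi_depth:
            dermal_path += step
        pos = new_pos
        weight *= np.exp(-mu_a * step)           # absorption as weight decay
        if pos[2] < 0.0:                         # photon re-emerges at the surface
            b = int(np.hypot(pos[0], pos[1]) / bin_width)
            if b < len(bins):
                bins[b] += weight * dermal_path
            break
        cos_t = 2.0 * rng.random() - 1.0         # isotropic scattering direction
        phi = 2.0 * np.pi * rng.random()
        sin_t = np.sqrt(1.0 - cos_t**2)
        direc = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

best = int(np.argmax(bins))
print(f"optimal source-detector separation ~ {(best + 0.5) * bin_width:.2f} mm")
```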
Customizable cap implants for neurophysiological experimentation.
Blonde, Jackson D; Roussy, Megan; Luna, Rogelio; Mahmoudian, Borna; Gulli, Roberto A; Barker, Kevin C; Lau, Jonathan C; Martinez-Trujillo, Julio C
2018-04-22
Several primate neurophysiology laboratories have adopted acrylic-free, custom-fit cranial implants. These implants are often comprised of titanium or plastic polymers, such as polyether ether ketone (PEEK). Titanium is favored for its mechanical strength and osseointegrative properties whereas PEEK is notable for its lightweight, machinability, and MRI compatibility. Recent titanium/PEEK implants have proven to be effective in minimizing infection and implant failure, thereby prolonging experiments and optimizing the scientific contribution of a single primate. We created novel, customizable PEEK 'cap' implants that contour to the primate's skull. The implants were created using MRI and/or CT data, SolidWorks software and CNC-machining. Three rhesus macaques were implanted with a PEEK cap implant. Head fixation and chronic recordings were successfully performed. Improvements in design and surgical technique solved issues of granulation tissue formation and headpost screw breakage. Primate cranial implants have traditionally been fastened to the skull using acrylic and anchor screws. This technique is prone to skin recession, infection, and implant failure. More recent methods have used imaging data to create custom-fit titanium/PEEK implants with radially extending feet or vertical columns. Compared to our design, these implants are more surgically invasive over time, have less force distribution, and/or do not optimize the utilizable surface area of the skull. Our PEEK cap implants served as an effective and affordable means to perform electrophysiological experimentation while reducing surgical invasiveness, providing increased strength, and optimizing useful surface area. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.
Genetic Adaptive Control for PZT Actuators
NASA Technical Reports Server (NTRS)
Kim, Jeongwook; Stover, Shelley K.; Madisetti, Vijay K.
1995-01-01
A piezoelectric transducer (PZT) is capable of providing linear motion if controlled correctly and could provide a replacement for traditional heavy and large servo systems using motors. This paper focuses on a genetic model reference adaptive control technique (GMRAC) for a PZT which is moving a mirror, where the goal is to keep the mirror velocity constant. Genetic Algorithms (GAs) are an integral part of the GMRAC technique, acting as the search engine for an optimal PID controller. Two methods are suggested to control the actuator in this research. The first is to change the PID parameters and the other is to add an additional reference input to the system. The simulation results of these two methods are compared. Simulated Annealing (SA) is also used to solve the problem, and its results are compared with those of the GAs; the GAs give the best results. The entire model is designed using the MathWorks' Simulink tool.
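For readers unfamiliar with GA-based controller tuning, the following sketch shows the general idea in Python; the plant model, gain bounds, and GA settings are invented for illustration and are not those of the GMRAC study.

```python
import numpy as np

rng = np.random.default_rng(1)
DT, STEPS, REF = 1e-3, 1000, 1.0                     # time step, horizon, target velocity

def cost(gains):
    kp, ki, kd = gains
    v, integ, prev_err, total = 0.0, 0.0, 0.0, 0.0
    for _ in range(STEPS):
        err = REF - v
        integ += err * DT
        deriv = (err - prev_err) / DT
        u = kp * err + ki * integ + kd * deriv       # PID control law
        v += DT * (-5.0 * v + 5.0 * u)               # assumed first-order plant
        if abs(v) > 1e6:                             # penalize unstable gain sets
            return 1e12
        total += err * err
        prev_err = err
    return total

pop = rng.uniform(0.0, 10.0, size=(40, 3))           # initial population of (Kp, Ki, Kd)
for _ in range(50):
    fitness = np.array([cost(ind) for ind in pop])
    parents = pop[np.argsort(fitness)][:20]          # keep the 20 fittest
    a, b = rng.integers(0, 20, 40), rng.integers(0, 20, 40)
    alpha = rng.random((40, 1))
    children = alpha * parents[a] + (1 - alpha) * parents[b]   # blend crossover
    children += rng.normal(0.0, 0.3, (40, 3))        # Gaussian mutation
    children[:20] = parents                          # elitism
    pop = np.clip(children, 0.0, 20.0)

best = pop[np.argmin([cost(ind) for ind in pop])]
print("best PID gains (Kp, Ki, Kd):", best)
```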
Vectorization with SIMD extensions speeds up reconstruction in electron tomography.
Agulleiro, J I; Garzón, E M; García, I; Fernández, J J
2010-06-01
Electron tomography allows structural studies of cellular structures at molecular detail. Large 3D reconstructions are needed to meet the resolution requirements. The processing time to compute these large volumes may be considerable, and so high performance computing techniques have traditionally been used. This work presents a vector approach to tomographic reconstruction that relies on the exploitation of the SIMD extensions available in modern processors in combination with other single-processor optimization techniques. This approach succeeds in producing full resolution tomograms with an important reduction in processing time, as evaluated with the most common reconstruction algorithms, namely WBP and SIRT. The main advantage stems from the fact that this approach runs on standard computers without the need for specialized hardware, which facilitates the development, use and management of programs. Future trends in processor design open excellent opportunities for vector processing with processor SIMD extensions in the field of 3D electron microscopy.
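The paper targets processor SIMD intrinsics in compiled code, but the underlying data-parallel idea can be illustrated with NumPy, whose array kernels are themselves vectorized; the toy backprojection below updates every voxel of a slice for each tilt angle in one array operation rather than a per-voxel loop. Tilt range, sinogram contents, and slice size are invented for illustration.

```python
import numpy as np

angles = np.deg2rad(np.arange(-60, 61, 2))           # assumed tilt range
sino = np.random.rand(angles.size, 256)               # fake projection data
xs = np.arange(256) - 128.0
ys = np.arange(256) - 128.0

def backproject_vectorized():
    rec = np.zeros((256, 256))
    for a, proj in zip(angles, sino):
        # ray coordinate for every voxel of the slice at once (SIMD-friendly)
        t = np.add.outer(ys * np.sin(a), xs * np.cos(a)) + 128.0
        idx = np.clip(t.astype(np.int64), 0, 255)
        rec += proj[idx]                               # gather + accumulate
    return rec

rec = backproject_vectorized()
print(rec.shape, rec.mean())
```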
NASA Astrophysics Data System (ADS)
Izquierdo, Joaquín; Montalvo, Idel; Campbell, Enrique; Pérez-García, Rafael
2016-08-01
Selecting the most appropriate heuristic for solving a specific problem is not easy, for many reasons. This article focuses on one of these reasons: traditionally, the solution search process has operated in a given manner regardless of the specific problem being solved, and the process has been the same regardless of the size, complexity and domain of the problem. To cope with this situation, search processes should mould the search into areas of the search space that are meaningful for the problem. This article builds on previous work in the development of a multi-agent paradigm using techniques derived from knowledge discovery (data-mining techniques) on databases of so-far visited solutions. The aim is to improve the search mechanisms, increase computational efficiency and use rules to enrich the formulation of optimization problems, while reducing the search space and catering to realistic problems.
Enhanced Higgs boson to τ(+)τ(-) search with deep learning.
Baldi, P; Sadowski, P; Whiteson, D
2015-03-20
The Higgs boson is thought to provide the interaction that imparts mass to the fundamental fermions, but while measurements at the Large Hadron Collider (LHC) are consistent with this hypothesis, current analysis techniques lack the statistical power to cross the traditional 5σ significance barrier without more data. Deep learning techniques have the potential to increase the statistical power of this analysis by automatically learning complex, high-level data representations. In this work, deep neural networks are used to detect the decay of the Higgs boson to a pair of tau leptons. A Bayesian optimization algorithm is used to tune the network architecture and training algorithm hyperparameters, resulting in a deep network of eight nonlinear processing layers that improves upon the performance of shallow classifiers even without the use of features specifically engineered by physicists for this application. The improvement in discovery significance is equivalent to an increase in the accumulated data set of 25%.
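A minimal stand-in for the pipeline described above (not the authors' code or data): a deep fully connected classifier trained on synthetic features; in the paper the architecture and training hyperparameters are chosen by Bayesian optimization, which is omitted here.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 25))                       # stand-in for low/high-level features
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=5000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(100,) * 8,    # eight nonlinear processing layers
                    activation="relu", alpha=1e-4,
                    max_iter=100, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```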
Nanoscale surface characterization using laser interference microscopy
NASA Astrophysics Data System (ADS)
Ignatyev, Pavel S.; Skrynnik, Andrey A.; Melnik, Yury A.
2018-03-01
Nanoscale surface characterization is one of the most significant parts of modern materials development and application. Modern microscopes are expensive and complicated tools, and their use for industrial tasks is limited by laborious sample preparation and measurement procedures and by low operation speed. The laser modulation interference microscopy method (MIM) for real-time quantitative and qualitative analysis of glass, metals, ceramics, and various coatings has a spatial resolution of 0.1 nm vertically and up to 100 nm laterally. It is proposed as an alternative to traditional scanning electron microscopy (SEM) and atomic force microscopy (AFM) methods. It is demonstrated that for roughness metrology of super-smooth (Ra >1 nm) surfaces the application of laser interference microscopy techniques is more suitable than conventional SEM and AFM. A comparison of lateral-dimension measurements of a semiconductor test structure obtained with SEM, AFM, and a white-light interferometer also demonstrates the advantages of the MIM technique.
Riley, Thomas C; Mafi, Reza; Mafi, Pouya; Khan, Wasim S
2018-02-23
The incidence of knee ligament injury is increasing and represents a significant cost to healthcare providers. Current interventions include tissue grafts, suture repair and non-surgical management. These techniques have demonstrated good patient outcomes but have been associated with graft rejection, infection, long-term immobilization and reduced joint function. The limitations of traditional management strategies have prompted research into tissue engineering of knee ligaments. This paper aims to evaluate whether tissue engineering of knee ligaments offers a viable alternative in the clinical management of knee ligament injuries. A search of existing literature was performed using OVID Medline, Embase, AMED, PubMed and Google Scholar, and a manual review of citations identified within these papers. Silk, polymer and extracellular matrix based scaffolds can all improve graft healing and collagen production. Fibroblasts and stem cells demonstrate compatibility with scaffolds, and have been shown to increase organized collagen production. These effects can be augmented using growth factors and extracellular matrix derivatives. Animal studies have shown that tissue engineered ligaments can provide the biomechanical characteristics required for effective treatment of knee ligament injuries. There is a growing clinical demand for a tissue engineered alternative to traditional management strategies. Currently, there is limited consensus regarding material selection for use in tissue engineered ligaments. Further research is required to optimize tissue engineered ligament production before clinical application. Controlled clinical trials comparing the use of tissue engineered ligaments and traditional management in patients with knee ligament injury could determine whether they can provide a cost-effective alternative. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry
NASA Technical Reports Server (NTRS)
Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.
2004-01-01
Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial-and-error methods, to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database-driven and direct evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then either yields a new local optimum or increases the fidelity of the approximation model for the next iteration. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive as dimensionality increases. Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables is increased.
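A compact illustration of the surrogate idea discussed above: sample an expensive objective at a set of design points, fit a cheap quadratic response surface to the database, and optimize the surrogate instead. The objective function, sampling plan, and basis are invented stand-ins, not an aerodynamic solver.

```python
import numpy as np
from scipy.optimize import minimize

def expensive_objective(x):
    # stand-in for a high-fidelity simulation
    return (x[0] - 1.2) ** 2 + 3.0 * (x[1] + 0.4) ** 2 + 0.1 * np.sin(5 * x[0])

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(30, 2))                  # database of sampled designs
y = np.array([expensive_objective(x) for x in X])

def features(x):                                      # quadratic response-surface basis
    x1, x2 = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2], axis=-1)

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)   # fit the surrogate
surrogate = lambda x: features(np.asarray(x)) @ coef

res = minimize(surrogate, x0=np.zeros(2), bounds=[(-2, 2), (-2, 2)])
print("surrogate optimum:", res.x, "true value there:", expensive_objective(res.x))
```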
NASA Astrophysics Data System (ADS)
Ye, Jing; Dang, Yaoguo; Li, Bingjun
2018-01-01
The Grey-Markov forecasting model is a combination of the grey prediction model and the Markov chain, and it shows clear optimization effects for data sequences that are non-stationary and volatile. However, the state division in the traditional Grey-Markov forecasting model is mostly based on subjectively chosen real numbers, which directly affects the accuracy of the forecast values. To address this, this paper introduces the central-point triangular whitenization weight function into the state division to calculate the possibility of the observed values lying in each state, reflecting the preference degree for each state in an objective way. In addition, background-value optimization is applied to the traditional grey model to generate better-fitting data. By these means, the improved Grey-Markov forecasting model is built. Finally, taking grain production in Henan Province as an example, the model's validity is verified by comparison with the GM(1,1) model based on background-value optimization and with the traditional Grey-Markov forecasting model.
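For orientation, a plain GM(1,1) grey model looks as follows in Python; the paper's refinements (background-value optimization and the Markov state correction with the central-point triangular whitenization weight function) are not reproduced, and the data series is an invented example.

```python
import numpy as np

x0 = np.array([104.0, 101.5, 108.2, 112.9, 118.4, 121.7])   # assumed demonstration series
x1 = np.cumsum(x0)                                   # accumulated generating series
z1 = 0.5 * (x1[1:] + x1[:-1])                        # traditional background values

# Least-squares estimate of the development coefficient a and grey input b
B = np.column_stack([-z1, np.ones_like(z1)])
a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]

def forecast(k):                                     # k = 0 reproduces x0[0]
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
    return x0[0] if k == 0 else x1_hat - x1_prev     # inverse accumulated generation

print([round(forecast(k), 2) for k in range(len(x0) + 2)])   # fitted values + 2 forecasts
```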
How to maximize science communication efficacy by combining old and new media
NASA Astrophysics Data System (ADS)
Nuccitelli, D. A.; Cook, J.
2014-12-01
Traditional science communication approaches (such as relying on university press releases about new scientific publications) and new communication approaches (such as utilizing infographics and social media) can each reach a wide audience when successful. However, the probability of successful science communication can be amplified by taking advantage of both traditional and new media, especially when 'sticky' messaging techniques are applied. The example of Cook et al., 2013 (C13), which found a 97% consensus in the peer-reviewed climate literature on human-caused global warming, is considered. C13 implemented this combined communications strategy and became the most-downloaded study in all Institute of Physics journals, with over 200,000 downloads to date. Due to the effective 'sticky' messaging approaches implemented by the study authors, its results received broad coverage from international media and reached millions of people via social media. Strategies to avoid misrepresentations of one's work while maximizing the communications efficacy of its key points will also be discussed.
DICOM to print, 35-mm slides, web, and video projector: tutorial using Adobe Photoshop.
Gurney, Jud W
2002-10-01
Preparing images for publication has traditionally dealt with film and the photographic process. With picture archiving and communication systems, many departments will no longer produce film, and this will change how images are produced for publication. DICOM, the file format for radiographic images, has to be converted and then prepared for traditional publication, 35-mm slides, the newest techniques of video projection, and the World Wide Web. Tagged Image File Format (TIFF) is the common format for traditional print publication, whereas the Joint Photographic Experts Group (JPEG) format is the current file format for the World Wide Web. Each medium has specific requirements that can be met with a common image-editing program such as Adobe Photoshop (Adobe Systems, San Jose, CA). High-resolution images are required for print, a process that requires interpolation. However, the Internet requires images with a small file size for rapid transmission. The resolution of each output differs, and the image resolution must be optimized to match the output of the publishing medium.
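A minimal open-source sketch of the same workflow (using pydicom and Pillow in place of Photoshop); the file name, target sizes, DPI, and JPEG quality are illustrative assumptions.

```python
import numpy as np
import pydicom
from PIL import Image

ds = pydicom.dcmread("chest.dcm")                 # hypothetical input file
img = ds.pixel_array.astype(np.float64)

# Map the 12/16-bit DICOM data into an 8-bit grayscale image
span = img.max() - img.min()
img = (img - img.min()) / max(span, 1.0) * 255.0
im8 = Image.fromarray(img.astype(np.uint8))

# Print: upsampled high-resolution TIFF at 300 dpi
im8.resize((im8.width * 2, im8.height * 2)).save("figure_print.tif", dpi=(300, 300))

# Web: small JPEG for fast transmission
im8.resize((512, int(512 * im8.height / im8.width))).save("figure_web.jpg", quality=85)
```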
Baritugo, Kei-Anne; Kim, Hee Taek; David, Yokimiko; Choi, Jong-Il; Hong, Soon Ho; Jeong, Ki Jun; Choi, Jong Hyun; Joo, Jeong Chan; Park, Si Jae
2018-05-01
Bio-based production of industrially important chemicals provides an eco-friendly alternative to current petrochemical-based processes. Because of the limited supply of fossil fuel reserves, various technologies utilizing microbial host strains for the sustainable production of platform chemicals from renewable biomass have been developed. Corynebacterium glutamicum is a non-pathogenic industrial microbial species traditionally used for L-glutamate and L-lysine production. It is a promising species for industrial production of bio-based chemicals because of its flexible metabolism, which allows the utilization of a broad spectrum of carbon sources and the production of various amino acids. Classical breeding, systems and synthetic biology, and metabolic engineering approaches have been used to improve its applications, ranging from traditional amino-acid production to modern biorefinery systems for production of value-added platform chemicals. This review describes recent advances in the development of genetic engineering tools and techniques for the establishment and optimization of metabolic pathways for bio-based production of major C2-C6 platform chemicals using recombinant C. glutamicum.
Old patterns, new meaning: the 1845 hospital of Bezm-i Alem in Istanbul.
Shefer, Miri
2005-01-01
This paper discusses the history of an 1845 Ottoman hospital founded by Bezm-i Alem, mother of the reigning sultan Abdülmecit I (reigned 1839-1856), embedded in the medical and political contexts of the Middle East in the nineteenth century. The main focus of this paper is the Ottoman discourse of modernization, which identified progress with modernization and westernization and induced a belief in the positive character of progress, with a high degree of optimism regarding the success of the process. The Bezm-i Alem hospital illustrates the medical reality of the 19th century, reconstructed through Ottoman eyes rather than from the perspective of foreigners with their own agenda and biases. In many respects it continued previous medical traditions; other aspects reveal brand new developments in Ottoman medicine and hospital management. Ottoman medical reality was one of coexistence and rivalry: traditional conceptions of medicine and health were believed and practiced side-by-side with new western-like concepts and techniques.
A Comparison of Collaborative and Traditional Instruction in Higher Education
ERIC Educational Resources Information Center
Gubera, Chip; Aruguete, Mara S.
2013-01-01
Although collaborative instructional techniques have become popular in college courses, it is unclear whether collaborative techniques can replace more traditional instructional methods. We examined the efficacy of collaborative courses (in-class, collaborative activities with no lectures) compared to traditional lecture courses (in-class,…
NASA Astrophysics Data System (ADS)
Bandte, Oliver
It has always been the intention of systems engineering to invent or produce the best product possible. Many design techniques have been introduced over the course of decades that try to fulfill this intention. Unfortunately, no technique has succeeded in combining multi-criteria decision making with probabilistic design. The design technique developed in this thesis, the Joint Probabilistic Decision Making (JPDM) technique, successfully overcomes this deficiency by generating a multivariate probability distribution that serves, in conjunction with a criterion value range of interest, as a universally applicable objective function for multi-criteria optimization and product selection. This new objective function constitutes a meaningful metric, called Probability of Success (POS), that allows the customer or designer to make a decision based on the chance of satisfying the customer's goals. In order to incorporate a joint probabilistic formulation into the systems design process, two algorithms are created that allow for an easy implementation into a numerical design framework: the (multivariate) Empirical Distribution Function and the Joint Probability Model. The Empirical Distribution Function estimates the probability that an event occurred by counting how many times it occurred in a given sample. The Joint Probability Model, on the other hand, is an analytical parametric model for the multivariate joint probability. It comprises the product of the univariate criterion distributions, generated by the traditional probabilistic design process, multiplied by a correlation function that is based on available correlation information between pairs of random variables. JPDM is an excellent tool for multi-objective optimization and product selection because of its ability to transform disparate objectives into a single figure of merit, the likelihood of successfully meeting all goals, or POS. The advantage of JPDM over other multi-criteria decision making techniques is that POS constitutes a single optimizable function or metric that enables a comparison of all alternative solutions on an equal basis. Hence, POS allows for the use of any standard single-objective optimization technique available and simplifies a complex multi-criteria selection problem into a simple ordering problem, where the solution with the highest POS is best. By distinguishing between controllable and uncontrollable variables in the design process, JPDM can account for the uncertain values of the uncontrollable variables that are inherent to the design problem, while facilitating an easy adjustment of the controllable ones to achieve the highest possible POS. Finally, JPDM's superiority over current multi-criteria decision making techniques is demonstrated with an optimization of a supersonic transport concept and ten contrived equations as well as a product selection example, determining an airline's best choice among Boeing's B-747, B-777, Airbus' A340, and a Supersonic Transport. The optimization examples demonstrate JPDM's ability to produce a better solution with a higher POS than an Overall Evaluation Criterion or Goal Programming approach. Similarly, the product selection example demonstrates JPDM's ability to produce a better solution with a higher POS and different ranking than the Overall Evaluation Criterion or Technique for Order Preferences by Similarity to the Ideal Solution (TOPSIS) approach.
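A toy illustration of the POS idea (not the thesis' algorithms): draw correlated samples of two performance criteria from an assumed joint distribution and estimate the probability that both fall inside the customer's target ranges.

```python
import numpy as np

rng = np.random.default_rng(0)
mean = np.array([250.0, 0.82])                    # e.g. range (assumed units), efficiency
cov = np.array([[400.0, 0.6],                     # assumed covariance (criteria correlated)
                [0.6, 0.0025]])
samples = rng.multivariate_normal(mean, cov, size=100_000)

# Criterion value ranges of interest (invented targets)
in_range = (samples[:, 0] >= 240.0) & (samples[:, 1] >= 0.80)
pos = in_range.mean()
print(f"Probability of Success: {pos:.3f}")
```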
ERIC Educational Resources Information Center
Elrick, Mike
2003-01-01
Traditional techniques and gear are better suited for comfortable extended wilderness trips with high school students than are emerging technologies and techniques based on low-impact camping and petroleum-based clothing, which send students the wrong messages about ecological relatedness and sustainability. Traditional travel techniques and…
Peel, Sarah A; Hussain, Tarique; Cecelja, Marina; Abbas, Abeera; Greil, Gerald F; Chowienczyk, Philip; Spector, Tim; Smith, Alberto; Waltham, Matthew; Botnar, Rene M
2011-11-01
To accelerate and optimize black blood properties of the quadruple inversion recovery (QIR) technique for imaging the abdominal aortic wall. QIR inversion delays were optimized for different heart rates in simulations and phantom studies by minimizing the steady state magnetization of blood for T(1) = 100-1400 ms. To accelerate and improve black blood properties of aortic vessel wall imaging, the QIR prepulse was combined with zoom imaging and (a) "traditional" and (b) "trailing" electrocardiogram (ECG) triggering. Ten volunteers were imaged pre- and post-contrast administration using a conventional ECG-triggered double inversion recovery (DIR) and the two QIR implementations in combination with a zoom-TSE readout. The QIR implemented with "trailing" ECG-triggering resulted in consistently good blood suppression as the second inversion delay was timed during maximum systolic flow in the aorta. The blood signal-to-noise ratio and vessel wall to blood contrast-to-noise ratio, vessel wall sharpness, and image quality scores showed a statistically significant improvement compared with the traditional QIR implementation with and without ECG-triggering. We demonstrate that aortic vessel wall imaging can be accelerated with zoom imaging and that "trailing" ECG-triggering improves black blood properties of the aorta which is subject to motion and variable blood flow during the cardiac cycle. Copyright © 2011 Wiley Periodicals, Inc.
Jeong, Jong Seob; Cannata, Jonathan Matthew; Shung, K Kirk
2010-01-01
It was previously demonstrated that it is feasible to simultaneously perform ultrasound therapy and imaging of a coagulated lesion during treatment with an integrated transducer that is capable of high intensity focused ultrasound (HIFU) and B-mode ultrasound imaging. It was found that coded excitation and fixed notch filtering upon reception could significantly reduce interference caused by the therapeutic transducer. During HIFU sonication, the imaging signal generated with coded excitation and fixed notch filtering had a range side-lobe level of less than −40 dB, while traditional short-pulse excitation and fixed notch filtering produced a range side-lobe level of −20 dB. The shortcoming is, however, that relatively complicated electronics may be needed to utilize coded excitation in an array imaging system. It is for this reason that in this paper an adaptive noise canceling technique is proposed to improve image quality by minimizing not only the therapeutic interference, but also the remnant side-lobe ‘ripples’ when using the traditional short-pulse excitation. The performance of this technique was verified through simulation and experiments using a prototype integrated HIFU/imaging transducer. Although it is known that the remnant ripples are related to the notch attenuation value of the fixed notch filter, in reality it is difficult to find the optimal notch attenuation value due to changes in the target or the medium resulting from motion or different acoustic properties, even during one sonication pulse. In contrast, the proposed adaptive noise canceling technique is capable of optimally minimizing both the therapeutic interference and the residual ripples without such constraints. The prototype integrated HIFU/imaging transducer is composed of three rectangular elements. The 6 MHz center element is used for imaging and the outer two identical 4 MHz elements work together to transmit the HIFU beam. Two HIFU elements of 14.4 mm × 20.0 mm dimensions could increase the temperature of the soft biological tissue from 55 °C to 71 °C within 60 s. Two types of experiments for simultaneous therapy and imaging were conducted to acquire a single scan-line and B-mode image with an aluminum plate and a slice of porcine muscle, respectively. The B-mode image was obtained using the single element imaging system during HIFU beam transmission. The experimental results proved that the combination of the traditional short-pulse excitation and the adaptive noise canceling method could significantly reduce therapeutic interference and remnant ripples and thus may be a better way to implement real-time simultaneous therapy and imaging. PMID:20224162
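As a rough illustration of adaptive noise cancelling in this setting, the sketch below runs a normalized LMS filter that estimates the HIFU interference from a reference signal and subtracts it from a received imaging line; the waveforms, sampling rate, and filter length are invented and much simpler than the experimental setup.

```python
import numpy as np

fs, n = 40e6, 4000
t = np.arange(n) / fs
echo = np.exp(-((t - 50e-6) ** 2) / (2e-6) ** 2) * np.sin(2 * np.pi * 6e6 * t)
hifu_ref = np.sin(2 * np.pi * 4e6 * t)                 # reference of the interference
received = echo + 0.8 * np.roll(hifu_ref, 7)           # interference leaking into the line

taps, mu = 32, 0.5
w = np.zeros(taps)
clean = np.zeros(n)
for i in range(taps, n):
    x = hifu_ref[i - taps:i][::-1]                     # reference window
    y = w @ x                                          # interference estimate
    e = received[i] - y                                # cleaned sample
    w += mu * e * x / (x @ x + 1e-12)                  # NLMS weight update
    clean[i] = e

print("residual interference power before/after:",
      np.var(received - echo), np.var(clean[taps:] - echo[taps:]))
```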
NASA Astrophysics Data System (ADS)
Karri, Naveen K.; Mo, Changki
2018-06-01
Structural reliability of thermoelectric generation (TEG) systems still remains an issue, especially for applications such as large-scale industrial or automobile exhaust heat recovery, in which TEG systems are subject to dynamic loads and thermal cycling. Traditional thermoelectric (TE) system design and optimization techniques, focused on performance alone, could result in designs that may fail during operation as the geometric requirements for optimal performance (especially the power) are often in conflict with the requirements for mechanical reliability. This study focused on reducing the thermomechanical stresses in a TEG system without compromising the optimized system performance. Finite element simulations were carried out to study the effect of TE element (leg) geometry such as leg length and cross-sectional shape under constrained material volume requirements. Results indicated that the element length has a major influence on the element stresses whereas regular cross-sectional shapes have minor influence. The impact of TE element stresses on the mechanical reliability is evaluated using brittle material failure theory based on Weibull analysis. An alternate couple configuration that relies on the industry practice of redundant element design is investigated. Results showed that the alternate configuration considerably reduced the TE element and metallization stresses, thereby enhancing the structural reliability, with little trade-off in the optimized performance. The proposed alternate configuration could serve as a potential design modification for improving the reliability of systems optimized for thermoelectric performance.
Taghanaki, Saeid Asgari; Kawahara, Jeremy; Miles, Brandon; Hamarneh, Ghassan
2017-07-01
Feature reduction is an essential stage in computer aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution. They are also trained to only reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, but do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that will improve classification performance when compared to single-objective and other multi-objective approaches (i.e. scalarized and sequential). In this paper, we introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives: MRE and mean classification error (MCE) for Pareto-optimal solutions, rather than just MRE. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm. We tested our method on 949 X-ray mammograms categorized into 12 classes. The results show that the features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, demonstrating favourable accuracy over the results of state-of-the-art methods reported in the literature. We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for finding Pareto-optimal solutions, using evolutionary multi-objective optimization, results in producing more discriminative features. Copyright © 2017 Elsevier B.V. All rights reserved.
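The selection step of such a multi-objective search can be sketched as follows: candidate configurations are scored on the two objectives (MRE and MCE, lower is better) and the non-dominated (Pareto-optimal) set is extracted. The scores here are random stand-ins rather than trained auto-encoder evaluations, and the full NSGA-II machinery is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.random((50, 2))                 # columns: [MRE, MCE], lower is better

def pareto_front(points):
    keep = []
    for i, p in enumerate(points):
        # p is dominated if some other point is <= in both objectives and < in at least one
        dominated = np.any(np.all(points <= p, axis=1) & np.any(points < p, axis=1))
        if not dominated:
            keep.append(i)
    return np.array(keep)

front = pareto_front(scores)
print("Pareto-optimal candidates:", front)
print(scores[front])
```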
Learning Efficient Sparse and Low Rank Models.
Sprechmann, P; Bronstein, A M; Sapiro, G
2015-09-01
Parsimony, including sparsity and low rank, has been shown to successfully model data in numerous machine learning and signal processing tasks. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with parsimony-promoting terms. The inherently sequential structure and data-dependent complexity and latency of iterative optimization constitute a major limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these modeling techniques is the difficulty of their inclusion in discriminative learning scenarios. In this work, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a learned deterministic fixed-complexity pursuit process is used in lieu of iterative optimization. We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low rank models, derived from the iteration of proximal descent algorithms. These architectures learn to approximate the exact parsimonious representation at a fraction of the complexity of the standard optimization methods. We also show that appropriate training regimes allow to naturally extend parsimonious models to discriminative settings. State-of-the-art results are demonstrated on several challenging problems in image and audio processing with several orders of magnitude speed-up compared to the exact optimization algorithms.
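A minimal sketch of the process-centric idea: a fixed, small number of unrolled ISTA-style proximal steps approximates the sparse code that exact iterative optimization would return. The dictionary, threshold, and depth are illustrative, and, unlike in the paper, nothing here is learned.

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 128)) / np.sqrt(64)           # dictionary for 64-dim signals
x = D @ (rng.random(128) * (rng.random(128) < 0.05))   # signal with a sparse code

L = np.linalg.norm(D, 2) ** 2                          # Lipschitz constant of D^T D
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

z = np.zeros(128)
for _ in range(10):                                    # fixed-depth, feed-forward "network"
    z = soft(z + (D.T @ (x - D @ z)) / L, 0.05 / L)    # gradient step + shrinkage

print("nonzeros:", np.count_nonzero(z),
      "reconstruction error:", np.linalg.norm(x - D @ z))
```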
Zhang, Liping; Zheng, Yanling; Wang, Kai; Zhang, Xueliang; Zheng, Yujian
2014-06-01
In this paper, by using a particle swarm optimization algorithm to solve the optimal parameter estimation problem, an improved Nash nonlinear grey Bernoulli model termed PSO-NNGBM(1,1) is proposed. To test the forecasting performance, the optimized model is applied for forecasting the incidence of hepatitis B in Xinjiang, China. Four models, traditional GM(1,1), grey Verhulst model (GVM), original nonlinear grey Bernoulli model (NGBM(1,1)) and Holt-Winters exponential smoothing method, are also established for comparison with the proposed model under the criteria of mean absolute percentage error and root mean square percent error. The prediction results show that the optimized NNGBM(1,1) model is more accurate and performs better than the traditional GM(1,1), GVM, NGBM(1,1) and Holt-Winters exponential smoothing method. Copyright © 2014. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Buyuk, Ersin; Karaman, Abdullah
2017-04-01
We estimated transmissivity and storage coefficient values from single-well water-level measurements positioned ahead of the mining face by using the particle swarm optimization (PSO) technique. The water-level response to the advancing mining face is described by a semi-analytical function that is not suitable for conventional inversion schemes because the partial derivative is difficult to calculate. Moreover, the logarithmic behaviour of the model makes it difficult to obtain an initial model that leads to stable convergence. PSO appears to obtain a reliable solution that produces a reasonable fit between the water-level data and the model function response. Optimization methods have been used to find optimum conditions consisting of either the minimum or the maximum of a given objective function with regard to some criteria. Unlike PSO, traditional non-linear optimization methods have been used for many hydrogeologic and geophysical engineering problems. These methods suffer from difficulties such as dependence on the initial model, evaluation of the partial derivatives required when linearizing the model, and trapping at local optima. Recently, particle swarm optimization has become the focus of modern global optimization; it is inspired by the social behaviour of bird swarms and appears to be a reliable and powerful algorithm for complex engineering applications. PSO, which does not depend on an initial model and is a derivative-free stochastic process, appears to be capable of searching all possible solutions in the model space around both local and global optimum points.
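A bare-bones PSO loop of the kind described above might look as follows; the two parameters stand in for transmissivity and storage coefficient, and the Cooper-Jacob-style drawdown expression, pumping rate, and noise level are assumptions rather than the paper's semi-analytical model.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(1.0, 50.0, 60)                          # observation times (assumed)
Q, r = 0.01, 10.0                                       # assumed pumping rate, radius
model = lambda T, S: Q / (4 * np.pi * T) * np.log(2.25 * T * t / (r**2 * S))

true_T, true_S = 3.0e-4, 2.0e-5
data = model(true_T, true_S) + rng.normal(0, 0.05, t.size)

def misfit(p):
    T, S = p
    return np.sum((model(T, S) - data) ** 2)

n, w, c1, c2 = 30, 0.7, 1.5, 1.5
lo, hi = np.array([1e-5, 1e-6]), np.array([1e-2, 1e-3])
pos = rng.uniform(lo, hi, (n, 2))
vel = np.zeros((n, 2))
pbest, pbest_f = pos.copy(), np.array([misfit(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)]

for _ in range(200):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([misfit(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]

print("estimated (T, S):", gbest, "true:", (true_T, true_S))
```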
Dai, Yunchao; Nasir, Mubasher; Zhang, Yulin; Gao, Jiakai; Lv, Yamin; Lv, Jialong
2018-01-01
Several predictive models and methods have been used for heavy metals bioavailability, but there is no universally accepted approach for evaluating the bioavailability of arsenic (As) in soil. The technique of diffusive gradients in thin-films (DGT) is a promising tool, but there is considerable debate with respect to its suitability. The DGT method was compared with other traditional chemical extraction techniques (soil solution, NaHCO3, NH4Cl, HCl, and total As methods) for estimating As bioavailability in soil, based on a greenhouse experiment using Brassica chinensis grown in various soils from 15 provinces in China. In addition, we assessed whether these methods are independent of soil properties. The correlations between plant and soil As concentrations measured with traditional extraction techniques were pH and iron oxide (Feox) dependent, indicating that these methods are influenced by soil properties. In contrast, DGT measurements were independent of soil properties and also showed a better correlation coefficient than the other traditional techniques. Thus, the DGT technique is superior to traditional techniques and should be preferred for evaluating As bioavailability in different types of soil. Copyright © 2017 Elsevier Ltd. All rights reserved.
Surrogate assisted multidisciplinary design optimization for an all-electric GEO satellite
NASA Astrophysics Data System (ADS)
Shi, Renhe; Liu, Li; Long, Teng; Liu, Jian; Yuan, Bin
2017-09-01
State-of-the-art all-electric geostationary earth orbit (GEO) satellites use electric thrusters to execute all propulsive duties, which significantly differ from the traditional all-chemical ones in orbit-raising, station-keeping, radiation damage protection, and power budget, etc. The design optimization task of an all-electric GEO satellite is therefore a complex multidisciplinary design optimization (MDO) problem involving unique design considerations. However, solving the all-electric GEO satellite MDO problem poses big challenges in disciplinary modeling techniques and efficient optimization strategies. To address these challenges, we present a surrogate assisted MDO framework consisting of several modules, i.e., MDO problem definition, multidisciplinary modeling, multidisciplinary analysis (MDA), and a surrogate assisted optimizer. Based on the proposed framework, the all-electric GEO satellite MDO problem is formulated to minimize the total mass of the satellite system under a number of practical constraints. Then considerable effort is spent on multidisciplinary modeling involving the geosynchronous transfer, GEO station-keeping, power, thermal control, attitude control, and structure disciplines. Since the orbit dynamics models and the finite element structural model are computationally expensive, an adaptive response surface surrogate based optimizer is incorporated in the proposed framework to solve the satellite MDO problem with moderate computational cost, where a response surface surrogate is gradually refined to represent the computationally expensive MDA process. After optimization, the total mass of the studied GEO satellite is decreased by 185.3 kg (i.e., 7.3% of the total mass). Finally, the optimal design is further discussed to demonstrate the effectiveness of our proposed framework in coping with all-electric GEO satellite system design optimization problems. The proposed surrogate assisted MDO framework can also provide valuable references for other all-electric spacecraft system designs.
Bottenus, Nick; D’hooge, Jan; Trahey, Gregg E.
2017-01-01
The transverse oscillation (TO) technique can improve the estimation of tissue motion perpendicular to the ultrasound beam direction. TOs can be introduced using plane wave (PW) insonification and bi-lobed Gaussian apodisation (BA) on receive (abbreviated as PWTO). Furthermore, the TO frequency can be doubled after a heterodyning demodulation process is performed (abbreviated as PWTO*). This study is concerned with identifying the limitations of the PWTO technique in the specific context of myocardial deformation imaging with phased arrays and investigating the conditions in which it remains advantageous over traditional focused (FOC) beamforming. For this purpose, several tissue phantoms were simulated using Field II, undergoing a wide range of displacement magnitudes and modes (lateral, axial and rotational motion). The Cramer-Rao lower bound (CRLB) was used to optimize TO beamforming parameters and theoretically predict the fundamental tracking performance limits associated with the FOC, PWTO and PWTO* beamforming scenarios. This framework was extended to also predict performance for BA functions which are windowed by the physical aperture of the transducer, leading to higher lateral oscillations. It was found that windowed BA functions resulted in lower jitter errors compared to traditional BA functions. PWTO* outperformed FOC at all investigated SNR levels but only up to a certain displacement, with the advantage rapidly decreasing as SNR increased. These results suggest that PWTO* improves lateral tracking performance, but only when inter-frame displacements remain relatively low. The study concludes by translating these findings to a clinical environment by suggesting optimal scanner settings. PMID:27810806
Shao, Shi-Cheng; Burgess, Kevin S; Cruse-Sanders, Jennifer M; Liu, Qiang; Fan, Xu-Li; Huang, Hui; Gao, Jiang-Yun
2017-01-01
Due to increasing demand for medicinal and horticultural uses, the Orchidaceae is in urgent need of innovative and novel propagation techniques that address both market demand and conservation. Traditionally, restoration techniques have been centered on ex situ asymbiotic or symbiotic seed germination techniques that are not cost-effective, have limited genetic potential and often result in low survival rates in the field. Here, we propose a novel, advanced in situ restoration-friendly program for the endangered epiphytic orchid species Dendrobium devonianum, in which a series of in situ symbiotic seed germination trials based on conspecific fungal isolates were conducted at two sites in Yunnan Province, China. We found that percentage germination varied among treatments and locations; control treatments (no inoculum) did not germinate at either site. The optimal treatment, which had the highest in situ seed germination rate (0.94-1.44%) with no significant variation among sites, corresponded to a warm, moist and fixed site that allowed for light penetration. When accounting for seed density, percentage germination was highest (2.78-2.35%) at low densities and did not vary among locations for the treatment that supported optimal conditions. Similarly, for the same treatment, seed germination ranged from 0.24 to 5.87% among seasons but also varied among sites. This study reports on the cultivation and restoration of an endangered epiphytic orchid species by in situ symbiotic seed germination and is likely to have broad application to the horticulture and conservation of the Orchidaceae.
Filament Winding Multifunctional Carbon Nanotube Composites of Various Dimensionality
NASA Astrophysics Data System (ADS)
Wells, Brian David
Carbon nanotubes (CNT) have long been considered an optimal material for composites due to their high strength, high modulus, and electrical/thermal conductivity. These composite materials have the potential to be used in the aerospace, computer, automotive, and medical industries as well as many others. The nano dimensions of these structures make controlled alignment and distribution difficult using many production techniques. An area that shows promise for controlled alignment is the formation of CNT yarns. Different approaches have been used to create yarns with various winding angles and diameters. CNTs resemble traditional textile fiber structures due to their one-dimensional geometry, axial strength and radial flexibility. One difference is that, depending on their length, CNTs can have aspect ratios that far exceed those of traditional textile fibers. This can complicate processing techniques and cause agglomeration, which prevents optimal structures from being created. However, with specific aspect ratios and spatial distributions, a specific type of CNT, vertically aligned spinnable carbon nanotubes (VASCNTs), have interesting properties that allow carbon nanotubes to be drawn from an array in a continuous aligned web. This dissertation examines the feasibility of combining VASCNTs with another textile manufacturing process, filament winding, to create structures with various levels of dimensionality. While yarn formation with CNTs has been widely studied, there has not been significant work studying the use of VASCNTs to create composite materials. The studies that have been produced revolve around mixing CNTs into epoxy or creating uni-directional wound structures. In this dissertation VASCNTs are used to create filament wound materials with various degrees of alignment. These structures include one-dimensional coatings applied to non-conductive polymer monofilaments, two-dimensional multifunctional adhesive films, and three-dimensional hybrid-nano composites. The angle of alignment between the individual CNTs relative to the overall structure was used to affect the electrical properties in all of these structures and the mechanical properties of the adhesive films and hybrid-nano composites. Varying the concentration of CNT was also found to have a significant effect on the electrical and mechanical properties. The variable properties that can be created with these production techniques allow users to engineer the structure to match the desired property.
A microarchitecture for resource-limited superscalar microprocessors
NASA Astrophysics Data System (ADS)
Basso, Todd David
1999-11-01
Microelectronic components in space and satellite systems must be resistant to total dose radiation, single-event upset, and latchup in order to accomplish their missions. The demand for inexpensive, high-volume, radiation hardened (rad-hard) integrated circuits (ICs) is expected to increase dramatically as the communication market continues to expand. Motorola's Complementary Gallium Arsenide (CGaAs™) technology offers superior radiation tolerance compared to traditional CMOS processes, while being more economical than dedicated rad-hard CMOS processes. The goals of this dissertation are to optimize a superscalar microarchitecture suitable for CGaAs™ microprocessors, develop circuit techniques for such applications, and evaluate the potential of CGaAs™ for the development of digital VLSI circuits. Motorola's 0.5 μm CGaAs™ process is summarized and circuit techniques applicable to digital CGaAs™ are developed. Direct coupled FET, complementary, and domino logic circuits are compared based on speed, power, area, and noise margins. These circuit techniques are employed in the design of a 600 MHz PowerPC™ arithmetic logic unit. The dissertation emphasizes CGaAs™-specific design considerations, specifically, low integration level. A baseline superscalar microarchitecture is defined and SPEC95 integer benchmark simulations are used to evaluate the applicability of advanced architectural features to microprocessors having low integration levels. The performance simulations center on the optimization of a simple superscalar core, small-scale branch prediction, instruction prefetching, and an off-chip primary data cache. The simulation results are used to develop a superscalar microarchitecture capable of outperforming a comparable sequential pipeline, while using only 500,000 transistors. The architecture, running at 200 MHz, is capable of achieving an estimated 153 MIPS, translating to a 27% performance increase over a comparable traditional pipelined microprocessor. The proposed microarchitecture is process independent and can be applied to low-cost or transistor-limited applications. The proposed microarchitecture is implemented in the design of a 0.35 μm CMOS microprocessor and in the design of a 0.5 μm CGaAs™ microprocessor. The two technologies and designs are compared to ascertain the state of CGaAs™ for digital VLSI applications.
Temporomandibular joint arthroscopy technique using a single working cannula.
Srouji, S; Oren, D; Zoabi, A; Ronen, O; Zraik, H
2016-11-01
The traditional arthroscopy technique includes the creation of three ports in order to enable visualization, operation, and arthrocentesis. The aim of this study was to assess an advanced temporomandibular joint (TMJ) arthroscopy technique that requires only a single cannula, through which a one-piece instrument containing a visualization canal, irrigation canal, and a working canal is inserted, as an alternative to the traditional double-puncture technique. This retrospective study assessed eight patients (13 TMJs) with pain and/or limited range of movement that was refractory to conservative therapy, who were treated between June 2015 and December 2015. The temporomandibular joint disorder (TMD) was diagnosed by physical examination and mouth opening measurements. The duration of surgery was recorded and compared to that documented for traditional arthroscopies performed by the same surgeon. Operative single-cannula arthroscopy (OSCA) was performed using a holmium YAG (Ho:YAG) 230μm fibre laser for ablation. The OSCA technique proved effective in improving mouth opening in all patients (mean increase 9.12±1.96mm) and in reducing pain (mean visual analogue scale decrease of 3.25±1.28). The operation time was approximately half that of the traditional technique. The OSCA technique is as efficient as the traditional technique, is simple to learn, and is simpler to execute. Copyright © 2016 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Development of a PC interface board for true color control using an Ar Kr white-light laser
NASA Astrophysics Data System (ADS)
Shin, Yongjin; Park, Sohee; Kim, Youngseop; Lee, Jangwoen
2006-06-01
For an optimal laser display, it is crucial to select and control color signals of proper wavelengths in order to construct a wide range of laser display colors. In traditional laser display schemes, color control has been achieved through the mechanical manipulation of red, green, and blue (RGB) laser beam intensities using color filters. To maximize the effect of a laser display and its color content, it is desirable to generate laser beams with a wide selection of wavelengths. We present an innovative laser display control technique, which generates six-channel laser wavelengths from a white-light laser using an RF-controlled polychromatic acousto-optical modulator (PCAOM). This technique enables us not only to control the intensity of individual channels, but also to achieve true color signals for the laser beam display, including RGB, yellow, cyan, and violet (YCV), and other intermediate colors. For optimal control of the PCAOM and galvano-mirror, we designed and fabricated a PC interface board. Using this PC control, we separated the white light from an Ar-Kr mixed gas laser into various wavelengths and reconstructed them into different color schemes. We also demonstrated the effective control and simultaneous display of reconstructed true color laser beams on a flat screen.
Resource Costs Give Optimization the Edge
C.M. Eddins
1996-01-01
To optimize or not to optimize - that is the question practically every sawmill has considered at some time or another. Edger and trimmer optimization is a particularly hot topic, as these are among the most wasteful areas of the sawmill because trimmer and edger operators traditionally tend to over edge or trim. By its very definition, optimizing equipment seeks to...
Matsukawa, Keitaro; Yato, Yoshiyuki; Kato, Takashi; Imabayashi, Hideaki; Asazuma, Takashi; Nemoto, Koichi
2014-02-15
The insertional torque of pedicle screws using the cortical bone trajectory (CBT) was measured in vivo. The aim was to investigate the effectiveness of the CBT technique by measurement of the insertional torque. The CBT follows a mediolateral and caudocephalad directed path, engaging with cortical bone maximally from the pedicle to the vertebral body. Some biomechanical studies have demonstrated favorable characteristics of the CBT technique in the cadaveric lumbar spine. However, no in vivo study has been reported on the mechanical behavior of this new trajectory. The insertional torque of pedicle screws using the CBT and traditional techniques was measured intraoperatively in 48 consecutive patients. A total of 162 screws using the CBT technique and 36 screws using the traditional technique were compared. In 8 of the 48 patients, a side-by-side comparison of the 2 different insertional techniques for each vertebra was performed; these patients formed the H group. In addition, the insertional torque was correlated with bone mineral density. The mean maximum insertional torque of the CBT screws and traditional screws was 2.49 ± 0.99 Nm and 1.24 ± 0.54 Nm, respectively. The CBT screws showed 2.01 times higher torque, and the difference between the 2 techniques was significant (P < 0.01). In the H group, the insertional torque was 2.71 ± 1.36 Nm for the CBT screws and 1.58 ± 0.44 Nm for the traditional screws. The CBT screws demonstrated 1.71 times higher torque and statistical significance was achieved (P < 0.01). Positive linear correlations between maximum insertional torque and bone mineral density were found for both techniques; the correlation coefficient of the traditional screws (r = 0.63, P < 0.01) was higher than that of the CBT screws (r = 0.59, P < 0.01). The insertional torque using the CBT technique is about 1.7 times higher than that using the traditional technique. Level of Evidence: 2.
Wilk, Brian L
2015-01-01
Over the course of the past two to three decades, intraoral digital impression systems have gained acceptance due to high accuracy and ease of use as they have been incorporated into the fabrication of dental implant restorations. The use of intraoral digital impressions enables the clinician to produce accurate restorations without the unpleasant aspects of traditional impression materials and techniques. This article discusses the various types of digital impression systems and their accuracy compared to traditional impression techniques. The cost, time, and patient satisfaction components of both techniques will also be reviewed.
Intraoperative virtual brain counseling
NASA Astrophysics Data System (ADS)
Jiang, Zhaowei; Grosky, William I.; Zamorano, Lucia J.; Muzik, Otto; Diaz, Fernando
1997-06-01
Our objective is to offer online real-time intelligent guidance to the neurosurgeon. Different from traditional image-guidance technologies that offer intra-operative visualization of medical images or atlas images, virtual brain counseling goes one step further: it can distinguish related brain structures and provide information about them intra-operatively. Virtual brain counseling is the foundation for surgical planning optimization and on-line surgical reference. It can provide a warning system that alerts the neurosurgeon if the chosen trajectory will pass through eloquent brain areas. In order to fulfill this objective, tracking techniques are involved for intra-operativity. Most importantly, a 3D virtual brain environment, different from traditional 3D digitized atlases, is an object-oriented model of the brain that stores information about different brain structures together with their related information. An object-oriented hierarchical hyper-voxel space (HHVS) is introduced to integrate anatomical and functional structures. Spatial queries based on a position of interest, line segment of interest, and volume of interest are introduced in this paper. The virtual brain environment is integrated with existing surgical pre-planning and intra-operative tracking systems to provide information for planning optimization and on-line surgical guidance. The neurosurgeon is alerted automatically if the planned treatment affects any critical structures. Architectures such as HHVS, and algorithms such as spatial querying, normalizing, and warping, are presented in the paper. A prototype has shown that the virtual brain is intuitive in its hierarchical 3D appearance. It also showed that HHVS, as the key structure for virtual brain counseling, efficiently integrates multi-scale brain structures based on their spatial relationships. This is a promising development for the optimization of treatment plans and online intelligent surgical guidance.
Character Recognition Using Genetically Trained Neural Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diniz, C.; Stantz, K.M.; Trahan, M.W.
1998-10-01
Computationally intelligent recognition of characters and symbols addresses a wide range of applications including foreign language translation and chemical formula identification. The combination of intelligent learning and optimization algorithms with layered neural structures offers powerful techniques for character recognition. These techniques were originally developed by Sandia National Laboratories for pattern and spectral analysis; however, their ability to optimize vast amounts of data makes them ideal for character recognition. An adaptation of the Neural Network Designer software allows the user to create a neural network (NN) trained by a genetic algorithm (GA) that correctly identifies multiple distinct characters. The initial successful recognition of standard capital letters can be expanded to include chemical and mathematical symbols and alphabets of foreign languages, especially Arabic and Chinese. The NN model constructed for this project uses a three-layer feed-forward architecture. To facilitate the input of characters and symbols, a graphic user interface (GUI) has been developed to convert the traditional representation of each character or symbol to a bitmap. The 8 x 8 bitmap representations used for these tests are mapped onto the input nodes of the feed-forward neural network (FFNN) in a one-to-one correspondence. The input nodes feed forward into a hidden layer, and the hidden layer feeds into five output nodes correlated to possible character outcomes. During the training period the GA optimizes the weights of the NN until it can successfully recognize distinct characters. Systematic deviations from the base design test the network's range of applicability. Increasing capacity, the number of letters to be recognized, requires a nonlinear increase in the number of hidden layer neurodes. Optimal character recognition performance necessitates a minimum threshold for the number of cases when genetically training the net. And the amount of noise significantly degrades character recognition efficiency, some of which can be overcome by adding noise during training and optimizing the form of the network's activation function.
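A toy version of the GA-trained feed-forward recognizer might look as follows; the three 8 x 8 glyphs, network sizes, and GA settings are invented for illustration and are far smaller than the capital-letter task described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def glyph(kind):
    g = np.zeros((8, 8))
    if kind == 0: g[:, 3:5] = 1                        # vertical bar ("I")
    if kind == 1: g[3:5, :] = 1                        # horizontal bar ("-")
    if kind == 2: g[np.arange(8), np.arange(8)] = 1    # diagonal ("\")
    return g.ravel()

X = np.stack([glyph(k) + 0.1 * rng.random(64) for k in range(3) for _ in range(20)])
y = np.repeat(np.arange(3), 20)

N_IN, N_HID, N_OUT = 64, 10, 3
n_w = N_IN * N_HID + N_HID * N_OUT                     # flat weight genome length

def accuracy(w):
    w1 = w[:N_IN * N_HID].reshape(N_IN, N_HID)
    w2 = w[N_IN * N_HID:].reshape(N_HID, N_OUT)
    out = np.tanh(X @ w1) @ w2                         # three-layer feed-forward pass
    return np.mean(out.argmax(axis=1) == y)

pop = rng.normal(0, 0.5, (60, n_w))
for _ in range(80):
    fit = np.array([accuracy(ind) for ind in pop])
    parents = pop[np.argsort(fit)[::-1][:30]]          # keep the fittest half
    mates = parents[rng.integers(0, 30, (30, 2))]
    mask = rng.random((30, n_w)) < 0.5                 # uniform crossover
    children = np.where(mask, mates[:, 0], mates[:, 1])
    children += rng.normal(0, 0.1, children.shape) * (rng.random(children.shape) < 0.05)
    pop = np.vstack([parents, children])

print("best training accuracy:", max(accuracy(ind) for ind in pop))
```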
NASA Technical Reports Server (NTRS)
Bao, Han P.; Samareh, J. A.
2000-01-01
The primary objective of this paper is to demonstrate the use of process-based manufacturing and assembly cost models in a traditional performance-focused multidisciplinary design and optimization process. The use of automated cost-performance analysis is an enabling technology that could bring realistic process-based manufacturing and assembly cost into multidisciplinary design and optimization. In this paper, we present a new methodology for incorporating process costing into a standard multidisciplinary design optimization process. Material, manufacturing-process, and assembly-process costs could then be used as the objective function for the optimization method. A case study involving forty-six different configurations of a simple wing is presented, indicating that a design based on performance criteria alone may not necessarily be the most affordable as far as manufacturing and assembly cost is concerned.
Silicon Micromachining for Terahertz Component Development
NASA Technical Reports Server (NTRS)
Chattopadhyay, Goutam; Reck, Theodore J.; Jung-Kubiak, Cecile; Siles, Jose V.; Lee, Choonsup; Lin, Robert; Mehdi, Imran
2013-01-01
Waveguide component technology at terahertz frequencies has come of age in recent years. Essential components such as ortho-mode transducers (OMT), quadrature hybrids, filters, and others for high performance system development were either impossible to build or too difficult to fabricate with traditional machining techniques. With micromachining of silicon wafers coated with sputtered gold it is now possible to fabricate and test these waveguide components. Using a highly optimized Deep Reactive Ion Etching (DRIE) process, we are now able to fabricate silicon micromachined waveguide structures working beyond 1 THz. In this paper, we describe in detail our approach of design, fabrication, and measurement of silicon micromachined waveguide components and report the results of a 1 THz canonical E-plane filter.
Thresholding Based on Maximum Weighted Object Correlation for Rail Defect Detection
NASA Astrophysics Data System (ADS)
Li, Qingyong; Huang, Yaping; Liang, Zhengping; Luo, Siwei
Automatic thresholding is an important technique for rail defect detection, but traditional methods are not competent enough to fit the characteristics of this application. This paper proposes the Maximum Weighted Object Correlation (MWOC) thresholding method, designed for the characteristics of rail images, which are unimodal and contain only a small proportion of defect pixels. MWOC selects a threshold by optimizing the product of the object correlation and a weight term that expresses the proportion of thresholded defects. Our experimental results demonstrate that MWOC achieves a misclassification error of 0.85% and outperforms the other well-established thresholding methods, including Otsu, maximum correlation thresholding, maximum entropy thresholding and the valley-emphasis method, for the application of rail defect detection.
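The abstract does not spell out the exact forms of the object-correlation and weight terms, so the sketch below only shows the overall selection mechanism: score every candidate gray-level threshold by the product of a correlation-style term and a weight term that penalizes labelling too many pixels as defect, and keep the maximizer. Both term definitions and the synthetic rail image are placeholders, not the paper's formulas.

```python
# Sketch of product-of-terms threshold selection (term definitions assumed).
import numpy as np

def select_threshold(image, object_correlation, weight):
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()                        # gray-level probabilities
    best_t, best_score = 0, -np.inf
    for t in range(1, 255):
        score = object_correlation(p, t) * weight(p, t)
        if score > best_score:
            best_t, best_score = t, score
    return best_t

def example_correlation(p, t):
    # Placeholder: how tightly the dark "defect" class below t clusters,
    # expressed through its within-class variance of gray levels.
    levels = np.arange(256)
    w = p[:t].sum()
    if w == 0:
        return 0.0
    mean = (levels[:t] * p[:t]).sum() / w
    var = ((levels[:t] - mean) ** 2 * p[:t]).sum() / w
    return 1.0 / (1.0 + var)

def example_weight(p, t):
    # Placeholder weight: penalizes thresholds that label a large fraction
    # of the image as defect, since real defects occupy few pixels.
    return np.exp(-20.0 * p[:t].sum())

rng = np.random.default_rng(1)
rail = np.clip(rng.normal(170, 12, size=(64, 256)), 0, 255)   # bright rail surface
rail[30:34, 100:110] = 40                                     # small dark "defect"
print("selected threshold:", select_threshold(rail, example_correlation, example_weight))
```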
Damage tolerant design using collapse techniques
NASA Technical Reports Server (NTRS)
Haftka, R. T.
1982-01-01
A new approach to the design of structures for improved global damage tolerance is presented. In its undamaged condition the structure is designed subject to strength, displacement and buckling constraints. In the damaged condition the only constraint is that the structure will not collapse. The collapse load calculation is formulated as a maximization problem and solved by an interior extended penalty function. The design for minimum weight subject to constraints on the undamaged structure and a specified level of the collapse load is a minimization problem which is also solved by a penalty function formulation. Thus the overall problem is a nested, or multilevel, optimization. Examples are presented to demonstrate the difference between the present and more traditional approaches.
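As a rough illustration of the penalty-function machinery referred to above, the sketch below minimizes a toy "weight" subject to constraints of the form g(x) >= 0 using the linear extended interior penalty and a sequence of shrinking penalty multipliers. The toy constraints stand in for the paper's stress and collapse-load analyses; nothing here reproduces the actual structural model.

```python
# Sketch: extended interior penalty function with a shrinking multiplier.
import numpy as np
from scipy.optimize import minimize

def extended_interior_penalty(g, eps=0.1):
    # Constraints are written as g(x) >= 0 (feasible when positive);
    # below eps the 1/g barrier is continued linearly.
    g = np.asarray(g, dtype=float)
    return np.where(g >= eps, 1.0 / np.maximum(g, 1e-12),
                    (2.0 * eps - g) / eps**2).sum()

def weight(a):                       # toy "structural weight" of two members
    return a[0] + 2.0 * a[1]

def constraints(a):                  # toy strength and collapse-margin constraints
    return np.array([a[0] - 0.5,                  # member 1 strong enough
                     a[1] - 0.3,                  # member 2 strong enough
                     a[0] + a[1] - 1.0])          # collapse-load margin

def solve(x0=np.array([2.0, 2.0])):
    x = x0
    for r in [1.0, 0.3, 0.1, 0.03, 0.01]:         # decreasing penalty multiplier
        obj = lambda a: weight(a) + r * extended_interior_penalty(constraints(a))
        x = minimize(obj, x, method="Nelder-Mead").x
    return x

x_opt = solve()
print("areas:", np.round(x_opt, 3), "weight:", round(weight(x_opt), 3))
```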
Novel technologies provide more engineering strategies for amino acid-producing microorganisms.
Gu, Pengfei; Su, Tianyuan; Qi, Qingsheng
2016-03-01
Traditionally, amino acid-producing strains were obtained by random mutagenesis and subsequent selection. With the development of genetic and metabolic engineering techniques, various microorganisms with high amino acid production yields are now constructed by rational design of targeted biosynthetic pathways. Recently, novel technologies derived from systems and synthetic biology have emerged and open a new promising avenue towards the engineering of amino acid production microorganisms. In this review, these approaches, including rational engineering of rate-limiting enzymes, real-time sensing of end-products, pathway optimization on the chromosome, transcription factor-mediated strain improvement, and metabolic modeling and flux analysis, were summarized with regard to their application in microbial amino acid production.
Development of a Pressure Switched Microfluidic Cell Sorter
NASA Astrophysics Data System (ADS)
Ozbay, Baris; Jones, Alex; Gibson, Emily
2009-10-01
Lab on a chip technology allows for the replacement of traditional cell sorters with microfluidic devices which can be produced less expensively and are more compact. Additionally, the compact nature of microfluidic cell sorters may lead to the realization of their application in point-of-care medical devices. Though techniques have been demonstrated previously for sorting in microfluidic devices with optical or electro-osmotic switching, both of these techniques are expensive and more difficult to implement than pressure switching. This microfluidic cell sorter design also allows for easy integration with optical spectroscopy for identification of cell type. Our current microfluidic device was fabricated with polydimethylsiloxane (PDMS), a polymer that houses the channels, which is then chemically bonded to a glass slide. The flow of fluid through the device is controlled by pressure controllers, and the switching of the cells is accomplished with the use of a high performance pressure controller interfaced with a computer. The cells are fed through the channels with the use of hydrodynamic focusing techniques. Once the experimental setup is fully functional the objective will be to determine switching rates, explore techniques to optimize these rates, and experiment with sorting of other biomolecules including DNA.
Construction and Potential Applications of Biosensors for Proteins in Clinical Laboratory Diagnosis.
Liu, Xuan; Jiang, Hui
2017-12-04
Biosensors for proteins have shown attractive advantages compared to traditional techniques in clinical laboratory diagnosis. By virtue of modern fabrication modes and detection techniques, various immunosensing platforms have been reported on the basis of the specific recognition between antigen-antibody pairs. In addition, profiting from the development of nanotechnology and molecular biology, diverse fabrication and signal amplification strategies have been designed for detection of protein antigens, which has led to great achievements in fast quantitative and simultaneous testing with extremely high sensitivity and specificity. Besides antigens, determination of antibodies also possesses great significance for clinical laboratory diagnosis. In this review, we categorize recent immunosensors for proteins by their detection techniques. The basic conception of detection techniques, sensing mechanisms, and the relevant signal amplification strategies are introduced. Since antibodies and antigens have an equal position to each other in immunosensing, all biosensing strategies for antigens can be extended to antibodies under appropriate optimizations. Biosensors for antibodies are summarized, focusing on potential applications in clinical laboratory diagnosis, such as a series of biomarkers for infectious diseases and autoimmune diseases, and an evaluation of vaccine immunity. The excellent performances of these biosensors provide a prospective space for future antibody-detection-based disease serodiagnosis. PMID:29207528
The future of human DNA vaccines.
Li, Lei; Saade, Fadi; Petrovsky, Nikolai
2012-12-31
DNA vaccines have evolved greatly over the last 20 years since their invention, but have yet to become a competitive alternative to conventional protein or carbohydrate based human vaccines. Whilst safety concerns were an initial barrier, the Achilles heel of DNA vaccines remains their poor immunogenicity when compared to protein vaccines. A wide variety of strategies have been developed to optimize DNA vaccine immunogenicity, including codon optimization, genetic adjuvants, electroporation and sophisticated prime-boost regimens, with each of these methods having its advantages and limitations. Whilst each of these methods has contributed to incremental improvements in DNA vaccine efficacy, more is still needed if human DNA vaccines are to succeed commercially. This review foresees a final breakthrough in human DNA vaccines will come from application of the latest cutting-edge technologies, including "epigenetics" and "omics" approaches, alongside traditional techniques to improve immunogenicity such as adjuvants and electroporation, thereby overcoming the current limitations of DNA vaccines in humans. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Ushijima, T.; Yeh, W.
2013-12-01
An optimal experimental design algorithm is developed to select locations for a network of observation wells that provides the maximum information about unknown hydraulic conductivity in a confined, anisotropic aquifer. The design employs a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. Because the formulated problem is non-convex and contains integer variables (necessitating a combinatorial search), for a realistically scaled model the problem may be difficult, if not impossible, to solve through traditional mathematical programming techniques. Genetic Algorithms (GAs) are designed to search out the global optimum; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem may still be infeasible to solve. To overcome this, Proper Orthogonal Decomposition (POD) is applied to the groundwater model to reduce its dimension. The information matrix in the full model space can then be searched without solving the full model.
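The criterion itself is easy to state in code. The sketch below scores each candidate subset of wells by the sum of squared sensitivities of its rows in a sensitivity matrix and enumerates feasible subsets under a simple spacing constraint. The sensitivity matrix, spacing rule, and problem size are synthetic stand-ins, and the paper replaces this brute-force enumeration with a GA driven by a POD-reduced groundwater model.

```python
# Sketch: maximal-information well selection on a synthetic sensitivity matrix.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n_candidates, n_params, k, min_spacing = 12, 5, 3, 1.0

locations = rng.uniform(0.0, 10.0, size=n_candidates)    # 1-D candidate well positions
J = rng.normal(size=(n_candidates, n_params))             # sensitivities d(head)/d(logK)

def information(subset):
    # Design criterion: sum of squared sensitivities over the chosen wells.
    return np.sum(J[list(subset)] ** 2)

def feasible(subset):
    # Toy design constraint: wells must be at least min_spacing apart.
    pts = np.sort(locations[list(subset)])
    return np.all(np.diff(pts) >= min_spacing)

feasible_sets = [s for s in combinations(range(n_candidates), k) if feasible(s)]
best = max(feasible_sets, key=information)
print("selected wells:", best, "information:", round(information(best), 2))
```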
Multi Sensor Fusion Using Fitness Adaptive Differential Evolution
NASA Astrophysics Data System (ADS)
Giri, Ritwik; Ghosh, Arnob; Chowdhury, Aritra; Das, Swagatam
The rising popularity of multi-source, multi-sensor networks in support of real-life applications calls for an efficient and intelligent approach to information fusion. Traditional optimization techniques often fail to meet the demands. The evolutionary approach provides a valuable alternative due to its inherent parallel nature and its ability to deal with difficult problems. We present a new evolutionary approach based on a modified version of Differential Evolution (DE), called Fitness Adaptive Differential Evolution (FiADE). FiADE treats sensors in the network as distributed intelligent agents with various degrees of autonomy. Existing approaches based on intelligent agents cannot completely answer the question of how their agents could coordinate their decisions in a complex environment. The proposed approach is formulated to produce good results for problems that are high-dimensional, highly nonlinear, and random. The proposed approach gives better results for the optimal allocation of sensors. The performance of the proposed approach is compared with an evolutionary algorithm, the coordination generalized particle model (C-GPM).
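A minimal sketch of the fitness-adaptive idea is shown below: a standard DE/rand/1/bin loop in which each individual's scale factor grows with how far its objective value lies from the current best. The specific adaptation rule, control parameters, and the toy sensor-placement objective are assumptions for illustration; they are not the FiADE formulas or the paper's fusion problem.

```python
# Sketch: differential evolution with a fitness-adaptive scale factor.
import numpy as np

def fiade(objective, bounds, pop_size=30, generations=200, cr=0.9, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.apply_along_axis(objective, 1, pop)
    for _ in range(generations):
        best, worst = fit.min(), fit.max()
        for i in range(pop_size):
            # Fitness-adaptive F: poorer individuals take larger mutation steps.
            f_i = 0.2 + 0.6 * (fit[i] - best) / (worst - best + 1e-12)
            a, b, c = rng.choice(pop_size, size=3, replace=False)
            mutant = pop[a] + f_i * (pop[b] - pop[c])
            cross = rng.random(dim) < cr
            cross[rng.integers(dim)] = True            # force at least one gene
            trial = np.clip(np.where(cross, mutant, pop[i]), lo, hi)
            trial_fit = objective(trial)
            if trial_fit <= fit[i]:                    # greedy one-to-one selection
                pop[i], fit[i] = trial, trial_fit
    return pop[fit.argmin()], fit.min()

# Toy fusion-style objective: place 3 sensors (6 coordinates) to minimize
# the worst-case distance from a set of targets to their nearest sensor.
targets = np.array([[1.0, 1.0], [4.0, 0.5], [2.5, 4.0], [0.5, 3.5]])
def coverage_cost(x):
    sensors = x.reshape(3, 2)
    d = np.linalg.norm(targets[:, None, :] - sensors[None, :, :], axis=2)
    return d.min(axis=1).max()

x_best, f_best = fiade(coverage_cost, bounds=[(0, 5)] * 6)
print("best worst-case coverage distance:", round(f_best, 3))
```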
Belal, Mouhammad; Al-Mariri, Ayman; Hallab, Lila; Hamad, Ibtesam
2013-02-15
Cronobacter spp. (formerly Enterobacter sakazakii) is an emerging food-borne pathogen that causes severe meningitis, sepsis, and necrotizing enterocolitis in neonates and infants. These infections have been reported from different parts of the world. The epidemiology and reservoir of Cronobacter spp. are still unknown, and most strains have been isolated from clinical specimens and from a variety of foods, including cheese, meat, milk, vegetables, grains, spices, and herbs. Our study aimed to detect and isolate Cronobacter spp. from different Syrian samples of spices, medicinal herbs and liquorices, based on the pigment production and biochemical profiles of the isolates and a PCR technique. This PCR method, which provides a powerful tool for rapid, specific, and sensitive detection of Cronobacter spp., is considered a reliable alternative to traditional bacteriological methods. This study revealed that the percentage of Cronobacter spp. was 94%, 52%, and 32% in liquorice, spices and medicinal herbs, respectively. In addition, it showed that the optimal growth-enhancing temperature was 44 °C and the optimal growth-enhancing pH was 5.
Wagner, James M; Alper, Hal S
2016-04-01
Coupling the tools of synthetic biology with traditional molecular genetic techniques can enable the rapid prototyping and optimization of yeast strains. While the era of yeast synthetic biology began in the well-characterized model organism Saccharomyces cerevisiae, it is swiftly expanding to include non-conventional yeast production systems such as Hansenula polymorpha, Kluyveromyces lactis, Pichia pastoris, and Yarrowia lipolytica. These yeasts already have roles in the manufacture of vaccines, therapeutic proteins, food additives, and biorenewable chemicals, but recent synthetic biology advances have the potential to greatly expand and diversify their impact on biotechnology. In this review, we summarize the development of synthetic biological tools (including promoters and terminators) and enabling molecular genetics approaches that have been applied in these four promising alternative biomanufacturing platforms. An emphasis is placed on synthetic parts and genome editing tools. Finally, we discuss examples of synthetic tools developed in other organisms that can be adapted or optimized for these hosts in the near future. Copyright © 2015 Elsevier Inc. All rights reserved.
A Hybrid Approach for Efficient Modeling of Medium-Frequency Propagation in Coal Mines
Brocker, Donovan E.; Sieber, Peter E.; Waynert, Joseph A.; Li, Jingcheng; Werner, Pingjuan L.; Werner, Douglas H.
2015-01-01
An efficient procedure for modeling medium frequency (MF) communications in coal mines is introduced. In particular, a hybrid approach is formulated and demonstrated utilizing ideal transmission line equations to model MF propagation in combination with full-wave sections used for accurate simulation of local antenna-line coupling and other near-field effects. This work confirms that the hybrid method accurately models signal propagation from a source to a load for various system geometries and material compositions, while significantly reducing computation time. With such dramatic improvement to solution times, it becomes feasible to perform large-scale optimizations with the primary motivation of improving communications in coal mines both for daily operations and emergency response. Furthermore, it is demonstrated that the hybrid approach is suitable for modeling and optimizing large communication networks in coal mines that may otherwise be intractable to simulate using traditional full-wave techniques such as moment methods or finite-element analysis. PMID:26478686
Purification of an Inducible DNase from a Thermophilic Fungus
Landry, Kyle S.; Vu, Andrea; Levin, Robert E.
2014-01-01
The ability to induce an extracellular DNase from a novel thermophilic fungus was studied and the DNase was purified using both traditional and innovative purification techniques. The isolate produced sterile hyphae under all attempted growing conditions, with an average diameter of 2 μm, and was found to have an optimal growth temperature of 45 °C and a maximum of 65 °C. Sequencing of the internal transcribed region resulted in a 91% match with Chaetomium sp., suggesting a new species, but further clarification on this point is needed. The optimal temperature for DNase production was found to be 55 °C, and production was induced by the presence of DNA and/or deoxyribose. Static growth of the organism resulted in significantly higher DNase production than agitated growth. The DNase was purified 145-fold using a novel affinity membrane purification system with 25% of the initial enzyme activity remaining. Electrophoresis of the purified enzyme resulted in a single protein band, indicating DNase homogeneity. PMID:24447923
Salamone, Francesco; Danza, Ludovico; Meroni, Italo; Pollastro, Maria Cristina
2017-04-11
nEMoS (nano Environmental Monitoring System) is a 3D-printed device built following the Do-It-Yourself (DIY) approach. It can be connected to the web and it can be used to assess indoor environmental quality (IEQ). It is built using some low-cost sensors connected to an Arduino microcontroller board. The device is assembled in a small-sized case and both thermohygrometric sensors used to measure the air temperature and relative humidity, and the globe thermometer used to measure the radiant temperature, can be subject to thermal effects due to overheating of some nearby components. A thermographic analysis was made to rule out this possibility. The paper shows how the pervasive technique of additive manufacturing can be combined with the more traditional thermographic techniques to redesign the case and to verify the accuracy of the optimized system in order to prevent instrumental systematic errors in terms of the difference between experimental and actual values of the above-mentioned environmental parameters.
NASA Astrophysics Data System (ADS)
Xu, Fan; Wang, Jiaxing; Zhu, Daiyin; Tu, Qi
2018-04-01
Speckle noise has always been a particularly tricky problem in improving the ranging capability and accuracy of Lidar systems, especially in harsh environments. Currently, effective speckle de-noising techniques are extremely scarce and should be further developed. In this study, a speckle noise reduction technique is proposed based on independent component analysis (ICA). Since the shape of the laser pulse itself normally changes little, the authors employed the laser source as a reference pulse and executed the ICA decomposition to find the optimal matching position. In order to achieve the self-adaptability of the algorithm, a local Mean Square Error (MSE) has been defined as an appropriate criterion for evaluating the iteration results. The experimental results demonstrate that the self-adaptive pulse-matching ICA (PM-ICA) method can effectively decrease the speckle noise and recover the useful Lidar echo signal component with high quality. In particular, the proposed method achieves 4 dB more improvement of the signal-to-noise ratio (SNR) than a traditional homomorphic wavelet method.
Fundamentals and techniques of nonimaging optics research
NASA Astrophysics Data System (ADS)
Winston, R.; Ogallagher, J.
1987-07-01
Nonimaging Optics differs from conventional approaches in its relaxation of unnecessary constraints on energy transport imposed by the traditional methods for optimizing image formation, and in its use of more broadly based analytical techniques such as phase space representations of energy flow, radiative transfer analysis, thermodynamic arguments, etc. Based on these means, techniques for designing optical elements which approach and in some cases attain the maximum concentration permitted by the Second Law of Thermodynamics were developed. The most widely known of these devices are the family of Compound Parabolic Concentrators (CPCs) and their variants and the so-called Flow-Line or trumpet concentrator derived from the geometric vector flux formalism developed under this program. Application of these and other such ideal or near-ideal devices permits increases of typically a factor of four (though in some cases as much as an order of magnitude) in the concentration above that possible with conventional means. Present efforts can be classed into two main areas: (1) classical geometrical nonimaging optics, and (2) logical extensions of nonimaging concepts to the physical optics domain.
Fundamentals and techniques of nonimaging optics research at the University of Chicago
NASA Astrophysics Data System (ADS)
Winston, R.; Ogallagher, J.
1986-11-01
Nonimaging Optics differs from conventional approaches in its relaxation of unnecessary constraints on energy transport imposed by the traditional methods for optimizing image formation, and in its use of more broadly based analytical techniques such as phase space representations of energy flow, radiative transfer analysis, thermodynamic arguments, etc. Based on these means, techniques for designing optical elements which approach and in some cases attain the maximum concentration permitted by the Second Law of Thermodynamics were developed. The most widely known of these devices are the family of Compound Parabolic Concentrators (CPCs) and their variants and the so-called Flow-Line concentrator derived from the geometric vector flux formalism developed under this program. Application of these and other such ideal or near-ideal devices permits increases of typically a factor of four (though in some cases as much as an order of magnitude) in the concentration above that possible with conventional means. In the most recent phase, our efforts can be classed into two main areas: (a) classical geometrical nonimaging optics, and (b) logical extensions of nonimaging concepts to the physical optics domain.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Credille, Jennifer; Owens, Elizabeth
This capstone offers the introduction of Lean concepts to an office activity to demonstrate the versatility of Lean. Traditionally, Lean has been associated with process improvements as applied to an industrial atmosphere. However, this paper will demonstrate that implementing Lean concepts within an office activity can result in significant process improvements. Lean first emerged with the conception of the Toyota Production System. This innovative concept was designed to improve productivity in the automotive industry by eliminating waste and variation. Lean has also been applied to office environments; however, the limited literature reveals that most Lean applications within an office are restricted to one or two techniques. Our capstone confronts these restrictions by introducing a systematic approach that utilizes multiple Lean concepts. The approach incorporates system analysis, system reliability, system requirements, and system feasibility. The methodical Lean outline provides tools for a successful outcome, which ensures the process is thoroughly dissected and can be achieved for any process in any work environment.
Design of the smart home system based on the optimal routing algorithm and ZigBee network.
Jiang, Dengying; Yu, Ling; Wang, Fei; Xie, Xiaoxia; Yu, Yongsheng
2017-01-01
In this paper, we study the electric wiring, networking technology, information transmission and facility control of the traditional smart home system in order to improve it. First, ZigBee is used to replace the traditional electric wiring. Second, a network is built to connect many wireless sensors and facilities, thanks to the capability of the ZigBee self-organized network and a Genetic Algorithm-Particle Swarm Optimization Algorithm (GA-PSOA) that searches for the optimal route. Finally, when the smart home system is connected to the internet based on remote server technology, the home environment and facilities can be remotely controlled in real time. The experiments show that the GA-PSOA reduces the system delay and decreases the energy consumption of the wireless system.
Serra J. Hoagland
2017-01-01
Traditional ecological knowledge (TEK) has been recognized within indigenous communities for millennia; however, traditional ecological knowledge has received growing attention within the western science (WS) paradigm over the past twenty-five years. Federal agencies, national organizations, and university programs dedicated to natural resource management are beginning...
Chopped random-basis quantum optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caneva, Tommaso; Calarco, Tommaso; Montangero, Simone
2011-08-15
In this work, we describe in detail the chopped random basis (CRAB) optimal control technique recently introduced to optimize time-dependent density matrix renormalization group simulations [P. Doria, T. Calarco, and S. Montangero, Phys. Rev. Lett. 106, 190501 (2011)]. Here, we study the efficiency of this control technique in optimizing different quantum processes, and we show that in the considered cases we obtain results equivalent to those obtained via different optimal control methods while using fewer resources. We propose the CRAB optimization as a general and versatile optimal control technique.
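The essence of CRAB is to expand the control in a small, randomized truncated basis and optimize the handful of expansion coefficients with a gradient-free search; this can be illustrated on a toy problem. The sketch below drives a two-level system from |0> to |1> with a pulse written in three randomized Fourier modes, optimized by Nelder-Mead. The Hamiltonian, pulse duration, and basis size are illustrative choices, not those of the cited work.

```python
# Sketch: CRAB-style control of a toy two-level system (all settings assumed).
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
T, n_steps, n_modes = 3.0, 100, 3
times = np.linspace(0.0, T, n_steps)
rng = np.random.default_rng(4)
# Randomized ("chopped") frequencies around the principal harmonics.
freqs = 2 * np.pi * (np.arange(1, n_modes + 1) + rng.uniform(-0.5, 0.5, n_modes)) / T

def pulse(coeffs, t):
    a, b = coeffs[:n_modes], coeffs[n_modes:]
    return sum(a[k] * np.sin(freqs[k] * t) + b[k] * np.cos(freqs[k] * t)
               for k in range(n_modes))

def infidelity(coeffs):
    psi = np.array([1.0, 0.0], dtype=complex)        # start in |0>
    dt = T / n_steps
    for t in times:
        H = pulse(coeffs, t) * sx + 1.0 * sz          # drive plus fixed detuning
        psi = expm(-1j * H * dt) @ psi                # piecewise-constant propagation
    return 1.0 - abs(psi[1]) ** 2                     # goal: reach |1>

x0 = rng.uniform(-1.0, 1.0, size=2 * n_modes)
result = minimize(infidelity, x0, method="Nelder-Mead", options={"maxiter": 5000})
print("final infidelity:", round(result.fun, 4))
```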
Strategies for Fermentation Medium Optimization: An In-Depth Review
Singh, Vineeta; Haque, Shafiul; Niwas, Ram; Srivastava, Akansha; Pasupuleti, Mukesh; Tripathi, C. K. M.
2017-01-01
Optimization of the production medium is required to maximize the metabolite yield. This can be achieved by using a wide range of techniques, from the classical “one-factor-at-a-time” approach to modern statistical and mathematical techniques such as the artificial neural network (ANN) and genetic algorithm (GA). Every technique comes with its own advantages and disadvantages, and despite their drawbacks some techniques are still applied to obtain the best results. Using various optimization techniques in combination can also provide the desired results. In this article an attempt has been made to review the media optimization techniques currently applied during the fermentation process for metabolite production. A comparative analysis of the merits and demerits of various conventional as well as modern optimization techniques has been done, and a logical basis for selecting the design of the fermentation medium is given in the present review. Overall, this review provides the rationale for selecting a suitable optimization technique for media design during the fermentation process for metabolite production. PMID:28111566
New evidence favoring multilevel decomposition and optimization
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Polignone, Debra A.
1990-01-01
The issue of the utility of multilevel decomposition and optimization remains controversial. To date, only the structural optimization community has actively developed and promoted multilevel optimization techniques. However, even this community acknowledges that multilevel optimization is ideally suited for a rather limited set of problems. It is warned that decomposition typically requires eliminating local variables by using global variables and that this in turn causes ill-conditioning of the multilevel optimization by adding equality constraints. The purpose is to suggest a new multilevel optimization technique. This technique uses behavior variables, in addition to design variables and constraints, to decompose the problem. The new technique removes the need for equality constraints, simplifies the decomposition of the design problem, simplifies the programming task, and improves the convergence speed of multilevel optimization compared to conventional optimization.
NASA Astrophysics Data System (ADS)
Mayotte, Jean-Marc; Grabs, Thomas; Sutliff-Johansson, Stacy; Bishop, Kevin
2017-06-01
This study examined how the inactivation of bacteriophage MS2 in water was affected by ionic strength (IS) and dissolved organic carbon (DOC) using static batch inactivation experiments at 4 °C conducted over a period of 2 months. Experimental conditions were characteristic of an operational managed aquifer recharge (MAR) scheme in Uppsala, Sweden. Experimental data were fit with constant and time-dependent inactivation models using two methods: (1) traditional linear and nonlinear least-squares techniques; and (2) a Monte-Carlo based parameter estimation technique called generalized likelihood uncertainty estimation (GLUE). The least-squares and GLUE methodologies gave very similar estimates of the model parameters and their uncertainty. This demonstrates that GLUE can be used as a viable alternative to traditional least-squares parameter estimation techniques for fitting of virus inactivation models. Results showed a slight increase in constant inactivation rates following an increase in the DOC concentrations, suggesting that the presence of organic carbon enhanced the inactivation of MS2. The experiment with a high IS and a low DOC was the only experiment which showed that MS2 inactivation may have been time-dependent. However, results from the GLUE methodology indicated that models of constant inactivation were able to describe all of the experiments. This suggested that inactivation time-series longer than 2 months were needed in order to provide concrete conclusions regarding the time-dependency of MS2 inactivation at 4 °C under these experimental conditions.
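For readers unfamiliar with GLUE, the sketch below applies it to a constant-rate (first-order) inactivation model C(t) = C0 exp(-kt): sample candidate rate constants from a prior, score each with an informal likelihood, keep the "behavioral" sets, and report likelihood-weighted uncertainty bounds on k. The synthetic data, prior range, likelihood measure, and behavioral threshold are illustrative assumptions rather than values from the study.

```python
# Sketch: GLUE applied to a first-order virus inactivation model (synthetic data).
import numpy as np

rng = np.random.default_rng(5)
t_obs = np.array([0.0, 7.0, 14.0, 28.0, 42.0, 56.0])          # days
true_k, c0 = 0.08, 1.0e6
c_obs = c0 * np.exp(-true_k * t_obs) * rng.lognormal(0.0, 0.15, t_obs.size)

def model(k):
    return c0 * np.exp(-k * t_obs)

# 1. Monte Carlo sampling from a uniform prior on the rate constant.
k_samples = rng.uniform(0.0, 0.3, size=20000)

# 2. Informal likelihood: Nash-Sutcliffe efficiency on log concentrations.
def likelihood(k):
    resid = np.log10(model(k)) - np.log10(c_obs)
    denom = np.sum((np.log10(c_obs) - np.log10(c_obs).mean()) ** 2)
    return 1.0 - np.sum(resid ** 2) / denom

L = np.array([likelihood(k) for k in k_samples])

# 3. Keep behavioral parameter sets and weight them by likelihood.
behavioral = L > 0.9
k_b, w = k_samples[behavioral], L[behavioral]
w = w / w.sum()
order = np.argsort(k_b)
cdf = np.cumsum(w[order])
lo, hi = np.interp([0.05, 0.95], cdf, k_b[order])
print(f"behavioral sets: {behavioral.sum()}, 90% bounds on k: [{lo:.3f}, {hi:.3f}]")
```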
Three-dimensional circumferential liposuction of the overweight or obese upper arm.
Hong, Yoon Gi; Sim, Hyung Bo; Lee, Mu Young; Seo, Sang Won; Chang, Choong Hyun; Yeo, Kwan Koo; Kim, June-kyu
2012-06-01
Due to recent trends in liposuction, anatomic consideration of the body's fatty layers is essential. Based on this knowledge, a circumferential approach to achieving maximal aesthetic results is highlighted. In the upper arm, aspiration of fat from only the posterolateral region can result in skin flaccidity and disharmony of the overall balance of the upper arm contour. Different suction techniques were applied depending on the degree of fat accumulation. If necessary, the operation area was extended around the axillary and scapular regions to overcome the limitations of the traditional method and to achieve optimal effects. To maximize skin contracture and redraping, the authors developed three-dimensional circumferential liposuction (3D-CL) based on two concepts: circumferential aspiration of the upper arm, applying different fluid infiltration and liposuction techniques in three anatomic compartments (anteromedial, anterolateral, and posterolateral), and extension of liposuction to the periaxillary and parascapular areas. A total of 57 female patients underwent liposuction of their excess arm fat using this technique. The authors achieved their aesthetic goals of a straightened inferior brachial border and a more slender body contour. Complications occurred in five patients, including irregularity, incision-site scarring, and transient pigmentation. Through 3D-CL, the limitations of traditional upper arm liposuction were overcome, and a slender arm contour with a straightened inferior brachial border was produced.
Optimization of beam orientation in radiotherapy using planar geometry
NASA Astrophysics Data System (ADS)
Haas, O. C. L.; Burnham, K. J.; Mills, J. A.
1998-08-01
This paper proposes a new geometrical formulation of the coplanar beam orientation problem combined with a hybrid multiobjective genetic algorithm. The approach is demonstrated by optimizing the beam orientation in two dimensions, with the objectives being formulated using planar geometry. The traditional formulation of the objectives associated with the organs at risk has been modified to account for the use of complex dose delivery techniques such as beam intensity modulation. The new algorithm attempts to replicate the approach of a treatment planner whilst reducing the amount of computation required. Hybrid genetic search operators have been developed to improve the performance of the genetic algorithm by exploiting problem-specific features. The multiobjective genetic algorithm is formulated around the concept of Pareto optimality which enables the algorithm to search in parallel for different objectives. When the approach is applied without constraining the number of beams, the solution produces an indication of the minimum number of beams required. It is also possible to obtain non-dominated solutions for various numbers of beams, thereby giving the clinicians a choice in terms of the number of beams as well as in the orientation of these beams.
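The Pareto-optimality bookkeeping at the heart of such a multiobjective search is easy to isolate. In the sketch below, random sets of coplanar beam angles are scored on two deliberately crude objectives, a target-coverage term and an organ-at-risk proximity term, and only the non-dominated candidates are retained. A random search stands in for the paper's hybrid genetic algorithm, and the planar dose model is a placeholder, not the paper's formulation.

```python
# Sketch: non-dominated (Pareto) filtering of candidate beam-angle sets.
import numpy as np

rng = np.random.default_rng(6)
target = np.array([0.0, 0.0])          # tumour centre in the plane
oar = np.array([3.0, 1.0])             # organ-at-risk centre

def objectives(angles):
    dirs = np.column_stack([np.cos(angles), np.sin(angles)])
    # Toy objective 1: beams spread evenly around the target (lower is better).
    spread = np.linalg.norm(dirs.sum(axis=0))
    # Toy objective 2: total proximity of the beam axes to the organ at risk.
    rel = oar - target
    dist = np.abs(dirs[:, 0] * rel[1] - dirs[:, 1] * rel[0])   # line-to-point distances
    oar_dose = np.sum(1.0 / (1.0 + dist))
    return np.array([spread, oar_dose])            # both minimized

def pareto_front(scores):
    keep = []
    for i, s in enumerate(scores):
        dominated = any(np.all(o <= s) and np.any(o < s)
                        for j, o in enumerate(scores) if j != i)
        if not dominated:
            keep.append(i)
    return keep

candidates = [np.sort(rng.uniform(0, 2 * np.pi, size=5)) for _ in range(300)]
scores = np.array([objectives(a) for a in candidates])
front = pareto_front(scores)
print(f"{len(front)} non-dominated beam sets out of {len(candidates)}")
```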
Development of Advanced Methods of Structural and Trajectory Analysis for Transport Aircraft
NASA Technical Reports Server (NTRS)
Ardema, Mark D.; Windhorst, Robert; Phillips, James
1998-01-01
This paper develops a near-optimal guidance law for generating minimum-fuel, minimum-time, or minimum-cost fixed-range trajectories for supersonic transport aircraft. The approach uses a choice of new state variables along with singular perturbation techniques to time-scale decouple the dynamic equations into multiple equations of single order (second order for the fast dynamics). Application of the maximum principle to each of the decoupled equations, as opposed to application to the original coupled equations, avoids the two-point boundary value problem and transforms the problem from one of functional optimization to one of multiple function optimizations. It is shown that such an approach produces well-known aircraft performance results such as minimizing the Breguet factor for minimum fuel consumption and the energy climb path. Furthermore, the new state variables produce a consistent calculation of flight path angle along the trajectory, eliminating one of the deficiencies in the traditional energy state approximation. In addition, jumps in the energy climb path are smoothed out by integration of the original dynamic equations at constant load factor. Numerical results performed for a supersonic transport design show that a pushover dive followed by a pullout at nominal load factors are sufficient maneuvers to smooth the jump.
Measurement of the main and critical parameters for optimal laser treatment of heart disease
NASA Astrophysics Data System (ADS)
Kabeya, FB; Abrahamse, H.; Karsten, AE
2017-10-01
Laser light is frequently used in the diagnosis and treatment of patients. As with traditional treatments such as medication, bypass surgery, and minimally invasive procedures, laser treatment can also fail and produce serious side effects. The true reason for laser treatment failure, or for the side effects thereof, remains unknown. From the literature review conducted and the experimental results generated, we conclude that an optimal laser treatment for coronary artery disease (heart disease) can be obtained if certain critical parameters are correctly measured and understood. These parameters include the laser power, the laser beam profile, the fluence rate, the treatment time, as well as the absorption and scattering coefficients of the target treatment tissue. Therefore, this paper proposes different, accurate methods for the measurement of these critical parameters to determine the optimal laser treatment of heart disease with a minimal risk of side effects. The results from the measurement of absorption and scattering properties can be used in a computer simulation package to predict the fluence rate. The computational technique is a Monte Carlo program that uses random sampling and probability statistics to track the propagation of photons through biological tissue.
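A bare-bones version of such a Monte Carlo photon-transport calculation is sketched below: photons step through a homogeneous semi-infinite tissue with path lengths drawn from the total attenuation coefficient, deposit the absorbed fraction of their weight in depth bins, and scatter isotropically. The optical coefficients, slab geometry, and isotropic phase function are placeholder assumptions; real tissue simulations typically add anisotropic (Henyey-Greenstein) scattering and boundary handling.

```python
# Sketch: weighted-photon Monte Carlo transport in a homogeneous tissue slab.
import numpy as np

rng = np.random.default_rng(7)
mu_a, mu_s = 0.3, 10.0               # absorption / scattering coefficients [1/cm] (assumed)
mu_t = mu_a + mu_s
n_photons, n_bins, dz = 2000, 50, 0.02
absorbed = np.zeros(n_bins)          # depth-resolved absorbed weight

for _ in range(n_photons):
    pos = np.zeros(3)
    direction = np.array([0.0, 0.0, 1.0])       # launched straight into the tissue
    weight = 1.0
    while weight > 1e-3:
        step = -np.log(rng.random()) / mu_t      # free path length
        pos = pos + step * direction
        if pos[2] < 0.0:                         # photon escaped back through the surface
            break
        k = int(pos[2] / dz)                     # deposit absorbed fraction in a depth bin
        if k < n_bins:
            absorbed[k] += weight * mu_a / mu_t
        weight *= mu_s / mu_t
        # Isotropic scattering: new direction uniform on the sphere.
        cos_t = 2.0 * rng.random() - 1.0
        phi = 2.0 * np.pi * rng.random()
        sin_t = np.sqrt(1.0 - cos_t ** 2)
        direction = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

fluence = absorbed / (n_photons * mu_a * dz)     # relative fluence per depth bin
print("relative fluence in first five depth bins:", np.round(fluence[:5], 3))
```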
Design of controlled elastic and inelastic structures
NASA Astrophysics Data System (ADS)
Reinhorn, A. M.; Lavan, O.; Cimellaro, G. P.
2009-12-01
One of the founders of structural control theory and its application in civil engineering, Professor Emeritus Tsu T. Soong, envisioned the development of the integral design of structures protected by active control devices. Most of his disciples and colleagues continuously attempted to develop procedures to achieve such integral control. In his recent papers published jointly with some of the authors of this paper, Professor Soong developed design procedures for the entire structure using a design — redesign procedure applied to elastic systems. Such a procedure was developed as an extension of other work by his disciples. This paper summarizes some recent techniques that use traditional active control algorithms to derive the most suitable (optimal, stable) control force, which could then be implemented with a combination of active, passive and semi-active devices through a simple match or more sophisticated optimal procedures. Alternative design can address the behavior of structures using Liapunov stability criteria. This paper shows a unified procedure which can be applied to both elastic and inelastic structures. Although the implementation does not always preserve the optimal criteria, it is shown that the solutions are effective and practical for design of supplemental damping, stiffness enhancement or softening, and strengthening or weakening.
Lin, Cheng Yu; Kikuchi, Noboru; Hollister, Scott J
2004-05-01
An often-proposed tissue engineering design hypothesis is that the scaffold should provide a biomimetic mechanical environment for initial function and appropriate remodeling of regenerating tissue while concurrently providing sufficient porosity for cell migration and cell/gene delivery. To provide a systematic study of this hypothesis, the ability to precisely design and manufacture biomaterial scaffolds is needed. Traditional methods for scaffold design and fabrication cannot provide the control over scaffold architecture design to achieve specified properties within fixed limits on porosity. The purpose of this paper was to develop a general design optimization scheme for 3D internal scaffold architecture to match desired elastic properties and porosity simultaneously, by introducing the homogenization-based topology optimization algorithm (also known as general layout optimization). With an initial target for bone tissue engineering, we demonstrate that the method can produce highly porous structures that match human trabecular bone anisotropic stiffness using accepted biomaterials. In addition, we show that anisotropic bone stiffness may be matched with scaffolds of widely different porosity. Finally, we also demonstrate that prototypes of the designed structures can be fabricated using solid free-form fabrication (SFF) techniques.
Forest control and regulation ... a comparison of traditional methods and alternatives
LeRoy C. Hennes; Michael J. Irving; Daniel I. Navon
1971-01-01
Two traditional techniques of forest control and regulation - formulas and area-volume check - are compared to linear programming, as used in a new computerized planning system called the Timber Resource Allocation Method (Timber RAM). Inventory data from a National Forest in California illustrate how each technique is used. The traditional methods are simpler to apply and...
Advances in fragment-based drug discovery platforms.
Orita, Masaya; Warizaya, Masaichi; Amano, Yasushi; Ohno, Kazuki; Niimi, Tatsuya
2009-11-01
Fragment-based drug discovery (FBDD) has been established as a powerful alternative and complement to traditional high-throughput screening techniques for identifying drug leads. At present, this technique is widely used among academic groups as well as small biotech and large pharmaceutical companies. In recent years, > 10 new compounds developed with FBDD have entered clinical development, and more and more attention in the drug discovery field is being focused on this technique. Under the FBDD approach, a fragment library of relatively small compounds (molecular mass = 100 - 300 Da) is screened by various methods and the identified fragment hits which normally weakly bind to the target are used as starting points to generate more potent drug leads. Because FBDD is still a relatively new drug discovery technology, further developments and optimizations in screening platforms and fragment exploitation can be expected. This review summarizes recent advances in FBDD platforms and discusses the factors important for the successful application of this technique. Under the FBDD approach, both identifying the starting fragment hit to be developed and generating the drug lead from that starting fragment hit are important. Integration of various techniques, such as computational technology, X-ray crystallography, NMR, surface plasmon resonance, isothermal titration calorimetry, mass spectrometry and high-concentration screening, must be applied in a situation-appropriate manner.
Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission
NASA Astrophysics Data System (ADS)
Huang, Yuechen; Li, Haiyang
2018-06-01
This paper presents a reliability-based sequential optimization (RBSO) method to solve the trajectory optimization problem with parametric uncertainties in the entry dynamics for a Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, a modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method enables the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and approximates the trajectory solution efficiently. The MPP method, which assesses the reliability of constraint satisfaction only up to the necessary level, is employed to further improve the computational efficiency. The cycle including SO, reliability assessment and constraint updates is repeated in the RBSO until the reliability requirements of constraint satisfaction are met. Finally, the RBSO is compared with the traditional DO and with traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and efficiency of the proposed method.
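To show what "nonintrusive" means in practice, the sketch below builds a polynomial chaos surrogate for a single Gaussian uncertain parameter by sampling a toy model, fitting probabilists'-Hermite coefficients by least squares, and then using the cheap surrogate for statistics and a constraint-violation probability. The one-parameter "entry model", polynomial degree, and constraint limit are illustrative assumptions; the paper couples a multi-parameter PCE with an MPP search inside the trajectory optimization loop.

```python
# Sketch: nonintrusive PCE surrogate for one Gaussian uncertain parameter.
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(8)

def expensive_model(xi):
    # Toy stand-in: peak deceleration [g] as a nonlinear function of an
    # uncertain atmospheric-density multiplier rho = exp(0.1 * xi).
    rho = np.exp(0.1 * xi)
    return 8.0 * rho + 0.8 * rho ** 2

# 1. Nonintrusive step: evaluate the model at a modest number of sample points.
degree, n_train = 4, 30
xi_train = rng.standard_normal(n_train)
y_train = expensive_model(xi_train)

# 2. Least-squares fit of the HermiteE (probabilists' Hermite) coefficients.
coeffs, *_ = np.linalg.lstsq(hermevander(xi_train, degree), y_train, rcond=None)

def surrogate(xi):
    return hermevander(np.atleast_1d(xi), degree) @ coeffs

# 3. Cheap statistics and a reliability estimate from the surrogate alone.
xi_mc = rng.standard_normal(200000)
g = surrogate(xi_mc)
limit = 10.0                                      # assumed deceleration constraint [g]
print(f"mean = {g.mean():.3f} g, std = {g.std():.3f} g, "
      f"P(violation) = {np.mean(g > limit):.4f}")
```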
NASA Astrophysics Data System (ADS)
Khaira, Gurdaman Singh
Rapid progress in the semiconductor industry has pushed for smaller feature sizes on integrated electronic circuits. Current photo-lithographic techniques for nanofabrication have reached their technical limit and are problematic when printing features small enough to meet future industrial requirements. "Bottom-up" techniques, such as the directed self-assembly (DSA) of block copolymers (BCP), are the primary contenders to complement current "top-down" photo-lithography techniques. For industrial requirements, the defect density from DSA needs to be less than 1 defect per 10 cm by 10 cm. Knowledge of both material synthesis and the thermodynamics of the self-assembly process is required before optimal operating conditions can be found to produce results adequate for industry. The work presented in this thesis is divided into three chapters, each discussing various aspects of DSA as studied via a molecular model that contains the essential physics of BCP self-assembly. Though there are various types of guiding fields that can be used to direct BCPs over large wafer areas with minimum defects, this study focuses only on chemically patterned substrates. The first chapter addresses optimal pattern design by describing a framework where molecular simulations of various complexities are coupled with an advanced optimization technique to find a pattern that directs a target morphology. It demonstrates the first study in which BCP self-assembly on a patterned substrate is optimized using a three-dimensional description of the block copolymers. For problems pertaining to DSA, the methodology is shown to converge much faster than the traditional random search approach. The second chapter discusses the metrology of BCP thin films using TEM tomography and X-ray scattering techniques, such as CDSAXS and GISAXS. X-ray scattering has the advantage of being able to quickly probe the average structure of BCP morphologies over large wafer areas; however, deducing the BCP morphology from the information in inverse space is a challenging task. Using the optimization techniques and molecular simulations discussed in the first chapter, a methodology to reconstruct BCP morphology from X-ray scattering data is described. It is shown that only a handful of simulation parameters that come directly from experiment are able to describe the morphologies observed from real X-ray scattering experiments. The last chapter focuses on the use of solvents to assist the self-assembly of BCPs. Additional functionality to capture the process of solvent annealing is also discussed. The bulk behavior of solvated mixtures of BCPs with solvents of various affinities is described, and the results are consistent with the experimentally observed behavior of BCPs in the presence of solvents.
Endoscopic versus traditional saphenous vein harvesting: a prospective, randomized trial.
Allen, K B; Griffith, G L; Heimansohn, D A; Robison, R J; Matheny, R G; Schier, J J; Fitzgerald, E B; Shaar, C J
1998-07-01
Saphenous vein harvested with a traditional longitudinal technique often results in leg wound complications. An alternative endoscopic harvest technique may decrease these complications. One hundred twelve patients scheduled for elective coronary artery bypass grafting were prospectively randomized to have vein harvested using either an endoscopic (group A, n = 54) or traditional technique (group B, n = 58). Groups A and B, respectively, were similar with regard to length of vein harvested (41 ± 8 cm versus 40 ± 14 cm), bypasses done (4.1 ± 1.1 versus 4.2 ± 1.4), age, preoperative risk stratification, and risks for wound complication (diabetes, sex, obesity, preoperative anemia, hypoalbuminemia, and peripheral vascular disease). Leg wound complications were significantly (p ≤ 0.02) reduced in group A (4% [2 of 51] versus 19% [11 of 58]). Univariate analysis identified traditional incision (p ≤ 0.02) and diabetes (p ≤ 0.05) as wound complication risk factors. Multiple logistic regression analysis identified only the traditional harvest technique as a risk factor for leg wound complications, with no significant interaction between harvest technique and any preoperative risk factor (p ≤ 0.03). Harvest rate (0.9 ± 0.4 cm/min versus 1.2 ± 0.5 cm/min) was slower for group A (p ≤ 0.02), and conversion from endoscopic to a traditional harvest occurred in 5.6% (3 of 54) of patients. In a prospective, randomized trial, saphenous vein harvested endoscopically was associated with fewer wound complications than the traditional longitudinal method.
Conceptual design optimization study
NASA Technical Reports Server (NTRS)
Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.
1990-01-01
The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.
NASA Technical Reports Server (NTRS)
Ricks, Wendell R.; Abbott, Kathy H.
1987-01-01
To the software design community, the concern over the costs associated with a program's execution time and implementation is great. It is always desirable, and sometimes imperative, that the proper programming technique is chosen which minimizes all costs for a given application or type of application. A study is described that compared cost-related factors associated with traditional programming techniques to rule-based programming techniques for a specific application. The results of this study favored the traditional approach regarding execution efficiency, but favored the rule-based approach regarding programmer productivity (implementation ease). Although this study examined a specific application, the results should be widely applicable.
Airline Maintenance Manpower Optimization from the De Novo Perspective
NASA Astrophysics Data System (ADS)
Liou, James J. H.; Tzeng, Gwo-Hshiung
Human resource management (HRM) is an important issue for today’s competitive airline marketing. In this paper, we discuss a multi-objective model designed from the De Novo perspective to help airlines optimize their maintenance manpower portfolio. The effectiveness of the model and solution algorithm is demonstrated in an empirical study of the optimization of the human resources needed for airline line maintenance. Both De Novo and traditional multiple objective programming (MOP) methods are analyzed. A comparison of the results with those of traditional MOP indicates that the proposed model and solution algorithm does provide better performance and an improved human resource portfolio.
Simunovic, Marko; Coates, Angela; Smith, Andrew; Thabane, Lehana; Goldsmith, Charles H; Levine, Mark N
2013-12-01
Theory suggests the uptake of a medical innovation is influenced by how potential adopters perceive innovation characteristics and by characteristics of potential adopters. Innovation adoption is slow among the first 20% of individuals in a target group and then accelerates. The Quality Initiative in Rectal Cancer (QIRC) trial assessed if rectal cancer surgery outcomes could be improved through surgeon participation in the QIRC strategy. We tested if traditional uptake of innovation concepts applied to surgeons in the experimental arm of the trial. The QIRC strategy included workshops, access to opinion leaders, intraoperative demonstrations, postoperative questionnaires, and audit and feedback. For intraoperative demonstrations, a participating surgeon invited an outside surgeon to demonstrate optimal rectal surgery techniques. We used surgeon timing in a demonstration to differentiate early and late adopters of the QIRC strategy. Surgeons completed surveys on perceptions of the strategy and personal characteristics. Nineteen of 56 surgeons (34%) requested an operative demonstration on their first case of rectal surgery. Early and late adopters had similar perceptions of the QIRC strategy and similar characteristics. Late adopters were less likely than early adopters to perceive an advantage for the surgical techniques promoted by the trial (p = 0.023). Most traditional diffusion of innovation concepts did not apply to surgeons in the QIRC trial, with the exception of the importance of perceptions of comparative advantage.
NASA Technical Reports Server (NTRS)
Ricks, Wendell R.; Abbott, Kathy H.
1987-01-01
A traditional programming technique for controlling the display of optional flight information in a civil transport cockpit is compared to a rule-based technique for the same function. This application required complex decision logic and a frequently modified rule base. The techniques are evaluated for execution efficiency and ease of implementation; the criterion used to calculate execution efficiency is the total number of steps required to isolate the hypotheses that were true, and the criteria used to evaluate implementability are ease of modification and verification, and explanation capability. It is observed that the traditional program is more efficient than the rule-based program; however, the rule-based programming technique is better suited to improving programmer productivity.
Thermally evaporated conformal thin films on non-traditional/non-planar substrates
NASA Astrophysics Data System (ADS)
Pulsifer, Drew Patrick
Conformal thin films have a wide variety of uses in the microelectronics, optics, and coatings industries. The ever-increasing capabilities of these conformal thin films have enabled tremendous technological advancement in the last half century. During this period, new thin-film deposition techniques have been developed and refined. While these techniques have remarkable performance for traditional applications which utilize planar substrates such as silicon wafers, they are not suitable for the conformal coating of non-traditional substrates such as biological material. The process of thermally evaporating a material under vacuum conditions is one of the oldest thin-film deposition techniques which is able to produce functional film morphologies. A drawback of thermally evaporated thin films is that they are not intrinsically conformal. To overcome this, while maintaining the advantages of thermal evaporation, a procedure for varying the substrate's orientation with respect to the incident vapor flux during deposition was developed immediately prior to the research undertaken for this doctoral dissertation. This process was shown to greatly improve the conformality of thermally evaporated thin films. This development allows for several applications of thermally evaporated conformal thin films on non-planar/non-traditional substrates. Three settings in which to evaluate the improved conformal deposition of thermally evaporated thin films were investigated for this dissertation. In these settings the thin-film morphologies are of different types. In the first setting, a bioreplication approach was used to fabricate artificial visual decoys for the invasive species Agrilus planipennis, commonly known as the emerald ash borer (EAB). The mating behavior of this species involves an overflying EAB male pouncing on an EAB female at rest on an ash leaflet before copulation. The male spots the female on the leaflet by visually detecting the iridescent green color of the female's elytra. As rearing EAB and then deploying dead females as decoys is both arduous and inconvenient, the development of an artificial decoy would be of great interest to entomologists and foresters. A dead female EAB was used to make a negative die of nickel and a positive die of epoxy. The process of fabricating the paired dies utilized thermally evaporated conformal thin films in several critical steps. In order to conformally coat the EAB with nickel, the substrate stage holding the female EAB was periodically rocked and rotated during the deposition. This process was designed to result in a uniform thin film of ~500-nm thickness with dense morphology. The nickel film was then reinforced through an electroforming process and mounted in a fixture which allowed it to be heated electrically. The corresponding positive die was replicated from the negative die through a series of successive castings. The final EAB positive die was fabricated from a hard epoxy material and attached to a fixture which allowed it to be heated while being pressed into the negative die. Decoys were then made by first depositing a quarter-wave-stack Bragg reflector on a polymer sheet and then stamping it with the pair of matched negative and positive dies to take the shape of the upper surface of an EAB female. As nearly 100 decoys were fabricated from just one EAB female, this bioreplication process is industrially scalable. Preliminary results from field trapping tests are indicative of success.
For the second setting, a method of developing latent fingermarks with thermally evaporated conformal thin films was developed. Fingermarks have long been used to identify the individual who left them behind when he/she touched an object with the friction ridges of his/her hands. In many cases the fingermark which is left behind consists of sebaceous secretions which are not clearly visible under normal conditions. In order to make the fingermarks visible and identifiable, they are traditionally developed by either a physical technique, which relies on a material preferentially sticking to sebaceous materials, or a chemical technique, which relies on a reaction with material within the fingermark. In this application, a columnar thin film (CTF) is deposited conformally over both the fingermark and the underlying substrate. The CTF is produced by the conformal-evaporated-film-by-rotation method, wherein the substrate with the fingermark upon it is held obliquely with respect to a vapor flux in a vacuum chamber. The substrate is then rapidly rotated about its surface normal, resulting in a conformal film with columnar morphology. This technique was optimized for several substrates and compared with traditional development techniques. CTF development was found to be superior to traditional techniques in several cases. Use of the CTF was investigated for several types of particularly difficult-to-develop fingermarks, such as those which consist of both bloody and nonbloody areas, and fingermarks on fired cartridge casings. The CTF technique's sensitivity was also compared to that of traditional development techniques. Finally, the CTF technique was compared with another thin-film deposition technique called vacuum metal deposition. (Abstract shortened by UMI.)
Summary of Optimization Techniques That Can Be Applied to Suspension System Design
DOT National Transportation Integrated Search
1973-03-01
Summaries are presented of the analytic techniques available for three levitated vehicle suspension optimization problems: optimization of passive elements for fixed configuration; optimization of a free passive configuration; optimization of a free ...
Erva, Rajeswara Reddy; Goswami, Ajgebi Nath; Suman, Priyanka; Vedanabhatla, Ravali; Rajulapati, Satish Babu
2017-03-16
The culture conditions and nutritional rations influencing the production of extracellular antileukemic enzyme by the novel Enterobacter aerogenes KCTC2190/MTCC111 were optimized in shake-flask culture. Process variables such as pH, temperature, incubation time, carbon and nitrogen sources, inducer concentration, and inoculum size were taken into account. In the present study, the highest enzyme activity achieved by the traditional one-variable-at-a-time method was 7.6 IU/mL, a 2.6-fold increase compared to the initial value. The L-asparaginase production was then further optimized using response surface methodology, and the validated experimental result at the optimized process variables gave 18.35 IU/mL of L-asparaginase activity, which is 2.4 times higher than that of the traditional optimization approach. The study established E. aerogenes MTCC111 as a potent bacterial source for high yields of the antileukemic drug.
Lom, Barbara
2012-01-01
The traditional science lecture, where an instructor delivers a carefully crafted monolog to a large audience of students who passively receive the information, has been a popular mode of instruction for centuries. Recent evidence on the science of teaching and learning indicates that learner-centered, active teaching strategies can be more effective learning tools than traditional lectures. Yet most colleges and universities retain lectures as their central instructional method. This article highlights several simple collaborative teaching techniques that can be readily deployed within traditional lecture frameworks to promote active learning. Specifically, this article briefly introduces the techniques of: reader’s theatre, think-pair-share, roundtable, jigsaw, in-class quizzes, and minute papers. Each technique is broadly applicable well beyond neuroscience courses and easily modifiable to serve an instructor’s specific pedagogical goals. The benefits of each technique are described along with specific examples of how each technique might be deployed within a traditional lecture to create more active learning experiences. PMID:23494568
Issues and Strategies in Solving Multidisciplinary Optimization Problems
NASA Technical Reports Server (NTRS)
Patnaik, Surya
2013-01-01
Optimization research at NASA Glenn Research Center has addressed the design of structures, aircraft, and airbreathing propulsion engines. The accumulated multidisciplinary design activity is collected under a testbed entitled COMETBOARDS. Several issues were encountered during the solution of the problems. Four issues and the strategies adopted for their resolution are discussed. This is followed by a discussion of analytical methods that is limited to structural design applications. An optimization process can lead to an inefficient local solution. This deficiency was encountered during design of an engine component. The limitation was overcome through an augmentation of animation into optimization. Optimum solutions obtained were infeasible for aircraft and airbreathing propulsion engine problems. Alleviation of this deficiency required a cascading of multiple algorithms. Profile optimization of a beam produced an irregular shape. Engineering intuition restored the regular shape for the beam. The solution obtained for a cylindrical shell by a subproblem strategy converged to a design that can be difficult to manufacture. Resolution of this issue remains a challenge. The issues and resolutions are illustrated through a set of problems: design of an engine component, synthesis of a subsonic aircraft, operation optimization of a supersonic engine, design of a wave-rotor-topping device, profile optimization of a cantilever beam, and design of a cylindrical shell. This chapter provides a cursory account of the issues; cited references provide detailed discussion of the topics. The design of a structure can also be generated by the traditional method and by the stochastic design concept. Merits and limitations of the three methods (traditional method, optimization method, and stochastic concept) are illustrated. In the traditional method, the constraints are manipulated to obtain the design and the weight is back-calculated. In design optimization, the weight of the structure becomes the merit function, constraints are imposed on the failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and the solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions can be produced by all three methods. The variation in the weight calculated by the methods was found to be modest. Some variation was noticed in the designs calculated by the methods; this variation may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure. Weight can be reduced to a small value for the most failure-prone design. Probabilistic modeling of loads and material properties remained a challenge.
Optimal Stand Management: Traditional and Neotraditional Solutions
Karen Lee Abt; Jeffrey P. Prestemon
2003-01-01
The traditional Faustmann (1849) model has served as the foundation of economic theory of the firm for the forestry production process. Since its introduction over 150 years ago, many variations of the Faustmann have been developed which relax certain assumptions of the traditional model, including constant prices, risk neutrality, zero production and management costs...
Topology-changing shape optimization with the genetic algorithm
NASA Astrophysics Data System (ADS)
Lamberson, Steven E., Jr.
The goal is to take a traditional shape optimization problem statement and modify it slightly to allow for prescribed changes in topology. This modification enables greater flexibility in the choice of parameters for the topology optimization problem, while improving the direct physical relevance of the results. The modification changes the optimization problem statement from a nonlinear programming problem into a form of mixed-discrete nonlinear programming problem. The present work demonstrates one possible way of using the Genetic Algorithm (GA) to solve such a problem, including the use of "masking bits" and a new modification to the bit-string affinity (BSA) termination criterion specifically designed for problems with "masking bits." A simple ten-bar truss problem demonstrates the utility of the modified BSA for this type of problem. A more complicated two-dimensional bracket problem is solved using both the proposed approach and a more traditional topology optimization approach (Solid Isotropic Microstructure with Penalization, or SIMP) to enable comparison. The proposed approach is able to solve problems with both local and global constraints, which traditional methods cannot do. The proposed approach carries a significantly higher computational burden, on the order of 100 times larger than SIMP, although it is able to offset this with parallel computing.
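As background for the termination criterion mentioned above, the following is a minimal sketch, under assumed conventions, of how a bit-string affinity measure over a binary GA population could be computed while treating masking bits separately; the function name, agreement formula, and threshold are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

def bit_string_affinity(population: np.ndarray, mask_cols: np.ndarray) -> float:
    """Average per-bit agreement across a binary GA population.

    For each bit position, measure how strongly the population agrees
    (0.0 at a 50/50 split, 1.0 when unanimous) and average over positions.
    Columns flagged in mask_cols (the "masking bits" that switch structural
    members on or off) are excluded here purely for illustration.
    """
    ones_fraction = population.mean(axis=0)        # per-bit frequency of 1s
    agreement = np.abs(2.0 * ones_fraction - 1.0)  # 0 at 50/50, 1 if unanimous
    return float(agreement[~mask_cols].mean())

# Toy usage: 20 individuals, 12-bit chromosomes, last 2 bits are masking bits.
rng = np.random.default_rng(0)
pop = rng.integers(0, 2, size=(20, 12))
mask = np.zeros(12, dtype=bool)
mask[-2:] = True
if bit_string_affinity(pop, mask) > 0.9:           # hypothetical threshold
    print("terminate: population has converged")
else:
    print("keep evolving")
```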
Progress technology in microencapsulation methods for cell therapy.
Rabanel, Jean-Michel; Banquy, Xavier; Zouaoui, Hamza; Mokhtar, Mohamed; Hildgen, Patrice
2009-01-01
Cell encapsulation in microcapsules allows the in situ delivery of secreted proteins to treat different pathological conditions. Spherical microcapsules offer an optimal surface-to-volume ratio for protein and nutrient diffusion, and thus, cell viability. This technology permits cell survival along with protein secretion activity upon appropriate host stimuli, without the deleterious effects of immunosuppressant drugs. Microcapsules can be classified into 3 categories: matrix-core/shell microcapsules, liquid-core/shell microcapsules, and cells-core/shell microcapsules (or conformal coating). Many preparation techniques using natural or synthetic polymers as well as inorganic compounds have been reported. The matrix-core/shell approach, in which cells are hydrogel-embedded, exemplified by alginate capsules, is by far the most studied method. Numerous refinements of the technique have been proposed over the years, such as better material characterization and purification, improvements in microbead generation methods, and new microbead coating techniques. Other approaches, based on liquid-core capsules, showed improved protein production and increased cell survival. Aside from those more traditional techniques, new techniques are emerging in response to the shortcomings of existing methods. More recently, direct cell aggregate coating has been proposed to minimize membrane thickness and implant size. Microcapsule performance is largely dictated by the physicochemical properties of the materials and the preparation techniques employed. Despite numerous promising pre-clinical results, at the present time each proposed method needs further improvement before reaching the clinical phase. (c) 2009 American Institute of Chemical Engineers Biotechnol. Prog., 2009.
The patient relationship and therapeutic techniques of the South Sotho traditional healer.
Pinkoane, M G; Greeff, M; Williams, M J S
2005-11-01
Until 1996 the practice of traditional healers was outlawed in South Africa and not afforded a legal position in the community of health care providers. In 1978 the World Health Organization (WHO) identified traditional healers as those people forming an essential core of primary health care workers for rural people in Third World countries. However, in 1994 the new South African government identified traditional healers as forming an essential element of primary health care workers. It is estimated that 80% of the black population uses traditional medicine because it is deeply rooted in their culture, which is linked to their religion. The traditional healer shares with the patient a world view which is completely alien to biomedical personnel. Therapeutic techniques typically used in traditional healing conflict with the therapeutic techniques used in biomedicine. The patients' perceptions of traditional healing, and their needs and expectations, may be the driving force behind their continued persistence in consulting a traditional healer, even after these patients may have sought the therapeutic techniques of biomedical personnel. The operation of both systems in the same society creates a problem for both providers and recipients of health care. Confusion then arises, and the consumer consequently chooses the services closer to her. The researcher aimed to investigate the characteristics of the relationship between the traditional healers and the patients, explored the therapeutic techniques that are used in the South Sotho traditional healing process, and investigated the views of both the traditional healers and the patients about the South Sotho traditional healing process, to facilitate incorporation of the traditional healers in the National Health Care Delivery System. A qualitative research design was followed. Participants were identified by means of a non-probable, purposive voluntary sample. Data were collected by means of a video camera and semi-structured interviews with the six traditional healers and twelve patients, as well as by taking field notes after each session. Data analysis was achieved by means of a checklist for the video recordings, and decoding was done for the interviews. A co-coder and the researcher analysed the data independently, after which three consensus discussions took place to finalise the analysed data. The researcher made conclusions, identified shortcomings, and made recommendations for application to nursing education, nursing research and nursing practice. The recommendations for nursing are reflected in the form of guidelines for the incorporation of the traditional healers in the National Health Care Delivery System.
Optimal Energy Management for a Smart Grid using Resource-Aware Utility Maximization
NASA Astrophysics Data System (ADS)
Abegaz, Brook W.; Mahajan, Satish M.; Negeri, Ebisa O.
2016-06-01
Heterogeneous energy prosumers are aggregated to form a smart grid based energy community managed by a central controller which could maximize their collective energy resource utilization. Using the central controller and distributed energy management systems, various mechanisms that harness the power profile of the energy community are developed for optimal, multi-objective energy management. The proposed mechanisms include resource-aware, multi-variable energy utility maximization objectives, namely: (1) maximizing the net green energy utilization, (2) maximizing the prosumers' level of comfortable, high quality power usage, and (3) maximizing the economic dispatch of energy storage units that minimize the net energy cost of the energy community. Moreover, an optimal energy management solution that combines the three objectives has been implemented by developing novel techniques of optimally flexible (un)certainty projection and appliance-based pricing decomposition in IBM ILOG CPLEX Studio. Real-world, per-minute data from an energy community consisting of forty prosumers in Amsterdam, Netherlands, are used. Results show that each of the proposed mechanisms yields significant increases in the aggregate energy resource utilization and welfare of prosumers as compared to traditional peak-power reduction methods. Furthermore, the multi-objective, resource-aware utility maximization approach leads to an optimal energy equilibrium and provides a sustainable energy management solution, as verified by the Lagrangian method. The proposed resource-aware mechanisms could directly benefit emerging energy communities around the world in attaining their energy resource utilization targets.
Structural Optimization of a Force Balance Using a Computational Experiment Design
NASA Technical Reports Server (NTRS)
Parker, P. A.; DeLoach, R.
2002-01-01
This paper proposes a new approach to force balance structural optimization featuring a computational experiment design. Currently, this multi-dimensional design process requires the designer to perform a simplification by executing parameter studies on a small subset of design variables. This one-factor-at-a-time approach varies a single variable while holding all others at a constant level. Consequently, subtle interactions among the design variables, which can be exploited to achieve the design objectives, go undetected. The proposed method combines Modern Design of Experiments techniques to direct the exploration of the multi-dimensional design space, and a finite element analysis code to generate the experimental data. To efficiently search for an optimum combination of design variables and minimize the computational resources, a sequential design strategy was employed. Experimental results from the optimization of a non-traditional force balance measurement section are presented. An approach to overcome the unique problems associated with the simultaneous optimization of multiple response criteria is described. A quantitative single-point design procedure that reflects the designer's subjective impression of the relative importance of various design objectives, and a graphical multi-response optimization procedure that provides further insights into available tradeoffs among competing design objectives, are illustrated. The proposed method enhances the intuition and experience of the designer by providing new perspectives on the relationships between the design variables and the competing design objectives, providing a systematic foundation for advancements in structural design.
Denehy, L; Carroll, S; Ntoumenopoulos, G; Jenkins, S
2001-01-01
Physiotherapists use a variety of techniques aimed at improving lung volumes and secretion clearance in patients after surgery. Periodic continuous positive airway pressure (PCPAP) is used to treat patients following elective upper abdominal surgery. However, the optimal method of application has not been identified, more specifically, the dosage of application of PCPAP. The present randomized controlled trial compared the effects of two dosages of PCPAP application and 'traditional' physiotherapy upon functional residual capacity (FRC), vital capacity (VC), oxyhaemoglobin saturation (SpO2), incidence of post-operative pulmonary complications and length of stay with a control group receiving 'traditional' physiotherapy only. Fifty-seven subjects were randomly allocated to one of three groups. All groups received 'traditional' physiotherapy twice daily for a minimum of three post-operative days. In addition, two groups received PCPAP for 15 or 30 minutes, four times per day, for three days. Fifty subjects (39 male; 11 female) completed the study. There were no significant differences in any variables between the three groups. The overall incidence of post-operative pulmonary complications was 22% in the control group, 11% and 6% in the PCPAP 15-minute and PCPAP 30-minute groups, respectively. Length of hospital stay was not significantly different between the groups but for subjects who developed post-operative pulmonary complications, the length of stay was significantly greater (Z = -2.32; p = 0.021). The addition of PCPAP to a traditional physiotherapy post-operative treatment regimen after upper abdominal surgery did not significantly affect physiological or clinical outcomes.
ERIC Educational Resources Information Center
Walsh, Jeffrey A.; Braithwaite, Jeremy
2008-01-01
This work, drawing on the literature on alcohol consumption, sexual behavior, and researching sensitive topics, tests the efficacy of the unmatched-count technique (UCT) in establishing higher rates of truthful self-reporting when compared to traditional survey techniques. Traditional techniques grossly underestimate the scope of problems…
NASA Astrophysics Data System (ADS)
Ushijima, Timothy T.; Yeh, William W.-G.
2013-10-01
An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.
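To make the design criterion concrete, the following toy sketch scores candidate observation-well subsets by the sum of squared sensitivities and picks the best by exhaustive enumeration; the sensitivity matrix is invented, and a realistic problem would require the paper's GA with POD model reduction rather than brute force.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n_candidates, n_parameters = 12, 4
# Hypothetical sensitivity matrix: d(head at candidate well i)/d(pumping parameter j).
S = rng.normal(size=(n_candidates, n_parameters))

def design_score(wells):
    """Maximal-information criterion from the abstract: sum of squared sensitivities."""
    return float((S[list(wells), :] ** 2).sum())

k = 3  # number of observation wells to place
best = max(combinations(range(n_candidates), k), key=design_score)
print("best wells:", best, "score:", round(design_score(best), 3))
```

With 12 candidate locations and 3 wells this enumeration covers only 220 designs; the combinatorial growth for realistic grids is what motivates the GA search and the POD-reduced model in the study above.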
Optimizing area under the ROC curve using semi-supervised learning.
Wang, Shijun; Li, Diana; Petrick, Nicholas; Sahiner, Berkman; Linguraru, Marius George; Summers, Ronald M
2015-01-01
Receiver operating characteristic (ROC) analysis is a standard methodology to evaluate the performance of a binary classification system. The area under the ROC curve (AUC) is a performance metric that summarizes how well a classifier separates two classes. Traditional AUC optimization techniques are supervised learning methods that utilize only labeled data (i.e., the true class is known for all data) to train the classifiers. In this work, inspired by semi-supervised and transductive learning, we propose two new AUC optimization algorithms hereby referred to as semi-supervised learning receiver operating characteristic (SSLROC) algorithms, which utilize unlabeled test samples in classifier training to maximize AUC. Unlabeled samples are incorporated into the AUC optimization process, and their ranking relationships to labeled positive and negative training samples are considered as optimization constraints. The introduced test samples will cause the learned decision boundary in a multidimensional feature space to adapt not only to the distribution of labeled training data, but also to the distribution of unlabeled test data. We formulate the semi-supervised AUC optimization problem as a semi-definite programming problem based on the margin maximization theory. The proposed methods SSLROC1 (1-norm) and SSLROC2 (2-norm) were evaluated using 34 (determined by power analysis) randomly selected datasets from the University of California, Irvine machine learning repository. Wilcoxon signed rank tests showed that the proposed methods achieved significant improvement compared with state-of-the-art methods. The proposed methods were also applied to a CT colonography dataset for colonic polyp classification and showed promising results.
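For readers unfamiliar with the metric being optimized, the sketch below shows how AUC reduces to pairwise ranking of positive scores against negative scores (the Wilcoxon-Mann-Whitney view); the SSLROC constraints are built on such ranking relationships, but this toy function is not the authors' implementation.

```python
import numpy as np

def auc_from_scores(scores_pos, scores_neg):
    """AUC as the fraction of (positive, negative) pairs ranked correctly,
    with ties counted as half (the Wilcoxon-Mann-Whitney statistic)."""
    sp = np.asarray(scores_pos, dtype=float)[:, None]
    sn = np.asarray(scores_neg, dtype=float)[None, :]
    return float(np.mean((sp > sn) + 0.5 * (sp == sn)))

# Toy classifier scores: 18 of the 20 (positive, negative) pairs are ordered correctly.
print(auc_from_scores([0.9, 0.8, 0.55, 0.4], [0.7, 0.3, 0.2, 0.1, 0.05]))  # 0.9
```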
[Comparative trial between traditional cesarean section and Misgav-Ladach technique].
Gutiérrez, José Gabriel Tamayo; Coló, José Antonio Sereno; Arreola, María Sandra Huape
2008-02-01
The cesarean section was designed to extract the newborn when childbirth becomes difficult by the natural routes. Institutional obstetrical work demands long surgical times and high consumption of materials; therefore, simpler procedures must be implemented. The objective was to compare the traditional cesarean section with the Misgav-Ladach technique in terms of surgical time, hospital stay, and costs. Forty-eight pregnant patients at term with an obstetrical indication for cesarean delivery were randomized into two groups: 24 underwent traditional cesarean section and 24 the Misgav-Ladach technique. The outcomes included surgical time, bleeding, amount of suture material employed, pain intensity, and other adverse effects. The surgical time with the Misgav-Ladach technique was shorter compared with the traditional cesarean section, bleeding was consistently lower, and pain was also lower. No adverse effects were registered in either group. Although the short follow-up showed a significant reduction in operative time and less bleeding, a longer follow-up would be desirable in order to confirm the absence of abdominal adhesions.
Vivekanandan, T; Sriman Narayana Iyengar, N Ch
2017-11-01
Enormous data growth in multiple domains has posed a great challenge for data processing and analysis techniques. In particular, the traditional record maintenance strategy has been replaced in the healthcare system. It is vital to develop a model that is able to handle the huge amount of e-healthcare data efficiently. In this paper, the challenging tasks of selecting critical features from the enormous set of available features and diagnosing heart disease are carried out. Feature selection is one of the most widely used pre-processing steps in classification problems. A modified differential evolution (DE) algorithm is used to perform feature selection for cardiovascular disease and optimization of selected features. Of the 10 available strategies for the traditional DE algorithm, the seventh strategy, which is represented by DE/rand/2/exp, is considered for comparative study. The performance analysis of the developed modified DE strategy is given in this paper. With the selected critical features, prediction of heart disease is carried out using fuzzy AHP and a feed-forward neural network. Various performance measures of integrating the modified differential evolution algorithm with fuzzy AHP and a feed-forward neural network in the prediction of heart disease are evaluated in this paper. The accuracy of the proposed hybrid model is 83%, which is higher than that of some other existing models. In addition, the prediction time of the proposed hybrid model is also evaluated and has shown promising results. Copyright © 2017 Elsevier Ltd. All rights reserved.
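As a point of reference for the DE/rand/2/exp strategy named above, here is a compact sketch of its mutation and exponential crossover applied to a continuous vector that is then thresholded into a feature mask; the constants, threshold, and thresholding step are assumptions for illustration and differ from the paper's modified DE.

```python
import numpy as np

rng = np.random.default_rng(42)
F, CR, dim = 0.5, 0.9, 13            # assumed DE constants; 13 candidate features

def de_rand_2_exp(pop, i):
    """Build one DE/rand/2/exp trial vector for target individual i.

    Mutation:  v = x_r1 + F*(x_r2 - x_r3) + F*(x_r4 - x_r5)   (the 'rand/2' part)
    Crossover: exponential (copy a contiguous run of mutant genes into the target).
    """
    idx = rng.choice([j for j in range(len(pop)) if j != i], size=5, replace=False)
    r1, r2, r3, r4, r5 = pop[idx]
    v = r1 + F * (r2 - r3) + F * (r4 - r5)
    trial = pop[i].copy()
    j, L = rng.integers(dim), 0        # random start of the copied segment
    while True:                        # exponential crossover loop
        trial[(j + L) % dim] = v[(j + L) % dim]
        L += 1
        if rng.random() >= CR or L >= dim:
            break
    return trial

pop = rng.random((20, dim))                      # population of continuous vectors
mask = de_rand_2_exp(pop, 0) > 0.5               # threshold into a binary feature mask
print("selected feature indices:", np.where(mask)[0])
```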
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shekar, Venkateswaran; Fiondella, Lance; Chatterjee, Samrat
Transportation networks are critical to the social and economic function of nations. Given the continuing increase in the populations of cities throughout the world, the criticality of transportation infrastructure is expected to increase. Thus, it is ever more important to mitigate congestion as well as to assess the impact disruptions would have on individuals who depend on transportation for their work and livelihood. Moreover, several government organizations are responsible for ensuring transportation networks are available despite the constant threat of natural disasters and terrorist activities. Most of the previous transportation network vulnerability research has been performed in the context of static traffic models, many of which are formulated as traditional optimization problems. However, transportation networks are dynamic because their usage varies over time. Thus, more appropriate methods to characterize the vulnerability of transportation networks should consider their dynamic properties. This paper presents a quantitative approach to assess the vulnerability of a transportation network to disruptions with methods from traffic simulation. Our approach can prioritize the critical links over time and is generalizable to the case where both link and node disruptions are of concern. We illustrate the approach through a series of examples. Our results demonstrate that the approach provides quantitative insight into the time-varying criticality of links. Such an approach could be used as the objective function of less traditional optimization methods that use simulation and other techniques to evaluate the relative utility of a particular network defense to reduce vulnerability and increase resilience.
López Martín, M Beatriz; Erice Calvo-Sotelo, Alejo
To compare presurgical hand hygiene with hydroalcoholic solution following the WHO protocol with traditional presurgical hand hygiene. Cultures of the hands of surgeons and surgical nurses were performed before and after presurgical hand hygiene and after removing gloves at the end of surgery. Cultures were done on 2 different days: the first day after traditional presurgical hand hygiene, and the second day after presurgical hand hygiene with hydroalcoholic solution following the WHO protocol. The duration of the traditional hand hygiene was measured and compared with the duration (3 min) of the WHO protocol. The cost of the products used in the traditional technique was compared with the cost of the hydroalcoholic solution used. The variability of the traditional technique was determined by observation. Following presurgical hand hygiene with hydroalcoholic solution, colony-forming units (CFU) were detected in 5 (7.3%) subjects, whereas after traditional presurgical hand hygiene CFU were detected in 14 subjects (20.5%) (p < 0.05). After glove removal, the numbers of CFU were similar. The time employed in hand hygiene with hydroalcoholic solution (3 min) was shorter than the time employed in the traditional technique (p < 0.05), its cost was less than half, and there was no variability. Compared with other techniques, presurgical hand hygiene with hydroalcoholic solution significantly decreases CFU, has a similar latency time, a lower cost, and saves time. Copyright © 2017 Elsevier España, S.L.U. All rights reserved.
Optimization of enhanced coal-bed methane recovery using numerical simulation
NASA Astrophysics Data System (ADS)
Perera, M. S. A.; Ranjith, P. G.; Ranathunga, A. S.; Koay, A. Y. J.; Zhao, J.; Choi, S. K.
2015-02-01
Although the enhanced coal-bed methane (ECBM) recovery process is one of the potential coal-bed methane production enhancement techniques, its effectiveness is greatly dependent on the seam and injecting-gas properties. This study therefore aimed to obtain a comprehensive understanding of all possible major ECBM process-enhancing techniques by developing a novel 3D numerical model of a typical coal seam using the COMET 3 reservoir simulator. Interestingly, according to the results of the model, the generally accepted concept that CO2 injection yields greater CBM (coal-bed methane) production enhancement than the traditional water removal technique is true only for high CO2 injection pressures. Generally, the ECBM process can be accelerated by using increased CO2 injection pressures and reduced temperatures, which are mainly related to coal seam pore space expansion and reduced CO2 adsorption capacity, respectively. The model shows the negative influences of increased coal seam depth and moisture content on ECBM process optimization, due to the reduced pore space under these conditions. However, the injection pressure plays the dominant role in the process optimization. Although the addition of a small amount of N2 to the injected CO2 can greatly enhance the methane production process, the safe N2 percentage in the injection gas should be carefully predetermined, as it causes early breakthrough of CO2 and N2 in the methane production well. An increased number of production wells may not have a significant influence on long-term CH4 production (50 years for the selected coal seam), although it significantly enhances short-term CH4 production (10 years for the selected coal seam). Interestingly, increasing the number of injection and production wells may have a negative influence on CBM production due to the coincidence of the pressure contours created by each well and the mixing of injected CO2 with CH4.
Townsend, F I; Ralphs, S C; Coronado, G; Sweet, D C; Ward, J; Bloch, C P
2012-01-01
To compare the hydro-surgical technique with traditional techniques for removal of subcutaneous tissue in the preparation of full-thickness skin grafts. Ex vivo experimental study and a single clinical case report. Four canine cadavers and a single clinical case. Four sections of skin were harvested from the lateral flank of recently euthanatized dogs. Traditional preparation methods included both a blade and a scissors technique, each of which was compared to the hydro-surgical technique individually. Preparation methods were compared based on the length of time for removal of the subcutaneous tissue from the graft, histologic grading, and measurable thickness as compared to an untreated sample. The hydro-surgical technique had the shortest skin graft preparation time compared with the traditional techniques (p = 0.002). There was no significant difference in histological grading or measurable subcutaneous thickness between skin specimens. The hydro-surgical technique provides rapid, effective debridement of subcutaneous tissue in the preparation of full-thickness skin grafts. There were no significant changes in histological grade or remaining subcutaneous tissue among treatment types. Additionally, the hydro-surgical technique was successfully used to prepare a full-thickness meshed free skin graft in the reconstruction of a traumatic medial tarsal wound in a dog.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gong, Zhenhuan; Boyuka, David; Zou, X
The size and scope of cutting-edge scientific simulations are growing much faster than the I/O and storage capabilities of their run-time environments. The growing gap is exacerbated by exploratory, data-intensive analytics, such as querying simulation data with multivariate, spatio-temporal constraints, which induces heterogeneous access patterns that stress the performance of the underlying storage system. Previous work addresses data layout and indexing techniques to improve query performance for a single access pattern, which is not sufficient for complex analytics jobs. We present PARLO, a parallel run-time layout optimization framework, to achieve multi-level data layout optimization for scientific applications at run-time before data is written to storage. The layout schemes optimize for heterogeneous access patterns with user-specified priorities. PARLO is integrated with ADIOS, a high-performance parallel I/O middleware for large-scale HPC applications, to achieve user-transparent, light-weight layout optimization for scientific datasets. It offers simple XML-based configuration for users to achieve flexible layout optimization without the need to modify or recompile application codes. Experiments show that PARLO improves performance by 2 to 26 times for queries with heterogeneous access patterns compared to state-of-the-art scientific database management systems. Compared to traditional post-processing approaches, its underlying run-time layout optimization achieves a 56% savings in processing time and a reduction in storage overhead of up to 50%. PARLO also exhibits a low run-time resource requirement, while also limiting the performance impact on running applications to a reasonable level.
Mixed Integer Programming and Heuristic Scheduling for Space Communication
NASA Technical Reports Server (NTRS)
Lee, Charles H.; Cheung, Kar-Ming
2013-01-01
An optimal planning and scheduling approach for a communication network was created in which the nodes within the network communicate at the highest possible rates while meeting the mission requirements and operational constraints. The planning and scheduling problem was formulated in the framework of Mixed Integer Programming (MIP); a special penalty function was introduced to convert the MIP problem into a continuous optimization problem, and the constrained optimization problem was solved using heuristic optimization. The communication network consists of space and ground assets, with the link dynamics between any two assets varying with respect to time, distance, and telecom configurations. One asset could be communicating with another at very high data rates at one time, while at other times communication is impossible, as the asset could be inaccessible from the network due to planetary occultation. Based on the network's geometric dynamics and link capabilities, the start time, end time, and link configuration of each view period are selected to maximize the communication efficiency within the network. Mathematical formulations for the constrained mixed integer optimization problem were derived, and efficient analytical and numerical techniques were developed to find the optimal solution. By setting up the problem using MIP, the search space for the optimization problem is reduced significantly, thereby speeding up the solution process. The ratio of the dimension of the traditional method over the proposed formulation is approximately of order N (single) to 2*N (arraying), where N is the number of receiving antennas of a node. By introducing the special penalty function, the MIP problem with a non-differentiable cost function and nonlinear constraints can be converted into a continuous-variable problem, whose solution is possible.
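The penalty idea described above can be illustrated with a deliberately small sketch: the 0/1 link-selection variables are relaxed to [0, 1] and a penalty term drives them back toward integer values so a continuous solver can be applied. The toy objective, penalty form, weights, and solver choice are all assumptions, not the paper's formulation, and a heuristic solve like this may only reach a local optimum.

```python
import numpy as np
from scipy.optimize import minimize

# Toy scheduling-flavoured problem: choose at most 2 of 4 candidate link
# configurations (x_i ideally in {0,1}) to maximize total data return.
rates = np.array([3.0, 5.0, 2.0, 4.0])      # hypothetical data volumes per link
max_links = 2
mu = 50.0                                    # penalty weight

def cost(x):
    data_return = rates @ x
    integrality = np.sum(x * (1.0 - x))      # zero only when every x_i is 0 or 1
    over_budget = max(0.0, x.sum() - max_links)
    return -data_return + mu * integrality + mu * over_budget ** 2

res = minimize(cost, x0=np.full(4, 0.5), bounds=[(0.0, 1.0)] * 4, method="L-BFGS-B")
print("relaxed solution:", np.round(res.x, 2),
      "-> chosen links:", np.where(res.x > 0.5)[0])
```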
Shao, Shi-Cheng; Burgess, Kevin S.; Cruse-Sanders, Jennifer M.; Liu, Qiang; Fan, Xu-Li; Huang, Hui; Gao, Jiang-Yun
2017-01-01
Due to increasing demand for medicinal and horticultural uses, the Orchidaceae is in urgent need of innovative and novel propagation techniques that address both market demand and conservation. Traditionally, restoration techniques have been centered on ex situ asymbiotic or symbiotic seed germination techniques that are not cost-effective, have limited genetic potential and often result in low survival rates in the field. Here, we propose a novel in situ advanced restoration-friendly program for the endangered epiphytic orchid species Dendrobium devonianum, in which a series of in situ symbiotic seed germination trials based on conspecific fungal isolates were conducted at two sites in Yunnan Province, China. We found that percentage germination varied among treatments and locations; control treatments (no inoculum) did not germinate at either site. The optimal treatment, which had the highest in situ seed germination rate (0.94-1.44%) with no significant variation among sites, provided a warm, moist and fixed site that allowed for light penetration. When accounting for seed density, percentage germination was highest (2.35-2.78%) at low densities and did not vary among locations for the treatment that provided optimal conditions. Similarly, for the same treatment, seed germination ranged from 0.24 to 5.87% among seasons but also varied among sites. This study reports on the cultivation and restoration of an endangered epiphytic orchid species by in situ symbiotic seed germination and is likely to have broad application to the horticulture and conservation of the Orchidaceae. PMID:28638388
Role of direct bioautographic method for detection of antistaphylococcal activity of essential oils.
Horváth, Györgyi; Jámbor, Noémi; Kocsis, Erika; Böszörményi, Andrea; Lemberkovics, Eva; Héthelyi, Eva; Kovács, Krisztina; Kocsis, Béla
2011-09-01
The aim of the present study was the chemical characterization of some traditionally used and therapeutically relevant essential oils (thyme, eucalyptus, cinnamon bark, clove, and tea tree) and the optimized microbiological investigation of the effect of these oils on clinical isolates of methicillin-resistant Staphylococcus aureus (MRSA) and methicillin-susceptible S. aureus (MSSA). The chemical composition of the oils was analyzed by TLC and controlled by gas chromatography (GC) and gas chromatography/mass spectrometry (GC/MS). The antibacterial effect was investigated using a TLC-bioautographic method. Antibacterial activity of thyme, clove and cinnamon oils, as well as of their main components (thymol, carvacrol, eugenol, and cinnamic aldehyde), was observed against all the bacterial strains used in this study. The essential oils of eucalyptus and tea tree showed weak activity in the bioautographic system. On the whole, the antibacterial activity of the essential oils could be related to their most abundant components, but the effect of the minor components should also be taken into consideration. Direct bioautography is more cost-effective than, and compares favourably with, traditional microbiological laboratory methods (e.g. disc-diffusion, agar-plate technique).
Sampling limits for electron tomography with sparsity-exploiting reconstructions.
Jiang, Yi; Padgett, Elliot; Hovden, Robert; Muller, David A
2018-03-01
Electron tomography (ET) has become a standard technique for 3D characterization of materials at the nano-scale. Traditional reconstruction algorithms such as weighted back projection suffer from disruptive artifacts with insufficient projections. Popularized by compressed sensing, sparsity-exploiting algorithms have been applied to experimental ET data and show promise for improving reconstruction quality or reducing the total beam dose applied to a specimen. Nevertheless, theoretical bounds for these methods have been less explored in the context of ET applications. Here, we perform numerical simulations to investigate the performance of ℓ1-norm and total-variation (TV) minimization under various imaging conditions. From 36,100 different simulated structures, our results show that specimens with more complex structures generally require more projections for exact reconstruction. However, once sufficient data are acquired, dividing the beam dose over more projections provides no improvement, analogous to the traditional dose-fraction theorem. Moreover, a limited tilt range of ±75° or less can result in distorting artifacts in sparsity-exploiting reconstructions. The influence of optimization parameters on reconstructions is also discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
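As a concrete, low-dimensional illustration of what "sparsity-exploiting" means here, the sketch below recovers a sparse 1D signal from fewer random linear measurements than unknowns by ℓ1-regularized least squares solved with ISTA; a real ET reconstruction would replace the random matrix with a projection operator over tilt angles and work on 2D/3D images, so every parameter here is a toy assumption.

```python
import numpy as np

rng = np.random.default_rng(7)
n, m, k = 200, 80, 8                         # unknowns, measurements, non-zeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 3.0 * rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)     # stand-in for a projection operator
y = A @ x_true

lam = 0.05                                   # l1 regularization weight
L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):                         # ISTA: gradient step + soft-threshold
    z = x - A.T @ (A @ x - y) / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```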
Yang, Yu; Strickland, Zackary; Kapalavavi, Brahmam; Marple, Ronita; Gamsky, Chris
2011-03-15
In this work, chromatographic separation of niacin and niacinamide using pure water as the sole component in the mobile phase has been investigated. The separation and analysis of niacinamide have been optimized using three columns at different temperatures and various flow rates. Our results clearly demonstrate that separation and analysis of niacinamide from skincare products can be achieved using pure water as the eluent at 60°C on a Waters XTerra MS C18 column, a Waters XBridge C18 column, or at 80°C on a Hamilton PRP-1 column. The separation efficiency, quantification quality, and analysis time of this new method are at least comparable with those of the traditional HPLC methods. Compared with traditional HPLC, the major advantage of this newly developed green chromatography technique is the elimination of organic solvents required in the HPLC mobile phase. In addition, the pure water chromatography separations described in this work can be directly applied in industrial plant settings without further modification of the existing HPLC equipment. Copyright © 2011 Elsevier B.V. All rights reserved.
Broadband metasurface holograms: toward complete phase and amplitude engineering
Wang, Qiu; Zhang, Xueqian; Xu, Yuehong; Gu, Jianqiang; Li, Yanfeng; Tian, Zhen; Singh, Ranjan; Zhang, Shuang; Han, Jiaguang; Zhang, Weili
2016-01-01
As a revolutionary three-dimensional imaging technique, holography has attracted wide attention for its ability to photographically record a light field. However, traditional phase-only or amplitude-only modulation holograms offer limited image quality and resolution because they cannot reproduce both the amplitude and the phase information of the objects. Recent advances in metasurfaces have shown tremendous opportunities for using a planar design of artificial meta-atoms to shape the wave front of light by optimal control of both its phase and amplitude. Inspired by the concept of designer metasurfaces, we demonstrate a novel amplitude-phase modulation hologram with simultaneous five-level amplitude modulation and eight-level phase modulation. Such a design approach seeks to turn the perceived disadvantages of traditional phase-only or amplitude-only holograms into advantages, and thus enables enhanced performance in resolution, homogeneity of amplitude distribution, precision, and signal-to-noise ratio. In particular, the unique holographic approach exhibits broadband characteristics. The method introduced here delivers more degrees of freedom and allows for encoding highly complex information into designer metasurfaces, thus having the potential to drive next-generation technological breakthroughs in holography. PMID:27615519
Majer-Baranyi, Krisztina; Zalán, Zsolt; Mörtl, Mária; Juracsek, Judit; Szendrő, István; Székács, András; Adányi, Nóra
2016-11-15
Optical waveguide lightmode spectroscopy (OWLS) has been applied to the label-free detection of aflatoxin B1 in a competitive immunoassay format, with the aim of comparing the analytical performance of the developed OWLS immunosensor with HPLC and enzyme-linked immunosorbent assay (ELISA) methods for the detection of aflatoxin in a spice paprika matrix. We also assessed the applicability of the QuEChERS method prior to ELISA measurements, and the results were compared to those obtained by traditional solvent extraction followed by immunoaffinity clean-up. The AFB1 content of sixty commercial spice paprika samples from different countries was measured with the developed and optimized OWLS immunosensor. Comparing the results from the indirect immunosensor to those obtained by HPLC or ELISA provided excellent correlation (with regression coefficients above 0.94), indicating that the competitive OWLS immunosensor has potential for the quick determination of aflatoxin B1 in paprika samples. Copyright © 2016 Elsevier Ltd. All rights reserved.
Exploring a Multiphysics Resolution Approach for Additive Manufacturing
NASA Astrophysics Data System (ADS)
Estupinan Donoso, Alvaro Antonio; Peters, Bernhard
2018-06-01
Metal additive manufacturing (AM) is a fast-evolving technology aiming to efficiently produce complex parts while saving resources. Worldwide, active research is being performed to solve the existing challenges of this growing technique. Constant computational advances have enabled multiscale and multiphysics numerical tools that complement the traditional physical experimentation. In this contribution, an advanced discrete-continuous concept is proposed to address the physical phenomena involved during laser powder bed fusion. The concept treats powder as discrete by the extended discrete element method, which predicts the thermodynamic state and phase change for each particle. The fluid surrounding is solved with multiphase computational fluid dynamics techniques to determine momentum, heat, gas and liquid transfer. Thus, results track the positions and thermochemical history of individual particles in conjunction with the prevailing fluid phases' temperature and composition. It is believed that this methodology can be employed to complement experimental research by analysis of the comprehensive results, which can be extracted from it to enable AM processes optimization for parts qualification.
Krylov Deferred Correction Accelerated Method of Lines Transpose for Parabolic Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jia, Jun; Jingfang, Huang
2008-01-01
In this paper, a new class of numerical methods for the accurate and efficient solution of parabolic partial differential equations is presented. Unlike the traditional method of lines (MoL), the new Krylov deferred correction (KDC) accelerated method of lines transpose (MoL^T) first discretizes the temporal direction using Gaussian-type nodes and spectral integration, and symbolically applies low-order time marching schemes to form a preconditioned elliptic system, which is then solved iteratively using Newton-Krylov techniques such as the Newton-GMRES or Newton-BiCGStab method. Each function evaluation in the Newton-Krylov method is simply one low-order time-stepping approximation of the error, obtained by solving a decoupled system using available fast elliptic equation solvers. Preliminary numerical experiments show that the KDC accelerated MoL^T technique is unconditionally stable, can be spectrally accurate in both temporal and spatial directions, and allows optimal time-step sizes in long-time simulations.
NASA Astrophysics Data System (ADS)
Milani, Gabriele; Shehu, Rafael; Valente, Marco
2017-11-01
This paper investigates the effectiveness of reducing the seismic vulnerability of masonry towers by means of innovative and traditional strengthening techniques. The strategy followed for providing the optimal retrofitting of masonry towers subjected to seismic risk relies on preventing active failure mechanisms. These vulnerable mechanisms are pre-assigned failure patterns based on the crack patterns experienced during past seismic events. An upper-bound limit analysis strategy is found to be suitable for application to simplified tower models both in their present state and in the proposed retrofitted configurations. Taking into consideration the variability of geometrical features and the uncertainty of the strengthening techniques, Monte Carlo simulations are implemented within the limit analysis. In this framework a wide range of idealized cases is covered by the conducted analyses. The retrofitting strategies aim to increase the shear strength and the overturning load carrying capacity in order to reduce vulnerability. This methodology gives the possibility of using different materials which can fulfill the structural implementability requirements.
Conversion from Engineering Units to Telemetry Counts on Dryden Flight Simulators
NASA Technical Reports Server (NTRS)
Fantini, Jay A.
1998-01-01
Dryden real-time flight simulators encompass the simulation of pulse code modulation (PCM) telemetry signals. This paper presents a new method whereby the calibration polynomial (from first to sixth order), representing the conversion from counts to engineering units (EU), is numerically inverted in real time. The result is less than one-count error for valid EU inputs. The Newton-Raphson method is used to numerically invert the polynomial. A reverse linear interpolation between the EU limits is used to obtain an initial value for the desired telemetry count. The method presented here is not new. What is new is how classical numerical techniques are optimized to take advantage of modern computer power to perform the desired calculations in real time. This technique makes the method simple to understand and implement. There are no interpolation tables to store in memory as in traditional methods. The NASA F-15 simulation converts and transmits over 1000 parameters at 80 times/sec. This paper presents algorithm development, FORTRAN code, and performance results.
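Although the paper's implementation is in FORTRAN, the conversion it describes is easy to sketch in a few lines of Python; the sample calibration polynomial, tolerance, and count range below are invented for illustration, and the real simulator applies this per parameter at frame rate.

```python
import numpy as np

cal = np.array([-10.0, 5.0e-3, 2.0e-10, 0.0, 0.0, 0.0, 0.0])  # c0..c6, counts -> EU
count_min, count_max = 0, 4095                                  # assumed 12-bit PCM word

def eu(counts):
    """Forward calibration: telemetry counts -> engineering units."""
    return np.polyval(cal[::-1], counts)

def counts_from_eu(target_eu, max_iter=25, tol=0.5):
    """Invert the calibration with Newton-Raphson, seeded by a reverse linear
    interpolation between the EU values at the count limits."""
    eu_lo, eu_hi = eu(count_min), eu(count_max)
    c = count_min + (target_eu - eu_lo) * (count_max - count_min) / (eu_hi - eu_lo)
    deriv = np.polyder(np.poly1d(cal[::-1]))
    for _ in range(max_iter):
        err = eu(c) - target_eu
        if abs(err) < tol * abs(deriv(c)):   # next correction would be < tol counts
            break
        c -= err / deriv(c)                  # Newton-Raphson update
    return int(round(np.clip(c, count_min, count_max)))

print(counts_from_eu(5.0))                   # count whose calibrated EU is nearest 5.0
```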
NASA Astrophysics Data System (ADS)
Palma, K. D.; Pichotka, M.; Hasn, S.; Granja, C.
2017-02-01
In mammography, the difficult task of detecting microcalcifications (≈ 100 μm) and low-contrast structures in the breast has been a topic of interest from its beginnings. Improving image quality requires the use of novel X-ray imaging techniques, such as phase contrast, and high-resolution detectors. Phase-contrast techniques are promising tools for medical diagnosis because they provide additional and complementary information to traditional absorption-based X-ray imaging methods. In this work a Hamamatsu microfocus X-ray source with a tungsten anode and a photon counting detector (Timepix operated in Medipix mode) were used. A significant improvement in the detection of phase effects using the Medipix detector was observed in comparison to a standard flat-panel detector. An optimization of geometrical parameters reveals the dependency on the X-ray propagation path and the small-angle deviation. The quantification of these effects was achieved taking into account the image noise, contrast, spatial resolution of the phase enhancement, absorbed dose, and energy dependence.
NASA Astrophysics Data System (ADS)
Young, John Paul
The low density and high strength to weight ratio of magnesium alloys makes them ideal candidates to replace many of the heavier steel and aluminum alloys currently used in the automotive and other industries. Although cast magnesium alloys components have a long history of use in the automotive industry, the integration of wrought magnesium alloys components has been hindered by a number of factors. Grain refinement through thermomechanical processing offers a possible solution to many of the inherent problems associated with magnesium alloys. This work explores the development of several thermomechanical processing techniques and investigates their impact on the microstructural and mechanical properties of magnesium alloys. In addition to traditional thermomechanical processing, this work includes the development of new severe plastic deformation techniques for the production of fine grain magnesium plate and pipe and develops a procedure by which the thermal microstructural stability of severely plastically deformed microstructures can be assessed.
ERIC Educational Resources Information Center
Gosetti-Murrayjohn, Angela; Schneider, Federico
2009-01-01
This article provides a reflection on a team-teaching experience in which performative dialogues between co-instructors and among students provided a pedagogical framework within which comparative analysis of textual traditions within the classical tradition could be optimized. Performative dialogues thus provided a model for and enactment of…
Educational Tool for Optimal Controller Tuning Using Evolutionary Strategies
ERIC Educational Resources Information Center
Carmona Morales, D.; Jimenez-Hornero, J. E.; Vazquez, F.; Morilla, F.
2012-01-01
In this paper, an optimal tuning tool is presented for control structures based on multivariable proportional-integral-derivative (PID) control, using genetic algorithms as an alternative to traditional optimization algorithms. From an educational point of view, this tool provides students with the necessary means to consolidate their knowledge on…
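The abstract is truncated, so only the idea can be illustrated. As a rough, hedged sketch (not the tool described), the following genetic algorithm searches PID gains that minimize the integral of squared error of a toy first-order plant's step response; the plant model, gain bounds and GA settings are all invented.

```python
import numpy as np
rng = np.random.default_rng(0)

def step_response_ise(gains, T=5.0, dt=0.01, tau=1.0):
    """Integral of squared error for a unit step, PID controlling dy/dt = (-y + u)/tau."""
    kp, ki, kd = gains
    y, integ, prev_err, ise = 0.0, 0.0, 1.0, 0.0
    for _ in range(int(T / dt)):
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u) / tau          # explicit Euler step of the plant
        if abs(y) > 1e6:                  # penalize unstable gain combinations
            return 1e9
        ise += err * err * dt
        prev_err = err
    return ise

def ga_tune(pop_size=30, generations=40, bounds=(0.0, 10.0)):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, 3))
    for _ in range(generations):
        fitness = np.array([step_response_ise(ind) for ind in pop])
        order = np.argsort(fitness)
        parents = pop[order[: pop_size // 2]]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.random(3)
            child = w * a + (1 - w) * b                   # arithmetic crossover
            child += rng.normal(0, 0.3, 3)                # Gaussian mutation
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([parents, children])
    fitness = np.array([step_response_ise(ind) for ind in pop])
    return pop[np.argmin(fitness)], fitness.min()

best_gains, best_ise = ga_tune()
print("best PID gains (kp, ki, kd):", best_gains, "ISE:", best_ise)
```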
Frequentist Model Averaging in Structural Equation Modelling.
Jin, Shaobo; Ankargren, Sebastian
2018-06-04
Model selection from a set of candidate models plays an important role in many structural equation modelling applications. However, traditional model selection methods introduce extra randomness that is not accounted for by post-model selection inference. In the current study, we propose a model averaging technique within the frequentist statistical framework. Instead of selecting an optimal model, the contributions of all candidate models are acknowledged. Valid confidence intervals and a test statistic are proposed. A simulation study shows that the proposed method is able to produce a robust mean-squared error, a better coverage probability, and a better goodness-of-fit test compared to model selection. It is an interesting compromise between model selection and the full model.
Wang, Junlong; Zhang, Ji; Wang, Xiaofang; Zhao, Baotang; Wu, Yiqian; Yao, Jian
2009-12-01
Conventional extraction methods for polysaccharides are time-consuming, laborious and energy-intensive. A microwave-assisted extraction (MAE) technique was employed for the extraction of Artemisia sphaerocephala polysaccharides (ASP), a traditional Chinese food. The extraction parameters were optimized by a Box-Behnken design. During the microwave heating process, a decrease in molecular weight (Mw) was detected by SEC-LLS measurement. A d(f) value of 2.85 indicated that ASP obtained by MAE adopts a spherical conformation of branched clusters in aqueous solution. Furthermore, it showed stronger antioxidant activities compared with hot water extraction. The data obtained showed that the molecular weights played a more important role in antioxidant activities.
Gearing up to the factory of the future
NASA Astrophysics Data System (ADS)
Godfrey, D. E.
1985-01-01
The features of factories and manufacturing techniques and tools of the near future are discussed. The spur to incorporate new technologies on the factory floor will originate in management, who must guide the interfacing of computer-enhanced equipment with traditional manpower, materials and machines. Electronic control with responsiveness and flexibility will be the key concept in an integrated approach to processing materials. Microprocessor controlled laser and fluid cutters add accuracy to cutting operations. Unattended operation will become feasible when automated inspection is added to a work station through developments in robot vision. Optimum shop management will be achieved through AI programming of parts manufacturing, optimized work flows, and cost accounting. The automation enhancements will allow designers to directly affect parts being produced on the factory floor.
NASA Astrophysics Data System (ADS)
Bednyakova, Anastasia; Turitsyn, Sergei K.
2015-03-01
The key to generating stable optical pulses is mastery of nonlinear light dynamics in laser resonators. Modern techniques to control the buildup of laser pulses are based on nonlinear science and include classical solitons, dissipative solitons, parabolic pulses (similaritons) and various modifications and blending of these methods. Fiber lasers offer remarkable opportunities to apply one-dimensional nonlinear science models for the design and optimization of very practical laser systems. Here, we propose a new concept of a laser based on the adiabatic amplification of a soliton pulse in the cavity—the adiabatic soliton laser. The adiabatic change of the soliton parameters during evolution in the resonator relaxes the restriction on the pulse energy inherent in traditional soliton lasers. Theoretical analysis is confirmed by extensive numerical modeling.
Particle-in-cell numerical simulations of a cylindrical Hall thruster with permanent magnets
NASA Astrophysics Data System (ADS)
Miranda, Rodrigo A.; Martins, Alexandre A.; Ferreira, José L.
2017-10-01
The cylindrical Hall thruster (CHT) is a propulsion device that offers high propellant utilization and performance at smaller dimensions and lower power levels than traditional Hall thrusters. In this paper we present first results of a numerical model of a CHT. This model solves particle and field dynamics self-consistently using a particle-in-cell approach. We describe a number of techniques applied to reduce the execution time of the numerical simulations. The specific impulse and thrust computed from our simulations are in agreement with laboratory experiments. This simplified model will allow for a detailed analysis of different thruster operational parameters and obtain an optimal configuration to be implemented at the Plasma Physics Laboratory at the University of Brasília.
Addressing Climate Change in Long-Term Water Planning Using Robust Decisionmaking
NASA Astrophysics Data System (ADS)
Groves, D. G.; Lempert, R.
2008-12-01
Addressing climate change in long-term natural resource planning is difficult because future management conditions are deeply uncertain and the range of possible adaptation options is so extensive. These conditions pose challenges to standard optimization decision-support techniques. This talk will describe a methodology called Robust Decisionmaking (RDM) that can complement more traditional analytic approaches by utilizing screening-level water management models to evaluate large numbers of strategies against a wide range of plausible future scenarios. The presentation will describe a recent application of the methodology to evaluate climate adaptation strategies for the Inland Empire Utilities Agency in Southern California. This project found that RDM can provide a useful way of addressing climate change uncertainty and identifying robust adaptation strategies.
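A minimal sketch of the robustness-screening idea behind RDM is shown below, with an invented cost model, strategy set and scenario ranges (this is not the Inland Empire analysis): candidate strategies are evaluated against many plausible futures and compared by their maximum regret.

```python
import numpy as np
rng = np.random.default_rng(1)

# Hypothetical strategies: added supply capacity (MGD) and capital cost ($M).
strategies = {
    "baseline":     dict(capacity=0.0,  capital=0.0),
    "recycling":    dict(capacity=10.0, capital=50.0),
    "recycling+GW": dict(capacity=20.0, capital=120.0),
}

# Sample many plausible futures: demand growth and climate-driven supply loss.
n_scenarios = 1000
demand_growth = rng.uniform(0.0, 30.0, n_scenarios)   # MGD of new demand
supply_loss = rng.uniform(0.0, 15.0, n_scenarios)     # MGD lost to a drier climate

def total_cost(strategy, demand, loss, shortage_cost=20.0):
    """Screening-level cost: capital plus a penalty for unmet demand."""
    shortfall = np.maximum(demand + loss - strategy["capacity"], 0.0)
    return strategy["capital"] + shortage_cost * shortfall

costs = np.array([[total_cost(s, d, l) for d, l in zip(demand_growth, supply_loss)]
                  for s in strategies.values()])
regret = costs - costs.min(axis=0)        # regret relative to the best choice per scenario
max_regret = regret.max(axis=1)
for name, r in zip(strategies, max_regret):
    print(f"{name:14s} max regret = {r:8.1f}")
print("most robust strategy:", list(strategies)[int(np.argmin(max_regret))])
```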
Advancement of X-Ray Microscopy Technology and its Application to Metal Solidification Studies
NASA Technical Reports Server (NTRS)
Kaukler, William F.; Curreri, Peter A.
1996-01-01
The technique of x-ray projection microscopy is being used to view, in real time, the structures and dynamics of the solid-liquid interface during solidification. By employing a hard x-ray source with sub-micron dimensions, resolutions of 2 micrometers can be obtained with magnifications of over 800 X. Specimen growth conditions need to be optimized and the best imaging technologies applied to maintain x-ray image resolution, contrast and sensitivity. It turns out that no single imaging technology offers the best solution and traditional methods like radiographic film cannot be used due to specimen motion (solidification). In addition, a special furnace design is required to permit controlled growth conditions and still offer maximum resolution and image contrast.
Parallax-Robust Surveillance Video Stitching
He, Botao; Yu, Shaohua
2015-01-01
This paper presents a parallax-robust video stitching technique for timely synchronized surveillance video. An efficient two-stage video stitching procedure is proposed in this paper to build wide Field-of-View (FOV) videos for surveillance applications. In the stitching model calculation stage, we develop a layered warping algorithm to align the background scenes, which is location-dependent and turned out to be more robust to parallax than the traditional global projective warping methods. In the selective seam updating stage, we propose a change-detection based optimal seam selection approach to avert ghosting and artifacts caused by moving foregrounds. Experimental results demonstrate that our procedure can efficiently stitch multi-view videos into a wide FOV video output without ghosting and noticeable seams. PMID:26712756
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Building science research supports installing exterior (soil side) foundation insulation as the optimal method to enhance the hygrothermal performance of new homes. With exterior foundation insulation, water management strategies are maximized while insulating the basement space and ensuring a more even temperature at the foundation wall. This project describes an innovative, minimally invasive foundation insulation upgrade technique on an existing home that uses hydrovac excavation technology combined with a liquid insulating foam. Cost savings over the traditional excavation process ranged from 23% to 50%. The excavationless process could result in even greater savings since replacement of building structures, exterior features, utility meters, and landscaping would be minimal or non-existent in an excavationless process.
Multi-level optimization of a beam-like space truss utilizing a continuum model
NASA Technical Reports Server (NTRS)
Yates, K.; Gurdal, Z.; Thangjitham, S.
1992-01-01
A continuous beam model is developed for approximate analysis of a large, slender, beam-like truss. The model is incorporated in a multi-level optimization scheme for the weight minimization of such trusses. This scheme is tested against traditional optimization procedures for savings in computational cost. Results from both optimization methods are presented for comparison.
Control strategy of grid-connected photovoltaic generation system based on GMPPT method
NASA Astrophysics Data System (ADS)
Wang, Zhongfeng; Zhang, Xuyang; Hu, Bo; Liu, Jun; Li, Ligang; Gu, Yongqiang; Zhou, Bowen
2018-02-01
There are multiple local maximum power points when a photovoltaic (PV) array runs under partial shading conditions (PSC). However, the traditional maximum power point tracking (MPPT) algorithm can easily be trapped in local maximum power points (MPPs) and fail to find the global maximum power point (GMPP). To solve this problem, an improved global maximum power point tracking (GMPPT) method is proposed, combining the traditional MPPT method with the particle swarm optimization (PSO) algorithm. Under different operating conditions of the PV cells, different tracking algorithms are used. When the environment changes, the improved PSO algorithm is adopted to realize the global search, and the variable-step incremental conductance (INC) method is adopted to achieve MPPT in the optimal local region. Based on a simulation model of the grid-connected PV system built in Matlab/Simulink, a comparative analysis of the MPPT performance of the proposed control algorithm and the traditional MPPT method under uniform irradiance and PSC validates the correctness, feasibility and effectiveness of the proposed control strategy.
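A simplified sketch of the PSO search stage is given below: particles are candidate operating voltages on a synthetic two-peak P-V curve standing in for a partially shaded array. The variable-step INC refinement and all converter details are omitted, and every number is an illustrative assumption rather than a value from the paper.

```python
import numpy as np
rng = np.random.default_rng(2)

def pv_power(v):
    """Synthetic P-V curve with two peaks, mimicking partial shading."""
    return 120 * np.exp(-((v - 18) / 6.0) ** 2) + 200 * np.exp(-((v - 34) / 4.0) ** 2)

def pso_gmppt(n_particles=8, iters=30, v_min=0.0, v_max=40.0, w=0.6, c1=1.5, c2=1.5):
    v = rng.uniform(v_min, v_max, n_particles)        # candidate operating voltages
    vel = np.zeros(n_particles)
    pbest_v, pbest_p = v.copy(), pv_power(v)
    for _ in range(iters):
        g = pbest_v[np.argmax(pbest_p)]               # global best voltage so far
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = w * vel + c1 * r1 * (pbest_v - v) + c2 * r2 * (g - v)
        v = np.clip(v + vel, v_min, v_max)
        p = pv_power(v)
        better = p > pbest_p
        pbest_v[better], pbest_p[better] = v[better], p[better]
    return pbest_v[np.argmax(pbest_p)], pbest_p.max()

v_gmpp, p_gmpp = pso_gmppt()
print(f"GMPP found near V = {v_gmpp:.1f} V, P = {p_gmpp:.1f} W")  # global peak, not the 120 W local one
```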
Research on Ratio of Dosage of Drugs in Traditional Chinese Prescriptions by Data Mining.
Yu, Xing-Wen; Gong, Qing-Yue; Hu, Kong-Fa; Mao, Wen-Jing; Zhang, Wei-Ming
2017-01-01
Maximizing the effectiveness of prescriptions and minimizing the adverse effects of drugs is a key component of patient health care. In the practice of traditional Chinese medicine (TCM), it is important to provide clinicians a reference for the dosing of prescribed drugs. The traditional Cheng-Church biclustering algorithm (CC) is optimized, and the TCM prescription dose data are analyzed using the optimized algorithm. Based on an analysis of 212 prescriptions related to TCM treatment of kidney diseases, the study generated 87 prescription dose sub-matrices, each of which represents the referential value of the doses of drugs in different recipes. The optimized CC algorithm can effectively eliminate the interference of zeros in the original dose matrix of TCM prescriptions and avoid zeros appearing in the output sub-matrices. This makes it possible to effectively analyze the reference values of drugs in different prescriptions related to kidney diseases, so as to provide a valuable reference for clinicians to use drugs rationally.
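The core quantity in Cheng-Church biclustering is the mean squared residue (MSR) of a sub-matrix; a low MSR marks a coherent block of prescriptions and drugs. The sketch below computes it for an invented dose matrix; the zero-handling refinement of the optimized algorithm is not reproduced here.

```python
import numpy as np

def mean_squared_residue(block):
    """Cheng-Church mean squared residue of a (prescriptions x drugs) sub-matrix."""
    row_mean = block.mean(axis=1, keepdims=True)
    col_mean = block.mean(axis=0, keepdims=True)
    residue = block - row_mean - col_mean + block.mean()
    return float((residue ** 2).mean())

# Hypothetical dose matrix (rows: prescriptions, columns: drugs, grams).
doses = np.array([[10., 15.,  6.],
                  [12., 18.,  8.],
                  [ 9., 14.,  6.],
                  [30.,  5., 20.]])

# A low MSR indicates consistent dosing patterns; the outlier prescription raises it sharply.
print(mean_squared_residue(doses[:3]))   # coherent bicluster of the first three prescriptions
print(mean_squared_residue(doses))       # including the inconsistent fourth prescription
```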
NASA Astrophysics Data System (ADS)
Ha, Taewoo; Lee, Howon; Sim, Kyung Ik; Kim, Jonghyeon; Jo, Young Chan; Kim, Jae Hoon; Baek, Na Yeon; Kang, Dai-ill; Lee, Han Hyoung
2017-05-01
We have established optimal methods for terahertz time-domain spectroscopic analysis of highly absorbing pigments in powder form based on our investigation of representative traditional Chinese pigments, such as azurite [blue-based color pigment], Chinese vermilion [red-based color pigment], and arsenic yellow [yellow-based color pigment]. To accurately extract the optical constants in the terahertz region of 0.1 - 3 THz, we carried out transmission measurements in such a way that intense absorption peaks did not completely suppress the transmission level. This required preparation of pellet samples with optimized thicknesses and material densities. In some cases, mixing the pigments with polyethylene powder was required to minimize absorption due to certain peak features. The resulting distortion-free terahertz spectra of the investigated set of pigment species exhibited well-defined unique spectral fingerprints. Our study will be useful to future efforts to establish non-destructive analysis methods of traditional pigments, to construct their spectral databases, and to apply these tools to restoration of cultural heritage materials.
NASA Astrophysics Data System (ADS)
Seipel, S.; Yu, J.; Periyasamy, A. P.; Viková, M.; Vik, M.; Nierstrasz, V. A.
2017-10-01
For the development of niche products like smart textiles and other functional high-end products, resource-saving production processes are needed. Niche products only require small batches, which makes their production with traditional textile production techniques time-consuming and costly. To achieve a profitable production, as well as to further foster innovation, flexible and integrated production techniques are a requirement. Both digital inkjet printing and UV-light curing contribute to a flexible, resource-efficient, energy-saving and therewith economic production of smart textiles. In this article, a smart textile UV-sensor is printed using a piezoelectric drop-on-demand printhead and cured with a UV-LED lamp. The UV-curable ink system is based on free radical polymerization and the integrated UV-sensing material is a photochromic dye, Reversacol Ruby Red. The combination of two photoactive compounds, for which UV-light is both the curer and the activator, challenges two processes: polymer crosslinking of the resin and color performance of the photochromic dye. Differential scanning calorimetry (DSC) is used to characterize the curing efficiency of the prints. Color measurements are made to determine the influence of degree of polymer crosslinking on the developed color intensities, as well as coloration and decoloration rates of the photochromic prints. Optimized functionality of the textile UV-sensor is found using different belt speeds and lamp intensities during the curing process.
Wilson, C. E.; van Blitterswijk, C. A.; Verbout, A. J.; de Bruijn, J. D.
2010-01-01
Calcium phosphate ceramics, commonly applied as bone graft substitutes, are a natural choice of scaffolding material for bone tissue engineering. Evidence shows that the chemical composition, macroporosity and microporosity of these ceramics influences their behavior as bone graft substitutes and bone tissue engineering scaffolds but little has been done to optimize these parameters. One method of optimization is to place focus on a particular parameter by normalizing the influence, as much as possible, of confounding parameters. This is difficult to accomplish with traditional fabrication techniques. In this study we describe a design based rapid prototyping method of manufacturing scaffolds with virtually identical macroporous architectures from different calcium phosphate ceramic compositions. Beta-tricalcium phosphate, hydroxyapatite (at two sintering temperatures) and biphasic calcium phosphate scaffolds were manufactured. The macro- and micro-architectures of the scaffolds were characterized as well as the influence of the manufacturing method on the chemistries of the calcium phosphate compositions. The structural characteristics of the resulting scaffolds were remarkably similar. The manufacturing process had little influence on the composition of the materials except for the consistent but small addition of, or increase in, a beta-tricalcium phosphate phase. Among other applications, scaffolds produced by the method described provide a means of examining the influence of different calcium phosphate compositions while confidently excluding the influence of the macroporous structure of the scaffolds. PMID:21069558
USDA-ARS?s Scientific Manuscript database
Traditional microbiological techniques for estimating populations of viable bacteria can be laborious and time consuming. The Most Probable Number (MPN) technique is especially tedious as multiple series of tubes must be inoculated at several different dilutions. Recently, an instrument (TEMPOTM) ...
Organizational Decision Making
1975-08-01
the lack of formal techniques typically used by large organizations, digress on the advantages of formal over informal... optimization; for example one might do a number of optimization calculations, each time using a different measure of effectiveness as the optimized... final decision. The next level of computer application involves the use of computerized optimization techniques. Optimization
Stability-Constrained Aerodynamic Shape Optimization with Applications to Flying Wings
NASA Astrophysics Data System (ADS)
Mader, Charles Alexander
A set of techniques is developed that allows the incorporation of flight dynamics metrics as an additional discipline in a high-fidelity aerodynamic optimization. Specifically, techniques for including static stability constraints and handling qualities constraints in a high-fidelity aerodynamic optimization are demonstrated. These constraints are developed from stability derivative information calculated using high-fidelity computational fluid dynamics (CFD). Two techniques are explored for computing the stability derivatives from CFD. One technique uses an automatic differentiation adjoint technique (ADjoint) to efficiently and accurately compute a full set of static and dynamic stability derivatives from a single steady solution. The other technique uses a linear regression method to compute the stability derivatives from a quasi-unsteady time-spectral CFD solution, allowing for the computation of static, dynamic and transient stability derivatives. Based on the characteristics of the two methods, the time-spectral technique is selected for further development, incorporated into an optimization framework, and used to conduct stability-constrained aerodynamic optimization. This stability-constrained optimization framework is then used to conduct an optimization study of a flying wing configuration. This study shows that stability constraints have a significant impact on the optimal design of flying wings and that, while static stability constraints can often be satisfied by modifying the airfoil profiles of the wing, dynamic stability constraints can require a significant change in the planform of the aircraft in order for the constraints to be satisfied.
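Of the two approaches, the regression idea can be illustrated compactly: given time histories of angle of attack, non-dimensional pitch rate and pitching-moment coefficient, the static and dynamic derivatives follow from a least-squares fit. The sketch below uses fabricated time histories and hypothetical "true" derivative values rather than CFD output.

```python
import numpy as np
rng = np.random.default_rng(3)

# Synthetic quasi-unsteady "CFD" time histories of a pitch oscillation.
t = np.linspace(0.0, 2.0, 200)
alpha = np.deg2rad(2.0) * np.sin(2 * np.pi * t)       # angle of attack [rad]
qhat = np.gradient(alpha, t) * 1.0 / (2 * 25.0)       # pitch rate, non-dimensionalized by c/(2V)
cm0_true, cma_true, cmq_true = 0.02, -0.8, -6.0
cm = cm0_true + cma_true * alpha + cmq_true * qhat + rng.normal(0, 1e-4, t.size)

# Regression model: Cm = Cm0 + Cm_alpha * alpha + Cm_q * qhat
A = np.column_stack([np.ones_like(t), alpha, qhat])
cm0, cma, cmq = np.linalg.lstsq(A, cm, rcond=None)[0]
print(f"Cm_alpha = {cma:.3f} (true {cma_true}), Cm_q = {cmq:.3f} (true {cmq_true})")
```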
NASA Astrophysics Data System (ADS)
Tyson, Eric J.; Buckley, James; Franklin, Mark A.; Chamberlain, Roger D.
2008-10-01
The imaging atmospheric Cherenkov technique for high-energy gamma-ray astronomy is emerging as an important new technique for studying the high energy universe. Current experiments have data rates of ≈20 TB/year and duty cycles of about 10%. In the future, more sensitive experiments may produce up to 1000 TB/year. The data analysis task for these experiments requires keeping up with this data rate in close to real-time. Such data analysis is a classic example of a streaming application with very high performance requirements. This class of application often benefits greatly from the use of non-traditional approaches for computation including using special purpose hardware (FPGAs and ASICs), or sophisticated parallel processing techniques. However, designing, debugging, and deploying to these architectures is difficult and thus they are not widely used by the astrophysics community. This paper presents the Auto-Pipe design toolset that has been developed to address many of the difficulties in taking advantage of complex streaming computer architectures for such applications. Auto-Pipe incorporates a high-level coordination language, functional and performance simulation tools, and the ability to deploy applications to sophisticated architectures. Using the Auto-Pipe toolset, we have implemented the front-end portion of an imaging Cherenkov data analysis application, suitable for real-time or offline analysis. The application operates on data from the VERITAS experiment, and shows how Auto-Pipe can greatly ease performance optimization and application deployment across a wide variety of platforms. We demonstrate a performance improvement over a traditional software approach of 32x using an FPGA solution and 3.6x using a multiprocessor based solution.
Wong, Alex K; Davis, Gabrielle B; Nguyen, T JoAnna; Hui, Kenneth J W S; Hwang, Brian H; Chan, Linda S; Zhou, Zhao; Schooler, Wesley G; Chandrasekhar, Bala S; Urata, Mark M
2014-07-01
Traditional visualization techniques in microsurgery require strict positioning in order to maintain the field of visualization. However, static posturing over time may lead to musculoskeletal strain and injury. Three-dimensional high-definition (3DHD) visualization technology may be a useful adjunct to limiting static posturing and improving ergonomics in microsurgery. In this study, we aimed to investigate the benefits of using the 3DHD technology over traditional techniques. A total of 14 volunteers consisting of novice and experienced microsurgeons performed femoral anastomoses on male Sprague-Dawley retired breeder rats using traditional techniques as well as the 3DHD technology and compared the two techniques. Participants subsequently completed a questionnaire regarding their preference in terms of operational parameters, ergonomics, overall quality, and educational benefits. Efficiency was also evaluated by mean times to complete the anastomosis with each technique. A total of 27 anastomoses were performed, 14 of 14 using the traditional microscope and 13 of 14 using the 3DHD technology. Preference toward the traditional modality was noted with respect to the parameters of precision, field adjustments, zoom and focus, depth perception, and overall quality. The 3DHD technique was preferred for improved stamina and less back and eye strain. Participants believed that the 3DHD technique was the better method for learning microsurgery. Longer mean time of anastomosis completion was noted in participants utilizing the 3DHD technique. The 3DHD technology may prove to be valuable in improving proper ergonomics in microsurgery. In addition, it may be useful in medical education when applied to the learning of new microsurgical skills. More studies are warranted to determine its efficacy and safety in a clinical setting. Copyright © 2014 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Basheti, Iman A; Reddel, Helen K; Armour, Carol L; Bosnic-Anticevich, Sinthia Z
2005-05-01
Optimal effects of asthma medications are dependent on correct inhaler technique. In a telephone survey, 77/87 patients reported that their Turbuhaler technique had not been checked by a health care professional. In a subsequent pilot study, 26 patients were randomized to receive one of 3 Turbuhaler counseling techniques, administered in the community pharmacy. Turbuhaler technique was scored before and 2 weeks after counseling (optimal technique = score 9/9). At baseline, 0/26 patients had optimal technique. After 2 weeks, optimal technique was achieved by 0/7 patients receiving standard verbal counseling (A), 2/8 receiving verbal counseling augmented with emphasis on Turbuhaler position during priming (B), and 7/9 receiving augmented verbal counseling plus physical demonstration (C) (Fisher's exact test for A vs C, p = 0.006). Satisfactory technique (4 essential steps correct) also improved (A: 3/8 to 4/7; B: 2/9 to 5/8; and C: 1/9 to 9/9 patients) (A vs C, p = 0.1). Counseling in Turbuhaler use represents an important opportunity for community pharmacists to improve asthma management, but physical demonstration appears to be an important component to effective Turbuhaler training for educating patients toward optimal Turbuhaler technique.
A Swarm Optimization Genetic Algorithm Based on Quantum-Behaved Particle Swarm Optimization.
Sun, Tao; Xu, Ming-Hai
2017-01-01
Quantum-behaved particle swarm optimization (QPSO) algorithm is a variant of the traditional particle swarm optimization (PSO). The QPSO that was originally developed for continuous search spaces outperforms the traditional PSO in search ability. This paper analyzes the main factors that impact the search ability of QPSO and converts the particle movement formula to the mutation condition by introducing the rejection region, thus proposing a new binary algorithm, named swarm optimization genetic algorithm (SOGA), because it is more like genetic algorithm (GA) than PSO in form. SOGA has crossover and mutation operator as GA but does not need to set the crossover and mutation probability, so it has fewer parameters to control. The proposed algorithm was tested with several nonlinear high-dimension functions in the binary search space, and the results were compared with those from BPSO, BQPSO, and GA. The experimental results show that SOGA is distinctly superior to the other three algorithms in terms of solution accuracy and convergence.
NASA Astrophysics Data System (ADS)
Li, Dong; Cheng, Tao; Zhou, Kai; Zheng, Hengbiao; Yao, Xia; Tian, Yongchao; Zhu, Yan; Cao, Weixing
2017-07-01
Red edge position (REP), defined as the wavelength of the inflexion point in the red edge region (680-760 nm) of the reflectance spectrum, has been widely used to estimate foliar chlorophyll content from reflectance spectra. A number of techniques have been developed for REP extraction in the past three decades, but most of them require data-specific parameterization, and the consistency of their performance from leaf to canopy levels remains poorly understood. In this study, we propose a new technique (WREP) to extract REPs based on the application of continuous wavelet transform to reflectance spectra. The REP is determined by the zero-crossing wavelength in the red edge region of a wavelet-transformed spectrum for a number of scales of wavelet decomposition. The new technique is simple to implement and requires no parameterization from the user as long as continuous wavelet transforms are applied to reflectance spectra. Its performance was evaluated for estimating leaf chlorophyll content (LCC) and canopy chlorophyll content (CCC) of cereal crops (i.e. rice and wheat) and compared with traditional techniques including linear interpolation, linear extrapolation, polynomial fitting and inverted Gaussian. Our results demonstrated that WREP obtained the best estimation accuracy for both LCC and CCC as compared to traditional techniques. High scales of wavelet decomposition were favorable for the estimation of CCC and low scales for the estimation of LCC. The difference in optimal scale reveals the underlying mechanism of signature transfer from leaf to canopy levels. In addition, crop-specific models were required for the estimation of CCC over the full range. However, a common model could be built with the REPs extracted with Scale 5 of the WREP technique for wheat and rice crops when CCC was less than 2 g/m2 (R2 = 0.73, RMSE = 0.26 g/m2). This insensitivity of WREP to crop type indicates the potential for aerial mapping of chlorophyll content between growth seasons of cereal crops. The new REP extraction technique provides new insight for understanding the spectral changes in the red edge region in response to chlorophyll variation from leaf to canopy levels.
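A rough sketch of the zero-crossing idea at a single wavelet scale is shown below: convolve the reflectance spectrum with a Mexican-hat (second-derivative-of-Gaussian) kernel and locate the zero crossing between 680 and 760 nm. The synthetic spectrum, kernel width and scale are assumptions for illustration; this is not the authors' exact implementation.

```python
import numpy as np

def mexican_hat(width, scale):
    """Mexican-hat wavelet kernel sampled on [-width, width] nm."""
    x = np.arange(-width, width + 1, dtype=float)
    return (1 - (x / scale) ** 2) * np.exp(-x ** 2 / (2 * scale ** 2))

def wavelet_rep(wavelengths, reflectance, scale=16):
    """Red edge position from the zero crossing of a wavelet-transformed spectrum."""
    kernel = mexican_hat(4 * scale, scale)
    wt = np.convolve(reflectance, kernel, mode="same")
    idx = np.where((wavelengths >= 680) & (wavelengths <= 760))[0]
    for i in idx[:-1]:
        if wt[i] * wt[i + 1] < 0:                     # sign change -> zero crossing
            frac = wt[i] / (wt[i] - wt[i + 1])        # linear interpolation within the step
            return wavelengths[i] + frac * (wavelengths[i + 1] - wavelengths[i])
    return np.nan

# Synthetic reflectance: low in the red, rising sigmoid across the red edge.
wl = np.arange(400, 1000, 1.0)
refl = 0.05 + 0.45 / (1 + np.exp(-(wl - 718) / 12.0))
print(f"REP ~ {wavelet_rep(wl, refl):.1f} nm")        # close to the sigmoid inflexion at 718 nm
```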
USDA-ARS?s Scientific Manuscript database
Recently, an instrument (TEMPOTM) has been developed to automate the Most Probable Number (MPN) technique and reduce the effort required to estimate some bacterial populations. We compared the automated MPN technique to traditional microbiological plating methods or PetrifilmTM for estimating the t...
Westermann, Robert W; DeBerardino, Thomas; Amendola, Annunziato
2014-01-01
Introduction: The High Tibial Osteotomy (HTO) is a reliable procedure for addressing unicompartmental arthritis with associated coronal deformities. With osteotomy of the proximal tibia, there is a risk of altering the tibial slope in the sagittal plane. Surgical techniques continue to evolve with trends towards procedure reproducibility and simplification. We evaluated a modification of the Arthrex iBalance technique in 18 paired cadaveric knees with the goals of maintaining sagittal slope, increasing procedure efficiency, and decreasing use of intraoperative fluoroscopy. Methods: Nine paired cadaveric knees (18 legs) underwent iBalance medial opening wedge high tibial osteotomies. In each pair, the right knee underwent an HTO using the modified technique, while all left knees underwent the traditional technique. Independent observers evaluated postoperative factors including tibial slope, placement of the hinge pin, and implant placement. Specimens were then dissected to evaluate for any gross muscle, nerve or vessel injury. Results: Changes to posterior tibial slope were similar using each technique. The change in slope with the traditional iBalance technique was -0.3° ±2.3° and the change with the modified iBalance technique was -0.4° ±2.3° (p=0.29). Furthermore, we detected no differences in posterior tibial slope between preoperative and postoperative specimens (p=0.74 traditional, p=0.75 modified). No differences in implant placement were detected between traditional and modified techniques (p=0.85). No intraoperative iatrogenic complications (i.e. lateral cortex fracture, blood vessel or nerve injury) were observed in either group after gross dissection. Discussion & Conclusions: Alterations in posterior tibial slope are associated with HTOs. Both traditional and modified iBalance techniques appear reliable in coronal plane corrections without changing posterior tibial slope. The present modification of the Arthrex iBalance technique may increase the efficiency of the operation and decrease radiation exposure to patients without compromising implant placement or global knee alignment. PMID:25328454
Energy efficiency analysis and optimization for mobile platforms
NASA Astrophysics Data System (ADS)
Metri, Grace Camille
The introduction of mobile devices changed the landscape of computing. Gradually, these devices are replacing traditional personal computer (PCs) to become the devices of choice for entertainment, connectivity, and productivity. There are currently at least 45.5 million people in the United States who own a mobile device, and that number is expected to increase to 1.5 billion by 2015. Users of mobile devices expect and mandate that their mobile devices have maximized performance while consuming minimal possible power. However, due to the battery size constraints, the amount of energy stored in these devices is limited and is only growing by 5% annually. As a result, we focused in this dissertation on energy efficiency analysis and optimization for mobile platforms. We specifically developed SoftPowerMon, a tool that can power profile Android platforms in order to expose the power consumption behavior of the CPU. We also performed an extensive set of case studies in order to determine energy inefficiencies of mobile applications. Through our case studies, we were able to propose optimization techniques in order to increase the energy efficiency of mobile devices and proposed guidelines for energy-efficient application development. In addition, we developed BatteryExtender, an adaptive user-guided tool for power management of mobile devices. The tool enables users to extend battery life on demand for a specific duration until a particular task is completed. Moreover, we examined the power consumption of System-on-Chips (SoCs) and observed the impact on the energy efficiency in the event of offloading tasks from the CPU to the specialized custom engines. Based on our case studies, we were able to demonstrate that current software-based power profiling techniques for SoCs can have an error rate close to 12%, which needs to be addressed in order to be able to optimize the energy consumption of the SoC. Finally, we summarize our contributions and outline possible direction for future research in this field.
Remediation System Design Optimization: Field Demonstration at the Umatilla Army Depot
NASA Astrophysics Data System (ADS)
Zheng, C.; Wang, P. P.
2002-05-01
Since the early 1980s, many researchers have shown that the simulation-optimization (S/O) approach is superior to the traditional trial-and-error method for designing cost-effective groundwater pump-and-treat systems. However, the application of the S/O approach to real field problems has remained limited. This paper describes the application of a new general simulation-optimization code to optimize an existing pump-and-treat system at the Umatilla Army Depot in Oregon, as part of a field demonstration project supported by the Environmental Security Technology Certification Program (ESTCP). Two optimization formulations were developed to minimize the total capital and operational costs under the current and possibly expanded treatment plant capacities. A third formulation was developed to minimize the total contaminant mass of RDX and TNT remaining in the shallow aquifer by the end of the project duration. For the first two formulations, this study produced an optimal pumping strategy that would achieve the cleanup goal in 4 years with a total cost of 1.66 million US dollars in net present value. For comparison, the existing design in operation was calculated to require 17 years for cleanup with a total cost of 3.83 million US dollars in net present value. Thus, the optimal pumping strategy represents a reduction of 13 years in cleanup time and a reduction of 56.6 percent in the expected total expenditure. For the third formulation, this study identified an optimal dynamic pumping strategy that would reduce the total mass remaining in the shallow aquifer by 89.5 percent compared with that calculated for the existing design. In spite of their intensive computational requirements, this study shows that the global optimization techniques including tabu search and genetic algorithms can be applied successfully to large-scale field problems involving multiple contaminants and complex hydrogeological conditions.
GROUND WATER MONITORING AND SAMPLING: MULTI-LEVEL VERSUS TRADITIONAL METHODS – WHAT’S WHAT?
Recent studies have been conducted to evaluate different sampling techniques for determining VOC concentrations in groundwater. Samples were obtained using multi-level and traditional sampling techniques in three monitoring wells at the Raymark Superfund site in Stratford, CT. Ve...
Onay, Ulaş; Akpınar, Sercan; Akgün, Rahmi Can; Balçık, Cenk; Tuncay, Ismail Cengiz
2013-01-01
The aim of this study was to compare new knotless single-row and double-row suture anchor techniques with traditional transosseous suture techniques for different sized rotator cuff tears in an animal model. The study included 56 cadaveric sheep shoulders. Supraspinatus cuff tears of 1 cm repaired with new knotless single-row suture anchor technique and supraspinatus and infraspinatus rotator cuff tears of 3 cm repaired with double-row suture anchor technique were compared to traditional transosseous suture techniques and control groups. The repaired tendons were loaded with 5 mm/min static velocity with 2.5 kgN load cell in Instron 8874 machine until the repair failure. The 1 cm transosseous group was statistically superior to 1 cm control group (p=0.021, p<0.05) and the 3 cm SpeedBridge group was statistically superior to the 1 cm SpeedFix group (p=0.012, p<0.05). The differences between the other groups were not statistically significant. No significant difference was found between the new knotless suture anchor techniques and traditional transosseous suture techniques.
Agustini, Deonir; Bergamini, Márcio F; Marcolino-Junior, Luiz Humberto
2017-01-25
Micro flow injection analysis (μFIA) is a powerful technique that applies the principles of traditional flow analysis in a microfluidic device and brings a number of improvements related to the consumption of reagents and samples, speed of analysis and portability. However, the complexity and cost of manufacturing processes, difficulty in integrating micropumps and the limited performance of systems employing passive pumps are challenges that must be overcome. Here, we present the characterization and optimization of a low-cost device based on cotton threads as the microfluidic channel to perform μFIA with passive pumping and good analytical performance in a simple, easy and inexpensive way. The transport of solutions is made through cotton threads by capillary force facilitated by gravity. After studying and optimizing several features of the device, a flow rate of 2.2 ± 0.1 μL s-1, an analytical frequency of 208 injections per hour, a sample injection volume of 2.0 μL and a waste volume of approximately 40 μL per analysis were obtained. For the chronoamperometric determination of naproxen, a detection limit of 0.29 μmol L-1 was reached, with a relative standard deviation (RSD) of 1.69% between injections and an RSD of 3.79% across five different devices. Thus, based on the performance presented by the proposed microfluidic device, it is possible to overcome some limitations of μFIA systems based on passive pumps and allow an expansion in the use of this technique. Copyright © 2016 Elsevier B.V. All rights reserved.
Konstantinidis, Spyridon; Titchener-Hooker, Nigel; Velayudhan, Ajoy
2017-08-01
Bioprocess development studies often involve the investigation of numerical and categorical inputs via the adoption of Design of Experiments (DoE) techniques. An attractive alternative is the deployment of a grid compatible Simplex variant which has been shown to yield optima rapidly and consistently. In this work, the method is combined with dummy variables and it is deployed in three case studies wherein spaces are comprised of both categorical and numerical inputs, a situation intractable by traditional Simplex methods. The first study employs in silico data and lays out the dummy variable methodology. The latter two employ experimental data from chromatography based studies performed with the filter-plate and miniature column High Throughput (HT) techniques. The solute of interest in the former case study was a monoclonal antibody whereas the latter dealt with the separation of a binary system of model proteins. The implemented approach prevented the stranding of the Simplex method at local optima, due to the arbitrary handling of the categorical inputs, and allowed for the concurrent optimization of numerical and categorical, multilevel and/or dichotomous, inputs. The deployment of the Simplex method, combined with dummy variables, was therefore entirely successful in identifying and characterizing global optima in all three case studies. The Simplex-based method was further shown to be of equivalent efficiency to a DoE-based approach, represented here by D-Optimal designs. Such an approach failed, however, to both capture trends and identify optima, and led to poor operating conditions. It is suggested that the Simplex-variant is suited to development activities involving numerical and categorical inputs in early bioprocess development. © 2017 The Authors. Biotechnology Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Martins, T. M.; Kelman, R.; Metello, M.; Ciarlini, A.; Granville, A. C.; Hespanhol, P.; Castro, T. L.; Gottin, V. M.; Pereira, M. V. F.
2015-12-01
The hydroelectric potential of a river is proportional to its head and water flows. Selecting the best development alternative for Greenfield watershed projects is a difficult task, since it must balance demands for infrastructure, especially in the developing world where a large potential remains unexplored, with environmental conservation. Discussions usually diverge into antagonistic views, as in recent projects in the Amazon forest, for example. This motivates the construction of a computational tool that will support a more qualified debate regarding development/conservation options. HERA provides the optimal head division partition of a river considering technical, economic and environmental aspects. HERA has three main components: (i) pre-processing GIS of topographic and hydrologic data; (ii) automatic engineering and equipment design and budget estimation for candidate projects; (iii) translation of the division-partition problem into a mathematical programming model. By integrating automatic calculation with geoprocessing tools, cloud computation and optimization techniques, HERA makes it possible for countless head partition division alternatives to be intrinsically compared - a great advantage with respect to traditional field surveys followed by engineering design methods. Based on optimization techniques, HERA determines which hydro plants should be built, including location, design, technical data (e.g. water head, reservoir area and volume), engineering design (dam, spillways, etc.) and costs. The results can be visualized in the HERA interface, exported to GIS software, Google Earth or CAD systems. HERA has a global scope of application since the main input data are a Digital Terrain Model and water inflows at gauging stations. The objective is to contribute to an increased rationality of decisions by presenting to the stakeholders a clear and quantitative view of the alternatives, their opportunities and threats.
NASA Astrophysics Data System (ADS)
Rocha, Alby D.; Groen, Thomas A.; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Willemen, Louise
2017-11-01
The growing number of narrow spectral bands in hyperspectral remote sensing improves the capacity to describe and predict biological processes in ecosystems. But it also poses a challenge to fit empirical models based on such high dimensional data, which often contain correlated and noisy predictors. As sample sizes, to train and validate empirical models, seem not to be increasing at the same rate, overfitting has become a serious concern. Overly complex models lead to overfitting by capturing more than the underlying relationship, and also through fitting random noise in the data. Many regression techniques claim to overcome these problems by using different strategies to constrain complexity, such as limiting the number of terms in the model, by creating latent variables or by shrinking parameter coefficients. This paper is proposing a new method, named Naïve Overfitting Index Selection (NOIS), which makes use of artificially generated spectra, to quantify the relative model overfitting and to select an optimal model complexity supported by the data. The robustness of this new method is assessed by comparing it to a traditional model selection based on cross-validation. The optimal model complexity is determined for seven different regression techniques, such as partial least squares regression, support vector machine, artificial neural network and tree-based regressions using five hyperspectral datasets. The NOIS method selects less complex models, which present accuracies similar to the cross-validation method. The NOIS method reduces the chance of overfitting, thereby avoiding models that present accurate predictions that are only valid for the data used, and too complex to make inferences about the underlying process.
Performance of Grey Wolf Optimizer on large scale problems
NASA Astrophysics Data System (ADS)
Gupta, Shubham; Deep, Kusum
2017-01-01
Numerous nature-inspired optimization techniques have been proposed in the literature for solving nonlinear continuous optimization problems that arise in real-life applications where conventional techniques cannot be applied. The Grey Wolf Optimizer is one such technique that has been gaining popularity over the last two years. The objective of this paper is to investigate the performance of the Grey Wolf Optimization Algorithm on large scale optimization problems. The algorithm is implemented on five common scalable problems appearing in the literature, namely the Sphere, Rosenbrock, Rastrigin, Ackley and Griewank functions. The dimensions of these problems are varied from 50 to 1000. The results indicate that the Grey Wolf Optimizer is a powerful nature-inspired optimization algorithm for large scale problems, except on Rosenbrock, which is a unimodal function.
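A compact sketch of the canonical Grey Wolf Optimizer update (alpha, beta and delta guidance, with the coefficient a decreasing linearly from 2 to 0) is shown below, applied to the Sphere function at a modest dimension. The population size, iteration count and bounds are standard illustrative values, not tuned settings from this study.

```python
import numpy as np
rng = np.random.default_rng(4)

def sphere(x):
    return float(np.sum(x * x))

def gwo(obj, dim=50, n_wolves=30, iters=500, lb=-100.0, ub=100.0):
    wolves = rng.uniform(lb, ub, (n_wolves, dim))
    fitness = np.array([obj(w) for w in wolves])
    for it in range(iters):
        order = np.argsort(fitness)
        alpha, beta, delta = wolves[order[0]], wolves[order[1]], wolves[order[2]]
        a = 2.0 * (1 - it / iters)                    # decreases linearly from 2 to 0
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a
                C = 2 * r2
                D = np.abs(C * leader - wolves[i])
                new += leader - A * D                 # candidate guided by this leader
            wolves[i] = np.clip(new / 3.0, lb, ub)    # average of the three candidates
            fitness[i] = obj(wolves[i])
    best = int(np.argmin(fitness))
    return wolves[best], fitness[best]

best_x, best_f = gwo(sphere)
print("best Sphere value found:", best_f)
```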
A figure-of-merit approach to extraterrestrial resource utilization
NASA Technical Reports Server (NTRS)
Ramohalli, K.; Kirsch, T.
1990-01-01
A concept is developed for interrelated optimizations in space missions that utilize extraterrestrial resources. It is shown that isolated (component) optimizations may not result in the best mission. It is shown that substantial benefits can be had through less than the best propellants, propellant combinations, propulsion hardware, and actually, some waste in the traditional sense. One ready example is the possibility of discarding hydrogen produced extraterrestrially by water splitting and using only the oxygen to burn storable fuels. The gains in refrigeration and leak-proof equipment mass (elimination) outweigh the loss in specific impulse. After a brief discussion of this concept, the synthesis of the four major components of any future space mission is developed. The four components are: orbital mechanics of the transportation; performance of the rocket motor; support systems that include power; thermal and process controls, and instruments; and in situ resource utilization plant equipment. This paper's main aim is to develop the concept of a figure-of-merit for the mission. The Mars Sample Return Mission is used to illustrate the new concept. At this time, a popular spreadsheet is used to quantitatively indicate the interdependent nature of the mission optimization. Future prospects are outlined that promise great economy through extraterrestrial resource utilization and a technique for quickly evaluating the same.
Adaptive control for solar energy based DC microgrid system development
NASA Astrophysics Data System (ADS)
Zhang, Qinhao
During the upgrading of current electric power grid, it is expected to develop smarter, more robust and more reliable power systems integrated with distributed generations. To realize these objectives, traditional control techniques are no longer effective in either stabilizing systems or delivering optimal and robust performances. Therefore, development of advanced control methods has received increasing attention in power engineering. This work addresses two specific problems in the control of solar panel based microgrid systems. First, a new control scheme is proposed for the microgrid systems to achieve optimal energy conversion ratio in the solar panels. The control system can optimize the efficiency of the maximum power point tracking (MPPT) algorithm by implementing two layers of adaptive control. Such a hierarchical control architecture has greatly improved the system performance, which is validated through both mathematical analysis and computer simulation. Second, in the development of the microgrid transmission system, the issues related to the tele-communication delay and constant power load (CPL)'s negative incremental impedance are investigated. A reference model based method is proposed for pole and zero placements that address the challenges of the time delay and CPL in closed-loop control. The effectiveness of the proposed modeling and control design methods are demonstrated in a simulation testbed. Practical aspects of the proposed methods for general microgrid systems are also discussed.
System Synthesis in Preliminary Aircraft Design using Statistical Methods
NASA Technical Reports Server (NTRS)
DeLaurentis, Daniel; Mavris, Dimitri N.; Schrage, Daniel P.
1996-01-01
This paper documents an approach to conceptual and preliminary aircraft design in which system synthesis is achieved using statistical methods, specifically design of experiments (DOE) and response surface methodology (RSM). These methods are employed in order to more efficiently search the design space for optimum configurations. In particular, a methodology incorporating three uses of these techniques is presented. First, response surface equations are formed which represent aerodynamic analyses, in the form of regression polynomials, which are more sophisticated than generally available in early design stages. Next, a regression equation for an overall evaluation criterion is constructed for the purpose of constrained optimization at the system level. This optimization, though achieved in an innovative way, is still traditional in that it is a point design solution. The methodology put forward here remedies this by introducing uncertainty into the problem, resulting in solutions which are probabilistic in nature. DOE/RSM is used for the third time in this setting. The process is demonstrated through a detailed aero-propulsion optimization of a high speed civil transport. Fundamental goals of the methodology, then, are to introduce higher fidelity disciplinary analyses to the conceptual aircraft synthesis and provide a roadmap for transitioning from point solutions to probabilistic designs (and eventually robust ones).
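A small sketch of the response-surface idea follows: run a designed set of analysis points through a stand-in "analysis code", fit a second-order polynomial by least squares, and optimize on the cheap surrogate. The function, design points and bounds are invented for illustration and are not the paper's aero-propulsion models.

```python
import numpy as np
from scipy.optimize import minimize

def expensive_analysis(x1, x2):
    """Stand-in for a costly disciplinary analysis code (hypothetical)."""
    return (x1 - 0.3) ** 2 + 2.0 * (x2 + 0.1) ** 2 + 0.5 * x1 * x2 + 1.0

# A simple 3-level full-factorial design of experiments in [-1, 1]^2.
levels = np.array([-1.0, 0.0, 1.0])
X = np.array([(a, b) for a in levels for b in levels])
y = np.array([expensive_analysis(*p) for p in X])

# Second-order response surface: y ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
def basis(p):
    x1, x2 = p
    return [1.0, x1, x2, x1 * x1, x2 * x2, x1 * x2]

B = np.array([basis(p) for p in X])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
surrogate = lambda p: float(np.dot(basis(p), coef))

# Optimize the cheap surrogate instead of calling the analysis code repeatedly.
res = minimize(surrogate, x0=[0.0, 0.0], bounds=[(-1, 1), (-1, 1)])
print("surrogate optimum:", res.x, "predicted:", res.fun,
      "actual:", expensive_analysis(*res.x))
```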
Tremsin, Anton S.; Gao, Yan; Dial, Laura C.; ...
2016-07-08
Non-destructive testing techniques based on neutron imaging and diffraction can provide information on the internal structure of relatively thick metal samples (up to several cm), which are opaque to other conventional non-destructive methods. Spatially resolved neutron transmission spectroscopy is an extension of traditional neutron radiography, where multiple images are acquired simultaneously, each corresponding to a narrow range of energy. The analysis of transmission spectra enables studies of bulk microstructures at the spatial resolution comparable to the detector pixel. In this study we demonstrate the possibility of imaging (with ~100 μm resolution) distribution of some microstructure properties, such as residual strain, texture, voids and impurities in Inconel 625 samples manufactured with an additive manufacturing method called direct metal laser melting (DMLM). Although this imaging technique can be implemented only in a few large-scale facilities, it can be a valuable tool for optimization of additive manufacturing techniques and materials and for correlating bulk microstructure properties to manufacturing process parameters. Additionally, the experimental strain distribution can help validate finite element models which many industries use to predict the residual stress distributions in additive manufactured components.
On the Use of Statistics in Design and the Implications for Deterministic Computer Experiments
NASA Technical Reports Server (NTRS)
Simpson, Timothy W.; Peplinski, Jesse; Koch, Patrick N.; Allen, Janet K.
1997-01-01
Perhaps the most prevalent use of statistics in engineering design is through Taguchi's parameter and robust design -- using orthogonal arrays to compute signal-to-noise ratios in a process of design improvement. In our view, however, there is an equally exciting use of statistics in design that could become just as prevalent: it is the concept of metamodeling whereby statistical models are built to approximate detailed computer analysis codes. Although computers continue to get faster, analysis codes always seem to keep pace so that their computational time remains non-trivial. Through metamodeling, approximations of these codes are built that are orders of magnitude cheaper to run. These metamodels can then be linked to optimization routines for fast analysis, or they can serve as a bridge for integrating analysis codes across different domains. In this paper we first review metamodeling techniques that encompass design of experiments, response surface methodology, Taguchi methods, neural networks, inductive learning, and kriging. We discuss their existing applications in engineering design and then address the dangers of applying traditional statistical techniques to approximate deterministic computer analysis codes. We conclude with recommendations for the appropriate use of metamodeling techniques in given situations and how common pitfalls can be avoided.
2015-03-01
Through the use of a sophisticated modeling technique, investigators at the University of Cincinnati have found that the creation of a so-called "flex track" that includes beds that can be assigned to either high-acuity or low-acuity patients has the potential to lower mean wait times for patients when it is added to the traditional fast-track and high-acuity areas of a 50-bed ED that sees 85,000 patients per year. Investigators used discrete-event simulation to model the patient flow and characteristics of the ED at the University of Cincinnati Medical Center, and to test out various operational scenarios without disrupting real-world operations. The investigators concluded that patient wait times were lowest when three flex beds were appropriated from the 10-bed fast-track area of the ED. In light of the results, three flex rooms are being incorporated into a newly remodeled ED scheduled for completion later this spring. Investigators suggest the modeling technique could be useful to other EDs interested in optimizing their operational plans. Further, they suggest that ED administrators consider ways to introduce flexibility into departments that are now more rigidly divided between high- and low-acuity areas.
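A stripped-down sketch of the discrete-event idea is shown below: patients arrive, are routed to a high-acuity, fast-track or shared "flex" bed, and queue when all eligible beds are busy, with mean wait time as the output. The arrival rates, service times and bed counts are invented and the model is far simpler than the published ED simulation.

```python
import heapq, random
random.seed(5)

def simulate(n_high=14, n_fast=7, n_flex=3, hours=24 * 30,
             lam_high=4.0, lam_low=6.0, svc_high=3.0, svc_low=1.0):
    """Toy event-driven ED model: flex beds serve whichever acuity class is waiting."""
    free = {"high": n_high, "fast": n_fast, "flex": n_flex}
    queue = {"high": [], "low": []}          # arrival times of waiting patients
    events = []                              # heap of (time, kind, payload)
    heapq.heappush(events, (random.expovariate(lam_high), "arrive", "high"))
    heapq.heappush(events, (random.expovariate(lam_low), "arrive", "low"))
    waits = []

    def start_service(t, acuity, bed):
        free[bed] -= 1
        mean_svc = svc_high if acuity == "high" else svc_low
        heapq.heappush(events, (t + random.expovariate(1.0 / mean_svc), "depart", bed))

    while events:
        t, kind, payload = heapq.heappop(events)
        if t > hours:
            break
        if kind == "arrive":
            acuity = payload
            lam = lam_high if acuity == "high" else lam_low
            heapq.heappush(events, (t + random.expovariate(lam), "arrive", acuity))
            own = "high" if acuity == "high" else "fast"
            if free[own] > 0:
                waits.append(0.0); start_service(t, acuity, own)
            elif free["flex"] > 0:
                waits.append(0.0); start_service(t, acuity, "flex")
            else:
                heapq.heappush(queue[acuity], t)
        else:                                # "depart": a bed of type `payload` frees up
            bed = payload
            free[bed] += 1
            eligible = {"high": ["high"], "fast": ["low"], "flex": ["high", "low"]}[bed]
            for acls in eligible:
                if queue[acls]:
                    arrival = heapq.heappop(queue[acls])
                    waits.append(t - arrival)
                    start_service(t, acls, bed)
                    break
    return sum(waits) / len(waits)

print("mean wait with 3 flex beds (hours):", round(simulate(n_fast=7, n_flex=3), 2))
print("mean wait with no flex beds (hours):", round(simulate(n_fast=10, n_flex=0), 2))
```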
Tremsin, Anton S; Gao, Yan; Dial, Laura C; Grazzi, Francesco; Shinohara, Takenao
2016-01-01
Non-destructive testing techniques based on neutron imaging and diffraction can provide information on the internal structure of relatively thick metal samples (up to several cm), which are opaque to other conventional non-destructive methods. Spatially resolved neutron transmission spectroscopy is an extension of traditional neutron radiography, where multiple images are acquired simultaneously, each corresponding to a narrow range of energy. The analysis of transmission spectra enables studies of bulk microstructures at the spatial resolution comparable to the detector pixel. In this study we demonstrate the possibility of imaging (with ~100 μm resolution) distribution of some microstructure properties, such as residual strain, texture, voids and impurities in Inconel 625 samples manufactured with an additive manufacturing method called direct metal laser melting (DMLM). Although this imaging technique can be implemented only in a few large-scale facilities, it can be a valuable tool for optimization of additive manufacturing techniques and materials and for correlating bulk microstructure properties to manufacturing process parameters. In addition, the experimental strain distribution can help validate finite element models which many industries use to predict the residual stress distributions in additive manufactured components.
Overview of the Wheat Genetic Transformation and Breeding Status in China.
Han, Jiapeng; Yu, Xiaofen; Chang, Junli; Yang, Guangxiao; He, Guangyuan
2017-01-01
In the past two decades, Chinese scientists have achieved significant progress on three aspects of wheat genetic transformation. First, the wheat transformation platform has been established and optimized to improve transformation efficiency, shorten the time required from the start of the transformation procedure to the recovery of fertile transgenic wheat plants, and overcome the genotype dependence of wheat genetic transformation in a wide range of elite wheat varieties. Second, with the help of emerging techniques such as CRISPR/Cas9, the function of over 100 wheat genes has been investigated. Finally, modern technology has been combined with traditional breeding techniques such as crossing to accelerate the application of wheat transformation. Overall, wheat end-use quality and stress tolerance characteristics have been improved by wheat genetic engineering techniques. So far, wheat transgenic lines carrying quality-improving genes and stress-tolerance genes are on their way to the Production Test stage in the field. Debates about and future studies on wheat transformation are discussed, and a brief summary of Chinese wheat breeding research history is also provided in this review.
Passive monitoring for near surface void detection using traffic as a seismic source
NASA Astrophysics Data System (ADS)
Zhao, Y.; Kuzma, H. A.; Rector, J.; Nazari, S.
2009-12-01
In this poster we present preliminary results from several field experiments in which we study the seismic detection of voids using a passive array of surface geophones. The source of seismic excitation is vehicle traffic on nearby roads, which we model as a continuous line source of seismic energy. Our passive seismic technique is based on cross-correlating surface wave fields and studying the resulting power spectra, looking for "shadows" caused by the scattering effect of a void. High-frequency noise masks this effect in the time domain, so it is difficult to see on conventional traces. Our technique does not rely on phase distortions caused by small voids because they are generally too tiny to measure. Unlike traditional impulsive seismic sources, which generate highly coherent broadband signals, perfect for resolving phase but too weak for resolving amplitude, vehicle traffic affords a high-power signal in a frequency range that is optimal for finding shallow structures. Our technique results in clear detections of an abandoned railroad tunnel and a septic tank. The ultimate goal of this project is to develop a technology for the simultaneous imaging of shallow underground structures and traffic monitoring near these structures.
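As a rough illustration of the processing idea, the sketch below cross-correlates two synthetic geophone records in the frequency domain and examines the resulting cross-spectral power. The sampling rate, the synthetic traces, and the 5-30 Hz band of interest are assumptions made for the example, not parameters from the field experiments.

```python
import numpy as np

def cross_spectrum(trace_a, trace_b, fs):
    """Cross-spectral power of two geophone records (correlation theorem:
    the FFT of the cross-correlation is the cross-spectrum). Detrending only,
    no tapering or whitening: a bare-bones sketch, not the authors' chain."""
    a = trace_a - trace_a.mean()
    b = trace_b - trace_b.mean()
    fa, fb = np.fft.rfft(a), np.fft.rfft(b)
    freqs = np.fft.rfftfreq(a.size, d=1.0 / fs)
    return freqs, np.abs(np.conj(fa) * fb) ** 2

# Synthetic example: a reference receiver records band-limited traffic noise;
# a second receiver behind a void sees the same wavefield, delayed and with
# part of its energy scattered away (the "shadow").
fs = 250.0                                            # samples per second (assumed)
rng = np.random.default_rng(0)
samples = int(60 * fs)
ref = np.convolve(rng.normal(size=samples), np.ones(8) / 8, mode="same")
shadowed = 0.6 * np.roll(ref, 25) + 0.05 * rng.normal(size=samples)

freqs, power = cross_spectrum(ref, shadowed, fs)
band = (freqs > 5) & (freqs < 30)                     # band of interest (assumed)
print(f"mean cross-spectral power, 5-30 Hz: {power[band].mean():.3e}")
```

In a field deployment the same band-limited power would be mapped across many receiver pairs, and a localized drop in amplitude between pairs straddling the target would be interpreted as the scattering shadow described above.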
NASA Astrophysics Data System (ADS)
Nguyen, Dam Thuy Trang; Tong, Quang Cong; Ledoux-Rak, Isabelle; Lai, Ngoc Diep
2016-01-01
In this work, the local thermal effect induced by a continuous-wave laser has been investigated and exploited to optimize the low one-photon absorption (LOPA) direct laser writing (DLW) technique for the fabrication of polymer-based microstructures. It was demonstrated that the temperature of the excited SU8 photoresist at the focus rises above 100 °C due to the high excitation intensity and stabilizes at that temperature thanks to the use of a continuous-wave laser at 532 nm wavelength. This optically induced thermal effect immediately completes the crosslinking process in the photopolymerized region, making it possible to obtain the desired structures without the conventional post-exposure bake (PEB) step that is usually performed after exposure. Theoretical calculation of the temperature distribution induced by local optical excitation, using the finite element method, confirmed the experimental results. The LOPA-based DLW technique combined with the optically induced thermal effect (local PEB) shows clear advantages over the traditional PEB, such as simplicity, short fabrication time, and high resolution. In particular, it overcomes the accumulation effect inherent in one-photon optical lithography, resulting in small and uniform structures with very short lattice constants.
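As a qualitative stand-in for the finite-element temperature calculation mentioned above, the finite-difference sketch below solves steady-state heat conduction in a 2-D slab with a Gaussian source at the laser focus. The geometry, conductivity, and source amplitude are arbitrary illustrative values, so only the shape of the hot spot is meaningful, not the ~100 °C figure reported in the article.

```python
import numpy as np

# Crude 2-D finite-difference model: k * laplacian(T) = -q with a Gaussian
# heat source at the focus and ambient-temperature (T = 0 rise) boundaries.
# Domain size, conductivity, spot size, and source amplitude are all assumed.
n = 81
L = 40e-6                                    # 40 um square domain (assumed)
h = L / (n - 1)
k = 0.2                                      # W/(m*K), SU-8-like resist (assumed)
x = np.linspace(0, L, n)
X, Y = np.meshgrid(x, x, indexing="ij")
w0 = 1e-6                                    # focal spot radius (assumed)
q = 1e14 * np.exp(-2 * ((X - L / 2) ** 2 + (Y - L / 2) ** 2) / w0 ** 2)  # arbitrary

T = np.zeros((n, n))                         # temperature rise above ambient
for _ in range(8000):                        # Jacobi sweeps: qualitative steady state
    T[1:-1, 1:-1] = 0.25 * (T[2:, 1:-1] + T[:-2, 1:-1] +
                            T[1:-1, 2:] + T[1:-1, :-2] +
                            h ** 2 * q[1:-1, 1:-1] / k)

print(f"peak temperature rise for this arbitrary source: {T.max():.0f} K")
```

The point of the sketch is simply that a tightly focused, continuously absorbed beam produces a sharply localized, steady hot spot; quantitative predictions require the full 3-D finite-element treatment and the measured absorption of the resist.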
NASA Technical Reports Server (NTRS)
Boyalakuntla, Kishore; Soni, Bharat K.; Thornburg, Hugh J.; Yu, Robert
1996-01-01
During the past decade, computational simulation of fluid flow around complex configurations has progressed significantly and many notable successes have been reported; however, unsteady time-dependent solutions are not easily obtainable. The present effort involves unsteady, time-dependent simulation of temporally deforming geometries. Grid generation for a complex configuration can be a time-consuming process, and temporally varying geometries necessitate the regeneration of such grids for every time step. Traditional grid generation techniques have been tried and shown to be inadequate for such simulations. Non-Uniform Rational B-spline (NURBS) based techniques provide a compact and accurate representation of the geometry. This definition can be coupled with a distribution mesh for a user-defined spacing. The present method greatly reduces CPU requirements for time-dependent remeshing, facilitating the simulation of more complex unsteady problems. A thrust-vectoring nozzle has been chosen to demonstrate the capability, as it is of current interest in the aerospace industry for better maneuverability of fighter aircraft in close combat and in post-stall regimes. The current effort is the first step toward multidisciplinary design optimization, which involves coupling aerodynamic, heat transfer, and structural analysis techniques. Applications include simulation of temporally deforming bodies and aeroelastic problems.
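To illustrate why a NURBS definition is both compact and exact, the sketch below evaluates a quadratic NURBS curve with the Cox-de Boor recursion and reproduces a quarter circle, something a polynomial spline cannot represent exactly. This is a generic illustration of the representation, not the authors' remeshing code; the knot vector, control points, and weights are the standard textbook values for a conic arc.

```python
import numpy as np

def nurbs_point(u, degree, knots, ctrl_pts, weights):
    """Evaluate one point on a NURBS curve via the Cox-de Boor recursion."""
    n = len(ctrl_pts)
    # Degree-0 basis functions on the half-open knot spans.
    basis = np.array([1.0 if knots[i] <= u < knots[i + 1] else 0.0
                      for i in range(len(knots) - 1)])
    for p in range(1, degree + 1):           # build up to the requested degree
        new = np.zeros(len(knots) - p - 1)
        for i in range(len(new)):
            left_den = knots[i + p] - knots[i]
            right_den = knots[i + p + 1] - knots[i + 1]
            left = (u - knots[i]) / left_den * basis[i] if left_den else 0.0
            right = (knots[i + p + 1] - u) / right_den * basis[i + 1] if right_den else 0.0
            new[i] = left + right
        basis = new
    w_basis = basis[:n] * np.asarray(weights)           # rational (weighted) basis
    return (w_basis[:, None] * np.asarray(ctrl_pts, float)).sum(axis=0) / w_basis.sum()

# Quarter circle as a degree-2 NURBS arc: three control points, one non-unit weight.
knots = [0, 0, 0, 1, 1, 1]
ctrl = [(1, 0), (1, 1), (0, 1)]
wts = [1.0, np.sqrt(2) / 2, 1.0]
for u in (0.0, 0.25, 0.5, 0.75, 0.999):
    px, py = nurbs_point(u, 2, knots, ctrl, wts)
    print(f"u={u:5.3f}  radius={np.hypot(px, py):.6f}")   # ~1.0 everywhere
```

Because the curve (or surface) is fully determined by a handful of control points, weights, and knots, only those few quantities need to be updated as the geometry deforms, which is what makes per-time-step remeshing affordable.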
Predictable repair of provisional restorations.
Hammond, Barry D; Cooper, Jeril R; Lazarchik, David A
2009-01-01
The importance of provisional restorations is often downplayed, as they are thought of by some as only "temporaries." As a result, a less-than-ideal provisional is sometimes fabricated, in part because of the additional chair time required to make provisional modifications when using traditional techniques. Additionally, in many dental practices these provisional restorations are often fabricated by auxiliary personnel who may not be as well trained in the fabrication process. Because provisionals play an important role in achieving the desired final functional and esthetic result, a high-quality provisional restoration is essential to fabricating a successful definitive restoration. This article describes a method for efficiently and predictably repairing both methacrylate and bis-acryl provisional restorations using flowable composite resin. By use of this relatively simple technique, provisional restorations can be modified or repaired in a timely and productive manner to yield an exceptional result. Successful execution of esthetic and restorative dentistry requires attention to detail in every aspect of the case. Fabrication of high-quality provisional restorations can, at times, be challenging and time consuming. The techniques for optimizing resin provisional restorations described in this paper are pragmatic and will enhance the delivery of dental treatment.
Simulation-based robust optimization for signal timing and setting.
DOT National Transportation Integrated Search
2009-12-30
The performance of signal timing plans obtained from traditional approaches for pre-timed (fixed-time or actuated) control systems is often unstable under fluctuating traffic conditions. This report develops a general approach for optimizing the ...
NASA Astrophysics Data System (ADS)
Alegria Mira, Lara; Thrall, Ashley P.; De Temmerman, Niels
2016-02-01
Deployable scissor structures are well equipped for temporary and mobile applications since they are able to change their form and functionality. They are structural mechanisms that transform from a compact state to an expanded, fully deployed configuration. A barrier to the current design and reuse of scissor structures, however, is that they are traditionally designed for a single purpose. Alternatively, a universal scissor component (USC), a generalized element which can achieve all traditional scissor types, introduces an opportunity for reuse in which the same component can be utilized for different configurations and spans. In this article, the USC is optimized for structural performance. First, an optimized length for the USC is determined based on a trade-off between component weight and structural performance (measured by deflections). Then, topology optimization, using the simulated annealing algorithm, is implemented to determine a minimum-weight layout of beams within a single USC component.
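To illustrate the optimization style named here, the sketch below applies a generic simulated-annealing minimizer to a toy weight-versus-deflection trade-off over component length. The cooling schedule, feasible bounds, and cost coefficients are illustrative assumptions, not the structural model or data of the article.

```python
import math
import random

def simulated_annealing(objective, x0, step, n_iter=20000, t0=1.0, seed=0):
    """Generic simulated-annealing minimizer with geometric cooling."""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    for k in range(n_iter):
        temp = t0 * 0.999 ** k                         # cooling schedule (assumed)
        cand = x + rng.uniform(-step, step)            # random neighbouring design
        fc = objective(cand)
        # Accept improvements always; accept uphill moves with Boltzmann probability.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(temp, 1e-12)):
            x, fx = cand, fc
        if fx < fbest:
            best, fbest = x, fx
    return best, fbest

# Toy trade-off: component weight grows with length L, while deflection of the
# assembled scissor grows as more (shorter) components are needed for a fixed span.
# All coefficients are purely illustrative, not structural data from the article.
span = 12.0                                            # metres (assumed)
def cost(L):
    if not 0.3 <= L <= 3.0:                            # feasible lengths (assumed)
        return float("inf")
    n_components = span / L
    weight = 2.5 * L ** 1.5 * n_components             # heavier sections for longer members
    deflection = 0.02 * n_components ** 2              # more joints -> larger sway
    return weight + 20.0 * deflection                  # weighted sum of the two criteria

L_opt, f_opt = simulated_annealing(cost, x0=1.0, step=0.2)
print(f"near-optimal component length: {L_opt:.2f} m  (objective {f_opt:.2f})")
```

The same accept-or-reject loop can operate on discrete layouts rather than a single scalar, which is how the annealing search is used for the beam-topology stage described in the abstract.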