Sample records for local optimization techniques

  1. Acceleration techniques in the univariate Lipschitz global optimization

    NASA Astrophysics Data System (ADS)

    Sergeyev, Yaroslav D.; Kvasov, Dmitri E.; Mukhametzhanov, Marat S.; De Franco, Angela

    2016-10-01

    Univariate box-constrained Lipschitz global optimization problems are considered in this contribution. Geometric and information statistical approaches are presented, along with novel powerful local tuning and local improvement techniques and the traditional ways to estimate the Lipschitz constant. The advantages of the presented local tuning and local improvement techniques are demonstrated using the operational characteristics approach for comparing deterministic global optimization algorithms on a class of 100 widely used test functions.
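    As a concrete illustration of the geometric approach, the classical Piyavskii-Shubert scheme builds a piecewise sawtooth lower bound from a Lipschitz constant L and always evaluates where that bound is lowest. The sketch below is a minimal, dependency-free version with a fixed global L; the paper's local tuning techniques instead refine per-interval estimates of L. The test function and the constant L = 2 are illustrative assumptions, not taken from the paper.

```python
def piyavskii(f, a, b, L, n_iter=50):
    """Minimize f on [a, b], assuming |f(x) - f(y)| <= L * |x - y|."""
    xs, ys = [a, b], [f(a), f(b)]
    for _ in range(n_iter):
        pts = sorted(zip(xs, ys))
        best = None
        for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
            # Lowest point of the two Lipschitz cones over [x1, x2]
            xm = 0.5 * (x1 + x2) + (y1 - y2) / (2.0 * L)
            lb = 0.5 * (y1 + y2) - 0.5 * L * (x2 - x1)
            if best is None or lb < best[0]:
                best = (lb, xm)
        xs.append(best[1])        # sample where the lower bound is smallest
        ys.append(f(best[1]))
    i = min(range(len(ys)), key=ys.__getitem__)
    return xs[i], ys[i]
```

    With f(x) = (x - 0.3)^2 on [0, 1] and the valid overestimate L = 2, the sampled minimum converges to x near 0.3; an overly large L slows convergence, which is exactly what local tuning of L mitigates.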

  2. Optimization of immunohistochemical and fluorescent antibody techniques for localization of foot-and-mouth disease virus in animal tissues

    USDA-ARS's Scientific Manuscript database

    Immunohistochemical (IHC) and immunofluorescent (IF) techniques were optimized for the detection of foot-and-mouth disease virus (FMDV) structural and non-structural proteins in frozen and paraformaldehyde-fixed paraffin embedded (PFPE) tissues of bovine and porcine origin. Immunohistochemical local...

  3. Damage identification in beams using speckle shearography and an optimal spatial sampling

    NASA Astrophysics Data System (ADS)

    Mininni, M.; Gabriele, S.; Lopes, H.; Araújo dos Santos, J. V.

    2016-10-01

    Over the years, the derivatives of modal displacement and rotation fields have been used to localize damage in beams. Usually, the derivatives are computed by applying finite differences. The finite differences propagate and amplify the errors that exist in real measurements, and thus it is necessary to minimize this problem in order to obtain reliable damage localizations. A way to decrease the propagation and amplification of the errors is to select an optimal spatial sampling. This paper presents a technique where an optimal spatial sampling of modal rotation fields is computed and used to obtain the modal curvatures. Experimental measurements of modal rotation fields of a beam with single and multiple damages are obtained with shearography, an optical technique allowing full-field measurements. These measurements are used to test the validity of the optimal sampling technique for the improvement of damage localization in real structures. An investigation on the ability of a model updating technique to quantify the damage is also reported. The model updating technique is defined by the variations of measured natural frequencies and measured modal rotations and aims at calibrating the values of the second moment of area in the damaged areas, which were previously localized.
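    To see why the sampling interval matters, consider rotations measured with additive noise: a central difference divides noise of size sigma by 2h, so refining h amplifies the noise in the estimated curvature while only modestly reducing truncation error. A minimal numerical illustration (the synthetic rotation field and noise level are assumptions, not the paper's data):

```python
import math
import random

def mean_curvature_error(h, noise_std, n=200, seed=0):
    """Average error of central-difference derivatives of noisy rotations."""
    rng = random.Random(seed)
    xs = [i * h for i in range(n)]
    # Synthetic modal rotation field theta(x) = cos(x), plus measurement noise
    theta = [math.cos(x) + rng.gauss(0.0, noise_std) for x in xs]
    errs = []
    for i in range(1, n - 1):
        kappa = (theta[i + 1] - theta[i - 1]) / (2.0 * h)  # central difference
        errs.append(abs(kappa + math.sin(xs[i])))  # exact d(theta)/dx = -sin x
    return sum(errs) / len(errs)

fine = mean_curvature_error(h=0.001, noise_std=1e-3)   # oversampled: noise dominates
coarse = mean_curvature_error(h=0.05, noise_std=1e-3)  # coarser: far smaller error
```

    Here the coarser sampling reduces the mean curvature error by more than an order of magnitude, which is the trade-off the optimal spatial sampling exploits.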

  4. Partial discharge localization in power transformers based on the sequential quadratic programming-genetic algorithm adopting acoustic emission techniques

    NASA Astrophysics Data System (ADS)

    Liu, Hua-Long; Liu, Hua-Dong

    2014-10-01

    Partial discharge (PD) in power transformers is one of the prime causes of insulation degradation and power faults. Hence, it is of great importance to study techniques for the detection and localization of PD in theory and practice. The detection and localization of PD employing acoustic emission (AE) techniques, a kind of non-destructive testing, have received more and more attention owing to their powerful locating capability and high precision. The localization algorithm is the key factor deciding the localization accuracy in AE localization of PD. Many kinds of localization algorithms, both intelligent and non-intelligent, exist for PD source localization adopting AE techniques. However, the existing algorithms suffer from defects such as premature convergence, poor local optimization ability and unsuitability for field applications. To overcome the poor local optimization ability and the premature convergence of the fundamental genetic algorithm (GA), a new kind of improved GA is proposed, namely the sequential quadratic programming-genetic algorithm (SQP-GA). In this hybrid optimization algorithm, the sequential quadratic programming (SQP) algorithm is integrated into the fundamental GA as a basic operator, so the local searching ability of the fundamental GA is improved effectively and premature convergence is overcome. Numerical simulations on benchmark functions show that SQP-GA is better than the fundamental GA in convergence speed and optimization precision. The SQP-GA is then applied to the ultrasonic localization of PD in transformers, and an ultrasonic localization method of PD in transformers based on the SQP-GA is proposed. Localization results based on the SQP-GA are compared with the GA and other intelligent and non-intelligent algorithms. The results of both simulated examples and field experiments demonstrate that the localization method based on the SQP-GA effectively prevents the results from getting trapped in local optima, enhances localization precision, and is highly feasible and well suited for field applications.
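    The hybrid idea, run a population-based global search and sharpen promising individuals with a local optimizer, can be sketched in a few lines. Below, a toy real-coded GA refines its best individual each generation; for a dependency-free example the SQP operator is replaced by a simple coordinate-descent refinement, and a sphere objective stands in for the acoustic localization error function. All parameters are illustrative assumptions, not the paper's settings.

```python
import random

def local_refine(f, x, step=0.1, rounds=25):
    """Stand-in for the SQP operator: greedy coordinate descent, halving steps."""
    x = list(x)
    for _ in range(rounds):
        improved = False
        for d in range(len(x)):
            for delta in (step, -step):
                cand = list(x)
                cand[d] += delta
                while f(cand) < f(x):   # walk while the move keeps improving
                    x = cand
                    improved = True
                    cand = list(x)
                    cand[d] += delta
        if not improved:
            step *= 0.5                 # refine the step once progress stalls
    return x

def hybrid_ga(f, dim, bounds, pop_size=20, gens=40, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        elite = pop[: pop_size // 2]
        # Memetic step: local refinement of the current best individual
        elite[0] = local_refine(f, elite[0])
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)                      # selection
            child = [(ai + bi) / 2.0 + rng.gauss(0.0, 0.1)   # crossover + mutation
                     for ai, bi in zip(a, b)]
            children.append(child)
        pop = elite + children
    return min(pop, key=f)
```

    Embedding the local operator inside the GA loop is what gives the hybrid both global exploration and fast local convergence, which is the mechanism the abstract describes.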

  5. A near-optimal low complexity sensor fusion technique for accurate indoor localization based on ultrasound time of arrival measurements from low-quality sensors

    NASA Astrophysics Data System (ADS)

    Mitilineos, Stelios A.; Argyreas, Nick D.; Thomopoulos, Stelios C. A.

    2009-05-01

    A fusion-based localization technique for location-based services in indoor environments is introduced herein, based on ultrasound time-of-arrival measurements from multiple off-the-shelf range-estimating sensors which are used in a market-available localization system. In-situ field measurement results indicated that the respective off-the-shelf system was unable to estimate position in most cases, while the underlying sensors are of low quality and yield highly inaccurate range and position estimates. An extensive analysis is performed and a model of the sensor performance characteristics is established. A low-complexity but accurate sensor fusion and localization technique is then developed, which consists of evaluating multiple sensor measurements and selecting the one considered most accurate based on the underlying sensor model. Optimality, in the sense of a genie selecting the optimum sensor, is subsequently evaluated and compared to the proposed technique. The experimental results indicate that the proposed fusion method exhibits near-optimal performance and, albeit theoretically suboptimal, largely overcomes most flaws of the underlying single-sensor system, resulting in a localization system of increased accuracy, robustness and availability.
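    The core of such a selection-based fusion scheme is small: characterize each sensor's error from calibration data, then, for each fix, keep only the measurement whose sensor model predicts the smallest error. A minimal sketch with invented sensor names and noise levels (the actual paper models error as a function of range and geometry, not a single per-sensor variance):

```python
import random
import statistics

def build_variance_model(calibration):
    """Model each sensor by the variance of its calibration residuals."""
    return {sid: statistics.pvariance([reading - truth
                                       for reading, truth in pairs])
            for sid, pairs in calibration.items()}

def fuse(measurements, var_model):
    """Keep only the measurement from the sensor modelled as most accurate."""
    sensor, value = min(measurements, key=lambda m: var_model[m[0]])
    return value

# Synthetic calibration data: one poor sensor, one good one (both invented)
rng = random.Random(7)
truths = [float(i) for i in range(1, 21)]
calibration = {
    "noisy":    [(t + rng.gauss(0.0, 1.0), t) for t in truths],
    "accurate": [(t + rng.gauss(0.0, 0.05), t) for t in truths],
}
var_model = build_variance_model(calibration)
# Both sensors range the same target, whose true distance is 5.0 m:
fused = fuse([("noisy", 6.2), ("accurate", 5.03)], var_model)
```

    Selection (rather than averaging) is what keeps the method low-complexity: one model lookup per sensor, no weighted combination.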

  6. New evidence favoring multilevel decomposition and optimization

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Polignone, Debra A.

    1990-01-01

    The issue of the utility of multilevel decomposition and optimization remains controversial. To date, only the structural optimization community has actively developed and promoted multilevel optimization techniques. However, even this community acknowledges that multilevel optimization is ideally suited for a rather limited set of problems. Critics warn that decomposition typically requires eliminating local variables in favor of global variables, and that this in turn ill-conditions the multilevel optimization by adding equality constraints. The purpose here is to suggest a new multilevel optimization technique. This technique uses behavior variables, in addition to design variables and constraints, to decompose the problem. The new technique removes the need for equality constraints, simplifies the decomposition of the design problem, simplifies the programming task, and improves the convergence speed of multilevel optimization compared to conventional optimization.

  7. Medial-based deformable models in nonconvex shape-spaces for medical image segmentation.

    PubMed

    McIntosh, Chris; Hamarneh, Ghassan

    2012-01-01

    We explore the application of genetic algorithms (GA) to deformable models through the proposition of a novel method for medical image segmentation that combines GA with nonconvex, localized, medial-based shape statistics. We replace the more typical gradient descent optimizer used in deformable models with GA, and the convex, implicit, global shape statistics with nonconvex, explicit, localized ones. Specifically, we propose GA to reduce typical deformable model weaknesses pertaining to model initialization, pose estimation and local minima, through the simultaneous evolution of a large number of models. Furthermore, we constrain the evolution, and thus reduce the size of the search-space, by using statistically-based deformable models whose deformations are intuitive (stretch, bulge, bend) and are driven in terms of localized principal modes of variation, instead of modes of variation across the entire shape that often fail to capture localized shape changes. Although GA are not guaranteed to find the global optimum, our method compares favorably to the prevalent optimization techniques, convex and nonconvex gradient-based optimizers, and to globally optimal graph-theoretic combinatorial optimization techniques, when applied to the task of corpus callosum segmentation in 50 mid-sagittal brain magnetic resonance images.

  8. Autonomous Modelling of X-ray Spectra Using Robust Global Optimization Methods

    NASA Astrophysics Data System (ADS)

    Rogers, Adam; Safi-Harb, Samar; Fiege, Jason

    2015-08-01

    The standard approach to model fitting in X-ray astronomy is by means of local optimization methods. However, these local optimizers suffer from a number of problems, such as a tendency for the fit parameters to become trapped in local minima, and can require an involved process of detailed user intervention to guide them through the optimization process. In this work we introduce a general GUI-driven global optimization method for fitting models to X-ray data, written in MATLAB, which searches for optimal models with minimal user interaction. We directly interface with the commonly used XSPEC libraries to access the full complement of pre-existing spectral models that describe a wide range of physics appropriate for modelling astrophysical sources, including supernova remnants and compact objects. Our algorithm is powered by the Ferret genetic algorithm and Locust particle swarm optimizer from the Qubist Global Optimization Toolbox, which are robust at finding families of solutions and identifying degeneracies. This technique will be particularly instrumental for multi-parameter models and high-fidelity data. In this presentation, we provide details of the code and use our techniques to analyze X-ray data obtained from a variety of astrophysical sources.

  9. Surgical Site Infiltration for Abdominal Surgery: A Novel Neuroanatomical-based Approach

    PubMed Central

    Janis, Jeffrey E.; Haas, Eric M.; Ramshaw, Bruce J.; Nihira, Mikio A.; Dunkin, Brian J.

    2016-01-01

    Background: Provision of optimal postoperative analgesia should facilitate postoperative ambulation and rehabilitation. An optimal multimodal analgesia technique would include the use of nonopioid analgesics, including local/regional analgesic techniques such as surgical site local anesthetic infiltration. This article presents a novel approach to surgical site infiltration techniques for abdominal surgery based upon neuroanatomy. Methods: Literature searches were conducted for studies reporting the neuroanatomical sources of pain after abdominal surgery. Publications cited by the studies identified in the preceding search were also reviewed for relevance and manually retrieved. Results: Based on neuroanatomy, an optimal surgical site infiltration technique would consist of systematic, extensive, meticulous administration of local anesthetic into the peritoneum (or preperitoneum), subfascial, and subdermal tissue planes. The volume of local anesthetic would depend on the size of the incision such that 1 to 1.5 mL is injected every 1 to 2 cm of surgical incision per layer. It is best to infiltrate with a 22-gauge, 1.5-inch needle. The needle is inserted approximately 0.5 to 1 cm into the tissue plane, and local anesthetic solution is injected while slowly withdrawing the needle, which should reduce the risk of intravascular injection. Conclusions: Meticulous, systematic, and extensive surgical site local anesthetic infiltration in the various tissue planes including the peritoneal, musculofascial, and subdermal tissues, where pain foci originate, provides excellent postoperative pain relief. This approach should be combined with use of other nonopioid analgesics, with opioids reserved for rescue. Further well-designed studies are necessary to assess the analgesic efficacy of the proposed infiltration technique. PMID:28293525
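    The dosing rule above is simple enough to express as arithmetic. The sketch below only restates the quoted ranges (one injection of 1 to 1.5 mL every 1 to 2 cm of incision, per tissue plane) for a hypothetical incision; it is an illustration of the stated rule, not clinical guidance.

```python
def infiltration_volume_ml(incision_cm, ml_per_site=1.0,
                           site_spacing_cm=2.0, layers=3):
    """Total local anesthetic volume under the quoted per-site dosing rule.

    Defaults use the conservative end of the ranges in the text and the
    three tissue planes named there (peritoneal, musculofascial, subdermal).
    """
    sites_per_layer = max(1, round(incision_cm / site_spacing_cm))
    return sites_per_layer * ml_per_site * layers

# A hypothetical 12 cm incision: 6 sites x 1 mL x 3 planes = 18 mL
total = infiltration_volume_ml(12.0)
```

    The upper end of the quoted ranges (1.5 mL every 1 cm) triples the per-layer count and raises the per-site dose, so total volume varies severalfold across the stated ranges.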

  10. Irreversible electroporation of stage 3 locally advanced pancreatic cancer: optimal technique and outcomes

    PubMed Central

    2015-01-01

    Objective: Irreversible electroporation (IRE) of stage 3 pancreatic adenocarcinoma has been used to provide quality time of life in patients who have undergone appropriate induction therapy. The optimal technique has been reported within the literature, but not in video form. IRE of locally advanced pancreatic cancer is technically demanding, requiring precision ultrasound use for continuous imaging during multiple needle placements and during IRE energy delivery. Methods: Appropriate patients with locally advanced pancreatic cancer should have undergone appropriate induction chemotherapy for a reasonable duration. The safe and effective technique for irreversible electroporation is performed through an open approach with the emphasis on intra-operative ultrasound and intra-operative electroporation management. Results: The technique of open irreversible electroporation of the pancreas involves bracketing the target tumor, and any and all invaded vital structures including the celiac axis, superior mesenteric artery (SMA), superior mesenteric-portal vein, and bile duct, with IRE probes under continuous intraoperative ultrasound imaging through a caudal to cranial approach. Optimal IRE delivery requires a change in amperage of at least 12 amps from baseline tissue conductivity in order to achieve technical success. Multiple pull-backs are necessary, since the IRE ablation probes' active length is 1 cm, in order to achieve technical success along the caudal to cranial plane. Conclusions: Irreversible electroporation in combination with multi-modality therapy for locally advanced pancreatic carcinoma is feasible for appropriate patients. Technical demands are high and require the highest quality ultrasound for precise spacing measurements and optimal delivery to ensure an adequate change in tissue resistance. PMID:29075594

  11. A Wireless Sensor Network with Soft Computing Localization Techniques for Track Cycling Applications.

    PubMed

    Gharghan, Sadik Kamel; Nordin, Rosdiadee; Ismail, Mahamod

    2016-08-06

    In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN), focus on a range-based localization method which relies on the measurement of the received signal strength indicator (RSSI) from the three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the cycle track for outdoor and indoor velodromes. In the first approach the ANFIS was considered, whereas in the second approach the ANN was hybridized individually with three optimization algorithms, namely Particle Swarm Optimization (PSO), the Gravitational Search Algorithm (GSA), and the Backtracking Search Algorithm (BSA). The results revealed that the hybrid GSA-ANN outperforms the other methods adopted in this paper in terms of localization and distance estimation accuracy, achieving a mean absolute distance estimation error of 0.02 m and 0.2 m for outdoor and indoor velodromes, respectively.
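    For context on what the networks are learning, the conventional alternative maps RSSI to distance with a closed-form log-distance path-loss model; the ANN/ANFIS approaches effectively learn this mapping, plus environment-specific distortions, from data. A sketch with assumed calibration constants (the reference RSSI and path-loss exponent below are illustrative, not measured values from the paper):

```python
def rssi_to_distance_m(rssi_dbm, rssi_at_1m_dbm=-45.0, path_loss_exponent=2.7):
    """Log-distance path-loss model: RSSI(d) = RSSI(1 m) - 10 * n * log10(d)."""
    return 10.0 ** ((rssi_at_1m_dbm - rssi_dbm) / (10.0 * path_loss_exponent))
```

    With these assumed constants, -45 dBm maps to 1 m and -72 dBm to 10 m. A trained model replaces the two fixed constants with a learned input-output mapping, which is how the hybrid GSA-ANN can beat the closed form in cluttered indoor velodromes.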

  12. A Wireless Sensor Network with Soft Computing Localization Techniques for Track Cycling Applications

    PubMed Central

    Gharghan, Sadik Kamel; Nordin, Rosdiadee; Ismail, Mahamod

    2016-01-01

    In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN), focus on a range-based localization method which relies on the measurement of the received signal strength indicator (RSSI) from the three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the cycle track for outdoor and indoor velodromes. In the first approach the ANFIS was considered, whereas in the second approach the ANN was hybridized individually with three optimization algorithms, namely Particle Swarm Optimization (PSO), the Gravitational Search Algorithm (GSA), and the Backtracking Search Algorithm (BSA). The results revealed that the hybrid GSA-ANN outperforms the other methods adopted in this paper in terms of localization and distance estimation accuracy, achieving a mean absolute distance estimation error of 0.02 m and 0.2 m for outdoor and indoor velodromes, respectively. PMID:27509495

  13. Crack identification method in beam-like structures using changes in experimentally measured frequencies and Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Khatir, Samir; Dekemele, Kevin; Loccufier, Mia; Khatir, Tawfiq; Abdel Wahab, Magd

    2018-02-01

    In this paper, a technique is presented for the detection and localization of an open crack in beam-like structures using experimentally measured natural frequencies and the Particle Swarm Optimization (PSO) method. The technique considers the variation in local flexibility near the crack. The natural frequencies of a cracked beam are determined experimentally and numerically using the Finite Element Method (FEM). The optimization algorithm is programmed in MATLAB. The algorithm is used to estimate the location and severity of a crack by minimizing the differences between measured and calculated frequencies. The method is verified using experimentally measured data on a cantilever steel beam. The Fourier transform is adopted to improve the frequency resolution. The results demonstrate the good accuracy of the proposed technique.
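    The inverse-problem structure, minimizing the misfit between measured and model-predicted natural frequencies over (crack location, severity), can be sketched with a compact PSO. The beam model below is a deliberately toy frequency model, not the paper's FEM, and all numbers are invented for illustration.

```python
import math
import random

def pso(objective, dim, bounds, n_particles=30, iters=100, seed=3):
    """Minimal global-best particle swarm optimizer."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    gbest = list(min(pbest, key=objective))
    w, c1, c2 = 0.7, 1.5, 1.5        # inertia and attraction coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            if objective(xs[i]) < objective(pbest[i]):
                pbest[i] = list(xs[i])
                if objective(xs[i]) < objective(gbest):
                    gbest = list(xs[i])
    return gbest

# Toy crack model: each natural frequency drops with severity, weighted by the
# crack's proximity to a mode-specific sensitive position (all values invented).
BASE_HZ = [100.0, 270.0, 520.0]
SENSITIVE = [0.2, 0.5, 0.8]

def model_freqs(location, severity):
    return [b * (1.0 - severity * math.exp(-20.0 * (location - p) ** 2))
            for b, p in zip(BASE_HZ, SENSITIVE)]

measured = model_freqs(0.35, 0.15)   # stand-in for experimental frequencies

def misfit(params):
    return sum((m - f) ** 2
               for m, f in zip(measured, model_freqs(params[0], params[1])))

loc, sev = pso(misfit, dim=2, bounds=(0.0, 1.0))
```

    Minimizing the frequency misfit recovers both unknowns at once, which mirrors how the paper estimates crack location and severity from measured and FEM-computed frequencies.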

  14. Simulated annealing in orbital flight planning

    NASA Technical Reports Server (NTRS)

    Soller, Jeffrey

    1990-01-01

    Simulated annealing is used to solve a minimum fuel trajectory problem in the space station environment. The environment is unique because the space station will define the first true multivehicle environment in space. The optimization yields surfaces which are potentially complex, with multiple local minima. Because of the likelihood of these local minima, descent techniques are unable to offer robust solutions, and other deterministic optimization techniques were explored without success. The simulated annealing optimization is capable of identifying a minimum-fuel, two-burn trajectory subject to four constraints. Furthermore, the computational efforts involved in the optimization are such that missions could be planned on board the space station. Potential applications could include the on-site planning of rendezvous with a target craft or the emergency rescue of an astronaut. Future research will include multiwaypoint maneuvers, using a knowledge base to guide the optimization.
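    The key mechanism, occasionally accepting uphill moves with a temperature-controlled probability so the search can escape local minima, fits in a few lines. The double-well objective below is a stand-in for the complex fuel-cost surfaces described above; the starting point sits in the wrong well, and all parameters are illustrative.

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995,
                        iters=4000, seed=11):
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.gauss(0.0, step)
        fc = f(cand)
        # Always accept downhill moves; accept uphill ones with prob e^(-dF/T)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                      # geometric cooling schedule
    return best, fbest

# Double well: local minimum near x = +1, global minimum near x = -1.
def double_well(x):
    return (x * x - 1.0) ** 2 + 0.3 * (x + 1.0)

best, fbest = simulated_annealing(double_well, x0=1.0)
```

    Started in the shallower right-hand well, the early high-temperature phase lets the search cross the barrier and settle near the global minimum at x close to -1, exactly the robustness that pure descent techniques lack.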

  15. Optimal Sensor Layouts in Underwater Locomotory Systems

    NASA Astrophysics Data System (ADS)

    Colvert, Brendan; Kanso, Eva

    2015-11-01

    Retrieving and understanding global flow characteristics from local sensory measurements is a challenging but extremely relevant problem in fields such as defense, robotics, and biomimetics. It is an inverse problem in that the goal is to translate local information into global flow properties. In this talk we present techniques for optimization of sensory layouts within the context of an idealized underwater locomotory system. Using techniques from fluid mechanics and control theory, we show that, under certain conditions, local measurements can inform the submerged body about its orientation relative to the ambient flow, and allow it to recognize local properties of shear flows. We conclude by commenting on the relevance of these findings to underwater navigation in engineered systems and live organisms.

  16. Shape optimization techniques for musical instrument design

    NASA Astrophysics Data System (ADS)

    Henrique, Luis; Antunes, Jose; Carvalho, Joao S.

    2002-11-01

    The design of musical instruments is still mostly based on empirical knowledge and costly experimentation. One interesting improvement is the shape optimization of resonating components, given a number of constraints (allowed parameter ranges, shape smoothness, etc.), so that vibrations occur at specified modal frequencies. Each admissible geometrical configuration generates an error between computed eigenfrequencies and the target set. Typically, error surfaces present many local minima, corresponding to suboptimal designs. This difficulty can be overcome using global optimization techniques, such as simulated annealing. However, these methods are expensive in the number of function evaluations required, so the computational effort can be unacceptable when complex problems, such as bell optimization, are tackled. These issues are addressed in this paper, and a method for improving optimization procedures is proposed. Instead of using the local geometric parameters as searched variables, the system geometry is modeled in terms of truncated series of orthogonal space functions, and optimization is performed on their amplitude coefficients. Fourier series and orthogonal polynomials are typical such functions. This technique considerably reduces the number of searched variables and has a potential for significant computational savings in complex problems. It is illustrated by optimizing the shapes of both current and uncommon marimba bars.
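    The dimensionality reduction is easy to make concrete: sample a bar's thickness profile at N points and the search has N variables, but write it as a truncated cosine series and only a few amplitude coefficients remain. The sketch below shows the parameterization and exact recovery of the amplitudes by discrete projection; the eigenfrequency solver that would supply the optimization objective is not shown, and the profile values are invented.

```python
import math

def profile(coeffs, x):
    """Thickness profile as a truncated cosine series (smooth by construction)."""
    return sum(a * math.cos(k * math.pi * x) for k, a in enumerate(coeffs))

def project(samples, xs, n_terms):
    """Recover amplitude coefficients by discrete orthogonality (DCT-style)."""
    coeffs = []
    for k in range(n_terms):
        scale = 1.0 if k == 0 else 2.0
        coeffs.append(scale * sum(s * math.cos(k * math.pi * x)
                                  for s, x in zip(samples, xs)) / len(xs))
    return coeffs

N = 64
xs = [(i + 0.5) / N for i in range(N)]               # midpoint sampling
target = [profile([1.0, 0.2, -0.1], x) for x in xs]  # 64 nodal values...
coeffs = project(target, xs, 3)                      # ...but only 3 unknowns
```

    A global optimizer such as simulated annealing would then search over these three amplitudes instead of 64 nodal thicknesses, which is the computational saving the paper claims.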

  17. Multilevel decomposition approach to integrated aerodynamic/dynamic/structural optimization of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Young, Katherine C.; Pritchard, Jocelyn I.; Adelman, Howard M.; Mantay, Wayne R.

    1994-01-01

    This paper describes an integrated aerodynamic, dynamic, and structural (IADS) optimization procedure for helicopter rotor blades. The procedure combines performance, dynamics, and structural analyses with a general purpose optimizer using multilevel decomposition techniques. At the upper level, the structure is defined in terms of global quantities (stiffnesses, mass, and average strains). At the lower level, the structure is defined in terms of local quantities (detailed dimensions of the blade structure and stresses). The IADS procedure provides an optimization technique that is compatible with industrial design practices in which the aerodynamic and dynamic design is performed at a global level and the structural design is carried out at a detailed level with considerable dialogue and compromise among the aerodynamic, dynamic, and structural groups. The IADS procedure is demonstrated for several cases.

  18. Efficient QoS-aware Service Composition

    NASA Astrophysics Data System (ADS)

    Alrifai, Mohammad; Risse, Thomas

    Web service composition requests are usually combined with end-to-end QoS requirements, which are specified in terms of non-functional properties (e.g. response time, throughput and price). The goal of QoS-aware service composition is to find the best combination of services such that their aggregated QoS values meet these end-to-end requirements. Local selection techniques are very efficient but fall short in handling global QoS constraints. Global optimization techniques, on the other hand, can handle global constraints, but their poor performance renders them inappropriate for applications with dynamic and real-time requirements. In this paper we address this problem and propose a solution that combines global optimization with local selection techniques to achieve better performance. The proposed solution consists of two steps: first, we use mixed integer linear programming (MILP) to find the optimal decomposition of global QoS constraints into local constraints. Second, we use local search to find the best web services that satisfy these local constraints. Unlike existing MILP-based global planning solutions, the size of the MILP model in our case is much smaller and independent of the number of available services, yielding faster computation and better scalability. Preliminary experiments have been conducted to evaluate the performance of the proposed solution.
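    The second step is simple once the MILP has produced per-class local constraints: each service class can then be solved independently by a local scan. The sketch below assumes the decomposition has already been done and uses invented service names and QoS numbers; the MILP step itself is omitted.

```python
def local_selection(candidates, local_latency_bounds):
    """Pick, per class, the cheapest service meeting its local latency bound.

    candidates: {service_class: [(name, latency_ms, price), ...]}
    Returns None if some class has no feasible service, in which case the
    decomposition would have to be recomputed with different local bounds.
    """
    plan = {}
    for cls, options in candidates.items():
        feasible = [o for o in options if o[1] <= local_latency_bounds[cls]]
        if not feasible:
            return None
        plan[cls] = min(feasible, key=lambda o: o[2])  # cheapest feasible
    return plan

candidates = {
    "search":  [("s1", 120, 5.0), ("s2", 80, 9.0), ("s3", 60, 12.0)],
    "payment": [("p1", 200, 2.0), ("p2", 90, 4.0)],
}
# Hypothetical MILP output: a 250 ms end-to-end bound split into local bounds
plan = local_selection(candidates, {"search": 100, "payment": 150})
```

    Because each class is scanned independently, the cost is linear in the number of candidate services, which is why local selection scales where monolithic global planning does not.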

  19. A Comparison of Risk Sensitive Path Planning Methods for Aircraft Emergency Landing

    NASA Technical Reports Server (NTRS)

    Meuleau, Nicolas; Plaunt, Christian; Smith, David E.; Smith, Tristan

    2009-01-01

    Determining the best site to land a damaged aircraft presents some interesting challenges for standard path planning techniques. There are multiple possible locations to consider, the space is 3-dimensional with dynamics, the criterion for a good path is overall risk rather than distance or time, and optimization really matters, since an improved path corresponds to a greater expected survival rate. We have investigated a number of different path planning methods for solving this problem, including cell decomposition, visibility graphs, probabilistic road maps (PRMs), and local search techniques. In their pure form, none of these techniques has proven entirely satisfactory: some are too slow or unpredictable, some produce highly non-optimal paths or fail to find certain types of paths, and some do not cope well with the dynamic constraints when controllability is limited. In the end, we are converging towards a hybrid technique that involves seeding a roadmap with a layered visibility graph, using PRM to extend that roadmap, and using local search to further optimize the resulting paths. We describe the techniques we have investigated, report on our experiments with these techniques, and discuss when and why various techniques were unsatisfactory.
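    The risk-based objective is worth making concrete: replace edge length with per-cell risk and a standard shortest-path search returns the survival-oriented route instead of the geometric one. A minimal 2-D grid sketch (the planners above work in 3-D with dynamics, which this deliberately ignores):

```python
import heapq

def min_risk_path(cell_risk, start, goal):
    """Dijkstra where cost is accumulated cell risk rather than distance."""
    rows, cols = len(cell_risk), len(cell_risk[0])
    dist = {start: cell_risk[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue                       # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cell_risk[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# A high-risk cell in the middle: the cheapest route detours around it.
risk = [[1, 1, 1],
        [1, 9, 1],
        [1, 1, 1]]
path, total = min_risk_path(risk, (0, 0), (2, 2))
```

    The detour is longer in cells but cheaper in accumulated risk, illustrating why risk-weighted planning can disagree with shortest-distance planning.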

  20. Global Optimization of Low-Thrust Interplanetary Trajectories Subject to Operational Constraints

    NASA Technical Reports Server (NTRS)

    Englander, Jacob A.; Vavrina, Matthew A.; Hinckley, David

    2016-01-01

    Low-thrust interplanetary space missions are highly complex and there can be many locally optimal solutions. While several techniques exist to search for globally optimal solutions to low-thrust trajectory design problems, they are typically limited to unconstrained trajectories. The operational design community in turn has largely avoided using such techniques and has primarily focused on accurate constrained local optimization combined with grid searches and intuitive design processes at the expense of efficient exploration of the global design space. This work is an attempt to bridge the gap between the global optimization and operational design communities by presenting a mathematical framework for global optimization of low-thrust trajectories subject to complex constraints including the targeting of planetary landing sites, a solar range constraint to simplify the thermal design of the spacecraft, and a real-world multi-thruster electric propulsion system that must switch thrusters on and off as available power changes over the course of a mission.

  1. Data classification using metaheuristic Cuckoo Search technique for Levenberg Marquardt back propagation (CSLM) algorithm

    NASA Astrophysics Data System (ADS)

    Nawi, Nazri Mohd.; Khan, Abdullah; Rehman, M. Z.

    2015-05-01

    Nature-inspired metaheuristic techniques provide derivative-free solutions to complex problems. One of the latest additions to this group of procedures is the Cuckoo Search (CS) algorithm. Artificial Neural Network (ANN) training is an optimization task, since it is desired to find an optimal weight set for the neural network during the training process. Traditional training algorithms have limitations such as getting trapped in local minima and slow convergence. This study proposes a new technique, CSLM, which combines Cuckoo Search with the best features of two known algorithms, back-propagation (BP) and the Levenberg-Marquardt (LM) algorithm, to improve the convergence speed of ANN training and to avoid the local minima problem. Selected benchmark classification datasets are used for simulation. The experimental results show that the proposed Cuckoo Search with Levenberg-Marquardt algorithm performs better than the other algorithms used in this study.
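    A minimal sketch of the CS mechanics alone, Levy flights, replacement against a random nest, and abandonment of the worst nests, is given below on a simple sphere function. This shows only the CS side of the hybrid; the Levenberg-Marquardt network-training step is not included, and all parameters are illustrative.

```python
import math
import random

def levy_step(rng, beta=1.5):
    """Heavy-tailed step length via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
             ) ** (1 / beta)
    u = rng.gauss(0.0, sigma)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim, bounds, n_nests=15, iters=200, pa=0.25, seed=2):
    rng = random.Random(seed)
    lo, hi = bounds
    clamp = lambda t: min(hi, max(lo, t))
    nests = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fit = [f(n) for n in nests]
    for _ in range(iters):
        best = list(nests[min(range(n_nests), key=fit.__getitem__)])
        for i in range(n_nests):
            # New egg: Levy flight around nest i, scaled by distance to best
            step = 0.1 * levy_step(rng)
            cand = [clamp(x + step * (x - b)) for x, b in zip(nests[i], best)]
            fc = f(cand)
            j = rng.randrange(n_nests)      # compare with a random nest
            if fc < fit[j]:
                nests[j], fit[j] = cand, fc
        # A fraction pa of the worst nests is abandoned and re-seeded
        worst = sorted(range(n_nests), key=fit.__getitem__, reverse=True)
        for i in worst[: int(pa * n_nests)]:
            nests[i] = [rng.uniform(lo, hi) for _ in range(dim)]
            fit[i] = f(nests[i])
    i = min(range(n_nests), key=fit.__getitem__)
    return nests[i], fit[i]

best, fbest = cuckoo_search(lambda v: sum(t * t for t in v),
                            dim=2, bounds=(-5.0, 5.0))
```

    In the CSLM hybrid, the objective would be the network's training error over its weight vector, with LM updates sharpening the solutions CS proposes.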

  2. A survey of compiler optimization techniques

    NASA Technical Reports Server (NTRS)

    Schneck, P. B.

    1972-01-01

    Major optimization techniques of compilers are described and grouped into three categories: machine dependent, architecture dependent, and architecture independent. Machine-dependent optimizations tend to be local and are performed upon short spans of generated code by using particular properties of an instruction set to reduce the time or space required by a program. Architecture-dependent optimizations are global and are performed while generating code. These optimizations consider the structure of a computer, but not its detailed instruction set. Architecture-independent optimizations are also global but are based on analysis of the program flow graph and the dependencies among statements of the source program. A conceptual review of a universal optimizer that performs architecture-independent optimizations at source-code level is also presented.
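    An architecture-independent, source-level optimization of the kind the survey closes with can be demonstrated in miniature with constant folding over an abstract syntax tree. The sketch uses Python's `ast` module purely for brevity; the survey itself predates today's languages.

```python
import ast

def fold_constants(src):
    """Evaluate constant subexpressions in the AST and re-emit the program."""
    tree = ast.parse(src)

    class Folder(ast.NodeTransformer):
        def visit_BinOp(self, node):
            self.generic_visit(node)          # fold children first (bottom-up)
            if (isinstance(node.left, ast.Constant)
                    and isinstance(node.right, ast.Constant)):
                try:
                    value = eval(compile(ast.Expression(node), "<fold>", "eval"))
                except Exception:             # e.g. division by zero: leave as-is
                    return node
                return ast.copy_location(ast.Constant(value), node)
            return node

    return ast.unparse(Folder().visit(tree))
```

    For example, `fold_constants("x = 2 * 3 + y")` rewrites the statement as `x = 6 + y`: the machine-independent work is done once at the source level, regardless of the target instruction set.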

  3. The use of optimization techniques to design controlled diffusion compressor blading

    NASA Technical Reports Server (NTRS)

    Sanger, N. L.

    1982-01-01

    A method for automating compressor blade design using numerical optimization is presented and applied to the design of a controlled diffusion stator blade row. A general purpose optimization procedure is employed, based on conjugate directions for locally unconstrained problems and on feasible directions for locally constrained problems. Coupled to the optimizer is an analysis package consisting of three analysis programs which calculate blade geometry, inviscid flow, and blade surface boundary layers. The optimizing concepts and the selection of design objective and constraints are described. The procedure for automating the design of a two-dimensional blade section is discussed, and design results are presented.

  4. Particle swarm optimization and its application in MEG source localization using single time sliced data

    NASA Astrophysics Data System (ADS)

    Lin, Juan; Liu, Chenglian; Guo, Yongning

    2014-10-01

    The estimation of neural active sources from magnetoencephalography (MEG) data is a critical issue for both clinical neurology and brain function research. A widely accepted source-modeling technique for MEG involves calculating a set of equivalent current dipoles (ECDs). Source depth in the brain is one of the difficulties in MEG source localization. Particle swarm optimization (PSO) is widely used to solve various optimization problems. In this paper we discuss its ability and robustness in finding the global optimum at different depths of the brain when using the single equivalent current dipole (sECD) model and single time sliced data. The results show that PSO is an effective global optimization method for MEG source localization when given one dipole at different depths.
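    The record gives no implementation details, but a minimal global-best PSO of the kind commonly used for such dipole-fitting problems can be sketched as follows (the parameter values and the quadratic test objective are illustrative assumptions, not the authors' MEG misfit function):

    ```python
    import random

    def pso(f, dim, n_particles=30, iters=200, seed=0):
        """Minimal global-best PSO: each particle is pulled toward its own
        best position and toward the swarm's best position."""
        rng = random.Random(seed)
        pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        pbest_val = [f(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]
        w, c1, c2 = 0.7, 1.5, 1.5          # inertia and attraction weights
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = rng.random(), rng.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                val = f(pos[i])
                if val < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val < gbest_val:
                        gbest, gbest_val = pos[i][:], val
        return gbest, gbest_val

    # Minimize a simple quadratic bowl with its minimum at (1, 2)
    best, val = pso(lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2, dim=2)
    ```

    A real MEG application would replace the toy objective with the misfit between measured and forward-modeled field values as a function of dipole position and moment.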

  5. Aircraft wing structural design optimization based on automated finite element modelling and ground structure approach

    NASA Astrophysics Data System (ADS)

    Yang, Weizhu; Yue, Zhufeng; Li, Lei; Wang, Peiyan

    2016-01-01

    An optimization procedure combining an automated finite element modelling (AFEM) technique with a ground structure approach (GSA) is proposed for structural layout and sizing design of aircraft wings. The AFEM technique, based on CATIA VBA scripting and PCL programming, is used to generate models automatically considering the arrangement of inner systems. GSA is used for local structural topology optimization. The design procedure is applied to a high-aspect-ratio wing. The arrangement of the integral fuel tank, landing gear and control surfaces is considered. For the landing gear region, a non-conventional initial structural layout is adopted. The positions of components, the number of ribs and local topology in the wing box and landing gear region are optimized to obtain a minimum structural weight. Constraints include tank volume, strength, buckling and aeroelastic parameters. The results show that the combined approach leads to a greater weight saving, i.e. 26.5%, compared with three additional optimizations based on individual design approaches.

  6. Inference of Stochastic Nonlinear Oscillators with Applications to Physiological Problems

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, Vadim N.; Luchinsky, Dmitry G.

    2004-01-01

    A new method of inferring coupled stochastic nonlinear oscillators is described. The technique does not require extensive global optimization, provides optimal compensation for noise-induced errors, and is robust over a broad range of dynamical models. We illustrate the main ideas of the technique by inferring a model of five globally and locally coupled noisy oscillators. Specific modifications of the technique for inferring hidden degrees of freedom of coupled nonlinear oscillators are discussed in the context of physiological applications.

  7. A Novel Iterative Scheme for the Very Fast and Accurate Solution of Non-LTE Radiative Transfer Problems

    NASA Astrophysics Data System (ADS)

    Trujillo Bueno, J.; Fabiani Bendicho, P.

    1995-12-01

    Iterative schemes based on Gauss-Seidel (G-S) and optimal successive over-relaxation (SOR) iteration are shown to provide a dramatic increase in the speed with which non-LTE radiation transfer (RT) problems can be solved. The convergence rates of these new RT methods are identical to those of upper triangular nonlocal approximate operator splitting techniques, but the computing time per iteration and the memory requirements are similar to those of a local operator splitting method. In addition to these properties, both methods are particularly suitable for multidimensional geometry, since they neither require the actual construction of nonlocal approximate operators nor the application of any matrix inversion procedure. Compared with the currently used Jacobi technique, which is based on the optimal local approximate operator (see Olson, Auer, & Buchler 1986), the G-S method presented here is faster by a factor of 2. It gives excellent smoothing of the high-frequency error components, which makes it the iterative scheme of choice for multigrid radiative transfer. This G-S method can also be suitably combined with standard acceleration techniques to achieve even higher performance. Although the convergence rate of the optimal SOR scheme developed here for solving non-LTE RT problems is much higher than that of G-S, the computing time per iteration is also minimal, i.e., virtually identical to that of a local operator splitting method. While the conventional optimal local operator scheme provides the converged solution after a total CPU time (measured in arbitrary units) approximately equal to the number n of points per decade of optical depth, the time needed by this new method based on the optimal SOR iterations is only √n/(2√2). This method is competitive with those that result from combining the above-mentioned Jacobi and G-S schemes with the best acceleration techniques. Contrary to what happens with the local operator splitting strategy currently in use, these novel methods remain effective even under extreme non-LTE conditions on very fine grids.
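    The Jacobi/Gauss-Seidel/SOR distinction the abstract exploits can be illustrated on a generic diagonally dominant linear system (a stand-in for the RT equations; the matrix and tolerance below are invented for the demonstration): Jacobi updates every unknown from the previous iterate, while Gauss-Seidel and SOR reuse fresh values immediately, which typically roughly halves the iteration count on such systems.

    ```python
    def jacobi_step(A, b, x):
        # Every unknown is updated from the previous iterate only.
        n = len(b)
        return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                for i in range(n)]

    def sor_step(A, b, x, omega=1.0):
        # Fresh values are reused immediately (Gauss-Seidel for omega = 1;
        # omega > 1 over-relaxes the correction).
        n = len(b)
        x = x[:]
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] += omega * ((b[i] - s) / A[i][i] - x[i])
        return x

    def solve(step, A, b, tol=1e-10, **kw):
        # Iterate until the max residual |b - A x| drops below tol.
        x, k = [0.0] * len(b), 0
        while max(abs(bi - sum(a * xj for a, xj in zip(row, x)))
                  for row, bi in zip(A, b)) > tol:
            x, k = step(A, b, x, **kw), k + 1
        return x, k

    # Tridiagonal, diagonally dominant test system (a 1-D Laplacian-like matrix)
    n = 20
    A = [[4.0 if i == j else (-1.0 if abs(i - j) == 1 else 0.0) for j in range(n)]
         for i in range(n)]
    b = [1.0] * n
    x_jac, k_jac = solve(jacobi_step, A, b)
    x_gs, k_gs = solve(sor_step, A, b)               # plain Gauss-Seidel
    x_sor, k_sor = solve(sor_step, A, b, omega=1.1)  # slightly over-relaxed
    ```

    On this system Gauss-Seidel needs far fewer sweeps than Jacobi, and a modest over-relaxation factor reduces the count further, mirroring the speedups reported in the abstract.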

  8. Efficient Multi-Stage Time Marching for Viscous Flows via Local Preconditioning

    NASA Technical Reports Server (NTRS)

    Kleb, William L.; Wood, William A.; van Leer, Bram

    1999-01-01

    A new method has been developed to accelerate the convergence of explicit time-marching, laminar, Navier-Stokes codes through the combination of local preconditioning and multi-stage time marching optimization. Local preconditioning is a technique to modify the time-dependent equations so that all information moves or decays at nearly the same rate, thus relieving the stiffness for a system of equations. Multi-stage time marching can be optimized by modifying its coefficients to account for the presence of viscous terms, allowing larger time steps. We show it is possible to optimize the time marching scheme for a wide range of cell Reynolds numbers for the scalar advection-diffusion equation, and local preconditioning allows this optimization to be applied to the Navier-Stokes equations. Convergence acceleration of the new method is demonstrated through numerical experiments with circular advection and laminar boundary-layer flow over a flat plate.

  9. Optimal design of geodesically stiffened composite cylindrical shells

    NASA Technical Reports Server (NTRS)

    Gendron, G.; Guerdal, Z.

    1992-01-01

    An optimization system based on the finite element code Computational Structural Mechanics (CSM) Testbed and the optimization program Automated Design Synthesis (ADS) is described. The optimization system can be used to obtain minimum-weight designs of composite stiffened structures. Ply thicknesses, ply orientations, and stiffener heights can be used as design variables. Buckling, displacement, and material failure constraints can be imposed on the design. The system is used to conduct a design study of geodesically stiffened shells. For comparison purposes, optimal designs of unstiffened shells and shells stiffened by rings and stringers are also obtained. Trends in the design of geodesically stiffened shells are identified. An approach to include local stress concentrations during the design optimization process is then presented. The method is based on a global/local analysis technique. It employs spline interpolation functions to determine displacements and rotations from a global model, which are used as 'boundary conditions' for the local model. The organization of the strategy in the context of an optimization process is described. The method is validated with an example.

  10. Walking the Filament of Feasibility: Global Optimization of Highly-Constrained, Multi-Modal Interplanetary Trajectories Using a Novel Stochastic Search Technique

    NASA Technical Reports Server (NTRS)

    Englander, Arnold C.; Englander, Jacob A.

    2017-01-01

    Interplanetary trajectory optimization problems are highly complex and are characterized by a large number of decision variables, equality and inequality constraints, and many locally optimal solutions. Stochastic global search techniques, coupled with a large-scale NLP solver, have been shown to solve such problems but are inadequately robust when the problem constraints become very complex. In this work, we present a novel search algorithm that takes advantage of the fact that equality constraints effectively collapse the solution space to lower dimensionality. This new approach walks the "filament" of feasibility to efficiently find the globally optimal solution.

  11. A parallel competitive Particle Swarm Optimization for non-linear first arrival traveltime tomography and uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Luu, Keurfon; Noble, Mark; Gesret, Alexandrine; Belayouni, Nidhal; Roux, Pierre-François

    2018-04-01

    Seismic traveltime tomography is an optimization problem that requires large computational efforts. Therefore, linearized techniques are commonly used for their low computational cost. These local optimization methods are likely to get trapped in a local minimum as they critically depend on the initial model. On the other hand, global optimization methods based on MCMC are insensitive to the initial model but turn out to be computationally expensive. Particle Swarm Optimization (PSO) is a rather new global optimization approach with few tuning parameters that has shown excellent convergence rates and is straightforwardly parallelizable, allowing a good distribution of the workload. However, while it can traverse several local minima of the evaluated misfit function, classical implementations of PSO can get trapped in local minima at later iterations as particle inertia fades. We propose a Competitive PSO (CPSO) to help particles escape from local minima, with a simple implementation that improves the swarm's diversity. The model space can be sampled by running the optimizer multiple times and keeping all the models explored by the swarms in the different runs. A traveltime tomography algorithm based on CPSO is successfully applied to a real 3D data set in the context of induced seismicity.

  12. Product Mix Selection Using AN Evolutionary Technique

    NASA Astrophysics Data System (ADS)

    Tsoulos, Ioannis G.; Vasant, Pandian

    2009-08-01

    This paper proposes an evolutionary technique for the solution of a real-life industrial problem, in particular the product mix selection problem. The evolutionary technique is a combination of a genetic algorithm that preserves the feasibility of the trial solutions with penalties and a local optimization method. The goal of this paper has been achieved in finding the best near-optimal solution for the profit fitness function with respect to the vagueness factor and the level of satisfaction. The profit values found will be very useful to decision makers in the industrial engineering sector for implementation purposes. It is possible to improve the solutions obtained in this study by employing other meta-heuristic methods such as simulated annealing, tabu search, ant colony optimization, particle swarm optimization and artificial immune systems.
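    A minimal sketch of the paper's general recipe, a penalty-based GA followed by a local improvement step, is given below on an invented one-dimensional "profit" function with a feasibility limit (all names and parameter values are illustrative assumptions, not the authors' product-mix model):

    ```python
    import random

    def hybrid_ga(f, penalty, bounds, pop_size=30, gens=60, seed=1):
        """GA on a penalized fitness, followed by a local hill-climb polish."""
        rng = random.Random(seed)
        lo, hi = bounds
        fit = lambda x: f(x) - penalty(x)           # infeasibility is penalized
        pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
        for _ in range(gens):
            new = []
            for _ in range(pop_size):
                # binary-tournament parents, blend crossover, Gaussian mutation
                a = max(rng.sample(pop, 2), key=fit)
                b = max(rng.sample(pop, 2), key=fit)
                child = 0.5 * (a + b) + rng.gauss(0.0, 0.1 * (hi - lo))
                new.append(min(hi, max(lo, child)))
            pop = new
        best = max(pop, key=fit)
        step = 0.01 * (hi - lo)                     # local improvement stage
        while step > 1e-6:
            for cand in (best - step, best + step):
                if lo <= cand <= hi and fit(cand) > fit(best):
                    best = cand
                    break
            else:
                step *= 0.5
        return best

    # Invented 1-D "profit" curve with a feasibility limit x <= 3
    profit = lambda x: 10.0 - (x - 2.5) ** 2
    limit = lambda x: 100.0 * max(0.0, x - 3.0) ** 2
    x_best = hybrid_ga(profit, limit, bounds=(0.0, 10.0))
    ```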

  13. Porous biodegradable lumbar interbody fusion cage design and fabrication using integrated global-local topology optimization with laser sintering.

    PubMed

    Kang, Heesuk; Hollister, Scott J; La Marca, Frank; Park, Paul; Lin, Chia-Ying

    2013-10-01

    Biodegradable cages have received increasing attention for their use in spinal procedures involving interbody fusion to resolve complications associated with the use of nondegradable cages, such as stress shielding and long-term foreign body reaction. However, the relatively weak initial material strength compared to permanent materials and subsequent reduction due to degradation may be problematic. To design a porous biodegradable interbody fusion cage for a preclinical large animal study that can withstand physiological loads while possessing sufficient interconnected porosity for bony bridging and fusion, we developed a multiscale topology optimization technique. Topology optimization at the macroscopic scale provides optimal structural layout that ensures mechanical strength, while optimally designed microstructures, which replace the macroscopic material layout, ensure maximum permeability. Optimally designed cages were fabricated using solid freeform fabrication of poly(ε-caprolactone) mixed with hydroxyapatite. Compression tests revealed that the yield strength of optimized fusion cages was two times that of typical human lumbar spine loads. Computational analysis further confirmed the mechanical integrity within the human lumbar spine, although the pore structure locally underwent higher stress than yield stress. This optimization technique may be utilized to balance the complex requirements of load-bearing, stress shielding, and interconnected porosity when using biodegradable materials for fusion cages.

  14. GLOBAL SOLUTIONS TO FOLDED CONCAVE PENALIZED NONCONVEX LEARNING

    PubMed Central

    Liu, Hongcheng; Yao, Tao; Li, Runze

    2015-01-01

    This paper is concerned with solving nonconvex learning problems with a folded concave penalty. Although their global solutions entail desirable statistical properties, optimization techniques that guarantee global optimality in a general setting have been lacking. In this paper, we show that a class of nonconvex learning problems are equivalent to general quadratic programs. This equivalence facilitates the development of mixed integer linear programming reformulations, which admit finite algorithms that find a provably global optimal solution. We refer to this reformulation-based technique as mixed integer programming-based global optimization (MIPGO). To our knowledge, this is the first global optimization scheme with a theoretical guarantee for folded concave penalized nonconvex learning with the SCAD penalty (Fan and Li, 2001) and the MCP penalty (Zhang, 2010). Numerical results indicate that MIPGO significantly outperforms the state-of-the-art solution scheme, local linear approximation, and other alternative solution techniques in the literature in terms of solution quality. PMID:27141126

  15. Reliability Sensitivity Analysis and Design Optimization of Composite Structures Based on Response Surface Methodology

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2003-01-01

    This report discusses the development and application of two alternative strategies in the form of global and sequential local response surface (RS) techniques for the solution of reliability-based optimization (RBO) problems. The problem of a thin-walled composite circular cylinder under axial buckling instability is used as a demonstrative example. In this case, the global technique uses a single second-order RS model to estimate the axial buckling load over the entire feasible design space (FDS) whereas the local technique uses multiple first-order RS models with each applied to a small subregion of FDS. Alternative methods for the calculation of unknown coefficients in each RS model are explored prior to the solution of the optimization problem. The example RBO problem is formulated as a function of 23 uncorrelated random variables that include material properties, thickness and orientation angle of each ply, cylinder diameter and length, as well as the applied load. The mean values of the 8 ply thicknesses are treated as independent design variables. While the coefficients of variation of all random variables are held fixed, the standard deviations of ply thicknesses can vary during the optimization process as a result of changes in the design variables. The structural reliability analysis is based on the first-order reliability method with reliability index treated as the design constraint. In addition to the probabilistic sensitivity analysis of reliability index, the results of the RBO problem are presented for different combinations of cylinder length and diameter and laminate ply patterns. The two strategies are found to produce similar results in terms of accuracy with the sequential local RS technique having a considerably better computational efficiency.
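    The global strategy's core step, fitting a second-order response surface to sampled responses and then working with the cheap surrogate instead of the expensive analysis, can be sketched in one dimension (the noisy "buckling load" samples below are synthetic; the report's actual RS models are multivariate):

    ```python
    import random

    def fit_quadratic_rs(xs, ys):
        """Fit y ≈ c0 + c1*x + c2*x^2 by solving the 3x3 normal equations."""
        # Accumulate X^T X and X^T y for the design matrix [1, x, x^2]
        A = [[0.0] * 3 for _ in range(3)]
        r = [0.0] * 3
        for x, y in zip(xs, ys):
            phi = [1.0, x, x * x]
            for i in range(3):
                r[i] += phi[i] * y
                for j in range(3):
                    A[i][j] += phi[i] * phi[j]
        # Gaussian elimination with partial pivoting
        for col in range(3):
            p = max(range(col, 3), key=lambda i: abs(A[i][col]))
            A[col], A[p] = A[p], A[col]
            r[col], r[p] = r[p], r[col]
            for i in range(col + 1, 3):
                m = A[i][col] / A[col][col]
                for j in range(col, 3):
                    A[i][j] -= m * A[col][j]
                r[i] -= m * r[col]
        c = [0.0] * 3
        for i in (2, 1, 0):          # back substitution
            c[i] = (r[i] - sum(A[i][j] * c[j] for j in range(i + 1, 3))) / A[i][i]
        return c

    # Noisy samples of an expensive "buckling load" with its peak near x = 0.6
    rng = random.Random(0)
    xs = [i / 20 for i in range(21)]
    ys = [-(x - 0.6) ** 2 + 1 + rng.gauss(0, 0.01) for x in xs]
    c0, c1, c2 = fit_quadratic_rs(xs, ys)
    x_opt = -c1 / (2 * c2)          # stationary point of the fitted surrogate
    ```

    The sequential local variant in the report would instead fit first-order models on small subregions and re-fit as the optimizer moves.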

  16. Identification of inelastic parameters based on deep drawing forming operations using a global-local hybrid Particle Swarm approach

    NASA Astrophysics Data System (ADS)

    Vaz, Miguel; Luersen, Marco A.; Muñoz-Rojas, Pablo A.; Trentin, Robson G.

    2016-04-01

    The application of optimization techniques to the identification of inelastic material parameters has increased substantially in recent years. The complex stress-strain paths and high nonlinearity, typical of this class of problems, require the development of robust and efficient techniques for inverse problems able to account for an irregular topography of the fitness surface. Within this framework, this work investigates the application of the gradient-based Sequential Quadratic Programming method, the Nelder-Mead downhill simplex algorithm, Particle Swarm Optimization (PSO), and a global-local PSO-Nelder-Mead hybrid scheme to the identification of inelastic parameters based on a deep drawing operation. The hybrid technique proved to be the best strategy, combining the good PSO performance in approaching the global minimum's basin of attraction with the efficiency demonstrated by the Nelder-Mead algorithm in locating the minimum itself.

  17. Results and Error Estimates from GRACE Forward Modeling over Greenland, Canada, and Alaska

    NASA Astrophysics Data System (ADS)

    Bonin, J. A.; Chambers, D. P.

    2012-12-01

    Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Greenland and Antarctica. However, the accuracy of the forward model technique has not been determined, nor is it known how the distribution of the local basins affects the results. We use a "truth" model composed of hydrology and ice-melt slopes as an example case, to estimate the uncertainties of this forward modeling method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We then apply these optimal parameters in a forward model estimate created from RL05 GRACE data. We compare the resulting mass slopes with the expected systematic errors from the simulation, as well as GIA and basic trend-fitting uncertainties. We also consider whether specific regions (such as Ellesmere Island and Baffin Island) can be estimated reliably using our optimal basin layout.
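    The forward-modeling step is, at its core, a weighted least squares projection of observations onto a small set of basin patterns. A two-basin sketch with invented leakage patterns and weights (not GRACE data) looks like:

    ```python
    def wls(G, y, w):
        """Weighted least squares for two basin amplitudes: minimize
        sum_i w[i] * (y[i] - G[i][0]*m0 - G[i][1]*m1)^2 via the 2x2
        normal equations."""
        a = b = c = d = e = 0.0
        for (g0, g1), yi, wi in zip(G, y, w):
            a += wi * g0 * g0
            b += wi * g0 * g1
            c += wi * g1 * g1
            d += wi * g0 * yi
            e += wi * g1 * yi
        det = a * c - b * b
        return (c * d - b * e) / det, (a * e - b * d) / det

    # Hypothetical smoothed "leakage" patterns of two adjacent basins,
    # observed at five points, with true amplitudes (2.0, -1.0)
    G = [(1.0, 0.2), (0.8, 0.5), (0.5, 0.8), (0.2, 1.0), (0.1, 0.9)]
    true = (2.0, -1.0)
    y = [g0 * true[0] + g1 * true[1] for g0, g1 in G]
    w = [1.0, 1.0, 0.5, 1.0, 2.0]   # down-/up-weight noisier/cleaner points
    m0, m1 = wls(G, y, w)
    ```

    With noise-free synthetic data the amplitudes are recovered exactly; the paper's simulations probe how noise and the choice of basin layout degrade this recovery.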

  18. Localized analysis of paint-coat drying using dynamic speckle interferometry

    NASA Astrophysics Data System (ADS)

    Sierra-Sosa, Daniel; Tebaldi, Myrian; Grumel, Eduardo; Rabal, Hector; Elmaghraby, Adel

    2018-07-01

    Paint-coating is part of several industrial processes, including the automotive industry, architectural coatings, and machinery and appliances. These paint-coatings must comply with high quality standards; for this reason, evaluation techniques for paint-coatings are in constant development. One important factor in the paint-coating process is drying, as it influences the quality of the final result. In this work we present an assessment technique based on optical dynamic speckle interferometry that allows the temporal activity of the paint-coating drying process to be evaluated, providing localized information on drying. This localized information is relevant for addressing drying homogeneity, optimal drying, and quality control. The technique relies on the definition of a new temporal history of the speckle patterns to obtain the local activity; this information is then clustered to provide a convenient indicator of the different drying stages. The experimental results presented were validated using gravimetric drying curves.

  19. QR images: optimized image embedding in QR codes.

    PubMed

    Garateguy, Gonzalo J; Arce, Gonzalo R; Lau, Daniel L; Villarreal, Ofelia P

    2014-07-01

    This paper introduces the concept of QR images, an automatic method to embed QR codes into color images with a bounded probability of detection error. These embeddings are compatible with standard decoding applications and can be applied to any color image with full area coverage. The QR information bits are encoded into the luminance values of the image, taking advantage of the immunity of QR readers against local luminance disturbances. To mitigate the visual distortion of the QR image, the algorithm utilizes halftoning masks for the selection of modified pixels and nonlinear programming techniques to locally optimize luminance levels. A tractable model for the probability of error is developed, and models of the human visual system are considered in the quality metric used to optimize the luminance levels of the QR image. To minimize the processing time, the proposed optimization techniques consider the mechanics of a common binarization method and are designed to be amenable to parallel implementations. Experimental results show the graceful degradation of the decoding rate and the perceptual quality as a function of the embedding parameters. A visual comparison between the proposed and existing methods is presented.

  20. Advanced fitness landscape analysis and the performance of memetic algorithms.

    PubMed

    Merz, Peter

    2004-01-01

    Memetic algorithms (MAs) have proven very effective in combinatorial optimization. This paper offers explanations as to why this is so by investigating the performance of MAs in terms of efficiency and effectiveness. A special class of MAs is used to discuss efficiency and effectiveness for local search and evolutionary meta-search. It is shown that the efficiency of MAs can be increased drastically with the use of domain knowledge. However, effectiveness highly depends on the structure of the problem. As is well known, identifying this structure is made easier with the notion of fitness landscapes: the local properties of the fitness landscape strongly influence the effectiveness of the local search, while the global properties strongly influence the effectiveness of the evolutionary meta-search. This paper also introduces new techniques for analyzing the fitness landscapes of combinatorial problems; these techniques focus on the investigation of random walks in the fitness landscape starting at locally optimal solutions, as well as on the escape from the basins of attraction of current local optima. It is shown for NK-landscapes and landscapes of the unconstrained binary quadratic programming problem (BQP) that a random walk to another local optimum can be used to explain the efficiency of recombination in comparison to mutation. Moreover, the paper shows that other aspects, like the size of the basins of attraction of local optima, are important for the efficiency of MAs, and a local search escape analysis is proposed. These simple analysis techniques have several advantages over previously proposed statistical measures and provide valuable insight into the behaviour of MAs on different kinds of landscapes.
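    The random-walk analysis described above can be sketched on a small NK-style landscape: hill-climb to a local optimum, perturb it by flipping d bits, and re-climb to see how far the walk escapes the original basin (the landscape size, K value, and walk lengths below are arbitrary choices for illustration, not the paper's experimental setup):

    ```python
    import random

    def nk_landscape(n, k, seed=0):
        """Random NK-style landscape: fitness is the mean of n random lookup
        tables, each depending on bit i and its k right neighbours (circular)."""
        rng = random.Random(seed)
        tables = [[rng.random() for _ in range(2 ** (k + 1))] for _ in range(n)]
        def f(bits):
            total = 0.0
            for i in range(n):
                idx = 0
                for j in range(k + 1):
                    idx = (idx << 1) | bits[(i + j) % n]
                total += tables[i][idx]
            return total / n
        return f

    def hill_climb(f, bits):
        """Best-improvement local search over single-bit flips."""
        bits = bits[:]
        while True:
            cur = f(bits)
            best_i, best_v = None, cur
            for i in range(len(bits)):
                bits[i] ^= 1
                v = f(bits)
                bits[i] ^= 1
                if v > best_v:
                    best_i, best_v = i, v
            if best_i is None:
                return bits, cur        # no flip improves: local optimum
            bits[best_i] ^= 1

    rng = random.Random(1)
    n = 14
    f = nk_landscape(n, k=2)
    start, start_val = hill_climb(f, [rng.randrange(2) for _ in range(n)])
    dists = {}
    for d in (1, 4, 8):
        walk = start[:]
        for i in rng.sample(range(n), d):   # random walk of length d
            walk[i] ^= 1
        opt, opt_val = hill_climb(f, walk)
        # Hamming distance between the original and the newly reached optimum
        dists[d] = sum(a != b for a, b in zip(start, opt))
    ```

    Averaging such distances over many walks gives the kind of landscape statistic the paper uses to compare recombination against mutation.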

  1. Localization of multilayer networks by optimized single-layer rewiring.

    PubMed

    Jalan, Sarika; Pradhan, Priodyuti

    2018-04-01

    We study localization properties of principal eigenvectors (PEVs) of multilayer networks (MNs). Starting with a multilayer network corresponding to a delocalized PEV, we rewire the network edges using an optimization technique such that the PEV of the rewired multilayer network becomes more localized. The framework allows us to scrutinize structural and spectral properties of the networks at various localization points during the rewiring process. We show that rewiring only one layer is enough to attain a MN having a highly localized PEV. Our investigation reveals that a single edge rewiring of the optimized MN can lead to the complete delocalization of a highly localized PEV. This sensitivity in the localization behavior of PEVs is accompanied with the second largest eigenvalue lying very close to the largest one. This observation opens an avenue to gain a deeper insight into the origin of PEV localization of networks. Furthermore, analysis of multilayer networks constructed using real-world social and biological data shows that the localization properties of these real-world multilayer networks are in good agreement with the simulation results for the model multilayer network. This paper is relevant to applications that require understanding propagation of perturbation in multilayer networks.
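    A standard way to quantify the PEV localization discussed above is the inverse participation ratio (IPR) of the normalized principal eigenvector. The single-layer sketch below (a star graph stands in for a multilayer network; the paper's MN formalism is not reproduced) computes the PEV by shifted power iteration and evaluates its IPR:

    ```python
    import math

    def principal_eigenvector(adj, iters=500):
        """Shifted power iteration for the leading eigenvector of a symmetric
        non-negative adjacency matrix. Iterating (A + I) instead of A keeps
        the iteration convergent even on bipartite graphs, whose spectrum is
        symmetric about zero, without changing the eigenvectors."""
        n = len(adj)
        v = [1.0 / math.sqrt(n)] * n
        for _ in range(iters):
            w = [v[i] + sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
            norm = math.sqrt(sum(x * x for x in w))
            v = [x / norm for x in w]
        return v

    def ipr(v):
        """Inverse participation ratio: 1/n for a fully delocalized normalized
        eigenvector, approaching 1 as the weight concentrates on one node."""
        return sum(x ** 4 for x in v)

    # Star graph (one hub, nine leaves): the PEV concentrates on the hub,
    # so its IPR sits well above the delocalized value 1/n.
    n = 10
    star = [[0.0] * n for _ in range(n)]
    for i in range(1, n):
        star[0][i] = star[i][0] = 1.0
    v = principal_eigenvector(star)
    ```

    A rewiring scheme in the spirit of the paper would accept edge swaps that increase this IPR.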

  3. [Local anesthesia in dentistry. Additional techniques].

    PubMed

    Makkes, P C; Bouvy-Berends, E C; Rupreht, J

    1996-05-01

    Patients with insufficient coping abilities can still be treated dentally by employing several additional techniques. Behaviour management should always be the starting point of all dental treatment strategies. Ultimately, the administration of well-selected drugs should lead, via anxiolysis, sedation or general anaesthesia, to a pain-free and optimally treatable patient.

  4. Cassette Series Designed for Live-Cell Imaging of Proteins and High Resolution Techniques in Yeast

    PubMed Central

    Young, Carissa L.; Raden, David L.; Caplan, Jeffrey; Czymmek, Kirk; Robinson, Anne S.

    2012-01-01

    During the past decade, it has become clear that protein function and regulation are highly dependent upon intracellular localization. Although fluorescent protein variants are ubiquitously used to monitor protein dynamics, localization, and abundance, fluorescent light microscopy techniques often lack the resolution to explore protein heterogeneity and cellular ultrastructure. Several approaches have been developed to identify, characterize, and monitor the spatial localization of proteins and complexes at the sub-organelle level; yet, many of these techniques have not been applied to yeast. Thus, we have constructed a series of cassettes containing codon-optimized epitope tags, fluorescent protein variants that cover the full spectrum of visible light, a TetCys motif used for FlAsH-based localization, and the first evaluation in yeast of a photoswitchable variant, mEos2, to monitor discrete subpopulations of proteins via confocal microscopy. This series of modules, complete with six different selection markers, provides optimal flexibility during live-cell imaging and multicolor labeling in vivo. Furthermore, high-resolution imaging techniques include the yeast-enhanced TetCys motif, which is compatible with diaminobenzidine photooxidation used for protein localization by electron microscopy, and mEos2, which is ideal for super-resolution microscopy. We have examined the utility of our cassettes by analyzing all probes fused to the C-terminus of Sec61, a polytopic membrane protein of the endoplasmic reticulum of moderate protein concentration, in order to directly compare fluorescent probes, their utility and technical applications. Our series of cassettes expands the repertoire of molecular tools available to advance targeted spatiotemporal investigations using multiple live-cell, super-resolution or electron microscopy imaging techniques. PMID:22473760

  5. Three-Dimensional Microwave Hyperthermia for Breast Cancer Treatment in a Realistic Environment Using Particle Swarm Optimization.

    PubMed

    Nguyen, Phong Thanh; Abbosh, Amin; Crozier, Stuart

    2017-06-01

    In this paper, a technique for noninvasive microwave hyperthermia treatment for breast cancer is presented. In the proposed technique, microwave hyperthermia of patient-specific breast models is implemented using a three-dimensional (3-D) antenna array based on differential beam-steering subarrays to locally raise the temperature of the tumor to therapeutic values while keeping healthy tissue at normal body temperature. This approach is realized by optimizing the excitations (phases and amplitudes) of the antenna elements using the global optimization method particle swarm optimization. The antenna excitation phases are optimized to maximize the power at the tumor, whereas the amplitudes are optimized to accomplish the required temperature at the tumor. During the optimization, the technique ensures that no hotspots exist in healthy tissue. To implement the technique, a combination of linked electromagnetic and thermal analyses using MATLAB and the full-wave electromagnetic simulator is conducted. The technique is tested at 4.2 GHz, which is a compromise between the required power penetration and focusing, in a realistic simulation environment, which is built using a 3-D antenna array of 4 × 6 unidirectional antenna elements. The presented results on very dense 3-D breast models, which have realistic dielectric and thermal properties, validate the capability of the proposed technique in focusing power at the exact location and volume of the tumor even in the challenging cases where tumors are embedded in glands. Moreover, the models indicate the capability of the technique in dealing with tumors at different on- and off-axis locations within the breast with high efficiency in using the microwave power.

  6. A hybrid binary particle swarm optimization for large capacitated multi item multi level lot sizing (CMIMLLS) problem

    NASA Astrophysics Data System (ADS)

    Mishra, S. K.; Sahithi, V. V. D.; Rao, C. S. P.

    2016-09-01

    The lot sizing problem deals with finding optimal order quantities which minimize the ordering and holding costs of a product mix. When multiple items at multiple levels with capacity restrictions are considered, the lot sizing problem becomes NP-hard. Many heuristics developed in the past have failed due to problem size, computational complexity, and time. However, the authors were successful in developing a PSO-based technique, namely the iterative improvement binary particle swarm technique, to address the very large capacitated multi-item multi-level lot sizing (CMIMLLS) problem. First, a binary Particle Swarm Optimization (BPSO) algorithm is used to find a solution in a reasonable time, and then an iterative improvement local search mechanism is employed to improve the solution obtained by the BPSO algorithm. This hybrid mechanism of applying local search to the global solution is found to improve the quality of solutions with respect to time; the IIBPSO method thus performs best and shows excellent results.
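    A compact sketch of the two-stage idea, a Kennedy-Eberhart-style binary PSO followed by a single-bit-flip iterative improvement pass, is given below on an invented capacitated selection toy (the item values, weights, and penalty are illustrative and far smaller than a real CMIMLLS instance):

    ```python
    import math
    import random

    def bpso(fitness, n_bits, n_particles=20, iters=80, seed=3):
        """Binary PSO: the velocity passes through a sigmoid giving the
        probability that a bit is set. A final single-bit-flip pass plays
        the role of the iterative improvement stage."""
        rng = random.Random(seed)
        sig = lambda v: 1.0 / (1.0 + math.exp(-v))
        pos = [[rng.randrange(2) for _ in range(n_bits)] for _ in range(n_particles)]
        vel = [[0.0] * n_bits for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        pbest_val = [fitness(p) for p in pos]
        g = max(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(n_bits):
                    r1, r2 = rng.random(), rng.random()
                    vel[i][d] += (r1 * (pbest[i][d] - pos[i][d])
                                  + r2 * (gbest[d] - pos[i][d]))
                    vel[i][d] = max(-6.0, min(6.0, vel[i][d]))  # velocity clamp
                    pos[i][d] = 1 if rng.random() < sig(vel[i][d]) else 0
                v = fitness(pos[i])
                if v > pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], v
                    if v > gbest_val:
                        gbest, gbest_val = pos[i][:], v
        improved = True                 # iterative improvement stage
        while improved:
            improved = False
            for d in range(n_bits):
                gbest[d] ^= 1
                v = fitness(gbest)
                if v > gbest_val:
                    gbest_val, improved = v, True
                else:
                    gbest[d] ^= 1
        return gbest, gbest_val

    # Invented toy instance: choose which items to order, maximizing value
    # under a soft capacity penalty (a stand-in for holding/ordering costs).
    vals = [4, 2, 6, 3, 9, 5, 8, 1, 7, 3]
    wts = [3, 1, 4, 2, 6, 3, 5, 1, 4, 2]
    cap = 15

    def fitness(bits):
        v = sum(b * x for b, x in zip(bits, vals))
        w = sum(b * x for b, x in zip(bits, wts))
        return v - 10 * max(0, w - cap)   # linear penalty per unit over capacity
    best, val = bpso(fitness, n_bits=10)
    ```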

  7. Hybrid optimization and Bayesian inference techniques for a non-smooth radiation detection problem

    DOE PAGES

    Stefanescu, Razvan; Schmidt, Kathleen; Hite, Jason; ...

    2016-12-12

    In this paper, we propose several algorithms to recover the location and intensity of a radiation source located in a simulated 250 × 180 m block of an urban center based on synthetic measurements. Radioactive decay and detection are Poisson random processes, so we employ likelihood functions based on this distribution. Owing to the domain geometry and the proposed response model, the negative logarithm of the likelihood is only piecewise continuously differentiable, and it has multiple local minima. To address these difficulties, we investigate three hybrid algorithms composed of mixed optimization techniques. For global optimization, we consider simulated annealing, particle swarm, and genetic algorithms, which rely solely on objective function evaluations; that is, they do not evaluate the gradient of the objective function. By employing early stopping criteria for the global optimization methods, a pseudo-optimum point is obtained. This is subsequently utilized as the initial value by the deterministic implicit filtering method, which is able to find local extrema of non-smooth functions, to finish the search in a narrow domain. These new hybrid techniques, combining global optimization and implicit filtering, address difficulties associated with the non-smooth response, and they are shown to significantly decrease the computational time relative to the global optimization methods. To quantify uncertainties associated with the source location and intensity, we employ the delayed rejection adaptive Metropolis and DiffeRential Evolution Adaptive Metropolis algorithms. Finally, marginal densities of the source properties are obtained, and the means of the chains agree closely with the estimates produced by the hybrid algorithms.
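    The global-then-local hybrid strategy can be sketched on a small non-smooth surrogate. Everything below is an illustrative assumption: a piecewise-linear objective with a decoy local minimum stands in for the negative log-likelihood, early-stopped simulated annealing plays the global role, and a shrinking-stencil coordinate search stands in for implicit filtering.

```python
import numpy as np

rng = np.random.default_rng(2)

a, b = np.array([3.0, -1.0]), np.array([-2.0, 2.0])

def f(x):
    """Piecewise-linear, non-smooth objective: global minimum 0 at `a`,
    a decoy local minimum of depth 1 at `b`."""
    return min(np.abs(x - a).sum(), np.abs(x - b).sum() + 1.0)

def annealing(iters=300, T0=2.0):
    """Simulated annealing, stopped early to yield a pseudo-optimum."""
    x = rng.uniform(-5, 5, 2)
    fx = f(x)
    best, fbest = x.copy(), fx
    for k in range(iters):
        T = T0 * (1 - k / iters)
        y = x + rng.normal(0, 0.5, 2)
        fy = f(y)
        # accept downhill moves, and uphill moves with Boltzmann probability
        if fy < fx or rng.random() < np.exp(-(fy - fx) / max(T, 1e-9)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x.copy(), fx
    return best

def stencil_descent(x, h=0.5, tol=1e-6):
    """Implicit-filtering-style local search: sample a +/- h stencil in
    each coordinate, move to the best point, halve h on failure."""
    x = x.copy()
    while h > tol:
        pts = [x + s * h * e for e in np.eye(2) for s in (1, -1)]
        fb = min(pts, key=f)
        if f(fb) < f(x):
            x = fb
        else:
            h /= 2
    return x

x0 = annealing()              # coarse global phase (pseudo-optimum)
x_hat = stencil_descent(x0)   # non-smooth local finish
```

    The stencil search never uses gradients, so the kinks of the objective cause it no difficulty, which is the point of pairing it with a derivative-free global phase.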

  8. Locality Aware Concurrent Start for Stencil Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrestha, Sunil; Gao, Guang R.; Manzano Franco, Joseph B.

    Stencil computations are at the heart of many physical simulations used in scientific codes, so there exists a plethora of optimization efforts for this family of computations. Among these techniques, tiling techniques that allow concurrent start have proven very efficient at providing better performance for these critical kernels. Nevertheless, with many-core designs being the norm, these optimization techniques might not be able to fully exploit locality (both spatial and temporal) on multiple levels of the memory hierarchy without compromising parallelism. It is no longer true that the machine can be seen as a homogeneous collection of nodes with caches, main memory and an interconnect network. New architectural designs exhibit complex groupings of nodes, cores, threads, caches and memory connected by an ever-evolving network-on-chip design. These designs may benefit greatly from carefully crafted schedules and groupings that encourage parallel actors (i.e., threads, cores or nodes) to be aware of the computational history of other actors in close proximity. In this paper, we provide an efficient tiling technique that allows hierarchical concurrent start for memory-hierarchy-aware tile groups. Each execution schedule and tile shape exploits the available parallelism, load balance and locality present in the given applications. We demonstrate our technique on the Intel Xeon Phi architecture with selected and representative stencil kernels, showing improvements ranging from 5.58% to 31.17% over existing state-of-the-art techniques.
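    The basic idea of spatial tiling for a stencil (the paper's hierarchical concurrent-start schedule is far more elaborate) can be shown with a 5-point Jacobi sweep: the tiled version computes the same result tile by tile, each tile reading a one-cell halo, so tiles within one sweep are independent and could start concurrently.

```python
import numpy as np

def jacobi_step(u):
    """One untiled 5-point Jacobi sweep (reference implementation)."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])
    return v

def jacobi_step_tiled(u, tile=32):
    """Same sweep computed tile by tile; each tile only reads its own
    cells plus a one-cell halo, so tiles are mutually independent."""
    n, m = u.shape
    v = u.copy()
    for i0 in range(1, n - 1, tile):
        for j0 in range(1, m - 1, tile):
            i1 = min(i0 + tile, n - 1)
            j1 = min(j0 + tile, m - 1)
            v[i0:i1, j0:j1] = 0.25 * (u[i0-1:i1-1, j0:j1] +
                                      u[i0+1:i1+1, j0:j1] +
                                      u[i0:i1, j0-1:j1-1] +
                                      u[i0:i1, j0+1:j1+1])
    return v

rng = np.random.default_rng(3)
grid = rng.random((100, 100))
```

    The tile size would be chosen to fit a cache level; temporal tiling across sweeps (where concurrent start becomes nontrivial) additionally requires skewed or diamond-shaped tiles.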

  9. Complementary Feeding Diets Made of Local Foods Can Be Optimized, but Additional Interventions Will Be Needed to Meet Iron and Zinc Requirements in 6- to 23-Month-Old Children in Low- and Middle-Income Countries.

    PubMed

    Osendarp, Saskia J M; Broersen, Britt; van Liere, Marti J; De-Regil, Luz M; Bahirathan, Lavannya; Klassen, Eva; Neufeld, Lynnette M

    2016-12-01

    The question of whether diets composed of local foods can meet recommended nutrient intakes in children aged 6 to 23 months living in low- and middle-income countries is contested. The aim was to review evidence from studies evaluating whether (1) macro- and micronutrient requirements of children aged 6 to 23 months from low- and middle-income countries are met by the consumption of locally available foods ("observed intake") and (2) nutrient requirements can be met when the use of local foods is optimized using modeling techniques ("modeled intake"). Twenty-three articles were included after conducting a systematic literature search. To allow for comparisons between studies, findings of 15 observed intake studies were compared against their contribution to a standardized recommended nutrient intake from complementary foods. For studies with data on intake distribution, the percentage below the estimated average requirement was calculated. Data from the observed intake studies indicate that children aged 6 to 23 months meet requirements for protein, while diets are inadequate in calcium, iron, and zinc. For energy, vitamin A, thiamin, riboflavin, niacin, folate, and vitamin C, children also did not always fulfill their requirements. Very few studies reported on vitamin B6, B12, and magnesium, and no conclusions can be drawn for these nutrients. When diets are optimized using modeling techniques, most of these nutrient requirements can be met, with the exception of iron and zinc and, in some settings, calcium, folate, and B vitamins. Our findings suggest that optimizing the use of local foods in diets of children aged 6 to 23 months can improve nutrient intakes; however, additional cost-effective strategies are needed to ensure adequate intakes of iron and zinc. © The Author(s) 2016.

  10. A multilevel control system for the large space telescope. [numerical analysis/optimal control

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.; Sundareshan, S. K.; Vukcevic, M. B.

    1975-01-01

    A multilevel scheme was proposed for control of the Large Space Telescope (LST), modeled by a three-axis, six-order nonlinear equation. Local controllers were used on the subsystem level to stabilize motions corresponding to the three axes. Global controllers were applied to reduce (and sometimes nullify) the interactions among the subsystems. A multilevel optimization method was developed whereby local quadratic optimizations were performed on the subsystem level, and global control was again used to reduce (nullify) the effect of interactions. The multilevel stabilization and optimization methods are presented as general tools for design and then used in the design of the LST Control System. The methods are entirely computerized, so that they can accommodate higher order LST models with both conceptual and numerical advantages over standard straightforward design techniques.

  11. Automated sequence-specific protein NMR assignment using the memetic algorithm MATCH.

    PubMed

    Volk, Jochen; Herrmann, Torsten; Wüthrich, Kurt

    2008-07-01

    MATCH (Memetic Algorithm and Combinatorial Optimization Heuristics) is a new memetic algorithm for automated sequence-specific polypeptide backbone NMR assignment of proteins. MATCH employs local optimization for tracing partial sequence-specific assignments within a global, population-based search environment, where the simultaneous application of local and global optimization heuristics guarantees high efficiency and robustness. MATCH thus makes combined use of the two predominant concepts in use for automated NMR assignment of proteins. Dynamic transition and inherent mutation are new techniques that enable automatic adaptation to variable quality of the experimental input data. The concept of dynamic transition is incorporated in all major building blocks of the algorithm, where it enables switching between local and global optimization heuristics at any time during the assignment process. Inherent mutation restricts the intrinsically required randomness of the evolutionary algorithm to those regions of the conformation space that are compatible with the experimental input data. Using intact and artificially deteriorated APSY-NMR input data of proteins, MATCH performed sequence-specific resonance assignment with high efficiency and robustness.
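    The memetic principle that MATCH exploits, a population-based global search whose members are each refined by local optimization, can be sketched on a toy assignment task. The instance below is an illustrative assumption (matching six observed values to six expected ones by a permutation, a 1-D stand-in for matching spin systems to sequence positions), not the MATCH algorithm itself.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)

# Toy task: assign n observed values to n expected positions so that the
# total absolute mismatch is minimal (assumed data for illustration).
expected = np.sort(rng.uniform(0, 10, 6))
observed = rng.permutation(expected + rng.normal(0, 0.05, 6))

def cost(perm):
    return np.abs(observed[perm] - expected).sum()

def local_search(perm):
    """Pairwise-swap hill climbing: the 'local' half of the memetic mix."""
    perm = np.array(perm)
    improved = True
    while improved:
        improved = False
        for i, j in combinations(range(6), 2):
            trial = perm.copy()
            trial[i], trial[j] = trial[j], trial[i]
            if cost(trial) < cost(perm):
                perm, improved = trial, True
    return perm

def memetic(generations=10, pop_size=8):
    """Population-based global search; every offspring is locally refined."""
    pop = [local_search(rng.permutation(6)) for _ in range(pop_size)]
    for _ in range(generations):
        children = []
        for p in pop:
            c = p.copy()
            i, j = rng.choice(6, 2, replace=False)
            c[i], c[j] = c[j], c[i]            # mutation (global move)
            children.append(local_search(c))   # memetic local refinement
        pop = sorted(pop + children, key=cost)[:pop_size]
    return pop[0]

best = memetic()
```

    For 1-D absolute deviations the optimal assignment is the order-preserving one, so the memetic search should recover the cost of matching sorted observed values to sorted expected values.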

  12. Integrated aerodynamic/dynamic/structural optimization of helicopter rotor blades using multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Young, Katherine C.; Pritchard, Jocelyn I.; Adelman, Howard M.; Mantay, Wayne R.

    1995-01-01

    This paper describes an integrated aerodynamic/dynamic/structural (IADS) optimization procedure for helicopter rotor blades. The procedure combines performance, dynamics, and structural analyses with a general-purpose optimizer using multilevel decomposition techniques. At the upper level, the structure is defined in terms of global quantities (stiffness, mass, and average strains). At the lower level, the structure is defined in terms of local quantities (detailed dimensions of the blade structure and stresses). The IADS procedure provides an optimization technique that is compatible with industrial design practices in which the aerodynamic and dynamic designs are performed at a global level and the structural design is carried out at a detailed level with considerable dialog and compromise among the aerodynamic, dynamic, and structural groups. The IADS procedure is demonstrated for several examples.

  13. Detection of mitotic nuclei in breast histopathology images using localized ACM and Random Kitchen Sink based classifier.

    PubMed

    Beevi, K Sabeena; Nair, Madhu S; Bindu, G R

    2016-08-01

    The exact measure of mitotic nuclei is a crucial parameter in breast cancer grading and prognosis. This can be achieved by improving the mitotic detection accuracy through careful design of segmentation and classification techniques. In this paper, segmentation of nuclei from breast histopathology images is carried out by a Localized Active Contour Model (LACM) utilizing bio-inspired optimization techniques in the detection stage, in order to handle diffused intensities present along object boundaries. Further, the application of a new optimal machine learning algorithm capable of classifying strongly non-linear data, the Random Kitchen Sink (RKS), shows improved classification performance. The proposed method has been tested on the Mitosis detection in breast cancer histological images (MITOS) dataset provided for the MITOS-ATYPIA CONTEST 2014. The proposed framework achieved 95% recall, 98% precision and a 96% F-score.

  14. Tapping mode SPM local oxidation nanolithography with sub-10 nm resolution

    NASA Astrophysics Data System (ADS)

    Nishimura, S.; Ogino, T.; Takemura, Y.; Shirakashi, J.

    2008-03-01

    Tapping mode SPM local oxidation nanolithography with sub-10 nm resolution is investigated by optimizing the applied bias voltage (V), scanning speed (S) and the oscillation amplitude of the cantilever (A). We fabricated Si oxide wires with an average width of 9.8 nm (V = 17.5 V, S = 250 nm/s, A = 292 nm). In SPM local oxidation with tapping mode operation, it is possible to decrease the size of the water meniscus by enhancing the oscillation amplitude of the cantilever. Hence, it seems that a water meniscus with sub-10 nm dimensions could be formed by precisely optimizing the oxidation conditions. Moreover, we quantitatively explain the size (width and height) of Si oxide wires with a model based on the oxidation ratio, which is defined as the oxidation time divided by the period of the cantilever oscillation. The model allows us to understand the mechanism of local oxidation in tapping mode operation with amplitude modulation. The results imply that sub-10 nm resolution could be achieved using the tapping mode SPM local oxidation technique with optimization of the cantilever dynamics.

  15. Performance optimization of an MHD generator with physical constraints

    NASA Technical Reports Server (NTRS)

    Pian, C. C. P.; Seikel, G. R.; Smith, J. M.

    1979-01-01

    A technique has been described which optimizes the power output of a Faraday MHD generator operating under a prescribed set of electrical and magnetic constraints. The method does not rely on complicated numerical optimization techniques. Instead, the magnetic field and the electrical loading are adjusted at each streamwise location such that the resultant generator design operates at the most limiting of the cited stress levels. The simplicity of the procedure makes it ideal for optimizing generator designs in system analysis studies of power plants. The resultant locally optimum channel designs are, however, not necessarily the globally optimum designs. The results of generator performance calculations are presented for an approximately 2000 MWe size plant. The difference between the maximum-power generator design and the optimal design which maximizes net MHD power is described. The sensitivity of the generator performance to the various operational parameters is also presented.

  16. Optimal Window and Lattice in Gabor Transform. Application to Audio Analysis.

    PubMed

    Lachambre, Helene; Ricaud, Benjamin; Stempfel, Guillaume; Torrésani, Bruno; Wiesmeyr, Christoph; Onchis-Moaca, Darian

    2015-01-01

    This article deals with the use of an optimal lattice and an optimal window in Discrete Gabor Transform computation. In the case of a generalized Gaussian window, extending earlier contributions, we introduce an additional local window adaptation technique for non-stationary signals. We illustrate our approach and the earlier one by addressing three time-frequency analysis problems to show the improvements achieved by the use of the optimal lattice and window: distinction of close frequencies, frequency estimation and SNR estimation. The results are presented, when possible, with real-world audio signals.

  17. FHWA Federal-Aid ITS Procurement Regulations and Contracting Options

    DOT National Transportation Integrated Search

    1997-10-01

    State and local agencies planning to procure Intelligent Transportation Systems (ITS) projects with Federal highway funds face unique challenges. They must choose appropriate contracting techniques that optimize project quality and cost while meeting...

  18. Optimization of immunostaining on flat-mounted human corneas.

    PubMed

    Forest, Fabien; Thuret, Gilles; Gain, Philippe; Dumollard, Jean-Marc; Peoc'h, Michel; Perrache, Chantal; He, Zhiguo

    2015-01-01

    In the literature, immunohistochemistry on cross sections is the main technique used to study protein expression in corneal endothelial cells (ECs), even though this method allows visualization of few ECs, without clear subcellular localization, and is subject to the staining artifacts frequently encountered at tissue borders. We previously proposed several protocols, using fixation in 0.5% paraformaldehyde (PFA) or in methanol, allowing immunostaining on flat-mounted corneas for proteins of different cell compartments. In the present study, we further refined the technique by systematically assessing the effect of fixative temperature. Lastly, we used optimized protocols to further demonstrate the considerable advantages of immunostaining on flat-mounted intact corneas: detection of rare cells in large fields of thousands of ECs and epithelial cells, and accurate subcellular localization of given proteins. The staining of four ubiquitous proteins, ZO-1, hnRNP L, actin, and histone H3, with clearly different subcellular localizations, was analyzed in ECs of organ-cultured corneas. Whole intact human corneas were fixed for 30 min in 0.5% paraformaldehyde or pure methanol at four temperatures (4 °C for PFA, -20 °C for methanol, and 23, 37, and 50 °C for both). Experiments were performed in duplicate and repeated on three corneas. Standardized pictures were analyzed independently by two experts. Second, optimized immunostaining protocols were applied to fresh corneas for three applications: identification of rare cells that express KI67 in the endothelium of specimens with Fuchs' endothelial corneal dystrophy (FECD), the precise localization of neural cell adhesion molecules (NCAMs) in normal ECs and of the cytokeratin pair K3/12 and CD44 in normal epithelial cells, and the identification of cells that express S100b in the normal epithelium. Temperature strongly influenced immunostaining quality. 
There was no ubiquitous protocol, but nevertheless, room temperature may be recommended as the first-line temperature during fixation, instead of the conventional -20 °C for methanol and 4 °C for PFA. Further optimization may be required for certain target proteins. Optimized protocols allowed description of two previously unknown findings: the presence of a few proliferating ECs in FECD specimens, suggesting ineffective compensatory mechanisms against premature EC death, and the localization of NCAMs exclusively in the lateral membranes of ECs, showing hexagonal organization at the apical pole and an irregular shape with increasing complexity toward the basal pole. Optimized protocols were also effective for the epithelium, allowing clear localization of cytokeratin 3/12 and CD44 in superficial and basal epithelial cells, respectively. Finally, S100b allowed identification of clusters of epithelial Langerhans cells near the limbus and more centrally. Fixative temperature is a crucial parameter in optimizing immunostaining on flat-mounted intact corneas. Whole-tissue overview and precise subcellular staining are significant advantages over conventional immunohistochemistry (IHC) on cross sections. This technique, initially developed for the corneal endothelium, proved equally suitable for the corneal epithelium and could be used for other superficial mono- and multilayered epithelia.

  19. Visualizing and improving the robustness of phase retrieval algorithms

    DOE PAGES

    Tripathi, Ashish; Leyffer, Sven; Munson, Todd; ...

    2015-06-01

    Coherent x-ray diffractive imaging is a novel imaging technique that utilizes phase retrieval and nonlinear optimization methods to image matter at nanometer scales. We explore how the convergence properties of a popular phase retrieval algorithm, Fienup's HIO, behave by introducing a reduced dimensionality problem allowing us to visualize and quantify convergence to local minima and the globally optimal solution. We then introduce generalizations of HIO that improve upon the original algorithm's ability to converge to the globally optimal solution.
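    The HIO iteration the abstract refers to can be sketched in one dimension. The instance below is an illustrative assumption (a small nonnegative signal with known support, observed only through its Fourier magnitudes), not the paper's reduced-dimensionality experiment; a short error-reduction polish is appended after the HIO loop.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy 1-D instance: a nonnegative signal supported on the first 16 of 64
# samples, observed only through its Fourier magnitudes (assumed setup).
n, s = 64, 16
truth = np.zeros(n)
truth[:s] = rng.random(s)
mag = np.abs(np.fft.fft(truth))

support = np.zeros(n, dtype=bool)
support[:s] = True

def hio(mag, support, beta=0.9, iters=500):
    """Fienup's hybrid input-output: enforce the measured Fourier
    magnitudes each iteration, relax the object-domain constraints with
    feedback beta, then polish with a few error-reduction steps."""
    g = rng.random(len(mag)) * support
    for _ in range(iters):
        G = np.fft.fft(g)
        gp = np.real(np.fft.ifft(mag * np.exp(1j * np.angle(G))))
        viol = (~support) | (gp < 0)          # support + non-negativity
        g = np.where(viol, g - beta * gp, gp)  # HIO feedback update
    for _ in range(50):                        # error-reduction polish
        G = np.fft.fft(g)
        gp = np.real(np.fft.ifft(mag * np.exp(1j * np.angle(G))))
        g = np.where(support & (gp > 0), gp, 0.0)
    return g

rec = hio(mag, support)
err = np.linalg.norm(np.abs(np.fft.fft(rec)) - mag) / np.linalg.norm(mag)
```

    Note that even a small residual does not guarantee recovery of `truth` itself: phase retrieval solutions are only defined up to trivial ambiguities (shift, global phase, conjugate flip), which is one reason convergence behavior is worth visualizing.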

  20. Visualizing and improving the robustness of phase retrieval algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tripathi, Ashish; Leyffer, Sven; Munson, Todd

    Coherent x-ray diffractive imaging is a novel imaging technique that utilizes phase retrieval and nonlinear optimization methods to image matter at nanometer scales. We explore how the convergence properties of a popular phase retrieval algorithm, Fienup's HIO, behave by introducing a reduced dimensionality problem allowing us to visualize and quantify convergence to local minima and the globally optimal solution. We then introduce generalizations of HIO that improve upon the original algorithm's ability to converge to the globally optimal solution.

  1. Auto-adaptive finite element meshes

    NASA Technical Reports Server (NTRS)

    Richter, Roland; Leyland, Penelope

    1995-01-01

    Accurate capturing of discontinuities within compressible flow computations is achieved by coupling a suitable solver with an automatic adaptive mesh algorithm for unstructured triangular meshes. The mesh adaptation procedures developed rely on non-hierarchical dynamical local refinement/derefinement techniques, which hence enable structural optimization as well as geometrical optimization. The methods described are applied to a number of the ICASE test cases that are particularly interesting for unsteady flow simulations.

  2. Semantic wireless localization of WiFi terminals in smart buildings

    NASA Astrophysics Data System (ADS)

    Ahmadi, H.; Polo, A.; Moriyama, T.; Salucci, M.; Viani, F.

    2016-06-01

    The wireless localization of mobile terminals in indoor scenarios by means of a semantic interpretation of the environment is addressed in this work. A training-less approach based on the real-time calibration of a simple path loss model is proposed which combines (i) the received signal strength information measured by the wireless terminal and (ii) the topological features of the localization domain. A customized evolutionary optimization technique has been designed to estimate the optimal target position that fits both the complex wireless indoor propagation and the semantic target-environment relation. The proposed approach is experimentally validated in a real building area where the available WiFi network is opportunistically exploited for data collection. The presented results point out a reduction of the localization error obtained with the introduction of a very simple semantic interpretation of the considered scenario.
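    The combination of a path loss model with an evolutionary position search can be sketched as follows. The floor layout, access point positions, and model parameters below are assumptions for illustration, and the mutate-and-select loop is a toy stand-in for the paper's customized evolutionary optimizer.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical 20 m x 20 m floor with four WiFi access points.
aps = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 20.0], [20.0, 20.0]])
p0, n_exp = -40.0, 2.2            # path loss model: P0 - 10*n*log10(d)
target = np.array([6.0, 13.0])    # true terminal position (unknown in practice)

def rss(pos):
    """Predicted received signal strength (dBm) at each access point."""
    d = np.linalg.norm(aps - pos, axis=1)
    return p0 - 10 * n_exp * np.log10(np.maximum(d, 0.1))

measured = rss(target) + rng.normal(0, 0.5, len(aps))  # noisy readings

def fitness(pos):
    """Misfit between model-predicted and measured signal strengths."""
    return ((rss(pos) - measured) ** 2).sum()

# Minimal evolutionary search: perturb a population of candidate
# positions, keep the fittest half of parents and children.
pop = rng.uniform(0, 20, (40, 2))
for _ in range(60):
    children = np.clip(pop + rng.normal(0, 0.8, pop.shape), 0, 20)
    both = np.vstack([pop, children])
    pop = both[np.argsort([fitness(p) for p in both])[:40]]
estimate = pop[0]
```

    The semantic layer of the paper would additionally penalize candidate positions that are topologically implausible (e.g., inside walls), which is easy to add as an extra term in `fitness`.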

  3. Comparison between DCA - SSO - VDR and VMAT dose delivery techniques for 15 SRS/SRT patients

    NASA Astrophysics Data System (ADS)

    Tas, B.; Durmus, I. F.

    2018-02-01

    The aim was to evaluate dose delivery between the Dynamic Conformal Arc (DCA) - Segment Shape Optimization (SSO) - Variation Dose Rate (VDR) and Volumetric Modulated Arc Therapy (VMAT) techniques for fifteen SRS patients using a Versa HD® linear accelerator. Optimal treatment plans for the fifteen SRS / SRT patients were generated using the Monaco5.11® treatment planning system (TPS) with 1 coplanar and 3 non-coplanar fields for the VMAT technique; the plans were then reoptimized with the same optimization parameters for the DCA - SSO - VDR technique. The advantages found for the DCA - SSO - VDR technique were fewer MUs and shorter beam-on time; in addition, its larger segments decrease the dosimetric uncertainties of small-field quality assurance. The advantages found for the VMAT technique were slightly better GI, CI, PCI, brain V12Gy and brain mean dose. The results show that the clinical objectives and plans for both techniques satisfied all organ-at-risk (OAR) dose constraints. Depending on the shape and localization of the target, either of these techniques can be chosen for linear-accelerator-based SRS / SRT treatment.

  4. Multi-Innovation Gradient Iterative Locally Weighted Learning Identification for A Nonlinear Ship Maneuvering System

    NASA Astrophysics Data System (ADS)

    Bai, Wei-wei; Ren, Jun-sheng; Li, Tie-shan

    2018-06-01

    This paper explores a highly accurate identification modeling approach for ship maneuvering motion with full-scale trial data. A multi-innovation gradient iterative (MIGI) approach is proposed to optimize the distance metric of locally weighted learning (LWL), and a novel non-parametric modeling technique is developed for a nonlinear ship maneuvering system. The proposed method's advantages are as follows: first, it avoids the unmodeled dynamics and multicollinearity inherent to conventional parametric models; second, it eliminates over-learning or under-learning and obtains the optimal distance metric; and third, MIGI is not sensitive to the initial parameter value and requires less time during the training phase. These advantages result in a highly accurate mathematical modeling technique that can be conveniently implemented in applications. To verify the characteristics of this mathematical model, two examples are used as model platforms to study ship maneuvering.
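    The locally weighted learning component can be sketched with a plain locally weighted linear regression. The data and the fixed Gaussian bandwidth below are illustrative assumptions; in the paper, the distance metric itself is what the MIGI procedure tunes.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic nonlinear input-output data standing in for trial records.
X = np.linspace(0, 2 * np.pi, 200)
Y = np.sin(X) + 0.05 * rng.normal(size=200)

def lwl_predict(xq, h=0.3):
    """Locally weighted linear regression at query xq. The Gaussian
    kernel bandwidth h plays the role of the distance metric that the
    paper optimizes; here it is simply fixed."""
    w = np.sqrt(np.exp(-0.5 * ((X - xq) / h) ** 2))
    # local linear model centered at xq: y ~ b0 + b1 * (x - xq)
    A = np.vstack([np.ones_like(X), X - xq]).T
    coef, *_ = np.linalg.lstsq(A * w[:, None], Y * w, rcond=None)
    return coef[0]            # local intercept = prediction at xq
```

    Because each query solves its own small weighted least-squares problem, there is no global parametric form to misspecify, which is the non-parametric advantage the abstract emphasizes.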

  5. Wavefront correction using machine learning methods for single molecule localization microscopy

    NASA Astrophysics Data System (ADS)

    Tehrani, Kayvan F.; Xu, Jianquan; Kner, Peter

    2015-03-01

    Optical Aberrations are a major challenge in imaging biological samples. In particular, in single molecule localization (SML) microscopy techniques (STORM, PALM, etc.) a high Strehl ratio point spread function (PSF) is necessary to achieve sub-diffraction resolution. Distortions in the PSF shape directly reduce the resolution of SML microscopy. The system aberrations caused by the imperfections in the optics and instruments can be compensated using Adaptive Optics (AO) techniques prior to imaging. However, aberrations caused by the biological sample, both static and dynamic, have to be dealt with in real time. A challenge for wavefront correction in SML microscopy is a robust optimization approach in the presence of noise because of the naturally high fluctuations in photon emission from single molecules. Here we demonstrate particle swarm optimization for real time correction of the wavefront using an intensity independent metric. We show that the particle swarm algorithm converges faster than the genetic algorithm for bright fluorophores.

  6. Survey on the Performance of Source Localization Algorithms.

    PubMed

    Fresno, José Manuel; Robles, Guillermo; Martínez-Tarifa, Juan Manuel; Stewart, Brian G

    2017-11-18

    The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton-Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. 
It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm.
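    The iterative hyperbolic least squares idea can be sketched on a noiseless synthetic case. The sensor layout, source position, and propagation speed below are assumptions for illustration; a Gauss-Newton step (a Newton-Raphson variant for least squares) is applied to the TDoA range-difference residuals.

```python
import numpy as np

# Four sensors and an emitter (assumed layout); c = propagation speed.
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
source = np.array([3.0, 7.0])
c = 343.0                                   # e.g. speed of sound, m/s

toa = np.linalg.norm(sensors - source, axis=1) / c
tdoa = toa[1:] - toa[0]                     # TDoA w.r.t. reference sensor 0

def residual(x):
    """Range-difference residuals of candidate position x."""
    d = np.linalg.norm(sensors - x, axis=1)
    return (d[1:] - d[0]) - c * tdoa

def hls(x0, iters=50):
    """Hyperbolic least squares solved by Gauss-Newton iterations."""
    x = x0.astype(float)
    for _ in range(iters):
        d = np.linalg.norm(sensors - x, axis=1)
        # Jacobian of the residuals w.r.t. x (unit vectors to each sensor)
        J = (x - sensors[1:]) / d[1:, None] - (x - sensors[0]) / d[0]
        r = residual(x)
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
        if np.linalg.norm(step) < 1e-10:
            break
    return x

estimate = hls(np.array([5.0, 5.0]))
```

    With noisy time differences the iteration minimizes the residuals in a least-squares sense instead of driving them to zero, and the choice of starting point then matters, which is why the survey pairs HLS with non-iterative initializers such as MLE.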

  7. Survey on the Performance of Source Localization Algorithms

    PubMed Central

    2017-01-01

    The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton–Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. 
It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm. PMID:29156565

  8. The trust-region self-consistent field method in Kohn-Sham density-functional theory.

    PubMed

    Thøgersen, Lea; Olsen, Jeppe; Köhn, Andreas; Jørgensen, Poul; Sałek, Paweł; Helgaker, Trygve

    2005-08-15

    The trust-region self-consistent field (TRSCF) method is extended to the optimization of the Kohn-Sham energy. In the TRSCF method, both the Roothaan-Hall step and the density-subspace minimization step are replaced by trust-region optimizations of local approximations to the Kohn-Sham energy, leading to a controlled, monotonic convergence towards the optimized energy. Previously the TRSCF method has been developed for optimization of the Hartree-Fock energy, which is a simple quadratic function in the density matrix. However, since the Kohn-Sham energy is a nonquadratic function of the density matrix, the local energy functions must be generalized for use with the Kohn-Sham model. Such a generalization, which contains the Hartree-Fock model as a special case, is presented here. For comparison, a rederivation of the popular direct inversion in the iterative subspace (DIIS) algorithm is performed, demonstrating that the DIIS method may be viewed as a quasi-Newton method, explaining its fast local convergence. In the global region the convergence behavior of DIIS is less predictable. The related energy DIIS technique is also discussed and shown to be inappropriate for the optimization of the Kohn-Sham energy.

  9. Mesh refinement strategy for optimal control problems

    NASA Astrophysics Data System (ADS)

    Paiva, L. T.; Fontes, F. A. C. C.

    2013-10-01

    Direct methods are becoming the most used technique for solving nonlinear optimal control problems. Regular time meshes having equidistant spacing are frequently used; however, in some cases these meshes cannot cope accurately with nonlinear behavior. One way to improve the solution is to select a new mesh with a greater number of nodes. Another way involves adaptive mesh refinement, in which the mesh nodes have non-equidistant spacing, allowing non-uniform node collocation. In the method presented in this paper, a time mesh refinement strategy based on the local error is developed. After computing a solution on a coarse mesh, the local error is evaluated, which gives information about the subintervals of the time domain where refinement is needed. This procedure is repeated until the local error reaches a user-specified threshold. The technique is applied to solve the car-like vehicle problem aiming at minimum consumption. The approach developed in this paper leads to results with greater accuracy and yet with lower overall computational time as compared to using a time mesh having equidistant spacing.
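    The refine-where-the-local-error-is-large loop can be sketched in a few lines. The trajectory function and the midpoint-interpolation error estimator below are illustrative assumptions standing in for the solver's local error estimate.

```python
import numpy as np

def f(t):
    """State-like trajectory with a sharp transient near t = 0.5
    (assumed stand-in for the computed optimal-control solution)."""
    return np.tanh(40 * (t - 0.5))

def refine(mesh, tol=1e-3, max_passes=20):
    """Bisect every interval whose local (midpoint interpolation) error
    exceeds tol; repeat until all intervals pass the threshold."""
    for _ in range(max_passes):
        mids = 0.5 * (mesh[:-1] + mesh[1:])
        # local error estimate: midpoint value vs. linear interpolant
        err = np.abs(f(mids) - 0.5 * (f(mesh[:-1]) + f(mesh[1:])))
        bad = err > tol
        if not bad.any():
            break
        mesh = np.sort(np.concatenate([mesh, mids[bad]]))
    return mesh

coarse = np.linspace(0, 1, 11)      # equidistant starting mesh
fine = refine(coarse)
```

    Nodes accumulate only around the transient, so the refined mesh achieves the error threshold with far fewer nodes than a uniformly fine mesh would need, which is the computational saving the paper reports.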

  10. On the utilization of engineering knowledge in design optimization

    NASA Technical Reports Server (NTRS)

    Papalambros, P.

    1984-01-01

    Some current research conducted at the University of Michigan is described to illustrate efforts to incorporate knowledge in optimization in a nontraditional way. The incorporation of available knowledge in a logic structure is examined in two circumstances. The first examines the possibility of introducing global design information into a local active set strategy implemented during the iterations of projection-type algorithms for nonlinearly constrained problems. The technique used combines global and local monotonicity analysis of the objective and constraint functions. The second examines a knowledge-based program which aids the user in creating configurations that are most desirable from the manufacturing assembly viewpoint. The data bank used is the classification scheme suggested by Boothroyd. The important aspect of this program is that it is an aid for synthesis, intended for use in the design concept phase in a way similar to the so-called idea triggers in creativity-enhancement techniques like brainstorming. The idea generation, however, is not random but is driven by the goal of achieving the best acceptable configuration.

  11. Kernelized Locality-Sensitive Hashing for Fast Image Landmark Association

    DTIC Science & Technology

    2011-03-24

    based Simultaneous Localization and Mapping (SLAM). The problem, however, is that vision-based navigation techniques can require excessive amounts of...up and optimizing the data association process in vision-based SLAM. Specifically, this work studies the current methods that algorithms use to...required for location identification than that of other methods. This work can then be extended into a vision-SLAM implementation to subsequently

  12. A method to incorporate leakage and head scatter corrections into a tomotherapy inverse treatment planning algorithm

    NASA Astrophysics Data System (ADS)

    Holmes, Timothy W.

    2001-01-01

    A detailed tomotherapy inverse treatment planning method is described which incorporates leakage and head scatter corrections during each iteration of the optimization process, allowing these effects to be directly accounted for in the optimized dose distribution. It is shown that the conventional inverse planning method for optimizing incident intensity can be extended to include a `concurrent' leaf sequencing operation from which the leakage and head scatter corrections are determined. The method is demonstrated using the steepest-descent optimization technique with constant step size and a least-squared error objective. The method was implemented using the MATLAB scientific programming environment and its feasibility demonstrated for 2D test cases simulating treatment delivery using a single coplanar rotation. The results indicate that this modification does not significantly affect convergence of the intensity optimization method when exposure times of individual leaves are stratified to a large number of levels (>100) during leaf sequencing. In general, the addition of aperture dependent corrections, especially `head scatter', reduces incident fluence in local regions of the modulated fan beam, resulting in increased exposure times for individual collimator leaves. These local variations can result in 5% or greater local variation in the optimized dose distribution compared to the uncorrected case. The overall efficiency of the modified intensity optimization algorithm is comparable to that of the original unmodified case.
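    A minimal sketch of the underlying intensity optimization step, steepest descent with a constant step size on a least-squared error objective; the random dose-deposition matrix below is a hypothetical stand-in for the real tomotherapy beam model.

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical dose-deposition matrix: dose = D @ w for beamlet weights w
D = rng.standard_normal((40, 10))
d_target = D @ rng.random(10)        # an exactly attainable target dose

w = np.zeros(10)
L = np.linalg.norm(D.T @ D, 2)       # Lipschitz constant of the gradient
step = 0.5 / L                       # constant step size, well below 2/L
for _ in range(5000):
    grad = D.T @ (D @ w - d_target)  # gradient of 1/2 ||D w - d||^2
    w -= step * grad

print(np.linalg.norm(D @ w - d_target))  # residual driven toward zero
```

    In the paper's modified scheme, the fluence produced at each iteration would additionally be corrected for leakage and head scatter before the dose is recomputed.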

  13. Weighted least squares techniques for improved received signal strength based localization.

    PubMed

    Tarrío, Paula; Bernardos, Ana M; Casar, José R

    2011-01-01

    The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency on having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve greater robustness to inaccuracies in channel modeling.
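    A minimal sketch of linearized circular positioning with per-measurement weights; the anchor geometry, noiseless ranges, and equal weights below are illustrative assumptions, not the authors' exact formulation, in which weights would reflect the accuracy of each RSS-derived distance.

```python
import numpy as np

def wls_position(anchors, d, w):
    """Weighted least-squares circular positioning, linearized against
    the first anchor; w holds per-measurement weights (e.g. 1/variance)."""
    x1, y1 = anchors[0]
    d1 = d[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], d[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    A, b = np.array(A), np.array(b)
    W = np.diag(w[1:])
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

# hypothetical square deployment with noiseless ranges for illustration
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_pos = np.array([3.0, 4.0])
d = np.array([np.hypot(*(true_pos - np.array(a))) for a in anchors])
w = np.ones(len(anchors))   # equal weights: reduces to ordinary LS
est = wls_position(anchors, d, w)
print(est)  # recovers [3, 4]
```

    With noisy ranges, down-weighting the less reliable measurements is what gives the weighted variant its robustness to channel-model error.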

  14. Weighted Least Squares Techniques for Improved Received Signal Strength Based Localization

    PubMed Central

    Tarrío, Paula; Bernardos, Ana M.; Casar, José R.

    2011-01-01

    The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency on having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve greater robustness to inaccuracies in channel modeling. PMID:22164092

  15. Research in Network Management Techniques for Tactical Data Communications Network.

    DTIC Science & Technology

    1982-09-01

    the control period. Research areas include Packet Network modelling, adaptive network routing, network design algorithms, network design techniques...controllers are designed to perform their limited tasks optimally. For the dynamic routing problem considered here, the local controllers are node...feedback to finding an optimum steady-state routing (static strategies) under non-congested control which can be easily implemented in real time

  16. Automated parameterization of intermolecular pair potentials using global optimization techniques

    NASA Astrophysics Data System (ADS)

    Krämer, Andreas; Hülsmann, Marco; Köddermann, Thorsten; Reith, Dirk

    2014-12-01

    In this work, different global optimization techniques are assessed for the automated development of molecular force fields, as used in molecular dynamics and Monte Carlo simulations. The task of finding suitable force field parameters is treated as a mathematical minimization problem. Intricate problem characteristics such as extremely costly and even abortive simulations, noisy simulation results, and especially multiple local minima naturally lead to the use of sophisticated global optimization algorithms. Five diverse algorithms (pure random search, recursive random search, CMA-ES, differential evolution, and taboo search) are compared to our own tailor-made solution named CoSMoS. CoSMoS is an automated workflow. It models the parameters' influence on the simulation observables to detect a globally optimal set of parameters. It is shown how and why this approach is superior to other algorithms. Applied to suitable test functions and simulations for phosgene, CoSMoS effectively reduces the number of required simulations and real time for the optimization task.
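    As a toy illustration of the simplest baseline compared in the paper, here is pure random search on a multimodal surrogate objective; the loss function below is a hypothetical stand-in for the simulation-based parameter fit, not CoSMoS or the authors' actual objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(p):
    """Multimodal surrogate for the simulation-based objective: a squared
    deviation plus Rastrigin-style ripples creating many local minima."""
    return float(np.sum(p**2 + 10.0 * (1.0 - np.cos(2.0 * np.pi * p))))

def pure_random_search(bounds, dim=2, n_eval=20000):
    """Sample parameter vectors uniformly in the box and keep the best."""
    lo, hi = bounds
    best_p, best_f = None, np.inf
    for _ in range(n_eval):
        p = rng.uniform(lo, hi, size=dim)
        f = loss(p)
        if f < best_f:
            best_p, best_f = p, f
    return best_p, best_f

best_p, best_f = pure_random_search((-5.0, 5.0))
print(best_p, best_f)   # best parameters found and their loss
```

    With expensive simulations each `loss` call is costly, which is exactly why surrogate-model approaches that reduce the number of evaluations outperform blind sampling.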

  17. Directed Bee Colony Optimization Algorithm to Solve the Nurse Rostering Problem.

    PubMed

    Rajeswari, M; Amudhavel, J; Pothula, Sujatha; Dhavachelvan, P

    2017-01-01

    The Nurse Rostering Problem (NRP) is an NP-hard combinatorial optimization and scheduling problem of assigning a set of nurses to shifts per day while considering both hard and soft constraints. A novel metaheuristic technique is required for solving the NRP. This work proposes a metaheuristic technique called the Directed Bee Colony Optimization Algorithm, using the Modified Nelder-Mead Method, for solving the NRP. To solve the NRP, the authors used a multiobjective mathematical programming model and proposed a methodology for the adaptation of a Multiobjective Directed Bee Colony Optimization (MODBCO). MODBCO is used successfully for solving multiobjective scheduling optimization problems. MODBCO integrates deterministic local search, a multiagent particle system environment, and the honey bee decision-making process. The performance of the algorithm is assessed using the standard dataset INRC2010, which reflects many real-world cases varying in size and complexity. The experimental analysis uses statistical tools to show the uniqueness of the algorithm on the assessment criteria.

  18. Directed Bee Colony Optimization Algorithm to Solve the Nurse Rostering Problem

    PubMed Central

    Amudhavel, J.; Pothula, Sujatha; Dhavachelvan, P.

    2017-01-01

    The Nurse Rostering Problem (NRP) is an NP-hard combinatorial optimization and scheduling problem of assigning a set of nurses to shifts per day while considering both hard and soft constraints. A novel metaheuristic technique is required for solving the NRP. This work proposes a metaheuristic technique called the Directed Bee Colony Optimization Algorithm, using the Modified Nelder-Mead Method, for solving the NRP. To solve the NRP, the authors used a multiobjective mathematical programming model and proposed a methodology for the adaptation of a Multiobjective Directed Bee Colony Optimization (MODBCO). MODBCO is used successfully for solving multiobjective scheduling optimization problems. MODBCO integrates deterministic local search, a multiagent particle system environment, and the honey bee decision-making process. The performance of the algorithm is assessed using the standard dataset INRC2010, which reflects many real-world cases varying in size and complexity. The experimental analysis uses statistical tools to show the uniqueness of the algorithm on the assessment criteria. PMID:28473849

  19. JIGSAW: Joint Inhomogeneity estimation via Global Segment Assembly for Water-fat separation.

    PubMed

    Lu, Wenmiao; Lu, Yi

    2011-07-01

    Water-fat separation in magnetic resonance imaging (MRI) is of great clinical importance, and the key to uniform water-fat separation lies in field map estimation. This work deals with three-point field map estimation, in which water and fat are modelled as two single-peak spectral lines, and field inhomogeneities shift the spectrum by an unknown amount. Due to the simplified spectrum modelling, there exists inherent ambiguity in forming field maps from multiple locally feasible field map values at each pixel. To resolve such ambiguity, spatial smoothness of field maps has been incorporated as a constraint of an optimization problem. However, there are two issues: the optimization problem is computationally intractable and even when it is solved exactly, it does not always separate water and fat images. Hence, robust field map estimation remains challenging in many clinically important imaging scenarios. This paper proposes a novel field map estimation technique called JIGSAW. It extends a loopy belief propagation (BP) algorithm to obtain an approximate solution to the optimization problem. The solution produces locally smooth segments and avoids error propagation associated with greedy methods. The locally smooth segments are then assembled into a globally consistent field map by exploiting the periodicity of the feasible field map values. In vivo results demonstrate that JIGSAW outperforms existing techniques and produces correct water-fat separation in challenging imaging scenarios.

  20. Interplay of superconductivity and magnetic fluctuations in single crystals of BaFe2-xCoxAs2

    NASA Astrophysics Data System (ADS)

    Bag, Biplab; Kumar, Ankit; Banerjee, S. S.; Vinod, K.; Bharathi, A.

    2018-04-01

    We report an unusual pinning response in optimally doped and overdoped single crystals of BaFe2-xCoxAs2. Here we use the magneto-optical imaging technique to measure the local magnetization response, which shows an unusual transformation from a low-temperature diamagnetic state to a high-temperature positive magnetization response. Our data suggest the coexistence of magnetic fluctuations along with superconductivity in the optimally doped crystal. The strength of the magnetic fluctuations is strongest in the optimally doped compound, which has the highest Tc.

  1. Comparative evaluation of endodontic pressure syringe, insulin syringe, jiffy tube, and local anesthetic syringe in obturation of primary teeth: An in vitro study.

    PubMed

    Hiremath, Mallayya C; Srivastava, Pooja

    2016-01-01

    The purpose of this in vitro study was to compare four methods of root canal obturation in primary teeth using conventional radiography. A total of 96 root canals of primary molars were prepared and obturated with zinc oxide eugenol. The obturation methods compared were endodontic pressure syringe, insulin syringe, jiffy tube, and local anesthetic syringe. The root canal obturations were evaluated by conventional radiography for the length of obturation and the presence of voids. The obtained data were analyzed using the Chi-square test. The results showed significant differences between the four groups for the length of obturation (P < 0.05). The endodontic pressure syringe showed the best results (98.5% optimal fillings) and the jiffy tube the poorest (37.5% optimal fillings) for the length of obturation. The insulin syringe (79.2% optimal fillings) and local anesthetic syringe (66.7% optimal fillings) showed acceptable results for the length of root canal obturation. However, minor voids were present in all four techniques. The endodontic pressure syringe produced the best results in terms of length of obturation and control of paste extrusion from the apical foramen. However, the insulin syringe and local anesthetic syringe can be used as effective alternative methods.

  2. An Application of the A* Search to Trajectory Optimization

    DTIC Science & Technology

    1990-05-11

    linearized model of orbital motion called the Clohessy-Wiltshire Equations and a node search technique called A*. The planner discussed in this thesis starts...states while transfer time is left unspecified. Chapter 2. Background HILL'S (CLOHESSY-WILTSHIRE) EQUATIONS The Euler-Hill equations describe... Clohessy-Wiltshire equations. The coordinate system used in this thesis is commonly referred to as Local Vertical, Local Horizontal or LVLH reference frame

  3. Improving Robot Locomotion Through Learning Methods for Expensive Black-Box Systems

    DTIC Science & Technology

    2013-11-01

    development of a class of “gradient free” optimization techniques; these include local approaches, such as a Nelder-Mead simplex search (cf. [73]), and global...Note that this simple method differs from the Nelder-Mead constrained nonlinear optimization method [73]. the Non-dominated Sorting Genetic Algorithm...Kober, and Jan Peters. Model-free inverse reinforcement learning. In International Conference on Artificial Intelligence and Statistics, 2011. [12] George

  4. Localization of interictal epileptic spikes with MEG: optimization of an automated beamformer screening method (SAMepi) in a diverse epilepsy population

    PubMed Central

    Scott, Jonathan M.; Robinson, Stephen E.; Holroyd, Tom; Coppola, Richard; Sato, Susumu; Inati, Sara K.

    2016-01-01

    OBJECTIVE To describe and optimize an automated beamforming technique followed by identification of locations with excess kurtosis (g2) for efficient detection and localization of interictal spikes in medically refractory epilepsy patients. METHODS Synthetic Aperture Magnetometry with g2 averaged over a sliding time window (SAMepi) was performed in 7 focal epilepsy patients and 5 healthy volunteers. The effect of varied window lengths on detection of spiking activity was evaluated. RESULTS Sliding window lengths of 0.5–10 seconds performed similarly, with 0.5 and 1 second windows detecting spiking activity in one of the 3 virtual sensor locations with highest kurtosis. These locations were concordant with the region of eventual surgical resection in these 7 patients who remained seizure free at one year. Average g2 values increased with increasing sliding window length in all subjects. In healthy volunteers kurtosis values stabilized in datasets longer than two minutes. CONCLUSIONS SAMepi using g2 averaged over 1 second sliding time windows in datasets of at least 2 minutes duration reliably identified interictal spiking and the presumed seizure focus in these 7 patients. Screening the 5 locations with highest kurtosis values for spiking activity is an efficient and accurate technique for localizing interictal activity using MEG. SIGNIFICANCE SAMepi should be applied using the parameter values and procedure described for optimal detection and localization of interictal spikes. Use of this screening procedure could significantly improve the efficiency of MEG analysis if clinically validated. PMID:27760068

  5. Simultaneous beam sampling and aperture shape optimization for SPORT.

    PubMed

    Zarepisheh, Masoud; Li, Ruijiang; Ye, Yinyu; Xing, Lei

    2015-02-01

    Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. 
The proposed technique was applied to two previously treated clinical cases: a head and neck and a prostate case. It significantly improved the target conformality and at the same time critical structure sparing compared with conventional intensity modulated radiation therapy (IMRT). In the head and neck case, for example, the average PTV coverage D99% for two PTVs, cord and brainstem max doses, and right parotid gland mean dose were improved, respectively, by about 7%, 37%, 12%, and 16%. The proposed method automatically determines the number of the stations required to generate a satisfactory plan and optimizes simultaneously the involved station parameters, leading to improved quality of the resultant treatment plans as compared with the conventional IMRT plans.
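    A minimal sketch of the pattern search component used to explore parts of the search space the subgradient method cannot reach; the quadratic objective below is a hypothetical stand-in for the plan-quality function.

```python
import numpy as np

def pattern_search(f, x0, step=1.0, shrink=0.5, tol=1e-6):
    """Coordinate (compass) pattern search: poll +/- step along each
    axis and move on improvement; shrink the step when a full poll
    round yields no improvement."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for s in (+step, -step):
                y = x.copy()
                y[i] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= shrink
    return x, fx

# hypothetical smooth objective standing in for the plan-quality term
f = lambda p: (p[0] - 1.0) ** 2 + 10.0 * (p[1] + 2.0) ** 2
x, fx = pattern_search(f, [0.0, 0.0])
print(x, fx)   # converges to the minimizer [1, -2]
```

    Because polling needs no gradient, this step can make progress even where the objective is nonsmooth, which is why it complements the subgradient updates.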

  6. Simultaneous beam sampling and aperture shape optimization for SPORT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei, E-mail: Lei@stanford.edu

    Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. 
The proposed technique was applied to two previously treated clinical cases: a head and neck and a prostate case. It significantly improved the target conformality and at the same time critical structure sparing compared with conventional intensity modulated radiation therapy (IMRT). In the head and neck case, for example, the average PTV coverage D99% for two PTVs, cord and brainstem max doses, and right parotid gland mean dose were improved, respectively, by about 7%, 37%, 12%, and 16%. Conclusions: The proposed method automatically determines the number of the stations required to generate a satisfactory plan and optimizes simultaneously the involved station parameters, leading to improved quality of the resultant treatment plans as compared with the conventional IMRT plans.

  7. Hybrid General Pattern Search and Simulated Annealing for Industrial Production Planning Problems

    NASA Astrophysics Data System (ADS)

    Vasant, P.; Barsoum, N.

    2010-06-01

    In this paper, the hybridization of the GPS (General Pattern Search) method and SA (Simulated Annealing) is incorporated in the optimization process in order to seek the global optimal solution for the fitness function and decision variables, as well as minimum computational CPU time. The real strength of the SA approach is tested in this case-study problem of industrial production planning. This is due to the great advantage of SA in easily escaping from local minima by accepting up-hill moves through a probabilistic procedure in the final stages of the optimization process. Vasant [1] in his Ph.D. thesis provided 16 different heuristic and meta-heuristic techniques for solving industrial production problems with non-linear cubic objective functions, eight decision variables, and 29 constraints. In this paper, fuzzy technological problems have been solved using hybrid techniques of general pattern search and simulated annealing. The simulated and computational results are compared to various other evolutionary techniques.
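    The uphill-accepting mechanism described above can be sketched as a bare-bones simulated annealing loop; the one-dimensional objective below is a hypothetical stand-in for the production-planning fitness function, not the paper's cubic model.

```python
import math
import random

random.seed(1)

def f(x):
    """Multimodal 1-D objective, a hypothetical stand-in for the
    production-planning fitness function."""
    return x**2 + 10.0 * math.sin(3.0 * x)

def simulated_annealing(x0, T0=10.0, cooling=0.995, steps=5000):
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    T = T0
    for _ in range(steps):
        y = x + random.uniform(-0.5, 0.5)
        fy = f(y)
        # downhill moves are always taken; uphill moves are accepted
        # with probability exp(-delta/T), which shrinks as T cools
        if fy < fx or random.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < best_f:
                best_x, best_f = x, fx
        T *= cooling
    return best_x, best_f

best_x, best_f = simulated_annealing(x0=4.0)
print(best_x, best_f)
```

    In the hybrid scheme, a pattern search would then polish the best point SA returns, combining SA's global escape mechanism with fast local convergence.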

  8. Speeding up local correlation methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kats, Daniel

    2014-12-28

    We present two techniques that can substantially speed up the local correlation methods. The first one allows one to avoid the expensive transformation of the electron-repulsion integrals from atomic orbitals to virtual space. The second one introduces an algorithm for the residual equations in the local perturbative treatment that, in contrast to the standard scheme, does not require holding the amplitudes or residuals in memory. It is shown that even an interpreter-based implementation of the proposed algorithm in the context of local MP2 method is faster and requires less memory than the highly optimized variants of conventional algorithms.

  9. Multi objective multi refinery optimization with environmental and catastrophic failure effects objectives

    NASA Astrophysics Data System (ADS)

    Khogeer, Ahmed Sirag

    2005-11-01

    Petroleum refining is a capital-intensive business. With stringent environmental regulations on the processing industry and declining refining margins, political instability, increased risk of war and terrorist attacks in which refineries and fuel transportation grids may be targeted, higher pressures are exerted on refiners to optimize performance and find the best combination of feed and processes to produce salable products that meet stricter product specifications, while at the same time meeting refinery supply commitments and of course making profit. This is done through multi objective optimization. For corporate refining companies and at the national level, Intra-Refinery and Inter-Refinery optimization is the second step in optimizing the operation of the whole refining chain as a single system. Most refinery-wide optimization methods do not cover multiple objectives such as minimizing environmental impact, avoiding catastrophic failures, or enhancing product spec upgrade effects. This work starts by carrying out a refinery-wide, single objective optimization, and then moves to multi objective-single refinery optimization. The last step is multi objective-multi refinery optimization, the objectives of which are analysis of economic, environmental, product-spec, strategic, and catastrophic-failure effects. Simulation runs were carried out using both MATLAB and ASPEN PIMS utilizing nonlinear techniques to solve the optimization problem. The results addressed the need to debottleneck some refineries or transportation media in order to meet the demand for essential products under partial or total failure scenarios. They also addressed how importing some high spec products can help recover some of the losses and what is needed in order to accomplish this. In addition, the results showed nonlinear relations among local and global objectives for some refineries. 
The results demonstrate that refineries can have a local multi objective optimum that does not follow the same trends as either global or local single objective optimums. Catastrophic failure effects on refinery operations and on local objectives are more significant than environmental objective effects, and changes in the capacity or the local objectives follow a discrete behavioral pattern, in contrast to environmental objective cases in which the effects are smoother. (Abstract shortened by UMI.)

  10. Localization from near-source quasi-static electromagnetic fields

    NASA Astrophysics Data System (ADS)

    Mosher, J. C.

    1993-09-01

    A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from signals measured at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is relatively slow with respect to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. The nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUltiple SIgnal Classification (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramer-Rao error lower bounds are extended to the multidimensional problem here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.
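    For illustration, here is a minimal narrowband direction-of-arrival variant of MUSIC on a simulated uniform linear array; this is the classic formulation, not the thesis's quasi-static MEG adaptation, and the source angles and noise level are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
M, spacing, N = 8, 0.5, 200         # sensors, spacing (wavelengths), snapshots
true_deg = np.array([-20.0, 35.0])  # hypothetical source directions

def steering(theta):
    return np.exp(2j * np.pi * spacing * np.arange(M) * np.sin(theta))

# simulate two uncorrelated narrowband sources plus sensor noise
A = np.stack([steering(t) for t in np.deg2rad(true_deg)], axis=1)
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

R = X @ X.conj().T / N                       # spatial correlation matrix
_, eigvec = np.linalg.eigh(R)                # eigenvalues in ascending order
En = eigvec[:, : M - 2]                      # noise subspace for 2 sources

grid = np.deg2rad(np.linspace(-90.0, 90.0, 1801))
P = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid])
# pick the two largest local maxima of the MUSIC pseudospectrum
peaks = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1
est = np.sort(np.rad2deg(grid[peaks[np.argsort(P[peaks])[-2:]]]))
print(est)   # close to [-20, 35]
```

    The key idea carried over to source localization is the same: steering vectors orthogonal to the noise subspace produce sharp pseudospectrum peaks at the true source parameters.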

  11. Localization from near-source quasi-static electromagnetic fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, John Compton

    1993-09-01

    A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from signals measured at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is relatively slow with respect to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. The nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUltiple SIgnal Classification (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramer-Rao error lower bounds are extended to the multidimensional problem here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.

  12. An Adaptive Image Enhancement Technique by Combining Cuckoo Search and Particle Swarm Optimization Algorithm

    PubMed Central

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure of image processing and analysis. This paper presents a new technique using a modified measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to adaptively enhance low-contrast images. In this way, contrast enhancement is obtained by global transformation of the input intensities; it employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality considering three factors: threshold, entropy value, and gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary computing based image enhancement methods like backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered. PMID:25784928

  13. An adaptive image enhancement technique by combining cuckoo search and particle swarm optimization algorithm.

    PubMed

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure of image processing and analysis. This paper presents a new technique using a modified measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to adaptively enhance low-contrast images. In this way, contrast enhancement is obtained by global transformation of the input intensities; it employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality considering three factors: threshold, entropy value, and gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary computing based image enhancement methods like backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered.
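
    As a rough sketch of the transformation step, the regularised incomplete Beta function maps normalised intensities through a tunable S-curve. Here the shape parameters are fixed at the illustrative values (a, b) = (2, 2), for which the function has the closed form 3x^2 - 2x^3; in the paper the CS-PSO search would tune these parameters per image:

```python
import numpy as np

def enhance(img):
    """Contrast stretch via the incomplete Beta transform with a = b = 2,
    for which I(x; 2, 2) = 3x^2 - 2x^3 in closed form."""
    lo, hi = float(img.min()), float(img.max())
    x = (img - lo) / (hi - lo + 1e-12)        # normalise intensities to [0, 1]
    y = 3.0 * x ** 2 - 2.0 * x ** 3           # S-shaped grey-level mapping
    return (y * 255.0).round().astype(np.uint8)

low = np.linspace(80, 170, 256).reshape(16, 16)   # synthetic low-contrast image
out = enhance(low)
```

The mapping stretches the narrow input range [80, 170] across the full 8-bit range while expanding mid-tone contrast.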

  14. Reliable Transition State Searches Integrated with the Growing String Method.

    PubMed

    Zimmerman, Paul

    2013-07-09

    The growing string method (GSM) is highly useful for locating reaction paths connecting two molecular intermediates. GSM has often been used in a two-step procedure to locate exact transition states (TS), where GSM creates a quality initial structure for a local TS search. This procedure and others like it, however, do not always converge to the desired transition state because the local search is sensitive to the quality of the initial guess. This article describes an integrated technique for simultaneous reaction path and exact transition state search. This is achieved by implementing an eigenvector following optimization algorithm in internal coordinates with Hessian update techniques. After partial convergence of the string, an exact saddle point search begins under the constraint that the maximized eigenmode of the TS node Hessian has significant overlap with the string tangent near the TS. Subsequent optimization maintains connectivity of the string to the TS as well as locks in the TS direction, all but eliminating the possibility that the local search leads to the wrong TS. To verify the robustness of this approach, reaction paths and TSs are found for a benchmark set of more than 100 elementary reactions.
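
    The exact saddle search can be sketched on a toy two-dimensional surface: near a first-order saddle, a Newton step taken in the Hessian eigenbasis moves uphill along the single negative-curvature mode and downhill along the rest, which is the essence of eigenvector following. The surface, gradient, and starting point below are illustrative, not a molecular system:

```python
import numpy as np

def grad(p):
    """Gradient of f(x, y) = x**4 - 2*x**2 + y**2, which has a saddle at (0, 0)."""
    x, y = p
    return np.array([4 * x ** 3 - 4 * x, 2 * y])

def hess(p):
    """Hessian of the same surface (diagonal for this separable example)."""
    x, y = p
    return np.diag([12 * x ** 2 - 4.0, 2.0])

p = np.array([0.3, 0.5])                 # initial guess near the saddle
for _ in range(50):
    w, V = np.linalg.eigh(hess(p))       # eigenmodes of the Hessian
    g = V.T @ grad(p)                    # gradient in the eigenbasis
    p = p - V @ (g / w)                  # uphill on the negative mode, downhill otherwise
```

In GSM proper, the analogous step is constrained so that the maximized eigenmode keeps significant overlap with the string tangent; that constraint is omitted in this sketch.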

  15. Model-based optimal design of experiments - semidefinite and nonlinear programming formulations

    PubMed Central

    Duarte, Belmiro P.M.; Wong, Weng Kee; Oliveira, Nuno M.C.

    2015-01-01

    We use mathematical programming tools, such as Semidefinite Programming (SDP) and Nonlinear Programming (NLP)-based formulations to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D–, A– and E–optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D–optimal designs but is computationally less efficient than that required for the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close and so we recommend the SDP formulation in practice. PMID:26949279

  16. Model-based optimal design of experiments - semidefinite and nonlinear programming formulations.

    PubMed

    Duarte, Belmiro P M; Wong, Weng Kee; Oliveira, Nuno M C

    2016-02-15

    We use mathematical programming tools, such as Semidefinite Programming (SDP) and Nonlinear Programming (NLP)-based formulations to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D-, A- and E-optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D-optimal designs but is computationally less efficient than that required for the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close and so we recommend the SDP formulation in practice.
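
    For intuition on the D-criterion itself, a minimal numpy check (quadratic regression on [-1, 1]; the candidate designs are illustrative, not the paper's biochemical examples) evaluates the log-determinant of the information matrix, the quantity a D-optimal design maximises:

```python
import numpy as np

def f(x):
    """Regression vector for the quadratic model y = b0 + b1*x + b2*x**2."""
    return np.array([1.0, x, x * x])

def log_det_info(points, weights):
    """log det of the normalised information matrix M = sum_i w_i f(x_i) f(x_i)^T."""
    M = sum(w * np.outer(f(x), f(x)) for x, w in zip(points, weights))
    return np.linalg.slogdet(M)[1]

# The known D-optimal design for this model puts weight 1/3 on {-1, 0, 1};
# a uniform 5-point design is strictly worse under the D-criterion.
opt = log_det_info([-1.0, 0.0, 1.0], [1 / 3] * 3)
unif = log_det_info(np.linspace(-1, 1, 5), [0.2] * 5)
```

The SDP and NLP formulations in the paper search over such weights on a discretized design space rather than evaluating two fixed candidates.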

  17. Point-based warping with optimized weighting factors of displacement vectors

    NASA Astrophysics Data System (ADS)

    Pielot, Ranier; Scholz, Michael; Obermayer, Klaus; Gundelfinger, Eckart D.; Hess, Andreas

    2000-06-01

    The accurate comparison of inter-individual 3D image brain datasets requires non-affine transformation techniques (warping) to reduce geometric variations. Constrained by the biological prerequisites we use in this study a landmark-based warping method with weighted sums of displacement vectors, which is enhanced by an optimization process. Furthermore, we investigate fast automatic procedures for determining landmarks to improve the practicability of 3D warping. This combined approach was tested on 3D autoradiographs of Gerbil brains. The autoradiographs were obtained after injecting a non-metabolized radioactive glucose derivative into the Gerbil thereby visualizing neuronal activity in the brain. Afterwards the brain was processed with standard autoradiographical methods. The landmark-generator computes corresponding reference points simultaneously within a given number of datasets by Monte Carlo techniques. The warping function is a distance weighted exponential function with a landmark-specific weighting factor. These weighting factors are optimized by a computational evolution strategy. The warping quality is quantified by several coefficients (correlation coefficient, overlap-index, and registration error). The described approach combines a highly suitable procedure to automatically detect landmarks in autoradiographical brain images and an enhanced point-based warping technique, optimizing the local weighting factors. This optimization process significantly improves the similarity between the warped and the target dataset.
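
    The warping function can be sketched as an exponentially distance-weighted sum of landmark displacement vectors. In this minimal version a single global decay parameter `alpha` stands in for the per-landmark weighting factors that the evolution strategy optimises, and all coordinates are illustrative:

```python
import numpy as np

def warp(points, landmarks, displacements, alpha=5.0):
    """Move each point by the distance-weighted mean of landmark displacements."""
    warped = []
    for p in points:
        d = np.linalg.norm(landmarks - p, axis=1)   # distance to each landmark
        w = np.exp(-alpha * d)                      # exponential distance weighting
        w /= w.sum()                                # normalise weights
        warped.append(p + (w[:, None] * displacements).sum(axis=0))
    return np.array(warped)

landmarks = np.array([[0.0, 0.0], [10.0, 0.0]])
displacements = np.array([[1.0, 0.0], [0.0, 1.0]])
moved = warp(np.array([[0.0, 0.0], [5.0, 0.0]]), landmarks, displacements)
```

A point sitting on a landmark essentially inherits that landmark's displacement, while a point midway between two landmarks receives their average.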

  18. Analysis and optimization of gyrokinetic toroidal simulations on homogenous and heterogenous platforms

    DOE PAGES

    Ibrahim, Khaled Z.; Madduri, Kamesh; Williams, Samuel; ...

    2013-07-18

    The Gyrokinetic Toroidal Code (GTC) uses the particle-in-cell method to efficiently simulate plasma microturbulence. This paper presents novel analysis and optimization techniques to enhance the performance of GTC on large-scale machines. We introduce cell access analysis to better manage locality vs. synchronization tradeoffs on CPU and GPU-based architectures. Finally, our optimized hybrid parallel implementation of GTC uses MPI, OpenMP, and NVIDIA CUDA, achieves up to a 2× speedup over the reference Fortran version on multiple parallel systems, and scales efficiently to tens of thousands of cores.

  19. Dual modality surgical guidance of non-palpable breast lesions

    NASA Astrophysics Data System (ADS)

    Judy, Patricia Goodale

    Although breast conserving therapy has some advantages over the traditional mastectomy procedure, the biggest disadvantage is the chance of local recurrence, in which case a second surgery is often required. Adequate surgical removal of breast tumors requires accurate tumor localization in order to ensure a balance between optimal cosmetic results and minimization of the risk for local recurrence. These challenges have motivated the search for alternative, more accurate methods for intraoperative localization of non-palpable breast lesions. The overall goal of this project was to develop an innovative technique for radioguided localization of non-palpable breast lesions that is more accurate, easier for the breast surgeon, and more comfortable for the patient than the current practice of wire localization. The technique uses a dual modality breast imaging system to place a marker composed of radiolabeled albumin (99mTc-MAA or 111ln-MAA) into the lesion. Preliminary studies were made to evaluate the localization accuracy of the system, which showed that the dual modality breast scanner is capable of accurate 3-dimensional localization using either X-ray or gamma ray imaging. A 3-axis needle positioning system was built and integrated into the dual modality breast scanner and its accuracy tested. A pilot clinical trial to evaluate the dual-modality surgical guidance technique was designed and preliminary clinical data collected. Detailed results are presented for the first three subjects, although a total of seven subjects have been recruited to the study to date. So far, it has been demonstrated that the radioguided surgery technique can be performed with approximately 10 times less radiomarker activity than is currently being used by other researchers employing 99mTc-MAA as a radiomarker, while maintaining comparable localization accuracy.
Although the DMSG technique has not been tested in a large cohort of subjects, the preliminary data on the first few are encouraging. Feedback on the technique from the surgeons, for this limited population, has been positive. Recruitment to the study is ongoing.

  20. Fast Appearance Modeling for Automatic Primary Video Object Segmentation.

    PubMed

    Yang, Jiong; Price, Brian; Shen, Xiaohui; Lin, Zhe; Yuan, Junsong

    2016-02-01

    Automatic segmentation of the primary object in a video clip is a challenging problem as there is no prior knowledge of the primary object. Most existing techniques thus adopt an iterative approach for foreground and background appearance modeling, i.e., fix the appearance model while optimizing the segmentation and fix the segmentation while optimizing the appearance model. However, these approaches may rely on good initialization and can be easily trapped in local optima. In addition, they are usually time-consuming for analyzing videos. To address these limitations, we propose a novel and efficient appearance modeling technique for automatic primary video object segmentation in the Markov random field (MRF) framework. It embeds the appearance constraint as auxiliary nodes and edges in the MRF structure, and can optimize both the segmentation and appearance model parameters simultaneously in one graph cut. The extensive experimental evaluations validate the superiority of the proposed approach over the state-of-the-art methods, in both efficiency and effectiveness.

  1. Computation of physiological human vocal fold parameters by mathematical optimization of a biomechanical model

    PubMed Central

    Yang, Anxiong; Stingl, Michael; Berry, David A.; Lohscheller, Jörg; Voigt, Daniel; Eysholdt, Ulrich; Döllinger, Michael

    2011-01-01

    With the use of an endoscopic, high-speed camera, vocal fold dynamics may be observed clinically during phonation. However, observation and subjective judgment alone may be insufficient for clinical diagnosis and documentation of improved vocal function, especially when the laryngeal disease lacks any clear morphological presentation. In this study, biomechanical parameters of the vocal folds are computed by adjusting the corresponding parameters of a three-dimensional model until the dynamics of both systems are similar. First, a mathematical optimization method is presented. Next, model parameters (such as pressure, tension and masses) are adjusted to reproduce vocal fold dynamics, and the deduced parameters are physiologically interpreted. Various combinations of global and local optimization techniques are attempted. Evaluation of the optimization procedure is performed using 50 synthetically generated data sets. The results show sufficient reliability, including 0.07 normalized error, 96% correlation, and 91% accuracy. The technique is also demonstrated on data from human hemilarynx experiments, in which a low normalized error (0.16) and high correlation (84%) values were achieved. In the future, this technique may be applied to clinical high-speed images, yielding objective measures with which to document improved vocal function of patients with voice disorders. PMID:21877808

  2. Adaptive track scheduling to optimize concurrency and vectorization in GeantV

    DOE PAGES

    Apostolakis, J.; Bandieramonte, M.; Bitzes, G.; ...

    2015-05-22

    The GeantV project is focused on the R&D of new particle transport techniques to maximize parallelism on multiple levels, profiting from the use of both SIMD instructions and co-processors for the CPU-intensive calculations specific to this type of application. In our approach, vectors of tracks belonging to multiple events and matching different locality criteria must be gathered and dispatched to algorithms having vector signatures. While the transport propagates tracks and changes their individual states, data locality becomes harder to maintain. The scheduling policy has to be changed to maintain efficient vectors while keeping an optimal level of concurrency. The model has complex dynamics, requiring tuning of the thresholds to switch between the normal regime and special modes, i.e. prioritizing events to allow flushing memory, adding new events in the transport pipeline to boost locality, dynamically adjusting the particle vector size or switching from vector to single-track mode when vectorization causes only overhead. Lastly, this work requires a comprehensive study to optimize these parameters and make the behaviour of the scheduler self-adapting; its initial results are presented here.

  3. Inclusion of the fitness sharing technique in an evolutionary algorithm to analyze the fitness landscape of the genetic code adaptability.

    PubMed

    Santos, José; Monteagudo, Ángel

    2017-03-27

    The canonical code, although prevailing in complex genomes, is not universal. The canonical genetic code has been shown to be more robust than random codes, but it is not clearly determined how it evolved towards its current form. The error minimization theory considers the minimization of the adverse effects of point mutations as the main selection factor in the evolution of the code. We have used simulated evolution in a computer to search for optimized codes, which helps to obtain information about the optimization level of the canonical code in its evolution. A genetic algorithm searches for efficient codes in a fitness landscape that corresponds with the adaptability of possible hypothetical genetic codes. The lower the effects of errors or mutations in the codon bases of a hypothetical code, the more efficient or optimal is that code. The inclusion of the fitness sharing technique in the evolutionary algorithm makes it easy to determine the extent to which the canonical genetic code lies in an area corresponding to a deep local minimum, even in the high dimensional spaces considered. The analyses show that the canonical code is not in a deep local minimum and that the fitness landscape is not a multimodal fitness landscape with deep and separated peaks. Moreover, the canonical code is clearly far away from the areas of higher fitness in the landscape. Given the non-presence of deep local minima in the landscape, although the code could evolve and different forces could shape its structure, the fitness landscape nature considered in the error minimization theory does not explain why the canonical code ended its evolution in a location which is not an area of a localized deep minimum of the huge fitness landscape.
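
    The fitness sharing mechanism itself can be sketched generically: raw fitness is divided by a niche count so that individuals in crowded regions of the search space are penalised and the population spreads across multiple optima. A triangular sharing kernel and Euclidean genotype distance are assumed here, not the paper's code-space metric:

```python
import numpy as np

def shared_fitness(pop, raw, sigma_share=1.0):
    """Divide each raw fitness by its niche count (triangular sharing kernel)."""
    shared = np.empty_like(raw, dtype=float)
    for i, xi in enumerate(pop):
        d = np.linalg.norm(pop - xi, axis=1)           # distance to every individual
        sh = np.where(d < sigma_share, 1.0 - d / sigma_share, 0.0)
        shared[i] = raw[i] / sh.sum()                  # self-distance 0 contributes 1
    return shared

# Two identical individuals share a niche; the isolated one keeps its fitness
pop = np.array([[0.0, 0.0], [0.0, 0.0], [5.0, 5.0]])
out = shared_fitness(pop, np.ones(3))
```

Two coincident individuals each end up with half their raw fitness, while the isolated individual keeps its full value, which is exactly the pressure that keeps the population spread over the landscape.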

  4. A New Stochastic Technique for Painlevé Equation-I Using Neural Network Optimized with Swarm Intelligence

    PubMed Central

    Raja, Muhammad Asif Zahoor; Khan, Junaid Ali; Ahmad, Siraj-ul-Islam; Qureshi, Ijaz Mansoor

    2012-01-01

    A methodology for solution of Painlevé equation-I is presented using computational intelligence technique based on neural networks and particle swarm optimization hybridized with active set algorithm. The mathematical model of the equation is developed with the help of linear combination of feed-forward artificial neural networks that define the unsupervised error of the model. This error is minimized subject to the availability of appropriate weights of the networks. The learning of the weights is carried out using particle swarm optimization algorithm used as a tool for viable global search method, hybridized with active set algorithm for rapid local convergence. The accuracy, convergence rate, and computational complexity of the scheme are analyzed based on large number of independents runs and their comprehensive statistical analysis. The comparative studies of the results obtained are made with MATHEMATICA solutions, as well as, with variational iteration method and homotopy perturbation method. PMID:22919371
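
    The global-search component can be sketched as bare-bones particle swarm optimization on a toy objective. The active-set local refinement of the paper is omitted, and the swarm size, inertia, and acceleration coefficients are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    """Toy objective: squared distance to the origin (row-wise)."""
    return (x ** 2).sum(axis=1)

n, dim, iters = 30, 2, 200
w, c1, c2 = 0.7, 1.5, 1.5                 # inertia and acceleration weights
x = rng.uniform(-5, 5, (n, dim))          # particle positions
v = np.zeros((n, dim))                    # particle velocities
pbest, pbest_f = x.copy(), sphere(x)      # personal bests
gbest = pbest[pbest_f.argmin()].copy()    # global best

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = sphere(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()
```

In the paper's hybrid, this global phase would hand `gbest` to the active-set algorithm for rapid local convergence on the network weights.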

  5. Genetic algorithm enhanced by machine learning in dynamic aperture optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yongjun; Cheng, Weixing; Yu, Li Hua

    With the aid of machine learning techniques, the genetic algorithm has been enhanced and applied to the multi-objective optimization problem presented by the dynamic aperture of the National Synchrotron Light Source II (NSLS-II) Storage Ring. During the evolution processes employed by the genetic algorithm, the population is classified into different clusters in the search space. The clusters with top average fitness are given “elite” status. Intervention on the population is implemented by repopulating some potentially competitive candidates based on the experience learned from the accumulated data. These candidates replace randomly selected candidates among the original data pool. The average fitness of the population is therefore improved while diversity is not lost. Maintaining diversity ensures that the optimization is global rather than local. The quality of the population increases and produces more competitive descendants accelerating the evolution process significantly. When identifying the distribution of optimal candidates, they appear to be located in isolated islands within the search space. Some of these optimal candidates have been experimentally confirmed at the NSLS-II storage ring. Furthermore, the machine learning techniques that exploit the genetic algorithm can also be used in other population-based optimization problems such as particle swarm algorithm.

  6. Genetic algorithm enhanced by machine learning in dynamic aperture optimization

    NASA Astrophysics Data System (ADS)

    Li, Yongjun; Cheng, Weixing; Yu, Li Hua; Rainer, Robert

    2018-05-01

    With the aid of machine learning techniques, the genetic algorithm has been enhanced and applied to the multi-objective optimization problem presented by the dynamic aperture of the National Synchrotron Light Source II (NSLS-II) Storage Ring. During the evolution processes employed by the genetic algorithm, the population is classified into different clusters in the search space. The clusters with top average fitness are given "elite" status. Intervention on the population is implemented by repopulating some potentially competitive candidates based on the experience learned from the accumulated data. These candidates replace randomly selected candidates among the original data pool. The average fitness of the population is therefore improved while diversity is not lost. Maintaining diversity ensures that the optimization is global rather than local. The quality of the population increases and produces more competitive descendants accelerating the evolution process significantly. When identifying the distribution of optimal candidates, they appear to be located in isolated islands within the search space. Some of these optimal candidates have been experimentally confirmed at the NSLS-II storage ring. The machine learning techniques that exploit the genetic algorithm can also be used in other population-based optimization problems such as particle swarm algorithm.

  7. Genetic algorithm enhanced by machine learning in dynamic aperture optimization

    DOE PAGES

    Li, Yongjun; Cheng, Weixing; Yu, Li Hua; ...

    2018-05-29

    With the aid of machine learning techniques, the genetic algorithm has been enhanced and applied to the multi-objective optimization problem presented by the dynamic aperture of the National Synchrotron Light Source II (NSLS-II) Storage Ring. During the evolution processes employed by the genetic algorithm, the population is classified into different clusters in the search space. The clusters with top average fitness are given “elite” status. Intervention on the population is implemented by repopulating some potentially competitive candidates based on the experience learned from the accumulated data. These candidates replace randomly selected candidates among the original data pool. The average fitness of the population is therefore improved while diversity is not lost. Maintaining diversity ensures that the optimization is global rather than local. The quality of the population increases and produces more competitive descendants accelerating the evolution process significantly. When identifying the distribution of optimal candidates, they appear to be located in isolated islands within the search space. Some of these optimal candidates have been experimentally confirmed at the NSLS-II storage ring. Furthermore, the machine learning techniques that exploit the genetic algorithm can also be used in other population-based optimization problems such as particle swarm algorithm.
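
    The clustering-and-elite step can be caricatured in a few lines: partition the population in the search space, rank clusters by mean fitness, and flag the best cluster as elite. A single nearest-centroid assignment with hand-picked seed points stands in for the actual clustering, and both the candidate data and the fitness function are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
# Two synthetic groups of candidates in a 2-D search space
pop = np.vstack([rng.normal(0.0, 0.3, (20, 2)),
                 rng.normal(3.0, 0.3, (20, 2))])
fitness = -np.linalg.norm(pop, axis=1)       # higher is better near the origin

centroids = pop[[0, 20]]                     # one seed point from each group
dist2 = ((pop[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
labels = dist2.argmin(axis=1)                # nearest-centroid cluster labels

cluster_mean = [fitness[labels == j].mean() for j in (0, 1)]
elite = int(np.argmax(cluster_mean))         # cluster with top average fitness
```

In the actual algorithm, new candidates would then be repopulated near the elite cluster, replacing randomly chosen members so that average fitness rises without collapsing diversity.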

  8. The effective local potential method: Implementation for molecules and relation to approximate optimized effective potential techniques

    NASA Astrophysics Data System (ADS)

    Izmaylov, Artur F.; Staroverov, Viktor N.; Scuseria, Gustavo E.; Davidson, Ernest R.; Stoltz, Gabriel; Cancès, Eric

    2007-02-01

    We have recently formulated a new approach, named the effective local potential (ELP) method, for calculating local exchange-correlation potentials for orbital-dependent functionals based on minimizing the variance of the difference between a given nonlocal potential and its desired local counterpart [V. N. Staroverov et al., J. Chem. Phys. 125, 081104 (2006)]. Here we show that under a mildly simplifying assumption of frozen molecular orbitals, the equation defining the ELP has a unique analytic solution which is identical with the expression arising in the localized Hartree-Fock (LHF) and common energy denominator approximations (CEDA) to the optimized effective potential. The ELP procedure differs from the CEDA and LHF in that it yields the target potential as an expansion in auxiliary basis functions. We report extensive calculations of atomic and molecular properties using the frozen-orbital ELP method and its iterative generalization to prove that ELP results agree with the corresponding LHF and CEDA values, as they should. Finally, we make the case for extending the iterative frozen-orbital ELP method to full orbital relaxation.

  9. Enhanced Approximate Nearest Neighbor via Local Area Focused Search.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonzales, Antonio; Blazier, Nicholas Paul

    Approximate Nearest Neighbor (ANN) algorithms are increasingly important in machine learning, data mining, and image processing applications. There is a large family of space-partitioning ANN algorithms, such as randomized KD-Trees, that work well in practice but are limited by an exponential increase in similarity comparisons required to optimize recall. Additionally, they only support a small set of similarity metrics. We present Local Area Focused Search (LAFS), a method that enhances the way queries are performed using an existing ANN index. Instead of a single query, LAFS performs a number of smaller (fewer similarity comparisons) queries and focuses on a local neighborhood which is refined as candidates are identified. We show that our technique improves performance on several well known datasets and is easily extended to general similarity metrics using kernel projection techniques.

  10. A new statistical tool for NOAA local climate studies

    NASA Astrophysics Data System (ADS)

    Timofeyeva, M. M.; Meyers, J. C.; Hollingshead, A.

    2011-12-01

    The National Weather Service (NWS) Local Climate Analysis Tool (LCAT) is evolving out of a need to support and enhance the ability of National Oceanic and Atmospheric Administration (NOAA) NWS field offices to efficiently access, manipulate, and interpret local climate data and characterize climate variability and change impacts. LCAT will enable NOAA's staff to conduct regional and local climate studies using state-of-the-art station and reanalysis gridded data and various statistical techniques for climate analysis. The analysis results will be used for climate services to guide local decision makers in weather and climate sensitive actions and to deliver information to the general public. LCAT will augment current climate reference materials with information pertinent to the local and regional levels as they apply to diverse variables appropriate to each locality. The LCAT main emphasis is to enable studies of extreme meteorological and hydrological events such as tornadoes, flood, drought, severe storms, etc. LCAT will close a very critical gap in NWS local climate services because it will allow addressing climate variables beyond average temperature and total precipitation. NWS external partners and government agencies will benefit from the LCAT outputs that could be easily incorporated into their own analysis and/or delivery systems. Presently, we have identified five existing requirements for local climate: (1) Local impacts of climate change; (2) Local impacts of climate variability; (3) Drought studies; (4) Attribution of severe meteorological and hydrological events; and (5) Climate studies for water resources. The methodologies for the first three requirements will be included in the LCAT first phase implementation. 
The local rate of climate change is defined as the slope of the mean trend estimated from the ensemble of three trend techniques: (1) hinge, (2) Optimal Climate Normals (running mean for optimal time periods), (3) exponentially-weighted moving average. Root mean squared error is used to determine the best fit of trend to the observations with the least error. The studies of climate variability impacts on local extremes use composite techniques applied to various definitions of local variables: from specified percentiles to critical thresholds. Drought studies combine visual capabilities of Google maps with statistical estimates of drought severity indices. The process of development will be linked to local office interactions with users to ensure the tool will meet their needs as well as provide adequate training. A rigorous internal and tiered peer-review process will be implemented to ensure the studies are scientifically sound before they are published and submitted to the local studies catalog (database) and eventually to external sources, such as the Climate Portal.
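
    Of the three trend techniques listed, the exponentially-weighted moving average is the simplest to sketch. The annual series below is synthetic and the smoothing constant is an illustrative choice, not an LCAT setting:

```python
import numpy as np

def ewma(series, alpha=0.2):
    """Exponentially-weighted moving average: each value blends the new
    observation with the running smoothed estimate."""
    out = np.empty(len(series), dtype=float)
    out[0] = series[0]
    for t in range(1, len(series)):
        out[t] = alpha * series[t] + (1 - alpha) * out[t - 1]
    return out

years = np.arange(1980, 2021)
temps = 10.0 + 0.02 * (years - 1980)      # synthetic 0.02 deg/yr warming trend
smooth = ewma(temps)
```

On a steadily rising series the EWMA rises monotonically but lags the raw values, which is why an ensemble of trend estimators, scored by root mean squared error, is used rather than any single one.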

  11. CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila

    2015-03-10

    We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using a MCMC exploration. Both methods find the same maximum likelihood region; however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.

  12. Secondary procedures in maxillofacial dermatology.

    PubMed

    Henderson, James M; Horswell, Bruce B

    2005-05-01

    Dermatologic secondary procedures involve careful preoperative planning and patient preparation, skillful execution of the appropriate procedure, and thorough postoperative wound care. Many modalities of treatment are used, including skin preparation through elimination of inflammatory conditions, resurfacing of skin, and improvement of patient health. Proper selection of incisional design, local or regional flaps, and grafting techniques is key to successful revisional surgery. Care of the revised lesion or wound through medications, dressings and resurfacing techniques will optimize the end result.

  13. Content based image retrieval using local binary pattern operator and data mining techniques.

    PubMed

    Vatamanu, Oana Astrid; Frandeş, Mirela; Lungeanu, Diana; Mihalaş, Gheorghe-Ioan

    2015-01-01

    Content based image retrieval (CBIR) concerns the retrieval of similar images from image databases, using feature vectors extracted from the images. These feature vectors globally describe the visual content of an image, e.g., its texture, colour, shape, and spatial relations. Herein, we propose the definition of feature vectors using the Local Binary Pattern (LBP) operator. A study was performed to determine the optimum LBP variant for the general definition of image feature vectors. The chosen LBP variant is subsequently used to build an ultrasound image database, and a database of images obtained from Wireless Capsule Endoscopy. The image indexing process is optimized using data clustering techniques for images belonging to the same class. Finally, the proposed indexing method is compared to the classical indexing technique, which is nowadays widely used.
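
    The LBP operator itself is compact. A sketch of the basic 8-neighbour variant over a grayscale image (the study compared several LBP variants; this is the classic one, and the tiny test image is illustrative):

```python
def lbp_code(img, r, c):
    """Basic 8-neighbour Local Binary Pattern: threshold neighbours at the centre value."""
    center = img[r][c]
    neighbours = [img[r-1][c-1], img[r-1][c], img[r-1][c+1], img[r][c+1],
                  img[r+1][c+1], img[r+1][c], img[r+1][c-1], img[r][c-1]]
    return sum((1 << i) for i, v in enumerate(neighbours) if v >= center)

def lbp_histogram(img):
    """256-bin histogram of LBP codes, usable as the image feature vector."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist

img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
# The single interior pixel (value 50) has neighbours >= 50 at bits 3..6.
code = lbp_code(img, 1, 1)
```

    In a CBIR pipeline the 256-bin histogram is the feature vector that gets compared across the database.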

  14. Nurse Scheduling by Cooperative GA with Effective Mutation Operator

    NASA Astrophysics Data System (ADS)

    Ohki, Makoto

    In this paper, we propose effective mutation operators for a Cooperative Genetic Algorithm (CGA) applied to a practical Nurse Scheduling Problem (NSP). Nurse scheduling is a very difficult task, because the NSP is a complex combinatorial optimization problem for which many requirements must be considered. In real hospitals, the schedule changes frequently, and changes to the shift schedule yield various problems, for example, a fall in the nursing level. We describe a technique for re-optimizing the nurse schedule in response to such a change. The conventional CGA is strong at local search by means of its crossover operator, but often stagnates in unfavorable situations because of its weak global search ability. When the optimization stagnates for many generation cycles, the searching point, in this case the population, is likely caught in a wide local minimum area. To escape such a local minimum area, a small change in the population is required. Based on this consideration, we propose a mutation operator activated depending on the optimization speed. When the optimization stagnates, in other words, when the optimization speed decreases, the mutation yields small changes in the population, which can then escape from the local minimum area. However, this mutation operator requires two well-defined parameters, which means that the user has to choose their values carefully. To solve this problem, we also propose a periodic mutation operator defined by only one parameter. This simplified mutation operator is effective over a wide range of the parameter value.
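
    The stagnation-triggered mutation can be illustrated schematically. A hedged sketch for a binary-coded population; the window, tolerance, and mutation-rate parameters are hypothetical stand-ins for the paper's two well-defined parameters:

```python
import random

def maybe_mutate(population, history, window=10, tol=1e-6, rate=0.05):
    """Apply a small mutation only when the best fitness has stagnated.

    history: best-fitness value per generation (lower is better). When the
    improvement over the last `window` generations falls below `tol`, flip a
    small fraction of genes to shake the population out of a local minimum.
    """
    if len(history) < window or history[-window] - history[-1] > tol:
        return population  # still improving: leave the population alone
    return [[1 - g if random.random() < rate else g for g in ind]
            for ind in population]

random.seed(2)
pop = [[0, 1, 0, 1], [1, 1, 0, 0]]
same = maybe_mutate(pop, [5.0, 4.0], window=10)    # too early: unchanged
shaken = maybe_mutate(pop, [3.0] * 12, window=10)  # stagnated: mutation fires
```

    The periodic variant in the paper replaces the stagnation test with a fixed activation period, leaving a single parameter.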

  15. Detecting Tie2, an endothelial growth factor receptor, by using immunohistochemistry in mouse lungs.

    PubMed

    Guha, Prajna P; David, Sascha A; Ghosh, Chandra C

    2014-01-01

    Immunohistochemical (IHC) staining is an invaluable, sensitive, and effective method to detect the presence and localization of proteins in the cellular compartments of tissues. The basic concept of IHC is detecting an antigen in tissue by means of specific antibody binding, which is then demonstrated with a colored histochemical reaction observable under a light microscope. The most challenging aspect of IHC techniques is optimizing the precise experimental conditions required to obtain a specific and strong signal. The critical steps of IHC are specimen acquisition, fixation, permeabilization, the choice of detection system, and the selection and optimization of the antigen-specific antibody. Here, we elaborate the technique using the endothelial growth factor binding receptor Tie2 in mouse lungs.

  16. Design Tool Using a New Optimization Method Based on a Stochastic Process

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio

    Conventional optimization methods are based on a deterministic approach, since their purpose is to find an exact solution. However, such methods depend on the initial conditions and risk falling into local solutions. In this paper, we propose a new optimization method based on the concept of path integrals used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require experience-based tuning. We applied the new optimization method to a hang glider design. In this problem, both the hang glider design and its flight trajectory were optimized. The numerical results show that the performance of the method is sufficient for practical use.
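
    The expected-value idea can be illustrated with a Boltzmann-weighted average, a simple stand-in for the path-integral formulation; the inverse temperature `beta` and the test function are illustrative assumptions:

```python
import math, random

def stochastic_average_minimum(f, lo, hi, n=20000, beta=8.0, seed=3):
    """Estimate the minimizer as a Boltzmann-weighted (stochastic) average:
    every sample x is weighted by exp(-beta * f(x)), so the expectation
    concentrates near the global minimum and needs no initial guess."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    ws = [math.exp(-beta * f(x)) for x in xs]
    return sum(w * x for w, x in zip(ws, xs)) / sum(ws)

# Tilted double well: a local minimum near x = -2, the global one near x = +2.
f = lambda x: (x * x - 4) ** 2 / 16 - 0.3 * x
est = stochastic_average_minimum(f, -4.0, 4.0)
```

    Because the answer is an average over all samples rather than a single trajectory, no starting point can trap it in the shallow well.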

  17. New efficient optimizing techniques for Kalman filters and numerical weather prediction models

    NASA Astrophysics Data System (ADS)

    Famelis, Ioannis; Galanis, George; Liakatas, Aristotelis

    2016-06-01

    The need for accurate local environmental predictions and simulations beyond classical meteorological forecasts has been increasing in recent years due to the great number of applications that are directly or indirectly affected: renewable energy resource assessment, natural hazards early warning systems, and questions on global warming and climate change can be listed among them. Within this framework, the utilization of numerical weather and wave prediction systems in conjunction with advanced statistical techniques that support the elimination of the model bias and the reduction of the error variability may successfully address the above issues. In the present work, new optimization methods are studied and tested in selected areas of Greece where the use of renewable energy sources is of critical importance. The added value of the proposed work lies in the solid mathematical background adopted, making use of Information Geometry and statistical techniques, new versions of Kalman filters, and state-of-the-art numerical analysis tools.
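
    A scalar Kalman filter for removing systematic forecast bias is a minimal sketch of the filtering idea above; the random-walk bias model and the noise variances `q`, `r` are illustrative assumptions, not the paper's filter:

```python
def kalman_bias_filter(forecasts, observations, q=0.01, r=0.25):
    """Scalar Kalman filter tracking the systematic forecast bias.

    State x = current model bias; each step predicts the bias (random-walk
    model with process variance q) and updates it with the latest observed
    error (observation variance r). Returns the bias-corrected forecasts.
    """
    x, p = 0.0, 1.0                  # initial bias estimate and its variance
    corrected = []
    for f, o in zip(forecasts, observations):
        corrected.append(f - x)      # correct the incoming forecast
        p += q                       # predict: bias follows a random walk
        k = p / (p + r)              # Kalman gain
        x += k * ((f - o) - x)       # update with the observed error f - o
        p *= (1 - k)
    return corrected

fc = [10.5, 11.6, 12.4, 13.5, 14.6]  # model runs systematically ~0.5 too high
ob = [10.0, 11.0, 12.0, 13.0, 14.0]
out = kalman_bias_filter(fc, ob)
```

    The filter learns the bias from the running forecast errors, so later corrected forecasts sit closer to the observations than the raw model output.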

  18. Generation and optimization of superpixels as image processing kernels for Jones matrix optical coherence tomography

    PubMed Central

    Miyazawa, Arata; Hong, Young-Joo; Makita, Shuichi; Kasaragod, Deepa; Yasuno, Yoshiaki

    2017-01-01

    Jones matrix-based polarization sensitive optical coherence tomography (JM-OCT) simultaneously measures optical intensity, birefringence, degree of polarization uniformity, and OCT angiography. The statistics of the optical features in a local region, such as the local mean of the OCT intensity, are frequently used for image processing and the quantitative analysis of JM-OCT. Conventionally, local statistics have been computed with fixed-size rectangular kernels. However, this results in a trade-off between image sharpness and statistical accuracy. We introduce a superpixel method to JM-OCT for generating flexible kernels for local statistics. A superpixel is a cluster of image pixels formed by the pixels’ spatial and signal-value proximities. An algorithm for superpixel generation specialized for JM-OCT, together with its optimization methods, is presented in this paper. The spatial proximity is in two-dimensional cross-sectional space and the signal values are the four optical features; hence, the superpixel method is a six-dimensional clustering technique for JM-OCT pixels. The performance of the JM-OCT superpixels and the optimization methods is evaluated in detail using JM-OCT datasets of posterior eyes. The superpixels were found to preserve tissue structures well, such as layer structures, sclera, vessels, and retinal pigment epithelium. Hence, they are more suitable as local-statistics kernels than conventional uniform rectangular kernels.
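
    The clustering idea can be sketched in miniature: a SLIC-style assignment in a joint spatial/signal space, here with one signal value per pixel instead of the four JM-OCT optical features, and with illustrative parameters:

```python
def superpixels(img, k=2, m=0.5, iters=10):
    """SLIC-style clustering: assign each pixel to the nearest centre in a joint
    (row, col, value) space; m weighs spatial vs. signal-value proximity.
    (JM-OCT would cluster in six dimensions: two spatial + four features.)"""
    h, w = len(img), len(img[0])
    pts = [(r, c, img[r][c]) for r in range(h) for c in range(w)]
    centres = ([pts[i * (len(pts) - 1) // (k - 1)] for i in range(k)]
               if k > 1 else [pts[0]])
    for _ in range(iters):
        labels = [min(range(k),
                      key=lambda j: m * ((p[0] - centres[j][0]) ** 2
                                         + (p[1] - centres[j][1]) ** 2)
                                    + (p[2] - centres[j][2]) ** 2)
                  for p in pts]
        for j in range(k):
            members = [p for p, l in zip(pts, labels) if l == j]
            if members:
                centres[j] = tuple(sum(v) / len(members) for v in zip(*members))
    return [labels[i * w:(i + 1) * w] for i in range(h)]

# Two flat regions separated by a value step: the clusters follow the structure.
img = [[0.0, 0.0, 5.0, 5.0]] * 3
seg = superpixels(img, k=2)
```

    Because cluster boundaries follow the signal values, local statistics computed per cluster do not smear across tissue boundaries the way rectangular kernels do.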

  19. A coherent detection technique via optically biased field for broadband terahertz radiation.

    PubMed

    Du, Hai-Wei; Dong, Jia-Meng; Liu, Yi; Shi, Chang-Cheng; Wu, Jing-Wei; Peng, Xiao-Yu

    2017-09-01

    We demonstrate theoretically and experimentally a coherent terahertz detection technique based on an optically biased field functioning as a local oscillator and a second harmonic induced by the terahertz electric field in an air sensor working in free space. After optimizing the polarization angle and the energy of the probe pulse, and filling the system with dry nitrogen, the terahertz radiation generated from a plasma filament induced by two-color femtosecond laser pulses is measured by this technique with a bandwidth of 0.1-10 THz and a signal-to-noise ratio of 48 dB. Our technique provides an alternative, simple method for coherent broadband terahertz detection.

  20. Cartilage segmentation of 3D MRI scans of the osteoarthritic knee combining user knowledge and active contours

    NASA Astrophysics Data System (ADS)

    Lynch, John A.; Zaim, Souhil; Zhao, Jenny; Stork, Alexander; Peterfy, Charles G.; Genant, Harry K.

    2000-06-01

    A technique for segmentation of articular cartilage from 3D MRI scans of the knee has been developed. It overcomes the limitations of the conventionally used region growing techniques, which are prone to inter- and intra-observer variability and can require much manual intervention. We describe a hybrid segmentation method combining expert knowledge with directionally oriented Canny filters, cost functions and cubic splines. After manual initialization, the technique utilized three cost functions which aided automated detection of cartilage and its boundaries. Using the sign of the edge strength and the local direction of the boundary, this technique is more reliable than conventional 'snakes', and it imposes little constraint on the smoothness of boundaries. This means that the automatically detected boundary can conform to the true shape of the real boundary, also allowing reliable detection of subtle local lesions on the normally smooth cartilage surface. Manual corrections, with possible re-optimization, were sometimes needed. When compared to the conventionally used region growing techniques, this newly described technique measured local cartilage volume with three times better reproducibility and involved two-thirds less human interaction. Combined with the use of 3D image registration, the new technique should also permit unbiased segmentation of follow-up scans by automated initialization from a baseline segmentation of an earlier scan of the same patient.

  1. Improved Power System Stability Using Backtracking Search Algorithm for Coordination Design of PSS and TCSC Damping Controller.

    PubMed

    Niamul Islam, Naz; Hannan, M A; Mohamed, Azah; Shareef, Hussain

    2016-01-01

    Power system oscillation is a serious threat to the stability of multimachine power systems. The coordinated control of power system stabilizers (PSS) and thyristor-controlled series compensation (TCSC) damping controllers is a commonly used technique to provide the required damping over different modes of growing oscillations. However, their coordinated design is a complex multimodal optimization problem that is very hard to solve using traditional tuning techniques. In addition, several limitations of traditionally used techniques prevent the optimum design of coordinated controllers. In this paper, an alternate technique for robust damping of oscillations is presented using the backtracking search algorithm (BSA). A 5-area 16-machine benchmark power system is considered to evaluate the design efficiency. The complete design process is conducted in a linear time-invariant (LTI) model of the power system. It includes the formulation of the design as a multi-objective function based on the system eigenvalues. Later on, nonlinear time-domain simulations are used to compare the damping performances for different local and inter-area modes of power system oscillations. The performance of the BSA technique is compared against that of the popular particle swarm optimization (PSO) for coordinated design efficiency. Damping performances using the different design techniques are compared in terms of settling time and overshoot of oscillations. The results obtained verify that the BSA-based design improves the system stability significantly. The stability of the multimachine power system is improved by up to 74.47% and 79.93% for an inter-area mode and a local mode of oscillation, respectively. Thus, the proposed technique for coordinated design has great potential to improve power system stability and to maintain its secure operation.

  2. Grid-Independent Compressive Imaging and Fourier Phase Retrieval

    ERIC Educational Resources Information Center

    Liao, Wenjing

    2013-01-01

    This dissertation is composed of two parts. In the first part techniques of band exclusion(BE) and local optimization(LO) are proposed to solve linear continuum inverse problems independently of the grid spacing. The second part is devoted to the Fourier phase retrieval problem. Many situations in optics, medical imaging and signal processing call…

  3. Adjuvant radiation therapy for pancreatic cancer: a review of the old and the new.

    PubMed

    Boyle, John; Czito, Brian; Willett, Christopher; Palta, Manisha

    2015-08-01

    Surgery represents the only potential curative treatment option for patients diagnosed with pancreatic adenocarcinoma. Despite aggressive surgical management for patients deemed to be resectable, rates of local recurrence and/or distant metastases remain high, resulting in poor long-term outcomes. In an effort to reduce recurrence rates and improve survival for patients having undergone resection, adjuvant therapies (ATs) including chemotherapy and chemoradiation therapy (CRT) have been explored. While adjuvant chemotherapy has been shown to consistently improve outcomes, the data regarding adjuvant radiation therapy (RT) is mixed. Although the ability of radiation to improve local control has been demonstrated, it has not always led to improved survival outcomes for patients. Early trials are flawed in their utilization of sub-optimal radiation techniques, limiting their generalizability. Recent and ongoing trials incorporate more optimized RT approaches and seek to clarify its role in treatment strategies. At the same time novel radiation techniques such as intensity modulated RT (IMRT) and stereotactic body RT (SBRT) are under active investigation. It is hoped that these efforts will lead to improved disease-related outcomes while reducing toxicity rates.

  4. Adjuvant radiation therapy for pancreatic cancer: a review of the old and the new

    PubMed Central

    Boyle, John; Czito, Brian; Willett, Christopher

    2015-01-01

    Surgery represents the only potential curative treatment option for patients diagnosed with pancreatic adenocarcinoma. Despite aggressive surgical management for patients deemed to be resectable, rates of local recurrence and/or distant metastases remain high, resulting in poor long-term outcomes. In an effort to reduce recurrence rates and improve survival for patients having undergone resection, adjuvant therapies (ATs) including chemotherapy and chemoradiation therapy (CRT) have been explored. While adjuvant chemotherapy has been shown to consistently improve outcomes, the data regarding adjuvant radiation therapy (RT) is mixed. Although the ability of radiation to improve local control has been demonstrated, it has not always led to improved survival outcomes for patients. Early trials are flawed in their utilization of sub-optimal radiation techniques, limiting their generalizability. Recent and ongoing trials incorporate more optimized RT approaches and seek to clarify its role in treatment strategies. At the same time novel radiation techniques such as intensity modulated RT (IMRT) and stereotactic body RT (SBRT) are under active investigation. It is hoped that these efforts will lead to improved disease-related outcomes while reducing toxicity rates. PMID:26261730

  5. A Hybrid Ant Colony Optimization Algorithm for the Extended Capacitated Arc Routing Problem.

    PubMed

    Li-Ning Xing; Rohlfshagen, P; Ying-Wu Chen; Xin Yao

    2011-08-01

    The capacitated arc routing problem (CARP) is representative of numerous practical applications, and in order to widen its scope, we consider an extended version of this problem that entails both total service time and fixed investment costs. We subsequently propose a hybrid ant colony optimization (ACO) algorithm (HACOA) to solve instances of the extended CARP. This approach is characterized by the exploitation of heuristic information, adaptive parameters, and local optimization techniques: Two kinds of heuristic information, arc cluster information and arc priority information, are obtained continuously from the solutions sampled to guide the subsequent optimization process. The adaptive parameters ease the burden of choosing initial values and facilitate improved and more robust results. Finally, local optimization, based on the two-opt heuristic, is employed to improve the overall performance of the proposed algorithm. The resulting HACOA is tested on four sets of benchmark problems containing a total of 87 instances with up to 140 nodes and 380 arcs. In order to evaluate the effectiveness of the proposed method, some existing capacitated arc routing heuristics are extended to cope with the extended version of this problem; the experimental results indicate that the proposed ACO method outperforms these heuristics.
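
    The two-opt heuristic mentioned above reverses route segments whenever doing so shortens the tour. A minimal sketch on a symmetric distance matrix (the toy instance is illustrative, not a CARP benchmark):

```python
def two_opt(route, dist):
    """Two-opt local optimization: reverse a segment whenever it shortens the route."""
    def length(r):
        return sum(dist[r[i]][r[i + 1]] for i in range(len(r) - 1))
    improved = True
    while improved:
        improved = False
        for i in range(1, len(route) - 2):
            for j in range(i + 1, len(route) - 1):
                cand = route[:i] + route[i:j + 1][::-1] + route[j + 1:]
                if length(cand) < length(route):
                    route, improved = cand, True
    return route

# Four points on a line; the crossing tour 0-2-1-3 untangles to 0-1-2-3.
dist = [[abs(a - b) for b in range(4)] for a in range(4)]
best = two_opt([0, 2, 1, 3], dist)
```

    In HACOA this kind of local move polishes the solutions constructed by the ant colony.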

  6. Pulse shape optimization for electron-positron production in rotating fields

    NASA Astrophysics Data System (ADS)

    Fillion-Gourdeau, François; Hebenstreit, Florian; Gagnon, Denis; MacLean, Steve

    2017-07-01

    We optimize the pulse shape and polarization of time-dependent electric fields to maximize the production of electron-positron pairs via strong field quantum electrodynamics processes. The pulse is parametrized in Fourier space by a B -spline polynomial basis, which results in a relatively low-dimensional parameter space while still allowing for a large number of electric field modes. The optimization is performed by using a parallel implementation of the differential evolution, one of the most efficient metaheuristic algorithms. The computational performance of the numerical method and the results on pair production are compared with a local multistart optimization algorithm. These techniques allow us to determine the pulse shape and field polarization that maximize the number of produced pairs in computationally accessible regimes.
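
    A minimal DE/rand/1/bin sketch of differential evolution; population size, F, and CR are common defaults, not the settings of the parallel implementation in the paper, and the toy objective stands in for the pair-production yield:

```python
import random

def differential_evolution(f, bounds, np_=15, F=0.6, CR=0.9, gens=80, seed=4):
    """DE/rand/1/bin: mutate with scaled difference vectors, binomial crossover,
    greedy selection between each target vector and its trial vector."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    vals = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([x for j, x in enumerate(pop) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = [a[d] + F * (b[d] - c[d])
                     if d == jrand or rng.random() < CR else pop[i][d]
                     for d in range(dim)]
            v = f(trial)
            if v <= vals[i]:
                pop[i], vals[i] = trial, v
    best = min(range(np_), key=vals.__getitem__)
    return pop[best], vals[best]

# Maximizing the pair yield ~ minimizing its negative; toy optimum at (0.5, -0.5).
x, v = differential_evolution(lambda p: (p[0] - 0.5) ** 2 + (p[1] + 0.5) ** 2,
                              [(-2, 2), (-2, 2)])
```

    In the paper each "objective evaluation" is a full quantum-kinetic simulation, which is why the metaheuristic's evaluation budget matters.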

  7. Hybrid glowworm swarm optimization for task scheduling in the cloud environment

    NASA Astrophysics Data System (ADS)

    Zhou, Jing; Dong, Shoubin

    2018-06-01

    In recent years many heuristic algorithms have been proposed to solve task scheduling problems in the cloud environment owing to their optimization capability. This article proposes a hybrid glowworm swarm optimization (HGSO) based on glowworm swarm optimization (GSO), which uses a technique of evolutionary computation, a strategy of quantum behaviour based on the principle of neighbourhood, offspring production and random walk, to achieve more efficient scheduling with reasonable scheduling costs. The proposed HGSO reduces the redundant computation and the dependence on the initialization of GSO, accelerates the convergence and more easily escapes from local optima. The conducted experiments and statistical analysis showed that in most cases the proposed HGSO algorithm outperformed previous heuristic algorithms to deal with independent tasks.

  8. Optimization of spatiotemporally fractionated radiotherapy treatments with bounds on the achievable benefit

    NASA Astrophysics Data System (ADS)

    Gaddy, Melissa R.; Yıldız, Sercan; Unkelbach, Jan; Papp, Dávid

    2018-01-01

    Spatiotemporal fractionation schemes, that is, treatments delivering different dose distributions in different fractions, can potentially lower treatment side effects without compromising tumor control. This can be achieved by hypofractionating parts of the tumor while delivering approximately uniformly fractionated doses to the surrounding tissue. Plan optimization for such treatments is based on biologically effective dose (BED); however, this leads to computationally challenging nonconvex optimization problems. Optimization methods that are in current use yield only locally optimal solutions, and it has hitherto been unclear whether these plans are close to the global optimum. We present an optimization framework to compute rigorous bounds on the maximum achievable normal tissue BED reduction for spatiotemporal plans. The approach is demonstrated on liver tumors, where the primary goal is to reduce mean liver BED without compromising any other treatment objective. The BED-based treatment plan optimization problems are formulated as quadratically constrained quadratic programming (QCQP) problems. First, a conventional, uniformly fractionated reference plan is computed using convex optimization. Then, a second, nonconvex, QCQP model is solved to local optimality to compute a spatiotemporally fractionated plan that minimizes mean liver BED, subject to the constraints that the plan is no worse than the reference plan with respect to all other planning goals. Finally, we derive a convex relaxation of the second model in the form of a semidefinite programming problem, which provides a rigorous lower bound on the lowest achievable mean liver BED. The method is presented on five cases with distinct geometries. The computed spatiotemporal plans achieve 12-35% mean liver BED reduction over the optimal uniformly fractionated plans. This reduction corresponds to 79-97% of the gap between the mean liver BED of the uniform reference plans and our lower bounds on the lowest achievable mean liver BED. The results indicate that spatiotemporal treatments can achieve substantial reductions in normal tissue dose and BED, and that local optimization techniques provide high-quality plans that are close to realizing the maximum potential normal tissue dose reduction.

  9. Stochastic Evolutionary Algorithms for Planning Robot Paths

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang; Aghazarian, Hrand; Huntsberger, Terrance; Terrile, Richard

    2006-01-01

    A computer program implements stochastic evolutionary algorithms for planning and optimizing collision-free paths for robots and their jointed limbs. Stochastic evolutionary algorithms can be made to produce acceptably close approximations to exact, optimal solutions for path-planning problems while often demanding much less computation than do exhaustive-search and deterministic inverse-kinematics algorithms that have been used previously for this purpose. Hence, the present software is better suited for application aboard robots having limited computing capabilities (see figure). The stochastic aspect lies in the use of simulated annealing to (1) prevent trapping of an optimization algorithm in local minima of an energy-like error measure by which the fitness of a trial solution is evaluated while (2) ensuring that the entire multidimensional configuration and parameter space of the path-planning problem is sampled efficiently with respect to both robot joint angles and computation time. Simulated annealing is an established technique for avoiding local minima in multidimensional optimization problems, but has not, until now, been applied to planning collision-free robot paths by use of low-power computers.
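
    The simulated-annealing acceptance rule described above can be sketched on a one-dimensional toy landscape; the temperature schedule and move size are illustrative assumptions, not the flight software's settings:

```python
import math, random

def simulated_annealing(energy, state, neighbour,
                        t0=1.0, cooling=0.995, steps=4000, seed=5):
    """Accept uphill moves with probability exp(-dE/T) so the search can leave
    local minima of the energy-like error measure; T decays geometrically."""
    rng = random.Random(seed)
    e, t = energy(state), t0
    best, best_e = state, e
    for _ in range(steps):
        cand = neighbour(state, rng)
        de = energy(cand) - e
        if de < 0 or rng.random() < math.exp(-de / t):
            state, e = cand, e + de
            if e < best_e:
                best, best_e = state, e
        t *= cooling
    return best, best_e

# 1-D "configuration space": a local minimum near x = -2, the global one near x = 2.
energy = lambda x: (x * x - 4) ** 2 / 16 - 0.3 * x
move = lambda x, rng: x + rng.uniform(-0.3, 0.3)
best, best_e = simulated_annealing(energy, -1.0, move)
```

    In the path-planning application the state would be a vector of joint angles and the energy a collision-and-length error measure, but the acceptance rule is the same.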

  10. Towards inverse modeling of turbidity currents: The inverse lock-exchange problem

    NASA Astrophysics Data System (ADS)

    Lesshafft, Lutz; Meiburg, Eckart; Kneller, Ben; Marsden, Alison

    2011-04-01

    A new approach is introduced for turbidite modeling, leveraging the potential of computational fluid dynamics methods to simulate the flow processes that led to turbidite formation. The practical use of numerical flow simulation for the purpose of turbidite modeling has so far been hindered by the need to specify parameters and initial flow conditions that are a priori unknown. The present study proposes a method to determine optimal simulation parameters via an automated optimization process. An iterative procedure matches deposit predictions from successive flow simulations against available localized reference data, as in practice may be obtained from well logs, and aims at convergence towards the best-fit scenario. The final result is a prediction of the entire deposit thickness and local grain size distribution. The optimization strategy is based on a derivative-free, surrogate-based technique. Direct numerical simulations are performed to compute the flow dynamics. A proof of concept is successfully conducted for the simple test case of a two-dimensional lock-exchange turbidity current. The optimization approach is demonstrated to accurately retrieve the initial conditions used in a reference calculation.

  11. Finding Statistically Significant Communities in Networks

    PubMed Central

    Lancichinetti, Andrea; Radicchi, Filippo; Ramasco, José J.; Fortunato, Santo

    2011-01-01

    Community structure is one of the main structural features of networks, revealing both their internal organization and the similarity of their elementary units. Despite the large variety of methods proposed to detect communities in graphs, there is a great need for multi-purpose techniques able to handle different types of datasets and the subtleties of community structure. In this paper we present OSLOM (Order Statistics Local Optimization Method), the first method capable of detecting clusters in networks while accounting for edge directions, edge weights, overlapping communities, hierarchies and community dynamics. It is based on the local optimization of a fitness function expressing the statistical significance of clusters with respect to random fluctuations, which is estimated with tools of Extreme and Order Statistics. OSLOM can be used alone or as a refinement procedure for partitions/covers delivered by other techniques. We have also implemented sequential algorithms combining OSLOM with other fast techniques, so that the community structure of very large networks can be uncovered. Our method has performance comparable to that of the best existing algorithms on artificial benchmark graphs. Several applications on real networks are shown as well. OSLOM is implemented in a freely available software (http://www.oslom.org), and we believe it will be a valuable tool in the analysis of networks. PMID:21559480

  12. Multiscale techniques for parabolic equations.

    PubMed

    Målqvist, Axel; Persson, Anna

    2018-01-01

    We use the local orthogonal decomposition technique introduced in Målqvist and Peterseim (Math Comput 83(290):2583-2603, 2014) to derive a generalized finite element method for linear and semilinear parabolic equations with spatial multiscale coefficients. We consider nonsmooth initial data and a backward Euler scheme for the temporal discretization. Optimal order convergence rate, depending only on the contrast, but not on the variations of the coefficients, is proven in the [Formula: see text]-norm. We present numerical examples, which confirm our theoretical findings.

  13. Basic research for the geodynamics program

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Some objectives of this geodynamics program are: (1) optimal utilization of laser and VLBI observations as reference frames for geodynamics, (2) utilization of range difference observations in geodynamics, and (3) estimation techniques in crustal deformation analysis. The determination of Earth rotation parameters from different space geodetic systems is studied. Also reported is the utilization of simultaneous laser range differences for the determination of baseline variation. An algorithm for the analysis of regional or local crustal deformation measurements is proposed, along with other techniques and testing procedures. Some results of the reference frame comparisons, in terms of the pole coordinates from the different techniques, are presented.

  14. Time-distance domain transformation for Acoustic Emission source localization in thin metallic plates.

    PubMed

    Grabowski, Krzysztof; Gawronski, Mateusz; Baran, Ireneusz; Spychalski, Wojciech; Staszewski, Wieslaw J; Uhl, Tadeusz; Kundu, Tribikram; Packo, Pawel

    2016-05-01

    Acoustic Emission, as used in Non-Destructive Testing, is focused on the analysis of elastic waves propagating in mechanical structures. The information carried by the generated acoustic waves, recorded by a set of transducers, allows the integrity of these structures to be determined. Clearly, material properties and geometry strongly impact the result. In this paper, a method for Acoustic Emission source localization in thin plates is presented. The approach is based on the Time-Distance Domain Transform, a wavenumber-frequency mapping technique for precise event localization. The major advantage of the technique is dispersion compensation through phase-shifting of the investigated waveforms to acquire the most accurate output, allowing source-sensor distance estimation using a single transducer. The accuracy and robustness of the above process are also investigated, including the influence of the Young's modulus value and of numerical parameters on damage detection. By merging the Time-Distance Domain Transform with an optimal distance selection technique, an identification-localization algorithm is achieved. The method is investigated analytically, numerically and experimentally. The latter involves both laboratory and large-scale industrial tests. Copyright © 2016 Elsevier B.V. All rights reserved.
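
    The single-sensor distance idea rests on dispersion: two frequency components of a guided wave travel at different group velocities, so their arrival-time difference encodes the propagation distance. A sketch with hypothetical group-velocity and arrival-time values:

```python
def source_distance(t1, t2, vg1, vg2):
    """Source-sensor distance from the arrival-time difference of two frequency
    components with different group velocities: d/vg2 - d/vg1 = t2 - t1."""
    return (t2 - t1) * vg1 * vg2 / (vg1 - vg2)

# Hypothetical A0-mode group velocities (m/s) at two frequencies in a thin plate;
# the faster component arrives at t1, the slower one at t2.
d = source_distance(t1=0.40e-3, t2=0.50e-3, vg1=2500.0, vg2=2000.0)
```

    The Time-Distance Domain Transform generalizes this idea to the full dispersive waveform rather than two discrete components.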

  15. One-step fabrication of submicrostructures by low one-photon absorption direct laser writing technique with local thermal effect

    NASA Astrophysics Data System (ADS)

    Nguyen, Dam Thuy Trang; Tong, Quang Cong; Ledoux-Rak, Isabelle; Lai, Ngoc Diep

    2016-01-01

    In this work, the local thermal effect induced by a continuous-wave laser has been investigated and exploited to optimize the low one-photon absorption (LOPA) direct laser writing (DLW) technique for the fabrication of polymer-based microstructures. It was demonstrated that the temperature of the excited SU8 photoresist at the focus increases to above 100 °C due to the high excitation intensity and stabilizes at that temperature thanks to the use of a continuous-wave laser at 532 nm wavelength. This optically induced thermal effect immediately completes the crosslinking process at the photopolymerized region, allowing desired structures to be obtained without the conventional post-exposure bake (PEB) step that is usually performed after exposure. Theoretical calculation of the temperature distribution induced by local optical excitation, using the finite element method, confirmed the experimental results. The LOPA-based DLW technique combined with the optically induced thermal effect (local PEB) shows great advantages over traditional PEB, such as simplicity, short fabrication time, and high resolution. In particular, it overcomes the accumulation effect inherent in optical lithography by one-photon absorption, resulting in small and uniform structures with very short lattice constants.

  16. Precise and fast spatial-frequency analysis using the iterative local Fourier transform.

    PubMed

    Lee, Sukmock; Choi, Heejoo; Kim, Dae Wook

    2016-09-19

    The use of the discrete Fourier transform has decreased since the introduction of the fast Fourier transform (fFT), which is a numerically efficient computing process. This paper presents the iterative local Fourier transform (ilFT), a set of new processing algorithms that iteratively apply the discrete Fourier transform within a local and optimal frequency domain. The new technique achieves 2^10 times higher frequency resolution than the fFT within a comparable computation time. The method's superb computing efficiency, high resolution, spectrum zoom-in capability, and overall performance are evaluated and compared to other advanced high-resolution Fourier transform techniques, such as the fFT combined with several fitting methods. The effectiveness of the ilFT is demonstrated through the data analysis of a set of Talbot self-images (1280 × 1024 pixels) obtained with an experimental setup using a grating in a diverging beam produced by a coherent point source.
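    The zoom-in idea can be illustrated with a short sketch: evaluate the DFT at fractional bin frequencies on a coarse grid, then repeatedly shrink the frequency window around the strongest response. This is an illustrative reconstruction under assumed details (a single dominant peak, quartering of the window each pass), not the authors' code; `local_dft`, `ilft_peak`, and all parameters are invented for the example.

```python
import cmath, math

def local_dft(x, freqs):
    # Evaluate the DTFT of x at arbitrary (fractional) bin frequencies.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * f * n / N) for n in range(N))
            for f in freqs]

def ilft_peak(x, f_lo, f_hi, iters=6, grid=11):
    # Iteratively zoom the local DFT window onto the dominant spectral peak.
    for _ in range(iters):
        step = (f_hi - f_lo) / (grid - 1)
        freqs = [f_lo + k * step for k in range(grid)]
        mags = [abs(v) for v in local_dft(x, freqs)]
        centre = freqs[max(range(grid), key=mags.__getitem__)]
        span = (f_hi - f_lo) / 4  # shrink the window each iteration
        f_lo, f_hi = centre - span, centre + span
    return centre
```

Each pass multiplies the effective frequency resolution by the zoom factor, which is how a modest number of iterations can beat the fixed bin spacing of a single FFT.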

  17. Enhancement of ultracold molecule formation by local control in the nanosecond regime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carini, J. L.; Kallush, S.; Kosloff, R.

    2015-02-01

    We describe quantum simulations of ultracold 87Rb2 molecule formation using photoassociation (PA) with nanosecond-time-scale pulses of frequency-chirped light. In particular, we compare the case of a linear chirp to one where the frequency evolution is optimized by local control (LC) of the phase, and find that LC can provide a significant enhancement. The resulting optimal frequency evolution corresponds to a rapid jump from the PA absorption resonance to a downward transition to a bound level of the lowest triplet state. We also consider the case of two frequencies and investigate interference effects. The assumed chirp parameters should be achievable with nanosecond pulse-shaping techniques and are predicted to provide a significant enhancement over recent experiments with linear chirps.

  18. Direct phase measurement in zonal wavefront reconstruction using multidither coherent optical adaptive technique.

    PubMed

    Liu, Rui; Milkie, Daniel E; Kerlin, Aaron; MacLennan, Bryan; Ji, Na

    2014-01-27

    In traditional zonal wavefront sensing for adaptive optics, after local wavefront gradients are obtained, the entire wavefront can be calculated by assuming that the wavefront is a continuous surface. Such an approach leads to sub-optimal performance in reconstructing wavefronts that are either discontinuous or undersampled by the zonal wavefront sensor. Here, we report a new method to reconstruct the wavefront by directly measuring local wavefront phases in parallel using the multidither coherent optical adaptive technique. This method determines the relative phase of each pupil segment independently, and thus produces an accurate reconstruction even for discontinuous wavefronts. We implemented this method in an adaptive optical two-photon fluorescence microscope and demonstrated its superior performance in correcting large or discontinuous aberrations.

  19. Complex surgery for locally advanced bone and soft tissue sarcomas of the shoulder girdle.

    PubMed

    Lesenský, Jan; Mavrogenis, Andreas F; Igoumenou, Vasilios G; Matejovsky, Zdenek; Nemec, Karel; Papagelopoulos, Panayiotis J; Fabbri, Nicola

    2017-08-01

    Surgical management of primary musculoskeletal tumors of the shoulder girdle is cognitively and technically demanding. Over the last decades, advances in medical treatment, imaging, and surgical techniques have fostered limb salvage surgery and reduced the need for amputation. Despite well-accepted general principles, an individualized approach is often necessary to accommodate tumor extension, anatomical challenges, and patient characteristics. A combination of techniques is often required to achieve an optimal oncologic and durable functional outcome. The goal of this article is to review the approach to and management of patients with locally advanced sarcomas of the shoulder girdle requiring major tumor surgery, to illustrate the principles of surgical strategy, outcomes, and complications, and to provide useful guidelines for treating physicians.

  20. Results and Error Estimates from GRACE Forward Modeling over Antarctica

    NASA Astrophysics Data System (ADS)

    Bonin, Jennifer; Chambers, Don

    2013-04-01

    Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Antarctica. However when tested previously, the least squares technique has required constraints in the form of added process noise in order to be reliable. Poor choice of local basin layout has also adversely affected results, as has the choice of spatial smoothing used with GRACE. To develop design parameters which will result in correct high-resolution mass detection and to estimate the systematic errors of the method over Antarctica, we use a "truth" simulation of the Antarctic signal. We apply the optimal parameters found from the simulation to RL05 GRACE data across Antarctica and the surrounding ocean. We particularly focus on separating the Antarctic peninsula's mass signal from that of the rest of western Antarctica. Additionally, we characterize how well the technique works for removing land leakage signal from the nearby ocean, particularly that near the Drake Passage.
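    At its core, the projection step described above is a weighted least-squares solve of basin basis functions against gridded observations. A minimal, self-contained sketch under assumed inputs (the design matrix `A`, weights `w`, and the tiny Gaussian-elimination solver are all illustrative, not the GRACE processing code):

```python
def weighted_least_squares(A, y, w):
    # Solve (A^T W A) x = A^T W y: project observations y onto the
    # columns of A (one column per basin) with per-observation weights w.
    m, n = len(y), len(A[0])
    M = [[sum(w[k] * A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
         for i in range(n)]
    b = [sum(w[k] * A[k][i] * y[k] for k in range(m)) for i in range(n)]
    # Gaussian elimination with partial pivoting on the normal equations.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for cc in range(col, n):
                M[r][cc] -= f * M[col][cc]
            b[r] -= f * b[col]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x
```

The constraints mentioned in the abstract would enter as extra rows (pseudo-observations) with their own weights, which is one common way to add process noise to such a solve.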

  1. Ant-cuckoo colony optimization for feature selection in digital mammogram.

    PubMed

    Jona, J B; Nagaveni, N

    2014-01-15

    Digital mammography is the most effective screening method for detecting breast cancer. Gray Level Co-occurrence Matrix (GLCM) textural features are extracted from the mammogram, but not all of them are essential for classification; identifying the relevant features is the aim of this work, since feature selection improves the classification rate and accuracy of any classifier. In this study, a new hybrid metaheuristic named Ant-Cuckoo Colony Optimization, a hybrid of Ant Colony Optimization (ACO) and Cuckoo Search (CS), is proposed for feature selection in digital mammograms. ACO is a good metaheuristic optimization technique, but its drawback is that the ants walk through paths where the pheromone density is high, which makes the whole process slow; hence CS is employed to carry out the local search of ACO. A Support Vector Machine (SVM) classifier with a Radial Basis Function (RBF) kernel is used together with the ACO to separate normal from abnormal mammograms. Experiments are conducted on the mini-MIAS database. The performance of the new hybrid algorithm is compared with the ACO and PSO algorithms, and the results show that the hybrid Ant-Cuckoo Colony Optimization algorithm is more accurate than the other techniques.
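    The CS component used for the local search can be sketched generically. The following is a plain cuckoo search with Lévy flights minimizing a continuous test function, not the paper's hybrid feature selector; `levy_step`, `cuckoo_search`, and every parameter value are illustrative assumptions.

```python
import math, random

def levy_step(rng, beta=1.5):
    # Mantegna's algorithm for a Levy-distributed step length.
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim, n=15, iters=200, pa=0.25, seed=1):
    rng = random.Random(seed)
    nests = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    fit = [f(x) for x in nests]
    best = min(range(n), key=fit.__getitem__)
    for _ in range(iters):
        for i in range(n):
            # Levy flight biased toward the current best nest.
            cand = [nests[i][d] + 0.01 * levy_step(rng) * (nests[i][d] - nests[best][d])
                    for d in range(dim)]
            fc = f(cand)
            j = rng.randrange(n)  # compare against a randomly chosen nest
            if fc < fit[j]:
                nests[j], fit[j] = cand, fc
        # Abandon a fraction pa of the worst nests (fresh random eggs).
        order = sorted(range(n), key=fit.__getitem__, reverse=True)
        for i in order[:int(pa * n)]:
            nests[i] = [rng.uniform(-5, 5) for _ in range(dim)]
            fit[i] = f(nests[i])
        best = min(range(n), key=fit.__getitem__)
    return nests[best], fit[best]
```

In the paper's hybrid, a step like this would refine the feature subsets that the ants propose, with classifier accuracy playing the role of `f`.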

  2. The Aeronautical Data Link: Taxonomy, Architectural Analysis, and Optimization

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry; Goode, Plesent W.

    2002-01-01

    The future Communication, Navigation, and Surveillance/Air Traffic Management (CNS/ATM) system will rely on global satellite navigation and on ground-based and satellite-based communications via multi-protocol networks (e.g., a combined Aeronautical Telecommunications Network (ATN)/Internet Protocol (IP) network) to bring about the improvements in efficiency and safety of operations needed to meet increasing levels of air traffic. This paper discusses the development of an approach that completely describes optimal data link architecture configuration and behavior to meet the multiple conflicting objectives of concurrent and differing operational functions. The practical application of the approach enables the design and assessment of configurations relative to airspace operations phases. The approach includes a formal taxonomic classification, an architectural analysis methodology, and optimization techniques. The formal taxonomic classification provides a multidimensional correlation of data link performance with data link service, information protocol, spectrum, and technology mode, and with flight operations phase and environment. The architectural analysis methodology assesses the impact of a specific architecture configuration and behavior on local ATM system performance. Deterministic and stochastic optimization techniques maximize architectural design effectiveness while addressing operational, technology, and policy constraints.

  3. Global-Local Analysis and Optimization of a Composite Civil Tilt-Rotor Wing

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    1999-01-01

    This report gives highlights of an investigation on the design and optimization of a thin composite wing box structure for a civil tilt-rotor aircraft. Two different concepts are considered for the cantilever wing: (a) a thin monolithic skin design, and (b) a thick sandwich skin design. Each concept is examined with three different skin ply patterns based on various combinations of 0, +/-45, and 90 degree plies. The global-local technique is used in the analysis and optimization of the six design models. The global analysis is based on a finite element model of the wing-pylon configuration, while the local analysis uses a uniformly supported plate representing a wing panel. Design allowables include those on vibration frequencies, panel buckling, and material strength. The design optimization problem is formulated as one of minimizing the structural weight subject to strength, stiffness, and dynamic constraints. Six different loading conditions based on three different flight modes are considered in the design optimization. The results of this investigation reveal that of all the loading conditions the one corresponding to the rolling pull-out in the airplane mode is the most stringent. Also, the frequency constraints are found to drive the skin thickness limits, rendering the buckling constraints inactive. The optimum skin ply pattern for the monolithic skin concept is found to be ((0/+/-45/90/(0/90)_2)_s)_s, while for the sandwich skin concept the optimal ply pattern is found to be ((0/+/-45/90)_2s)_s.

  4. Nonlinear optimization-based device-free localization with outlier link rejection.

    PubMed

    Xiao, Wendong; Song, Biao; Yu, Xiting; Chen, Peiyuan

    2015-04-07

    Device-free localization (DFL) is an emerging wireless technique for estimating the location of a target that does not carry any attached electronic device. It has found extensive use in Smart City applications such as healthcare at home and in hospitals, location-based services in smart spaces, city emergency response, and infrastructure security. In DFL, wireless devices are used as sensors that can sense the target by transmitting and receiving wireless signals collaboratively. Many DFL systems are implemented based on received signal strength (RSS) measurements, and the location of the target is estimated by detecting changes in the RSS measurements of the wireless links. Due to the uncertainty of the wireless channel, certain links may be seriously polluted and result in erroneous detection. In this paper, we propose a novel nonlinear optimization approach with outlier link rejection (NOOLR) for RSS-based DFL. It consists of three key strategies: (1) affected link identification by differential RSS detection; (2) outlier link rejection via the geometrical positional relationship among links; (3) target location estimation by formulating and solving a nonlinear optimization problem. Experimental results demonstrate that NOOLR is robust to fluctuations of the wireless signals, with superior localization accuracy compared with the existing Radio Tomographic Imaging (RTI) approach.
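    Strategy (3), with a crude residual-based stand-in for (2), can be sketched as a grid search over candidate positions followed by rejection of the worst-fitting links. The exponential excess-path-length model and all names below are assumptions made for illustration, not the NOOLR formulation.

```python
import math, itertools

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def model_change(link, p, lam=2.0):
    # Assumed RSS-change model: large when the target p sits near the
    # direct tx-rx path (small excess path length), decaying exponentially.
    tx, rx = link
    excess = dist(tx, p) + dist(p, rx) - dist(tx, rx)
    return math.exp(-lam * excess)

def localize(links, changes, step=0.25, reject=2):
    # Grid-search the position minimizing squared residuals, then drop the
    # `reject` worst-fitting links (treated as outliers) and search again.
    def fit(use):
        best, best_err = None, float("inf")
        for gx in range(41):
            for gy in range(41):
                p = (gx * step, gy * step)
                err = sum((changes[k] - model_change(links[k], p)) ** 2 for k in use)
                if err < best_err:
                    best, best_err = p, err
        return best
    use = list(range(len(links)))
    p0 = fit(use)
    ranked = sorted(use, key=lambda k: abs(changes[k] - model_change(links[k], p0)))
    return fit(ranked[:len(use) - reject])
```

A real system would replace the grid search with a proper nonlinear solver; the point here is only how outlier rejection slots between two fits.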

  5. SU-E-T-332: Dosimetric Impact of Photon Energy and Treatment Technique When Knowledge Based Auto-Planning Is Implemented in Radiotherapy of Localized Prostate Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Z; Kennedy, A; Larsen, E

    2015-06-15

    Purpose: The aim of this study was to investigate the dosimetric impact of the combination of photon energy and treatment technique in radiotherapy of localized prostate cancer when knowledge-based planning was used. Methods: A total of 16 patients with localized prostate cancer were retrospectively retrieved from the database and used for this study. For each patient, four types of treatment plans with different combinations of photon energy (6X and 10X) and treatment technique (7-field IMRT and 2-arc VMAT) were created using a prostate DVH estimation model in RapidPlan™ and the Eclipse treatment planning system (Varian Medical Systems). For each beam arrangement, DVH objectives and weighting priorities were generated based on the geometric relationship between the OARs and the PTV. The photon optimization algorithm was used for plan optimization and the AAA algorithm was used for final dose calculation. Plans were evaluated in terms of pre-defined dosimetric endpoints for the PTV, rectum, bladder, penile bulb, and femoral heads. A Student's paired t-test was used for statistical analysis and p < 0.05 was considered statistically significant. Results: For the PTV, V95 was statistically similar among all four types of plans, though the mean dose of 10X plans was higher than that of 6X plans. VMAT plans showed a higher heterogeneity index than IMRT plans. No statistically significant difference in dosimetric metrics was observed for the rectum, bladder, and penile bulb among plan types. For the left and right femur, VMAT plans had a higher mean dose than IMRT plans regardless of photon energy, whereas the maximum dose was similar. Conclusion: Overall, the dosimetric endpoints were similar regardless of photon energy and treatment technique when knowledge-based auto-planning was used.
    Given the similarity in the dosimetric metrics of the rectum, bladder, and penile bulb, the genitourinary and gastrointestinal toxicities should be comparable among the selections of photon energy and treatment technique.

  6. Multi-disciplinary optimization of aeroservoelastic systems

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay

    1992-01-01

    The purpose of the research project was to continue the development of new methods for efficient aeroservoelastic analysis and optimization. The main targets were as follows: to complete the development of analytical tools for the investigation of flutter with large stiffness changes; to continue the work on efficient continuous gust response and sensitivity derivatives; and to advance the techniques for calculating dynamic loads with control and unsteady aerodynamic effects. An efficient and highly accurate mathematical model for time-domain analysis of flutter during which large structural changes occur was developed in cooperation with Carol D. Wieseman of NASA LaRC. The model was based on the second-year work 'Modal Coordinates for Aeroelastic Analysis with Large Local Structural Variations'. The work on continuous gust response was completed. An abstract of the paper 'Continuous Gust Response and Sensitivity Derivatives Using State-Space Models' was submitted for presentation at the 33rd Israel Annual Conference on Aviation and Astronautics, Feb. 1993; the abstract is given in Appendix A. The work extends the optimization model to deal with continuous gust objectives in a way that facilitates their inclusion in the efficient multi-disciplinary optimization scheme. Currently under development is an effort to extend the analysis and optimization capabilities to loads and stress considerations; this work addresses aircraft dynamic loads in response to impulsive and non-impulsive excitation, and extends the formulations of the mode-displacement and summation-of-forces methods to include modes with significant local distortions, as well as load modes. An abstract of the paper 'Structural Dynamic Loads in Response to Impulsive Excitation' is given in Appendix B. Another work performed this year under the grant was 'Size-Reduction Techniques for the Determination of Efficient Aeroservoelastic Models', given in Appendix C.

  7. The Tool for Designing Engineering Systems Using a New Optimization Method Based on a Stochastic Process

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio

    Conventional optimization methods are based on a deterministic approach, since their purpose is to find an exact solution. However, these methods depend on the initial conditions and risk falling into local solutions. In this paper, we propose a new optimization method based on the concept of the path integral method used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require experience-based tuning. We applied the new optimization method to the design of a hang glider; in this problem, not only the hang glider design but also its flight trajectory were optimized. The numerical results showed that the method has sufficient performance.
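    The core idea, a solution taken as a stochastic average rather than a single deterministic iterate, can be sketched in a few lines. This toy version weights uniform samples by a Boltzmann factor so that the weighted mean concentrates near the minimizer; the temperature, sample count, and function name are illustrative, and the paper's path-integral formulation is considerably richer.

```python
import math, random

def stochastic_average_min(f, lo, hi, temp=0.1, samples=20000, seed=7):
    # Estimate the minimizer of f on [lo, hi] as a Boltzmann-weighted
    # expected value E[x] with weights exp(-f(x)/temp): no starting point
    # is needed, so there is no initial-condition dependence.
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(samples):
        x = rng.uniform(lo, hi)
        w = math.exp(-f(x) / temp)
        num += w * x
        den += w
    return num / den
```

For a multimodal objective the average would blend basins, which is why practical schemes anneal the temperature downward as sampling proceeds.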

  8. An efficient interior-point algorithm with new non-monotone line search filter method for nonlinear constrained programming

    NASA Astrophysics Data System (ADS)

    Wang, Liwei; Liu, Xinggao; Zhang, Zeyin

    2017-02-01

    An efficient primal-dual interior-point algorithm using a new non-monotone line search filter method is presented for nonlinear constrained programming, which is widely applied in engineering optimization. The new non-monotone line search technique is introduced to lead to relaxed step acceptance conditions and improved convergence performance. It can also avoid the choice of the upper bound on the memory, which brings obvious disadvantages to traditional techniques. Under mild assumptions, the global convergence of the new non-monotone line search filter method is analysed, and fast local convergence is ensured by second order corrections. The proposed algorithm is applied to the classical alkylation process optimization problem and the results illustrate its effectiveness. Some comprehensive comparisons to existing methods are also presented.

  9. Multi-objective design optimization of antenna structures using sequential domain patching with automated patch size determination

    NASA Astrophysics Data System (ADS)

    Koziel, Slawomir; Bekasiewicz, Adrian

    2018-02-01

    In this article, a simple yet efficient and reliable technique for fully automated multi-objective design optimization of antenna structures using sequential domain patching (SDP) is discussed. The optimization procedure according to SDP is a two-step process: (i) obtaining the initial set of Pareto-optimal designs representing the best possible trade-offs between considered conflicting objectives, and (ii) Pareto set refinement for yielding the optimal designs at the high-fidelity electromagnetic (EM) simulation model level. For the sake of computational efficiency, the first step is realized at the level of a low-fidelity (coarse-discretization) EM model by sequential construction and relocation of small design space segments (patches) in order to create a path connecting the extreme Pareto front designs obtained beforehand. The second stage involves response correction techniques and local response surface approximation models constructed by reusing EM simulation data acquired in the first step. A major contribution of this work is an automated procedure for determining the patch dimensions. It allows for appropriate selection of the number of patches for each geometry variable so as to ensure reliability of the optimization process while maintaining its low cost. The importance of this procedure is demonstrated by comparing it with uniform patch dimensions.

  10. Enhanced simulator software for image validation and interpretation for multimodal localization super-resolution fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Erdélyi, Miklós; Sinkó, József; Gajdos, Tamás.; Novák, Tibor

    2017-02-01

    Optical super-resolution techniques such as single-molecule localization have become one of the most dynamically developing areas in optical microscopy. These techniques routinely provide images of fixed cells or tissues with sub-diffraction spatial resolution, and can even be applied to live-cell imaging under appropriate circumstances. Localization techniques are based on the precise fitting of point spread functions (PSF) to the measured images of stochastically excited, identical fluorescent molecules. These techniques require controlling the rates between the on, off, and bleached states, keeping the number of active fluorescent molecules at an optimum value so that their diffraction-limited images can be detected separately both spatially and temporally. Because of the numerous (and sometimes unknown) parameters, the imaging system can only be handled stochastically. For example, the rotation of the dye molecules obscures the polarization-dependent PSF shape, and only an averaged distribution - typically estimated by a Gaussian function - is observed. The TestSTORM software was developed to generate image stacks for traditional localization microscopes, where localization meant the precise determination of the spatial position of the molecules. However, additional optical properties (polarization, spectra, etc.) of the emitted photons can be used to further monitor the chemical and physical properties (viscosity, pH, etc.) of the local environment. The image stack generating program was therefore upgraded with several new features, such as multicolour imaging, a polarization-dependent PSF, built-in 3D visualization, and structured background. These features make the program an ideal tool for optimizing the imaging and sample preparation conditions.

  11. Electrically tunable spin filtering for electron tunneling between spin-resolved quantum Hall edge states and a quantum dot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kiyama, H., E-mail: kiyama@meso.t.u-tokyo.ac.jp; Fujita, T.; Teraoka, S.

    2014-06-30

    Spin filtering with electrically tunable efficiency is achieved for electron tunneling between a quantum dot and spin-resolved quantum Hall edge states by locally gating the two-dimensional electron gas (2DEG) leads near the tunnel junction to the dot. The local gating can change the potential gradient in the 2DEG and consequently the edge state separation. We use this technique to electrically control the ratio of the dot–edge state tunnel coupling between opposite spins and finally increase spin filtering efficiency up to 91%, the highest ever reported, by optimizing the local gating.

  12. Estimating A Reference Standard Segmentation With Spatially Varying Performance Parameters: Local MAP STAPLE

    PubMed Central

    Commowick, Olivier; Akhondi-Asl, Alireza; Warfield, Simon K.

    2012-01-01

    We present a new algorithm, called local MAP STAPLE, to estimate from a set of multi-label segmentations both a reference standard segmentation and spatially varying performance parameters. It is based on a sliding window technique to estimate the segmentation and the segmentation performance parameters for each input segmentation. In order to allow for optimal fusion from the small amount of data in each local region, and to account for the possibility of labels not being observed in a local region of some (or all) input segmentations, we introduce prior probabilities for the local performance parameters through a new Maximum A Posteriori formulation of STAPLE. Further, we propose an expression to compute confidence intervals in the estimated local performance parameters. We carried out several experiments with local MAP STAPLE to characterize its performance and value for local segmentation evaluation. First, with simulated segmentations with known reference standard segmentation and spatially varying performance, we show that local MAP STAPLE performs better than both STAPLE and majority voting. Then we present evaluations with data sets from clinical applications. These experiments demonstrate that spatial adaptivity in segmentation performance is an important property to capture. We compared the local MAP STAPLE segmentations to STAPLE, and to previously published fusion techniques and demonstrate the superiority of local MAP STAPLE over other state-of-the-art algorithms. PMID:22562727

  13. Solving traveling salesman problems with DNA molecules encoding numerical values.

    PubMed

    Lee, Ji Youn; Shin, Soo-Yong; Park, Tai Hyun; Zhang, Byoung-Tak

    2004-12-01

    We introduce a DNA encoding method to represent numerical values and a biased molecular algorithm based on the thermodynamic properties of DNA. DNA strands are designed to encode real values by variation of their melting temperatures. The thermodynamic properties of DNA are used for effective local search of optimal solutions using biochemical techniques, such as denaturation temperature gradient polymerase chain reaction and temperature gradient gel electrophoresis. The proposed method was successfully applied to the traveling salesman problem, an instance of optimization problems on weighted graphs. This work extends the capability of DNA computing to solving numerical optimization problems, which is contrasted with other DNA computing methods focusing on logical problem solving.

  14. Improving cerebellar segmentation with statistical fusion

    NASA Astrophysics Data System (ADS)

    Plassard, Andrew J.; Yang, Zhen; Prince, Jerry L.; Claassen, Daniel O.; Landman, Bennett A.

    2016-03-01

    The cerebellum is a somatotopically organized central component of the central nervous system, well known to be involved in motor coordination and with increasingly recognized roles in cognition and planning. Recent work in multiatlas labeling has created methods that offer the potential for fully automated 3-D parcellation of the cerebellar lobules and vermis (which are organizationally equivalent to cortical gray matter areas). This work explores the trade-offs of using different statistical fusion techniques and post hoc optimizations in two datasets with distinct imaging protocols. We offer a novel fusion technique by extending the ideas of the Selective and Iterative Method for Performance Level Estimation (SIMPLE) to a patch-based performance model. We demonstrate the effectiveness of our algorithm, Non-Local SIMPLE, for segmentation of a mixed population of healthy subjects and patients with severe cerebellar anatomy. Under the first imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard segmentation techniques. In the second imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard techniques but is outperformed by a non-locally weighted vote with the deeper population of atlases available. This work advances the state of the art in open-source cerebellar segmentation algorithms and offers the opportunity for routinely including cerebellar segmentation in magnetic resonance imaging studies that acquire whole brain T1-weighted volumes with approximately 1 mm isotropic resolution.

  15. An image segmentation method based on fuzzy C-means clustering and Cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Mingwei; Wan, Youchuan; Gao, Xianjun; Ye, Zhiwei; Chen, Maolin

    2018-04-01

    Image segmentation is a significant step in image analysis and machine vision. Many approaches have been presented on this topic; among them, fuzzy C-means (FCM) clustering is one of the most widely used methods owing to its high efficiency and its ability to handle the ambiguity of images. However, the success of FCM cannot be guaranteed, because it is easily trapped in local optimal solutions. Cuckoo search (CS) is a novel evolutionary algorithm that has been tested on several optimization problems and proved to be highly efficient. Therefore, a new segmentation technique combining FCM with the CS algorithm is put forward in this paper. The proposed method has been evaluated on several images and compared with other existing FCM techniques, such as genetic algorithm (GA) based FCM and particle swarm optimization (PSO) based FCM, in terms of fitness value. Experimental results indicate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper.
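    A bare-bones FCM iteration shows where the local-optimum sensitivity enters: everything hinges on the initial centres, which is precisely the choice a metaheuristic such as CS is used to make. The sketch below substitutes a deterministic min/max initialization for CS and works on 1-D intensity values; all names and parameters are illustrative.

```python
def fcm(data, c=2, m=2.0, iters=50):
    # Fuzzy C-means on scalar data: alternate membership and centre updates.
    # m > 1 controls fuzziness; the init below stands in for a CS/GA search.
    centres = [min(data), max(data)]
    for _ in range(iters):
        u = []
        for x in data:
            dists = [abs(x - v) + 1e-12 for v in centres]
            # u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
            row = [1.0 / sum((dists[i] / dj) ** (2 / (m - 1)) for dj in dists)
                   for i in range(c)]
            u.append(row)
        # Centres are membership^m-weighted means of the data.
        centres = [sum(u[k][i] ** m * data[k] for k in range(len(data))) /
                   sum(u[k][i] ** m for k in range(len(data)))
                   for i in range(c)]
    return sorted(centres)
```

Replacing the fixed initialization with candidate centre sets scored by the FCM objective is, in essence, how CS- or PSO-based FCM variants escape poor local optima.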

  16. Experimental test of an online ion-optics optimizer

    NASA Astrophysics Data System (ADS)

    Amthor, A. M.; Schillaci, Z. M.; Morrissey, D. J.; Portillo, M.; Schwarz, S.; Steiner, M.; Sumithrarachchi, Ch.

    2018-07-01

    A technique has been developed and tested to automatically adjust multiple electrostatic or magnetic multipoles on an ion optical beam line - according to a defined optimization algorithm - until an optimal tune is found. This approach simplifies the process of determining high-performance optical tunes, satisfying a given set of optical properties, for an ion optical system. The optimization approach is based on the particle swarm method and is entirely model independent, thus the success of the optimization does not depend on the accuracy of an extant ion optical model of the system to be optimized. Initial test runs of a first order optimization of a low-energy (<60 keV) all-electrostatic beamline at the NSCL show reliable convergence of nine quadrupole degrees of freedom to well-performing tunes within a reasonable number of trial solutions, roughly 500, with full beam optimization run times of roughly two hours. Improved tunes were found both for quasi-local optimizations and for quasi-global optimizations, indicating a good ability of the optimizer to find a solution with or without a well-defined set of initial multipole settings.
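    A generic particle swarm loop of the kind used for such model-independent tuning can be sketched as below, with the beam-line objective replaced by a cheap test function; only objective evaluations are used, mirroring an online tune where each trial is a beam measurement. All parameter values are illustrative, not the NSCL settings.

```python
import random

def pso(objective, dim, n=20, iters=150, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=3):
    # Model-independent optimization: the loop only ever calls objective(),
    # so no ion-optical model of the system is required.
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [objective(p) for p in pos]
    g = min(range(n), key=pbest_f.__getitem__)
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            fi = objective(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f
```

For a nine-quadrupole tune, `dim` would be 9 and the bounds would be the hardware set-point limits; the roughly 500 trial solutions quoted above correspond to the total number of `objective()` calls.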

  17. Multifidelity Analysis and Optimization for Supersonic Design

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Willcox, Karen; March, Andrew; Haas, Alex; Rajnarayan, Dev; Kays, Cory

    2010-01-01

    Supersonic aircraft design is a computationally expensive optimization problem, and multifidelity approaches offer a significant opportunity to reduce design time and computational cost. This report presents tools developed to improve supersonic aircraft design capabilities, including aerodynamic tools for supersonic aircraft configurations, a systematic way to manage model uncertainty, and multifidelity model management concepts that incorporate uncertainty. The aerodynamic analysis tools developed are appropriate for use in a multifidelity optimization framework, and include four analysis routines to estimate the lift and drag of a supersonic airfoil, as well as a multifidelity supersonic drag code that estimates the drag of aircraft configurations with three different methods: an area rule method, a panel method, and an Euler solver. In addition, five multifidelity optimization methods are developed, which include local and global methods as well as gradient-based and gradient-free techniques.

  18. A Study of Penalty Function Methods for Constraint Handling with Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Ortiz, Francisco

    2004-01-01

    COMETBOARDS (Comparative Evaluation Testbed of Optimization and Analysis Routines for Design of Structures) is a design optimization test bed that can evaluate the performance of several different optimization algorithms. A few of these optimization algorithms are the sequence of unconstrained minimization techniques (SUMT), sequential linear programming (SLP) and the sequential quadratic programming techniques (SQP). A genetic algorithm (GA) is a search technique that is based on the principles of natural selection or "survival of the fittest". Instead of using gradient information, the GA uses the objective function directly in the search. The GA searches the solution space by maintaining a population of potential solutions. Then, using evolving operations such as recombination, mutation and selection, the GA creates successive generations of solutions that will evolve and take on the positive characteristics of their parents and thus gradually approach optimal or near-optimal solutions. By using the objective function directly in the search, genetic algorithms can be effectively applied in non-convex, highly nonlinear, complex problems. The genetic algorithm is not guaranteed to find the global optimum, but it is less likely to get trapped at a local optimum than traditional gradient-based search methods when the objective function is not smooth and generally well behaved. The purpose of this research is to assist in the integration of genetic algorithm (GA) into COMETBOARDS. COMETBOARDS cast the design of structures as a constrained nonlinear optimization problem. One method used to solve constrained optimization problem with a GA to convert the constrained optimization problem into an unconstrained optimization problem by developing a penalty function that penalizes infeasible solutions. There have been several suggested penalty function in the literature each with there own strengths and weaknesses. 
A statistical analysis of some suggested penalty functions is performed in this study. Also, a response surface approach to robust design is used to develop a new penalty function approach. This new penalty function approach is then compared with the other existing penalty functions.
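
    The penalty-function conversion described above can be sketched in a few lines. The quadratic exterior penalty below is one illustrative choice (the fixed weight `r` is an assumption), not one of the specific penalty functions analyzed in the study:

```python
def penalized_objective(f, constraints, r=1000.0):
    """Quadratic exterior penalty: convert min f(x) s.t. g(x) <= 0 into an
    unconstrained problem a GA can search directly (no gradients needed).
    The weight r is an assumed fixed value; the study compares more
    sophisticated penalty schemes."""
    def wrapped(x):
        # Each violated constraint (g(x) > 0) contributes r * g(x)^2.
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + r * violation
    return wrapped

# Example: minimize x^2 subject to x >= 1, written as g(x) = 1 - x <= 0.
obj = penalized_objective(lambda x: x * x, [lambda x: 1.0 - x])
print(obj(1.0))   # 1.0  (feasible: no penalty added)
print(obj(0.0))   # 1000.0  (infeasible: heavily penalized)
```

    A GA evaluating `obj` directly will rank feasible candidates above infeasible ones without any gradient information.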

  19. Phase space localization for anti-de Sitter quantum mechanics and its zero curvature limit

    NASA Technical Reports Server (NTRS)

    Elgradechi, Amine M.

    1993-01-01

    Using techniques of geometric quantization and SO(sub 0)(3,2)-coherent states, a notion of optimal localization on phase space is defined for the quantum theory of a massive and spinning particle in anti-de Sitter space time. It is shown that this notion disappears in the zero curvature limit, providing one with a concrete example of the regularizing character of the constant (nonzero) curvature of the anti-de Sitter space time. As a byproduct a geometric characterization of masslessness is obtained.

  20. Magnetoelectric force microscopy based on magnetic force microscopy with modulated electric field.

    PubMed

    Geng, Yanan; Wu, Weida

    2014-05-01

    We present the realization of a mesoscopic imaging technique, namely Magnetoelectric Force Microscopy (MeFM), for visualization of the local magnetoelectric effect. The basic principle of MeFM is lock-in detection of the local magnetoelectric response, i.e., the electric-field-induced magnetization, using magnetic force microscopy. We demonstrate the MeFM capability by visualizing magnetoelectric domains on single crystals of multiferroic hexagonal manganites. Results of several control experiments exclude artifacts or extrinsic origins of the MeFM signal. The measurement parameters are tuned to optimize the signal-to-noise ratio.

  1. Locating single-point sources from arrival times containing large picking errors (LPEs): the virtual field optimization method (VFOM)

    NASA Astrophysics Data System (ADS)

    Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun

    2016-01-01

    Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is automatic picking of arrival times. However, arrival times generated by automated techniques often contain large picking errors (LPEs), which may make the location solution unreliable and the integrated system unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous, virtually constructed objective function to search for the common intersection of the hyperboloids determined by sensor pairs, rather than minimizing the residual between model-calculated and measured arrival times. The results of numerical examples and in-situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also applicable to locating acoustic sources with passive techniques such as passive sonar detection and acoustic emission.

  2. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log-log mesh optimization and local monotonicity preserving Steffen spline

    NASA Astrophysics Data System (ADS)

    Maglevanny, I. I.; Smolar, V. A.

    2016-01-01

    We introduce a new technique for interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so that so-called "data gaps" can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on such data may not predict reasonable physical results. Reliable interpolation tools suitable for ELF applications should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on the fitting quality, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log-log scaling transform of the data, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piecewise-smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points, where they are given by the data, but not between two adjacent grid points. The proposed technique is found to give the most accurate results, and its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
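
    A minimal pure-Python sketch of a Steffen-style monotonicity-preserving cubic interpolant is given below. It is an illustration only: the paper additionally applies the log-log transform to the data before splining, and the end-point slopes here use a simplified one-sided rule rather than Steffen's parabolic end conditions.

```python
import bisect

def steffen_slopes(x, y):
    """Node derivatives for a Steffen-type monotonicity-preserving cubic.
    The slope limiter guarantees that local extrema occur only at data
    points, never between two adjacent grid points."""
    n = len(x)
    h = [x[i + 1] - x[i] for i in range(n - 1)]            # interval widths
    s = [(y[i + 1] - y[i]) / h[i] for i in range(n - 1)]   # secant slopes
    d = [s[0]] + [0.0] * (n - 2) + [s[-1]]  # simplified one-sided end slopes
    for i in range(1, n - 1):
        if s[i - 1] * s[i] <= 0:
            d[i] = 0.0                       # data extremum: force a flat node
        else:
            p = (s[i - 1] * h[i] + s[i] * h[i - 1]) / (h[i - 1] + h[i])
            sign = 1.0 if p > 0 else -1.0
            d[i] = 2.0 * sign * min(abs(s[i - 1]), abs(s[i]), 0.5 * abs(p))
    return d

def steffen_eval(x, y, xq):
    """Evaluate the piecewise cubic Hermite interpolant at xq."""
    d = steffen_slopes(x, y)
    i = min(max(bisect.bisect_right(x, xq) - 1, 0), len(x) - 2)
    h = x[i + 1] - x[i]
    t = (xq - x[i]) / h
    h00, h10 = 2 * t**3 - 3 * t**2 + 1, t**3 - 2 * t**2 + t
    h01, h11 = -2 * t**3 + 3 * t**2, t**3 - t**2
    return h00 * y[i] + h10 * h * d[i] + h01 * y[i + 1] + h11 * h * d[i + 1]

# The fit passes through every sample and stays flat between equal samples.
xs, ys = [1.0, 2.0, 4.0, 8.0], [1.0, 4.0, 4.0, 16.0]
print(steffen_eval(xs, ys, 3.0))  # 4.0: no spurious oscillation between the two 4.0 samples
```

    For ELF data one would interpolate `log(y)` against `log(x)` with the same routine and exponentiate the result.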

  3. Hybrid surrogate-model-based multi-fidelity efficient global optimization applied to helicopter blade design

    NASA Astrophysics Data System (ADS)

    Ariyarit, Atthaphon; Sugiura, Masahiko; Tanabe, Yasutada; Kanazaki, Masahiro

    2018-06-01

    A multi-fidelity optimization technique based on an efficient global optimization process with a hybrid surrogate model is investigated for solving real-world design problems. The model constructs the local deviation using the kriging method and the global model using a radial basis function. The expected improvement is computed to decide on additional samples that can improve the model. The approach was first investigated by solving mathematical test problems. The results were compared with optimization results from an ordinary kriging method and a co-kriging method, and the proposed method produced the best solution. The proposed method was also applied to aerodynamic design optimization of helicopter blades to obtain the maximum blade efficiency. The optimal shape obtained by the proposed method achieved performance almost equivalent to that obtained by single-fidelity optimization based on high-fidelity evaluations. Comparing all three methods, the proposed method required the lowest total number of high-fidelity evaluation runs to obtain a converged solution.

  4. Dosimetric Comparison of Intensity-Modulated Stereotactic Radiotherapy With Other Stereotactic Techniques for Locally Recurrent Nasopharyngeal Carcinoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kung, Shiris Wai Sum; Wu, Vincent Wing Cheung; Kam, Michael Koon Ming, E-mail: kamkm@yahoo.co

    2011-01-01

    Purpose: Locally recurrent nasopharyngeal carcinoma (NPC) patients can be salvaged by reirradiation, with a substantial degree of radiation-related complications. Stereotactic radiotherapy (SRT) is widely used in this regard because of its rapid dose falloff and high geometric precision. The aim of this study was to examine whether the newly developed intensity-modulated stereotactic radiotherapy (IMSRT) has any dosimetric advantages over three other stereotactic techniques, including circular arc (CARC), static conformal beam (SmMLC), and dynamic conformal arc (mARC), in treating locally recurrent NPC. Methods and Materials: Computed tomography images of 32 patients with locally recurrent NPC, previously treated with SRT, were retrieved from the stereotactic planning system for contouring and computing treatment plans. Treatment planning of each patient was performed for the four treatment techniques: CARC, SmMLC, mARC, and IMSRT. The conformity index (CI) and homogeneity index (HI) of the planning target volume (PTV) and doses to the organs at risk (OARs) and normal tissue were compared. Results: All four techniques delivered adequate doses to the PTV. IMSRT, SmMLC, and mARC delivered reasonably conformal and homogeneous doses to the PTV (CI <1.47, HI <0.53), but CARC did not (p < 0.05). IMSRT presented with the smallest CI (1.37) and HI (0.40). Among the four techniques, IMSRT spared the greatest number of OARs, namely brainstem, temporal lobes, optic chiasm, and optic nerve, and had the smallest normal tissue volume in the low-dose region. Conclusion: Based on the dosimetric comparison, IMSRT was optimal for locally recurrent NPC, delivering a conformal and homogeneous dose to the PTV while sparing OARs.

  5. Demodulation techniques for the amplitude modulated laser imager

    NASA Astrophysics Data System (ADS)

    Mullen, Linda; Laux, Alan; Cochenour, Brandon; Zege, Eleonora P.; Katsev, Iosif L.; Prikhach, Alexander S.

    2007-10-01

    A new technique has been found that uses in-phase and quadrature phase (I/Q) demodulation to optimize the images produced with an amplitude-modulated laser imaging system. An I/Q demodulator was used to collect the I/Q components of the received modulation envelope. It was discovered that by adjusting the local oscillator phase and the modulation frequency, the backscatter and target signals can be analyzed separately via the I/Q components. This new approach enhances image contrast beyond what was achieved with a previous design that processed only the composite magnitude information.

  6. Topology-optimized metasurfaces: impact of initial geometric layout.

    PubMed

    Yang, Jianji; Fan, Jonathan A

    2017-08-15

    Topology optimization is a powerful iterative inverse design technique in metasurface engineering and can transform an initial layout into a high-performance device. With this method, devices are optimized within a local design phase space, making the identification of suitable initial geometries essential. In this Letter, we examine the impact of initial geometric layout on the performance of large-angle (75 deg) topology-optimized metagrating deflectors. We find that when conventional metasurface designs based on dielectric nanoposts are used as initial layouts for topology optimization, the final devices have efficiencies around 65%. In contrast, when random initial layouts are used, the final devices have ultra-high efficiencies that can reach 94%. Our numerical experiments suggest that device topologies based on conventional metasurface designs may not be suitable to produce ultra-high-efficiency, large-angle metasurfaces. Rather, initial geometric layouts with non-trivial topologies and shapes are required.

  7. Aerodynamic design and optimization in one shot

    NASA Technical Reports Server (NTRS)

    Ta'asan, Shlomo; Kuruvila, G.; Salas, M. D.

    1992-01-01

    This paper describes an efficient numerical approach for the design and optimization of aerodynamic bodies. As in classical optimal control methods, the present approach introduces a cost function and a costate variable (Lagrange multiplier) in order to achieve a minimum. High efficiency is achieved by using a multigrid technique to solve for all the unknowns simultaneously, but restricting work on a design variable only to grids on which their changes produce nonsmooth perturbations. Thus, the effort required to evaluate design variables that have nonlocal effects on the solution is confined to the coarse grids. However, if a variable has a nonsmooth local effect on the solution in some neighborhood, it is relaxed in that neighborhood on finer grids. The cost of solving the optimal control problem is shown to be approximately two to three times the cost of the equivalent analysis problem. Examples are presented to illustrate the application of the method to aerodynamic design and constraint optimization.

  8. The dual role of fragments in fragment-assembly methods for de novo protein structure prediction

    PubMed Central

    Handl, Julia; Knowles, Joshua; Vernon, Robert; Baker, David; Lovell, Simon C.

    2013-01-01

    In fragment-assembly techniques for protein structure prediction, models of protein structure are assembled from fragments of known protein structures. This process is typically guided by a knowledge-based energy function and uses a heuristic optimization method. The fragments play two important roles in this process: they define the set of structural parameters available, and they also assume the role of the main variation operators that are used by the optimiser. Previous analysis has typically focused on the first of these roles. In particular, the relationship between local amino acid sequence and local protein structure has been studied by a range of authors. The correlation between the two has been shown to vary with the window length considered, and the results of these analyses have informed directly the choice of fragment length in state-of-the-art prediction techniques. Here, we focus on the second role of fragments and aim to determine the effect of fragment length from an optimization perspective. We use theoretical analyses to reveal how the size and structure of the search space changes as a function of insertion length. Furthermore, empirical analyses are used to explore additional ways in which the size of the fragment insertion influences the search both in a simulation model and for the fragment-assembly technique, Rosetta. PMID:22095594

  9. Comparisons of neural networks to standard techniques for image classification and correlation

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1994-01-01

    Neural network techniques for multispectral image classification and spatial pattern detection are compared to the standard techniques of maximum-likelihood classification and spatial correlation. The neural network produced a more accurate classification than maximum-likelihood of a Landsat scene of Tucson, Arizona. Some of the errors in the maximum-likelihood classification are illustrated using decision region and class probability density plots. As expected, the main drawback to the neural network method is the long time required for the training stage. The network was trained using several different hidden layer sizes to optimize both the classification accuracy and training speed, and it was found that one node per class was optimal. The performance improved when 3x3 local windows of image data were entered into the net. This modification introduces texture into the classification without explicit calculation of a texture measure. Larger windows were successfully used for the detection of spatial features in Landsat and Magellan synthetic aperture radar imagery.

  10. On the chemical homogeneity of InxGa1–xN alloys – Electron microscopy at the edge of technical limits

    DOE PAGES

    Specht, Petra; Kisielowski, Christian

    2016-08-30

    Ternary InxGa1–xN alloys became technologically attractive when p-doping was achieved, enabling blue and green light-emitting diodes (LEDs). Starting in the mid-1990s, investigations of their chemical homogeneity were driven by the need to understand carrier recombination mechanisms in optical device structures in order to optimize their performance. Transmission electron microscopy (TEM) is the technique of choice to complement optical data evaluations, which suggest the coexistence of local carrier recombination mechanisms based on piezoelectric field effects and on indium clustering in the quantum wells of LEDs. We summarize the historic context of homogeneity investigations using electron microscopy techniques, which can in principle resolve the question of indium segregation and clustering in InxGa1–xN alloys if optimal sample preparation and electron dose-controlled imaging techniques are employed together with advanced data evaluation.

  11. Different types of maximum power point tracking techniques for renewable energy systems: A survey

    NASA Astrophysics Data System (ADS)

    Khan, Mohammad Junaid; Shukla, Praveen; Mustafa, Rashid; Chatterji, S.; Mathew, Lini

    2016-03-01

    Global demand for electricity is increasing while production of energy from fossil fuels is declining; the obvious choice of a clean, abundant energy source that could provide security for future development is energy from the sun. The power-voltage characteristic of a photovoltaic generator is nonlinear and, under non-uniform irradiance, exhibits multiple peaks: many local peaks and one global peak. To operate at the global peak, maximum power point tracking (MPPT) is an essential component of photovoltaic systems. Many review articles discuss conventional techniques such as perturb and observe (P&O), incremental conductance and ripple correlation control, but few attempts have been made to survey intelligent MPPT techniques. This paper also discusses algorithms based on fuzzy logic, Ant Colony Optimization, Genetic Algorithms, artificial neural networks, Particle Swarm Optimization, the Firefly Algorithm, extremum seeking control and hybrid methods, applied to maximum power point tracking in photovoltaic systems under changing irradiance conditions.
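
    The conventional P&O technique surveyed above amounts to a simple hill-climber on the power-voltage curve; a minimal sketch follows. The single-peak curve used in the example is an assumed stand-in, which is exactly the case where P&O works; under partial shading it can lock onto a local peak, motivating the intelligent methods.

```python
def perturb_and_observe(power_at, v0, step=0.1, iters=100):
    """Minimal perturb-and-observe (P&O) MPPT hill-climber.
    Perturb the operating voltage; keep the direction while measured
    power rises, reverse it when power drops."""
    v, p_prev, direction = v0, power_at(v0), 1
    for _ in range(iters):
        v = v + direction * step
        p = power_at(v)
        if p < p_prev:              # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Toy single-peak P-V curve (an assumption for illustration), maximum at 17 V.
pv = lambda v: 100.0 - (v - 17.0) ** 2
v_mpp = perturb_and_observe(pv, v0=12.0)
print(round(v_mpp, 1))  # settles near 17, oscillating within one step of the peak
```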

  12. MinFinder: Locating all the local minima of a function

    NASA Astrophysics Data System (ADS)

    Tsoulos, Ioannis G.; Lagaris, Isaac E.

    2006-01-01

    A new stochastic clustering algorithm is introduced that aims to locate all the local minima of a multidimensional continuous and differentiable function inside a bounded domain. The accompanying software (MinFinder) is written in ANSI C++. However, the user may code his objective function either in C++, C or Fortran 77. We compare the performance of this new method to the performance of Multistart and Topographical Multilevel Single Linkage Clustering on a set of benchmark problems.
    Program summary
    Title of program: MinFinder
    Catalogue identifier: ADWU
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWU
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computer for which the program is designed and others on which it has been tested: The tool is designed to be portable to all systems running the GNU C++ compiler
    Installation: University of Ioannina, Greece
    Programming language used: GNU-C++, GNU-C, GNU Fortran 77
    Memory required to execute with typical data: 200 KB
    No. of bits in a word: 32
    No. of processors used: 1
    Has the code been vectorized or parallelized?: no
    No. of lines in distributed program, including test data, etc.: 5797
    No. of bytes in distributed program, including test data, etc.: 588 121
    Distribution format: gzipped tar file
    Nature of the physical problem: A multitude of problems in science and engineering are often reduced to minimizing a function of many variables. There are instances where a local optimum does not correspond to the desired physical solution and hence the search for a better solution is required. Local optimization techniques can be trapped in any local minimum. Global optimization is then the appropriate tool. For example, when solving a non-linear system of equations via optimization, employing a "least squares" type of objective, one may encounter many local minima that do not correspond to solutions, i.e. they are far from zero.
    Method of solution: Using a uniform pdf, points are sampled from the rectangular search domain. A clustering technique, based on a typical distance and a gradient criterion, is used to decide from which points a local search should be started. The employed local procedure is a BFGS version due to Powell. Further searching is terminated when all the local minima inside the search domain are thought to be found. This is accomplished via the double-box rule.
    Typical running time: Depending on the objective function
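
    The Multistart baseline that MinFinder is compared against can be sketched as follows. The plain gradient-descent local search and the one-dimensional test function are illustrative stand-ins for the BFGS procedure and benchmark problems used by the actual program:

```python
import random

def multistart_minima(grad, bounds, n_starts=200, lr=0.01, steps=800,
                      tol=1e-2, seed=0):
    """Multistart: sample uniform start points, run a local search from
    each, and keep the distinct minimizers found.  A simple clamped
    gradient descent stands in for the local procedure."""
    rng = random.Random(seed)
    lo, hi = bounds
    minima = []
    for _ in range(n_starts):
        x = rng.uniform(lo, hi)
        for _ in range(steps):
            x = min(hi, max(lo, x - lr * grad(x)))   # clamped descent step
        if not any(abs(x - m) < tol for m in minima):
            minima.append(x)                          # a new local minimum
    return sorted(minima)

# f(x) = (x^2 - 1)^2 has exactly two local minima, x = -1 and x = 1, in [-2, 2];
# its gradient is 4x(x^2 - 1).
grad = lambda x: 4.0 * x * (x * x - 1.0)
found = multistart_minima(grad, (-2.0, 2.0))
print(found)  # two values, near -1 and 1
```

    MinFinder improves on this baseline by clustering the sample points so that only one local search is started per region of attraction.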

  13. Pain in children--are we accomplishing the optimal pain treatment?

    PubMed

    Lundeberg, Stefan

    2015-01-01

    Morphine, paracetamol and local anesthetics have long been, by tradition, the most used analgesics in the pediatric patient, but they are not always sufficiently effective and are associated with side effects. The purpose of this article is to propose alternative approaches in pain management, not always supported by substantial scientific work but by a combination of science and clinical experience in the field. The scientific literature has been reviewed in part regarding different aspects of pain assessment and the analgesics used for treatment of diverse pain conditions, with a focus on procedural and acute pain. Clinical experience has been added to form the suggested improvements for accomplishing better pain management in pediatric patients. The aim of pain management in children should be tailored analgesic medication with an individually acceptable pain level and an optimal degree of mobilization, with as few side effects as possible. Simple techniques of pain control are as effective as complex techniques in pediatrics, and the technique used is not of the highest importance in achieving good pain management. Increased interest and improved education of the doctors prescribing analgesics are important in accomplishing better pain management. Optimal treatment with analgesics depends on the analysis of pain origin, and the analgesics used should be adjusted accordingly. A multimodal treatment regime is advocated for optimal analgesic effect. © 2014 John Wiley & Sons Ltd.

  14. One-step fabrication of submicrostructures by low one-photon absorption direct laser writing technique with local thermal effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Dam Thuy Trang; Tong, Quang Cong; Ledoux-Rak, Isabelle

    In this work, the local thermal effect induced by a continuous-wave laser has been investigated and exploited to optimize the low one-photon absorption (LOPA) direct laser writing (DLW) technique for fabrication of polymer-based microstructures. It was demonstrated that the temperature of the excited SU8 photoresist at the focusing area increases to above 100 °C due to the high excitation intensity and becomes stable at that temperature thanks to the use of a continuous-wave laser at 532 nm wavelength. This optically induced thermal effect immediately completes the crosslinking process at the photopolymerized region, allowing the desired structures to be obtained without the conventional post-exposure bake (PEB) step usually performed after exposure. Theoretical calculation of the temperature distribution induced by local optical excitation, using the finite element method, confirmed the experimental results. The LOPA-based DLW technique combined with the optically induced thermal effect (local PEB) shows great advantages over the traditional PEB, such as simplicity, short fabrication time and high resolution. In particular, it allowed overcoming the accumulation effect inherent in optical lithography by one-photon absorption, resulting in small and uniform structures with very short lattice constants.

  15. On Global Optimal Sailplane Flight Strategy

    NASA Technical Reports Server (NTRS)

    Sander, G. J.; Litt, F. X.

    1979-01-01

    The derivation and interpretation of the necessary conditions that a sailplane cross-country flight has to satisfy to achieve the maximum global flight speed are considered. Simple rules are obtained for two specific meteorological models. The first uses concentrated lifts of various strengths at unequal distances. The second takes into account finite, nonuniform spatial extents for the lifts and therefore allows for dolphin-style flight. In both models, altitude constraints consisting of upper and lower limits are shown to be essential for modeling realistic problems. Numerical examples illustrate the difference from existing techniques based on local optimality conditions.

  16. Imaging in rectal cancer with emphasis on local staging with MRI

    PubMed Central

    Arya, Supreeta; Das, Deepak; Engineer, Reena; Saklani, Avanish

    2015-01-01

    Imaging in rectal cancer has a vital role in staging disease, and in selecting and optimizing treatment planning. High-resolution MRI (HR-MRI) is the recommended method of first choice for local staging of rectal cancer for both primary staging and for restaging after preoperative chemoradiation (CT-RT). HR-MRI helps decide between upfront surgery and preoperative CT-RT. It provides high accuracy for prediction of circumferential resection margin at surgery, T category, and nodal status in that order. MRI also helps assess resectability after preoperative CT-RT and decide between sphincter saving or more radical surgery. Accurate technique is crucial for obtaining high-resolution images in the appropriate planes for correct staging. The phased array external coil has replaced the endorectal coil that is no longer recommended. Non-fat suppressed 2D T2-weighted (T2W) sequences in orthogonal planes to the tumor are sufficient for primary staging. Contrast-enhanced MRI is considered inappropriate for both primary staging and restaging. Diffusion-weighted sequence may be of value in restaging. Multidetector CT cannot replace MRI in local staging, but has an important role for evaluating distant metastases. Positron emission tomography-computed tomography (PET/CT) has a limited role in the initial staging of rectal cancer and is reserved for cases with resectable metastatic disease before contemplating surgery. This article briefly reviews the comprehensive role of imaging in rectal cancer, describes the role of MRI in local staging in detail, discusses the optimal MRI technique, and provides a synoptic report for both primary staging and restaging after CT-RT in routine practice. PMID:25969638

  17. A review of distributed parameter groundwater management modeling methods

    USGS Publications Warehouse

    Gorelick, Steven M.

    1983-01-01

    Models which solve the governing groundwater flow or solute transport equations in conjunction with optimization techniques, such as linear and quadratic programming, are powerful aquifer management tools. Groundwater management models fall into two general categories: hydraulics or policy evaluation and water allocation. Groundwater hydraulic management models enable the determination of optimal locations and pumping rates of numerous wells under a variety of restrictions placed upon local drawdown, hydraulic gradients, and water production targets. Groundwater policy evaluation and allocation models can be used to study the influence upon regional groundwater use of institutional policies such as taxes and quotas. Furthermore, fairly complex groundwater-surface water allocation problems can be handled using system decomposition and multilevel optimization. Experience from the few real world applications of groundwater optimization-management techniques is summarized. Classified separately are methods for groundwater quality management aimed at optimal waste disposal in the subsurface. This classification is composed of steady state and transient management models that determine disposal patterns in such a way that water quality is protected at supply locations. Classes of research missing from the literature are groundwater quality management models involving nonlinear constraints, models which join groundwater hydraulic and quality simulations with political-economic management considerations, and management models that include parameter uncertainty.

  18. Optimal gains for a single polar orbiting satellite

    NASA Technical Reports Server (NTRS)

    Banfield, Don; Ingersoll, A. P.; Keppenne, C. L.

    1993-01-01

    Gains are the spatial weighting of an observation in its neighborhood versus the local values of a model prediction. They are the key to data assimilation, as they are the direct measure of how the data are used to guide the model. As derived in the broad context of data assimilation by Kalman and in the context of meteorology, for example, by Rutherford, the optimal gains are functions of the prediction error covariances between the observation and analysis points. Kalman introduced a very powerful technique that allows one to calculate these optimal gains at the time of each observation. Unfortunately, this technique is both computationally expensive and often numerically unstable for dynamical systems of the magnitude of meteorological models, and thus is unsuited for use in PMIRR data assimilation. However, the optimal gains as calculated by a Kalman filter do reach a steady state for regular observing patterns like that of a satellite. In this steady state, the gains are constants in time, and thus could conceivably be computed off-line. These steady-state Kalman gains (i.e., Wiener gains) would yield optimal performance without the computational burden of true Kalman filtering. We proposed to use this type of constant-in-time Wiener gain for the assimilation of data from PMIRR and Mars Observer.
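
    The steady-state (Wiener) gain described above can be computed off-line by iterating the Riccati recursion to its fixed point. The scalar model below is a toy illustration of that idea, not the PMIRR assimilation system:

```python
def steady_state_gain(a, q, r, iters=1000):
    """Iterate the scalar discrete Riccati recursion to its fixed point.
    For a regular observing pattern the gain converges to a constant-in-time
    (Wiener) gain that can be precomputed and reused at every observation."""
    p = 1.0                              # initial analysis error variance
    for _ in range(iters):
        p_pred = a * a * p + q           # prediction error variance
        k = p_pred / (p_pred + r)        # optimal Kalman gain this cycle
        p = (1.0 - k) * p_pred           # analysis error variance
    return k

# Random-walk state (a = 1) with process noise q and observation noise r:
# the converged gain weights each observation against the model forecast.
k = steady_state_gain(a=1.0, q=0.01, r=1.0)
print(round(k, 3))  # ≈ 0.095
```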

  19. A Review of Distributed Parameter Groundwater Management Modeling Methods

    NASA Astrophysics Data System (ADS)

    Gorelick, Steven M.

    1983-04-01

    Models which solve the governing groundwater flow or solute transport equations in conjunction with optimization techniques, such as linear and quadratic programming, are powerful aquifer management tools. Groundwater management models fall into two general categories: hydraulics or policy evaluation and water allocation. Groundwater hydraulic management models enable the determination of optimal locations and pumping rates of numerous wells under a variety of restrictions placed upon local drawdown, hydraulic gradients, and water production targets. Groundwater policy evaluation and allocation models can be used to study the influence upon regional groundwater use of institutional policies such as taxes and quotas. Furthermore, fairly complex groundwater-surface water allocation problems can be handled using system decomposition and multilevel optimization. Experience from the few real world applications of groundwater optimization-management techniques is summarized. Classified separately are methods for groundwater quality management aimed at optimal waste disposal in the subsurface. This classification is composed of steady state and transient management models that determine disposal patterns in such a way that water quality is protected at supply locations. Classes of research missing from the literature are groundwater quality management models involving nonlinear constraints, models which join groundwater hydraulic and quality simulations with political-economic management considerations, and management models that include parameter uncertainty.

  20. Parallel tiled Nussinov RNA folding loop nest generated using both dependence graph transitive closure and loop skewing.

    PubMed

    Palkowski, Marek; Bielecki, Wlodzimierz

    2017-06-02

    RNA secondary structure prediction is a compute intensive task that lies at the core of several search algorithms in bioinformatics. Fortunately, the RNA folding approaches, such as the Nussinov base pair maximization, involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. Polyhedral compilation techniques have proven to be a powerful tool for optimization of dense array codes. However, classical affine loop nest transformations used with these techniques do not optimize effectively codes of dynamic programming of RNA structure predictions. The purpose of this paper is to present a novel approach allowing for generation of a parallel tiled Nussinov RNA loop nest exposing significantly higher performance than that of known related code. This effect is achieved due to improving code locality and calculation parallelization. In order to improve code locality, we apply our previously published technique of automatic loop nest tiling to all the three loops of the Nussinov loop nest. This approach first forms original rectangular 3D tiles and then corrects them to establish their validity by means of applying the transitive closure of a dependence graph. To produce parallel code, we apply the loop skewing technique to a tiled Nussinov loop nest. The technique is implemented as a part of the publicly available polyhedral source-to-source TRACO compiler. Generated code was run on modern Intel multi-core processors and coprocessors. We present the speed-up factor of generated Nussinov RNA parallel code and demonstrate that it is considerably faster than related codes in which only the two outer loops of the Nussinov loop nest are tiled.
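
    The Nussinov recurrence whose loop nest is tiled and skewed in this work can be sketched directly; this is the untiled reference version with the classic triple loop:

```python
def nussinov(seq, min_loop=1):
    """Nussinov base-pair maximization.  N[i][j] holds the maximum number
    of base pairs in seq[i..j]; the triple loop below is the affine loop
    nest that tiling and skewing transformations target."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    N = [[0] * n for _ in range(n)]
    for span in range(1, n):             # subsequence lengths, short to long
        for i in range(n - span):
            j = i + span
            best = max(N[i + 1][j], N[i][j - 1])       # i or j left unpaired
            if span > min_loop and (seq[i], seq[j]) in pairs:
                best = max(best, N[i + 1][j - 1] + 1)  # pair (i, j)
            for k in range(i + 1, j):                  # bifurcation split
                best = max(best, N[i][k] + N[k + 1][j])
            N[i][j] = best
    return N[0][n - 1]

print(nussinov("GGGAAACCC"))  # 3: three G-C pairs closing a hairpin
```

    The data dependences visible here (each cell needs cells below, to the left, and along the split) are what make naive rectangular tiling invalid and motivate the transitive-closure correction and loop skewing.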

  1. Design and optimization of a brachytherapy robot

    NASA Astrophysics Data System (ADS)

    Meltsner, Michael A.

    Trans-rectal ultrasound guided (TRUS) low dose rate (LDR) interstitial brachytherapy has become a popular procedure for the treatment of prostate cancer, the most common type of non-skin cancer among men. The current TRUS technique of LDR implantation may result in less than ideal coverage of the tumor with increased risk of negative response such as rectal toxicity and urinary retention. This technique is limited by the skill of the physician performing the implant, the accuracy of needle localization, and the inherent weaknesses of the procedure itself. The treatment may require 100 or more sources and 25 needles, compounding the inaccuracy of the needle localization procedure. A robot designed for prostate brachytherapy may increase the accuracy of needle placement while minimizing the effect of physician technique in the TRUS procedure. Furthermore, a robot may improve associated toxicities by utilizing angled insertions and freeing implantations from constraints applied by the 0.5 cm-spaced template used in the TRUS method. Within our group, Lin et al. have designed a new type of LDR source. The "directional" source is a seed designed to be partially shielded. Thus, a directional, or anisotropic, source does not emit radiation in all directions. The source can be oriented to irradiate cancerous tissues while sparing normal ones. This type of source necessitates a new, highly accurate method for localization in 6 degrees of freedom. A robot is the best way to accomplish this task accurately. The following presentation of work describes the invention and optimization of a new prostate brachytherapy robot that fulfills these goals. Furthermore, some research has been dedicated to the use of the robot to perform needle insertion tasks (brachytherapy, biopsy, RF ablation, etc.) in nearly any other soft tissue in the body. This can be accomplished with the robot combined with automatic, magnetic tracking.

  2. Research reactor loading pattern optimization using estimation of distribution algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, S.; Ziver, K.; AMCG Group, RM Consultants, Abingdon

    2006-07-01

    A new evolutionary search approach for solving nuclear reactor loading pattern optimization problems, based on Estimation of Distribution Algorithms (EDAs), is presented. The optimization technique developed is then applied to the maximization of the effective multiplication factor (K{sub eff}) of the Imperial College CONSORT research reactor (the last remaining civilian research reactor in the United Kingdom). A new elitism-guided searching strategy has been developed and applied to improve the local convergence together with some problem-dependent information based on 'stand-alone' K{sub eff} with fuel coupling calculations. A comparison study between the EDAs and a Genetic Algorithm with Heuristic Tie Breaking Crossover operator has shown that the new algorithm is efficient and robust. (authors)
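    The estimation-of-distribution idea can be sketched with the simplest member of the family, a univariate marginal model (UMDA): sample candidate loading patterns from a probability vector, select elites, and re-estimate the distribution. Here a toy OneMax fitness stands in for the expensive K_eff evaluation; all parameter values are illustrative.

    ```python
    # Sketch of a univariate Estimation of Distribution Algorithm (UMDA).
    import numpy as np

    rng = np.random.default_rng(0)

    def umda(fitness, n_bits=20, pop=50, elite=15, gens=30):
        p = np.full(n_bits, 0.5)                  # marginal P(bit = 1) per position
        best_x, best_f = None, -np.inf
        for _ in range(gens):
            X = (rng.random((pop, n_bits)) < p).astype(int)   # sample from the model
            f = np.array([fitness(x) for x in X])
            order = np.argsort(f)
            if f[order[-1]] > best_f:                          # track best-so-far
                best_x, best_f = X[order[-1]].copy(), f[order[-1]]
            p = 0.9 * p + 0.1 * X[order[-elite:]].mean(axis=0) # re-estimate from elites
            p = p.clip(0.05, 0.95)                             # keep some exploration
        return best_x, best_f

    # Stand-in fitness: count of ones (a real application would evaluate K_eff here).
    best_x, best_f = umda(lambda x: int(x.sum()))
    ```

    Elitism-guided variants, as in the paper, additionally bias the model update toward the best individuals found so far.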

  3. Coherent optimal control of photosynthetic molecules

    NASA Astrophysics Data System (ADS)

    Caruso, F.; Montangero, S.; Calarco, T.; Huelga, S. F.; Plenio, M. B.

    2012-04-01

    We demonstrate theoretically that open-loop quantum optimal control techniques can provide efficient tools for the verification of various quantum coherent transport mechanisms in natural and artificial light-harvesting complexes under realistic experimental conditions. To assess the feasibility of possible biocontrol experiments, we introduce the main settings and derive optimally shaped and robust laser pulses that allow for the faithful preparation of specified initial states (such as localized excitation or coherent superposition, i.e., propagating and nonpropagating states) of the photosystem and probe efficiently the subsequent dynamics. With these tools, different transport pathways can be discriminated, which should facilitate the elucidation of genuine quantum dynamical features of photosystems and therefore enhance our understanding of the role that coherent processes may play in actual biological complexes.

  4. The role of systemic therapy in the management of sinonasal cancer: A critical review.

    PubMed

    Bossi, Paolo; Saba, Nabil F; Vermorken, Jan B; Strojan, Primoz; Pala, Laura; de Bree, Remco; Rodrigo, Juan Pablo; Lopez, Fernando; Hanna, Ehab Y; Haigentz, Missak; Takes, Robert P; Slootweg, Piet J; Silver, Carl E; Rinaldo, Alessandra; Ferlito, Alfio

    2015-12-01

    Due to the rarity and the variety of histological types of sinonasal cancers, there is a paucity of data regarding strategy for their optimal treatment. Generally, outcomes of advanced and higher grade tumors remain unsatisfactory, despite the employment of sophisticated surgical approaches, technical advances in radiation techniques and the use of heavy ion particles. In this context, we critically evaluated the role of systemic therapy as part of a multidisciplinary approach to locally advanced disease. Induction chemotherapy has shown encouraging activity and could have a role in the multimodal treatment of patients with advanced sinonasal tumors. For epithelial tumors, the most frequently employed chemotherapy is cisplatin, in combination with either 5-fluorouracil, taxane, ifosfamide, or vincristine. Only limited experiences with concurrent chemoradiation exist with sinonasal cancer. The role of systemic treatment for each histological type (intestinal-type adenocarcinoma, sinonasal undifferentiated carcinoma, sinonasal neuroendocrine carcinoma, olfactory neuroblastoma, sinonasal primary mucosal melanoma, sarcoma) is discussed. The treatment of SNC requires a multimodal approach. Employment of systemic therapy for locally advanced disease could result in better outcomes, and optimize the therapeutic armamentarium. Further studies are needed to precisely define the role of systemic therapy and identify the optimal sequencing for its administration in relation to local therapies. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Multiple-copy state discrimination: Thinking globally, acting locally

    NASA Astrophysics Data System (ADS)

    Higgins, B. L.; Doherty, A. C.; Bartlett, S. D.; Pryde, G. J.; Wiseman, H. M.

    2011-05-01

    We theoretically investigate schemes to discriminate between two nonorthogonal quantum states given multiple copies. We consider a number of state discrimination schemes as applied to nonorthogonal, mixed states of a qubit. In particular, we examine the difference that local and global optimization of local measurements makes to the probability of obtaining an erroneous result, in the regime of finite numbers of copies N, and in the asymptotic limit as N→∞. Five schemes are considered: optimal collective measurements over all copies, locally optimal local measurements in a fixed single-qubit measurement basis, globally optimal fixed local measurements, locally optimal adaptive local measurements, and globally optimal adaptive local measurements. Here an adaptive measurement is one in which the measurement basis can depend on prior measurement results. For each of these measurement schemes we determine the probability of error (for finite N) and the scaling of this error in the asymptotic limit. In the asymptotic limit, it is known analytically (and we verify numerically) that adaptive schemes have no advantage over the optimal fixed local scheme. Here we show moreover that, in this limit, the most naive scheme (locally optimal fixed local measurements) is as good as any noncollective scheme except for states with less than 2% mixture. For finite N, however, the most sophisticated local scheme (globally optimal adaptive local measurements) is better than any other noncollective scheme for any degree of mixture.

  6. Multiple-copy state discrimination: Thinking globally, acting locally

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Higgins, B. L.; Pryde, G. J.; Wiseman, H. M.

    2011-05-15

    We theoretically investigate schemes to discriminate between two nonorthogonal quantum states given multiple copies. We consider a number of state discrimination schemes as applied to nonorthogonal, mixed states of a qubit. In particular, we examine the difference that local and global optimization of local measurements makes to the probability of obtaining an erroneous result, in the regime of finite numbers of copies N, and in the asymptotic limit as N→∞. Five schemes are considered: optimal collective measurements over all copies, locally optimal local measurements in a fixed single-qubit measurement basis, globally optimal fixed local measurements, locally optimal adaptive local measurements, and globally optimal adaptive local measurements. Here an adaptive measurement is one in which the measurement basis can depend on prior measurement results. For each of these measurement schemes we determine the probability of error (for finite N) and the scaling of this error in the asymptotic limit. In the asymptotic limit, it is known analytically (and we verify numerically) that adaptive schemes have no advantage over the optimal fixed local scheme. Here we show moreover that, in this limit, the most naive scheme (locally optimal fixed local measurements) is as good as any noncollective scheme except for states with less than 2% mixture. For finite N, however, the most sophisticated local scheme (globally optimal adaptive local measurements) is better than any other noncollective scheme for any degree of mixture.

  7. Local image variance of 7 Tesla SWI is a new technique for preoperative characterization of diffusely infiltrating gliomas: correlation with tumour grade and IDH1 mutational status.

    PubMed

    Grabner, Günther; Kiesel, Barbara; Wöhrer, Adelheid; Millesi, Matthias; Wurzer, Aygül; Göd, Sabine; Mallouhi, Ammar; Knosp, Engelbert; Marosi, Christine; Trattnig, Siegfried; Wolfsberger, Stefan; Preusser, Matthias; Widhalm, Georg

    2017-04-01

    To investigate the value of local image variance (LIV) as a new technique for quantification of hypointense microvascular susceptibility-weighted imaging (SWI) structures at 7 Tesla for preoperative glioma characterization. Adult patients with neuroradiologically suspected diffusely infiltrating gliomas were prospectively recruited and 7 Tesla SWI was performed in addition to standard imaging. After tumour segmentation, quantification of intratumoural SWI hypointensities was conducted by the SWI-LIV technique. Following surgery, the histopathological tumour grade and isocitrate dehydrogenase 1 (IDH1)-R132H mutational status was determined and SWI-LIV values were compared between low-grade gliomas (LGG) and high-grade gliomas (HGG), IDH1-R132H negative and positive tumours, as well as gliomas with significant and non-significant contrast-enhancement (CE) on MRI. In 30 patients, 9 LGG and 21 HGG were diagnosed. The calculation of SWI-LIV values was feasible in all tumours. Significantly higher mean SWI-LIV values were found in HGG compared to LGG (92.7 versus 30.8; p < 0.0001), IDH1-R132H negative compared to IDH1-R132H positive gliomas (109.9 versus 38.3; p < 0.0001) and tumours with significant CE compared to non-significant CE (120.1 versus 39.0; p < 0.0001). Our data indicate that 7 Tesla SWI-LIV might improve preoperative characterization of diffusely infiltrating gliomas and thus optimize patient management by quantification of hypointense microvascular structures. • 7 Tesla local image variance helps to quantify hypointense susceptibility-weighted imaging structures. • SWI-LIV is significantly increased in high-grade and IDH1-R132H negative gliomas. • SWI-LIV is a promising technique for improved preoperative glioma characterization. • Preoperative management of diffusely infiltrating gliomas will be optimized.
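    A sliding-window local variance map of the kind described can be sketched in a few lines, assuming the common definition Var = E[x²] − E[x]² over a k×k neighbourhood; the window size and the synthetic "vessel" image are illustrative, not the paper's protocol.

    ```python
    # Sketch of a local image variance (LIV) map over a k x k window.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_variance(img, size=5):
        img = img.astype(float)
        mean = uniform_filter(img, size)            # E[x] per window
        mean_sq = uniform_filter(img * img, size)   # E[x^2] per window
        return np.clip(mean_sq - mean * mean, 0, None)  # clip tiny negative round-off

    # A hypointense line on a flat background yields high variance around it,
    # mimicking how SWI-LIV highlights microvascular structures.
    img = np.full((32, 32), 100.0)
    img[16, :] = 20.0                               # dark line ("vessel")
    liv = local_variance(img)
    ```

    Averaging such a map over a segmented tumour volume gives a single per-tumour quantity analogous to the mean SWI-LIV values compared between groups.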

  8. Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.

    2004-01-01

    Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial and error methods, to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database driven and direct evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then yields either a new local optimum and/or increases the fidelity of the approximation model for the next iteration. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive with an increase in dimensionality. Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables is increased.
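    The surrogate loop described above can be sketched in one dimension: fit a cheap response-surface model to a small database of expensive evaluations, then search the surrogate instead of the simulation. The objective below is a stand-in for a high-fidelity run; all values are illustrative.

    ```python
    # Database-driven surrogate sketch: quadratic response surface over 7 samples.
    import numpy as np

    true_obj = lambda x: (x - 0.3) ** 2 + 0.1   # stand-in for an expensive CFD evaluation

    x_db = np.linspace(0.0, 1.0, 7)             # design-variable database
    y_db = true_obj(x_db)                       # "high-fidelity" results
    coeffs = np.polyfit(x_db, y_db, deg=2)      # quadratic surrogate of the database

    # Searching the surrogate is cheap: evaluate it densely and take the minimizer.
    x_grid = np.linspace(0.0, 1.0, 1001)
    x_best = x_grid[np.argmin(np.polyval(coeffs, x_grid))]
    ```

    In a real cycle, x_best would be re-evaluated with the high-fidelity tool and appended to the database, refining the surrogate for the next iteration; the curse of dimensionality noted above arises because the number of samples needed to fit such surfaces grows rapidly with the number of design variables.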

  9. Global Design Optimization for Fluid Machinery Applications

    NASA Technical Reports Server (NTRS)

    Shyy, Wei; Papila, Nilay; Tucker, Kevin; Vaidyanathan, Raj; Griffin, Lisa

    2000-01-01

    Recent experiences in utilizing the global optimization methodology, based on polynomial and neural network techniques, for fluid machinery design are summarized. Global optimization methods can utilize information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. Another advantage is that these methods do not need to calculate the sensitivity of each design variable locally. However, a successful application of the global optimization method needs to address issues related to data requirements, which grow with the number of design variables, and methods for predicting model performance. Examples of applications selected from rocket propulsion components, including a supersonic turbine, an injector element, and a turbulent flow diffuser, are used to illustrate the usefulness of the global optimization method.

  10. An effective PSO-based memetic algorithm for flow shop scheduling.

    PubMed

    Liu, Bo; Wang, Ling; Jin, Yi-Hui

    2007-02-01

    This paper proposes an effective particle swarm optimization (PSO)-based memetic algorithm (MA) for the permutation flow shop scheduling problem (PFSSP) with the objective of minimizing the maximum completion time, which is a typical non-deterministic polynomial-time (NP) hard combinatorial optimization problem. In the proposed PSO-based MA (PSOMA), both PSO-based searching operators and some special local searching operators are designed to balance the exploration and exploitation abilities. In particular, the PSOMA applies the evolutionary searching mechanism of PSO, which is characterized by individual improvement, population cooperation, and competition, to effectively perform exploration. On the other hand, the PSOMA utilizes several adaptive local searches to perform exploitation. First, to make PSO suitable for solving PFSSP, a ranked-order value rule based on random key representation is presented to convert the continuous position values of particles to job permutations. Second, to generate an initial swarm with certain quality and diversity, the famous Nawaz-Enscore-Ham (NEH) heuristic is incorporated into the initialization of the population. Third, to balance the exploration and exploitation abilities, after the standard PSO-based searching operation, a new local search technique named NEH_1 insertion is probabilistically applied to some good particles selected by a roulette wheel mechanism. Fourth, to enrich the searching behaviors and to avoid premature convergence, a simulated annealing (SA)-based local search with multiple different neighborhoods is designed and incorporated into the PSOMA. Meanwhile, an effective adaptive meta-Lamarckian learning strategy is employed to decide which neighborhood to use in the SA-based local search. Finally, to further enhance the exploitation ability, a pairwise-based local search is applied after the SA-based search. Simulation results based on benchmarks demonstrate the effectiveness of the PSOMA. Additionally, the effects of some parameters on optimization performance are also discussed.
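    The ranked-order value (ROV) conversion mentioned above can be sketched directly: a particle's continuous position vector is mapped to a job permutation by ranking its components. Using argsort is one common realization of the rule; the position values below are illustrative.

    ```python
    # Ranked-order value (ROV) rule: smallest position value -> first job, etc.
    import numpy as np

    def rov(position):
        # Return the job permutation induced by ranking the continuous values.
        return np.argsort(position)

    position = np.array([1.7, 0.2, 2.5, 0.9])   # continuous PSO position for 4 jobs
    perm = rov(position)                        # processing order of the 4 jobs
    ```

    This keeps the PSO dynamics entirely in continuous space while the makespan is always evaluated on a valid permutation.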

  11. Energy Consumption Forecasting Using Semantic-Based Genetic Programming with Local Search Optimizer.

    PubMed

    Castelli, Mauro; Trujillo, Leonardo; Vanneschi, Leonardo

    2015-01-01

    Energy consumption forecasting (ECF) is an important policy issue in today's economies. An accurate ECF has great benefits for electric utilities, and both negative and positive errors lead to increased operating costs. The paper proposes a semantic-based genetic programming framework to address the ECF problem. In particular, we propose a system that finds (quasi-)perfect solutions with high probability and that generates models able to produce near-optimal predictions also on unseen data. The framework blends a recently developed version of genetic programming that integrates semantic genetic operators with a local search method. The main idea in combining semantic genetic programming and a local searcher is to couple the exploration ability of the former with the exploitation ability of the latter. Experimental results confirm the suitability of the proposed method in predicting energy consumption. In particular, the system produces a lower error with respect to the existing state-of-the-art techniques used on the same dataset. More importantly, this case study has shown that including a local searcher in the geometric semantic genetic programming system can speed up the search process and can result in fitter models that are able to produce an accurate forecast also on unseen data.

  12. Local classifier weighting by quadratic programming.

    PubMed

    Cevikalp, Hakan; Polikar, Robi

    2008-10-01

    It has been widely accepted that the classification accuracy can be improved by combining outputs of multiple classifiers. However, how to combine multiple classifiers with various (potentially conflicting) decisions is still an open problem. A rich collection of classifier combination procedures -- many of which are heuristic in nature -- have been developed for this goal. In this brief, we describe a dynamic approach to combine classifiers that have expertise in different regions of the input space. To this end, we use local classifier accuracy estimates to weight classifier outputs. Specifically, we estimate local recognition accuracies of classifiers near a query sample by utilizing its nearest neighbors, and then use these estimates to find the best weights of classifiers to label the query. The problem is formulated as a convex quadratic optimization problem, which returns optimal nonnegative classifier weights with respect to the chosen objective function, and the weights ensure that locally most accurate classifiers are weighted more heavily for labeling the query sample. Experimental results on several data sets indicate that the proposed weighting scheme outperforms other popular classifier combination schemes, particularly on problems with complex decision boundaries. Hence, the results indicate that local classification-accuracy-based combination techniques are well suited for decision making when the classifiers are trained by focusing on different regions of the input space.
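    The weighting step described above can be sketched as a small convex program: find nonnegative weights, summing to one, that minimize the squared error of the combined classifier outputs against the labels of the query's local neighbourhood. The classifier outputs and neighbour labels below are hypothetical, and SLSQP stands in for a dedicated QP solver.

    ```python
    # Sketch of local-accuracy classifier weighting as a convex quadratic program.
    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical soft outputs of 3 classifiers on the query's 5 nearest neighbours.
    P = np.array([[0.9, 0.6, 0.4],
                  [0.8, 0.5, 0.6],
                  [0.2, 0.4, 0.7],
                  [0.1, 0.5, 0.6],
                  [0.9, 0.7, 0.3]])
    y = np.array([1.0, 1.0, 0.0, 0.0, 1.0])   # labels of the neighbours

    obj = lambda w: np.sum((P @ w - y) ** 2)  # convex quadratic in the weights w
    res = minimize(obj, x0=np.full(3, 1 / 3), method="SLSQP",
                   bounds=[(0, 1)] * 3,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
    w = res.x   # the locally most accurate classifier receives the largest weight
    ```

    Here classifier 1's outputs track the neighbourhood labels best, so it dominates the learned weights; the combined label for the query is then the w-weighted vote of the three classifiers.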

  13. Fault identification and localization for Ethernet Passive Optical Network using L-band ASE source and various types of fiber Bragg grating

    NASA Astrophysics Data System (ADS)

    Naim, Nani Fadzlina; Bakar, A. Ashrif A.; Ab-Rahman, Mohammad Syuhaimi

    2018-01-01

    This paper presents a centralized fault identification and localization technique for an Ethernet Passive Optical Network. This technique employs an L-band Amplified Spontaneous Emission (ASE) source for monitoring and various fiber Bragg gratings (FBGs) as fiber identifiers. An FBG with a unique combination of Bragg wavelength, reflectivity and bandwidth is inserted at each distribution fiber. The FBG reflection spectrum is analyzed using an optical spectrum analyzer (OSA) to monitor the condition of the distribution fiber. Distinct FBG reflection spectra are employed to optimize the limited bandwidth of the monitoring source, thus allowing more fibers to be monitored. Basically, one Bragg wavelength is shared by two distinct FBGs with different reflectivity and bandwidth. The experimental result shows that the system is capable of monitoring up to 32 customers with an OSNR value of ∼1.2 dB and a received monitoring power of -24 dBm. This centralized and simple monitoring technique demonstrates a low-power, cost-efficient and low-bandwidth system.

  14. Mapping Epileptic Activity: Sources or Networks for the Clinicians?

    PubMed Central

    Pittau, Francesca; Mégevand, Pierre; Sheybani, Laurent; Abela, Eugenio; Grouiller, Frédéric; Spinelli, Laurent; Michel, Christoph M.; Seeck, Margitta; Vulliemoz, Serge

    2014-01-01

    Epileptic seizures of focal origin are classically considered to arise from a focal epileptogenic zone and then spread to other brain regions. This is a key concept for semiological electro-clinical correlations, localization of relevant structural lesions, and selection of patients for epilepsy surgery. Recent developments in neuro-imaging and electro-physiology, and combinations thereof, have been validated as contributory tools for focus localization. In parallel, these techniques have revealed that widespread networks of brain regions, rather than a single epileptogenic region, are implicated in focal epileptic activity. Sophisticated multimodal imaging and analysis strategies of brain connectivity patterns have been developed to characterize the spatio-temporal relationships within these networks by combining the strengths of both techniques to optimize spatial and temporal resolution with whole-brain coverage and directional connectivity. In this paper, we review the potential clinical contribution of these functional mapping techniques as well as invasive electrophysiology in human beings and animal models for characterizing network connectivity. PMID:25414692

  15. Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?

    NASA Technical Reports Server (NTRS)

    Lum, Karen; Hihn, Jairus; Menzies, Tim

    2006-01-01

    While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models because of the large variance problem inherent in cost data and because they include far more effort multipliers than the data supports. Building optimal models requires that a wider range of models be considered, while correctly calibrating these models requires rejection rules that prune variables and records and use multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem, a leading cause of cost model brittleness or instability.

  16. Stereotactic Body Radiotherapy in the Management of Oligometastatic Disease.

    PubMed

    Ahmed, Kamran A; Torres-Roca, Javier F

    2016-01-01

    The treatment of oligometastatic disease has become common as imaging techniques have advanced and the management of systemic disease has improved. Use of highly targeted, hypofractionated regimens of stereotactic body radiotherapy (SBRT) is now a primary management option for patients with oligometastatic disease. The properties of SBRT are summarized and the results of retrospective and prospective studies of SBRT use in the management of oligometastases are reviewed. Future directions of SBRT, including optimizing dose and fractionation schedules, are also discussed. SBRT can deliver highly conformal, ablative radiation doses to tumors in a few treatment sessions. Phase 1/2 trials and retrospective institutional results support use of SBRT as a treatment option for oligometastatic disease involving the lung, liver, and spine, and SBRT offers adequate toxicity profiles with good rates of local control. Future directions will involve optimizing dose and fractionation schedules for select histologies to improve rates of local control while limiting toxicity to normal structures. SBRT offers an excellent management option for patients with oligometastases. However, additional research is still needed to optimize dose and fractionation schedules.

  17. State-selective optimization of local excited electronic states in extended systems

    NASA Astrophysics Data System (ADS)

    Kovyrshin, Arseny; Neugebauer, Johannes

    2010-11-01

    Standard implementations of time-dependent density-functional theory (TDDFT) for the calculation of excitation energies give access to a number of the lowest-lying electronic excitations of a molecule under study. For extended systems, this can become cumbersome if a particular excited state is sought after because many electronic transitions may be present. This often means that even for systems of moderate size, a multitude of excited states needs to be calculated to cover a certain energy range. Here, we present an algorithm for the selective determination of predefined excited electronic states in an extended system. A guess transition density in terms of orbital transitions has to be provided for the excitation that shall be optimized. The approach employs root-homing techniques together with iterative subspace diagonalization methods to optimize the electronic transition. We illustrate the advantages of this method for solvated molecules, core-excitations of metal complexes, and adsorbates at cluster surfaces. In particular, we study the local π→π* excitation of a pyridine molecule adsorbed at a silver cluster. It is shown that the method works very efficiently even for high-lying excited states. We demonstrate that the assumption of a single, well-defined local excitation is, in general, not justified for extended systems, which can lead to root-switching during optimization. In those cases, the method can give important information about the spectral distribution of the orbital transition employed as a guess.

  18. Hierarchical Poly Tree Configurations for the Solution of Dynamically Refined Finite Element Models

    NASA Technical Reports Server (NTRS)

    Gute, G. D.; Padovan, J.

    1993-01-01

    This paper demonstrates how a multilevel substructuring technique, called the Hierarchical Poly Tree (HPT), can be used to integrate a localized mesh refinement into the original finite element model more efficiently. The optimal HPT configurations for solving isoparametrically square h-, p-, and hp-extensions on single and multiprocessor computers are derived. In addition, the reduced number of stiffness matrix elements that must be stored when employing this type of solution strategy is quantified. Moreover, the HPT inherently provides localized 'error-trapping' and a logical, efficient means with which to isolate physically anomalous and analytically singular behavior.

  19. Local neighborhood transition probability estimation and its use in contextual classification

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    The problem of incorporating spatial or contextual information into classifications is considered. A simple model that describes the spatial dependencies between neighboring pixels with a single parameter, Theta, is presented. Expressions are derived for updating the a posteriori probabilities of the states of nature of the pattern under consideration using information from the neighboring patterns, both for spatially uniform context and for Markov dependencies, in terms of Theta. Techniques for obtaining the optimal value of the parameter Theta as a maximum likelihood estimate from the local neighborhood of the pattern under consideration are developed.
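    One simple way to picture such a contextual update (an illustration of the idea, not Chittineni's exact expressions) is to reweight each pixel's class posteriors by the mean posterior of its 4-neighbourhood, with a single parameter theta controlling how strongly context is trusted.

    ```python
    # Illustrative single-parameter contextual posterior update on a posterior map.
    import numpy as np

    def contextual_update(post, theta):
        """post: (H, W, K) per-pixel class posteriors; theta in [0, 1]."""
        padded = np.pad(post, ((1, 1), (1, 1), (0, 0)), mode="edge")
        neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0   # mean over 4-neighbours
        updated = post * (1 - theta) + post * neigh * theta    # context-weighted evidence
        return updated / updated.sum(axis=2, keepdims=True)    # renormalize per pixel

    # A pixel that disagrees with its uniform surroundings is pulled toward them.
    post = np.tile([0.8, 0.2], (4, 4, 1)).astype(float)
    post[1, 1] = [0.2, 0.8]
    out = contextual_update(post, theta=0.8)
    ```

    With theta = 0 the update reduces to the purely per-pixel classification; increasing theta strengthens the spatial smoothing, which is the trade-off the maximum likelihood estimate of Theta resolves in the paper.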

  20. Beyond Low-Rank Representations: Orthogonal clustering basis reconstruction with optimized graph structure for multi-view spectral clustering.

    PubMed

    Wang, Yang; Wu, Lin

    2018-07-01

    Low-Rank Representation (LRR) is arguably one of the most powerful paradigms for Multi-view spectral clustering, which elegantly encodes the multi-view local graph/manifold structures into an intrinsic low-rank self-expressive data similarity embedded in high-dimensional space, to yield a better graph partition than their single-view counterparts. In this paper we revisit it with a fundamentally different perspective by discovering LRR as essentially a latent clustered orthogonal projection based representation winged with an optimized local graph structure for spectral clustering; each column of the representation is fundamentally a cluster basis orthogonal to others to indicate its members, which intuitively projects the view-specific feature representation to be the one spanned by all orthogonal basis to characterize the cluster structures. Upon this finding, we propose our technique with the following: (1) We decompose LRR into latent clustered orthogonal representation via low-rank matrix factorization, to encode the more flexible cluster structures than LRR over primal data objects; (2) We convert the problem of LRR into that of simultaneously learning orthogonal clustered representation and optimized local graph structure for each view; (3) The learned orthogonal clustered representations and local graph structures enjoy the same magnitude for multi-view, so that the ideal multi-view consensus can be readily achieved. The experiments over multi-view datasets validate its superiority, especially over recent state-of-the-art LRR models. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Firefly as a novel swarm intelligence variable selection method in spectroscopy.

    PubMed

    Goodarzi, Mohammad; dos Santos Coelho, Leandro

    2014-12-10

    A critical step in multivariate calibration is wavelength selection, which is used to build models with better prediction performance when applied to spectral data. Up to now, many feature selection techniques have been developed. Among the different types of feature selection techniques, those based on swarm intelligence optimization methodologies are particularly interesting, since they simulate animal and insect behavior to, e.g., find the shortest path between a food source and the nest. Decisions are made collectively, leading to a more robust model that is less prone to falling into local minima during the optimization cycle. This paper presents a novel feature selection approach for spectroscopic data, leading to more robust calibration models. The performance of the firefly algorithm, a swarm intelligence paradigm, was evaluated and compared with genetic algorithm and particle swarm optimization. All three techniques were coupled with partial least squares (PLS) and applied to three spectroscopic data sets. They demonstrate improved prediction results in comparison to a PLS model built using all wavelengths. Results show that the firefly algorithm, as a novel swarm paradigm, leads to a lower number of selected wavelengths while the prediction performance of the built PLS model stays the same. Copyright © 2014. Published by Elsevier B.V.
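    The firefly paradigm itself can be sketched in its basic continuous form: each firefly moves toward brighter (better) ones, with an attractiveness that decays with distance and a damped random walk. A simple sphere function stands in for the wavelength-selection fitness, and all parameter values are illustrative.

    ```python
    # Minimal continuous firefly algorithm (minimization).
    import numpy as np

    rng = np.random.default_rng(1)

    def firefly(obj, dim=2, n=15, iters=100, beta0=1.0, gamma=1.0, alpha=0.2):
        X = rng.uniform(-2, 2, (n, dim))
        for _ in range(iters):
            f = np.array([obj(x) for x in X])
            for i in range(n):
                for j in range(n):
                    if f[j] < f[i]:                      # j is brighter: i moves toward j
                        r2 = np.sum((X[i] - X[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)   # attractiveness decays with distance
                        X[i] += beta * (X[j] - X[i]) + alpha * (0.5 - rng.random(dim))
                        f[i] = obj(X[i])
            alpha *= 0.97                                # damp the random walk over time
        return X[np.argmin([obj(x) for x in X])]

    best = firefly(lambda x: np.sum(x ** 2))
    ```

    For wavelength selection, the position vector is typically binarized (one bit per wavelength) and the objective becomes the cross-validated error of a PLS model built on the selected wavelengths.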

  2. Fast global image smoothing based on weighted least squares.

    PubMed

    Min, Dongbo; Choi, Sunghwan; Lu, Jiangbo; Ham, Bumsub; Sohn, Kwanghoon; Do, Minh N

    2014-12-01

    This paper presents an efficient technique for performing spatially inhomogeneous edge-preserving image smoothing, called the fast global smoother. Focusing on sparse Laplacian matrices consisting of a data term and a prior term (typically defined using four or eight neighbors for a 2D image), our approach efficiently solves such global objective functions. In particular, we approximate the solution of the memory- and computation-intensive large linear system, defined over a d-dimensional spatial domain, by solving a sequence of 1D subsystems. Our separable implementation enables applying a linear-time tridiagonal matrix algorithm to solve d three-point Laplacian matrices iteratively. Our approach combines the best of two paradigms, i.e., efficient edge-preserving filters and optimization-based smoothing. Our method has a runtime comparable to fast edge-preserving filters, but its global optimization formulation overcomes many limitations of local filtering approaches. Our method also achieves results whose quality matches state-of-the-art optimization-based techniques, but runs ∼10-30 times faster. Moreover, given the flexibility in defining an objective function, we further propose generalized fast algorithms that perform Lγ norm smoothing (0 < γ < 2) and support an aggregated (robust) data term for handling imprecise data constraints. We demonstrate the effectiveness and efficiency of our techniques in a range of image processing and computer graphics applications.
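    The 1D building block described above — a weighted-least-squares pass solved in linear time by a tridiagonal (Thomas) solver — can be sketched as follows. The weights and λ value are hypothetical; the actual method alternates such passes over image rows and columns with a λ schedule:

```python
import numpy as np

def wls_smooth_1d(f, w, lam):
    """One 1D weighted-least-squares smoothing pass: solves
    (I + lam*A) u = f, where A is the weighted 1D Laplacian, using the
    linear-time Thomas algorithm. w[i] is the smoothness weight between
    samples i and i+1 (small across edges so they are preserved)."""
    n = len(f)
    lower = np.zeros(n); diag = np.ones(n); upper = np.zeros(n)
    for i in range(n - 1):          # assemble the tridiagonal system
        upper[i] = -lam * w[i]
        lower[i + 1] = -lam * w[i]
        diag[i] += lam * w[i]
        diag[i + 1] += lam * w[i]
    # forward elimination
    c = upper.copy()
    d = np.asarray(f, dtype=float).copy()
    c[0] /= diag[0]; d[0] /= diag[0]
    for i in range(1, n):
        m = diag[i] - lower[i] * c[i - 1]
        c[i] = upper[i] / m
        d[i] = (d[i] - lower[i] * d[i - 1]) / m
    # back substitution
    u = d.copy()
    for i in range(n - 2, -1, -1):
        u[i] -= c[i] * u[i + 1]
    return u

# a noisy step: a tiny weight across the step keeps the edge sharp
f = np.array([0.0, 0.1, -0.1, 0.0, 1.0, 0.9, 1.1, 1.0])
w_edge = np.ones(7)
w_edge[3] = 1e-4                     # almost no smoothing across the step
u = wls_smooth_1d(f, w_edge, lam=5.0)
```

    Each side of the step is flattened toward its mean while the discontinuity between samples 3 and 4 survives, which is the edge-preserving behavior the global 2D solver inherits from its 1D subsystems.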

  3. Scanning laser ophthalmoscopy: optimized testing strategies for psychophysics

    NASA Astrophysics Data System (ADS)

    Van de Velde, Frans J.

    1996-12-01

    Retinal function can be evaluated with the scanning laser ophthalmoscope (SLO). The main advantage is the precise localization of the psychophysical stimulus on the retina. Four-alternative forced choice (4AFC) and parameter estimation by sequential testing (PEST) are classic adaptive algorithms that have been optimized for use with the SLO and combined with strategies to correct for small eye movements. Efficient calibration procedures are essential for quantitative microperimetry. These techniques precisely measure visual acuity and retinal sensitivity at distinct locations on the retina. A combined 632 nm and IR Maxwellian view illumination provides maximal transmittance through the ocular media and has minimal interference with xanthophyll or hemoglobin. Future modifications of the instrument include the possibility of binocular evaluation, Maxwellian view control, fundus tracking using normalized gray-scale correlation, and microphotocoagulation. The techniques are useful in low vision rehabilitation and in applying laser treatment to the retina.

  4. Switching neuronal state: optimal stimuli revealed using a stochastically-seeded gradient algorithm.

    PubMed

    Chang, Joshua; Paydarfar, David

    2014-12-01

    Inducing a switch in neuronal state using energy-optimal stimuli is relevant to a variety of problems in neuroscience. Analytical techniques from optimal control theory can identify such stimuli; however, solutions to the optimization problem using indirect variational approaches can be elusive in models that describe neuronal behavior. Here we develop and apply a direct gradient-based optimization algorithm to find stimulus waveforms that elicit a change in neuronal state while minimizing energy usage. We analyze standard models of neuronal behavior, the Hodgkin-Huxley and FitzHugh-Nagumo models, to show that the gradient-based algorithm: (1) enables automated exploration of a wide solution space, using stochastically generated initial waveforms that converge to multiple locally optimal solutions; and (2) finds optimal stimulus waveforms that achieve a physiological outcome condition, without a priori knowledge of the optimal terminal condition of all state variables. Analysis of biological systems using stochastically-seeded gradient methods can reveal salient dynamical mechanisms underlying the optimal control of system behavior. The gradient algorithm may also have practical applications in future work, for example, finding energy-optimal waveforms for therapeutic neural stimulation that minimize power usage and diminish off-target effects and damage to neighboring tissue.
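    The stochastic-seeding idea — running gradient descent from many random starting points and collecting the distinct local optima they reach — can be sketched on a toy scalar function. This is only an illustration of the multi-start strategy; the paper optimizes stimulus waveforms under neuron models, not this double-well function:

```python
import numpy as np

def multistart_descent(grad, seeds, lr=0.05, steps=400):
    """Run plain gradient descent from each seed and collect the distinct
    local minima the runs converge to (deduplicated within a tolerance)."""
    minima = []
    for x in seeds:
        for _ in range(steps):
            x = x - lr * grad(x)             # gradient descent step
        if not any(abs(x - m) < 1e-3 for m in minima):
            minima.append(x)                 # a newly discovered optimum
    return sorted(minima)

# double-well f(x) = (x^2 - 1)^2 has two local minima, at x = -1 and x = +1
grad = lambda x: 4.0 * x * (x ** 2 - 1.0)
rng = np.random.default_rng(1)
minima = multistart_descent(grad, rng.uniform(-2.0, 2.0, 20))
```

    Different seeds fall into different basins of attraction, so the stochastic seeding exposes both optima rather than just one — the same mechanism that lets the paper's algorithm map multiple locally optimal stimulus waveforms.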

  5. Reconstructing the Sky Location of Gravitational-Wave Detected Compact Binary Systems: Methodology for Testing and Comparison

    NASA Technical Reports Server (NTRS)

    Sidery, T.; Aylott, B.; Christensen, N.; Farr, B.; Farr, W.; Feroz, F.; Gair, J.; Grover, K.; Graff, P.; Hanna, C.; et al.

    2014-01-01

    The problem of reconstructing the sky position of compact binary coalescences detected via gravitational waves is a central one for future observations with the ground-based network of gravitational-wave laser interferometers, such as Advanced LIGO and Advanced Virgo. Different techniques for sky localization have been independently developed. They can be divided into two broad categories: fully coherent Bayesian techniques, which are high latency and aimed at in-depth studies of all the parameters of a source, including sky position, and "triangulation-based" techniques, which exploit the data products from the search stage of the analysis to provide an almost real-time approximation of the posterior probability density function of the sky location of a detection candidate. These techniques have previously been applied to data collected during the last science runs of gravitational-wave detectors operating in the so-called initial configuration. Here, we develop and analyze methods for assessing the self consistency of parameter estimation methods and carrying out fair comparisons between different algorithms, addressing issues of efficiency and optimality. These methods are general, and can be applied to parameter estimation problems other than sky localization. We apply these methods to two existing sky localization techniques representing the two above-mentioned categories, using a set of simulated inspiral-only signals from compact binary systems with a total mass of ≤20 M⊙ and nonspinning components. We compare the relative advantages and costs of the two techniques and show that sky location uncertainties are on average a factor of ≈20 smaller for fully coherent techniques than for the specific variant of the triangulation-based technique used during the last science runs, at the expense of a factor of ≈1000 longer processing time.

  6. Reconstructing the sky location of gravitational-wave detected compact binary systems: Methodology for testing and comparison

    NASA Astrophysics Data System (ADS)

    Sidery, T.; Aylott, B.; Christensen, N.; Farr, B.; Farr, W.; Feroz, F.; Gair, J.; Grover, K.; Graff, P.; Hanna, C.; Kalogera, V.; Mandel, I.; O'Shaughnessy, R.; Pitkin, M.; Price, L.; Raymond, V.; Röver, C.; Singer, L.; van der Sluys, M.; Smith, R. J. E.; Vecchio, A.; Veitch, J.; Vitale, S.

    2014-04-01

    The problem of reconstructing the sky position of compact binary coalescences detected via gravitational waves is a central one for future observations with the ground-based network of gravitational-wave laser interferometers, such as Advanced LIGO and Advanced Virgo. Different techniques for sky localization have been independently developed. They can be divided into two broad categories: fully coherent Bayesian techniques, which are high latency and aimed at in-depth studies of all the parameters of a source, including sky position, and "triangulation-based" techniques, which exploit the data products from the search stage of the analysis to provide an almost real-time approximation of the posterior probability density function of the sky location of a detection candidate. These techniques have previously been applied to data collected during the last science runs of gravitational-wave detectors operating in the so-called initial configuration. Here, we develop and analyze methods for assessing the self consistency of parameter estimation methods and carrying out fair comparisons between different algorithms, addressing issues of efficiency and optimality. These methods are general, and can be applied to parameter estimation problems other than sky localization. We apply these methods to two existing sky localization techniques representing the two above-mentioned categories, using a set of simulated inspiral-only signals from compact binary systems with a total mass of ≤20M⊙ and nonspinning components. We compare the relative advantages and costs of the two techniques and show that sky location uncertainties are on average a factor ≈20 smaller for fully coherent techniques than for the specific variant of the triangulation-based technique used during the last science runs, at the expense of a factor ≈1000 longer processing time.
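    The self-consistency assessment described in these two records — checking that X% credible regions contain the true parameter X% of the time, i.e. that a P–P plot lies on the diagonal — can be sketched for a toy 1D Gaussian problem. Everything below (prior, noise level, sample counts) is an illustrative assumption, not the papers' Bayesian pipeline:

```python
import numpy as np

def credible_level_of_truth(posterior_samples, truth):
    """Credible level at which the true value sits: for a 1D unimodal
    posterior, the fraction of samples closer to the posterior mean than
    the truth is. For a self-consistent analysis this is Uniform(0, 1)."""
    mean = posterior_samples.mean()
    return np.mean(np.abs(posterior_samples - mean) < abs(truth - mean))

# Simulate many 'events': truth drawn from the prior N(0,1), measurement
# noise N(0, 0.5^2), and exact conjugate-Gaussian posterior samples.
rng = np.random.default_rng(2)
levels = []
for _ in range(500):
    truth = rng.normal(0.0, 1.0)
    measurement = truth + rng.normal(0.0, 0.5)
    var_post = 1.0 / (1.0 / 1.0 + 1.0 / 0.25)     # conjugate update
    mean_post = var_post * measurement / 0.25
    samples = rng.normal(mean_post, np.sqrt(var_post), 2000)
    levels.append(credible_level_of_truth(samples, truth))
levels = np.array(levels)
```

    Because the simulated posterior is exactly calibrated, the collected credible levels are uniform; a biased or overconfident sky localization code would show up as a systematic departure of this histogram from uniformity.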

  7. Semi-rigid single hook localization the best method for localizing ground glass opacities during video-assisted thoracoscopic surgery: re-aerated swine lung experimental and primary clinical results

    PubMed Central

    Zhao, Guang; Sun, Long; Geng, Guojun; Liu, Hongming; Li, Ning; Liu, Suhuan; Hao, Bing

    2017-01-01

    Background The aim of this study was to compare the effects of currently available preoperative localization methods, including semi-rigid single hook-wire, double-thorn hook-wire, and microcoil, in localizing pulmonary nodules, and thus to select the best technique to assist video-assisted thoracoscopic surgery (VATS) for small ground glass opacities (GGO). Methods Preoperative CT-guided localization techniques, including semi-rigid single hook-wire, double-thorn hook-wire and microcoil, were tested in re-aerated fresh swine lung in location experiments. The advantages and drawbacks of the three positioning technologies were compared, and the optimal technique was then used in patients with GGO. Technical success and post-operative complications were used as primary endpoints. Results All three localizing techniques were successfully performed in the re-aerated fresh swine lung. The median tractive forces of the semi-rigid single hook-wire, double-thorn hook-wire and microcoil, as measured by a spring dynamometer, were 6.5, 4.85 and 0.2 N, respectively. The wound sizes in the superficial pleura caused by unplugging the needles were 2 mm for the double-thorn hook-wire, 1 mm for the semi-rigid single hook-wire and 1 mm for the microcoil. In patients with GGOs, semi-rigid single hook-wire localizations were successfully performed without any complication requiring intervention. Dislodgement was reported in one patient before VATS. No major complications related to the preoperative hook-wire localization and VATS were observed. Conclusions Our localization experiments in the swine lung showed that, among the three commonly used localization methods, the semi-rigid single hook-wire offered better operability and practicability than the double-thorn hook-wire and microcoil. Preoperative localization of small pulmonary nodules with the semi-rigid single hook-wire system shows a high success rate, acceptable utility and an especially low dislodgement rate in VATS. PMID:29312722

  8. Estimating of aquifer parameters from the single-well water-level measurements in response to advancing longwall mine by using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Buyuk, Ersin; Karaman, Abdullah

    2017-04-01

    We estimated transmissivity and storage coefficient values from single-well water-level measurements positioned ahead of the mining face by using the particle swarm optimization (PSO) technique. The water-level response to the advancing mining face is described by a semi-analytical function that is not suitable for conventional inversion schemes because its partial derivatives are difficult to calculate. Moreover, the logarithmic behaviour of the model makes it difficult to obtain an initial model that leads to stable convergence. PSO appears to yield a reliable solution that produces a reasonable fit between the water-level data and the model response. Optimization methods are used to find optimum conditions, namely the minimum or maximum of a given objective function with respect to some criteria. Unlike PSO, traditional non-linear optimization methods have been applied to many hydrogeologic and geophysical engineering problems; they suffer from difficulties such as dependence on the initial model, evaluation of the partial derivatives required to linearize the model, and trapping at local optima. Recently, PSO has become a prominent global optimization method; inspired by the social behaviour of bird flocks, it appears to be a reliable and powerful algorithm for complex engineering applications. Because PSO does not depend on an initial model and is a derivative-free stochastic process, it is capable of searching all possible solutions in the model space around either local or global optimum points.
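    The derivative-free, initial-model-free search described above can be sketched with a minimal PSO loop. This is a generic toy (minimizing a sphere function with assumed inertia and acceleration coefficients), not the study's semi-analytical drawdown inversion:

```python
import numpy as np

def pso_minimize(f, dim, n=20, iters=150, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Minimal particle swarm optimization sketch: particles are pulled
    toward their personal best and the global best; no derivatives or
    starting model are required."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.0, 5.0, (n, dim))      # positions
    V = np.zeros((n, dim))                    # velocities
    P = X.copy()                              # personal bests
    pbest = np.array([f(x) for x in X])
    g = P[np.argmin(pbest)].copy()            # global best
    for _ in range(iters):
        r1 = rng.random((n, dim))
        r2 = rng.random((n, dim))
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
        X = X + V
        fx = np.array([f(x) for x in X])
        better = fx < pbest
        P[better] = X[better]
        pbest[better] = fx[better]
        g = P[np.argmin(pbest)].copy()
    return g, float(pbest.min())

best_x, best_f = pso_minimize(lambda x: np.sum(x ** 2), dim=2)
```

    In the aquifer-parameter setting, the two coordinates would be transmissivity and storage coefficient, and f would be the misfit between the observed water levels and the semi-analytical model response.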

  9. Stress-Constrained Structural Topology Optimization with Design-Dependent Loads

    NASA Astrophysics Data System (ADS)

    Lee, Edmund

    Topology optimization is commonly used to distribute a given amount of material to obtain the stiffest structure under predefined fixed loads. The present work investigates the effect of applying stress constraints to topology optimization for problems with design-dependent loading, such as self-weight and pressure. In order to apply pressure loading, a material boundary identification scheme is proposed that iteratively connects points of equal density. In previous research, design-dependent loading problems have been limited to compliance minimization. The present study employs a more practical approach by minimizing mass subject to failure constraints, and uses a stress relaxation technique to avoid stress constraint singularities. The results show that these design-dependent loading problems may converge to a local minimum when stress constraints are enforced. Comparisons between compliance-minimization solutions and stress-constrained solutions are also given. The resulting topologies of the two formulations are usually vastly different, demonstrating the need for stress-constrained topology optimization.

  10. Solving Energy-Aware Real-Time Tasks Scheduling Problem with Shuffled Frog Leaping Algorithm on Heterogeneous Platforms

    PubMed Central

    Zhang, Weizhe; Bai, Enci; He, Hui; Cheng, Albert M.K.

    2015-01-01

    Reducing energy consumption is becoming very important for extending battery life and lowering overall operational costs in heterogeneous real-time multiprocessor systems. In this paper, we first formulate this as a combinatorial optimization problem. Then, a successful meta-heuristic, the Shuffled Frog Leaping Algorithm (SFLA), is proposed to reduce the energy consumption. Precocity-remission and local-optimum-avoidance techniques are proposed to avoid precocity and improve solution quality. Convergence acceleration significantly reduces the search time. Experimental results show that the SFLA-based energy-aware meta-heuristic uses 30% less energy than the Ant Colony Optimization (ACO) algorithm, and 60% less energy than the Genetic Algorithm (GA). Remarkably, the running time of the SFLA-based meta-heuristic is 20 and 200 times less than that of ACO and GA, respectively, for finding the optimal solution. PMID:26110406

  11. An opinion formation based binary optimization approach for feature selection

    NASA Astrophysics Data System (ADS)

    Hamedmoghadam, Homayoun; Jalili, Mahdi; Yu, Xinghuo

    2018-02-01

    This paper proposes a novel optimization method based on opinion formation in complex network systems. The proposed optimization technique mimics human-human interaction based on a mathematical model derived from the social sciences. Our method encodes a subset of selected features as the opinion of an artificial agent and simulates the opinion formation process among a population of agents to solve the feature selection problem. The agents interact over an underlying interaction network structure and reach consensus in their opinions while finding better solutions to the problem. A number of mechanisms are employed to avoid getting trapped in local minima. We compare the performance of the proposed method with a number of classical population-based optimization methods and a state-of-the-art opinion formation based method. Our experiments on a number of high-dimensional datasets show that the proposed algorithm outperforms the others.

  12. Global carbon assimilation system using a local ensemble Kalman filter with multiple ecosystem models

    NASA Astrophysics Data System (ADS)

    Zhang, Shupeng; Yi, Xue; Zheng, Xiaogu; Chen, Zhuoqi; Dan, Bo; Zhang, Xuanze

    2014-11-01

    In this paper, a global carbon assimilation system (GCAS) is developed for optimizing the global land surface carbon flux at 1° resolution using multiple ecosystem models. In GCAS, three ecosystem models, Boreal Ecosystem Productivity Simulator, Carnegie-Ames-Stanford Approach, and Community Atmosphere Biosphere Land Exchange, produce the prior fluxes, and an atmospheric transport model, Model for OZone And Related chemical Tracers, is used to calculate the atmospheric CO2 concentrations resulting from these prior fluxes. A local ensemble Kalman filter is developed to assimilate atmospheric CO2 data observed at 92 stations to optimize the carbon flux for six land regions, and the Bayesian model averaging method is implemented in GCAS to calculate the weighted average of the optimized fluxes based on individual ecosystem models. The weights for the models are found according to the closeness of their forecasted CO2 concentrations to observations. Results of this study show that the model weights vary in time and space, allowing for an optimum utilization of the different strengths of the different ecosystem models. It is also demonstrated that spatial localization is an effective technique to avoid spurious optimization results for regions that are not well constrained by the atmospheric data. Based on the multimodel optimized flux from GCAS, we found that the average global terrestrial carbon sink over the 2002-2008 period is 2.97 ± 1.1 PgC yr-1, and the sinks are 0.88 ± 0.52, 0.27 ± 0.33, 0.67 ± 0.39, 0.90 ± 0.68, 0.21 ± 0.31, and 0.04 ± 0.08 PgC yr-1 for North America, South America, Africa, Eurasia, Tropical Asia, and Australia, respectively. This multimodel GCAS can be used to improve global carbon cycle estimation.
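    The core analysis step of an ensemble Kalman filter — nudging a prior ensemble toward an observation in proportion to the ensemble-estimated covariances — can be sketched in a few lines. This is a hedged toy (scalar state, direct observation, made-up numbers); GCAS additionally applies spatial localization and multi-model averaging on top of such updates:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_sd, H, rng):
    """Stochastic ensemble Kalman filter analysis step. `ensemble` is
    (n_members, n_state); H maps state space to observation space."""
    n = ensemble.shape[0]
    Xp = ensemble - ensemble.mean(axis=0)            # state anomalies
    Y = ensemble @ H.T                               # predicted observations
    Yp = Y - Y.mean(axis=0)
    Pyy = Yp.T @ Yp / (n - 1) + (obs_err_sd ** 2) * np.eye(len(obs))
    Pxy = Xp.T @ Yp / (n - 1)
    K = Pxy @ np.linalg.inv(Pyy)                     # Kalman gain
    perturbed_obs = obs + rng.normal(0.0, obs_err_sd, (n, len(obs)))
    return ensemble + (perturbed_obs - Y) @ K.T

rng = np.random.default_rng(4)
prior = rng.normal(2.0, 1.0, (100, 1))   # prior 'flux' ensemble, mean ~2
H = np.eye(1)                            # observe the state directly
posterior = enkf_update(prior, np.array([0.0]), 0.5, H, rng)
```

    The analysis pulls the ensemble mean toward the observation and shrinks the ensemble spread, exactly the behavior the assimilation system relies on when constraining regional fluxes with station CO2 data.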

  13. Discriminant locality preserving projections based on L1-norm maximization.

    PubMed

    Zhong, Fujin; Zhang, Jiashu; Li, Defang

    2014-11-01

    Conventional discriminant locality preserving projection (DLPP) is a dimensionality reduction technique based on manifold learning, which has demonstrated good performance in pattern recognition. However, because its objective function is based on a distance criterion using the L2-norm, conventional DLPP is not robust to the outliers present in many applications. This paper proposes an effective and robust DLPP version based on L1-norm maximization, which learns a set of locally optimal projection vectors by maximizing the ratio of the L1-norm-based locality-preserving between-class dispersion to the L1-norm-based locality-preserving within-class dispersion. The proposed method is proven to be feasible and robust to outliers while overcoming the small sample size problem. Experimental results on artificial datasets, the Binary Alphadigits dataset, the FERET face dataset and the PolyU palmprint dataset demonstrate the effectiveness of the proposed method.
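    The greedy sign-flipping iteration commonly used for L1-norm projection learning (in the spirit of PCA-L1) illustrates why the L1 criterion resists outliers. This is a sketch of that core iteration for a single unsupervised direction, not the paper's full between/within-class ratio solver:

```python
import numpy as np

def l1_principal_direction(X, iters=100, seed=5):
    """Find a unit vector w maximizing sum_i |w . x_i| by the classic
    sign-flipping fixed-point iteration: flip signs so every projection
    is non-negative, then point w at the signed sample mean."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        s = np.sign(X @ w)                  # current projection signs
        s[s == 0] = 1.0
        w_new = (s[:, None] * X).sum(axis=0)
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):           # fixed point reached
            break
        w = w_new
    return w

# data stretched along the x-axis, plus one large outlier on the y-axis
X = np.vstack([np.linspace(-3.0, 3.0, 20)[:, None] * [1.0, 0.0],
               [[0.0, 4.0]]])
w = l1_principal_direction(X)
```

    Because each sample contributes |w·x| rather than (w·x)², the single outlier cannot dominate the objective, and the learned direction stays close to the dominant x-axis.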

  14. Co-optimization of lithographic and patterning processes for improved EPE performance

    NASA Astrophysics Data System (ADS)

    Maslow, Mark J.; Timoshkov, Vadim; Kiers, Ton; Jee, Tae Kwon; de Loijer, Peter; Morikita, Shinya; Demand, Marc; Metz, Andrew W.; Okada, Soichiro; Kumar, Kaushik A.; Biesemans, Serge; Yaegashi, Hidetami; Di Lorenzo, Paolo; Bekaert, Joost P.; Mao, Ming; Beral, Christophe; Larivière, Stephane

    2017-03-01

    Complementary lithography is already being used for advanced logic patterns. The tight pitches of 1D metal layers are expected to be created using spacer-based multiple-patterning ArF-i exposures, while the more complex cut/block patterns are made using EUV exposures. At the same time, control requirements for CDU, pattern shift and pitch-walk are approaching sub-nanometer levels to meet edge placement error (EPE) requirements. Local variability, such as Line Edge Roughness (LER), local CDU, and Local Placement Error (LPE), is a dominant factor in the total EPE budget. In the lithography process, improving the imaging contrast when printing the core pattern has been shown to improve the local variability. In the etch process, it has been shown that the fusion of atomic-level etching and deposition can also improve these local variations. Co-optimization of lithography and etch processing is expected to further improve performance over individual optimizations alone. To meet the scaling requirements and keep process complexity to a minimum, EUV is increasingly seen as the platform for delivering the exposures for both the grating and the cut/block patterns beyond N7. In this work, we evaluated the overlay and pattern fidelity of an EUV block printed in a negative-tone resist on an ArF-i SAQP grating. High-order overlay modeling and corrections during the exposure can reduce overlay error after development, a significant component of the total EPE. During etch, additional degrees of freedom are available to improve the pattern placement error in single-layer processes. Process control of advanced-pitch nanoscale multi-patterning techniques as described above is exceedingly complicated in a high-volume manufacturing environment. Incorporating potential patterning optimizations into both design and HVM controls for the lithography process is expected to bring a combined benefit over individual optimizations. In this work we show the EPE performance improvement for a 32 nm pitch SAQP + block patterned Metal 2 layer by co-optimizing the lithography and etch processes. Recommendations for further improvements and alternative processes are given.

  15. Analyzing speckle contrast for HiLo microscopy optimization.

    PubMed

    Mazzaferri, J; Kunik, D; Belisle, J M; Singh, K; Lefrançois, S; Costantino, S

    2011-07-18

    HiLo microscopy is a recently developed technique that provides both optical sectioning and fast imaging with a simple implementation and at a very low cost. The methodology combines widefield and speckled illumination images to obtain one optically sectioned image. Hence, the characteristics of such speckle illumination ultimately determine the quality of HiLo images and the overall performance of the method. In this work, we study how speckle contrast influences local variations of fluorescence intensity and brightness profiles of thick samples. We present this article as a guide to adjust the parameters of the system for optimizing the capabilities of this novel technology.

  16. Analyzing speckle contrast for HiLo microscopy optimization

    NASA Astrophysics Data System (ADS)

    Mazzaferri, J.; Kunik, D.; Belisle, J. M.; Singh, K.; Lefrançois, S.; Costantino, S.

    2011-07-01

    HiLo microscopy is a recently developed technique that provides both optical sectioning and fast imaging with a simple implementation and at a very low cost. The methodology combines widefield and speckled illumination images to obtain one optically sectioned image. Hence, the characteristics of such speckle illumination ultimately determine the quality of HiLo images and the overall performance of the method. In this work, we study how speckle contrast influences local variations of fluorescence intensity and brightness profiles of thick samples. We present this article as a guide to adjust the parameters of the system for optimizing the capabilities of this novel technology.

  17. Optimization of Connector Position Offset for Bandwidth Enhancement of a Multimode Optical Fiber Link

    NASA Technical Reports Server (NTRS)

    Rawat, Banmali

    2000-01-01

    The multimode fiber bandwidth enhancement techniques needed to meet the Gigabit Ethernet standards for local area networks (LANs) at the Kennedy Space Center and other NASA centers are discussed. A connector with lateral-offset coupling between a single-mode launch fiber cable and the multimode fiber cable has been thoroughly investigated. An optimal connector position offset was obtained for an 8 km long optical fiber link at 1300 nm coupling a 9 micrometer diameter single-mode fiber (SMF) to a 50 micrometer diameter multimode fiber (MMF). The optimization is done in terms of bandwidth, eye-pattern, and bit-pattern measurements. The approach is simple, highly practical, and inexpensive, since no additional cost is involved in manufacturing the offset type of connector.

  18. Tele-Autonomous control involving contact. Final Report Thesis; [object localization

    NASA Technical Reports Server (NTRS)

    Shao, Lejun; Volz, Richard A.; Conway, Lynn; Walker, Michael W.

    1990-01-01

    Object localization and its application in tele-autonomous systems are studied. Two object localization algorithms are presented, together with methods for extracting several important types of object features. The first algorithm is based on line-segment to line-segment matching. Line range sensors are used to extract line-segment features from an object. The extracted features are matched to corresponding model features to compute the location of the object. The inputs of the second algorithm are not limited to line features. Feature points (point-to-point matching) and feature unit direction vectors (vector-to-vector matching) can also be used as inputs, and there is no upper limit on the number of input features. The algorithm allows the use of redundant features to find a better solution. It uses dual-number quaternions to represent the position and orientation of an object and uses the least-squares optimization method to find an optimal solution for the object's location. The advantage of this representation is that the method solves the location estimation by minimizing a single cost function associated with the sum of the orientation and position errors, and thus performs better, in both accuracy and speed, than other similar algorithms. The difficulties encountered when an operator controls a remote robot to perform manipulation tasks are also discussed. The main problems facing the operator are time delays in signal transmission and the uncertainties of the remote environment. It is then discussed how object localization techniques can be used together with other techniques, such as predictor displays and time desynchronization, to help overcome these difficulties.
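    The least-squares pose estimation from point-to-point matches can be sketched as follows. The record's algorithm uses dual-number quaternions to solve rotation and translation jointly; shown here is the widely used SVD-based (Kabsch) alternative, which minimizes the same sum of squared point errors, with a made-up model and pose:

```python
import numpy as np

def fit_rigid_transform(model_pts, sensed_pts):
    """Least-squares object localization from matched 3D point pairs:
    returns R, t minimizing sum_i ||R m_i + t - s_i||^2 via SVD of the
    cross-covariance (Kabsch), with a reflection guard."""
    cm = model_pts.mean(axis=0)
    cs = sensed_pts.mean(axis=0)
    H = (model_pts - cm).T @ (sensed_pts - cs)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid an improper rotation
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T      # rotation: model -> sensed
    t = cs - R @ cm                               # translation
    return R, t

# recover a known pose from four noiseless feature correspondences
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
model = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                  [0.0, 1.0, 1.0], [2.0, 1.0, 0.0]])
sensed = model @ R_true.T + t_true
R, t = fit_rigid_transform(model, sensed)
```

    Like the dual-quaternion formulation, this closed-form solve handles redundant feature matches naturally: extra correspondences simply add rows to the cross-covariance sum.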

  19. QoS-aware health monitoring system using cloud-based WBANs.

    PubMed

    Almashaqbeh, Ghada; Hayajneh, Thaier; Vasilakos, Athanasios V; Mohd, Bassam J

    2014-10-01

    Wireless Body Area Networks (WBANs) are amongst the best options for remote health monitoring. However, as standalone systems WBANs have many limitations due to the large amount of processed data, the mobility of monitored users, and the network coverage area. Integrating WBANs with cloud computing provides effective solutions to these problems and improves the performance of WBAN-based systems. Accordingly, in this paper we propose a cloud-based real-time remote health monitoring system for tracking the health status of non-hospitalized patients while they practice their daily activities. Compared with existing cloud-based WBAN frameworks, we divide the cloud into a local cloud, which includes the monitored users and local medical staff, and a global cloud that includes the outside world. The performance of the proposed framework is optimized by reducing congestion, interference, and data delivery delay while supporting user mobility. Several novel techniques and algorithms are proposed to accomplish our objective. First, data classification and aggregation are utilized to avoid clogging the network with unnecessary data traffic. Second, a dynamic channel assignment policy is developed to distribute the WBANs associated with the users over the available frequency channels to manage interference. Third, a delay-aware routing metric is proposed for use by the local cloud in its multi-hop communication to speed up the reporting of health-related data. Fourth, the delay-aware metric is further utilized by the association protocols used by the WBANs to connect with the local cloud. Finally, the system with all the proposed techniques and algorithms is evaluated using extensive ns-2 simulations. The simulation results show superior performance of the proposed architecture in optimizing the end-to-end delay, handling increased interference levels, maximizing the network capacity, and tracking user mobility.

  20. Local Magnetoelectric Effect in La-Doped BiFeO3 Multiferroic Thin Films Revealed by Magnetic-Field-Assisted Scanning Probe Microscopy

    NASA Astrophysics Data System (ADS)

    Pan, Dan-Feng; Zhou, Ming-Xiu; Lu, Zeng-Xing; Zhang, Hao; Liu, Jun-Ming; Wang, Guang-Hou; Wan, Jian-Guo

    2016-06-01

    Multiferroic La-doped BiFeO3 thin films have been prepared by a sol-gel plus spin-coating process, and the local magnetoelectric coupling effect has been investigated by magnetic-field-assisted scanning probe microscopy connected with a ferroelectric analyzer. The local ferroelectric polarization response to external magnetic fields is observed and a so-called optimized magnetic field of ~40 Oe is obtained, at which the ferroelectric polarization reaches its maximum. Moreover, we carry out magnetic-field-dependent surface conductivity measurements and illustrate the origin of local magnetoresistance in the La-doped BiFeO3 thin films, which is closely related to the local ferroelectric polarization response to external magnetic fields. This work not only provides a useful technique for characterizing the local magnetoelectric coupling of a wide range of multiferroic materials but is also significant for a deeper understanding of the local multiferroic behaviors in BiFeO3-based systems.

  1. Optimal use of EEG recordings to target active brain areas with transcranial electrical stimulation.

    PubMed

    Dmochowski, Jacek P; Koessler, Laurent; Norcia, Anthony M; Bikson, Marom; Parra, Lucas C

    2017-08-15

    To demonstrate causal relationships between brain and behavior, investigators would like to guide brain stimulation using measurements of neural activity. Particularly promising in this context are electroencephalography (EEG) and transcranial electrical stimulation (TES), as they are linked by a reciprocity principle which, despite being known for decades, has not led to a formalism for relating EEG recordings to optimal stimulation parameters. Here we derive a closed-form expression for the TES configuration that optimally stimulates (i.e., targets) the sources of recorded EEG, without making assumptions about source location or distribution. We also derive a duality between TES targeting and EEG source localization, and demonstrate that in cases where source localization fails, so does the proposed targeting. Numerical simulations with multiple head models confirm these theoretical predictions and quantify the achieved stimulation in terms of focality and intensity. We show that constraining the stimulation currents automatically selects optimal montages that involve only a few (4-7) electrodes, with only incremental loss in performance when targeting focal activations. The proposed technique allows brain scientists and clinicians to rationally target the sources of observed EEG and thus overcomes a major obstacle to the realization of individualized or closed-loop brain stimulation. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  2. Optimal use of EEG recordings to target active brain areas with transcranial electrical stimulation

    PubMed Central

    Dmochowski, Jacek P.; Koessler, Laurent; Norcia, Anthony M.; Bikson, Marom; Parra, Lucas C.

    2018-01-01

    To demonstrate causal relationships between brain and behavior, investigators would like to guide brain stimulation using measurements of neural activity. Particularly promising in this context are electroencephalography (EEG) and transcranial electrical stimulation (TES), as they are linked by a reciprocity principle which, despite being known for decades, has not led to a formalism for relating EEG recordings to optimal stimulation parameters. Here we derive a closed-form expression for the TES configuration that optimally stimulates (i.e., targets) the sources of recorded EEG, without making assumptions about source location or distribution. We also derive a duality between TES targeting and EEG source localization, and demonstrate that in cases where source localization fails, so does the proposed targeting. Numerical simulations with multiple head models confirm these theoretical predictions and quantify the achieved stimulation in terms of focality and intensity. We show that constraining the stimulation currents automatically selects optimal montages that involve only a few (4–7) electrodes, with only incremental loss in performance when targeting focal activations. The proposed technique allows brain scientists and clinicians to rationally target the sources of observed EEG and thus overcomes a major obstacle to the realization of individualized or closed-loop brain stimulation. PMID:28578130
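    In spirit, the targeting problem above is a constrained least-squares fit. A minimal numpy sketch, with a random matrix standing in for the head-model lead field and ridge regularization standing in for the paper's current constraints (all values below are invented for illustration, not the authors' exact formalism):

```python
import numpy as np

rng = np.random.default_rng(0)
n_elec, n_src = 8, 20

# Hypothetical lead-field matrix: maps electrode currents to the electric
# field at candidate source locations (random stand-in for a head model).
L = rng.standard_normal((n_src, n_elec))

# Field pattern attributed to the recorded EEG sources (also invented).
target = rng.standard_normal(n_src)

# Restrict to current patterns that satisfy Kirchhoff's law (sum to zero).
P = np.eye(n_elec) - np.ones((n_elec, n_elec)) / n_elec
Lp = L @ P

# Ridge-regularized least squares; the penalty stands in for the
# total-current constraint that keeps real montages sparse and safe.
lam = 0.1
I = P @ np.linalg.solve(Lp.T @ Lp + lam * np.eye(n_elec), Lp.T @ target)
```

    The closed-form solve makes the duality with source localization visible: the same lead-field operator appears in both directions.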

  3. Improved Ant Algorithms for Software Testing Cases Generation

    PubMed Central

    Yang, Shunkun; Xu, Jiaqi

    2014-01-01

    Ant colony optimization (ACO) for software test case generation is a very popular topic in software testing engineering. However, traditional ACO has flaws: pheromone is relatively scarce in the early search, search efficiency is low, the search model is too simple, and the positive feedback mechanism easily produces stagnation and premature convergence. This paper introduces improved ACO variants for software test case generation: an improved local pheromone update strategy for ant colony optimization, an improved pheromone volatilization coefficient for ant colony optimization (IPVACO), and an improved global path pheromone update strategy for ant colony optimization (IGPACO). Finally, we put forward a comprehensive improved ant colony optimization (ACIACO), which combines all three methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved method can effectively improve search efficiency, restrain premature convergence, promote case coverage, and reduce the number of iterations. PMID:24883391
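    A generic global path pheromone update of the kind IGPACO refines can be written in a few lines: evaporate all trails, then reinforce the edges of the iteration-best path. The evaporation rate and deposit amount below are assumed values, not taken from the paper:

```python
import numpy as np

def global_update(tau, best_path, rho=0.5, q=1.0):
    """Evaporate all pheromone trails, then deposit extra pheromone on the
    edges of the iteration-best path (generic ACO global update; the
    parameter values rho and q are illustrative)."""
    tau = (1.0 - rho) * tau
    for i, j in zip(best_path[:-1], best_path[1:]):
        tau[i, j] += q / (len(best_path) - 1)
    return tau

# Four program branches, uniform initial trails, best path 0 -> 2 -> 1 -> 3.
tau = global_update(np.ones((4, 4)), [0, 2, 1, 3])
```

    Edges on the best path end up with more pheromone than unused edges, which biases the next generation of ants toward covering that path's branches.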

  4. Backwards compatible high dynamic range video compression

    NASA Astrophysics Data System (ADS)

    Dolzhenko, Vladimir; Chesnokov, Vyacheslav; Edirisinghe, Eran A.

    2014-02-01

    This paper presents a two-layer CODEC architecture for high dynamic range video compression. The base layer contains the tone-mapped video stream encoded with 8 bits per component, which can be decoded using conventional equipment. The base layer content is optimized for rendering on low dynamic range displays. The enhancement layer contains the image difference, in a perceptually uniform color space, between the inverse tone-mapped base layer content and the original video stream. Prediction of the high dynamic range content reduces redundancy in the transmitted data while still preserving highlights and out-of-gamut colors. The perceptually uniform color space enables the use of standard rate-distortion optimization algorithms. We present techniques for efficient implementation and encoding of non-uniform tone mapping operators with low overhead in terms of bitstream size and number of operations. The transform representation is based on a human visual system model and is suitable for global and local tone mapping operators. The compression techniques include predicting the transform parameters from previously decoded frames and from already decoded data for the current frame. Different video compression techniques are compared: backwards compatible and non-backwards compatible, using AVC and HEVC codecs.

  5. Determination of the mobility profile in GaAs-MESFETs. Thesis

    NASA Technical Reports Server (NTRS)

    Prost, W.

    1985-01-01

    A process for measuring charge carrier mobility in gallium-arsenide metal-semiconductor field-effect transistors is described, in an attempt to optimize the relationship between this factor and production. The measurement procedure allows an actual determination of the local mobility in the channel. The physical basis of the process and the features of the measurement setup are outlined. The measurement technique is described and recommendations are made for setting the measurement parameters.

  6. Flux-corrected transport algorithms for continuous Galerkin methods based on high order Bernstein finite elements

    NASA Astrophysics Data System (ADS)

    Lohmann, Christoph; Kuzmin, Dmitri; Shadid, John N.; Mabuza, Sibusiso

    2017-09-01

    This work extends the flux-corrected transport (FCT) methodology to arbitrary order continuous finite element discretizations of scalar conservation laws on simplex meshes. Using Bernstein polynomials as local basis functions, we constrain the total variation of the numerical solution by imposing local discrete maximum principles on the Bézier net. The design of accuracy-preserving FCT schemes for high order Bernstein-Bézier finite elements requires the development of new algorithms and/or generalization of limiting techniques tailored for linear and multilinear Lagrange elements. In this paper, we propose (i) a new discrete upwinding strategy leading to local extremum bounded low order approximations with compact stencils, (ii) high order variational stabilization based on the difference between two gradient approximations, and (iii) new localized limiting techniques for antidiffusive element contributions. The optional use of a smoothness indicator, based on a second derivative test, makes it possible to potentially avoid unnecessary limiting at smooth extrema and achieve optimal convergence rates for problems with smooth solutions. The accuracy of the proposed schemes is assessed in numerical studies for the linear transport equation in 1D and 2D.
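    The flux-correction step at the heart of FCT can be illustrated with a minimal one-dimensional Zalesak-type limiter on a periodic grid. This is the classical nodal form, not the Bernstein-Bézier variant developed in the paper, and all data are invented:

```python
import numpy as np

def fct_limit(u_low, f):
    """Zalesak-type flux correction on a 1-D periodic grid: clip the
    antidiffusive fluxes f (f[i] acts across the face between cells i and
    i+1) so the corrected solution stays within the local extrema of the
    low-order solution u_low."""
    up, um = np.roll(u_low, 1), np.roll(u_low, -1)
    u_max = np.maximum.reduce([up, u_low, um])
    u_min = np.minimum.reduce([up, u_low, um])
    fin = np.roll(f, 1)                                # left-face flux of cell i
    p_plus = np.maximum(fin, 0) + np.maximum(-f, 0)    # max possible inflow
    p_minus = np.maximum(-fin, 0) + np.maximum(f, 0)   # max possible outflow
    with np.errstate(divide="ignore", invalid="ignore"):
        r_plus = np.where(p_plus > 0, np.minimum(1, (u_max - u_low) / p_plus), 1.0)
        r_minus = np.where(p_minus > 0, np.minimum(1, (u_low - u_min) / p_minus), 1.0)
    # each face takes the most restrictive factor of its two neighbours
    c = np.where(f >= 0,
                 np.minimum(np.roll(r_plus, -1), r_minus),
                 np.minimum(r_plus, np.roll(r_minus, -1)))
    return u_low + np.roll(c * f, 1) - c * f

u_new = fct_limit(np.array([0., 0., 1., 1., 0.]),
                  np.array([0.3, -0.2, 0.4, 0.1, -0.3]))
```

    The corrected solution remains conservative (face fluxes cancel in the periodic sum) and bounded by the local extrema of the low-order solution, which is exactly the discrete maximum principle the paper enforces on the Bézier net.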

  7. Magnetic elements for switching magnetization magnetic force microscopy tips.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cambel, V.; Elias, P.; Gregusova, D.

    2010-09-01

    Using a combination of micromagnetic calculations and magnetic force microscopy (MFM) imaging, we find optimal parameters for novel magnetic tips suitable for switching-magnetization MFM. Switching-magnetization MFM is based on two-pass scanning atomic force microscopy with the tip magnetization reversed between the scans. Within the technique, the sum of the scanned data with reversed tip magnetization depicts local atomic forces, while their difference maps the local magnetic forces. Here we propose the design and calculate the magnetic properties of tips suitable for this scanning probe technique. We find that for best performance the spin-polarized tips must exhibit low magnetic moment, low switching fields, and a single-domain state at remanence. The switching field of such tips is calculated and the optimum shape of the Permalloy elements for the tips is found. We show excellent correspondence between calculated and experimental results for Py elements.

  8. Effect of film thickness on localized surface plasmon enhanced chemical sensor

    NASA Astrophysics Data System (ADS)

    Kassu, Aschalew; Farley, Carlton; Sharma, Anup; Kim, Wonkyu; Guo, Junpeng

    2014-05-01

    A highly sensitive, reliable, simple and inexpensive chemical detection and identification platform is demonstrated. The sensing technique is based on localized surface plasmon enhanced Raman scattering measurements from gold-coated, highly ordered, symmetric nanoporous ceramic membranes fabricated from anodic aluminum oxide. To investigate the effect of the thickness of the sputter-coated gold films on the sensitivity of the sensor, and to optimize the performance of the substrates, the geometry of the nanopores and the film thicknesses are varied in the range of 30 nm to 120 nm. To characterize the sensing technique and the detection limits, surface-enhanced Raman scattering spectra of low concentrations of a standard chemical adsorbed on the gold-coated substrates are collected and analyzed. The morphology of the proposed substrates is characterized by atomic force microscopy, and the optical properties, including transmittance, reflectance and absorbance, of each substrate are also investigated.

  9. Development of a stereo analysis algorithm for generating topographic maps using interactive techniques of the MPP

    NASA Technical Reports Server (NTRS)

    Strong, James P.

    1987-01-01

    A local area matching algorithm was developed on the Massively Parallel Processor (MPP). It is an iterative technique that first matches coarse or low resolution areas and at each iteration performs matches of higher resolution. Results so far show that when good matches are possible in the two images, the MPP algorithm matches corresponding areas as well as a human observer. To aid in developing this algorithm, a control or shell program was developed for the MPP that allows interactive experimentation with various parameters and procedures to be used in the matching process. (This would not be possible without the high speed of the MPP). With the system, optimal techniques can be developed for different types of matching problems.

  10. Prediction of STN-DBS Electrode Implantation Track in Parkinson's Disease by Using Local Field Potentials

    PubMed Central

    Telkes, Ilknur; Jimenez-Shahed, Joohi; Viswanathan, Ashwin; Abosch, Aviva; Ince, Nuri F.

    2016-01-01

    Optimal electrophysiological placement of the DBS electrode may lead to better long-term clinical outcomes. Inter-subject anatomical variability and limitations in stereotaxic neuroimaging increase the complexity of physiological mapping performed in the operating room. Microelectrode single-unit neuronal recording remains the most common intraoperative mapping technique, but requires significant expertise and is fraught with potential technical difficulties, including robust measurement of the signal. In contrast, local field potentials (LFPs), owing to their oscillatory and robust nature and stronger correlation with disease symptoms, can overcome these technical issues. Therefore, we hypothesized that multiple spectral features extracted from microelectrode-recorded LFPs could be used to automate the identification of the optimal track and the STN localization. In this regard, we recorded LFPs from microelectrodes in three tracks from 22 patients during DBS electrode implantation surgery at different depths and aimed to predict the track selected by the neurosurgeon based on the interpretation of single-unit recordings. A least mean square (LMS) algorithm was used to de-correlate LFPs in each track, in order to remove common activity between channels and increase their spatial specificity. Subband power in the beta band (11–32 Hz) and the high frequency range (200–450 Hz) were extracted from the de-correlated LFP data and used as features. A linear discriminant analysis (LDA) method was applied both for the localization of the dorsal border of the STN and the prediction of the optimal track. By fusing the information from these low and high frequency bands, the dorsal border of the STN was localized with a root mean square (RMS) error of 1.22 mm. The prediction accuracy for the optimal track was 80%. The individual beta band (11–32 Hz) and high frequency oscillation (200–450 Hz) features provided prediction accuracies of 72 and 68%, respectively. The best prediction result obtained with monopolar LFP data was 68%. These results establish initial evidence that LFPs can be strategically fused with computational intelligence in the operating room for STN localization and the selection of the track for chronic DBS electrode implantation. PMID:27242404
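    The band-power-plus-LDA pipeline can be sketched end to end on synthetic data. Everything below is invented for illustration: a 20 Hz oscillation stands in for beta-band STN activity, and a one-dimensional Fisher discriminant stands in for the paper's LDA classifier:

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean spectral power of signal x in the [lo, hi) Hz band (plain FFT
    periodogram estimate; Welch averaging would be used in practice)."""
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    return psd[(freqs >= lo) & (freqs < hi)].mean()

# Toy 1 s LFP snippets at fs = 1 kHz: "in-STN" traces carry an extra
# beta-band oscillation; "outside" traces are broadband noise only.
rng = np.random.default_rng(1)
fs = 1000
t = np.arange(fs) / fs
make = lambda beta: beta * np.sin(2 * np.pi * 20 * t) + rng.standard_normal(fs)

X = np.array([[band_power(make(a), fs, 11, 32)] for a in [2.0] * 10 + [0.0] * 10])
y = np.array([1] * 10 + [0] * 10)

# Fisher linear discriminant: w = Sw^{-1} (mu1 - mu0), threshold midway.
mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)   # scalar: one feature
w = (mu1 - mu0) / Sw
thresh = w * (mu0 + mu1) / 2
pred = (X * w > thresh).astype(int).ravel()
```

    The real pipeline adds the LMS de-correlation step and a second feature (200-450 Hz power) before the LDA, but the classification logic is the same.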

  11. Adaptation Method for Overall and Local Performances of Gas Turbine Engine Model

    NASA Astrophysics Data System (ADS)

    Kim, Sangjo; Kim, Kuisoon; Son, Changmin

    2018-04-01

    An adaptation method was proposed to improve the modeling accuracy of the overall and local performance of a gas turbine engine. The adaptation method was divided into two steps. First, overall performance parameters such as engine thrust, thermal efficiency, and pressure ratio were adapted by calibrating the compressor maps; second, local performance parameters such as the temperature at component interfaces and shaft speed were adjusted by additional adaptation factors. An optimization technique was used to find the correlation equation of the adaptation factors for the compressor performance maps; the multi-island genetic algorithm (MIGA) was employed in the present optimization. The correlations of the local adaptation factors were generated based on the difference between the first adapted engine model and performance test data. The proposed adaptation method was applied to a low-bypass-ratio turbofan engine of 12,000 lb thrust. The gas turbine engine model was generated and validated against performance test data in the sea-level static condition. In flight condition at 20,000 ft and Mach 0.9, the adapted engine model showed improved prediction of engine thrust (an overall performance parameter), reducing the difference from 14.5 to 3.3%. Moreover, there was further improvement in the comparison of low-pressure turbine exit temperature (a local performance parameter), with the difference reduced from 3.2 to 0.4%.

  12. Video-assisted thoracic surgery for left upper lobectomy for complex lesions: how to extend the indication with optimal safety?

    PubMed

    Bayard, Nathanaël Frank; Barnett, Stephen Arthur; Rinieri, Philippe; Melki, Jean; Peillon, Christophe; Baste, Jean Marc

    2016-08-01

    The feasibility of extending the VATS approach to locally advanced NSCLC has been described, with good clinical outcomes. These complex resections are still technically challenging, and patient safety must remain the highest priority. In this article, we describe our routine VATS approach for left upper lobectomy in proximal, locally advanced lesions. Both surgical and anaesthesiology teams are trained during simulation sessions to respond rapidly in case of urgent thoracotomy. Encircling arterial and venous vessels allows control of inadvertent bleeding during difficult dissection. Also, whenever needed, the double vessel control technique saves time while awaiting conversion to thoracotomy.

  13. A novel background field removal method for MRI using projection onto dipole fields (PDF).

    PubMed

    Liu, Tian; Khalidov, Ildar; de Rochefort, Ludovic; Spincemaille, Pascal; Liu, Jing; Tsiouris, A John; Wang, Yi

    2011-11-01

    For optimal image quality in susceptibility-weighted imaging and accurate quantification of susceptibility, it is necessary to isolate the local field generated by local magnetic sources (such as iron) from the background field that arises from imperfect shimming and variations in magnetic susceptibility of surrounding tissues (including air). Previous background removal techniques have limited effectiveness depending on the accuracy of model assumptions or information input. In this article, we report an observation that the magnetic field for a dipole outside a given region of interest (ROI) is approximately orthogonal to the magnetic field of a dipole inside the ROI. Accordingly, we propose a nonparametric background field removal technique based on projection onto dipole fields (PDF). In this PDF technique, the background field inside an ROI is decomposed into a field originating from dipoles outside the ROI using the projection theorem in Hilbert space. This novel PDF background removal technique was validated on a numerical simulation and a phantom experiment and was applied in human brain imaging, demonstrating substantial improvement in background field removal compared with the commonly used high-pass filtering method. Copyright © 2011 John Wiley & Sons, Ltd.
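    The projection idea can be sketched in one dimension: fit the measured field inside the ROI by least squares over kernels of sources placed outside the ROI, and keep the residual as the local field. The 1/r^3 kernel and all geometry below are illustrative stand-ins for the true 3-D dipole fields used by PDF:

```python
import numpy as np

# 1-D surrogate of the PDF idea: the background field inside the ROI is
# decomposed onto fields of sources outside the ROI; the residual is the
# local field of interest.
x = np.linspace(0.0, 10.0, 201)
roi = (x > 3.0) & (x < 7.0)

def kernel(center):
    """Field inside the ROI from a unit source at `center` (smooth 1/r^3
    stand-in; the 0.5 offset avoids the singularity)."""
    r = np.abs(x[roi] - center) + 0.5
    return 1.0 / r**3

centers = np.array([-1.0, 0.0, 1.0, 2.0, 8.0, 9.0, 10.0, 11.0])  # outside ROI
D = np.stack([kernel(c) for c in centers], axis=1)

background = 2.0 * kernel(1.0) - 1.5 * kernel(9.0)   # from outside sources
local = np.exp(-((x[roi] - 5.0) ** 2) / 0.1)         # narrow local source
field = background + local

coef, *_ = np.linalg.lstsq(D, field, rcond=None)     # project onto source fields
local_est = field - D @ coef                         # background removed
```

    Because the narrow local feature is nearly orthogonal to the smooth outside-source fields, the projection absorbs the background while leaving the local field largely intact, which is the observation the PDF method rests on.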

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Dengwang; Wang, Jie; Kapp, Daniel S.

    Purpose: The aim of this work is to develop a robust algorithm for accurate segmentation of the liver, with special attention paid to the problems of fuzzy edges and tumors. Methods: 200 CT images were collected from a radiotherapy treatment planning system. 150 datasets were selected as the panel data for the shape dictionary and parameter estimation; the remaining 50 datasets were used as test images. In our study, liver segmentation was formulated as an optimization process of an implicit function, and the liver region was optimized via local and global optimization during iterations. Our method consists of five steps: 1) The livers in the panel data were segmented manually by physicians; we then estimated the parameters of a GMM (Gaussian mixture model) and an MRF (Markov random field), and a shape dictionary was built from the 3D liver shapes. 2) The outlines of the chest and abdomen were located according to rib structure in the input images, and the liver region was initialized based on the GMM. 3) The liver shape for each 2D slice was adjusted using the MRF within the neighborhood of the liver edge for local optimization. 4) The 3D liver shape was corrected by employing SSR (sparse shape representation) based on the liver shape dictionary for global optimization; H-PSO (Hybrid Particle Swarm Optimization) was employed to solve the SSR equation. 5) The corrected 3D liver was divided into 2D slices as input data for the third step. The iteration was repeated between the local and global optimization until the stopping conditions (maximum iterations and rate of change) were satisfied. Results: The experiments indicated that our method performed well even for CT images with fuzzy edges and tumors. Compared with physician-delineated results, the segmentation accuracy on the 50 test datasets (VOE, volume overlap percentage) was on average 91%–95%. Conclusion: The proposed automatic segmentation method provides a sensible technique for segmentation of CT images. This work is supported by NIH/NIBIB (1R01-EB016777), National Natural Science Foundation of China (No.61471226 and No.61201441), research funding from Shandong Province (No.BS2012DX038 and No.J12LN23), and research funding from Jinan City (No.201401221 and No.20120109)

  15. Sources and remediation techniques for mercury contaminated soil.

    PubMed

    Xu, Jingying; Bravo, Andrea Garcia; Lagerkvist, Anders; Bertilsson, Stefan; Sjöblom, Rolf; Kumpiene, Jurate

    2015-01-01

    Mercury (Hg) in soils has increased by a factor of 3 to 10 in recent times, mainly due to combustion of fossil fuels combined with long-range atmospheric transport processes. Other sources, such as chlor-alkali plants, gold mining and cement production, can also be significant, at least locally. This paper summarizes the natural and anthropogenic sources that have contributed to the increase of Hg concentration in soil and reviews major remediation techniques and their applications to control soil Hg contamination. The focus is on soil washing, stabilisation/solidification, thermal treatment and biological techniques; the factors that influence Hg mobilisation in soil, and that are therefore crucial for evaluating and optimising remediation techniques, are also discussed. Further research on bioremediation is encouraged, and future studies should focus on the implementation of different remediation techniques under field conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Bio-inspired UAV routing, source localization, and acoustic signature classification for persistent surveillance

    NASA Astrophysics Data System (ADS)

    Burman, Jerry; Hespanha, Joao; Madhow, Upamanyu; Pham, Tien

    2011-06-01

    A team consisting of Teledyne Scientific Company, the University of California at Santa Barbara and the Army Research Laboratory* is developing technologies in support of automated data exfiltration from heterogeneous battlefield sensor networks to enhance situational awareness for dismounts and command echelons. Unmanned aerial vehicles (UAV) provide an effective means to autonomously collect data from a sparse network of unattended ground sensors (UGSs) that cannot communicate with each other. UAVs are used to reduce the system reaction time by generating autonomous collection routes that are data-driven. Bio-inspired techniques for search provide a novel strategy to detect, capture and fuse data. A fast and accurate method has been developed to localize an event by fusing data from a sparse number of UGSs. This technique uses a bio-inspired algorithm based on chemotaxis or the motion of bacteria seeking nutrients in their environment. A unique acoustic event classification algorithm was also developed based on using swarm optimization. Additional studies addressed the problem of routing multiple UAVs, optimally placing sensors in the field and locating the source of gunfire at helicopters. A field test was conducted in November of 2009 at Camp Roberts, CA. The field test results showed that a system controlled by bio-inspired software algorithms can autonomously detect and locate the source of an acoustic event with very high accuracy and visually verify the event. In nine independent test runs of a UAV, the system autonomously located the position of an explosion nine times with an average accuracy of 3 meters. The time required to perform source localization using the UAV was on the order of a few minutes based on UAV flight times. 
In June 2011, additional field tests of the system will be performed and will include multiple acoustic events, optimal sensor placement based on acoustic phenomenology and the use of the International Technology Alliance (ITA) Sensor Network Fabric (IBM).

  17. Bending the Rules: Widefield Microscopy and the Abbe Limit of Resolution

    PubMed Central

    Verdaasdonk, Jolien S.; Stephens, Andrew D.; Haase, Julian; Bloom, Kerry

    2014-01-01

    One of the most fundamental concepts of microscopy is that of resolution: the ability to clearly distinguish two objects as separate. Recent advances such as structured illumination microscopy (SIM) and point localization techniques, including photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM), strive to overcome the inherent limits of resolution of the modern light microscope. These techniques, however, are not always feasible or optimal for live cell imaging. Thus, in this review, we explore three techniques for extracting high resolution data from images acquired on a widefield microscope: deconvolution, model convolution, and Gaussian fitting. Deconvolution is a powerful tool for restoring a blurred image using knowledge of the point spread function (PSF) describing the blurring of light by the microscope, although care must be taken to ensure the accuracy of subsequent quantitative analysis. The process of model convolution also requires knowledge of the PSF, here to blur a simulated image which can then be compared to the experimentally acquired data to reach conclusions regarding its geometry and fluorophore distribution. Gaussian fitting is the basis for point localization microscopy, and can also be applied to tracking spot motion over time or measuring spot shape and size. Altogether, these three methods serve as powerful tools for high-resolution imaging using widefield microscopy. PMID:23893718
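    Gaussian fitting for point localization can be illustrated with the standard three-point fit: on a noiseless Gaussian spot the log-intensities are exactly quadratic, so a parabola through three samples recovers the sub-pixel peak position. The image and parameters below are synthetic:

```python
import numpy as np

def gaussian_peak_offset(i_minus, i_zero, i_plus):
    """Three-point Gaussian fit: sub-pixel offset of the peak from the
    centre sample, given three positive intensities along one axis."""
    lm, l0, lp = np.log([i_minus, i_zero, i_plus])
    return 0.5 * (lm - lp) / (lm - 2 * l0 + lp)

# Synthetic camera image of a single fluorophore at a sub-pixel position.
x0, y0, sigma = 10.3, 7.6, 1.5
yy, xx = np.mgrid[0:20, 0:20]
img = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))

# Locate the brightest pixel, then refine along each axis.
iy, ix = np.unravel_index(np.argmax(img), img.shape)
est_x = ix + gaussian_peak_offset(img[iy, ix - 1], img[iy, ix], img[iy, ix + 1])
est_y = iy + gaussian_peak_offset(img[iy - 1, ix], img[iy, ix], img[iy + 1, ix])
```

    With camera noise the fit is no longer exact, and full nonlinear least-squares over a larger window (as in PALM/STORM software) gives better precision, but the principle of recovering sub-pixel position from the spot shape is the same.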

  18. Contaminant point source localization error estimates as functions of data quantity and model quality

    NASA Astrophysics Data System (ADS)

    Hansen, Scott K.; Vesselinov, Velimir V.

    2016-10-01

    We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.

  19. High Dynamic Velocity Range Particle Image Velocimetry Using Multiple Pulse Separation Imaging

    PubMed Central

    Persoons, Tim; O’Donovan, Tadhg S.

    2011-01-01

    The dynamic velocity range of particle image velocimetry (PIV) is determined by the maximum and minimum resolvable particle displacement. Various techniques have extended the dynamic range; however, flows with a wide velocity range (e.g., impinging jets) still challenge PIV algorithms. A new technique is presented to increase the dynamic velocity range by over an order of magnitude. The multiple pulse separation (MPS) technique (i) records a series of double-frame exposures with different pulse separations, (ii) processes the fields using conventional multi-grid algorithms, and (iii) yields a composite velocity field with a locally optimized pulse separation. A robust criterion determines the local optimum pulse separation, accounting for correlation strength and measurement uncertainty. Validation experiments are performed in an impinging jet flow, using laser-Doppler velocimetry as the reference measurement. The precision of mean flow and turbulence quantities is significantly improved compared to conventional PIV, due to the increase in dynamic range. In a wide range of applications, MPS PIV is a robust approach to increase the dynamic velocity range without restricting the vector evaluation methods. PMID:22346564
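    The per-vector pulse-separation selection can be sketched as follows. The separations, velocities and the quarter-window validity criterion are invented numbers, and the real MPS criterion also weighs correlation strength and measurement uncertainty, which are omitted here:

```python
import numpy as np

# Toy composite reconstruction in the spirit of MPS PIV: three passes with
# different pulse separations dt give displacement fields in pixels; for
# each location keep the largest dt whose displacement stays below the
# interrogation limit (a quarter of a 32 px window).
dts = np.array([0.1, 0.4, 1.6])                  # pulse separations, ms
true_v = np.array([1.0, 5.0, 60.0])              # px/ms at three locations
disp = np.outer(dts, true_v)                     # displacement = v * dt
valid = disp < 8.0                               # quarter-window criterion
best = np.where(valid, dts[:, None], 0.0).argmax(axis=0)
v_est = disp[best, np.arange(disp.shape[1])] / dts[best]
```

    Slow regions get the long separation (best relative precision), fast regions the short one (no loss of correlation), which is how the composite field widens the dynamic range.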

  20. High-NA metrology and sensing on Berkeley MET5

    NASA Astrophysics Data System (ADS)

    Miyakawa, Ryan; Anderson, Chris; Naulleau, Patrick

    2017-03-01

    In this paper we compare two non-interferometric wavefront sensors suitable for in-situ high-NA EUV optical testing. The first is the AIS sensor, which has been deployed in both inspection and exposure tools. AIS is a compact, optical test that directly measures a wavefront by probing various parts of the imaging optic pupil and measuring localized wavefront curvature. The second is an image-based technique that uses an iterative algorithm based on simulated annealing to reconstruct a wavefront based on matching aerial images through focus. In this technique, customized illumination is used to probe the pupil at specific points to optimize differences in aberration signatures.

  1. Mobile transporter path planning

    NASA Technical Reports Server (NTRS)

    Baffes, Paul; Wang, Lui

    1990-01-01

    The use of a genetic algorithm (GA) for solving the mobile transporter path planning problem is investigated. The mobile transporter is a traveling robotic vehicle proposed for the space station which must be able to reach any point of the structure autonomously. Elements of the genetic algorithm are explored in both a theoretical and experimental sense. Specifically, double crossover, greedy crossover, and tournament selection techniques are examined. Additionally, the use of local optimization techniques working in concert with the GA are also explored. Recent developments in genetic algorithm theory are shown to be particularly effective in a path planning problem domain, though problem areas can be cited which require more research.
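    The interplay of tournament selection, double (two-point) crossover, and a local optimizer working in concert with the GA can be sketched on a toy objective. OneMax stands in for the actual path cost; nothing below is specific to the transporter problem:

```python
import random

random.seed(0)
N = 20                                   # genome length (toy path encoding)

def fitness(s):                          # stand-in objective: maximize ones
    return sum(s)

def tournament(pop, k=3):                # tournament selection
    return max(random.sample(pop, k), key=fitness)

def double_crossover(a, b):              # two-point ("double") crossover
    i, j = sorted(random.sample(range(N), 2))
    return a[:i] + b[i:j] + a[j:]

def local_improve(s):                    # greedy bit-flip hill climb: the
    s = s[:]                             # local optimizer assisting the GA
    for i in range(N):
        t = s[:]
        t[i] ^= 1
        if fitness(t) > fitness(s):
            s = t
    return s

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(10)]
for _ in range(5):                       # a few generations suffice here
    pop = [local_improve(double_crossover(tournament(pop), tournament(pop)))
           for _ in range(10)]
best = max(pop, key=fitness)
```

    Hybrids of this shape (often called memetic algorithms) let the GA explore globally while the local step polishes each offspring, which is the combination the abstract reports as effective.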

  2. Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation

    NASA Astrophysics Data System (ADS)

    Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah

    2018-04-01

    The CNC machine is controlled by manipulating cutting parameters that directly influence process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function. Nonetheless, industry still uses traditional techniques to obtain those values, and lack of knowledge of optimization techniques is the main reason this issue persists. Therefore, a simple yet easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers easily understand and determine the optimal parameters for their turning operations. This new system consists of two stages: modelling and optimization. For modelling of the input-output and in-process parameters, a hybrid of the Extreme Learning Machine and Particle Swarm Optimization is applied; this modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between academia and industry by introducing a simple yet easy-to-implement optimization technique that is both fast and accurate.
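    The optimization stage can be sketched with a minimal particle swarm over two cutting parameters. The roughness surrogate, bounds, and PSO coefficients below are invented for illustration, standing in for the trained ELM model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented surrogate standing in for the ELM model: surface roughness as a
# function of cutting speed v (m/min) and feed f (mm/rev).
def roughness(p):
    v, f = p[..., 0], p[..., 1]
    return 0.5 + 8.0 * f**2 - 0.002 * v + 1e-5 * (v - 150.0) ** 2

lo, hi = np.array([50.0, 0.05]), np.array([250.0, 0.5])
n = 20
pos = lo + rng.random((n, 2)) * (hi - lo)
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), roughness(pos)

for _ in range(100):
    gbest = pbest[pbest_val.argmin()]
    r1, r2 = rng.random((2, n, 1))
    # standard velocity update: inertia + cognitive + social terms
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)          # keep parameters in range
    val = roughness(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
```

    For this surrogate the analytic minimum sits at v = 250 m/min, f = 0.05 mm/rev with roughness 0.12, and the swarm converges close to it; swapping in the real trained model and the manufacturer's preferred performance function is a drop-in change.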

  3. Primal-dual techniques for online algorithms and mechanisms

    NASA Astrophysics Data System (ADS)

    Liaghat, Vahid

    An offline algorithm is one that knows the entire input in advance. An online algorithm, however, processes its input in a serial fashion. In contrast to offline algorithms, an online algorithm works in a local fashion and has to make irrevocable decisions without having the entire input. Online algorithms are often not optimal since their irrevocable decisions may turn out to be inefficient after receiving the rest of the input. For a given online problem, the goal is to design algorithms which are competitive against the offline optimal solutions. In a classical offline scenario, it is often common to see a dual analysis of problems that can be formulated as a linear or convex program. Primal-dual and dual-fitting techniques have been successfully applied to many such problems. Unfortunately, the usual tricks come short in an online setting since an online algorithm should make decisions without knowing even the whole program. In this thesis, we study the competitive analysis of fundamental problems in the literature such as different variants of online matching and online Steiner connectivity, via online dual techniques. Although there are many generic tools for solving an optimization problem in the offline paradigm, in comparison, much less is known for tackling online problems. The main focus of this work is to design generic techniques for solving integral linear optimization problems where the solution space is restricted via a set of linear constraints. A general family of these problems are online packing/covering problems. Our work shows that for several seemingly unrelated problems, primal-dual techniques can be successfully applied as a unifying approach for analyzing these problems. We believe this leads to generic algorithmic frameworks for solving online problems. In the first part of the thesis, we show the effectiveness of our techniques in the stochastic settings and their applications in Bayesian mechanism design. 
In particular, we introduce new techniques for solving a fundamental linear optimization problem, namely, the stochastic generalized assignment problem (GAP). This packing problem generalizes various problems such as online matching, ad allocation, and bin packing. We furthermore show applications of these results in mechanism design by introducing Prophet Secretary, a novel Bayesian model for online auctions. In the second part of the thesis, we focus on covering problems. We develop the framework of "Disk Painting" for a general class of network design problems that can be characterized by proper functions. This class generalizes the node-weighted and edge-weighted variants of several well-known Steiner connectivity problems. We furthermore design a generic technique for solving the prize-collecting variants of these problems whenever a dual analysis exists for the non-prize-collecting counterparts. Hence, we solve the online prize-collecting variants of several network design problems for the first time. Finally, we focus on designing techniques for online problems with mixed packing/covering constraints. We initiate the study of degree-bounded graph optimization problems in the online setting by designing an online algorithm with a tight competitive ratio for the degree-bounded Steiner forest problem. We hope these techniques establish a starting point for the analysis of the important class of online degree-bounded optimization problems on graphs.
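
    The online primal-dual flavor of the covering problems above can be conveyed with a toy sketch (an illustration of the standard multiplicative-update rule for online fractional set cover with unit costs, in the style of Buchbinder and Naor; it is not one of the thesis's own algorithms, and the example data are made up):

```python
def online_fractional_cover(sets_containing, n_sets, arrivals):
    """Online fractional set cover with unit costs: when an uncovered
    element arrives, multiplicatively raise the fractional value of every
    set that contains it until the element is covered. All increases to
    x are irrevocable, mirroring the online setting."""
    x = [0.0] * n_sets
    for e in arrivals:                    # elements revealed one at a time
        covering = sets_containing[e]     # indices of sets containing e
        while sum(x[i] for i in covering) < 1.0:
            for i in covering:
                # doubling plus an additive 1/d term is the classic
                # multiplicative-weights update for unit-cost covering
                x[i] = 2.0 * x[i] + 1.0 / len(covering)
    return x

# Three sets over elements a, b, c arriving online
sets_containing = {"a": [0, 1], "b": [1, 2], "c": [0, 2]}
x = online_fractional_cover(sets_containing, 3, ["a", "b", "c"])
print(x)  # every element ends up fractionally covered
```

Note that when "c" arrives it is already covered by earlier increases, so no further update is needed; this reuse of past decisions is exactly what the competitive analysis charges against the dual.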

  4. Multidisciplinary Optimization and Damage Tolerance of Stiffened Structures

    NASA Astrophysics Data System (ADS)

    Jrad, Mohamed

    The structural optimization of a cantilever aircraft wing with curvilinear spars, ribs, and stiffeners is described. For the optimization of a complex wing, a common strategy is to divide the procedure into two subsystems: the global wing optimization, which optimizes the geometry of spars, ribs, and wing skins; and the local panel optimization, which optimizes the design variables of the local panels bordered by spars and ribs. Stiffeners are placed on the local panels to increase stiffness and buckling resistance. During the local panel optimization, the stress information is taken from the global model as a displacement boundary condition on the panel edges, using the so-called "Global-Local Approach". Particle swarm optimization is used in the integrated global/local optimization to optimize the SpaRibs. A parallel computing approach was developed in the Python programming language to reduce the CPU time. The license cycle-check method and the memory self-adjustment method are two approaches applied in the parallel framework to make better use of resources by reducing license and memory limitations and making the code robust. The integrated global-local optimization approach has been applied to the subsonic NASA Common Research Model (CRM) wing, demonstrating that the methodology scales to medium-fidelity FEM analysis. The structural weight of the wing was reduced by 42%, and the parallel implementation reduced the CPU time by 89%. The aforementioned Global-Local Approach is then investigated and applied to a composite panel with a crack at its center. Because of the heterogeneity of composite laminates, accurate analysis requires considerable computation time and storage space. 
A possible alternative that reduces the computational complexity is global-local analysis, which involves an approximate analysis of the whole structure followed by a detailed analysis of a significantly smaller region of interest. Buckling analysis of a composite panel with attached longitudinal stiffeners under compressive loads is performed using the Ritz method with trigonometric functions, and the results are compared to those from Abaqus FEA for different shell elements. The cases of a composite panel with one, two, and three stiffeners are investigated, along with the effect of the distance between the stiffeners on the buckling load. The variation of the buckling load and buckling modes with the stiffeners' height is also studied. It is shown that there is an optimum stiffener height beyond which the structural response of the stiffened panel is not improved and the buckling load does not increase. Furthermore, there exist different critical values of stiffener height at which the buckling mode of the structure changes. Next, buckling analysis of a composite panel with two straight stiffeners and a crack at the center is performed. Finally, buckling analysis of a composite panel with curvilinear stiffeners and a crack at the center is also conducted. Results show that panels with a larger crack have a reduced buckling load, and that the buckling load decreases slightly when using higher-order 2D shell FEM elements. A damage tolerance framework, EBF3PanelOpt, has been developed to design and analyze curvilinearly stiffened panels. The framework is written in the scripting language Python and interacts with the commercial software MSC.Patran (for geometry and mesh creation), MSC.Nastran (for finite element analysis), and MSC.Marc (for damage tolerance analysis). The crack location is set to the location of the maximum value of the major principal stress, while its orientation is set normal to the major principal axis direction. 
The effective stress intensity factor is calculated using the Virtual Crack Closure Technique and compared to the fracture toughness of the material to decide whether the crack will grow. The ratio of these two quantities is used as a constraint, along with the buckling factor, the Kreisselmeier-Steinhauser criterion, and the crippling factor. The EBF3PanelOpt framework is integrated within a two-step Particle Swarm Optimization to minimize the weight of the panel while satisfying the aforementioned constraints, using all the shape and thickness parameters as design variables. The result of the PSO is then used as an initial guess for a gradient-based optimization, using only the thickness parameters as design variables and employing VisualDOC. A stiffened panel with two curvilinear stiffeners is optimized for two load cases; in both cases, the panel's weight is significantly reduced.

  5. A fast direct solver for boundary value problems on locally perturbed geometries

    NASA Astrophysics Data System (ADS)

    Zhang, Yabin; Gillman, Adrianna

    2018-03-01

    Many applications, including optimal design and adaptive discretization techniques, involve solving several boundary value problems on geometries that are local perturbations of an original geometry. This manuscript presents a fast direct solver for boundary value problems that are recast as boundary integral equations. The idea is to write the discretized boundary integral equation on the new geometry as a low-rank update to the discretized problem on the original geometry. Using the Sherman-Morrison formula, the inverse can be expressed in terms of the inverse of the original system applied to the low-rank factors and the right-hand side. Numerical results illustrate that, for problems where the perturbation is localized, the fast direct solver is about three times faster than building a new solver from scratch.
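
    The core reuse idea can be illustrated with a small dense linear-algebra sketch (a generic demonstration of the Sherman-Morrison-Woodbury identity on made-up matrices, not the paper's hierarchical boundary-integral solver): once the inverse of the original system is available, a rank-k perturbation costs only a k-by-k solve.

```python
import numpy as np

def solve_perturbed(A_inv, U, V, b):
    """Solve (A + U V^T) x = b reusing a precomputed A^{-1} via Woodbury:
    x = A^{-1} b - A^{-1} U (I + V^T A^{-1} U)^{-1} V^T A^{-1} b."""
    k = U.shape[1]
    AinvU = A_inv @ U                          # n x k
    core = np.eye(k) + V.T @ AinvU             # small k x k capacitance matrix
    Ainvb = A_inv @ b
    return Ainvb - AinvU @ np.linalg.solve(core, V.T @ Ainvb)

# Toy check: a well-conditioned system with a small rank-2 perturbation
rng = np.random.default_rng(0)
n, k = 50, 2
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
A_inv = np.linalg.inv(A)                       # "factor once" for the original geometry
U = 0.01 * rng.standard_normal((n, k))         # low-rank update factors
V = rng.standard_normal((n, k))
b = rng.standard_normal(n)

x_fast = solve_perturbed(A_inv, U, V, b)       # reuses A_inv
x_direct = np.linalg.solve(A + U @ V.T, b)     # builds the new system from scratch
print(np.allclose(x_fast, x_direct))
```

The identity is exact, so the two solutions agree to rounding error; the payoff in the paper's setting is that only the perturbed part of the discretization enters the update.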

  6. Structural optimization via a design space hierarchy

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.

    1976-01-01

    Mathematical programming techniques provide a general approach to automated structural design. An iterative method is proposed in which the design task is treated as a hierarchy of subproblems, one locally constrained and the other locally unconstrained. It is assumed that the design space is locally convex in the case of good initial designs and that the objective and constraint functions are continuous, with continuous first derivatives. A general design algorithm is outlined for finding a move direction that decreases the value of the objective function while maintaining a feasible design, and the case of one-dimensional search in a two-variable design space is discussed, along with possible applications. A major feature of the proposed algorithm is its applicability to problems which are inherently ill-conditioned, such as the design of structures for optimum geometry.

  7. Approximation algorithms for a genetic diagnostics problem.

    PubMed

    Kosaraju, S R; Schäffer, A A; Biesecker, L G

    1998-01-01

    We define and study a combinatorial problem called WEIGHTED DIAGNOSTIC COVER (WDC) that models the use of a laboratory technique called genotyping in the diagnosis of an important class of chromosomal aberrations. An optimal solution to WDC would enable us to define a genetic assay that maximizes the diagnostic power for a specified cost of laboratory work. We develop approximation algorithms for WDC by making use of the well-known problem SET COVER for which the greedy heuristic has been extensively studied. We prove worst-case performance bounds on the greedy heuristic for WDC and for another heuristic we call directional greedy. We implemented both heuristics. We also implemented a local search heuristic that takes the solutions obtained by greedy and dir-greedy and applies swaps until they are locally optimal. We report their performance on a real data set that is representative of the options that a clinical geneticist faces for the real diagnostic problem. Many open problems related to WDC remain, both of theoretical interest and practical importance.
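
    The greedy heuristic for weighted SET COVER that the WDC analysis builds on can be sketched in a few lines (this is the textbook greedy for plain weighted set cover, not the WDC or directional-greedy variants, and the instance below is made up):

```python
def greedy_weighted_set_cover(universe, sets, costs):
    """Greedy heuristic for weighted set cover: repeatedly pick the set
    minimizing cost per newly covered element (the classic ln(n)-approximation)."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # candidate sets that still cover something new, ranked by cost ratio
        best = min(
            (i for i in range(len(sets)) if sets[i] & uncovered),
            key=lambda i: costs[i] / len(sets[i] & uncovered),
        )
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

universe = range(1, 6)
sets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
costs = [5, 10, 3, 1]
picked = greedy_weighted_set_cover(universe, sets, costs)
print(picked)  # [3, 0]: the cheap {4, 5} first, then {1, 2, 3}
```

A local search pass of the kind the paper describes would then try swapping chosen sets for unchosen ones while coverage is preserved, stopping at a locally optimal solution.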

  8. Predictions of High Strain Rate Failure Modes in Layered Aluminum Composites

    NASA Astrophysics Data System (ADS)

    Khanikar, Prasenjit; Zikry, M. A.

    2014-01-01

    A dislocation-density-based crystalline plasticity formulation, specialized finite-element techniques, and rational crystallographic orientation relations were used to predict and characterize the failure modes associated with the high strain rate behavior of aluminum layered composites. Two alloy layers, a high-strength alloy, aluminum 2195, and a high-toughness alloy, aluminum 2139, were modeled with representative microstructures that included precipitates, dispersed particles, and different grain boundary distributions. Different layer arrangements were investigated for high strain rate applications; the optimal arrangement placed the high-toughness 2139 layer on the bottom, which provided extensive shear strain localization, and the high-strength 2195 layer on the top for strength. The thickness of the bottom high-toughness layer also affected the bending behavior of the roll-bonded interface and the potential delamination of the layers. Shear strain localization, dynamic cracking, and delamination are mutually competing failure mechanisms for the layered metallic composite, and control of these failure modes can be used to optimize behavior for high strain rate applications.

  9. On the Optimization of Aerospace Plane Ascent Trajectory

    NASA Astrophysics Data System (ADS)

    Al-Garni, Ahmed; Kassem, Ayman Hamdy

    A hybrid heuristic optimization technique based on genetic algorithms and particle swarm optimization has been developed and tested on trajectory optimization problems with multiple constraints and a multi-objective cost function. The technique is used to calculate control settings for two types of ascent trajectory (constant dynamic pressure, and minimum-fuel-minimum-heat) for a two-dimensional model of an aerospace plane. A thorough statistical analysis compares the hybrid technique with both basic genetic algorithms and particle swarm optimization with respect to convergence and execution time. Genetic algorithm optimization showed better execution time, while particle swarm optimization showed better convergence. The hybrid technique, benefiting from both, showed robust performance, balancing convergence and execution time.
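
    The swarm-update rule at the heart of such schemes fits in a short sketch (a bare-bones particle swarm optimizer with conventional parameter values on a toy test function; it is not the hybrid GA-PSO of this work, and all parameter choices here are illustrative assumptions):

```python
import random

def pso_minimize(f, dim, n_particles=30, iters=200, seed=1):
    """Minimal particle swarm optimizer: each velocity update blends inertia,
    a pull toward the particle's personal best, and a pull toward the
    swarm's global best."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5                  # conventional PSO coefficients
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:             # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:            # update global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function: global minimum 0 at the origin
best, val = pso_minimize(lambda x: sum(xi * xi for xi in x), dim=3)
print(val)  # a value very near 0
```

A GA hybrid of the kind the abstract describes would periodically apply crossover/mutation to the particle population, trading some of PSO's fast convergence for the GA's broader exploration.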

  10. Progress Towards a New Technique for Measuring Local Electric and Magnetic Field Fluctuations in High Temperature Plasmas

    NASA Astrophysics Data System (ADS)

    Burke, M. G.; Fonck, R. J.; McKee, G. R.; Winz, G. R.

    2017-10-01

    Local measurements of electrostatic and magnetic turbulence in fusion-grade plasmas are a critical missing component in advancing our understanding of current experiments and validating nonlinear turbulence simulations. A novel diagnostic for measuring local electric and magnetic field fluctuations (Ẽ and B̃) is being developed to address this need. It employs high-speed measurements of the spectral linewidth and/or line intensities of the motional Stark effect (MSE) split neutral beam emission. This emission is split into several spectral components, with the amount of splitting proportional to the local magnetic and electric fields at the emission site. High spectral resolution (≈0.025 nm), high throughput (≈0.01 cm² sr), and high speed (f ≈ 250 kHz) are required to measure fast changes in the MSE spectrum. Spatial heterodyne spectroscopy (SHS) techniques coupled to a CMOS detector can meet these demands. A prototype SHS has been deployed to DIII-D for initial testing in the tokamak environment, SNR evaluation, and neutral beam efficacy. In addition, design studies of the SHS interferogram are ongoing to further optimize the measurement technique. One major contributor to loss of fringe contrast is line broadening arising from employing a large collection lens. This broadening can be mitigated by making the lens at the tokamak wall optically conjugate with the interference fringe image field. Work supported by US DOE Grant DE-FG02-89ER53296.

  11. Development and use of culture systems to modulate specific cell responses

    NASA Astrophysics Data System (ADS)

    Martin, Yves

    Culture surfaces that induce specific localized cell responses are required to achieve tissue-like cell growth in three-dimensional (3D) environments, as well as to develop more efficient cell-based diagnostic techniques, notably when working with fragile cells such as stem cells or platelets. Chapter 1 of this thesis reviews 3D cell-material interactions in vitro and the existing culture systems available to achieve in vivo-like cell responses. More adequate 3D culture systems will need to be developed to mimic several characteristics of in vivo environments, including lowered non-specific cell-material interactions and localized biochemical signaling. The experimental work in this thesis is based on the hypothesis that well-studied and optimized surface treatments can lower non-specific cell-material interactions and allow local chemical modification, in order to achieve specific localized cell-material interactions for different applications. Accordingly, in Chapters 2 and 3, surface treatments were developed using plasma polymerization and covalent immobilization of a low-fouling polymer (poly(ethylene glycol)), and were characterized and optimized using a large number of techniques, including atomic force microscopy, quartz crystal microbalance, surface plasmon resonance, X-ray photoelectron spectroscopy, and fluorescence-based techniques. The main plasma polymerization parameter governing surface chemical content, specifically the nitrogen-to-carbon ratio, was identified as the glow discharge power, while reaction time and power determined the plasma film thickness. Moreover, the plasma films were shown to be stable in aqueous environments. The physicochemical and mechanical properties of covalently bound poly(ethylene glycol) (PEG) layers depend on the fabrication method. 
The polymer concentration in solution is an important indicator of the final layer properties, and the use of a theta solvent induces complex aggregation phenomena in solution, yielding layers with widely different properties. Chemically available primary amine groups are also shown to be present, paving the way for the immobilization of bio-active molecules. An application of low-fouling, locally modified surfaces is given in Chapter 4 through the development of a novel diagnostic surface for evaluating platelet activation, which has until now been very difficult because platelets are readily activated by in vitro manipulation. Results from volunteer donors indicate that this diagnostic instrument has the potential to allow rapid estimation of platelet activation levels in whole blood.

  12. Techniques for optimizing nanotips derived from frozen taylor cones

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirsch, Gregory

    Optimization techniques are disclosed for producing sharp and stable tips/nanotips relying on liquid Taylor cones created from electrically conductive materials with high melting points. A wire substrate of such a material, with a preform end in the shape of a regular or concave cone, is first melted with a focused laser beam. Under the influence of a high positive potential, a Taylor cone in a liquid/molten state is formed at that end. The cone is then quenched upon cessation of the laser power, thus freezing the Taylor cone. The tip of the frozen Taylor cone is reheated by the laser to allow its precise localized melting and shaping. Tips thus obtained yield desirable end-forms suitable as electron field emission sources for a variety of applications. In-situ regeneration of the tip is readily accomplished. These tips can also be employed as regenerable bright ion sources using field ionization/desorption of introduced chemical species.

  13. Optimized protocol for combined PALM-dSTORM imaging.

    PubMed

    Glushonkov, O; Réal, E; Boutant, E; Mély, Y; Didier, P

    2018-06-08

    Multi-colour super-resolution localization microscopy is an efficient technique for studying a variety of intracellular processes, including protein-protein interactions. The technique requires specific labels that switch between fluorescent and non-fluorescent states under given conditions. For the most commonly used label types, photoactivatable fluorescent proteins and organic fluorophores, these conditions differ, making experiments that combine both labels difficult. Here, we demonstrate that replacing the standard thiol/oxygen-scavenging imaging buffer used for organic fluorophores with the commercial mounting medium Vectashield increased the number of photons emitted by the fluorescent protein mEos2 and enhanced the photoconversion rate between its green and red forms. In addition, the photophysical properties of organic fluorophores remained unaltered with respect to the standard imaging buffer. The use of Vectashield, together with our optimized protocol for correcting sample drift and chromatic aberrations, enabled us to perform two-colour 3D super-resolution imaging of the nucleolus and resolve its three compartments.

  14. Lung Segmentation Refinement based on Optimal Surface Finding Utilizing a Hybrid Desktop/Virtual Reality User Interface

    PubMed Central

    Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R.

    2013-01-01

    Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported, with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to image data variability, finding a suitable cost function applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations, utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows natural and interactive manipulation of 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets that showed local segmentation errors after automated OSF-based lung segmentation. The experiments exhibited a significant increase in performance in terms of mean absolute surface distance errors (2.54 ± 0.75 mm prior to refinement vs. 1.11 ± 0.43 mm post-refinement, p ≪ 0.001). Speed of interaction is one of the most important factors leading to acceptance or rejection of the approach by users expecting a real-time interaction experience. The average computing time per refinement iteration was 150 ms, and the average total user interaction time required to reach complete operator satisfaction per case was about 2 min. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result. 
The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation utilizes the OSF framework. The two reported segmentation refinement tools were optimized for lung segmentation and might need some adaptation for other application domains. PMID:23415254

  15. Influence of robust optimization in intensity-modulated proton therapy with different dose delivery techniques

    PubMed Central

    Liu, Wei; Li, Yupeng; Li, Xiaoqiang; Cao, Wenhua; Zhang, Xiaodong

    2012-01-01

    Purpose: The distal edge tracking (DET) technique in intensity-modulated proton therapy (IMPT) allows for high energy efficiency, fast and simple delivery, and simple inverse treatment planning; however, it is highly sensitive to uncertainties. In this study, the authors explored the application of DET in IMPT (IMPT-DET) and conducted robust optimization of IMPT-DET to see whether the planning technique's sensitivity to uncertainties was reduced. They also compared conventional and robust optimization of IMPT-DET with three-dimensional IMPT (IMPT-3D) to understand how plan robustness is achieved. Methods: They compared the robustness of IMPT-DET and IMPT-3D plans to uncertainties by analyzing plans created for a typical prostate cancer case and a base of skull (BOS) cancer case (using data for patients who had undergone proton therapy at our institution). Spots with the highest and second highest energy layers were chosen so that the Bragg peak would be at the distal edge of the targets in IMPT-DET, using 36 equally spaced beam angles; in IMPT-3D, 3 beams with angles chosen by a beam angle optimization algorithm were planned. Dose contributions for a number of range and setup uncertainties were calculated, and a worst-case robust optimization was performed. A robust quantification technique was used to evaluate the plans' sensitivity to uncertainties. Results: With conventional optimization, the DET method is less robust to uncertainties than the 3D method but offers better normal tissue protection. Robust optimization accounting for range and setup uncertainties can improve the robustness of IMPT plans; however, our findings show the extent of improvement varies. Conclusions: IMPT's sensitivity to uncertainties can be improved by using robust optimization. 
They found two possible mechanisms that made improvements possible: (1) a localized single-field uniform dose distribution (LSFUD) mechanism, in which the optimization algorithm attempts to produce a single-field uniform dose distribution while minimizing the patching field as much as possible; and (2) perturbed dose distribution, which follows the change in anatomical geometry. Multiple-instance optimization has more knowledge of the influence matrices; this greater knowledge improves IMPT plans’ ability to retain robustness despite the presence of uncertainties. PMID:22755694

  16. Optimal structure and parameter learning of Ising models

    DOE PAGES

    Lokhov, Andrey; Vuffray, Marc Denis; Misra, Sidhant; ...

    2018-03-16

    Reconstruction of the structure and parameters of an Ising model from binary samples is a problem of practical importance in a variety of disciplines, ranging from statistical physics and computational biology to image processing and machine learning. The focus of the research community has shifted toward developing universal reconstruction algorithms that are both computationally efficient and require a minimal amount of expensive data. Here, we introduce a new method, interaction screening, which accurately estimates model parameters using local optimization problems. The algorithm provably achieves perfect graph structure recovery with an information-theoretically optimal number of samples, notably in the low-temperature regime, which is known to be the hardest for learning. The efficacy of interaction screening is assessed through extensive numerical tests on synthetic Ising models of various topologies with different types of interactions, as well as on real data produced by a D-Wave quantum computer. Finally, this study shows that the interaction screening method is an exact, tractable, and optimal technique that universally solves the inverse Ising problem.
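
    The interaction screening idea can be illustrated on the smallest possible case, a two-spin Ising model (a toy sketch, not the authors' implementation; the screening objective for spin 1 reduces here to the sample average of exp(-J s1 s2), which is convex in the coupling J, so plain gradient descent recovers it):

```python
import math
import random

def screen_coupling(samples, lr=0.2, steps=400):
    """Estimate the coupling J of a two-spin Ising model by minimizing
    the convex interaction screening objective for spin 1:
        S(J) = average over samples of exp(-J * s1 * s2)."""
    m = len(samples)
    n_aligned = sum(1 for s1, s2 in samples if s1 * s2 == 1)
    n_anti = m - n_aligned
    J = 0.0
    for _ in range(steps):
        # dS/dJ = (-n_aligned * e^{-J} + n_anti * e^{J}) / m
        grad = (-n_aligned * math.exp(-J) + n_anti * math.exp(J)) / m
        J -= lr * grad
    return J

# Draw exact samples from P(s1, s2) proportional to exp(J_true * s1 * s2)
rng = random.Random(42)
J_true = 0.5
p_aligned = math.exp(J_true) / (math.exp(J_true) + math.exp(-J_true))
samples = []
for _ in range(20000):
    s1 = rng.choice([-1, 1])
    s2 = s1 if rng.random() < p_aligned else -s1
    samples.append((s1, s2))

J_hat = screen_coupling(samples)
print(round(J_hat, 2))  # close to J_true = 0.5
```

In the general n-spin method each spin yields one such local convex problem over its incident couplings, which is what makes the estimator tractable.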

  17. Optimal structure and parameter learning of Ising models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lokhov, Andrey; Vuffray, Marc Denis; Misra, Sidhant

    Reconstruction of the structure and parameters of an Ising model from binary samples is a problem of practical importance in a variety of disciplines, ranging from statistical physics and computational biology to image processing and machine learning. The focus of the research community has shifted toward developing universal reconstruction algorithms that are both computationally efficient and require a minimal amount of expensive data. Here, we introduce a new method, interaction screening, which accurately estimates model parameters using local optimization problems. The algorithm provably achieves perfect graph structure recovery with an information-theoretically optimal number of samples, notably in the low-temperature regime, which is known to be the hardest for learning. The efficacy of interaction screening is assessed through extensive numerical tests on synthetic Ising models of various topologies with different types of interactions, as well as on real data produced by a D-Wave quantum computer. Finally, this study shows that the interaction screening method is an exact, tractable, and optimal technique that universally solves the inverse Ising problem.

  18. Local synchronization of chaotic neural networks with sampled-data and saturating actuators.

    PubMed

    Wu, Zheng-Guang; Shi, Peng; Su, Hongye; Chu, Jian

    2014-12-01

    This paper investigates the problem of local synchronization of chaotic neural networks with sampled-data and actuator saturation. A new time-dependent Lyapunov functional is proposed for the synchronization error systems. The advantage of the constructed Lyapunov functional lies in the fact that it is positive definite at sampling times but not necessarily between sampling times, and makes full use of the available information about the actual sampling pattern. A local stability condition of the synchronization error systems is derived, based on which a sampled-data controller with respect to the actuator saturation is designed to ensure that the master neural networks and slave neural networks are locally asymptotically synchronous. Two optimization problems are provided to compute the desired sampled-data controller with the aim of enlarging the set of admissible initial conditions or the admissible sampling upper bound ensuring the local synchronization of the considered chaotic neural networks. A numerical example is used to demonstrate the effectiveness of the proposed design technique.

  19. SpaRibs Geometry Parameterization for Wings with Multiple Sections using Single Design

    NASA Technical Reports Server (NTRS)

    De, Shuvodeep; Jrad, Mohamed; Locatelli, Davide; Kapania, Rakesh K.; Baker, Myles; Pak, Chan-Gi

    2017-01-01

    The SpaRibs topology of an aircraft wing has a significant effect on its structural behavior and stability, as well as on its flutter performance. The development of additive manufacturing techniques like Electron Beam Freeform Fabrication (EBF3) has made it feasible to manufacture aircraft wings with curvilinear spars, ribs (SpaRibs), and stiffeners. This article describes a new global-local optimization framework for a wing with multiple sections using curvilinear SpaRibs. A single design space is used to parameterize the SpaRibs geometry. The method has been implemented using MSC-PATRAN to create a broad range of SpaRibs topologies from a limited number of parameters. It ensures C0 and C1 continuity of the SpaRibs geometry at the junction of two wing sections with a discontinuity in the airfoil thickness gradient, as well as mesh continuity between all structural components. The method is advantageous in complex multidisciplinary optimization owing to its potential to reduce the number of design variables. For the global-local optimization, the local panels are generated by an algorithm based entirely on set algebra over the connectivity matrix data. The great advantage of this method is that it is completely independent of the coordinates of the nodes of the finite element model, and of the order in which the elements appear in the FEM. The code is verified by optimizing the CRM baseline model at trim condition at Mach 0.85 for five angles of attack (-2 deg, 0 deg, 2 deg, 4 deg, and 6 deg). The final weight of the wing is 19,090.61 lb, comparable to that obtained by Qiang et al. [6] (19,269 lb).

  20. Inferring neural activity from BOLD signals through nonlinear optimization.

    PubMed

    Vakorin, Vasily A; Krakovska, Olga O; Borowsky, Ron; Sarty, Gordon E

    2007-11-01

    The blood oxygen level-dependent (BOLD) fMRI signal does not measure neuronal activity directly. This fact is a key concern for interpreting functional imaging data based on BOLD. Mathematical models describing the path from neural activity to the BOLD response allow us to numerically solve the inverse problem of estimating the timing and amplitude of the neuronal activity underlying the BOLD signal. In fact, these models can be viewed as an advanced substitute for the impulse response function. In this work, the issue of estimating the dynamics of neuronal activity from the observed BOLD signal is considered within the framework of optimization problems. The model is based on the extended "balloon" model and describes the conversion of neuronal signals into the BOLD response through the transitional dynamics of the blood flow-inducing signal, cerebral blood flow, cerebral blood volume and deoxyhemoglobin concentration. Global optimization techniques are applied to find a control input (the neuronal activity and/or the biophysical parameters in the model) that causes the system to follow an admissible solution to minimize discrepancy between model and experimental data. As an alternative to a local linearization (LL) filtering scheme, the optimization method escapes the linearization of the transition system and provides a possibility to search for the global optimum, avoiding spurious local minima. We have found that the dynamics of the neural signals and the physiological variables as well as the biophysical parameters can be robustly reconstructed from the BOLD responses. Furthermore, it is shown that spiking off/on dynamics of the neural activity is the natural mathematical solution of the model. Incorporating, in addition, the expansion of the neural input by smooth basis functions, representing a low-pass filtering, allows us to model local field potential (LFP) solutions instead of spiking solutions.

  1. Energy conservation in housing design using solar energy, mechanical system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakir, N.M.W.

    1985-01-01

    This paper presents the first experimental full-scale house built by the Solar Energy Research Center of Baghdad to be heated and cooled by solar energy. The various architectural and environmental considerations which entered into the design process are discussed, as well as the range of passive techniques examined for their compatibility with the local climate and their ability to optimize the energy efficiency of the house. The mechanical systems which were ultimately implemented are described.

  2. Coherent Doppler Lidar for Boundary Layer Studies and Wind Energy

    NASA Astrophysics Data System (ADS)

    Choukulkar, Aditya

    This thesis outlines the development of a vector retrieval technique, based on data assimilation, for a coherent Doppler LIDAR (Light Detection and Ranging). A detailed analysis of the Optimal Interpolation (OI) technique for vector retrieval is presented. Through several modifications to the OI technique, it is shown that the modified technique yields a significant improvement in velocity retrieval accuracy. These modifications include changes to innovation covariance partitioning, covariance binning, and the analysis increment calculation. The modified technique makes retrievals with better accuracy, preserves local information better, and compares well with tower measurements. In order to study the error of representativeness and the vector retrieval error, a lidar simulator was constructed and used to carry out a thorough sensitivity analysis of the lidar measurement process and vector retrieval. The error of representativeness as a function of scales of motion and the sensitivity of vector retrieval to look angle are quantified. Using the modified OI technique, a study of nocturnal flow in Owens Valley, CA was carried out to identify and understand uncharacteristic events on the night of March 27, 2006. Observations from 1030 UTC to 1230 UTC (0230 to 0430 local time) on March 27, 2006 are presented. Lidar observations show complex and uncharacteristic flows, such as sudden bursts of westerly cross-valley wind mixing with the dominant up-valley wind. Model results from the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS RTM) and other in situ instrumentation are used to corroborate and complement these observations. The modified OI technique is also used to identify uncharacteristic and extreme flow events at a wind development site, and estimates of turbulence and shear from this technique are compared to tower measurements. A formulation for equivalent wind speed in the presence of variations in wind speed and direction, combined with shear, is developed and used to determine the wind energy content in the presence of turbulence.
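
    For readers unfamiliar with OI, here is a minimal sketch of the standard (unmodified) optimal interpolation update; the two-beam geometry, error variances, and all numbers are illustrative assumptions, not values from the thesis:

```python
def oi_update(xb, obs, H, sigma_b2, sigma_o2):
    """Standard optimal interpolation update for a 2-component state:
    x_a = x_b + K (y - H x_b), with gain K = B H^T (H B H^T + R)^-1,
    assuming B = sigma_b2 * I and R = sigma_o2 * I (both 2x2)."""
    d = [obs[i] - sum(H[i][j] * xb[j] for j in range(2)) for i in range(2)]  # innovation
    S = [[sigma_b2 * sum(H[i][k] * H[j][k] for k in range(2))
          + (sigma_o2 if i == j else 0.0) for j in range(2)] for i in range(2)]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    Sinv = [[S[1][1] / det, -S[0][1] / det],
            [-S[1][0] / det, S[0][0] / det]]
    K = [[sigma_b2 * sum(H[k][i] * Sinv[k][j] for k in range(2))
          for j in range(2)] for i in range(2)]
    return [xb[i] + sum(K[i][j] * d[j] for j in range(2)) for i in range(2)]

# Two beams looking along the x and y axes observe radial velocities of the
# true wind (u, v) = (5, 3); the background guess is (0, 0) with variance 4,
# observation variance 1, so the analysis moves 4/5 of the way to the data.
H = [[1.0, 0.0], [0.0, 1.0]]       # rows are beam look-direction unit vectors
ua, va = oi_update([0.0, 0.0], [5.0, 3.0], H, sigma_b2=4.0, sigma_o2=1.0)
print(round(ua, 2), round(va, 2))  # -> 4.0 2.4
```

    The thesis's modifications concern how the covariances feeding this update are partitioned and binned, not the update equation itself.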

  3. MUSE: MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters, and locally optimal atlas selection

    PubMed Central

    Ou, Yangming; Resnick, Susan M.; Gur, Ruben C.; Gur, Raquel E.; Satterthwaite, Theodore D.; Furth, Susan; Davatzikos, Christos

    2016-01-01

    Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web based platform for remote processing of medical images. PMID:26679328
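
    A minimal sketch of the locally weighted fusion idea follows; it is a toy stand-in for MUSE's local similarity ranking, and the patch values, Gaussian-style weights, and top-k rule are illustrative assumptions:

```python
import math

def fuse_labels(target_patch, atlas_patches, atlas_labels, top_k=2):
    """Locally weighted label fusion: rank atlases by local patch similarity to
    the target, keep the top_k locally best atlases, and take a similarity-
    weighted vote among their proposed labels."""
    def ssd(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    ranked = sorted(range(len(atlas_patches)),
                    key=lambda i: ssd(target_patch, atlas_patches[i]))[:top_k]
    votes = {}
    for i in ranked:
        w = math.exp(-ssd(target_patch, atlas_patches[i]))  # similarity weight
        votes[atlas_labels[i]] = votes.get(atlas_labels[i], 0.0) + w
    return max(votes, key=votes.get)

# Three warped atlases propose a label for one voxel; the two atlases whose
# local appearance best matches the target both vote for label 1.
target = [0.9, 1.0, 1.1]
patches = [[0.9, 1.0, 1.0], [1.0, 1.0, 1.2], [0.1, 0.2, 0.1]]
labels = [1, 1, 2]
print(fuse_labels(target, patches, labels))  # -> 1
```

    MUSE additionally varies the warping algorithms and parameters that generate the atlas ensemble, and modulates the fused boundary by the target's intensity profile; neither is captured in this sketch.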

  4. Faithful test of nonlocal realism with entangled coherent states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Chang-Woo; Jeong, Hyunseok; Paternostro, Mauro

    2011-02-15

    We investigate the violation of Leggett's inequality for nonlocal realism using entangled coherent states and various types of local measurements. We prove mathematically the relation between the violation of the Clauser-Horne-Shimony-Holt form of Bell's inequality and that of Leggett's inequality when tested with the same resources. For Leggett inequalities, we generalize the nonlocal realistic bound to systems in Hilbert spaces larger than bidimensional ones and introduce an optimization technique that allows one to achieve larger degrees of violation by adjusting the local measurement settings. Our work describes the steps that should be performed to produce a self-consistent generalization of Leggett's original arguments to continuous-variable states.

  5. Fabrication of locally micro-structured fiber Bragg gratings by fs-laser machining

    NASA Astrophysics Data System (ADS)

    Dutz, Franz J.; Stephan, Valentin; Marchi, Gabriele; Koch, Alexander W.; Roths, Johannes; Huber, Heinz P.

    2018-06-01

    Here, we describe a method for producing locally micro-structured fiber Bragg gratings (LMFBGs) by fs-laser machining. This technique enables the precise and reproducible ablation of cladding material to create circumferential grooves in the claddings of optical fibers. From initial ablation experiments we acquired optimized process parameters. The fabricated grooves were located in the middle of uniform type I fiber Bragg gratings. LMFBGs with four different groove widths of 48, 85, 135 and 205 μm were produced. The grooves exhibited constant depths of about 30 μm and steep sidewall angles. By combining micro-structures with fiber Bragg gratings, fiber optic sensor elements with enhanced functionalities can be achieved.

  6. Optimal design of minimum mean-square error noise reduction algorithms using the simulated annealing technique.

    PubMed

    Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan

    2009-02-01

    The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithm. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm, which is well suited to problems with many local optima. Another NR algorithm proposed in the paper employs linear predictive coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to justify the statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.
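
    A minimal sketch of simulated annealing over two parameters is given below; the rippled objective surface, linear cooling schedule, and bounds are illustrative assumptions, not the paper's regression-model objective:

```python
import math, random

def simulated_annealing(f, x0, lo, hi, n_iter=20000, t0=1.0, seed=0):
    """Minimize f over the box [lo, hi]^d by simulated annealing: always accept
    downhill moves, and accept uphill moves with probability exp(-delta/T) so
    the search can escape local optima while the temperature T cools."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    for k in range(n_iter):
        T = t0 * (1.0 - k / n_iter) + 1e-9          # linear cooling schedule
        cand = [min(hi, max(lo, xi + rng.gauss(0.0, 0.1))) for xi in x]
        fc = f(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / T):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
    return best, fbest

# Toy stand-in for the two MMSE-TRA recursion parameters: a rippled bowl with
# many local optima and a global minimum of -0.1 at (0.7, 0.3).
def objective(p):
    return sum((v - c) ** 2 - 0.05 * math.cos(20 * (v - c))
               for v, c in zip(p, (0.7, 0.3)))

params, score = simulated_annealing(objective, [0.1, 0.9], 0.0, 1.0)
print(round(score, 2))  # best value found; the global minimum is -0.1
```

    The uphill-acceptance rule is what distinguishes annealing from plain descent on a surface like this one, where a greedy search started at a ripple would stall.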

  7. Optimization of Residual Stresses in MMC's Using Compensating/Compliant Interfacial Layers. Part 2: OPTCOMP User's Guide

    NASA Technical Reports Server (NTRS)

    Pindera, Marek-Jerzy; Salzar, Robert S.; Williams, Todd O.

    1994-01-01

    A user's guide for the computer program OPTCOMP is presented in this report. This program provides a capability to optimize the fabrication or service-induced residual stresses in uni-directional metal matrix composites subjected to combined thermo-mechanical axisymmetric loading using compensating or compliant layers at the fiber/matrix interface. The user specifies the architecture and the initial material parameters of the interfacial region, which can be either elastic or elastoplastic, and defines the design variables, together with the objective function, the associated constraints and the loading history through a user-friendly data input interface. The optimization procedure is based on an efficient solution methodology for the elastoplastic response of an arbitrarily layered multiple concentric cylinder model that is coupled to the commercial optimization package DOT. The solution methodology for the arbitrarily layered cylinder is based on the local-global stiffness matrix formulation and Mendelson's iterative technique of successive elastic solutions developed for elastoplastic boundary-value problems. The optimization algorithm employed in DOT is based on the method of feasible directions.

  8. Reirradiation of head and neck cancer using modern highly conformal techniques.

    PubMed

    Ho, Jennifer C; Phan, Jack

    2018-04-23

    Locoregional disease recurrence or development of a second primary cancer after definitive radiotherapy for head and neck cancers remains a treatment challenge. Reirradiation utilizing traditional techniques has been limited by concern for serious toxicity. With the advent of newer, more precise radiotherapy techniques, such as intensity-modulated radiotherapy (IMRT), proton radiotherapy, and stereotactic body radiotherapy (SBRT), there has been renewed interest in curative-intent head and neck reirradiation. However, as most studies were retrospective, single-institutional experiences, the optimal modality is not clear. We provide a comprehensive review of the outcomes of relevant studies using these 3 head and neck reirradiation techniques, followed by an analysis and comparison of the toxicity, tumor control, concurrent systemic therapy, and prognostic factors. Overall, there is evidence that IMRT, proton therapy, and SBRT reirradiation are feasible treatment options that offer a chance for durable local control and survival. Prospective studies, particularly randomized trials, are needed. © 2018 Wiley Periodicals, Inc.

  9. Parameter Selection and Performance Comparison of Particle Swarm Optimization in Sensor Networks Localization.

    PubMed

    Cui, Huanqing; Shu, Minglei; Song, Min; Wang, Yinglong

    2017-03-01

    Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors' memory, computational constraints, and limited energy, particle swarm optimization has been widely applied in the localization of wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen carefully to achieve the best performance. However, there is a lack of guidance on how to choose these variants and parameters. Further, there is no comprehensive performance comparison among particle swarm optimization algorithms. The main contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Second, it presents parameter selection of nine particle swarm optimization variants and six types of swarm topologies by extensive simulations. Third, it comprehensively compares the performance of these algorithms. The results show that the particle swarm optimization with constriction coefficient using ring topology outperforms other variants and swarm topologies, and it performs better than the second-order cone programming algorithm.
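
    The best-performing configuration reported above can be sketched as follows. The constriction coefficient chi = 0.7298 with c1 = c2 = 2.05 is the standard Clerc-Kennedy setting; the three-anchor range-based objective, bounds, and swarm size are illustrative assumptions:

```python
import random

def pso_ring(f, dim, n=24, iters=400, lo=-5.0, hi=5.0, seed=1):
    """Particle swarm optimization with Clerc's constriction coefficient
    (chi = 0.7298, c1 = c2 = 2.05) and a ring topology: each particle follows
    the best position found in its neighbourhood {i-1, i, i+1} on the ring."""
    rng = random.Random(seed)
    chi, c = 0.7298, 2.05
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    for _ in range(iters):
        for i in range(n):
            nb = min(((i - 1) % n, i, (i + 1) % n), key=lambda q: pval[q])
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = chi * (vel[i][d]
                                   + c * r1 * (pbest[i][d] - pos[i][d])
                                   + c * r2 * (pbest[nb][d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
    k = min(range(n), key=lambda q: pval[q])
    return pbest[k], pval[k]

# Toy range-based localization: squared range errors to three anchors whose
# exact distances to an unknown node at (2, 3) are known.
anchors = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
dists = [13.0 ** 0.5, 18.0 ** 0.5, 8.0 ** 0.5]
def err(p):
    return sum((((p[0] - ax) ** 2 + (p[1] - ay) ** 2) ** 0.5 - d) ** 2
               for (ax, ay), d in zip(anchors, dists))
best, val = pso_ring(err, dim=2)
print([round(v, 2) for v in best])  # converges near the true node (2, 3)
```

    The ring topology slows information flow through the swarm relative to a fully connected (global-best) topology, which is precisely what helps it avoid premature convergence on multimodal localization surfaces.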

  10. Parameter Selection and Performance Comparison of Particle Swarm Optimization in Sensor Networks Localization

    PubMed Central

    Cui, Huanqing; Shu, Minglei; Song, Min; Wang, Yinglong

    2017-01-01

    Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors’ memory, computational constraints, and limited energy, particle swarm optimization has been widely applied in the localization of wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen carefully to achieve the best performance. However, there is a lack of guidance on how to choose these variants and parameters. Further, there is no comprehensive performance comparison among particle swarm optimization algorithms. The main contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Second, it presents parameter selection of nine particle swarm optimization variants and six types of swarm topologies by extensive simulations. Third, it comprehensively compares the performance of these algorithms. The results show that the particle swarm optimization with constriction coefficient using ring topology outperforms other variants and swarm topologies, and it performs better than the second-order cone programming algorithm. PMID:28257060

  11. Computing the Partition Function for Kinetically Trapped RNA Secondary Structures

    PubMed Central

    Lorenz, William A.; Clote, Peter

    2011-01-01

    An RNA secondary structure is locally optimal if there is no lower-energy structure that can be obtained by the addition or removal of a single base pair, where energy is defined according to the widely accepted Turner nearest-neighbor model. Locally optimal structures form kinetic traps, since any evolution away from a locally optimal structure must involve energetically unfavorable folding steps. Here, we present a novel, efficient algorithm to compute the partition function over all locally optimal secondary structures of a given RNA sequence. Our software, RNAlocopt, runs in polynomial time and space. Additionally, RNAlocopt samples a user-specified number of structures from the Boltzmann subensemble of all locally optimal structures. We apply RNAlocopt to show that (1) the number of locally optimal structures is far smaller than the total number of structures – indeed, the number of locally optimal structures is approximately the square root of the number of all structures, (2) the structural diversity of this subensemble may be either similar to or quite different from the structural diversity of the entire Boltzmann ensemble, a situation that depends on the type of input RNA, and (3) the (modified) maximum expected accuracy structure, computed by taking into account base pairing frequencies of locally optimal structures, is a more accurate prediction of the native structure than other current thermodynamics-based methods. The software RNAlocopt constitutes a technical breakthrough in our study of the folding landscape for RNA secondary structures. For the first time, locally optimal structures (kinetic traps in the Turner energy model) can be rapidly generated for long RNA sequences, something previously impossible with methods that relied on exhaustive enumeration. Use of locally optimal structures leads to state-of-the-art secondary structure prediction, as benchmarked against methods involving the computation of minimum free energy and of maximum expected accuracy.
Web server and source code available at http://bioinformatics.bc.edu/clotelab/RNAlocopt/. PMID:21297972
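
    The notion of local optimality can be made concrete with a toy model. The sketch below replaces the Turner model with a simplified energy of -1 per base pair (so removing a pair always raises energy, and local optimality reduces to "no pair can be added") and brute-forces a short sequence; this is nothing like RNAlocopt's efficient algorithm, but it illustrates that locally optimal structures form a small subset of all structures:

```python
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def enum_structs(seq, i=0, j=None, min_loop=3):
    """Enumerate every secondary structure of seq[i:j] as a frozenset of index
    pairs: position i is either unpaired or paired with some k (non-crossing
    by construction, with at least min_loop unpaired bases in each hairpin)."""
    if j is None:
        j = len(seq)
    if i >= j:
        return [frozenset()]
    out = list(enum_structs(seq, i + 1, j, min_loop))      # i unpaired
    for k in range(i + min_loop + 1, j):                   # i paired with k
        if (seq[i], seq[k]) in PAIRS:
            for inner in enum_structs(seq, i + 1, k, min_loop):
                for outer in enum_structs(seq, k + 1, j, min_loop):
                    out.append(inner | outer | {(i, k)})
    return out

def locally_optimal(seq, all_structs):
    """Toy energy: -1 per base pair, so removing a pair always raises energy and
    a structure is locally optimal iff no single pair can be added."""
    universe = set(all_structs)
    cands = [(i, j) for i in range(len(seq)) for j in range(i + 1, len(seq))]
    return [s for s in all_structs
            if not any(p not in s and (s | {p}) in universe for p in cands)]

seq = "GGGAAAUCCC"
alls = enum_structs(seq)
locos = locally_optimal(seq, alls)
print(len(locos), len(alls))  # locally optimal structures are a small fraction
```

    Brute force only works at this toy scale; RNAlocopt's contribution is computing the partition function over exactly this kind of subensemble without enumerating it.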

  12. Distributed Optimal Consensus Over Resource Allocation Network and Its Application to Dynamical Economic Dispatch.

    PubMed

    Li, Chaojie; Yu, Xinghuo; Huang, Tingwen; He, Xing

    2018-06-01

    The resource allocation problem is studied and reformulated by a distributed interior point method via a logarithmic barrier. Facilitated by the graph Laplacian, a fully distributed continuous-time multiagent system is developed for solving the problem. Specifically, to avoid the high singularity of the logarithmic barrier at the boundary, an adaptive parameter switching strategy is introduced into this dynamical multiagent system. The convergence rate of the distributed algorithm is obtained. Moreover, a novel distributed primal-dual dynamical multiagent system is designed in a smart grid scenario to seek the saddle point of dynamical economic dispatch, which coincides with the optimal solution. The dual decomposition technique is applied to transform the optimization problem into easily solvable resource allocation subproblems with local inequality constraints. The good performance of the new dynamical systems is verified by a numerical example and IEEE six-bus test system-based simulations, respectively.
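
    A minimal centralized sketch of the dual decomposition step for economic dispatch follows; quadratic costs and a single power-balance constraint are assumed, the coefficients, limits, and step size are illustrative, and the paper's continuous-time distributed dynamics are replaced by a simple discrete dual ascent:

```python
def dispatch(costs, lims, demand, iters=2000, step=0.05):
    """Dual decomposition for economic dispatch: minimize sum_i a_i p_i^2 + b_i p_i
    subject to sum_i p_i = demand and lo_i <= p_i <= hi_i. Given the price lam,
    each generator solves its local subproblem in closed form; the price then
    ascends on the power-balance residual (dual ascent)."""
    lam = 0.0
    for _ in range(iters):
        p = [min(hi, max(lo, (lam - b) / (2 * a)))   # local primal updates
             for (a, b), (lo, hi) in zip(costs, lims)]
        lam += step * (demand - sum(p))              # dual (price) update
    return p, lam

costs = [(0.1, 2.0), (0.2, 1.0), (0.05, 3.0)]  # (a_i, b_i) for each generator
lims = [(0.0, 30.0)] * 3
p, price = dispatch(costs, lims, demand=50.0)
print(round(sum(p), 2))  # -> 50.0 (generation meets demand at a common price)
```

    At the fixed point every unconstrained generator operates at the same marginal cost 2*a_i*p_i + b_i = lam, which is the saddle-point condition the paper's primal-dual dynamics seek in a distributed fashion.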

  13. Contaminant point source localization error estimates as functions of data quantity and model quality

    DOE PAGES

    Hansen, Scott K.; Vesselinov, Velimir Valentinov

    2016-10-01

    We develop empirically grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that the flow direction in the aquifer is known exactly and that the velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation (ADE). We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of the 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of the ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantity can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.
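
    The multiple-initial-guess strategy can be sketched as follows; the rippled misfit surface, the pattern-search local optimizer, and the grid of starting points are all illustrative assumptions, not the paper's actual optimization setup:

```python
import math

def local_search(f, x0, step=0.5, tol=1e-6):
    """Gradient-free pattern search: probe +/-step along each axis, accept any
    improvement, and halve the step when a full sweep finds none."""
    x, fx = list(x0), f(x0)
    while step > tol:
        improved = False
        for d in range(len(x)):
            for s in (step, -step):
                y = list(x)
                y[d] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5
    return x, fx

def multi_start(f, bounds, n_per_axis=5):
    """Multiple-initial-guess strategy: run the local search from a coarse grid
    of starting points and keep the best optimum found across all runs."""
    (xlo, xhi), (ylo, yhi) = bounds
    best, fbest = None, float("inf")
    for i in range(n_per_axis):
        for j in range(n_per_axis):
            x0 = [xlo + i * (xhi - xlo) / (n_per_axis - 1),
                  ylo + j * (yhi - ylo) / (n_per_axis - 1)]
            x, fx = local_search(f, x0)
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# Toy misfit surface standing in for the breakthrough-data fit: a bowl centred
# on the true source at (3, 1) plus ripples that create spurious local minima.
def misfit(p):
    x, y = p
    return (x - 3) ** 2 + (y - 1) ** 2 + 2 * math.sin(3 * x) ** 2 * math.sin(3 * y) ** 2

src, val = multi_start(misfit, [(-5.0, 5.0), (-5.0, 5.0)])
print(round(src[0]), round(src[1]))  # -> 3 1 (a single start can stall in a ripple)
```

    Restarting is cheap relative to re-drilling wells, which matches the paper's finding that the multi-start strategy improved space-time localization more than extra data collection did.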

  14. A trajectory planning scheme for spacecraft in the space station environment. M.S. Thesis - University of California

    NASA Technical Reports Server (NTRS)

    Soller, Jeffrey Alan; Grunwald, Arthur J.; Ellis, Stephen R.

    1991-01-01

    Simulated annealing is used to solve a minimum fuel trajectory problem in the space station environment. The environment is special because the space station will define a multivehicle environment in space. The optimization surface is a complex nonlinear function of the initial conditions of the chase and target craft. Small perturbations in the input conditions can result in abrupt changes to the optimization surface. Since no prior knowledge about the number or location of local minima on the surface is available, the optimization must be capable of functioning on a multimodal surface. It has been reported in the literature that the simulated annealing algorithm is more effective on such surfaces than descent techniques using random starting points. The simulated annealing optimization was found to be capable of identifying a minimum fuel, two-burn trajectory subject to four constraints, which are integrated into the optimization using a barrier method. The computations required to solve the optimization are fast enough that missions could be planned on board the space station. Potential applications for onboard planning of missions are numerous. Future research topics may include optimal planning of multi-waypoint maneuvers using a knowledge base to guide the optimization, and a study aimed at developing robust annealing schedules for potential onboard missions.
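
    A one-dimensional sketch of the barrier idea used to fold constraints into the objective is shown below; the "fuel" objective, the single constraint, and the ternary-search inner solver are illustrative assumptions, not the thesis's four-constraint trajectory problem:

```python
import math

def ternary_min(h, lo, hi, iters=200):
    """Minimize a unimodal function on [lo, hi] by ternary search."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if h(m1) < h(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

def barrier_solve(f, g, lo, hi, mus=(1.0, 0.1, 0.01, 1e-3, 1e-4)):
    """Barrier method: fold the inequality constraint g(x) <= 0 into the
    objective as -mu*log(-g(x)) and re-minimize while shrinking mu, so the
    minimizers approach the constrained optimum from the feasible interior."""
    x = None
    for mu in mus:
        def h(t, mu=mu):
            gx = g(t)
            return float("inf") if gx >= 0 else f(t) - mu * math.log(-gx)
        x = ternary_min(h, lo, hi)
    return x

# Toy "fuel" objective: minimize f(x) = x subject to g(x) = 1 - x <= 0.
# Each barrier solve lands at x = 1 + mu, approaching the optimum x = 1.
x = barrier_solve(lambda v: v, lambda v: 1.0 - v, lo=1.0 + 1e-9, hi=10.0)
print(round(x, 3))  # -> 1.0
```

    Because the barrier blows up at the constraint boundary, every inner solve stays strictly feasible, which is exactly the property that lets the barrier terms be added to an annealing objective without special constraint handling.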

  15. Tempest - Efficient Computation of Atmospheric Flows Using High-Order Local Discretization Methods

    NASA Astrophysics Data System (ADS)

    Ullrich, P. A.; Guerra, J. E.

    2014-12-01

    The Tempest Framework composes several compact numerical methods to easily facilitate intercomparison of atmospheric flow calculations on the sphere and in rectangular domains. This framework includes the implementations of Spectral Elements, Discontinuous Galerkin, Flux Reconstruction, and Hybrid Finite Element methods with the goal of achieving optimal accuracy in the solution of atmospheric problems. Several advantages of this approach are discussed such as: improved pressure gradient calculation, numerical stability by vertical/horizontal splitting, arbitrary order of accuracy, etc. The local numerical discretization allows for high performance parallel computation and efficient inclusion of parameterizations. These techniques are used in conjunction with a non-conformal, locally refined, cubed-sphere grid for global simulations and standard Cartesian grids for simulations at the mesoscale. A complete implementation of the methods described is demonstrated in a non-hydrostatic setting.

  16. Workshop on Measurement Needs for Local-Structure Determination in Inorganic Materials

    PubMed Central

    Levin, Igor; Vanderah, Terrell

    2008-01-01

    The functional responses (e.g., dielectric, magnetic, catalytic, etc.) of many industrially-relevant materials are controlled by their local structure—a term that refers to the atomic arrangements on a scale ranging from atomic (sub-nanometer) to several nanometers. Thus, accurate knowledge of local structure is central to understanding the properties of nanostructured materials, thereby placing the problem of determining atomic positions on the nanoscale—the so-called “nanostructure problem”—at the center of modern materials development. Today, multiple experimental techniques exist for probing local atomic arrangements; nonetheless, finding accurate, comprehensive, and robust structural solutions for nanostructured materials still remains a formidable challenge because any one of these methods yields only a partial view of the local structure. The primary goal of this 2-day NIST-sponsored workshop was to bring together experts in the key experimental and theoretical areas relevant to local-structure determination to devise a strategy for the collaborative effort required to develop a comprehensive measurement solution on the local scale. The participants unanimously agreed that solving the nanostructure problem—an ultimate frontier in materials characterization—necessitates a coordinated interdisciplinary effort that transcends the existing capabilities of any single institution, including national laboratories, centers, and user facilities. The discussions converged on an institute dedicated to local-structure determination as the most viable organizational platform for successfully addressing the nanostructure problem. The proposed “institute” would provide an intellectual infrastructure for local-structure determination by (1) developing and maintaining relevant computer software integrated in an open-source global optimization framework (Fig. 2), (2) connecting industrial and academic users with experts in measurement techniques, (3) developing and maintaining pertinent databases, and (4) providing necessary education and training. PMID:27096131

  17. Rapid inverse planning for pressure-driven drug infusions in the brain.

    PubMed

    Rosenbluth, Kathryn H; Martin, Alastair J; Mittermeyer, Stephan; Eschermann, Jan; Dickinson, Peter J; Bankiewicz, Krystof S

    2013-01-01

    Infusing drugs directly into the brain is advantageous compared with oral or intravenous delivery for large molecules or for drugs requiring high local concentrations with low off-target exposure. However, surgeons manually planning the cannula position for drug delivery in the brain face a challenging three-dimensional visualization task. This study presents an intuitive inverse-planning technique to identify the optimal placement that maximizes coverage of the target structure while minimizing the potential for leakage outside the target. The technique was retrospectively validated using intraoperative magnetic resonance imaging of infusions into the striatum of non-human primates and into a tumor in a canine model, and applied prospectively to upcoming human clinical trials.

  18. A relative performance analysis of atmospheric Laser Doppler Velocimeter methods.

    NASA Technical Reports Server (NTRS)

    Farmer, W. M.; Hornkohl, J. O.; Brayton, D. B.

    1971-01-01

    Evaluation of the effectiveness of atmospheric applications of a Laser Doppler Velocimeter (LDV) at a wavelength of about 0.5 micrometer in conjunction with dual scatter LDV illuminating techniques, or at a wavelength of 10.6 micrometer with local oscillator LDV illuminating techniques. Equations and examples are given to provide a quantitative basis for LDV system selection and performance criteria in atmospheric research. The comparative study shows that specific ranges and conditions exist where performance of one of the methods is superior to that of the other. It is also pointed out that great care must be exercised in choosing system parameters that optimize a particular LDV designed for atmospheric applications.

  19. Forecasting outpatient visits using empirical mode decomposition coupled with back-propagation artificial neural networks optimized by particle swarm optimization

    PubMed Central

    Huang, Daizheng; Wu, Zhihui

    2017-01-01

    Accurately predicting the trend of outpatient visits by mathematical modeling can help policy makers manage hospitals effectively, reasonably organize schedules for human resources and finances, and appropriately distribute hospital material resources. In this study, a hybrid method based on empirical mode decomposition and back-propagation artificial neural networks optimized by particle swarm optimization is developed to forecast outpatient visits on the basis of monthly numbers. The outpatient-visit data, retrieved from January 2005 to December 2013, are first taken as the original time series. Second, the original time series is decomposed into a finite and often small number of intrinsic mode functions by the empirical mode decomposition technique. Third, a three-layer back-propagation artificial neural network is constructed to forecast each intrinsic mode function. To improve network performance and avoid falling into a local minimum, particle swarm optimization is employed to optimize the weights and thresholds of the back-propagation artificial neural networks. Finally, the superposition of the forecasting results of the intrinsic mode functions is taken as the ultimate forecasting value. Simulation indicates that the proposed method attains a better performance index than the other four methods. PMID:28222194

  20. Forecasting outpatient visits using empirical mode decomposition coupled with back-propagation artificial neural networks optimized by particle swarm optimization.

    PubMed

    Huang, Daizheng; Wu, Zhihui

    2017-01-01

    Accurately predicting the trend of outpatient visits by mathematical modeling can help policy makers manage hospitals effectively, reasonably organize schedules for human resources and finances, and appropriately distribute hospital material resources. In this study, a hybrid method based on empirical mode decomposition and back-propagation artificial neural networks optimized by particle swarm optimization is developed to forecast outpatient visits on the basis of monthly numbers. The outpatient-visit data, retrieved from January 2005 to December 2013, are first taken as the original time series. Second, the original time series is decomposed into a finite and often small number of intrinsic mode functions by the empirical mode decomposition technique. Third, a three-layer back-propagation artificial neural network is constructed to forecast each intrinsic mode function. To improve network performance and avoid falling into a local minimum, particle swarm optimization is employed to optimize the weights and thresholds of the back-propagation artificial neural networks. Finally, the superposition of the forecasting results of the intrinsic mode functions is taken as the ultimate forecasting value. Simulation indicates that the proposed method attains a better performance index than the other four methods.

  1. Enhanced Energy Localization in Hyperthermia Treatment Based on Hybrid Electromagnetic and Ultrasonic System: Proof of Concept with Numerical Simulations.

    PubMed

    Nizam-Uddin, N; Elshafiey, Ibrahim

    2017-01-01

    This paper proposes a hybrid hyperthermia treatment system, utilizing two noninvasive modalities for treating brain tumors. The proposed system depends on focusing electromagnetic (EM) and ultrasound (US) energies. The EM hyperthermia subsystem enhances energy localization by incorporating a multichannel wideband setting and coherent-phased-array technique. A genetic algorithm based optimization tool is developed to enhance the specific absorption rate (SAR) distribution by reducing hotspots and maximizing energy deposition at tumor regions. The treatment performance is also enhanced by augmenting an ultrasonic subsystem to allow focused energy deposition into deep tumors. The therapeutic faculty of ultrasonic energy is assessed by examining the control of mechanical alignment of transducer array elements. A time reversal (TR) approach is then investigated to address challenges in energy focus in both subsystems. Simulation results of the synergetic effect of both modalities assuming a simplified model of human head phantom demonstrate the feasibility of the proposed hybrid technique as a noninvasive tool for thermal treatment of brain tumors.

  2. Enhanced Energy Localization in Hyperthermia Treatment Based on Hybrid Electromagnetic and Ultrasonic System: Proof of Concept with Numerical Simulations

    PubMed Central

    Elshafiey, Ibrahim

    2017-01-01

    This paper proposes a hybrid hyperthermia treatment system, utilizing two noninvasive modalities for treating brain tumors. The proposed system depends on focusing electromagnetic (EM) and ultrasound (US) energies. The EM hyperthermia subsystem enhances energy localization by incorporating a multichannel wideband setting and coherent-phased-array technique. A genetic algorithm based optimization tool is developed to enhance the specific absorption rate (SAR) distribution by reducing hotspots and maximizing energy deposition at tumor regions. The treatment performance is also enhanced by augmenting an ultrasonic subsystem to allow focused energy deposition into deep tumors. The therapeutic faculty of ultrasonic energy is assessed by examining the control of mechanical alignment of transducer array elements. A time reversal (TR) approach is then investigated to address challenges in energy focus in both subsystems. Simulation results of the synergetic effect of both modalities assuming a simplified model of human head phantom demonstrate the feasibility of the proposed hybrid technique as a noninvasive tool for thermal treatment of brain tumors. PMID:28840125

  3. The GISS sounding temperature impact test

    NASA Technical Reports Server (NTRS)

    Halem, M.; Ghil, M.; Atlas, R.; Susskind, J.; Quirk, W. J.

    1978-01-01

    The impact of DST 5 and DST 6 satellite sounding data on mid-range forecasting was studied. The GISS temperature sounding technique, the GISS time-continuous four-dimensional assimilation procedure based on optimal statistical analysis, the GISS forecast model, and the verification techniques developed, including the impact on local precipitation forecasts, are described. It is found that the impact of sounding data was substantial and beneficial for the winter test period, Jan. 29-Feb. 21, 1976. Forecasts started from initial states obtained with the aid of satellite data showed a mean improvement of about 4 points in the 48- and 72-hour S1 scores as verified over North America and Europe. This corresponds to an 8- to 12-hour improvement in the forecast range at 48 hours. An automated local precipitation forecast model applied to 128 cities in the United States showed on average a 15% improvement when satellite data were used for numerical forecasts. The improvement was 75% in the Midwest.

  4. Hybrid water flow-like algorithm with Tabu search for traveling salesman problem

    NASA Astrophysics Data System (ADS)

    Bostamam, Jasmin M.; Othman, Zulaiha

    2016-08-01

    This paper presents a hybrid Water Flow-like Algorithm with Tabu Search for solving the travelling salesman problem (WFA-TS-TSP). WFA has proven its outstanding performance in solving the TSP, while TS is a conventional algorithm that has been used for decades to solve various combinatorial optimization problems, including the TSP. Hybridizing WFA with TS provides a better balance between exploration and exploitation, the key elements in determining the performance of a metaheuristic. TS uses two different local searches, namely 2-opt and 3-opt, separately. The proposed WFA-TS-TSP is tested on 23 of the well-known benchmark symmetric TSP instances. The results show that the proposed WFA-TS-TSP yields significantly better-quality solutions than WFA alone, and that WFA-TS-TSP with 3-opt obtains the best-quality solutions. From these results, it can be concluded that WFA has the potential to be further improved through hybridization or better local search techniques.
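
    The 2-opt local search used inside such TSP metaheuristics can be sketched generically as follows; this is an illustration with hypothetical helper names, not the authors' WFA-TS implementation.

```python
import math

def tour_length(tour, pts):
    """Length of the closed tour visiting 2-D points in the given order."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """2-opt local search: reverse a segment whenever doing so shortens
    the tour; stop at a local optimum."""
    best = list(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 1):
            for j in range(i + 1, len(best)):
                cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
                if tour_length(cand, pts) < tour_length(best, pts) - 1e-12:
                    best, improved = cand, True
    return best
```

    3-opt follows the same pattern but removes three edges per move, giving a larger neighborhood at higher cost per iteration.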

  5. A class of systolizable IIR digital filters and its design for proper scaling and minimum output roundoff noise

    NASA Technical Reports Server (NTRS)

    Lei, Shaw-Min; Yao, Kung

    1990-01-01

    A class of infinite impulse response (IIR) digital filters with a systolizable structure is proposed and its synthesis is investigated. The systolizable structure consists of pipelineable regular modules with local connections and is suitable for VLSI implementation. It is capable of achieving high performance as well as high throughput. This class of filter structure provides certain degrees of freedom that can be used to obtain some desirable properties for the filter. Techniques of evaluating the internal signal powers and the output roundoff noise of the proposed filter structure are developed. Based upon these techniques, a well-scaled IIR digital filter with minimum output roundoff noise is designed using a local optimization approach. The internal signals of all the modes of this filter are scaled to unity in the l2-norm sense. Compared to the Rao-Kailath (1984) orthogonal digital filter and the Gray-Markel (1973) normalized-lattice digital filter, this filter has better scaling properties and lower output roundoff noise.

  6. Fast Gaussian kernel learning for classification tasks based on specially structured global optimization.

    PubMed

    Zhong, Shangping; Chen, Tianshun; He, Fengying; Niu, Yuzhen

    2014-09-01

    For a practical pattern classification task solved by kernel methods, the computing time is mainly spent on kernel learning (or training). However, current kernel learning approaches are based on local optimization techniques and struggle to achieve good time performance, especially for large datasets, so existing algorithms cannot be easily extended to large-scale tasks. In this paper, we present a fast Gaussian kernel learning method by solving a specially structured global optimization (SSGO) problem. We optimize the Gaussian kernel function using the formulated kernel target alignment criterion, which is a difference of increasing (d.i.) functions. Through a power-transformation based convexification method, the objective criterion can be represented as a difference of convex (d.c.) functions with a fixed power-transformation parameter, and the objective programming problem can then be converted to an SSGO problem: globally minimizing a concave function over a convex set. The SSGO problem is classical and has good solvability. Thus, to find the global optimal solution efficiently, we can adopt the improved Hoffman's outer approximation method, which does not need to repeat the search procedure from different starting points to locate the best local minimum. The proposed method can also be proven to converge to the global solution for any classification task. We evaluate the proposed method on twenty benchmark datasets and compare it with four other Gaussian kernel learning methods. Experimental results show that the proposed method stably achieves both good time efficiency and good classification performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
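
    In its basic form, the kernel target alignment criterion referred to above is the normalized Frobenius inner product between the kernel Gram matrix and the ideal target yy^T. A minimal sketch of that criterion (a generic illustration, not the paper's SSGO solver):

```python
import numpy as np

def gaussian_kernel(X, sigma):
    """Gram matrix of k(x, z) = exp(-||x - z||^2 / (2 sigma^2))."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def alignment(K, y):
    """Kernel-target alignment <K, y y^T>_F / (||K||_F ||y y^T||_F)."""
    Y = np.outer(y, y)
    return float((K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y)))
```

    A kernel width that separates the classes yields alignment near 1; a width that blurs them together drives it toward 0, which is what the global optimization over sigma exploits.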

  7. Staging Liver Fibrosis with Statistical Observers

    NASA Astrophysics Data System (ADS)

    Brand, Jonathan Frieman

    Chronic liver disease is a worldwide health problem, and hepatic fibrosis (HF) is one of the hallmarks of the disease. Pathology diagnosis of HF is based on textural change in the liver as a lobular collagen network develops within portal triads. The scale of collagen lobules is characteristically on the order of 1 mm, which is close to the resolution limit of in vivo Gd-enhanced MRI. In this work, methods to collect training and testing images for a Hotelling observer are covered. An observer based on local texture analysis is trained and tested using wet-tissue phantoms, and the technique is used to optimize the MRI sequence based on task performance. The final method developed is a two-stage model observer that classifies fibrotic and healthy tissue in both phantoms and in vivo MRI images. The first-stage observer tests for the presence of local texture. Test statistics from the first observer are used to train the second-stage observer, which globally samples the local observer results. A decision on the disease class is made for an entire MRI image slice using test statistics collected from the second observer. The techniques are tested on wet-tissue phantoms and in vivo clinical patient data.
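
    For orientation, a basic single-stage Hotelling observer computes the template w = S^-1 (m1 - m0) from the class means and the average within-class covariance, then scores an image by w.x. The sketch below follows that textbook definition, not the two-stage observer developed in the thesis; the toy data are an assumption for illustration.

```python
import numpy as np

def hotelling_template(signal_imgs, absent_imgs):
    """Hotelling template w = S^-1 (m1 - m0), with S the average
    within-class covariance; images are flattened feature vectors."""
    m1, m0 = signal_imgs.mean(axis=0), absent_imgs.mean(axis=0)
    S = 0.5 * (np.cov(signal_imgs.T) + np.cov(absent_imgs.T))
    S += 1e-8 * np.eye(S.shape[0])  # small ridge for numerical stability
    return np.linalg.solve(S, m1 - m0)
```

    The scalar test statistic w.x is then thresholded (or summarized via ROC/AUC) to measure task performance, which is the quantity the MRI sequence is optimized against.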

  8. Profile shape optimization in multi-jet impingement cooling of dimpled topologies for local heat transfer enhancement

    NASA Astrophysics Data System (ADS)

    Negi, Deepchand Singh; Pattamatta, Arvind

    2015-04-01

    The present study deals with shape optimization of dimples on the target surface in multi-jet impingement heat transfer. A Bezier polynomial formulation is incorporated to generate shapes for the dimple profile, and a multi-objective optimization is performed. The optimized dimple shape exhibits higher local Nusselt number values than the reference hemispherical dimpled plate, and can be used to alleviate local temperature hot spots on the target surface.
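
    A Bezier profile of the kind used for dimple shape generation is a Bernstein-weighted sum of control points. A minimal evaluator (generic illustration; the study's actual control-point parameterization is not specified here):

```python
import math

def bezier_point(ctrl, t):
    """Evaluate a Bezier curve (Bernstein form) at parameter t in [0, 1].

    ctrl is a list of control points, each a tuple of coordinates.
    """
    n = len(ctrl) - 1
    return tuple(
        sum(math.comb(n, i) * (1.0 - t) ** (n - i) * t ** i * p[k]
            for i, p in enumerate(ctrl))
        for k in range(len(ctrl[0])))
```

    Moving the interior control points reshapes the profile smoothly while the endpoints stay fixed, which is what makes the formulation convenient for shape optimization.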

  9. Intramedullary Fixation of Midshaft Clavicle Fractures.

    PubMed

    Fritz, Erik M; van der Meijden, Olivier A; Hussain, Zaamin B; Pogorzelski, Jonas; Millett, Peter J

    2017-08-01

    Clavicle fractures are among the most common fractures occurring in the general population, and the vast majority are localized in the midshaft portion of the bone. Management of midshaft clavicle fractures remains controversial. Although many can be managed nonoperatively, certain patient populations and fracture patterns, such as completely displaced and shortened fractures, are at risk of less optimal outcomes with nonoperative management; surgical intervention should be considered in such cases. The purpose of this article is to demonstrate our technique of midshaft clavicle fixation using minimally invasive intramedullary fixation.

  10. Peak-to-average power ratio reduction in orthogonal frequency division multiplexing-based visible light communication systems using a modified partial transmit sequence technique

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Deng, Honggui; Ren, Shuang; Tang, Chengying; Qian, Xuewen

    2018-01-01

    We propose an efficient partial transmit sequence technique based on a genetic algorithm and a peak-value optimization algorithm (GAPOA) to reduce the high peak-to-average power ratio (PAPR) in visible light communication systems based on orthogonal frequency division multiplexing (VLC-OFDM). After analyzing the pros and cons of the hill-climbing algorithm, we propose the POA, with excellent local search ability, to further process the signals whose PAPR is still over the threshold after being processed by the genetic algorithm (GA). To verify the effectiveness of the proposed technique and algorithm, we evaluate the PAPR performance and the bit error rate (BER) performance and compare them with the partial transmit sequence (PTS) technique based on GA (GA-PTS), the PTS technique based on genetic and hill-climbing algorithms (GH-PTS), and the PTS based on the shuffled frog leaping algorithm and hill-climbing algorithm (SFLAHC-PTS). The results show that our technique and algorithm have not only better PAPR performance but also lower computational complexity and BER than the GA-PTS, GH-PTS, and SFLAHC-PTS techniques.
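
    The PTS idea itself is compact: split the subcarrier vector into sub-blocks, form each block's partial time-domain sequence, and search for phase factors that minimize the PAPR of the weighted sum. The brute-force sketch below uses only +/-1 phases and exhaustive search; it is an assumption for illustration, and the paper's GA/POA search replaces this exhaustive loop.

```python
import numpy as np
from itertools import product

def papr_db(x):
    """Peak-to-average power ratio of a discrete-time signal, in dB."""
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

def pts_search(X, n_blocks=4):
    """Exhaustive PTS over +/-1 phase factors: split the subcarriers into
    blocks, build each block's partial time-domain sequence, and pick the
    phase combination whose sum has the lowest PAPR."""
    N = len(X)
    parts = []
    for ids in np.array_split(np.arange(N), n_blocks):
        Xb = np.zeros(N, dtype=complex)
        Xb[ids] = X[ids]            # one sub-block of subcarriers, rest zero
        parts.append(np.fft.ifft(Xb))
    candidates = (sum(b * s for b, s in zip(phases, parts))
                  for phases in product((1, -1), repeat=n_blocks))
    return min(candidates, key=papr_db)
```

    Since the all-ones phase vector is among the candidates, the selected signal never has a higher PAPR than the unmodified IFFT output; the cost of the exhaustive search (2^n_blocks IFFT combinations) is what metaheuristics like GA are brought in to avoid.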

  11. Synthesizing epidemiological and economic optima for control of immunizing infections.

    PubMed

    Klepac, Petra; Laxminarayan, Ramanan; Grenfell, Bryan T

    2011-08-23

    Epidemic theory predicts that the vaccination threshold required to interrupt local transmission of an immunizing infection like measles depends only on the basic reproductive number and hence transmission rates. When the search for optimal strategies is expanded to incorporate economic constraints, the optimum for disease control in a single population is determined by relative costs of infection and control, rather than transmission rates. Adding a spatial dimension, which precludes local elimination unless it can be achieved globally, can reduce or increase optimal vaccination levels depending on the balance of costs and benefits. For weakly coupled populations, local optimal strategies agree with the global cost-effective strategy; however, asymmetries in costs can lead to divergent control optima in more strongly coupled systems--in particular, strong regional differences in costs of vaccination can preclude local elimination even when elimination is locally optimal. Under certain conditions, it is locally optimal to share vaccination resources with other populations.
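
    The classical threshold alluded to above is p_c = 1 - 1/R0 for a perfectly immunizing vaccine in a single well-mixed population, a standard epidemic-theory result; the paper's economic and spatial refinements modify where the optimum lies, not this baseline formula.

```python
def critical_vaccination_fraction(R0):
    """Herd-immunity threshold p_c = 1 - 1/R0 for a perfectly immunizing
    vaccine in a single well-mixed population."""
    if R0 <= 1.0:
        return 0.0  # infection cannot invade, so no vaccination is required
    return 1.0 - 1.0 / R0
```

    For a measles-like basic reproductive number around 15, this gives a threshold of roughly 93%, which is why local elimination of measles demands very high coverage.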

  12. Further Improvement in Outcomes of Nasopharyngeal Carcinoma With Optimized Radiotherapy and Induction Plus Concomitant Chemotherapy: An Update of the Milan Experience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palazzi, Mauro; Orlandi, Ester; Bossi, Paolo

    2009-07-01

    Purpose: To report the outcome of a consecutive series of patients with nonmetastatic nasopharyngeal carcinoma (NPC), focusing on the impact of treatment-related factors. Methods and Materials: Between 2000 and 2006, 87 patients with NPC were treated with either conventional (two- or three-dimensional) radiotherapy (RT) or with intensity-modulated RT (IMRT). Of these patients, 81 (93%) received either concomitant chemotherapy (CHT) (24%) or both induction and concomitant CHT (69%). Stage was III in 36% and IV in 39% of patients. Outcomes in this study population were compared with those in the previous series of 171 patients treated during 1990 to 1999. Results: With a median follow-up of 46 months, actuarial rates at 3 years were the following: local control, 96%; local-regional control, 93%; distant control (DC), 90%; disease-free survival (DFS), 82%; overall survival, 90%. In Stage III to IV patients, distant control at 3 years was 56% in patients treated with concomitant CHT only and 92% in patients treated with both induction and concomitant CHT (p = 0.014). At multivariate analysis, histology, N-stage, RT technique, and total RT dose had the strongest independent impact on DFS (p < 0.05). Induction CHT had a borderline effect on DC (p = 0.07). Most dosimetric statistics were improved in the group of patients treated with IMRT compared with the conventional 3D technique. All outcome endpoints were substantially better in the study population compared with those in the previous series. Conclusions: Outcome of NPC has further improved in the study period compared with the previous decade, with a significant effect of RT technique optimization. The impact of induction CHT remains to be demonstrated in controlled trials.

  13. Quantitative Characterization of Nanostructured Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dr. Frank

    The two-and-a-half day symposium on the "Quantitative Characterization of Nanostructured Materials" will be the first comprehensive meeting on this topic held under the auspices of a major U.S. professional society. Spring MRS Meetings provide a natural venue for this symposium as they attract a broad audience of researchers that represents a cross-section of the state-of-the-art regarding synthesis, structure-property relations, and applications of nanostructured materials. Close interactions among the experts in local structure measurements and materials researchers will help both to identify measurement needs pertinent to real-world materials problems and to familiarize the materials research community with the state-of-the-art local structure measurement techniques. We have chosen invited speakers that reflect the multidisciplinary and international nature of this topic and the need to continually nurture productive interfaces among university, government and industrial laboratories. The intent of the symposium is to provide an interdisciplinary forum for discussion and exchange of ideas on the recent progress in quantitative characterization of structural order in nanomaterials using different experimental techniques and theory. The symposium is expected to facilitate discussions on optimal approaches for determining atomic structure at the nanoscale using combined inputs from multiple measurement techniques.

  14. Recursive grid partitioning on a cortical surface model: an optimized technique for the localization of implanted subdural electrodes.

    PubMed

    Pieters, Thomas A; Conner, Christopher R; Tandon, Nitin

    2013-05-01

    Precise localization of subdural electrodes (SDEs) is essential for the interpretation of data from intracranial electrocorticography recordings. Blood and fluid accumulation underneath the craniotomy flap leads to a nonlinear deformation of the brain surface and of the SDE array on postoperative CT scans and adversely impacts the accurate localization of electrodes located underneath the craniotomy. Older methods that localize electrodes based on their identification on a postimplantation CT scan with coregistration to a preimplantation MR image can result in significant problems with accuracy of the electrode localization. The authors report 3 novel methods that rely on the creation of a set of 3D mesh models to depict the pial surface and a smoothed pial envelope. Two of these new methods are designed to localize electrodes, and they are compared with 6 methods currently in use to determine their relative accuracy and reliability. The first method involves manually localizing each electrode using digital photographs obtained at surgery. This is highly accurate, but requires time intensive, operator-dependent input. The second uses 4 electrodes localized manually in conjunction with an automated, recursive partitioning technique to localize the entire electrode array. The authors evaluated the accuracy of previously published methods by applying the methods to their data and comparing them against the photograph-based localization. Finally, the authors further enhanced the usability of these methods by using automatic parcellation techniques to assign anatomical labels to individual electrodes as well as by generating an inflated cortical surface model while still preserving electrode locations relative to the cortical anatomy. The recursive grid partitioning had the least error compared with older methods (672 electrodes, 6.4-mm maximum electrode error, 2.0-mm mean error, p < 10^-18). 
The maximum errors derived using prior methods of localization ranged from 8.2 to 11.7 mm for an individual electrode, with mean errors ranging between 2.9 and 4.1 mm depending on the method used. The authors also noted a larger error in all methods that used CT scans alone to localize electrodes compared with those that used both postoperative CT and postoperative MRI. The large mean errors reported with these methods are liable to affect intermodal data comparisons (for example, with functional mapping techniques) and may impact surgical decision making. The authors have presented several aspects of using new techniques to visualize electrodes implanted for localizing epilepsy. The ability to use automated labeling schemas to denote which gyrus a particular electrode overlies is potentially of great utility in planning resections and in corroborating the results of extraoperative stimulation mapping. Dilation of the pial mesh model provides, for the first time, a sense of the cortical surface not sampled by the electrode, and the potential roles this "electrophysiologically hidden" cortex may play in both eloquent function and seizure onset.

  15. Optimization of monitoring networks based on uncertainty quantification of model predictions of contaminant transport

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.; Harp, D.

    2010-12-01

    The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as (1) measurement and computational errors; (2) uncertainties in the conceptual model and model-parameter estimates; and (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, and (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly-proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of a newly developed optimization technique based on coupling the Particle Swarm and Levenberg-Marquardt optimization methods, which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. 
The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.
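
    Of the sampling techniques listed, Latin hypercube sampling is the simplest to sketch: each of the n samples occupies a distinct equal-width stratum along every dimension. The minimal generic implementation below is an illustration, not MADS's Improved Distributed Sampling.

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n points in [0, 1)^d with exactly one point in each of the n
    equal-width strata along every dimension."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n  # row i sits in stratum i
    for k in range(d):
        rng.shuffle(u[:, k])  # decouple the strata across dimensions
    return u
```

    Compared with plain Monte Carlo, the stratification guarantees full coverage of each parameter's range even for small sample counts, which is why such designs are favored for expensive model runs.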

  16. Analysis and Optimization of Building Energy Consumption

    NASA Astrophysics Data System (ADS)

    Chuah, Jun Wei

    Energy is one of the most important resources required by modern human society. In 2010, energy expenditures represented 10% of global gross domestic product (GDP). By 2035, global energy consumption is expected to increase by more than 50% from current levels. The increased pace of global energy consumption leads to significant environmental and socioeconomic issues: (i) carbon emissions, from the burning of fossil fuels for energy, contribute to global warming, and (ii) increased energy expenditures lead to reduced standard of living. Efficient use of energy, through energy conservation measures, is an important step toward mitigating these effects. Residential and commercial buildings represent a prime target for energy conservation, comprising 21% of global energy consumption and 40% of the total energy consumption in the United States. This thesis describes techniques for the analysis and optimization of building energy consumption. The thesis focuses on building retrofits and building energy simulation as key areas in building energy optimization and analysis. The thesis first discusses and evaluates building-level renewable energy generation as a solution toward building energy optimization. The thesis next describes a novel heating system, called localized heating. Under localized heating, building occupants are heated individually by directed radiant heaters, resulting in a considerably reduced heated space and significant heating energy savings. To support localized heating, a minimally-intrusive indoor occupant positioning system is described. The thesis then discusses occupant-level sensing (OLS) as the next frontier in building energy optimization. OLS captures the exact environmental conditions faced by each building occupant, using sensors that are carried by all building occupants. The information provided by OLS enables fine-grained optimization for unprecedented levels of energy efficiency and occupant comfort. 
The thesis also describes a retrofit-oriented building energy simulator, ROBESim, that natively supports building retrofits. ROBESim extends existing building energy simulators by providing a platform for the analysis of novel retrofits, in addition to simulating existing retrofits. Using ROBESim, retrofits can be automatically applied to buildings, obviating the need for users to manually update building characteristics for comparisons between different building retrofits. ROBESim also includes several ease-of-use enhancements to support users of all experience levels.

  17. Machine Learning Force Field Parameters from Ab Initio Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Ying; Li, Hui; Pickard, Frank C.

    Machine learning (ML) techniques with the genetic algorithm (GA) have been applied to determine polarizable force field parameters using only ab initio data from quantum mechanics (QM) calculations of molecular clusters at the MP2/6-31G(d,p), DFMP2(fc)/jul-cc-pVDZ, and DFMP2(fc)/jul-cc-pVTZ levels to predict experimental condensed phase properties (i.e., density and heat of vaporization). The performance of this ML/GA approach is demonstrated on 4943 dimer electrostatic potentials and 1250 cluster interaction energies for methanol. Excellent agreement between the training data set from QM calculations and the optimized force field model was achieved. The results were further improved by introducing an offset factor during the machine learning process to compensate for the discrepancy between the QM calculated energy and the energy reproduced by the optimized force field, while maintaining the local “shape” of the QM energy surface. Throughout the machine learning process, experimental observables were not involved in the objective function, but were only used for model validation. The best model, optimized from the QM data at the DFMP2(fc)/jul-cc-pVTZ level, appears to perform even better than the original AMOEBA force field (amoeba09.prm), which was optimized empirically to match liquid properties. The present effort shows the possibility of using machine learning techniques to develop a descriptive polarizable force field using only QM data. The ML/GA strategy to optimize force field parameters described here could easily be extended to other molecular systems.

  18. Clustering methods for the optimization of atomic cluster structure

    NASA Astrophysics Data System (ADS)

    Bagattini, Francesco; Schoen, Fabio; Tigli, Luca

    2018-04-01

    In this paper, we propose a revised global optimization method and apply it to large scale cluster conformation problems. In the 1990s, the so-called clustering methods were considered among the most efficient general purpose global optimization techniques; however, their usage has quickly declined in recent years, mainly due to the inherent difficulties of clustering approaches in large dimensional spaces. Inspired from the machine learning literature, we redesigned clustering methods in order to deal with molecular structures in a reduced feature space. Our aim is to show that by suitably choosing a good set of geometrical features coupled with a very efficient descent method, an effective optimization tool is obtained which is capable of finding, with a very high success rate, all known putative optima for medium size clusters without any prior information, both for Lennard-Jones and Morse potentials. The main result is that, beyond being a reliable approach, the proposed method, based on the idea of starting a computationally expensive deep local search only when it seems worth doing so, is capable of saving a huge amount of searches with respect to an analogous algorithm which does not employ a clustering phase. In this paper, we are not claiming the superiority of the proposed method compared to specific, refined, state-of-the-art procedures, but rather indicating a quite straightforward way to save local searches by means of a clustering scheme working in a reduced variable space, which might prove useful when included in many modern methods.
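
    For reference, the Lennard-Jones objective underlying such cluster conformation problems (in reduced units, epsilon = sigma = 1) is E = sum over pairs i<j of 4(r_ij^-12 - r_ij^-6); a single pair attains its minimum of -1 at separation r = 2^(1/6). A plain evaluation sketch of that objective (not the clustering method itself):

```python
import numpy as np

def lj_energy(coords):
    """Total Lennard-Jones energy (reduced units, epsilon = sigma = 1)
    of an N x 3 array of atomic positions."""
    E = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            E += 4.0 * (r ** -12 - r ** -6)
    return E
```

    The global optimization methods discussed above spend most of their budget on local minimizations of exactly this kind of pairwise energy surface, which is why deciding when a deep local search is worth starting matters so much.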

  19. Multi-length scale tomography for the determination and optimization of the effective microstructural properties in novel hierarchical solid oxide fuel cell anodes

    NASA Astrophysics Data System (ADS)

    Lu, Xuekun; Taiwo, Oluwadamilola O.; Bertei, Antonio; Li, Tao; Li, Kang; Brett, Dan J. L.; Shearing, Paul R.

    2017-11-01

    Effective microstructural properties are critical in determining the electrochemical performance of solid oxide fuel cells (SOFCs), particularly when operating at high current densities. A novel tubular SOFC anode with a hierarchical microstructure, composed of self-organized micro-channels and sponge-like regions, has been fabricated by a phase inversion technique to mitigate concentration losses. However, since pore sizes span over two orders of magnitude, the determination of the effective transport parameters using image-based techniques remains challenging. Pioneering steps are made in this study to characterize and optimize the microstructure by coupling multi-length scale 3D tomography and modeling. The results conclusively show that embedding finger-like micro-channels into the tubular anode can improve the mass transport by 250% and the permeability by 2-3 orders of magnitude. Our parametric study shows that increasing the porosity in the spongy layer beyond 10% enhances the effective transport parameters of the spongy layer at an exponential rate, but linearly for the full anode. For the first time, local and global mass transport properties are correlated to the microstructure, which is of wide interest for rationalizing the design optimization of SOFC electrodes and more generally for hierarchical materials in batteries and membranes.

  20. Convex optimization of MRI exposure for mitigation of RF-heating from active medical implants.

    PubMed

    Córcoles, Juan; Zastrow, Earl; Kuster, Niels

    2015-09-21

    Local RF-heating of elongated medical implants during magnetic resonance imaging (MRI) may pose a significant health risk to patients. The actual patient risk depends on various parameters including RF magnetic field strength and frequency, MR coil design, patient's anatomy, posture, and imaging position, implant location, RF coupling efficiency of the implant, and the bio-physiological responses associated with the induced local heating. We present three constrained convex optimization strategies that incorporate the implant's RF-heating characteristics, for the reduction of local heating of medical implants during MRI. The study emphasizes the complementary performances of the different formulations. The analysis demonstrates that RF-induced heating of elongated metallic medical implants can be carefully controlled and balanced against MRI quality. A reduction of heating of up to 25 dB can be achieved at the cost of reduced uniformity in the magnitude of the B(1)(+) field of less than 5%. The current formulations incorporate a priori knowledge of clinically-specific parameters, which is assumed to be available. Before these techniques can be applied practically in the broader clinical context, further investigations are needed to determine whether reduced access to a priori knowledge regarding, e.g. the patient's anatomy, implant routing, RF-transmitter, and RF-implant coupling, can be accepted within reasonable levels of uncertainty.

  1. Convex optimization of MRI exposure for mitigation of RF-heating from active medical implants

    NASA Astrophysics Data System (ADS)

    Córcoles, Juan; Zastrow, Earl; Kuster, Niels

    2015-09-01

    Local RF-heating of elongated medical implants during magnetic resonance imaging (MRI) may pose a significant health risk to patients. The actual patient risk depends on various parameters including RF magnetic field strength and frequency, MR coil design, patient’s anatomy, posture, and imaging position, implant location, RF coupling efficiency of the implant, and the bio-physiological responses associated with the induced local heating. We present three constrained convex optimization strategies that incorporate the implant’s RF-heating characteristics, for the reduction of local heating of medical implants during MRI. The study emphasizes the complementary performances of the different formulations. The analysis demonstrates that RF-induced heating of elongated metallic medical implants can be carefully controlled and balanced against MRI quality. A reduction of heating of up to 25 dB can be achieved at the cost of reduced uniformity in the magnitude of the B1+ field of less than 5%. The current formulations incorporate a priori knowledge of clinically-specific parameters, which is assumed to be available. Before these techniques can be applied practically in the broader clinical context, further investigations are needed to determine whether reduced access to a priori knowledge regarding, e.g. the patient’s anatomy, implant routing, RF-transmitter, and RF-implant coupling, can be accepted within reasonable levels of uncertainty.

  2. Efficient and robust model-to-image alignment using 3D scale-invariant features.

    PubMed

    Toews, Matthew; Wells, William M

    2013-04-01

    This paper presents feature-based alignment (FBA), a general method for efficient and robust model-to-image alignment. Volumetric images, e.g. CT scans of the human body, are modeled probabilistically as a collage of 3D scale-invariant image features within a normalized reference space. Features are incorporated as a latent random variable and marginalized out in computing a maximum a posteriori alignment solution. The model is learned from features extracted in pre-aligned training images, then fit to features extracted from a new image to identify a globally optimal locally linear alignment solution. Novel techniques are presented for determining local feature orientation and efficiently encoding feature intensity in 3D. Experiments involving difficult magnetic resonance (MR) images of the human brain demonstrate FBA achieves alignment accuracy similar to widely-used registration methods, while requiring a fraction of the memory and computation resources and offering a more robust, globally optimal solution. Experiments on CT human body scans demonstrate FBA as an effective system for automatic human body alignment where other alignment methods break down. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.; Watson, Layne T.

    1998-01-01

    Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.
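The contrast between the two model classes can be shown numerically on a hypothetical 1-D test function with multiple local extrema. The Gaussian-kernel interpolator below is a simplified stand-in for kriging (no variogram fitting), and the test function and kernel width are assumptions, not the study's setup.

```python
import numpy as np

# Training data from a response with multiple local extrema (hypothetical).
x = np.linspace(0.0, 1.0, 9)
y = np.sin(6.0 * x) + 0.5 * x

# Quadratic polynomial model fitted by least squares: y ~ c0 + c1*x + c2*x^2.
A = np.vander(x, 3, increasing=True)            # columns: 1, x, x^2
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def poly_model(xq):
    return coef[0] + coef[1] * xq + coef[2] * xq ** 2

# Gaussian-kernel interpolator (a simplified stand-in for kriging):
# weights are chosen so the model reproduces the training data exactly.
theta = 20.0                                    # kernel width (assumed)
K = np.exp(-theta * (x[:, None] - x[None, :]) ** 2)
w = np.linalg.solve(K, y)

def interp_model(xq):
    k = np.exp(-theta * (xq - x) ** 2)
    return k @ w

# The interpolator matches the data at the training points; a quadratic with
# a single extremum cannot follow the oscillation.
print(abs(interp_model(x[3]) - y[3]))
print(abs(poly_model(x[3]) - y[3]))
```

This mirrors the abstract's point: the polynomial is cheap and simple but limited on multimodal responses, while the interpolating model follows them at extra cost (here, a dense linear solve per fit).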

  4. Electron Beam Freeform Fabrication of Titanium Alloy Gradient Structures

    NASA Technical Reports Server (NTRS)

    Brice, Craig A.; Newman, John A.; Bird, Richard Keith; Shenoy, Ravi N.; Baughman, James M.; Gupta, Vipul K.

    2014-01-01

    Historically, the structural optimization of aerospace components has been done through geometric methods. A monolithic material is chosen based on the best compromise between the competing design limiting criteria. Then the structure is geometrically optimized to give the best overall performance using the single material chosen. Functionally graded materials offer the potential to further improve structural efficiency by allowing the material composition and/or microstructural features to spatially vary within a single structure. Thus, local properties could be tailored to the local design limiting criteria. Additive manufacturing techniques enable the fabrication of such graded materials and structures. This paper presents the results of a graded material study using two titanium alloys processed using electron beam freeform fabrication, an additive manufacturing process. The results show that the two alloys uniformly mix at various ratios and the resultant static tensile properties of the mixed alloys behave according to rule-of-mixtures. Additionally, the crack growth behavior across an abrupt change from one alloy to the other shows no discontinuity and the crack smoothly transitions from one crack growth regime into another.
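The rule-of-mixtures behaviour reported for the blended alloys amounts to linear interpolation of a property between the two end-member alloys. A minimal sketch with hypothetical strength values (not the paper's measurements):

```python
# Rule-of-mixtures estimate for a graded two-alloy blend.
def rule_of_mixtures(prop_a, prop_b, fraction_a):
    """Linear blend of a property between alloy A and alloy B."""
    return fraction_a * prop_a + (1.0 - fraction_a) * prop_b

# Hypothetical tensile strengths (MPa) for two titanium alloys.
strength_a, strength_b = 900.0, 1100.0
for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f, rule_of_mixtures(strength_a, strength_b, f))
```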

  5. Transient gene expression in epidermal cells of plant leaves by biolistic DNA delivery.

    PubMed

    Ueki, Shoko; Magori, Shimpei; Lacroix, Benoît; Citovsky, Vitaly

    2013-01-01

    Transient gene expression is a useful approach for studying the functions of gene products. In plants, Agrobacterium infiltration is the method of choice for transient gene introduction in many species. However, this technique does not work efficiently in some species, such as Arabidopsis thaliana. Moreover, Agrobacterium infection is known to induce dynamic changes in gene expression patterns in the host plants, possibly affecting the function and localization of the proteins to be tested. These problems can be circumvented by biolistic delivery of the genes of interest. Here, we present an optimized protocol for biolistic delivery of plasmid DNA into epidermal cells of plant leaves, which can be easily performed using the Bio-Rad Helios gene gun system. This protocol allows efficient and reproducible transient expression of diverse genes in Arabidopsis, Nicotiana benthamiana and N. tabacum, and is suitable for studies of the biological function and subcellular localization of the gene products directly in planta. The protocol can also be easily adapted to other species by optimizing the delivery gas pressure.


  6. Efficient and Robust Model-to-Image Alignment using 3D Scale-Invariant Features

    PubMed Central

    Toews, Matthew; Wells, William M.

    2013-01-01

    This paper presents feature-based alignment (FBA), a general method for efficient and robust model-to-image alignment. Volumetric images, e.g. CT scans of the human body, are modeled probabilistically as a collage of 3D scale-invariant image features within a normalized reference space. Features are incorporated as a latent random variable and marginalized out in computing a maximum a-posteriori alignment solution. The model is learned from features extracted in pre-aligned training images, then fit to features extracted from a new image to identify a globally optimal locally linear alignment solution. Novel techniques are presented for determining local feature orientation and efficiently encoding feature intensity in 3D. Experiments involving difficult magnetic resonance (MR) images of the human brain demonstrate FBA achieves alignment accuracy similar to widely-used registration methods, while requiring a fraction of the memory and computation resources and offering a more robust, globally optimal solution. Experiments on CT human body scans demonstrate FBA as an effective system for automatic human body alignment where other alignment methods break down. PMID:23265799

  7. Accelerating atomic structure search with cluster regularization

    NASA Astrophysics Data System (ADS)

    Sørensen, K. H.; Jørgensen, M. S.; Bruix, A.; Hammer, B.

    2018-06-01

    We present a method for accelerating the global structure optimization of atomic compounds. The method is demonstrated to speed up the finding of the anatase TiO2(001)-(1 × 4) surface reconstruction within a density functional tight-binding theory framework using an evolutionary algorithm. As a key element of the method, we use unsupervised machine learning techniques to categorize atoms present in a diverse set of partially disordered surface structures into clusters of atoms having similar local atomic environments. Analysis of more than 1000 different structures shows that the total energy of the structures correlates with the summed distances of the atomic environments to their respective cluster centers in feature space, where the sum runs over all atoms in each structure. Our method is formulated as a gradient based minimization of this summed cluster distance for a given structure and alternates with a standard gradient based energy minimization. While the latter minimization ensures local relaxation within a given energy basin, the former enables escapes from meta-stable basins and hence increases the overall performance of the global optimization.
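The summed cluster distance can be sketched with plain k-means over synthetic environment vectors. This is a minimal sketch under assumptions: the feature vectors below are random stand-ins for a real structural descriptor, and the gradient-based minimization alternating with energy relaxation is not reproduced; only the regularizing quantity itself is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature vectors for local atomic environments, forming two
# motifs (in practice these come from a structural descriptor, not shown here).
features = np.vstack([
    rng.normal(loc=0.0, scale=0.1, size=(40, 3)),
    rng.normal(loc=1.0, scale=0.1, size=(40, 3)),
])

def kmeans(X, init_idx, iters=20):
    """Plain k-means with deterministic initialization; returns cluster centers."""
    centers = X[init_idx]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(len(init_idx))])
    return centers

centers = kmeans(features, init_idx=np.array([0, 40]))

def summed_cluster_distance(X, centers):
    """Sum over atoms of the distance in feature space to the nearest cluster
    center: the quantity whose minimization alternates with the energy
    minimization in the scheme described above."""
    d = np.linalg.norm(X[:, None] - centers[None], axis=2)
    return d.min(axis=1).sum()

# Structures whose environments resemble the learned motifs score low; a
# perturbed (disordered) structure scores high, so descending this measure
# drives a structure toward familiar local order.
ordered = summed_cluster_distance(features, centers)
disordered = summed_cluster_distance(
    features + rng.normal(scale=0.5, size=features.shape), centers)
print(ordered < disordered)
```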

  8. A novel cooperative localization algorithm using enhanced particle filter technique in maritime search and rescue wireless sensor network.

    PubMed

    Wu, Huafeng; Mei, Xiaojun; Chen, Xinqiang; Li, Junjun; Wang, Jun; Mohapatra, Prasant

    2018-07-01

    Maritime search and rescue (MSR) plays a significant role in Safety of Life at Sea (SOLAS). However, it suffers in scenarios where the measurement information is inaccurate due to the wave shadow effect when wireless sensor network (WSN) technology is used in MSR. In this paper, we develop a Novel Cooperative Localization Algorithm (NCLA) for MSR by using an enhanced particle filter method to reduce the measurement errors in the observation model caused by the wave shadow effect. First, we take into account the mobility of nodes at sea to develop a motion model, the Lagrangian model. Furthermore, we introduce both a state model and an observation model to constitute a system model for the particle filter (PF). To address the impact of the wave shadow effect on the observation model, we derive an optimal parameter via Kullback-Leibler divergence (KLD) to mitigate the error. After the optimal parameter is acquired, an improved likelihood function is presented. Finally, the estimated position is acquired. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
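A minimal bootstrap particle filter in one dimension illustrates the state-model / observation-model / resampling cycle the abstract describes. This is a sketch under assumptions: the drift, noise levels, and Gaussian likelihood below are simplified placeholders, not NCLA's Lagrangian motion model, and the KLD-derived parameter is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

n_particles = 2000
true_pos = 0.0
particles = rng.normal(0.0, 1.0, n_particles)
weights = np.full(n_particles, 1.0 / n_particles)

drift, meas_std = 0.3, 0.5     # assumed current drift and measurement noise

estimates = []
for _ in range(30):
    true_pos += drift                                    # node carried by the current
    z = true_pos + rng.normal(0.0, meas_std)             # noisy position measurement

    particles += drift + rng.normal(0.0, 0.1, n_particles)      # state (motion) model
    weights *= np.exp(-0.5 * ((z - particles) / meas_std) ** 2)  # observation model
    weights /= weights.sum()

    # Systematic resampling keeps the particle set from degenerating.
    starts = (rng.random() + np.arange(n_particles)) / n_particles
    idx = np.minimum(np.searchsorted(np.cumsum(weights), starts), n_particles - 1)
    particles = particles[idx]
    weights = np.full(n_particles, 1.0 / n_particles)

    estimates.append(particles.mean())

print(abs(estimates[-1] - true_pos))
```

The paper's contribution sits in the observation-model line: its likelihood is reshaped by the KLD-optimized parameter so that wave-shadowed measurements are down-weighted rather than trusted at face value.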

  9. Chopped random-basis quantum optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caneva, Tommaso; Calarco, Tommaso; Montangero, Simone

    2011-08-15

    In this work, we describe in detail the chopped random basis (CRAB) optimal control technique recently introduced to optimize time-dependent density matrix renormalization group simulations [P. Doria, T. Calarco, and S. Montangero, Phys. Rev. Lett. 106, 190501 (2011)]. Here, we study the efficiency of this control technique in optimizing different quantum processes and we show that in the considered cases we obtain results equivalent to those obtained via different optimal control methods while using less resources. We propose the CRAB optimization as a general and versatile optimal control technique.
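The CRAB idea of reducing a functional optimization to a few expansion coefficients in a truncated randomized basis, optimized by direct search, can be sketched on a toy problem. The figure of merit below is a plain distance to a target pulse shape, a stand-in for the fidelity functional of a quantum process; the basis size and randomization are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# CRAB-style sketch: expand the control pulse in a small randomized Fourier
# basis and optimize the few expansion coefficients with a direct-search
# method (Nelder-Mead), instead of optimizing the full time-dependent field.
t = np.linspace(0.0, 1.0, 200)
freqs = 2 * np.pi * (np.arange(1, 4) + 0.2 * rng.standard_normal(3))  # randomized harmonics

def pulse(c):
    return sum(ck * np.sin(wk * t) for ck, wk in zip(c, freqs))

# Toy figure of merit: distance of the pulse to a known target shape.
target = np.sin(2 * np.pi * t) ** 3

def cost(c):
    return np.mean((pulse(c) - target) ** 2)

res = minimize(cost, x0=np.zeros(3), method="Nelder-Mead")
print(res.fun < cost(np.zeros(3)))  # optimized pulse beats the zero pulse
```

The resource saving claimed in the abstract comes from exactly this reduction: a handful of coefficients replaces a discretized field with hundreds of free values.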

  10. Targeted gene delivery to the synovial pannus in antigen-induced arthritis by ultrasound-targeted microbubble destruction in vivo.

    PubMed

    Xiang, Xi; Tang, Yuanjiao; Leng, Qianying; Zhang, Lingyan; Qiu, Li

    2016-02-01

    The purpose of this study was to optimize an ultrasound-targeted microbubble destruction (UTMD) technique to improve the in vivo transfection efficiency of the gene encoding enhanced green fluorescent protein (EGFP) in the synovial pannus in an antigen-induced arthritis (AIA) rabbit model. A mixture of microbubbles and plasmids was locally injected into the knee joints of AIA rabbits. The plasmid concentrations and ultrasound conditions were varied in the experiments. We also compared local intra-articular and intravenous injections. The rabbits were divided into five groups: (1) ultrasound+microbubbles+plasmid; (2) ultrasound+plasmid; (3) microbubbles+plasmid; (4) plasmid only; and (5) untreated controls. EGFP expression in the synovial pannus of each group was observed by fluorescence microscopy and immunohistochemical staining. The optimal plasmid dosage and ultrasound parameters were determined based on the EGFP expression and the presence or absence of tissue damage under light microscopy. The irradiation procedure was performed to observe the duration of EGFP expression in the synovial pannus and other tissues and organs, as well as damage to normal cells. The optimal condition was determined to be a 1-MHz ultrasound pulse applied for 5 min with a power output of 2 W/cm², a 20% duty cycle, and 300 μg of plasmid. Under these conditions, the synovial pannus showed significant EGFP expression without significant damage to the surrounding normal tissue. EGFP expression induced by local intra-articular injection was significantly higher than that induced by intravenous injection. EGFP expression in the synovial pannus of the ultrasound+microbubbles+plasmid group was significantly higher than that of the other four groups (P<0.05). The expression peaked on day 5, remained detectable on day 40 and disappeared by day 60. No EGFP expression was detected in other tissues and organs. The UTMD technique can significantly enhance in vivo gene transfection efficiency without significant tissue damage in the synovial pannus of an AIA model. Thus, it could become a safe and effective non-viral gene transfection procedure for arthritis therapy. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Strategies for Fermentation Medium Optimization: An In-Depth Review

    PubMed Central

    Singh, Vineeta; Haque, Shafiul; Niwas, Ram; Srivastava, Akansha; Pasupuleti, Mukesh; Tripathi, C. K. M.

    2017-01-01

    Optimization of the production medium is required to maximize metabolite yield. This can be achieved by using a wide range of techniques, from the classical “one-factor-at-a-time” approach to modern statistical and mathematical techniques such as artificial neural networks (ANN) and genetic algorithms (GA). Every technique comes with its own advantages and disadvantages, and despite their drawbacks some techniques are applied to obtain the best results. Using various optimization techniques in combination can also provide the desired results. In this article, an attempt has been made to review the media optimization techniques currently applied during fermentation processes for metabolite production. A comparative analysis of the merits and demerits of various conventional as well as modern optimization techniques has been carried out, and a logical basis for designing fermentation media is given. Overall, this review provides a rationale for selecting a suitable optimization technique for media design during fermentation-based metabolite production. PMID:28111566
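The classical one-factor-at-a-time (OFAT) procedure mentioned above can be sketched in a few lines: vary one medium component while holding the others at baseline, fix the best level, then move to the next factor. The yield model, components, and levels below are hypothetical stand-ins for running actual fermentation experiments.

```python
# One-factor-at-a-time (OFAT) sketch over a hypothetical yield response.
def yield_model(medium):
    # Peak yield at glucose=20, yeast_extract=5, phosphate=2 (assumed optimum).
    return 100 - (medium["glucose"] - 20) ** 2 \
               - 4 * (medium["yeast_extract"] - 5) ** 2 \
               - 9 * (medium["phosphate"] - 2) ** 2

levels = {
    "glucose": [10, 15, 20, 25],
    "yeast_extract": [1, 3, 5, 7],
    "phosphate": [1, 2, 3],
}
medium = {"glucose": 10, "yeast_extract": 1, "phosphate": 1}  # baseline

for factor, candidates in levels.items():
    best = max(candidates, key=lambda v: yield_model({**medium, factor: v}))
    medium[factor] = best  # fix the best level before testing the next factor

print(medium, yield_model(medium))
```

OFAT finds the true optimum here only because the toy model has no interactions between factors; with interactions it can miss the optimum entirely, which is exactly the limitation that motivates the statistical designs the review covers.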

  12. Melting Heat in Radiative Flow of Carbon Nanotubes with Homogeneous-Heterogeneous Reactions

    NASA Astrophysics Data System (ADS)

    Hayat, Tasawar; Muhammad, Khursheed; Muhammad, Taseer; Alsaedi, Ahmed

    2018-04-01

    The present article provides mathematical modeling for melting heat and thermal radiation in stagnation-point flow of carbon nanotubes towards a nonlinear stretchable surface of variable thickness. A homogeneous-heterogeneous reaction process is considered, with equal diffusion coefficients assumed for both reactant and autocatalyst. Water and gasoline oil are taken as base fluids. The partial differential system is converted to an ordinary differential system by suitable transformations. An optimal homotopy technique is employed to develop solutions for the velocity, temperature, concentration, skin friction and local Nusselt number. Graphical results for various values of the pertinent parameters are displayed and discussed. Our results indicate that the skin friction coefficient and local Nusselt number are enhanced for larger values of the nanoparticle volume fraction.

  13. Local melting to design strong and plastically deformable bulk metallic glass composites

    PubMed Central

    Qin, Yue-Sheng; Han, Xiao-Liang; Song, Kai-Kai; Tian, Yu-Hao; Peng, Chuan-Xiao; Wang, Li; Sun, Bao-An; Wang, Gang; Kaban, Ivan; Eckert, Jürgen

    2017-01-01

    Recently, CuZr-based bulk metallic glass (BMG) composites reinforced by the TRIP (transformation-induced plasticity) effect have been explored in an attempt to achieve an optimal trade-off between strength and ductility. However, the design of such BMG composites with advanced mechanical properties still remains a big challenge for materials engineering. In this work, we proposed a technique of instantaneously and locally arc-melting a BMG plate to artificially induce the precipitation of B2 crystals in the glassy matrix and thereby tune the mechanical properties. By adjusting the local melting process parameters (i.e. input power, local melting positions, and distances between the electrode and the amorphous plate), the size, volume fraction, and distribution of B2 crystals were well tailored, and the corresponding formation mechanism was clarified. The resultant BMG composites exhibit large compressive plasticity and high strength together with obvious work-hardening ability. This compelling approach could be of great significance for the steady development of metastable CuZr-based alloys with excellent mechanical properties. PMID:28211890

  14. 3D Printing Optical Engine for Controlling Material Microstructure

    NASA Astrophysics Data System (ADS)

    Huang, Wei-Chin; Chang, Kuang-Po; Wu, Ping-Han; Wu, Chih-Hsien; Lin, Ching-Chih; Chuang, Chuan-Sheng; Lin, De-Yau; Liu, Sung-Ho; Horng, Ji-Bin; Tsau, Fang-Hei

    Controlling the cooling rate of an alloy during melting and resolidification is the most commonly used method for varying the material microstructure and consequently the resulting properties. However, the cooling rate of a selective laser melting (SLM) production run is restricted by a preset parameter optimized for a fully dense product, so the headroom for locally manipulating material properties within a process is marginal. In this study, we introduce an Optical Engine for locally controlling material microstructure in an SLM process. It provides an innovative method to control and adjust the thermal history of the solidification process to obtain the desired material microstructure and consequently to improve quality substantially. Process parameters are selected locally for specific material requirements according to the designed characteristics, using the thermodynamic principles of the solidification process. The engine utilizes complex laser beam shaping with an adaptive irradiation profile to permit local control of material characteristics as desired. This technology could be useful for industrial applications such as medical implants and the aerospace and automotive industries.

  15. Multiview Locally Linear Embedding for Effective Medical Image Retrieval

    PubMed Central

    Shen, Hualei; Tao, Dacheng; Ma, Dianfu

    2013-01-01

    Content-based medical image retrieval continues to gain attention for its potential to assist radiological image interpretation and decision making. Many approaches have been proposed to improve the performance of medical image retrieval systems, among which visual features such as SIFT, LBP, and intensity histograms play a critical role. Typically, these features are concatenated into a long vector to represent medical images, and thus traditional dimension reduction techniques such as locally linear embedding (LLE), principal component analysis (PCA), or Laplacian eigenmaps (LE) can be employed to reduce the “curse of dimensionality”. Though these approaches show promising performance for medical image retrieval, the feature-concatenating method ignores the fact that different features have distinct physical meanings. In this paper, we propose a new method called multiview locally linear embedding (MLLE) for medical image retrieval. Following the patch alignment framework, MLLE preserves the geometric structure of the local patch in each feature space according to the LLE criterion. To explore complementary properties among a range of features, MLLE assigns different weights to local patches from different feature spaces. Finally, MLLE employs global coordinate alignment and alternating optimization techniques to learn a smooth low-dimensional embedding from different features. To justify the effectiveness of MLLE for medical image retrieval, we compare it with conventional spectral embedding methods. We conduct experiments on a subset of the IRMA medical image data set. Evaluation results show that MLLE outperforms state-of-the-art dimension reduction methods. PMID:24349277

  16. Misaligned Image Integration With Local Linear Model.

    PubMed

    Baba, Tatsuya; Matsuoka, Ryo; Shirai, Keiichiro; Okuda, Masahiro

    2016-05-01

    We present a new image integration technique for a flash and long-exposure image pair to capture a dark scene without incurring blurring or noisy artifacts. Most existing methods require well-aligned images for the integration, which is often a burdensome restriction in practical use. We address this issue by locally transferring the colors of the flash images using a small fraction of the corresponding pixels in the long-exposure images. We formulate the image integration as a convex optimization problem with the local linear model. The proposed method makes it possible to integrate the color of the long-exposure image with the detail of the flash image without causing any harmful effects to its contrast, where we do not need perfect alignment between the images by virtue of our new integration principle. We show that our method successfully outperforms the state of the art in the image integration and reference-based color transfer for challenging misaligned data sets.
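The local linear model at the heart of this approach can be sketched in one dimension: within a patch, the long-exposure values are approximated as an affine function of the flash-image values, fitted from a small fraction of corresponding pixels. The patch data, sampling fraction, and noise level below are synthetic assumptions, and the full convex-optimization formulation is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic patch: long-exposure luminances are (approximately) an affine
# function of the flash luminances plus noise.
flash_patch = rng.uniform(0.2, 0.9, size=100)
a_true, b_true = 0.7, 0.1
long_patch = a_true * flash_patch + b_true + rng.normal(0.0, 0.01, 100)

# Fit the local linear model y ~ a*x + b by least squares on a 10% sample
# of corresponding pixels, as in the color-transfer step described above.
idx = rng.choice(100, size=10, replace=False)
A = np.column_stack([flash_patch[idx], np.ones(10)])
(a, b), *_ = np.linalg.lstsq(A, long_patch[idx], rcond=None)

# Transfer: apply the fitted model to every flash pixel in the patch.
transferred = a * flash_patch + b
print(np.abs(transferred - long_patch).max())
```

Because the model is fitted per patch from sparse correspondences, pixel-accurate alignment between the two exposures is unnecessary, which is the property the paper exploits.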

  17. Kalman Filters for Time Delay of Arrival-Based Source Localization

    NASA Astrophysics Data System (ADS)

    Klee, Ulrich; Gehrig, Tobias; McDonough, John

    2006-12-01

    In this work, we propose an algorithm for acoustic source localization based on time delay of arrival (TDOA) estimation. In earlier work by other authors, an initial closed-form approximation was first used to estimate the true position of the speaker followed by a Kalman filtering stage to smooth the time series of estimates. In the proposed algorithm, this closed-form approximation is eliminated by employing a Kalman filter to directly update the speaker's position estimate based on the observed TDOAs. In particular, the TDOAs comprise the observation associated with an extended Kalman filter whose state corresponds to the speaker's position. We tested our algorithm on a data set consisting of seminars held by actual speakers. Our experiments revealed that the proposed algorithm provides source localization accuracy superior to the standard spherical and linear intersection techniques. Moreover, the proposed algorithm, although relying on an iterative optimization scheme, proved efficient enough for real-time operation.
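The smoothing stage used in the earlier two-step approach can be sketched with a 1-D constant-velocity Kalman filter over noisy position fixes. This is a minimal sketch with assumed noise levels; the paper's extended Kalman filter differs in that the nonlinear TDOA measurements themselves form the observation, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition (position, velocity)
H = np.array([[1.0, 0.0]])                 # we observe position only
Qn = 1e-3 * np.eye(2)                      # process noise covariance (assumed)
Rn = np.array([[0.25]])                    # measurement noise variance (std 0.5)

x = np.zeros(2)
P = np.eye(2)
truth, vel = 0.0, 1.0
errors_raw, errors_filt = [], []

for _ in range(100):
    truth += vel * dt
    z = truth + rng.normal(0.0, 0.5)       # raw position estimate, as from closed form

    # Predict.
    x = F @ x
    P = F @ P @ F.T + Qn
    # Update.
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Rn)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

    errors_raw.append(abs(z - truth))
    errors_filt.append(abs(x[0] - truth))

print(np.mean(errors_filt) < np.mean(errors_raw))  # filtered track is smoother
```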

  18. Free Convection Nanofluid Flow in the Stagnation-Point Region of a Three-Dimensional Body

    PubMed Central

    Farooq, Umer

    2014-01-01

    Analytical results are presented for a steady three-dimensional free convection flow in the stagnation point region over a general curved isothermal surface placed in a nanofluid. The momentum equations in x- and y-directions, energy balance equation, and nanoparticle concentration equation are reduced to a set of four fully coupled nonlinear differential equations under appropriate similarity transformations. The well known technique optimal homotopy analysis method (OHAM) is used to obtain the exact solution explicitly, whose convergence is then checked in detail. Besides, the effects of the physical parameters, such as the Lewis number, the Brownian motion parameter, the thermophoresis parameter, and the buoyancy ratio on the profiles of velocities, temperature, and concentration, are studied and discussed. Furthermore the local skin friction coefficients in x- and y-directions, the local Nusselt number, and the local Sherwood number are examined for various values of the physical parameters. PMID:25114954

  19. Local adaptive contrast enhancement for color images

    NASA Astrophysics Data System (ADS)

    Dijk, Judith; den Hollander, Richard J. M.; Schavemaker, John G. M.; Schutte, Klamer

    2007-04-01

    A camera or display usually has a smaller dynamic range than the human eye. For this reason, objects that can be detected by the naked eye may not be visible in recorded images. Lighting is an important factor here; improper local lighting impairs the visibility of details or even entire objects. When a human observes a scene with different kinds of lighting, such as shadows, he will need to see details in both the dark and light parts of the scene. For grey value images such as IR imagery, algorithms have been developed in which the local contrast of the image is enhanced using local adaptive techniques. In this paper, we present how such algorithms can be adapted so that details in color images are enhanced while color information is retained. We propose to apply the contrast enhancement to color images by applying a grey value contrast enhancement algorithm to the luminance channel of the color signal. The color coordinates of the signal remain the same. Care is taken that the saturation change is not too high. Gamut mapping is performed so that the output can be displayed on a monitor. The proposed technique can, for instance, be used by operators monitoring movements of people in order to detect suspicious behavior. To do this effectively, specific individuals should be both easy to recognize and to track. This requires optimal local contrast, and is sometimes greatly aided by color, as when tracking a person in colored clothes. In such applications, enhanced local contrast in color images leads to more effective monitoring.
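The luminance-only scheme can be sketched as: split the image into luminance and chrominance, enhance only the luminance, and recombine. Global histogram stretching below is a deliberately simple stand-in for the local adaptive enhancement, the BT.601 luma weights stand in for the paper's luminance/color-coordinate separation, and the saturation guard and gamut mapping are omitted.

```python
import numpy as np

# Low-contrast synthetic image (values clustered in a narrow band).
rgb = np.random.default_rng(4).uniform(0.3, 0.6, size=(8, 8, 3))

# RGB -> luminance + chroma offsets (BT.601 luma weights).
y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
cb = rgb[..., 2] - y          # blue-difference chroma
cr = rgb[..., 0] - y          # red-difference chroma

def stretch(lum):
    """Global histogram stretch: a stand-in for the local adaptive enhancement."""
    return (lum - lum.min()) / (lum.max() - lum.min())

y2 = stretch(y)

# Recombine: same chroma offsets, enhanced luminance.
r = y2 + cr
b = y2 + cb
g = (y2 - 0.299 * r - 0.114 * b) / 0.587   # invert the luma equation for G
out = np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)
print(out.shape)
```

Keeping the chroma offsets fixed while replacing the luminance is what preserves the color coordinates; a real implementation would additionally limit the resulting saturation change and gamut-map the output, as the abstract notes.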

  20. A quantitative microscopic approach to predict local recurrence based on in vivo intraoperative imaging of sarcoma tumor margins

    PubMed Central

    Mueller, Jenna L.; Fu, Henry L.; Mito, Jeffrey K.; Whitley, Melodi J.; Chitalia, Rhea; Erkanli, Alaattin; Dodd, Leslie; Cardona, Diana M.; Geradts, Joseph; Willett, Rebecca M.; Kirsch, David G.; Ramanujam, Nimmi

    2015-01-01

    The goal of resection of soft tissue sarcomas located in the extremity is to preserve limb function while completely excising the tumor with a margin of normal tissue. With surgery alone, one-third of patients with soft tissue sarcoma of the extremity will have local recurrence due to microscopic residual disease in the tumor bed. Currently, a limited number of intraoperative pathology-based techniques are used to assess margin status; however, few have been widely adopted due to sampling error and time constraints. To aid in intraoperative diagnosis, we developed a quantitative optical microscopy toolbox, which includes acriflavine staining, fluorescence microscopy, and analytic techniques called sparse component analysis and circle transform, to yield a quantitative diagnosis of tumor margins. A series of variables were quantified from images of resected primary sarcomas and used to optimize a multivariate model. The sensitivity and specificity for differentiating positive from negative ex vivo resected tumor margins were 82% and 75%, respectively. The utility of this approach was tested by imaging the in vivo tumor cavities of 34 mice after resection of a sarcoma, with local recurrence as a benchmark. When applied prospectively to images from the tumor cavity, the sensitivity and specificity for predicting local recurrence were 78% and 82%. For comparison, if pathology were used to predict local recurrence in this data set, it would achieve a sensitivity of 29% and a specificity of 71%. These results indicate a robust approach for detecting microscopic residual disease, which is an effective predictor of local recurrence. PMID:25994353
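The sensitivity and specificity figures quoted above come straight from a 2×2 confusion table of margin calls. A sketch with synthetic counts (chosen only to land near the reported 82%/75%; these are not the study's data):

```python
# Sensitivity and specificity from a 2x2 confusion table of margin calls.
def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # fraction of positive margins correctly flagged
    specificity = tn / (tn + fp)   # fraction of negative margins correctly cleared
    return sensitivity, specificity

# Hypothetical counts for illustration only.
sens, spec = sens_spec(tp=18, fn=4, tn=15, fp=5)
print(round(sens, 2), round(spec, 2))  # -> 0.82 0.75
```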

  1. Conceptual design optimization study

    NASA Technical Reports Server (NTRS)

    Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.

    1990-01-01

    The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.

  2. Enhanced intelligent water drops algorithm for multi-depot vehicle routing problem

    PubMed Central

    Akutsah, Francis; Olusanya, Micheal O.; Adewumi, Aderemi O.

    2018-01-01

    The intelligent water drop algorithm is a swarm-based metaheuristic algorithm, inspired by the characteristics of water drops in the river and the environmental changes resulting from the action of the flowing river. Since its appearance as an alternative stochastic optimization method, the algorithm has found applications in solving a wide range of combinatorial and functional optimization problems. This paper presents an improved intelligent water drop algorithm for solving multi-depot vehicle routing problems. A simulated annealing algorithm was introduced into the proposed algorithm as a local search metaheuristic to prevent the intelligent water drop algorithm from getting trapped into local minima and also improve its solution quality. In addition, some of the potential problematic issues associated with using simulated annealing that include high computational runtime and exponential calculation of the probability of acceptance criteria, are investigated. The exponential calculation of the probability of acceptance criteria for the simulated annealing based techniques is computationally expensive. Therefore, in order to maximize the performance of the intelligent water drop algorithm using simulated annealing, a better way of calculating the probability of acceptance criteria is considered. The performance of the proposed hybrid algorithm is evaluated by using 33 standard test problems, with the results obtained compared with the solutions offered by four well-known techniques from the subject literature. Experimental results and statistical tests show that the new method possesses outstanding performance in terms of solution quality and runtime consumed. In addition, the proposed algorithm is suitable for solving large-scale problems. PMID:29554662

  3. Enhanced intelligent water drops algorithm for multi-depot vehicle routing problem.

    PubMed

    Ezugwu, Absalom E; Akutsah, Francis; Olusanya, Micheal O; Adewumi, Aderemi O

    2018-01-01

    The intelligent water drop algorithm is a swarm-based metaheuristic algorithm, inspired by the characteristics of water drops in the river and the environmental changes resulting from the action of the flowing river. Since its appearance as an alternative stochastic optimization method, the algorithm has found applications in solving a wide range of combinatorial and functional optimization problems. This paper presents an improved intelligent water drop algorithm for solving multi-depot vehicle routing problems. A simulated annealing algorithm was introduced into the proposed algorithm as a local search metaheuristic to prevent the intelligent water drop algorithm from getting trapped into local minima and also improve its solution quality. In addition, some of the potential problematic issues associated with using simulated annealing that include high computational runtime and exponential calculation of the probability of acceptance criteria, are investigated. The exponential calculation of the probability of acceptance criteria for the simulated annealing based techniques is computationally expensive. Therefore, in order to maximize the performance of the intelligent water drop algorithm using simulated annealing, a better way of calculating the probability of acceptance criteria is considered. The performance of the proposed hybrid algorithm is evaluated by using 33 standard test problems, with the results obtained compared with the solutions offered by four well-known techniques from the subject literature. Experimental results and statistical tests show that the new method possesses outstanding performance in terms of solution quality and runtime consumed. In addition, the proposed algorithm is suitable for solving large-scale problems.
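The simulated-annealing local search and its acceptance criterion can be sketched as follows. This is a generic Metropolis sketch, not the paper's specific cheaper acceptance calculation; it only shows the standard places where the exponential can be avoided: improving moves need no exponential at all, and for worsening moves comparing log(u) with -Δ/T replaces the exponential with a single logarithm that cannot overflow for large Δ.

```python
import math
import random

random.seed(5)

def accept(delta, temperature):
    """Metropolis criterion: always accept improvements; accept a worsening
    move of size delta with probability exp(-delta/temperature), tested via
    log(u) < -delta/temperature to avoid computing the exponential."""
    if delta <= 0:
        return True
    u = 1.0 - random.random()          # in (0, 1], so log(u) is finite
    return math.log(u) < -delta / temperature

def simulated_anneal(cost, neighbour, x0, t0=1.0, cooling=0.95, steps=500):
    x, best = x0, x0
    t = t0
    for _ in range(steps):
        cand = neighbour(x)
        if accept(cost(cand) - cost(x), t):
            x = cand
            if cost(x) < cost(best):
                best = x
        t *= cooling                   # geometric cooling schedule (assumed)
    return best

# Usage: minimize a 1-D multimodal function starting from a poor point.
f = lambda x: x ** 2 + 10 * math.sin(3 * x)
best = simulated_anneal(f, lambda x: x + random.uniform(-0.5, 0.5), x0=4.0)
print(f(best) < f(4.0))
```

In the hybrid algorithm above, this acceptance step is what lets the intelligent water drop search climb out of a local minimum while the temperature is still high, and settles it into greedy descent as the schedule cools.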

  4. Artificial neural network assisted kinetic spectrophotometric technique for simultaneous determination of paracetamol and p-aminophenol in pharmaceutical samples using localized surface plasmon resonance band of silver nanoparticles

    NASA Astrophysics Data System (ADS)

    Khodaveisi, Javad; Dadfarnia, Shayessteh; Haji Shabani, Ali Mohammad; Rohani Moghadam, Masoud; Hormozi-Nezhad, Mohammad Reza

    2015-03-01

    A spectrophotometric analysis method based on the combination of principal component analysis (PCA) with the feed-forward neural network (FFNN) and the radial basis function network (RBFN) was proposed for the simultaneous determination of paracetamol (PAC) and p-aminophenol (PAP). This technique relies on the difference between the kinetic rates of the reactions between the analytes and silver nitrate as the oxidizing agent in the presence of polyvinylpyrrolidone (PVP) as the stabilizer. The reactions are monitored at the analytical wavelength of 420 nm of the localized surface plasmon resonance (LSPR) band of the formed silver nanoparticles (Ag-NPs). Under the optimized conditions, linear calibration graphs were obtained in the concentration ranges of 0.122-2.425 μg mL-1 for PAC and 0.021-5.245 μg mL-1 for PAP. The limits of detection in terms of the standard approach (LODSA) and the upper limit approach (LODULA) were calculated to be 0.027 and 0.032 μg mL-1 for PAC and 0.006 and 0.009 μg mL-1 for PAP. The important parameters of the artificial neural network (ANN) models were optimized. Statistical parameters indicated that the abilities of both methods are comparable. The proposed method was successfully applied to the simultaneous determination of PAC and PAP in pharmaceutical preparations.

  5. Tomographic inversion techniques incorporating physical constraints for line integrated spectroscopy in stellarators and tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pablant, N. A.; Bell, R. E.; Bitter, M.

    2014-11-15

    Accurate tomographic inversion is important for diagnostic systems on stellarators and tokamaks that rely on measurements of line integrated emission spectra. A tomographic inversion technique based on spline optimization with enforcement of constraints is described that can produce unique and physically relevant inversions even in situations with noisy or incomplete input data. This inversion technique is routinely used in the analysis of data from the x-ray imaging crystal spectrometer (XICS) installed at the Large Helical Device. The XICS diagnostic records a 1D image of line integrated emission spectra from impurities in the plasma. Through the use of Doppler spectroscopy and tomographic inversion, XICS can provide profile measurements of the local emissivity, temperature, and plasma flow. Tomographic inversion requires the assumption that these measured quantities are flux surface functions, and that a known plasma equilibrium reconstruction is available. In the case of low signal levels or partial spatial coverage of the plasma cross-section, standard inversion techniques utilizing matrix inversion and linear-regularization often cannot produce unique and physically relevant solutions. The addition of physical constraints, such as parameter ranges, derivative directions, and boundary conditions, allows unique solutions to be reliably found. The constrained inversion technique described here utilizes a modified Levenberg-Marquardt optimization scheme, which introduces a condition avoidance mechanism by selective reduction of search directions. The constrained inversion technique also allows for the addition of more complicated parameter dependencies, for example, geometrical dependence of the emissivity due to asymmetries in the plasma density arising from fast rotation. The accuracy of this constrained inversion technique is discussed, with an emphasis on its applicability to systems with limited plasma coverage.

  6. Tomographic inversion techniques incorporating physical constraints for line integrated spectroscopy in stellarators and tokamaks

    DOE PAGES

    Pablant, N. A.; Bell, R. E.; Bitter, M.; ...

    2014-08-08

    Accurate tomographic inversion is important for diagnostic systems on stellarators and tokamaks that rely on measurements of line integrated emission spectra. A tomographic inversion technique based on spline optimization with enforcement of constraints is described that can produce unique and physically relevant inversions even in situations with noisy or incomplete input data. This inversion technique is routinely used in the analysis of data from the x-ray imaging crystal spectrometer (XICS) installed at LHD. The XICS diagnostic records a 1D image of line integrated emission spectra from impurities in the plasma. Through the use of Doppler spectroscopy and tomographic inversion, XICS can provide profile measurements of the local emissivity, temperature, and plasma flow. Tomographic inversion requires the assumption that these measured quantities are flux surface functions, and that a known plasma equilibrium reconstruction is available. In the case of low signal levels or partial spatial coverage of the plasma cross-section, standard inversion techniques utilizing matrix inversion and linear-regularization often cannot produce unique and physically relevant solutions. The addition of physical constraints, such as parameter ranges, derivative directions, and boundary conditions, allows unique solutions to be reliably found. The constrained inversion technique described here utilizes a modified Levenberg-Marquardt optimization scheme, which introduces a condition avoidance mechanism by selective reduction of search directions. The constrained inversion technique also allows for the addition of more complicated parameter dependencies, for example, geometrical dependence of the emissivity due to asymmetries in the plasma density arising from fast rotation. The accuracy of this constrained inversion technique is discussed, with an emphasis on its applicability to systems with limited plasma coverage.
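The effect of the constraints can be illustrated with a toy chord-integrated inversion in Python. SciPy's bounded trust-region least-squares solver stands in for the authors' modified Levenberg-Marquardt scheme, and the geometry matrix, test profile, and smoothness weight are all invented for the example:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy line-integrated inversion: recover a radial emissivity profile from
# chord-averaged measurements, enforcing the physical constraint eps >= 0
# through solver bounds (a stand-in for the paper's constrained scheme).
r = np.linspace(0.0, 1.0, 50)
true_eps = 1.0 - r**2                         # nonnegative test profile

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(30, r.size))  # invented chord-weight matrix
G /= G.sum(axis=1, keepdims=True)             # each chord averages the profile
data = G @ true_eps

lam = 1e-2                                    # smoothness regularization weight
def residuals(eps):
    # data-mismatch residuals plus second-difference smoothness residuals
    return np.concatenate([G @ eps - data, lam * np.diff(eps, 2)])

fit = least_squares(residuals, x0=np.ones(r.size), bounds=(0.0, np.inf))
```

With only 30 chords for 50 unknowns the unconstrained problem is ill-posed; the nonnegativity bound and the smoothness residuals play the role of the physical constraints described above.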

  7. Summary of Optimization Techniques That Can Be Applied to Suspension System Design

    DOT National Transportation Integrated Search

    1973-03-01

    Summaries are presented of the analytic techniques available for three levitated vehicle suspension optimization problems: optimization of passive elements for fixed configuration; optimization of a free passive configuration; optimization of a free ...

  8. Optimal harvesting for a predator-prey agent-based model using difference equations.

    PubMed

    Oremland, Matthew; Laubenbacher, Reinhard

    2015-03-01

    In this paper, a method known as Pareto optimization is applied in the solution of a multi-objective optimization problem. The system in question is an agent-based model (ABM) wherein global dynamics emerge from local interactions. A system of discrete mathematical equations is formulated in order to capture the dynamics of the ABM; while the original model is built up analytically from the rules of the model, the paper shows how minor changes to the ABM rule set can have a substantial effect on model dynamics. To address this issue, we introduce parameters into the equation model that track such changes. The equation model is amenable to mathematical theory—we show how stability analysis can be performed and validated using ABM data. We then reduce the equation model to a simpler version and implement changes to allow controls from the ABM to be tested using the equations. Cohen's weighted κ is proposed as a measure of similarity between the equation model and the ABM, particularly with respect to the optimization problem. The reduced equation model is used to solve a multi-objective optimization problem via a technique known as Pareto optimization, a heuristic evolutionary algorithm. Results show that the equation model is a good fit for ABM data; Pareto optimization provides a suite of solutions to the multi-objective optimization problem that can be implemented directly in the ABM.
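The Pareto step can be made concrete with a short Python sketch: given candidate control policies scored on two objectives (both minimized; the scores below are invented), keep the non-dominated set:

```python
def dominates(q, p):
    """q dominates p if q is no worse in every objective and better in one."""
    return all(a <= b for a, b in zip(q, p)) and any(a < b for a, b in zip(q, p))

def pareto_front(points):
    """Non-dominated subset of a list of objective tuples (minimization)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# e.g. invented (negative yield, extinction risk) scores for five policies
candidates = [(1, 3), (2, 2), (3, 1), (2, 3), (3, 3)]
front = pareto_front(candidates)
```

Evolutionary Pareto optimizers keep exactly such a non-dominated set as their archive, offering the decision-maker a suite of trade-off solutions rather than a single optimum.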

  9. Quantitative local analysis of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Topcu, Ufuk

    This thesis investigates quantitative methods for local robustness and performance analysis of nonlinear dynamical systems with polynomial vector fields. We propose measures to quantify systems' robustness against uncertainties in initial conditions (regions-of-attraction) and external disturbances (local reachability/gain analysis). S-procedure and sum-of-squares relaxations are used to translate Lyapunov-type characterizations to sum-of-squares optimization problems. These problems are typically bilinear/nonconvex (due to local analysis rather than global) and their size grows rapidly with state/uncertainty space dimension. Our approach is based on exploiting system theoretic interpretations of these optimization problems to reduce their complexity. We propose a methodology incorporating simulation data in formal proof construction, enabling a more reliable and efficient search for robustness and performance certificates compared to the direct use of general purpose solvers. This technique is adapted both to region-of-attraction and reachability analysis. We extend the analysis to uncertain systems by taking an intentionally simplistic and potentially conservative route, namely employing parameter-independent rather than parameter-dependent certificates. The conservatism is simply reduced by a branch-and-bound type refinement procedure. The main thrust of these methods is their suitability for parallel computing, achieved by decomposing otherwise challenging problems into relatively tractable smaller ones. We demonstrate the proposed methods on several small/medium size examples in each chapter and apply each method to a benchmark example with an uncertain short period pitch axis model of an aircraft. Additional practical issues leading to a more rigorous basis for the proposed methodology as well as promising further research topics are also addressed.
We show that stability of linearized dynamics is not only necessary but also sufficient for the feasibility of the formulations in region-of-attraction analysis. Furthermore, we generalize an upper bound refinement procedure in local reachability/gain analysis, which effectively generates non-polynomial certificates from polynomial ones. Finally, broader applicability of optimization-based tools stringently depends on the availability of scalable/hierarchical algorithms. As an initial step in this direction, we propose a local small-gain theorem and apply it to stability region analysis in the presence of unmodeled dynamics.
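A numerical stand-in for the region-of-attraction estimate can be sketched in Python for a classic test system, the time-reversed Van der Pol oscillator, with the quadratic candidate V(x) = x1^2 + x2^2. The thesis certifies such estimates rigorously with sum-of-squares programming; the sampling check below merely illustrates the level-set idea and proves nothing:

```python
import numpy as np

# Time-reversed Van der Pol oscillator: the origin is asymptotically stable
# and its region of attraction is bounded by the (repelling) limit cycle.
def f(x):
    x1, x2 = x
    return np.array([-x2, x1 - (1.0 - x1**2) * x2])

def vdot(x):
    """Derivative of V(x) = x1^2 + x2^2 along trajectories of f."""
    return 2.0 * np.asarray(x, float) @ f(x)

def largest_certified_level(levels, n_angles=60):
    """Largest c such that V-dot < 0 on sample points filling {V <= c}
    (excluding the origin).  A sampling check only, not a certificate."""
    # small angular offset keeps samples off the x2 = 0 axis where V-dot = 0
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False) + 0.01
    scales = np.linspace(0.2, 1.0, 5)
    best = 0.0
    for c in levels:
        radii = np.sqrt(c) * scales
        ok = all(vdot((rad * np.cos(th), rad * np.sin(th))) < 0.0
                 for rad in radii for th in thetas)
        if ok:
            best = c
    return best
```

For this system V-dot = -2(1 - x1^2) x2^2, so the check passes up to the level set c = 1 and fails beyond it, mirroring how the SOS-based search grows the certified sublevel set.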

  10. Edge control in CNC polishing, paper 2: simulation and validation of tool influence functions on edges.

    PubMed

    Li, Hongyu; Walker, David; Yu, Guoyu; Sayle, Andrew; Messelink, Wilhelmus; Evans, Rob; Beaucamp, Anthony

    2013-01-14

    Edge mis-figure is regarded as one of the most difficult technical issues in manufacturing segments for extremely large telescopes, as it can dominate key aspects of performance. A novel edge-control technique has been developed, based on the 'Precessions' polishing technique, for which accurate and stable edge tool influence functions (TIFs) are crucial. In the first paper in this series [D. Walker Opt. Express 20, 19787-19798 (2012)], multiple parameters were experimentally optimized using an extended set of experiments. The first purpose of this new work is to 'short circuit' this procedure through modeling. This also gives the prospect of optimizing local (as distinct from global) polishing for edge mis-figure, now under separate development. This paper presents a model that can predict edge TIFs based on surface-speed profiles and pressure distributions over the polishing spot at the edge of the part, the latter calculated by finite element analysis and verified by direct force measurement. This paper also presents a hybrid-measurement method for edge TIFs to verify the simulation results. Experimental and simulation results show good agreement.

  11. A novel method for characterizing the impact response of functionally graded plates

    NASA Astrophysics Data System (ADS)

    Larson, Reid A.

    Functionally graded material (FGM) plates are advanced composites with properties that vary continuously through the thickness of the plate. Metal-ceramic FGM plates have been proposed for use in thermal protection systems where a metal-rich interior surface of the plate gradually transitions to a ceramic-rich exterior surface of the plate. The ability of FGMs to resist impact loads must be demonstrated before using them in high-temperature environments in service. This dissertation presents a novel technique by which the impact response of FGM plates is characterized for low-velocity, low- to medium-energy impact loads. An experiment was designed where strain histories in FGM plates were collected during impact events. These strain histories were used to validate a finite element simulation of the test. A parameter estimation technique was developed to estimate local material properties in the anisotropic, non-homogenous FGM plates to optimize the finite element simulations. The optimized simulations captured the physics of the impact events. The method allows research & design engineers to make informed decisions necessary to implement FGM plates in aerospace platforms.

  12. A robust automated left ventricle region of interest localization technique using a cardiac cine MRI atlas

    NASA Astrophysics Data System (ADS)

    Ben-Zikri, Yehuda Kfir; Linte, Cristian A.

    2016-03-01

    Region of interest detection is a precursor to many medical image processing and analysis applications, including segmentation, registration and other image manipulation techniques. The optimal region of interest is often selected manually, based on empirical knowledge and features of the image dataset. However, if inconsistently identified, the selected region of interest may greatly affect the subsequent image analysis or interpretation steps, in turn leading to incomplete assessment during computer-aided diagnosis or incomplete visualization or identification of the surgical targets, if employed in the context of pre-procedural planning or image-guided interventions. Therefore, the need for robust, accurate and computationally efficient region of interest localization techniques is prevalent in many modern computer-assisted diagnosis and therapy applications. Here we propose a fully automated, robust, a priori learning-based approach that provides reliable estimates of the left and right ventricle features from cine cardiac MR images. The proposed approach leverages the temporal frame-to-frame motion extracted across a range of short axis left ventricle slice images with a small training set generated from less than 10% of the population. This approach is based on histogram of oriented gradients features weighted by local intensities to first identify an initial region of interest depicting the left and right ventricles that exhibits the greatest extent of cardiac motion. This region is correlated with the homologous region that belongs to the training dataset that best matches the test image using feature vector correlation techniques. Lastly, the optimal left ventricle region of interest of the test image is identified based on the correlation of known ground truth segmentations associated with the training dataset deemed closest to the test image. 
The proposed approach was tested on a population of 100 patient datasets and validated against the ground truth region of interest of the test images manually annotated by experts. The tool successfully identified a mask around the LV and RV and, furthermore, the minimal region of interest around the LV that fully encloses the left ventricle in all testing datasets, yielding a 98% overlap with the corresponding ground truth. The mean absolute distance error between the two contours, normalized by the radius of the ground truth, is 0.20 +/- 0.09.

  13. Optimizing the Galileo space communication link

    NASA Technical Reports Server (NTRS)

    Statman, J. I.

    1994-01-01

    The Galileo mission was originally designed to investigate Jupiter and its moons utilizing a high-rate, X-band (8415 MHz) communication downlink with a maximum rate of 134.4 kb/sec. However, following the failure of the high-gain antenna (HGA) to fully deploy, a completely new communication link design was established that is based on Galileo's S-band (2295 MHz), low-gain antenna (LGA). The new link relies on data compression, local and intercontinental arraying of antennas, a (14,1/4) convolutional code, a (255,M) variable-redundancy Reed-Solomon code, decoding feedback, and techniques to reprocess recorded data to greatly reduce data losses during signal acquisition. The combination of these techniques will enable return of significant science data from the mission.

  14. Radiotherapy Monte Carlo simulation using cloud computing technology.

    PubMed

    Poole, C M; Cornelius, I; Trapp, J V; Langton, C M

    2012-12-01

    Cloud computing allows for vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows for Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation, without the need for dedicated local computer hardware, as a proof of principle.
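The cost and timing relations stated above are easy to reproduce, assuming per-machine billing rounded up to whole hours (the usual cloud billing model this result relies on):

```python
import math

def completion_time(total_cpu_hours, n_machines):
    """Ideal 1/n wall-clock scaling for an embarrassingly parallel simulation."""
    return total_cpu_hours / n_machines

def relative_cost(total_cpu_hours, n_machines):
    """Cost relative to one machine, with each of the n machines billed in
    whole hours: n * ceil(T/n) / T.  Equals 1 exactly when n divides T."""
    return n_machines * math.ceil(total_cpu_hours / n_machines) / total_cpu_hours
```

For a 12 CPU-hour simulation, n = 4 machines finish in 3 h at relative cost 1.0, while n = 5 machines are billed 15 machine-hours, a relative cost of 1.25; this is why cost is optimal when n is a factor of the total simulation time in hours.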

  15. An adaptive evolutionary multi-objective approach based on simulated annealing.

    PubMed

    Li, H; Landa-Silva, D

    2011-01-01

    A multi-objective optimization problem can be solved by decomposing it into one or more single objective subproblems in some multi-objective metaheuristic algorithms. Each subproblem corresponds to one weighted aggregation function. For example, MOEA/D is an evolutionary multi-objective optimization (EMO) algorithm that attempts to optimize multiple subproblems simultaneously by evolving a population of solutions. However, the performance of MOEA/D highly depends on the initial setting and diversity of the weight vectors. In this paper, we present an improved version of MOEA/D, called EMOSA, which incorporates an advanced local search technique (simulated annealing) and adapts the search directions (weight vectors) corresponding to various subproblems. In EMOSA, the weight vector of each subproblem is adaptively modified at the lowest temperature in order to diversify the search toward the unexplored parts of the Pareto-optimal front. Our computational results show that EMOSA outperforms six other well established multi-objective metaheuristic algorithms on both the (constrained) multi-objective knapsack problem and the (unconstrained) multi-objective traveling salesman problem. Moreover, the effects of the main algorithmic components and parameter sensitivities on the search performance of EMOSA are experimentally investigated.
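The decomposition that EMOSA builds on can be illustrated with the weighted Tchebycheff aggregation commonly used in MOEA/D-family methods; each weight vector turns the multi-objective problem into one single-objective subproblem, which is why adapting the weights redirects the search:

```python
def tchebycheff(objectives, weights, ideal):
    """Weighted Tchebycheff scalarization: weighted max-norm distance of an
    objective vector from the ideal point.  Minimizing it for a fixed weight
    vector defines one subproblem of the decomposition."""
    return max(w * abs(f - z) for w, f, z in zip(weights, objectives, ideal))

# Two solutions scored on two objectives (invented values): different weight
# vectors, i.e. different search directions, prefer different solutions.
a, b = (3.0, 1.0), (1.0, 3.0)
ideal = (0.0, 0.0)
```

Under equal weights both solutions score the same, but the weight vector (1, 0) scores b at 1.0 and a at 3.0, so the subproblem it defines pulls the search toward b.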

  16. Optimal two-stage dynamic treatment regimes from a classification perspective with censored survival data.

    PubMed

    Hager, Rebecca; Tsiatis, Anastasios A; Davidian, Marie

    2018-05-18

    Clinicians often make multiple treatment decisions at key points over the course of a patient's disease. A dynamic treatment regime is a sequence of decision rules, each mapping a patient's observed history to the set of available, feasible treatment options at each decision point, and thus formalizes this process. An optimal regime is one leading to the most beneficial outcome on average if used to select treatment for the patient population. We propose a method for estimation of an optimal regime involving two decision points when the outcome of interest is a censored survival time, based on maximizing a locally efficient, doubly robust, augmented inverse probability weighted estimator for the average outcome over a class of regimes. By casting this optimization as a classification problem, we exploit well-studied classification techniques such as support vector machines to characterize the class of regimes and facilitate implementation via a backward iterative algorithm. Simulation studies of performance and application of the method to data from a sequential, multiple assignment randomized clinical trial in acute leukemia are presented. © 2018, The International Biometric Society.

  17. Impulsive time-free transfers between halo orbits

    NASA Astrophysics Data System (ADS)

    Hiday, L. A.; Howell, K. C.

    1992-08-01

    A methodology is developed to design optimal time-free impulsive transfers between three-dimensional halo orbits in the vicinity of the interior L1 libration point of the sun-earth/moon barycenter system. The transfer trajectories are optimal in the sense that the total characteristic velocity required to implement the transfer exhibits a local minimum. Criteria are established whereby the implementation of a coast in the initial orbit, a coast in the final orbit, or dual coasts accomplishes a reduction in fuel expenditure. The optimality of a reference two-impulse transfer can be determined by examining the slope at the endpoints of a plot of the magnitude of the primer vector on the reference trajectory. If the initial and final slopes of the primer magnitude are zero, the transfer trajectory is optimal; otherwise, the execution of coasts is warranted. The optimal time of flight on the time-free transfer, and consequently, the departure and arrival locations on the halo orbits are determined by the unconstrained minimization of a function of two variables using a multivariable search technique. Results indicate that the cost can be substantially diminished by the allowance for coasts in the initial and final libration-point orbits.

  18. Impulsive Time-Free Transfers Between Halo Orbits

    NASA Astrophysics Data System (ADS)

    Hiday-Johnston, L. A.; Howell, K. C.

    1996-12-01

    A methodology is developed to design optimal time-free impulsive transfers between three-dimensional halo orbits in the vicinity of the interior L1 libration point of the Sun-Earth/Moon barycenter system. The transfer trajectories are optimal in the sense that the total characteristic velocity required to implement the transfer exhibits a local minimum. Criteria are established whereby the implementation of a coast in the initial orbit, a coast in the final orbit, or dual coasts accomplishes a reduction in fuel expenditure. The optimality of a reference two-impulse transfer can be determined by examining the slope at the endpoints of a plot of the magnitude of the primer vector on the reference trajectory. If the initial and final slopes of the primer magnitude are zero, the transfer trajectory is optimal; otherwise, the execution of coasts is warranted. The optimal time of flight on the time-free transfer, and consequently, the departure and arrival locations on the halo orbits are determined by the unconstrained minimization of a function of two variables using a multivariable search technique. Results indicate that the cost can be substantially diminished by the allowance for coasts in the initial and final libration-point orbits.
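The final step above is an unconstrained minimization of a function of two variables. It can be sketched with SciPy's derivative-free Nelder-Mead simplex search; the cost surface below is an invented smooth surrogate, since the real total characteristic velocity requires the full transfer model:

```python
import numpy as np
from scipy.optimize import minimize

# Invented smooth surrogate for total characteristic velocity as a function
# of the two free variables (departure and arrival locations on the halos).
def cost(theta):
    t1, t2 = theta
    return (np.sin(t1) - 0.3)**2 + (np.cos(t2) + 0.2)**2 + 0.1 * (t1 - t2)**2

# Derivative-free two-variable search, as a stand-in for the paper's
# (unspecified here) multivariable search technique.
res = minimize(cost, x0=[0.0, 0.0], method="Nelder-Mead")
```

Any local minimizer found this way fixes both the departure and arrival locations and the time of flight on the time-free transfer.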

  19. Effective Clipart Image Vectorization through Direct Optimization of Bezigons.

    PubMed

    Yang, Ming; Chao, Hongyang; Zhang, Chi; Guo, Jun; Yuan, Lu; Sun, Jian

    2016-02-01

    Bezigons, i.e., closed paths composed of Bézier curves, have been widely employed to describe shapes in image vectorization results. However, most existing vectorization techniques infer the bezigons by simply approximating an intermediate vector representation (such as polygons). Consequently, the resultant bezigons are sometimes imperfect due to accumulated errors, fitting ambiguities, and a lack of curve priors, especially for low-resolution images. In this paper, we describe a novel method for vectorizing clipart images. In contrast to previous methods, we directly optimize the bezigons rather than using other intermediate representations; therefore, the resultant bezigons are not only of higher fidelity compared with the original raster image but also more plausible, as if traced by a proficient expert. To enable such optimization, we have overcome several challenges and have devised a differentiable data energy as well as several curve-based prior terms. To improve the efficiency of the optimization, we also take advantage of the local control property of bezigons and adopt an overlapped piecewise optimization strategy. The experimental results show that our method outperforms both the current state-of-the-art method and commonly used commercial software in terms of bezigon quality.

  20. Ambient Mass Spectrometry Imaging Using Direct Liquid Extraction Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laskin, Julia; Lanekoff, Ingela

    2015-11-13

    Mass spectrometry imaging (MSI) is a powerful analytical technique that enables label-free spatial localization and identification of molecules in complex samples [1-4]. MSI applications range from forensics [5] to clinical research [6] and from understanding microbial communication [7,8] to imaging biomolecules in tissues [1,9,10]. Recently, MSI protocols have been reviewed [11]. Ambient ionization techniques enable direct analysis of complex samples under atmospheric pressure without special sample pretreatment [3,12-16]. In fact, in ambient ionization mass spectrometry, sample processing (e.g., extraction, dilution, preconcentration, or desorption) occurs during the analysis [17]. This substantially speeds up analysis and eliminates any possible effects of sample preparation on the localization of molecules in the sample [3,8,12-14,18-20]. Venter and co-workers have classified ambient ionization techniques into three major categories based on the sample processing steps involved: 1) liquid extraction techniques, in which analyte molecules are removed from the sample and extracted into a solvent prior to ionization; 2) desorption techniques capable of generating free ions directly from substrates; and 3) desorption techniques that produce larger particles subsequently captured by an electrospray plume and ionized [17]. This review focuses on localized analysis and ambient imaging of complex samples using a subset of ambient ionization methods broadly defined as “liquid extraction techniques” based on the classification introduced by Venter and co-workers [17]. Specifically, we include techniques where analyte molecules are desorbed from solid or liquid samples using charged droplet bombardment, liquid extraction, physisorption, chemisorption, mechanical force, laser ablation, or laser capture microdissection. Analyte extraction is followed by soft ionization that generates ions corresponding to intact species. 
Some of the key advantages of liquid extraction techniques include the ease of operation, ability to analyze samples in their native environments, speed of analysis, and ability to tune the extraction solvent composition to the problem at hand. For example, solvent composition may be optimized for efficient extraction of different classes of analytes from the sample or for quantification or online derivatization through reactive analysis. In this review, we will: 1) introduce individual liquid extraction techniques capable of localized analysis and imaging, 2) describe approaches for quantitative MSI experiments free of matrix effects, 3) discuss advantages of reactive analysis for MSI experiments, and 4) highlight selected applications (published between 2012 and 2015) that focus on imaging and spatial profiling of molecules in complex biological and environmental samples.

  1. Multi-level adaptive finite element methods. 1: Variation problems

    NASA Technical Reports Server (NTRS)

    Brandt, A.

    1979-01-01

    A general numerical strategy for solving partial differential equations and other functional problems by cycling between coarser and finer levels of discretization is described. Optimal discretization schemes are provided together with very fast general solvers. It is described in terms of finite element discretizations of general nonlinear minimization problems. The basic processes (relaxation sweeps, fine-grid-to-coarse-grid transfers of residuals, coarse-to-fine interpolations of corrections) are directly and naturally determined by the objective functional and the sequence of approximation spaces. The natural processes, however, are not always optimal. Concrete examples are given and some new techniques are reviewed, including local truncation extrapolation and a multilevel procedure for inexpensively solving chains of many boundary value problems, such as those arising in the solution of time-dependent problems.
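The coarse/fine cycling can be sketched for the 1D Poisson problem -u'' = f with zero boundary values. This toy two-grid cycle (weighted-Jacobi relaxation sweeps, full-weighting restriction of the residual, a direct coarse-grid solve, and linear interpolation of the correction) exercises exactly the basic processes listed above:

```python
import numpy as np

def jacobi(u, f, h, sweeps=3, omega=2.0 / 3.0):
    """Weighted-Jacobi relaxation sweeps for -u'' = f (interior points only)."""
    for _ in range(sweeps):
        u[1:-1] = ((1 - omega) * u[1:-1]
                   + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]))
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def two_grid(u, f, h):
    """One two-grid cycle: smooth, restrict the residual, solve the coarse
    error equation directly, interpolate the correction back, smooth again."""
    u = jacobi(u, f, h)                      # pre-smoothing on the fine grid
    r = residual(u, f, h)
    rc = np.zeros((u.size + 1) // 2)         # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    n = rc.size                              # coarse operator: tridiag/(2h)^2
    A = ((np.diag(2.0 * np.ones(n - 2))
          - np.diag(np.ones(n - 3), 1)
          - np.diag(np.ones(n - 3), -1)) / (2 * h) ** 2)
    ec = np.zeros(n)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])  # coarse-grid error equation
    e = np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)
    return jacobi(u + e, f, h)               # correct, then post-smooth

# Manufactured problem with known solution u = sin(pi x) on 65 points.
x = np.linspace(0.0, 1.0, 65)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros_like(x)
for _ in range(8):
    u = two_grid(u, f, h)
```

A full multigrid solver replaces the direct coarse solve with a recursive call to the same cycle, but the two-level version already shows how relaxation and coarse-grid correction divide the work between oscillatory and smooth error components.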

  2. MR arthrography in glenohumeral instability.

    PubMed

    Van der Woude, H J; Vanhoenacker, F M

    2007-01-01

    The impact of accurate imaging in the work-up of patients with glenohumeral instability is high. Results of imaging may directly influence the surgeon's strategy to perform an arthroscopic or open treatment for (recurrent) instability. Magnetic resonance (MR) imaging, and MR arthrography in particular, is the optimal technique to detect, localize and characterize injuries of the capsular-labrum complex. Besides T1-weighted sequences with fat suppression in axial, oblique sagittal and coronal directions, an additional series in the abduction and exorotation (ABER) position is highly advocated. This ABER series optimally depicts abnormalities of the inferior capsular-labrum complex and partial undersurface tears of the supraspinatus tendon. Knowledge of the different anatomical variants that may mimic labral tears and of variants of the classic Bankart lesion is useful in the analysis of shoulder MR arthrograms in patients with glenohumeral instability.

  3. River velocities from sequential multispectral remote sensing images

    NASA Astrophysics Data System (ADS)

    Chen, Wei; Mied, Richard P.

    2013-06-01

    We address the problem of extracting surface velocities from a pair of multispectral remote sensing images over rivers using a new nonlinear multiple-tracer form of the global optimal solution (GOS). The derived velocity field is a valid solution across the image domain to the nonlinear system of equations obtained by minimizing a cost function inferred from the conservation constraint equations for multiple tracers. This is done by deriving an iteration equation for the velocity, based on the multiple-tracer displaced frame difference equations, and a local approximation to the velocity field. The number of velocity equations exceeds the number of velocity components, and thus over-constrains the solution. The iterative technique uses Gauss-Newton and Levenberg-Marquardt methods and our own algorithm of progressive relaxation of the over-constraint. We demonstrate the nonlinear multiple-tracer GOS technique with sequential multispectral Landsat and ASTER images over a portion of the Potomac River in MD/VA, and derive a dense field of accurate velocity vectors. We compare the GOS river velocities with those from over 12 years of data at four NOAA reference stations, and find good agreement. We discuss how to find the appropriate spatial and temporal resolutions to allow optimization of the technique for specific rivers.
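The over-constrained structure is easiest to see at a single pixel: each tracer (spectral band) contributes one linearized conservation equation in the two unknown velocity components, so K > 2 tracers give an overdetermined system whose least-squares solution is the linear core of a Gauss-Newton step. A minimal sketch with synthetic gradients (not actual imagery):

```python
import numpy as np

def solve_velocity(Ix, Iy, It):
    """Least-squares velocity (u, v) at one pixel from K tracer conservation
    equations I_t + u*I_x + v*I_y = 0 (K > 2 over-constrains u and v)."""
    A = np.column_stack([Ix, Iy])       # K x 2 matrix of spatial gradients
    b = -np.asarray(It, float)          # K temporal derivatives
    uv, *_ = np.linalg.lstsq(A, b, rcond=None)
    return uv

# Synthetic check: five tracers made exactly consistent with a known velocity.
rng = np.random.default_rng(2)
Ix, Iy = rng.standard_normal(5), rng.standard_normal(5)
true_uv = np.array([0.5, -0.2])
It = -(true_uv[0] * Ix + true_uv[1] * Iy)
```

In the full method the displaced-frame-difference equations are nonlinear in the velocity, so this linear solve is repeated inside Gauss-Newton or Levenberg-Marquardt iterations rather than applied once.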

  4. Optimizer convergence and local minima errors and their clinical importance

    NASA Astrophysics Data System (ADS)

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R.

    2003-09-01

Two errors common in inverse treatment planning optimization have been investigated. The first is the optimizer convergence error, which appears because of imperfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second is the local minima error, which occurs when the objective function and/or the feasible solution space is not convex. The magnitude of the errors, their importance relative to other errors, and their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, stochastic simulated annealing and a deterministic gradient method, were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., those due to inaccuracy of current dose calculation algorithms. This indicates that stopping criteria could often be relaxed, leading to optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained, the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2), indicating the clinical importance of the local minima produced by physical optimization.
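Both error types can be reproduced on toy objectives (a hedged illustration, not the paper's treatment-planning setup): a loose stopping criterion leaves a residual convergence error even on a convex function, and on a non-convex function a plain descent converges to whichever local minimum its starting basin contains.

```python
import numpy as np

def gradient_descent(grad, x0, step=0.1, tol=1e-3, max_iter=10_000):
    """Plain gradient descent; stops when the gradient norm falls below tol."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - step * g
    return x

# Convex objective f(x) = (x - 2)^2: the only error left is the optimizer
# convergence error, set entirely by the stopping tolerance.
x_loose = gradient_descent(lambda x: 2 * (x - 2), x0=[0.0], tol=1e-1)
x_tight = gradient_descent(lambda x: 2 * (x - 2), x0=[0.0], tol=1e-6)
conv_err_loose = abs(x_loose[0] - 2.0)
conv_err_tight = abs(x_tight[0] - 2.0)

# Non-convex objective f(x) = x^4 - 3x^2 + x has two basins; descent
# started in the wrong basin ends in the poorer local minimum.
grad_nc = lambda x: 4 * x**3 - 6 * x + 1
x_left = gradient_descent(grad_nc, x0=[-2.0], step=0.01, tol=1e-8)   # global
x_right = gradient_descent(grad_nc, x0=[+2.0], step=0.01, tol=1e-8)  # local
```

The paper's point is the clinical weighting of these two effects: the first shrinks with a tighter tolerance, the second does not.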

  5. Optimizer convergence and local minima errors and their clinical importance.

    PubMed

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R

    2003-09-07

Two errors common in inverse treatment planning optimization have been investigated. The first is the optimizer convergence error, which appears because of imperfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second is the local minima error, which occurs when the objective function and/or the feasible solution space is not convex. The magnitude of the errors, their importance relative to other errors, and their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, stochastic simulated annealing and a deterministic gradient method, were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., those due to inaccuracy of current dose calculation algorithms. This indicates that stopping criteria could often be relaxed, leading to optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained, the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2), indicating the clinical importance of the local minima produced by physical optimization.

  6. Optimal resource states for local state discrimination

    NASA Astrophysics Data System (ADS)

    Bandyopadhyay, Somshubhro; Halder, Saronath; Nathanson, Michael

    2018-02-01

    We study the problem of locally distinguishing pure quantum states using shared entanglement as a resource. For a given set of locally indistinguishable states, we define a resource state to be useful if it can enhance local distinguishability and optimal if it can distinguish the states as well as global measurements and is also minimal with respect to a partial ordering defined by entanglement and dimension. We present examples of useful resources and show that an entangled state need not be useful for distinguishing a given set of states. We obtain optimal resources with explicit local protocols to distinguish multipartite Greenberger-Horne-Zeilinger and graph states and also show that a maximally entangled state is an optimal resource under one-way local operations and classical communication to distinguish any bipartite orthonormal basis which contains at least one entangled state of full Schmidt rank.

  7. Networked localization of sniper shots using acoustics

    NASA Astrophysics Data System (ADS)

    Hengy, S.; Hamery, P.; De Mezzo, S.; Duffner, P.

    2011-06-01

The presence of snipers in modern conflicts leads to high insecurity for soldiers. In order to improve the soldier's protection against this threat, the French-German Research Institute of Saint-Louis (ISL) initiated studies in the domain of acoustic localization of shots. Mobile antennas mounted on the soldier's helmet were initially used for real-time detection, classification and localization of sniper shots. This approach showed good performance in land scenarios, and also in urban scenarios provided the array was in the shot corridor, meaning that the microphones first detect the direct wave and then the reflections of the Mach and muzzle waves. As soon as the acoustic arrays were not near the shot corridor (only reflections are detected), this solution lost its efficiency and erroneous position estimates were given. In order to estimate the position of the shooter in every kind of urban scenario, ISL started studying time reversal techniques. Knowing the position of every reflective object in the environment (buildings, walls, ...), it should be possible to estimate the position of the shooter. First, a synthetic propagation algorithm was developed and validated for real-scale applications. It was then validated on small-scale models, allowing us to test our time-reversal-based algorithms in the laboratory. In this paper we discuss the challenges raised by applying time reversal techniques to sniper detection, examine the main difficulties that can be encountered, and propose solutions to optimize the use of this technique.
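The direct-path case that the helmet arrays handle well can be sketched as a time-difference-of-arrival (TDOA) solve (a simplified toy under free-field assumptions, not ISL's time-reversal method, which is precisely for the case where this model breaks down):

```python
import numpy as np
from scipy.optimize import least_squares

C = 343.0  # nominal speed of sound, m/s

# Four microphones at known positions and a hypothetical source.
mics = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_src = np.array([3.0, 7.0])

# Simulated absolute arrival times; the emission instant is unknown
# to the solver, so only time differences carry information.
t = np.linalg.norm(mics - true_src, axis=1) / C

def tdoa_residuals(p):
    d = np.linalg.norm(mics - p, axis=1)
    # Differencing against microphone 0 cancels the unknown shot instant.
    return (d - d[0]) / C - (t - t[0])

sol = least_squares(tdoa_residuals, x0=[5.0, 5.0])
est_src = sol.x
```

When only reflections are detected, these residuals no longer model the physics, which is what motivates back-propagating the field through a known building geometry instead.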

  8. Local-in-Time Adjoint-Based Method for Optimal Control/Design Optimization of Unsteady Compressible Flows

    NASA Technical Reports Server (NTRS)

    Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.

    2009-01-01

We study local-in-time adjoint-based methods for minimization of flow-matching functionals subject to the 2-D unsteady compressible Euler equations. The key idea of the local-in-time method is to construct a very accurate approximation of the global-in-time adjoint equations and the corresponding sensitivity derivative by using only local information available on each time subinterval. In contrast to conventional time-dependent adjoint-based optimization methods, which require backward-in-time integration of the adjoint equations over the entire time interval, the local-in-time method solves local adjoint equations sequentially over each time subinterval. Since each subinterval contains relatively few time steps, the storage cost of the local-in-time method is much lower than that of the global adjoint formulation, thus making time-dependent optimization feasible for practical applications. The paper presents a detailed comparison of the local- and global-in-time adjoint-based methods for minimization of a tracking functional governed by the Euler equations describing the flow around a circular bump. Our numerical results show that the local-in-time method converges to the same optimal solution obtained with the global counterpart, while drastically reducing the memory cost as compared to the global-in-time adjoint formulation.

  9. Streamflow Prediction based on Chaos Theory

    NASA Astrophysics Data System (ADS)

    Li, X.; Wang, X.; Babovic, V. M.

    2015-12-01

Chaos theory is a popular method in hydrologic time series prediction. The local model (LM) based on this theory uses time-delay embedding to reconstruct the phase-space diagram. The efficacy of this method depends on the embedding parameters, i.e. the embedding dimension, time lag, and number of nearest neighbors. Optimal estimation of these parameters is thus critical to the application of the local model. However, these embedding parameters are conventionally estimated separately, using Average Mutual Information (AMI) and False Nearest Neighbors (FNN). This may lead to a local optimum and thus limit the prediction accuracy. Considering these limitations, this paper applies a local model combined with simulated annealing (SA) to find the global optimum of the embedding parameters. It is also compared with another global optimization approach, the Genetic Algorithm (GA). The proposed hybrid methods are applied to daily and monthly streamflow time series for examination. The results show that global optimization enables the local model to provide more accurate predictions than local optimization. The LM combined with SA shows advantages in terms of computational efficiency. The proposed scheme can also be applied to other fields, such as prediction of hydro-climatic time series, error correction, etc.
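The local model can be sketched as time-delay embedding plus a k-nearest-neighbour predictor, with the embedding parameters chosen jointly by minimizing out-of-sample error (a toy on the Hénon map; an exhaustive grid stands in for the paper's simulated annealing, which matters only when the parameter space is too large to enumerate):

```python
import numpy as np

# Hénon map x-coordinate: a scalar series whose dynamics need two delays.
N = 1200
x = np.empty(N)
x[0], x[1] = 0.1, 0.1
for i in range(2, N):
    x[i] = 1.0 - 1.4 * x[i - 1] ** 2 + 0.3 * x[i - 2]

def knn_one_step_rmse(x, m, tau, split=800, k=3):
    """Phase-space reconstruction + k-NN local model, one-step-ahead RMSE.
    Embedding vectors are [x[i], x[i+tau], ..., x[i+(m-1)tau]]."""
    n = len(x) - (m - 1) * tau - 1
    X = np.column_stack([x[j * tau : j * tau + n] for j in range(m)])
    y = x[(m - 1) * tau + 1 : (m - 1) * tau + 1 + n]
    Xtr, ytr, Xte, yte = X[:split], y[:split], X[split:], y[split:]
    preds = []
    for v in Xte:
        idx = np.argsort(np.linalg.norm(Xtr - v, axis=1))[:k]
        preds.append(ytr[idx].mean())
    return float(np.sqrt(np.mean((np.array(preds) - yte) ** 2)))

# Joint (global) search over embedding dimension m and lag tau, instead
# of fixing them separately with AMI and FNN.
scores = {(m, tau): knn_one_step_rmse(x, m, tau)
          for m in (1, 2, 3) for tau in (1, 2)}
best = min(scores, key=scores.get)
```

On this series the joint search should prefer an embedding dimension of at least two, since a single delayed coordinate cannot resolve the Hénon dynamics.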

  10. Intelligent design optimization of a shape-memory-alloy-actuated reconfigurable wing

    NASA Astrophysics Data System (ADS)

    Lagoudas, Dimitris C.; Strelec, Justin K.; Yen, John; Khan, Mohammad A.

    2000-06-01

    The unique thermal and mechanical properties offered by shape memory alloys (SMAs) present exciting possibilities in the field of aerospace engineering. When properly trained, SMA wires act as linear actuators by contracting when heated and returning to their original shape when cooled. It has been shown experimentally that the overall shape of an airfoil can be altered by activating several attached SMA wire actuators. This shape-change can effectively increase the efficiency of a wing in flight at several different flow regimes. To determine the necessary placement of these wire actuators within the wing, an optimization method that incorporates a fully-coupled structural, thermal, and aerodynamic analysis has been utilized. Due to the complexity of the fully-coupled analysis, intelligent optimization methods such as genetic algorithms have been used to efficiently converge to an optimal solution. The genetic algorithm used in this case is a hybrid version with global search and optimization capabilities augmented by the simplex method as a local search technique. For the reconfigurable wing, each chromosome represents a realizable airfoil configuration and its genes are the SMA actuators, described by their location and maximum transformation strain. The genetic algorithm has been used to optimize this design problem to maximize the lift-to-drag ratio for a reconfigured airfoil shape.
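The hybrid scheme described above (global genetic search polished by a simplex local search) can be sketched on a standard multimodal test function; the operators below are illustrative stand-ins, not the paper's SMA-specific chromosome encoding:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

def rastrigin(x):
    """Multimodal benchmark; global minimum 0 at the origin."""
    x = np.asarray(x)
    return float(10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

def hybrid_ga(f, dim=2, pop=30, gens=40, bounds=(-5.12, 5.12)):
    lo, hi = bounds
    P = rng.uniform(lo, hi, size=(pop, dim))
    for _ in range(gens):
        fit = np.apply_along_axis(f, 1, P)
        elite = P[np.argsort(fit)[: pop // 2]]            # selection
        kids = []
        for _ in range(pop - elite.shape[0]):
            a, b = elite[rng.integers(len(elite), size=2)]
            w = rng.random(dim)
            child = w * a + (1 - w) * b                   # blend crossover
            child += rng.normal(0.0, 0.3, dim)            # mutation
            kids.append(np.clip(child, lo, hi))
        P = np.vstack([elite] + kids)
    # Local search: polish the best individual with the simplex method.
    best = P[np.argmin(np.apply_along_axis(f, 1, P))]
    res = minimize(f, best, method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-8})
    return res.x, res.fun

x_opt, f_opt = hybrid_ga(rastrigin)
```

The division of labour mirrors the paper: the GA explores the many basins, and the simplex step supplies the final-digit accuracy that a GA alone reaches slowly.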

  11. Optimization of Residual Stresses in MMC's through Process Parameter Control and the use of Heterogeneous Compensating/Compliant Interfacial Layers. OPTCOMP2 User's Guide

    NASA Technical Reports Server (NTRS)

    Pindera, Marek-Jerzy; Salzar, Robert S.

    1996-01-01

    A user's guide for the computer program OPTCOMP2 is presented in this report. This program provides a capability to optimize the fabrication or service-induced residual stresses in unidirectional metal matrix composites subjected to combined thermomechanical axisymmetric loading by altering the processing history, as well as through the microstructural design of interfacial fiber coatings. The user specifies the initial architecture of the composite and the load history, with the constituent materials being elastic, plastic, viscoplastic, or as defined by the 'user-defined' constitutive model, in addition to the objective function and constraints, through a user-friendly data input interface. The optimization procedure is based on an efficient solution methodology for the inelastic response of a fiber/interface layer(s)/matrix concentric cylinder model where the interface layers can be either homogeneous or heterogeneous. The response of heterogeneous layers is modeled using Aboudi's three-dimensional method of cells micromechanics model. The commercial optimization package DOT is used for the nonlinear optimization problem. The solution methodology for the arbitrarily layered cylinder is based on the local-global stiffness matrix formulation and Mendelson's iterative technique of successive elastic solutions developed for elastoplastic boundary-value problems. The optimization algorithm employed in DOT is based on the method of feasible directions.

  12. Optimal time points sampling in pathway modelling.

    PubMed

    Hu, Shiyan

    2004-01-01

Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as the related parameter estimation. However, few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models with only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the selection of time points so as to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulty of selecting good initial values or from getting stuck in local optima, as conventional numerical optimization techniques usually do. The simulation results indicate the soundness of the new method.
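The variance-minimization idea can be made concrete with Fisher information on an assumed toy model (a single-parameter exponential decay, with a grid search standing in for the paper's quantum-inspired evolutionary algorithm): the variance of the maximum-likelihood estimate scales as 1/FIM, so good sampling times maximize the information.

```python
import numpy as np

def fim(times, k):
    """Fisher information for k in y(t) = exp(-k*t) + unit-variance noise;
    the model sensitivity is dy/dk = -t * exp(-k*t)."""
    times = np.asarray(times, dtype=float)
    s = -times * np.exp(-k * times)
    return float(np.sum(s**2))

# Local design: the optimal schedule depends on a nominal guess for k.
k_nominal = 0.5

# Best single sampling time by grid search; analytically it is t = 1/k.
grid = np.linspace(0.05, 10.0, 400)
t_best = grid[int(np.argmax([fim([t], k_nominal) for t in grid]))]

# A 5-point schedule concentrated near t_best carries far more
# information than an evenly spaced one.
even = np.linspace(0.5, 10.0, 5)
optimized = np.full(5, t_best)
```

Real pathway models have vector parameters, so the scalar information becomes a matrix and one maximizes, e.g., its determinant; that is the harder problem the evolutionary algorithm addresses.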

  13. A local segmentation parameter optimization approach for mapping heterogeneous urban environments using VHR imagery

    NASA Astrophysics Data System (ADS)

    Grippa, Tais; Georganos, Stefanos; Lennert, Moritz; Vanhuysse, Sabine; Wolff, Eléonore

    2017-10-01

Mapping large heterogeneous urban areas using object-based image analysis (OBIA) remains challenging, especially with respect to the segmentation process. This could be explained both by the complex arrangement of heterogeneous land-cover classes and by the high diversity of urban patterns which can be encountered throughout the scene. In this context, using a single segmentation parameter to obtain satisfying segmentation results for the whole scene can be impossible. Nonetheless, it is possible to subdivide the whole city into smaller local zones, rather homogeneous according to their urban pattern. These zones can then be used to optimize the segmentation parameter locally, instead of using the whole image or a single representative spatial subset. This paper assesses the contribution of a local approach for the optimization of the segmentation parameter compared to a global approach. Ouagadougou, located in sub-Saharan Africa, is used as a case study. First, the whole scene is segmented using a single globally optimized segmentation parameter. Second, the city is subdivided into 283 local zones, homogeneous in terms of building size and building density. Each local zone is then segmented using a locally optimized segmentation parameter. Unsupervised segmentation parameter optimization (USPO), relying on an optimization function which tends to maximize both intra-object homogeneity and inter-object heterogeneity, is used to select the segmentation parameter automatically for both approaches. Finally, a land-use/land-cover classification is performed using the Random Forest (RF) classifier. The results reveal that the local approach outperforms the global one, especially by limiting confusions between buildings and their bare-soil neighbors.

  14. Maintaining space in localized ridge augmentation using guided bone regeneration with tenting screw technology.

    PubMed

    Chasioti, Evdokia; Chiang, Tat Fai; Drew, Howard J

    2013-01-01

    Prosthetic guided implant surgery requires adequate ridge dimensions for proper implant placement. Various surgical procedures can be used to augment deficient alveolar ridges. Studies have examined new bone formation on deficient ridges, utilizing numerous surgical techniques and biomaterials. The goal is to develop time efficient techniques, which have low morbidity. A crucial factor for successful bone grafting procedures is space maintenance. The article discusses space maintenance tenting screws, used in conjunction with bone allografts and resorbable barrier membranes, to ensure uneventful guided bone regeneration (GBR) enabling optimal implant positioning. The technique utilized has been described in the literature to treat severely resorbed alveolar ridges and additionally can be considered in restoring the vertical and horizontal component of deficient extraction sites. Three cases are presented to illustrate the utilization and effectiveness of tenting screw technology in the treatment of atrophic extraction sockets and for deficient ridges.

  15. Stochastic Optical Reconstruction Microscopy (STORM).

    PubMed

    Xu, Jianquan; Ma, Hongqiang; Liu, Yang

    2017-07-05

Super-resolution (SR) fluorescence microscopy, a class of optical microscopy techniques at a spatial resolution below the diffraction limit, has revolutionized the way we study biology, as recognized by the Nobel Prize in Chemistry in 2014. Stochastic optical reconstruction microscopy (STORM), a widely used SR technique, is based on the principle of single molecule localization. STORM routinely achieves a spatial resolution of 20 to 30 nm, a ten-fold improvement compared to conventional optical microscopy. Among all SR techniques, STORM offers a high spatial resolution with simple optical instrumentation and standard organic fluorescent dyes, but it is also prone to image artifacts and degraded image resolution due to improper sample preparation or imaging conditions. It requires careful optimization of all three aspects-sample preparation, image acquisition, and image reconstruction-to ensure a high-quality STORM image, which will be extensively discussed in this unit. © 2017 by John Wiley & Sons, Inc.
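The single-molecule localization at the heart of STORM can be sketched with a simulated camera frame: a pixelated Gaussian spot with Poisson shot noise is localized to a small fraction of a pixel by a background-subtracted centroid (a minimal stand-in for the Gaussian fitting used in practice; all parameter values below are illustrative).

```python
import numpy as np

rng = np.random.default_rng(1)

def psf_image(x0, y0, sigma=1.3, size=15, photons=5000, bg=5):
    """Pixelated Gaussian PSF plus uniform background, with Poisson
    shot noise; coordinates are in pixel units."""
    yy, xx = np.mgrid[0:size, 0:size]
    g = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma**2))
    mean = photons * g / g.sum() + bg
    return rng.poisson(mean).astype(float)

def localize_centroid(img, bg=5):
    """Background-subtracted intensity centroid: a simple localizer whose
    precision improves roughly as sigma / sqrt(photons)."""
    w = np.clip(img - bg, 0.0, None)
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return float((w * xx).sum() / w.sum()), float((w * yy).sum() / w.sum())

true_x, true_y = 7.3, 6.8
img = psf_image(true_x, true_y)
est_x, est_y = localize_centroid(img)
```

With thousands of photons per blink this recovers the emitter position to a few hundredths of a pixel, which is why the reconstructed image can resolve 20-30 nm despite a ~250 nm diffraction-limited spot.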

  16. Lung segmentation refinement based on optimal surface finding utilizing a hybrid desktop/virtual reality user interface.

    PubMed

    Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R

    2013-01-01

    Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to the image data variability, finding a suitable cost function that would be applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in the automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows a natural and interactive manipulation of 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets, which showed local segmentation errors after employing an automated OSF-based lung segmentation. The performed experiments exhibited significant increase in performance in terms of mean absolute surface distance errors (2.54±0.75 mm prior to refinement vs. 1.11±0.43 mm post-refinement, p≪0.001). Speed of the interactions is one of the most important aspects leading to the acceptance or rejection of the approach by users expecting real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required for reaching complete operator satisfaction was about 2 min per case. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result. 
The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation utilizes the OSF framework. The two reported segmentation refinement tools were optimized for lung segmentation and might need some adaptation for other application domains. Copyright © 2013 Elsevier Ltd. All rights reserved.
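The OSF idea the refinement builds on can be shown in its simplest 2-D form (a toy dynamic program, not the LOGISMOS graph construction): pick one boundary row per image column so that the summed cost is minimal subject to a hard smoothness constraint between neighbouring columns.

```python
import numpy as np

def optimal_surface_1d(cost, smooth=1):
    """Minimum-cost 'surface' (one row index per column) subject to
    |s[j+1] - s[j]| <= smooth, found exactly by dynamic programming.
    This is the 2-D analogue of graph-based optimal surface finding."""
    n_rows, n_cols = cost.shape
    dp = cost[:, 0].copy()
    back = np.zeros((n_rows, n_cols), dtype=int)
    for j in range(1, n_cols):
        new = np.full(n_rows, np.inf)
        for i in range(n_rows):
            lo, hi = max(0, i - smooth), min(n_rows, i + smooth + 1)
            k = lo + int(np.argmin(dp[lo:hi]))
            new[i] = dp[k] + cost[i, j]
            back[i, j] = k
        dp = new
    s = np.empty(n_cols, dtype=int)
    s[-1] = int(np.argmin(dp))
    for j in range(n_cols - 1, 0, -1):
        s[j - 1] = back[s[j], j]
    return s

# Toy boundary: zero cost along a gently varying row index, noise elsewhere.
rng = np.random.default_rng(3)
rows, cols = 20, 30
true_s = (10 + np.round(2 * np.sin(np.arange(cols) / 4))).astype(int)
cost = rng.uniform(0.5, 1.0, size=(rows, cols))
cost[true_s, np.arange(cols)] = 0.0
surface = optimal_surface_1d(cost, smooth=1)
```

The interactive refinement in the paper effectively edits the cost function locally where the automated result fails, then re-solves this kind of optimization.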

  17. Optimizing Waveform Maximum Determination for Specular Point Tracking in Airborne GNSS-R.

    PubMed

    Motte, Erwan; Zribi, Mehrez

    2017-08-16

Airborne GNSS-R campaigns are crucial to the understanding of signal interactions with the Earth's surface. As a consequence of the specific geometric configurations arising during measurements from aircraft, the reflected signals can be difficult to interpret under certain conditions, for example over strongly attenuating media such as forests, or when the reflected signal is contaminated by the direct signal. For these reasons, there are many cases where the reflectivity is overestimated, or a portion of the dataset has to be flagged as unusable. In this study we present techniques that have been developed to optimize the processing of airborne GNSS-R data, with the goal of improving its accuracy and robustness under non-optimal conditions. This approach is based on the detailed analysis of data produced by the GLORI instrument, recorded during an airborne campaign in the south west of France in June 2015. Our technique relies on the improved determination of reflected waveform peaks in the delay dimension, which is related to the loci of the signals contributed by the zone surrounding the specular point. It is shown that, by developing techniques for the correct localization of waveform maxima under conditions of low surface reflectivity and/or contamination from the direct signal, it is possible to correct and extract values corresponding to the real reflectivity of the zone in the neighborhood of the specular point. This algorithm was applied to a reanalysis of the complete campaign dataset, following which the accuracy and sensitivity improved and the usability of the dataset increased by 30%.
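A basic building block of waveform-maximum determination is sub-sample peak refinement (a generic three-point parabolic interpolation, shown here on a synthetic waveform; the paper's algorithm adds the handling of low reflectivity and direct-signal contamination on top of this kind of step):

```python
import numpy as np

def refine_peak(w):
    """Sub-sample peak location from a sampled waveform: fit a parabola
    through the discrete maximum and its two neighbours."""
    i = int(np.argmax(w))
    if i == 0 or i == len(w) - 1:
        return float(i)          # peak on the edge: no refinement possible
    y0, y1, y2 = w[i - 1], w[i], w[i + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return i + delta

# Synthetic reflected waveform: a smooth peak sampled off the delay grid.
delays = np.arange(64, dtype=float)
true_delay = 30.37
waveform = np.exp(-0.5 * ((delays - true_delay) / 3.0) ** 2)
est = refine_peak(waveform)
```

The refinement recovers the peak delay to a few hundredths of a sample, versus an error of up to half a sample for the raw argmax.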

  18. Optimal Designs for the Rasch Model

    ERIC Educational Resources Information Center

    Grasshoff, Ulrike; Holling, Heinz; Schwabe, Rainer

    2012-01-01

    In this paper, optimal designs will be derived for estimating the ability parameters of the Rasch model when difficulty parameters are known. It is well established that a design is locally D-optimal if the ability and difficulty coincide. But locally optimal designs require that the ability parameters to be estimated are known. To attenuate this…
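The "ability and difficulty coincide" result can be checked numerically: the Rasch item information is p(1 - p), which peaks at p = 0.5, i.e. when the item difficulty equals the ability (a direct illustration of the record's claim; the numbers are arbitrary).

```python
import numpy as np

def item_information(theta, b):
    """Rasch (1PL) Fisher information of an item with difficulty b for a
    person with ability theta: I = p*(1-p), p = 1/(1+exp(-(theta-b)))."""
    p = 1.0 / (1.0 + np.exp(-(np.asarray(theta) - np.asarray(b))))
    return p * (1.0 - p)

# Locally D-optimal item choice for estimating a known-ish ability:
# scan difficulties and find where the information is maximal.
theta = 0.8
bs = np.linspace(-3.0, 3.0, 601)
b_star = bs[int(np.argmax(item_information(theta, bs)))]
```

This is exactly the sense in which the design is only *locally* optimal: the best difficulty depends on the unknown ability it is meant to estimate.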

  19. Automatic metro map layout using multicriteria optimization.

    PubMed

    Stott, Jonathan; Rodgers, Peter; Martínez-Ovando, Juan Carlos; Walker, Stephen G

    2011-01-01

This paper describes an automatic mechanism for drawing metro maps. We apply multicriteria optimization to find effective placement of stations with a good line layout and to label the map unambiguously. A number of metrics are defined, which are used in a weighted sum to find a fitness value for a layout of the map. A hill climbing optimizer is used to reduce the fitness value and find improved map layouts. To avoid local minima, we apply clustering techniques to the map: the hill climber moves both stations and clusters when finding improved layouts. We show the method applied to a number of metro maps, and describe an empirical study that provides some quantitative evidence that automatically drawn metro maps can help users to find routes more efficiently than either published maps or undistorted maps. Moreover, we have found that, in these cases, study subjects indicate a preference for automatically drawn maps over the alternatives. © 2011 IEEE. Published by the IEEE Computer Society.
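The weighted-sum hill climbing can be sketched on a single metro line (a hypothetical miniature with two illustrative metrics, unit edge length and octilinearity; the paper uses many more criteria plus cluster moves):

```python
import numpy as np

rng = np.random.default_rng(11)

edges = [(i, i + 1) for i in range(9)]            # one 10-station line
pos = rng.integers(0, 12, size=(10, 2)).astype(float)

def fitness(P, w_len=1.0, w_oct=0.5):
    """Weighted sum of two layout metrics: deviation from unit edge
    length, and deviation from octilinear (45-degree) edge directions."""
    f = 0.0
    for a, b in edges:
        d = P[b] - P[a]
        r = np.linalg.norm(d)
        f += w_len * (r - 1.0) ** 2
        ang = np.arctan2(d[1], d[0])
        f += w_oct * np.sin(4 * ang) ** 2         # zero at 45-degree multiples
    return float(f)

moves = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
f_start = fitness(pos)
for _ in range(200):                              # hill-climbing passes
    improved = False
    for s in range(len(pos)):
        for m in moves:
            trial = pos.copy()
            trial[s] += m
            if fitness(trial) < fitness(pos) - 1e-12:
                pos = trial
                improved = True
    if not improved:
        break
f_end = fitness(pos)
```

Moving one station at a time is exactly the kind of local move that gets stuck, which is why the paper adds cluster-level moves to escape poor layouts.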

  20. Design optimization of steel frames using an enhanced firefly algorithm

    NASA Astrophysics Data System (ADS)

    Carbas, Serdar

    2016-12-01

    Mathematical modelling of real-world-sized steel frames under the Load and Resistance Factor Design-American Institute of Steel Construction (LRFD-AISC) steel design code provisions, where the steel profiles for the members are selected from a table of steel sections, turns out to be a discrete nonlinear programming problem. Finding the optimum design of such design optimization problems using classical optimization techniques is difficult. Metaheuristic algorithms provide an alternative way of solving such problems. The firefly algorithm (FFA) belongs to the swarm intelligence group of metaheuristics. The standard FFA has the drawback of being caught up in local optima in large-sized steel frame design problems. This study attempts to enhance the performance of the FFA by suggesting two new expressions for the attractiveness and randomness parameters of the algorithm. Two real-world-sized design examples are designed by the enhanced FFA and its performance is compared with standard FFA as well as with particle swarm and cuckoo search algorithms.
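The standard FFA that the paper enhances can be sketched on a continuous test function (a generic textbook version with illustrative parameter values; the paper's contribution is replacing the fixed attractiveness and randomness schedules, and its real problems are discrete section selections):

```python
import numpy as np

rng = np.random.default_rng(7)

def firefly(f, dim=2, n=20, iters=60, lb=-5.0, ub=5.0,
            beta0=1.0, gamma=1.0, alpha=0.3):
    """Standard firefly algorithm (minimization): each firefly moves
    toward every brighter one with attractiveness beta0*exp(-gamma*r^2),
    plus a random walk of scale alpha."""
    X = rng.uniform(lb, ub, size=(n, dim))
    F = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if F[j] < F[i]:                      # j is brighter
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
                    X[i] = np.clip(X[i], lb, ub)
                    F[i] = f(X[i])
        alpha *= 0.95        # decaying randomness sharpens late convergence
    best = int(np.argmin(F))
    return X[best], float(F[best])

sphere = lambda x: float(np.sum(np.asarray(x) ** 2))
x_best, f_best = firefly(sphere)
```

The trap the paper targets is visible in the update rule: with fixed alpha and beta0, the swarm can settle around a bright but suboptimal firefly, which is fatal when the design space is a large discrete table of steel sections.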

  1. A comparison of optimal MIMO linear and nonlinear models for brain machine interfaces

    NASA Astrophysics Data System (ADS)

    Kim, S.-P.; Sanchez, J. C.; Rao, Y. N.; Erdogmus, D.; Carmena, J. M.; Lebedev, M. A.; Nicolelis, M. A. L.; Principe, J. C.

    2006-06-01

    The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.
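The baseline Wiener filter in this comparison is just least squares on a tapped-delay-line design matrix of binned spike counts (a sketch on fully synthetic data; the dimensions and noise level are illustrative, not the monkey datasets):

```python
import numpy as np

rng = np.random.default_rng(5)
T, n_units, n_lags = 2000, 20, 5

# Synthetic "binned spike counts" and a hand velocity that is, by
# construction, a noisy linear function of the last n_lags bins.
spikes = rng.poisson(2.0, size=(T, n_units)).astype(float)

def make_design(S, n_lags):
    """Stack lagged copies of every unit's counts: the Wiener filter is
    ordinary least squares on this tapped-delay-line matrix."""
    cols = [np.roll(S[:, u], k) for u in range(S.shape[1]) for k in range(n_lags)]
    X = np.column_stack(cols)
    X[:n_lags] = 0.0                 # remove np.roll wrap-around at the start
    return X

X = make_design(spikes, n_lags)
h_true = 0.2 * rng.normal(size=X.shape[1])
vel = X @ h_true + rng.normal(size=T)

split = 1500
h_hat, *_ = np.linalg.lstsq(X[:split], vel[:split], rcond=None)
pred = X[split:] @ h_hat
cc = float(np.corrcoef(pred, vel[split:])[0, 1])
```

With thousands of such parameters, held-out correlation rather than training fit is the meaningful score, which is the generalization issue the paper studies.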

  2. SfM with MRFs: discrete-continuous optimization for large-scale structure from motion.

    PubMed

    Crandall, David J; Owens, Andrew; Snavely, Noah; Huttenlocher, Daniel P

    2013-12-01

    Recent work in structure from motion (SfM) has built 3D models from large collections of images downloaded from the Internet. Many approaches to this problem use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the image collection grows, and can suffer from drift or local minima. We present an alternative framework for SfM based on finding a coarse initial solution using hybrid discrete-continuous optimization and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and points, including noisy geotags and vanishing point (VP) estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it produces models that are similar to or better than those produced by incremental bundle adjustment, but more robustly and in a fraction of the time.

  3. A comparison of optimal MIMO linear and nonlinear models for brain-machine interfaces.

    PubMed

    Kim, S-P; Sanchez, J C; Rao, Y N; Erdogmus, D; Carmena, J M; Lebedev, M A; Nicolelis, M A L; Principe, J C

    2006-06-01

    The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.

  4. Development of an adaptive hp-version finite element method for computational optimal control

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Warner, Michael S.

    1994-01-01

    In this research effort, the usefulness of hp-version finite elements and adaptive solution-refinement techniques in generating numerical solutions to optimal control problems has been investigated. Under NAG-939, a general FORTRAN code was developed which approximated solutions to optimal control problems with control constraints and state constraints. Within that methodology, to get high-order accuracy in solutions, the finite element mesh would have to be refined repeatedly through bisection of the entire mesh in a given phase. In the current research effort, the order of the shape functions in each element has been made a variable, giving more flexibility in error reduction and smoothing. Similarly, individual elements can each be subdivided into many pieces, depending on the local error indicator, while other parts of the mesh remain coarsely discretized. The problem remains to reduce and smooth the error while still keeping computational effort reasonable enough to calculate time histories in a short enough time for on-board applications.

  5. Lanczos eigensolution method for high-performance computers

    NASA Technical Reports Server (NTRS)

    Bostic, Susan W.

    1991-01-01

    The theory, computational analysis, and applications of a Lanczos algorithm on high-performance computers are presented. The computationally intensive steps of the algorithm are identified as the matrix factorization, the forward/backward equation solution, and the matrix-vector multiplications. These computational steps are optimized to exploit the vector and parallel capabilities of high-performance computers. The savings in computational time from applying optimization techniques such as variable-band and sparse data storage and access, loop unrolling, use of local memory, and compiler directives are presented. Two large-scale structural analysis applications are described: the buckling of a composite blade-stiffened panel with a cutout, and the vibration analysis of a high-speed civil transport. The sequential computational time of 181.6 seconds for the panel problem executed on a CONVEX computer was decreased to 14.1 seconds with the optimized vector algorithm. The best computational time of 23 seconds for the transport problem, with 17,000 degrees of freedom, was obtained on the Cray Y-MP using an average of 3.63 processors.
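    The matrix-vector product singled out above as a dominant cost is visible in the Lanczos recurrence itself. A minimal dense-matrix sketch with full reorthogonalization for numerical stability (the matrix size and step count are arbitrary test values, not taken from the structural applications described):

```python
import numpy as np

def lanczos(A, v0, m):
    """Run m Lanczos steps; return the tridiagonal coefficients (alpha, beta)."""
    n = len(v0)
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    v = v0 / np.linalg.norm(v0)
    v_prev = np.zeros(n)
    b = 0.0
    for j in range(m):
        V[:, j] = v
        w = A @ v - b * v_prev                      # dominant cost: mat-vec product
        alpha[j] = v @ w
        w -= alpha[j] * v
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)    # full reorthogonalization
        if j < m - 1:
            b = np.linalg.norm(w)
            beta[j] = b
            v_prev, v = v, w / b
    return alpha, beta

rng = np.random.default_rng(1)
M = rng.normal(size=(60, 60))
A = M + M.T                                         # symmetric test matrix
alpha, beta = lanczos(A, rng.normal(size=60), 25)
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
ritz = np.linalg.eigvalsh(T)                        # extreme Ritz values converge first
print(ritz[-1])
```

In a production solver the dense `A @ v` would be replaced by a sparse or variable-band product, which is precisely where the storage and access optimizations described above pay off.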

  6. Million city traveling salesman problem solution by divide and conquer clustering with adaptive resonance neural networks.

    PubMed

    Mulder, Samuel A; Wunsch, Donald C

    2003-01-01

    The Traveling Salesman Problem (TSP) is a very hard optimization problem in the field of operations research. It has been shown to be NP-complete, and is an often-used benchmark for new optimization techniques. One of the main challenges with this problem is that standard, non-AI heuristic approaches such as the Lin-Kernighan algorithm (LK) and the chained LK variant are currently very effective and in wide use for the common fully connected, Euclidean variant that is considered here. This paper presents an algorithm that uses adaptive resonance theory (ART) in combination with a variation of the Lin-Kernighan local optimization algorithm to solve very large instances of the TSP. The primary advantage of this algorithm over traditional LK and chained-LK approaches is the increased scalability and parallelism allowed by the divide-and-conquer clustering paradigm. Tours obtained by the algorithm are lower quality, but scaling is much better and there is a high potential for increasing performance using parallel hardware.
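    Lin-Kernighan itself is intricate; its much simpler relative, 2-opt, illustrates the kind of tour-improving local optimization the clusters are handed off to. A toy sketch on random points (not the authors' implementation):

```python
import math
import random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Repeatedly reverse tour segments while an improving move exists."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 2, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, pts) < tour_length(tour, pts) - 1e-12:
                    tour, improved = cand, True
    return tour

random.seed(3)
pts = [(random.random(), random.random()) for _ in range(30)]
start = list(range(30))
opt = two_opt(start, pts)
print(tour_length(start, pts), "->", tour_length(opt, pts))
```

The divide-and-conquer idea in the paper amounts to running such a local optimizer independently inside each ART cluster and then stitching the sub-tours together, which is what allows parallel execution.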

  7. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera.

    PubMed

    Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun

    2015-08-31

    Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulated and real environments.
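    The Mahalanobis distance used in the refinement step weights each feature error by the map's modeled uncertainty, so deviations along well-constrained directions cost more than deviations along uncertain ones. A small illustrative sketch (the covariance values are made up for illustration, not taken from the paper):

```python
import numpy as np

def mahalanobis(x, mu, cov):
    """Distance of observation x from mean mu under covariance cov."""
    d = x - mu
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

# Hypothetical map feature with anisotropic positional uncertainty (pixels).
mu = np.array([120.0, 80.0])
cov = np.array([[25.0, 0.0],
                [0.0,  1.0]])       # much more uncertain along u than along v
a = np.array([125.0, 80.0])         # 5 px off along the uncertain axis
b = np.array([120.0, 82.0])         # 2 px off along the certain axis

print(mahalanobis(a, mu, cov))      # 1.0 -- large error, but cheap
print(mahalanobis(b, mu, cov))      # 2.0 -- small error, but expensive
```

Minimizing the sum of such weighted errors over the camera pose is what pulls the PnP initialization toward features the map is confident about.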

  8. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera

    PubMed Central

    Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun

    2015-01-01

    Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulated and real environments. PMID:26404284

  9. Efficient Round-Trip Time Optimization for Replica-Exchange Enveloping Distribution Sampling (RE-EDS).

    PubMed

    Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina

    2017-06-13

    Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, i.e., a smoothness parameter s and energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE) or parallel tempering is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter choice problem could be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a certain replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate for the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single RE-EDS simulation 10 ns in length. Thus, RE-EDS presents an efficient method for the estimation of relative binding free energies.

  10. Upper Extremity Regional Anesthesia

    PubMed Central

    Neal, Joseph M.; Gerancher, J.C.; Hebl, James R.; Ilfeld, Brian M.; McCartney, Colin J.L.; Franco, Carlo D.; Hogan, Quinn H.

    2009-01-01

    Brachial plexus blockade is the cornerstone of the peripheral nerve regional anesthesia practice of most anesthesiologists. As part of the American Society of Regional Anesthesia and Pain Medicine’s commitment to providing intensive evidence-based education related to regional anesthesia and analgesia, this article is a complete update of our 2002 comprehensive review of upper extremity anesthesia. The text of the review focuses on (1) pertinent anatomy, (2) approaches to the brachial plexus and techniques that optimize block quality, (3) local anesthetic and adjuvant pharmacology, (4) complications, (5) perioperative issues, and (6) challenges for future research. PMID:19282714

  11. [Callose accumulation during treatment of tomato (Lycopersicon esculentum L.) cells with biotic elicitors].

    PubMed

    Emel'ianov, V I; Kravchuk, Zh N; Poliakovskiĭ, S A; Dmitriev, A P

    2008-01-01

    The time course of induced callose accumulation in tomato cells has been studied. The localization of callose in L. esculentum cells was investigated by a fluorescent microscopy technique, and the optimal time for its determination was found. Callose accumulation in tomato cells treated with different biotic elicitors was determined. A nonlinear dependence between callose accumulation and the concentration of chitin oligomers (with 3-5 N-acetylglucosamine fragments) was established. The increase in callose accumulation in tomato cells was proportional to the increase in the concentration of chitin dimer and chitosan in the culture medium.

  12. Dynamic mapping of brain and cognitive control of virtual gameplay (study by functional magnetic resonance imaging).

    PubMed

    Rezakova, M V; Mazhirina, K G; Pokrovskiy, M A; Savelov, A A; Savelova, O A; Shtark, M B

    2013-04-01

    Using a functional magnetic resonance imaging technique, we performed online brain mapping of gamers trained to voluntarily (cognitively) control their heart rate, the parameter that operated a competitive virtual gameplay in an adaptive feedback loop. With the default start picture, the regions of interest during the formation of an optimal cognitive strategy were Brodmann areas 19, 37, 39 and 40, as well as cerebellar structures (vermis, amygdala, pyramids, clivus). The "localization" concept of the contribution of the cerebellum to cognitive processes is discussed.

  13. Preconditioned upwind methods to solve 3-D incompressible Navier-Stokes equations for viscous flows

    NASA Technical Reports Server (NTRS)

    Hsu, C.-H.; Chen, Y.-M.; Liu, C. H.

    1990-01-01

    A computational method for calculating low-speed viscous flowfields is developed. The method uses the implicit upwind-relaxation finite-difference algorithm with a nonsingular eigensystem to solve the preconditioned, three-dimensional, incompressible Navier-Stokes equations in curvilinear coordinates. The technique of local time stepping is incorporated to accelerate the rate of convergence to a steady-state solution. An extensive study of optimizing the preconditioned system is carried out for two viscous flow problems. Computed results are compared with analytical solutions and experimental data.

  14. Combining density functional theory (DFT) and pair distribution function (PDF) analysis to solve the structure of metastable materials: the case of metakaolin.

    PubMed

    White, Claire E; Provis, John L; Proffen, Thomas; Riley, Daniel P; van Deventer, Jannie S J

    2010-04-07

    Understanding the atomic structure of complex metastable (including glassy) materials is of great importance in research and industry; however, such materials resist solution by most standard techniques. Here, a novel technique combining thermodynamics and local structure is presented to solve the structure of the metastable aluminosilicate material metakaolin (calcined kaolinite) without the use of chemical constraints. The structure is elucidated by iterating between least-squares real-space refinement using neutron pair distribution function data, and geometry optimisation using density functional modelling. The resulting structural representation is both energetically feasible and in excellent agreement with experimental data. This accurate structural representation of metakaolin provides new insight into the local environment of the aluminium atoms, with evidence of the existence of tri-coordinated aluminium. With the availability of this detailed chemically feasible atomic description, obtained without the need to artificially impose constraints during the refinement process, there exists the opportunity to tailor chemical and mechanical processes involving metakaolin and other complex metastable materials at the atomic level to obtain optimal performance at the macro-scale.

  15. Magnetic resonance imaging for planning intracavitary brachytherapy for the treatment of locally advanced cervical cancer.

    PubMed

    Oñate Miranda, M; Pinho, D F; Wardak, Z; Albuquerque, K; Pedrosa, I

    2016-01-01

    Cervical cancer is the third most common gynecological cancer. Its treatment depends on tumor staging at the time of diagnosis, and a combination of chemotherapy and radiotherapy is the treatment of choice in locally advanced cervical cancers. The combined use of external beam radiotherapy and brachytherapy increases survival in these patients. Brachytherapy enables a larger dose of radiation to be delivered to the tumor with less toxicity for neighboring tissues compared to the use of external beam radiotherapy alone. For years, brachytherapy was planned exclusively using computed tomography (CT). The recent incorporation of magnetic resonance imaging (MRI) provides essential information about the tumor and neighboring structures, making it possible to better define the target volumes. Nevertheless, MRI has limitations, some of which can be compensated for by fusing CT and MRI. Fusing the images from the two techniques ensures optimal planning by combining the advantages of each technique. Copyright © 2015 SERAM. Published by Elsevier España, S.L.U. All rights reserved.

  16. Learning moment-based fast local binary descriptor

    NASA Astrophysics Data System (ADS)

    Bellarbi, Abdelkader; Zenati, Nadia; Otmane, Samir; Belghit, Hayet

    2017-03-01

    Recently, binary descriptors have attracted significant attention due to their speed and low memory consumption; however, using intensity differences to calculate the binary descriptive vector is not efficient enough. We propose an approach to binary description called POLAR_MOBIL, in which we perform binary tests between geometrical and statistical information using moments in the patch instead of the classical intensity binary test. In addition, we introduce a learning technique used to select an optimized set of binary tests with low correlation and high variance. This approach offers high distinctiveness against affine transformations and appearance changes. An extensive evaluation on well-known benchmark datasets reveals the robustness and the effectiveness of the proposed descriptor, as well as its good performance in terms of low computation complexity when compared with state-of-the-art real-time local descriptors.
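    The "classical intensity binary test" that the paper moves beyond can be sketched in a few lines, BRIEF-style: compare fixed random pixel pairs within a patch, pack the resulting bits, and match descriptors by Hamming distance (a generic illustration, not the POLAR_MOBIL descriptor itself):

```python
import numpy as np

rng = np.random.default_rng(7)
patch = rng.integers(0, 256, size=(32, 32))     # hypothetical grayscale patch

# BRIEF-style descriptor: fixed random pixel pairs, bit = 1 if I(p) < I(q).
n_bits = 64
pairs = rng.integers(0, 32, size=(n_bits, 4))   # (y1, x1, y2, x2) per test

def describe(patch, pairs):
    bits = [int(patch[y1, x1] < patch[y2, x2]) for y1, x1, y2, x2 in pairs]
    return np.packbits(bits)                    # 64 bits -> 8 bytes

def hamming(d1, d2):
    return int(np.unpackbits(d1 ^ d2).sum())    # matching cost = Hamming distance

d = describe(patch, pairs)
print(len(d), hamming(d, d))
```

POLAR_MOBIL replaces the raw intensity comparisons with tests between patch moments, and learns which tests to keep for low correlation and high variance, but the packing and Hamming-matching machinery is the same.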

  17. A study of palm biomass processing strategy in Sarawak

    NASA Astrophysics Data System (ADS)

    Lee, S. J. Y.; Ng, W. P. Q.; Law, K. H.

    2017-06-01

    In the past decades, the palm industry has boomed due to its profitability. An environmental concern regarding the palm industry is the enormous amount of waste it produces. This waste, or palm biomass, is a significant renewable energy source in Malaysia and a raw material for value-added products such as fiber mats, activated carbon, dried fiber and bio-fertilizer. There is a need to establish a palm biomass industry for the recovery of palm biomass, enabling efficient utilization and waste reduction. The development of this industry depends strongly on two factors: the availability and supply consistency of palm biomass, and the availability of palm biomass processing facilities. In Malaysia, the development of the palm biomass industry is lagging due to the lack of mature commercial technology and the difficulty of logistics planning caused by the scattered locations of the palm oil mills where palm biomass is generated. Two main studies have been carried out in this research work: i) an industrial study of the feasibility of decentralized and centralized palm biomass processing in Sarawak, and ii) the development of a systematic and optimized palm biomass processing plan for the development of the palm biomass industry in Sarawak, Malaysia. A mathematical optimization technique is used to model the above case scenarios for biomass processing so as to achieve maximum economic potential and resource feasibility. An industrial study of palm biomass processing strategy in Sarawak has been carried out to evaluate the optimality of centralized and decentralized processing for the local biomass industry, and an optimal biomass processing strategy is obtained.

  18. Novel technique for online characterization of cartilaginous tissue properties.

    PubMed

    Yuan, Tai-Yi; Huang, Chun-Yuh; Yong Gu, Wei

    2011-09-01

    The goal of tissue engineering is to use substitutes to repair and restore organ function. Bioreactors are an indispensable tool for monitoring and controlling the unique environment in which engineered constructs grow. However, in order to determine the biochemical properties of engineered constructs, samples currently need to be destroyed. In this study, we developed a novel technique to characterize, online and nondestructively, the water content and fixed charge density of cartilaginous tissues, i.e., their mechano-electrochemical properties. Bovine knee articular cartilage and lumbar annulus fibrosus were used in this study to demonstrate that the technique can be applied to different types of tissue. The results show that our newly developed method is capable of precisely predicting the water volume fraction (less than 3% disparity) and fixed charge density (less than 16.7% disparity) within cartilaginous tissues. This novel technique will help in the design of a new generation of bioreactors that are able to actively determine the essential properties of engineered constructs, as well as regulate the local environment to achieve the optimal conditions for cultivating them.

  19. On computing the global time-optimal motions of robotic manipulators in the presence of obstacles

    NASA Technical Reports Server (NTRS)

    Shiller, Zvi; Dubowsky, Steven

    1991-01-01

    A method for computing the time-optimal motions of robotic manipulators is presented that considers the nonlinear manipulator dynamics, actuator constraints, joint limits, and obstacles. The optimization problem is reduced to a search for the time-optimal path in the n-dimensional position space. A small set of near-optimal paths is first efficiently selected from a grid, using a branch and bound search and a series of lower bound estimates on the traveling time along a given path. These paths are further optimized with a local path optimization to yield the global optimal solution. Obstacles are considered by eliminating the collision points from the tessellated space and by adding a penalty function to the motion time in the local optimization. The computational efficiency of the method stems from the reduced dimensionality of the searched space and from combining the grid search with a local optimization. The method is demonstrated in several examples for two- and six-degree-of-freedom manipulators with obstacles.
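    The combination of a branch-and-bound search with admissible lower bounds on traveling time can be illustrated on a toy grid: a node is pruned whenever its cost-so-far plus lower bound cannot beat the incumbent solution. This is a generic sketch of the search pattern, not the manipulator-specific method:

```python
import heapq
import math

def lower_bound(p, goal):
    """Straight-line distance never overestimates the remaining path cost."""
    return math.dist(p, goal)

def branch_and_bound(grid, start, goal):
    n = len(grid)
    best, best_path = math.inf, None
    heap = [(lower_bound(start, goal), 0.0, start, [start])]
    seen = {}
    while heap:
        bound, cost, node, path = heapq.heappop(heap)
        if bound >= best:
            continue                       # prune: cannot beat the incumbent
        if node == goal:
            best, best_path = cost, path
            continue
        if seen.get(node, math.inf) <= cost:
            continue
        seen[node] = cost
        x, y = node
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < n and 0 <= ny < n and grid[ny][nx] == 0:
                c = cost + 1
                heapq.heappush(heap, (c + lower_bound((nx, ny), goal),
                                      c, (nx, ny), path + [(nx, ny)]))
    return best, best_path

grid = [[0] * 5 for _ in range(5)]
grid[2][1] = grid[2][2] = grid[2][3] = 1   # obstacle row with gaps at the ends
cost, path = branch_and_bound(grid, (0, 0), (4, 4))
print(cost, path)
```

In the paper, the "cost" along a grid path is a traveling-time estimate from the manipulator dynamics, and the surviving near-optimal paths are then polished by the local path optimization.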

  20. CSI-EPT in Presence of RF-Shield for MR-Coils.

    PubMed

    Arduino, Alessandro; Zilberti, Luca; Chiampi, Mario; Bottauscio, Oriano

    2017-07-01

    Contrast source inversion electric properties tomography (CSI-EPT) is a recently developed technique for electric properties tomography that recovers the electric properties distribution starting from measurements performed by magnetic resonance imaging scanners. This method is an optimal control approach based on the contrast source inversion technique, which distinguishes itself from other electric properties tomography techniques by its capability to also recover the local specific absorption rate distribution, essential for online dosimetry. Up to now, CSI-EPT has only been described in terms of integral equations, limiting its applicability to a homogeneous unbounded background. In order to extend the method to the presence of a shield in the domain (as in the recurring case of shielded radio-frequency coils), a more general formulation of CSI-EPT, based on a functional viewpoint, is introduced here. Two different implementations of CSI-EPT are proposed for a 2-D transverse magnetic model problem: one dealing with an unbounded domain and one considering the presence of a perfectly conductive shield. The two implementations are applied to the same virtual measurements obtained by numerically simulating a shielded radio-frequency coil. The results are compared in terms of both electric properties recovery and local specific absorption rate estimation, in order to investigate how accurately the underlying physical problem must be modeled.

  1. Neoadjuvant radiotherapeutic strategies in pancreatic cancer

    PubMed Central

    Roeder, Falk

    2016-01-01

    This review summarizes the current status of neoadjuvant radiation approaches in the treatment of pancreatic cancer, including a description of modern radiation techniques, and an overview of the literature regarding neoadjuvant radio- or radiochemotherapeutic strategies both for resectable and irresectable pancreatic cancer. Neoadjuvant chemoradiation for locally-advanced, primarily non- or borderline resectable pancreatic cancer results in secondary resectability in a substantial proportion of patients, with markedly improved overall prognosis in consequence, and should be considered as a possible alternative in pretreatment multidisciplinary evaluations. In resectable pancreatic cancer, outstanding results in terms of response, local control and overall survival have been observed with neoadjuvant radio- or radiochemotherapy in several phase I/II trials, which justify further evaluation of this strategy. Further investigation of neoadjuvant chemoradiation strategies should be performed preferentially in randomized trials in order to improve the comparability of the current results with other treatment modalities. This should include the evaluation of optimal sequencing with newer and more potent systemic induction therapy approaches. Advances in patient selection based on new molecular markers might be of crucial interest in this context. Finally, modern external beam radiation techniques (intensity-modulated radiation therapy, image-guided radiation therapy and stereotactic body radiation therapy), new radiation qualities (protons, heavy ions) and combinations with alternative boosting techniques widen the therapeutic window and contribute to the reduction of toxicity. PMID:26909133

  2. Examining the effect of initialization strategies on the performance of Gaussian mixture modeling.

    PubMed

    Shireman, Emilie; Steinley, Douglas; Brusco, Michael J

    2017-02-01

    Mixture modeling is a popular technique for identifying unobserved subpopulations (e.g., components) within a data set, with Gaussian (normal) mixture modeling being the form most widely used. Generally, the parameters of these Gaussian mixtures cannot be estimated in closed form, so estimates are typically obtained via an iterative process. The most common estimation procedure is maximum likelihood via the expectation-maximization (EM) algorithm. Like many approaches for identifying subpopulations, finite mixture modeling can suffer from locally optimal solutions, and the final parameter estimates are dependent on the initial starting values of the EM algorithm. Initial values have been shown to significantly impact the quality of the solution, and researchers have proposed several approaches for selecting the set of starting values. Five techniques for obtaining starting values that are implemented in popular software packages are compared. Their performances are assessed in terms of the following four measures: (1) the ability to find the best observed solution, (2) settling on a solution that classifies observations correctly, (3) the number of local solutions found by each technique, and (4) the speed at which the start values are obtained. On the basis of these results, a set of recommendations is provided to the user.
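    The dependence on starting values can be reproduced in a few lines: run EM from several random initializations and keep the solution with the best log-likelihood. This toy 1-D two-component mixture with fixed unit variances is an illustration of the multiple-restart idea, not one of the five techniques compared in the paper:

```python
import math
import random

random.seed(0)
# Two well-separated 1-D Gaussian components (synthetic data).
data = ([random.gauss(0, 1) for _ in range(200)] +
        [random.gauss(6, 1) for _ in range(200)])

def em_1d(data, mu1, mu2, iters=50):
    """EM for a 2-component 1-D Gaussian mixture with fixed unit variances."""
    pi = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point.
        r = []
        for x in data:
            p1 = pi * math.exp(-0.5 * (x - mu1) ** 2)
            p2 = (1 - pi) * math.exp(-0.5 * (x - mu2) ** 2)
            r.append(p1 / (p1 + p2))
        # M-step: re-estimate means and mixing weight.
        n1 = sum(r)
        mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / (len(data) - n1)
        pi = n1 / len(data)
    ll = sum(math.log(pi * math.exp(-0.5 * (x - mu1) ** 2) +
                      (1 - pi) * math.exp(-0.5 * (x - mu2) ** 2)) for x in data)
    return ll, (mu1, mu2)

# Several random starting values; keep the solution with the best log-likelihood.
starts = [(random.uniform(-2, 8), random.uniform(-2, 8)) for _ in range(5)]
runs = [em_1d(data, m1, m2) for m1, m2 in starts]
best_ll, (mu1, mu2) = max(runs)
print(best_ll, sorted((mu1, mu2)))
```

Poor starts can converge to degenerate or merged-component solutions with visibly lower likelihood, which is exactly the behavior the compared initialization strategies try to avoid.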

  3. An evaluation of methods for estimating the number of local optima in combinatorial optimization problems.

    PubMed

    Hernando, Leticia; Mendiburu, Alexander; Lozano, Jose A

    2013-01-01

    The solution of many combinatorial optimization problems is carried out by metaheuristics, which generally make use of local search algorithms. These algorithms use some kind of neighborhood structure over the search space. The performance of the algorithms strongly depends on the properties that the neighborhood imposes on the search space. One of these properties is the number of local optima. Given an instance of a combinatorial optimization problem and a neighborhood, the estimation of the number of local optima can help not only to measure the complexity of the instance, but also to choose the most convenient neighborhood to solve it. In this paper we review and evaluate several methods to estimate the number of local optima in combinatorial optimization problems. The methods reviewed not only come from the combinatorial optimization literature, but also from the statistical literature. A thorough evaluation in synthetic as well as real problems is given. We conclude by providing recommendations of methods for several scenarios.
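    On a landscape small enough to enumerate, the estimation problem can be made concrete: the distinct local optima reached by random-restart hill climbing give a lower bound on the true count, which is what the reviewed estimators try to extrapolate from. A toy random landscape over bit strings with bit-flip neighborhoods (the sizes are illustrative assumptions):

```python
import random
from itertools import product

random.seed(4)
n = 10
# Random fitness landscape over all 2^n bit strings.
fitness = {bits: random.random() for bits in product((0, 1), repeat=n)}

def neighbors(bits):
    """Bit-flip neighborhood: all strings at Hamming distance 1."""
    for i in range(len(bits)):
        yield bits[:i] + (1 - bits[i],) + bits[i + 1:]

def hill_climb(bits):
    """Best-improvement local search; stops at a local maximum."""
    while True:
        best = max(neighbors(bits), key=fitness.get)
        if fitness[best] <= fitness[bits]:
            return bits
        bits = best

# Ground truth on this tiny landscape: enumerate all local maxima.
true_optima = sum(all(fitness[b] >= fitness[nb] for nb in neighbors(b))
                  for b in fitness)

# Sampling view: distinct optima reached from random restarts (a lower bound).
found = {hill_climb(tuple(random.randint(0, 1) for _ in range(n)))
         for _ in range(300)}
print(true_optima, len(found))
```

Because restarts land in basins with probability proportional to basin size, small-basin optima are systematically missed; correcting for that bias is where the statistical (e.g. capture-recapture style) estimators reviewed in the paper come in.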

  4. Organizational Decision Making

    DTIC Science & Technology

    1975-08-01

    ...the lack of formal techniques typically used by large organizations, digress on the advantages of formal over informal... optimization; for example, one might do a number of optimization calculations, each time using a different measure of effectiveness as the optimized... final decision. The next level of computer application involves the use of computerized optimization techniques. Optimization...

  5. Dual-energy contrast enhanced digital breast tomosynthesis: concept, method, and evaluation on phantoms

    NASA Astrophysics Data System (ADS)

    Puong, Sylvie; Patoureaux, Fanny; Iordache, Razvan; Bouchevreau, Xavier; Muller, Serge

    2007-03-01

    In this paper, we present the development of dual-energy Contrast-Enhanced Digital Breast Tomosynthesis (CEDBT). A method to produce background clutter-free slices from a set of low- and high-energy projections is introduced, along with a scheme for the determination of the optimal low- and high-energy techniques. Our approach consists of a dual-energy recombination of the projections, with an algorithm that has proven its performance in Contrast-Enhanced Digital Mammography (CEDM), followed by an iterative volume reconstruction. The aim is to eliminate the anatomical background clutter and to reconstruct slices where the gray level is proportional to the local iodine volumetric concentration. Optimization of the low- and high-energy techniques is performed by minimizing the total glandular dose to reach a target iodine Signal Difference to Noise Ratio (SDNR) in the slices. In this study, we proved that this optimization could be done on the projections, by consideration of the SDNR in the projections instead of the SDNR in the slices, and verified this with phantom measurements. We also discuss some limitations of dual-energy CEDBT, due to the restricted angular range for the projection views and to the presence of scattered radiation. Experiments on textured phantoms with iodine inserts were conducted to assess the performance of dual-energy CEDBT. Texture contrast was nearly completely removed and the iodine signal was enhanced in the slices.

  6. A global optimization algorithm for protein surface alignment

    PubMed Central

    2010-01-01

    Background A relevant problem in drug design is the comparison and recognition of protein binding sites. Binding sites recognition is generally based on geometry often combined with physico-chemical properties of the site since the conformation, size and chemical composition of the protein surface are all relevant for the interaction with a specific ligand. Several matching strategies have been designed for the recognition of protein-ligand binding sites and of protein-protein interfaces but the problem cannot be considered solved. Results In this paper we propose a new method for local structural alignment of protein surfaces based on continuous global optimization techniques. Given the three-dimensional structures of two proteins, the method finds the isometric transformation (rotation plus translation) that best superimposes active regions of two structures. We draw our inspiration from the well-known Iterative Closest Point (ICP) method for three-dimensional (3D) shapes registration. Our main contribution is in the adoption of a controlled random search as a more efficient global optimization approach along with a new dissimilarity measure. The reported computational experience and comparison show viability of the proposed approach. Conclusions Our method performs well to detect similarity in binding sites when this in fact exists. In the future we plan to do a more comprehensive evaluation of the method by considering large datasets of non-redundant proteins and applying a clustering technique to the results of all comparisons to classify binding sites. PMID:20920230
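    The inner step of ICP-style registration, finding the best rigid superposition of two matched point sets, has a closed-form SVD solution (the Kabsch algorithm). A sketch on synthetic points, showing the classical registration step only, not the controlled-random-search globalization or dissimilarity measure the paper contributes:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rotation R and translation t superimposing P onto Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

rng = np.random.default_rng(5)
P = rng.normal(size=(40, 3))                   # hypothetical surface points
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])  # rotated + translated copy

R, t = kabsch(P, Q)
rmsd = np.sqrt(((P @ R.T + t - Q) ** 2).sum(axis=1).mean())
print(rmsd)
```

Full ICP alternates this closed-form step with re-matching closest points; the paper's contribution is to escape the resulting local minima with a global (controlled random search) strategy over the isometric transformations.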

  7. Mathematical improvement of the Hopfield model for feasible solutions to the traveling salesman problem by a synapse dynamical system.

    PubMed

    Takahashi, Y

    1998-01-01

    It is well known that the Hopfield Model (HM) for neural networks to solve the Traveling Salesman Problem (TSP) suffers from three major drawbacks. (a) It can converge on nonoptimal locally minimum solutions. (b) It can converge on infeasible solutions. (c) Results are very sensitive to the careful tuning of its parameters. A number of methods have been proposed to overcome (a) well. In contrast, work on (b) and (c) has not been sufficient; techniques have not been generalized to more general optimization problems. Thus this paper mathematically resolves (b) and (c) to such an extent that the resolution can be applied to solving, with some general network, continuous optimization problems including the Hopfield version of the TSP. It first constructs an Extended HM (E-HM) that overcomes both (b) and (c). The fundamental techniques of the E-HM lie in the addition of a synapse dynamical system that cooperates with the current HM unit dynamical system. It is this synapse dynamical system that makes the TSP constraint hold at any final state for whatever choices of the HM parameters and the initial state. The paper then generalizes the E-HM further to a network that can solve a class of continuous optimization problems with a constraint equation, where both the objective function and the constraint function are nonnegative and continuously differentiable.

  8. New modalities of pain treatment after outpatient orthopaedic surgery.

    PubMed

    Beaussier, M; Sciard, D; Sautet, A

    2016-02-01

    Postoperative pain relief is one of the cornerstones of success of orthopaedic surgery. Development of new minimally-invasive surgical procedures, as well as improvements in pharmacological and local and regional techniques should result in optimal postoperative pain control for all patients. The analgesic strategy has to be efficient, with minimal side effects, and be easy to manage at home. Multimodal analgesia allows for a reduction of opiate use and thereby its side effects. Local and regional analgesia is a major component of this multimodal strategy, associated with optimal pain relief, even upon mobilization, and it has beneficial effects on postoperative recovery. Ultrasound guidance improves the success rate of distal nerve blocks and makes distal selective blockade possible, helping to preserve the limb's motility. Besides peripheral nerve blocks, local infiltration (incisional and/or intra-articular) is also important to consider. Duration of the nerve blockade is limited after a single injection. This must be taken into consideration to avoid the recurrence of pain when the patient returns home. Continuous perineural blocks using catheters are an option that can be easily managed at home with monitoring by home-care nurses. Extended-release liposomal bupivacaine and adjuvants such as dexamethasone could significantly enhance the duration of the sensory block, thereby reducing the indications for pain pumps. Non-pharmacological approaches, such as cryotherapy, hypnosis and acupuncture should not be ignored. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  9. GRA prospectus: optimizing design and management of protected areas

    USGS Publications Warehouse

    Bernknopf, Richard; Halsing, David

    2001-01-01

    Protected areas comprise one major type of global conservation effort, taking the form of parks, easements, or conservation concessions. Though protected areas are increasing in number and size throughout tropical ecosystems, there is no systematic method for optimally targeting specific local areas for protection, designing the protected area, and monitoring it, or for guiding follow-up actions to manage it or its surroundings over the long run. Without such a system, conservation projects often cost more than necessary and/or risk protecting ecosystems and biodiversity less efficiently than desired. Correcting these failures requires tools and strategies for improving the placement, design, and long-term management of protected areas. The objective of this project is to develop a set of spatially based analytical tools to improve the selection, design, and management of protected areas. In this project, several conservation concessions will be compared using an economic optimization technique. The forest land-use portfolio model is an integrated assessment tool that measures investment in different land uses in a forest. The case studies of individual tropical ecosystems are developed as forest (land) use and preservation portfolios in a geographic information system (GIS). Conservation concessions involve a private organization purchasing development and resource access rights in a certain area and retiring them. Forests are put into conservation, and those people who would otherwise have benefited from extracting resources or selling the right to do so are compensated. Concessions are legal agreements wherein the exact amount and nature of the compensation result from a negotiated agreement between an agent of the conservation community and the local community. Funds are placed in a trust fund, and annual payments are made to local communities and regional/national governments. 
The payments are made pending third-party verification that the forest expanse and quality have been maintained.

  10. Integration of Functional Magnetic Resonance Imaging and Magnetoencephalography Functional Maps Into a CyberKnife Planning System: Feasibility Study for Motor Activity Localization and Dose Planning.

    PubMed

    De Martin, Elena; Duran, Dunja; Ghielmetti, Francesco; Visani, Elisa; Aquino, Domenico; Marchetti, Marcello; Sebastiano, Davide Rossi; Cusumano, Davide; Bruzzone, Maria Grazia; Panzica, Ferruccio; Fariselli, Laura

    2017-12-01

    Magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) provide noninvasive localization of eloquent brain areas for presurgical planning. The aim of this study is the integration of MEG and fMRI maps into a CyberKnife (CK) system to optimize dose planning. Four patients with brain metastases in the motor area underwent functional imaging study of the hand motor cortex before radiosurgery. MEG data were acquired during a visually cued hand motor task. Motor activations were also identified using an fMRI block-designed paradigm. MEG and fMRI maps were then integrated into a CK system and contoured as organs at risk for treatment planning optimization. The integration of fMRI data into the CK system was achieved for all patients by means of a standardized protocol. We also implemented an ad hoc pipeline to convert the MEG signal into a DICOM standard, to make sure that it was readable by our CK treatment planning system. Inclusion of the activation areas into the optimization plan allowed the creation of treatment plans that reduced the irradiation of the motor cortex while not affecting the peripheral brain dose. The availability of advanced neuroimaging techniques is playing an increasingly important role in radiosurgical planning strategy. We successfully imported MEG and fMRI activations into a CK system. This additional information can improve dose sparing of eloquent areas, allowing a more comprehensive investigation of the related dose-volume constraints, which in theory could translate into a gain in local tumor control and a reduction of neurological complications. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Stability-Constrained Aerodynamic Shape Optimization with Applications to Flying Wings

    NASA Astrophysics Data System (ADS)

    Mader, Charles Alexander

    A set of techniques is developed that allows the incorporation of flight dynamics metrics as an additional discipline in a high-fidelity aerodynamic optimization. Specifically, techniques for including static stability constraints and handling qualities constraints in a high-fidelity aerodynamic optimization are demonstrated. These constraints are developed from stability derivative information calculated using high-fidelity computational fluid dynamics (CFD). Two techniques are explored for computing the stability derivatives from CFD. One technique uses an automatic differentiation adjoint technique (ADjoint) to efficiently and accurately compute a full set of static and dynamic stability derivatives from a single steady solution. The other technique uses a linear regression method to compute the stability derivatives from a quasi-unsteady time-spectral CFD solution, allowing for the computation of static, dynamic and transient stability derivatives. Based on the characteristics of the two methods, the time-spectral technique is selected for further development, incorporated into an optimization framework, and used to conduct stability-constrained aerodynamic optimization. This stability-constrained optimization framework is then used to conduct an optimization study of a flying wing configuration. This study shows that stability constraints have a significant impact on the optimal design of flying wings and that, while static stability constraints can often be satisfied by modifying the airfoil profiles of the wing, dynamic stability constraints can require a significant change in the planform of the aircraft in order for the constraints to be satisfied.
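    The linear-regression route can be illustrated with a toy example: given sampled pairs of a moment coefficient and angle of attack (all numbers below are hypothetical, not from the thesis), an ordinary least-squares fit recovers the static stability derivative.

```python
def least_squares_slope(xs, ys):
    """Ordinary least-squares slope through sampled (x, y) points."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

# Hypothetical samples of pitching-moment coefficient C_m versus
# angle of attack alpha (radians), of the kind a quasi-unsteady
# time-spectral solver might provide at several instants.
# The synthetic data are built with a true slope of -0.8.
alphas = [0.00, 0.02, 0.04, 0.06, 0.08]
cms = [0.010 - 0.8 * a for a in alphas]

Cm_alpha = least_squares_slope(alphas, cms)
```

A negative C_m_alpha is the usual static longitudinal stability criterion, which is exactly the kind of quantity the constraint in the optimization would bound.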

  12. Subcritical transition scenarios via linear and nonlinear localized optimal perturbations in plane Poiseuille flow

    NASA Astrophysics Data System (ADS)

    Farano, Mirko; Cherubini, Stefania; Robinet, Jean-Christophe; De Palma, Pietro

    2016-12-01

    Subcritical transition in plane Poiseuille flow is investigated by means of a Lagrange-multiplier direct-adjoint optimization procedure with the aim of finding localized three-dimensional perturbations optimally growing in a given time interval (target time). Space localization of these optimal perturbations (OPs) is achieved by choosing as objective function either a p-norm (with p ≫ 1) of the perturbation energy density in a linear framework, or the classical (1-norm) perturbation energy, including nonlinear effects. This work aims at analyzing the structure of linear and nonlinear localized OPs for Poiseuille flow, and comparing their transition thresholds and scenarios. The nonlinear optimization approach provides three types of solutions: a weakly nonlinear, a hairpin-like and a highly nonlinear optimal perturbation, depending on the value of the initial energy and the target time. The former shows localization only in the wall-normal direction, whereas the latter appears much more localized and breaks the spanwise symmetry found at lower target times. Both solutions show spanwise inclined vortices and large values of the streamwise component of velocity already at the initial time. On the other hand, p-norm optimal perturbations, although being strongly localized in space, keep a shape similar to linear 1-norm optimal perturbations, showing streamwise-aligned vortices characterized by low values of the streamwise velocity component. When used for initializing direct numerical simulations, in most of the cases nonlinear OPs provide the most efficient route to transition in terms of time to transition and initial energy, even when they are less localized in space than the p-norm OP. The p-norm OP follows a transition path similar to the oblique transition scenario, with slightly oscillating streaks which saturate and eventually experience secondary instability. 
On the other hand, the nonlinear OP rapidly forms large-amplitude bent streaks and skips the phases of streak saturation, providing a contemporary growth of all of the velocity components due to strong nonlinear coupling.

  13. Gradient design for liquid chromatography using multi-scale optimization.

    PubMed

    López-Ureña, S; Torres-Lapasió, J R; Donat, R; García-Alvarez-Coque, M C

    2018-01-26

    In reversed phase-liquid chromatography, the usual solution to the "general elution problem" is the application of gradient elution with programmed changes of organic solvent (or other properties). A correct quantification of chromatographic peaks in liquid chromatography requires well resolved signals in a proper analysis time. When the complexity of the sample is high, the gradient program should be accommodated to the local resolution needs of each analyte. This makes the optimization of such situations rather troublesome, since enhancing the resolution for a given analyte may imply a collateral worsening of the resolution of other analytes. The aim of this work is to design multi-linear gradients that maximize the resolution, while fulfilling some restrictions: all peaks should be eluted before a given maximal time, the gradient should be flat or increasing, and sudden changes close to eluting peaks are penalized. Consequently, an equilibrated baseline resolution for all compounds is sought. This goal is achieved by splitting the optimization problem in a multi-scale framework. In each scale κ, an optimization problem is solved with N_κ ≈ 2^κ variables that are used to build the gradients. The N_κ variables define cubic splines written in terms of a B-spline basis. This allows expressing gradients as polygonals of M points approximating the splines. The cubic splines are built using subdivision schemes, a technique of fast generation of smooth curves, compatible with the multi-scale framework. Owing to the nature of the problem and the presence of multiple local maxima, the algorithm used in the optimization problem of each scale κ should be "global", such as the pattern-search algorithm. The multi-scale optimization approach is successfully applied to find the best multi-linear gradient for resolving a mixture of amino acid derivatives. Copyright © 2017 Elsevier B.V. All rights reserved.
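    The pattern-search algorithm named above can be sketched as a simple compass search. This is a minimal illustration, not the authors' implementation; in the paper it would be run once per scale κ over the N_κ gradient variables, whereas here it minimizes a hypothetical two-variable bowl.

```python
def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    """Compass (pattern) search: poll +/- step along each coordinate;
    move to the first improving point, otherwise halve the step."""
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):
                y = x[:]
                y[i] += d
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5  # no poll point improved: refine the mesh
    return x, fx

# Example: smooth 2-D bowl with its minimum at (1, -2).
best_x, best_f = pattern_search(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2,
                                [0.0, 0.0])
```

Derivative-free polling of this kind is what makes pattern search usable when the resolution objective is non-smooth or has many local optima.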

  14. Susceptibility-weighted imaging using inter-echo-variance channel combination for improved contrast at 7 tesla.

    PubMed

    Hosseini, Zahra; Liu, Junmin; Solovey, Igor; Menon, Ravi S; Drangova, Maria

    2017-04-01

    To implement and optimize a new approach for susceptibility-weighted image (SWI) generation from multi-echo multi-channel image data and compare its performance against optimized traditional SWI pipelines. Five healthy volunteers were imaged at 7 Tesla. The inter-echo-variance (IEV) channel combination, which uses the variance of the local frequency shift at multiple echo times as a weighting factor during channel combination, was used to calculate multi-echo local phase shift maps. Linear phase masks were combined with the magnitude to generate IEV-SWI. The performance of the IEV-SWI pipeline was compared with that of two accepted SWI pipelines: channel combination followed by (i) homodyne filtering (HPH-SWI) and (ii) unwrapping and high-pass filtering (SVD-SWI). The filtering steps of each pipeline were optimized. Contrast-to-noise ratio was used as the comparison metric. Qualitative assessment of artifact and vessel conspicuity was performed, and the processing time of the pipelines was evaluated. The optimized IEV-SWI pipeline (σ = 7 mm) resulted in continuous vessel visibility throughout the brain. IEV-SWI had significantly higher contrast compared with HPH-SWI and SVD-SWI (P < 0.001, Friedman nonparametric test). Residual background fields and phase wraps in HPH-SWI and SVD-SWI corrupted the vessel signal and/or generated vessel-mimicking artifact. Optimized implementation of the IEV-SWI pipeline processed a six-echo 16-channel dataset in under 10 min. IEV-SWI benefits from channel-by-channel processing of phase data and results in high-contrast images with an optimal balance between contrast and background noise removal, thereby presenting evidence of the importance of the order in which postprocessing techniques are applied for multi-channel SWI generation. Level of Evidence: 2. J. Magn. Reson. Imaging 2017;45:1113-1124. © 2016 International Society for Magnetic Resonance in Medicine.

  15. BMP analysis system for watershed-based stormwater management.

    PubMed

    Zhen, Jenny; Shoemaker, Leslie; Riverson, John; Alvi, Khalid; Cheng, Mow-Soung

    2006-01-01

    Best Management Practices (BMPs) are measures for mitigating nonpoint source (NPS) pollution caused mainly by stormwater runoff. Established urban and newly developing areas must develop cost-effective means for restoring or minimizing impacts, and for planning future growth. Prince George's County in Maryland, USA, a fast-growing region in the Washington, DC metropolitan area, has developed a number of tools to support analysis and decision making for stormwater management planning and design at the watershed level. These tools support watershed analysis, innovative BMPs, and optimization. Application of these tools can help achieve environmental goals and lead to significant cost savings. This project includes software development that utilizes GIS information and technology, integrates BMP process simulation models, and applies system optimization techniques for BMP planning and selection. The system employs ESRI ArcGIS as the platform, and provides GIS-based visualization and support for developing networks including sequences of land uses, BMPs, and stream reaches. The system also provides interfaces for BMP placement, BMP attribute data input, and decision optimization management. The system includes a stand-alone BMP simulation and evaluation module, which complements both research and regulatory nonpoint source control assessment efforts, and allows flexibility in examining various BMP design alternatives. Process-based simulation of BMPs provides a technique that is sensitive to local climate and rainfall patterns. The system incorporates a meta-heuristic optimization technique to find the most cost-effective BMP placement and implementation plan given a control target, or a fixed cost. A case study is presented to demonstrate the application of the Prince George's County system. The case study involves a highly urbanized area in the Anacostia River (a tributary of the Potomac River) watershed southeast of Washington, DC. 
An innovative system of management practices is proposed to minimize runoff, improve water quality, and provide water reuse opportunities. Proposed management techniques include bioretention, green roof, and rooftop runoff collection (rain barrel) systems. The modeling system was used to identify the most cost-effective combinations of management practices to help minimize frequency and size of runoff events and resulting combined sewer overflows to the Anacostia River.
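    The cost-effectiveness idea behind the BMP selection step can be illustrated with a deliberately simplified greedy heuristic. The county system uses a meta-heuristic over placements and sizes; here, all BMP names, costs, and runoff-reduction figures are hypothetical.

```python
def select_bmps(candidates, budget):
    """Greedy cost-effectiveness heuristic: repeatedly pick the candidate
    BMP with the highest runoff-reduction-per-dollar ratio that still
    fits within the remaining budget."""
    chosen, spent, reduction = [], 0.0, 0.0
    pool = sorted(candidates,
                  key=lambda c: c["reduction"] / c["cost"],
                  reverse=True)
    for c in pool:
        if spent + c["cost"] <= budget:
            chosen.append(c["name"])
            spent += c["cost"]
            reduction += c["reduction"]
    return chosen, spent, reduction

# Hypothetical candidate BMPs: cost in $k, runoff reduction in ML/yr.
candidates = [
    {"name": "bioretention", "cost": 40.0, "reduction": 12.0},
    {"name": "green_roof",   "cost": 90.0, "reduction": 15.0},
    {"name": "rain_barrels", "cost": 10.0, "reduction": 4.0},
    {"name": "swale",        "cost": 25.0, "reduction": 6.0},
]
chosen, spent, reduction = select_bmps(candidates, budget=80.0)
```

A greedy ratio rule like this is only a baseline; the meta-heuristic in the paper can escape cases where the greedy choice blocks a better combination.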

  16. Gradient gravitational search: An efficient metaheuristic algorithm for global optimization.

    PubMed

    Dash, Tirtharaj; Sahu, Prabhat K

    2015-05-30

    The adaptation of novel techniques developed in the field of computational chemistry to solve the concerned problems for large and flexible molecules is taking center stage with regard to algorithmic efficiency, computational cost, and accuracy. In this article, the gradient-based gravitational search (GGS) algorithm, using analytical gradients for a fast minimization to the next local minimum, is reported. Its efficiency as a metaheuristic approach has also been compared with Gradient Tabu Search and with the Gravitational Search, Cuckoo Search, and Backtracking Search algorithms for global optimization. Moreover, the GGS approach has also been applied to computational chemistry problems for finding the minimum potential energy of two-dimensional and three-dimensional off-lattice protein models. The simulation results reveal the relative stability and physical accuracy of the protein models at efficient computational cost. © 2015 Wiley Periodicals, Inc.
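    A hedged sketch of the general idea follows: candidate solutions ("agents") attract one another with fitness-derived masses, as in gravitational search, and each move is followed by a few analytical-gradient steps, mimicking the fast local descent of GGS. This is an illustration only; the update rules and every parameter name below are simplified relative to the published algorithm.

```python
import random


def ggs_minimize(f, grad, bounds, n_agents=12, n_iters=60,
                 g0=1.0, lr=0.05, seed=0):
    """Toy gradient-assisted gravitational search on a box-bounded domain."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_agents)]
    V = [[0.0] * dim for _ in range(n_agents)]
    for it in range(n_iters):
        fit = [f(x) for x in X]
        best, worst = min(fit), max(fit)
        # Better fitness -> heavier mass (normalized, guarded against ties).
        m = [(worst - fi) / (worst - best + 1e-12) + 1e-12 for fi in fit]
        total = sum(m)
        mass = [mi / total for mi in m]
        G = g0 * (1 - it / n_iters)  # gravity decays over the run
        for i in range(n_agents):
            for d in range(dim):
                force = sum(
                    rng.random() * G * mass[j] * (X[j][d] - X[i][d])
                    / (abs(X[j][d] - X[i][d]) + 1e-12)
                    for j in range(n_agents) if j != i
                )
                V[i][d] = rng.random() * V[i][d] + force
                X[i][d] += V[i][d]
            # Analytical-gradient polish toward the nearest local minimum.
            for _ in range(5):
                g = grad(X[i])
                X[i] = [xi - lr * gi for xi, gi in zip(X[i], g)]
    # Final fast descent from the best agent to its local minimum.
    x = min(X, key=f)
    for _ in range(200):
        x = [xi - lr * gi for xi, gi in zip(x, grad(x))]
    return x, f(x)


# Example: sphere function, global minimum 0 at the origin.
f = lambda x: sum(xi * xi for xi in x)
grad = lambda x: [2 * xi for xi in x]
x_best, f_best = ggs_minimize(f, grad, bounds=[(-5, 5)] * 3)
```

The split of labour is the point: the population-level attraction explores, while the cheap analytical-gradient steps do the fine minimization that a pure metaheuristic would waste evaluations on.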

  17. Optimization Models for Scheduling of Jobs

    PubMed Central

    Indika, S. H. Sathish; Shier, Douglas R.

    2006-01-01

    This work is motivated by a particular scheduling problem that is faced by logistics centers that perform aircraft maintenance and modification. Here we concentrate on a single facility (hangar) which is equipped with several work stations (bays). Specifically, a number of jobs have already been scheduled for processing at the facility; the starting times, durations, and work station assignments for these jobs are assumed to be known. We are interested in how best to schedule a number of new jobs that the facility will be processing in the near future. We first develop a mixed integer quadratic programming model (MIQP) for this problem. Since the exact solution of this MIQP formulation is time consuming, we develop a heuristic procedure, based on existing bin packing techniques. This heuristic is further enhanced by application of certain local optimality conditions. PMID:27274921
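    A standard bin-packing building block of the kind such a heuristic could draw on is first-fit decreasing. The sketch below is an illustration, not the authors' procedure; the job durations and the shift capacity are hypothetical.

```python
def first_fit_decreasing(durations, capacity):
    """First-fit decreasing: sort jobs by decreasing duration, then place
    each job into the first work station (bin) with enough remaining
    capacity, opening a new station when none fits."""
    bins = []  # each bin: [remaining_capacity, [job durations]]
    for d in sorted(durations, reverse=True):
        for b in bins:
            if b[0] >= d:
                b[0] -= d
                b[1].append(d)
                break
        else:
            bins.append([capacity - d, [d]])
    return [b[1] for b in bins]

# Example: job durations (hours) packed into 10-hour station shifts.
assignment = first_fit_decreasing([2, 5, 4, 7, 1, 3, 8], capacity=10)
n_stations = len(assignment)
```

On this instance the heuristic happens to reach the lower bound of three stations (total work 30 hours, 10 per shift); in general FFD is only guaranteed to be near-optimal, which is why the paper layers local optimality conditions on top.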

  18. Efficacy of Code Optimization on Cache-Based Processors

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    In this paper a number of techniques for improving the cache performance of a representative piece of numerical software are presented. Target machines are popular processors from several vendors: MIPS R5000 (SGI Indy), MIPS R8000 (SGI PowerChallenge), MIPS R10000 (SGI Origin), DEC Alpha EV4 + EV5 (Cray T3D & T3E), IBM RS6000 (SP Wide-node), Intel PentiumPro (Ames' Whitney), Sun UltraSparc (NERSC's NOW). The optimizations all attempt to increase the locality of memory accesses, but they meet with rather varied and often counterintuitive success on the different computing platforms. We conclude that it may be genuinely impossible to obtain portable performance on the current generation of cache-based machines. At the least, it appears that the performance of modern commodity processors cannot be described with parameters defining the cache alone.
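    One common transformation for increasing the locality of memory accesses is loop blocking (tiling). The Python sketch below shows only the loop restructuring one would apply in the paper's Fortran or C kernels; it is an illustration and cannot demonstrate the cache speedup itself.

```python
def matmul_blocked(A, B, block=2):
    """Blocked (tiled) matrix multiply: the matrices are processed in
    small block x block tiles so that each tile is reused while it is
    still resident in cache. Results are identical to the naive triple
    loop; only the traversal order changes."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for ii in range(0, n, block):
        for kk in range(0, m, block):
            for jj in range(0, p, block):
                for i in range(ii, min(ii + block, n)):
                    for k in range(kk, min(kk + block, m)):
                        a = A[i][k]  # scalar held across the inner loop
                        for j in range(jj, min(jj + block, p)):
                            C[i][j] += a * B[k][j]
    return C

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
B = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
C = matmul_blocked(A, B)
```

The block size would be tuned per machine to the cache capacity, which is exactly the kind of platform-specific tuning whose uneven payoff the paper reports.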

  19. Modeling Photo-Bleaching Kinetics to Create High Resolution Maps of Rod Rhodopsin in the Human Retina

    PubMed Central

    Ehler, Martin; Dobrosotskaya, Julia; Cunningham, Denise; Wong, Wai T.; Chew, Emily Y.; Czaja, Wojtek; Bonner, Robert F.

    2015-01-01

    We introduce and describe a novel non-invasive in-vivo method for mapping local rod rhodopsin distribution in the human retina over a 30-degree field. Our approach is based on analyzing the brightening of detected lipofuscin autofluorescence within small pixel clusters in registered imaging sequences taken with a commercial 488 nm confocal scanning laser ophthalmoscope (cSLO) over a 1-minute period. We modeled the kinetics of rhodopsin bleaching by applying variational optimization techniques from applied mathematics. The physical model and the numerical analysis with its implementation are outlined in detail. This new technique enables the creation of spatial maps of the retinal rhodopsin and retinal pigment epithelium (RPE) bisretinoid distribution with a resolution of ≈50 μm. PMID:26196397

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ogane, S.; Shikama, T., E-mail: shikama@me.kyoto-u.ac.jp; Hasuo, M.

    In magnetically confined torus plasmas, the local emission intensity, temperature, and flow velocity of atoms in the inboard and outboard scrape-off layers can be separately measured by passive emission spectroscopy assisted by observation of the Zeeman splitting in their spectral line shape. To utilize this technique, a near-infrared interference spectrometer optimized for the observation of the helium 2³S–2³P transition spectral line (wavelength 1083 nm) has been developed. The applicability of the technique to actual torus devices is elucidated by calculating the spectral line shapes expected to be observed in LHD and QUEST (Q-shu University Experiment with Steady-State Spherical Tokamak). In addition, the Zeeman effect on the spectral line shape is measured using a glow-discharge tube installed in a superconducting magnet.

  1. Analysis of the accelerated crucible rotation technique applied to the gradient freeze growth of cadmium zinc telluride

    NASA Astrophysics Data System (ADS)

    Divecha, Mia S.; Derby, Jeffrey J.

    2017-06-01

    We employ finite-element modeling to assess the effects of the accelerated crucible rotation technique (ACRT) on cadmium zinc telluride (CZT) crystals grown from a gradient freeze system. Via consideration of tellurium segregation and transport, we show, for the first time, that steady growth from a tellurium-rich melt produces persistent undercooling in front of the growth interface, likely leading to morphological instability. The application of ACRT rearranges melt flows and tellurium transport but, in contrast to conventional wisdom, does not altogether eliminate undercooling of the melt. Rather, a much more complicated picture arises, where spatio-temporal realignment of undercooled melt may act to locally suppress instability. A better understanding of these mechanisms and quantification of their overall effects will allow for future growth optimization.

  2. Comparison of transform coding methods with an optimal predictor for the data compression of digital elevation models

    NASA Technical Reports Server (NTRS)

    Lewis, Michael

    1994-01-01

    Statistical encoding techniques enable the reduction of the number of bits required to encode a set of symbols, and are derived from their probabilities. Huffman encoding is an example of statistical encoding that has been used for error-free data compression. The degree of compression given by Huffman encoding in this application can be improved by the use of prediction methods. These replace the set of elevations by a set of corrections that have a more advantageous probability distribution. In particular, the method of Lagrange Multipliers for minimization of the mean square error has been applied to local geometrical predictors. Using this technique, an 8-point predictor achieved about a 7 percent improvement over an existing simple triangular predictor.
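    The benefit of prediction before Huffman coding can be seen in a small sketch: first-difference residuals of a smooth elevation profile have a far more skewed symbol distribution than the raw elevations, so the Huffman code spends fewer bits. (The paper's 8-point Lagrange-multiplier predictor is replaced here by a simple previous-neighbor predictor, and the elevation profile is synthetic.)

```python
import heapq
from collections import Counter


def huffman_lengths(symbols):
    """Huffman code length (bits) per symbol, built from the empirical
    symbol frequencies; returns {symbol: code length}."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    # Heap entries: (frequency, tiebreak id, {symbol: current depth}).
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]


def encoded_bits(symbols):
    lengths = huffman_lengths(symbols)
    counts = Counter(symbols)
    return sum(counts[s] * lengths[s] for s in counts)


# Synthetic smooth terrain ramp: raw elevations take many distinct
# values, while first-difference residuals concentrate on few values.
elevations = [100 + i + (i % 3) for i in range(64)]
residuals = [elevations[0]] + [b - a for a, b in zip(elevations, elevations[1:])]

raw_bits = encoded_bits(elevations)
residual_bits = encoded_bits(residuals)
```

The raw sequence has 64 distinct, equally frequent values (6 bits each), while the residual alphabet collapses to three symbols, which is where the compression gain of a predictor comes from.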

  3. Optimization of numerical weather/wave prediction models based on information geometry and computational techniques

    NASA Astrophysics Data System (ADS)

    Galanis, George; Famelis, Ioannis; Kalogeri, Christina

    2014-10-01

    In recent years a new, highly demanding framework has been set for environmental sciences and applied mathematics as a result of the needs posed by issues that are of interest not only to the scientific community but to today's society in general: global warming, renewable energy resources, and natural hazards can be listed among them. Two main directions are followed by the research community today in order to address the above problems: the utilization of environmental observations obtained from in situ or remote sensing sources, and meteorological-oceanographic simulations based on physical-mathematical models. In particular, in trying to reach credible local forecasts, the two previous data sources are combined by algorithms that are essentially based on optimization processes. The conventional approaches in this framework usually neglect the topological-geometrical properties of the space of the data under study by adopting least squares methods based on classical Euclidean geometry tools. In the present work new optimization techniques are discussed making use of methodologies from a rapidly advancing branch of applied mathematics, Information Geometry. The latter proves that the distributions of data sets are elements of non-Euclidean structures in which the underlying geometry may differ significantly from the classical one. Geometrical entities like Riemannian metrics, distances, curvature and affine connections are utilized in order to define the optimum distributions fitting the environmental data at specific areas and to form differential systems that describe the optimization procedures. The proposed methodology is illustrated by an application to wind speed forecasts on the island of Kefalonia, Greece.

  4. Counseling about turbuhaler technique: needs assessment and effective strategies for community pharmacists.

    PubMed

    Basheti, Iman A; Reddel, Helen K; Armour, Carol L; Bosnic-Anticevich, Sinthia Z

    2005-05-01

    Optimal effects of asthma medications are dependent on correct inhaler technique. In a telephone survey, 77/87 patients reported that their Turbuhaler technique had not been checked by a health care professional. In a subsequent pilot study, 26 patients were randomized to receive one of 3 Turbuhaler counseling techniques, administered in the community pharmacy. Turbuhaler technique was scored before and 2 weeks after counseling (optimal technique = score 9/9). At baseline, 0/26 patients had optimal technique. After 2 weeks, optimal technique was achieved by 0/7 patients receiving standard verbal counseling (A), 2/8 receiving verbal counseling augmented with emphasis on Turbuhaler position during priming (B), and 7/9 receiving augmented verbal counseling plus physical demonstration (C) (Fisher's exact test for A vs C, p = 0.006). Satisfactory technique (4 essential steps correct) also improved (A: 3/8 to 4/7; B: 2/9 to 5/8; and C: 1/9 to 9/9 patients) (A vs C, p = 0.1). Counseling in Turbuhaler use represents an important opportunity for community pharmacists to improve asthma management, but physical demonstration appears to be an important component to effective Turbuhaler training for educating patients toward optimal Turbuhaler technique.

  5. A study of optimization techniques in HDR brachytherapy for the prostate

    NASA Astrophysics Data System (ADS)

    Pokharel, Ghana Shyam

    Several studies carried out thus far favor dose escalation to the prostate gland for better local control of the disease. But the optimal way of delivering higher doses of radiation therapy to the prostate without harming neighboring critical structures is still debated. In this study, we proposed that real-time high dose rate (HDR) brachytherapy with highly efficient and effective optimization could be an alternative means of precise delivery of such higher doses. This approach to delivery eliminates critical issues such as treatment setup uncertainties and target localization that arise in external beam radiation therapy. Likewise, dosimetry in HDR brachytherapy is not influenced by organ edema and potential source migration as in permanent interstitial implants. Moreover, recently reported radiobiological parameters further strengthen the argument for using hypofractionated HDR brachytherapy in the management of prostate cancer. Firstly, we studied the essential features and requirements of a real-time HDR brachytherapy treatment planning system. Automated catheter reconstruction with fast editing tools, a fast yet accurate dose engine, and robust and fast optimization and evaluation engines are some of the essential requirements for such procedures. Moreover, in most of the cases we performed, treatment plan optimization took a significant share of the overall procedure time. So, making treatment plan optimization automatic or semi-automatic, with sufficient speed and accuracy, was the goal of the remaining part of the project. Secondly, we studied the role of the optimization function and constraints in the overall quality of the optimized plan. We studied a gradient-based deterministic algorithm with dose-volume histogram (DVH) and more conventional variance-based objective functions for optimization. 
    In this optimization strategy, the relative weight of a particular objective in the aggregate objective function signifies its importance with respect to the other objectives. Based on our study, the DVH-based objective function performed better than the traditional variance-based objective function in creating a clinically acceptable plan when executed under identical conditions. Thirdly, we studied a multiobjective optimization strategy using both DVH- and variance-based objective functions. The optimization strategy was to create several Pareto-optimal solutions by scanning the clinically relevant part of the Pareto front. This strategy was adopted to decouple optimization from decision making, so that the user could select the final solution from a pool of alternative solutions based on his or her clinical goals. The overall quality of the treatment plan improved using this approach compared with the traditional class-solution approach. In fact, the final optimized plan selected using the decision engine with the DVH-based objective was comparable to a typical clinical plan created by an experienced physicist. Next, we studied a hybrid technique comprising both stochastic and deterministic algorithms to optimize both dwell positions and dwell times. A simulated annealing algorithm was used to find the optimal catheter distribution, and the DVH-based algorithm was used to optimize the 3D dose distribution for a given catheter distribution. This unique treatment planning and optimization tool was capable of producing clinically acceptable, highly reproducible treatment plans in a clinically reasonable time. As this algorithm was able to create clinically acceptable plans within a clinically reasonable time automatically, it is appealing for real-time procedures. Next, we studied the feasibility of multiobjective optimization using an evolutionary algorithm for real-time HDR brachytherapy for the prostate. 
    With properly tuned algorithm-specific parameters, the algorithm was able to create clinically acceptable plans within a clinically reasonable time. However, the algorithm was allowed to run for only a limited number of generations, fewer than is generally considered optimal for such algorithms. This was done to keep the time window suitable for real-time procedures. Therefore, further study under improved conditions is required to realize the full potential of the algorithm.
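    The generic skeleton of the simulated annealing component can be sketched as follows. This is an illustration on a toy continuous objective, not the thesis code; in the thesis the state would be a catheter/dwell configuration and the objective a DVH-based score, and every parameter below is illustrative.

```python
import math
import random


def simulated_annealing(f, x0, t0=1.0, t_min=1e-4, alpha=0.95,
                        steps=50, seed=1):
    """Generic simulated annealing: accept a random perturbation if it
    improves the objective, or with probability exp(-delta/T) otherwise;
    the temperature T (also used here as the perturbation scale) is
    cooled geometrically and the best state seen is tracked."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, fbest = x[:], fx
    T = t0
    while T > t_min:
        for _ in range(steps):
            y = [xi + rng.gauss(0, T) for xi in x]
            fy = f(y)
            if fy < fx or rng.random() < math.exp(-(fy - fx) / T):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x[:], fx
        T *= alpha
    return best, fbest


# Toy nonconvex-looking objective with a single global minimum of 0
# at the origin (quadratic bowl plus a cosine ripple).
f = lambda v: sum(vi * vi - math.cos(3 * vi) + 1 for vi in v)
x_sa, f_sa = simulated_annealing(f, [2.5, -2.5])
```

The occasional uphill acceptance at high temperature is what lets the search escape poor catheter configurations early on, while the geometric cooling turns it into pure descent near the end of the run.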

  6. Spot-Scanning Proton Arc (SPArc) Therapy: The First Robust and Delivery-Efficient Spot-Scanning Proton Arc Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Xuanfeng, E-mail: Xuanfeng.ding@beaumont.org; Li, Xiaoqiang; Zhang, J. Michele

    Purpose: To present a novel robust and delivery-efficient spot-scanning proton arc (SPArc) therapy technique. Methods and Materials: A SPArc optimization algorithm was developed that integrates control point resampling, energy layer redistribution, energy layer filtration, and energy layer resampling. The feasibility of such a technique was evaluated using sample patients: 1 patient with locally advanced head and neck oropharyngeal cancer with bilateral lymph node coverage, and 1 with a nonmobile lung cancer. Plan quality, robustness, and total estimated delivery time were compared with the robust optimized multifield step-and-shoot arc plan without SPArc optimization (Arc_multi-field) and the standard robust optimized intensity modulated proton therapy (IMPT) plan. Dose-volume histograms of target and organs at risk were analyzed, taking into account the setup and range uncertainties. Total delivery time was calculated on the basis of a 360° gantry room with 1 revolution per minute gantry rotation speed, 2-millisecond spot switching time, 1-nA beam current, 0.01 minimum spot monitor unit, and energy layer switching time of 0.5 to 4 seconds. Results: The SPArc plan showed potential dosimetric advantages for both clinical sample cases. Compared with IMPT, SPArc delivered 8% and 14% less integral dose for the oropharyngeal and lung cancer cases, respectively. Furthermore, evaluating the lung cancer plan compared with IMPT, it was evident that the maximum skin dose, the mean lung dose, and the maximum dose to ribs were reduced by 60%, 15%, and 35%, respectively, whereas the conformity index was improved from 7.6 (IMPT) to 4.0 (SPArc). The total treatment delivery time for lung and oropharyngeal cancer patients was reduced by 55% to 60% and 56% to 67%, respectively, when compared with Arc_multi-field plans. 
Conclusion: The SPArc plan is the first robust and delivery-efficient proton spot-scanning arc therapy technique, which could potentially be implemented into routine clinical practice.

  7. Fast live cell imaging at nanometer scale using annihilating filter-based low-rank Hankel matrix approach

    NASA Astrophysics Data System (ADS)

    Min, Junhong; Carlini, Lina; Unser, Michael; Manley, Suliana; Ye, Jong Chul

    2015-09-01

Localization microscopy techniques such as STORM/PALM can achieve nanometer-scale spatial resolution by iteratively localizing fluorescent molecules. It has been shown that imaging densely activated molecules can improve temporal resolution, which had been considered the major limitation of localization microscopy. However, such high-density imaging requires advanced localization algorithms to deal with overlapping point spread functions (PSFs). To address this technical challenge, we previously developed a localization algorithm called FALCON [1, 2] that uses a quasi-continuous localization model with a sparsity prior in image space, and demonstrated it in both 2D and 3D live-cell imaging. However, it still has several shortcomings. Here, we propose a new localization algorithm using an annihilating filter-based low-rank Hankel structured matrix approach (ALOHA). According to the ALOHA principle, sparsity in the image domain implies the existence of a rank-deficient Hankel structured matrix in Fourier space. Thanks to this fundamental duality, the new algorithm can perform data-adaptive PSF estimation and deconvolution of the Fourier spectrum, followed by truly grid-free localization using a spectral estimation technique. Furthermore, all of these optimizations are conducted in Fourier space only. We validated the performance of the new method with numerical experiments and a live-cell imaging experiment. The results confirm higher localization performance than existing methods in terms of both accuracy and detection rate.
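The image-domain-sparsity/Fourier-domain-low-rank duality that ALOHA exploits can be checked numerically. The following minimal numpy sketch (illustrative only, not the authors' code) builds a Hankel matrix from the Fourier samples of a signal containing k point sources and confirms that its rank equals k:

```python
import numpy as np

def hankel_matrix(x, rows):
    """rows x (len(x) - rows + 1) Hankel matrix from samples x."""
    cols = len(x) - rows + 1
    return np.array([x[i:i + cols] for i in range(rows)])

rng = np.random.default_rng(0)
k = 3                                  # number of point sources (sparsity)
N = 32                                 # number of Fourier samples
locations = rng.uniform(0, 1, k)       # continuous (grid-free) positions
amplitudes = rng.uniform(1, 2, k)
n = np.arange(N)
fourier = sum(a * np.exp(-2j * np.pi * n * t)
              for a, t in zip(amplitudes, locations))

H = hankel_matrix(fourier, rows=8)     # 8 x 25 Hankel matrix
rank = np.linalg.matrix_rank(H, tol=1e-8)
print(rank)                            # equals the sparsity k = 3
```

This rank deficiency is what allows the annihilating-filter machinery to recover the source positions without ever discretizing the image grid.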

  8. Optimizing Imaging Conditions for Demanding Multi-Color Super Resolution Localization Microscopy

    PubMed Central

    Nahidiazar, Leila; Agronskaia, Alexandra V.; Broertjes, Jorrit; van den Broek, Bram; Jalink, Kees

    2016-01-01

    Single Molecule Localization super-resolution Microscopy (SMLM) has become a powerful tool to study cellular architecture at the nanometer scale. In SMLM, single fluorophore labels are made to repeatedly switch on and off (“blink”), and their exact locations are determined by mathematically finding the centers of individual blinks. The image quality obtainable by SMLM critically depends on efficacy of blinking (brightness, fraction of molecules in the on-state) and on preparation longevity and labeling density. Recent work has identified several combinations of bright dyes and imaging buffers that work well together. Unfortunately, different dyes blink optimally in different imaging buffers, and acquisition of good quality 2- and 3-color images has therefore remained challenging. In this study we describe a new imaging buffer, OxEA, that supports 3-color imaging of the popular Alexa dyes. We also describe incremental improvements in preparation technique that significantly decrease lateral- and axial drift, as well as increase preparation longevity. We show that these improvements allow us to collect very large series of images from the same cell, enabling image stitching, extended 3D imaging as well as multi-color recording. PMID:27391487

  9. Diffusion Monte Carlo approach versus adiabatic computation for local Hamiltonians

    NASA Astrophysics Data System (ADS)

    Bringewatt, Jacob; Dorland, William; Jordan, Stephen P.; Mink, Alan

    2018-02-01

Most research regarding quantum adiabatic optimization has focused on stoquastic Hamiltonians, whose ground states can be expressed with only real non-negative amplitudes and for which destructive interference is therefore not manifest. This raises the question of whether classical Monte Carlo algorithms can efficiently simulate quantum adiabatic optimization with stoquastic Hamiltonians. Recent results have given counterexamples in which path-integral and diffusion Monte Carlo fail to do so. However, most adiabatic optimization algorithms, such as those for solving MAX-k-SAT problems, use k-local Hamiltonians, whereas our previous counterexample for diffusion Monte Carlo involved n-body interactions. Here we present a 6-local counterexample demonstrating that even for these local Hamiltonians there are cases where diffusion Monte Carlo cannot efficiently simulate quantum adiabatic optimization. Furthermore, we perform empirical testing of diffusion Monte Carlo on a standard, well-studied class of permutation-symmetric tunneling problems and similarly find large advantages for quantum optimization over diffusion Monte Carlo.

  10. Performance of Grey Wolf Optimizer on large scale problems

    NASA Astrophysics Data System (ADS)

    Gupta, Shubham; Deep, Kusum

    2017-01-01

For solving nonlinear continuous optimization problems, numerous nature-inspired optimization techniques have been proposed in the literature; these can be applied to real-life problems where conventional techniques cannot. The Grey Wolf Optimizer is one such technique, which has gained popularity over the last two years. The objective of this paper is to investigate the performance of the Grey Wolf Optimization algorithm on large-scale optimization problems. The algorithm is implemented on 5 common scalable problems from the literature, namely the Sphere, Rosenbrock, Rastrigin, Ackley and Griewank functions, with dimensions varied from 50 to 1000. The results indicate that the Grey Wolf Optimizer is a powerful nature-inspired optimization algorithm for large-scale problems, with the exception of Rosenbrock, a unimodal function.
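The update rule at the heart of the Grey Wolf Optimizer, in which every wolf moves toward the three best solutions (alpha, beta, delta) with a linearly shrinking exploration radius, can be sketched in a few lines of numpy. This is a minimal illustration of the standard GWO update, not the exact implementation benchmarked in the paper:

```python
import numpy as np

def gwo(f, dim, n_wolves=20, iters=300, lb=-5.0, ub=5.0, seed=0):
    """Minimal Grey Wolf Optimizer: wolves are attracted toward the
    three best solutions, with exploration coefficient a shrinking 2 -> 0."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for it in range(iters):
        fit = np.array([f(x) for x in X])
        alpha, beta, delta = X[np.argsort(fit)[:3]]
        a = 2.0 * (1 - it / iters)            # linearly decreasing 2 -> 0
        moves = []
        for leader in (alpha, beta, delta):
            r1 = rng.random((n_wolves, dim))
            r2 = rng.random((n_wolves, dim))
            A = 2 * a * r1 - a                # exploration/exploitation
            C = 2 * r2
            moves.append(leader - A * np.abs(C * leader - X))
        X = np.clip(sum(moves) / 3.0, lb, ub)  # average of the 3 pulls
    fit = np.array([f(x) for x in X])
    return X[np.argmin(fit)], float(fit.min())

sphere = lambda x: float(np.sum(x * x))
best, val = gwo(sphere, dim=30)
print(val)   # typically very close to 0 on the Sphere function
```

On the scalable benchmarks named in the abstract only the objective function changes; the driver above is dimension-agnostic.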

  11. OCT despeckling via weighted nuclear norm constrained non-local low-rank representation

    NASA Astrophysics Data System (ADS)

    Tang, Chang; Zheng, Xiao; Cao, Lijuan

    2017-10-01

    As a non-invasive imaging modality, optical coherence tomography (OCT) plays an important role in medical sciences. However, OCT images are always corrupted by speckle noise, which can mask image features and pose significant challenges for medical analysis. In this work, we propose an OCT despeckling method by using non-local, low-rank representation with weighted nuclear norm constraint. Unlike previous non-local low-rank representation based OCT despeckling methods, we first generate a guidance image to improve the non-local group patches selection quality, then a low-rank optimization model with a weighted nuclear norm constraint is formulated to process the selected group patches. The corrupted probability of each pixel is also integrated into the model as a weight to regularize the representation error term. Note that each single patch might belong to several groups, hence different estimates of each patch are aggregated to obtain its final despeckled result. Both qualitative and quantitative experimental results on real OCT images show the superior performance of the proposed method compared with other state-of-the-art speckle removal techniques.

  12. Efficient Convex Optimization for Energy-Based Acoustic Sensor Self-Localization and Source Localization in Sensor Networks.

    PubMed

    Yan, Yongsheng; Wang, Haiyan; Shen, Xiaohong; Leng, Bing; Li, Shuangquan

    2018-05-21

Energy readings are an efficient and attractive measurement for collaborative acoustic source localization in practice, owing to their low cost in both energy and computational capability. Maximum likelihood problems are derived by fusing the acoustic energy readings transmitted from local sensors. To efficiently handle the nonconvex objective of the optimization problem, we present an approximate estimator of the original problem. Then a direct norm relaxation and a semidefinite relaxation, respectively, are used to derive second-order cone programs, semidefinite programs, or mixtures of the two, for both sensor self-localization and source localization. Furthermore, by taking colored energy-reading noise into account, several minimax optimization problems are formulated, which are likewise relaxed via the direct norm relaxation and the semidefinite relaxation into convex optimization problems. Performance comparison with existing acoustic energy-based source localization methods shows the validity of the proposed methods.
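The energy-decay model behind these formulations is easy to illustrate: received acoustic energy falls off as the inverse squared distance to the source, and under Gaussian noise the maximum likelihood location is a nonlinear least-squares fit. A minimal numpy sketch (hypothetical sensor layout and noise level; the paper's convex relaxations additionally require an SOCP/SDP solver) solves the original nonconvex problem by brute force on a grid:

```python
import numpy as np

rng = np.random.default_rng(1)
sensors = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
source = np.array([3.0, 7.0])
S = 100.0                                        # source energy

d2 = np.sum((sensors - source) ** 2, axis=1)
y = S / d2 + rng.normal(0, 0.01, len(sensors))   # noisy energy readings

# brute-force ML over a grid: for each candidate location the optimal
# source energy S has a closed form, leaving a residual to minimize
xs = np.linspace(0, 10, 201)
best, best_cost = None, np.inf
for px in xs:
    for py in xs:
        p = np.array([px, py])
        dd = np.sum((sensors - p) ** 2, axis=1)
        if np.any(dd < 1e-9):                    # skip sensor positions
            continue
        g = 1.0 / dd
        S_hat = (g @ y) / (g @ g)                # least-squares energy
        cost = np.sum((y - S_hat * g) ** 2)
        if cost < best_cost:
            best, best_cost = p, cost
print(best)   # near the true source (3, 7)
```

The relaxations in the paper replace this exhaustive search with convex programs that scale to many sensors and higher dimensions.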

  13. Efficient Convex Optimization for Energy-Based Acoustic Sensor Self-Localization and Source Localization in Sensor Networks

    PubMed Central

    Yan, Yongsheng; Wang, Haiyan; Shen, Xiaohong; Leng, Bing; Li, Shuangquan

    2018-01-01

Energy readings are an efficient and attractive measurement for collaborative acoustic source localization in practice, owing to their low cost in both energy and computational capability. Maximum likelihood problems are derived by fusing the acoustic energy readings transmitted from local sensors. To efficiently handle the nonconvex objective of the optimization problem, we present an approximate estimator of the original problem. Then a direct norm relaxation and a semidefinite relaxation, respectively, are used to derive second-order cone programs, semidefinite programs, or mixtures of the two, for both sensor self-localization and source localization. Furthermore, by taking colored energy-reading noise into account, several minimax optimization problems are formulated, which are likewise relaxed via the direct norm relaxation and the semidefinite relaxation into convex optimization problems. Performance comparison with existing acoustic energy-based source localization methods shows the validity of the proposed methods. PMID:29883410

  14. Validity of the Catapult ClearSky T6 Local Positioning System for Team Sports Specific Drills, in Indoor Conditions

    PubMed Central

    Luteberget, Live S.; Spencer, Matt; Gilgien, Matthias

    2018-01-01

    Aim: The aim of the present study was to determine the validity of position, distance traveled and instantaneous speed of team sport players as measured by a commercially available local positioning system (LPS) during indoor use. In addition, the study investigated how the placement of the field of play relative to the anchor nodes and walls of the building affected the validity of the system. Method: The LPS (Catapult ClearSky T6, Catapult Sports, Australia) and the reference system [Qualisys Oqus, Qualisys AB, Sweden, (infra-red camera system)] were installed around the field of play to capture the athletes' motion. Athletes completed five tasks, all designed to imitate team-sports movements. The same protocol was completed in two sessions, one with an assumed optimal geometrical setup of the LPS (optimal condition), and once with a sub-optimal geometrical setup of the LPS (sub-optimal condition). Raw two-dimensional position data were extracted from both the LPS and the reference system for accuracy assessment. Position, distance and speed were compared. Results: The mean difference between the LPS and reference system for all position estimations was 0.21 ± 0.13 m (n = 30,166) in the optimal setup, and 1.79 ± 7.61 m (n = 22,799) in the sub-optimal setup. The average difference in distance was below 2% for all tasks in the optimal condition, while it was below 30% in the sub-optimal condition. Instantaneous speed showed the largest differences between the LPS and reference system of all variables, both in the optimal (≥35%) and sub-optimal condition (≥74%). The differences between the LPS and reference system in instantaneous speed were speed dependent, showing increased differences with increasing speed. Discussion: Measures of position, distance, and average speed from the LPS show low errors, and can be used confidently in time-motion analyses for indoor team sports. The calculation of instantaneous speed from LPS raw data is not valid. 
The application of appropriate filtering techniques should be investigated to improve the validity of instantaneous speed data. For all measures, the placement of the anchor nodes and of the field of play relative to the walls of the building influences LPS output to a large degree. PMID:29670530

  15. Design optimization of the sensor spatial arrangement in a direct magnetic field-based localization system for medical applications.

    PubMed

    Marechal, Luc; Shaohui Foong; Zhenglong Sun; Wood, Kristin L

    2015-08-01

Motivated by the need for a neuronavigation system to improve the efficacy of intracranial surgical procedures, a localization system using passive magnetic fields for real-time monitoring of the insertion of an external ventricular drain (EVD) catheter is conceived and developed. The system operates on the principle of measuring the static magnetic field of a magnetic marker with an array of magnetic sensors. An artificial neural network (ANN) is used directly to solve the inverse problem of magnetic dipole localization, for improved efficiency and precision. As the accuracy of the localization system depends strongly on the sensors' spatial locations, an optimization framework for the design of such sensing assemblies, based on understanding and classification of experimental sensor characteristics as well as prior knowledge of the general trajectory of the localization pathway, is described and investigated in this paper. Both optimized and non-optimized sensor configurations were experimentally evaluated, and the results show superior performance for the optimized configuration. While the approach presented here uses ventriculostomy as an illustrative platform, it can be extended to other medical applications that require localization inside the body.

  16. The analytical representation of viscoelastic material properties using optimization techniques

    NASA Technical Reports Server (NTRS)

    Hill, S. A.

    1993-01-01

This report presents a technique to model viscoelastic material properties with a function of the form of the Prony series. Generally, the method employed to determine the function constants requires assuming values for the exponential constants of the function and then resolving the remaining constants through linear least-squares techniques. The technique presented here allows all the constants to be determined analytically through optimization techniques. This technique is employed in a computer program named PRONY and makes use of a commercially available optimization tool developed by VMA Engineering, Inc. The PRONY program was used to compare the technique against previously determined models for solid rocket motor TP-H1148 propellant and V747-75 Viton fluoroelastomer. In both cases, the optimization technique generated functions that modeled the test data with at least an order of magnitude better correlation. The technique has demonstrated the capability to use small or large data sets, with uniformly or nonuniformly spaced data pairs. The reduction of experimental data to accurate mathematical models is a vital part of most scientific and engineering research, and this technique of regression through optimization can be applied to other mathematical models that are difficult to fit to experimental data through traditional regression techniques.
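The traditional method that the report contrasts against can be sketched directly: once the exponential time constants are assumed, the remaining Prony-series coefficients follow from linear least squares. The numpy sketch below uses synthetic data with known constants (an illustration, not the PRONY program itself); the report's contribution is to let an optimizer determine the time constants as well, rather than fixing them a priori.

```python
import numpy as np

def fit_prony_amplitudes(t, g, taus):
    """Traditional Prony-series fit: with relaxation times `taus` fixed,
    the coefficients of  g(t) ~= c0 + sum_i c_i * exp(-t / tau_i)
    follow from linear least squares."""
    A = np.column_stack([np.ones_like(t)] +
                        [np.exp(-t / tau) for tau in taus])
    coeffs, *_ = np.linalg.lstsq(A, g, rcond=None)
    return coeffs

# synthetic relaxation data generated from known constants
t = np.linspace(0, 10, 200)
g_true = 0.5 + 1.0 * np.exp(-t / 0.3) + 2.0 * np.exp(-t / 3.0)
coeffs = fit_prony_amplitudes(t, g_true, taus=[0.3, 3.0])
print(coeffs)   # recovers ~ [0.5, 1.0, 2.0]
```

When the assumed time constants are wrong, the linear fit degrades; wrapping the call above in an outer optimization over `taus` is the kind of full-constant determination the report describes.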

  17. Processing time tolerance-based ACO algorithm for solving job-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Luo, Yabo; Waden, Yongo P.

    2017-06-01

The Job Shop Scheduling Problem (JSSP) is an NP-hard problem whose uncertainty and complexity cannot be handled by linear methods. Current studies on the JSSP therefore concentrate mainly on improving heuristics for its optimization. However, many obstacles to efficient optimization remain, namely low efficiency and poor reliability, which can easily trap the optimization process in local optima. To address this, this paper studies an Ant Colony Optimization (ACO) algorithm combined with constraint-handling tactics. The work is subdivided into three parts: (1) analysis of processing time tolerance-based constraint features in the JSSP, performed with a constraint satisfaction model; (2) satisfaction of the constraints using consistency technology and a constraint spreading algorithm to improve the performance of the ACO algorithm, yielding a JSSP model based on the improved ACO algorithm; and (3) demonstration of the effectiveness of the proposed method, in terms of reliability and efficiency, through comparative experiments on benchmark problems. The results obtained by the proposed method are better, and the technique can be applied to optimizing the JSSP.
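The basic ACO machinery the paper builds on, randomized solution construction weighted by pheromone and a heuristic, followed by evaporation and reinforcement of the best solution, can be sketched on a toy routing problem. This is a minimal generic illustration; the paper's constraint satisfaction model and constraint spreading algorithm for the JSSP are not reproduced here:

```python
import numpy as np

def aco_tsp(dist, n_ants=20, iters=100, alpha=1.0, beta=2.0,
            rho=0.5, seed=0):
    """Minimal Ant Colony Optimization for a symmetric TSP: ants build
    tours guided by pheromone (tau**alpha) and inverse distance
    (eta**beta); pheromone evaporates and the best tour is reinforced."""
    n = len(dist)
    rng = np.random.default_rng(seed)
    tau = np.ones((n, n))                    # pheromone trails
    eta = 1.0 / (dist + np.eye(n))           # heuristic desirability
    best_len, best_tour = np.inf, None
    for _ in range(iters):
        for _ in range(n_ants):
            tour, unvisited = [0], set(range(1, n))
            while unvisited:
                i, cand = tour[-1], list(unvisited)
                w = (tau[i, cand] ** alpha) * (eta[i, cand] ** beta)
                nxt = rng.choice(cand, p=w / w.sum())
                tour.append(int(nxt))
                unvisited.remove(int(nxt))
            length = sum(dist[tour[k], tour[(k + 1) % n]]
                         for k in range(n))
            if length < best_len:
                best_len, best_tour = length, tour
        tau *= (1 - rho)                     # evaporation
        for k in range(n):                   # best-so-far reinforcement
            i, j = best_tour[k], best_tour[(k + 1) % n]
            tau[i, j] += 1.0 / best_len
            tau[j, i] += 1.0 / best_len
    return best_tour, best_len

# 8 cities on a unit circle: the optimal tour follows the circle
pts = np.array([[np.cos(a), np.sin(a)]
                for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)])
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
tour, length = aco_tsp(dist)
print(length)
```

For the JSSP, the same skeleton applies with tours replaced by operation sequences and the heuristic replaced by dispatching-rule information, plus the constraint handling the paper introduces.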

  18. 3D gravity inversion and uncertainty assessment of basement relief via Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Pallero, J. L. G.; Fernández-Martínez, J. L.; Bonvalot, S.; Fudym, O.

    2017-04-01

Nonlinear gravity inversion in sedimentary basins is a classical problem in applied geophysics. Although a 2D approximation is widely used, 3D models have also been proposed to better take into account the basin geometry. A common nonlinear approach to this 3D problem consists in modeling the basin as a set of right rectangular prisms with prescribed density contrast, whose depths are the unknowns. The problem is then iteratively solved via local optimization techniques from an initial model computed using some simplifications or estimated from prior geophysical models. Nevertheless, this kind of approach is highly dependent on the prior information that is used, and lacks a correct solution appraisal (nonlinear uncertainty analysis). In this paper, we use the family of global Particle Swarm Optimization (PSO) optimizers for the 3D gravity inversion and model appraisal of the solution adopted for basement relief estimation in sedimentary basins. Synthetic and real cases are illustrated, showing that robust results are obtained. PSO therefore seems to be a very good alternative for 3D gravity inversion and uncertainty assessment of basement relief when used in a sampling-while-optimizing approach. In that way, important geological questions can be answered probabilistically in order to perform risk assessment of the decisions that are made.
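A generic PSO driver of the kind used in such inversions can be sketched in numpy. This is illustrative only: the actual misfit function, built from the gravity effect of right rectangular prisms, is replaced here by the standard multimodal Rastrigin test function, and all parameter values are conventional defaults rather than the paper's settings.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, lb=-5.0, ub=5.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal Particle Swarm Optimization: each particle tracks its own
    best position (cognitive pull c1) and the swarm's best (social pull
    c2), with velocity inertia w."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_particles, dim))
    V = np.zeros_like(X)
    P = X.copy()                                  # personal bests
    pf = np.array([f(x) for x in X])
    g = P[np.argmin(pf)].copy()                   # global best
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
        X = np.clip(X + V, lb, ub)
        fx = np.array([f(x) for x in X])
        better = fx < pf
        P[better], pf[better] = X[better], fx[better]
        g = P[np.argmin(pf)].copy()
    return g, float(pf.min())

rastrigin = lambda x: float(10 * len(x)
                            + np.sum(x * x - 10 * np.cos(2 * np.pi * x)))
best, val = pso(rastrigin, dim=2)
print(val)
```

Because the swarm keeps every particle's history, the visited population can also be reused for the sampling-while-optimizing uncertainty appraisal the abstract describes.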

  19. Multidisciplinary design optimization using multiobjective formulation techniques

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Pagaldipti, Narayanan S.

    1995-01-01

This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. Accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis techniques are then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high-speed aircraft. Results obtained show significant improvements in the aircraft aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.

  20. Optimization of freeform surfaces using intelligent deformation techniques for LED applications

    NASA Astrophysics Data System (ADS)

    Isaac, Annie Shalom; Neumann, Cornelius

    2018-04-01

For many years, optical designers have shown great interest in designing efficient optimization algorithms that bring significant improvement to an initial design. However, such optimization is limited by the large number of parameters of Non-Uniform Rational B-Spline (NURBS) surfaces. This limitation was overcome by an indirect technique known as optimization using freeform deformation (FFD). In this approach, the optical surface is placed inside a cubical grid, and the vertices of this grid are modified, deforming the underlying optical surface during the optimization. One of the challenges of this technique is the selection of appropriate vertices of the cubical grid, because these vertices share no relationship with the optical performance. When irrelevant vertices are selected, the computational complexity increases. Moreover, the surfaces created are not always feasible to manufacture, a problem faced by any optimization technique that creates freeform surfaces. This research addresses these two important issues and provides feasible design techniques to solve them. Finally, the proposed techniques are validated on two different illumination examples: a street-lighting lens and a stop lamp for automobiles.

  1. Pneumothorax detection in chest radiographs using local and global texture signatures

    NASA Astrophysics Data System (ADS)

    Geva, Ofer; Zimmerman-Moreno, Gali; Lieberman, Sivan; Konen, Eli; Greenspan, Hayit

    2015-03-01

A novel framework for automatic detection of pneumothorax abnormality in chest radiographs is presented. The suggested method is based on a texture analysis approach combined with supervised learning techniques. The proposed framework consists of two main steps: first, a texture analysis process is performed to detect local abnormalities. Labeled image patches are extracted during the texture analysis, after which local analysis values are incorporated into a novel global image representation. The global representation is used for training and detection of the abnormality at the image level. The global representation is designed around the distinctive shape of the lung, taking into account the characteristics of typical pneumothorax abnormalities. A supervised learning process was performed on both the local and global data, leading to a trained detection system. The system was tested on a dataset of 108 upright chest radiographs. Several state-of-the-art texture feature sets were evaluated (Local Binary Patterns, Maximum Response filters). The optimal configuration yielded a sensitivity of 81% with a specificity of 87%. The results of the evaluation are promising, establishing the current framework as a basis for further improvements and extensions.
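To illustrate the kind of local texture signature involved, here is a minimal numpy implementation of the basic 8-neighbour Local Binary Pattern operator (a sketch of standard LBP, not the paper's exact feature pipeline): each interior pixel gets an 8-bit code recording which neighbours are at least as bright as it, and a patch's texture signature is the histogram of those codes.

```python
import numpy as np

def lbp_image(img):
    """8-neighbour Local Binary Pattern codes for the interior pixels:
    each neighbour >= centre contributes one bit of the 8-bit code."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

# the texture signature of a patch is the histogram of its LBP codes
rng = np.random.default_rng(0)
patch = rng.integers(0, 256, (32, 32))
hist = np.bincount(lbp_image(patch).ravel(), minlength=256)
print(hist.sum())   # 900 codes: one per 30x30 interior pixel
```

Histograms like `hist`, computed per patch, are what a supervised classifier would consume in a local-analysis step of this kind.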

  2. Geometry-based ensembles: toward a structural characterization of the classification boundary.

    PubMed

    Pujol, Oriol; Masip, David

    2009-06-01

    This paper introduces a novel binary discriminative learning technique based on the approximation of the nonlinear decision boundary by a piecewise linear smooth additive model. The decision border is geometrically defined by means of the characterizing boundary points-points that belong to the optimal boundary under a certain notion of robustness. Based on these points, a set of locally robust linear classifiers is defined and assembled by means of a Tikhonov regularized optimization procedure in an additive model to create a final lambda-smooth decision rule. As a result, a very simple and robust classifier with a strong geometrical meaning and nonlinear behavior is obtained. The simplicity of the method allows its extension to cope with some of today's machine learning challenges, such as online learning, large-scale learning or parallelization, with linear computational complexity. We validate our approach on the UCI database, comparing with several state-of-the-art classification techniques. Finally, we apply our technique in online and large-scale scenarios and in six real-life computer vision and pattern recognition problems: gender recognition based on face images, intravascular ultrasound tissue classification, speed traffic sign detection, Chagas' disease myocardial damage severity detection, old musical scores clef classification, and action recognition using 3D accelerometer data from a wearable device. The results are promising and this paper opens a line of research that deserves further attention.

  3. Path optimization with limited sensing ability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Sung Ha, E-mail: kang@math.gatech.edu; Kim, Seong Jun, E-mail: skim396@math.gatech.edu; Zhou, Haomin, E-mail: hmzhou@math.gatech.edu

    2015-10-15

We propose a computational strategy to find the optimal path for a mobile sensor with limited coverage to traverse a cluttered region. The goal is to find one of the shortest feasible paths to achieve the complete scan of the environment. We pose the problem in the level set framework, and first consider a related question of placing multiple stationary sensors to obtain the full surveillance of the environment. By connecting the stationary locations using the nearest neighbor strategy, we form the initial guess for the path planning problem of the mobile sensor. Then the path is optimized by reducing its length, via solving a system of ordinary differential equations (ODEs), while maintaining the complete scan of the environment. Furthermore, we use intermittent diffusion, which converts the ODEs into stochastic differential equations (SDEs), to find an optimal path whose length is globally minimal. To improve the computation efficiency, we introduce two techniques, one to remove redundant connecting points to reduce the dimension of the system, and the other to deal with the entangled path so the solution can escape the local traps. Numerical examples are shown to illustrate the effectiveness of the proposed method.

  4. Living colors in the gray mold pathogen Botrytis cinerea: codon-optimized genes encoding green fluorescent protein and mCherry, which exhibit bright fluorescence.

    PubMed

    Leroch, Michaela; Mernke, Dennis; Koppenhoefer, Dieter; Schneider, Prisca; Mosbach, Andreas; Doehlemann, Gunther; Hahn, Matthias

    2011-05-01

    The green fluorescent protein (GFP) and its variants have been widely used in modern biology as reporters that allow a variety of live-cell imaging techniques. So far, GFP has rarely been used in the gray mold fungus Botrytis cinerea because of low fluorescence intensity. The codon usage of B. cinerea genes strongly deviates from that of commonly used GFP-encoding genes and reveals a lower GC content than other fungi. In this study, we report the development and use of a codon-optimized version of the B. cinerea enhanced GFP (eGFP)-encoding gene (Bcgfp) for improved expression in B. cinerea. Both the codon optimization and, to a smaller extent, the insertion of an intron resulted in higher mRNA levels and increased fluorescence. Bcgfp was used for localization of nuclei in germinating spores and for visualizing host penetration. We further demonstrate the use of promoter-Bcgfp fusions for quantitative evaluation of various toxic compounds as inducers of the atrB gene encoding an ABC-type drug efflux transporter of B. cinerea. In addition, a codon-optimized mCherry-encoding gene was constructed which yielded bright red fluorescence in B. cinerea.

  5. Unbiased, scalable sampling of protein loop conformations from probabilistic priors.

    PubMed

    Zhang, Yajia; Hauser, Kris

    2013-01-01

Protein loops are flexible structures that are intimately tied to function, but understanding loop motion and generating loop conformation ensembles remain significant computational challenges. Discrete search techniques scale poorly to large loops, optimization and molecular dynamics techniques are prone to local minima, and inverse kinematics techniques can only incorporate structural preferences in an ad hoc fashion. This paper presents Sub-Loop Inverse Kinematics Monte Carlo (SLIKMC), a new Markov chain Monte Carlo algorithm for generating conformations of closed loops according to experimentally available, heterogeneous structural preferences. Our simulation experiments demonstrate that the method computes high-scoring conformations of large loops (>10 residues) orders of magnitude faster than standard Monte Carlo and discrete search techniques. Two new developments contribute to the scalability of the new method. First, structural preferences are specified via a probabilistic graphical model (PGM) that links conformation variables, spatial variables (e.g., atom positions), constraints and prior information in a unified framework. The method uses a sparse PGM that exploits locality of interactions between atoms and residues. Second, a novel method for sampling sub-loops is developed to generate statistically unbiased samples of probability densities restricted by loop-closure constraints. Numerical experiments confirm that SLIKMC generates conformation ensembles that are statistically consistent with specified structural preferences. Protein conformations with 100+ residues are sampled on standard PC hardware in seconds. Application to proteins involved in ion binding demonstrates its potential as a tool for loop ensemble generation and missing structure completion.

  6. Unbiased, scalable sampling of protein loop conformations from probabilistic priors

    PubMed Central

    2013-01-01

Background Protein loops are flexible structures that are intimately tied to function, but understanding loop motion and generating loop conformation ensembles remain significant computational challenges. Discrete search techniques scale poorly to large loops, optimization and molecular dynamics techniques are prone to local minima, and inverse kinematics techniques can only incorporate structural preferences in an ad hoc fashion. This paper presents Sub-Loop Inverse Kinematics Monte Carlo (SLIKMC), a new Markov chain Monte Carlo algorithm for generating conformations of closed loops according to experimentally available, heterogeneous structural preferences. Results Our simulation experiments demonstrate that the method computes high-scoring conformations of large loops (>10 residues) orders of magnitude faster than standard Monte Carlo and discrete search techniques. Two new developments contribute to the scalability of the new method. First, structural preferences are specified via a probabilistic graphical model (PGM) that links conformation variables, spatial variables (e.g., atom positions), constraints and prior information in a unified framework. The method uses a sparse PGM that exploits locality of interactions between atoms and residues. Second, a novel method for sampling sub-loops is developed to generate statistically unbiased samples of probability densities restricted by loop-closure constraints. Conclusion Numerical experiments confirm that SLIKMC generates conformation ensembles that are statistically consistent with specified structural preferences. Protein conformations with 100+ residues are sampled on standard PC hardware in seconds. Application to proteins involved in ion binding demonstrates its potential as a tool for loop ensemble generation and missing structure completion. PMID:24565175

  7. The management of lower gastrointestinal bleeding.

    PubMed

    Marion, Y; Lebreton, G; Le Pennec, V; Hourna, E; Viennot, S; Alves, A

    2014-06-01

    Lower gastrointestinal (LGI) bleeding is generally less severe than upper gastrointestinal (UGI) bleeding, with spontaneous cessation of bleeding in 80% of cases and a mortality of 2-4%. However, unlike UGI bleeding, there is no consensual agreement about management. Once the patient has been stabilized, the main objective and greatest difficulty is to identify the location of bleeding in order to provide specific appropriate treatment. While upper endoscopy and colonoscopy remain the essential first-line examinations, the development and availability of angiography have made it an important imaging modality in cases of active bleeding; it allows diagnostic localization of the bleeding and guides subsequent therapy, whether therapeutic embolization, interventional colonoscopy or, if other techniques fail or are unavailable, surgery directed at the precise site of bleeding. Furthermore, newly developed endoscopic techniques, particularly video capsule enteroscopy, now allow minimally invasive exploration of the small intestine; if this is positive, it will guide subsequent assisted enteroscopy or surgery. Other small bowel imaging techniques include enteroclysis by CT or magnetic resonance imaging. At the present time, exploratory surgery is no longer a first-line approach. In view of the lesser gravity of LGI bleeding, it is most reasonable to simply stabilize the patient initially for subsequent transfer to a specialized center, if minimally invasive techniques are not available at the local hospital. In all cases, the complexity and diversity of LGI bleeding require a multidisciplinary collaboration involving the gastroenterologist, radiologist, intensivist and surgeon to optimize diagnosis and treatment of the patient. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  8. Selective RF pulses in NMR and their effect on coupled and uncoupled spin systems

    NASA Astrophysics Data System (ADS)

    Slotboom, J.

    1993-10-01

    This thesis describes various aspects of the usage of shaped RF-pulses for volume selection and spectral editing. Contents: Introduction--The History of Magnetic Resonance in a Nutshell, and The Usage of RF Pulses in Contemporary MRS and MRI; Theoretical and Practical Aspects of Localized NMR Spectroscopy; The Effects of RF Pulse Shape Discretization on the Spatially Selective Performance; Design of Frequency-Selective RF Pulses by Optimizing a Small Number of Pulse Parameters; A Single-Shot Localization Pulse Sequence Suited for Coils with Inhomogeneous RF Fields Using Adiabatic Slice-Selective RF Pulses; The Bloch Equations for an AB System and the Design of Spin State Selective RF Pulses for Coupled Spin Systems; The Effects of Frequency Selective RF Pulses on J Coupled Spin-1/2 Systems; A Quantitative (1)H MRS in vivo Study of the Effects of L-Ornithine-L-Aspartate on the Development of Mild Encephalopathy Using a Single Shot Localization Technique Based on SAR Reduced Adiabatic 2(pi) Pulses.

  9. Single shot trajectory design for region-specific imaging using linear and nonlinear magnetic encoding fields.

    PubMed

    Layton, Kelvin J; Gallichan, Daniel; Testud, Frederik; Cocosco, Chris A; Welz, Anna M; Barmet, Christoph; Pruessmann, Klaas P; Hennig, Jürgen; Zaitsev, Maxim

    2013-09-01

    It has recently been demonstrated that nonlinear encoding fields result in a spatially varying resolution. This work develops an automated procedure to design single-shot trajectories that create a local resolution improvement in a region of interest. The technique is based on the design of optimized local k-space trajectories and can be applied to arbitrary hardware configurations that employ any number of linear and nonlinear encoding fields. The trajectories designed in this work are tested with the currently available hardware setup consisting of three standard linear gradients and two quadrupolar encoding fields generated from a custom-built gradient insert. A field camera is used to measure the actual encoding trajectories up to third-order terms, enabling accurate reconstructions of these demanding single-shot trajectories, although the eddy current and concomitant field terms of the gradient insert have not been completely characterized. The local resolution improvement is demonstrated in phantom and in vivo experiments. Copyright © 2012 Wiley Periodicals, Inc.

  10. Video Completion in Digital Stabilization Task Using Pseudo-Panoramic Technique

    NASA Astrophysics Data System (ADS)

    Favorskaya, M. N.; Buryachenko, V. V.; Zotin, A. G.; Pakhirka, A. I.

    2017-05-01

    Video completion is a necessary stage after stabilization of a non-stationary video sequence if the stabilized frames are to retain the resolution of the original frames. The cropped stabilized frames usually lose 10-20% of their area, which degrades the visibility of the reconstructed scenes. The field of view may also be extended as a result of unwanted pan-tilt-zoom camera movement. Our approach prepares a pseudo-panoramic key frame during the stabilization stage as a pre-processing step for the subsequent inpainting. It is based on a multi-layered representation of each frame that separates the background from differently moving objects. The proposed algorithm involves four steps: background completion, local motion inpainting, local warping, and seamless blending. Our experiments show that seamless stitching is needed more often than local warping. Therefore, seamless blending was investigated in detail across four main categories: feathering-based, pyramid-based, gradient-based, and optimal seam-based blending.
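
    Of the blending families listed, feathering is the simplest to illustrate. The sketch below blends two horizontally overlapping image strips with a linear weight ramp across the overlap; the strip contents and overlap width are hypothetical, not data from the paper.

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Blend two horizontally adjacent image strips whose last/first
    `overlap` columns cover the same scene region (feathering-based)."""
    h, wl = left.shape
    wr = right.shape[1]
    out = np.zeros((h, wl + wr - overlap), dtype=float)
    out[:, :wl - overlap] = left[:, :wl - overlap]
    out[:, wl:] = right[:, overlap:]
    # Linear ramp: weight slides from the left strip to the right strip.
    alpha = np.linspace(1.0, 0.0, overlap)
    out[:, wl - overlap:wl] = alpha * left[:, wl - overlap:] + (1 - alpha) * right[:, :overlap]
    return out

left = np.full((4, 6), 10.0)   # darker strip
right = np.full((4, 6), 20.0)  # brighter strip
pano = feather_blend(left, right, overlap=2)
```

    The ramp guarantees a monotone transition across the seam instead of a visible intensity step.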

  11. SU-E-T-255: Optimized Supine Craniospinal Irradiation with Image-Guided and Field Matched Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Z; Holupka, E; Naughton, J

    2014-06-01

    Purpose: Conventional craniospinal irradiation (CSI) challenges include dose inhomogeneity at field junctions and positioning uncertainty due to field divergence, particularly for the two spinal fields. Here we outline a new supine CSI technique to address these difficulties. Methods: The patient was simulated in the supine position. The cranial fields had their isocenter at the C2/C3 vertebrae and were matched with the 1st spinal field. Their inferior border was chosen to avoid the shoulder, as well as the chin in the 1st spinal field. Their collimator angles depended on the asymmetric jaw setting of the 1st spinal field. With couch rotation, the spinal field gantry angles were adjusted to ensure that the inferior border of the 1st and the superior border of the 2nd spinal fields were perpendicular to the table top. The radio-opaque wire position for the spinal junction was located initially by the light field from an anterior setup beam, and was finalized by portal imaging of the 1st spinal field. With reference to the spinal junction wire, the fields were matched by positioning the isocenter of the 2nd spinal field. A formula was derived to optimize supine CSI treatment planning by utilizing the relationship among the Y-jaw settings, the spinal field gantry angles, the cranial field collimator angles, and the spinal field isocenter locations. The plan was delivered with portal imaging alignment for both the cranial and spinal junctions. Results: Utilizing this technique with matched beams, together with conventional techniques such as feathering and forward planning, a homogeneous dose distribution was achieved throughout the entire CSI treatment volume including the spinal junction. Placing the spinal junction wire so that it is visualized in both spinal portals allows for precise determination and verification of the appropriate match line of the spinal fields.
Conclusion: This optimized supine CSI technique achieved homogeneous dose distributions and accurate patient localization with image-guided and matched beams.

  12. A Survey on Optimal Signal Processing Techniques Applied to Improve the Performance of Mechanical Sensors in Automotive Applications

    PubMed Central

    Hernandez, Wilmar

    2007-01-01

    In this paper, a survey of recent applications of optimal signal processing techniques to improve the performance of mechanical sensors is presented. A comparison between classical filters and optimal filters for automotive sensors is made, and the current state of the art of applying robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today's cars need is presented through several experimental results, which show that the fusion of intelligent sensors and optimal signal processing techniques is the clear way to go. However, the switch between the traditional methods of designing automotive sensors and the new ones cannot be made overnight, because there are open research issues that have to be solved. This paper draws attention to one of these open research issues and tries to arouse researchers' interest in the fusion of intelligent sensors and optimal signal processing techniques.
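
    As one concrete instance of the optimal filters surveyed, a scalar Kalman filter — the optimal linear estimator under a linear-Gaussian model — can be sketched in a few lines. The sensor model (an accelerometer at rest) and all noise parameters below are illustrative assumptions, not values from the paper.

```python
import random

random.seed(1)

def kalman_1d(measurements, q=1e-4, r=0.5 ** 2):
    """Scalar Kalman filter for a nearly constant signal.
    q: process-noise variance, r: measurement-noise variance."""
    x, p = 0.0, 1.0          # state estimate and its variance
    estimates = []
    for z in measurements:
        p += q               # predict: variance grows by process noise
        k = p / (p + r)      # optimal (Kalman) gain
        x += k * (z - x)     # update with the innovation
        p *= (1 - k)
        estimates.append(x)
    return estimates

true_value = 9.81  # e.g. an accelerometer at rest measuring gravity
zs = [true_value + random.gauss(0, 0.5) for _ in range(300)]
est = kalman_1d(zs)
```

    Unlike a fixed-window average, the gain adapts automatically: large while the estimate is uncertain, small once it has converged.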

  13. An approach to unbiased subsample interpolation for motion tracking.

    PubMed

    McCormick, Matthew M; Varghese, Tomy

    2013-04-01

    Accurate subsample displacement estimation is necessary for ultrasound elastography because of the small deformations that occur and the subsequent application of a derivative operation on local displacements. Many of the commonly used subsample estimation techniques introduce significant bias errors. This article addresses a reduced-bias approach to subsample displacement estimation that consists of a two-dimensional windowed-sinc interpolation with numerical optimization. It is shown that a Welch or Lanczos window with a Nelder-Mead simplex or regular-step gradient-descent optimization is well suited for this purpose. Little improvement results from a sinc window radius greater than four data samples. The strain signal-to-noise ratio (SNR) obtained in a uniformly elastic phantom is compared with other parabolic and cosine interpolation methods; it is found that the strain SNR is improved over parabolic interpolation from 11.0 to 13.6 in the axial direction and 0.7 to 1.1 in the lateral direction for an applied 1% axial deformation. The improvement was most significant for small strains and displacement tracking in the lateral direction. This approach does not rely on special properties of the image or similarity function, which is demonstrated by its effectiveness with the application of a previously described regularization technique.
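
    A minimal sketch of the approach described — Lanczos-windowed sinc interpolation of a sampled similarity curve, followed by a numerical search for the subsample peak — might look as follows. A golden-section search stands in for the Nelder-Mead or gradient-descent optimizers used in the paper, and the test curve is synthetic.

```python
import math

def lanczos(x, a=4):
    # Lanczos kernel of radius a: sinc(x) * sinc(x / a) on |x| < a.
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def interp(samples, x, a=4):
    # Windowed-sinc interpolation of uniformly sampled data at real x.
    i0 = int(math.floor(x))
    return sum(samples[i] * lanczos(x - i)
               for i in range(max(0, i0 - a + 1), min(len(samples), i0 + a + 1)))

def subsample_peak(samples, lo, hi, tol=1e-6):
    # Golden-section search for the maximum of the interpolated curve.
    g = (math.sqrt(5) - 1) / 2
    while hi - lo > tol:
        c, d = hi - g * (hi - lo), lo + g * (hi - lo)
        if interp(samples, c) < interp(samples, d):
            lo = c
        else:
            hi = d
    return (lo + hi) / 2

# A correlation-like curve sampled on integers; true peak at x = 5.3.
data = [math.exp(-0.5 * (i - 5.3) ** 2) for i in range(11)]
peak = subsample_peak(data, 4.0, 7.0)
```

    Because the interpolant is evaluated continuously, the peak location is not quantized to the sample grid, which is exactly what reduces the bias relative to parabolic fits.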

  14. Scar modification. Techniques for revision and camouflage.

    PubMed

    Horswell, B B

    1998-09-01

    The surgery and management of scars is a protracted and staged process that includes preparation of the skin through hygienic measures, scar softening (if indicated) with steroids, massage and pressure dressings, skilled execution of the surgical plan, and thorough postoperative wound care. This process generally covers a 1-year period for the various stages mentioned. Many general host and local skin factors will directly affect the final revision result. The two most important indirect factors that the surgeon must endeavor to control are optimal patient preparation and cutaneous health, and patient compliance with, and an ability to carry out, those wound care measures that the surgeon prescribes. Keloid and burn contracture scars represent two entities that are complicated and challenging to treat owing to their abnormal morphophysiologic features. Management of these scars is prolonged, and the patient must understand that the ultimate result will usually be a compromise. New grafting techniques, such as cultured autodermal grafts, offer improved initial management of burn wounds that may subsequently optimize scar revision in these patients. Keloids, and to a lesser extent hypertrophic scars, require steroid injections, pressure treatment, careful surgery, and protracted wound support and pressure treatment (exceeding 6 months) after surgery.

  15. Experimental study on the healing process following laser welding of the cornea.

    PubMed

    Rossi, Francesca; Pini, Roberto; Menabuoni, Luca; Mencucci, Rita; Menchini, Ugo; Ambrosini, Stefano; Vannelli, Gabriella

    2005-01-01

    An experimental study evaluating the application of laser welding of the cornea and the subsequent healing process is presented. The welding of corneal wounds is achieved after staining the cut walls with a solution of the chromophore indocyanine green, and irradiating them with a diode laser (810 nm) operating at low power (60 to 90 mW). The result is a localized heating of the cut, inducing controlled welding of the stromal collagen. In order to optimize this technique and to study the healing process, experimental tests, simulating cataract surgery and penetrating keratoplasty, were performed on rabbits: conventional and laser-induced suturing of corneal wounds were thus compared. A follow-up study 7 to 90 days after surgery was carried out by means of objective and histological examinations, in order to optimize the welding technique and to investigate the subsequent healing process. The analyses of the laser-welded corneas evidenced a faster and more effective restoration of the architecture of the stroma. No thermal damage of the welded stroma was detected, nor were there foreign body reactions or other inflammatory processes. Copyright 2005 Society of Photo-Optical Instrumentation Engineers.

  16. The use of a molecular technique for the detection of porcine ingredients in the Malaysian food market.

    PubMed

    Farouk, Abd-ElAziem; Batcha, Mohamed F; Greiner, Ralf; Salleh, Hamzah M; Salleh, Mohamad R; Sirajudin, Abdur R

    2006-09-01

    To develop a molecular technique that is fast and reliable in detecting porcine contamination or ingredients in foods. The method applied involved DNA amplification using polymerase chain reaction (PCR) technology. Thus, the sequence of a certain gene found uniquely in pork was identified and used to design specific primers for the PCR. The extraction of DNA was optimized with respect to the PCR, and detection limits were established. The optimized method was then used to identify pork in food products obtained from various local hypermarkets. The latest results were confirmed in triplicate on 20 April 2006 at the Molecular Biology Laboratory, International Islamic University, Malaysia. The method was shown to be robust and reliable. Out of 30 food samples not expected to contain pork material, 3 were shown to be contaminated with pork material: 2 chocolates and one chicken nugget. We observed that 2 food products labeled as halal tested positive for porcine ingredients, while another that did not carry any halal logo but originated from outside Malaysia and is exported to many Middle Eastern nations also tested positive.

  17. How to mathematically optimize drug regimens using optimal control.

    PubMed

    Moore, Helen

    2018-02-01

    This article gives an overview of a technique called optimal control, which is used to optimize real-world quantities represented by mathematical models. I include background information about the historical development of the technique and applications in a variety of fields. The main focus here is the application to diseases and therapies, particularly the optimization of combination therapies, and I highlight several such examples. I also describe the basic theory of optimal control, and illustrate each of the steps with an example that optimizes the doses in a combination regimen for leukemia. References are provided for more complex cases. The article is aimed at modelers working in drug development, who have not used optimal control previously. My goal is to make this technique more accessible in the biopharma community.
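
    The flavor of the dose-optimization examples discussed can be conveyed with a deliberately simplified model: a constant dose u applied to exponential growth dN/dt = (r - a*u)N over a horizon T, with a quadratic toxicity penalty on the dose. The model, all parameters, and the brute-force scan (standing in for the Pontryagin-based machinery used for time-varying controls) are illustrative assumptions, not the leukemia model from the article.

```python
import math

# Toy growth model: dN/dt = (r - a*u) * N with a constant dose u.
r, a, N0, T, w = 0.3, 0.5, 1.0, 10.0, 0.4

def cost(u):
    # Terminal tumor burden plus a quadratic toxicity penalty on the dose.
    return N0 * math.exp((r - a * u) * T) + w * u ** 2

# Direct scan over admissible constant doses u in [0, 2].
doses = [i / 1000 for i in range(0, 2001)]
u_best = min(doses, key=cost)
```

    The optimum is interior: zero dose lets the tumor grow, while the maximum dose pays more in toxicity than it gains in tumor reduction.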

  18. Optimal Design of Grid-Stiffened Composite Panels Using Global and Local Buckling Analysis

    NASA Technical Reports Server (NTRS)

    Ambur, Damodar R.; Jaunky, Navin; Knight, Norman F., Jr.

    1996-01-01

    A design strategy for optimal design of composite grid-stiffened panels subjected to global and local buckling constraints is developed using a discrete optimizer. An improved smeared stiffener theory is used for the global buckling analysis. Local buckling of skin segments is assessed using a Rayleigh-Ritz method that accounts for material anisotropy and transverse shear flexibility. The local buckling of stiffener segments is also assessed. Design variables are the axial and transverse stiffener spacing, stiffener height and thickness, skin laminate, and stiffening configuration. The design optimization process is adapted to identify the lightest-weight stiffening configuration and pattern for grid stiffened composite panels given the overall panel dimensions, design in-plane loads, material properties, and boundary conditions of the grid-stiffened panel.

  19. Three-dimensional unstructured grid generation via incremental insertion and local optimization

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Wiltberger, N. Lyn; Gandhi, Amar S.

    1992-01-01

    Algorithms for the generation of 3D unstructured surface and volume grids are discussed. These algorithms are based on incremental insertion and local optimization. The present algorithms are very general and permit local grid optimization based on various measures of grid quality. This is very important; unlike the 2D Delaunay triangulation, the 3D Delaunay triangulation appears not to have a lexicographic characterization of angularity. (The Delaunay triangulation is known to minimize the maximum containment sphere, but unfortunately this is not true lexicographically.) Consequently, Delaunay triangulations in three-space can result in poorly shaped tetrahedral elements. Using the present algorithms, 3D meshes can be constructed which optimize a certain angle measure, albeit locally. We also discuss the combinatorial aspects of the algorithm as well as implementation details.
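
    The local optimization step is easiest to see in 2D, where an edge flip driven by the max-min-angle (Delaunay) criterion is well defined; as the abstract notes, no such clean angular characterization carries over to 3D. The quad geometry below is a made-up example.

```python
import math

def min_angle(tri):
    # Smallest interior angle of a triangle given as three (x, y) points.
    angles = []
    for i in range(3):
        a, b, c = tri[i], tri[(i + 1) % 3], tri[(i + 2) % 3]
        v1 = (b[0] - a[0], b[1] - a[1])
        v2 = (c[0] - a[0], c[1] - a[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        angles.append(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))
    return min(angles)

def flip_if_better(p, q, r, s):
    """Convex quad p-q-r-s triangulated by diagonal p-r; flip to diagonal
    q-s if that raises the smallest angle (the local Delaunay criterion)."""
    current = min(min_angle((p, q, r)), min_angle((p, r, s)))
    flipped = min(min_angle((q, r, s)), min_angle((q, s, p)))
    return ("flip", flipped) if flipped > current else ("keep", current)

# A flat quad where diagonal p-r creates a sliver triangle: it gets flipped.
decision, quality = flip_if_better((0, 0), (1, -0.2), (2, 0), (1, 1))
```

    Repeatedly applying this test to every interior edge until no flip improves the mesh is exactly the local optimization loop; in 2D it recovers the Delaunay triangulation.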

  20. Optimization techniques applied to passive measures for in-orbit spacecraft survivability

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.; Price, D. Marvin

    1987-01-01

    Optimization Techniques Applied to Passive Measures for In-Orbit Spacecraft Survivability is a six-month study designed to evaluate the effectiveness of the geometric programming (GP) optimization technique in determining the optimal design of a meteoroid and space debris protection system for the Space Station Core Module configuration. Geometric programming was found to be superior to other methods in that it provided maximum protection from impact problems at the lowest weight and cost.
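
    Geometric programming rests on a log change of variables that turns posynomials into convex functions, so a local optimum is guaranteed to be global. The unconstrained toy posynomial below (not the Space Station shielding model) illustrates the substitution; plain gradient descent then finds the optimum, which is known in closed form for this example.

```python
import math

# Posynomial objective in the original variables t1, t2 > 0:
#   f(t) = t1*t2 + 2/t1 + 2/t2
# Substituting t_i = exp(y_i) (the standard GP trick) makes it convex in y.

def f_log(y1, y2):
    return math.exp(y1 + y2) + 2 * math.exp(-y1) + 2 * math.exp(-y2)

def grad(y1, y2):
    e12 = math.exp(y1 + y2)
    return (e12 - 2 * math.exp(-y1), e12 - 2 * math.exp(-y2))

y1 = y2 = 0.0
for _ in range(5000):
    g1, g2 = grad(y1, y2)
    y1 -= 0.05 * g1
    y2 -= 0.05 * g2

t1, t2 = math.exp(y1), math.exp(y2)  # back to the original design variables
```

    By symmetry the optimum is t1 = t2 = 2^(1/3) with objective value 3 * 2^(2/3); a real GP adds posynomial constraints, which stay convex under the same substitution.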

  1. Fitting Prony Series To Data On Viscoelastic Materials

    NASA Technical Reports Server (NTRS)

    Hill, S. A.

    1995-01-01

    An improved method of fitting Prony series to data on viscoelastic materials involves the use of least-squares optimization techniques. The method based on optimization techniques yields closer correlation with the data than the traditional method. It involves no assumptions regarding the gamma'(sub i) terms and higher-order terms, and provides for as many Prony terms as needed to represent higher-order subtleties in the data. The curve-fitting problem is treated as a design-optimization problem and solved by use of partially-constrained-optimization techniques.
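
    One common way to set up such a least-squares Prony fit is to fix the relaxation times on a logarithmic grid, which makes the remaining coefficients enter linearly and solvable in a single least-squares step. The synthetic relaxation data and grid below are illustrative assumptions, not the report's exact formulation.

```python
import numpy as np

# Synthetic relaxation-modulus data from a known two-term Prony series:
#   G(t) = G_inf + g1*exp(-t/tau1) + g2*exp(-t/tau2)
t = np.logspace(-2, 2, 80)
G = 1.0 + 0.5 * np.exp(-t / 0.1) + 0.3 * np.exp(-t / 10.0)

# Fix relaxation times on a log-spaced grid; the Prony coefficients then
# enter linearly, so ordinary least squares recovers the fit directly.
taus = np.logspace(-2, 2, 9)
A = np.column_stack([np.ones_like(t)] + [np.exp(-t / tau) for tau in taus])
coef, *_ = np.linalg.lstsq(A, G, rcond=None)
G_fit = A @ coef
```

    Adding more grid terms never hurts the residual, mirroring the report's point that as many Prony terms as needed can be used to capture higher-order subtleties.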

  2. Advanced Intelligent System Application to Load Forecasting and Control for Hybrid Electric Bus

    NASA Technical Reports Server (NTRS)

    Momoh, James; Chattopadhyay, Deb; Elfayoumy, Mahmoud

    1996-01-01

    The primary motivation for this research emanates from providing a decision support system to the electric bus operators in the municipal and urban localities which will guide the operators to maintain an optimal compromise among the noise level, pollution level, fuel usage etc. This study is backed up by our previous studies on study of battery characteristics, permanent magnet DC motor studies and electric traction motor size studies completed in the first year. The operator of the Hybrid Electric Car must determine optimal power management schedule to meet a given load demand for different weather and road conditions. The decision support system for the bus operator comprises three sub-tasks viz. forecast of the electrical load for the route to be traversed divided into specified time periods (few minutes); deriving an optimal 'plan' or 'preschedule' based on the load forecast for the entire time-horizon (i.e., for all time periods) ahead of time; and finally employing corrective control action to monitor and modify the optimal plan in real-time. A fully connected artificial neural network (ANN) model is developed for forecasting the kW requirement for hybrid electric bus based on inputs like climatic conditions, passenger load, road inclination, etc. The ANN model is trained using back-propagation algorithm employing improved optimization techniques like projected Lagrangian technique. The pre-scheduler is based on a Goal-Programming (GP) optimization model with noise, pollution and fuel usage as the three objectives. GP has the capability of analyzing the trade-off among the conflicting objectives and arriving at the optimal activity levels, e.g., throttle settings. The corrective control action or the third sub-task is formulated as an optimal control model with inputs from the real-time data base as well as the GP model to minimize the error (or deviation) from the optimal plan. 
These three activities are linked, with the ANN forecaster providing the output to the GP model, which in turn produces the pre-schedule for the optimal control model. Some preliminary results based on a hypothetical test case will be presented for the load forecasting module. The computer codes for the three modules will be made available for adoption by bus operating agencies. Sample results will be provided using these models. The software will be a useful tool for supporting the control systems for the Electric Bus project of NASA.

  3. Simulation to Support Local Search in Trajectory Optimization Planning

    NASA Technical Reports Server (NTRS)

    Morris, Robert A.; Venable, K. Brent; Lindsey, James

    2012-01-01

    NASA and the international community are investing in the development of a commercial transportation infrastructure that includes the increased use of rotorcraft, specifically helicopters and civil tilt rotors. However, there is significant concern over the impact of noise on the communities surrounding the transportation facilities. One way to address the rotorcraft noise problem is by exploiting powerful search techniques coming from artificial intelligence coupled with simulation and field tests to design low-noise flight profiles which can be tested in simulation or through field tests. This paper investigates the use of simulation based on predictive physical models to facilitate the search for low-noise trajectories using a class of automated search algorithms called local search. A novel feature of this approach is the ability to incorporate constraints directly into the problem formulation that addresses passenger safety and comfort.
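
    The local search idea can be sketched with a toy objective: greedy hill-climbing over waypoint altitudes, where flying higher reduces ground noise but abrupt altitude changes are penalized for passenger comfort, and hard altitude bounds play the role of constraints in the formulation. All numbers below are hypothetical, and the simulation-backed noise model of the paper is replaced by a closed-form stand-in.

```python
import random

random.seed(7)

N = 12                       # waypoints along the approach path
ALT_MIN, ALT_MAX = 300, 900  # admissible altitudes (m): the safety constraint

def cost(alts):
    # Toy objective: ground noise falls with altitude, while abrupt
    # altitude changes are penalized for passenger comfort.
    noise = sum(1000.0 / a for a in alts)
    comfort = sum((alts[i + 1] - alts[i]) ** 2 for i in range(N - 1)) * 1e-5
    return noise + comfort

def local_search(iters=20000):
    alts = [600.0] * N
    best = cost(alts)
    for _ in range(iters):
        i = random.randrange(N)
        delta = random.choice((-20.0, 20.0))
        new_alt = min(ALT_MAX, max(ALT_MIN, alts[i] + delta))
        trial = alts[:i] + [new_alt] + alts[i + 1:]
        c = cost(trial)
        if c < best:           # greedy move: accept only improvements
            alts, best = trial, c
    return alts, best

alts, best = local_search()
```

    In the paper's setting each cost evaluation would be a call to the predictive noise simulation rather than a formula, which is why keeping the neighborhood of moves small matters.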

  4. Time-domain reflectance diffuse optical tomography with Mellin-Laplace transform for experimental detection and depth localization of a single absorbing inclusion

    PubMed Central

    Puszka, Agathe; Hervé, Lionel; Planat-Chrétien, Anne; Koenig, Anne; Derouard, Jacques; Dinten, Jean-Marc

    2013-01-01

    We show how to apply the Mellin-Laplace transform to process time-resolved reflectance measurements for diffuse optical tomography. We illustrate this method on simulated signals incorporating the main sources of experimental noise and suggest how to fine-tune the method in order to detect the deepest absorbing inclusions and optimize their localization in depth, depending on the dynamic range of the measurement. To finish, we apply this method to measurements acquired with a setup including a femtosecond laser, photomultipliers and a time-correlated single photon counting board. Simulations and experiments are illustrated for a probe featuring the interfiber distance of 1.5 cm and show the potential of time-resolved techniques for imaging absorption contrast in depth with this geometry. PMID:23577292
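
    Numerically, a Mellin-Laplace moment of a sampled time-resolved signal can be sketched as below. The normalization p^(n+1)/n! * integral of s(t) * t^n * exp(-p*t) dt is an assumption adopted for illustration (not necessarily the paper's exact convention), validated here against a mono-exponential decay with a known closed form.

```python
import math

def mellin_laplace(signal, dt, p, n):
    """Order-n Mellin-Laplace moment of a sampled time-resolved signal,
    taken here as (p**(n+1) / n!) * sum of s(t) * t**n * exp(-p*t) * dt."""
    total = 0.0
    for k, s in enumerate(signal):
        t = k * dt
        total += s * t ** n * math.exp(-p * t) * dt
    return p ** (n + 1) / math.factorial(n) * total

# Validate on s(t) = exp(-t): the moment is analytically p^(n+1)/(1+p)^(n+1).
dt = 1e-3
sig = [math.exp(-k * dt) for k in range(20000)]  # t up to 20
m = mellin_laplace(sig, dt, p=2.0, n=3)
expected = 2.0 ** 4 / 3.0 ** 4
```

    Varying p trades depth sensitivity against noise, which is the tuning knob the paper exploits to push the detectable inclusion depth.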

  5. Chaos Quantum-Behaved Cat Swarm Optimization Algorithm and Its Application in the PV MPPT

    PubMed Central

    2017-01-01

    Cat Swarm Optimization (CSO) algorithm was put forward in 2006. Despite a faster convergence speed compared with Particle Swarm Optimization (PSO) algorithm, the application of CSO is greatly limited by the drawback of “premature convergence,” that is, the possibility of trapping in a local optimum when dealing with nonlinear optimization problems with a large number of local extreme values. In order to surmount the shortcomings of CSO, the Chaos Quantum-behaved Cat Swarm Optimization (CQCSO) algorithm is proposed in this paper. First, the Quantum-behaved Cat Swarm Optimization (QCSO) algorithm improves the accuracy of the CSO algorithm, which otherwise easily falls into a local optimum in its later stages. CQCSO is then obtained by introducing a tent map for jumping out of local optima. Second, CQCSO has been applied in the simulation of five different test functions, showing higher accuracy and less time consumption than CSO and QCSO. Finally, a photovoltaic MPPT model and experimental platform are established and a global maximum power point tracking control strategy is achieved by the CQCSO algorithm, the effectiveness and efficiency of which have been verified by both simulation and experiment. PMID:29181020

  6. Chaos Quantum-Behaved Cat Swarm Optimization Algorithm and Its Application in the PV MPPT.

    PubMed

    Nie, Xiaohua; Wang, Wei; Nie, Haoyao

    2017-01-01

    Cat Swarm Optimization (CSO) algorithm was put forward in 2006. Despite a faster convergence speed compared with Particle Swarm Optimization (PSO) algorithm, the application of CSO is greatly limited by the drawback of "premature convergence," that is, the possibility of trapping in a local optimum when dealing with nonlinear optimization problems with a large number of local extreme values. In order to surmount the shortcomings of CSO, the Chaos Quantum-behaved Cat Swarm Optimization (CQCSO) algorithm is proposed in this paper. First, the Quantum-behaved Cat Swarm Optimization (QCSO) algorithm improves the accuracy of the CSO algorithm, which otherwise easily falls into a local optimum in its later stages. CQCSO is then obtained by introducing a tent map for jumping out of local optima. Second, CQCSO has been applied in the simulation of five different test functions, showing higher accuracy and less time consumption than CSO and QCSO. Finally, a photovoltaic MPPT model and experimental platform are established and a global maximum power point tracking control strategy is achieved by the CQCSO algorithm, the effectiveness and efficiency of which have been verified by both simulation and experiment.
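
    The tent map at the heart of the chaotic escape mechanism is easy to sketch. The restart routine below is a minimal stand-in for CQCSO's behavior — evaluating candidates along a chaotic sequence to jump out of a local optimum — with a made-up multimodal objective, not the full swarm algorithm. (In floating point, repeated doubling eventually collapses the tent map toward 0, so only a modest number of steps is used.)

```python
import math

def tent_map(x):
    # Tent map on [0, 1]: the chaotic sequence used to re-seed positions.
    return 2 * x if x < 0.5 else 2 * (1 - x)

def chaotic_restart_search(f, lo, hi, x0=0.37, steps=50):
    """Evaluate f along a tent-map sequence mapped into [lo, hi] and keep
    the best point: a minimal stand-in for CQCSO's escape mechanism."""
    x = x0
    best_x = lo + x0 * (hi - lo)
    best = f(best_x)
    for _ in range(steps):
        x = tent_map(x)
        cand = lo + x * (hi - lo)
        if f(cand) < best:
            best_x, best = cand, f(cand)
    return best_x, best

# Multimodal test function with many local minima; global minimum at x = 0.
f = lambda x: x * x + 2.0 * (1 - math.cos(5 * x))
best_x, best = chaotic_restart_search(f, -4.0, 4.0)
```

    In the full algorithm, this chaotic re-seeding is triggered only when the swarm stagnates, and the quantum-behaved update handles the routine exploration.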

  7. Investigation on the use of optimization techniques for helicopter airframe vibrations design studies

    NASA Technical Reports Server (NTRS)

    Sreekanta Murthy, T.

    1992-01-01

    Results of the investigation of formal nonlinear programming-based numerical optimization techniques of helicopter airframe vibration reduction are summarized. The objective and constraint function and the sensitivity expressions used in the formulation of airframe vibration optimization problems are presented and discussed. Implementation of a new computational procedure based on MSC/NASTRAN and CONMIN in a computer program system called DYNOPT for optimizing airframes subject to strength, frequency, dynamic response, and dynamic stress constraints is described. An optimization methodology is proposed which is thought to provide a new way of applying formal optimization techniques during the various phases of the airframe design process. Numerical results obtained from the application of the DYNOPT optimization code to a helicopter airframe are discussed.

  8. Robust breathing signal extraction from cone beam CT projections based on adaptive and global optimization techniques

    PubMed Central

    Chao, Ming; Wei, Jie; Li, Tianfang; Yuan, Yading; Rosenzweig, Kenneth E; Lo, Yeh-Chi

    2017-01-01

    We present a study of extracting respiratory signals from cone beam computed tomography (CBCT) projections within the framework of the Amsterdam Shroud (AS) technique. Acquired prior to the radiotherapy treatment, CBCT projections were preprocessed for contrast enhancement by converting the original intensity images to attenuation images with which the AS image was created. An adaptive robust z-normalization filtering was applied to further augment the weak oscillating structures locally. From the enhanced AS image, the respiratory signal was extracted using a two-step optimization approach to effectively reveal the large-scale regularity of the breathing signals. CBCT projection images from five patients acquired with the Varian Onboard Imager on the Clinac iX System Linear Accelerator (Varian Medical Systems, Palo Alto, CA) were employed to assess the proposed technique. Stable breathing signals can be reliably extracted using the proposed algorithm. Reference waveforms obtained using an air bellows belt (Philips Medical Systems, Cleveland, OH) were exported and compared to the AS-based signals. The average errors for the enrolled patients between the estimated breaths per minute (bpm) and the reference waveform bpm can be as low as −0.07 with a standard deviation of 1.58. The new algorithm outperformed the original AS technique for all patients by 8.5% to 30%. The impact of gantry rotation on the breathing signal was assessed with data acquired with a Quasar phantom (Modus Medical Devices Inc., London, Canada) and found to be minimal on the signal frequency. The new technique developed in this work will provide a practical solution to rendering a markerless breathing signal using the CBCT projections for thoracic and abdominal patients. PMID:27008349
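
    The robust z-normalization step can be sketched with a median/MAD version applied row-wise to a synthetic Amsterdam-Shroud-like image; the paper's adaptive variant is more elaborate, and the data below are made up for illustration.

```python
import numpy as np

def robust_z(rows):
    """Row-wise robust z-normalization using median and MAD, which
    resists the outliers that corrupt mean/std-based scaling."""
    med = np.median(rows, axis=1, keepdims=True)
    mad = np.median(np.abs(rows - med), axis=1, keepdims=True)
    mad = np.where(mad == 0, 1.0, mad)    # guard against flat rows
    return (rows - med) / (1.4826 * mad)  # 1.4826: MAD -> sigma for Gaussians

# Synthetic AS-like image: a weak breathing oscillation on a strong,
# row-dependent baseline, plus one corrupted projection (outlier).
t = np.linspace(0, 10, 200)
rows = np.stack([50.0 * i + 0.5 * np.sin(2 * np.pi * 0.25 * t) for i in range(5)])
rows[2, 100] += 500.0
z = robust_z(rows)
```

    The strong baselines are removed and the weak oscillation is amplified to unit scale, while the outlier remains clearly visible rather than silently inflating the scale estimate as it would with mean/std normalization.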

  9. Muscle optimization techniques impact the magnitude of calculated hip joint contact forces.

    PubMed

    Wesseling, Mariska; Derikx, Loes C; de Groote, Friedl; Bartels, Ward; Meyer, Christophe; Verdonschot, Nico; Jonkers, Ilse

    2015-03-01

    In musculoskeletal modelling, several optimization techniques are used to calculate muscle forces, which strongly influence resultant hip contact forces (HCF). The goal of this study was to calculate muscle forces using four different optimization techniques, i.e., two different static optimization techniques, computed muscle control (CMC) and the physiological inverse approach (PIA). We investigated their subsequent effects on HCFs during gait and sit to stand and found that at the first peak in gait at 15-20% of the gait cycle, CMC calculated the highest HCFs (median 3.9 times peak GRF (pGRF)). When comparing calculated HCFs to experimental HCFs reported in literature, the former were up to 238% larger. Both static optimization techniques produced lower HCFs (median 3.0 and 3.1 pGRF), while PIA included muscle dynamics without an excessive increase in HCF (median 3.2 pGRF). The increased HCFs in CMC were potentially caused by higher muscle forces resulting from co-contraction of agonists and antagonists around the hip. Alternatively, these higher HCFs may be caused by the slightly poorer tracking of the net joint moment by the muscle moments calculated by CMC. We conclude that the use of different optimization techniques affects calculated HCFs, and static optimization approached experimental values best. © 2014 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.
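
    A minimal static optimization of the kind compared in this study minimizes the sum of squared muscle activations subject to reproducing the net joint moment. For two muscles with the activation bounds inactive, the Lagrange conditions give a closed form; the moment arms, maximum forces, and the crude contact-force proxy below are all hypothetical.

```python
# Two hip muscles with moment arms r (m) and max isometric forces F (N);
# find activations a in [0, 1] minimizing sum(a_i^2) subject to the
# net joint moment constraint: sum(r_i * F_i * a_i) = M.
r = [0.05, 0.03]      # hypothetical moment arms
F = [2000.0, 3000.0]  # hypothetical max isometric forces
M = 60.0              # required joint moment (N*m)

# Lagrange solution (valid while 0 <= a_i <= 1):
#   a_i = M * r_i * F_i / sum((r_j * F_j)^2)
c = [ri * Fi for ri, Fi in zip(r, F)]
denom = sum(ci ** 2 for ci in c)
a = [M * ci / denom for ci in c]

muscle_forces = [ai * Fi for ai, Fi in zip(a, F)]
hip_contact = sum(muscle_forces)  # crude proxy: compressive sum of muscle forces
```

    Because the quadratic cost spreads load across muscles in proportion to their moment-generating capacity, it tends to avoid the agonist-antagonist co-contraction that drives the higher contact forces reported for CMC.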

  10. Bridging the Information Gap: Remote Sensing and Micro Hydropower Feasibility in Data-Scarce Regions

    NASA Astrophysics Data System (ADS)

    Muller, Marc Francois

    Access to electricity remains an impediment to development in many parts of the world, particularly in rural areas with low population densities and prohibitive grid extension costs. In that context, community-scale run-of-river hydropower---micro-hydropower---is an attractive local power generation option, particularly in mountainous regions, where appropriate slope and runoff conditions occur. Despite their promise, micro hydropower programs have generally failed to have a significant impact on rural electrification in developing nations. In Nepal, despite very favorable conditions and approximately 50 years of experience, the technology supplies only 4% of the 10 million households that do not have access to the central electricity grid. These poor results point towards a major information gap between technical experts, who may lack the incentives or local knowledge needed to design appropriate systems for rural villages, and local users, who have excellent knowledge of the community but lack technical expertise to design and manage infrastructure. Both groups suffer from a limited basis for evidence-based decision making due to sparse environmental data available to support the technical components of infrastructure design. This dissertation draws on recent advances in remote sensing data, stochastic modeling techniques and open source platforms to bridge that information gap. Streamflow is a key environmental driver of hydropower production that is particularly challenging to model due to its stochastic nature and the complexity of the underlying natural processes. The first part of the dissertation addresses the general challenge of Predicting streamflow in Ungauged Basins (PUB). It first develops an algorithm to optimize the use of rain gauge observations to improve the accuracy of remote sensing precipitation measures. 
It then derives and validates a process-based model to estimate streamflow distribution in seasonally dry climates using the stochastic nature of rainfall, and proposes a novel geostatistical method to regionalize its parameters across the stream network. Although motivated by the needs of micro hydropower design in Nepal, these techniques represent contributions to the broader international challenge of PUB and can be applied worldwide. The economic drivers of rural electrification are then considered by presenting an econometric technique to estimate the cost function and demand curve of micro hydropower in Nepal. The empirical strategy uses topography-based instrumental variables to identify price elasticities. All developed methods are assembled in a computer tool, along with a search algorithm that uses a digital elevation model to optimize the placement of micro hydropower infrastructure. The tool---Micro Hydro Power---is an open source application that can be accessed and operated on a web-browser (http://mfmul.shinyapps.io/mhpower). Its purpose is to assist local communities in the design and evaluation of micro hydropower alternatives in their locality, while using cost and demand information provided by local users to generate accurate feasibility maps at the national level, thus bridging the information gap.

  11. Geometrical optimization of a local ballistic magnetic sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanda, Yuhsuke; Hara, Masahiro; Nomura, Tatsuya

    2014-04-07

    We have developed a highly sensitive local magnetic sensor by using a ballistic transport property in a two-dimensional conductor. A semiclassical simulation reveals that the sensitivity increases when the geometry of the sensor and the spatial distribution of the local field are optimized. We have also experimentally demonstrated a clear observation of a magnetization process in a permalloy dot whose size is much smaller than the size of an optimized ballistic magnetic sensor fabricated from a GaAs/AlGaAs two-dimensional electron gas.

  12. Modern Radiation Therapy for Hodgkin Lymphoma: Field and Dose Guidelines From the International Lymphoma Radiation Oncology Group (ILROG)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Specht, Lena, E-mail: lena.specht@regionh.dk; Yahalom, Joachim; Illidge, Tim

    2014-07-15

Radiation therapy (RT) is the most effective single modality for local control of Hodgkin lymphoma (HL) and an important component of therapy for many patients. These guidelines have been developed to address the use of RT in HL in the modern era of combined modality treatment. The role of reduced volumes and doses is addressed, integrating modern imaging with 3-dimensional (3D) planning and advanced techniques of treatment delivery. The previously applied extended field (EF) and original involved field (IF) techniques, which treated larger volumes based on nodal stations, have now been replaced by the use of limited volumes, based solely on detectable nodal (and extranodal extension) involvement at presentation, using contrast-enhanced computed tomography, positron emission tomography/computed tomography, magnetic resonance imaging, or a combination of these techniques. The International Commission on Radiation Units and Measurements concepts of gross tumor volume, clinical target volume, internal target volume, and planning target volume are used for defining the targeted volumes. Newer treatment techniques, including intensity modulated radiation therapy, breath-hold, image guided radiation therapy, and 4-dimensional imaging, should be implemented when their use is expected to decrease significantly the risk for normal tissue damage while still achieving the primary goal of local tumor control. The highly conformal involved node radiation therapy (INRT), recently introduced for patients for whom optimal imaging is available, is explained. A new concept, involved site radiation therapy (ISRT), is introduced as the standard conformal therapy for the scenario, commonly encountered, wherein optimal imaging is not available. There is increasing evidence that RT doses used in the past are higher than necessary for disease control in this era of combined modality therapy. The use of INRT and of lower doses in early-stage HL is supported by available data. 
Although the use of ISRT has not yet been validated in a formal study, it is more conservative than INRT, accounting for suboptimal information and appropriately designed for safe local disease control. The goal of modern smaller field radiation therapy is to reduce both treatment volume and treatment dose while maintaining efficacy and minimizing acute and late sequelae. This review is a consensus of the International Lymphoma Radiation Oncology Group (ILROG) Steering Committee regarding the modern approach to RT in the treatment of HL, outlining a new concept of ISRT in which reduced treatment volumes are planned for the effective control of involved sites of HL. Nodal and extranodal non-Hodgkin lymphomas (NHL) are covered separately by ILROG guidelines.

  13. Modern radiation therapy for Hodgkin lymphoma: field and dose guidelines from the international lymphoma radiation oncology group (ILROG).

    PubMed

    Specht, Lena; Yahalom, Joachim; Illidge, Tim; Berthelsen, Anne Kiil; Constine, Louis S; Eich, Hans Theodor; Girinsky, Theodore; Hoppe, Richard T; Mauch, Peter; Mikhaeel, N George; Ng, Andrea

    2014-07-15

    Radiation therapy (RT) is the most effective single modality for local control of Hodgkin lymphoma (HL) and an important component of therapy for many patients. These guidelines have been developed to address the use of RT in HL in the modern era of combined modality treatment. The role of reduced volumes and doses is addressed, integrating modern imaging with 3-dimensional (3D) planning and advanced techniques of treatment delivery. The previously applied extended field (EF) and original involved field (IF) techniques, which treated larger volumes based on nodal stations, have now been replaced by the use of limited volumes, based solely on detectable nodal (and extranodal extension) involvement at presentation, using contrast-enhanced computed tomography, positron emission tomography/computed tomography, magnetic resonance imaging, or a combination of these techniques. The International Commission on Radiation Units and Measurements concepts of gross tumor volume, clinical target volume, internal target volume, and planning target volume are used for defining the targeted volumes. Newer treatment techniques, including intensity modulated radiation therapy, breath-hold, image guided radiation therapy, and 4-dimensional imaging, should be implemented when their use is expected to decrease significantly the risk for normal tissue damage while still achieving the primary goal of local tumor control. The highly conformal involved node radiation therapy (INRT), recently introduced for patients for whom optimal imaging is available, is explained. A new concept, involved site radiation therapy (ISRT), is introduced as the standard conformal therapy for the scenario, commonly encountered, wherein optimal imaging is not available. There is increasing evidence that RT doses used in the past are higher than necessary for disease control in this era of combined modality therapy. The use of INRT and of lower doses in early-stage HL is supported by available data. 
Although the use of ISRT has not yet been validated in a formal study, it is more conservative than INRT, accounting for suboptimal information and appropriately designed for safe local disease control. The goal of modern smaller field radiation therapy is to reduce both treatment volume and treatment dose while maintaining efficacy and minimizing acute and late sequelae. This review is a consensus of the International Lymphoma Radiation Oncology Group (ILROG) Steering Committee regarding the modern approach to RT in the treatment of HL, outlining a new concept of ISRT in which reduced treatment volumes are planned for the effective control of involved sites of HL. Nodal and extranodal non-Hodgkin lymphomas (NHL) are covered separately by ILROG guidelines. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Advantages and limitations of navigation-based multicriteria optimization (MCO) for localized prostate cancer IMRT planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGarry, Conor K., E-mail: conor.mcgarry@belfasttrust.hscni.net; Bokrantz, Rasmus; RaySearch Laboratories, Stockholm

    2014-10-01

Efficacy of inverse planning is becoming increasingly important for advanced radiotherapy techniques. This study’s aims were to validate multicriteria optimization (MCO) in RayStation (v2.4, RaySearch Laboratories, Sweden) against standard intensity-modulated radiation therapy (IMRT) optimization in Oncentra (v4.1, Nucletron BV, the Netherlands) and characterize dose differences due to conversion of navigated MCO plans into deliverable multileaf collimator apertures. Step-and-shoot IMRT plans were created for 10 patients with localized prostate cancer using both standard optimization and MCO. Acceptable standard IMRT plans with minimal average rectal dose were chosen for comparison with deliverable MCO plans. The trade-off was, for the MCO plans, managed through a user interface that permits continuous navigation between fluence-based plans. Navigated MCO plans were made deliverable at incremental steps along a trajectory between maximal target homogeneity and maximal rectal sparing. Dosimetric differences between navigated and deliverable MCO plans were also quantified. MCO plans chosen as acceptable under navigated and deliverable conditions resulted in similar rectal sparing compared with standard optimization (33.7 ± 1.8 Gy vs 35.5 ± 4.2 Gy, p = 0.117). The dose differences between navigated and deliverable MCO plans increased as higher priority was placed on rectal avoidance. If the best possible deliverable MCO was chosen, a significant reduction in rectal dose was observed in comparison with standard optimization (30.6 ± 1.4 Gy vs 35.5 ± 4.2 Gy, p = 0.047). Improvements were, however, to some extent, at the expense of less conformal dose distributions, which resulted in significantly higher doses to the bladder for 2 of the 3 tolerance levels. In conclusion, similar IMRT plans can be created for patients with prostate cancer using MCO compared with standard optimization. 
Limitations exist within MCO regarding conversion of navigated plans to deliverable apertures, particularly for plans that emphasize avoidance of critical structures. Minimizing these differences would result in better quality treatments for patients with prostate cancer who were treated with radiotherapy using MCO plans.

  15. A Novel Optimization Technique to Improve Gas Recognition by Electronic Noses Based on the Enhanced Krill Herd Algorithm

    PubMed Central

    Wang, Li; Jia, Pengfei; Huang, Tailai; Duan, Shukai; Yan, Jia; Wang, Lidan

    2016-01-01

An electronic nose (E-nose) is an intelligent system that we use in this paper to distinguish three indoor pollutant gases (benzene (C6H6), toluene (C7H8), formaldehyde (CH2O)) and carbon monoxide (CO). The algorithm is a key part of an E-nose system, mainly composed of data processing and pattern recognition. In this paper, we employ a support vector machine (SVM) to distinguish indoor pollutant gases; two of its parameters need to be optimized, so to improve the performance of the SVM, in other words, to achieve a higher gas recognition rate, an effective enhanced krill herd algorithm (EKH) based on a novel decision weighting factor computing method is proposed to optimize the two SVM parameters. Krill herd (KH) is effective in practice; however, it can be trapped by local best solutions and therefore does not always find the global optimum, and because its search relies heavily on randomness, it does not always converge rapidly. To address these issues we propose an enhanced KH (EKH) that improves the global search ability and convergence speed of KH. To obtain a more accurate model of krill behavior, an updated crossover operator is added to the approach. This keeps the krill population diverse in the early iterations while retaining good local search ability in the later iterations. The recognition results of EKH are compared with those of other optimization algorithms (including KH, chaotic KH (CKH), quantum-behaved particle swarm optimization (QPSO), particle swarm optimization (PSO) and a genetic algorithm (GA)), and EKH outperforms the other considered methods. The research results verify that EKH not only significantly improves the performance of our E-nose system, but also provides a good starting point and theoretical basis for further study of other improved krill algorithms in all E-nose application areas. 
PMID:27529247
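As a toy illustration of a crossover-enhanced population search of the kind described above (this is not the authors' EKH: the operators, parameters, and the synthetic objective standing in for SVM cross-validation accuracy are all our assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Synthetic stand-in for SVM cross-validation accuracy over (log C, log gamma);
    # the maximum is at (1.0, -2.0).
    return -((x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2)

def ekh_like_search(obj, bounds, n_krill=20, n_iter=200, step=0.3, cx_rate=0.5):
    """Population search with random foraging motion plus a crossover
    operator that recombines each krill with the current best individual,
    loosely following the crossover-enhanced krill herd idea."""
    dim = bounds.shape[0]
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_krill, dim))
    fit = np.array([obj(p) for p in pop])
    for _ in range(n_iter):
        best = pop[np.argmax(fit)]
        for i in range(n_krill):
            trial = pop[i] + step * rng.standard_normal(dim)  # random motion
            mask = rng.random(dim) < cx_rate                  # crossover with best
            trial = np.where(mask, best, trial)
            trial = np.clip(trial, bounds[:, 0], bounds[:, 1])
            f = obj(trial)
            if f > fit[i]:                                    # greedy replacement
                pop[i], fit[i] = trial, f
    return pop[np.argmax(fit)], fit.max()
```

In the paper's setting the objective would be the cross-validated recognition rate of the SVM as a function of its two hyperparameters, rather than this synthetic surrogate.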

  16. Computational and experimental studies of LEBUs at high device Reynolds numbers

    NASA Technical Reports Server (NTRS)

    Bertelrud, Arild; Watson, R. D.

    1988-01-01

    The present paper summarizes computational and experimental studies for large-eddy breakup devices (LEBUs). LEBU optimization (using a computational approach considering compressibility, Reynolds number, and the unsteadiness of the flow) and experiments with LEBUs at high Reynolds numbers in flight are discussed. The measurements include streamwise as well as spanwise distributions of local skin friction. The unsteady flows around the LEBU devices and far downstream are characterized by strain-gage measurements on the devices and hot-wire readings downstream. Computations are made with available time-averaged and quasi-stationary techniques to find suitable device profiles with minimum drag.

  17. Variation in dielectric properties due to pathological changes in human liver.

    PubMed

    Peyman, Azadeh; Kos, Bor; Djokić, Mihajlo; Trotovšek, Blaž; Limbaeck-Stokin, Clara; Serša, Gregor; Miklavčič, Damijan

    2015-12-01

Dielectric properties of freshly excised human liver tissues (in vitro) with several pathological conditions, including cancer, were obtained in the frequency range 100 MHz-5 GHz. Differences in the dielectric behavior of normal and pathological tissues at microwave frequencies are discussed based on histological information for each tissue. The data presented are useful for many medical applications, in particular nanosecond pulsed electroporation techniques. Knowledge of dielectric properties is vital for mathematical calculation of the local electric field distribution inside electroporated tissues and can be used to optimize the process of electroporation for treatment-planning procedures. © 2015 Wiley Periodicals, Inc.

  18. Optimal designs based on the maximum quasi-likelihood estimator

    PubMed Central

    Shen, Gang; Hyun, Seung Won; Wong, Weng Kee

    2016-01-01

    We use optimal design theory and construct locally optimal designs based on the maximum quasi-likelihood estimator (MqLE), which is derived under less stringent conditions than those required for the MLE method. We show that the proposed locally optimal designs are asymptotically as efficient as those based on the MLE when the error distribution is from an exponential family, and they perform just as well or better than optimal designs based on any other asymptotically linear unbiased estimators such as the least square estimator (LSE). In addition, we show current algorithms for finding optimal designs can be directly used to find optimal designs based on the MqLE. As an illustrative application, we construct a variety of locally optimal designs based on the MqLE for the 4-parameter logistic (4PL) model and study their robustness properties to misspecifications in the model using asymptotic relative efficiency. The results suggest that optimal designs based on the MqLE can be easily generated and they are quite robust to mis-specification in the probability distribution of the responses. PMID:28163359

  19. Fast Raman single bacteria identification: toward a routine in-vitro diagnostic

    NASA Astrophysics Data System (ADS)

    Douet, Alice; Josso, Quentin; Marchant, Adrien; Dutertre, Bertrand; Filiputti, Delphine; Novelli-Rousseau, Armelle; Espagnon, Isabelle; Kloster-Landsberg, Meike; Mallard, Frédéric; Perraut, Francois

    2016-04-01

Timely microbiological results are essential to allow clinicians to optimize the prescribed treatment, ideally at the initial stage of the therapeutic process. Several approaches, such as molecular biology, have been proposed to solve this issue and to provide the microbiological result in a few hours directly from the sample. However fast and sensitive, those methods are not based on single-cell phenotypic information, which presents several drawbacks and limitations. Optical methods have the advantage of allowing single-cell sensitivity and of probing the phenotype of the measured cells. Here we present a process and a prototype that allow automated single-bacterium phenotypic analysis. This prototype is based on the use of Digital In-line Holography techniques combined with a specially designed Raman spectrometer using a dedicated device to capture bacteria. The localization of single cells is finely determined by using holograms and a proper propagation kernel. Holographic images are also used to analyze the bacteria in the sample, sorting potential pathogens from flora-dwelling species or other biological particles. This accurate localization enables the use of a small confocal volume adapted to the measurement of single cells. Along with the confocal volume adaptation, we also modified every component of the spectrometer to optimize single-bacterium Raman measurements. This optimization allowed us to acquire informative single-cell spectra using an integration time of only 0.5 s. Identification results obtained with this prototype are presented, based on a database of 65,144 Raman spectra acquired automatically from 48 bacterial strains belonging to 8 species.

  20. Swing of the Surgical Pendulum: A Return to Surgery for Treatment of Head and Neck Cancer in the 21st Century?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holsinger, F. Christopher; Weber, Randal S.

Treatment for head and neck cancer has evolved significantly during the past 100 years. Beginning with Billroth's total laryngectomy on New Year's Day in 1873, 'radical' surgery remained the only accepted treatment for head and neck cancer when optimal local and regional control was the goal. Bigger was still better when it came to managing the primary tumor and the neck. The 'commando' procedure and radical neck dissection were the hallmarks of this first generation of treatments of head-and-neck cancer. With the advent of microvascular reconstructive techniques, larger and more comprehensive resections could be performed. Despite these large resections and their 'mutilating' sequelae, overall survival did not improve. Even for intermediate-stage disease in head-and-neck cancer, the 5-year survival rate did not rise above 50%. Many concluded that more than the scalpel was needed for optimal local and regional control, especially for intermediate- and advanced-stage disease. Most important, the multidisciplinary teams must identify and correlate biomarkers in the tumor and host that predict for a response to therapy and for optimal functional recovery. As the pendulum swings back, a scientific approach using tissue biomarkers for the response to treatment in the setting of multidisciplinary trials must emerge as the new paradigm. In the postgenomic era, treatment decisions should be made based on functional and oncologic parameters, not just to avoid perceived morbidity.

  1. Modeling Brain Dynamics in Brain Tumor Patients Using the Virtual Brain.

    PubMed

    Aerts, Hannelore; Schirner, Michael; Jeurissen, Ben; Van Roost, Dirk; Achten, Eric; Ritter, Petra; Marinazzo, Daniele

    2018-01-01

    Presurgical planning for brain tumor resection aims at delineating eloquent tissue in the vicinity of the lesion to spare during surgery. To this end, noninvasive neuroimaging techniques such as functional MRI and diffusion-weighted imaging fiber tracking are currently employed. However, taking into account this information is often still insufficient, as the complex nonlinear dynamics of the brain impede straightforward prediction of functional outcome after surgical intervention. Large-scale brain network modeling carries the potential to bridge this gap by integrating neuroimaging data with biophysically based models to predict collective brain dynamics. As a first step in this direction, an appropriate computational model has to be selected, after which suitable model parameter values have to be determined. To this end, we simulated large-scale brain dynamics in 25 human brain tumor patients and 11 human control participants using The Virtual Brain, an open-source neuroinformatics platform. Local and global model parameters of the Reduced Wong-Wang model were individually optimized and compared between brain tumor patients and control subjects. In addition, the relationship between model parameters and structural network topology and cognitive performance was assessed. Results showed (1) significantly improved prediction accuracy of individual functional connectivity when using individually optimized model parameters; (2) local model parameters that can differentiate between regions directly affected by a tumor, regions distant from a tumor, and regions in a healthy brain; and (3) interesting associations between individually optimized model parameters and structural network topology and cognitive performance.

  2. In what ways do communities support optimal antiretroviral treatment in Zimbabwe?

    PubMed

    Scott, K; Campbell, C; Madanhire, C; Skovdal, M; Nyamukapa, C; Gregson, S

    2014-12-01

Little research has been conducted on how pre-existing indigenous community resources, especially social networks, affect the success of externally imposed HIV interventions. Antiretroviral treatment (ART), an externally initiated biomedical intervention, is being rolled out across sub-Saharan Africa. Understanding the ways in which community networks are working to facilitate optimal ART access and adherence will enable policymakers to better engage with and bolster these pre-existing resources. We conducted 67 interviews and eight focus group discussions with 127 people from three key population groups in Manicaland, eastern Zimbabwe: healthcare workers, adults on ART and carers of children on ART. We also conducted over 100 h of observation at HIV treatment sites in local clinics and hospitals. Our research sought to determine how indigenous resources were enabling people to achieve optimal ART access and adherence. We analysed data transcripts using the thematic network technique, coding references to supportive community networks that enable local people to achieve ART access and adherence. People on ART or carers of children on ART in Zimbabwe report drawing support from a variety of social networks that enable them to overcome many obstacles to adherence. Key support networks include: HIV groups; food and income support networks; home-based care, church and women's groups; family networks; and relationships with healthcare providers. More attention to the community context in which HIV initiatives occur will help ensure that interventions work with and benefit from pre-existing social capital. © The Author (2013). Published by Oxford University Press.

  3. In what ways do communities support optimal antiretroviral treatment in Zimbabwe?

    PubMed Central

    Scott, K.; Campbell, C.; Madanhire, C.; Skovdal, M.; Nyamukapa, C.; Gregson, S.

    2014-01-01

Little research has been conducted on how pre-existing indigenous community resources, especially social networks, affect the success of externally imposed HIV interventions. Antiretroviral treatment (ART), an externally initiated biomedical intervention, is being rolled out across sub-Saharan Africa. Understanding the ways in which community networks are working to facilitate optimal ART access and adherence will enable policymakers to better engage with and bolster these pre-existing resources. We conducted 67 interviews and eight focus group discussions with 127 people from three key population groups in Manicaland, eastern Zimbabwe: healthcare workers, adults on ART and carers of children on ART. We also conducted over 100 h of observation at HIV treatment sites in local clinics and hospitals. Our research sought to determine how indigenous resources were enabling people to achieve optimal ART access and adherence. We analysed data transcripts using the thematic network technique, coding references to supportive community networks that enable local people to achieve ART access and adherence. People on ART or carers of children on ART in Zimbabwe report drawing support from a variety of social networks that enable them to overcome many obstacles to adherence. Key support networks include: HIV groups; food and income support networks; home-based care, church and women's groups; family networks; and relationships with healthcare providers. More attention to the community context in which HIV initiatives occur will help ensure that interventions work with and benefit from pre-existing social capital. PMID:23503291

  4. Progress in multidisciplinary design optimization at NASA Langley

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.

    1993-01-01

    Multidisciplinary Design Optimization refers to some combination of disciplinary analyses, sensitivity analysis, and optimization techniques used to design complex engineering systems. The ultimate objective of this research at NASA Langley Research Center is to help the US industry reduce the costs associated with development, manufacturing, and maintenance of aerospace vehicles while improving system performance. This report reviews progress towards this objective and highlights topics for future research. Aerospace design problems selected from the author's research illustrate strengths and weaknesses in existing multidisciplinary optimization techniques. The techniques discussed include multiobjective optimization, global sensitivity equations and sequential linear programming.

  5. Difficulty of distinguishing product states locally

    NASA Astrophysics Data System (ADS)

    Croke, Sarah; Barnett, Stephen M.

    2017-01-01

    Nonlocality without entanglement is a rather counterintuitive phenomenon in which information may be encoded entirely in product (unentangled) states of composite quantum systems in such a way that local measurement of the subsystems is not enough for optimal decoding. For simple examples of pure product states, the gap in performance is known to be rather small when arbitrary local strategies are allowed. Here we restrict to local strategies readily achievable with current technology: those requiring neither a quantum memory nor joint operations. We show that even for measurements on pure product states, there can be a large gap between such strategies and theoretically optimal performance. Thus, even in the absence of entanglement, physically realizable local strategies can be far from optimal for extracting quantum information.

  6. Markov Tracking for Agent Coordination

    NASA Technical Reports Server (NTRS)

    Washington, Richard; Lau, Sonie (Technical Monitor)

    1998-01-01

Partially observable Markov decision processes (POMDPs) are an attractive representation of agent behavior, since they capture uncertainty in both the agent's state and its actions. However, finding an optimal policy for POMDPs in general is computationally difficult. In this paper we present Markov Tracking, a restricted problem of coordinating actions with an agent or process represented as a POMDP. Because the actions coordinate with the agent rather than influence its behavior, the optimal solution to this problem can be computed locally and quickly. We also demonstrate the use of the technique on sequential POMDPs, which can be used to model a behavior that follows a linear, acyclic trajectory through a series of states. By imposing a "windowing" restriction that limits the number of possible alternatives considered at any moment to a fixed size, a coordinating action can be calculated in constant time, making this amenable to coordination with complex agents.
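The local computation the abstract describes can be sketched as a Bayes belief update over the tracked process followed by a greedy choice of the coordinating action; because the action does not influence the process, maximizing immediate expected reward under the current belief suffices. A minimal illustration (the matrix layout and reward structure are our assumptions):

```python
import numpy as np

def belief_update(belief, T, O, obs):
    """One Bayes-filter step for a POMDP observer.

    T[i, j] = P(next state j | state i); O[j, o] = P(observation o | state j).
    Predict with T, weight by the likelihood of the received observation,
    and renormalize to a proper distribution.
    """
    predicted = T.T @ belief
    corrected = predicted * O[:, obs]
    return corrected / corrected.sum()

def coordinating_action(belief, R):
    """Pick the action maximizing immediate expected reward R[state, action]
    under the current belief; with no influence on the tracked process,
    this greedy local choice is the optimal coordination policy."""
    return int(np.argmax(R.T @ belief))
```

A sequential POMDP with the "windowing" restriction would simply zero out belief mass outside a fixed-size window of states around the current estimate before the update.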

  7. Ghost artifact cancellation using phased array processing.

    PubMed

    Kellman, P; McVeigh, E R

    2001-08-01

    In this article, a method for phased array combining is formulated which may be used to cancel ghosts caused by a variety of distortion mechanisms, including space variant distortions such as local flow or off-resonance. This method is based on a constrained optimization, which optimizes SNR subject to the constraint of nulling ghost artifacts at known locations. The resultant technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The method is applied to multishot EPI with noninterleaved phase encode acquisition. A number of benefits, as compared to the conventional interleaved approach, are reduced distortion due to off-resonance, in-plane flow, and EPI delay misalignment, as well as eliminating the need for echo-shifting. Experimental results demonstrate the cancellation for both phantom as well as cardiac imaging examples.
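The constrained optimization described above has a classical closed form: maximize SNR subject to unit response at the desired pixel and zero response at each ghost location. A minimal sketch under an assumed known coil-sensitivity matrix (variable names and the white-noise default are our assumptions; this is the generic linearly constrained optimum, not the authors' exact implementation):

```python
import numpy as np

def ghost_nulling_weights(sens, noise_cov=None):
    """SNR-optimal phased-array combining weights with ghost-nulling constraints.

    sens      : (n_coils, n_locations) complex coil sensitivities, where
                column 0 is the desired pixel and the remaining columns
                are the aliased ghost locations.
    noise_cov : (n_coils, n_coils) noise covariance R (identity if None).

    Returns w = R^{-1} S (S^H R^{-1} S)^{-1} f with f = [1, 0, ..., 0],
    i.e. unit gain on the signal and exact nulls on every ghost.
    """
    n_coils, n_loc = sens.shape
    R = np.eye(n_coils) if noise_cov is None else noise_cov
    Ri = np.linalg.inv(R)
    f = np.zeros(n_loc)
    f[0] = 1.0
    return Ri @ sens @ np.linalg.solve(sens.conj().T @ Ri @ sens, f)
```

Applying `sens.conj().T @ w` recovers the constraint vector exactly, which is the sense in which the weights null the ghosts while passing the desired signal.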

  8. Ghost Artifact Cancellation Using Phased Array Processing

    PubMed Central

    Kellman, Peter; McVeigh, Elliot R.

    2007-01-01

    In this article, a method for phased array combining is formulated which may be used to cancel ghosts caused by a variety of distortion mechanisms, including space variant distortions such as local flow or off-resonance. This method is based on a constrained optimization, which optimizes SNR subject to the constraint of nulling ghost artifacts at known locations. The resultant technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The method is applied to multishot EPI with noninterleaved phase encode acquisition. Benefits compared to the conventional interleaved approach include reduced distortion due to off-resonance, in-plane flow, and EPI delay misalignment, as well as elimination of the need for echo-shifting. Experimental results demonstrate the cancellation for both phantom and cardiac imaging examples. PMID:11477638

  9. Alien Genetic Algorithm for Exploration of Search Space

    NASA Astrophysics Data System (ADS)

    Patel, Narendra; Padhiyar, Nitin

    2010-10-01

    Genetic Algorithm (GA) is a widely accepted population-based stochastic optimization technique used for single- and multi-objective optimization problems. Various modified versions of GA have been proposed in the last three decades, mainly addressing two issues: increasing the convergence rate and increasing the probability of finding the global minimum. These two goals conflict: emphasizing the first, GA tends to converge to a local optimum, while emphasizing the second entails large computational effort. Thus, to reduce the contradictory effects of these two aspects, we propose a modification of GA that adds an alien member to the population at every generation. Adding an alien member to the current population at every generation increases the probability of obtaining the global minimum while maintaining a high convergence rate. With two test cases, we demonstrate the efficacy of the proposed GA by comparison with the conventional GA.
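
    The alien-member idea can be sketched in a few lines. The toy real-coded GA below is a hedged illustration only: the truncation selection, arithmetic crossover, Gaussian mutation, and all parameter values are generic choices rather than the authors' exact configuration; the one relevant feature is the single uniformly random "alien" injected into every generation.

```python
import math
import random

def evolve(fitness, bounds, pop_size=20, generations=80, seed=1):
    # Minimal real-coded GA with an "alien" member: one uniformly random
    # individual is injected into the population at every generation.
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - 1:
            a, b = rng.sample(parents, 2)
            c = 0.5 * (a + b)                          # arithmetic crossover
            c += rng.gauss(0.0, 0.05 * (hi - lo))      # Gaussian mutation
            children.append(min(hi, max(lo, c)))
        children.append(rng.uniform(lo, hi))           # the alien member
        pop = children
        best = min([best] + pop, key=fitness)          # track best-ever
    return best

# Multimodal test function: global minimum 0 at x = 0, many local minima.
f = lambda x: x * x + 10.0 * (1.0 - math.cos(3.0 * x))
best = evolve(f, (-10.0, 10.0))
```

    Because the alien is drawn uniformly from the whole search box, it keeps probing unexplored regions even after the rest of the population has collapsed into a single basin.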

  10. Experimental evaluation of dynamic data allocation strategies in a distributed database with changing workloads

    NASA Technical Reports Server (NTRS)

    Brunstrom, Anna; Leutenegger, Scott T.; Simha, Rahul

    1995-01-01

    Traditionally, allocation of data in distributed database management systems has been determined by off-line analysis and optimization. This technique works well for static database access patterns, but is often inadequate for frequently changing workloads. In this paper we address how to dynamically reallocate data for partitionable distributed databases with changing access patterns. Rather than relying on complicated and expensive optimization algorithms, we present a simple heuristic and show, via an implementation study, that it improves system throughput by 30 percent in a local area network based system. Using artificial wide area network delays, we show that dynamic reallocation can improve system throughput by a factor of two and a half for wide area networks. We also show that individual site load must be taken into consideration when reallocating data, and provide a simple policy that incorporates load in the reallocation decision.
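
    A minimal sketch of a load-aware reallocation heuristic in the spirit described above (all names, access counts, and the 0.8 load cap are illustrative assumptions, not the paper's actual policy): each partition moves to the site that accesses it most, unless that site is overloaded, in which case the next-best candidate is tried.

```python
def reallocate(partitions, access_counts, site_load, load_cap=0.8):
    # access_counts[p][s] = number of accesses to partition p from site s.
    # site_load[s] = current utilization of site s in [0, 1].
    placement = {}
    for p in partitions:
        counts = access_counts[p]
        # Try candidate sites, most-accessing first, skipping overloaded ones.
        for site in sorted(counts, key=counts.get, reverse=True):
            if site_load[site] < load_cap:
                placement[p] = site
                break
        else:
            # Every accessor is overloaded: fall back to the least-loaded site.
            placement[p] = min(site_load, key=site_load.get)
    return placement

placement = reallocate(
    partitions=["orders", "users"],
    access_counts={"orders": {"A": 90, "B": 10}, "users": {"A": 5, "B": 70}},
    site_load={"A": 0.3, "B": 0.9},
)
# "orders" goes to its top accessor A; B is over the load cap, so "users"
# falls through to the next candidate A.
```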

  11. [Thoughts on optimizing the breast cancer screening strategies and implementation effects].

    PubMed

    Wu, K J

    2018-02-01

    Reasonable and effective breast cancer screening enables early diagnosis of breast cancer, improves the cure rate, prolongs survival and improves patients' quality of life. China has made preliminary explorations and attempts in breast cancer screening; however, some problems remain unsolved regarding the proportion of opportunistic screening, the selection of screening targets, methods and frequency, and the judgment of screening results. Therefore, this article analyzes the above problems in detail and, drawing on clinical practice and on the background of constantly emerging new research results and techniques and the rapid development of artificial intelligence, presents thoughts and recommendations on how to optimize the breast cancer screening strategies and implementation effects in China: adjust measures to local conditions, provide personalized strategies, achieve precise screening, preach and educate, ensure health insurance coverage, improve quality control, offer technical support and employ artificial intelligence.

  12. COMPARISON OF NONLINEAR DYNAMICS OPTIMIZATION METHODS FOR APS-U

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Y.; Borland, Michael

    Many different objectives and genetic algorithms have been proposed for storage ring nonlinear dynamics performance optimization. These optimization objectives include nonlinear chromaticities and driving/detuning terms, on-momentum and off-momentum dynamic acceptance, chromatic detuning, local momentum acceptance, variation of transverse invariant, Touschek lifetime, etc. In this paper, the effectiveness of several different optimization methods and objectives is compared for the nonlinear beam dynamics optimization of the Advanced Photon Source upgrade (APS-U) lattice. The optimized solutions from these different methods are preliminarily compared in terms of the dynamic acceptance, local momentum acceptance, chromatic detuning, and other performance measures.

  13. Distributed Transforms for Efficient Data Gathering in Sensor Networks

    NASA Technical Reports Server (NTRS)

    Ortega, Antonio (Inventor); Shen, Godwin (Inventor); Narang, Sunil K. (Inventor); Perez-Trufero, Javier (Inventor)

    2014-01-01

    Devices, systems, and techniques for data-collecting networks such as wireless sensor networks are disclosed. A described technique includes detecting one or more remote nodes included in the wireless sensor network using a local power level that controls a radio range of the local node. The technique includes transmitting a local outdegree, which can be based on a quantity of the one or more remote nodes. The technique includes receiving one or more remote outdegrees from the one or more remote nodes. The technique includes determining a local node type of the local node based on detecting a node type of the one or more remote nodes, using the one or more remote outdegrees, and using the local outdegree. The technique includes adjusting characteristics of the wireless sensor network, including an energy usage characteristic and a data compression characteristic, by selectively modifying the local power level and selectively changing the local node type.
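
    The outdegree exchange can be illustrated with a small sketch. This is a hypothetical reading of the technique: the abstract does not state the node-type rule, so the code below uses a simple "locally best-connected node becomes an aggregator" criterion purely for illustration.

```python
import math

def outdegrees(positions, radio_range):
    # Local outdegree = number of remote nodes within this node's radio range.
    deg = {}
    for n, (x, y) in positions.items():
        deg[n] = sum(
            0 < math.hypot(x - x2, y - y2) <= radio_range
            for (x2, y2) in positions.values()
        )
    return deg

def node_types(positions, radio_range):
    # Each node compares its outdegree with its neighbours': the locally
    # best-connected node acts as an aggregator, the rest stay leaves.
    # (Illustrative rule; not the patent's actual criterion.)
    deg = outdegrees(positions, radio_range)
    types = {}
    for n, (x, y) in positions.items():
        nbrs = [m for m, (x2, y2) in positions.items()
                if m != n and math.hypot(x - x2, y - y2) <= radio_range]
        types[n] = "aggregator" if all(deg[n] >= deg[m] for m in nbrs) else "leaf"
    return types

# Four hypothetical nodes; "b" sits in range of all the others.
pos = {"a": (0, 0), "b": (1, 0), "c": (2, 0), "d": (1, 1)}
deg = outdegrees(pos, 1.0)
types = node_types(pos, 1.0)
```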

  14. Multi-rendezvous low-thrust trajectory optimization using costate transforming and homotopic approach

    NASA Astrophysics Data System (ADS)

    Chen, Shiyu; Li, Haiyang; Baoyin, Hexi

    2018-06-01

    This paper investigates a method for optimizing multi-rendezvous low-thrust trajectories using indirect methods. An efficient technique, labeled costate transforming, is proposed to optimize multiple trajectory legs simultaneously rather than optimizing each trajectory leg individually. Complex inner-point constraints and a large number of free variables are one main challenge in optimizing multi-leg transfers via shooting algorithms. This difficulty is reduced by first optimizing each trajectory leg individually; the results can then be utilized as an initial guess in the simultaneous optimization of multiple trajectory legs. In this paper, the limitations of similar techniques in previous research are overcome, and a homotopic approach is employed to improve the convergence efficiency of the shooting process in multi-rendezvous low-thrust trajectory optimization. Numerical examples demonstrate that the newly introduced techniques are valid and efficient.

  15. Non-Destructive Evaluation of Kissing Bonds using Local Defect Resonance (LDR) Spectroscopy: A Simulation Study

    NASA Astrophysics Data System (ADS)

    Delrue, S.; Tabatabaeipour, M.; Hettler, J.; Van Den Abeele, K.

    With the growing demand from industry to optimize and further develop existing Non-Destructive Testing & Evaluation (NDT&E) techniques, and to devise new methods to detect and characterize incipient damage with high sensitivity and increased quality, ample efforts have been devoted to better understand the typical behavior of kissing bonds, such as delaminations and cracks. Recently, it has been shown experimentally that the nonlinear ultrasonic response of kissing bonds can be enhanced by using Local Defect Resonance (LDR) spectroscopy. LDR spectroscopy is an efficient NDT technique that takes advantage of the characteristic frequencies of the defect (defect resonances) in order to provide maximum acoustic wave-defect interaction. In fact, for nonlinear methodologies, the ultrasonic excitation of the sample should occur at either multiples or integer ratios of the characteristic defect resonance frequencies in order to obtain the highest signal-to-noise response in nonlinear LDR spectroscopy. In this paper, the potential of using LDR spectroscopy for the detection, localization and characterization of kissing bonds is illustrated using a 3D simulation code for elastic wave propagation in materials containing closed but dynamically active cracks or delaminations. Using the model, we are able to define an appropriate method, based on the Scaling Subtraction Method (SSM), to determine the local defect resonance frequencies of a delamination in a composite plate and to illustrate an increase in defect nonlinearity due to LDR. The simulation results help us to obtain a better understanding of the concept of LDR and assist in the further design and testing of LDR spectroscopy for the detection, localization and characterization of kissing bonds.

  16. Quantum annealing for combinatorial clustering

    NASA Astrophysics Data System (ADS)

    Kumar, Vaibhaw; Bass, Gideon; Tomlin, Casey; Dulny, Joseph

    2018-02-01

    Clustering is a powerful machine learning technique that groups "similar" data points based on their characteristics. Many clustering algorithms work by approximating the minimization of an objective function, namely the sum of within-cluster distances between points. The straightforward approach involves examining all possible assignments of points to each of the clusters. This approach guarantees the solution will be a global minimum; however, the number of possible assignments scales quickly with the number of data points and becomes computationally intractable even for very small datasets. In order to circumvent this issue, cost function minima are found using popular local search-based heuristic approaches such as k-means and hierarchical clustering. Due to their greedy nature, such techniques do not guarantee that a global minimum will be found and can lead to sub-optimal clustering assignments. Other classes of global search-based techniques, such as simulated annealing, tabu search, and genetic algorithms, may offer better quality results but can be too time-consuming to implement. In this work, we describe how quantum annealing can be used to carry out clustering. We map the clustering objective to a quadratic binary optimization problem and discuss two clustering algorithms which are then implemented on commercially available quantum annealing hardware, as well as on a purely classical solver, "qbsolv." The first algorithm assigns N data points to K clusters, and the second one can be used to perform binary clustering in a hierarchical manner. We present our results in the form of benchmarks against well-known k-means clustering and discuss the advantages and disadvantages of the proposed techniques.
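
    For the binary (K = 2) case, one common way to phrase the within-cluster-distance objective as a quadratic binary optimization is sketched below; an exhaustive solver stands in for the annealer or "qbsolv" on this toy instance. The expansion used is standard QUBO algebra, not necessarily the paper's exact formulation.

```python
import itertools

def qubo_binary_clustering(points):
    # x_i = 1 puts point i in cluster 1, x_i = 0 in cluster 0.
    # Within-cluster distance  sum_{i<j} d_ij [x_i x_j + (1-x_i)(1-x_j)]
    # expands (dropping the constant sum of d_ij) to linear terms -d_ij on
    # x_i and x_j plus a quadratic term +2 d_ij on the pair (i, j).
    n = len(points)
    d = lambda i, j: sum((a - b) ** 2 for a, b in zip(points[i], points[j])) ** 0.5
    Q = {}
    for i in range(n):
        for j in range(i + 1, n):
            w = d(i, j)
            Q[(i, i)] = Q.get((i, i), 0.0) - w
            Q[(j, j)] = Q.get((j, j), 0.0) - w
            Q[(i, j)] = Q.get((i, j), 0.0) + 2.0 * w
    return Q

def brute_force(Q, n):
    # Stand-in for the annealer: exhaustively minimise x^T Q x for tiny n.
    def energy(x):
        return sum(v * x[i] * x[j] for (i, j), v in Q.items())
    return min(itertools.product((0, 1), repeat=n), key=energy)

pts = [(0, 0), (0, 1), (10, 0), (10, 1)]
x = brute_force(qubo_binary_clustering(pts), len(pts))
# The two tight pairs land in opposite clusters: (0, 0, 1, 1) or (1, 1, 0, 0).
```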

  17. SU-F-T-201: Acceleration of Dose Optimization Process Using Dual-Loop Optimization Technique for Spot Scanning Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirayama, S; Fujimoto, R

    Purpose: The purpose was to demonstrate a developed acceleration technique of dose optimization and to investigate its applicability to the optimization process in a treatment planning system (TPS) for proton therapy. Methods: In the developed technique, the dose matrix is divided into two parts, main and halo, based on beam sizes. The boundary of the two parts is varied depending on the beam energy and water equivalent depth by utilizing the beam size as a singular threshold parameter. The optimization is executed with two levels of iterations. In the inner loop, doses from the main part are updated, whereas doses from the halo part remain constant. In the outer loop, the doses from the halo part are recalculated. We implemented this technique to the optimization process in the TPS and investigated the dependence on the target volume of the speedup effect and applicability to the worst-case optimization (WCO) in benchmarks. Results: We created irradiation plans for various cubic targets and measured the optimization time varying the target volume. The speedup effect was improved as the target volume increased, and the calculation speed increased by a factor of six for a 1000 cm3 target. An IMPT plan for the RTOG benchmark phantom was created in consideration of ±3.5% range uncertainties using the WCO. Beams were irradiated at 0, 45, and 315 degrees. The target’s prescribed dose and OAR’s Dmax were set to 3 Gy and 1.5 Gy, respectively. Using the developed technique, the calculation speed increased by a factor of 1.5. Meanwhile, no significant difference in the calculated DVHs was found before and after incorporating the technique into the WCO. Conclusion: The developed technique could be adapted to the TPS’s optimization. The technique was effective particularly for large target cases.
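
    The inner/outer structure can be mimicked on a toy least-squares dose model. Everything below is a schematic reconstruction under stated assumptions (random influence matrices, a small halo term, projected gradient steps); it only illustrates the idea of freezing the halo dose during inner iterations and refreshing it in the outer loop.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_spots = 40, 10
A_main = rng.uniform(0.2, 1.0, (n_voxels, n_spots))         # short-range dose
A_halo = 0.01 * rng.uniform(0.0, 1.0, (n_voxels, n_spots))  # long-range tail
w_true = rng.uniform(0.5, 1.5, n_spots)
target = (A_main + A_halo) @ w_true                 # consistent prescription

def dual_loop(A_main, A_halo, target, outer=8, inner=300):
    # Outer loop: recompute the slowly varying halo dose.
    # Inner loop: projected gradient steps using the main part only.
    w = np.ones(A_main.shape[1])
    lr = 1.0 / np.linalg.norm(A_main, 2) ** 2       # stable step size
    for _ in range(outer):
        halo_dose = A_halo @ w                      # frozen during inner loop
        for _ in range(inner):
            grad = A_main.T @ (A_main @ w + halo_dose - target)
            w = np.maximum(w - lr * grad, 0.0)      # spot weights stay >= 0
    return w

w = dual_loop(A_main, A_halo, target)
err = np.linalg.norm((A_main + A_halo) @ w - target) / np.linalg.norm(target)
```

    The savings come from never forming the full-matrix gradient in the inner loop; the cheap halo contribution is refreshed only a handful of times.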

  18. Denoising and segmentation of retinal layers in optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Dash, Puspita; Sigappi, A. N.

    2018-04-01

    Optical Coherence Tomography (OCT) is an imaging technique used to localize the intra-retinal boundaries for the diagnosis of macular diseases. Due to speckle noise and low image contrast, accurate segmentation of individual retinal layers is difficult. Hence, a method for retinal layer segmentation from OCT images is presented. This paper proposes a pre-processing filtering approach for denoising and a graph-based technique for segmenting retinal layers in OCT images. These techniques are used for segmentation of retinal layers in both normal subjects and patients with Diabetic Macular Edema (DME). An algorithm based on gradient information and shortest-path search is applied to optimize the edge selection. In this paper the four main layers of the retina are segmented, namely the internal limiting membrane (ILM), retinal pigment epithelium (RPE), inner nuclear layer (INL) and outer nuclear layer (ONL). The proposed method is applied to a database of OCT images of ten normal subjects and twenty DME-affected patients, and the results are found to be promising.
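
    The gradient-plus-shortest-path step can be illustrated on a tiny synthetic B-scan. The sketch follows the general recipe of graph-based layer segmentation (node costs are low where the vertical intensity gradient is strong, and Dijkstra tracks the boundary across columns); the weight formula and the image are illustrative assumptions, not the paper's exact pipeline.

```python
import heapq

def segment_layer(img):
    # Node weight is low where the dark-below/bright-above gradient is high;
    # Dijkstra then finds the cheapest left-to-right path, i.e. the boundary.
    rows, cols = len(img), len(img[0])
    g = [[img[r - 1][c] - img[r][c] if r > 0 else 0.0 for c in range(cols)]
         for r in range(rows)]
    def w(r, c):
        return 1.0 - g[r][c] + 1e-6        # cheap where the gradient is strong
    # Dijkstra from a virtual source connected to every node in column 0.
    dist, prev = {}, {}
    pq = [(w(r, 0), (r, 0), None) for r in range(rows)]
    heapq.heapify(pq)
    while pq:
        d, (r, c), p = heapq.heappop(pq)
        if (r, c) in dist:
            continue
        dist[(r, c)], prev[(r, c)] = d, p
        for dr in (-1, 0, 1):              # move one column right per step
            r2, c2 = r + dr, c + 1
            if 0 <= r2 < rows and c2 < cols and (r2, c2) not in dist:
                heapq.heappush(pq, (d + w(r2, c2), (r2, c2), (r, c)))
    end = min(((r, cols - 1) for r in range(rows)), key=dist.get)
    path, node = [], end
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1]                      # [(row, col), ...] left to right

# Synthetic two-layer "B-scan": bright tissue above, dark below, with the
# boundary stepping from row 3 to row 4 halfway across.
img = [[0.9 if r < (3 if c < 4 else 4) else 0.1 for c in range(8)]
       for r in range(6)]
boundary = [r for r, _ in segment_layer(img)]
```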

  19. Generalized ISAR--part II: interferometric techniques for three-dimensional location of scatterers.

    PubMed

    Given, James A; Schmidt, William R

    2005-11-01

    This paper is the second part of a study dedicated to optimizing diagnostic inverse synthetic aperture radar (ISAR) studies of large naval vessels. The method developed here provides accurate determination of the position of important radio-frequency scatterers by combining accurate knowledge of ship position and orientation with specialized signal processing. The method allows for the simultaneous presence of substantial Doppler returns from both change of roll angle and change of aspect angle by introducing generalized ISAR coordinates. The first paper provides two modes of interpreting ISAR plots, one valid when roll Doppler is dominant, the other valid when the aspect-angle Doppler is dominant. Here, we provide, for each type of ISAR plot technique, a corresponding interferometric ISAR (InSAR) technique. The former, aspect-angle dominated InSAR, is a generalization of standard InSAR; the latter, roll-angle dominated InSAR, appears to be new to this work. Both methods are shown to be efficient at identifying localized scatterers under simulation conditions.

  20. A Guided, Conservative Approach for the Management of Localized Mandibular Anterior Tooth Wear.

    PubMed

    Mehta, Shamir B; Francis, Selar; Banerji, Subir

    2016-03-01

    The successful management of the worn mandibular anterior dentition can present a difficult challenge to the dental operator. The purpose of this article is to describe a case report illustrating the use of a guided, three-dimensional protocol for the ultra-conservative and predictable restoration of the worn lower anterior dentition using direct resin composite. This technique utilizes information based on established biomechanical and occlusal principles to fabricate a diagnostic wax-up, which is duplicated in dental stone. This is used to prepare a vacuum-formed modified stent, assisting the clinician in placing directly bonded resin composite restorations to restore the worn lower anterior dentition. The technique, described in 2012 and referred to as 'injection moulding', has the potential to offer optimal form, function and aesthetic outcome in an efficient manner. CPD/Clinical Relevance: This article aims to describe an alternative technique to simplify the processes involved in the restoration of worn lower anterior teeth.

  1. Structured back gates for high-mobility two-dimensional electron systems using oxygen ion implantation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berl, M., E-mail: mberl@phys.ethz.ch; Tiemann, L.; Dietsche, W.

    2016-03-28

    We present a reliable method to obtain patterned back gates compatible with high mobility molecular beam epitaxy via local oxygen ion implantation that suppresses the conductivity of an 80 nm thick silicon-doped GaAs epilayer. Our technique was optimized to circumvent several constraints of other gating and implantation methods. The ion-implanted surface remains atomically flat, which allows unperturbed epitaxial overgrowth. We demonstrate the practical application of this gating technique by using magneto-transport spectroscopy on a two-dimensional electron system (2DES) with a mobility exceeding 20 × 10^6 cm^2/V s. The back gate was spatially separated from the Ohmic contacts of the 2DES, thus minimizing the probability for electrical shorts or leakage and permitting simple contacting schemes.

  2. Dynamic two-photon imaging of the immune response to Toxoplasma gondii infection.

    PubMed

    Luu, L; Coombes, J L

    2015-03-01

    Toxoplasma gondii is a highly successful parasite that can manipulate host immune responses to optimize its persistence and spread. As a result, a highly complex relationship exists between T. gondii and the immune system of the host. Advances in imaging techniques, and in particular, the application of two-photon microscopy to mouse infection models, have made it possible to directly visualize interactions between parasites and the host immune system as they occur in living tissues. Here, we will discuss how dynamic imaging techniques have provided unexpected new insight into (i) how immune responses are dynamically regulated by cells and structures in the local tissue environment, (ii) how protective responses to T. gondii are generated and (iii) how the parasite exploits the immune system for its own benefit. © 2014 John Wiley & Sons Ltd.

  3. External beam techniques to boost cervical cancer when brachytherapy is not an option—theories and applications

    PubMed Central

    Kilic, Sarah; Khan, Atif J.; Beriwal, Sushil; Small, William

    2017-01-01

    The management of locally advanced cervical cancer relies on brachytherapy (BT) as an integral part of the radiotherapy delivery armamentarium. Occasionally, intracavitary BT is neither possible nor available. In these circumstances, post-external beam radiotherapy (EBRT) interstitial brachytherapy and/or hysterectomy may represent viable options that must be adequately executed in a timely manner. However, if these options are not applicable due to patient related or facility related reasons, a formal contingency plan should be in place. Innovative EBRT techniques such as intensity modulated and stereotactic radiotherapy may be considered for patients unable to undergo brachytherapy. Relying on provocative arguments and recent data, this review explores the rationale for and limitations of non-brachytherapy substitutes in that setting aiming to establish a formal process for the optimal execution of this alternative plan. PMID:28603722

  4. A study of the stress wave factor technique for the characterization of composite materials

    NASA Technical Reports Server (NTRS)

    Henneke, E. G., II; Duke, J. C., Jr.; Stinchcomb, W. W.; Govada, A.; Lemascon, A.

    1983-01-01

    A testing program was undertaken to provide an independent investigation and evaluation of the stress wave factor (SWF) for characterizing the mechanical behavior of composite laminates. Data obtained from a very large number of tests performed to determine the reproducibility of the SWF measurement are presented. It was determined that, with some optimization of experimental parameters, the SWF value can be reproduced to within ±10%. Results are also given which show that, after careful calibration procedures, the lowest SWF value along the length of a specimen correlates very closely with the site of final failure when the specimen is loaded in tension. Finally, using a moire interferometry technique, it was found that local regions having the highest in-plane strains under tensile loading also had the lowest SWF values.

  5. Model-Based Speech Signal Coding Using Optimized Temporal Decomposition for Storage and Broadcasting Applications

    NASA Astrophysics Data System (ADS)

    Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret

    2003-12-01

    A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance of the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.

  6. Distributed model predictive control for constrained nonlinear systems with decoupled local dynamics.

    PubMed

    Zhao, Meng; Ding, Baocang

    2015-03-01

    This paper considers the distributed model predictive control (MPC) of nonlinear large-scale systems with dynamically decoupled subsystems. According to the coupled states in the overall cost function of centralized MPC, the neighbors of each subsystem are identified and fixed, and the overall objective function is decomposed into the local optimizations. In order to guarantee the closed-loop stability of the distributed MPC algorithm, the overall compatibility constraint of the centralized MPC algorithm is decomposed into the local controllers. The communication load between each subsystem and its neighbors is relatively low: only the current states before optimization and the optimized input variables after optimization are transferred. For each local controller, the quasi-infinite horizon MPC algorithm is adopted, and the global closed-loop system is proven to be exponentially stable. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  7. Particle-in-Cell laser-plasma simulation on Xeon Phi coprocessors

    NASA Astrophysics Data System (ADS)

    Surmin, I. A.; Bastrakov, S. I.; Efimenko, E. S.; Gonoskov, A. A.; Korzhimanov, A. V.; Meyerov, I. B.

    2016-05-01

    This paper concerns the development of a high-performance implementation of the Particle-in-Cell method for plasma simulation on Intel Xeon Phi coprocessors. We discuss the suitability of the method for Xeon Phi architecture and present our experience in the porting and optimization of the existing parallel Particle-in-Cell code PICADOR. Direct porting without code modification gives performance on Xeon Phi close to that of an 8-core CPU on a benchmark problem with 50 particles per cell. We demonstrate step-by-step optimization techniques, such as improving data locality, enhancing parallelization efficiency and vectorization leading to an overall 4.2 × speedup on CPU and 7.5 × on Xeon Phi compared to the baseline version. The optimized version achieves 16.9 ns per particle update on an Intel Xeon E5-2660 CPU and 9.3 ns per particle update on an Intel Xeon Phi 5110P. For a real problem of laser ion acceleration in targets with surface grating, where a large number of macroparticles per cell is required, the speedup of Xeon Phi compared to CPU is 1.6 ×.

  8. Optimal control theory with continuously distributed target states: An application to NaK

    NASA Astrophysics Data System (ADS)

    Kaiser, Andreas; May, Volkhard

    2006-01-01

    Laser pulse control of molecular dynamics is studied theoretically by using optimal control theory. The control theory is extended to target states which are distributed in time as well as in a space of parameters which are responsible for a change of individual molecular properties. This generalized treatment of a control task is first applied to wave packet formation in randomly oriented diatomic systems. Concentrating on an ensemble of NaK molecules which are not aligned, the control yield decreases drastically when compared with an aligned ensemble. Second, we demonstrate for NaK the maximization of the probe pulse transient absorption in a pump-probe scheme with an optimized pump pulse. These computations point toward an overall optical control scheme, offering a flexible technique to form particular wave packets in the excited-state potential energy surface. In particular, it is shown that considerable wave packet localization at the turning points of the first-excited Σ-state potential energy surfaces of NaK may be achieved. The dependence of the control yield on the probe pulse parameters is also discussed.

  9. Optimally Distributed Kalman Filtering with Data-Driven Communication †

    PubMed Central

    Dormann, Katharina

    2018-01-01

    For multisensor data fusion, distributed state estimation techniques that enable a local processing of sensor data are the means of choice in order to minimize storage and communication costs. In particular, a distributed implementation of the optimal Kalman filter has recently been developed. A significant disadvantage of this algorithm is that the fusion center needs access to each node so as to compute a consistent state estimate, which requires full communication each time an estimate is requested. In this article, different extensions of the optimally distributed Kalman filter are proposed that employ data-driven transmission schemes in order to reduce communication expenses. As a first relaxation of the full-rate communication scheme, it can be shown that each node only has to transmit every second time step without endangering consistency of the fusion result. Also, two data-driven algorithms are introduced that even allow for lower transmission rates, and bounds are derived to guarantee consistent fusion results. Simulations demonstrate that the data-driven distributed filtering schemes can outperform a centralized Kalman filter that requires each measurement to be sent to the center node. PMID:29596392
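
    A much-simplified sketch of data-driven transmission is shown below: a single node runs a scalar Kalman filter and transmits its estimate only when it has drifted from the value the fusion center last received ("send-on-delta"). This illustrates only the communication/accuracy trade-off; the article's algorithms additionally guarantee consistent fusion, which this toy does not attempt.

```python
import random

def simulate(threshold, steps=200, seed=7):
    rng = random.Random(seed)
    x = 0.0                          # true scalar state (random walk)
    est, var = 0.0, 1.0              # local Kalman estimate and variance
    q, r = 0.01, 0.25                # process / measurement noise variances
    sent, center = 0, 0.0            # transmissions and the center's value
    errs = []
    for _ in range(steps):
        x += rng.gauss(0, q ** 0.5)
        z = x + rng.gauss(0, r ** 0.5)
        var += q                             # predict
        k = var / (var + r)                  # Kalman gain
        est += k * (z - est)                 # update
        var *= (1.0 - k)
        if abs(est - center) > threshold:    # data-driven: send on deviation
            center, sent = est, sent + 1
        errs.append((center - x) ** 2)
    return sent, sum(errs) / steps           # (transmissions, center MSE)

full_rate = simulate(threshold=0.0)    # transmit every step
reduced   = simulate(threshold=0.3)    # event-triggered transmission
```

    With the deviation trigger the node transmits only a fraction of the time, while the center's error stays bounded by roughly the threshold plus the filter's steady-state error.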

  10. The optimal structure-conductivity relation in epoxy-phthalocyanine nanocomposites.

    PubMed

    Huijbregts, L J; Brom, H B; Brokken-Zijp, J C M; Kemerink, M; Chen, Z; Goeje, M P de; Yuan, M; Michels, M A J

    2006-11-23

    Phthalcon-11 (aquocyanophthalocyaninatocobalt (III)) forms semiconducting nanocrystals that can be dispersed in epoxy coatings to obtain a semiconducting material with a low percolation threshold. We investigated the structure-conductivity relation in this composite and the deviation from its optimal realization by combining two techniques. The real parts of the electrical conductivity of a Phthalcon-11/epoxy coating and of Phthalcon-11 powder were measured by dielectric spectroscopy as a function of frequency and temperature. Conducting atomic force microscopy (C-AFM) was applied to quantify the conductivity through the coating locally along the surface. This combination gives an excellent tool to visualize the particle network. We found that a large fraction of the crystals is organized in conducting channels of fractal building blocks. In this picture, a low percolation threshold automatically leads to a conductivity that is much lower than that of the filler. Since the structure-conductivity relation for the found network is almost optimal, a drastic increase in the conductivity of the coating cannot be achieved by changing the particle network, but only by using a filler with a higher conductivity level.

  11. Gradient optimization of finite projected entangled pair states

    NASA Astrophysics Data System (ADS)

    Liu, Wen-Yuan; Dong, Shao-Jun; Han, Yong-Jian; Guo, Guang-Can; He, Lixin

    2017-05-01

    Projected entangled pair states (PEPS) methods have been proven to be powerful tools to solve strongly correlated quantum many-body problems in two dimensions. However, due to the high computational cost, which scales steeply with the virtual bond dimension D, practical applications of PEPS are often limited to rather small bond dimensions, which may not be large enough for some highly entangled systems, for instance, frustrated systems. Optimization of the ground state using the imaginary time evolution method with a simple update scheme can reach larger bond dimensions; however, the accuracy of the rough approximation to the environment of the local tensors is questionable. Here, we demonstrate that combining the imaginary time evolution method with a simple update scheme, Monte Carlo sampling techniques and gradient optimization offers an efficient method to calculate the PEPS ground state. By taking advantage of massive parallel computing, we can study quantum systems with larger bond dimensions up to D = 10 without resorting to any symmetry. Benchmark tests of the method on the J1-J2 model give impressive accuracy compared with exact results.

  12. An adjoint method for gradient-based optimization of stellarator coil shapes

    NASA Astrophysics Data System (ADS)

    Paul, E. J.; Landreman, M.; Bader, A.; Dorland, W.

    2018-07-01

    We present a method for stellarator coil design via gradient-based optimization of the coil-winding surface. The REGCOIL (Landreman 2017 Nucl. Fusion 57 046003) approach is used to obtain the coil shapes on the winding surface using a continuous current potential. We apply the adjoint method to calculate derivatives of the objective function, allowing for efficient computation of analytic gradients while eliminating the numerical noise of approximate derivatives. We are able to improve engineering properties of the coils by targeting the root-mean-squared current density in the objective function. We obtain winding surfaces for W7-X and HSX which simultaneously decrease the normal magnetic field on the plasma surface and increase the surface-averaged distance between the coils and the plasma in comparison with the actual winding surfaces. The coils computed on the optimized surfaces feature a smaller toroidal extent and curvature and increased inter-coil spacing. A technique for computation of the local sensitivity of figures of merit to normal displacements of the winding surface is presented, with potential applications for understanding engineering tolerances.
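
    The payoff of the adjoint method, namely the whole gradient from one extra linear solve, free of finite-difference noise, can be demonstrated on a generic quadratic objective constrained by a linear "state equation" (this toy is not the REGCOIL model; the matrices are arbitrary assumptions).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
A = rng.normal(size=(n, n)) + n * np.eye(n)    # well-conditioned system matrix
u_target = rng.normal(size=n)

def J(p):
    u = np.linalg.solve(A, p)                  # state equation  A u = p
    return float(np.sum((u - u_target) ** 2))  # objective J(p)

def adjoint_grad(p):
    # One forward solve plus one adjoint solve gives the full gradient dJ/dp.
    u = np.linalg.solve(A, p)
    lam = np.linalg.solve(A.T, 2.0 * (u - u_target))   # adjoint equation
    return lam

p = rng.normal(size=n)
g_adj = adjoint_grad(p)

# Central finite differences need 2n objective evaluations (2n solves) and
# carry truncation/roundoff noise that the adjoint gradient avoids.
h = 1e-6
g_fd = np.array([(J(p + h * np.eye(n)[i]) - J(p - h * np.eye(n)[i])) / (2 * h)
                 for i in range(n)])
```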

  13. Hierarchical image segmentation via recursive superpixel with adaptive regularity

    NASA Astrophysics Data System (ADS)

    Nakamura, Kensuke; Hong, Byung-Woo

    2017-11-01

    A fast and accurate hierarchical segmentation algorithm based on a recursive superpixel technique is presented. We propose a superpixel energy formulation in which the trade-off between data fidelity and regularization is dynamically determined from the local residual in the energy optimization procedure. We also present an energy optimization algorithm that allows a pixel to be shared by multiple regions, improving both the accuracy and the appropriateness of the number of segments. Qualitative and quantitative evaluations demonstrate that our algorithm, combining the proposed energy and optimization, outperforms the conventional k-means algorithm by up to 29.10% in F-measure. We also perform a comparative analysis with state-of-the-art hierarchical segmentation algorithms. Our algorithm yields smooth regions throughout the hierarchy, as opposed to the others, which include insignificant details, and it achieves a better balance between accuracy and computational time. Specifically, our method runs 36.48% faster than the region-merging approach, the fastest of the compared algorithms, while achieving comparable accuracy.
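    The adaptive data-versus-regularity trade-off can be sketched in one dimension with a SLIC-style clustering loop. This is a minimal illustration of the idea, under assumed forms for the energy and the residual-driven rescaling, not the paper's 2-D recursive algorithm:

```python
import numpy as np

def segment_1d(signal, k=3, lam=0.5, iters=10):
    """SLIC-style superpixel clustering on a 1-D signal. Each pixel
    minimizes a data term (squared intensity difference to a cluster
    center) plus lam times a spatial term. Each cluster's lam is then
    rescaled from its local residual -- a crude stand-in for adaptive
    regularity: poorly fit regions get a weaker regularizer."""
    n = len(signal)
    pos = np.arange(n, dtype=float)
    centers_i = np.linspace(signal.min(), signal.max(), k)   # intensities
    centers_p = np.linspace(0.0, n - 1.0, k)                 # positions
    lam_k = np.full(k, lam)
    for _ in range(iters):
        cost = (signal[None] - centers_i[:, None]) ** 2 \
             + lam_k[:, None] * ((pos[None] - centers_p[:, None]) / n) ** 2
        labels = cost.argmin(axis=0)                         # assignment step
        for j in range(k):                                   # update step
            m = labels == j
            if m.any():
                centers_i[j] = signal[m].mean()
                centers_p[j] = pos[m].mean()
                resid = ((signal[m] - centers_i[j]) ** 2).mean()
                lam_k[j] = lam / (1.0 + resid)   # high residual -> less regular
    return labels

sig = np.concatenate([np.zeros(20), np.ones(20), 2.0 * np.ones(20)])
labels = segment_1d(sig)
print(labels)                # three contiguous, piecewise-constant segments
```

    On this piecewise-constant signal the three clusters lock onto the three plateaus; lowering the regularizer where the residual is high lets boundaries follow the data rather than the spatial prior.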

  14. [Evaluation and results of ablative therapies in prostate cancer].

    PubMed

    Renard-Penna, R; Sanchez-Salas, R; Barret, E; Cosset, J M; de Vergie, S; Sapetti, J; Ingels, A; Gangi, A; Lang, H; Cathelineau, X

    2017-11-01

    To review the state of the art of evaluation methods and current results of ablative therapies for localized prostate cancer. A review of the scientific literature was performed in the Medline (http://www.ncbi.nlm.nih.gov) and Embase (http://www.embase.com) databases using different associations of keywords. Publications were selected based on methodology, language and relevance. After selection, 102 articles were analysed. Analysing the results of ablative therapies is presently difficult given the heterogeneity of indications, techniques and follow-up. However, results from the most recent and homogeneous studies are encouraging. Oncologically, postoperative biopsies (the most important criterion) are negative (without any tumor cells in the treated area) in 75 to 95% of cases. Functionally, urinary and sexual preoperative status is preserved (or recovered early) in more than 90% of the patients treated. A growing number of studies also underline the correlation between the results and the technique used, considering the volume of the gland and, moreover, the localization of the "index lesion". Post-treatment pathological evaluation by biopsies (targeted with MRI or, perhaps in the near future, with innovative ultrasonography) is the cornerstone of the oncological evaluation of ablative therapies. Ongoing trials will make it possible to standardize follow-up and to determine the best indications and techniques in order to optimize oncological and functional results for each patient treated. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  15. Evaluation of structure-reactivity descriptors and biological activity spectra of 4-(6-methoxy-2-naphthyl)-2-butanone using spectroscopic techniques

    NASA Astrophysics Data System (ADS)

    Agrawal, Megha; Deval, Vipin; Gupta, Archana; Sangala, Bagvanth Reddy; Prabhu, S. S.

    2016-10-01

    The structure and several spectroscopic features, along with reactivity parameters, of the compound 4-(6-methoxy-2-naphthyl)-2-butanone (Nabumetone) have been studied using experimental techniques and tools derived from quantum chemical calculations. Structure optimization is followed by force-field calculations based on density functional theory (DFT) at the B3LYP/6-311++G(d,p) level of theory. The vibrational spectra have been interpreted with the aid of normal coordinate analysis. The UV-visible spectrum and the effect of solvent have been discussed. Electronic properties such as the HOMO and LUMO energies have been determined by the TD-DFT approach. In order to address various aspects of pharmacological science, several chemical reactivity descriptors, namely the chemical potential, global hardness and electrophilicity, have been evaluated. Local reactivity descriptors, the Fukui functions and local softnesses, have also been calculated to identify the reactive sites within the molecule. Aqueous solubility and lipophilicity, which are crucial for estimating the transport properties of organic molecules in drug development, have been calculated. Biological effects and toxic/side effects have been estimated on the basis of prediction of activity spectra for substances (PASS) results and their analysis with the PharmaExpert software. Using the THz-TDS technique, the frequency-dependent absorption of NBM has been measured in the frequency range up to 3 THz.
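    The global reactivity descriptors named here follow directly from the frontier-orbital energies via standard Koopmans-type relations. The HOMO/LUMO values below are illustrative placeholders, not the published B3LYP results for Nabumetone:

```python
# Conceptual-DFT global reactivity descriptors from frontier-orbital
# energies: I ~ -E_HOMO (ionization potential), A ~ -E_LUMO (electron
# affinity). The orbital energies are hypothetical, for illustration.
e_homo, e_lumo = -6.0, -1.5                  # eV, hypothetical

ionization = -e_homo
affinity = -e_lumo
mu = -(ionization + affinity) / 2            # chemical potential
eta = (ionization - affinity) / 2            # global hardness
omega = mu ** 2 / (2 * eta)                  # electrophilicity index

print(mu, eta, omega)                        # -3.75 2.25 3.125
```

    A large HOMO-LUMO gap gives a large hardness (low reactivity), while a low chemical potential combined with a small hardness yields a high electrophilicity index, which is the qualitative reading these descriptors are used for.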

  16. Sequential Super-Resolution Imaging of Bacterial Regulatory Proteins: The Nucleoid and the Cell Membrane in Single, Fixed E. coli Cells.

    PubMed

    Spahn, Christoph; Glaesmann, Mathilda; Gao, Yunfeng; Foo, Yong Hwee; Lampe, Marko; Kenney, Linda J; Heilemann, Mike

    2017-01-01

    Despite their small size and lack of compartmentalization, bacteria exhibit a striking degree of cellular organization in both time and space. During the last decade, a group of new microscopy techniques, termed super-resolution microscopy or nanoscopy, emerged that facilitate visualizing the organization of proteins in bacteria at the nanoscale. Single-molecule localization microscopy (SMLM) is especially well suited to reveal a wide range of new information regarding protein organization, interaction, and dynamics in single bacterial cells. Recent developments in click chemistry allow the bacterial chromatin to be visualized with a resolution of ~20 nm, providing valuable information about the ultrastructure of bacterial nucleoids, especially at short generation times. In this chapter, we describe an easy-to-implement protocol for determining precise structural information about bacterial nucleoids in fixed cells using direct stochastic optical reconstruction microscopy (dSTORM). In combination with quantitative photoactivated localization microscopy (PALM), the spatial relationship of proteins with the bacterial chromosome can be studied. The position of a protein of interest with respect to the nucleoids and the cell cylinder can be visualized by super-resolving the membrane using point accumulation for imaging in nanoscale topography (PAINT). Combining the different SMLM techniques in a sequential workflow maximizes the information that can be extracted from single cells while maintaining optimal imaging conditions for each technique.

  17. Electrode-shaping for the excitation and detection of permitted arbitrary modes in arbitrary geometries in piezoelectric resonators.

    PubMed

    Pulskamp, Jeffrey S; Bedair, Sarah S; Polcawich, Ronald G; Smith, Gabriel L; Martin, Joel; Power, Brian; Bhave, Sunil A

    2012-05-01

    This paper reports theoretical analysis and experimental results for a numerical electrode-shaping design technique for piezoelectric resonators that permits the excitation of arbitrary modes in arbitrary geometries, limited only to those modes allowed by the nonzero piezoelectric coefficients and the electrode configuration. The technique directly determines optimal electrode shapes by assessing the local suitability of excitation and detection electrode placement on two-port resonators, without the need for iterative numerical techniques. It is demonstrated in 61 different electrode designs in lead zirconate titanate (PZT) thin film on silicon RF microelectromechanical system (MEMS) plate, beam, ring, and disc resonators for out-of-plane flexural and various contour modes up to 200 MHz. The average squared effective electromechanical coupling factor for the designs was 0.54%, approximately equivalent to the theoretical maximum of 0.53% for a fully electroded length-extensional-mode beam resonator made of the same composite. The average improvement in S(21) for the electrode-shaped designs was 14.6 dB, with a maximum improvement of 44.3 dB. Through this piezoelectric electrode-shaping technique, 95% of the designs showed a reduction in insertion loss.
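    The physical intuition behind electrode shaping can be sketched numerically: the charge an electrode picks up is proportional to the bending strain (mode curvature) integrated under it, so patches matched to the curvature sign add constructively, while a uniform electrode can cancel almost entirely. The example assumes a simple pinned-pinned flexural beam mode, not the paper's PZT-on-silicon composites:

```python
import numpy as np

# For an even flexural mode, a single full-coverage electrode nets
# nearly zero charge (positive and negative strain regions cancel),
# while sign-matched, opposite-polarity patches maximize the coupling.
x = np.linspace(0.0, 1.0, 2001)              # normalized beam coordinate
mode = np.sin(2 * np.pi * x)                 # 2nd mode of a pinned-pinned beam
curvature = -(2 * np.pi) ** 2 * mode         # bending strain ~ mode curvature

full = curvature.mean()                      # uniform electrode: cancels
shaped = np.abs(curvature).mean()            # sign-matched patches

print(abs(full), shaped)                     # ~0 vs a large coupling integral
```

    This sign-matching idea is the one-dimensional analogue of assessing the "local suitability" of electrode placement pointwise rather than iterating over candidate shapes.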

  18. LOCSET Phase Locking: Operation, Diagnostics, and Applications

    NASA Astrophysics Data System (ADS)

    Pulford, Benjamin N.

    The aim of this dissertation is to discuss the theoretical and experimental work recently done with the Locking of Optical Coherence via Single-detector Electronic-frequency Tagging (LOCSET) phase locking technique developed and employed at AFRL. The primary objectives of this effort are to detail the fundamental operation of the LOCSET phase locking technique, identify the conditions under which the LOCSET control electronics operate optimally, demonstrate LOCSET phase locking with higher channel counts than ever before, and extend the LOCSET technique to correct for low-order, atmospherically induced phase aberrations introduced to the output of a tiled array of coherently combinable beams. The experimental work performed for this effort resulted in the coherent combination of 32 low-power optical beams with an unprecedented LOCSET phase error of λ/71 RMS in a local-loop beam combination configuration. The LOCSET phase locking technique was also successfully extended, for the first time, to an Object In the Loop (OIL) configuration by utilizing light scattered off a remote object as the optical return signal for the LOCSET phase control electronics. This LOCSET-OIL technique is capable of correcting for low-order phase aberrations caused by atmospheric turbulence applied across a tiled-array output.
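    The frequency-tagging principle behind LOCSET can be sketched in simulation: each beam's phase is dithered at a unique RF frequency, and lock-in demodulation of the single combined-intensity signal at each tag frequency yields that beam's phase error relative to the rest of the array. The dither depths, tag frequencies, and loop gain below are illustrative choices, not AFRL's operating parameters:

```python
import numpy as np

fs = 1.0e6                                   # detector sampling rate, Hz
t = np.arange(0, 0.005, 1 / fs)              # one demodulation window
tags = np.array([11e3, 17e3, 23e3])          # unique RF tag per beam
beta = 0.1                                   # phase-dither depth, rad
phases = np.array([0.6, -0.4, 0.2])          # unknown phase errors, rad
gain = 2.0

for _ in range(200):
    dither = beta * np.sin(2 * np.pi * tags[:, None] * t)
    field = np.exp(1j * (phases[:, None] + dither)).sum(axis=0)
    intensity = np.abs(field) ** 2           # single-photodetector signal
    # Lock-in demodulation at each tag frequency isolates that beam's
    # error signal, proportional to -sin(phase mismatch) for small beta.
    err = (intensity * np.sin(2 * np.pi * tags[:, None] * t)).mean(axis=1)
    phases = phases + gain * err             # feedback pulls phases together

spread = phases.max() - phases.min()
print(spread)                                # residual phase spread -> ~0
```

    The tag frequencies are chosen so that no sum, difference, or harmonic coincides with another tag within the integration window; with that condition met, the loop drives all three phases to a common value.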

  19. Adaptive near-field beamforming techniques for sound source imaging.

    PubMed

    Cho, Yong Thung; Roan, Michael J

    2009-02-01

    Phased-array signal processing techniques such as beamforming have a long history in applications such as sonar for the detection and localization of far-field sound sources. Two sometimes competing challenges arise in any type of spatial processing: minimizing contributions from directions other than the look direction and minimizing the width of the main lobe. To tackle this problem, a large body of work has been devoted to adaptive procedures that attempt to minimize side-lobe contributions to the spatial processor output. In this paper, two adaptive beamforming procedures, minimum variance distortionless response (MVDR) and weight optimization to minimize maximum side lobes, are modified for use in source visualization applications to estimate beamforming pressure and intensity from near-field pressure measurements. These adaptive techniques are compared to a fixed near-field focusing technique; all techniques use near-field beamforming weightings focused at estimated source locations, based on spherical-wave array manifold vectors with spatial windows. The sound source resolution accuracies of the near-field imaging procedures with the different weighting strategies are compared using numerical simulations in both anechoic and reverberant environments with random measurement noise. Experimental results are also given for near-field sound pressure measurements of an enclosed loudspeaker.
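    The MVDR weighting named here has a closed form: minimize the output power subject to unit gain in the look direction. The sketch below uses simple far-field steering vectors for an assumed uniform line array, whereas the paper uses spherical-wave, near-field manifold vectors:

```python
import numpy as np

def mvdr_weights(R, a):
    """MVDR: minimize w^H R w subject to w^H a = 1 (unit gain on the
    look direction); closed form w = R^{-1} a / (a^H R^{-1} a)."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

def steer(n, theta, d=0.5):
    """Far-field steering vector of an n-element uniform line array,
    element spacing d in wavelengths (illustrative geometry)."""
    return np.exp(2j * np.pi * d * np.arange(n) * np.sin(theta))

n = 8
a_look = steer(n, 0.0)                       # look direction: broadside
a_jam = steer(n, 0.5)                        # unwanted source off-axis
R = 10.0 * np.outer(a_jam, a_jam.conj()) + np.eye(n)   # covariance model
w = mvdr_weights(R, a_look)

print(abs(w.conj() @ a_look))                # 1.0: distortionless constraint
print(abs(w.conj() @ a_jam))                 # << 1: off-axis source rejected
```

    Because the constraint fixes the look-direction gain while the quadratic objective suppresses everything correlated with the measured covariance, the adaptive weights place a null on the strong off-axis source automatically, which is what reduces side-lobe leakage in the source images.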

  20. A simple method for EEG guided transcranial electrical stimulation without models.

    PubMed

    Cancelli, Andrea; Cottone, Carlo; Tecchio, Franca; Truong, Dennis Q; Dmochowski, Jacek; Bikson, Marom

    2016-06-01

    There is longstanding interest in using EEG measurements to inform transcranial Electrical Stimulation (tES), but adoption is lacking because users need a simple and adaptable recipe. The conventional approach is to use anatomical head models for both source localization (the EEG inverse problem) and current flow modeling (the tES forward model), but this approach is computationally demanding, requires an anatomical MRI, and relies on strict assumptions about the target brain regions. We evaluate techniques whereby the tES dose is derived from the EEG without the need for an anatomical head model, target assumptions, difficult case-by-case conjecture, or many stimulation electrodes. We developed a simple two-step approach to EEG-guided tES that, based on the topography of the EEG, (1) selects the locations to be used for stimulation and (2) determines the current applied to each electrode. Each step is performed based solely on the EEG, with no need for head models or source localization. Cortical dipoles represent idealized brain targets. The EEG-guided tES strategies are verified using a finite element method simulation of the EEG generated by a dipole, oriented either tangentially or radially to the scalp surface, followed by simulation of the tES-generated electric field produced by each model-free technique. These model-free approaches are compared to a 'gold standard' numerically optimized tES dose that assumes perfect knowledge of the dipole location and head anatomy. We vary the number of electrodes from a few to over three hundred, with focality or intensity as the optimization criterion. The model-free approaches evaluated are (1) voltage-to-voltage, (2) voltage-to-current, (3) Laplacian, and two ad hoc techniques, (4) dipole sink-to-sink and (5) sink-to-concentric. Our results demonstrate that simple ad hoc approaches can achieve reasonable targeting for the case of a cortical dipole, remarkably with only 2-8 electrodes and no need for a model of the head. Our approach is verified directly only for a theoretically localized source but may potentially be applied to an arbitrary EEG topography. Given its simplicity and linearity, our recipe for model-free EEG-guided tES lends itself to broad adoption and can be applied to static (tDCS), time-variant (e.g., tACS, tRNS, tPCS), or closed-loop tES.
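    A voltage-to-current style mapping of the kind listed above can be sketched in a few lines: currents proportional to the recorded topography, constrained to sum to zero (as physically required of injected tES currents) and scaled to a fixed total dose. The normalization and the sample voltages are illustrative assumptions, not the authors' exact recipe:

```python
import numpy as np

def voltage_to_current(eeg_uV, total_mA=2.0):
    """Model-free 'voltage-to-current' dose: stimulation current at
    each electrode is proportional to the EEG voltage recorded there,
    shifted so the currents sum to zero (current conservation) and
    scaled so the anodal (positive) currents total a fixed dose."""
    i = eeg_uV - eeg_uV.mean()                   # enforce zero net current
    return i * total_mA / (np.abs(i).sum() / 2)  # half the abs-sum is anodal

eeg = np.array([12.0, 4.0, -3.0, -8.0, -5.0])    # microvolts, hypothetical
dose = voltage_to_current(eeg)
print(dose)                  # mA per electrode, summing to zero
```

    Because the mapping is linear in the EEG topography, it needs no inverse model: a large recorded positivity simply becomes a proportionally large anodal current at that site.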
